V-Blaze and V-Cloud Online Help


Values: false, true, language_model


The lid parameter enables you to use the ASR engine's Language Identification Module to identify the language spoken in the input audio. When lid identifies the language, the appropriate language model is selected based on the language detected in the audio. For example, if Spanish is detected, the resulting transcript will be in Spanish. If you require an alternate model, specify it using lid=language_model.

  • lid=true - automatically selects the language model based on the available LID and language models.

  • lid=language_model - uses the specified alternate language model.

  • lid=false - language identification is not used.
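As a concrete illustration, the three lid settings map onto tag=value strings such as the following. This is a minimal sketch: the helper function, its validation, and the example model name are assumptions for illustration; only the lid values themselves come from this page.

```python
# Illustrative sketch: mapping the three documented lid settings onto
# tag=value strings. The helper and the example model name below are
# assumptions, not part of the product API.

def lid_tag(value):
    """Return a lid tag string for False, True, or an alternate model name."""
    if isinstance(value, bool):
        return f"lid={'true' if value else 'false'}"
    if not value:
        raise ValueError("lid model name must be non-empty")
    # Any non-boolean value names an alternate language model
    # (model name below is hypothetical).
    return f"lid={value}"

print(lid_tag(True))                 # lid=true
print(lid_tag("eng1:callcenter"))    # lid=eng1:callcenter (hypothetical model)
```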

The following parameters provide additional options when using the lid parameter:

  • lidmaxtime (default: 20.0 seconds) - the maximum audio duration, in seconds, to analyze. For example, if lidmaxtime=20, the ASR engine analyzes at most 20 seconds of audio.

  • lidthreshold (default: 0.7) - the confidence level that must be reached before lid stops analyzing audio.


Language identification stops analyzing audio once confidence exceeds the value specified in lidthreshold or the analyzed audio reaches the duration limit set by lidmaxtime.
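The stopping rule above can be sketched as a small function. This is illustrative only; the engine's internal logic is not exposed, but the two limits interact as shown, with defaults matching the documented values.

```python
def lid_should_stop(confidence, analyzed_seconds,
                    lidthreshold=0.7, lidmaxtime=20.0):
    """Return True once LID analysis should stop (illustrative sketch).

    Analysis stops when confidence exceeds lidthreshold, or when the
    amount of audio analyzed reaches lidmaxtime. Defaults mirror the
    documented defaults (0.7 and 20.0 seconds).
    """
    return confidence > lidthreshold or analyzed_seconds >= lidmaxtime

print(lid_should_stop(0.8, 5.0))    # confidence limit reached: True
print(lid_should_stop(0.5, 20.0))   # duration limit reached: True
print(lid_should_stop(0.5, 5.0))    # neither limit reached: False
```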

Language Identification information is written to a lidinfo section of the JSON transcript of an audio file. The JSON transcript also records the language model specified during transcription and the model selected by the Language Identification module to transcribe each utterance. See Receiving Language Identification Information for more information about using the lid parameter and for examples of the contents of the lidinfo section and model information.
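Once a transcript is returned, the lidinfo section could be read as in the sketch below. The field names ("lang", "conf") and values are assumptions for the sake of the example; see Receiving Language Identification Information for the actual contents.

```python
import json

# Illustrative transcript fragment; the lidinfo field names and values
# here are assumptions, not the documented format.
sample = json.loads("""
{
  "lidinfo": {"lang": "spa", "conf": 0.93},
  "utterances": []
}
""")

lidinfo = sample.get("lidinfo", {})
if lidinfo:
    print(f"Detected language {lidinfo['lang']} "
          f"with confidence {lidinfo['conf']:.2f}")
```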

Language identification is a licensed optional feature.