STT En FastConformer Hybrid Transducer-CTC Large Streaming 80ms

Description: This collection contains the large version (114M parameters) of the streaming speech recognition model trained on NeMo ASRSET for English, with a look-ahead of 80 ms. All models are cache-aware hybrid FastConformer with both Transducer and CTC decoders.
Publisher: NVIDIA
Latest Version: 1.20.0
Modified: June 22, 2023
Size: 405.37 MB

Model Overview

This collection contains large versions of the cache-aware FastConformer-Hybrid model (around 114M parameters) trained on large-scale English speech data. These models are trained for streaming ASR with a look-ahead of 80 ms, which makes them suitable for very low-latency streaming applications. All models are hybrid with both Transducer and CTC decoders.

Model Architecture

FastConformer [4] is an optimized version of the Conformer model [1] with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with a joint Transducer and CTC decoder loss. You can find more information on FastConformer here: Fast-Conformer Model, and on hybrid Transducer-CTC training here: Hybrid Transducer-CTC. Details on how to switch between the Transducer and CTC decoders are available in the documentation.
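
As an illustration, the following snippet loads the hybrid checkpoint and switches decoding from the default Transducer head to the CTC head. This is a minimal sketch assuming the change_decoding_strategy API exposed by NeMo's hybrid Transducer-CTC models; verify the exact call against the documentation for your installed NeMo version.

import nemo.collections.asr as nemo_asr

# Load the hybrid checkpoint from NGC; the Transducer (RNNT) head is used for decoding by default.
asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(
    model_name="stt_en_fastconformer_hybrid_large_streaming_80ms"
)

# Switch decoding to the CTC head (assumes change_decoding_strategy accepts a decoder_type argument).
asr_model.change_decoding_strategy(decoder_type="ctc")

# Switch back to the Transducer head.
asr_model.change_decoding_strategy(decoder_type="rnnt")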

These models are cache-aware versions of Hybrid FastConformer trained for streaming ASR. You may find more information on cache-aware models here: Cache-aware Streaming Conformer.

Training

The NeMo toolkit [3] was used to train the models for several hundred epochs. These models were trained with this example script and this base config. The SentencePiece tokenizers [2] for these models were built using the text transcripts of the train set with this script.
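
For reference, a SentencePiece tokenizer of this kind can be built with a command along the lines of the one below. The manifest and output paths are placeholders, and the flag names should be checked against the script's --help output for your NeMo version.

python [NEMO_GIT_FOLDER]/scripts/tokenizers/process_asr_text_tokenizer.py \
  --manifest="<PATH TO TRAIN MANIFEST(S)>" \
  --data_root="<OUTPUT DIRECTORY>" \
  --vocab_size=1024 \
  --tokenizer="spe" \
  --spe_type="unigram"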

Datasets

All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of English speech:

  • Librispeech 960 hours of English speech
  • Fisher Corpus
  • Switchboard-1 Dataset
  • WSJ-0 and WSJ-1
  • National Speech Corpus (Part 1, Part 6)
  • VCTK
  • VoxPopuli (EN)
  • Europarl-ASR (EN)
  • Multilingual Librispeech (MLS EN) - 2,000 hours subset
  • Mozilla Common Voice (v7.0)
  • People's Speech - 12,000 hrs subset

Note: older versions of the model may have been trained on a smaller set of datasets.

Performance

The following tables list the available models in this collection for both the Transducer and CTC decoders. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding; a short example of computing WER follows the tables.

Transducer Decoder

| Version | Tokenizer | Vocabulary Size | att_context_size | LS test-other | LS test-clean | WSJ Eval92 | WSJ Dev93 | NSC Part 1 | MLS Test | MCV 7 Test | Train Dataset |
|---------|-----------|-----------------|------------------|---------------|---------------|------------|-----------|------------|----------|------------|---------------|
| 1.20.0 | SPE Unigram | 1024 | [70,1] | 6.5 | 2.7 | 1.9 | 3.2 | 6.9 | 9.1 | 11.5 | NeMo ASRSET 3.0 |

CTC Decoder

| Version | Tokenizer | Vocabulary Size | att_context_size | LS test-other | LS test-clean | WSJ Eval92 | WSJ Dev93 | NSC Part 1 | MLS Test | MCV 7 Test | Train Dataset |
|---------|-----------|-----------------|------------------|---------------|---------------|------------|-----------|------------|----------|------------|---------------|
| 1.20.0 | SPE Unigram | 1024 | [70,1] | 8.1 | 3.5 | 2.3 | 3.5 | 7.2 | 10.2 | 13.2 | NeMo ASRSET 3.0 |
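
The WER numbers above follow the standard definition: word-level edit distance divided by the number of reference words. As a small illustration, the snippet below uses NeMo's word_error_rate helper on made-up transcripts; it is not tied to the models in this collection.

from nemo.collections.asr.metrics.wer import word_error_rate

# Hypothetical model outputs and reference transcripts, for illustration only.
hypotheses = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# word_error_rate returns a ratio; multiply by 100 to report WER%.
print(f"WER: {100 * word_error_rate(hypotheses=hypotheses, references=references):.1f}%")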

How to Use this Model

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for streaming or for fine-tuning on another dataset. You may use this script to simulate streaming ASR with these models: cache-aware streaming simulation
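
A typical invocation of the streaming simulation script looks roughly like the one below. This is a sketch: the script path and flag names should be verified against your NeMo checkout, and the manifest path is a placeholder.

python [NEMO_GIT_FOLDER]/examples/asr/asr_cache_aware_streaming/speech_to_text_cache_aware_streaming_infer.py \
  --asr_model="stt_en_fastconformer_hybrid_large_streaming_80ms" \
  --manifest_file="<PATH TO EVALUATION MANIFEST>" \
  --batch_size=16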

Automatically load the model from NGC

import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(model_name="stt_en_fastconformer_hybrid_large_streaming_80ms")
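
Once loaded, the model can also transcribe audio directly from Python. This is a minimal sketch: sample.wav is a placeholder for a 16 kHz mono-channel WAV file, and depending on the NeMo version transcribe() may return plain strings or hypothesis objects.

# "sample.wav" is a placeholder; replace it with a path to a 16 kHz mono-channel WAV file.
transcriptions = asr_model.transcribe(["sample.wav"])
print(transcriptions)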

Transcribing audio with this model

Using Transducer mode inference:

python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="stt_en_fastconformer_hybrid_large_streaming_80ms" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"

Using CTC mode inference:

python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="stt_en_fastconformer_hybrid_large_streaming_80ms" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" \
  decoder_type="ctc"

Input

This model accepts 16,000 Hz (16 kHz) mono-channel audio (WAV files) as input.
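
Audio in other formats or sample rates needs to be converted first. For example, an ffmpeg command along these lines (input.mp3 and output.wav are placeholder file names) produces a compatible 16 kHz mono WAV file:

ffmpeg -i input.mp3 -ac 1 -ar 16000 output.wav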

Output

This model provides transcribed speech as a string for a given audio sample.

Limitations

Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse for accented speech.

References

[1] Conformer: Convolution-augmented Transformer for Speech Recognition

[2] Google Sentencepiece Tokenizer

[3] NVIDIA NeMo Toolkit

[4] Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition

License

License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.