
ECAPA TDNN

Description
ECAPA TDNN model for Speaker Verification and Diarization tasks
Publisher
NVIDIA
Latest Version
1.16.0
Modified
April 4, 2023
Size
85.79 MB

Model Overview

Speaker Recognition is a broad research area that solves two major tasks: speaker identification (who is speaking?) and speaker verification (is the speaker who they claim to be?). In this work, we focus on far-field, text-independent speaker recognition, where the identity of the speaker is based on how the speech is spoken rather than on what is being said. Typically, such SR systems operate on unconstrained speech utterances, which are converted into fixed-length vectors called speaker embeddings. Speaker embeddings are also used in automatic speech recognition (ASR) and speech synthesis.

This model, which uses a modified ECAPA-based encoder [1], is trained end-to-end with an angular softmax loss for speaker verification and diarization, and for extracting speaker embeddings.
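
As a reference for readers unfamiliar with angular softmax, the following is a minimal PyTorch sketch of an additive-angular-margin variant; the scale and margin values are illustrative, and NeMo's actual loss implementation may differ in its details.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of an angular softmax loss (additive angular margin).
# Hyperparameters (scale, margin) are illustrative, not NeMo's settings.
class AngularSoftmaxLoss(nn.Module):
    def __init__(self, emb_dim, num_speakers, scale=30.0, margin=0.2):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_speakers, emb_dim))
        self.scale, self.margin = scale, margin

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalized embeddings and class weights.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # Add an angular margin to the target class only, then rescale.
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        one_hot = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(one_hot, torch.cos(theta + self.margin), cos)
        return F.cross_entropy(self.scale * logits, labels)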

Model Architecture

ECAPA models consist of time delay neural network (TDNN) blocks and squeeze-and-excitation (SE) layers combined with Res2Block layers. For faster training with comparable performance on diarization tasks, we replaced the Res2Blocks with group convolution layers. The encoded information is then pooled with an attention mechanism to obtain speaker embeddings [1].
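
As a rough illustration of the building blocks named above, here is a sketch of one TDNN layer with an SE layer and a grouped convolution standing in for the Res2Block; the channel counts, kernel width, and group count are illustrative, not the model's exact configuration.

import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (batch, channels, time)
        s = self.fc(x.mean(dim=2))             # squeeze: average over time
        return x * s.unsqueeze(-1)             # excite: rescale each channel

class TDNNSEBlock(nn.Module):
    def __init__(self, channels=512, kernel_size=3, groups=8):
        super().__init__()
        # Grouped 1-D convolution in place of the Res2Block.
        self.tdnn = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=groups)
        self.bn = nn.BatchNorm1d(channels)
        self.se = SqueezeExcite(channels)

    def forward(self, x):
        return x + self.se(torch.relu(self.bn(self.tdnn(x))))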

Training

These models were trained on a composite dataset comprising several thousand hours of speech, compiled from various publicly available sources. The NeMo toolkit [2] was used to train this model for a few hundred epochs on multiple GPUs.
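
For reference, a NeMo training run typically follows the pattern sketched below; the config file name is a placeholder, and the exact config schema and entry point are defined by the NeMo speaker recognition examples.

import pytorch_lightning as pl
from omegaconf import OmegaConf
import nemo.collections.asr as nemo_asr

# Placeholder config path; see NeMo's speaker recognition examples
# for the real config files and training script.
cfg = OmegaConf.load('ecapa_tdnn_config.yaml')

trainer = pl.Trainer(accelerator='gpu', devices=-1,
                     max_epochs=cfg.trainer.max_epochs)
model = nemo_asr.models.EncDecSpeakerLabelModel(cfg=cfg.model, trainer=trainer)
trainer.fit(model)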

Datasets

The following datasets were used for training:

VoxCeleb1
VoxCeleb2
Fisher
Switchboard

Performance

This ECAPA model, based on layers of TDNNs with SE blocks and totaling 22.3M parameters, achieves 0.92% EER (equal error rate) on the VoxCeleb clean test trial file; a sketch of how EER is computed follows the table below. It also achieves the following diarization error rates (DER, %) on common evaluation datasets, without fine-tuning on any dev set:

Evaluation type             NIST SRE 2000   AMI (Lapel)   AMI (MixHeadset)   CH109
Oracle, known #speakers     7.1             1.94          2.31               1.19
Oracle, unknown #speakers   6.78            2.58          2.13               1.73
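
EER is the operating point where the false-acceptance rate equals the false-rejection rate. Below is a self-contained sketch of its computation from verification trial scores; the inputs are toy values, while the real evaluation uses the VoxCeleb trial file.

import numpy as np

def compute_eer(scores, labels):
    # Sort trials by score, descending; sweep the threshold over all scores.
    order = np.argsort(-scores)
    labels = labels[order]
    n_pos, n_neg = labels.sum(), (1 - labels).sum()
    # After accepting the top-k trials:
    fa = np.cumsum(1 - labels) / n_neg          # false-acceptance rate
    fr = 1 - np.cumsum(labels) / n_pos          # false-rejection rate
    idx = np.argmin(np.abs(fa - fr))            # point where the two rates cross
    return (fa[idx] + fr[idx]) / 2

scores = np.array([0.9, 0.8, 0.6, 0.4, 0.3])    # toy cosine scores
labels = np.array([1, 1, 0, 1, 0])              # 1 = same speaker, 0 = different
print(compute_eer(scores, labels))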

How to use this model

A detailed step-by-step procedure for training and for extracting embeddings is provided in the Speaker Verification notebook and the embeddings extraction script.

Extract Embeddings

For a single audio file, embeddings can also be extracted inline:

import nemo.collections.asr as nemo_asr

# Load the pretrained ECAPA-TDNN speaker model and extract an
# embedding for one audio file ('audio_path' is a placeholder).
speaker_model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained(model_name='ecapa_tdnn')
embs = speaker_model.get_embedding('audio_path')
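
Under the hood, verification reduces to comparing embeddings. Continuing from the snippet above, here is a sketch using cosine similarity; the file names and the 0.7 decision threshold are illustrative only.

import torch.nn.functional as F

# Hypothetical file paths; any 16 kHz mono WAV files will do.
emb1 = speaker_model.get_embedding('speaker1_utt.wav').flatten()
emb2 = speaker_model.get_embedding('speaker2_utt.wav').flatten()

# Cosine similarity of the two 192-dim embeddings; 0.7 is an illustrative threshold.
score = F.cosine_similarity(emb1, emb2, dim=0)
print('same speaker' if score > 0.7 else 'different speakers')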

Speaker Verification

Speaker verification is the task of deciding whether two utterances come from the same speaker. We provide a helper function that compares two audio files and returns True if they are from the same speaker, False otherwise. The audio files should be 16 kHz mono-channel WAV files.

import nemo.collections.asr as nemo_asr

# Load the pretrained model, then compare two audio files; returns
# True if they are from the same speaker, False otherwise.
speaker_model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained(model_name='ecapa_tdnn')
decision = speaker_model.verify_speakers('path/to/one/audio_file','path/to/other/audio_file')
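
Note that verify_speakers makes its decision by comparing the similarity of the two embeddings against a decision threshold. If the default decision does not suit your data, extracting embeddings directly (as in the previous section) and calibrating your own threshold on a held-out set is a reasonable alternative.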

Input

This model accepts 16 kHz mono-channel audio (WAV files) as input.
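
If your audio is in another format or sample rate, it can be converted first. Here is a small sketch using librosa and soundfile (both assumed to be installed; file names are placeholders).

import librosa
import soundfile as sf

# Resample to 16 kHz and downmix to mono, then write a WAV file.
audio, sr = librosa.load('input.mp3', sr=16000, mono=True)
sf.write('input_16k.wav', audio, sr)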

Output

This model produces a speaker embedding of size 192 for a given audio sample.

Limitations

This model was trained on both telephonic and non-telephonic speech from the VoxCeleb, Fisher, and Switchboard datasets. If your data domain differs from the training data, or the model does not perform well on it, consider fine-tuning the model for that speech domain.

References

[1] ECAPA-TDNN Embeddings for Speaker Diarization
[2] NVIDIA NeMo Toolkit

License

License to use this model is covered by the license of the NeMo Toolkit [2]. By downloading the public and release version of the model, you accept the terms and conditions of this license.