
TTS En FastSpeech 2

Description
FastSpeech 2 speech synthesis model trained on female English speech
Publisher
NVIDIA
Latest Version
1.0.0
Modified
April 4, 2023
Size
95.09 MB

Model Overview

FastSpeech 2 is a non-autoregressive Transformer-based model that generates mel spectrograms from text, and predicts duration, energy, and pitch as intermediate steps.

Model Architecture

FastSpeech 2 is composed of a Transformer-based encoder, a 1D-convolution-based variance adaptor that predicts variance information of the output spectrogram, and a Transformer-based decoder. The predicted variance information includes the duration of each input token in the final spectrogram, and the per-frame pitch and energy of the output.

For more information about the model architecture, see the FastSpeech 2 paper [1].
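
To make the data flow concrete, here is a minimal sketch of the two pieces the variance adaptor contributes: a 1D-convolutional variance predictor and a length regulator that expands token-level features to frame level. The module names, dimensions, and the log-duration convention below are illustrative assumptions, not NeMo internals.

import torch
import torch.nn as nn

class VariancePredictor(nn.Module):
    # 1D-convolutional stack that predicts one scalar per position:
    # duration per input token, or pitch/energy per output frame.
    def __init__(self, d_model=256, hidden=256, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.net = nn.Sequential(
            nn.Conv1d(d_model, hidden, kernel_size, padding=pad),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size, padding=pad),
        )

    def forward(self, x):                              # x: [batch, time, d_model]
        return self.net(x.transpose(1, 2)).squeeze(1)  # [batch, time]

def length_regulate(tokens, durations):
    # Repeat each token's encoding by its predicted duration so that
    # token-level features become frame-level features for the decoder.
    frames = [t.repeat(int(d), 1) for t, d in zip(tokens, durations) if int(d) > 0]
    return torch.cat(frames, dim=0)                    # [total_frames, d_model]

enc = torch.randn(1, 12, 256)              # stand-in for the encoder output (12 tokens)
# The predictor is assumed to output log-durations, hence exp() before rounding.
dur = VariancePredictor()(enc).exp().round().clamp(min=1)
frames = length_regulate(enc[0], dur[0])   # frame-level input to the decoder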

Training

This model was trained on LJSpeech sampled at 22050 Hz, filtering out samples containing words that are out-of-vocabulary (OOV) with respect to CMUdict. It has been tested on generating female English voices with an American accent. Supplementary data (durations, pitches, and energies) were calculated using the dataset preprocessing scripts found in the NeMo library [2]. All NeMo models are trained in accordance with their model configuration (YAML) files. In particular, this model was trained on one NVIDIA Quadro RTX 8000 GPU for 400 epochs with a batch size of 64.
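
As an illustration of the OOV filter, the sketch below checks every word of each LJSpeech transcript against CMUdict. The file paths are placeholders, and the actual preprocessing scripts in [2] are the reference implementation.

import re

# Build the pronunciation vocabulary from a local CMUdict file
# (the file paths below are placeholders).
vocab = set()
with open("cmudict-0.7b", encoding="latin-1") as f:
    for line in f:
        if line.strip() and not line.startswith(";;;"):
            # Strip alternate-pronunciation suffixes like "(1)".
            vocab.add(line.split()[0].split("(")[0].lower())

def in_vocab(text):
    # Keep a sample only if every word has a CMUdict pronunciation.
    return all(w in vocab for w in re.findall(r"[a-z']+", text.lower()))

# LJSpeech metadata.csv rows look like: id|raw transcript|normalized transcript
with open("LJSpeech-1.1/metadata.csv", encoding="utf-8") as f:
    kept = [row for row in f if in_vocab(row.rstrip("\n").split("|")[2])]

print(f"Kept {len(kept)} of the original samples")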

Performance

No performance information available at this time.

How to Use this Model

This model can be automatically loaded from NGC.

NOTE: To generate audio, you also need a 22050 Hz vocoder from NeMo. This example uses the HiFi-GAN model.

# Load FastSpeech 2
from nemo.collections.tts.models import FastSpeech2Model
spec_generator = FastSpeech2Model.from_pretrained("tts_en_fastspeech2")

# Load the HiFi-GAN vocoder
from nemo.collections.tts.models import HifiGanModel
model = HifiGanModel.from_pretrained(model_name="tts_hifigan")

# Generate audio
import soundfile as sf
parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = model.convert_spectrogram_to_audio(spec=spectrogram)

# Save the audio to disk in a file called speech.wav
# (drop the batch dimension and detach before converting to NumPy)
sf.write("speech.wav", audio.to('cpu').detach().numpy()[0], 22050)

Input

This model accepts batches of text.
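
To synthesize several inputs, the simplest approach is to loop over sentences, reusing spec_generator, model, and sf from the usage example above; the sentences themselves are arbitrary placeholders.

sentences = [
    "The first test sentence.",
    "A second, slightly longer test sentence.",
]
for i, text in enumerate(sentences):
    tokens = spec_generator.parse(text)
    spec = spec_generator.generate_spectrogram(tokens=tokens)
    wav = model.convert_spectrogram_to_audio(spec=spec)
    sf.write(f"speech_{i}.wav", wav.to('cpu').detach().numpy()[0], 22050)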

Output

This model generates mel spectrograms.
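
As a quick sanity check, the spectrogram tensor produced in the usage example can be rendered as an image. The use of matplotlib and the assumed [batch, n_mels, frames] shape are illustrative, not part of the model's API.

import matplotlib.pyplot as plt

# `spectrogram` from the usage example, assumed shape [batch, n_mels, frames]
mel = spectrogram.to('cpu').detach().numpy()[0]

plt.figure(figsize=(10, 4))
plt.imshow(mel, origin="lower", aspect="auto")
plt.xlabel("Frame")
plt.ylabel("Mel bin")
plt.colorbar()
plt.savefig("spectrogram.png")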

Limitations

This checkpoint only works well with vocoders that were trained on 22050 Hz data. Otherwise, the generated audio may be scratchy or choppy-sounding.
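
If output at a different sample rate is needed, resample the generated waveform after synthesis rather than pairing this model with a vocoder trained at another rate. The sketch below uses torchaudio (one of several options) on the audio tensor from the usage example.

import torchaudio

# Resample the 22050 Hz vocoder output (here to 16 kHz) after synthesis.
resample = torchaudio.transforms.Resample(orig_freq=22050, new_freq=16000)
audio_16k = resample(audio.to('cpu'))
sf.write("speech_16k.wav", audio_16k.detach().numpy()[0], 16000)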

Versions

1.0.0 (current): The original version released with NeMo 1.0.0.

References

[1] FastSpeech 2/2s paper: https://arxiv.org/abs/2006.04558
[2] LJSpeech preprocessing scripts: https://github.com/NVIDIA/NeMo/tree/v1.0.0/scripts/dataset_processing/ljspeech

License

The license to use this model is covered by the NGC Terms of Use unless another license, terms of use, or EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC Terms of Use.