NGC | Catalog

BERT-Large(fine-tuning) - SQuAD 1.1, seqLen=384

Description
Pretrained weights for the BERT-Large(fine-tuning) model. (Large, SQuAD 1.1, seqLen=384)
Publisher
NVIDIA
Latest Version
2
Modified
April 4, 2023
Size
3.75 GB

BERT-Large(fine-tuning) for TensorFlow


Using the Model

Training

Model scripts are available in the NGC model scripts registry.

Re-training

The model was trained with the scripts in the NGC model scripts registry, following the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. For researchers aiming to improve upon or tailor the model, we recommend starting with the README, which covers the architecture, accuracy and performance results, and the corresponding scripts.
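To make the SQuAD 1.1 fine-tuning setup concrete, here is a minimal sketch (not the NGC scripts' actual code) of how a question/context pair is packed into BERT's fixed-length input with seqLen=384. It assumes the tokens have already been WordPiece-tokenized; the function name and details are illustrative.

```python
def make_squad_features(question_tokens, context_tokens, max_seq_len=384):
    """Pack [CLS] question [SEP] context [SEP] into fixed-length features."""
    # Reserve 3 positions for [CLS] and the two [SEP] markers.
    max_context = max_seq_len - len(question_tokens) - 3
    context_tokens = context_tokens[:max_context]

    tokens = ["[CLS]"] + question_tokens + ["[SEP]"] + context_tokens + ["[SEP]"]
    # Segment 0 covers [CLS] + question + first [SEP]; segment 1 covers the context.
    segment_ids = [0] * (len(question_tokens) + 2) + [1] * (len(context_tokens) + 1)
    # Real positions get mask 1; padding positions get 0.
    input_mask = [1] * len(tokens)

    # Pad everything out to max_seq_len.
    pad = max_seq_len - len(tokens)
    tokens += ["[PAD]"] * pad
    segment_ids += [0] * pad
    input_mask += [0] * pad
    return tokens, segment_ids, input_mask
```

Contexts longer than the remaining budget are simply truncated here; the real scripts instead split long contexts into overlapping windows ("doc stride").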

Inferencing

For a quick start, follow the inference sections in the model scripts' quick start guide.
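For SQuAD inference, the model emits per-token start and end logits, and the predicted answer is the span maximizing their sum. A hedged sketch of that post-processing step (a simplified version of what such scripts typically do, not the registry's exact code):

```python
def best_answer_span(start_logits, end_logits, max_answer_len=30):
    """Return (start, end) indices maximizing start_logits[i] + end_logits[j], i <= j."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_logits):
        # Only consider end positions within max_answer_len of the start.
        for j in range(i, min(i + max_answer_len, len(end_logits))):
            score = s + end_logits[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best
```

The indices returned refer to token positions in the packed sequence, so they must be mapped back to character offsets in the original context before producing the answer text.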

Datasets