
BertBaseUncasedSQuADv1

Description: BERT Base Uncased model for Question Answering finetuned with NeMo on the SQuAD v1.1 dataset.
Publisher: NVIDIA
Latest Version: 1
Modified: April 4, 2023
Size: 417.7 MB

Overview

This is a checkpoint for BERT Base Uncased finetuned for question answering with NeMo (https://github.com/NVIDIA/NeMo) on the SQuAD v1.1 question answering dataset (https://rajpurkar.github.io/SQuAD-explorer/). The model was trained for 2 epochs on a DGX-1 with 8 V100 GPUs using Apex/AMP optimization level O2. On the development set this model achieves an exact match (EM) score of 82.74 and an F1 score of 89.79. The pretrained BERT model checkpoint was also trained with NeMo, on uncased English Wikipedia and BookCorpus.

Please be sure to download the latest version in order to ensure compatibility with the latest NeMo release.

  • BERT-STEP-7388.pt - finetuned BERT encoder weights
  • TokenClassifier-STEP-7388.pt - finetuned BERT question answering head weights
  • bert-config.json - the config file used to initialize the BERT network architecture in NeMo
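
As a quick sanity check after downloading, the two .pt files can be inspected as ordinary PyTorch objects. This is a minimal sketch, assuming the checkpoints load as plain state dicts (mappings from parameter names to tensors); it is not part of the NeMo workflow itself:

     # Sketch: inspect the downloaded checkpoints. Assumes each .pt file is a
     # plain PyTorch state dict (parameter name -> tensor mapping).
     import torch

     encoder_state = torch.load("BERT-STEP-7388.pt", map_location="cpu")
     head_state = torch.load("TokenClassifier-STEP-7388.pt", map_location="cpu")

     # Print a few parameter names and shapes to verify the files are intact.
     for name, tensor in list(encoder_state.items())[:5]:
         print(name, tuple(tensor.shape))
     for name, tensor in head_state.items():
         print(name, tuple(tensor.shape))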

More Details

BERT, or Bidirectional Encoder Representations from Transformers, is a neural approach to pretraining language representations which obtains near state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks, including the SQuAD question answering dataset. The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage. Unlike SQuAD v1.1, SQuAD v2.0 also contains questions that are unanswerable.
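
To make the data format concrete, a single SQuAD v1.1 entry looks roughly like the following. This is a simplified, hypothetical example of the schema (real files contain many articles, paragraphs, and questions); answer_start is a character offset into the context:

     # Simplified, hypothetical example of the SQuAD v1.1 JSON schema.
     squad_entry = {
         "version": "1.1",
         "data": [{
             "title": "Example_Article",
             "paragraphs": [{
                 "context": "NeMo is a toolkit for conversational AI built by NVIDIA.",
                 "qas": [{
                     "id": "hypothetical-id-0",
                     "question": "Who built NeMo?",
                     # answer_start: character offset of the answer in context
                     "answers": [{"text": "NVIDIA", "answer_start": 49}],
                 }],
             }],
         }],
     }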

Apart from the BERT architecture, this model also includes a question answering head stacked on top of BERT. This question answering head is a token classifier; more specifically, it is a single fully connected layer.
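
Conceptually, that head is a linear projection from the BERT hidden size to two logits (span start and span end) per token. The following is a minimal PyTorch sketch of the idea, not the actual NeMo module:

     import torch.nn as nn

     class QAHead(nn.Module):
         """Token classifier: a single fully connected layer mapping each
         BERT hidden state to two logits (answer-span start and end)."""
         def __init__(self, hidden_size: int = 768):  # 768 for BERT Base
             super().__init__()
             self.fc = nn.Linear(hidden_size, 2)

         def forward(self, hidden_states):  # (batch, seq_len, hidden_size)
             logits = self.fc(hidden_states)              # (batch, seq_len, 2)
             start_logits, end_logits = logits.unbind(dim=-1)
             return start_logits, end_logits              # each (batch, seq_len)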

Training works as follows: the user provides training and evaluation data as text in JSON format. This data is parsed by NeMo scripts and converted into model input. The input sequence is a concatenation of a tokenized query and its corresponding reading passage. For each token in the reading passage, or context, the question answering head predicts whether it is the start or the end of the answer span. The model is trained using cross entropy loss.
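
The loss described above amounts to averaging cross entropy over the predicted start and end positions. A short sketch, reusing the outputs of the hypothetical QAHead above (the target positions are token indices into the concatenated query/context sequence):

     import torch.nn.functional as F

     # Sketch of the span loss: start_positions/end_positions hold the token
     # indices of the answer's first and last token in each input sequence.
     def span_loss(start_logits, end_logits, start_positions, end_positions):
         loss_start = F.cross_entropy(start_logits, start_positions)
         loss_end = F.cross_entropy(end_logits, end_positions)
         return (loss_start + loss_end) / 2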

Documentation

Source code and a developer guide are available at https://github.com/NVIDIA/NeMo. Refer to the documentation at https://docs.nvidia.com/deeplearning/nemo/neural-modules-release-notes/index.html. Code to pretrain and reproduce this model checkpoint is available at https://github.com/NVIDIA/NeMo.

This model checkpoint can be used either for inference or for finetuning on custom question answering datasets, as long as they are in the required format. More details at https://github.com/NVIDIA/NeMo.

The following examples show how to finetune BERT on SQuAD v1.1 and how to run evaluation and inference.

Before you start, please download the SQuAD v1.1 dataset, either directly from https://rajpurkar.github.io/SQuAD-explorer/ or using https://github.com/NVIDIA/NeMo/blob/master/examples/nlp/scripts/get_squad.py.
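
If you would rather not use the helper script, the two files can also be fetched directly. A minimal sketch, assuming the dataset is still hosted at the usual SQuAD explorer URLs:

     # Sketch: download SQuAD v1.1 directly (URLs assumed from the SQuAD
     # explorer site; verify them if the download fails).
     import urllib.request

     BASE = "https://rajpurkar.github.io/SQuAD-explorer/dataset/"
     for name in ("train-v1.1.json", "dev-v1.1.json"):
         urllib.request.urlretrieve(BASE + name, name)
         print("downloaded", name)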

Usage example 1: Training

  1. Download the pretrained BERT encoder checkpoint BERT-STEP-2285714.pt and bert-config.json from https://ngc.nvidia.com/catalog/models/nvidia:bertbaseuncasedfornemo.

  2. Run finetuning of BERT Base Uncased, e.g. on a DGX-1 with 8 V100 GPUs:

     cd examples/nlp/question_answering;

     python -m torch.distributed.launch --nproc_per_node=8 question_answering_squad.py \
         --mode train_eval \
         --amp_opt_level O2 \
         --num_gpus 8 \
         --train_file /path_to/squad/v1.1/train-v1.1.json \
         --eval_file /path_to/squad/v1.1/dev-v1.1.json \
         --bert_checkpoint /path_to/BERT-STEP-2285714.pt \
         --bert_config /path_to/bert-config.json \
         --pretrained_model_name bert-base-uncased \
         --batch_size 3 \
         --num_epochs 2 \
         --lr_policy SquareRootAnnealing \
         --optimizer fused_adam \
         --lr 3e-5 \
         --do_lower_case \
         --no_data_cache
    

    Checkpoints will be stored in the folder specified by --work_dir.

Usage example 2: Evaluation

  1. Download the finetuned checkpoints BERT-STEP-7388.pt and TokenClassifier-STEP-7388.pt and put them into /path_to/checkpoints_dir. Also download bert-config.json.

  2. Run evaluation:

     cd examples/nlp/question_answering;

     python question_answering_squad.py \
         --mode eval \
         --amp_opt_level O2 \
         --eval_file /path_to/squad/v1.1/dev-v1.1.json \
         --checkpoint_dir /path_to/checkpoints_dir \
         --bert_config /path_to/bert-config.json \
         --pretrained_model_name bert-base-uncased \
         --do_lower_case \
         --no_data_cache
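
    The reported EM and F1 come from SQuAD-style evaluation. For reference, a simplified sketch of the two metrics for a single prediction/ground-truth pair (the official evaluate-v1.1.py additionally strips punctuation and articles and takes the maximum over all gold answers):

     # Simplified sketch of SQuAD-style exact match (EM) and F1 for a single
     # prediction/ground-truth pair; the official script normalizes further.
     from collections import Counter

     def exact_match(prediction: str, ground_truth: str) -> bool:
         return prediction.strip().lower() == ground_truth.strip().lower()

     def f1_score(prediction: str, ground_truth: str) -> float:
         pred_tokens = prediction.lower().split()
         gold_tokens = ground_truth.lower().split()
         common = Counter(pred_tokens) & Counter(gold_tokens)
         num_same = sum(common.values())
         if num_same == 0:
             return 0.0
         precision = num_same / len(pred_tokens)
         recall = num_same / len(gold_tokens)
         return 2 * precision * recall / (precision + recall)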
    

Usage example 3: Inference

  1. Download the finetuned checkpoints BERT-STEP-7388.pt and TokenClassifier-STEP-7388.pt and put them into /path_to/checkpoints_dir. Also download bert-config.json.

  2. Run inference on a custom test_file.json:

     cd examples/nlp/question_answering;

     python question_answering_squad.py \
         --mode test \
         --amp_opt_level O2 \
         --test_file test_file.json \
         --checkpoint_dir /path_to/checkpoints_dir \
         --bert_config /path_to/bert-config.json \
         --pretrained_model_name bert-base-uncased \
         --do_lower_case \
         --no_data_cache
    

Metrics will be logged, and model predictions will be stored in the file specified by --output_prediction_file.
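
The prediction file is typically a JSON object mapping each question id to its predicted answer string. A minimal sketch for loading it (the file name predictions.json below is a placeholder for whatever was passed to --output_prediction_file):

     # Sketch: load predictions, assumed to be a JSON object mapping
     # question id -> predicted answer string. "predictions.json" is a
     # placeholder for the value passed to --output_prediction_file.
     import json

     with open("predictions.json") as f:
         predictions = json.load(f)

     for qid, answer in list(predictions.items())[:5]:
         print(qid, "->", answer)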