
QA SQuADv2.0 Megatron Cased

Description: Cased question answering model with a Megatron encoder fine-tuned on SQuADv2.0
Publisher: NVIDIA
Latest Version: 1.0.0rc1
Modified: April 4, 2023
Size: 1.15 GB

Model Overview

This is a cased question answering model with a Megatron 340M parameter encoder fine-tuned on the SQuADv2.0 dataset [1]. In question answering, also called reading comprehension, the model is given a question and a passage of text (the context) that may contain the answer; it predicts the span within the context, as a start and an end position, that answers the question.

Model Architecture

The model is based on the architecture presented in the paper "Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism" [2]. In this particular instance, the model has 24 Transformer blocks. On top of the encoder it uses a span prediction head, which is equivalent to token classification with 2 classes: one for the start of the span and one for the end of the span. All model parameters are jointly fine-tuned on the downstream task. More specifically, the input text is fed to the Megatron encoder, and the output states are then fed to the span prediction head.
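
A minimal sketch of such a span prediction head in PyTorch is shown below; the class and variable names are illustrative assumptions and do not reproduce NeMo's internal implementation.

import torch
import torch.nn as nn

class SpanPredictionHead(nn.Module):
    # Token classification with 2 classes: start-of-span and end-of-span.
    def __init__(self, hidden_size: int):
        super().__init__()
        self.qa_outputs = nn.Linear(hidden_size, 2)

    def forward(self, encoder_states):
        # encoder_states: [batch, seq_len, hidden_size] from the Megatron encoder
        logits = self.qa_outputs(encoder_states)  # [batch, seq_len, 2]
        start_logits, end_logits = logits.split(1, dim=-1)
        # The predicted answer span is the (start, end) pair with the highest combined score.
        return start_logits.squeeze(-1), end_logits.squeeze(-1)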

Training

The model was trained starting from the cased Megatron 340M pre-trained encoder.

Dataset

The model was trained on the SQuADv2.0 corpus [1] for question answering. Because SQuADv2.0 includes unanswerable questions, the model supports cases where the answer is not contained in the context.

Performance

Evaluation on the SQuADv2.0 dev set:

Exact Match: 84.73%

F1: 87.89%

How to use this model

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

Automatically load the model from NGC

import nemo
import nemo.collections.nlp as nemo_nlp
model = nemo_nlp.models.question_answering.qa_model.QAModel.from_pretrained(model_name="qa_squadv2.0_megatron_cased")
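
To see which QA checkpoints are published on NGC before choosing one, you can enumerate them. This is a minimal sketch assuming NeMo's standard list_available_models classmethod; the exact output depends on the NeMo version.

# Print the names of all pretrained QA checkpoints available for this class
for info in nemo_nlp.models.question_answering.qa_model.QAModel.list_available_models():
    print(info.pretrained_model_name)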

Inference

python [NEMO_GIT_FOLDER]/examples/nlp/question_answering/question_answering_squad.py do_training=false pretrained_model=qa_squadv2.0_megatron_cased model.validation_ds.file=[SOURCE_FILE] model.dataset.version_2_with_negative=true model.dataset.do_lower_case=false

Input

The model takes a JSON file that follows the SQuAD format as input.
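
For illustration, the sketch below writes a minimal SQuAD-format input file with one answerable and one unanswerable question; the field names follow the public SQuAD v2.0 schema, and the file name is an arbitrary assumption.

import json

# Minimal SQuAD v2.0-style input: one answerable and one unanswerable question
squad_input = {
    "version": "v2.0",
    "data": [{
        "title": "Example",
        "paragraphs": [{
            "context": "NVIDIA is headquartered in Santa Clara, California.",
            "qas": [
                {
                    "id": "1",
                    "question": "Where is NVIDIA headquartered?",
                    "answers": [{"text": "Santa Clara, California", "answer_start": 27}],
                    "is_impossible": False,
                },
                {
                    # SQuAD v2.0 allows unanswerable questions: empty answers list
                    "id": "2",
                    "question": "When was NVIDIA founded?",
                    "answers": [],
                    "is_impossible": True,
                },
            ],
        }],
    }],
}

with open("input.json", "w") as f:
    json.dump(squad_input, f)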

Output

The model writes its predictions, along with an n-best list of candidate answers, to JSON files.

Limitations

The length of the input text is currently constrained by the maximum sequence length of the encoder model, which is 512 tokens after tokenization.
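
Longer passages are typically handled by splitting the context into overlapping windows. In the NeMo example script this is controlled by dataset parameters such as model.dataset.max_seq_length and model.dataset.doc_stride; these names are taken from the example config shipped with NeMo and should be verified against your NeMo version. For example:

python [NEMO_GIT_FOLDER]/examples/nlp/question_answering/question_answering_squad.py do_training=false pretrained_model=qa_squadv2.0_megatron_cased model.validation_ds.file=[SOURCE_FILE] model.dataset.version_2_with_negative=true model.dataset.do_lower_case=false model.dataset.max_seq_length=512 model.dataset.doc_stride=128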

References

[1] https://rajpurkar.github.io/SQuAD-explorer/

[2] https://arxiv.org/abs/1909.08053

[3] NVIDIA NeMo Toolkit: https://github.com/NVIDIA/NeMo

License

License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.