
BioBERTBaseCasedForNeMo

Description: -
Publisher: NVIDIA
Latest Version: 1
Modified: April 4, 2023
Size: 500.53 MB

Overview

This is a checkpoint for the BioBERT Base Cased model, compatible with NeMo, converted from https://github.com/dmis-lab/biobert#download. The model has the same network architecture as the original BERT, but instead of Wikipedia and BookCorpus it is pretrained on PubMed, a large biomedical text corpus, which yields better performance on biomedical downstream tasks such as question answering (QA), named entity recognition (NER), and relation extraction (RE). This model was trained for 1M steps. For more information, please refer to the original paper: https://academic.oup.com/bioinformatics/article/36/4/1234/5566506.

The model achieves SAcc/MRR/LAcc (strict accuracy / mean reciprocal rank / lenient accuracy) of 39/59.86/47.03 on the QA dataset BioASQ-7b-factoid, and macro precision/recall/F1 of 78.22/80.2/79.15 on the RE dataset ChemProt.
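
For reference, SAcc, MRR, and LAcc are the standard BioASQ factoid metrics, computed over a ranked list of up to five candidate answers per question. The sketch below illustrates how these numbers are typically computed; the function and variable names are illustrative and not part of the NeMo evaluation code.

```python
# Illustrative computation of the BioASQ factoid metrics reported above.
# Each prediction is a ranked list of up to 5 candidate answers; `gold` holds
# the set of acceptable answer strings for each question. Names are hypothetical.
from typing import List, Set


def bioasq_factoid_metrics(predictions: List[List[str]], gold: List[Set[str]]):
    strict, lenient, mrr = 0.0, 0.0, 0.0
    for candidates, answers in zip(predictions, gold):
        # Strict accuracy: the top-ranked candidate is a correct answer.
        if candidates and candidates[0] in answers:
            strict += 1
        # Lenient accuracy: any of the (up to 5) candidates is correct.
        if any(c in answers for c in candidates[:5]):
            lenient += 1
        # MRR: reciprocal rank of the first correct candidate, 0 if none.
        for rank, c in enumerate(candidates[:5], start=1):
            if c in answers:
                mrr += 1.0 / rank
                break
    n = len(predictions)
    return strict / n, lenient / n, mrr / n


# Toy usage: one question answered correctly at rank 1, one at rank 3.
preds = [["BRCA1", "TP53"], ["aspirin", "ibuprofen", "warfarin"]]
golds = [{"BRCA1"}, {"warfarin"}]
print(bioasq_factoid_metrics(preds, golds))  # (0.5, 1.0, 0.667)
```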

Please download the latest version to ensure compatibility with the latest NeMo release.

  • BERT.pt - pretrained BERT encoder weights
  • TokenClassifier.pt - pretrained BERT masked language model head weights
  • SequenceClassifier.pt - pretrained BERT next sentence prediction head weights. This file is optional and not needed if you only use the masked language model loss.
  • bert_config.json - the config file used to initialize the BERT network architecture in NeMo
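
After downloading a model version from NGC (for example with the NGC CLI command `ngc registry model download-version`), the files can be inspected before wiring them into NeMo. The sketch below assumes the `.pt` files are plain PyTorch state dicts, which is how NeMo neural-module checkpoints are typically saved; everything else is illustrative.

```python
# Minimal sketch for inspecting the downloaded checkpoint files.
# Assumes the .pt files are plain PyTorch state dicts (parameter name -> tensor).
import json
import torch

# The config file describes the encoder architecture (layers, hidden size, vocab size, ...).
with open("bert_config.json") as f:
    bert_config = json.load(f)
print(bert_config)

# Pretrained encoder weights and the two pretraining heads.
encoder = torch.load("BERT.pt", map_location="cpu")
mlm_head = torch.load("TokenClassifier.pt", map_location="cpu")
nsp_head = torch.load("SequenceClassifier.pt", map_location="cpu")  # optional

print(f"{len(encoder)} encoder tensors, {len(mlm_head)} MLM head tensors")
for name, tensor in list(encoder.items())[:5]:
    print(name, tuple(tensor.shape))
```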

More Details

For more details about BERT and pretraining, please refer to https://ngc.nvidia.com/catalog/models/nvidia:bertbasecasedfornemo. For more details about BioBERT and its training setup, please refer to https://academic.oup.com/bioinformatics/article/36/4/1234/5566506.

Documentation

Source code and a developer guide are available at https://github.com/NVIDIA/NeMo. Documentation is available at https://docs.nvidia.com/deeplearning/nemo/neural-modules-release-notes/index.html.

This model checkpoint can be used either to finetune BioBERT on your own dataset or to finetune it on downstream tasks. Scripts for all of these tasks can be found at https://github.com/NVIDIA/NeMo.
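
As a rough outline of what downstream finetuning involves, the sketch below restores the encoder weights and adds a token classification head, using the Hugging Face `transformers` BERT implementation as a stand-in for the NeMo modules. This is not the NeMo workflow (the notebooks linked below are the reference), and loading `BERT.pt` into a `transformers` model may require remapping parameter names, so treat this purely as a schematic.

```python
# Schematic of downstream finetuning (NER-style token classification) on top of
# the BioBERT encoder, using `transformers` as a stand-in for the NeMo modules.
import torch
from torch import nn
from transformers import BertConfig, BertModel

# Build the encoder from the provided config file.
config = BertConfig.from_json_file("bert_config.json")
encoder = BertModel(config)

# Try to restore the BioBERT encoder weights; strict=False tolerates
# parameter-name mismatches between the NeMo and transformers layouts.
state_dict = torch.load("BERT.pt", map_location="cpu")
encoder.load_state_dict(state_dict, strict=False)

num_labels = 3  # e.g. B/I/O tags for an NER task
classifier = nn.Linear(config.hidden_size, num_labels)
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(classifier.parameters()), lr=3e-5
)
loss_fn = nn.CrossEntropyLoss()

# One schematic training step on a dummy batch of token ids.
input_ids = torch.randint(0, config.vocab_size, (2, 32))
attention_mask = torch.ones_like(input_ids)
labels = torch.randint(0, num_labels, (2, 32))

optimizer.zero_grad()
hidden = encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
logits = classifier(hidden)                      # (batch, seq_len, num_labels)
loss = loss_fn(logits.view(-1, num_labels), labels.view(-1))
loss.backward()
optimizer.step()
```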

The following examples show how to finetune BioBERT on different downstream tasks.

Usage example 1: Finetune on BioASQ-factoid dataset

Visit https://github.com/NVIDIA/NeMo/blob/master/examples/nlp/biobert_notebooks/biobert_qa.ipynb

Usage example 2: Finetune on RE dataset ChemProt

Visit https://github.com/NVIDIA/NeMo/blob/master/examples/nlp/biobert_notebooks/biobert_re.ipynb

Usage example 3: Finetune on NER dataset NCBI

Visit https://github.com/NVIDIA/NeMo/blob/master/examples/nlp/biobert_notebooks/biobert_ner.ipynb