
Joint Intent and Slot Classification Bert

Description
Intent and Slot classification of queries for a weather chatbot (trained on weather chatbot data).
Publisher
NVIDIA
Latest Version
deployable_v1.0
Modified
October 6, 2023
Size
422.76 MB

IntentAndSlotClassification Model Card
======================================

Model Overview
--------------

Joint Intent and Slot classification is the task of classifying the Intent of a query and detecting all Slots (Entities) relevant to that Intent. For example, in the query "What is the weather in Santa Clara tomorrow morning?", we would like to classify the query as a Weather Intent, detect Santa Clara as a Location slot, and detect tomorrow morning as a date_time slot.

Intended Use
------------

Intent and Slot names are usually task-specific and defined as labels in the training data. Identifying them is a fundamental step in any task-driven Conversational Assistant. The primary use case of this model is to jointly identify Intents and Entities in a given user query.

Model Architecture
------------------

This is a pretrained BERT-based model with two linear classifier heads on top of it: one classifies the intent of the query and the other classifies a slot for each token of the query. The model is trained with a combined loss function over the Intent and Slot classification tasks on the given dataset.

For each query, the model classifies it as one of the intents from the intent dictionary, and for each word of the query it predicts one of the slots from the slot dictionary, using the out-of-scope slot (O) for any remaining words that do not fall into another slot category. The out-of-scope slot is part of the slot dictionary the model is trained on.
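
As an illustration of this architecture, the sketch below attaches two linear heads to a Hugging Face BERT encoder and sums their cross-entropy losses. It is a minimal sketch of the general approach (the class, argument, and variable names are our own), not the TAO Toolkit implementation:

```python
import torch.nn as nn
from transformers import BertModel

class JointIntentSlotSketch(nn.Module):
    """BERT encoder with two linear heads: one query-level intent classifier,
    one per-token slot classifier. Illustrative only; not the TAO model code."""

    def __init__(self, num_intents: int, num_slots: int, pretrained: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained)
        hidden = self.bert.config.hidden_size
        self.intent_head = nn.Linear(hidden, num_intents)  # uses the pooled [CLS] representation
        self.slot_head = nn.Linear(hidden, num_slots)       # applied to every token position

    def forward(self, input_ids, attention_mask, intent_labels=None, slot_labels=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)   # [batch, num_intents]
        slot_logits = self.slot_head(out.last_hidden_state)   # [batch, seq_len, num_slots]

        loss = None
        if intent_labels is not None and slot_labels is not None:
            ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 masks padding / ignored subword pieces
            intent_loss = ce(intent_logits, intent_labels)
            slot_loss = ce(slot_logits.reshape(-1, slot_logits.size(-1)), slot_labels.reshape(-1))
            loss = intent_loss + slot_loss               # combined joint loss
        return intent_logits, slot_logits, loss
```

At inference time, the predicted intent is the argmax over intent_logits and the slot for each word is the argmax over that token's slot_logits (subword pieces are typically mapped back to word-level labels).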

Training Data
-------------

We used a proprietary dataset, collected via Mechanical Turk, containing a variety of queries in the weather domain. The recognized Intent and Entity labels are listed below, followed by an illustrative annotated query.

List of the recognized Intents for this model:

  • weather
  • temperature, Temperature_yes_no
  • rainfall, rainfall_yes_no
  • snow, snow_yes_no
  • humidity, humidity_yes_no
  • windspeed
  • sunny
  • cloudy
  • context

List of the recognized Entities:

  • O (out of scope)
  • weathertime
  • weatherplace
  • temperatureunit
  • current_location
  • wind_speed_unit
  • rainfallunit
  • snowunit
  • alert_type
  • weatherforecastdaily
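
To illustrate how these labels apply, the example query from the overview would be annotated along the following lines (the tokenization and the specific label assignments here are illustrative, not taken from the actual training files):

```python
# Illustrative word-level annotation of the example query from the overview.
query  = ["what", "is", "the", "weather", "in", "santa", "clara", "tomorrow", "morning"]
intent = "weather"
slots  = ["O", "O", "O", "O", "O",
          "weatherplace", "weatherplace",   # "santa clara" -> location entity
          "weathertime", "weathertime"]     # "tomorrow morning" -> time entity
assert len(slots) == len(query)  # one slot label per word, "O" for out-of-scope words
```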

Evaluation
----------

The training dataset included 9,500 queries related to the weather topic, with about 100K total words (slot tokens); 2,000 queries were used for testing. The model was trained for 30 epochs, after which it stopped improving. We obtained around 95% intent accuracy and 93% slot accuracy on this dataset.
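
Assuming that intent accuracy means query-level exact match and slot accuracy means word-level label accuracy (the metric definitions are not spelled out here), a minimal sketch of the computation would be:

```python
def intent_accuracy(pred_intents, true_intents):
    """Fraction of queries whose predicted intent matches the reference intent."""
    return sum(p == t for p, t in zip(pred_intents, true_intents)) / len(true_intents)

def slot_accuracy(pred_slots, true_slots):
    """Fraction of word-level slot labels (including 'O') that are predicted correctly."""
    pairs = [(p, t)
             for ps, ts in zip(pred_slots, true_slots)  # iterate over queries
             for p, t in zip(ps, ts)]                   # iterate over words within a query
    return sum(p == t for p, t in pairs) / len(pairs)
```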

How to Use This Model
---------------------

These model checkpoints are intended to be used with the Train Adapt Optimize (TAO) Toolkit. To use these checkpoints, you need a specification file (.yaml) that defines the hyperparameters, the datasets for training and evaluation, and any other information needed for the experiment. For more information on the experiment spec files for each use case, please refer to the TAO Toolkit User Guide.

Note: The model is encrypted and will only operate with the model load key tao-encode.

  • To fine-tune from a model checkpoint (.tlt), use the following command (the `-e` parameter should be a valid path to the spec file that specifies the fine-tuning hyperparameters, the dataset to fine-tune on, the dataset to evaluate on, and the number of epochs; the angle-bracket values are placeholders):
!tao intent_slot_classification finetune -e <experiment_spec.yaml> \
 -m <model_checkpoint.tlt> \
 -g <num_gpus>
  • To evaluate an existing dataset using a model checkpoint (.tlt), use the following command (the `-e` parameter should be a valid path to the spec file that specifies the dataset being evaluated):
!tao intent_slot_classification evaluate -e <experiment_spec.yaml> \
 -m <model_checkpoint.tlt>
  • To run inference with a model checkpoint (.tlt) on a set of example queries, use the following command (the `-e` parameter should be a valid path to the spec file that specifies the list of queries to test):
!tao intent_slot_classification infer -e <experiment_spec.yaml> \
 -m <model_checkpoint.tlt>

References
----------

License
-------

By downloading and using the models and resources packaged with TAO Conversational AI, you accept the terms of the Riva license.

Suggested reading
-----------------

Ethical AI
----------

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.