Merlin Tensorflow Inference

Description
This container allows users to deploy NVTabular workflows and TensorFlow models to Triton Inference Server for production.
Publisher
NVIDIA
Latest Tag
22.05
Modified
March 1, 2024
Compressed Size
5.01 GB
Multinode Support
No
Multi-Arch Support
No

What is Merlin for Recommender Systems?

NVIDIA Merlin is a framework for accelerating the entire recommender systems pipeline on the GPU: from data ingestion and training to deployment. Merlin empowers data scientists, machine learning engineers, and researchers to build high-performing recommenders at scale. Merlin includes tools that democratize building deep learning recommenders by addressing common ETL, training, and inference challenges. Each stage of the Merlin pipeline is optimized to support hundreds of terabytes of data, all accessible through easy-to-use APIs. With Merlin, better predictions than traditional methods and increased click-through rates are within reach.

The Merlin ecosystem has four main components: Merlin ETL, Merlin Dataloaders, Merlin Training, and Merlin Inference.

Merlin Inference with Triton Inference Server

The Merlin inference containers allow users to deploy NVTabular workflows and HugeCTR or TensorFlow models to Triton Inference Server for production.

A less often discussed challenge is how to deploy preprocessing and feature engineering workflows. Ensuring that production data goes through the same transformations that were used at training time takes significant engineering effort. NVTabular's Triton backend takes care of that for you: the dataset statistics collected while fitting the workflow during training are applied to the production data at inference time.
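After training, the fitted NVTabular workflow and the trained TensorFlow model can be exported together as a Triton ensemble, so the exact preprocessing used at training time runs in front of the model at serving time. Below is a minimal sketch, assuming the export_tensorflow_ensemble helper from nvtabular.inference.triton (shipped with the NVTabular version in these containers; the exact signature may differ between releases) and placeholder paths, model name, and column names:

import nvtabular as nvt
import tensorflow as tf
from nvtabular.inference.triton import export_tensorflow_ensemble

# Load the workflow fitted during training and the trained Keras model (placeholder paths)
workflow = nvt.Workflow.load("/models/workflow")
model = tf.keras.models.load_model("/models/movielens_tf")

# Export preprocessing + model as a Triton ensemble into the model repository
export_tensorflow_ensemble(
    model,
    workflow,
    name="movielens",          # hypothetical ensemble name served by Triton
    model_path="/models",      # Triton model repository
    label_columns=["rating"],  # target column(s) dropped at inference time
)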

NVTabular and HugeCTR support Triton Inference Server to provide GPU-accelerated inference. Triton Inference Server simplifies the deployment of AI models at scale in production. It is open-source inference serving software that lets teams deploy trained AI models from any framework (TensorFlow, TensorRT, PyTorch, ONNX Runtime, or a custom framework), from local storage, Google Cloud Platform, or AWS S3, on any GPU- or CPU-based infrastructure (cloud, data center, or edge). The NVTabular ETL workflow and trained deep learning models (TensorFlow or HugeCTR) can be deployed to production easily in only a few steps. Both NVTabular and HugeCTR provide end-to-end examples for deployment: NVTabular examples and HugeCTR examples.
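Once the exported ensemble is loaded by Triton, client applications can request predictions over HTTP or gRPC with the tritonclient library. A minimal sketch, assuming the HTTP endpoint on port 8000 and hypothetical model, input, and output names:

import numpy as np
import tritonclient.http as httpclient

# Connect to the Triton HTTP endpoint exposed on port 8000
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build request inputs; names and dtypes must match the exported workflow's schema
user_ids = np.array([[1], [2], [3]], dtype=np.int64)
inputs = [httpclient.InferInput("userId", list(user_ids.shape), "INT64")]
inputs[0].set_data_from_numpy(user_ids)

# Request the model's output tensor (hypothetical names)
outputs = [httpclient.InferRequestedOutput("output")]
response = client.infer("movielens", inputs, outputs=outputs)
print(response.as_numpy("output"))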

Getting Started

Launch Merlin-Inference Container

You can pull and launch the inference container with the following command:

docker run -it --name tritonserver --gpus=all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -v ${PWD}:/models -p 8000:8000 -p 8001:8001 -p 8002:8002 nvcr.io/nvidia/merlin/merlin-tensorflow-inference:22.05

The container opens a shell once the run command completes. It should look similar to this:

root@02d56ff0738f:/opt/tritonserver# 
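From this shell you can start Triton against the model repository mounted at /models in the command above (add further tritonserver flags as needed for your models):

tritonserver --model-repository=/models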

Other NVIDIA Merlin containers

Merlin containers are available in the NVIDIA container repository at the following locations:

Table 1: Merlin Containers

Container name Container location Functionality
Merlin-training https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-training NVTabular and HugeCTR
Merlin-tensorflow-training https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-tensorflow-training NVTabular, TensorFlow and TensorFlow Embedding plugin
Merlin-pytorch-training https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-pytorch-training NVTabular and PyTorch
Merlin-inference https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-inference NVTabular, HugeCTR and Triton Inference
Merlin-tensorflow-inference https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-tensorflow-inference NVTabular, TensorFlow and Triton Inference
Merlin-pytorch-inference https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-pytorch-inference NVTabular, PyTorch and Triton Inference
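Any of these containers can be pulled directly from NGC with docker pull, for example (the tag shown matches the latest tag of this container at the time of writing and may need to be adjusted for other containers or newer releases):

docker pull nvcr.io/nvidia/merlin/merlin-tensorflow-training:22.05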

Examples and Tutorials

We provide a collection of examples, use cases, and tutorials for NVTabular and HugeCTR as Jupyter notebooks in our repository. These Jupyter notebooks are based on the following datasets:

  • MovieLens
  • Outbrain Click Prediction
  • Criteo Click Ads Prediction
  • RecSys2020 Competition Hosted by Twitter
  • Rossmann Sales Prediction

With the example notebooks we cover the following:

  • Preprocessing and feature engineering with NVTabular
  • Advanced workflows with NVTabular
  • Accelerated dataloaders for TensorFlow and PyTorch
  • Scaling to multi-GPU and multi-node systems
  • Integrating NVTabular with HugeCTR
  • Deploying to inference with Triton

For more sample models and their end-to-end instructions for HugeCTR, visit: https://github.com/NVIDIA/HugeCTR/tree/master/samples

Learn More

If you are interested in learning more about how NVTabular works under the hood, we have API documentation that outlines in detail the calls available within the library. The following are suggested readings for those who want to learn more about HugeCTR.

HugeCTR User Guide: https://github.com/NVIDIA/HugeCTR/blob/master/docs/hugectr_user_guide.md

Questions and Answers: https://github.com/NVIDIA/HugeCTR/blob/master/docs/QAList.md

Sample models and their end-to-end instructions: https://github.com/NVIDIA/HugeCTR/tree/master/samples

NVIDIA Developer Site: https://developer.nvidia.com/nvidia-merlin#getstarted

NVIDIA Developer Blog: https://medium.com/nvidia-merlin

Contributing

If you wish to contribute to the Merlin library directly, please see Contributing.md. We are particularly interested in contributions or feature requests for feature engineering or preprocessing operations that you have found helpful in your own workflows.

License

By pulling and using the container, you accept the terms and conditions of this End User License Agreement.