
Merlin HugeCTR

Publisher: NVIDIA
Latest Tag: nightly
Modified: May 2, 2025
Compressed Size: 8.09 GB
Multinode Support: No
Multi-Arch Support: Yes
Security scan results for the nightly (latest) tag are available for Linux/arm64 and Linux/amd64.

What is Merlin for Recommender Systems?

NVIDIA Merlin is a framework for accelerating the entire recommender systems pipeline on the GPU: from data ingestion and training to deployment. Merlin empowers data scientists, machine learning engineers, and researchers to build high-performing recommenders at scale. Merlin includes tools that democratize building deep learning recommenders by addressing common ETL, training, and inference challenges. Each stage of the Merlin pipeline offers an easy-to-use API and is optimized to support hundreds of terabytes of data.

The Merlin HugeCTR container enables you to perform data preprocessing, feature engineering, train models with HugeCTR, and then serve the trained model with Triton Inference Server.

About the Merlin HugeCTR Container

The Merlin HugeCTR container includes the following key components to simplify developing and deploying your recommender system:

  • NVTabular performs data preprocessing and feature engineering for tabular data. The library can operate on small and large datasets, scaling to manipulate the terabyte-scale datasets that are used to train deep learning recommender systems. (Deprecated as of the 24.06 release.)
  • HugeCTR can train deep learning recommender models and is written in CUDA C++ to provide optimal performance with NVIDIA GPUs. The library is a recommender-specific framework with optimized data loaders that can perform distributed training across multiple GPUs and nodes. HugeCTR provides strategies for scaling large embedding tables beyond available memory.
  • Triton Inference Server provides GPU-accelerated inference. Triton Inference Server simplifies the deployment of AI models at scale in production. The server is open source inference serving software that enables teams to deploy trained AI models from any framework: TensorFlow, TensorRT, PyTorch, ONNX Runtime, or a custom framework. The server can serve models from local storage, Google Cloud Platform, or AWS S3 on any GPU- or CPU-based infrastructure (cloud, data center, or edge). The NVTabular ETL workflow and trained deep learning models (TensorFlow or HugeCTR) can be deployed to production in only a few steps, as sketched below.
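
As a concrete illustration of the serving step, here is a minimal sketch. It assumes a trained model has already been exported into a Triton model repository at /models; that path is a placeholder rather than something this page defines, and port 8000 is Triton's default HTTP port.

# Launch Triton against a prepared model repository (placeholder path):
tritonserver --model-repository=/models

# From another shell, confirm the server is ready over Triton's HTTP endpoint:
curl -s localhost:8000/v2/health/ready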

Getting Started

Launch the Merlin HugeCTR Container

You can launch the Merlin HugeCTR container with the following command, which publishes port 8888 for JupyterLab and forwards two additional container ports (8787 and 8786, the default Dask dashboard and scheduler ports):

docker run --gpus all --rm -it -p 8888:8888 -p 8797:8787 -p 8796:8786 --ipc=host --cap-add SYS_NICE nvcr.io/nvidia/merlin/merlin-hugectr:latest /bin/bash

If your Docker version is older than 19.03, replace --gpus all with --runtime=nvidia.
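
For example, an equivalent launch on an older Docker installation (assuming the nvidia container runtime is configured) is:

docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8797:8787 -p 8796:8786 --ipc=host --cap-add SYS_NICE nvcr.io/nvidia/merlin/merlin-hugectr:latest /bin/bash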

The container opens a shell when the run command completes. You are responsible for starting JupyterLab in the container. The prompt should look similar to the following:

root@2efa5b50b909:

Start the JupyterLab server (note that --NotebookApp.token='' disables token authentication, so only use it in a trusted environment):

jupyter-lab --allow-root --ip='0.0.0.0' --NotebookApp.token=''

Now you can use any browser to access the JupyterLab server at <host>:8888, where <host> is the address of the machine running the container. Once in the server, navigate to the /nvtabular/ directory and explore the code base or try out some of the examples. The container includes the code base along with all of its dependencies, in particular RAPIDS Dask-cuDF. The easiest way to get started is to launch the container as shown above and explore the examples within.
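
To quickly confirm that the server is reachable from your workstation, you can probe the published port; this check is a convenience sketch, not part of the official instructions (replace <host> as above):

# Prints 200 when JupyterLab is up:
curl -sL -o /dev/null -w "%{http_code}\n" http://<host>:8888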

Other NVIDIA Merlin containers

Merlin containers are available in the NVIDIA container repository at the following locations:

Table 1: Merlin Containers

Container name      Container location                                                           Functionality
merlin-hugectr      https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-hugectr      Merlin and HugeCTR
merlin-pytorch      https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-pytorch      Merlin and PyTorch
merlin-tensorflow   https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-tensorflow   Merlin and TensorFlow

As of the 24.06 release, merlin-hugectr also ships with SOK (SparseOperationKit) and the TensorRT plugins (trt_plugins) installed.
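
A quick way to check the SOK installation from inside the container is the one-liner below; the module name sparse_operation_kit is SOK's Python import name, and this is a sketch rather than an official verification step:

# Prints the install location if the import succeeds:
python -c "import sparse_operation_kit as sok; print(sok.__file__)"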

Examples and Tutorials

We provide a collection of examples, use cases, and tutorials for HugeCTR as Jupyter notebooks in our repository. For sample models and their end-to-end instructions, visit https://github.com/NVIDIA/HugeCTR/tree/master/samples.
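
To work through the samples locally, clone the repository referenced above and browse its samples directory:

git clone https://github.com/NVIDIA/HugeCTR.git
cd HugeCTR/samples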

Learn More

  • HugeCTR Documentation
  • NVTabular Documentation
  • HugeCTR
  • NVTabular
  • Triton Inference Server
  • NVIDIA Developer Site
  • NVIDIA Developer Blog

License

By pulling and using the container, you accept the terms and conditions of this End User License Agreement.