Publisher: NVIDIA
Latest Tag: 2.2
Modified: April 1, 2024
Compressed Size: 265.62 MB
Multinode Support: No
Multi-Arch Support: No

NVIDIA IndeX

NVIDIA IndeX™ is a leading volume visualization tool for HPC. It takes advantage of the computational horsepower of GPUs to deliver real-time performance on large datasets by distributing visualization workloads across a GPU-accelerated cluster.

The present NVIDIA IndeX Docker image is a restricted demo of NVIDIA IndeX. When started (please follow the instructions below), the browser shows a visualization of a Core-collapse Supernova volume dataset (courtesy note). NVIDIA IndeX enables you to:

  • Inspect the volume interactively.
  • Modify the transfer function interactively.
  • Reimplement the sampling function (using CUDA).

These interactions, among other features, enable you to:

  • Derive detailed information from the dataset.
  • Identify new features in the dataset.
  • Make new discoveries faster.

More information on NVIDIA IndeX can be found on its website. For licenses or other inquiries, please contact us.

  • See the NGC Container User Guide for prerequisites and setup steps for all HPC containers.

  • The document also describes the steps to pull NGC containers.

The present release is based on CUDA 11.1 and requires GPU/CUDA driver version 455.23 or higher.
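
To check whether a host meets this requirement, you can query the installed driver version with nvidia-smi (an optional check, not part of the official setup steps):

nvidia-smi --query-gpu=driver_version --format=csv,noheader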

Pulling the NVIDIA IndeX container

This example illustrates the steps to pull and run the NVIDIA IndeX container from the Docker command line interface (CLI). First, issue the following command to log in to the NGC container registry:

docker login nvcr.io 
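
As explained below, the user name is $oauthtoken and the password is your NVIDIA GPU Cloud API key; the interaction then looks roughly like this (the key itself is not echoed):

Username: $oauthtoken
Password: <your NVIDIA GPU Cloud API key>
Login Succeeded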

When prompted for a user name, enter $oauthtoken. This is a special username indicating that you will authenticate with an API key rather than a username and password. When asked for the password, enter your NVIDIA GPU Cloud API key. To pull the NVIDIA IndeX Docker image, issue the following command:

docker pull nvcr.io/nvidia-hpcvis/index:2.2
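
Optionally, you can confirm that the image is now available locally (a quick sanity check, not part of the official steps):

docker images nvcr.io/nvidia-hpcvis/index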

Below you will find information on how to run the container on one or multiple hosts with Docker or Singularity.

Running single-node with Docker

Once the docker image is downloaded to your machine, please run the NVIDIA IndeX container as follows:

docker run --runtime nvidia -p 8080:8080 nvcr.io/nvidia-hpcvis/index:2.2 --single
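
If you prefer to run the demo in the background, a detached variant might look like this (the container name index-demo is only an example); you can then follow its output with docker logs -f index-demo:

docker run --runtime nvidia -d --name index-demo -p 8080:8080 nvcr.io/nvidia-hpcvis/index:2.2 --single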

The NVIDIA IndeX server application starts immediately and loads a dataset. You can now connect to the NVIDIA IndeX server running on the machine with the Chrome (or Chromium) browser. Please open

http://<ip>:8080

in your Chrome browser, where <ip> is the address of the server running NVIDIA IndeX. The Chrome browser loads the HTML5-based NVIDIA IndeX client web interface. These connection instructions apply to the following scenarios as well.

Running multi-node with Docker

Please note that multi-node use requires a special license.

There are a few considerations to take into account when starting an IndeX cluster:

  • One node acts as the viewer/head node; the remaining nodes are workers.
  • For the nodes to communicate with each other, you need to select which node is the discovery node. For simplicity, the viewer node can also be the discovery node. In the examples below, the viewer node IP is referred to as $VIEWER_DISC_IP (see the example after this list).
  • You need to expose a few ports: 5555 (discovery), 10000 and 10001 (cluster), and 8080 (web UI).
  • For InfiniBand instructions, please see the next section.
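
For example, you might export the discovery address on each node like this (192.168.1.10 is a placeholder for your viewer node's IP):

export VIEWER_DISC_IP=192.168.1.10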

Launching the viewer:

sudo docker run --runtime nvidia \
    -p 8080:8080 -p 10000:10000 -p 10001:10001 -p 5555:5555 \
    nvcr.io/nvidia-hpcvis/index:2.2 \
        -dice::network::mode TCP_WITH_DISCOVERY \
        -dice::network::discovery_address $VIEWER_DISC_IP:5555 \
        -app::cluster_size 2

Launching the worker node:

sudo docker run \
    -p 8080:8080 -p 10000:10000 -p 10001:10001 -p 5555:5555 \
    nvcr.io/nvidia-hpcvis/index:2.2 \
        -dice::network::mode TCP_WITH_DISCOVERY \
        -dice::network::discovery_address $VIEWER_DISC_IP:5555 \
        -app::cluster_size 2 \
        -app::host_mode remote_service

If your hosts have multiple network interfaces attached (in different subnets), select the cluster interface by specifying its subnet (-dice::network::cluster_interface_address 192.168.1.0/24) or its IP address (-dice::network::cluster_interface_address 192.168.1.X).
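
For example, a worker launch pinned to the 192.168.1.0/24 cluster subnet could look like this (the subnet is a placeholder; the command simply combines the worker launch above with the interface option):

sudo docker run \
    -p 8080:8080 -p 10000:10000 -p 10001:10001 -p 5555:5555 \
    nvcr.io/nvidia-hpcvis/index:2.2 \
        -dice::network::mode TCP_WITH_DISCOVERY \
        -dice::network::discovery_address $VIEWER_DISC_IP:5555 \
        -dice::network::cluster_interface_address 192.168.1.0/24 \
        -app::cluster_size 2 \
        -app::host_mode remote_service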

Running multi-node with Docker - InfiniBand version

For InfiniBand, you will have to use host networking, along with a few tweaks to access the hardware capabilities.
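
Before launching, you can verify that the InfiniBand device nodes passed through via --device=/dev/infiniband actually exist on each host (an optional sanity check, not part of the official instructions):

ls /dev/infiniband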

Launching the viewer:

sudo docker run --runtime nvidia \
    --shm-size='16G' --device=/dev/infiniband --cap-add=IPC_LOCK --net=host \
    nvcr.io/nvidia-hpcvis/index:2.2 \
        -dice::network::mode TCP_WITH_DISCOVERY \
        -dice::network::discovery_address $VIEWER_DISC_IP:5555 \
        -app::cluster_size 2

Launching the worker node:

sudo docker run \
    --shm-size='16G' --device=/dev/infiniband --cap-add=IPC_LOCK --net=host \
    nvcr.io/nvidia-hpcvis/index:2.2 \
        -dice::network::mode TCP_WITH_DISCOVERY \
        -dice::network::discovery_address $VIEWER_DISC_IP:5555 \
        -app::cluster_size 2 \
        -app::host_mode remote_service

Running single-node with Singularity

singularity run --nv docker://nvcr.io/nvidia-hpcvis/index:2.2 --single --components
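
If you launch the container repeatedly, you may prefer to pull it once into a local SIF image and run from that (the file name index_2.2.sif is just an example; if the registry requires authentication, Singularity honors the SINGULARITY_DOCKER_USERNAME and SINGULARITY_DOCKER_PASSWORD environment variables):

singularity pull index_2.2.sif docker://nvcr.io/nvidia-hpcvis/index:2.2
singularity run --nv index_2.2.sif --single --components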

The NVIDIA IndeX server application starts immediately and loads a dataset. You can now connect to the NVIDIA IndeX server running on the machine with the Chrome (or Chromium) browser. Please open

http://<ip>:8080

in your Chrome browser, where <ip> is the address of the server running NVIDIA IndeX. The Chrome browser loads the HTML5-based NVIDIA IndeX client web interface.

Running multi-node with Singularity

Starting the Viewer node:

singularity run --nv docker://nvcr.io/nvidia-hpcvis/index:2.2 \
    -dice::network::mode TCP_WITH_DISCOVERY \
    -dice::network::discovery_address $VIEWER_DISC_IP:5555 \
    -app::cluster_size 2

Starting the Worker node:

singularity run --nv docker://nvcr.io/nvidia-hpcvis/index:2.2 \
    --add project_remote.prj \
    -dice::network::mode TCP_WITH_DISCOVERY \
    -dice::network::discovery_address $VIEWER_DISC_IP:5555 \
    -app::cluster_size 2

If your hosts have multiple network interfaces attached (in different subnets), select the cluster interface by specifying its subnet (-dice::network::cluster_interface_address 192.168.1.0/24) or its IP address (-dice::network::cluster_interface_address 192.168.1.X).

Running Singularity under Slurm

When using Slurm, the container scripts detect this automatically so you can launch everything with one command from the launch node:

srun -N3 \
    singularity run --nv docker://nvcr.io/nvidia-hpcvis/index:2.2 \
        -app::cluster_size 3

Here, the cluster size is 3.
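
For non-interactive use, the same launch can be wrapped in a minimal batch script (the #SBATCH directives shown are examples; adapt them to your site):

#!/bin/bash
#SBATCH --nodes=3
#SBATCH --job-name=index-demo
srun singularity run --nv docker://nvcr.io/nvidia-hpcvis/index:2.2 \
    -app::cluster_size 3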

However, if you want to run the commands individually as in the previous sections, disable the automatic Slurm detection by clearing the Slurm job ID:

export SLURM_JOB_ID=

Loading your own license

If you have your own NVIDIA IndeX license, you must bind mount it to the container path /opt/nvidia-index/demo/license.lic.

For Docker, add the following command line parameter: -v /host/path/to/license.lic:/opt/nvidia-index/demo/license.lic

For Singularity, add the following command line parameter: -B /host/path/to/license.lic:/opt/nvidia-index/demo/license.lic
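
For example, combining the single-node Docker command above with the license mount (the host path is a placeholder):

docker run --runtime nvidia -p 8080:8080 \
    -v /host/path/to/license.lic:/opt/nvidia-index/demo/license.lic \
    nvcr.io/nvidia-hpcvis/index:2.2 --single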