
HOOMD-blue

Description
HOOMD-blue is a highly flexible and scalable particle simulation toolkit. It makes use of high-level Python scripts to set initial conditions, control simulation parameters, and extract data for in situ analysis.
Publisher
The Glotzer Group
Latest Tag
v2.6.0
Modified
April 1, 2024
Compressed Size
700.5 MB
Multinode Support
No
Multi-Arch Support
No

HOOMD-blue

More information about HOOMD-blue is available on the HOOMD-blue webpage.

Please cite HOOMD-blue if it is used in any published work.

System Requirements

Before running the NGC HOOMD-blue container, please ensure that your system meets the following requirements.

  • Pascal (sm60) or Volta (sm70) NVIDIA GPU(s)
  • CUDA driver version >= 384.81
  • One of the following container runtimes: Docker (with the nvidia-docker plugin) or Singularity

A HOOMD-blue executable optimized for your system hardware will be chosen automatically at runtime.
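
For example, the installed GPU model and driver version can be verified on the host with nvidia-smi (assuming the NVIDIA driver is already installed):

$ nvidia-smi --query-gpu=name,driver_version --format=csv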


NOTE

For early access to ARM64 container content, please see: https://developer.nvidia.com/early-access-arm-containers

Running HOOMD-blue

Command invocation

Typical HOOMD-blue invocation involves executing a HOOMD-blue script with Python.

$ python3 script.py [options]

Where:

  • python3: Python interpreter executable
  • script.py: script containing instructions for HOOMD-blue execution
  • [options]: command-line options for HOOMD-blue. An exhaustive list of options is available on the HOOMD-blue documentation website
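
For illustration, a minimal script.py might look like the following sketch of a short Lennard-Jones run, written against the HOOMD-blue 2.x Python API; the particle type, lattice size, and run length are arbitrary example choices:

import hoomd
import hoomd.md

# Initialize the execution context; this parses HOOMD-blue command-line
# options such as --mode=gpu.
hoomd.context.initialize("")

# Place particles of type 'A' on a 10x10x10 simple cubic lattice.
hoomd.init.create_lattice(unitcell=hoomd.lattice.sc(a=1.2), n=10)

# Lennard-Jones pair potential on a cell-list based neighbor list.
nl = hoomd.md.nlist.cell()
lj = hoomd.md.pair.lj(r_cut=2.5, nlist=nl)
lj.pair_coeff.set('A', 'A', epsilon=1.0, sigma=1.0)

# Langevin dynamics at reduced temperature kT=1.0.
hoomd.md.integrate.mode_standard(dt=0.005)
hoomd.md.integrate.langevin(group=hoomd.group.all(), kT=1.0, seed=42)

# Run for 10,000 time steps.
hoomd.run(10000)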

Examples

HOOMD-blue relies on Python scripts for instructions to run. The HOOMD-blue developers maintain a collection of example scripts for benchmarking, available via git:

$ git clone https://github.com/joaander/hoomd-benchmarks.git

This command downloads several sub-folders containing benchmarking scripts. The scripts can be bind-mounted and executed by the HOOMD-blue container. For example, to run the microspheres benchmark:

Docker:

$ cd hoomd-benchmarks/microsphere
$ nvidia-docker run -ti --rm --privileged -v $(pwd):/host_pwd nvcr.io/hpc/hoomd-blue:v2.6.0 python3 /host_pwd/bmark.py

Singularity:

$ cd hoomd-benchmarks/microsphere
$ singularity build hoomd-blue_v2.6.0.simg docker://nvcr.io/hpc/hoomd-blue:v2.6.0
$ singularity run --nv hoomd-blue_v2.6.0.simg python3 bmark.py

More detailed instructions on using the NGC HOOMD-blue container in Docker and Singularity can be found below.

Running with Singularity

Pull the image

Save the NGC HOOMD-blue container as a local Singularity image file:

$ singularity build hoomd-blue_v2.6.0.simg docker://nvcr.io/hpc/hoomd-blue:v2.6.0

This command saves the container in the current directory as hoomd-blue_v2.6.0.simg

Note: Singularity/2.x

In order to pull NGC images with Singularity version 2.x and earlier, NGC container registry authentication credentials are required.

To set your NGC container registry authentication credentials:

$ export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
$ export SINGULARITY_DOCKER_PASSWORD=<NVIDIA NGC API key>

More information describing how to obtain and use your NVIDIA NGC Cloud Services API key can be found here.

Note: Singularity 3.1.x - 3.2.x

There is currently a bug in Singularity 3.1.x and 3.2.x causing the LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

$ LD_LIBRARY_PATH="" singularity exec ...

Once the local Singularity image has been pulled, the following modes of running are supported:

Singularity Aliases

To simplify the examples below, define the following command aliases. These may be set as environment variables in a shell or batch script.

Singularity alias

SINGULARITY will be used to launch processes within the NGC HOOMD-blue container using the Singularity runtime:

$ export SINGULARITY="$(which singularity) run --nv -B $(pwd):/host_pwd hoomd-blue_v2.6.0.simg"

Where:

  • run: specifies mode of execution
  • --nv: exposes the host GPU to the container
  • -B $(pwd):/host_pwd: bind mounts the current working directory in the container at /host_pwd
  • hoomd-blue_v2.6.0.simg: path of the saved Singularity image file

Local workstation with Singularity

This mode of running is suitable for interactive execution from a local workstation containing one or more GPUs. There are no requirements other than those stated in the System Requirements section.

Command line

To launch one HOOMD-blue process per GPU, use:

$ ${SINGULARITY} mpirun -mca pml ^ucx -mca btl smcuda,self --bind-to core -n <num_gpus> python3 script.py

Where:

  • -mca pml ^ucx -mca btl smcuda,self: MPI parameters to disable UCX and set the byte-transfer layer (may significantly increase single-node performance)
  • --bind-to core: Distributes MPI ranks evenly among CPU cores (ensures GPU affinity)
  • <num_gpus>: specifies the number of GPUs available on the local workstation
  • SINGULARITY: Singularity alias defined above
  • script.py: path of a HOOMD-blue Python script

Interactive shell

To invoke an interactive shell, run /bin/bash within the container:

$ ${SINGULARITY} /bin/bash

While Singularity provides an interactive shell via singularity shell, this invocation ignores container entrypoint scripts. Thus, the preferred method to access an interactive shell is via a singularity run command directed at /bin/bash.

To run a HOOMD-blue Python script while using the interactive shell:

$ mpirun -mca pml ^ucx -mca btl smcuda,self --bind-to core -n <num_gpus> python3 /host_pwd/script.py

Where:

  • -mca pml ^ucx -mca btl smcuda,self: MPI parameters to disable UCX and set the byte-transfer layer (may significantly increase single-node performance)
  • --bind-to core: Distributes MPI ranks evenly among CPU cores (ensures GPU affinity)
  • <num_gpus>: specifies the number of GPUs available on the local workstation
  • /host_pwd/script.py: path of a HOOMD-blue Python script

Cluster mpirun with Singularity

Clusters with a compatible local OpenMPI installation may launch the NGC HOOMD-blue container using the host-provided mpirun or mpiexec launcher.

Cluster mpirun requirements

To use the cluster provided mpirun command to launch the NGC HOOMD-blue container, OpenMPI/3.0.2 or newer is required.

Running with mpirun

Running with mpirun maintains tight integration with the resource manager.

Launch HOOMD-blue within the container, using mpirun:

$ mpirun -n <total_tasks> --bind-to core ${SINGULARITY} python3 /host_pwd/script.py

Where:

  • <total_tasks>: total task count
  • --bind-to core: Distributes MPI ranks evenly among CPU cores (ensures GPU affinity)
  • SINGULARITY: Singularity alias defined above
  • /host_pwd/script.py: path of a HOOMD-blue Python script
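
As an illustrative sketch, on a SLURM-managed cluster this mode might be wrapped in a batch script along the following lines; the job geometry, module names, and GPUs-per-node count are assumptions to adapt for your site:

#!/bin/bash
#SBATCH --nodes=2                  # example job geometry
#SBATCH --ntasks-per-node=4        # one MPI rank per GPU, assuming 4 GPUs per node
#SBATCH --gres=gpu:4
#SBATCH --time=01:00:00

# Load a site-provided OpenMPI (>= 3.0.2) and Singularity; module names vary by cluster.
module load openmpi singularity

# Singularity alias from the Singularity Aliases section above.
export SINGULARITY="$(which singularity) run --nv -B $(pwd):/host_pwd hoomd-blue_v2.6.0.simg"

# Launch one HOOMD-blue rank per allocated task with the host mpirun.
mpirun -n ${SLURM_NTASKS} --bind-to core ${SINGULARITY} python3 /host_pwd/script.py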

Container mpirun with Singularity

The NGC HOOMD-blue container allows the user to launch parallel MPI jobs from fully within the container. This mode has the fewest host requirements but does require the additional setup steps described below.

Singularity mpirun Requirements

  • Passwordless rsh/ssh between compute nodes

Running mpirun in your container

The internal container OpenMPI installation requires an OpenMPI hostfile to specify the addresses of all nodes in the cluster. The OpenMPI hostfile takes the following form:

<hostname_1>
<hostname_2>
...

Generation of this nodelist file via bash script will vary from cluster to cluster. Common examples include:

SLURM:

HOSTFILE=".hostfile.${SLURM_JOB_ID}"
for host in $(scontrol show hostnames); do
  echo "${host}" >> ${HOSTFILE}
done

PBS:

HOSTFILE=$(pwd)/.hostfile.${PBS_JOBID}
for host in $(uniq ${PBS_NODEFILE}); do
  echo "${host}" >> ${HOSTFILE}
done

Additionally, mpirun must be configured to start the OpenMPI orted process within the container runtime. Set the following environment variables so that mpirun launches orted inside the container:

$ export SIMG=hoomd-blue_v2.6.0.simg
$ export OMPI_MCA_plm=rsh
$ export OMPI_MCA_plm_rsh_args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR'
$ export OMPI_MCA_orte_launch_agent="${SINGULARITY} /usr/bin/orted"

To launch HOOMD-blue using mpirun:

$ ${SINGULARITY} --check_gpu=false mpirun --hostfile=<path_to_hostfile> --np=<total_tasks> --bind-to core python3 /host_pwd/script.py

Where:

  • SINGULARITY: Singularity alias defined above
  • --check_gpu=false: skip check for a GPU on the launch node
  • <path_to_hostfile>: text file listing the compute hosts
  • <total_tasks>: total task count
  • --bind-to core: Distributes MPI ranks evenly among CPU cores (ensures GPU affinity)
  • /host_pwd/script.py: path of a HOOMD-blue Python script
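
As a sketch that puts the pieces together, the container mpirun steps above might be combined in a single SLURM batch script; the job geometry and hostfile name are assumptions, and the hostfile loop is the SLURM example shown earlier:

#!/bin/bash
#SBATCH --nodes=2                  # example job geometry
#SBATCH --ntasks-per-node=4        # one MPI rank per GPU, assuming 4 GPUs per node
#SBATCH --gres=gpu:4

export SIMG=hoomd-blue_v2.6.0.simg
export SINGULARITY="$(which singularity) run --nv -B $(pwd):/host_pwd ${SIMG}"

# Build the OpenMPI hostfile from the SLURM allocation.
HOSTFILE=".hostfile.${SLURM_JOB_ID}"
for host in $(scontrol show hostnames); do
  echo "${host}" >> ${HOSTFILE}
done

# Configure mpirun to start the OpenMPI orted process inside the container over ssh.
export OMPI_MCA_plm=rsh
export OMPI_MCA_plm_rsh_args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR'
export OMPI_MCA_orte_launch_agent="${SINGULARITY} /usr/bin/orted"

# Launch HOOMD-blue with the container's mpirun; --check_gpu=false skips the GPU check on the launch node.
${SINGULARITY} --check_gpu=false mpirun --hostfile=${HOSTFILE} --np=${SLURM_NTASKS} --bind-to core python3 /host_pwd/script.py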

Running with nvidia-docker

NGC supports the Docker runtime through the nvidia-docker plugin.

nvidia-docker Aliases

To simplify the examples below, define the following command aliases. These may be set as environment variables in a shell or batch script.

nvidia-docker alias

DOCKER will be used to launch processes within the NGC HOOMD-blue container using the nvidia-docker runtime:

$ export DOCKER="nvidia-docker run --device=/dev/infiniband --cap-add=IPC_LOCK --privileged -it --rm -v $(pwd):/host_pwd nvcr.io/hpc/hoomd-blue:v2.6.0"

Where:

  • DOCKER: alias used to store the base Docker command
  • run: specifies the mode of execution
  • --device=/dev/infiniband --cap-add=IPC_LOCK: grants container access to host infiniband device(s)
  • --privileged: grants container access to host resources
  • -it: allocates an interactive pseudo-TTY
  • --rm: makes the container ephemeral (remove the container on exit)
  • -v $(pwd):/host_pwd: bind mounts the current working directory in the container as /host_pwd
  • nvcr.io/hpc/hoomd-blue:v2.6.0: URI to the NGC HOOMD-blue image

Local workstation with nvidia-docker

This mode of running is suitable for interactive execution from a local workstation containing one or more GPUs. There are no requirements other than those stated in the System Requirements section.

Command line execution with nvidia-docker

To launch one HOOMD-blue process per GPU, use:

$ ${DOCKER} mpirun --allow-run-as-root -mca pml ^ucx -mca btl smcuda,self -n <num_gpus> python3 /host_pwd/script.py

Where:

  • -mca pml ^ucx -mca btl smcuda,self: MPI parameters to disable UCX and set the byte-transfer layer (may significantly increase single-node performance)
  • <num_gpus>: specifies the number of GPUs available on the local workstation
  • /host_pwd/script.py: the path of a HOOMD-blue Python script

Interactive shell with nvidia-docker

To start an interactive shell within the container environment, launch the container using the alias set earlier, DOCKER:

$ ${DOCKER}

To run a HOOMD-blue Python script while using the interactive shell:

$ mpirun --allow-run-as-root -n <num_gpus> python3 /host_pwd/script.py

Where:

  • <num_gpus>: specifies the number of GPUs available on the local workstation
  • /host_pwd/script.py: the path of a HOOMD-blue Python script

Suggested Reading

HOOMD-blue Documentation