Description: MELD is a tool for inferring the structure of biomolecules from sparse, ambiguous, or noisy data. MELD combines semi-reliable data with atomistic physical models using Bayesian inference.
Publisher: Justin MacCallum, Alberto Perez, and Ken Dill
Latest Tag: 200930-0.4.15
Modified: April 1, 2024
Compressed Size: 2.67 GB
Multinode Support: No
Multi-Arch Support: No

MELD

MELD (Modeling Employing Limited Data) is a tool for inferring the structure of biomolecules from sparse, ambiguous, or noisy data. MELD can harness such problematic information in a physics-based, Bayesian framework for improved protein structure determination. MELD was the winner of the 13th Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction (CASP13) competition in the data-assisted targets category.

JL MacCallum, A Perez, and KA Dill, Determining protein structures by combining semireliable data with atomistic physical models by Bayesian inference, PNAS, 2015, 112(22), pp. 6985-6990.

Getting Started

We will walk through how to run an example simulation. For more details, see the MELD documentation.

Requirements

In essence, MELD provides a highly customizable way of running replica exchange molecular dynamics (REMD): you can vary the temperature, the Hamiltonian, or both along the replica ladder. The minimum number of replicas for a MELD run is two, so you need at least two GPUs in your workstation or node.

You will also need nvidia-docker2 to make GPUs available inside the container; see the NVIDIA Container Toolkit documentation for installation instructions. This container is built on CUDA 11 and requires NVIDIA driver 450.36.06 or later (a quick way to check the driver is shown after the list below). Summarizing:

  • Two or more GPUs, Pascal or better architecture
  • nvidia-docker2
  • NVIDIA driver 450.36.06+
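
To quickly confirm the driver version and GPU models on the host, you can query nvidia-smi (this is only a convenience check; any method of verifying the driver works):

nvidia-smi --query-gpu=name,driver_version --format=csv

The reported driver version should be 450.36.06 or later, and every GPU you plan to use should be of Pascal architecture or newer.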

Test your setup

Run the following brief functional test to confirm that you have a compatible driver, nvidia-docker2, and a supported GPU. It exercises the OpenMM installation bundled in the container.

docker run -it --gpus all nvcr.io/hpc/meld:200930-0.4.15 python -m simtk.testInstallation

After the test runs, you should see an indication that the computed forces were within tolerance.
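
If the test fails, or you simply want to inspect the environment, you can open an interactive shell inside the container (this assumes bash is present in the image, which is typical for NGC HPC containers) and run nvidia-smi or python from there:

docker run -it --rm --gpus all nvcr.io/hpc/meld:200930-0.4.15 bash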

Setting up

Create the input files for your MELD simulation; see the MELD documentation for details on setting up a simulation. We'll assume that your simulation is specified by an input script named setup.py.

Decide how many GPUs (and therefore replicas) you will use and set N_REPLICAS accordingly. For example, if you'd like to use two replicas, and therefore two GPUs, you'd set N_REPLICAS=2.
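
Before preprocessing, you may want to confirm that the container can actually see enough GPUs. As a quick check (nvidia-smi is made available inside the container by the NVIDIA container runtime), list the visible GPUs and make sure there are at least N_REPLICAS of them:

docker run --rm --gpus all nvcr.io/hpc/meld:200930-0.4.15 nvidia-smi --list-gpus

Once the replica count and GPU count line up, preprocess your simulation input file.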

docker run -it --gpus all --user $(id -u):$(id -g) -w /workspace -v $(pwd):/workspace nvcr.io/hpc/meld:200930-0.4.15 python setup.py

Running the preprocessing will create a directory named Data in the current working directory.
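
As a simple spot check (the exact contents of Data depend on your setup script, so this only confirms the directory was created), you can list it:

ls Data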

Running

Launch the simulation using the mpirun bundled in the container, with one MPI rank per replica.

docker run -it --gpus all --user $(id -u):$(id -g) -w /workspace -v $(pwd):/workspace nvcr.io/hpc/meld:200930-0.4.15 /usr/local/openmpi/bin/mpirun -np 2 -host localhost:2 launch_remd
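
If your setup script defines more replicas, scale the MPI launch to match. For example, with a hypothetical N_REPLICAS=4 and four GPUs on one node:

docker run -it --gpus all --user $(id -u):$(id -g) -w /workspace -v $(pwd):/workspace nvcr.io/hpc/meld:200930-0.4.15 /usr/local/openmpi/bin/mpirun -np 4 -host localhost:4 launch_remd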

If you're launching on a cluster, the Pyxis plugin for SLURM makes multi-node runs straightforward. With Pyxis installed on your cluster, launching the run is as simple as:

#!/bin/bash -l
#SBATCH --nodes=2
#SBATCH --tasks-per-node=8

srun --container-image nvcr.io/hpc/meld:200930-0.4.15 \
     --container-mounts ${PWD}:/workspace \
     --container-workdir /workspace \
     launch_remd
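
Assuming the batch script above is saved as submit_meld.sh (the file name is arbitrary), submit it with sbatch. Note that 2 nodes with 8 tasks per node gives 16 MPI ranks, so N_REPLICAS in setup.py should be 16 and each node should expose 8 GPUs.

sbatch submit_meld.sh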