NAMD on Jean Zay

Introduction

NAMD is a molecular dynamics software package traditionally used to simulate large systems.

Available versions

Version                               Variants
3.0-alpha3                            namd/3.0-a3
3.0-alpha9                            namd/3.0-a9
2.13                                  namd/2.13-mpi-charmpp-mpi
2.13                                  namd/2.13-mpi-charmpp-smp
2.13 CUDA                             namd/2.13-mpi-cuda
2.13 CUDA for Replica Exchange        namd/2.13-mpi-cuda-charmpp-mpi-smp
2.9                                   namd/2.9-mpi-charmpp-mpi-smp
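
The installed variants can be listed and loaded with the standard module commands, for example:

module avail namd                 # list the NAMD modules available on the machine
module load namd/2.13-mpi-cuda    # load the chosen variant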

Submission script for the default CPU partition

namd.slurm
#!/bin/bash
#SBATCH --nodes=10              # Number of nodes
#SBATCH --ntasks-per-node=40    # Number of MPI tasks per node
#SBATCH --cpus-per-task=1       # Number of OpenMP threads
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=DnaJ         # Job name
#SBATCH --output=%x.o%j         # Output file %x is the jobname, %j the jobid 
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=10:00:00         # Expected runtime HH:MM:SS (max 100h)
##
## Please refer to the comments below for
## more information about the last 4 options.
##SBATCH --account=<account>@cpu  # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>  # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev        # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4         # Uncomment for job requiring more than 20h (up to 4 nodes)
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Load the module
module load namd/2.13-mpi-charmpp-mpi
 
# Execute commands
srun namd2 DnaJ.namd
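
The script is then submitted and monitored with the usual Slurm commands, for example:

sbatch namd.slurm     # submit the job
squeue -u $USER       # check its state in the queue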

Submission script for the default GPU partition

Important: The version currently installed cannot use more than one compute node. Therefore, you should not use the srun or charmrun commands.

On GPU, the +idlepoll option is necessary to obtain good performance.

namd_gpu.slurm
#!/bin/bash
#SBATCH --nodes=1               # Number of nodes
#SBATCH --ntasks-per-node=1     # Number of MPI tasks per node
#SBATCH --cpus-per-task=40      # Number of OpenMP threads
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --gres=gpu:4            # Allocate 4 GPUs per node
#SBATCH --job-name=DnaJ         # Job name
#SBATCH --output=%x.o%j         # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=10:00:00         # Expected runtime HH:MM:SS (max 100h for V100, 20h for A100)
##
## Please refer to the comments below for
## more information about the last 3 options.
##SBATCH --account=<account>@v100 # To specify gpu accounting: <account> = echo $IDRPROJ
##SBATCH --qos=qos_gpu-dev        # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4         # Uncomment for job requiring more than 20h (up to 16 GPU, V100 only)
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Manage modules
module load namd/2.13-mpi-cuda
 
# Execute commands
namd2 +p40 +idlepoll DnaJ.namd
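
The +p value passed to namd2 should match the number of reserved cores (--cpus-per-task=40 here). If you prefer not to hard-code it, an equivalent launch line can take the value from the Slurm environment:

# Same launch line, with the core count read from the allocation
namd2 +p${SLURM_CPUS_PER_TASK} +idlepoll DnaJ.namd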

Submission script for Replica Exchange on the default GPU partition

Important: A multi-node GPU version of NAMD is installed, but we recommend using it only for Replica Exchange simulations.

It was compiled ignoring a NAMD configure error about performance.

You should use it only with one GPU per replica, via the option '+devicesperreplica 1'.

namd_gpu_RE.slurm
#!/bin/bash
#SBATCH --nodes=4               # Number of Nodes
#SBATCH --ntasks-per-node=4     # Number of MPI tasks per node
#SBATCH --cpus-per-task=10      # Number of OpenMP threads
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --gres=gpu:4            # Allocate 4 GPUs per node
#SBATCH --job-name=test_RE      # Job name
#SBATCH --output=%x.o%j         # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=10:00:00         # Expected runtime HH:MM:SS (max 100h for V100, 20h for A100)
##
## Please refer to the comments below for
## more information about the last 3 options.
##SBATCH --account=<account>@v100 # To specify gpu accounting: <account> = echo $IDRPROJ
##SBATCH --qos=qos_gpu-dev       # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4        # Uncomment for job requiring more than 20h (up to 16 GPU, V100 only)
 
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Load the module
module load namd/2.13-mpi-cuda-charmpp-mpi-smp
 
# Number of replicas (must match the total number of MPI tasks: nodes x ntasks-per-node)
export replicas=16
 
# Create one output directory per replica (brace expansion does not work with a variable)
for i in $(seq 0 $((replicas - 1))); do
    mkdir -p output/$i
done
 
set -x
srun $(which namd2) +idlepoll +devicesperreplica 1 \
     +replicas $replicas job0.conf +stdout output/%d/job0.%d.log
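
In the +stdout argument, %d is replaced by the replica index, so each replica writes its log in its own subdirectory of output/. A quick sanity check after the run is to count the logs produced, for example:

ls output/*/job0.*.log | wc -l    # expected: one log per replica (16 here)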

Comments:

  • By default, all jobs have resources defined in Slurm per partition and per QoS (Quality of Service). You can modify the limits by specifying another partition and/or QoS, as shown in our documentation detailing the partitions and QoS.
  • For multi-project users and those with both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation of the project) on which the job's computing hours should be counted, as indicated in our documentation detailing project hours management.
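
For example, to fill in the --account directive used in the scripts above, the project identifier is read from the environment and copied into the Slurm header:

echo $IDRPROJ                       # prints the project identifier to use as <account>
#SBATCH --account=<account>@v100    # in the submission script, replace <account> with that value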