Amber on Jean Zay

Introduction

Amber is a software suite for molecular dynamics simulation of biomolecular systems.

Useful links

  • Amber website: https://ambermd.org

Available versions

Version                       Modules to load
20 MPI, CUDA                  amber/20-mpi-cuda
22 MPI, CUDA                  amber/22-mpi-cuda
22 MPI, CUDA + AmberTools 23  amber/22-AmberTools23-mpi-cuda
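
The installed versions can be listed, and one of them loaded, with the standard Environment Modules commands, for example:

# List the Amber modules installed on Jean Zay
module avail amber

# Load one of the versions from the table above
module load amber/22-AmberTools23-mpi-cuda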

Available pmemd variants

  • pmemd: CPU OpenMP version
  • pmemd.MPI: CPU MPI version
  • pmemd.cuda/pmemd.cuda_SPFP: single-GPU OpenMP version, mixed precision
  • pmemd.cuda_SPFP.MPI: multi-GPU MPI version, mixed precision
  • pmemd.cuda_DPFP: single-GPU OpenMP version, double precision
  • pmemd.cuda_DPFP.MPI: multi-GPU MPI version, double precision
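
All pmemd variants read the same input file format. The following is a minimal sketch of a production input (the prod.in referenced by the submission scripts below); every parameter value is illustrative and must be adapted to your system:

Production MD (NPT, 300 K) - illustrative values only
 &cntrl
  imin=0, irest=1, ntx=5,            ! restart MD from coordinates and velocities
  nstlim=500000, dt=0.002,           ! 500,000 steps of 2 fs = 1 ns
  ntc=2, ntf=2, cut=9.0,             ! SHAKE on bonds involving H, 9 A cutoff
  ntt=3, gamma_ln=2.0, temp0=300.0,  ! Langevin thermostat at 300 K
  ntb=2, ntp=1, taup=2.0,            ! constant pressure (NPT)
  ntpr=5000, ntwx=5000, ntwr=50000,  ! energy, trajectory and restart output frequencies
 /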

Example of a submission script on CPU

#!/bin/bash
#SBATCH --nodes=1                      # 1 node is used
#SBATCH --ntasks-per-node=40           # 40 MPI tasks
#SBATCH --cpus-per-task=1              # Number of OpenMP threads per MPI task
#SBATCH --hint=nomultithread           # Disable hyperthreading
#SBATCH --job-name=pmemd               # Jobname
#SBATCH --output=%x.%j                 # Standard output file (%x is the job name, %j is the job number)
#SBATCH --error=%x.%j                  # Standard error file
#SBATCH --time=10:00:00                # Expected runtime HH:MM:SS (max 100h)
##
## Please refer to the comments below for
## more information about the last 4 options.
##SBATCH --account=<account>@cpu       # To specify CPU accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev             # Uncomment for a job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4              # Uncomment for a job requiring more than 20h (up to 4 nodes)
 
# Clean out the modules loaded in interactive mode and inherited by default
module purge
 
# Load needed modules
module load amber/20-mpi-cuda
 
# Remove some warnings
export PSM2_CUDA=0
export OMPI_MCA_opal_warn_on_missing_libcuda=0 
 
srun pmemd.MPI -O -i prod.in -o prod.out -p sys.prmtop -c sys.rst \
       -r sys.rst -ref sys.inpcrd -x sys.mdcrd 
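
Assuming the script above is saved as pmemd_cpu.slurm (a hypothetical file name), it is submitted and monitored with the usual Slurm commands:

sbatch pmemd_cpu.slurm   # submit the job
squeue -u $USER          # check its state in the queue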

Example of use on the GPU partition

Single-GPU submission script

pmemd_monogpu.slurm
#!/bin/bash
#SBATCH --nodes=1                   # 1 node is used
#SBATCH --ntasks-per-node=1         # 1 MPI task
#SBATCH --cpus-per-task=10          # Number of OpenMP threads per MPI task
#SBATCH --gres=gpu:1                # Number of GPUs per node
#SBATCH --hint=nomultithread        # Disable hyperthreading
#SBATCH --job-name=pmemd            # Jobname
#SBATCH --output=%x.%j              # Standard output file (%x is the job name, %j is the job number)
#SBATCH --error=%x.%j               # Standard error file
#SBATCH --time=10:00:00             # Expected runtime HH:MM:SS (max 100h for V100, 20h for A100)
##
## Please refer to the comments below for
## more information about the last 4 options.
##SBATCH --account=<account>@v100   # To specify GPU accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>    # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_gpu-dev          # Uncomment for a job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4           # Uncomment for a job requiring more than 20h (up to 16 GPUs, V100 only)
 
# Clean out the modules loaded in interactive mode and inherited by default
module purge
 
# Load needed modules
module load amber/20-mpi-cuda
 
srun pmemd.cuda -O -i prod.in -o prod.out -p sys.prmtop -c sys.rst \
       -r sys.rst -ref sys.inpcrd -x sys.mdcrd
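
For systems that scale beyond a single GPU, the pmemd.cuda_SPFP.MPI variant listed above can be launched with one MPI task per GPU. The following is a minimal sketch for the 4 GPUs of one node, assuming the same module and input files as the mono-GPU example; the accounting, partition and QoS directives should be adapted exactly as above:

#!/bin/bash
#SBATCH --nodes=1                   # 1 node is used
#SBATCH --ntasks-per-node=4         # 4 MPI tasks (one per GPU)
#SBATCH --cpus-per-task=10          # Number of OpenMP threads per MPI task
#SBATCH --gres=gpu:4                # Number of GPUs per node
#SBATCH --hint=nomultithread        # Disable hyperthreading
#SBATCH --job-name=pmemd_multi      # Jobname
#SBATCH --output=%x.%j              # Standard output file
#SBATCH --error=%x.%j               # Standard error file
#SBATCH --time=10:00:00             # Expected runtime HH:MM:SS

# Clean out the modules loaded in interactive mode and inherited by default
module purge

# Load needed modules
module load amber/20-mpi-cuda

# One MPI task is bound to each of the 4 GPUs
srun pmemd.cuda_SPFP.MPI -O -i prod.in -o prod.out -p sys.prmtop -c sys.rst \
       -r sys.rst -ref sys.inpcrd -x sys.mdcrd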