AMS on Jean Zay

Introduction

AMS (Amsterdam Modeling Suite, formerly ADF) is a molecular modeling software package based on density functional theory.

Useful sites

SCM Web site: https://www.scm.com

Available versions

Attention: a bug was found in versions prior to 2020.103. It affects the computation of analytical frequencies in some cases.

Version             Modules to load
2022.103 MPI        ams/2022.103-mpi
2021.102 MPI        ams/2021.102-mpi
2020.103 MPI        ams/2020.103-mpi
2020.101 MPI        ams/2020.101-mpi
2019.305 MPI        adf/2019.305-mpi
2019.104 MPI CUDA   adf/2019.104-mpi-cuda cuda/10.1.1
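
For example, to check which versions are installed and load one of the modules listed above:

# List the AMS/ADF modules installed on the machine
module avail ams adf

# Load the chosen version (here the most recent CPU version,
# which includes the fix mentioned above)
module purge
module load ams/2022.103-mpi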

Information about GPU porting

Not all ADF features are available on GPU. Please consult the dedicated page on the SCM Web site if you wish to use this version.

Example of usage on the CPU partition

adf.slurm
#!/bin/bash
#SBATCH --nodes=1            # Number of nodes
#SBATCH --ntasks-per-node=40 # Number of tasks per node
#SBATCH --cpus-per-task=1    # Number of OpenMP threads per task
#SBATCH --hint=nomultithread # Disable hyperthreading
#SBATCH --job-name=ADF       # Jobname
#SBATCH --output=ADF.o%j     # Output file
#SBATCH --error=ADF.o%j      # Error file
#SBATCH --time=10:00:00      # Expected runtime HH:MM:SS (max 100h)
##
## Please refer to the comments below for more
## information about the last 4 options.
##SBATCH --account=<account>@cpu       # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4      # Uncomment for job requiring more than 20h (up to 4 nodes)
 
# Clean out the modules loaded interactively and inherited by default
module purge
 
# Load the necessary modules (a CPU-only module; see the table of versions above)
module load ams/2022.103-mpi
 
# JOBSCRATCH is automatically deleted at the end of the job
export SCM_TMPDIR=$JOBSCRATCH
 
# Execute command (opt.inp must be an executable SCM run script)
./opt.inp
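
The script is then submitted with sbatch; a minimal usage example, assuming the file names adf.slurm and opt.inp used above:

# Submit the job, then check its state in the queue
sbatch adf.slurm
squeue -u $USER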

Example of usage on the GPU partition

adf.slurm
#!/bin/bash
#SBATCH --nodes=1            # Number of nodes
#SBATCH --gres=gpu:4        # Allocate 4 GPUs per node
#SBATCH --ntasks-per-node=40 # Number of tasks per node
#SBATCH --cpus-per-task=1    # Number of OpenMP threads per task
#SBATCH --hint=nomultithread # Disable hyperthreading
#SBATCH --job-name=ADF       # Jobname
#SBATCH --output=ADF.o%j     # Output file
#SBATCH --error=ADF.o%j      # Error file
#SBATCH --time=10:00:00      # Expected runtime HH:MM:SS (max 100h for V100, 20h for A100)
##
## Please refer to the comments below for more
## information about the last 4 options.
##SBATCH --account=<account>@v100      # To specify gpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_gpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4      # Uncomment for job requiring more than 20h (up to 16 GPUs, V100 only)
 
# Manage modules
module purge
module load adf/2019.104-mpi-cuda cuda/10.1.1
 
# JOBSCRATCH is automatically deleted at the end of the job
export SCM_TMPDIR=$JOBSCRATCH
 
# Execute command (opt.inp must be an executable SCM run script)
./opt.inp
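
Because $JOBSCRATCH is purged when the job ends, anything written under $SCM_TMPDIR that you want to keep must be copied back before the script exits. A minimal sketch, with a hypothetical file name scratch_file_to_keep:

# Hypothetical sketch: retrieve files from the scratch space before
# the job ends ($JOBSCRATCH is purged automatically)
cp -r $SCM_TMPDIR/scratch_file_to_keep $SLURM_SUBMIT_DIR/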

Comments:

  • All jobs have resources defined in Slurm per partition and per QoS (Quality of Service) by default. You can modify the limits by specifying another partition and/or QoS as shown in our documentation detailing the partitions and QoS.
  • For multi-project users and those having both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation of the project) on which to count the job's computing hours, as indicated in our documentation detailing the project hours management; see the example below.
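
For example, assuming the command below returns the (hypothetical) project identifier abc:

# Display your default project identifier
echo $IDRPROJ                 # e.g. abc (hypothetical)

# Then set the corresponding line in the submission script:
#SBATCH --account=abc@cpu     # to count the job on the project's CPU hours
#SBATCH --account=abc@v100    # or on its V100 GPU hours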