QMCPACK on Jean Zay

Introduction

QMCPACK is an electronic structure modelling code for molecules and solids based on quantum Monte Carlo methods.

Useful sites
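
  • QMCPACK web site: https://qmcpack.org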

Available versions

Version                        Modules to load
3.10.0 CUDA                    qmcpack/3.10.0-mpi-cuda
3.10.0                         qmcpack/3.10.0-mpi
3.9.2 CUDA                     qmcpack/3.9.2-mpi-cuda
3.7.0                          qmcpack/3.7.0-mpi gcc/8.3.0
3.7.0 CUDA                     qmcpack/3.7.0-mpi-cuda gcc/8.3.0 cuda/10.1.1
3.7.0 CUDA (CUDA-Aware MPI)    qmcpack/3.7.0-mpi-cuda gcc/8.3.0 cuda/10.1.1 openmpi/3.1.4-cuda
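
For example, to use the CUDA build of version 3.7.0, load the companion compiler and CUDA modules before the QMCPACK module, following the same pattern as in the submission scripts below:

module purge
module load gcc/8.3.0 cuda/10.1.1
module load qmcpack/3.7.0-mpi-cuda
module list    # check which modules are loaded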

Submission script on the CPU partition

qmcpack_mpi.slurm
#!/bin/bash
#SBATCH --nodes=1                   # Number of nodes
#SBATCH --ntasks-per-node=4         # Number of tasks per node
#SBATCH --cpus-per-task=10          # Number of cores for each task (important to get all memory)
#SBATCH --hint=nomultithread        # Disable hyperthreading
#SBATCH --job-name=qmcpack_mpi      # Jobname 
#SBATCH --output=%x.o%j             # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j              # Error file
#SBATCH --time=10:00:00             # Expected runtime HH:MM:SS (max 100h)
##
## Please, refer to comments below for
## more information about these 4 last options.
##SBATCH --account=<account>@cpu    # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>    # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev          # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4           # Uncomment for job requiring more than 20 hours (up to 4 nodes)
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Load needed modules
module load gcc/8.3.0
module load qmcpack/3.7.0-mpi
 
# echo commands
set -x
 
cd ${SLURM_SUBMIT_DIR}/dmc
 
# Execute code
srun qmcpack C2CP250_dmc_x2.in.xml
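
The script is submitted with sbatch from the directory containing your input files; for instance, with the file name used above:

sbatch qmcpack_mpi.slurm
squeue -u $USER    # follow the job state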

Submission script on the GPU partition

qmcpack_multi_gpus.slurm
#!/bin/bash
#SBATCH --nodes=1                    # Number of nodes
#SBATCH --ntasks-per-node=4          # Number of tasks per node
#SBATCH --gres=gpu:4                 # Allocate GPUs
#SBATCH --cpus-per-task=10           # Number of cores for each task (important to get all memory)
#SBATCH --hint=nomultithread         # Disable hyperthreading
#SBATCH --job-name=qmcpack_multi_gpu # Jobname 
#SBATCH --output=%x.o%j              # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j               # Error file
#SBATCH --time=10:00:00              # Expected runtime HH:MM:SS (max 100h for V100, 20h for A100)
##
## Please, refer to comments below for
## more information about these 4 last options.
##SBATCH --account=<account>@v100   # To specify gpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>    # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_gpu-dev          # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4           # Uncomment for job requiring more than 20 hours (up to 16 GPU, V100 only)
 
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Load needed modules
module load gcc/8.3.0
module load cuda/10.1.1
module load qmcpack/3.7.0-mpi-cuda
 
# echo commands
set -x
 
cd ${SLURM_SUBMIT_DIR}/dmc
 
# Execute code with the right binding. 1GPU per task
srun /gpfslocalsup/pub/idrtools/bind_gpu.sh qmcpack C2CP250_dmc_x2.in.xml
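
The bind_gpu.sh wrapper binds one GPU to each MPI task. If you wish to verify the binding before a long run, a quick check of this kind can be used (a sketch, assuming the wrapper exposes the selected GPU through CUDA_VISIBLE_DEVICES):

# Print the task-to-GPU mapping from inside the same allocation (verification only)
srun /gpfslocalsup/pub/idrtools/bind_gpu.sh bash -c 'echo "task ${SLURM_PROCID}: CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"'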

Submission script on the GPU partition with CUDA-Aware MPI

qmcpack_cuda_aware.slurm
#!/bin/bash
#SBATCH --nodes=1                     # Number of nodes
#SBATCH --ntasks-per-node=4           # Number of tasks per node
#SBATCH --gres=gpu:4                  # Allocate GPUs
#SBATCH --cpus-per-task=10            # Number of cores for each task (important to get all memory)
#SBATCH --hint=nomultithread          # Disable hyperthreading
#SBATCH --job-name=qmcpack_cuda_aware # Jobname 
#SBATCH --output=%x.o%j              # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j               # Error file
#SBATCH --time=10:00:00              # Expected runtime HH:MM:SS (max 100h for V100, 20h for A100)
##
## Please, refer to comments below for
## more information about these 4 last options.
##SBATCH --account=<account>@v100   # To specify gpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>    # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_gpu-dev          # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4           # Uncomment for job requiring more than 20 hours (up to 16 GPU, V100 only)
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Load needed modules
module load gcc/8.3.0
module load cuda/10.1.1
module load openmpi/3.1.4-cuda
module load qmcpack/3.7.0-mpi-cuda
 
# echo commands
set -x
 
cd ${SLURM_SUBMIT_DIR}/dmc
 
# Execute code with the right binding. 1GPU per task
srun /gpfslocalsup/pub/idrtools/bind_gpu.sh qmcpack C2CP250_dmc_x2.in.xml
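
To confirm that the loaded Open MPI build is CUDA-aware, you can query ompi_info once the openmpi/3.1.4-cuda module is loaded (standard Open MPI check, given here only as a hint):

ompi_info --parsable --all | grep mpi_built_with_cuda_support:value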

Comments:

  • All jobs have resources defined in Slurm per partition and per QoS (Quality of Service) by default. You can modify these limits by specifying another partition and/or QoS as shown in our documentation detailing the partitions and QoS.
  • For multi-project users, and for users having both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation of the project) on which the job's computing hours should be counted, as indicated in our documentation detailing project hours management (see the example below).
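
For instance, to charge a GPU job to a given project, uncomment the --account directive in the submission script and set your project identifier (here <my_project> is a placeholder for the value returned by echo $IDRPROJ):

# In the submission script, replace <my_project> with your project identifier
#SBATCH --account=<my_project>@v100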