VASP on Jean Zay

Introduction

VASP is an atomistic simulation package for the physics and chemistry of materials. It solves the Schrödinger equation using density functional theory and the Hartree-Fock method, as well as other more advanced methods.

To obtain access to VASP at IDRIS, you must first register with VASP. VASP provides group licences; however, each member of the group must be registered individually. As required by VASP, the IDRIS user support team verifies your registration before access can be opened to you.

Available versions

Version     Variants
6.4.2       mpi
6.4.0       mpi-cuda-openacc, cuda-openacc-mlff_patched, mpi-cuda-openacc-vaspsol, mpi-ml_ff_patched, mpi-vaspsol
6.3.2       mpi-cuda-openacc_ml-ff-abinitio-patched, mpi_ml-ff-abinitio-patched
6.3.0       mpi, mpi-cuda-openacc
6.2.0       mpi, mpi-cuda-openacc
6.1.2       mpi
5.4.4.pl2   mpi-vaspsol
5.4.4       mpi-cuda, mpi-vtst
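
To load one of these versions, use the module command; the module names combine the version and the variant (for example vasp/5.4.4-mpi-cuda, as used in the scripts below). The exact name of a given variant can be checked as follows:

module avail vasp                # list the VASP modules installed on Jean Zay
module load vasp/6.4.2-mpi       # example: version 6.4.2, mpi variant (name assumed from the table above)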

Available executable files

On CPU

  • vasp_std : Standard version of VASP
  • vasp_gam : Version for calculations with only the Gamma point (see the example after this list)
  • vasp_ncl : Version for non-collinear calculations
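
The vasp_gam executable is valid only when the k-point sampling is restricted to the Gamma point. As an illustrative sketch, such a KPOINTS file could be generated with:

# Minimal KPOINTS limited to the Gamma point:
# line 1: title, line 2: 0 = automatic generation, line 3: Gamma-centred grid,
# line 4: 1x1x1 subdivisions, line 5: no shift
cat > KPOINTS << 'EOF'
Gamma-point only
0
Gamma
1 1 1
0 0 0
EOF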

On GPU

Version 6

  • vasp_std
  • vasp_gam
  • vasp_ncl

Version 5

  • vasp_gpu : Standard version, ported on GPU with CUDA
  • vasp_ncl_gpu : Version for non-collinear calculations, ported on GPU with CUDA

Important notice

Some problems have been detected with the vasp_gam executable. If your jobs run into memory problems, please use an alternative VASP build by loading the following modules, in this order:

module load intel-compilers/18.0.5 intel-mpi/18.0.5
module load vasp/5.4.4-mpi-cuda

Example of usage on the CPU partition

Submission script on the CPU partition

vasp_cpu.slurm
#!/bin/bash
#SBATCH --nodes=1               # 1 node reserved 
#SBATCH --ntasks-per-node=40    # 40 MPI tasks
#SBATCH --cpus-per-task=1       # 1 OpenMP thread
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=VASP
#SBATCH --output=%x.o%j         # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=10:00:00         # Expected runtime  HH:MM:SS (max 100h)
##
## Please refer to the "Comments" below for
## more information about the following 4 options.
##SBATCH --account=<account>@cpu  # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>  # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev        # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4         # Uncomment for job requiring more than 20h (only one node)
 
# Cleans out the modules loaded in interactive and inherited by default 
module purge
 
# Load the necessary modules
module load vasp/5.4.4-mpi-cuda
 
srun vasp_std
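
The script is submitted with the standard Slurm command from the directory containing your VASP input files (INCAR, POSCAR, KPOINTS, POTCAR), since by default the job starts in the submission directory:

sbatch vasp_cpu.slurm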

Example of usage on the GPU partition

The functions ported to GPU have limitations. Please consult the page Information about GPU porting for details.

In the tests we carried out, oversubscribing the GPUs by activating MPS (Multi-Process Service) gave the best results. It is important to run your own tests to find the optimal number of MPI tasks per GPU, as illustrated in the sketch below.
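
For example (illustrative values only, to be benchmarked on your own case), with the 4 GPUs and 40 cores of a node, you could try 5 MPI tasks per GPU instead of 2 by modifying the two following directives of the script below:

#SBATCH --ntasks-per-node=20    # 20 MPI tasks, i.e. 5 per GPU
#SBATCH --cpus-per-task=2       # 20 x 2 = 40 cores, still reserving the whole node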

Submission script on the GPU partition

vasp_gpu.slurm
#!/bin/bash
#SBATCH --nodes=1               # 1 node reserved 
#SBATCH --ntasks-per-node=8     # 8 MPI tasks (that is, 2 per GPU)
#SBATCH --cpus-per-task=5       # 5 cores per MPI task (to reserve all the memory of the node)
#SBATCH --hint=nomultithread    # Disable hyperthreading 
#SBATCH --gres=gpu:4            # 4 GPUs requested
#SBATCH --constraint=mps        # Activates the MPS
#SBATCH --job-name=VASP
#SBATCH --output=%x.o%j         # Output file %x is the jobname, %j the jobid 
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=10:00:00         # Expected runtime  HH:MM:SS (max 100h for V100, 20h for A100)
##
## Please refer to the "Comments" below for
## more information about the following 4 options.
##SBATCH --account=<account>@v100  # To specify gpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>   # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_gpu-dev         # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4          # Uncomment for job requiring more than 20h (up to 16 GPU, V100 only)
 
# Cleans out modules loaded in interactive and inherited by default 
module purge
 
# Load the necessary modules
module load vasp/5.4.4-mpi-cuda
 
srun vasp_gpu
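
As for the CPU example, the job is submitted from the directory containing the VASP input files:

sbatch vasp_gpu.slurm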

Comments:

  • All jobs have resources defined in Slurm per partition and per QoS (Quality of Service) by default. You can modify the limits by specifying another partition and/or QoS as shown in our documentation detailing the partitions and QoS.
  • For multi-project users and those having both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation of the project) to which the job's computing hours should be charged, as indicated in our documentation detailing project hours management (see the example below).
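
For example, to charge a GPU job to a hypothetical project named abc, you would first check the project name and then adapt the accounting directive of the script (here for V100 hours, as in the example above):

echo $IDRPROJ                    # prints the project name, e.g. abc (hypothetical)
#SBATCH --account=abc@v100       # directive to uncomment and adapt in the submission script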