LAMMPS on Jean Zay

Introduction

LAMMPS is a classical molecular dynamics simulation code specialised in materials modeling.

Useful sites

  • Web site: https://www.lammps.org
  • Documentation: https://docs.lammps.org

Available versions

Version      Variants
2023.08.02   lammps/20230802-mpi, lammps/20230802-mpi-cuda
2023.03.28   lammps/20230328-mpi, lammps/20230328-mpi-cuda
2022.06.23   lammps/20220623-mpi-cuda, lammps/20220623.2-mpi, lammps/20220623.2-mpi-plumed
2021.09.29   lammps/20210929-mpi
2021.07.02   lammps/20210702-mpi
2020.10.29   lammps/20201029-mpi
2020.07.21   lammps/20200721-mpi intel-mkl/2020.1
2020.06.30   lammps/20200630-mpi-cuda-kokkos intel-mkl/2020.1 cuda/10.2
2019.06.05   lammps/20190605-mpi intel-mkl/2019.4
2019.06.05   lammps/20190605-mpi-cuda-kokkos intel-mkl/2019.4 cuda/10.1.1
2018.08.31   lammps/20180831-mpi intel-mkl/2019.4
2017.09.22   lammps/20170922-mpi intel-mkl/2019.4
2017.08.11   lammps/20170811-mpi
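
To check which variants are installed and to load one, you can query the module environment directly (a minimal example; any variant from the table above can be used):

# List the LAMMPS modules installed on Jean Zay
module avail lammps

# Load the chosen variant
module load lammps/20230802-mpi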

Remarks

  • A deadlock problem was identified with the CPU variant of version 2023.08.02. An alternative build is available with:
module load gcc/12.2.0 openmpi/4.1.1
module load lammps/20230802-mpi

Submission script on the CPU partition

lammps_cpu.slurm
#!/bin/bash
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=40    # Number of MPI tasks per node
#SBATCH --cpus-per-task=1       # Number of OpenMP threads
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=rhodo        # Job name
#SBATCH --output=%x.o%j         # Output file (%x = job name, %j = job ID)
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=10:00:00         # Expected runtime HH:MM:SS (max 100h)
##
## Please refer to the comments below for
## more information about the last 4 options.
##SBATCH --account=<account>@cpu       # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4      # Uncomment for job requiring more than 20h (up to 4 nodes)
 
# Clean out modules loaded in interactive mode and inherited by default
module purge
 
# Load the module
module load lammps/20200721-mpi intel-mkl/2020.1
 
# Execute commands
srun lmp -i rhodo.in
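
The script is then submitted from the directory containing the LAMMPS input file (here rhodo.in); a minimal example, assuming the script above was saved as lammps_cpu.slurm:

sbatch lammps_cpu.slurm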

Submission script on the GPU partition

lammps_gpu.slurm
#!/bin/bash
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=40    # Number of MPI tasks per node
#SBATCH --cpus-per-task=1       # Number of OpenMP threads
#SBATCH --gres=gpu:4            # Allocate 4 GPUs per node
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=rhodo        # Job name
#SBATCH --output=%x.o%j         # Output file (%x = job name, %j = job ID)
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=10:00:00         # Expected runtime HH:MM:SS (max 100h for V100, 20h for A100)
##
## Please refer to the comments below for
## more information about the last 4 options.
##SBATCH --account=<account>@v100      # To specify gpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_gpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4      # Uncomment for job requiring more than 20h (up to 16 GPU, V100 only)
 
# Clean out modules loaded in interactive mode and inherited by default
module purge
 
# Load the module
module load lammps/20200630-mpi-cuda-kokkos intel-mkl/2020.1 cuda/10.2
 
# Execute commands
srun lmp -i rhodo.in
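
With this Kokkos build, GPU acceleration is typically enabled through the -k command-line switch together with the kk suffix; a minimal sketch using the standard LAMMPS/Kokkos options, matching the 4 GPUs allocated above:

# Activate the KOKKOS package on 4 GPUs and apply the kk suffix to supported styles
srun lmp -k on g 4 -sf kk -i rhodo.in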

Comments:

  • All jobs have default resource limits defined in Slurm per partition and per QoS (Quality of Service). You can modify these limits by specifying another partition and/or QoS, as shown in our documentation detailing the partitions and QoS.
  • For multi-project users, and for users having both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation of the project) to which the job's computing hours should be charged, as indicated in our documentation detailing project hours management; a minimal example is given after this list.
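
For example, the accounting can be set either with the #SBATCH --account directive shown in the scripts above, or directly at submission time (a minimal sketch; my_project is a placeholder for the value returned by echo $IDRPROJ):

sbatch --account=my_project@cpu lammps_cpu.slurm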