xTB/Crest on Jean Zay

Introduction

xTB is the Semiempirical Extended Tight-Binding Program Package developed by the Grimme group in Bonn.

Crest is an extension of the xTB program. It functions as an I/O-based OMP scheduler (i.e., the calculations are performed by the xtb program) and as a tool for the creation and analysis of structure ensembles.

Useful sites

  • Documentation: https://xtb-docs.readthedocs.io/
  • Source code: https://github.com/grimme-lab/xtb

Available versions

xTB:

Version Modules to load
6.4.1 xtb/6.4.1
6.5.0 xtb/6.5.0

Crest:

Version Modules to load
2.11.1 crest/2.11.1
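
To check which build will be used before submitting a job, the modules can be loaded in an interactive session and the program versions printed (the --version flags shown below are assumed to be available in these builds; adjust if needed):

# Load an xTB version and the Crest module, then print the versions actually in use
module purge
module load xtb/6.4.1 crest/2.11.1
xtb --version
crest --version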

Submission script for xTB on the CPU partition

xtb_omp.slurm
#!/bin/bash                                                                                                                                                                                         
#SBATCH --nodes=1                   # Number of nodes
#SBATCH --ntasks-per-node=1         # Number of tasks per node
#SBATCH --cpus-per-task=4           # Number of cores for each task (important to get all memory)
#SBATCH --hint=nomultithread        # Disable hyperthreading
#SBATCH --job-name=xtb_omp          # Jobname 
#SBATCH --output=%x.o%j             # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j              # Error file
#SBATCH --time=10:00:00             # Expected runtime HH:MM:SS (max 100h)
##
## Please, refer to comments below for
## more information about these 4 last options.
##SBATCH --account=<account>@cpu           # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>             # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev          # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4           # Uncomment for job requiring more than 20 hours (up to 4 nodes)
 
# echo commands
set -x
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Load needed modules
module load xtb/6.4.1
 
# go into the submission directory
cd ${SLURM_SUBMIT_DIR}
 
# number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
 
# Execute code (the thread count passed to xtb matches the Slurm allocation)
srun xtb struc.xyz -gfn2 -g h2o -T ${SLURM_CPUS_PER_TASK}
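
The script above expects a Cartesian geometry file named struc.xyz in the submission directory. As an illustration only (the coordinates below are a generic water molecule, not a reference structure), such a file can be created and the job submitted as follows:

# Create a minimal XYZ input (atom count, comment line, then element x y z) and submit
cat > struc.xyz << 'EOF'
3
water molecule, coordinates in Angstrom
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200
EOF
sbatch xtb_omp.slurm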

Submission script for Crest on the CPU partition

crest_omp.slurm
#!/bin/bash                                                                                                                                                                                         
#SBATCH --nodes=1                   # Number of nodes
#SBATCH --ntasks-per-node=1         # Number of tasks per node
#SBATCH --cpus-per-task=4           # Number of cores for each task (important to get all memory)
#SBATCH --hint=nomultithread        # Disable hyperthreading
#SBATCH --job-name=crest_omp        # Jobname 
#SBATCH --output=%x.o%j             # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j              # Error file
#SBATCH --time=10:00:00             # Expected runtime HH:MM:SS (max 100h)
##
## Please, refer to comments below for
## more information about these 4 last options.
##SBATCH --account=<account>@cpu           # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>             # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev          # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4           # Uncomment for job requiring more than 20 hours (up to 4 nodes)
 
# echo commands
set -x
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Load needed modules
module load crest/2.11.1
 
# go into the submission directory
cd ${SLURM_SUBMIT_DIR}
 
# number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
 
# Execute code (the thread count passed to crest matches the Slurm allocation)
srun crest struc.xyz -gfn2 -g h2o -T ${SLURM_CPUS_PER_TASK}
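
Crest reads the same kind of struc.xyz geometry file. After the job finishes, the conformer ensemble is normally written to crest_conformers.xyz and the lowest-energy structure to crest_best.xyz (standard Crest output names; check the job output of your version). A minimal submission and check could look like this:

# Submit the Crest job and, once it has completed, list the ensemble files it produced
sbatch crest_omp.slurm
ls -l crest_conformers.xyz crest_best.xyz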

Comments:

  • All jobs have resources defined in Slurm per partition and per QoS (Quality of Service) by default. You can modify the limits by specifying another partition and/or QoS as shown in our documentation detailing the partitions and QoS.
  • For multi-project users and for those having both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation of the project) on which to count the job's computing hours, as indicated in our documentation detailing project hours management.
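
For example, assuming a hypothetical project ID abc, the corresponding lines of the scripts above would be uncommented and filled in as follows (replace abc by the value returned by echo $IDRPROJ):

#SBATCH --account=abc@cpu           # abc is a placeholder project ID (see echo $IDRPROJ)
#SBATCH --qos=qos_cpu-dev           # Example: development QoS for a job of less than 2 hours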