Gaussian on Jean Zay

Introduction

Gaussian is a general-purpose quantum chemistry software package.

Available versions

Version      Module to load         Comments
g16 rev-C01  gaussian/g16-revC01    Version for CPU and GPU production
g09 rev-D01  gaussian/g09-revD01    Version for CPU production

Gaussian16

Input file

Gaussian16 accepts link0 parameters as command line options, and we recommend using them instead of input-file directives.

We strongly advise against using the following lines in the input file:

%mem
%nprocshared (deprecated)

Be aware that using %mem caused memory problems in our test cases. Instead, pass the memory limit on the command line:

g16 -m=140GB ...

You must also specify the amount of memory that Gaussian will be allowed to use.
A good idea is to set the memory parameter to 80% of the available memory, using the formula: 0.8*cpus-per-task*4GB.
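For example, with a full node (cpus-per-task=40) this gives 0.8*40*4GB = 128GB; the CPU script below requests 140GB, which still stays under the 40*4GB = 160GB maximum. A minimal sketch that derives the value from the standard Slurm variable SLURM_CPUS_PER_TASK (the MEM_GB variable name is illustrative, not part of Gaussian or Slurm):

# 80% of the 4GB-per-core budget, in integer arithmetic
MEM_GB=$(( SLURM_CPUS_PER_TASK * 4 * 8 / 10 ))   # 40 cores -> 128
g16 -m=${MEM_GB}GB ...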

You should replace %nprocshared (or %nproc) with the command line option:

g16 -c="0-39" ...

0-39 means that all 40 cores of the node are used. Adapt this range to the number of cores you actually want to use.

Important notice: If you use less than a complete node (--cpus-per-task < 40), you must use the g16_cpu_list utility to provide the correct list of cores.
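For example, for a 10-core job (a minimal sketch, assuming the gaussian module providing g16_cpu_list is loaded, as in the scripts below):

#SBATCH --cpus-per-task=10      # less than a full node
...
g16 -c="$(g16_cpu_list)" -m=32GB < job.com   # 0.8*10*4GB = 32GB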

Command line options are documented on the Options page of the Gaussian website.

Submission script for the CPU partition

g16.slurm
#!/bin/bash
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=1     # Number of MPI tasks per node
#SBATCH --cpus-per-task=40      # Number of OpenMP threads
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=job          # Jobname
#SBATCH --output=%x.o%j         # Output file %x is the jobname, %j the jobid 
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=20:00:00         # Runtime limit HH:MM:SS (max 100h)
##
## Please refer to the comments below for
## more information about the last 4 options.
##SBATCH --account=<account>@cpu       # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4      # Uncomment for job requiring more than 20h (only one node)
 
# Clean out the modules loaded interactively and inherited by default
module purge
 
# Load the module
module load gaussian/g16-revC01
 
# Use JOBSCRATCH as the temporary directory for job files (erased at the end of the job!)
# Alternatively, you can use a subdirectory of $SCRATCH to keep the temporary files needed to restart jobs
# If you want to keep the chk and/or rwf files, you can also specify the link0 commands, e.g.:
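#   %chk=myjob.chk and/or %rwf=myjob.rwf added to the input file header
#   ("myjob" is an illustrative name; replace it with your own)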
export GAUSS_SCRDIR=$JOBSCRATCH
 
# Run Gaussian on 40 cores (-c="0-39" with --cpus-per-task=40),
# asking for 140GB of memory (-m=140GB): the maximum is 4GB per reserved core.
g16 -c="0-39" -m=140GB < job.com
## If you run with less than 40 cores, please use instead:
## g16 -c="$(g16_cpu_list)" -m=<0.8*number of cores*4>GB < job.com
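Submit the job with:

sbatch g16.slurm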

Submission script for the GPU partition

g16.slurm
#!/bin/bash
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=1     # Number of MPI tasks per node
#SBATCH --cpus-per-task=40      # Number of OpenMP threads
#SBATCH --gres=gpu:4            # Allocate 4 GPUs
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=job          # Jobname
#SBATCH --output=%x.o%j         # Output file %x is the jobname, %j the jobid 
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=20:00:00         # Runtime limit HH:MM:SS (max 100h for V100, 20h for A100)
##
## Please refer to the comments below for
## more information about the last 4 options.
##SBATCH --account=<account>@v100      # To specify gpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_gpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4       # Uncomment for job requiring more than 20h (up to 16 GPUs, V100 only)
 
# Clean out the modules loaded interactively and inherited by default
module purge
 
# Load the module
module load gaussian/g16-revC01
 
# Use JOBSCRATCH as the temporary directory for job files (erased at the end of the job!)
# Alternatively, you can use a subdirectory of $SCRATCH to keep the temporary files needed to restart jobs
# If you want to keep the chk and/or rwf files, you can also specify the link0 commands (see the CPU script above)
export GAUSS_SCRDIR=$JOBSCRATCH
 
# Run Gaussian on 4 GPUs (-g="$(g16_gpu_list)", which expands to "0-3=0-3" on a full node),
# asking for 100GB of memory (-m=100GB): the maximum is 4GB per reserved core.
g16 -c="$(g16_cpu_list)" -m=100GB -g="$(g16_gpu_list)" < job.com
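For reference, the -g option follows Gaussian's %GPUCPU syntax: a list of GPUs, an equals sign, then the list of CPU cores that control them. On a full node the command above should therefore be equivalent to:

g16 -c="0-39" -m=100GB -g="0-3=0-3" < job.com   # GPUs 0-3 controlled by cores 0-3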

Gaussian09

Input file

Caution: A Gaussian09 input file must contain the parallelisation directives in its header; otherwise, the code will use only one core.

You should also specify the amount of memory the software may use.
A good rule of thumb is 80% of the available memory, depending on the calculation: 0.8*cpus-per-task*4GB. For 40 cores this gives 0.8*40*4GB = 128GB, hence the values below:

%nprocshared=40
%mem=128GB
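For illustration, a complete input header might look like this (the route section, title and geometry are placeholders for your own calculation; the trailing blank line is required by Gaussian):

%nprocshared=40
%mem=128GB
%chk=water.chk
#P B3LYP/6-31G(d) Opt

Water geometry optimization

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200
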

Submission script on the CPU partition

g09.slurm
#!/bin/bash
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=1     # Number of MPI tasks per node
#SBATCH --cpus-per-task=40      # Number of OpenMP threads
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=gaussian_test   # Jobname
#SBATCH --output=%x.o%j         # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=01:00:00         # Runtime limit HH:MM:SS (max 100h)
## Please refer to the comments below for
## more information about the last 4 options.
##SBATCH --account=<account>@cpu       # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4      # Uncomment for job requiring more than 20h (only one node)
 
# Print environment
env
 
# Manage modules
module purge
module load gaussian/g09-revD01
 
 
# The following line tells Gaussian to use JOBSCRATCH (a fast scratch space)
# If you wish to keep the checkpoint file, just add to your input file:
# %chk=input.chk
# replacing "input" with the name of your job
export GAUSS_SCRDIR=$JOBSCRATCH 
 
g09 < input.com