OpenMolcas on Jean Zay

Introduction

OpenMolcas is molecular modelling software specialised in advanced post-Hartree-Fock methods.

Useful sites

Available versions

OpenMolcas

Version   Modules to load
21.06     openmolcas/21.06
19.11     openmolcas/19.11-mpi intel-mkl/19.0.5
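
For example, to use the most recent version listed above (a minimal sketch; adapt the module name to the version you need):

module purge
module load openmolcas/21.06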

Information

OpenMolcas uses a driver (pymolcas) which itself calls the srun execution command. Therefore, srun should not be added to your submission script.

Environment variables

OpenMolcas uses environment variables to control its execution behaviour:

  • Project: project name (this variable is used to choose the name of the output files).
  • MOLCAS_MEM: the amount of memory per MPI task that OpenMolcas is allowed to use.
  • MOLCAS_WORKDIR: the directory where the generated temporary files will be stored.

For MOLCAS_WORKDIR, we advise you to use either the $SCRATCH disk space (the files remain accessible after the end of the job) or the $JOBSCRATCH disk space (all the files are deleted at the end of the job).

For the other variables, we recommend that you consult the environment variables page of the OpenMolcas documentation.
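
As an illustration, a minimal setting of these variables could look like the following (the project name and memory value are hypothetical; the submission script below derives MOLCAS_MEM from the Slurm allocation instead):

export Project=my_molecule         # hypothetical project name, used to name the output files
export MOLCAS_MEM=4000             # memory per MPI task (in MB); illustrative value
export MOLCAS_WORKDIR=$JOBSCRATCH  # temporary files, deleted at the end of the job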

Submission script on the CPU partition

molcas.slurm
#!/usr/bin/env bash
#SBATCH --nodes=1               # Using only 1 node
#SBATCH --ntasks-per-node=40    # 40 MPI tasks
#SBATCH --cpus-per-task=1       # No OpenMP 
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=molcas       # Job name
#SBATCH --output=molcas.o%j     # Standard output file (%j is the job number)
#SBATCH --time=10:00:00         # Expected runtime HH:MM:SS (max 100h)
##
## Please, refer to comments below for
## more information about these 4 last options.
##SBATCH --account=<account>@cpu       # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4      # Uncomment for job requiring more than 20h (only one node)
 
set -x
module purge
module load openmolcas/19.11-mpi intel-mkl/19.0.5
 
### Definition of variables ###
export Project=dhe
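# MOLCAS_MEM: memory per MPI task, computed from the memory per CPU allocated by Slurm
# (multiplied by the number of CPUs per task when OpenMP threads are requested)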
if [ -z "$SLURM_CPUS_PER_TASK" ]
then
   export MOLCAS_MEM=$SLURM_MEM_PER_CPU
else
   export MOLCAS_MEM=$(( $SLURM_MEM_PER_CPU * $SLURM_CPUS_PER_TASK ))
fi
export HomeDir=$PWD
export MOLCAS_NPROCS=$SLURM_NTASKS
export MOLCAS_WORKDIR=$JOBSCRATCH
export CurrDir=$(pwd)
 
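# pymolcas launches srun itself; do not add srun to this command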
pymolcas ${Project}.cas.input

Comments:

  • All jobs have resources defined in Slurm per partition and per QoS (Quality of Service) by default. You can modify the limits by specifying another partition and/or QoS as shown in our documentation detailing the partitions and QoS.
  • For multi-project users and those having both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation for the project) on which to charge the job's computing hours, as indicated in our documentation detailing project hours management.
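
Once the script is saved as molcas.slurm, submit it with:

sbatch molcas.slurm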