OpenMolcas on Jean Zay


OpenMolcas is a molecular modelling software package specialised in advanced post-Hartree-Fock methods.

Available versions


Version   Modules to load
21.06     openmolcas/21.06
19.11     openmolcas/19.11-mpi


OpenMolcas uses a driver which itself invokes the srun execution command. Therefore, srun should not be added to your submission script.

Environment variables

Molcas uses environment variables to control its execution behaviour:

  • Project: project name (this variable is used to choose the name of the output files).
  • MOLCAS_MEM: the amount of memory per MPI task that Molcas can use.
  • MOLCAS_WORKDIR: the directory where the generated temporary files will be stored.

For MOLCAS_WORKDIR, we advise you to use either the $SCRATCH disk space (the files remain accessible after the end of the job) or the $JOBSCRATCH disk space (all the files are destroyed at the end of the job).

For the other variables, we recommend that you consult the page about the variables in the documentation.
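As a sketch, the variables above can be exported in your submission script before calling pymolcas. The project name and memory value below are hypothetical placeholders; adapt them to your own job:

```shell
# Hypothetical example: set the Molcas environment variables described above.
export Project=dhe                          # base name used for the output files
export MOLCAS_MEM=3000                      # memory per MPI task available to Molcas
export MOLCAS_WORKDIR=${JOBSCRATCH:-/tmp}   # temporary files; $JOBSCRATCH is cleaned at job end
echo "Project=$Project MOLCAS_MEM=$MOLCAS_MEM MOLCAS_WORKDIR=$MOLCAS_WORKDIR"
```

On Jean Zay, $JOBSCRATCH is defined inside the job; the `:-/tmp` fallback above is only there so the snippet also runs outside a Slurm allocation.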

Submission script on the CPU partition

#!/usr/bin/env bash
#SBATCH --nodes=1               # Using only 1 node
#SBATCH --ntasks-per-node=40    # 40 MPI tasks
#SBATCH --cpus-per-task=1       # No OpenMP 
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=molcas       # Job name
#SBATCH --output=molcas.o%j     # Standard output file (%j is the job number)
#SBATCH --time=10:00:00         # Expected runtime HH:MM:SS (max 100h)
## Please, refer to comments below for
## more information about these 4 last options.
##SBATCH --account=<account>@cpu       # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4      # Uncomment for job requiring more than 20h (only one node)
set -x
module purge
module load openmolcas/19.11-mpi intel-mkl/19.0.5
### Definition of variables ###
export Project=dhe
export HomeDir=$PWD
export CurrDir=$(pwd)
pymolcas ${Project}.cas.input


  • All jobs have resources defined in Slurm per partition and per QoS (Quality of Service) by default. You can modify the limits by specifying another partition and/or QoS as shown in our documentation detailing the partitions and QoS.
  • For multi-project users and those having both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation of the project) on which to count the job's computing hours, as indicated in our documentation detailing the project hours management.