ORCA on Jean Zay

Introduction

ORCA is a general-purpose quantum chemistry software package with a particular emphasis on spectroscopy.

Useful sites

Available versions

Version   Module to load
5.0.3     orca/5.0.3-mpi
5.0.1     orca/5.0.1-mpi
5.0.0     orca/5.0.0-mpi
4.2.0     orca/4.2.0-mpi-cuda

Important: This product cannot be used on GPU. The "cuda" in the module name comes from the CUDA-aware Open MPI library on which ORCA depends.
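
To see which ORCA modules are installed on the machine, you can list them with the standard module command (the versions shown depend on the current installation):

  # List the ORCA modules available on Jean Zay
  module avail orca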

Forewarning

The following message appears during code execution:

  The library attempted to open the following supporting CUDA libraries,
  but each of them failed.  CUDA-aware support is disabled.
  libcuda.so.1: cannot open shared object file: No such file or directory
  libcuda.dylib: cannot open shared object file: No such file or directory
  /usr/lib64/libcuda.so.1: cannot open shared object file: No such file or directory
  /usr/lib64/libcuda.dylib: cannot open shared object file: No such file or directory
  If you are not interested in CUDA-aware support, then run with
  --mca mpi_cuda_support 0 to suppress this message.  If you are interested
  in CUDA-aware support, then try setting LD_LIBRARY_PATH to the location
  of libcuda.so.1 to get passed this issue.

This is linked to the CUDA-aware Open MPI library used by ORCA. It is not an error but a warning which can be ignored.
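
If you prefer to silence the warning, Open MPI also reads MCA parameters from OMPI_MCA_* environment variables, so the parameter quoted in the message can be set in the job environment (optional; the parameter name is taken from the message above):

  # Optional: suppress the CUDA-aware support warning quoted above
  export OMPI_MCA_mpi_cuda_support=0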

Submission script on the CPU partition

Important: It is necessary to specify the complete path of the orca executable: ORCA needs its absolute path in order to start its parallel sub-programs through MPI.

We advise you to use only one compute node.
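
After loading the module, you can check this path interactively; this is also why the script below calls the executable through $(which orca):

  module load orca/4.2.0-mpi-cuda
  which orca    # prints the absolute path of the orca executable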

orca.slurm
#!/bin/bash
#SBATCH --nodes=1               # Number of nodes
#SBATCH --ntasks-per-node=40    # Number of MPI tasks per node
#SBATCH --cpus-per-task=1       # Number of OpenMP threads
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=orca         # Jobname
#SBATCH --output=%x.o%j         # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=20:00:00         # Expected runtime HH:MM:SS (max 100h)
##
## Please, refer to comments below for
## more information about these 4 last options.
##SBATCH --account=<account>@cpu       # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4      # Uncomment for job requiring more than 20h (up to 4 nodes)
# Print environment
env
 
# Manage modules
module purge
module load orca/4.2.0-mpi-cuda
 
# Execute command (orca must be called with its full path; input_opt is the ORCA input file)
$(which orca) input_opt > opt.out
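
The job is then submitted from a front end with the usual Slurm commands, for example:

  sbatch orca.slurm
  squeue -u $USER    # monitor the job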

Submission script on the GPU partition

Submission is not possible: ORCA cannot run on GPU (see the important remark above).

Comments:

  • All jobs have resources defined in Slurm per partition and per QoS (Quality of Service) by default. You can modify the limits by specifying another partition and/or QoS as shown in our documentation detailing the partitions and QoS.
  • For multi-project users and those having both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation of the project) on which to charge the job's computing hours, as indicated in our documentation detailing the project hours management.