Jean Zay: Execution of an OpenMP parallel code in batch

Jobs on all of the nodes are managed by the Slurm software.

To submit an OpenMP job in batch on Jean Zay, it is necessary to:

  • Create a submission script:
    openmp.slurm
    #!/bin/bash
    #SBATCH --job-name=omp              # name of the job
    #SBATCH --nodes=1                   # number of nodes
    #SBATCH --ntasks=1                  # number of tasks (a single process here)
    #SBATCH --cpus-per-task=20          # number of OpenMP threads
    # /!\ Caution, "multithread" in Slurm vocabulary refers to hyperthreading.
    #SBATCH --hint=nomultithread        # reserve physical (not logical) cores
    #SBATCH --time=00:01:00             # maximum execution time requested (HH:MM:SS)
    #SBATCH --output=omp%j.out          # name of output file
    #SBATCH --error=omp%j.out           # name of error file (here, in common with output)
     
    # clean out the modules loaded in interactive mode and inherited by default
    module purge
     
    # loads modules
    module load ...
     
    # echo of launched commands
    set -x
     
    # number of OpenMP threads
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK 
     
    # Binding: pin each OpenMP thread to a distinct physical core
    export OMP_PLACES=cores
     
    # code execution
    ./omp_exe
  • Submit this script via the sbatch command:
    $ sbatch openmp.slurm
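
For information, sbatch returns the job identifier, and standard Slurm commands can then be used to follow or cancel the job. The job number shown below is only a placeholder:

    $ sbatch openmp.slurm
    Submitted batch job 123456      # example job identifier, yours will differ
    $ squeue -u $USER               # list your pending (PD) and running (R) jobs
    $ scancel 123456                # cancel the job if necessary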

Comments:

  • We recommend that you compile and execute your codes under the same environment: use exactly the same module load … command at execution as at compilation (a minimal sketch is given after this list).
  • In this example, we assume that the omp_exe executable file is found in the submission directory, which is the directory in which the sbatch command is entered.
  • The computation output file omp<job_number>.out is also found in the submission directory. It is created at the start of the job execution: editing or modifying it while the job is running can disrupt the execution.
  • The module purge is made necessary by the Slurm default behaviour: any modules which are loaded in your environment at the moment when you launch sbatch are passed to the submitted job.
  • All jobs have resources defined in Slurm per partition and per QoS (Quality of Service) by default. You can modify the limits by specifying another partition and/or QoS as shown in our documentation detailing the partitions and QoS.
  • For multi-project users and those having both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation for the project) on which to charge the job's computing hours, as indicated in our documentation detailing the project hours management (a second sketch after this list illustrates this).
  • We strongly recommend that you consult our documentation detailing the project hours management to ensure that the hours consumed by your jobs are deducted from the correct accounting.
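
To illustrate the first comment above, a minimal sketch of keeping the compilation and execution environments identical is shown below. The module name is purely hypothetical; load the modules actually required by your code in both places:

    # at compilation, in an interactive session (hypothetical module name)
    module purge
    module load my_compiler_module
    gcc -fopenmp -o omp_exe omp_code.c

    # in openmp.slurm, load exactly the same module(s) before execution
    module purge
    module load my_compiler_module
    ./omp_exe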
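
Concerning the project accounting, the selection is typically made with the Slurm --account directive added to the submission script. The project code below is only a placeholder; the exact value to use for your project and allocation (CPU or GPU hours) is given in the documentation detailing the project hours management:

    # hypothetical example: charge the job to the CPU allocation of project "abc"
    #SBATCH --account=abc@cpu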