Jean Zay: Execution of a sequential job in batch

Jobs are managed on all of the nodes by the Slurm software. They are distributed into “classes” principally on the basis of the elapsed time, the number of cores and the memory requested.
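For reference, the standard Slurm command sinfo lists the partitions visible to your account together with their limits; the exact partitions and limits shown depend on the machine configuration and on your account:

    $ sinfo --format="%P %l %D"    # partition name, time limit, number of nodes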

To submit a sequential job in batch on Jean Zay, you must:

  • Create a submission script:

    seq.slurm
    #!/bin/bash
    #SBATCH --job-name=Seq              # name of the job
    #SBATCH --nodes=1                   # number of nodes
    #SBATCH --ntasks-per-node=1         # number of MPI tasks per node
    #SBATCH --hint=nomultithread        # reservation of physical cores (no hyperthreading)
    #SBATCH --time=00:01:00             # maximum execution time requested (HH:MM:SS)
    #SBATCH --output=Seq%j.out          # name of output file
    #SBATCH --error=Seq%j.out           # name of error file (here, in common with the output)
     
    # go into the submission directory 
    cd ${SLURM_SUBMIT_DIR}
     
    # clean out the modules loaded in interactive mode and inherited by default
    module purge
     
    # loading the modules
    module load ...
     
    # echo of launched commands
    set -x
     
    # execution
    ./a.out
  • Submit this script via the sbatch command:

    $ sbatch seq.slurm
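Once the job is submitted, it can be followed with the standard Slurm commands, for example (the job identifier below is purely illustrative):

    $ squeue -u $USER        # list your pending and running jobs
    $ scancel 1234567        # cancel a job via its identifier (illustrative number)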

Comments:

  • We recommend that you compile and execute your code in the same environment: use exactly the same module load … command at execution as at compilation (a sketch is given after these comments).
  • The reservation of physical cores is ensured with the --hint=nomultithread option (no hyperthreading).
  • In this example, we assume that the a.out executable file is located in the submission directory, i.e. the directory in which the sbatch command is entered: the $SLURM_SUBMIT_DIR variable is automatically set by Slurm.
  • The computation output file Seq<job_number>.out is also located in the submission directory. It is created as soon as the job execution starts: modifying it while the job is running can disrupt the execution.
  • The module purge is made necessary by Slurm's default behaviour: the modules which are loaded in your environment at the moment you launch sbatch are passed on to the submitted job.
  • By default, all jobs have resources defined in Slurm per partition and per QoS (Quality of Service). You can modify the limits by specifying another partition and/or another QoS, as shown in our documentation detailing the partitions and QoS (see the example directives after these comments).
  • For multi-project users and those having both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation of the project) on which the computing hours of the job should be charged, as indicated in our documentation detailing project hours management (see the example after these comments).
  • We strongly recommend that you consult our documentation detailing the project hours management to ensure that the hours consumed by your jobs are deducted from the correct accounting.
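A minimal sketch of keeping the compilation and execution environments identical is shown below; the module name intel-compilers/19.0.4 and the source file my_code.f90 are only illustrations and should be replaced by the modules and files you actually use:

    # at compilation, in an interactive session
    $ module purge
    $ module load intel-compilers/19.0.4    # illustrative module name
    $ ifort my_code.f90 -o a.out
    
    # in seq.slurm, load exactly the same module before the execution
    module purge
    module load intel-compilers/19.0.4      # must match the compilation environment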
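To change the default limits, directives such as the following can be added to the submission script; the partition and QoS names below (cpu_p1, qos_cpu-dev) are only examples and must be replaced by values listed in the documentation detailing the partitions and QoS:

    #SBATCH --partition=cpu_p1     # example partition name
    #SBATCH --qos=qos_cpu-dev      # example QoS name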
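For example, assuming a project whose CPU allocation is identified by the hypothetical accounting abc@cpu, the job can be charged to it with the --account directive:

    #SBATCH --account=abc@cpu      # hypothetical accounting; use your own project allocation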