Jean Zay: Memory allocation with Slurm on CPU partitions
The Slurm options --mem and --mem-per-cpu are currently disabled on Jean Zay because they do not allow you to properly configure the memory allocation of your job. The memory allocation is automatically determined from the number of reserved CPUs.
To adjust the amount of memory allocated to your job, you must adjust the number of CPUs reserved per task (or process) by specifying the following option in your batch scripts, or when using salloc in interactive mode:
--cpus-per-task=... # --cpus-per-task=1 by default
Comment: By default, --cpus-per-task=1, and the resulting amount of memory allocated is sufficient for most of the jobs launched on the CPU partitions. Therefore, the majority of users do not need to modify the value of this option. This page is addressed to users who need a greater amount of memory.
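For reference, a minimal submission script could look like the following sketch (the job name, wall time, executable, and the choice of 4 cores are placeholders, and any project accounting option your account may require is not shown):
#!/bin/bash
#SBATCH --job-name=my_job        # placeholder job name
#SBATCH --ntasks=1               # one task (process)
#SBATCH --cpus-per-task=4        # 4 CPU cores per task, which determines the memory allocation
#SBATCH --hint=nomultithread     # reserve physical cores (hyperthreading deactivated)
#SBATCH --time=01:00:00          # placeholder wall time
srun ./my_executable             # placeholder executable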
On the cpu_p1 partition
Each node of the cpu_p1 partition offers 156 GB of usable memory. The memory allocation is automatically determined on the basis of:
- 3.9 GB per CPU core if hyperthreading is deactivated (Slurm option --hint=nomultithread).
For example, a job specifying --ntasks=1 --cpus-per-task=5 on the cpu_p1 partition has access to 1 x 5 x 3.9 GB = 19.5 GB of memory if hyperthreading is deactivated (if not, half of that memory).
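The equivalent interactive request could be sketched as follows (any accounting option required for your project is omitted):
salloc --ntasks=1 --cpus-per-task=5 --hint=nomultithread   # about 1 x 5 x 3.9 GB = 19.5 GB of memory allocated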
On the prepost partition
The nodes of the prepost partition give access to 2.88 TB of usable memory for 48 CPU cores. The memory allocation here is determined automatically on the basis of:
- 60 GB per CPU core when hyperthreading is deactivated (Slurm option --hint=nomultithread).
For example, a job specifying --ntasks=1 --cpus-per-task=12 on the prepost partition will have access to 1 x 12 x 60 GB = 720 GB of memory if hyperthreading is deactivated (if not, half of that memory).
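In a batch script, this request could be sketched with the following directives (assuming the prepost partition is selected via Slurm's standard --partition option):
#SBATCH --partition=prepost      # assumption: prepost is requested with the standard --partition option
#SBATCH --ntasks=1               # one task (process)
#SBATCH --cpus-per-task=12       # 12 physical cores, i.e. about 12 x 60 GB = 720 GB of memory
#SBATCH --hint=nomultithread     # hyperthreading deactivated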
Comments
- You can increase the value of --cpus-per-task as long as your request does not exceed the total amount of memory available on the node. Be careful: the computing hours are counted proportionately. For example, by specifying --ntasks=10 --cpus-per-task=2, 20 CPU cores will be reserved and invoiced for your job.
- If you reserve a compute node in exclusive mode, you will have access to the entire memory capacity of the node regardless of the value of --cpus-per-task. The invoicing will be the same as for a job running on an entire node.
- For OpenMP codes, if the value of --cpus-per-task does not coincide with the number of threads you want to use when executing your code, it is necessary to set the environment variable (see the sketch at the end of these comments):
export OMP_NUM_THREADS=...
- The amount of memory allocated to your job can be seen by running the command:
scontrol show job $JOBID # look for the value of the "mem" variable
Important: While the job is in the wait queue (PENDING), Slurm estimates the memory allocated to the job on the basis of logical cores. Therefore, if you have reserved physical cores (with --hint=nomultithread), the value indicated can be up to two times lower than the expected value. It is updated and becomes correct when the job starts.
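As referenced in the OpenMP comment above, a sketch of keeping the number of OpenMP threads consistent with the reservation (the executable name and the choice of 8 cores are placeholders; SLURM_CPUS_PER_TASK is the environment variable that Slurm sets from --cpus-per-task):
#SBATCH --ntasks=1                              # one task (process)
#SBATCH --cpus-per-task=8                       # 8 physical cores reserved, which determines the memory allocation
#SBATCH --hint=nomultithread                    # hyperthreading deactivated
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # as many OpenMP threads as reserved cores
srun ./my_openmp_executable                     # placeholder executable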