Jean Zay: Interactive execution of a GPU code

Connection to the front end

Access to the front end is done via an ssh connection:

$ ssh login@jean-zay.idris.fr

The resources of this interactive node are shared between all the connected users. As a result, interactive use of the front end is reserved exclusively for compilation and script development.

Important: The Jean Zay front end nodes are not equipped with GPUs. Therefore, these nodes cannot be used for executions requiring one or more GPUs.

To run your GPU codes interactively on the accelerated compute nodes, you must use one of the two methods described below: either open a terminal on a GPU compute node with the srun command and its --pty option, or launch the execution directly on the GPU partition with srun.

However, if the computations require a large amount of GPU resources (in number of cores, memory, or elapsed time), it is necessary to submit a batch job.
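
For reference, a batch job is described in a Slurm submission script and launched with the sbatch command. The sketch below simply mirrors the srun options used on this page; the job name, time limit, script file name and executable name are placeholders, and complete, validated examples are given in the documentation on batch submission scripts.

#!/bin/bash
#SBATCH --job-name=gpu_test          # placeholder job name
#SBATCH --nodes=1                    # 1 node
#SBATCH --ntasks-per-node=1          # 1 task
#SBATCH --cpus-per-task=10           # 10 cores per task (default gpu partition)
#SBATCH --gres=gpu:1                 # 1 GPU
#SBATCH --hint=nomultithread         # physical cores only (no hyperthreading)
#SBATCH --time=01:00:00              # placeholder time limit
./my_executable_file

It would then be submitted with:

$ sbatch my_script.slurm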

Obtaining a terminal on a GPU compute node

It is possible to open a terminal directly on an accelerated compute node on which the resources have been reserved for you (here, 1 GPU on the default gpu partition) by using the following command:

$ srun --pty --nodes=1 --ntasks-per-node=1 --cpus-per-task=10 --gres=gpu:1 --hint=nomultithread [--other-options] bash

Comments:

  • An interactive terminal is obtained with the --pty option.
  • The --hint=nomultithread option reserves physical cores (no hyperthreading).
  • The memory allocated for the job is proportional to the number of requested CPU cores. For example, if you request 1/4 of the cores of a node, you will have access to 1/4 of its memory. On the default gpu partition, the --cpus-per-task=10 option allows reserving 1/4 of the node memory per GPU. On the gpu_p2 partition (--partition=gpu_p2), you need to specify --cpus-per-task=3 to reserve 1/8 of the node memory per GPU, and thus be consistent with the node configuration (see the example after this list). You may consult our documentation on this subject: Memory allocation on GPU partitions.
  • --other-options contains the usual Slurm options for job configuration (--time=, etc.): See the documentation on batch submission scripts in the index section Execution/Commands of a GPU code.
  • Reservations have all the resources defined in Slurm by default, per partition and per QoS (Quality of Service). You can modify these limits by specifying another partition and/or QoS as detailed in our documentation about the partitions and QoS.
  • For multi-project users and those having both CPU and GPU hours, it is necessary to specify on which project account (project hours allocation) to count the computing hours of the job as explained in our documentation about computing hours management.
  • We strongly recommend that you consult our documentation detailing computing hours management on Jean Zay to ensure that the hours consumed by your jobs are deducted from the correct allocation.
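
As an illustration of the memory comment above, a sketch of the same request for a single GPU on the gpu_p2 partition would be:

$ srun --pty --partition=gpu_p2 --nodes=1 --ntasks-per-node=1 --cpus-per-task=3 --gres=gpu:1 --hint=nomultithread [--other-options] bash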

The terminal is operational after the resources have been granted:

$ srun --pty --nodes=1 --ntasks-per-node=1 --cpus-per-task=10 --gres=gpu:1 --hint=nomultithread bash
srun: job 1369723 queued and waiting for resources
srun: job 1369723 has been allocated resources
bash-4.2$ hostname
r6i3n6
bash-4.2$ nvidia-smi
Fri Apr 10 19:09:08 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.14       Driver Version: 430.14       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:1C:00.0 Off |                    0 |
| N/A   44C    P0    45W / 300W |      0MiB / 32510MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
 
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

You can verify that your interactive job has started by using the squeue command. Complete information about the status of the job can be obtained by using the scontrol show job <job identifier> command.
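
For example, with the job identifier from the session above:

$ squeue -u $USER
$ scontrol show job 1369723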

When the terminal is operational, you can launch your executable files in the usual way: ./your_executable_file.

Important: MPI is not currently usable in this configuration.

To leave the interactive mode, use the exit command:

bash-4.2$ exit

Caution: If you do not leave the interactive mode yourself, the maximum allocation duration (by default, or as specified with the --time option) is applied and the corresponding hours are then counted for the project you have specified.

Interactive execution on the GPU partition

If you don't need to open a terminal on a compute node, it is also possible to start the interactive execution of a code on the compute nodes directly from the front end by using the following command (here, with 4 GPUs on the default gpu partition):

$ srun --nodes=1 --ntasks-per-node=4 --cpus-per-task=10 --gres=gpu:4 --hint=nomultithread [--other-options] ./my_executable_file

Comments:

  • The --hint=nomultithread option reserves physical cores (no hyperthreading).
  • By default, the allocated CPU memory is proportional to the number of requested cores. For example, if you request 1/4 of the cores of a node, you will have access to 1/4 of its memory. On the default gpu partition, the --cpus-per-task=10 option allows reserving 1/4 of the node memory per GPU. On the gpu_p2 partition (--partition=gpu_p2), you need to specify --cpus-per-task=3 to reserve 1/8 of the node memory per GPU, and thus be consistent with the node configuration. You may consult our documentation on this subject: Memory allocation on GPU partitions.
  • --other-options contains the usual Slurm options for job configuration (--time=, etc., as illustrated in the example after this list): See the documentation on batch submission scripts in the index section Execution/Commands of a GPU code.
  • Reservations have all the resources defined in Slurm by default, per partition and per QoS (Quality of Service). You can modify their limits by specifying another partition and/or QoS as detailed in our documentation about the partitions and QoS.
  • For multi-project users and those having both CPU and GPU hours, it is necessary to specify on which project account (project hours allocation) to count the computing hours of the job as explained in our documentation about computing hours management.
  • We strongly recommend that you consult our documentation detailing computing hours management on Jean Zay to ensure that the hours consumed by your jobs are deducted from the correct allocation.
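
For example, a sketch of the same execution with an explicit time limit and project account; the 30-minute value and the my_project@gpu accounting string are placeholders, and the exact values for your project are given in the documentation referenced above:

$ srun --nodes=1 --ntasks-per-node=4 --cpus-per-task=10 --gres=gpu:4 --hint=nomultithread --time=00:30:00 --account=my_project@gpu ./my_executable_file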

Reserving reusable resources for more than one interactive execution

Each interactive execution started as described in the preceding section constitutes a separate job. As with any job, it may be placed in a wait queue for a certain length of time if the computing resources are not available.

If you wish to carry out several interactive executions in a row, it may be worthwhile to reserve all the resources in advance so that they can be reused for the consecutive executions. In this way, you only wait once, when the reservation is made, rather than waiting for resources before each execution.

Reserving resources (here, for 4 GPUs on the default gpu partition) is done via the following command:

$ salloc --nodes=1 --ntasks-per-node=4 --cpus-per-task=10 --gres=gpu:4 --hint=nomultithread [--other-options]

Comments:

  • The --hint=nomultithread option reserves physical cores (no hyperthreading).
  • By default, the allocated CPU memory is proportional to the number of requested cores. For example, if you request 1/4 of the cores of a node, you will have access to 1/4 of its memory. On the default gpu partition, the --cpus-per-task=10 option allows reserving 1/4 of the node memory per GPU. On the gpu_p2 partition (--partition=gpu_p2), you need to specify --cpus-per-task=3 to reserve 1/8 of the node memory per GPU, and thus be consistent with the node configuration. You may consult our documentation on this subject: Memory allocation on GPU partitions.
  • --other-options contains the usual Slurm options for job configuration (--time=, etc., as in the example after this list): See the documentation on batch submission scripts in the index section Execution/Commands of a GPU code.
  • Reservations have all the resources defined in Slurm by default, per partition and per QoS (Quality of Service). You can modify these limits by specifying another partition and/or QoS as detailed in our documentation about the partitions and QoS.
  • For multi-project users and those having both CPU and GPU hours, it is necessary to specify on which project account (project hours allocation) to count the computing hours of the job as explained in our documentation about computing hours management.
  • We strongly recommend that you consult our documentation detailing computing hours management on Jean Zay to ensure that the hours consumed by your jobs are deducted from the correct allocation.
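
Because the reservation stays active until you release it (see the caution at the end of this page), it can be prudent to give salloc an explicit time limit; the one-hour value below is only an illustration:

$ salloc --nodes=1 --ntasks-per-node=4 --cpus-per-task=10 --gres=gpu:4 --hint=nomultithread --time=01:00:00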

The reservation becomes usable after the resources have been granted:

$ salloc --nodes=1 --ntasks-per-node=4 --cpus-per-task=10 --gres=gpu:4 --hint=nomultithread [--other-options]
salloc: Pending job allocation 1369712
salloc: job 1369712 queued and waiting for resources
salloc: job 1369712 has been allocated resources
salloc: Granted job allocation 1369712

You can verify that your reservation is active by using the squeue command. Complete information about the status of the job can be obtained by using the scontrol show job <job identifier> command.

You can then start the interactive executions by using the srun command:

$ srun [--other-options] ./code

Comment: If you do not specify any option for the srun command, the options for salloc (for example, the number of tasks) will be used by default.
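
For example, two consecutive executions (code_1 and code_2 are placeholders for your own executables) reuse the same reservation without waiting again for resources:

$ srun ./code_1
$ srun ./code_2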

Important:

  • After reserving resources with salloc, you are still connected to the front end (you can verify this with the hostname command). It is imperative to use the srun command so that your executions use the reserved resources.
  • If you forget to cancel the reservation, the maximum allocation duration (by default, or as specified with the --time option) is applied and the corresponding hours are then counted for the project you have specified. Therefore, to cancel the reservation, you must manually enter:
$ exit
exit
salloc: Relinquishing job allocation 1369712