Jean Zay: Visualisation nodes

Description

The configuration is composed of 5 scalar-type nodes with the following characteristics:

  • 2 Intel Cascade Lake 6248 processors (20 cores at 2.5 GHz), equalling 40 cores per node
  • 192 GB of memory per node
  • 1 Nvidia Quadro P6000 GPU
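
If you wish to verify these characteristics yourself from a front end, a Slurm query along the following lines should show them (a sketch; the exact output depends on the local configuration):

sinfo -p visu --Node -o "%N %c %m %G"   # nodes of the visu partition: name, cores, memory, GPU
scontrol show node jean-zay-visu1       # detailed view of one visualisation node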

Usage

It is preferable to use these visualisation nodes with software capable of exploiting the available GPUs, such as VisIt or ParaView, and to use the pre-/post-processing or front-end nodes for software which does not, such as Ferret or NcView.

For interactive usage, you should use the idrvnc-alloc script from one of the Jean Zay front-end nodes to reserve the resources in time (up to 4 hours, 1 hour by default) and in memory (10 cores, equalling 40 GB of memory). The graphics card is used with data compression between the visualisation server and your local machine via a client/server VNC connection: this requires installing a VNC client on your local machine.
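
As an illustration only (the package name below is an example for Debian/Ubuntu systems and may differ elsewhere), a TigerVNC-style client can be installed on your local machine with:

# Install a VNC client on the local machine (Debian/Ubuntu example)
sudo apt-get install tigervnc-viewer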

For batch usage, you can submit a Slurm job from a Jean Zay front end by using a specific partition, visu. With this visu partition, the job is launched on one of the jean-zay-visu visualisation nodes, on which the computing hours are not deducted from your allocation. By default, the execution time is 10 minutes and cannot exceed 1 hour (i.e. time=HH:MM:SS ≤ 1:00:00).

Example of batch job on a visu node

#!/bin/bash
#SBATCH --job-name=paraview_avec_pvbatch   # name of job
#SBATCH --nodes=1                 # 1 node
#SBATCH --ntasks-per-node=1       # 1 process
#SBATCH --time=00:45:00           # here 45 min (default 10 min, max 4 hours)
#SBATCH --output=paraview_MHD%j.out
#SBATCH --error=paraview_MHD%j.err
##SBATCH --account=<my_project>@cpu   # If needed, set CPU hour accounting: <my_project> = echo $IDRPROJ
#SBATCH --partition=visu          # To run on visualization node
 
cd ${SLURM_SUBMIT_DIR}                # go to the submission directory
module purge                          # purges the loaded interactive modules inherited by default
 
module load python/3.7.3
module load paraview/5.8.0-osmesa-mpi-python3-nek5000    # loads the ParaView version permitting offscreen-rendering
 
export PYTHONPATH=$PYTHONPATH:/gpfslocalsup/pub/anaconda-py3/2019.03/lib/python3.7/site-packages/
 
set -x
 
srun --unbuffered pvbatch --force-offscreen-rendering script.py
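
Assuming the script above is saved under a file name of your choice (paraview_pvbatch.slurm is a hypothetical example), it is submitted and monitored from a front end in the usual way:

sbatch paraview_pvbatch.slurm     # submit the job on the visu partition
squeue -u $USER                   # follow its state in the queue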

Example of idrvnc-alloc session

[login@jean-zay1: ~]$ idrvnc-alloc
salloc: Pending job allocation 633386
salloc: job 633386 queued and waiting for resources
salloc: job 633386 has been allocated resources
salloc: Granted job allocation 633386
salloc: Waiting for resource configuration
salloc: Nodes jean-zay-visu1 are ready for job
INFO 2020-11-03 17:15:10,376 Starting VNC server. Please wait...:
INFO 2020-11-03 17:15:10,502 --Launching VNC server. Please wait before attempting to connect...
INFO 2020-11-03 17:15:15,509 --VNC server launched. Please connect.
URL to connect: jean-zay-visu1.idris.fr:20
Password VNC:   xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Then, launch your local VNC client and, in the window which opens, enter the visu node and port number shown on the next-to-last line of the idrvnc-alloc output. Example:

URL to connect: jean-zay-visu1.idris.fr:20

Click on Connect; then, in the window which opens, enter the password displayed on the last line:

Password VNC:   xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

A desktop window of the visu node then opens on your local machine (fenetre1-idr.jpeg).
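
If your local VNC client also provides a command-line launcher (the vncviewer command of TigerVNC or TurboVNC, for example; this is an assumption about your client), the same connection can be opened directly with the address returned by idrvnc-alloc:

# Connect to the visu node and display number reported by idrvnc-alloc
vncviewer jean-zay-visu1.idris.fr:20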

For example, we look for the available ParaView versions and choose the binary version which is capable of reading all the possible file types:

fenetre2-idr.jpeg
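
A command-line sketch of this step, run in a terminal of the VNC session (the version to load is left as a placeholder to be chosen from the list):

module avail paraview            # list the ParaView modules installed on Jean Zay
module load paraview/<version>   # load the binary version chosen from the list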

Then we launch ParaView with vglrun (required for graphical software using the OpenGL libraries):

fenetre3-idr.jpeg
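
As a sketch of this last step (run in a terminal of the VNC session, after loading the chosen ParaView module as above):

# vglrun routes the OpenGL rendering to the node's GPU inside the VNC session
vglrun paraview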