Jean Zay: Compilation of an MPI parallel code in Fortran, C/C++

The MPI libraries available on Jean Zay are Intel MPI and Open MPI.

The Open MPI libraries are provided both with and without CUDA-aware MPI support. To compile a CUDA-aware MPI code, please refer to the page CUDA-aware MPI and GPUDirect.

The various versions of the MPI libraries available on Jean Zay can be activated through the module command.

You must also load the compilers (Intel or PGI) before compiling.

Loading examples

  • Intel MPI:
$ module avail intel-mpi
-------------------------------------------------------------------------- /gpfslocalsup/pub/module-rh/modulefiles --------------------------------------------------------------------------
intel-mpi/5.1.3(16.0.4)   intel-mpi/2018.5(18.0.5)  intel-mpi/2019.4(19.0.4)  intel-mpi/2019.6  intel-mpi/2019.8  
intel-mpi/2018.1(18.0.1)  intel-mpi/2019.2(19.0.2)  intel-mpi/2019.5(19.0.5)  intel-mpi/2019.7  intel-mpi/2019.9
 
$ module load intel-compilers/19.0.4 intel-mpi/19.0.4
  • Open MPI (to compile without CUDA-aware MPI support, you must choose one of the modules without the -cuda suffix):
$ module avail openmpi
-------------------------------------------------------- /gpfslocalsup/pub/modules-idris-env4/modulefiles/linux-rhel8-skylake_avx512 --------------------------------------------------------
openmpi/3.1.4       openmpi/3.1.5  openmpi/3.1.6-cuda  openmpi/4.0.2       openmpi/4.0.4       openmpi/4.0.5       openmpi/4.1.0       openmpi/4.1.1       
openmpi/3.1.4-cuda  openmpi/3.1.6  openmpi/4.0.1-cuda  openmpi/4.0.2-cuda  openmpi/4.0.4-cuda  openmpi/4.0.5-cuda  openmpi/4.1.0-cuda  openmpi/4.1.1-cuda     
 
$ module load pgi/20.4 openmpi/4.0.4

Compilation

The compilation and linking of an MPI program are done using the MPI wrappers of the compilers associated with the chosen library (a multi-file compile-and-link example is sketched after the commands below):

  • Intel MPI:
$ mpiifort source.f90
 
$ mpiicc source.c
 
$ mpiicpc source.C
  • Open MPI:
$ mpifort source.f90
 
$ mpicc source.c
 
$ mpic++ source.C
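
For a code split across several source files, you can compile and link in separate steps with the same wrappers. The following sketch uses the Intel MPI Fortran wrapper; the file names module_mpi.f90 and main_mpi.f90 and the executable name exec_mpi are hypothetical and only illustrate the usual compile-then-link workflow:

$ mpiifort -c module_mpi.f90                       # compile each source file into an object file
$ mpiifort -c main_mpi.f90
$ mpiifort module_mpi.o main_mpi.o -o exec_mpi     # link the object files into the executable exec_mpi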

Comment

  • The execution of an MPI program must be done in batch (via the srun command or a Slurm job) to avoid crashing the front-end node; a minimal submission script is sketched below.
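
For illustration, here is a minimal sketch of a Slurm submission script for a pure MPI run. The job name, number of tasks, time limit, loaded modules and executable name (exec_mpi, reused from the sketch above) are assumptions to adapt to your case; the complete set of directives required on Jean Zay is described on the dedicated execution pages:

#!/bin/bash
#SBATCH --job-name=mpi_test          # assumed job name
#SBATCH --ntasks=4                   # assumed number of MPI tasks
#SBATCH --hint=nomultithread         # one MPI task per physical core (no hyperthreading)
#SBATCH --time=00:10:00              # assumed wall-clock limit

module purge                                             # start from a clean environment
module load intel-compilers/19.0.4 intel-mpi/19.0.4      # same modules as used for compilation

srun ./exec_mpi                      # launch the MPI executable on the allocated tasks

The script is then submitted with the sbatch command, for example $ sbatch job_mpi.slurm (the script name is also hypothetical).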