Jean Zay: The module command

To load the products installed on Jean Zay, it is necessary to use the module command.

The version currently available on Jean Zay is version 4.3.1, which brings many new features compared to version 3 (historically installed on IDRIS computing machines); in particular, it automatically loads the prerequisites of a product (compiler, MPI library) when the corresponding module is loaded.

Warning: the modules accessible by default are not compatible with the gpu_p5 partition (extension equipped with NVIDIA A100 80 GB GPU installed in June 2022). It is necessary to first load the cpuarch/amd module to be able to list and load the modules compatible with this partition. For more information, see the dedicated section.

New features of Environment Modules v4

In addition to correcting numerous bugs, Environment Modules v4 introduces new functionalities, including:

  • Automatic management of dependencies between the modules (see the following section).
  • Improved choice of module default versions: The most recent version is used as the default, rather than the last version in lexicographical order. For example, version 1.10 is now correctly identified as more recent than version 1.9.
  • Advanced output filtering of the module avail sub-command via the options:
    • -d displays only the default version of each module.
    • -L displays only the most recent version of each module.
    • -C <text> searches for all the modules whose names or versions contain the pattern <text>.
  • Colourization of outputs to improve legibility.
  • Performance improvement.
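The improved choice of default versions can be illustrated with GNU sort's version ordering, which applies the same kind of comparison (this is an analogy to show the difference, not the module implementation itself):

```shell
# Lexicographic order ranks 1.9 after 1.10 (wrong for versions);
# version-aware order ranks 1.10 last, as expected.
printf '1.9\n1.10\n' | sort | tail -n 1     # prints 1.9
printf '1.9\n1.10\n' | sort -V | tail -n 1  # prints 1.10
```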

For more details, please consult the official site of Environment Modules: Release notes and Differences between versions 3.2 and 4.

Automatic management of dependencies

The automatic management of dependencies between modules, introduced in Environment Modules v4, is used at IDRIS to guarantee the coherence of the loaded environment.
This means that the module command ensures that all compiler and MPI library prerequisites are met each time a module is loaded.

  • When a module is loaded into a clean environment (no compiler or MPI library selected beforehand), a default environment is loaded if necessary (selection of a compiler and/or MPI library). For example, the NetCDF library is available with two Intel environments (18.0.5 and 19.0.4); by default, the Intel 19.0.4 environment is loaded:
    $ module purge
    $ module load netcdf
    Loading netcdf/4.7.2-mpi
      Loading requirement: intel-compilers/19.0.4 intel-mpi/19.0.4
    $ module list
    Currently Loaded Modulefiles:
     1) intel-compilers/19.0.4   2) intel-mpi/19.0.4   3) netcdf/4.7.2-mpi
    $ which ncdump
    /.../netcdf/4.7.2/intel-19.0.4-cqo7jj3yxaprhm23gr2tfq2f4epw7k3r/bin/ncdump
  • When a module is loaded into an environment which is already constrained by a previously loaded compiler (and possibly an MPI library), the product installation compiled with that same compiler version (and, respectively, MPI library version) is automatically selected. For example, the NetCDF library is available for the Intel 18.0.5 and 19.0.4 environments; to verify the behaviour of the module command, you can first load the Intel 18.0.5 environment:
    $ module list
    Currently Loaded Modulefiles:
     1) intel-compilers/18.0.5   2) intel-mpi/18.0.5 
    $ module load netcdf
    $ module list
    Currently Loaded Modulefiles:
     1) intel-compilers/18.0.5   2) intel-mpi/18.0.5   3) netcdf/4.7.2-mpi  
    $ which ncdump
    /.../netcdf/4.7.2/intel-18.0.5-4q5xoewvla54i45rdh743eu7bm7wnsi7/bin/ncdump
  • On the other hand, if no installation exists for the previously loaded environment, an error message is displayed (indicating a conflict, for example) and the module is not loaded. For example, if you try to load the NetCDF library in the Intel 19.0.5 environment, where it is not available:
    $ module list
    Currently Loaded Modulefiles:
     1) intel-compilers/19.0.5  
    $ module load netcdf
    Loading intel-compilers/19.0.4
      ERROR: Conflicting intel-compilers is loaded
     
    Loading intel-compilers/18.0.5
      ERROR: Conflicting intel-compilers is loaded
     
    Loading netcdf/4.7.2-mpi
      ERROR: Load of requirement intel-compilers/19.0.4 or intel-compilers/18.0.5 failed
     
    $ module list
    Currently Loaded Modulefiles:
     1) intel-compilers/19.0.5

Note: When a product is available for more than one environment, it can be necessary to load the compiler and/or the MPI library before loading the product, in order to ensure that the right environment is used.
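In a batch script, this can be made explicit by pinning the compiler and MPI library before loading the product. A minimal sketch of such a job script (the job parameters, executable name and versions are illustrative, taken from the examples on this page):

```shell
#!/bin/bash
#SBATCH --job-name=netcdf_job   # hypothetical job parameters
#SBATCH --ntasks=4

# Pin the environment first so that the netcdf build matching
# Intel 18.0.5 is selected, then load the product.
module purge
module load intel-compilers/18.0.5 intel-mpi/18.0.5
module load netcdf
module list        # verify which versions were actually loaded
srun ./my_program  # hypothetical executable
```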

Displaying the installed products

To display the products installed on Jean Zay, it is necessary to use the avail sub-command.

$ module avail
------------------------- /gpfslocalsup/pub/module-rh/modulefiles -------------------------
arm-forge             intel-all/16.0.4        intel-compilers/18.0.1  intel-vtune/18.0.5     pgi/19.7   
cuda/9.2              intel-all/18.0.1        intel-compilers/18.0.5  intel-vtune/19.0.2     pgi/19.9   
cuda/10.0             intel-all/18.0.5        intel-compilers/19.0.2  intel-vtune/19.0.4     pgi/19.10  
cuda/10.1.1           intel-all/19.0.2        intel-compilers/19.0.4  intel-vtune/19.0.5     
cuda/10.1.2           intel-all/19.0.4        intel-compilers/19.0.5  nccl/2.4.2-1+cuda9.2   
cudnn/9.2-v7.5.1.10   intel-all/19.0.5        intel-itac/18.0.1       nccl/2.4.2-1+cuda10.1  
cudnn/10.1-v7.5.1.10  intel-itac/19.0.4       intel-itac/18.0.5       pgi/19.5               
 
------------------------- /gpfslocalsup/pub/modules-idris-env4/modulefiles/linux-rhel7-x86_64 -------------------------
abinit/7.0.5                fftw/3.3.8-mpi           nco/4.8.1                         trilinos/12.12.1-mpi      
abinit/8.8.2-mpi            fftw/3.3.8-mpi-cuda      ncview/2.1.7-mpi                  udunits2/2.2.24           
abinit/8.10.3-mpi           gaussian/g09-revD01      netcdf-fortran/4.5.2              uuid/1.6.2                
adf/2019.104-mpi-cuda       gaussian/g16-revC01      netcdf-fortran/4.5.2-mpi          valgrind/3.14.0-mpi
anaconda-py2/2019.03        gcc/4.8.5                netcdf/4.7.2                      valgrind/3.14.0-mpi-cuda
anaconda-py3/2019.03        gcc/5.5.0                netcdf/4.7.2-mpi                  vasp/5.4.4-mpi-cuda       
arpack-ng/3.7.0-mpi         gcc/6.5.0                netlib-lapack/3.8.0               vim/8.1.0338              
autoconf/2.69               gcc/8.2.0                netlib-scalapack/2.0.2-mpi        visit/2.13.0-mpi          
automake/1.16.1             gcc/8.3.0                netlib-scalapack/2.0.2-mpi-cuda   vtk/8.1.2-mpi  
bigdft/devel-0719-mpi-cuda  gcc/9.1.0                nwchem/6.8.1-mpi                  xedit/1.2.2               
blitz/1.0.1                 gcc/9.1.0-cuda-openacc   octave/4.4.1-mpi
boost/1.62.0                gdb/8.2.1                opa-psm2/11.2.77
[...]   

(Non-exhaustive display based on a command output edited in December 2019)

Searching for a particular product

It is possible to search for a particular product by entering: module avail <first letters of the product name>.
For example, to display the products beginning with cud:

$ module avail cud
 
------------------------- /gpfslocalsup/pub/module-rh/modulefiles -------------------------
cuda/9.2  cuda/10.0  cuda/10.1.1  cuda/10.1.2  cudnn/9.2-v7.5.1.10  cudnn/10.1-v7.5.1.10

Verifying which modules are already loaded

The list sub-command allows you to verify which modules are loaded in your current environment at a given moment:

$ module list
Currently Loaded Modulefiles:    
 1) intel-compilers/19.0.4   4) intel-vtune/19.0.4     7) intel-itac/19.0.4  
 2) intel-mkl/19.0.4         5) intel-advisor/19.0.4   8) intel-all/19.0.4   
 3) intel-mpi/19.0.4         6) intel-tbb/19.0.4      

If no module is loaded, the following message appears:

$ module list
No Modulefiles Currently Loaded.

Loading a product

Loading a product is done with the load sub-command, followed by one of the following:

  • The complete name of the module in order to select a precise product version:
    $ module load intel-compilers/19.0.4
  • or the beginning of the module name which selects the product default version:
    $ module load intel-compilers

The module load command returns no output when it runs without a problem. It can therefore be useful to run the module list command to know which version was actually loaded.
On the other hand, an error can occur and prevent the loading of a module. In this case, an error message is returned; for example:

  • If the module to load does not exist:
    $ module load intel-compilers-19/19.0.4
    ERROR: Unable to locate a modulefile for 'intel-compilers-19/19.0.4'
  • If a conflict exists between the module to load and one of the previously loaded modules:
    $ module list
    Currently Loaded Modulefiles:
     1) intel-compilers/19.0.5
     
    $ module load intel-compilers/19.0.4
    Loading intel-compilers/19.0.4
      ERROR: Conflicting intel-compilers is loaded
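Because of this behaviour, scripts should not assume that a load succeeded. Environment Modules v4 returns a non-zero exit status when a load fails, so a job script can test it directly; a small sketch (the product name is just an example):

```shell
# Abort the script early if the requested module cannot be loaded.
if ! module load netcdf; then
    echo "Error: could not load netcdf" >&2
    exit 1
fi
module list   # confirm exactly which versions were selected
```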

Loading more than one product

It is possible to load more than one product at the same time.

  • List the products on the same command line:
    $ module load intel-compilers/19.0.4 intel-mpi/19.0.4 intel-mkl/19.0.4

    Caution: The order of the modules listed can be important!
    For example, the following command does not give the desired result because intel-compilers/18.0.5 and intel-mpi/18.0.5 end up not being used:

    $ module load netcdf intel-compilers/18.0.5 intel-mpi/18.0.5
    Loading netcdf/4.7.2-mpi
      Loading requirement: intel-compilers/19.0.4 intel-mpi/19.0.4
     
    Loading intel-compilers/18.0.5
      ERROR: Conflicting intel-compilers is loaded
     
    Loading intel-mpi/18.0.5
      ERROR: Conflicting intel-mpi is loaded
     
    $ module list
    Currently Loaded Modulefiles:
     1) intel-compilers/19.0.4   2) intel-mpi/19.0.4   3) netcdf/4.7.2-mpi


    In this case, you need to load intel-compilers/18.0.5 and intel-mpi/18.0.5 before netcdf to have the expected result:

    $ module load intel-compilers/18.0.5 intel-mpi/18.0.5 netcdf
     
    $ module list
    Currently Loaded Modulefiles:
     1) intel-compilers/18.0.5   2) intel-mpi/18.0.5   3) netcdf/4.7.2-mpi
  • Certain modules are shortcuts which allow loading more than one module in a single operation. This is the case for the modules named intel-all/XX.Y.Z, which load several modules defining a complete Intel environment for the chosen XX.Y.Z version.
    For example, with Intel 19.0.4:
    $ module load intel-all/19.0.4
    Loading intel-all/19.0.4
      Loading requirement: intel-compilers/19.0.4 intel-mkl/19.0.4 intel-mpi/19.0.4
        intel-vtune/19.0.4 intel-advisor/19.0.4 intel-tbb/19.0.4 intel-itac/19.0.4
     
    $ module list
    Currently Loaded Modulefiles:
     1) intel-compilers/19.0.4   4) intel-vtune/19.0.4     7) intel-itac/19.0.4  
     2) intel-mkl/19.0.4         5) intel-advisor/19.0.4   8) intel-all/19.0.4   
     3) intel-mpi/19.0.4         6) intel-tbb/19.0.4

Unloading a product

You can remove a product from your environment by using the unload sub-command. Additionally, you can delete all the modules with the purge sub-command:

$ module list
Currently Loaded Modulefiles:
 1) intel-compilers/19.0.4   2) intel-mpi/19.0.4   3) intel-mkl/19.0.4
 
$ module unload intel-mkl/19.0.4
$ module list
Currently Loaded Modulefiles:
 1) intel-compilers/19.0.4   2) intel-mpi/19.0.4
 
$ module purge
$ module list
No Modulefiles Currently Loaded.

No information is returned if the product is unloaded without a problem.

Changing a product version

When you wish to change the version of a product which is already loaded, you can use the switch sub-command:

$ module list
Currently Loaded Modulefiles:
 1) intel-compilers/19.0.4   2) intel-mpi/18.0.5
$ module switch intel-mpi/19.0.4
$ module list
Currently Loaded Modulefiles:
 1) intel-compilers/19.0.4   2) intel-mpi/19.0.4

As above, no message is returned by the command if the switching occurs without a problem.

Cautionary note for the linking

When compiling your codes, even after loading the appropriate module to use a library, it may still be necessary to specify the libraries to be used at the linking step.
For example, to compile with the HYPRE mathematical library:

$ module load hypre
$ ifort -o test -lHYPRE test.f90

On the other hand, note that the paths to the header files, the Fortran modules, the static libraries (.a) and the dynamic libraries (.so) are set automatically, without needing to be defined.

For all these points, in case of a problem, contact the User Support Team.

Modules compatible with gpu_p5 partition

Jobs targeting the gpu_p5 partition (extension installed in June 2022 and equipped with NVIDIA A100 80 GB GPUs) must use specific/dedicated modules because the nodes that compose this partition are equipped with AMD CPUs while all the other Jean Zay nodes are equipped with Intel CPUs.

Warning: you must also recompile your codes with the dedicated modules before using them on this partition. If you plan to launch a code on several partitions using the 2 types of CPU (AMD and Intel), it is necessary to compile separate binaries.
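A possible way to organise this is to build one binary per CPU architecture from the same source. A sketch using module names that appear in the listings on this page (the compiler wrappers and output names are illustrative):

```shell
# Binary for the Intel CPU partitions (default module environment)
module purge
module load intel-compilers/19.0.4 intel-mpi/19.0.4
mpiifort -o my_code.intel my_code.f90

# Binary for the AMD CPU nodes of the gpu_p5 partition
module purge
module load cpuarch/amd
module load gcc/10.1.0-cuda-openacc openmpi/4.0.5-cuda
mpifort -o my_code.amd my_code.f90
```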

Important: the modules accessible by default are not compatible with this partition. It is therefore necessary to load the cpuarch/amd module first to have access to the dedicated modules:

$ module load cpuarch/amd
$ module avail
------------------------------------------- /gpfslocalsup/pub/module-rh/modulefiles -------------------------------------------
arm-forge/19.1.1              intel-all/2019.5(19.0.5)   intel-itac/2020.1         intel-tbb/2019.6(19.0.4)        pgi/20.1  
arm-forge/20.1.2              intel-all/2020.0           intel-itac/2020.2         intel-tbb/2019.8(19.0.5)        pgi/20.4  
arm-forge/20.2.1              intel-all/2020.1           intel-itac/2020.3         intel-tbb/2020.0
[...]
 
----------------------------- /gpfslocalsup/pub/modules-idris-env4/modulefiles/linux-rhel8-x86_64 -----------------------------
anaconda-py2/2019.03  gcc/7.3.0               gcc/10.1.0-cuda-openacc  python/3.6.15  python/3.10.4             
anaconda-py3/2019.03  gcc/8.2.0               magma/2.5.4-cuda         python/3.7.3   pytorch-gpu/py3/1.11.0    
anaconda-py3/2020.11  gcc/8.3.0               nccl/2.9.6-1-cuda        python/3.7.5   sox/14.4.2                
anaconda-py3/2021.05  gcc/8.4.1(8.3.1)        openfoam/2112-mpi        python/3.7.6   sparsehash/2.0.3          
cudnn/8.1.1.33-cuda   gcc/9.1.0               openmpi/4.0.5-cuda       python/3.7.10  tensorflow-gpu/py3/2.8.0  
gcc/4.9.4             gcc/9.1.0-cuda-openacc  openmpi/4.1.1-cuda       python/3.8.2   
gcc/5.5.0             gcc/9.3.0               python/2.7.16            python/3.8.8   
gcc/6.5.0             gcc/10.1.0              python/3.5.6             python/3.9.12