Ada: Execution of a parallel MPI code in batch

The LoadLeveler system is responsible for managing jobs on all of the nodes.  Jobs are placed into “classes” principally according to the elapsed time, number of cores, and memory requested.  You can consult the structure of the Ada classes here.

In order to submit an MPI job in batch, you must:

  • Create a submission script. The following example is saved in the file mpi.ll:
    $ more mpi.ll
    # Name of the LoadLeveler job  
    # @ job_name    = Mpi
    # Job standard output file
    # @ output      = $(job_name).$(jobid)
    # Job standard error file
    # @ error       = $(job_name).$(jobid)
    # Type of job
    # @ job_type    = parallel
    # Number of processes requested (here 64)
    # @ total_tasks = 64
    # Maximum elapsed time for the entire job in hh:mm:ss (10 minutes here)
    # @ wall_clock_limit = 00:10:00
    # @ queue
    # Echo the commands as they are executed
    set -x
    # Temporary job folder
    cd $TMPDIR
    # The LOADL_STEP_INITDIR variable is set automatically by
    # LoadLeveler to the directory in which the llsubmit command is entered
    cp $LOADL_STEP_INITDIR/a.out .
    # Execution of the MPI program
    poe ./a.out
  • Submit the script via the llsubmit command (from Ada only):
$ llsubmit mpi.ll
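
    Once the job is submitted, it can be monitored with the standard LoadLeveler commands; the job identifier shown below is a placeholder, to be replaced by the one returned by llsubmit:
    $ llq -u $USER
    $ llcancel <jobid>
    The first command lists your queued and running jobs; the second cancels a job if needed.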


  • Since March 4, 2014, the variable MP_USE_BULK_XFER is set to yes by default in order to enable RDMA. This feature improves the performance of collective MPI communications and of computation-communication overlap. However, some codes may be less efficient when this variable is set to yes. You can disable RDMA for your code by setting the variable to no just before the execution of your binary (export MP_USE_BULK_XFER=no or setenv MP_USE_BULK_XFER no).
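For example, to disable RDMA you would add the export just before the launch line of the submission script above (a sketch; the poe line is the one from mpi.ll, shown commented here):

```shell
# Disable RDMA for this run only, just before launching the binary
export MP_USE_BULK_XFER=no
# poe ./a.out
```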
  • In this example, let us suppose that the executable file a.out is found in the submission directory, which is the directory in which one enters the llsubmit command.  (The LOADL_STEP_INITDIR variable is set automatically.)
  • The output file of the computation, Mpi.$(jobid), is also found in the submission directory; it is created at the beginning of the job execution.  You should not edit or modify this file while the job is running.
  • Memory:  The default value is 3.5 GB per reserved core (therefore, per MPI task).  If you request more than 64 cores (keyword total_tasks), you cannot exceed this memory limit.  Otherwise, the maximum value you can request is 7.0 GB per core, reserved via the keyword # @ as_limit = 7.0gb.
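As an illustration, requesting the 7.0 GB per core maximum means adding the as_limit directive among the other LoadLeveler keywords of the script, before # @ queue (a fragment, assuming a job of at most 64 tasks):

```shell
# @ total_tasks = 64
# Request 7.0 GB of memory per reserved core (only possible up to 64 cores)
# @ as_limit    = 7.0gb
# @ queue
```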
  • If your job contains relatively long sequential commands (such as pre- or post-processing, or the transfer or archiving of large files), the use of multistep jobs may be justified.
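A multistep job can be sketched as follows; this is an assumption-laden example, not the reference syntax: the step names (prep, compute) and the pre-processing command are hypothetical, each step carries its own directives terminated by # @ queue, and the shell part is executed once per step:

```shell
# @ job_name   = Mpi_multi
# Step 1: serial pre-processing (hypothetical step name)
# @ step_name  = prep
# @ job_type   = serial
# @ wall_clock_limit = 00:05:00
# @ queue
# Step 2: parallel computation, run only if prep exited successfully
# @ step_name  = compute
# @ dependency = (prep == 0)
# @ job_type   = parallel
# @ total_tasks = 64
# @ wall_clock_limit = 00:10:00
# @ queue
# Branch on the step name to select the commands for each step
case $LOADL_STEP_NAME in
  prep)    ./preprocess ;;    # hypothetical pre-processing command
  compute) poe ./a.out ;;
esac
```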