Gromacs
🚧 Page under construction
Useful links
- 🌐 Official site: https://www.gromacs.org
- 📖 Documentation: https://manual.gromacs.org/
- 💬 Forum / support: https://gromacs.bioexcel.eu
- 🧬 Git repository: https://gitlab.com/gromacs/gromacs
Available versions
The list of available versions can be displayed with the following commands:
- CPU and V100 partitions:
  module purge
  module avail gromacs
- A100 partition:
  module purge
  module load arch/a100
  module avail gromacs
- H100 partition:
  module purge
  module load arch/h100
  module avail gromacs
General advice
mdrun offers many performance-tuning options. It is important to run your own tests and to read the dedicated page of the documentation.
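As a sketch of such tests (file and job names here are hypothetical), one can rerun a short segment of an existing simulation with different OpenMP thread counts and compare the performance reported in each log file:

```shell
# Quick benchmark sketch (hypothetical file names): run a short segment
# with several OpenMP thread counts; -resethway resets the performance
# counters halfway through so startup cost is excluded from the figures.
for nt in 4 8 16; do
    gmx mdrun -s production.tpr -deffnm bench_ntomp${nt} \
              -ntomp ${nt} -nsteps 10000 -resethway
    grep "Performance:" bench_ntomp${nt}.log   # ns/day for this run
done
```

The thread counts and step count are illustrative; adapt them to your system size and the node you target.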
Advice for running on GPU
GROMACS benefits from extensive GPU development. It is very important to read the page on mdrun performance in order to optimise your runs on Jean Zay.
In particular, two main variants are available on Jean Zay:
- MPI: MPI version (the modules have -mpi in their name).
- tMPI: version with the MPI layer emulated by threads.
The tMPI version is more efficient on GPU but is limited to a single node.
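As a minimal sketch (input name hypothetical), the two variants are launched differently:

```shell
# MPI build (module name contains -mpi): one rank per MPI process,
# launched through srun; can span several nodes.
srun gmx_mpi mdrun -deffnm production

# tMPI build: the ranks are threads inside a single process, so the run
# is restricted to one node; the rank count is set with -ntmpi.
gmx mdrun -ntmpi 4 -ntomp 8 -deffnm production
```

The full submission scripts below show both styles for each partition.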
Example submission scripts
⚠️ These submission scripts are examples to adapt to the resources your job actually needs. We invite you to read the documentation pages on resource reservation carefully.
- CPU partition
- V100 partition (MPI)
- V100 partition (tMPI)
- A100 partition (MPI)
- A100 partition (tMPI)
- H100 partition (MPI)
- H100 partition (tMPI)
CPU partition:
#!/bin/bash
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks-per-node=20 # Number of MPI tasks per node
#SBATCH --cpus-per-task=2 # Number of cores for each MPI task
#SBATCH --job-name=gromacs
#SBATCH --output=%x.%j # output in <job-name>.<jobid>
#SBATCH --error=%x.%j # errors <job-name>.<jobid>
#SBATCH --account=<project_id>@cpu # project_id available with idracct
#SBATCH --time=02:00:00
module purge
module load gromacs # check the available versions with module avail gromacs
srun gmx_mpi mdrun -deffnm production # run the MPI version of mdrun
V100 partition (MPI):
#!/bin/bash
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks-per-node=4 # Number of MPI tasks per node
#SBATCH --cpus-per-task=10 # Number of cores for each MPI task
#SBATCH --gpus-per-node=4
#SBATCH --job-name=gromacs
#SBATCH --output=%x.%j # output in <job-name>.<jobid>
#SBATCH --error=%x.%j # errors <job-name>.<jobid>
#SBATCH --account=<project_id>@v100 # project_id available with idracct
#SBATCH --time=02:00:00
module purge
module load gromacs # check the available versions with module avail gromacs
srun gmx_mpi mdrun -deffnm production -ntomp 4
V100 partition (tMPI):
#!/bin/bash
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks-per-node=4 # Number of MPI tasks per node
#SBATCH --cpus-per-task=10 # Number of cores for each MPI task
#SBATCH --gpus-per-node=4
#SBATCH --job-name=gromacs
#SBATCH --output=%x.%j # output in <job-name>.<jobid>
#SBATCH --error=%x.%j # errors <job-name>.<jobid>
#SBATCH --account=<project_id>@v100 # project_id available with idracct
#SBATCH --time=02:00:00
module purge
module load gromacs/2024.3-cuda # check the available versions with module avail gromacs
export GMX_GPU_PME_PP_COMMS=true
export GMX_GPU_DD_COMMS=true
gmx mdrun -ntmpi 4 -npme 1 -ntomp 5 \
-update gpu -bonded gpu \
-nb gpu -pme gpu -pmefft gpu \
-deffnm production -v
A100 partition (MPI):
#!/bin/bash
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks-per-node=8 # Number of MPI tasks per node
#SBATCH --cpus-per-task=8 # Number of cores for each MPI task
#SBATCH --gpus-per-node=8
#SBATCH --constraint=a100
#SBATCH --job-name=gromacs
#SBATCH --output=%x.%j # output in <job-name>.<jobid>
#SBATCH --error=%x.%j # errors <job-name>.<jobid>
#SBATCH --account=<project_id>@a100 # project_id available with idracct
#SBATCH --time=02:00:00
module purge
module load arch/a100
module load gromacs # check the available versions with module avail gromacs
srun gmx_mpi mdrun -deffnm production -ntomp 4
A100 partition (tMPI):
#!/bin/bash
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks-per-node=8 # Number of MPI tasks per node
#SBATCH --cpus-per-task=8 # Number of cores for each MPI task
#SBATCH --gpus-per-node=8
#SBATCH --constraint=a100
#SBATCH --job-name=gromacs
#SBATCH --output=%x.%j # output in <job-name>.<jobid>
#SBATCH --error=%x.%j # errors <job-name>.<jobid>
#SBATCH --account=<project_id>@a100 # project_id available with idracct
#SBATCH --time=02:00:00
module purge
module load arch/a100
module load gromacs/2024.3-cuda # check the available versions with module avail gromacs
export GMX_GPU_PME_PP_COMMS=true
export GMX_GPU_DD_COMMS=true
gmx mdrun -ntmpi 8 -npme 1 -ntomp 5 \
-update gpu -bonded gpu \
-nb gpu -pme gpu -pmefft gpu \
-deffnm production -v
H100 partition (MPI):
#!/bin/bash
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks-per-node=4 # Number of MPI tasks per node
#SBATCH --cpus-per-task=24 # Number of cores for each MPI task
#SBATCH --gpus-per-node=4
#SBATCH --constraint=h100
#SBATCH --job-name=gromacs
#SBATCH --output=%x.%j # output in <job-name>.<jobid>
#SBATCH --error=%x.%j # errors <job-name>.<jobid>
#SBATCH --account=<project_id>@h100 # project_id available with idracct
#SBATCH --time=02:00:00
module purge
module load arch/h100
module load gromacs # check the available versions with module avail gromacs
srun gmx_mpi mdrun -deffnm production -ntomp 4
H100 partition (tMPI):
#!/bin/bash
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks-per-node=4 # Number of MPI tasks per node
#SBATCH --cpus-per-task=24 # Number of cores for each MPI task
#SBATCH --gpus-per-node=4
#SBATCH --constraint=h100
#SBATCH --job-name=gromacs
#SBATCH --output=%x.%j # output in <job-name>.<jobid>
#SBATCH --error=%x.%j # errors <job-name>.<jobid>
#SBATCH --account=<project_id>@h100 # project_id available with idracct
#SBATCH --time=02:00:00
module purge
module load arch/h100
module load gromacs/2024.3-cuda # check the available versions with module avail gromacs
gmx mdrun -ntmpi 4 -npme 1 -ntomp 5 \
-update gpu -bonded gpu \
-nb gpu -pme gpu -pmefft gpu \
-deffnm production -v