In the future, most high-performance computing (HPC) systems will have a hierarchical hardware design, e.g., a cluster of ccNUMA or shared-memory nodes, with each node containing several multi-core CPUs. Parallel programming must therefore combine distributed-memory parallelization across the node interconnect with shared-memory parallelization inside each node. On such hardware, many mismatch problems arise between the hybrid hardware topology and the hybrid or homogeneous parallel programming models used on it: hybrid programming that combines MPI and OpenMP is often slower than pure MPI programming. Nevertheless, there are also several situations in which hybrid programming has significant advantages.
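To make the hybrid model concrete, a minimal MPI+OpenMP program might look like the sketch below. This is an illustration of the general pattern, not code from the talk: it assumes one MPI process per node (communicating over the interconnect) with an OpenMP thread team doing the shared-memory work inside each node, using the common "funneled" style in which only the master thread makes MPI calls.

```c
/* Hypothetical hybrid MPI+OpenMP sketch: MPI across nodes,
 * OpenMP threads inside each node. Compile with e.g.
 * mpicc -fopenmp and launch one MPI process per node. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* MPI_THREAD_FUNNELED: threads exist, but only the master
       thread of each process calls MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local_sum = 0.0;

    /* Shared-memory parallelization inside the node. */
    #pragma omp parallel reduction(+:local_sum)
    {
        local_sum += 1.0;   /* stand-in for per-thread work */
    }

    /* Distributed-memory parallelization across the interconnect. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total contributions: %f\n", global_sum);

    MPI_Finalize();
    return 0;
}
```

Whether this style outperforms pure MPI depends on the very mismatch problems the abstract mentions, e.g., how well one process per node maps onto the ccNUMA topology.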
The second part of the talk addresses future directions of Message Passing Interface (MPI) standardization by the MPI-3 Forum.
Dr. Rolf Rabenseifner is head of Parallel Computing – Training and Application Services at the High Performance Computing Center Stuttgart (HLRS). He was a member of the MPI-2 Forum and is a member of the steering committee of the MPI-3 Forum. He is involved in MPI optimization and benchmarking, e.g., in the HPC Challenge Benchmark Suite. He teaches parallel programming models in workshops and summer schools at many universities and labs in Germany.
The announcement of this seminar is available as a PDF file.
The presentation given during this seminar is available at the following URL: https://fs.hlrs.de/projects/rabenseifner/publ/mpi_openmp_IDRIS2009_2to1.pdf