ENCCS Workshop – Intermediate Topics in MPI
December 8 @ 09:00 - December 11 @ 12:00 CET. Free of charge.
The Message Passing Interface (MPI) is the de facto standard for distributed-memory parallelism in high-performance computing (HPC). MPI is the dominant programming model for modern-day supercomputers and will continue to be critical in enabling researchers to scale up their HPC workloads to forthcoming pre-exascale and exascale systems within EuroHPC and elsewhere.
This workshop targets programmers in both academia and industry who already have experience with basic MPI and are ready to take the next step to more advanced usage. Topics covered include communicators, groups, derived datatypes, one-sided communication, non-blocking collectives and hybrid MPI+threading approaches. Lectures will be interleaved with hands-on exercises. All exercises will be written in C, but the instructors will be able to answer questions about MPI in Fortran and Python.
Prerequisites:
- Familiarity with MPI in C/C++, Fortran or Python, either from introductory courses or workshops (e.g. PDC's Introduction to MPI or the SNIC course Introduction to parallel programming using message passing with MPI) or through self-taught usage.
- Familiarity with C/C++
- Basic Linux command line skills
- Access to a SNIC cluster, or your own computer with MPI and compilers installed.
Register using the link below.
Schedule:
Day 1 – Tuesday 8 December 2020 – Communicators, groups, derived datatypes
Day 2 – Wednesday 9 December 2020 – One-sided communication
Day 3 – Thursday 10 December 2020 – Collective communication (including non-blocking)
Day 4 – Friday 11 December 2020 – MPI and threads