Introduction to MPI

This is an introduction to using MPI for writing parallel programs to run on clusters and multi-core systems, largely for the purposes of "high-performance computing". It covers all of the principles of MPI and teaches the use of all of its basic facilities (i.e. the ones that are used in most HPC applications), so that attendees will be able to write serious programs and work on ones they receive from other people.

All examples are given in Fortran 90 and C, and attendees can use whichever of these they prefer for the practicals, or call the C interface from C++.
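
As a flavour of what the C interface looks like (this is an illustrative sketch, not taken from the course materials), a minimal MPI program just initialises the library, asks which process it is, and finalises:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;

    /* Every MPI program starts by initialising the library ... */
    MPI_Init(&argc, &argv);

    /* ... and usually asks which process it is and how many there are */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from process %d of %d\n", rank, size);

    /* All processes must finalise before exiting */
    MPI_Finalize();
    return 0;
}

Such a program is typically compiled with an implementation-provided wrapper (such as mpicc) and launched with mpirun or mpiexec, though the details vary between systems.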

Lectures

The first three lectures cover most of the fundamentals of using MPI in real programs and programming with simple collective communication (similar to SIMD: Single Instruction, Multiple Data); a short example of this style is sketched after these lectures.

Introduction (also in the form of a Handout for the MPhil)

Using MPI (also in the form of a Handout for the MPhil)

Datatypes and Collectives (also in the form of a Handout for the MPhil)
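
The collective style referred to above can be illustrated by a small sketch (again, not course material): process zero broadcasts a value to every process, each process computes a partial result, and the results are summed back onto process zero.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, n = 0;
    double local, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Process 0 chooses a value and broadcasts it to all processes */
    if (rank == 0) n = 100;
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Every process computes a partial result ... */
    local = (double)n * rank;

    /* ... and the partial results are summed onto process 0 */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("Total = %f\n", total);
    MPI_Finalize();
    return 0;
}

Note that every process in the communicator calls MPI_Bcast and MPI_Reduce; that is what makes the operations collective.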

The next three cover slightly more advanced topics, including the basics of point-to-point communication, and complete the coverage of the main facilities that most programmers will need; a simple send/receive pair is sketched after them.

Point-to-Point Transfers (also in the form of a Handout for the MPhil)

More on Datatypes and Collectives (also in the form of a Handout for the MPhil)

Error Handling (also in the form of a Handout for the MPhil)
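
As an illustration of the point-to-point style (a sketch, not course material), the following has process zero send a single integer to process one; it must be run with at least two processes.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Process 0 sends one integer to process 1, with tag 0 */
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Process 1 receives it; the status records the actual source and tag */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Process 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}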

The next two cover communication between subsets of processes and asynchronous (non-blocking) communication, which only some programmers will need but which are very important for some applications; both are sketched briefly after these lectures.

Communicators etc. (also in the form of a Handout for the MPhil)

More on Point-to-Point (also in the form of a Handout for the MPhil)
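
The following sketch (not course material) shows both facilities at once: the processes are split into two subsets, each with its own communicator, and then perform a non-blocking ring exchange in MPI_COMM_WORLD.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, colour, subrank, sendval, recvval;
    MPI_Comm subcomm;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Split the processes into two subsets (even and odd ranks),
       each with its own communicator */
    colour = rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, colour, rank, &subcomm);
    MPI_Comm_rank(subcomm, &subrank);

    /* A non-blocking exchange with the neighbouring processes in a
       ring over MPI_COMM_WORLD; the transfers complete at MPI_Waitall */
    sendval = rank;
    MPI_Irecv(&recvval, 1, MPI_INT, (rank + size - 1) % size, 0,
              MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendval, 1, MPI_INT, (rank + 1) % size, 0,
              MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("World rank %d (subgroup rank %d) received %d\n",
           rank, subrank, recvval);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}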

The next three are a summary of the most critical points from later lectures on the practical use of MPI, a description of how problems can be split between processes, and a description of how to use MPI's virtual topologies for grid-like decompositions; a small topology example is sketched after them.

Miscellaneous Guidelines (also in the form of a Handout for the MPhil)

Problem Decomposition (also in the form of a Handout for the MPhil)

Topologies
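
As an illustrative sketch (not course material) of the virtual-topology facilities, the following attaches a 2-D Cartesian grid to a communicator and asks for each process's coordinates and its neighbours in the first dimension.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, coords[2], left, right;
    int dims[2] = {0, 0}, periods[2] = {0, 0};
    MPI_Comm cart;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Let MPI choose a sensible 2-D process grid for 'size' processes */
    MPI_Dims_create(size, 2, dims);

    /* Create a communicator with that Cartesian topology attached;
       the last argument allows MPI to reorder the ranks */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);
    MPI_Comm_rank(cart, &rank);

    /* Each process can ask where it is in the grid and who its
       neighbours are (MPI_PROC_NULL at the non-periodic edges) */
    MPI_Cart_coords(cart, rank, 2, coords);
    MPI_Cart_shift(cart, 0, 1, &left, &right);

    printf("Rank %d is at (%d,%d); neighbours in dimension 0: %d and %d\n",
           rank, coords[0], coords[1], left, right);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}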

The next three cover unfortunately complicated aspects, which are needed to avoid problems in large, portable production codes; they are not part of the 'core' course.

Composite Types and Language Standards

Attributes and I/O

Debugging, Performance and Tuning

Extra Lectures

These lectures cover aspects that will not affect people who use only the facilities recommended in this course, but may affect people working on MPI programs written by others, and are needed by people who want to go further with MPI. Most people are advised not to use the facilities they describe, as the problems they introduce usually outweigh their benefits.

One-sided Communication

Advanced Completion Issues

Other Features Not Covered

Auxiliary Material

Practical exercises

These are practical exercises in using the facilities taught, and they are intended to be worked through in order.

Programs and data used in the practicals

Interface proformas for use in the practicals

These are programs and data used in the exercises, and summaries of the interfaces to the MPI procedures (because they are not described in full in the lectures). Anyone working through the exercises will need to download them.

Specimen answers to the exercises

These are some specimen answers, which may help if you get stuck with an exercise. All are in Fortran 90 and C, and some are in C++ using the C interface.

The next three are some MPI codes written by the author, which both show what can be done and are useful in their own right.

Code to provide a globally consistent POSIX timer

An example of how to use the profiling facility

An MPI timer/tester written for HPC benchmarking
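
The profiling example above is the author's own code; the general mechanism it relies on is the standard PMPI name-shifted interface, which the following sketch illustrates by counting and timing calls to MPI_Send (the const in the buffer argument follows the MPI-3 C binding).

/* A wrapper library intercepts MPI_Send by providing its own definition
   and forwards the call to the real implementation via the name-shifted
   PMPI_Send entry point; it is linked in ahead of the MPI library. */
#include <stdio.h>
#include <mpi.h>

static long send_count = 0;

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm) {
    double t0 = MPI_Wtime();
    int result = PMPI_Send(buf, count, datatype, dest, tag, comm);
    double t1 = MPI_Wtime();

    send_count++;
    fprintf(stderr, "MPI_Send call %ld took %.6f seconds\n",
            send_count, t1 - t0);
    return result;
}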