This is a detailed overview of using parallelism to achieve more computation in the same amount of elapsed time, covering running multiple complete programs in parallel, "distributed memory" designs and "shared memory" designs. It concentrates on principles rather than details, to help attendees make the right decisions, proceed in the right direction, and know what work and skills are involved. It is also intended for programmers who need to support parallel codes, by giving them an understanding of the fundamental principles.
The course is split into three lectures, and some people will want to stop after only one or two. However, it is NOT written so that people can 'drop in': the later lectures depend on material covered in the earlier ones. The content of the three lectures is described below, after the prerequisites.
Even for the first lecture, you must be reasonably competent at writing non-trivial scripts in Python, Perl or something like bash, or at programming in some language. You should not attempt to program in parallel before you can program in serial.
For the last two lectures, you MUST be a reasonably competent and experienced serial programmer in a language such as Python, Fortran, C, C++ or Java. This is NOT an easy area to understand, and this is not an easy course.
1. The introduction, running complete serial programs in parallel, and a brief overview of parallel programming.
2. Parallel programming proper, including currently used parallel environments.
3. Shared memory models and programming in more depth.
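As a flavour of the simplest technique above, running complete serial computations in parallel, here is a minimal Python sketch (illustrative only, not course material; the task function and inputs are made up):

```python
# Run the same serial function on independent inputs using a pool of
# worker processes - the "embarrassingly parallel" case, where the
# tasks share no data and need no communication.
from multiprocessing import Pool

def serial_task(n):
    # Stand-in for a complete serial computation: sum of squares below n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Four workers each pick up tasks from the list as they become free.
    with Pool(processes=4) as pool:
        results = pool.map(serial_task, [10, 100, 1000])
    print(results)  # same order as the inputs, regardless of finish order
```

Note that `Pool.map` hides all the process management and preserves input order; the equivalent for complete programs is simply launching several jobs at once from the shell or a batch scheduler.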
There is a glossary of the jargon commonly used in parallel programming. All terms used in the course are explained, but so many are needed that it can be confusing.
The previous version of the course was significantly different, and includes some material that may still be of interest to some people.