ARCHER 2 MPI course (May 2020)


The world’s largest supercomputers are used almost exclusively to run applications which are parallelised using Message Passing. The course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture.

Parallel programming by definition involves co-operation between processes to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues.
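
For readers who have not seen MPI before, the short C program below is a minimal sketch (not taken from the course material) of what point-to-point communication looks like in practice: one process sends a single integer to another using MPI_Send and MPI_Recv. The message contents and tag are arbitrary illustrative choices.

/* Illustrative sketch of MPI point-to-point communication */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2)
    {
        if (rank == 0) printf("Run with at least 2 processes\n");
        MPI_Finalize();
        return 0;
    }

    if (rank == 1)
    {
        value = 42;
        /* Blocking standard send of one integer to rank 0, tag 0 */
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    else if (rank == 0)
    {
        /* Matching blocking receive from rank 1 */
        MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 0 received %d from rank 1\n", value);
    }

    MPI_Finalize();
    return 0;
}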

The course is normally delivered in an intensive three-day format using EPCC’s dedicated training facilities. It is taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. This allows the lecture material to be reinforced by the tutored practical sessions.

Intended Learning Outcomes

On completion of this course, students should be able to:

  • Understand the message-passing model in detail.

  • Implement standard message-passing algorithms in MPI.

  • Debug simple MPI codes.

  • Measure and comment on the performance of MPI codes.

  • Design and implement efficient parallel programs to solve regular-grid problems.

Pre-requisite Programming Languages:

Fortran, C or C++. It is not possible to do the exercises in Python or Java.

Message Passing Programming with MPI

Dates: 14th, 15th and 22nd May 2020

Location: Online

Installing MPI locally

Note that all registered users will be given access to the Cirrus system. Although having MPI installed on your laptop is convenient, do not worry if these instructions do not work for you.

Linux

Linux users need to install the GNU compilers and a couple of MPI packages, e.g. for Ubuntu:

user@ubuntu$ sudo apt-get install gcc
user@ubuntu$ sudo apt-get install openmpi-bin
user@ubuntu$ sudo apt-get install libopenmpi-dev

Mac

Mac users need to install compilers from the Xcode developer package. It is easiest to install MPI using the Homebrew package manager; see the instructions on how to install Xcode and Homebrew.

Now install OpenMPI:

user@mac$ brew install open-mpi
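
Whichever of the above routes you take, a quick sanity check is to compile and run a small MPI program with the wrapper compiler and launcher that Open MPI provides (hello.c here is just a placeholder name for any MPI source file, such as the sketch above):

user@laptop$ mpicc hello.c -o hello
user@laptop$ mpirun -n 2 ./hello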

Windows

We recommend that Windows users access the EPCC systems (e.g. Cirrus or NEXTGenIO) using MobaXterm.

However, that may not be possible at present due to the ongoing security issues affecting many supercomputers worldwide, including systems at EPCC.

One solution is to install a Linux virtual machine (e.g. Ubuntu) and follow the Linux installation instructions above.

I know that some users have been able to install MPI natively on Windows using the Intel Parallel Studio compilers and the Intel MPI library.

Guest accounts on the NEXTGenIO system

ssh -XY [email protected]
ssh -XY nextgenio-login2

Here are the assignments of account names to users

Lecture Slides

Unless otherwise indicated all material is Copyright © EPCC, The University of Edinburgh, and is only made available for private study.

Wednesday

Thursday

Friday

Notes

Exercise Material

Unless otherwise indicated all material is Copyright © EPCC, The University of Edinburgh, and is only made available for private study.


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

CC BY-NC-SA 4.0
