
Page 1

Practical Introduction to Message-Passing Interface (MPI)

October 1st, 2015

By: Pier-Luc St-Onge

Page 2

Partners and Sponsors

Page 3

Setup for the workshop

1. Get a user ID and password paper (provided in class):
   ##:**********

2. Access to the local computer (replace ## and ___ with the appropriate values; "___" is provided in class):
   a. User name: csuser##
   b. Password: ___@[S##

3. SSH connection to Guillimin (replace **********):
   a. Host name: guillimin.hpc.mcgill.ca
   b. User name: class##
   c. Password: **********

Page 4

Import Examples and Exercises

● On Guillimin:
  cp -a /software/workshop/cq-formation-intro-mpi ~/
  cd cq-formation-intro-mpi

● From GitHub:
  module add git   # If on Guillimin
  git clone -b mcgill \
    https://github.com/calculquebec/cq-formation-intro-mpi.git
  cd cq-formation-intro-mpi

Page 5

Outline

● General Concepts
● A First Code
● MPI Data Types
● Point-to-point Communications
● Synchronization Between Processes
● Collective Communications
● Conclusion

Page 6

General Concepts

Page 7

Parallel Algorithm - Why?

● When do you need to parallelize your algorithm?
  ○ The algorithm takes too much time on a single processor, or even on a full compute node.
  ○ The algorithm uses too much memory.
    ■ The algorithm may fit in cache memory, but only if the data is split across multiple processors.
  ○ The algorithm does a lot of read and write operations (I/O) on the network storage.
    ■ The data does not fit on a single node.
    ■ The bandwidth and latency of a single node become the bottleneck.

Page 8

Vocabulary and Concepts

● Serial tasks
  ○ Any task that cannot be split into two simultaneous sequences of actions
  ○ Examples: starting a process, reading a file, any communication between two processes

● Parallel tasks
  ○ Data parallelism: the same action applied to different data. Could be serial tasks done in parallel.
  ○ Process parallelism: one action on one set of data, but the action is split across multiple processes or threads.
    ■ Data partitioning: rectangles or blocks

Page 9

Serial Code Parallelization

● Implicit Parallelization | minimum work for you
  ○ Threaded libraries (MKL, ACML, GOTO, etc.)
  ○ Compiler directives (OpenMP)
  ○ Good for desktops and shared-memory machines

● Explicit Parallelization | work is required!
  ○ You specify what should be done on which CPU
  ○ The solution for distributed clusters (shared nothing!)

● Hybrid Parallelization | work is required!
  ○ Mix of implicit and explicit parallelization
    ■ Vectorization and parallel CPU instructions
  ○ Good for accelerators (CUDA, OpenCL, etc.)

Page 10

Distributed Memory Model

[Figure: two processes, each with its own copy of an array A(10), connected only by the network. The two A(10) are different variables!]

Page 11

In Practice on a Cluster

● The scheduler provides a list of worker node names
● The job spawns processes on each worker node
  ○ How to manage all these processes?
● Processes must communicate with each other
  ○ How to manage communications? By using "sockets" each time? No!

Page 12

Solution - MPI, or Message Passing Interface!

● A model for the distributed memory approach
  ○ Each process is identified by a unique integer
  ○ The programmer manages memory by placing data in a particular process
  ○ The programmer sends data between processes (point-to-point communications)
  ○ The programmer performs collective operations on sets of processes

● MPI is a specification for a standardized library: subroutines linked with your code
  ○ History: MPI-1 (1994), MPI-2 (1997), MPI-3 (2012)

Page 13

Different MPI Implementations, Different MPI Modules

● OpenMPI, MVAPICH2, Intel MPI, …
  ○ Each comes with a different compilation wrapper
    ■ Compilation arguments may differ
  ○ Execution arguments may also differ
    ■ mpiexec or mpirun, -n or -np

● When working on a cluster, please use the provided MVAPICH2 or OpenMPI libraries: they have been compiled with InfiniBand support (20-40 Gbps).

● The MPI library must match the compiler used (Intel, PGI, or GCC), both at compile time and at run time.
  ○ {GNU, Intel, PGI} * {OpenMPI, MVAPICH2}

Page 14

A First Code

Page 15

Basic Features of an MPI Program

● Include the basic definitions
  ○ C: #include <mpi.h>
  ○ Fortran: INCLUDE 'mpif.h' (or USE mpi)
● Initialize the MPI environment
● Get information about processes
● Send information between two specific processes (point-to-point communications)
● Send information between groups of processes (collective communications)
● Terminate the MPI environment

Page 16

Six Basic MPI Functions

● You need to know these six functions:
  ○ MPI_Init: initialize the MPI environment
  ○ MPI_Comm_size: number of processes in a group
  ○ MPI_Comm_rank: unique integer identifying each process in a group (0 <= rank < size)
  ○ MPI_Send: send data to another process
  ○ MPI_Recv: receive data from another process
  ○ MPI_Finalize: close the MPI environment

● MPI_COMM_WORLD is the default communicator (defined in mpi.h); it refers to the group of all processes in the job

● Each statement executes independently in each process

Page 17

Example: A Smiley from N Processes

● smiley.c

#include <stdio.h>
#include <mpi.h>

int main (int argc, char * argv[])
{
    int rank, size;

    MPI_Init( &argc, &argv );

    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    printf( "%3d/%-3d :-D\n", rank, size );

    MPI_Finalize();

    return 0;
}

● smiley.f90

PROGRAM smiley
    IMPLICIT NONE
    INCLUDE 'mpif.h'
    INTEGER ierr, rank, size

    CALL MPI_Init(ierr)

    CALL MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    CALL MPI_Comm_size(MPI_COMM_WORLD, size, ierr)

    WRITE(*,*) rank, '/', size, ' :-D'

    CALL MPI_Finalize(ierr)

END PROGRAM smiley

Page 18

Compiling your MPI Code

● Not defined by the standard
● More or less similar for all implementations:
  ○ You need to specify the include directory and the MPI library
  ○ But usually a compiler wrapper (mpicc, mpif90) does it for you automatically

● On the Guillimin cluster:
  module add ifort_icc openmpi
  mpicc smiley.c -o smiley
  mpif90 smiley.f90 -o smiley

Page 19

Running your MPI Code

● Not defined by the standard
● A launcher program (mpirun, mpiexec, mpirun_rsh, ...) is used to start your MPI program
● The particular choice of launcher depends on the MPI implementation and on the machine used
● A hosts file specifies on which nodes to run the MPI processes (--hostfile nodes.txt); by default, the launcher runs on localhost or uses the host file provided by the batch system

● On a worker node of Guillimin:
  mpiexec -n 4 ./smiley

Page 20

Exercises - Part 1

1. If not done already, log in to Guillimin:
   ssh class##@guillimin.hpc.mcgill.ca

2. Check for loaded software modules:
   module list

3. See all available modules:
   module av

4. Load the necessary modules:
   module add ifort_icc openmpi

5. Check the loaded modules again

6. Verify that you have access to the correct MPI package:
   which mpicc    # or: which mpif90
   /software/CentOS-6/tools/openmpi-1.6.3-intel/bin/mpicc

Page 21

Exercises - Part 2

1. Make sure you have the workshop files
   a. See page 4: from /software or from GitHub
   cd ~/cq-formation-intro-mpi

2. Compile the code:
   mpicc smiley.c -o smiley
   mpif90 smiley.f90 -o smiley

3. Edit smiley.pbs: reserve 4 processors (ppn=4) for 5 minutes (00:05:00). Edit:
   mpiexec -n 4 ./smiley > smiley.out

Page 22

Exercises - Part 3

1. Verify your smiley.pbs:

   #!/bin/bash
   #PBS -l nodes=1:ppn=4
   #PBS -l walltime=00:05:00
   #PBS -V
   #PBS -N smiley
   cd $PBS_O_WORKDIR
   mpiexec -n 4 ./smiley > smiley.out

2. Submit your job:
   qsub smiley.pbs

3. Check the job status:
   qstat -u $USER

4. Check the output (smiley.out)

Page 23

Exercises - Part 4

● Copy smiley.{c,f90} to smileys.{c,f90}
● Modify smileys.c or smileys.f90 so that each process prints a different smiley (one possible approach is sketched below):
  ○ If rank is 0, print :-|
  ○ If rank is 1, print :-)
  ○ If rank is 2, print :-D
  ○ If rank is 3, print :-P
● Compile the new code
● Copy smiley.pbs to smileys.pbs and modify the copied version accordingly
  ○ Submit a new job and check the output
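The workshop provides the real starting files; what follows is only a minimal sketch of one possible smileys.c. The faces array and the fallback for ranks beyond 3 are illustrative choices, not part of the exercise files:

#include <stdio.h>
#include <mpi.h>

int main (int argc, char * argv[])
{
    int rank, size;
    /* One smiley per rank; ranks beyond 3 reuse the last face. */
    const char * faces[] = { ":-|", ":-)", ":-D", ":-P" };

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    printf( "%3d/%-3d %s\n", rank, size, faces[rank < 4 ? rank : 3] );

    MPI_Finalize();
    return 0;
}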

Page 24

Another Example - hello.{c,f90}

#include <math.h>
// [...]
float a, b;

if (rank == 0) {
    a = sqrt(2.0);
    b = 0.0;
}
if (rank == 1) {
    a = 0.0;
    b = sqrt(3.0);
}

printf("On proc %d: a, b = \t%f\t%f\n",
       rank, a, b);

(On processes with rank >= 2, a and b remain uninitialized: each process has its own copy of every variable.)

Page 25

MPI Data Types

Portable data types for heterogeneous compute nodes

Page 26

MPI Basic Data Types (C)

MPI Data type          C Data type
MPI_CHAR               char
MPI_SHORT              short
MPI_INT                int
MPI_LONG               long int
MPI_UNSIGNED_CHAR      unsigned char
MPI_UNSIGNED_SHORT     unsigned short
MPI_UNSIGNED           unsigned int
MPI_UNSIGNED_LONG      unsigned long int
MPI_FLOAT              float
MPI_DOUBLE             double
MPI_LONG_DOUBLE        long double

Page 27

MPI Basic Data Types (Fortran)

MPI Data type          Fortran Data type
MPI_INTEGER            INTEGER
MPI_INTEGER8           INTEGER(selected_int_kind(18))
MPI_REAL               REAL
MPI_DOUBLE_PRECISION   DOUBLE PRECISION
MPI_COMPLEX            COMPLEX
MPI_DOUBLE_COMPLEX     DOUBLE COMPLEX
MPI_LOGICAL            LOGICAL
MPI_CHARACTER          CHARACTER(1)

Page 28

MPI Advanced Data Types

MPI Data type   C or Fortran
MPI_PACKED      Data packed with MPI_Pack
MPI_BYTE        8 binary digits (one byte, transferred without conversion)

(A sketch of MPI_PACKED in action follows below.)
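MPI_PACKED is used with MPI_Pack / MPI_Unpack to ship values of mixed types in one message. This is a minimal sketch, not from the workshop files, assuming at least two processes; the 64-byte buffer and the sample values are arbitrary:

#include <stdio.h>
#include <mpi.h>

int main (int argc, char * argv[])
{
    int rank, position = 0;
    char buffer[64];        /* packing buffer (64 bytes is arbitrary) */
    int n = 42;
    double x = 3.14;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if (rank == 0) {
        /* Pack an int, then a double, into one contiguous buffer. */
        MPI_Pack( &n, 1, MPI_INT,    buffer, sizeof(buffer), &position, MPI_COMM_WORLD );
        MPI_Pack( &x, 1, MPI_DOUBLE, buffer, sizeof(buffer), &position, MPI_COMM_WORLD );
        MPI_Send( buffer, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD );
    }
    if (rank == 1) {
        MPI_Recv( buffer, sizeof(buffer), MPI_PACKED, 0, 0, MPI_COMM_WORLD, &status );
        /* Unpack in the same order the values were packed. */
        MPI_Unpack( buffer, sizeof(buffer), &position, &n, 1, MPI_INT,    MPI_COMM_WORLD );
        MPI_Unpack( buffer, sizeof(buffer), &position, &x, 1, MPI_DOUBLE, MPI_COMM_WORLD );
        printf( "n = %d, x = %f\n", n, x );
    }

    MPI_Finalize();
    return 0;
}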

Page 29

Point-to-point Communications

qsub -I -l nodes=1:ppn=2 -l walltime=5:00:00

Page 30

Six Basic MPI Functions

● MPI_Init: initialize the MPI environment
● MPI_Comm_size: number of processes in a group
● MPI_Comm_rank: unique integer identifying each process in a group (0 <= rank < size)
● MPI_Send: send data to another process
● MPI_Recv: receive data from another process
● MPI_Finalize: close the MPI environment

Page 31

MPI_Send / MPI_Recv

● Passing a message between two different MPI processes (point-to-point communication)
● If one process sends, another one initiates the matching receive
● The exchanged data types are predefined for portability
● MPI_Send / MPI_Recv are blocking!
  (There are also non-blocking versions)

Page 32

MPI: Sending a message

C:       MPI_Send(&data, count, data_type, dest, tag, comm)
Fortran: MPI_Send(data, count, data_type, dest, tag, comm, ierr)

● data: variable to send
● count: number of data elements to send
● data_type: type of the data to send
● dest: rank of the receiving process
● tag: the label of the message
● comm: communicator - the set of involved processes
● ierr: error code (the return value in C)

Page 33

MPI: Receiving a message

C:       MPI_Recv(&data, count, data_type, source, tag, comm, &status)
Fortran: MPI_Recv(data, count, data_type, source, tag, comm, status, ierr)

● source: rank of the sending process (or can be set to MPI_ANY_SOURCE)
● tag: must match the label used by the sender (or can be set to MPI_ANY_TAG)
● status: a C structure (MPI_Status) or an integer array with information about the received message (source, tag, actual number of bytes received)
● MPI_Send and MPI_Recv are blocking!

Page 34

MPI_Send / MPI_Recv - Example 1

// examples/sendRecvEx1.c
int rank, size, buffer = -1, tag = 10;
MPI_Status status;

MPI_Init( &argc, &argv );
MPI_Comm_rank( MPI_COMM_WORLD, &rank );
MPI_Comm_size( MPI_COMM_WORLD, &size );

if (size >= 2 && rank == 0) {
    buffer = 33;
    MPI_Send( &buffer, 1, MPI_INT, 1, tag, MPI_COMM_WORLD );
}
if (size >= 2 && rank == 1) {
    MPI_Recv( &buffer, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status );
    printf( "Rank %d\tbuffer= %d\n", rank, buffer );
    if (buffer != 33) printf( "fail\n" );
}

MPI_Finalize();

Page 35

Exercise: Sending a Matrix

● Goal: send a 4x4 matrix from process 0 to process 1 (the core idea is sketched below)
● Edit the file send_matrix.{c,f90}
● Compile your modified code
● Run it with 2 processes
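The exercise file send_matrix.{c,f90} is the real starting point; this is only a sketch of the core idea, assuming a contiguous C array and at least two processes (the variable names and sample values are illustrative):

#include <stdio.h>
#include <mpi.h>

int main (int argc, char * argv[])
{
    int rank, size, i, j;
    float matrix[4][4];
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    if (size >= 2 && rank == 0) {
        for (i = 0; i < 4; i++)
            for (j = 0; j < 4; j++)
                matrix[i][j] = 4 * i + j;    /* sample values */
        /* A C 2D array is contiguous: all 16 floats go in one message. */
        MPI_Send( &matrix[0][0], 16, MPI_FLOAT, 1, 0, MPI_COMM_WORLD );
    }
    if (size >= 2 && rank == 1) {
        MPI_Recv( &matrix[0][0], 16, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status );
        for (i = 0; i < 4; i++)
            printf( "%4.0f %4.0f %4.0f %4.0f\n", matrix[i][0],
                    matrix[i][1], matrix[i][2], matrix[i][3] );
    }

    MPI_Finalize();
    return 0;
}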

Page 36

Synchronization Between Processes

Page 37

Processes Waiting for Communications

● When using blocking communications, an unbalanced workload may leave processes waiting
● Worst case: everyone is waiting for everyone else
  ○ Cases of deadlock must be avoided
  ○ Some bugs may lead to non-matching send/receive pairs

Page 38

MPI_Send / MPI_Recv - Example 2

/////////////////////
// Should not work //
// Why?            //
/////////////////////

if (size >= 2 && rank == 0) {
    MPI_Send( &buffer1, 1, MPI_INT, 1, 10, MPI_COMM_WORLD );
    MPI_Recv( &buffer2, 1, MPI_INT, 1, 20, MPI_COMM_WORLD, &status );
}
if (size >= 2 && rank == 1) {
    MPI_Send( &buffer2, 1, MPI_INT, 0, 20, MPI_COMM_WORLD );
    MPI_Recv( &buffer1, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status );
}

(Answer: both processes call MPI_Send first; each send may block until the matching receive is posted, so neither process ever reaches its MPI_Recv - a deadlock.)

Page 39

MPI_Send / MPI_Recv - Example 2, Solution A

//////////////////////////////
// Exchange Send/Recv order //
// on one processor         //
//////////////////////////////

if (size >= 2 && rank == 0) {
    MPI_Send( &buffer1, 1, MPI_INT, 1, 10, MPI_COMM_WORLD );
    MPI_Recv( &buffer2, 1, MPI_INT, 1, 20, MPI_COMM_WORLD, &status );
}
if (size >= 2 && rank == 1) {
    MPI_Recv( &buffer1, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status );
    MPI_Send( &buffer2, 1, MPI_INT, 0, 20, MPI_COMM_WORLD );
}

Page 40

MPI_Send / MPI_Recv - Example 2, Solution B

//////////////////////////////////
// Use non-blocking send: Isend //
//////////////////////////////////

MPI_Request request;

if (size >= 2 && rank == 0) {
    MPI_Isend( &buffer1, 1, MPI_INT, 1, 10, MPI_COMM_WORLD, &request );
    MPI_Recv( &buffer2, 1, MPI_INT, 1, 20, MPI_COMM_WORLD, &status );
}
if (size >= 2 && rank == 1) {
    MPI_Isend( &buffer2, 1, MPI_INT, 0, 20, MPI_COMM_WORLD, &request );
    MPI_Recv( &buffer1, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status );
}

MPI_Wait( &request, &status ); // Wait until the send is complete

Page 41

Non-blocking Communications

● MPI_Isend() starts the transfer and returns control immediately
  ○ Advantage: between the start and the end of the transfer, the code can do other things
  ○ Warning: during a transfer, the buffer cannot be reused or deallocated
  ○ MPI_Request: an additional structure that identifies the communication

● You need to check for completion! (see the sketch below)
  ○ MPI_Wait(&request, &status): the code waits for the completion of the transfer
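MPI_Recv also has a non-blocking counterpart, MPI_Irecv. The following is a minimal sketch (not from the workshop files), assuming exactly 2 processes; it shows the usual pattern: post the receive early, do other work, then wait.

#include <stdio.h>
#include <mpi.h>

int main (int argc, char * argv[])
{
    int rank, other = -1;
    MPI_Request request;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if (rank == 1)   /* post the receive before doing anything else */
        MPI_Irecv( &other, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request );
    if (rank == 0)
        MPI_Send( &rank, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );

    /* ... rank 1 could do useful work here, as long as it
       does not read or write 'other' ... */

    if (rank == 1) {
        MPI_Wait( &request, &status );   /* 'other' is valid only after this */
        printf( "Received %d\n", other );
    }

    MPI_Finalize();
    return 0;
}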

Page 42

Exercise: Exchanging Vectors

● Goal: exchange a small vector of data between two processes (one deadlock-free pattern is sketched below)
● Edit the file exchange.{c,f90}
● Compile your modified code
● Run it with 2 processes
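Besides Solution A and Solution B above, MPI also offers MPI_Sendrecv, which combines the send and the matching receive in one call and cannot deadlock against itself. This sketch is an assumption about the exercise, not the distributed exchange.{c,f90}; the vector length N = 5 and the sample values are illustrative:

#include <stdio.h>
#include <mpi.h>

#define N 5

int main (int argc, char * argv[])
{
    int rank, other, i;
    float mine[N], theirs[N];
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    other = 1 - rank;                 /* assumes exactly 2 processes */

    for (i = 0; i < N; i++)
        mine[i] = rank * 100 + i;     /* sample data */

    /* The combined send+receive cannot deadlock like two blocking sends. */
    MPI_Sendrecv( mine,   N, MPI_FLOAT, other, 0,
                  theirs, N, MPI_FLOAT, other, 0,
                  MPI_COMM_WORLD, &status );

    printf( "Rank %d received first element %.0f\n", rank, theirs[0] );

    MPI_Finalize();
    return 0;
}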

Page 43

Collective Communications

One step beyond

Page 44

Collective Communications

● Involve ALL processes in the communicator (MPI_COMM_WORLD)
● MPI_Bcast: the same data is sent from the "root" process to all the others
● MPI_Reduce: the "root" process collects data from all processes and performs an operation (min, max, add, multiply, etc.)
● MPI_Scatter: distributes variables from the "root" process to each of the others
● MPI_Gather: collects variables from all processes to the "root" one

Page 45

MPI_Bcast - Example

int rank, size, root = 0;
float a[2];
// ...

if (rank == root) {
    a[0] = 2.0f;
    a[1] = 4.0f;
}

MPI_Bcast( a, 2, MPI_FLOAT, root, MPI_COMM_WORLD );

// Print result

[Figure: the array a = {2,4} on root P0 is copied to P0, P1, P2 and P3; after the call, every process holds {2,4}.]

Page 46

MPI_Reduce - Example

int rank, size, root = 0;
float a[2], res[2];
// ...
a[0] = 2 * rank + 0;
a[1] = 2 * rank + 1;

MPI_Reduce( a, res, 2, MPI_FLOAT, MPI_SUM, root, MPI_COMM_WORLD );

if (rank == root) {
    // Print result
}

[Figure: element-wise sum gathered on root P0: Sa = a1+a2+a3+a4 and Sb = b1+b2+b3+b4, where process Pi contributes (ai, bi).]

Page 47

Exercise: Approximation of Pi

● Edit pi_collect.{c,f90} (the overall structure is sketched below)
  ○ Use MPI_Bcast to broadcast the number of iterations
  ○ Use MPI_Reduce to compute the final sum

● Run:
  mpiexec -n 2 ./pi_collect < pi_collect.in
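pi_collect.{c,f90} is provided with the workshop files; this sketch only illustrates the classic Bcast + Reduce structure, assuming the midpoint rule on 4/(1+x^2) over [0,1] (the actual exercise file may differ):

#include <stdio.h>
#include <mpi.h>

int main (int argc, char * argv[])
{
    int rank, size, n = 0, i;
    double h, x, local = 0.0, pi = 0.0;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    if (rank == 0)
        scanf( "%d", &n );            /* read iteration count from stdin */

    /* Every process needs n: broadcast it from rank 0. */
    MPI_Bcast( &n, 1, MPI_INT, 0, MPI_COMM_WORLD );

    /* Midpoint rule on 4/(1+x^2); each rank takes every size-th slice. */
    h = 1.0 / n;
    for (i = rank; i < n; i += size) {
        x = h * (i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* Combine all partial sums on rank 0. */
    MPI_Reduce( &local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD );

    if (rank == 0)
        printf( "pi ~= %.15f\n", pi );

    MPI_Finalize();
    return 0;
}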

Page 48

Exercise: MPI_Wtime

● Edit pi_collect.{c,f90} to measure the elapsed time for computing Pi
  ○ Only process 0 will compute the elapsed time

double t1, t2;

if (rank == 0) { t1 = MPI_Wtime(); }

// Computing Pi ...

if (rank == 0) {
    t2 = MPI_Wtime();
    printf("Time = %.16f sec\n", t2 - t1);
}

Page 49

MPI_Scatter and MPI_Gather

● MPI_Scatter: one-to-all communication - different data is sent from the root process to all others in the communicator, following the rank order
● MPI_Gather: data is collected by the root process; it is the opposite of Scatter

[Figure: Scatter splits a1,a2,...,a8 on P0 into pairs - P0 gets a1,a2; P1 gets a3,a4; P2 gets a5,a6; P3 gets a7,a8. Gather is the reverse.]

Page 50

Example: MPI_Scatter, MPI_Gather

// Scatter
int i, sc = 2; // sendcount
float a[16], b[2];

if (rank == root) {
    for (i = 0; i < 16; i++) {
        a[i] = i;
    }
}

MPI_Scatter( a, sc, MPI_FLOAT, b, sc, MPI_FLOAT, root, MPI_COMM_WORLD );

// Print rank, b[0] and b[1]

// Gather
int i, sc = 2; // sendcount
float a[16], b[2];

b[0] = rank;
b[1] = rank;

MPI_Gather( b, sc, MPI_FLOAT, a, sc, MPI_FLOAT, root, MPI_COMM_WORLD );

if (rank == root) {
    // Print a[0] through a[15]
}

Page 51

Exercise: Dot Product of Two Vectors

● Edit dot_prod.{c,f90} (one possible structure is sketched below)
  ○ Dot product = A^T B = a1*b1 + a2*b2 + ...
  ○ 2*8400 integers: dp = 1*8400 + 2*8399 + ... + 8400*1
  ○ The root process initializes vectors A and B
  ○ Use MPI_Scatter to split A and B
  ○ Use MPI_Reduce to compute the final result

● (Optional) Replace MPI_Reduce with MPI_Gather and let the root process compute the final result
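dot_prod.{c,f90} is the real exercise file; this is a hypothetical sketch of the Scatter + Reduce structure, assuming the number of processes divides 8400 evenly (variable names are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 8400

int main (int argc, char * argv[])
{
    int rank, size, i, chunk;
    int *a = NULL, *b = NULL, *la, *lb;
    long local = 0, dp = 0;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    chunk = N / size;                 /* assumes size divides 8400 */

    if (rank == 0) {
        a = malloc( N * sizeof(int) );
        b = malloc( N * sizeof(int) );
        for (i = 0; i < N; i++) {
            a[i] = i + 1;             /* A = 1, 2, ..., 8400 */
            b[i] = N - i;             /* B = 8400, ..., 2, 1 */
        }
    }
    la = malloc( chunk * sizeof(int) );
    lb = malloc( chunk * sizeof(int) );

    /* Give each process its own slice of A and B. */
    MPI_Scatter( a, chunk, MPI_INT, la, chunk, MPI_INT, 0, MPI_COMM_WORLD );
    MPI_Scatter( b, chunk, MPI_INT, lb, chunk, MPI_INT, 0, MPI_COMM_WORLD );

    for (i = 0; i < chunk; i++)
        local += (long)la[i] * lb[i];

    /* Sum the partial dot products on the root. */
    MPI_Reduce( &local, &dp, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD );

    if (rank == 0) {
        printf( "dot product = %ld\n", dp );
        free( a );
        free( b );
    }
    free( la );
    free( lb );

    MPI_Finalize();
    return 0;
}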

Page 52

Conclusion

Page 53

MPI routines we know...

● MPI environment
  ○ MPI_Init, MPI_Finalize
● Information on processes
  ○ MPI_Comm_rank, MPI_Comm_size
● Point-to-point communications
  ○ MPI_Send, MPI_Recv
  ○ MPI_Isend, MPI_Wait
● Collective communications
  ○ MPI_Bcast, MPI_Reduce
  ○ MPI_Scatter, MPI_Gather

Page 54

Further readings

● The standard itself, news, development:
  ○ http://www.mpi-forum.org
● Online reference book:
  ○ http://www.netlib.org/utk/papers/mpi-book/mpi-book.html
● Calcul Quebec's wiki:
  ○ https://wiki.calculquebec.ca/w/MPI/en
● Detailed MPI tutorials:
  ○ http://people.ds.cam.ac.uk/nmm1/MPI/
  ○ http://www.mcs.anl.gov/research/projects/mpi/tutorial/