Programming with MPI on GridRS
Dr. Márcio Castro and Dr. Pedro Velho


Page 1:

Programming with MPI on GridRS
Dr. Márcio Castro and Dr. Pedro Velho

Page 2:

Science Research Challenges

- Some applications require tremendous computing power
  • They stress the limits of computing power and storage
- Who might be interested in those applications?
  • Simulation and analysis in modern science, or e-Science

Page 3:

Example: the LHC (Large Hadron Collider, CERN)

LHC Computing Grid

- Worldwide collaboration with more than 170 computing centers in 34 countries
- A lot of data to store: 1 GByte/sec
- High computing power is needed to obtain results in feasible time

How can we achieve this goal?

Page 4:

Old-school high performance computing

• Sit and wait for a new processor
  – Wait until CPU speed doubles
  – Speed for free
    • No need to recompile or rethink the code
    • No need to pay a computer scientist to do the job

Unfortunately, this is not true anymore...

Page 5:

The need for HPC

Moore's Law (is it still true or not?)
- Moore stated in 1965 that the number of transistors would double approximately every 18 months
- But it is wrong to conclude that processors run twice as fast every 18 months just because the number of transistors doubles
- Clock speed and transistor density are not directly correlated!

Can I buy a 20 GHz processor today?

Page 6:

A single processor is not enough

– Curiously, Moore's law still holds
  • The number of transistors still doubles every 18 months
– However, other factors limit CPU clock speed:
  • Heating
  • Power consumption
– Supercomputers are too expensive for medium-size problems

The solution is to use distributed computing!

Page 7:

Distributed Computing

- Cluster of workstations
  • Commodity PCs (nodes) interconnected through a local network
  • Affordable
  • Medium scale
  • Very popular among researchers

- Grid computing
  • Several clusters interconnected through a wide-area network
  • Allows resource sharing (less expensive for universities)
  • Huge scale

This is the case of GridRS!

Page 8:

Distributed Programming

- Applications must be rewritten
  • No shared memory between nodes (processes need to communicate)
  • Data exchange happens through the network

- Possible solutions
  • Ad hoc: works only for the platform it was designed for
  • Programming models:
    – Ease data exchange and process identification
    – Portable solutions
    – Examples: MPI, PVM, Charm++

Page 9:

Message Passing Interface (MPI)

- It is a programming model
  • Not a specific implementation or product
  • Describes the interface and basic functionalities

- Scalable
  • Must handle multiple machines

- Portable
  • The socket API may change from one OS to another

- Efficient
  • Optimized communication algorithms

Page 10:

MPI Programming

- MPI implementations (all free):
  • Open MPI (used on GridRS): http://www.open-mpi.org/
  • MPICH: http://www.mpich.org
  • LAM/MPI: http://www.lam-mpi.org/

Page 11:

MPI Programming

- MPI references
  • Books:
    – MPI: The Complete Reference, by Snir, Otto, Huss-Lederman, Walker, and Dongarra, MIT Press, 1996.
    – Parallel Programming with MPI, by Peter Pacheco, Morgan Kaufmann, 1997.
  • The standard: http://www.mpi-forum.org

Page 12:

MPI Programming

– SPMD model: Single Program, Multiple Data
  • All processes execute the same source code
  • But we can define specific blocks of code to be executed by specific processes (if-else statements)

– MPI offers:
  • A way of identifying processes
  • Abstraction of low-level communication
  • Optimized communication

– MPI does not offer:
  • Performance gains for free
  • Automatic transformation of sequential code into parallel code

Pages 13-18:

MPI Programming

A possible way of parallelizing an application with MPI (built up step by step over these slides):

1. Start from the sequential version
2. Split the application into tasks
3. Choose a parallel strategy
4. Implement it with MPI

Some parallel strategies:

• Master/Slave
• Pipeline
• Divide and Conquer

Pages 19-28:

Parallel Strategies

Master/Slave

- The master is one process that centralizes all tasks
- Slaves request tasks when needed
- The master sends tasks on demand

The slides animate the exchange between the master and two slaves: each slave sends a Request, the master answers with a task (Task 1, Task 2), the slaves send back their results (Result 1, Result 2), and the master finally sends Finish to both slaves.

Page 29:

Parallel Strategies

Master/Slave

- The master is often the bottleneck
- Scalability is limited due to centralization
- It is possible to use replication to improve performance
- Good for heterogeneous platforms
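A minimal master/slave sketch in C, assuming the "task" is simply squaring an integer; in this variant a returned result doubles as the request for the next task, and the tag values and task count are arbitrary illustrations:

/* Minimal master/slave sketch: the master hands out integer tasks on
   demand and slaves return the squared value. */
#include <mpi.h>
#include <stdio.h>

#define TASK_TAG   1   /* illustrative tag for task messages   */
#define STOP_TAG   2   /* illustrative tag for the finish mark */
#define RESULT_TAG 3   /* illustrative tag for result messages */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                       /* master */
        int ntasks = 10, next = 0, result;
        MPI_Status st;
        /* Give every slave a first task (or a stop mark if none are left) */
        for (int s = 1; s < size; s++) {
            if (next < ntasks) {
                MPI_Send(&next, 1, MPI_INT, s, TASK_TAG, MPI_COMM_WORLD);
                next++;
            } else {
                MPI_Send(&next, 1, MPI_INT, s, STOP_TAG, MPI_COMM_WORLD);
            }
        }
        /* Collect results and keep feeding slaves on demand */
        for (int done = 0; done < ntasks; done++) {
            MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, RESULT_TAG,
                     MPI_COMM_WORLD, &st);
            printf("Master got %d from slave %d\n", result, st.MPI_SOURCE);
            if (next < ntasks) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TASK_TAG, MPI_COMM_WORLD);
                next++;
            } else {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, STOP_TAG, MPI_COMM_WORLD);
            }
        }
    } else {                               /* slave */
        int task, result;
        MPI_Status st;
        while (1) {
            MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == STOP_TAG) break;
            result = task * task;          /* the "work" */
            MPI_Send(&result, 1, MPI_INT, 0, RESULT_TAG, MPI_COMM_WORLD);
        }
    }
    return MPI_Finalize();
}

Because the master answers whichever slave finishes first, faster slaves naturally receive more tasks, which is why this scheme copes well with heterogeneous platforms.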

Pages 30-34:

Parallel Strategies

Pipeline

- Each process plays a specific role (a pipeline stage)
- Data flows in a single direction
- Parallelism is achieved when the pipeline is full

The slides animate Task 1, Task 2, Task 3 and Task 4 entering the pipeline one time step apart, so that after a few steps all stages are busy at the same time.

Page 35:

Parallel Strategies

Pipeline

- Scalability is limited by the number of stages
- Synchronization may lead to "bubbles"
  Example: a slow sender and a fast receiver
- Difficult to use on heterogeneous platforms
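A minimal pipeline sketch in C, assuming each stage's "work" is just adding its rank to the item passing through:

/* Minimal pipeline sketch: rank 0 injects items, each stage adds its
   rank and forwards to the next stage, the last stage prints. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, item;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < 4; i++) {             /* four items flow through */
        if (rank == 0)
            item = i;                          /* first stage produces   */
        else
            MPI_Recv(&item, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);       /* receive from previous  */

        item += rank;                          /* this stage's "work"    */

        if (rank < size - 1)
            MPI_Send(&item, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);
        else
            printf("Item %d leaves the pipeline as %d\n", i, item);
    }
    return MPI_Finalize();
}

Note how a slow stage delays everything behind it: this is the "bubble" effect mentioned above.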

Pages 36-46:

Parallel Strategies

Divide and Conquer

- Recursively "break" tasks into smaller tasks
- Or process the task directly if it is "small enough"

The slides animate an example: Work(60) is split into Work(40) and Work(20), Work(40) is split again into two Work(20) pieces, the Work(20) leaves are processed, and the partial results are combined back up the tree (Result(20) + Result(20) = Result(40), then Result(40) + Result(20) = Result(60)).

Page 47:

Parallel Strategies

Divide and Conquer

- More scalable
- It is possible to use replicated branches
- In practice it may be difficult to "break" tasks
- Well suited to branch-and-bound algorithms
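A minimal divide-and-conquer style sketch in C, assuming the "work" is just one integer per process and that partial results are combined up a binary tree of ranks, mirroring the Result(20) + Result(20) -> Result(40) -> Result(60) combination in the slides:

/* Partial results are merged pairwise up a binary tree of ranks.
   The partner < size guard makes it work for any process count;
   a real code could simply call MPI_Reduce instead. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value, partner, recv;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    value = rank + 1;                       /* each leaf's "small" result */

    for (int step = 1; step < size; step *= 2) {
        if (rank % (2 * step) == 0) {       /* I combine my partner's part */
            partner = rank + step;
            if (partner < size) {
                MPI_Recv(&recv, 1, MPI_INT, partner, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                value += recv;              /* conquer: merge partial results */
            }
        } else if (rank % (2 * step) == step) {
            partner = rank - step;          /* hand my partial result up */
            MPI_Send(&value, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
            break;                          /* this branch is done */
        }
    }
    if (rank == 0)
        printf("Combined result: %d\n", value);
    return MPI_Finalize();
}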

Page 48:

Hands-on GridRS

1) Log in for the first time on GridRS
   - $ ssh [email protected]

2) Configure the SSH and OpenMPI environment on GridRS
   - https://bitbucket.org/schnorr/gridrs/wiki

Page 49:

MPI Programming

Exercise 0: Hello World

Write the following hello world program in your home directory.

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>   /* for gethostname() */

int main(int argc, char **argv)
{
    /* A local variable to store the hostname */
    char hostname[1024];

    /* Initialize MPI */
    MPI_Init(&argc, &argv);

    /* Discover the hostname */
    gethostname(hostname, 1023);

    printf("Hello World from %s\n", hostname);

    /* Finalize MPI */
    return MPI_Finalize();
}

Compile the source code on the frontend:

$ mpicc my_source.c -o my_binary

Page 50:

MPI Programming

Configure the GridRS environment, compile and run your app:
- https://bitbucket.org/schnorr/gridrs/wiki/Run_a_Local_experiment

Use timesharing while allocating resources:
- $ oarsub -l nodes=3 -t timesharing -I

Run with "X" processes (you can choose the number of processes):
- $ mpirun --mca btl tcp,self -np X --machinefile $OAR_FILE_NODES ./my_binary

Page 51:

MPI Programming

How many processing units are available?

int MPI_Comm_size(MPI_Comm comm, int *psize)

– comm
  – Communicator: used to group processes
  – To group all processes together, use MPI_COMM_WORLD
– psize
  – Returns the total number of processes in this communicator

Example:

int size;
MPI_Comm_size(MPI_COMM_WORLD, &size);

Page 52:

MPI Programming

Exercise 1

- Create a program that prints hello world and the total number of available processes on the screen
- Try several process configurations with -np to check that your program is working
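A possible solution sketch for Exercise 1, reusing the structure of Exercise 0:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int size;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello World! %d processes are available\n", size);
    return MPI_Finalize();
}

Run it, for example, with 4 processes; every process prints the same total:

$ mpirun -np 4 ./exercise1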

Page 53:

MPI Programming

Assigning process roles

int MPI_Comm_rank(MPI_Comm comm, int *rank)

– comm
  – Communicator: specifies the processes that can communicate
  – To group all processes together, use MPI_COMM_WORLD
– rank
  – Returns the unique ID of the calling process in this communicator

Example:

int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

Page 54:

MPI Programming

Exercise 2

- Create a program that prints hello world, the total number of available processes and the process rank on the screen
- Try several process configurations with -np to check that your program is working
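A possible solution sketch for Exercise 2 (Exercise 1 plus the rank query):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes */
    printf("Hello World from process %d of %d\n", rank, size);
    return MPI_Finalize();
}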

Page 55:

MPI Programming

Exercise 3 – Master/Slave

if "I am process 0" then
    Print: "I'm the master!"
else
    Print: "I'm slave <ID> of <SIZE>!", replacing <ID> with the process rank and <SIZE> with the number of processes.
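A possible solution sketch for Exercise 3, using the SPMD if-else pattern from page 12:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)                               /* the master branch */
        printf("I'm the master!\n");
    else                                         /* every other rank  */
        printf("I'm slave %d of %d!\n", rank, size);

    return MPI_Finalize();
}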

Page 56:

Running Multi-site experiments

- https://bitbucket.org/schnorr/gridrs/wiki/Run_a_Multi-site_experiment

MPI Programming

Pages 57-60:

MPI Programming

Sending messages (synchronous)

– The receiver waits for the message to arrive (blocking)
– The sender unblocks the receiver when the message arrives

The slides animate two processes over time: Proc 2 calls MPI_Recv() and stays blocked until the message arrives; Proc 1 later calls MPI_Send(), and the receiver is unblocked when the message arrives.

Page 61:

MPI Programming

Sending messages (synchronous)

int MPI_Send(void *buf, int count, MPI_Datatype dtype,
             int dest, int tag, MPI_Comm comm)

– buf: pointer to the data to be sent
– count: number of data elements in buf
– dtype: type of the elements in buf
– dest: rank of the destination process
– tag: a number to "classify" the message (optional)
– comm: communicator

Page 62:

MPI Programming

Receiving messages (synchronous)

int MPI_Recv(void *buf, int count, MPI_Datatype dtype,
             int src, int tag, MPI_Comm comm, MPI_Status *status)

– buf: pointer to where the data will be stored
– count: maximum number of elements that buf can hold
– dtype: type of the elements in buf
– src: rank of the sender process (use MPI_ANY_SOURCE to receive from any source)
– tag: message tag (use MPI_ANY_TAG to receive any message)
– comm: communicator
– status: information about the received message; if not needed it can be ignored with MPI_STATUS_IGNORE

Page 63:

MPI Programming

Exercise 4 – Master/Slave with messages

- The master receives one message per slave
- Each slave sends a single message to the master containing its rank
- When the master receives a message, it prints the received rank
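A possible solution sketch for Exercise 4, assuming tag 0 and one integer per message:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Master: one MPI_Recv per slave, accepting any source */
        int received;
        for (int i = 1; i < size; i++) {
            MPI_Recv(&received, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Master received rank %d\n", received);
        }
    } else {
        /* Slave: a single message carrying its own rank */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    return MPI_Finalize();
}

Because the master receives from MPI_ANY_SOURCE, the ranks are printed in arrival order, not necessarily in numerical order.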

Page 64:

MPI Programming

• Exercise 5 – Computing π by the Monte Carlo method

(The slide shows a circle of radius 1 inscribed in a square of side 2.)

Area of the circle = π r² = π (1)² = π
Area of the square = 2 × 2 = 4
A = (area of circle) / (area of square) = π / 4
P(point falls inside the circle) = A = π / 4

Draw random points uniformly in the square and count how many fall inside the circle; the fraction of hits approximates π / 4.
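A possible parallel sketch, assuming every process throws the same number of random points and rank 0 collects the per-process hit counts with plain MPI_Recv (MPI_Reduce would also work):

/* Monte Carlo estimate of pi: every process draws points in the square
   [-1,1] x [-1,1], counts the hits inside the circle of radius 1, and
   rank 0 gathers the counts and applies pi ~ 4 * hits / points. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    long local_hits = 0, hits, total_hits, shots = 1000000;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    srand(rank + 1);                       /* a different (simple) seed per process */
    for (long i = 0; i < shots; i++) {
        double x = 2.0 * rand() / RAND_MAX - 1.0;
        double y = 2.0 * rand() / RAND_MAX - 1.0;
        if (x * x + y * y <= 1.0)
            local_hits++;                  /* point fell inside the circle */
    }

    if (rank == 0) {
        total_hits = local_hits;
        for (int i = 1; i < size; i++) {   /* collect the slaves' counts */
            MPI_Recv(&hits, 1, MPI_LONG, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            total_hits += hits;
        }
        printf("pi is approximately %f\n",
               4.0 * total_hits / ((double)shots * size));
    } else {
        MPI_Send(&local_hits, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD);
    }
    return MPI_Finalize();
}

The factor 4 comes directly from the area ratio above: hits / points estimates π / 4.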

Pages 65-69:

MPI Programming

Asynchronous/Non-blocking messages

– The process signals that it is waiting for a message
– It can continue working in the meantime
– The receiver can check at any time whether the message has arrived

The slides animate this over time: Proc 2 calls MPI_Irecv() and does not block, so it keeps doing other computation; Proc 1 eventually calls MPI_Isend(); when the receiver actually needs the data, it checks that the message has arrived with MPI_Wait().
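A minimal non-blocking sketch, assuming two processes and a trivial placeholder for the overlapped computation:

/* Rank 0 sends an integer; rank 1 posts MPI_Irecv, keeps computing,
   and only calls MPI_Wait when it actually needs the received value. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 42, recv = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Irecv(&recv, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);

        /* ... other computation can run here while the message travels ... */

        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* block only when the data is needed */
        printf("Rank 1 received %d\n", recv);
    }
    return MPI_Finalize();
}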

Pages 70-80:

MPI Programming

The master wants to send a message to everybody
– First solution: the master sends N-1 messages
– Optimized collective communication: the sends are done in parallel

The slides animate both options with a master and three processes. With the naive solution the master issues Send(1), Send(2) and Send(3) one after the other; since sending a message takes a constant slice of time, the broadcast completes in 3 slices of time. With the optimized scheme the master first sends to Proc 1, and in the next step the master and the process that already holds the message send to the remaining processes at the same time (Send(2) and Send(3) in parallel), so the broadcast finishes in 2 slices of time.
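In MPI this optimized collective is provided by MPI_Bcast; a minimal usage sketch (the broadcast value chosen here is arbitrary):

/* Rank 0 holds a value and MPI_Bcast delivers it to every process in
   the communicator using an optimized (tree-like) algorithm. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 1234;                     /* only the root has the data */

    /* Every process calls MPI_Bcast; afterwards all of them hold 1234 */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d now has value %d\n", rank, value);
    return MPI_Finalize();
}

Note that, unlike MPI_Send/MPI_Recv pairs, every process in the communicator must call MPI_Bcast with the same root argument.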