
Page 1:

Optimizing MPI Communication on Multi-GPU Systems using

CUDA Inter-Process Communication

Sreeram Potluri* Hao Wang* Devendar Bureddy*

Ashish Kumar Singh* Carlos Rosales+ Dhabaleswar K. Panda*

*Network-Based Computing Laboratory Department of Computer Science and Engineering

The Ohio State University

+Texas Advanced Computing Center

1

Page 2:

Outline

•  Motivation

•  Problem Statement

•  Basics of CUDA IPC

•  CUDA IPC based Designs in MVAPICH2

–  Two-sided Communication

–  One-sided Communication

•  Experimental Evaluation

•  Conclusion and Future Work

2

Page 3:

GPUs for HPC

•  GPUs are becoming a common component of modern clusters – higher compute density and performance/watt

•  3 of the top 5 systems in the latest Top 500 list use GPUs

•  An increasing number of HPC workloads are being ported to GPUs - many of these use MPI

•  MPI libraries are being extended to support communication from GPU device memory

3

Page 4:

MVAPICH/MVAPICH2 for GPU Clusters

Earlier:

At Sender:
    cudaMemcpy (sbuf, sdev)
    MPI_Send (sbuf, . . . )

At Receiver:
    MPI_Recv (rbuf, . . . )
    cudaMemcpy (rdev, rbuf)

Now:

At Sender:
    MPI_Send (sdev, . . . )

At Receiver:
    MPI_Recv (rdev, . . . )

[Figure: data path from the GPU over PCIe through the CPU to the NIC and network switch; the pipelining of PCIe copies and network transfers is handled inside MVAPICH2]

•  Efficiently overlaps copies over PCIe with RDMA transfers over the network

•  Allows us to select efficient algorithms for MPI collectives and MPI datatype processing

•  Available with MVAPICH2 v1.8 (http://mvapich.cse.ohio-state.edu)

4
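To make the "Earlier" vs. "Now" contrast concrete, here is a minimal sketch of the "Now" path, assuming a CUDA-aware MPI build (such as MVAPICH2 1.8) and two ranks on one node; the message size, tag and device-selection logic are illustrative assumptions, not part of the slide.

    /* Sketch: device pointers passed straight to MPI calls, as in the
     * "Now" example above. Assumes a CUDA-aware MPI library (e.g.
     * MVAPICH2 1.8) and one GPU per rank; sizes and tags are illustrative. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        const int n = 1 << 20;                 /* illustrative message size */
        float *dbuf;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaSetDevice(rank % 2);               /* assumes two GPUs on the node */
        cudaMalloc((void **)&dbuf, n * sizeof(float));

        if (rank == 0)
            MPI_Send(dbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(dbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        cudaFree(dbuf);
        MPI_Finalize();
        return 0;
    }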

Page 5:

Motivation

[Figure: a multi-GPU node - CPU, memory, two GPUs (GPU 0 and GPU 1) and an HCA attached through an I/O hub, with Process 0 and Process 1 each driving one GPU]

•  Multi-GPU node architectures are becoming common

•  Until CUDA 3.2
   –  Communication between processes staged through the host
   –  Shared Memory (pipelined)
   –  Network Loopback (asynchronous)

•  CUDA 4.0
   –  Inter-Process Communication (IPC)
   –  Host bypass
   –  Handled by a DMA engine
   –  Low latency and asynchronous
   –  Requires creation, exchange and mapping of memory handles - overhead

5

Page 6:

Comparison of Costs

[Bar chart: copy latency (usec) - CUDA IPC copy: 3 usec; copy via host: 49 usec; CUDA IPC copy + handle creation & mapping overhead: 228 usec]

•  Comparison of bare copy costs between two processes on one node, each using a different GPU (outside MPI)

•  8-byte messages

6

Page 7:

Outline

•  Motivation

•  Problem Statement

•  Basics of CUDA IPC

•  CUDA IPC based Designs in MVAPICH2

–  Two-sided Communication

–  One-sided Communication

•  Experimental Evaluation

•  Conclusion and Future Work

7

Page 8:

Problem Statement

8

•  Can we take advantage of CUDA IPC to improve performance of MPI communication between GPUs on a node?

•  How do we address the memory handle creation and mapping overheads?

•  What kind of performance do the different MPI communication semantics deliver with CUDA IPC?
   –  Two-sided semantics
   –  One-sided semantics

•  How do CUDA IPC based designs impact the performance of end-applications?

Page 9:

Outline

•  Motivation

•  Problem Statement

•  Basics of CUDA IPC

•  CUDA IPC based Designs in MVAPICH2

–  Two-sided Communication

–  One-sided Communication

•  Experimental Evaluation

•  Conclusion and Future Work

9

Page 10:

Basics of CUDA IPC

Process 0 (exports sbuf_ptr):
•  cuMemGetAddressRange (&base_ptr, sbuf_ptr) - obtain the base address of the allocation holding sbuf
•  cudaIpcGetMemHandle (&handle, base_ptr)
•  cudaIpcGetEventHandle (&event_handle, event)
•  send the IPC handles to Process 1
•  after receiving "Done": cudaStreamWaitEvent (0, event), then other CUDA calls that can modify the sbuf

Process 1 (receives into rbuf_ptr):
•  cudaIpcOpenMemHandle (&base_ptr, handle)
•  cudaIpcOpenEventHandle (&ipc_event, event_handle)
•  cudaMemcpy (rbuf_ptr, base_ptr + displ)
•  cudaEventRecord (ipc_event)
•  send "Done" to Process 0

The IPC memory handle should be closed at Process 1 before the buffer is freed at Process 0.

10
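The same sequence as a compact code sketch. The handle exchange over MPI, the "Done" message and the variable names are illustrative assumptions; sbuf is assumed to be the base of a cudaMalloc allocation, so the cuMemGetAddressRange step and the displacement are omitted.

    /* Sketch of the IPC flow above. Process 0 exports its send buffer and an
     * event; Process 1 maps both, copies the data device-to-device and records
     * the event so Process 0 knows when sbuf may be modified again. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    void ipc_copy(int rank, float *sbuf, float *rbuf, size_t bytes)
    {
        cudaIpcMemHandle_t   mh;
        cudaIpcEventHandle_t eh;

        if (rank == 0) {                              /* Process 0: owns sbuf */
            cudaEvent_t event;
            cudaEventCreateWithFlags(&event, cudaEventDisableTiming |
                                             cudaEventInterprocess);
            cudaIpcGetMemHandle(&mh, sbuf);           /* sbuf assumed to be the base */
            cudaIpcGetEventHandle(&eh, event);
            MPI_Send(&mh, sizeof(mh), MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Send(&eh, sizeof(eh), MPI_BYTE, 1, 1, MPI_COMM_WORLD);

            int done;                                 /* wait for "Done" from Process 1 */
            MPI_Recv(&done, 1, MPI_INT, 1, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            cudaStreamWaitEvent(0, event, 0);
            /* ... other CUDA calls that can modify sbuf ... */
        } else if (rank == 1) {                       /* Process 1: owns rbuf */
            void       *base;
            cudaEvent_t ipc_event;
            MPI_Recv(&mh, sizeof(mh), MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(&eh, sizeof(eh), MPI_BYTE, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            cudaIpcOpenMemHandle(&base, mh, cudaIpcMemLazyEnablePeerAccess);
            cudaIpcOpenEventHandle(&ipc_event, eh);

            cudaMemcpyAsync(rbuf, base, bytes, cudaMemcpyDeviceToDevice, 0);
            cudaEventRecord(ipc_event, 0);            /* signals the end of the copy */
            cudaEventSynchronize(ipc_event);

            int done = 1;                             /* "Done" message */
            MPI_Send(&done, 1, MPI_INT, 0, 2, MPI_COMM_WORLD);

            cudaIpcCloseMemHandle(base);              /* close before Process 0 frees sbuf */
        }
    }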

Page 11:

Outline

•  Motivation

•  Problem Statement

•  Basics of CUDA IPC

•  CUDA IPC based Designs in MVAPICH2

–  Two-sided Communication

–  One-sided Communication

•  Experimental Evaluation

•  Conclusion and Future Work

11

Page 12:

Design of Two-sided Communication

•  MPI communication costs
   –  synchronization
   –  data movement

•  Small message communication
   –  minimize synchronization overheads
   –  pair-wise eager buffers for host-host communication
   –  associated pair-wise IPC buffers on the GPU
   –  synchronization using CUDA events

•  Large message communication
   –  minimize the number of copies - rendezvous protocol
   –  minimize memory mapping overheads using a mapping cache (see the sketch below)

12
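As a rough illustration of the mapping cache mentioned above, here is a minimal sketch assuming a small fixed-size table keyed on the raw handle bytes; it is not the MVAPICH2 data structure, only a way to show why caching cudaIpcOpenMemHandle results removes the mapping overhead from the critical path.

    /* Sketch of a handle-mapping cache: open each remote IPC handle once and
     * reuse the mapping, since cudaIpcOpenMemHandle costs far more than the
     * copy itself (see the cost comparison on page 6). Fixed-size table,
     * no eviction - illustrative only. */
    #include <string.h>
    #include <cuda_runtime.h>

    #define CACHE_SLOTS 64

    typedef struct {
        int                valid;
        cudaIpcMemHandle_t handle;        /* key: the remote memory handle */
        void              *mapped_base;   /* value: locally mapped base pointer */
    } ipc_cache_entry_t;

    static ipc_cache_entry_t cache[CACHE_SLOTS];

    void *ipc_map_cached(const cudaIpcMemHandle_t *handle)
    {
        int i, free_slot = -1;

        for (i = 0; i < CACHE_SLOTS; i++) {
            if (cache[i].valid &&
                memcmp(&cache[i].handle, handle, sizeof(*handle)) == 0)
                return cache[i].mapped_base;          /* hit: mapping reused */
            if (!cache[i].valid && free_slot < 0)
                free_slot = i;
        }
        if (free_slot < 0)
            return NULL;                              /* table full in this sketch */

        if (cudaIpcOpenMemHandle(&cache[free_slot].mapped_base, *handle,
                                 cudaIpcMemLazyEnablePeerAccess) != cudaSuccess)
            return NULL;                              /* mapping cost paid once, on a miss */

        cache[free_slot].handle = *handle;
        cache[free_slot].valid  = 1;
        return cache[free_slot].mapped_base;
    }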

Page 13:

Design of One-sided Communication

13

•  Separates communication from synchronization

•  Window - a region of memory a process exposes for remote access

•  Communication calls - put, get, accumulate

•  Synchronization calls
   –  active - fence, post-wait/start-complete
   –  passive - lock-unlock
   –  the period between two synchronization calls is a communication epoch

•  IPC memory handles created and mapped during window creation

•  Put/Get implemented as cudaMemcpyAsync (see the sketch below)

•  Synchronization using CUDA Events
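A minimal sketch of how a put to a GPU window could look once the target window has been IPC-mapped, following the description above; the gpu_window_t structure, the stream and the event-based epoch close are illustrative assumptions rather than the MVAPICH2 internals.

    /* Sketch: put = cudaMemcpyAsync into the IPC-mapped target window,
     * epoch close = CUDA event recorded behind the queued copies. */
    #include <cuda_runtime.h>

    typedef struct {
        void        *mapped_win;     /* target window, mapped via cudaIpcOpenMemHandle */
        int          disp_unit;      /* displacement unit of the window */
        cudaStream_t epoch_stream;   /* stream carrying all copies of this epoch */
        cudaEvent_t  epoch_event;    /* marks completion of the epoch's copies */
    } gpu_window_t;

    /* Put: origin GPU buffer -> target GPU window, no involvement of the target. */
    static void gpu_put(gpu_window_t *win, const void *origin_addr,
                        size_t bytes, size_t target_disp)
    {
        char *dst = (char *)win->mapped_win + target_disp * (size_t)win->disp_unit;
        cudaMemcpyAsync(dst, origin_addr, bytes,
                        cudaMemcpyDeviceToDevice, win->epoch_stream);
    }

    /* Closing synchronization (fence/complete/unlock): wait for the copies. */
    static void gpu_epoch_end(gpu_window_t *win)
    {
        cudaEventRecord(win->epoch_event, win->epoch_stream);
        cudaEventSynchronize(win->epoch_event);
    }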

Page 14:

Outline

•  Motivation

•  Problem Statement

•  Basics of CUDA IPC

•  CUDA IPC based Designs in MVAPICH2

–  Two-sided Communication

–  One-sided Communication

•  Experimental Evaluation

•  Conclusion and Future Work

14

Page 15:

Experimental Setup

•  Intel Westmere node
   –  2 NVIDIA Tesla C2075 GPUs
   –  Red Hat Linux 5.8 and CUDA Toolkit 4.1

•  MVAPICH/MVAPICH2 - High Performance MPI Library for IB, 10GigE/iWARP and RoCE
   –  Available since 2002
   –  Used by more than 1,930 organizations (HPC centers, industry and universities) in 68 countries
   –  More than 111,000 downloads from the OSU site directly
   –  Empowering many TOP500 clusters
      •  5th ranked 73,278-core cluster (Tsubame 2.0) at Tokyo Institute of Technology
      •  7th ranked 111,104-core cluster (Pleiades) at NASA
      •  25th ranked 62,976-core cluster (Ranger) at TACC
   –  http://mvapich.cse.ohio-state.edu

15

Page 16:

Two-sided Communication Performance

[Figure: latency (usec) and bandwidth (MBps) of two-sided GPU-to-GPU communication for message sizes from 1 byte to 4 MB, comparing SHARED-MEM and CUDA IPC; CUDA IPC shows improvements of 70% and 46% in latency (small and large messages) and 78% in bandwidth]

•  Considerable improvement in MPI performance due to host bypass

16

Page 17:

One-sided Communication Performance (get + active synchronization vs. send/recv)

[Figure: latency (usec) and bandwidth (MBps) for message sizes from 1 byte to 4 MB, comparing SHARED-MEM-1SC, CUDA-IPC-1SC and CUDA-IPC-2SC; CUDA-IPC-1SC shows improvements of 30% and 27%]

•  Better performance compared to two-sided semantics

17

Page 18:

One-sided Communication Performance (get + passive synchronization)

•  Lock + 8 Gets + Unlock with the target in a busy loop (128 KB messages)

[Figure: left - latency (usec) for 1 B - 1 KB messages, SHARED-MEM vs CUDA IPC; right - latency (usec) vs. busy-loop time at the target (0 - 300 usec), showing true asynchronous progress with CUDA IPC]

18
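For reference, a minimal sketch of this passive-target test, assuming the window has already been created over GPU memory; the timer, tags and buffer handling are illustrative, not the benchmark code used for the slides.

    /* Sketch: origin does Lock + 8 Gets + Unlock while the target spins without
     * making MPI calls, so completion requires truly asynchronous progress. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <mpi.h>

    #define MSG_BYTES (128 * 1024)
    #define NUM_GETS  8

    static double now_usec(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1.0e6 + tv.tv_usec;
    }

    void passive_get_test(int rank, void *origin_buf, MPI_Win win,
                          double target_busy_usec)
    {
        if (rank == 0) {                                  /* origin */
            double t0 = now_usec();
            MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
            for (int i = 0; i < NUM_GETS; i++)
                MPI_Get((char *)origin_buf + (size_t)i * MSG_BYTES, MSG_BYTES,
                        MPI_BYTE, 1, (MPI_Aint)i * MSG_BYTES, MSG_BYTES,
                        MPI_BYTE, win);
            MPI_Win_unlock(1, win);
            printf("Lock + %d Gets + Unlock: %.1f usec\n",
                   NUM_GETS, now_usec() - t0);
        } else if (rank == 1) {                           /* target busy loop */
            double t0 = now_usec();
            while (now_usec() - t0 < target_busy_usec)
                ;                                         /* no MPI calls while busy */
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }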

Page 19:

Lattice Boltzmann Method

[Bar chart: LB step latency (msec) for 2SIDED-SHARED-MEM, 2SIDED-IPC and 1SIDED-IPC at per-GPU dataset sizes 256x256x64, 256x512x64 and 512x512x64]

•  Computational fluid dynamics code with support for multi-phase flows with large density ratios

•  Modified to use MPI communication from GPU device memory - one-sided and two-sided semantics

•  Up to 16% improvement in per-step latency

19

Page 20:

Outline

•  Motivation

•  Problem Statement

•  Basics of CUDA IPC

•  CUDA IPC based Designs in MVAPICH2

–  Two-sided Communication

–  One-sided Communication

•  Experimental Evaluation

•  Conclusion and Future Work

20

Page 21:

21

Conclusion and Future Work

•  Take advantage of CUDA IPC to improve MPI communication between GPUs on a node

•  70% improvement in latency and 78% improvement in bandwidth for two-sided communication

•  One-sided communication gives better performance and allows for truly asynchronous communication

•  16% improvement in execution time of Lattice Boltzmann Method code

•  Studying the impact on other applications while exploiting computation-communication overlap

•  Exploring efficient designs for inter-node one-sided communication on GPU clusters

Page 22:

Thank You!

{potluri, wangh, bureddy, singhas, panda}@cse.ohio-state.edu

[email protected]

Network-Based Computing Laboratory

http://nowlab.cse.ohio-state.edu/

MVAPICH Web Page http://mvapich.cse.ohio-state.edu/

22