Page 1

Alex Ishii and Denis Foley, with Eric Anderson, Bill Dally, Glenn Dearth, Larry Dennison, Mark Hummel and John Schafer

NVSWITCH AND DGX-2: NVLINK-SWITCHING CHIP AND SCALE-UP COMPUTE SERVER

Page 2

NVIDIA DGX-2 SERVER AND NVSWITCH

16 Tesla™ V100 32 GB GPUs
FP64: 125 TFLOPS
FP32: 250 TFLOPS
Tensor: 2000 TFLOPS
512 GB of GPU HBM2

Single-Server Chassis
10U/19-Inch Rack Mount
10 kW Peak TDP
Dual 24-core Xeon CPUs
1.5 TB DDR4 DRAM
30 TB NVMe Storage

New NVSwitch Chip
18 2nd-Generation NVLink™ Ports
25 GBps per Port
900 GBps Total Bidirectional Bandwidth
450 GBps Total Throughput

12-NVSwitch Network
Full-Bandwidth Fat-Tree Topology
2.4 TBps Bisection Bandwidth
Global Shared Memory
Repeater-less

Page 3

OUTLINE

The Case for “One Gigantic GPU” and NVLink Review

NVSwitch — Speeds and Feeds, Architecture and System Applications

DGX-2 Server — Speeds and Feeds, Architecture, and Packaging

Signal-Integrity Design Highlights

Achieved Performance

Page 4

MULTI-CORE AND CUDA WITH ONE GPU

Users explicitly express parallel work in NVIDIA CUDA®

GPU Driver distributes work to available Graphics Processing Clusters (GPC)/Streaming Multiprocessor (SM) cores

GPC/SM cores can compute on data in any of the second-generation High Bandwidth Memories (HBM2s)

GPC/SM cores use shared HBM2s to exchange data
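
To make the CUDA point concrete, a minimal sketch (kernel, sizes, and launch configuration are illustrative, not from the talk) of how parallel work is expressed explicitly and handed to the driver, which schedules the resulting thread blocks onto available GPCs/SMs:

```
#include <cuda_runtime.h>

// Each thread scales one element; the driver and hardware schedule the
// resulting thread blocks onto available GPCs/SMs.
__global__ void scale(float *data, float alpha, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= alpha;
}

int main() {
    const int n = 1 << 20;                      // arbitrary problem size
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));     // backed by the GPU's HBM2
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);  // explicit parallel launch
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}
```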

[Diagram: single GPU with GPCs, HBM2 stacks and L2 caches joined by the on-chip XBAR; a Hub block with NVLink ports, Copy Engines, and PCIe I/O connects to the CPU, which sends work (data and CUDA kernels) and receives results (data).]

Page 5

TWO GPUS WITH PCIE

Access to the HBM2 of the other GPU is at PCIe bandwidth (32 GBps bidirectional)

Interactions with the CPU compete with GPU-to-GPU traffic

PCIe is the “Wild West” (lots of performance bandits)

[Diagram: GPU0 and GPU1, each with GPCs, HBM2 + L2 caches, XBAR, Hub, NVLinks, and Copy Engines; the only path between them is PCIe I/O through the CPU, which also carries work (data and CUDA kernels) and results.]

Page 6

TWO GPUS WITH NVLINK

All GPCs can access all HBM2 memories

Access to HBM2 of other GPU is at multi-NVLink bandwidth (300 GBps bidirectional in V100 GPUs)

NVLinks are effectively a “bridge” between XBARs

No collisions with PCIe traffic
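
A sketch of how this GPU-to-GPU path is exercised from CUDA, assuming two peer-capable devices (device IDs and transfer size are arbitrary); `cudaDeviceEnablePeerAccess` and `cudaMemcpyPeer` are standard CUDA runtime calls, and the copy engines route the transfer over NVLink when the GPUs are NVLink-connected:

```
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256 << 20;            // 256 MB, arbitrary
    float *buf0, *buf1;

    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    cudaDeviceEnablePeerAccess(1, 0);          // let GPU0 access GPU1's memory

    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);
    cudaDeviceEnablePeerAccess(0, 0);          // and vice versa

    // Copy-engine transfer between the two HBM2s; taken over NVLink when present.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```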

[Diagram: the same two GPUs, now with their NVLink ports connected directly so the two XBARs are bridged; PCIe I/O to the CPU carries only work and results.]

Page 7

THE “ONE GIGANTIC GPU” IDEAL

Highest number of GPUs possible

Single GPU Driver process controls all work across all GPUs

From perspective of GPCs, all HBM2s can be accessed without intervention by other processes (LD/ST instructions, Copy Engine RDMA, everything “just works”)

Access to all HBM2s is independent of PCIe

Bandwidth across bridged XBARs is as high as possible (some NUMA is unavoidable)
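
To illustrate the "LD/ST instructions just work" point, a hedged sketch (assumes peer access has already been enabled, as in the earlier example): a kernel launched on GPU 0 dereferences a pointer whose backing memory lives in another GPU's HBM2, and the loads/stores travel across the bridged XBARs:

```
#include <cuda_runtime.h>

// Assumes cudaDeviceEnablePeerAccess() has already been called in both directions.
__global__ void remote_increment(int *remote, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) remote[i] += 1;   // plain LD/ST to memory resident on another GPU
}

void run(int *buf_on_other_gpu, int n) {
    cudaSetDevice(0);                                   // launch on GPU 0 ...
    remote_increment<<<(n + 255) / 256, 256>>>(buf_on_other_gpu, n);  // ... touching another GPU's HBM2
    cudaDeviceSynchronize();
}
```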

[Diagram: GPU0 through GPU15 all attached to a single NVLink XBAR, with two CPUs attached.]

Page 8

Problem Size Capacity
Problem size is limited by the aggregate HBM2 capacity of the entire set of GPUs, rather than by the capacity of a single GPU

Strong Scaling
NUMA effects greatly reduced compared to existing solutions
Aggregate bandwidth to HBM2 grows with the number of GPUs

Ease of Use
Apps written for a small number of GPUs port more easily
Abundant resources enable rapid experimentation

“ONE GIGANTIC GPU” BENEFITS

[Diagram: the same 16-GPU NVLink XBAR picture as the previous slide, annotated with a question mark.]

Page 9

INTRODUCING NVSWITCH

Parameter | Spec
Bidirectional Bandwidth per NVLink | 51.5 GBps
NRZ Lane Rate (x8 per NVLink) | 25.78125 Gbps
Transistors | 2 Billion
Process | TSMC 12FFN
Die Size | 106 mm²

Parameter | Spec
Bidirectional Aggregate Bandwidth | 928 GBps
NVLink Ports | 18
Mgmt Port (config, maintenance, errors) | PCIe
LD/ST BW Efficiency (128 B pkts) | 80.0%
Copy Engine BW Efficiency (256 B pkts) | 88.9%
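
As a sanity check on the table, the per-port and aggregate numbers follow from the lane rate (plain host arithmetic, not from the talk):

```
#include <cstdio>

int main() {
    const double lane_rate_gbps = 25.78125;                      // NRZ lane rate from the table
    const int    lanes          = 8;                             // x8 lanes per NVLink, per direction
    const double per_dir_GBps   = lane_rate_gbps * lanes / 8.0;  // ~25.78 GBps each direction
    const double per_port_GBps  = 2.0 * per_dir_GBps;            // ~51.56, quoted as 51.5 GBps
    const int    ports          = 18;
    std::printf("per-port bidirectional: %.2f GBps\n", per_port_GBps);
    std::printf("aggregate bidirectional: %.0f GBps\n", per_port_GBps * ports);  // ~928
    return 0;
}
```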

Page 10

NVSWITCH BLOCK DIAGRAM

GPU-XBAR-bridging device; not a general networking device

Packet transforms make traffic to/from multiple GPUs look like traffic to/from a single GPU

XBAR is non-blocking

SRAM-based buffering

NVLink IP blocks and XBAR design/verification infrastructure reused from V100

[Block diagram: NVLink 0 through NVLink 17, each with PHY and DL/TL layers and a Port Logic block (Routing; Error Check & Statistics Collection; Classification & Packet Transforms; Transaction Tracking & Packet Transforms), all feeding an 18 x 18 XBAR; a Management block with PCIe I/O.]

Page 11

[Diagram: the NVLink-bridged two-GPU system from earlier, annotated "Physical Addressing" on the traffic crossing the bridged XBARs.]

NVLINK: PHYSICAL SHARED MEMORY

Virtual-to-physical address translation is done in the GPCs

NVLink packets carry physical addresses

NVSwitch and DGX-2 follow same model


Page 12

NVSWITCH: RELIABILITY

Hop-by-hop error checking deemed sufficient in the intra-chassis operating environment

Low BER (10⁻¹² or better) on intra-chassis transmission media removes the need for high-overhead protections like FEC

HW-based consistency checks guard against “escapes” from hop-to-hop

SW responsible for additional consistency checks and clean-up

Per-block protections:

XBAR: ECC on in-flight packets

Port Logic: ECC on SRAMs and in-flight packets; access-control checks for mis-configured packets (either config/app errors or corruption); configurable buffer allocations

NVLink (external links): Data Link (DL) packet tracking and retry; CRC on packets

Management: hooks for SW-based scrubbing of table corruptions and error conditions (orphaned packets); hooks for partial reset and queue draining

Page 13

Packet-processing and switching (XBAR) logic is extremely compact

With more than 50% of the area taken up by I/O blocks, maximizing the “perimeter”-to-area ratio was key to an efficient design

Having all NVLink ports exit die on parallel paths simplified package substrate routing

[Die plot: 18 NVLink PHY blocks lining the die perimeter around the non-NVLink logic.]

NVSWITCH: DIE ASPECT RATIO

Page 14

No NVSwitch: GPUs connect directly; NVLinks are aggregated into gangs for higher bandwidth, with traffic interleaved over the links to prevent camping. Max bandwidth between two GPUs is limited to the bandwidth of the gang.

With NVSwitch: traffic is interleaved across all the links, supporting full bandwidth between any pair of GPUs. Traffic to a single GPU is non-blocking, so long as the aggregate bandwidth of six NVLinks is not exceeded.

SIMPLE SWITCH SYSTEM

[Diagram: V100 GPUs connected directly by an NVLink gang, alongside V100 GPUs connected through an NVSwitch.]

Page 15

Add NVSwitches in parallel to support more GPUs

An eight-GPU closed system can be built with three NVSwitches, with two NVLinks from each GPU to each switch

Gang width is still six — traffic is interleaved across all the GPU links

GPUs can now communicate pairwise at the full 300 GBps bidirectional bandwidth between any pair

The NVSwitch XBAR provides unique paths from any source to any destination — non-blocking, non-interfering

SWITCH CONNECTED SYSTEMS

8 GPUs, 3 NVSwitches

[Diagram: eight V100 GPUs, each connected by two NVLinks to each of three NVSwitches.]

Page 16

Taking this to the limit — connect one NVLink from each GPU to each of six switches

No routing between different switch planes required

Eight NVLinks of the 18 available per switch are used to connect to GPUs

Ten NVLinks available per switch for communication outside the local group (only eight are required to support full bandwidth)

This is the GPU baseboard configuration for DGX-2

NON-BLOCKING BUILDING BLOCK
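
The port accounting behind the last two bullets, written as a compile-time check (a sketch using only the figures above):

```
// Port accounting for one GPU baseboard (numbers from this slide).
constexpr int nvswitch_ports     = 18;  // NVLink ports per NVSwitch
constexpr int gpus_per_baseboard = 8;   // each GPU contributes 1 NVLink to each switch
constexpr int gpu_facing_ports   = gpus_per_baseboard;                 // 8 ports down to GPUs
constexpr int trunk_ports_left   = nvswitch_ports - gpu_facing_ports;  // 10 ports remain
static_assert(trunk_ports_left >= gpu_facing_ports,
              "8 trunk ports suffice to carry the 8 GPU-facing ports at full bandwidth");
```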

[Diagram: eight V100 GPUs, each connected by one NVLink to each of six NVSwitches.]

Page 17

Two of these building blocks together form a fully connected 16-GPU cluster

Non-blocking, non-interfering (unless same destination is involved)

Regular loads, stores, and atomics just work

Left-right symmetry simplifies physical packaging and manufacturability

DGX-2 NVLINK FABRIC

[Diagram: two of the eight-GPU, six-NVSwitch baseboards, with the NVSwitches of one baseboard linked to those of the other to form the 16-GPU fabric.]

Page 18

Parameter | DGX-2 Spec
CPUs | Dual Xeon Platinum 8168
CPU Memory | 1.5 TB DDR4
Aggregate Storage | 30 TB (8 NVMe drives)
Peak Max TDP | 10 kW
Dimensions (H/W/D) | 17.3" (10U) / 19.0" / 32.8" (440.0 mm / 482.3 mm / 834.0 mm)
Weight | 340 lbs (154.2 kg)
Cooling (forced air) | 1,000 CFM

Parameter | DGX-2 Spec
Number of Tesla V100 GPUs | 16
Aggregate FP64/FP32 | 125/250 TFLOPS
Aggregate Tensor (FP16) | 2000 TFLOPS
Aggregate Shared HBM2 | 512 GB
Aggregate HBM2 Bandwidth | 14.4 TBps
Per-GPU NVLink Bandwidth | 300 GBps bidirectional
Chassis Bisection Bandwidth | 2.4 TBps
InfiniBand NICs | 8 Mellanox EDR

NVIDIA DGX-2: SPEEDS AND FEEDS

Page 19

Xeon sockets are QPI-connected, but affinity binding keeps GPU-related traffic off QPI

The PCIe tree has NICs connected to pairs of GPUs to facilitate GPUDirect™ RDMA over the IB network

Configuration and control of the NVSwitches is via a driver process running on the CPUs

DGX-2 PCIE NETWORK
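
One hedged sketch of the topology-discovery step that such affinity binding relies on: enumerate the GPUs and their PCIe bus IDs via the CUDA runtime (the mapping to NUMA nodes and NICs, and the actual process pinning, are left out):

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int dev = 0; dev < n; ++dev) {
        char busId[32];
        cudaDeviceGetPCIBusId(busId, sizeof(busId), dev);  // e.g. "0000:34:00.0"
        std::printf("GPU %2d is at PCIe %s\n", dev, busId);
        // A launcher could map this bus ID to its NUMA node and to the NIC
        // behind the same PCIe switch, then bind the process accordingly.
    }
    return 0;
}
```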

[Diagram: DGX-2 PCIe topology: two x86 sockets connected by QPI, PCIe switches fanning out to eight 100G NICs paired with the 16 V100 GPUs, and PCIe management connections to the six NVSwitches on each baseboard.]

Page 20

Two GPU Baseboards with eight V100 GPUs and six NVSwitches on each

Two Plane Cards carry 24 NVLinks each

Having no repeaters or redrivers on any of the NVLinks conserves board space and power

NVIDIA DGX-2: GPUS + NVSWITCH COMPLEX

Page 21

GPU Baseboards get power and PCIe from front midplane

Front midplane connects to I/O-Expander PCB (with PCIe switches and NICs) and CPU Motherboard

48V PSUs reduce current load on distribution paths

Internal “bus bars” bring current from each PSU to the front midplane

NVIDIA DGX-2: SYSTEM PACKAGING

Page 22

Forced-air cooling of Baseboards, I/O Expander, and CPU provided by ten 92 mm fans

Four supplemental 60 mm internal fans to cool NVMe drives and PSUs

Air to NVSwitches is pre-heated by GPUs, requiring “full height” heatsinks

NVIDIA DGX-2: SYSTEM COOLING

Page 23

The repeaterless NVLink topology trades off longer traces to the GPUs for shorter traces to the Plane Cards

Custom SXM and Plane Card connectors for reduced loss and crosstalk

TX and RX grouped in pin-fields to reduce crosstalk

DGX-2: SIGNAL INTEGRITY OPTIMIZATIONS

[Photo: connector pin-field with all TX signals grouped on one side and all RX signals on the other.]

Page 24

The test has each GPU reading data from another GPU across the bisection (from a GPU on the other Baseboard)

Raw bisection bandwidth is 2.472 TBps

The achieved read bisection bandwidth of 1.98 TBps matches the theoretical 80% bidirectional NVLink efficiency

“All-to-all” (each GPU reads from eight GPUs on other PCB) results are similar

DGX-2: ACHIEVED BISECTION BW
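
The 1.98 TBps result is consistent with the raw bisection bandwidth and the 80% LD/ST packet efficiency quoted on the NVSwitch slide (a quick check, not from the talk):

```
#include <cstdio>

int main() {
    // 2 plane cards x 24 NVLinks = 48 links cross the bisection, ~51.5 GBps each.
    const double raw_bisection_TBps = 2.472;
    const double ldst_efficiency    = 0.80;   // 128 B packet LD/ST efficiency from the NVSwitch table
    std::printf("expected read bisection BW: %.2f TBps\n",
                raw_bisection_TBps * ldst_efficiency);   // prints 1.98
    return 0;
}
```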

[Chart: delivered read bisection bandwidth in GB/sec (0 to 2500) vs. number of GPUs (1 to 16).]

Page 25

Results are “iso-problem instance” (more GFLOPS means shorter running time)

As the problem is split over more GPUs, it takes longer to transfer data than to calculate locally

The aggregate HBM2 capacity enables larger problem sizes

DGX-2: CUFFT (16K X 16K)
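
The talk does not show the benchmark source; a hedged sketch of what a multi-GPU 2D FFT looks like with cuFFT's cufftXt API (two GPUs for brevity; error handling and most data movement omitted):

```
#include <cufftXt.h>

int main() {
    cufftHandle plan;
    cufftCreate(&plan);

    int gpus[2] = {0, 1};
    cufftXtSetGPUs(plan, 2, gpus);             // spread the transform over two GPUs

    size_t work_sizes[2];                      // one work-size entry per GPU
    cufftMakePlan2d(plan, 16384, 16384, CUFFT_C2C, work_sizes);

    cudaLibXtDesc *data;                       // descriptor for memory split across the GPUs
    cufftXtMalloc(plan, &data, CUFFT_XT_FORMAT_INPLACE);
    // cufftXtMemcpy(plan, data, host_buf, CUFFT_COPY_HOST_TO_DEVICE) would load the input.

    cufftXtExecDescriptorC2C(plan, data, data, CUFFT_FORWARD);  // inter-GPU traffic rides NVLink

    cufftXtFree(data);
    cufftDestroy(plan);
    return 0;
}
```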

[Chart: GFLOPS (0 to 9000) vs. number of GPUs (1 to 8) for ½ DGX-2 (all-to-all NVLink fabric) and DGX-1™ (V100, hybrid cube mesh).]

Page 26

All-reduce is an important communication primitive in machine-learning apps

DGX-2 provides increased bandwidth and lower latency compared to two 8-GPU servers (connected with InfiniBand)

The NVLink network has superior efficiency to InfiniBand at smaller message sizes

DGX-2: ALL-REDUCE BENCHMARK
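
The slide does not name the library used for the benchmark; on systems like DGX-2 the all-reduce primitive is commonly provided by NCCL, so the sketch below uses NCCL's API purely as an illustration (single process driving all visible GPUs; buffer size is arbitrary):

```
#include <cuda_runtime.h>
#include <nccl.h>
#include <vector>

int main() {
    int n_gpus = 0;
    cudaGetDeviceCount(&n_gpus);

    std::vector<ncclComm_t> comms(n_gpus);
    ncclCommInitAll(comms.data(), n_gpus, nullptr);   // one communicator per visible GPU

    const size_t count = 32 << 20;                    // 32M floats per GPU (arbitrary)
    std::vector<float*> bufs(n_gpus);
    for (int i = 0; i < n_gpus; ++i) {
        cudaSetDevice(i);
        cudaMalloc(&bufs[i], count * sizeof(float));
    }

    // One in-place all-reduce call per GPU, grouped so NCCL schedules them together.
    ncclGroupStart();
    for (int i = 0; i < n_gpus; ++i) {
        cudaSetDevice(i);
        ncclAllReduce(bufs[i], bufs[i], count, ncclFloat, ncclSum, comms[i], 0);
    }
    ncclGroupEnd();

    for (int i = 0; i < n_gpus; ++i) { cudaSetDevice(i); cudaDeviceSynchronize(); }
    for (int i = 0; i < n_gpus; ++i) ncclCommDestroy(comms[i]);
    return 0;
}
```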

[Chart: all-reduce bandwidth in MB/s (0 to 120,000) vs. message size (128K to 512M bytes) for DGX-2 (NVLink fabric) and two DGX-1 (4x 100 Gb IB); annotations mark 3X and 8X advantages for DGX-2.]

Page 27

DGX-2: >2X SPEED-UP ON TARGET APPS

Two DGX-1 Servers (V100) Compared to DGX-2 − Same Total GPU Count

DGX-2 with NVSwitch vs. two DGX-1 (V100): speed-ups of 2X, 2.4X, 2X, and 2.7X faster across the four workloads below.

HPC: Physics (MILC benchmark, 4D grid); Weather (ECMWF benchmark, all-to-all)

AI Training: Language Model (Transformer with MoE, all-to-all); Recommender (Sparse Embedding, reduce & broadcast)

Two DGX-1 servers, each with dual-socket Xeon E5-2698 v4 processors and eight Tesla V100 32 GB GPUs, connected via four EDR IB/GbE ports | DGX-2 server with dual-socket Xeon Platinum 8168 processors and 16 Tesla V100 32 GB GPUs

Page 28

Eight-GPU baseboard with six NVSwitches

Two HGX-2 boards can be passively connected to realize 16-GPU systems

ODM/OEM partners build servers utilizing NVIDIA HGX-2 GPU baseboards

Design guide provides recommendations

NVSWITCH AVAILABILITY: NVIDIA HGX-2™

Page 29

New NVSwitch
18-NVLink-Port, Non-Blocking NVLink Switch
51.5 GBps per Port
928 GBps Aggregate Bidirectional Bandwidth

DGX-2
“One Gigantic GPU” Scale-Up Server with 16 Tesla V100 GPUs
125 TF (FP64) to 2000 TF (Tensor)
512 GB Aggregate HBM2 Capacity
NVSwitch Fabric Enables >2X Speed-Up on Target Apps

SUMMARY

Page 30