
Acceleration of an Adaptive Cartesian Mesh CFD Code in the Current Generation Processor Architectures

Harichand M V1, Bharatkumar Sharma2, Sudhakaran G1, V Ashok1

1 Vikram Sarabhai Space Centre 2 Nvidia Graphics

Agenda

● What to expect in this presentation?

○ Quick introduction to PARAS3D : CFD code

○ Constraints

○ Learnings


Quick Background


ISRO: The Indian Space Research Organization (ISRO) is the primary space agency of the Indian government and is among the largest space research organizations in the world. Its primary objective is to advance space technology and use its applications for national benefit, including the development and deployment of communication satellites for television broadcast, telecommunications and meteorological applications, as well as remote sensing satellites for management of natural resources.

VSSC: Vikram Sarabhai Space Centre is a major space research centre of ISRO focusing on rocket and space vehicles for India's satellite program.

Supercomputing history: SAGA, the first supercomputer with GPUs in India, was developed by ISRO and used to tackle complex aeronautical problems. Listed in the Top 500 in June 2012.

Software Info

● Used for the aerodynamic design and analysis of launch vehicles in ISRO, as well as for aircraft design
● Legacy adaptive Cartesian mesh CFD code
● Fully automatic grid generation for any complex geometry
● RANS, explicit residual update, second order
● Typical cell count around 50-60 million

Solver Flow Chart

● Compute fluxes
○ Consists of reconstruction (2nd order)
○ Riemann solver for flux computation
○ Requires two levels of neighbours in each direction
● Compute local time step for each cell
● Update cell values based on the computed fluxes
● Explicit update is well suited to data parallelism (a minimal sketch follows)
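Because the residual update is explicit, every cell can be updated independently once its face fluxes are accumulated. A minimal sketch of such an update loop in plain C, assuming hypothetical flat arrays (cons, res, vol, dt_local) and a variable count NVARS rather than the actual PARAS3D data structures:

/* Hypothetical explicit update: each cell is independent, so the loop is
   trivially data parallel. ncells, NVARS and the arrays are illustrative. */
for (int c = 0; c < ncells; ++c) {
    double scale = dt_local[c] / vol[c];   /* local time step over cell volume */
    for (int v = 0; v < NVARS; ++v)
        cons[v * ncells + c] -= scale * res[v * ncells + c];   /* apply flux residual */
}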


Mesh Structure

● Each cell can be refined up to 14 levels deep
● Each face has 1 or 4 neighbours
● The cell dependence for reconstruction on a face (two levels in each direction) varies from 2 to 20 cells

Features of the legacy solver

● Data structure
○ Forest of oct-trees maintained using child pointers
○ A lot of pointer chasing during computation
○ Cell structure
■ Centroid
■ Size
■ Neighbours (all six directions)
■ Conserved flow variable vector (internal energy, momentum, density, etc.)
○ Face structure
■ Left and right cell index
■ Area of the face
■ Axis along which the face is aligned (X / Y / Z)
■ Reconstructed variables from the left and right sides
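A rough C sketch of how the cell and face records described above could look in the legacy array-of-structures layout; all field and type names here are illustrative, not the actual PARAS3D definitions:

typedef struct Cell {
    double centroid[3];      /* cell centre */
    double size;             /* cell edge length */
    struct Cell *nbr[6];     /* neighbours in all six directions (pointer chasing) */
    double u[5];             /* conserved variables: density, momentum, internal energy */
} Cell;

typedef struct Face {
    int    left, right;      /* indices of the left and right cells */
    double area;             /* face area */
    int    axis;             /* axis along which the face is aligned (X / Y / Z) */
    double ql[5], qr[5];     /* variables reconstructed from the left and right sides */
} Face;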

Features of the legacy solver

● MPI parallelism
○ Each sub-domain is a rectangular box of base cells
○ Synchronous communication of ghost cells (sketched below)
● CUDA C implementation to target GPUs
○ 4.5-7x speed-up for a single GPU node versus a single CPU node with two quad-core Xeon processors
○ The speed-up depends on the complexity of the geometry, the level of grid adaptation and the size of the problem under consideration
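The synchronous ghost-cell exchange can be pictured as one blocking send/receive per neighbouring sub-domain. A minimal sketch assuming hypothetical packed buffers, counts and neighbour ranks (not the actual PARAS3D communication layer):

#include <mpi.h>

/* Exchange packed ghost-cell data with every neighbouring sub-domain. */
for (int d = 0; d < num_neighbours; ++d) {
    MPI_Sendrecv(sendbuf[d], send_count[d], MPI_DOUBLE, nbr_rank[d], 0,
                 recvbuf[d], recv_count[d], MPI_DOUBLE, nbr_rank[d], 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}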


Requirements of New Software

● Should work in a hybrid cluster environment
● Easily extensible
○ Maintaining two software stacks is not ideal unless strictly required
● Easy to validate during the testing phase
● Ideally, adapts to new architectures without a fundamental change in code design


3 Ways to Accelerate Applications

● Libraries: easy to use, most performance
● Programming languages: most performance, most flexibility
● Compiler directives: OpenACC


OpenACC

Incremental
▪ Maintain existing sequential code
▪ Add annotations to expose parallelism
▪ After verifying correctness, annotate more of the code

Single Source
▪ Rebuild the same code on multiple architectures
▪ Compiler determines how to parallelize for the desired machine
▪ Sequential code is maintained

Low Learning Curve
▪ OpenACC is meant to be easy to use, and easy to learn
▪ Programmer remains in familiar C, C++, or Fortran
▪ No need to learn low-level details of the hardware


Development Cycle

▪ Analyze your code to determine most likely places needing parallelization or optimization.

▪ Parallelize your code by starting with the most time consuming parts and check for correctness.

▪ Optimize your code to improve observed speed-up from parallelization.


Analyze: Results (Profiling)

Profiling CPU Application

Observations on data layout:
○ Array of Structures (AoS)
○ Pointer chasing because of the oct-tree structure
○ If-else statements

ARRAY OF STRUCTURES

#define SIZE 1024 * 1024

struct Image_AOS {
    double r;
    double g;
    double b;
    double hue;
    double saturation;
};

Image_AOS gridData[SIZE];

// Each thread reads one field of one struct; neighbouring threads access
// memory with a stride of sizeof(Image_AOS), so the loads do not coalesce.
double u0 = gridData[threadIdx.x].r;

STRUCTURES OF ARRAYS

#define SIZE 1024 * 1024

struct Image_SOA {
    double r[SIZE];
    double g[SIZE];
    double b[SIZE];
    double hue[SIZE];
    double saturation[SIZE];
};

Image_SOA gridData;

// Consecutive threads read consecutive elements of the same array,
// so the loads coalesce into far fewer memory transactions.
double u0 = gridData.r[threadIdx.x];

TRANSACTIONS AND REPLAYS

With replays, requests take more time and use more resources:
● More instructions issued
● More memory traffic
● Increased execution time

[Figure: instruction issue timeline. Each instruction is issued once but its data transfer is replayed for successive thread groups (threads 0-7/24-31, 8-15, 16-23), adding extra latency, extra work on the SM and extra memory traffic.]

Data Layout

● Structure of Arrays with inlined member access (a minimal sketch follows)
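A minimal C sketch of what a structure of arrays with inlined member access can look like; the names (CellDataSoA, cell_density, ...) are illustrative, not the actual PARAS3D code:

typedef struct CellDataSoA {
    double *density;                 /* one contiguous array per member */
    double *mom_x, *mom_y, *mom_z;
    double *energy;
} CellDataSoA;

/* Inlined accessor: compiles down to a direct array load, so neighbouring
   cells (and GPU threads) touch contiguous memory. */
static inline double cell_density(const CellDataSoA *d, int i) { return d->density[i]; }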


Cell Grouping for Data parallelism (GPU Specific)

● Grouped cells into multiple categories based on their data dependency
● Separate kernel for each group (sketched below)
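A sketch of the idea, assuming cells have already been sorted into groups that share the same neighbour/dependency pattern, so each group runs as its own uniform, divergence-free kernel (all names are illustrative):

/* One parallel loop (hence one GPU kernel) per dependency group: within a
   group every cell has the same stencil, so there is no divergent branching. */
for (int g = 0; g < num_groups; ++g) {
    const int *ids = group_cells[g];     /* cell indices belonging to this group */
    const int  n   = group_size[g];
    #pragma acc parallel loop
    for (int k = 0; k < n; ++k) {
        int c = ids[k];
        /* ... uniform reconstruction / flux work for cell c ... */
    }
}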


CPU Scalability


Profiling CPU Application → Change Data Structure → Cell Re-Grouping

OpenACC Directives
● Manage data movement
● Initiate parallel execution
● Optimize loop mappings

#pragma acc data copyin(a,b) copyout(c)
{
    ...
    #pragma acc parallel
    {
        #pragma acc loop gang vector
        for (i = 0; i < n; ++i) {
            c[i] = a[i] + b[i];
            ...
        }
    }
    ...
}

Parallelize

● Loop parallelism applied to:
○ Reconstruction
○ Flux computation
○ Local time step computation & cell update (an annotated sketch follows)
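Tying the directives to the solver, a hedged sketch of how the explicit cell update from the flow chart might be annotated; the array names, NVARS and the data sizes are illustrative:

/* Explicit cell update with OpenACC; arrays are assumed already resident on
   the device (e.g. via an enclosing data region). */
#pragma acc parallel loop present(cons[0:NVARS*ncells], res[0:NVARS*ncells], \
                                  vol[0:ncells], dt_local[0:ncells])
for (int c = 0; c < ncells; ++c) {
    double scale = dt_local[c] / vol[c];
    for (int v = 0; v < NVARS; ++v)
        cons[v * ncells + c] -= scale * res[v * ncells + c];
}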


Unified Virtual Addressing

[Figure: with UVA, CPU system memory and GPU memory are mapped into a single virtual address space across PCI-e; without UVA, the CPU and GPU each have their own separate address space.]

Profiling CPU Application → Change Data Structure → Cell Re-Grouping → OpenACC Pragmas

Results (GPU)


Analysis

PERFORMANCE LIMITER CATEGORIES

● Memory utilization vs compute utilization
● Four possible combinations:
○ Compute bound
○ Bandwidth bound
○ Latency bound
○ Compute and bandwidth bound

DRILL DOWN FURTHER

• The main bottleneck is found to be memory latency
• GPU performance is limited by register spilling and latency
• Kernels used on average 150 registers/thread

Occupancy: Know your hardware

GPU utilization
▪ Each SM has limited resources:
• Max. 64K 32-bit registers, distributed between threads
• Max. 255 registers per thread
• Max. 48KB of shared memory per block (96KB per SMM)
• Full occupancy: 2048 threads per SM (64 warps)
▪ When a resource is used up, occupancy is reduced

(*) Values vary with compute capability
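As a rough worked example with the numbers above: at about 150 registers per thread, the 64K (65,536) registers of an SM can support only about 65,536 / 150 ≈ 436 threads, i.e. roughly 13 warps out of the 64 warps (2048 threads) needed for full occupancy, or about 21% occupancy, leaving too few warps in flight to hide memory latency.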

LATENCY

● GPUs cover latencies by having a lot of work in flight

[Figure: with many resident warps (warps 0-9), another warp can issue while one waits on memory, so the latency is fully covered; with only a few warps (warps 0-3), there are cycles where no warp can issue and the latency is exposed.]

Profiling CPU Application → Change Data Structure → Cell Re-Grouping → OpenACC Pragmas → Analysis

Optimization Strategies

● Latency bound: register spilling
○ Clean up some unused variables
○ Limit registers per thread with maxregcount (see the compile line sketched below)
○ Kernel splitting
● Amdahl's law
○ MPS
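As an illustration of the maxregcount option, a hedged compile-line sketch assuming the PGI OpenACC toolchain used elsewhere in this deck; the limit of 128 registers and the file names are only examples and must be tuned against the resulting spilling:

$ pgcc -acc -ta=tesla:cc70,maxregcount:128 -Minfo=accel -o solver solver.c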


Multi-Process Service (MPS)

● For legacy MPI applications

[Figure: runtime breakdown into serial, CPU-parallel and GPU-parallelizable parts for N = 1, 2, 4, 8 MPI processes, multicore CPU only vs GPU-accelerated.]

● About 90% of the work runs on the GPU, 10% on the CPU → not a lot that we expect to improve here

Processes sharing the GPU with MPS

● Maximum overlap

[Figure: kernels from Process A (Context A) and Process B (Context B) are funnelled through a single MPS process, so they can run concurrently on the GPU.]
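For reference, MPS is enabled per node outside the application. A minimal sketch of the usual control commands (GPU selection, binary name and rank count are illustrative and depend on the cluster setup):

$ export CUDA_VISIBLE_DEVICES=0          # GPU to be shared via MPS
$ nvidia-cuda-mps-control -d             # start the MPS control daemon
$ mpirun -np 8 ./solver ...              # MPI ranks now share the GPU through MPS
$ echo quit | nvidia-cuda-mps-control    # shut the daemon down afterwards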

Results

● 2x performance gain over the original version (CPU vs CPU)
● Scalability to thousands of CPU cores
● 4.4x performance for the dual Volta GPU version compared to a dual CPU node (28-core Skylake)


Profiling CPU Application → Change Data Structure → Cell Re-Grouping → OpenACC Pragmas → Analysis → Register: maxregcount → Kernel Splitting → MPS

Conclusion

● A legacy Cartesian mesh solver was refactored, with a 2x performance improvement on the CPU
● OpenACC-based GPU parallelism improved performance by 4.4x on Volta GPUs

Future Work

● Hybrid CPU + GPU computation with asymmetric load partitioning


Recommendation

● PCAST
○ Helps test the program for correctness and determine points of divergence
○ Detects when results diverge between CPU and GPU versions of the code, and between the same code run on different processor architectures
● Unified Memory
○ Consider using Unified Memory for any new application development
○ Get your code running on the GPU much sooner!

$ pgcc -Minfo=accel -ta=tesla:autocompare -o a.out example.c
$ PGI_COMPARE=summary,rel=1 ./a.out
comparing a1 in example.c, function main line 26
comparing a2 in example.c, function main line 26
compared 2 blocks, 2000 elements, 8000 bytes
no errors found
relative tolerance = 0.100000, rel=1
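For the Unified Memory recommendation above, a hedged compile-line sketch using the same PGI toolchain; -ta=tesla:managed places dynamically allocated data in CUDA managed memory, so explicit data directives can be deferred while first getting the code onto the GPU:

$ pgcc -acc -ta=tesla:managed -Minfo=accel -o a.out example.c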
