Page 1

© NVIDIA and UC Davis 2008

Advanced Data-Parallel Programming:

Data Structures and Algorithms

John Owens

UC Davis

Page 2

One Slide Summary of Today

- GPUs are great at running many closely-coupled but independent threads in parallel
- The programming model specifies a kernel program over independent threads
- GPU computing boils down to:
  - Define a computation domain that generates many parallel threads (this is the data structure)
  - Iterate in parallel over that computation domain, running a program over all threads (this is the algorithm)

Page 3

Outline

- Data Structures
  - GPU Memory Model
  - Taxonomy
- Algorithmic Building Blocks
  - Sample Application
  - Map
  - Gather & Scatter
  - Reductions
  - Scan (parallel prefix)
  - Sort, search, …

Page 4

GPU Memory Model

- More restricted memory access than the CPU
  - Allocate/free memory only before computation
  - Transfers to and from the CPU are explicit
  - The GPU is controlled by the CPU and can't initiate transfers, access disk, etc.
- To generalize, for complex/irregular data structures:
  - GPUs are better at accessing data structures
  - CPUs are better at building data structures
  - Active research topics here!
  - As CPU-GPU bandwidth improves, consider doing data structure tasks on their "natural" processor

Page 5

GPU Memory Model

- Limited memory access during computation (kernel)
  - Registers (per fragment/thread)
    - Read/write
  - Shared memory (shared among threads)
    - Does not exist in general
    - CUDA allows access to shared memory between threads via per-block shared memory
  - Global memory (historical)
    - Read-only during computation
    - Write-only at end of computation (precomputed address)
  - Global memory (new)
    - Allows general scatter/gather (read/write)
    - No collision rules!
    - Exposed in AMD R520+ GPUs, NVIDIA G80+ GPUs

Page 6

Properties of GPU Data Structures

- To be efficient, GPU data structures must support:
  - Parallel read
  - Parallel write
  - Parallel iteration
- Generalized arrays fit these requirements:
  - Dense (complete) arrays
  - Sparse (incomplete) arrays
  - Adaptive arrays

[Figure: a virtual domain mapped through a page table to physical memory]

Page 7

Think In Parallel

- The GPU is (at its core) a data-parallel processor
  - Thousands of parallel threads
  - Thousands of data elements to process
  - All data processed by the same program (SPMD computation model)
  - Contrast with task parallelism (somewhat supported by GPUs) and ILP (a possible direction for future GPUs)
- Best results when you "Think Data Parallel"
  - Design your algorithm for data-parallelism
  - Understand parallel algorithmic complexity and efficiency
  - Use data-parallel algorithmic primitives as building blocks

Page 8

Data-Parallel Algorithms

- Efficient algorithms require efficient building blocks
- This talk: data-parallel building blocks
  - Map
  - Gather & Scatter
  - Reduce
  - Scan

Page 9

CUDA Optimization Strategies (Redux)

- Optimize algorithms for the GPU
  - Maximize independent parallelism
  - Maximize arithmetic intensity (math/bandwidth)
  - Sometimes it's better to recompute than to cache
  - Do more computation on the GPU to avoid costly data transfers
- Optimize memory access locality ("coherence")
- Take advantage of on-chip per-block shared memory
- Use parallelism efficiently

Page 10

Sample Motivating Application

How bumpy is a surface that we represent as a grid of samples?

Algorithm:
- Loop over all elements
- At each element, compare the value of that element to the average of its neighbors (the "difference"), and square that difference
- Now sum up all those differences
- But we don't want to sum the diffs that are 0, so only sum up the non-zero differences

This is a fake application; don't take it too seriously.

Picture courtesy http://www.artifice.com

Page 11

Sample Motivating Application

    for all samples:
        neighbors[x,y] = 0.25 * ( value[x-1,y] +
                                  value[x+1,y] +
                                  value[x,y+1] +
                                  value[x,y-1] )
        diff = (value[x,y] - neighbors[x,y])^2

    result = 0
    for all samples where diff != 0:
        result += diff
    return result

Page 12

Sample Motivating Application

(Pseudocode repeated from Page 11. The per-element neighbor average and squared difference form a map, covered next.)

Page 13

The Map Operation

Given:
- An array or stream of data elements A
- A function f(x)

map(A, f) applies f(x) to all a_i ∈ A

The CUDA implementation is straightforward: statements in CUDA kernels are applied in parallel to all threads that execute them, so map is as simple as:

    // for all samples – all threads execute this code
    neighbors[x][y] = 0.25f * (value[x-1][y] +
                               value[x+1][y] +
                               value[x][y+1] +
                               value[x][y-1]);
    diff = (value[x][y] - neighbors[x][y]);
    diff *= diff; // squared difference
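Fleshed out as a standalone kernel, the map step might look like the sketch below, assuming the samples are stored as a row-major N×N array in global memory and skipping the one-element border; the names (diffKernel, value, diff, N) are illustrative, not from the slides.

    // Sketch: one thread per sample computes the squared difference
    // between the sample and the average of its four neighbors.
    __global__ void diffKernel(const float *value, float *diff, int N)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < 1 || y < 1 || x >= N - 1 || y >= N - 1) return; // skip border

        float neighbors = 0.25f * (value[y * N + (x - 1)] +
                                   value[y * N + (x + 1)] +
                                   value[(y + 1) * N + x] +
                                   value[(y - 1) * N + x]);
        float d = value[y * N + x] - neighbors;
        diff[y * N + x] = d * d; // squared difference
    }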

Page 14

Making Map Efficient

- Amortize the cost of GPU memory access
  - The pattern is: load from GPU memory, compute, store to GPU memory
  - Therefore make "compute" as dense as possible: maximize arithmetic intensity
- Maximize the number of concurrent threads
  - Use many blocks and many threads per block
  - Minimize register usage

Page 15

Sample Motivating Application

(Pseudocode repeated from Page 11. The neighbor reads are gathers, covered next.)

Page 16

Scatter vs. Gather

- Gather: p = a[i]
  - The global data structure is read-only
- Scatter: a[i] = p
  - A new capability for GPUs
  - Must be careful of conflicts
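In CUDA these two access patterns are just indexed reads and writes; a minimal sketch (kernel and array names are illustrative):

    // Gather: each thread reads through an index array (a is read-only).
    __global__ void gatherKernel(float *p, const float *a, const int *idx, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) p[i] = a[idx[i]];
    }

    // Scatter: each thread writes through an index array. If idx contains
    // duplicates, threads conflict and the surviving value is undefined.
    __global__ void scatterKernel(float *a, const float *p, const int *idx, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) a[idx[i]] = p[i];
    }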

Page 17

Sample Motivating Application

(Pseudocode repeated from Page 11. The final summation is a reduction, covered next.)

Page 18

Parallel Reductions

- Given:
  - A binary associative operator ⊕ with identity I
  - An ordered set s = [a0, a1, …, an−1] of n elements
- reduce(⊕, s) returns a0 ⊕ a1 ⊕ … ⊕ an−1
- Example: reduce(+, [3 1 7 0 4 1 6 3]) = 25
- Reductions are common in parallel algorithms
  - Common reduction operators are +, ×, min, and max
  - Note: floating point is only pseudo-associative

Page 19

Tree-Based Parallel Reductions

- The traditional algorithm requires synchronization at each level of the tree
  - Synchronized through main memory, making it completely bandwidth-bound
  - Memory writes and reads are off-chip, with no reuse of intermediate sums
- CUDA solves this by exposing on-chip per-block shared memory
  - Reduce blocks of data in shared memory to save bandwidth

[Figure: tree reduction of 3 1 7 0 4 1 6 3 → (4 7 5 9) → (11 14) → 25]

Page 20

Parallel Reduction: Interleaved Addressing

    Values (shared memory):            10  1  8 -1  0 -2  3  5 -2 -3  2  7  0 11  0  2
    Step 1, stride 1, thread IDs 0-7:  11  1  7 -1 -2 -2  8  5 -5 -3  9  7 11 11  2  2
    Step 2, stride 2, thread IDs 0-3:  18  1  7 -1  6 -2  8  5  4 -3  9  7 13 11  2  2
    Step 3, stride 4, thread IDs 0-1:  24  1  7 -1  6 -2  8  5 17 -3  9  7 13 11  2  2
    Step 4, stride 8, thread ID 0:     41  1  7 -1  6 -2  8  5 17 -3  9  7 13 11  2  2

Interleaved addressing results in bank conflicts.

Page 21

Parallel Reduction: Sequential Addressing

    Values (shared memory):            10  1  8 -1  0 -2  3  5 -2 -3  2  7  0 11  0  2
    Step 1, stride 8, thread IDs 0-7:   8 -2 10  6  0  9  3  7 -2 -3  2  7  0 11  0  2
    Step 2, stride 4, thread IDs 0-3:   8  7 13 13  0  9  3  7 -2 -3  2  7  0 11  0  2
    Step 3, stride 2, thread IDs 0-1:  21 20 13 13  0  9  3  7 -2 -3  2  7  0 11  0  2
    Step 4, stride 1, thread ID 0:     41 20 13 13  0  9  3  7 -2 -3  2  7  0 11  0  2

Sequential addressing is conflict free!
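Combining the shared-memory idea from Page 19 with sequential addressing, a per-block sum reduction might look like this sketch (it assumes blockDim.x is a power of two; each block emits one partial sum, which is then reduced further; names are illustrative):

    // Sketch: reduce one block of data in shared memory with sequential
    // addressing. Launch with blockDim.x * sizeof(float) shared memory.
    __global__ void reduceBlock(const float *in, float *blockSums, int n)
    {
        extern __shared__ float sdata[];
        unsigned int tid = threadIdx.x;
        unsigned int i = blockIdx.x * blockDim.x + tid;

        sdata[tid] = (i < n) ? in[i] : 0.0f;   // load into shared memory
        __syncthreads();

        // Halve the stride each step; active threads stay contiguous,
        // so shared-memory accesses are conflict free.
        for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s)
                sdata[tid] += sdata[tid + s];
            __syncthreads();
        }
        if (tid == 0) blockSums[blockIdx.x] = sdata[0]; // one partial sum per block
    }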

Page 22

Parallel Reduction Complexity

- log(N) parallel steps; each step S does N/2^S independent ops
  - Step complexity is O(log N)
- For N = 2^D, the algorithm performs Σ_{S=1..D} 2^(D−S) = N − 1 operations
  - Work complexity is O(N): it is work-efficient, i.e. it does not perform more operations than a sequential algorithm
- With P threads physically in parallel (P processors), time complexity is O(N/P + log N)
  - Compare to O(N) for sequential reduction

Page 23

Sample Motivating Application

(Pseudocode repeated from Page 11. Summing only the non-zero differences requires compaction, covered next.)

Page 24

Common Situations in Parallel Computation

- Many parallel threads that need to partition data
  - Split
- Many parallel threads with variable output per thread
  - Compact / Expand / Allocate

Page 25

Split Operation

- Given an array of true and false elements (and payloads)
- Return an array with all true elements at the beginning
- Examples: sorting, building trees

    Flag:    F T F F T F F T   →   F F F F F T T T
    Payload: 3 6 1 4 0 7 1 3   →   3 1 4 7 1 6 0 3
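Split can itself be built from scan (the same trick the radix sort on Page 57 applies per bit): scan the inverted flags to number the false elements, then place the true elements after them. A sketch with false elements packed first, matching the figure; all names are illustrative:

    // Compute each element's destination address for a false-first split.
    // falseAddr = exclusive scan of the inverted flags;
    // totalFalses = total number of false elements.
    __global__ void splitAddress(const int *flags, const int *falseAddr,
                                 int totalFalses, int *dest, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (flags[i])   // true element: packed after all false elements
            dest[i] = totalFalses + (i - falseAddr[i]);
        else            // false element: the scan gives its slot directly
            dest[i] = falseAddr[i];
    }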

Page 26

Variable Output Per Thread: Compact

- Remove null elements
- Example: collision detection

    Input:  3 0 7 0 4 1 0 3
    Output: 3 7 4 1 3

Page 27

Variable Output Per Thread: Allocate

- Allocate variable storage per thread
- Examples: marching cubes, geometry generation

    Storage request per thread:  2    1   0   3     2
    Allocated output:            A B | C |   | D E F | G H

Page 28

“Where do I write my output?”

- In all of these situations, each thread needs to answer that simple question
- The answer is: "That depends on how much the other threads need to write!"
- In a serial processor this is simple; "scan" is an efficient way to answer this question in parallel

Page 29

Parallel Prefix Sum (Scan)

Given an array A = [a0, a1, …, an−1] and a binary associative operator ⊕ with identity I,

    scan(A) = [I, a0, (a0 ⊕ a1), …, (a0 ⊕ a1 ⊕ … ⊕ an−2)]

Example: if ⊕ is addition, then scan on the set

    [3 1 7 0 4 1 6 3]

returns the set

    [0 3 4 11 11 15 16 22]
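For reference, the sequential version is a single loop; a sketch in C (function and variable names are illustrative), useful for checking a GPU implementation:

    // Host-side reference implementation of exclusive scan (prefix sum).
    void exclusiveScan(const int *in, int *out, int n)
    {
        int sum = 0;                 // running total; 0 is the identity for +
        for (int i = 0; i < n; ++i) {
            out[i] = sum;            // element i gets the sum of all elements before it
            sum += in[i];
        }
    }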

Page 30

Applications of Scan

Scan is a simple and useful parallel building block for many parallel algorithms:

- Radix sort
- Quicksort (segmented scan)
- String comparison
- Lexical analysis
- Stream compaction
- Run-length encoding
- Polynomial evaluation
- Solving recurrences
- Tree operations
- Histograms
- Allocation
- Etc.

Fascinating, since scan is unnecessary in sequential computing!

Page 31

Scan Literature

- Pre-GPGPU
  - First proposed in APL by Iverson
  - Used as a data-parallel primitive in the Connection Machine; a feature of C* and CM-Lisp
  - Guy Blelloch used scan as a primitive for various parallel algorithms
    - Blelloch, 1990, "Prefix Sums and Their Applications"
- GPGPU
  - O(n log n) GPU implementation by Daniel Horn (GPU Gems 2)
  - Applied to summed area tables by Hensley et al. (EG05)
  - O(n) work GPU scan by Sengupta et al. (EDGE06) and Greß et al. (EG06)
  - O(n) work & space GPU implementation by Harris et al. (2007)
    - NVIDIA CUDA SDK and GPU Gems 3
    - Applied to radix sort, stream compaction, and summed area tables

Page 32

Stream Compaction

- Input: a stream of 1s and 0s
    [1 0 1 1 0 0 1 0]
- Operation: "sum up all elements before you" (an exclusive scan)
- Output: scatter addresses for the "1" elements
    [0 1 1 2 3 3 3 4]
- Note that the scatter addresses for the kept ("1") elements are packed!
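Once the scatter addresses are known, compaction is a single conditional scatter; a sketch (kernel and array names are illustrative):

    // Sketch: given flags and their exclusive scan (scatter addresses),
    // write each kept element to its packed position in the output.
    __global__ void compact(const int *flags, const int *addr,
                            const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n && flags[i])
            out[addr[i]] = in[i];   // scatter each kept element to its address
    }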

Page 33

A Naive Parallel Scan Algorithm

    T0:  3 1 7 0 4 1 6 3

log(n) iterations

Note: with a graphics API you can't read and write the same texture, so you must "ping-pong" between two buffers. This is not necessary with newer APIs.

Page 34

A Naive Parallel Scan Algorithm

    T0:  3 1 7 0 4 1 6 3
    T1:  3 4 8 7 4 5 7 9      (after stride 1)

log(n) iterations. For i from 1 to log(n)-1:
- Specify the domain from 2^i to n. Element k computes v_out = v[k] + v[k - 2^i].
- Due to ping-pong, specify a second domain from 2^(i-1) to 2^i with a simple pass-through shader: v_out = v_in.

Page 36

A Naive Parallel Scan Algorithm

    T0:  3 1 7 0 4 1 6 3
    T1:  3 4 8 7 4 5 7 9          (after stride 1)
    T0:  3 4 11 11 12 12 11 14    (after stride 2)

Page 38

A Naive Parallel Scan Algorithm

    T0:  3 1 7 0 4 1 6 3
    T1:  3 4 8 7 4 5 7 9          (after stride 1)
    T0:  3 4 11 11 12 12 11 14    (after stride 2)
    Out: 3 4 11 11 15 16 22 25    (after stride 4)

Page 40

A Naive Parallel Scan Algorithm

- Algorithm given in more detail by Horn ['05]
- Step-efficient, but not work-efficient
  - O(log n) steps, but O(n log n) adds
  - The sequential version is O(n)
  - A factor of log(n) hurts: 20x for 10^6 elements!
- Dig into the parallel algorithms literature for a better solution
  - See Blelloch 1990, "Prefix Sums and Their Applications"

Page 41

n log n Parallel Scan Algorithm

    T0:  3 1 7 0 4 1 6 3
    T1:  3 4 8 7 4 5 7 9          (stride 1)
    T0:  3 4 11 11 12 12 11 14    (stride 2)
    Out: 3 4 11 11 15 16 22 25    (stride 4)

log(n) iterations. In each iteration, every element adds the value of its neighbor "stride" away, and the stride doubles.
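A double-buffered CUDA sketch of this naive scan for a single block (an inclusive scan, matching the rows above; it assumes blockDim.x == n and ping-pongs between two shared-memory arrays rather than textures; names are illustrative):

    // Naive O(n log n) inclusive scan of one block. Launch with n threads
    // and 2 * n * sizeof(float) bytes of shared memory.
    __global__ void naiveScan(const float *in, float *out, int n)
    {
        extern __shared__ float buf[];          // holds 2*n floats
        float *src = buf, *dst = buf + n;
        int k = threadIdx.x;                    // assume blockDim.x == n
        src[k] = in[k];
        __syncthreads();
        for (int stride = 1; stride < n; stride *= 2) {
            if (k >= stride)
                dst[k] = src[k] + src[k - stride];  // add neighbor 'stride' away
            else
                dst[k] = src[k];                    // pass through unchanged
            __syncthreads();
            float *tmp = src; src = dst; dst = tmp; // ping-pong the buffers
        }
        out[k] = src[k];
    }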

Page 43

Improving Efficiency

- A common parallel algorithms pattern: balanced trees
  - Build a balanced binary tree on the input data and sweep to and from the root
  - The tree is conceptual, not an actual data structure
- For scan:
  - Traverse from leaves to root, building partial sums at internal nodes
  - The root holds the sum of all leaves
  - Traverse from root to leaves, building the scan from the partial sums
- This algorithm was originally described by Blelloch (1990)

Page 44

Scan with Scatter

- Scatter in CUDA kernels makes algorithms that use scan easier to implement
  - The NVIDIA CUDA SDK includes an example implementation of the scan primitive
  - http://www.gpgpu.org/developer/cudpp/
- Per-block shared memory improves efficiency
  - All steps executed in a single kernel
  - Threads communicate through shared memory
  - Drastically reduces the bandwidth bottleneck!
- Key for algorithmic efficiency: how to block the computation and utilize the parallel data cache in each block

Page 45

Build the Sum Tree

    3 1 7 0 4 1 6 3

Assume the array is already in shared memory.

Page 46

Build the Sum Tree

    3 1 7 0 4 1 6 3
    3 4 7 7 4 5 6 9      (iteration 1, n/2 threads, stride 1)

Iterate log(n) times; each thread adds the value "stride" elements away to its own value. Each addition corresponds to a single thread.

Page 47

Build the Sum Tree

    3 1 7 0 4 1 6 3
    3 4 7 7 4 5 6 9          (iteration 1, n/2 threads, stride 1)
    3 4 7 11 4 5 6 14        (iteration 2, n/4 threads, stride 2)

Page 48

Build the Sum Tree

    3 1 7 0 4 1 6 3
    3 4 7 7 4 5 6 9          (iteration 1, n/2 threads, stride 1)
    3 4 7 11 4 5 6 14        (iteration 2, n/4 threads, stride 2)
    3 4 7 11 4 5 6 25        (iteration log(n), 1 thread, stride 4)

Note that this algorithm operates in place: no additional storage is needed.

Page 49

Zero the Last Element

    3 4 7 11 4 5 6 0

We now have an array of partial sums. Since this is an exclusive scan, set the last element to zero; it will propagate back to the first element.

Page 50

Build Scan From Partial Sums

    3 4 7 11 4 5 6 0

Page 51

Build Scan From Partial Sums

    3 4 7 11 4 5 6 0
    3 4 7 0  4 5 6 11     (iteration 1, 1 thread)

Iterate log(n) times; each thread adds the value "stride" elements away to its own value, and sets the value "stride" elements away to its own previous value. Each operation corresponds to a single thread.

Page 52

Build Scan From Partial Sums

    3 4 7 11 4 5 6 0
    3 4 7 0  4 5 6 11     (iteration 1, 1 thread)
    3 0 7 4  4 11 6 16    (iteration 2, 2 threads)

Page 53

Build Scan From Partial Sums

    3 4 7 11 4 5 6 0
    3 4 7 0  4 5 6 11     (iteration 1, 1 thread)
    3 0 7 4  4 11 6 16    (iteration 2, 2 threads)
    0 3 4 11 11 15 16 22  (iteration log(n), n/2 threads)

Done! We now have a completed scan that we can write out to device memory.

Total steps: 2 * log(n). Total work: 2 * (n-1) adds = O(n). Work efficient!
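Putting the two sweeps together, here is a single-block sketch of this work-efficient scan, following the up-sweep/down-sweep just illustrated (cf. Harris et al., GPU Gems 3) but simplified: it assumes n is a power of two that fits in one block and omits bank-conflict avoidance; names are illustrative.

    // Work-efficient exclusive scan of n elements. Launch with n/2 threads
    // and n * sizeof(int) bytes of shared memory.
    __global__ void scanBlock(const int *in, int *out, int n)
    {
        extern __shared__ int temp[];
        int tid = threadIdx.x;
        temp[2 * tid]     = in[2 * tid];      // each thread loads two elements
        temp[2 * tid + 1] = in[2 * tid + 1];

        // Up-sweep: build the sum tree in place.
        int offset = 1;
        for (int d = n >> 1; d > 0; d >>= 1) {
            __syncthreads();
            if (tid < d) {
                int ai = offset * (2 * tid + 1) - 1;
                int bi = offset * (2 * tid + 2) - 1;
                temp[bi] += temp[ai];
            }
            offset <<= 1;
        }

        // Zero the last element, then down-sweep the partial sums.
        if (tid == 0) temp[n - 1] = 0;
        for (int d = 1; d < n; d <<= 1) {
            offset >>= 1;
            __syncthreads();
            if (tid < d) {
                int ai = offset * (2 * tid + 1) - 1;
                int bi = offset * (2 * tid + 2) - 1;
                int t = temp[ai];
                temp[ai] = temp[bi];   // left child gets parent's value
                temp[bi] += t;         // right child adds the old left child
            }
        }
        __syncthreads();
        out[2 * tid]     = temp[2 * tid];     // write results to device memory
        out[2 * tid + 1] = temp[2 * tid + 1];
    }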

Page 54

CUDA Scan Performance

[Chart: scan time (ms) vs. number of elements (10^3 to 10^8) for CPU, OpenGL, naive, and CUDA implementations, with the CUDA vs. CPU speedup overlaid]

- GPU vs. CPU: 20x
- CUDA vs. OpenGL: 7x

(GeForce 8800 GTX, Intel Core2 Duo Extreme 2.93 GHz)

Page 55

Optimizing Scan

- Most important: the right algorithm!
- Ensure all memory transfers can achieve maximum bandwidth ("coherent"/"coalesced")
  - Scan runs close to the "speed of light" (limited by memory bandwidth)
- Eliminate bank conflicts in shared memory access
- Process multiple elements per thread
  - Processing a float4 per thread rather than a float more than doubled the overall speed
- Unroll loops
  - CUDA 1.1 adds some software support for this

Page 56

Application: Stream Compaction

- 1M elements: ~0.6-1.3 ms
- 16M elements: ~8-20 ms
- Performance depends on the number of elements retained

Harris, M., S. Sengupta, and J.D. Owens. "Parallel Prefix Sum (Scan) with CUDA". GPU Gems 3.

Page 57

Application: Radix Sort

- Sort 4M 32-bit integers in 165 ms
- Perform the split operation on each bit using scan
- Can also sort each block and merge
  - Efficient merge on the GPU is an active area of research

Page 58

Application: Summed Area Tables

- Each pixel in the SAT is the sum of all pixels below and to the left of it
- Can be used to perform a box filter of arbitrary radius per pixel in constant time
- Easy to compute with scan
  - Scan all rows, then all columns
  - Or transpose in between and scan only rows
  - The GPU can scan all rows in parallel
- Scan all rows of a 1024x1024 image in 0.85 ms
- Build the summed area table in 3.5 ms
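The constant-time box filter works because the sum over any axis-aligned rectangle takes just four SAT lookups. A sketch, assuming SAT(x, y) holds the inclusive sum of all pixels at or below/left of (x, y), stored row-major with width w (function and parameter names are illustrative; boundary handling for x0 or y0 at the image edge is omitted):

    // Sum of pixels in the half-open rectangle (x0, x1] x (y0, y1].
    float rectSum(const float *sat, int w, int x0, int y0, int x1, int y1)
    {
        return sat[y1 * w + x1] - sat[y0 * w + x1]   // whole area minus the strip below
             - sat[y1 * w + x0] + sat[y0 * w + x0];  // minus left strip, re-adding the corner
    }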

Page 59

Segmented Scan

- Segmented scan enables another class of parallel algorithms
  - Parallel quicksort
  - Parallel sparse matrix-vector multiply in CSR format
- Sengupta, S., M. Harris, Y. Zhang, and J.D. Owens. "Scan Primitives for GPU Computing". Graphics Hardware 2007

Example (the scan restarts at each segment head):

    Input data array:    3 1 7 0 4 1 6 3
    Segment head flags:  0 0 1 0 0 1 0 0
    Segmented scan:      0 3 0 7 7 0 1 7

Page 60

Parallel Sorting

- Given an unordered list of elements, produce a list ordered by key value
- The GPU's constrained programming environment limits viable algorithms
- Three major threads of work:
  - Merge sort (e.g. bitonic [Batcher 68] or radix)
  - Periodic balanced sorting networks [Dowd 89]
  - Quicksort [Sengupta 07] (not performance competitive)
- Recent research results are impressive
  - Govindaraju's GPUTeraSort (UNC/Microsoft work, published in ACM SIGMOD Dec. '05)
    - Hybrid radix-bitonic sort
    - 10x performance over CPU; PennySort champion

Page 61

Binary Search

- Find a specific element in an ordered list
- Implement just like the CPU algorithm
  - Finds the first element of a given value v
  - If v does not exist, finds the next smallest element > v
- The search is sequential, but many searches can be executed in parallel
  - The number of threads launched determines the number of searches executed in parallel: 1 thread == 1 search
- For details see: "A Toolkit for Computation on GPUs". Ian Buck and Tim Purcell. In GPU Gems. Randy Fernando, ed. 2004
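A sketch of many parallel lower-bound searches, one per thread (kernel and parameter names are illustrative):

    // Each thread binary-searches a sorted array for its own query,
    // returning the index of the first element >= the query value.
    __global__ void parallelSearch(const int *sorted, int n,
                                   const int *queries, int *result, int m)
    {
        int t = blockIdx.x * blockDim.x + threadIdx.x;
        if (t >= m) return;
        int v = queries[t], lo = 0, hi = n;
        while (lo < hi) {                   // classic lower-bound loop
            int mid = (lo + hi) / 2;
            if (sorted[mid] < v) lo = mid + 1;
            else                 hi = mid;
        }
        result[t] = lo;   // first element >= v (or n if none)
    }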

Page 62

Conclusion

- Think parallel!
  - Effective programming on GPUs requires mapping computation and data structures into parallel formulations
- There is huge scope for innovation in this space: start contributing!
- Try out our CUDA Data Parallel Primitives library, CUDPP
  - A collaboration between NVIDIA and UC Davis
  - Information: http://www.gpgpu.org/developer/cudpp/

Page 63

References

"A Toolkit for Computation on GPUs". Ian Buck and Tim Purcell. In GPU Gems, Randy Fernando, ed. 2004.

"Parallel Prefix Sum (Scan) with CUDA". Mark Harris, Shubhabrata Sengupta, and John D. Owens. In GPU Gems 3, Hubert Nguyen, ed. Aug. 2007.

"Stream Reduction Operations for GPGPU Applications". Daniel Horn. In GPU Gems 2, Matt Pharr, ed. 2005.

"Glift: Generic, Efficient, Random-Access GPU Data Structures". Aaron Lefohn et al. ACM TOG, Jan. 2006.

"Scan Primitives for GPU Computing". Shubhabrata Sengupta, Mark Harris, Yao Zhang, and John D. Owens. Graphics Hardware 2007.