Code Optimization II: Machine-Dependent Optimization

Topics
  Machine-dependent optimizations
    Pointer code
    Loop unrolling
    Enabling instruction-level parallelism

Previous Best Combining Code

Task
  Compute sum of all elements in a vector
  Vector represented by C-style abstract data type
  Achieved CPE of 2.00 (CPE = cycles per element)

void combine4(vec_ptr v, int *dest)
{
    int i;
    int length = vec_length(v);
    int *data = get_vec_start(v);
    int sum = 0;
    for (i = 0; i < length; i++)
        sum += data[i];
    *dest = sum;
}

General Forms of Combining

void abstract_combine4(vec_ptr v, data_t *dest)
{
    int i;
    int length = vec_length(v);
    data_t *data = get_vec_start(v);
    data_t t = IDENT;
    for (i = 0; i < length; i++)
        t = t OP data[i];
    *dest = t;
}
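The data type data_t, operation OP, and identity element IDENT are meant to be fixed at compile time. A minimal sketch of one instantiation (integer sum); the struct layout below is an assumption, since the slides only use the accessors vec_length and get_vec_start:

/* Instantiation for integer sum: data_t = int, OP = +, IDENT = 0.
   (Integer product would use OP = * and IDENT = 1.) */
typedef int data_t;
#define OP    +
#define IDENT 0

/* Assumed vector representation; the combining routines rely
   only on the two accessors below. */
typedef struct {
    int len;
    data_t *data;
} vec_rec, *vec_ptr;

int vec_length(vec_ptr v) { return v->len; }
data_t *get_vec_start(vec_ptr v) { return v->data; }

With these definitions, abstract_combine4 behaves the same as combine4 above.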

Pointer Code

Optimization
  Use pointers rather than array references
  CPE: 3.00 (compiled -O2)

Oops! We’re not making progress here!

Warning: some compilers do a better job of optimizing array code.

void combine4p(vec_ptr v, int *dest)
{
    int length = vec_length(v);
    int *data = get_vec_start(v);
    int *dend = data + length;
    int sum = 0;
    while (data < dend) {
        sum += *data;
        data++;
    }
    *dest = sum;
}

Pointer vs. Array Code Inner Loops

Array code:

.L24:                           # Loop:
    addl (%eax,%edx,4),%ecx     #   sum += data[i]
    incl %edx                   #   i++
    cmpl %esi,%edx              #   i:length
    jl .L24                     #   if < goto Loop

Pointer code:

.L30:                           # Loop:
    addl (%eax),%ecx            #   sum += *data
    addl $4,%eax                #   data++
    cmpl %edx,%eax              #   data:dend
    jb .L30                     #   if < goto Loop

Performance
  Array code: 4 instructions in 2 clock cycles
  Pointer code: almost the same 4 instructions in 3 clock cycles

Modern CPU Design

[Block diagram: an Instruction Control unit (fetch control, instruction decode, instruction cache) sends operations to the Execution unit, which contains multiple functional units (integer/branch, general integer, FP add, FP mult/div, load, store) connected to the data cache; a Retirement Unit commits operation results and register updates to the register file, and branch-prediction outcomes feed back to fetch control.]

CPU Capabilities of Pentium III

Multiple instructions can execute in parallel:
  1 load
  1 store
  2 integer (one may be a branch)
  1 FP addition
  1 FP multiplication or division

Some instructions take > 1 cycle, but can be pipelined:

  Instruction                  Latency   Cycles/Issue
  Load / Store                    3           1
  Integer Multiply                4           1
  Double/Single FP Multiply       5           2
  Double/Single FP Add            3           1
  Integer Divide                 36          36
  Double/Single FP Divide        38          38
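These numbers bound the CPE results that follow (my gloss, not a slide): a serial chain of integer multiplies completes one result per 4-cycle latency, so it can do no better than CPE 4.0, while the 1-per-cycle issue rate means fully independent multiplies could in principle reach CPE 1.0. The FP sum and product CPEs of 3.0 and 5.0 seen later match the FP add and multiply latencies in the same way.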

Visualizing Operations

Operations
  Vertical position denotes the time at which an operation executes
    Cannot begin an operation until its operands are available
  Height denotes latency

[Diagram: one iteration of the combining-product loop — a load produces t.1, a 4-cycle imull combines t.1 with %ecx.0 to produce %ecx.1, while incl, cmpl, and jl update %edx and the condition codes (cc.1); time runs downward.]

Visualizing Operations (cont.)

Operations
  Same as before, except that the add has a latency of 1

[Diagram: one iteration of the combining-sum loop — the load feeds a 1-cycle addl producing %ecx.1, overlapping with the incl/cmpl/jl loop-control chain on %edx; time runs downward.]

3 Iterations of Combining Product: Unlimited Resource Analysis

[Diagram: iterations 1-3 (i=0, 1, 2) of the combining-product loop over cycles 1-15; each iteration's load and loop control (incl, cmpl, jl) overlap freely, but the 4-cycle imull operations form a serial chain %ecx.0 → %ecx.1 → %ecx.2 → %ecx.3.]

Analysis
  Assume an operation can start as soon as its operands are available
  Operations for multiple iterations overlap in time

Performance
  Limiting factor becomes the latency of the integer multiplier
  Gives CPE of 4.0

4 Iterations of Combining Sum: Unlimited Resource Analysis

[Diagram: iterations 1-4 (i=0..3) of the combining-sum loop over cycles 1-7; with a 1-cycle addl, each iteration's load, addl, and loop control (incl, cmpl, jl) can begin one cycle after the previous iteration's, putting 4 integer operations in flight per cycle.]

Performance
  Can begin a new iteration on each clock cycle
  Should give CPE of 1.0
  Would require executing 4 integer operations in parallel

Combining Sum: Resource Constraints

  Only have two integer functional units
  Some operations are delayed even though their operands are available

Performance
  Sustains CPE of 2.0

Loop Unrolling

Optimization
  Combine multiple iterations into a single loop body
  Amortizes loop overhead across multiple iterations
  Finish extras at the end
  Measured CPE = 1.33

void combine5(vec_ptr v, int *dest)
{
    int length = vec_length(v);
    int limit = length - 2;
    int *data = get_vec_start(v);
    int sum = 0;
    int i;

    /* Combine 3 elements at a time */
    for (i = 0; i < limit; i += 3)
        sum += data[i] + data[i+1] + data[i+2];
    /* Finish any remaining elements */
    for (; i < length; i++)
        sum += data[i];
    *dest = sum;
}

Visualizing Unrolled Loop

  Loads can pipeline, since they have no dependencies
  Only one set of loop-control operations

[Diagram: one unrolled iteration — three pipelined loads produce t.1a, t.1b, t.1c, which a serial chain of three addl operations accumulates through %ecx.1a and %ecx.1b into %ecx.1c; a single addl/cmpl/jl sequence updates %edx; time runs downward.]

Executing with Loop Unrolling

[Diagram: iterations 3 (i=6) and 4 (i=9) of the unrolled loop over cycles 5-15; each iteration's three loads pipeline, but the serial addl chains keep consecutive iterations from packing into 3 cycles.]

Predicted Performance
  Can complete an iteration in 3 cycles
  Should give CPE of 1.0

Measured Performance
  CPE of 1.33
  One iteration every 4 cycles

Effect of Unrolling

  Only helps the integer sum for our examples
    Other cases are constrained by functional unit latencies
  Effect is nonlinear with the degree of unrolling
    Many subtle effects determine the exact scheduling of operations

  Unrolling Degree    1      2      3      4      8      16
  Integer Sum        2.00   1.50   1.33   1.50   1.25   1.06
  Integer Product    4.00   4.00   4.00   4.00   4.00   4.00
  FP Sum             3.00   3.00   3.00   3.00   3.00   3.00
  FP Product         5.00   5.00   5.00   5.00   5.00   5.00

Serial Computation

Computation
  ((((((((((((1 * x0) * x1) * x2) * x3) * x4) * x5) * x6) * x7) * x8) * x9) * x10) * x11)

Performance
  N elements, D cycles/operation
  N*D cycles

[Diagram: a left-leaning multiplication tree — each product feeds the next multiply, forming a single serial chain from 1 * x0 through * x11.]
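As a quick check of the N*D formula (my arithmetic, not the slides'): the 12-element product above, with the Pentium III's 4-cycle integer multiply, takes about 12 * 4 = 48 cycles, i.e., CPE = 4.0, matching the measured result for the integer product.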

Parallel Loop Unrolling

Code Version
  Integer product

Optimization
  Accumulate in two different products
    Can be performed simultaneously
  Combine at end

Performance
  CPE = 2.0
  2X performance

void combine6(vec_ptr v, int *dest)
{
    int length = vec_length(v);
    int limit = length - 1;
    int *data = get_vec_start(v);
    int x0 = 1;
    int x1 = 1;
    int i;

    /* Combine 2 elements at a time into independent products */
    for (i = 0; i < limit; i += 2) {
        x0 *= data[i];
        x1 *= data[i+1];
    }
    /* Finish any remaining elements */
    for (; i < length; i++)
        x0 *= data[i];
    *dest = x0 * x1;
}

Dual Product Computation

Computation
  ((((((1 * x0) * x2) * x4) * x6) * x8) * x10) *
  ((((((1 * x1) * x3) * x5) * x7) * x9) * x11)

Performance
  N elements, D cycles/operation
  (N/2+1)*D cycles
  ~2X performance improvement
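Plugging in the same numbers (again my arithmetic): (12/2 + 1) * 4 = 28 cycles versus 48 for the serial chain, which is where the roughly 2X improvement comes from; the +1 accounts for the final multiply that combines the two partial products.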

Requirements for Parallel Computation

Mathematical
  Combining operation must be associative & commutative
    OK for integer multiplication
    Not strictly true for floating point (see the example below)
      OK for most applications

Hardware
  Pipelined functional units
  Ability to dynamically extract parallelism from code
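A minimal demonstration (not from the slides) of why floating-point combining is not strictly associative — regrouping changes the rounded result:

#include <stdio.h>

int main(void)
{
    double a = 1e20, b = -1e20, c = 1.0;

    /* 1.0 is absorbed when added to -1e20, so grouping matters */
    printf("(a + b) + c = %g\n", (a + b) + c);  /* prints 1 */
    printf("a + (b + c) = %g\n", a + (b + c));  /* prints 0 */
    return 0;
}

For multiplication the analogous effect shows up through rounding and overflow, which is why reassociating FP products is only "OK for most applications".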

Visualizing Parallel Loop

  The two multiplies within the loop no longer have a data dependency
  This allows them to pipeline

[Diagram: one iteration of the two-accumulator loop — two pipelined loads produce t.1a and t.1b, which independent imull operations fold into %ecx.1 and %ebx.1; addl/cmpl/jl update %edx; time runs downward.]

Executing with Parallel Loop

[Diagram: iterations 1-3 (i=0, 2, 4) of the two-accumulator loop over cycles 1-16; the two independent imull chains interleave in the pipelined multiplier, with loads and loop control overlapping.]

Predicted Performance
  Can keep the 4-cycle multiplier busy performing two simultaneous multiplications
  Gives CPE of 2.0

Parallel Unrolling: Method #2

Code Version
  Integer product

Optimization
  Multiply pairs of elements together
  And then update the product

Performance
  CPE = 2.5

void combine6aa(vec_ptr v, int *dest)
{
    int length = vec_length(v);
    int limit = length - 1;
    int *data = get_vec_start(v);
    int x = 1;
    int i;

    /* Combine 2 elements at a time */
    for (i = 0; i < limit; i += 2)
        x = x * (data[i] * data[i+1]);
    /* Finish any remaining elements */
    for (; i < length; i++)
        x = x * data[i];
    *dest = x;
}

Method #2 Computation

Computation
  ((((((1 * (x0 * x1)) * (x2 * x3)) * (x4 * x5)) * (x6 * x7)) * (x8 * x9)) * (x10 * x11))

Performance
  N elements, D cycles/operation
  Should be (N/2+1)*D cycles, i.e., CPE = 2.0
  Measured CPE is worse

  Unrolling   CPE (measured)   CPE (theoretical)
      2            2.50              2.00
      3            1.67              1.33
      4            1.50              1.00
      6            1.78              1.00

Understanding Parallelism

  /* Combine 2 elements at a time */
  for (i = 0; i < limit; i += 2) {
      x = (x * data[i]) * data[i+1];
  }

CPE = 4.00
  All multiplies performed in sequence

  /* Combine 2 elements at a time */
  for (i = 0; i < limit; i += 2) {
      x = x * (data[i] * data[i+1]);
  }

CPE = 2.50
  Multiplies overlap

[Diagram: multiplication trees for the two groupings — the left-associated version forms one serial chain of 12 multiplies, while the reassociated version first computes the pairs x0*x1, x2*x3, ..., x10*x11 off the critical path, so only about half the multiplies lie on the serial chain.]

Limitations of Parallel Execution

Need Lots of Registers
  To hold sums/products
  Only 6 usable integer registers
    Also needed for pointers, loop conditions
  8 FP registers
  When there are not enough registers, temporaries must be spilled onto the stack
    Wipes out any performance gains
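To make the register demand concrete, here is a hypothetical four-accumulator sum (not one of the slide's versions; the name combine_sum4way is made up): the four accumulators plus the data pointer, index, and limit already claim seven values, one more than IA32's six usable integer registers, so the compiler would have to spill.

void combine_sum4way(vec_ptr v, int *dest)
{
    int length = vec_length(v);
    int limit = length - 3;
    int *data = get_vec_start(v);
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0;  /* 4 registers for accumulators  */
    int i;                               /* + index, data pointer, limit */

    /* Combine 4 elements at a time into independent accumulators */
    for (i = 0; i < limit; i += 4) {
        s0 += data[i];
        s1 += data[i+1];
        s2 += data[i+2];
        s3 += data[i+3];
    }
    /* Finish any remaining elements */
    for (; i < length; i++)
        s0 += data[i];
    *dest = s0 + s1 + s2 + s3;
}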

Machine-Dependent Opt. Summary

Pointer Code
  Look carefully at the generated code to see whether it is helpful

Loop Unrolling
  Some compilers do this automatically
  Generally not as clever as what can be achieved by hand

Exposing Instruction-Level Parallelism
  Very machine dependent (GCC on IA32/Linux is not very good at it)

Warning
  Benefits depend heavily on the particular machine
  Do only for performance-critical parts of the code
