
Page 1

Computer Systems Architecture I

CSE 560M

Lecture 17

Guest Lecturer: Shakir James

Page 2

Plan for Today

• Announcements and Reminders

– Project demos in three weeks (Nov. 23rd)

• Questions

• Today’s discussion: Improving cache performance

– Decrease misses

• Cache organization

• Prefetching

• Compiler algorithms

– Decrease miss latency

• Prioritize reads and writes

• Enable multiple concurrent misses: lock-up free caches

• Cache hierarchies

Page 3

Cache

[Figure: direct-mapped cache lookup. An address (e.g., 101111001010111) is split into tag, index, and offset fields; the index selects one of four tag/data entries, the stored tag (e.g., 10111) is compared with the address tag, and on a match the offset selects the data within the block; on a miss the block is fetched from memory.]

Page 4

Improving Cache Performance

• CPI contributed by cache

– Miss rate * number of cycles to handle the miss

• Execution time contributed by cache

– Clock cycle time * CPI

– Clock cycle time might depend on cache organization

• To improve cache performance

– Decrease miss rate without increasing time to handle the miss

– Decrease time to handle the miss (i.e., penalty) without increasing miss rate

• In general, larger caches require larger access times
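As a small concrete check of these formulas, here is a minimal sketch in C; the miss rate, penalty, and base CPI below are illustrative values, not numbers from the lecture.

```c
#include <stdio.h>

int main(void) {
    /* Illustrative values only, not from the lecture. */
    double base_cpi      = 1.0;   /* CPI with a perfect cache        */
    double refs_per_inst = 1.3;   /* memory accesses per instruction */
    double miss_rate     = 0.02;  /* misses per access               */
    double miss_penalty  = 40.0;  /* cycles to handle one miss       */
    double cycle_ns      = 0.5;   /* clock cycle time in ns          */

    /* CPI contributed by the cache: miss rate * cycles per miss. */
    double cache_cpi = refs_per_inst * miss_rate * miss_penalty;

    /* Execution time contributed: clock cycle time * CPI. */
    printf("cache CPI contribution: %.2f\n", cache_cpi);
    printf("time per instruction:   %.2f ns\n",
           cycle_ns * (base_cpi + cache_cpi));
    return 0;
}
```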

Page 5

Obvious Solutions to Decrease Miss Rate

• Increase cache capacity

– Larger means slower

– Limitations for on-chip caches

• Increase cache associativity

– Diminishing returns (perhaps after 4-way)

– More comparisons (i.e., more logic)

– Alternatively, make the cache look more associative than it really is (e.g., a victim cache, below)

Page 6

What About Cache Block Size?

• For a given application, cache capacity and associativity, there is an optimal block size

• Benefits of long cache blocks

– Good for spatial locality (code, vectors)

– Reduce compulsory misses

• Drawbacks

– Increase access time

– Increase inter-level transfer time

– Fewer blocks imply more conflicts

Page 7

Miss Rate vs. Block Size

[Figure: miss rate as a function of block size]

Page 8

Example Block Analysis

• We are to choose between 32B and 64B blocks

– With miss rates m32 and m64, respectively

• Miss penalty components

– Send request + access time + transfer time

– Example

• Send request (block independent): 2 cycles

• Access time (assume block independent): 28 cycles

• Transfer time depends on bus width. Say our bus is 64 bits wide; then the transfer takes twice as long for 64B (8 cycles) as for 32B (4 cycles)

– 32B is better if 34*m32 < 38*m64
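The penalty arithmetic as a small C sketch, using the example numbers above (2-cycle request, 28-cycle access, 64-bit bus):

```c
#include <stdio.h>

/* Miss penalty = send request + access time + transfer time.
   With a 64-bit (8-byte) bus, transfer takes block_bytes / 8 cycles. */
static int miss_penalty(int block_bytes) {
    return 2 + 28 + block_bytes / 8;
}

int main(void) {
    printf("penalty(32B) = %d cycles\n", miss_penalty(32));  /* 34 */
    printf("penalty(64B) = %d cycles\n", miss_penalty(64));  /* 38 */
    /* So 32B wins whenever 34 * m32 < 38 * m64. */
    return 0;
}
```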

Page 9

Example (cont’d)

• From the text (p. C-27):

– For a 16K cache, m32 = 2.87%, m64 = 2.64%

– For a 64K cache, m32 = 1.35%, m64 = 1.06%

• Thus, 32B is better for 16K and 64B is better for 64K
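Plugging the quoted miss rates into the rule from the previous page confirms this:

```c
#include <stdio.h>

int main(void) {
    /* Miss rates (%) from H&P p. C-27, quoted above. */
    double m32_16k = 2.87, m64_16k = 2.64;
    double m32_64k = 1.35, m64_64k = 1.06;

    /* Average miss cycles per 100 accesses: penalty * miss rate. */
    printf("16K: 34*m32 = %.2f vs 38*m64 = %.2f\n",
           34 * m32_16k, 38 * m64_16k);  /* 97.58 < 100.32: 32B wins */
    printf("64K: 34*m32 = %.2f vs 38*m64 = %.2f\n",
           34 * m32_64k, 38 * m64_64k);  /* 45.90 > 40.28: 64B wins  */
    return 0;
}
```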

Page 10

Associativity

• The same kind of analysis can be used for associativity – miss rate vs. cycle time

• “Old” conventional wisdom

– Direct-mapped caches are faster; cache access is the bottleneck for on-chip L1, so make L1 direct-mapped

– For on-board (L2) caches, direct-mapped is 10% faster

• “New” conventional wisdom

– A 2-way set-associative cache can be made fast enough for L1; this allows larger caches to be indexed with only the page offset bits

– Looks like it does not make much difference for L2/L3 caches

Page 11

Victim Cache: Bringing Associativity to a Direct-Mapped Cache

• Goal: remove some of the conflict misses

• Idea: place a small, fully-associative buffer “behind” the L1 cache and “before” the L2 cache

• Procedure

– L1 hit: do nothing

– L1 miss in block b, hit in victim v: pay an extra cycle and swap b and v

– L1 miss in b, victim miss: fetch b into L1; the displaced L1 line goes to the victim buffer (evict an entry if necessary)

• Victim buffer of 4 to 8 entries in a 4KB cache works well
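A minimal sketch of the procedure in C, assuming the usual design in which the line displaced from L1 is what moves into the victim buffer; the sizes and the FIFO replacement are illustrative choices.

```c
#include <stdbool.h>
#include <stdio.h>

#define L1_SETS  128   /* direct-mapped L1 (in blocks)   */
#define VICTIMS    4   /* small fully associative buffer */

typedef struct { bool valid; unsigned tag;   } L1Line;
typedef struct { bool valid; unsigned block; } VictimLine;

static L1Line     l1[L1_SETS];
static VictimLine victim[VICTIMS];
static int        next_victim;   /* FIFO replacement in the buffer */

/* Returns true on a hit in L1 or the victim buffer. */
bool access_cache(unsigned block_addr) {
    unsigned idx = block_addr % L1_SETS;
    unsigned tag = block_addr / L1_SETS;
    unsigned displaced = l1[idx].tag * L1_SETS + idx;  /* current occupant */

    if (l1[idx].valid && l1[idx].tag == tag)   /* 1. L1 hit: do nothing */
        return true;

    for (int v = 0; v < VICTIMS; v++)          /* 2. victim hit: swap b, v */
        if (victim[v].valid && victim[v].block == block_addr) {
            victim[v].valid = l1[idx].valid;   /* one extra cycle in HW */
            victim[v].block = displaced;
            l1[idx].valid = true;
            l1[idx].tag   = tag;
            return true;
        }

    if (l1[idx].valid) {                       /* 3. miss in both: the  */
        victim[next_victim].valid = true;      /* displaced L1 line     */
        victim[next_victim].block = displaced; /* goes into the buffer  */
        next_victim = (next_victim + 1) % VICTIMS;
    }
    l1[idx].valid = true;
    l1[idx].tag   = tag;
    return false;
}

int main(void) {
    /* Two blocks that conflict in the direct-mapped L1. */
    access_cache(5); access_cache(5 + L1_SETS);
    printf("conflicting block now hits via victim: %d\n", access_cache(5));
    return 0;
}
```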

Page 12

Victim Cache

[Figure: victim cache organization]

Page 13

Improving Hit Rate via Prefetching

• Goal: bring data into the cache just in time for its use

– Not too early, otherwise it might displace other useful blocks (i.e., pollution)

– Not too late, otherwise there will be hit-wait cycles

• Constrained by (among others)

– Imprecise knowledge of instruction stream

– Imprecise knowledge of data stream

Page 14

Prefetching: Why, What, When & Where

• Why

– Hide memory latency, reduce cache misses

• What

– Ideally, a semantic object

– In practice, a cache line or sequence of them

• When

– Ideally, just in time

– In practice, depends dynamically on prefetching technique

• Where

– Either the cache or a prefetch buffer

Page 15

How

• Hardware prefetching

– Nextline prefetching for instructions (many processors bring in a block on a miss and the following one as well)

– OBL (“one block lookahead”) for data prefetching (many variations; can be directed by previous successes)

– Stream buffers [Jouppi, ISCA ’90] are an example that works well for instructions but not for data

– Stream buffers with hardware stride prediction mechanisms

• Work well for scientific programs with disciplined data access patterns
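A sketch of the stride-prediction idea in C; the table organization and the one-repeat confidence rule are illustrative, not any particular processor's design.

```c
#include <stdio.h>

/* Per-PC stride table: remember each load's last address and stride. */
#define TABLE 64

typedef struct {
    unsigned last_addr;  /* last address this load accessed */
    int      stride;     /* last observed stride            */
    int      confident;  /* stride seen twice in a row?     */
} StrideEntry;

static StrideEntry table[TABLE];

/* Called on each load; returns a prefetch address or 0 if none. */
unsigned on_load(unsigned pc, unsigned addr) {
    StrideEntry *e = &table[(pc >> 2) % TABLE];
    int stride = (int)(addr - e->last_addr);

    e->confident = (stride == e->stride);  /* same stride as last time? */
    e->stride    = stride;
    e->last_addr = addr;

    /* Prefetch the next expected address once the stride repeats. */
    return e->confident ? addr + stride : 0;
}

int main(void) {
    /* A disciplined access pattern: one load walking memory by 8 bytes. */
    for (unsigned a = 0x1000; a < 0x1040; a += 8) {
        unsigned p = on_load(0x400, a);
        if (p) printf("access 0x%x -> prefetch 0x%x\n", a, p);
    }
    return 0;
}
```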

Page 16

How (cont’d)

• Software prefetching

– Use of special instructions

• touch (dcbt) in PowerPC

• Load to R31 in Alpha

• SSE prefetch in IA-32

• lfetch in IA-64

– Non-binding prefetch

• The prefetch is ignored if an exception occurs

– Must be inserted by software

• Usually via compiler analysis

– Advantage: no special hardware needed

– Drawback: increases instruction count
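For illustration, GCC and Clang expose such instructions portably through the __builtin_prefetch intrinsic (not mentioned in the lecture); the prefetch distance of 16 elements below is an arbitrary tuning choice.

```c
#include <stdio.h>
#include <stddef.h>

/* __builtin_prefetch is a non-binding prefetch: it lowers to the
   machine instructions above (dcbt, lfetch, ...) and is ignored if
   the address would fault. */
double sum(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)   /* fetch ahead of the loop's current position */
            __builtin_prefetch(&a[i + 16], 0 /* read */, 3 /* keep */);
        s += a[i];
    }
    return s;
}

int main(void) {
    double a[256];
    for (int i = 0; i < 256; i++) a[i] = i;
    printf("%f\n", sum(a, 256));   /* 32640.0 */
    return 0;
}
```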

Page 17

Compiler and Algorithm Optimizations

• Tiling or blocking (see the sketch below)

– Reuse the contents of the data cache as much as possible (e.g., structure code to work on blocks of a matrix)

• Same type of compiler analysis as for parallelizing code

• Code reordering

– Particularly for improving Icache performance

– Can be guided by profile information
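A classic tiling sketch in C for matrix multiply; the tile size B is a tuning parameter, chosen here so the tiles in use fit in the data cache.

```c
#define N 512
#define B  32   /* tile size: pick so the working tiles fit in the D-cache */

/* Blocked matrix multiply: C += A * Bm (C must be zero-initialized for a
   plain product). The jj/kk loops walk tiles, so a B-by-B tile of Bm is
   reused from the cache across many iterations instead of being
   re-fetched from memory on every pass. */
void matmul_tiled(double C[N][N], const double A[N][N],
                  const double Bm[N][N]) {
    for (int jj = 0; jj < N; jj += B)
        for (int kk = 0; kk < N; kk += B)
            for (int i = 0; i < N; i++)
                for (int j = jj; j < jj + B; j++) {
                    double sum = C[i][j];
                    for (int k = kk; k < kk + B; k++)
                        sum += A[i][k] * Bm[k][j];
                    C[i][j] = sum;
                }
}
```

Without tiling, each pass over a full row of Bm evicts earlier rows before they are reused; with tiling, the same small working set is touched repeatedly while it is resident.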

Page 18

Reduce Cache Miss Penalty

• Give priority to reads (e.g., write buffers)

• Lock-up free (i.e., non-blocking) caches

• Cache hierarchy

Page 19

Write Buffers

• Reads are more important than

– Writes to memory (WT cache)

– Replacement of dirty lines (WB cache)

• Hence buffer the writes in write buffers

– FIFO queues to store data temporarily (loads must check contents of buffer)

– Since writes tend to come in bunches, the write buffer must be deep

– Allow read miss to bypass the writes in the write buffer (when there are no conflicts)
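A minimal sketch of such a buffer in C; the depth and the newest-match-wins check are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

#define DEPTH 8   /* illustrative; "deep" per the bullet above */

typedef struct { unsigned addr; unsigned data; } Entry;

static Entry buf[DEPTH];
static int head, count;   /* FIFO state */

/* Buffer a store instead of stalling for memory. */
bool buffer_write(unsigned addr, unsigned data) {
    if (count == DEPTH) return false;   /* full: must drain first */
    buf[(head + count++) % DEPTH] = (Entry){addr, data};
    return true;
}

/* A read miss checks the buffer first; the newest matching write wins.
   If nothing matches, the read may bypass the buffered writes. */
bool read_check(unsigned addr, unsigned *data) {
    for (int i = count - 1; i >= 0; i--) {
        Entry *e = &buf[(head + i) % DEPTH];
        if (e->addr == addr) { *data = e->data; return true; }
    }
    return false;   /* no conflict: safe to go to memory directly */
}

int main(void) {
    unsigned d;
    buffer_write(0x100, 42);
    printf("forwarded: %d\n", read_check(0x100, &d) ? (int)d : -1);
    return 0;
}
```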

Page 20

Coalescing Write Buffers and Write Caches

• Coalescing (or merging) write buffers: writes to an address (block) already in the write buffer are combined

• Extend write buffers to small fully associative write caches with a WB strategy and a dirty bit. A natural extension for write-through caches.
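A sketch of the merging check in C; the block size, depth, and per-byte dirty mask are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

#define WDEPTH 4
#define BLOCK  32u   /* bytes per buffer entry */

typedef struct {
    bool          valid;
    unsigned      block_addr;    /* address of the 32B block      */
    unsigned char data[BLOCK];
    unsigned      dirty_mask;    /* which bytes have been written */
} WEntry;

static WEntry wbuf[WDEPTH];

bool coalescing_write(unsigned addr, unsigned char byte) {
    unsigned blk = addr & ~(BLOCK - 1);
    unsigned off = addr & (BLOCK - 1);

    for (int i = 0; i < WDEPTH; i++)   /* block already queued: merge */
        if (wbuf[i].valid && wbuf[i].block_addr == blk) {
            wbuf[i].data[off] = byte;
            wbuf[i].dirty_mask |= 1u << off;
            return true;
        }
    for (int i = 0; i < WDEPTH; i++)   /* otherwise take a free slot */
        if (!wbuf[i].valid) {
            wbuf[i] = (WEntry){true, blk, {0}, 1u << off};
            wbuf[i].data[off] = byte;
            return true;
        }
    return false;   /* full: stall until an entry drains to memory */
}

int main(void) {
    coalescing_write(0x1000, 0xAA);   /* allocates a new entry         */
    coalescing_write(0x1001, 0xBB);   /* merges into the same 32B line */
    printf("entries used: %d\n", wbuf[0].valid + wbuf[1].valid);  /* 1 */
    return 0;
}
```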

Page 21

Lock-up Free Caches

• Idea: allow cache to have several outstanding miss requests simultaneously (i.e., hit under miss)

• Proposed in the early 1980s but implemented in the late 90s/early 00s due to complexity

• Single hit under miss (a la the HP PA7100) is relatively simple

• Several outstanding misses require the use of miss status holding registers (MSHRs)

Page 22

MSHRs

• Each MSHR should hold

– A valid (busy) bit

– The address of the requested cache block

– The index into the cache where the block will go

– A comparator (to detect a miss to a block that already has an outstanding MSHR)

– The register name, if data is to be forwarded to the CPU at the same time as to the cache

• Implemented in the MIPS R10K, Alpha 21164, and Pentium Pro
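A sketch of an MSHR file in C holding the fields listed above; the number of MSHRs and of mergeable targets per miss are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

#define MSHRS   4
#define TARGETS 4   /* loads that can merge into one outstanding miss */

typedef struct {
    bool     busy;               /* valid bit                       */
    unsigned block_addr;         /* address of the requested block  */
    unsigned cache_index;        /* where the block will be placed  */
    int      ntargets;
    unsigned dest_reg[TARGETS];  /* register names for forwarding   */
} MSHR;

static MSHR mshr[MSHRS];

/* On a miss: the "comparator" check merges a second miss to the same
   block into its existing MSHR; otherwise allocate a free one.
   Returns false when no MSHR is available (the cache must stall). */
bool handle_miss(unsigned block_addr, unsigned index, unsigned dest) {
    for (int i = 0; i < MSHRS; i++)
        if (mshr[i].busy && mshr[i].block_addr == block_addr) {
            if (mshr[i].ntargets == TARGETS) return false;
            mshr[i].dest_reg[mshr[i].ntargets++] = dest;  /* merge */
            return true;
        }
    for (int i = 0; i < MSHRS; i++)
        if (!mshr[i].busy) {
            mshr[i] = (MSHR){true, block_addr, index, 1, {dest}};
            return true;
        }
    return false;
}

int main(void) {
    handle_miss(0x40, 2, 7);   /* allocate           */
    handle_miss(0x40, 2, 9);   /* merge: same block  */
    printf("targets on MSHR 0: %d\n", mshr[0].ntargets);  /* 2 */
    return 0;
}
```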

Page 23

Cache Hierarchy

• Two and even three levels of caches are quite common now

• L2 is very large, but L1 filters many references, so the “local” L2 hit rate can appear low

• L2, in general, has longer blocks and higher associativity

• The multi-level inclusion property is implemented most of the time (also with L1 WT and L2 WB)
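A small numeric illustration of the local-hit-rate point (values are made up):

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers, not from the lecture: L1 catches the easy
       references, so L2 sees only the hard ones and its "local" hit
       rate looks low even when its global contribution is small. */
    double l1_miss = 0.05;        /* L1 misses per reference */
    double l2_local_miss = 0.40;  /* misses per L2 *access*  */

    double l2_global_miss = l1_miss * l2_local_miss;
    printf("L2 local hit rate:   %.0f%%\n", 100 * (1 - l2_local_miss));
    printf("L2 global miss rate: %.1f%% of all references\n",
           100 * l2_global_miss);
    return 0;
}
```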

Page 24

Assignment

• Project demos in three weeks

• Readings

– For Wednesday

• H&P: sections 5.3-5.6