Lecture 13: Memory Hierarchy—Ways to Reduce Misses
DAP Spr.‘98 ©UCB


Review: Who Cares About the Memory Hierarchy?

[Figure: processor vs. DRAM performance, 1980-2000, plotted on a log scale from 1 to 1000. Processor performance ("Moore's Law") grows ~60%/yr while DRAM performance grows only ~7%/yr, so the processor-memory performance gap grows ~50%/yr.]

• Processor-only focus thus far in course: CPU cost/performance, ISA, pipelined execution
• The CPU-DRAM gap (see figure above)
• 1980: no cache in µproc; 1995: 2-level cache on chip (1989: first Intel µproc with an on-chip cache)


The Goal: Illusion of large, fast, cheap memory

• Fact: large memories are slow; fast memories are small
• How do we create a memory that is large, cheap, and fast (most of the time)?
• Hierarchy of levels:
  – Use smaller and faster memory technologies close to the processor
  – Fast access time in the highest level of the hierarchy
  – Cheap, slow memory furthest from the processor
• The aim of memory hierarchy design is an access time close to that of the highest level and a size equal to that of the lowest level


Recap: Memory Hierarchy Pyramid

[Figure: memory hierarchy pyramid. The processor (CPU) sits at the top, above Level 1, Level 2, Level 3, …, Level n; a bus provides the transfer datapath between levels. The size of memory at each level grows with distance from the CPU while cost/MB falls; moving closer to the CPU, access time (memory latency) falls.]


Memory Hierarchy: Terminology
• Hit: data is found in the upper level (Block X)
  – Hit Rate: the fraction of memory accesses found in the upper level
  – Hit Time: time to access the upper level = time to determine hit/miss + memory access time
• Miss: data must be retrieved from a block in the lower level (Block Y)
  – Miss Rate = 1 − (Hit Rate)
  – Miss Penalty: time to replace a block in the upper level + time to deliver the block to the processor
• Note: Hit Time << Miss Penalty


Current Memory Hierarchy

[Figure: processor (control + datapath + registers) backed by an L1 cache, an L2 cache, main memory, and secondary memory.]

               Regs     L1 cache   L2 cache   Main memory   Secondary memory
Speed (ns):    0.5      2          6          100           10,000,000
Size (MB):     0.0005   0.05       1-4        100-1000      100,000
Cost ($/MB):   --       $100       $30        $1            $0.05
Technology:    Regs     SRAM       SRAM       DRAM          Disk


Memory Hierarchy: Why Does it Work? Locality!

• Temporal Locality (locality in time): keep most recently accessed data items closer to the processor
• Spatial Locality (locality in space): move blocks consisting of contiguous words to the upper levels

[Figure: the upper-level memory exchanges blocks (Blk X, Blk Y) with the lower-level memory, while individual words move to/from the processor; the probability of reference is plotted over the address space 0 to 2^n − 1.]


Memory Hierarchy Technology
• Random access:
  – "Random" is good: access time is the same for all locations
  – DRAM: Dynamic Random Access Memory
    » High density, low power, cheap, slow
    » Dynamic: needs to be "refreshed" regularly
  – SRAM: Static Random Access Memory
    » Low density, high power, expensive, fast
    » Static: content lasts "forever" (until power is lost)
• "Not-so-random" access technology:
  – Access time varies from location to location and from time to time
  – Examples: disk, CD-ROM
• Sequential access technology: access time linear in location (e.g., tape)
• We will concentrate on random access technology: DRAM for main memory, and SRAM for caches


Introduction to Caches

• Cache:
  – is a small, very fast memory (SRAM, expensive)
  – contains copies of the most recently accessed memory locations (data and instructions): temporal locality
  – is fully managed by hardware (unlike virtual memory)
  – organizes storage in blocks of contiguous memory locations: spatial locality
  – the unit of transfer to/from main memory (or L2) is the cache block
• General structure (made concrete in the sketch below):
  – n blocks per cache, organized in s sets
  – b bytes per block
  – total cache size: n·b bytes
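A minimal C sketch of this bookkeeping, with illustrative names that are not from the slides:

    #include <stddef.h>

    /* Parameters of the cache described above: n blocks organized in
       s sets of n/s ways each, b bytes per block. */
    struct cache_params {
        size_t n_blocks;    /* n: total blocks in the cache */
        size_t n_sets;      /* s: number of sets            */
        size_t block_size;  /* b: bytes per block           */
    };

    /* Associativity: blocks (ways) per set. */
    static size_t cache_ways(const struct cache_params *c)
    {
        return c->n_blocks / c->n_sets;
    }

    /* Total data capacity in bytes: n * b, as stated above. */
    static size_t cache_capacity(const struct cache_params *c)
    {
        return c->n_blocks * c->block_size;
    }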


Cache Organization
(1) How do you know if something is in the cache?
(2) If it is in the cache, how do you find it?
• The answers to (1) and (2) depend on the type or organization of the cache
• In a direct-mapped cache, each memory address is associated with one possible block within the cache
  – Therefore, we only need to look in a single location in the cache for the data, if it exists in the cache


Simplest Cache: Direct Mapped

[Figure: a 16-block main memory (block addresses 0-15) mapped onto a 4-block direct-mapped cache (cache indices 0-3). Memory block addresses 0010, 0110, 1010, and 1110 (binary) all share the low-order bits 10 and therefore all map to cache index 2.]

• The index determines the block's location in the cache
• index = (address) mod (# blocks)
• If the number of cache blocks is a power of 2, the cache index is just the lower n bits of the memory address [n = log2(# blocks)]
• The remaining upper bits of the memory block address form the tag


Issues with Direct-Mapped

• If block size > 1, the rightmost bits of the index are really the offset within the indexed block:

    ttttttttttttttttt iiiiiiiiii oooo

  – tag: checked to verify that this is the correct block
  – index: selects the block
  – byte offset: selects the byte within the block
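A short C sketch of this address split, assuming 32-bit addresses and power-of-two block count and block size (the helper names are illustrative):

    #include <stdint.h>

    /* offset_bits = log2(block size in bytes); index_bits = log2(# blocks). */
    static inline uint32_t byte_offset(uint32_t addr, unsigned offset_bits)
    {
        return addr & ((1u << offset_bits) - 1);       /* byte within the block */
    }

    static inline uint32_t cache_index(uint32_t addr, unsigned offset_bits,
                                       unsigned index_bits)
    {
        return (addr >> offset_bits) & ((1u << index_bits) - 1);  /* block select */
    }

    static inline uint32_t cache_tag(uint32_t addr, unsigned offset_bits,
                                     unsigned index_bits)
    {
        return addr >> (offset_bits + index_bits);     /* compared with stored tag */
    }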


64KB Cache with 4-word (16-byte) Blocks

[Figure: direct-mapped cache datapath. The 32-bit address (bit positions 31…16, 15…4, 3-2, 1-0) splits into a 16-bit tag, a 12-bit index, a 2-bit block (word) offset, and a 2-bit byte offset. The index selects one of 4K entries, each holding a valid bit, a 16-bit tag, and 128 bits of data; a comparator matches the stored tag against the address tag to produce Hit, and a mux uses the block offset to select one of the four 32-bit words.]


Direct-Mapped Cache Contd.
• The direct-mapped cache is simple to design, and its access time is fast (Why?)
• Good for an L1 (on-chip) cache
• Problem: conflict misses, hence a low hit ratio
  – Conflict misses are misses caused by accessing different memory locations that map to the same cache index
  – In a direct-mapped cache there is no flexibility in where a memory block can be placed, which contributes to conflict misses


Another Extreme: Fully Associative
• Fully associative cache (8-word blocks):
  – Omit the cache index; place an item in any block!
  – Compare all cache tags in parallel
• By definition: conflict misses = 0 for a fully associative cache

[Figure: each entry holds a valid bit, a 27-bit cache tag (address bits 31…5), and data bytes B0, B1, …, B31 selected by the 5-bit byte offset (bits 4…0); every entry has its own "=" comparator, so all stored tags are matched against the address tag in parallel.]


Fully Associative Cache

• Must search all tags in the cache, since an item can be in any cache block
• The tag search must be done by hardware in parallel (other searches are too slow)
• But the necessary parallel comparator hardware is very expensive
• Therefore, fully associative placement is practical only for a very small cache


Compromise: N-way Set Associative Cache

• N-way set associative: N cache blocks for each cache index
  – Like having N direct-mapped caches operating in parallel
  – Select the one that gets a hit
• Example: 2-way set-associative cache (sketched in code below)
  – The cache index selects a "set" of 2 blocks from the cache
  – The 2 tags in the set are compared in parallel
  – Data is selected based on the tag result (whichever matched the address)
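A sequential C sketch of this lookup; in hardware the tag comparisons happen in parallel, and a fully associative cache is just the special case of a single set holding every block (names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define WAYS 2                 /* 2-way set associative, as in the example */

    struct cache_line {
        bool     valid;
        uint32_t tag;
        /* data block omitted for brevity */
    };

    /* The index has already selected one set of WAYS lines; compare
       every stored tag against the address tag. */
    static bool set_lookup(const struct cache_line set[WAYS], uint32_t tag,
                           int *way_hit)
    {
        for (int w = 0; w < WAYS; w++) {
            if (set[w].valid && set[w].tag == tag) {
                *way_hit = w;      /* this way's data is muxed out on a hit */
                return true;
            }
        }
        return false;              /* miss: fetch the block from the lower level */
    }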


Example: 2-way Set Associative Cache

[Figure: the address splits into tag, index, and offset fields. The index selects one set, i.e., one line (valid bit, cache tag, cache data block) in each of the two ways; the two stored tags are compared ("=") against the address tag in parallel, the comparison results produce Hit, and a mux delivers the cache block from whichever way matched.]


Set-Associative Cache Contd.
• Direct mapped and fully associative can be seen as just variations of the set-associative block placement strategy
  – Direct mapped = 1-way set-associative cache
  – Fully associative = n-way set associativity for a cache with exactly n blocks


Alpha 21264 Cache Organization


Block Replacement Policy
• N-way set-associative and fully associative caches have a choice of where to place a block (and of which block to replace)
  – Of course, if there is an invalid block, use it
• Whenever there is a cache hit, record the cache block that was touched
• When a cache block must be evicted, choose one that hasn't been touched recently: "Least Recently Used" (LRU); one possible implementation is sketched below
  – Past is prologue: history suggests it is the least likely of the choices to be used soon
  – The flip side of temporal locality
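The slides do not prescribe an implementation; a common software model keeps a small age counter per way in each set, as in this C sketch:

    #include <stdint.h>

    #define WAYS 4

    /* Per-set LRU state: age[w] == 0 means way w is the most recently
       used. Initialize the ages to 0, 1, ..., WAYS-1 so they form a
       permutation. */
    struct lru_set {
        uint8_t age[WAYS];
    };

    /* On a hit (or a fill), the touched way becomes age 0 and every
       way that was more recent ages by one. */
    static void lru_touch(struct lru_set *s, int way)
    {
        uint8_t old = s->age[way];
        for (int w = 0; w < WAYS; w++)
            if (s->age[w] < old)
                s->age[w]++;
        s->age[way] = 0;
    }

    /* On a miss with no invalid block, evict the oldest way. */
    static int lru_victim(const struct lru_set *s)
    {
        int victim = 0;
        for (int w = 1; w < WAYS; w++)
            if (s->age[w] > s->age[victim])
                victim = w;
        return victim;
    }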


Review: Four Questions for Memory Hierarchy Designers

• Q1: Where can a block be placed in the upper level? (Block placement)

– Fully Associative, Set Associative, Direct Mapped

• Q2: How is a block found if it is in the upper level? (Block identification)

– Tag/Block

• Q3: Which block should be replaced on a miss? (Block replacement)

– Random, LRU

• Q4: What happens on a write? (Write strategy)

– Write Back or Write Through (with Write Buffer)


Write Policy: Write-Through vs. Write-Back
• Write-through: all writes update the cache and the underlying memory/cache
  – Can always discard cached data; the most up-to-date data is in memory
  – Cache control bit: only a valid bit
• Write-back: all writes simply update the cache
  – Can't just discard cached data; it may have to be written back to memory (flagged write-back)
  – Cache control bits: both valid and dirty bits
• Other advantages:
  – Write-through:
    » Memory (or other processors) always has the latest data
    » Simpler cache management
  – Write-back:
    » Needs much lower bus bandwidth due to infrequent accesses
    » Better tolerance of long-latency memory?


Write-Through: Write Allocate vs. Non-Allocate
• Write allocate: allocate a new cache line in the cache
  – Usually means a "read miss" is needed to fill in the rest of the cache line!
  – Alternative: per-word valid bits
• Write non-allocate (or "write-around"):
  – Simply send the write data through to the underlying memory/cache; don't allocate a new cache line! (Both policies are sketched together below.)
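A C sketch combining the write policy from the previous slide with the allocation policy above; the lower-level helpers are hypothetical stubs, not slide material:

    #include <stdbool.h>
    #include <stdint.h>

    enum write_policy { WRITE_THROUGH, WRITE_BACK };
    enum alloc_policy { WRITE_ALLOCATE, WRITE_NO_ALLOCATE };

    struct line { bool valid, dirty; uint32_t tag; };

    /* Hypothetical lower-level helpers, stubbed out for the sketch. */
    static void mem_write(uint32_t addr) { (void)addr; }
    static void mem_writeback(struct line *l) { l->dirty = false; }
    static void fill_line(struct line *l, uint32_t tag)
    {
        l->valid = true; l->tag = tag; l->dirty = false;
    }

    /* Handle a store to the (single) line that the address indexes. */
    static void handle_write(struct line *l, uint32_t tag, uint32_t addr,
                             enum write_policy wp, enum alloc_policy ap)
    {
        bool hit = l->valid && l->tag == tag;

        if (!hit && ap == WRITE_NO_ALLOCATE) {
            mem_write(addr);        /* write-around: update memory only */
            return;
        }
        if (!hit) {
            if (wp == WRITE_BACK && l->valid && l->dirty)
                mem_writeback(l);   /* evict the dirty victim first */
            fill_line(l, tag);      /* the "read miss" that fills the line */
        }
        /* ...update the written word within the cache line here... */
        if (wp == WRITE_THROUGH)
            mem_write(addr);        /* memory always has the latest data */
        else
            l->dirty = true;        /* defer the write until eviction */
    }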


Write Buffers
• Write buffers (for write-through):
  – buffer the words to be written to the L2 cache/memory, along with their addresses
  – 2 to 4 entries deep
  – all read misses are checked against pending writes for dependencies (associatively)
  – allow reads to proceed ahead of writes
  – can coalesce writes to the same address
• Write-back buffers:
  – sit between a write-back cache and L2 or main memory (MM)
  – algorithm:
    » move the dirty block to the write-back buffer
    » read the new block
    » write the dirty block to L2 or MM
  – can be associated with a victim cache (later)

[Figure: L1 connected to L2 through the write buffer, with read data returned to the CPU.]
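A small C sketch of the write-through buffer described above, including coalescing ("write merge", next slide) and the associative check that read misses perform against pending writes; the sizes and names are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    #define WB_ENTRIES 4                 /* slides: 2 to 4 entries deep */

    struct wb_entry { bool valid; uint32_t addr; uint32_t data; };
    static struct wb_entry wbuf[WB_ENTRIES];

    /* Insert a write: merge with a pending write to the same address
       if possible, otherwise take a free slot. */
    static bool wbuf_write(uint32_t addr, uint32_t data)
    {
        for (int i = 0; i < WB_ENTRIES; i++)
            if (wbuf[i].valid && wbuf[i].addr == addr) {
                wbuf[i].data = data;     /* coalesce: write merge */
                return true;
            }
        for (int i = 0; i < WB_ENTRIES; i++)
            if (!wbuf[i].valid) {
                wbuf[i] = (struct wb_entry){ true, addr, data };
                return true;
            }
        return false;                    /* buffer full: the CPU must stall */
    }

    /* Every read miss checks the pending writes so a load never slips
       past an older store to the same address. */
    static bool wbuf_match(uint32_t addr, uint32_t *data)
    {
        for (int i = 0; i < WB_ENTRIES; i++)
            if (wbuf[i].valid && wbuf[i].addr == addr) {
                *data = wbuf[i].data;
                return true;
            }
        return false;
    }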


Write Merge


Review: Cache Performance
• Miss-oriented approach to memory access (CPI_Execution includes ALU and memory instructions):

    CPUtime = IC × (CPI_Execution + MemAccess/Inst × MissRate × MissPenalty) × CycleTime

    CPUtime = IC × (CPI_Execution + MemMisses/Inst × MissPenalty) × CycleTime

• Separating out the memory component entirely, with AMAT = Average Memory Access Time (CPI_AluOps does not include memory instructions):

    CPUtime = IC × (AluOps/Inst × CPI_AluOps + MemAccess/Inst × AMAT) × CycleTime

    AMAT = HitTime + MissRate × MissPenalty
         = (HitTime_Inst + MissRate_Inst × MissPenalty_Inst)
         + (HitTime_Data + MissRate_Data × MissPenalty_Data)


Impact on Performance
• Suppose a processor executes with:
  – Clock rate = 200 MHz (5 ns per cycle), ideal (no misses) CPI = 1.1
  – 50% arith/logic, 30% ld/st, 20% control
• Suppose that 10% of memory operations (data) incur a 50-cycle miss penalty
• Suppose that 1% of instructions incur the same miss penalty
• CPI = ideal CPI + average stalls per instruction
      = 1.1 (cycles/inst)
      + [0.30 (DataMops/inst) × 0.10 (miss/DataMop) × 50 (cycles/miss)]
      + [1 (InstMop/inst) × 0.01 (miss/InstMop) × 50 (cycles/miss)]
      = (1.1 + 1.5 + 0.5) cycles/inst = 3.1
• 48% (1.5/3.1) of the time the processor is stalled waiting for data memory!
• Total number of memory accesses = one per instruction + 0.3 for data = 1.3
  Thus AMAT = (1/1.3)×[1 + 0.01×50] + (0.3/1.3)×[1 + 0.1×50] = 2.54 cycles, instead of one cycle.
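A quick C check of the arithmetic above, using the slide's formulas for the stall-augmented CPI and the access-weighted AMAT:

    #include <stdio.h>

    int main(void)
    {
        double ideal_cpi = 1.1, penalty = 50.0;
        double data_ops = 0.30, data_miss = 0.10;  /* per inst / per access */
        double inst_ops = 1.00, inst_miss = 0.01;

        double cpi = ideal_cpi
                   + data_ops * data_miss * penalty   /* 1.5 data stall cycles */
                   + inst_ops * inst_miss * penalty;  /* 0.5 inst stall cycles */

        double accesses = inst_ops + data_ops;        /* 1.3 per instruction */
        double amat = (inst_ops / accesses) * (1 + inst_miss * penalty)
                    + (data_ops / accesses) * (1 + data_miss * penalty);

        printf("CPI  = %.1f cycles/inst\n", cpi);     /* prints 3.1  */
        printf("AMAT = %.2f cycles\n", amat);         /* prints 2.54 */
        return 0;
    }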


Impact of Change in cc (Clock Cycle Time)
• Suppose a processor has the following parameters:
  – CPI = 2 (w/o memory stalls)
  – Memory accesses per instruction = 1.5
• Compare AMAT and CPU time for a direct-mapped cache and a 2-way set-associative cache, assuming:

                       cc              Hit cycles  Miss penalty  Miss rate
  Direct mapped        1 ns            1           75 ns         1.4%
  2-way associative    1.25 ns (why?)  1           75 ns         1.0%

  – AMAT_d = hit time + miss rate × miss penalty = 1×1 + 0.014×75 = 2.05 ns
  – AMAT_2 = 1×1.25 + 0.01×75 = 2 ns < 2.05 ns
  – CPU time_d = (CPI×cc + memory stall time)×IC = (2×1 + 1.5×0.014×75)×IC = 3.575×IC
  – CPU time_2 = (2×1.25 + 1.5×0.01×75)×IC = 3.625×IC > CPU time_d!
• A change in cc affects all instructions, while a reduction in miss rate benefits only memory instructions.


Miss Penalty for Out-of-Order (OOO) Execution Processors
• In OOO processors, memory stall cycles overlap with the execution of other instructions, so the miss penalty should not include this overlapped part:

    memory stall cycles per instruction =
        memory misses per instruction × (total miss penalty − overlapped miss penalty)

• For the previous example, suppose 30% of the 75 ns miss penalty can be overlapped. What are the AMAT and CPU time?
  – Assume a direct-mapped cache with cc = 1.25 ns to handle out-of-order execution.

    AMAT_d = 1×1.25 + 0.014×(75×0.7) = 1.985 ns

  With 1.5 memory accesses per instruction,

    CPU time = (2×1.25 + 1.5×0.014×(75×0.7))×IC = 3.6025×IC < CPU time_2
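The same comparison in C for all three cases (per-instruction CPU time in ns, with IC factored out):

    #include <stdio.h>

    /* CPU time per instruction: CPI x cc + memory stalls (all in ns). */
    static double cpu_time(double cpi, double cc, double mem_per_inst,
                           double miss_rate, double penalty_ns)
    {
        return cpi * cc + mem_per_inst * miss_rate * penalty_ns;
    }

    int main(void)
    {
        double cpi = 2.0, mem = 1.5;

        /* Direct mapped, 2-way, and direct mapped with 30% of the
           75 ns penalty overlapped by OOO execution. */
        printf("direct: %.4f ns/inst\n", cpu_time(cpi, 1.00, mem, 0.014, 75.0));        /* 3.575  */
        printf("2-way : %.4f ns/inst\n", cpu_time(cpi, 1.25, mem, 0.010, 75.0));        /* 3.625  */
        printf("OOO   : %.4f ns/inst\n", cpu_time(cpi, 1.25, mem, 0.014, 75.0 * 0.7));  /* 3.6025 */
        return 0;
    }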


Lock-Up Free Cache Using MSHRs (Miss Status Holding Registers)

[Figure: a bank of MSHRs (mshr 1 … mshr n). Each entry holds a valid bit (1 bit), the block request address (32 bits), and source node bits (16 bits), and each entry has its own comparator to match incoming block addresses against pending misses.]
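A C sketch of the MSHR file in the figure; the field widths follow the figure, while the entry count and function name are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    #define N_MSHR 8                 /* illustrative; the figure shows n entries */

    /* One MSHR entry: valid bit, 32-bit block request address, and
       16 source node bits identifying the requester. */
    struct mshr {
        bool     valid;
        uint32_t block_addr;
        uint16_t source;
    };

    static struct mshr mshrs[N_MSHR];

    /* On a miss, every entry's comparator checks the new block address
       against the pending ones. A match means the miss can be merged
       with an outstanding request; otherwise a free entry is allocated,
       and the cache keeps servicing hits while the miss is outstanding
       (hence "lock-up free"). */
    static int mshr_probe(uint32_t block_addr)
    {
        for (int i = 0; i < N_MSHR; i++)
            if (mshrs[i].valid && mshrs[i].block_addr == block_addr)
                return i;            /* secondary miss: merge */
        return -1;                   /* primary miss: allocate a new entry */
    }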


Avg. Memory Access Time vs. Miss Rate
• Associativity reduces the miss rate but increases the hit time, due to the increase in hardware complexity!
• Example: for an on-chip cache, assume CCT = 1.10 for 2-way, 1.12 for 4-way, and 1.14 for 8-way vs. the direct-mapped CCT

  Cache Size (KB)   1-way   2-way   4-way   8-way
    1               2.33    2.15    2.07    2.01
    2               1.98    1.86    1.76    1.68
    4               1.72    1.67    1.61    1.53
    8               1.46    1.48    1.47    1.43
   16               1.29    1.32    1.32    1.32
   32               1.20    1.24    1.25    1.27
   64               1.14    1.20    1.21    1.23
  128               1.10    1.17    1.18    1.20

(In the original slide, red marks the cases where A.M.A.T. is not improved by more associativity.)


Unified vs. Split Caches
• Unified vs. separate I&D caches (see the figure below)
• Example:
  – 16KB I & 16KB D: inst miss rate = 0.64%, data miss rate = 6.47%
  – 32KB unified: aggregate miss rate = 1.99%
• Which is better (ignoring the L2 cache)?
  – Assume 33% data ops, so 75% of accesses are instruction fetches (1.0/1.33)
  – hit time = 1, miss time = 50
  – Note that a data hit incurs 1 extra stall in the unified cache (it has only one port)

  AMAT_Harvard = 75%×(1 + 0.64%×50) + 25%×(1 + 6.47%×50) = 2.05
  AMAT_Unified = 75%×(1 + 1.99%×50) + 25%×(1 + 1 + 1.99%×50) = 2.24

[Figure: split (Harvard) organization: the processor has separate I-Cache-1 and D-Cache-1 backed by a Unified Cache-2; unified organization: the processor has a single Unified Cache-1 backed by a Unified Cache-2.]
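A short C check of the two AMAT numbers above; the extra "+1" models the unified cache's single-port stall on data accesses:

    #include <stdio.h>

    int main(void)
    {
        double fi = 0.75, fd = 0.25, penalty = 50.0;

        double split   = fi * (1 + 0.0064 * penalty)
                       + fd * (1 + 0.0647 * penalty);       /* prints 2.05 */
        double unified = fi * (1 + 0.0199 * penalty)
                       + fd * (1 + 1 + 0.0199 * penalty);   /* prints 2.24 */

        printf("AMAT split   = %.2f\n", split);
        printf("AMAT unified = %.2f\n", unified);
        return 0;
    }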