
Page 1: Chapter Seven

© 2004 Morgan Kaufmann Publishers

Page 2: Memories: Review

• SRAM:

– value is stored on a pair of inverting gates

– very fast but takes up more space than DRAM (4 to 6 transistors)

• DRAM:

– value is stored as a charge on a capacitor (must be refreshed)

– very small but slower than SRAM (factor of 5 to 10)

[Figure: a one-transistor DRAM cell: word line, pass transistor, capacitor, bit line]

Page 3: Exploiting Memory Hierarchy

• Users want large and fast memories!

– SRAM access times are 0.5–5 ns, at a cost of $4000 to $10,000 per GB.

– DRAM access times are 50–70 ns, at a cost of $100 to $200 per GB.

– Disk access times are 5 to 20 million ns, at a cost of $0.50 to $2 per GB.

• Try and give it to them anyway

– build a memory hierarchy

[Figure: levels in the memory hierarchy. The CPU sits above Level 1, Level 2, ..., Level n; both access time and the size of the memory at each level increase with distance from the CPU.]

Page 4: Locality

• A principle that makes having a memory hierarchy a good idea

• If an item is referenced:

– temporal locality: it will tend to be referenced again soon

– spatial locality: nearby items will tend to be referenced soon

Why does code have locality?

• Our initial focus: two levels (upper, lower)

– block: minimum unit of data

– hit: data requested is in the upper level

– miss: data requested is not in the upper level
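Loops are where both kinds of locality come from. A minimal sketch in Python (illustrative only, not from the slides):

    # Summing an array: 'total' and 'i' are reused on every iteration (temporal
    # locality), and a[0], a[1], ... are adjacent in memory, so each cache block
    # fetched brings the next few elements along (spatial locality).
    a = list(range(1024))
    total = 0
    for i in range(len(a)):
        total += a[i]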

Page 5: Cache

• Two issues:

– How do we know if a data item is in the cache?

– If it is, how do we find it?

• Our first example:

– block size is one word of data

– "direct mapped"

For each item of data at the lower level, there is exactly one location in the cache where it might be.

e.g., lots of items at the lower level share locations in the upper level


Page 6: Direct Mapped Cache

• Mapping: address is modulo the number of blocks in the cache

[Figure: an eight-block direct-mapped cache (indices 000 to 111). Memory blocks 00001, 00101, 01001, 01101, 10001, 10101, 11001, and 11101 all map to cache index 001, since the index is the low-order three bits of the block address.]
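The mapping rule is one line of arithmetic; a sketch for the eight-block cache above:

    NUM_BLOCKS = 8                        # a power of two
    def cache_index(block_address):
        # modulo a power of two keeps just the low-order bits (here, three)
        return block_address % NUM_BLOCKS

    # blocks 00001, 01001, 11001 (binary) all land on index 001
    print(cache_index(0b00001), cache_index(0b01001), cache_index(0b11001))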

Page 7: Direct Mapped Cache

• For MIPS:

What kind of locality are we taking advantage of?

[Figure: a direct-mapped cache with 1024 one-word blocks. The 32-bit address splits into a 20-bit tag (bits 31–12), a 10-bit index (bits 11–2) selecting one of 1024 entries, and a 2-bit byte offset; the stored tag is compared against the address tag and combined with the valid bit to produce the hit signal alongside the 32-bit data.]
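As a sketch, the split in the figure can be written out directly (field widths from the figure; the function and names are mine):

    def split_address(addr):
        byte_offset = addr & 0x3        # bits 1:0
        index = (addr >> 2) & 0x3FF     # bits 11:2, one of 1024 entries
        tag = addr >> 12                # bits 31:12, compared with the stored tag
        return tag, index, byte_offset
    # a hit requires valid[index] and tags[index] == tag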

Page 8: Direct Mapped Cache

• Accessing a Cache:

Decimal address   Binary address   Hit or miss   Assigned cache block (where found or placed)
22                10110            miss          10110two mod 8 = 110two
26                11010            miss          11010two mod 8 = 010two
22                10110            hit           10110two mod 8 = 110two
26                11010            hit           11010two mod 8 = 010two
16                10000            miss          10000two mod 8 = 000two
 3                00011            miss          00011two mod 8 = 011two
16                10000            hit           10000two mod 8 = 000two
18                10010            miss          10010two mod 8 = 010two

The cache starts with every valid bit N; after the eight references above it holds:

Index   V   Tag     Data
000     Y   10two   Memory(10000two)
001     N
010     Y   10two   Memory(10010two)
011     Y   00two   Memory(00011two)
100     N
101     N
110     Y   10two   Memory(10110two)
111     N
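A few lines of Python reproduce the trace above (a sketch; one-word blocks and the block addresses as given):

    cache = {}                                   # index -> tag; empty means V = N
    for addr in [22, 26, 22, 26, 16, 3, 16, 18]:
        index, tag = addr % 8, addr // 8         # low 3 bits index, upper 2 bits tag
        hit = cache.get(index) == tag
        cache[index] = tag                       # on a miss, the block is (re)placed
        print(f"{addr:2d} = {addr:05b}two -> index {index:03b}two: {'hit' if hit else 'miss'}")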

Page 9: Direct Mapped Cache

• Taking advantage of spatial locality (the Intrinsity FastMATH processor):

[Figure: a direct-mapped cache with multiword blocks. The 32-bit address splits into an 18-bit tag (bits 31–14), an 8-bit index (bits 13–6) selecting one of 256 entries of 512 data bits each, a 4-bit block offset (bits 5–2) driving a multiplexor that picks one of sixteen 32-bit words, and a 2-bit byte offset.]
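The multiword split, sketched the same way (field widths from the figure; the names are mine):

    def split_fastmath(addr):
        byte_offset  = addr & 0x3           # bits 1:0
        block_offset = (addr >> 2) & 0xF    # bits 5:2, picks one of 16 words (the mux)
        index        = (addr >> 6) & 0xFF   # bits 13:6, one of 256 entries
        tag          = addr >> 14           # bits 31:14, the 18-bit tag
        return tag, index, block_offset, byte_offset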

Page 10: Hits vs. Misses

• Read hits

– this is what we want!

• Read misses

– stall the CPU, fetch block from memory, deliver to cache, restart

• Write hits:

– can replace data in cache and memory (write-through)

– write the data only into the cache (write-back the cache later)

• Write misses:

– read the entire block into the cache, then write the word

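A minimal sketch of the two write-hit policies, assuming dict-backed cache and memory (the names are mine):

    def write_through(cache, memory, addr, value):
        cache[addr] = value
        memory[addr] = value       # every write goes to memory immediately

    def write_back(cache, dirty, addr, value):
        cache[addr] = value
        dirty[addr] = True         # memory is updated only when the block is evicted

Write-back trades memory traffic for the bookkeeping of a dirty bit per block.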

Page 11: Hardware Issues

• Make reading multiple words easier by using banks of memory

• It can get a lot more complicated...

[Figure: three memory organizations between CPU, cache, bus, and memory: a. one-word-wide memory organization; b. wide memory organization, with a multiplexor between the cache and the CPU; c. interleaved memory organization with memory banks 0 through 3]
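A rough timing sketch of why the organizations differ. The numbers are assumptions for illustration (1 cycle to send the address, 15 cycles per DRAM access, 1 cycle per bus transfer, 4-word blocks), not from the slide:

    send, access, transfer, words = 1, 15, 1, 4
    one_word_wide = send + words * (access + transfer)   # 65 cycles: fully serialized
    wide          = send + access + transfer             # 17 cycles: whole block at once
    interleaved   = send + access + words * transfer     # 20 cycles: bank accesses overlap
    print(one_word_wide, wide, interleaved)              # 65 17 20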

Page 12: Performance

• Increasing the block size tends to decrease miss rate:

• Use split caches because there is more spatial locality in code:

[Figure: miss rate (0% to 40%) vs. block size (4 to 256 bytes) for cache sizes of 1 KB, 8 KB, 16 KB, 64 KB, and 256 KB; larger blocks lower the miss rate until they begin to crowd out other data in the smaller caches]

Program   Block size in words   Instruction miss rate   Data miss rate   Effective combined miss rate
gcc       1                     6.1%                    2.1%             5.4%
gcc       4                     2.0%                    1.7%             1.9%
spice     1                     1.2%                    1.3%             1.2%
spice     4                     0.3%                    0.6%             0.4%

Page 13: Performance

• Simplified model:

execution time = (execution cycles + stall cycles) × cycle time

stall cycles = # of instructions × miss ratio × miss penalty

• Two ways of improving performance:

– decreasing the miss ratio

– decreasing the miss penalty

What happens if we increase block size?
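A worked instance of the model, with assumed numbers (a bigger block typically lowers the miss ratio but raises the miss penalty, so the stall term can move either way):

    instructions = 1_000_000
    base_cpi     = 1.0
    miss_ratio   = 0.05         # fraction of instructions that miss
    miss_penalty = 100          # cycles per miss
    cycle_time   = 0.2e-9       # seconds per cycle (a 5 GHz clock)

    execution_cycles = instructions * base_cpi
    stall_cycles     = instructions * miss_ratio * miss_penalty
    execution_time   = (execution_cycles + stall_cycles) * cycle_time
    print(execution_time)       # 1.2e-03 s, versus 2.0e-04 s with no stalls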

Page 14: Decreasing miss ratio with associativity

Page 15: Decreasing miss ratio with associativity

• Compared to direct mapped, give a series of references that:

– results in a lower miss ratio using a 2-way set associative cache

– results in a higher miss ratio using a 2-way set associative cache

assuming we use the "least recently used" replacement strategy (one possible answer is sketched after the figure below)

[Figure: the same eight blocks organized four ways: one-way set associative (direct mapped, blocks 0 to 7), two-way set associative (sets 0 to 3, two tag/data pairs per set), four-way set associative (sets 0 and 1, four tag/data pairs per set), and eight-way set associative (fully associative, one set of eight)]
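One possible answer, checked with a small LRU simulator (a sketch; a 4-block cache and block addresses are assumed):

    def misses(refs, num_sets, ways):
        sets = [[] for _ in range(num_sets)]     # each set keeps tags in LRU order
        count = 0
        for block in refs:
            s, tag = block % num_sets, block // num_sets
            if tag in sets[s]:
                sets[s].remove(tag)              # hit: move to most recently used
            else:
                count += 1
                if len(sets[s]) == ways:
                    sets[s].pop(0)               # evict the least recently used
            sets[s].append(tag)
        return count

    # 4 blocks: direct mapped = 4 sets x 1 way; 2-way = 2 sets x 2 ways
    print(misses([0, 4, 0, 4], 4, 1), misses([0, 4, 0, 4], 2, 2))              # 4 vs 2: 2-way wins
    print(misses([0, 2, 4, 0, 2, 4], 4, 1), misses([0, 2, 4, 0, 2, 4], 2, 2))  # 5 vs 6: direct wins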

Page 16: An implementation

[Figure: an implementation of a four-way set associative cache. The address supplies a 22-bit tag and an 8-bit index; the index selects one of 256 sets, the four valid/tag pairs in the set are compared in parallel, and a 4-to-1 multiplexor selects the matching 32-bit data word to produce the hit signal and data.]

Page 17: Performance

[Figure: miss rate (0% to 15%) vs. associativity (one-way, two-way, four-way, eight-way) for cache sizes from 1 KB to 128 KB; added associativity helps the small caches most]

Page 18: Decreasing miss penalty with multilevel caches

• Add a second level cache:

– often primary cache is on the same chip as the processor

– use SRAMs to add another cache above primary memory (DRAM)

– miss penalty goes down if data is in 2nd level cache

• Example:

– CPI of 1.0 on a 5 GHz machine with a 5% miss rate and 100 ns DRAM access

– Adding a 2nd level cache with a 5 ns access time decreases the overall miss rate to 0.5% (worked through in the sketch below)

• Using multilevel caches:

– try and optimize the hit time on the 1st level cache

– try and optimize the miss rate on the 2nd level cache
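Working the example through (a sketch; access times converted to cycles, and miss rates treated as per instruction as in the earlier model):

    cycle_ns = 0.2                     # 5 GHz clock
    to_mem = 100 / cycle_ns            # 100 ns DRAM access = 500 cycles
    to_l2  = 5 / cycle_ns              # 5 ns L2 access     = 25 cycles

    cpi_one_level = 1.0 + 0.05 * to_mem                    # 1 + 25         = 26.0
    cpi_two_level = 1.0 + 0.05 * to_l2 + 0.005 * to_mem    # 1 + 1.25 + 2.5 = 4.75
    print(cpi_one_level / cpi_two_level)                   # speedup of about 5.5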

Page 19: Cache Complexities

• Not always easy to understand implications of caches:

[Figure: two panels comparing Radix sort and Quicksort as the size grows from 4 to 4096 K items to sort: theoretical behavior of Radix sort vs. Quicksort, and observed behavior of Radix sort vs. Quicksort]

Page 20: Cache Complexities

• Here is why:

• Memory system performance is often the critical factor

– multilevel caches and pipelined processors make it harder to predict outcomes

– compiler optimizations that increase locality sometimes hurt ILP

• Difficult to predict best algorithm: need experimental data

[Figure: cache misses per item (0 to 5) vs. size (4 to 4096 K items to sort) for Radix sort and Quicksort; Radix sort's misses per item climb steeply at large sizes, explaining its observed slowdown]

Page 21: Virtual Memory

• Main memory can act as a cache for the secondary storage (disk)

• Advantages:

– illusion of having more physical memory

– program relocation

– sharing of both the physical memory and CPU among multiple programs

– protection

[Figure: address translation maps virtual addresses to physical addresses; some virtual pages map instead to disk addresses]

Page 22: Pages: virtual memory blocks

• Page faults: the data is not in memory, retrieve it from disk

– huge miss penalty, thus pages should be fairly large (e.g., 4KB)

– reducing page faults is important (LRU is worth the price)

– can handle the faults in software instead of hardware

– using write-through is too expensive, so we use write-back

[Figure: address translation. A 32-bit virtual address splits into a 20-bit virtual page number (bits 31–12) and a 12-bit page offset (bits 11–0); translation replaces the virtual page number with an 18-bit physical page number while the page offset passes through unchanged.]
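The translation in code, as a sketch (4 KB pages as above; a dict stands in for the page table):

    PAGE_SIZE = 4096                               # 12-bit page offset
    def translate(virtual_address, page_table):
        vpn    = virtual_address // PAGE_SIZE      # virtual page number
        offset = virtual_address %  PAGE_SIZE      # passes through unchanged
        ppn = page_table[vpn]                      # a missing entry would be a page fault
        return ppn * PAGE_SIZE + offset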

Page 23: Page Tables

[Figure: the page table, indexed by virtual page number, maps each entry either to a physical page in physical memory (valid bit 1) or to a disk address in disk storage (valid bit 0)]

Page 24: Page Tables

[Figure: page table lookup. The page table register locates the table; the 20-bit virtual page number indexes it; a valid entry yields an 18-bit physical page number that is concatenated with the 12-bit page offset to form the physical address. If the valid bit is 0, the page is not present in memory.]

Page 25: Making Address Translation Fast

• A cache for address translations: the translation-lookaside buffer (TLB)

[Figure: the TLB as a cache on the page table. Each TLB entry holds valid, dirty, and reference bits, a tag (the virtual page number), and the physical page address; references that miss in the TLB fall back to the full page table in physical memory, whose entries point either to physical memory or to disk storage.]

Typical values: 16–512 entries; miss rate: 0.01%–1%; miss penalty: 10–100 cycles

Page 26: TLBs and caches

[Figure: flowchart for processing a read or write. Virtual address → TLB access → on a TLB miss, raise a TLB miss exception; on a TLB hit, form the physical address. For a read: try to read the data from the cache, stalling to read the block on a cache miss, then deliver the data to the CPU. For a write: if the write access bit is off, raise a write protection exception; otherwise try to write to the cache, stalling to read the block on a miss, then write the data into the cache, update the dirty bit, and put the data and the address into the write buffer.]
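The same flow as straight-line pseudocode (a sketch; the helper names and exception types are hypothetical):

    def memory_access(vaddr, write=False, data=None):
        entry = tlb_lookup(vaddr)                 # TLB access
        if entry is None:
            raise TLBMissException()              # refill from the page table, then retry
        paddr = entry.physical_address(vaddr)
        if not write:
            return cache_read(paddr)              # stalls to read the block on a cache miss
        if not entry.write_access:
            raise WriteProtectionException()
        cache_write(paddr, data)                  # hit: update cache and dirty bit, then put
                                                  # the data and address in the write buffer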

Page 27: TLBs and Caches

[Figure: the TLB and cache working together. The 20-bit virtual page number is compared against the TLB tags (with valid and dirty bits); on a TLB hit the physical page number is concatenated with the 12-bit page offset to form the physical address, which then splits into a physical address tag, cache index, block offset, and byte offset for the cache lookup that produces the cache hit signal and data.]

Page 28: Modern Systems

Page 29: Modern Systems

• Things are getting complicated!

Page 30: Some Issues

• Processor speeds continue to increase very fast, much faster than either DRAM or disk access times

• Design challenge: dealing with this growing disparity

– Prefetching? 3rd level caches and more? Memory design?


[Figure: performance vs. year, log scale from 1 to 100,000; the CPU curve rises far faster than the Memory curve, so the gap widens every year]