Chapter Seven

© 1998 Morgan Kaufmann Publishers

Page 1: Chapter Seven

Page 2: Memories: Review

• SRAM:

– value is stored on a pair of inverting gates

– very fast but takes up more space than DRAM (4 to 6 transistors)

• DRAM:

– value is stored as a charge on a capacitor (must be refreshed)

– very small but slower than SRAM (factor of 5 to 10)

[Figures: an SRAM cell (a pair of cross-coupled inverting gates between two bit lines) and a DRAM cell (a pass transistor, controlled by the word line, connecting the bit line to a storage capacitor)]

Page 3: Exploiting Memory Hierarchy

• Users want large and fast memories!

SRAM access times are 2 to 25 ns, at a cost of $100 to $250 per Mbyte (1997).

DRAM access times are 60 to 120 ns, at a cost of $0.50 per Mbyte (2003).

Disk access times are 10 to 20 million ns, at a cost of $0.002 per Mbyte (2003).

• Try to give it to them anyway:

– build a memory hierarchy

[Figure: the memory hierarchy as a pyramid: the CPU at the top, then Level 1, Level 2, down to Level n; both access time (distance from the CPU) and the size of the memory increase at each lower level]

Page 4: Locality

• A principle that makes having a memory hierarchy a good idea

• If an item is referenced,

temporal locality: it will tend to be referenced again soon

spatial locality: nearby items will tend to be referenced soon.

Why does code have locality?

• Our initial focus: two levels (upper, lower)

– block: minimum unit of data

– hit: data requested is in the upper level

– miss: data requested is not in the upper level
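To make the question above concrete, here is a minimal C sketch (the array, its size, and the loops are invented for illustration). The sequential sweep over a[] exhibits spatial locality; the repeated use of i, sum, and the loop instructions themselves exhibits temporal locality.

    #include <stdio.h>

    int main(void) {
        int a[1024], sum = 0;

        for (int i = 0; i < 1024; i++)
            a[i] = i;
        for (int i = 0; i < 1024; i++)
            sum += a[i];   /* a[0], a[1], ... are adjacent in memory: spatial locality */
                           /* i, sum, and the loop body recur every iteration: temporal locality */
        printf("sum = %d\n", sum);
        return 0;
    }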

Page 5: Cache

• Two issues:

– How do we know if a data item is in the cache?

– If it is, how do we find it?

• Our first example:

– block size is one word of data

– "direct mapped"

For each item of data at the lower level, there is exactly one location in the cache where it might be.

i.e., many items at the lower level share each location in the upper level


Page 6: Direct Mapped Cache

• Mapping: cache index = (block address) modulo (number of blocks in the cache)

[Figure: a direct mapped cache with eight entries (indices 000 to 111); memory addresses 00001, 00101, 01001, 01101, 10001, 10101, 11001, and 11101 all map to cache index 001 because they share the same low-order three bits]
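The mapping rule as a one-function C sketch (the names are illustrative); for a power-of-two number of blocks, the modulo is just the low-order address bits:

    /* index of the only cache location where this block can live */
    unsigned cache_index(unsigned block_address, unsigned num_blocks) {
        return block_address % num_blocks;   /* low-order log2(num_blocks) bits */
    }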

Page 7: Direct Mapped Cache

• For MIPS:

What kind of locality are we taking advantage of?

[Figure: a direct mapped cache for 32-bit MIPS addresses (bits 31 to 0): a 20-bit tag (bits 31-12), a 10-bit index (bits 11-2) selecting one of 1024 entries, and a 2-bit byte offset (bits 1-0); each entry holds a valid bit, a 20-bit tag, and a 32-bit data word, and Hit is asserted when the entry is valid and its tag matches the address tag]
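A minimal C sketch of the lookup in the figure, assuming the 20/10/2 bit split shown (the struct and function names are invented):

    #include <stdbool.h>
    #include <stdint.h>

    #define OFFSET_BITS 2                     /* one 4-byte word per block */
    #define INDEX_BITS  10                    /* 1024 entries */

    struct line { bool valid; uint32_t tag; uint32_t data; };
    static struct line cache[1 << INDEX_BITS];

    /* Split the 32-bit address into tag (31-12), index (11-2), offset (1-0)
       and report a hit when the indexed entry is valid with a matching tag. */
    bool lookup(uint32_t addr, uint32_t *data) {
        uint32_t index = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        uint32_t tag   = addr >> (OFFSET_BITS + INDEX_BITS);
        if (cache[index].valid && cache[index].tag == tag) {
            *data = cache[index].data;        /* hit */
            return true;
        }
        return false;                         /* miss */
    }

Since each block holds a single word, this cache exploits only temporal locality, which answers the question above.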

Page 8: Direct Mapped Cache

• Taking advantage of spatial locality:

[Figure: a direct mapped cache with four-word blocks: a 16-bit tag (bits 31-16), a 12-bit index (bits 15-4) selecting one of 4K entries, a 2-bit block offset (bits 3-2) driving a 4-to-1 multiplexor over the 128-bit block, and a 2-bit byte offset (bits 1-0); each entry holds a valid bit, a 16-bit tag, and 128 bits of data]

Page 9: Hits vs. Misses

• Read hits

– this is what we want!

• Read misses

– stall the CPU, fetch block from memory, deliver to cache, restart

• Write hits:

– write the data into both the cache and memory (write-through)

– write the data only into the cache (write-back the cache later)

• Write misses:

– read the entire block into the cache, then write the word

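A sketch of these policies on top of the lookup() function from page 7, using write-through and fetch-on-write-miss (memory_read and memory_write stand in for an assumed backing store):

    extern uint32_t memory_read(uint32_t addr);            /* assumed helpers */
    extern void     memory_write(uint32_t addr, uint32_t v);

    uint32_t cache_read(uint32_t addr) {
        uint32_t v;
        if (lookup(addr, &v))
            return v;                                      /* read hit */
        /* read miss: stall, fetch the block from memory, restart the access */
        uint32_t index = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        cache[index].valid = true;
        cache[index].tag   = addr >> (OFFSET_BITS + INDEX_BITS);
        cache[index].data  = memory_read(addr);
        return cache[index].data;
    }

    void cache_write(uint32_t addr, uint32_t v) {
        cache_read(addr);                  /* write miss: read the block first */
        uint32_t index = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        cache[index].data = v;             /* update the cache */
        memory_write(addr, v);             /* write-through to memory */
    }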

Page 10: Hardware Issues

• Make reading multiple words easier by using banks of memory

• It can get a lot more complicated...

[Figure: three memory organizations: (a) one-word-wide memory, with CPU, cache, bus, and memory all one word wide; (b) wide memory, with a multiplexor between the cache and a wide memory; (c) interleaved memory, with four memory banks (bank 0 to bank 3) on a one-word-wide bus]
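A worked comparison of the three organizations, using timing assumptions that are not on the slide (1 cycle to send the address, 15 cycles per DRAM access, 1 cycle to transfer a word, four-word blocks, four banks):

    #include <stdio.h>

    int main(void) {
        int addr = 1, access = 15, xfer = 1, words = 4;

        int one_word_wide = addr + words * access + words * xfer;  /* 65 cycles */
        int wide          = addr + access + xfer;                  /* 17 cycles */
        int interleaved   = addr + access + words * xfer;          /* 20 cycles: the
                                            four banks overlap their accesses */

        printf("one-word-wide: %d, wide: %d, interleaved: %d\n",
               one_word_wide, wide, interleaved);
        return 0;
    }

Interleaving recovers most of the wide organization's bandwidth without a wide bus or a multiplexor.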

Page 11: Performance

• Increasing the block size tends to decrease miss rate:

• Use split caches because there is more spatial locality in code:

[Figure: miss rate (0% to 40%) versus block size (up to 256 bytes) for cache sizes of 1 KB, 8 KB, 16 KB, 64 KB, and 256 KB; larger blocks reduce the miss rate until they become too large a fraction of a small cache]

Program   Block size in words   Instruction miss rate   Data miss rate   Effective combined miss rate
gcc       1                     6.1%                    2.1%             5.4%
gcc       4                     2.0%                    1.7%             1.9%
spice     1                     1.2%                    1.3%             1.2%
spice     4                     0.3%                    0.6%             0.4%

Page 12: Performance

• Simplified model:

execution time = (execution cycles + stall cycles) × cycle time

stall cycles = # of instructions × miss ratio × miss penalty

• Two ways of improving performance:

– decreasing the miss ratio

– decreasing the miss penalty

What happens if we increase block size?
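Plugging assumed numbers into the model (none of these are from the slide): 1,000,000 instructions, base CPI 1.0, a 2 ns cycle, a 5% miss ratio, and a 40-cycle miss penalty:

    #include <stdio.h>

    int main(void) {
        double instructions = 1e6, base_cpi = 1.0, cycle_ns = 2.0;
        double miss_ratio = 0.05, miss_penalty = 40.0;

        double exec_cycles  = instructions * base_cpi;
        double stall_cycles = instructions * miss_ratio * miss_penalty;  /* 2e6 */

        printf("execution time = %.1f ms\n",
               (exec_cycles + stall_cycles) * cycle_ns / 1e6);           /* 6.0 ms */
        return 0;
    }

A larger block size pulls the two knobs in opposite directions: it usually lowers the miss ratio (more spatial locality per fetch) but raises the miss penalty (more words per fetch), so either effect can dominate.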

Page 13: Decreasing miss ratio with associativity

Compared to direct mapped, give a series of references that:

– results in a lower miss ratio using a 2-way set associative cache

– results in a higher miss ratio using a 2-way set associative cache

assuming we use the “least recently used” replacement strategy

[Figure: an eight-block cache in four configurations: one-way set associative (direct mapped, blocks 0 to 7), two-way set associative (sets 0 to 3), four-way set associative (sets 0 and 1), and eight-way set associative (fully associative), each entry holding a tag and data]
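One possible answer, as a small C simulator (a sketch: 4 blocks, block addresses rather than byte addresses, and invented names). Series a alternates between two blocks that conflict in the direct mapped cache but fit together in one 2-way set; series b cycles through three blocks that thrash a single 2-way set under LRU while the direct mapped cache keeps one of them resident.

    #include <stdio.h>

    #define BLOCKS 4
    #define WAYS   2
    #define SETS   (BLOCKS / WAYS)

    /* direct mapped: each block maps to exactly one of BLOCKS entries */
    static int misses_direct(const int *refs, int n) {
        int tag[BLOCKS], valid[BLOCKS] = {0}, misses = 0;
        for (int i = 0; i < n; i++) {
            int idx = refs[i] % BLOCKS;
            if (!valid[idx] || tag[idx] != refs[i]) {
                misses++;
                valid[idx] = 1;
                tag[idx] = refs[i];
            }
        }
        return misses;
    }

    /* 2-way set associative with LRU replacement */
    static int misses_2way_lru(const int *refs, int n) {
        int tag[SETS][WAYS], valid[SETS][WAYS] = {{0}}, lru[SETS] = {0}, misses = 0;
        for (int i = 0; i < n; i++) {
            int set = refs[i] % SETS, hit = -1;
            for (int w = 0; w < WAYS; w++)
                if (valid[set][w] && tag[set][w] == refs[i])
                    hit = w;
            if (hit >= 0) {                  /* hit: the other way becomes LRU */
                lru[set] = 1 - hit;
                continue;
            }
            misses++;
            int victim = lru[set];           /* miss: evict least recently used */
            valid[set][victim] = 1;
            tag[set][victim] = refs[i];
            lru[set] = 1 - victim;
        }
        return misses;
    }

    int main(void) {
        int a[] = {0, 8, 0, 8, 0, 8};  /* conflicts direct mapped, fits one 2-way set */
        int b[] = {0, 2, 4, 0, 2, 4};  /* three blocks thrash one 2-way set under LRU */
        printf("series a: direct mapped %d misses, 2-way LRU %d misses\n",
               misses_direct(a, 6), misses_2way_lru(a, 6));
        printf("series b: direct mapped %d misses, 2-way LRU %d misses\n",
               misses_direct(b, 6), misses_2way_lru(b, 6));
        return 0;
    }

Compiled and run, series a gives 6 direct mapped misses versus 2 for the 2-way cache, while series b gives 5 versus 6.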

Page 14: An implementation

[Figure: a four-way set associative cache: a 22-bit tag (address bits 31-10) and an 8-bit index (bits 9-2); the index selects one of 256 sets, four (valid, tag, data) entries are compared in parallel, and a 4-to-1 multiplexor selects the data word on a hit]

Page 15: Performance

[Figure: miss rate (0% to 15%) versus associativity (one-way, two-way, four-way, eight-way) for cache sizes from 1 KB to 128 KB; associativity helps most for the smaller caches]

Page 16: Decreasing miss penalty with multilevel caches

• Add a second level cache:

– often primary cache is on the same chip as the processor

– use SRAMs to add another cache above primary memory (DRAM)

– miss penalty goes down if data is in 2nd level cache

• Example:

– CPI of 1.0 on a 500 MHz machine with a 5% miss rate and 200 ns DRAM access

– adding a 2nd level cache with a 20 ns access time decreases the miss rate to 2%

• Using multilevel caches:

– try to optimize the hit time on the 1st level cache

– try to optimize the miss rate on the 2nd level cache
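A worked version of the example above (the 2 ns cycle follows from the 500 MHz clock; treating the miss rates as per-instruction rates is the slide's simplification):

    #include <stdio.h>

    int main(void) {
        double cycle_ns     = 2.0;                   /* 500 MHz clock */
        double base_cpi     = 1.0;
        double main_penalty = 200.0 / cycle_ns;      /* 100 cycles to DRAM */
        double l2_penalty   = 20.0 / cycle_ns;       /* 10 cycles to the L2 */

        double cpi_no_l2   = base_cpi + 0.05 * main_penalty;
        double cpi_with_l2 = base_cpi + 0.05 * l2_penalty     /* every L1 miss tries L2 */
                                      + 0.02 * main_penalty;  /* 2% still reach DRAM */

        printf("CPI without L2: %.1f\n", cpi_no_l2);               /* 6.0 */
        printf("CPI with L2:    %.1f\n", cpi_with_l2);             /* 3.5 */
        printf("speedup:        %.2f\n", cpi_no_l2 / cpi_with_l2); /* ~1.7 */
        return 0;
    }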

Page 17: Virtual Memory

• Main memory can act as a cache for the secondary storage (disk)

• Advantages:

– illusion of having more physical memory

– program relocation

– protection

[Figure: address translation maps virtual addresses to physical addresses in main memory or to disk addresses]

Page 18: Pages: virtual memory blocks

• Page faults: the data is not in memory, retrieve it from disk

– huge miss penalty, thus pages should be fairly large (e.g., 4KB)

– reducing page faults is important (LRU is worth the price)

– can handle the faults in software instead of hardware

– using write-through is too expensive, so we use write-back

[Figure: translation of a 32-bit virtual address (20-bit virtual page number, bits 31-12; 12-bit page offset, bits 11-0) to a 30-bit physical address (physical page number plus the unchanged page offset)]

Page 19: Page Tables

[Figure: a page table indexed by virtual page number; entries whose valid bit is 1 hold physical page numbers pointing into physical memory, and entries whose valid bit is 0 hold disk addresses in disk storage]

Page 20: Page Tables

[Figure: the page table register points to the page table; the 20-bit virtual page number indexes the table, each entry holds a valid bit and an 18-bit physical page number, and a valid bit of 0 means the page is not present in memory; the 12-bit page offset passes through unchanged]
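A minimal sketch of the translation in the figure, assuming a flat one-level table, 4 KB pages, and 32-bit virtual / 30-bit physical addresses (the type and array names are invented):

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_BITS 12                       /* 4 KB pages */
    #define VPN_BITS  20                       /* 32 - 12 */

    typedef struct {
        bool     valid;                        /* page present in memory? */
        uint32_t ppn;                          /* 18-bit physical page number */
    } PageTableEntry;

    static PageTableEntry page_table[1u << VPN_BITS];   /* indexed by VPN */

    /* Returns true and sets *pa on success; false means a page fault
       (the OS would fetch the page from disk and retry). */
    bool translate(uint32_t va, uint32_t *pa) {
        uint32_t vpn    = va >> PAGE_BITS;
        uint32_t offset = va & ((1u << PAGE_BITS) - 1);
        PageTableEntry pte = page_table[vpn];
        if (!pte.valid)
            return false;                      /* page fault */
        *pa = (pte.ppn << PAGE_BITS) | offset;
        return true;
    }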

Page 21: Making Address Translation Fast

• A cache for address translations: translation lookaside buffer

[Figure: the TLB holds a small number of entries, each with a valid bit, a virtual page number tag, and a physical page address; on a TLB miss the full page table is consulted, whose entries point to physical memory or to disk storage]
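Continuing the sketch from page 20, a small fully associative TLB in front of translate() (the entry count and round-robin replacement are assumptions, chosen for brevity over LRU):

    #define TLB_ENTRIES 8

    typedef struct {
        bool     valid;
        uint32_t vpn;                          /* tag: virtual page number */
        uint32_t ppn;
    } TlbEntry;

    static TlbEntry tlb[TLB_ENTRIES];
    static unsigned tlb_next;                  /* round-robin replacement pointer */

    bool translate_with_tlb(uint32_t va, uint32_t *pa) {
        uint32_t vpn    = va >> PAGE_BITS;
        uint32_t offset = va & ((1u << PAGE_BITS) - 1);

        for (int i = 0; i < TLB_ENTRIES; i++)          /* TLB hit: no table walk */
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *pa = (tlb[i].ppn << PAGE_BITS) | offset;
                return true;
            }

        if (!translate(va, pa))                        /* TLB miss: walk the table */
            return false;                              /* page fault */

        tlb[tlb_next] = (TlbEntry){ true, vpn, *pa >> PAGE_BITS };
        tlb_next = (tlb_next + 1) % TLB_ENTRIES;       /* refill the TLB */
        return true;
    }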

Page 22: TLBs and caches

[Figure: flowchart for an access: TLB access on the virtual address; on a TLB miss, take a TLB miss exception; on a TLB hit, form the physical address; for a read, try to read the data from the cache, delivering data to the CPU on a cache hit and stalling on a cache miss; for a write, check the write access bit, taking a write protection exception if it is off, otherwise writing the data into the cache, updating the tag, and putting the data and address into the write buffer]

Page 23: Modern Systems

• Very complicated memory systems:

Characteristic      Intel Pentium Pro                          PowerPC 604
Virtual address     32 bits                                    52 bits
Physical address    32 bits                                    32 bits
Page size           4 KB, 4 MB                                 4 KB, selectable, and 256 MB
TLB organization    A TLB for instructions and a TLB for data  A TLB for instructions and a TLB for data
                    Both four-way set associative              Both two-way set associative
                    Pseudo-LRU replacement                     LRU replacement
                    Instruction TLB: 32 entries                Instruction TLB: 128 entries
                    Data TLB: 64 entries                       Data TLB: 128 entries
                    TLB misses handled in hardware             TLB misses handled in hardware

Characteristic      Intel Pentium Pro                          PowerPC 604
Cache organization  Split instruction and data caches          Split instruction and data caches
Cache size          8 KB each for instructions/data            16 KB each for instructions/data
Cache associativity Four-way set associative                   Four-way set associative
Replacement         Approximated LRU replacement               LRU replacement
Block size          32 bytes                                   32 bytes
Write policy        Write-back                                 Write-back or write-through

Page 24: Some Issues

• Processor speeds continue to increase very fast, much faster than either DRAM or disk access times

• Design challenge: dealing with this growing disparity

• Trends:

– synchronous SRAMs (provide a burst of data)

– redesign DRAM chips to provide higher bandwidth or processing

– restructure code to increase locality

– use prefetching (make cache visible to ISA)


Page 25: Chapters 8 & 9 (partial coverage)

Page 26: Interfacing Processors and Peripherals

• I/O design is affected by many factors (expandability, resilience)

• Performance:

— access latency

— throughput

— connection between devices and the system

— the memory hierarchy

— the operating system

• A variety of different users (e.g., banks, supercomputers, engineers)

[Figure: a processor with cache connected by a memory-I/O bus to main memory and to I/O controllers for disks, graphics output, and a network; devices signal the processor with interrupts]

Page 27: I/O

• Important but neglected

“The difficulties in assessing and designing I/O systems have often relegated I/O to second class status”

“courses in every aspect of computing, from programming to computer architecture, often ignore I/O or give it scanty coverage”

“textbooks leave the subject to near the end, making it easier for students and instructors to skip it!”

• GUILTY!

— we won’t be looking at I/O in much detail

— be sure to read Chapter 8 in its entirety.

— you should probably take a networking class!

Page 28: I/O Devices

• Very diverse devices:

— behavior (i.e., input vs. output)

— partner (who is at the other end?)

— data rate

Device            Behavior         Partner   Data rate (KB/sec)
Keyboard          input            human     0.01
Mouse             input            human     0.02
Voice input       input            human     0.02
Scanner           input            human     400.00
Voice output      output           human     0.60
Line printer      output           human     1.00
Laser printer     output           human     200.00
Graphics display  output           human     60,000.00
Modem             input or output  machine   2.00-8.00
Network/LAN       input or output  machine   500.00-6000.00
Floppy disk       storage          machine   100.00
Optical disk      storage          machine   1000.00
Magnetic tape     storage          machine   2000.00
Magnetic disk     storage          machine   2000.00-10,000.00

Page 29: I/O Example: Disk Drives

• To access data:

— seek: position the head over the proper track (8 to 20 ms average)

— rotational latency: wait for the desired sector (half a rotation on average, i.e., 0.5/RPM)

— transfer: read the data (one or more sectors) at 2 to 15 MB/sec

[Figure: disk geometry: a stack of platters, each divided into concentric tracks, each track divided into sectors]
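A worked example with assumed numbers (a 12 ms average seek, 5400 RPM, a 5 MB/sec transfer rate, one 512-byte sector):

    #include <stdio.h>

    int main(void) {
        double seek_ms     = 12.0;                            /* assumed average seek */
        double rpm         = 5400.0;
        double rot_ms      = 0.5 / rpm * 60.0 * 1000.0;       /* half a rotation: ~5.6 ms */
        double transfer_ms = 0.5 / (5.0 * 1024.0) * 1000.0;   /* 0.5 KB at 5 MB/sec: ~0.1 ms */

        printf("access time = %.1f + %.1f + %.1f = %.1f ms\n",
               seek_ms, rot_ms, transfer_ms, seek_ms + rot_ms + transfer_ms);
        return 0;
    }

Note that the mechanical terms dominate: the transfer itself is roughly a hundredth of the total.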

Page 30: I/O Example: Buses

• Shared communication link (one or more wires)

• Difficult design:

— may be a bottleneck

— length of the bus

— number of devices

— tradeoffs (buffers for higher bandwidth increase latency)

— support for many different devices

— cost

• Types of buses:

— processor-memory (short, high speed, custom design)

— backplane (high speed, often standardized, e.g., PCI)

— I/O (lengthy, different devices, standardized, e.g., SCSI)

• Synchronous vs. asynchronous:

— synchronous: use a clock and a synchronous protocol; fast and small, but every device must operate at the same rate, and clock skew requires the bus to be short

— asynchronous: don't use a clock; use handshaking instead

Page 31: Some Example Problems

• Let’s look at some examples from the text

“Performance Analysis of Synchronous vs. Asynchronous”“Performance Analysis of Two Bus Schemes”

[Figure: asynchronous handshake waveforms for ReadReq, Data, Ack, and DataRdy, with the seven numbered steps of the handshaking protocol]

Page 32: Other important issues

• Bus Arbitration:

— daisy chain arbitration (not very fair)

— centralized arbitration (requires an arbiter), e.g., PCI

— self selection, e.g., NuBus used in Macintosh

— collision detection, e.g., Ethernet

• Operating system:

— polling

— interrupts

— DMA

• Performance Analysis techniques:

— queuing theory

— simulation

— analysis, i.e., find the weakest link (see “I/O System Design”)

• Many new developments

Page 33: Multiprocessors

• Idea: create powerful computers by connecting many smaller ones

good news: works for timesharing (better than a supercomputer); vector processing may be coming back

bad news: it's really hard to write good concurrent programs; many commercial failures

[Figure: two organizations: processors with caches sharing a single bus, one memory, I/O, and a network interface; and processors with caches, each with its own memory, connected by a network]

Page 34: Questions

• How do parallel processors share data?

— single address space (SMP vs. NUMA)

— message passing

• How do parallel processors coordinate?

— synchronization (locks, semaphores)

— built into send / receive primitives

— operating system protocols

• How are they implemented?

— connected by a single bus

— connected by a network

Page 35: Some Interesting Problems

• Cache Coherency

• Synchronization — provide special atomic instructions (test-and-set, swap, etc.; see the sketch below)

• Network Topology

[Figure: snooping on a single bus: each processor's cache keeps a snoop tag alongside its cache tag and data so it can watch the shared bus (connecting memory and I/O) for accesses to blocks it holds]
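A minimal sketch of a spin lock built on an atomic test-and-set, using C11 atomics to stand in for the hardware instruction the slide mentions:

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void) {
        /* atomically set the flag and return its previous value;
           keep spinning while the previous value was "set" */
        while (atomic_flag_test_and_set(&lock))
            ;                          /* spin: another processor holds the lock */
    }

    void release(void) {
        atomic_flag_clear(&lock);      /* make the lock available again */
    }

On a bus-based machine, the spinning reads are served from the local cache until the holder's release invalidates the line, which is exactly where cache coherency and synchronization interact.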

Page 36: Concluding Remarks

• Evolution vs. Revolution

“More often the expense of innovation comes from being too disruptive to computer users”

“Acceptance of hardware ideas requires acceptance by software people; therefore hardware people should learn about software. And if software people want good machines, they must learn more about hardware to be able to communicate with and thereby influence hardware engineers.”

[Figure: ideas placed along a spectrum from evolutionary to revolutionary: cache, virtual memory, RISC, parallel processing multiprocessor, pipelining, massive SIMD, microprogramming, timeshared multiprocessor, CC-UMA multiprocessor, CC-NUMA multiprocessor, not-CC-NUMA multiprocessor, and message-passing multiprocessor]