CMPE 421 Parallel Computer Architecture
PART 5: More Elaborations with Cache & Virtual Memory

Page 1

CMPE 421 Parallel Computer Architecture

PART 5: More Elaborations with Cache & Virtual Memory

Page 2

Cache Optimization Categories

Reducing miss penalty:
• Multilevel caches
• Critical word first: don't wait for the full block to be loaded before sending the requested word and restarting the CPU.
• Giving reads priority over writes: serve read misses before buffered writes have completed. Consider:

    SW R3, 512(R0)   ; M[512] ← R3  (cache index 0)
    LW R1, 1024(R0)  ; R1 ← M[1024] (cache index 0)
    LW R2, 512(R0)   ; R2 ← M[512]  (cache index 0)

  If the write buffer hasn't completed writing to location 512 in memory, the read of location 512 will put the old, wrong value into the cache block, and then into R2.
• Victim caches (next slide)

Page 3

Victim Caches

One approach to lowering miss penalty is to remember what was discarded, in case it is needed again.

A victim cache contains only blocks that are discarded from a cache because of a miss (the "victims"); it is checked on a miss to see if it has the desired data before going to the next lower-level memory.

The AMD Athlon has a victim cache with eight entries.

Jouppi [1990] found that victim caches of one to five entries are effective at reducing misses, especially for small, direct-mapped data caches. Depending on the program, a four-entry victim cache might remove one quarter of the misses in a 4-KB direct-mapped data cache.

Page 4

Cache Optimization into categories

Reducing the miss rate Larger block size, Larger cache size, Higher associativity, Way prediction Pseudo-associativity,

- In way-prediction, extra bits are kept in the cache to predict the set of the next cache access.

Compiler optimizations Reducing the time to hit in the cache

small and simple caches, avoiding address translation, and pipelined cache access.
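Way prediction is a hardware mechanism; purely as an illustration, here is a toy software model (all structures and names hypothetical, not the textbook's design) of a 2-way set-associative lookup that probes a predicted way first and falls back to the rest of the set:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_SETS 256
    #define NUM_WAYS 2

    /* Hypothetical cache line and set with a per-set prediction. */
    struct line { bool valid; uint32_t tag; };
    struct set  { struct line way[NUM_WAYS]; int predicted_way; };

    static struct set cache[NUM_SETS];

    /* Returns true on a hit. A correct prediction costs one tag check
       (fast hit); a misprediction costs extra time to probe the
       remaining way(s) before declaring a hit or a miss. */
    bool lookup(uint32_t set_index, uint32_t tag)
    {
        struct set *s = &cache[set_index];

        /* 1. Probe only the predicted way first. */
        int p = s->predicted_way;
        if (s->way[p].valid && s->way[p].tag == tag)
            return true;                      /* fast hit */

        /* 2. Misprediction: probe the other ways (slower hit). */
        for (int w = 0; w < NUM_WAYS; w++) {
            if (w == p) continue;
            if (s->way[w].valid && s->way[w].tag == tag) {
                s->predicted_way = w;         /* retrain the predictor */
                return true;
            }
        }
        return false;                         /* miss */
    }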

Page 5

Cache Optimization

Compiler-based cache optimization reduces the miss rate without any hardware change.

For instructions:
• Reorder procedures in memory to reduce conflict misses
• Use profiling to determine likely conflicts among groups of instructions

For data (examples on the following slides):
• Merging arrays: improve spatial locality with a single array of compound elements instead of two arrays
• Loop interchange: change the nesting of loops to access data in the order it is stored in memory
• Loop fusion: combine two independent loops that have the same looping structure and some overlapping variables
• Blocking: improve temporal locality by accessing "blocks" of data repeatedly instead of going down whole columns or rows

Page 6

Examples

• Merging arrays reduces misses by improving spatial locality: arrays that are accessed simultaneously are combined into one (see the sketch below).
• Loop interchange gives sequential accesses instead of striding through memory every 100 words, again improving spatial locality.
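The code these two bullets describe was a figure in the original slides; the classic forms of the two transformations (array sizes assumed, after Hennessy & Patterson) are:

    #define SIZE 1000

    /* Merging arrays. Before: two parallel arrays, so val[i] and key[i],
       though accessed together, may sit in different cache blocks. */
    int val[SIZE];
    int key[SIZE];

    /* After: a single array of compound elements; val and key for the
       same index share a cache block (better spatial locality). */
    struct merge { int val; int key; };
    struct merge merged_array[SIZE];

    /* Loop interchange. Before: the inner loop strides through memory
       100 words at a time (column-order walk of a row-major array). */
    int x[5000][100];

    void strided(void)
    {
        for (int j = 0; j < 100; j++)
            for (int i = 0; i < 5000; i++)
                x[i][j] = 2 * x[i][j];
    }

    /* After: loops swapped, so accesses are sequential in memory. */
    void sequential(void)
    {
        for (int i = 0; i < 5000; i++)
            for (int j = 0; j < 100; j++)
                x[i][j] = 2 * x[i][j];
    }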

Page 7

Examples

• Some programs have separate sections of code that access the same arrays (performing different computations on common data).
• Fusing multiple loops into a single loop allows the data in the cache to be used repeatedly before being swapped out.
• Loop fusion reduces misses through improved temporal locality (rather than the spatial locality improved by array merging and loop interchange).
• Accessing arrays "a" and "c" would have caused twice the number of misses without loop fusion, as in the sketch below.
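The corresponding code was again a figure in the slide; the classic loop-fusion example (array names a, b, c, d as in the bullet above, size assumed) is:

    #define N 1000

    double a[N][N], b[N][N], c[N][N], d[N][N];

    /* Before: two independent loops with the same bounds; a[i][j] and
       c[i][j] are brought into the cache twice, once per loop. */
    void separate_loops(void)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = 1.0 / b[i][j] * c[i][j];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                d[i][j] = a[i][j] + c[i][j];
    }

    /* After: one fused loop; the second statement reuses a[i][j] and
       c[i][j] while they are still cached (temporal locality). */
    void fused(void)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                a[i][j] = 1.0 / b[i][j] * c[i][j];
                d[i][j] = a[i][j] + c[i][j];
            }
    }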

Page 8

Blocking Example
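The code for this example was a figure in the original slide and is not in this transcription; below is the classic blocked matrix multiply from Hennessy & Patterson (the sizes and the value of the blocking factor B are assumed here), which the next page's notes refer to:

    #define N 512
    #define B 64                                  /* blocking factor */
    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    double x[N][N], y[N][N], z[N][N];

    /* Before: naive matrix multiply. The inner loop reads a full row of
       y and a full column of z, so for large N the working set exceeds
       the cache and blocks are evicted before they can be reused. */
    void naive(void)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double r = 0.0;
                for (int k = 0; k < N; k++)
                    r += y[i][k] * z[k][j];
                x[i][j] = r;
            }
    }

    /* After: blocked version. It works on B x B submatrices that fit in
       the cache, accumulating partial products into x, so x must start
       zeroed. */
    void blocked(void)
    {
        for (int jj = 0; jj < N; jj += B)
            for (int kk = 0; kk < N; kk += B)
                for (int i = 0; i < N; i++)
                    for (int j = jj; j < MIN(jj + B, N); j++) {
                        double r = 0.0;
                        for (int k = kk; k < MIN(kk + B, N); k++)
                            r += y[i][k] * z[k][j];
                        x[i][j] += r;
                    }
    }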

Page 9

Example (continued)

• B is called the blocking factor.
• Conflict misses can go down too.
• Blocking is also useful for register allocation.

Page 10

Summary of performance equations
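The equations themselves were a figure in the original slide and are not in this transcription; the standard set such a summary covers is:

    CPU time = (CPU execution cycles + Memory-stall cycles) × Clock cycle time

    Memory-stall cycles = (Memory accesses / Program) × Miss rate × Miss penalty

    Average memory access time (AMAT) = Hit time + Miss rate × Miss penalty

With multilevel caches, the miss penalty of the first-level cache is the average access time of the second-level cache.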

Page 11

VIRTUAL MEMORY

You're running a huge program that requires 32 MB, but your PC has only 16 MB available...

• Rewrite your program so that it implements overlays:
  • Execute the first portion of code (fit it in the available memory)
  • When you need more memory...
    • Find some memory that isn't needed right now
    • Save it to disk
    • Use that memory for the latter portion of code
    • And so on...

• Memory is to disk as registers are to memory
• The disk serves as an extension of memory
• Main memory can act as a "cache" for secondary storage (the magnetic disk)

Page 12

A Memory Hierarchy

Extend the hierarchy: main memory acts like a cache for the disk.

• Cache: about $20/MByte, <2 ns access time, 512 KB typical
• Memory: about $0.15/MByte, 50 ns access time, 256 MB typical
• Disk: about $0.0015/MByte, 15 ms (15,000,000 ns) access time, 40 GB typical

[Figure: registers and CPU at the top; loads, instruction fetches, and stores flow through the cache and main memory (DRAM), whose movement is managed by hardware; below them sits the disk, whose movement is managed by software.]

The operating system is responsible for managing the movement of memory between disk and main memory, and for keeping the address translation table accurate.

Page 13

Virtual Memory

• Idea: keep only the portions of a program (code, data) that are currently needed in main memory.
  • Currently unused data is saved on disk, ready to be brought in when needed.
  • This appears as a very large virtual memory (limited only by the disk size).

• Advantages:
  • Programs that require large amounts of memory can be run (as long as they don't need it all at once).
  • Multiple programs can be in virtual memory at once; only active programs will be loaded into memory.
  • A program can be written (linked) to use whatever addresses it wants to! It doesn't matter where it is physically loaded.
  • When a program is loaded, it doesn't need to be placed in contiguous memory locations.

• Disadvantages:
  • The memory a program needs may all be on disk.
  • The operating system has to manage virtual memory.

Page 14

Virtual Memory

We will focus on using the disk as a storage area for chunks of main memory that are not being used.

The basic concepts are similar to providing a cache for main memory, although we now view part of the hard disk as being the memory.

• Only a few programs are active at any time.
• An active program might not need all the memory that has been reserved for it (the rest can stay on the hard disk).

Page 15

The Virtual Memory Concept

• Virtual memory space: all possible memory addresses (4 GB in 32-bit systems). All that can be conceived of.
• Disk swap space: an area on the hard disk that can be used as an extension of memory (typically about equal to RAM size). All that can be used.
• Main memory: physical memory (typically 1 GB). All that physically exists.

Page 16

The Virtual Memory Concept

[Figure: the virtual memory space, with three example addresses mapped onto the disk swap space and main memory.]

• Error case: this address can be conceived of, but doesn't correspond to any memory. Accessing it will produce an error.
• On disk only (disk address 58984, not in main memory): this address can be accessed, but it currently exists only on disk and must be read into main memory before being used. A table maps from its virtual address to the disk location.
• In memory (physical address 883232, disk address 322321): this address can be accessed immediately, since it is already in memory. A table maps from its virtual address to its physical address. There will also be a backup location on disk.

Page 17

The Process

The CPU deals with virtual addresses.

Steps to accessing memory with a virtual address:
1. Convert the virtual address to a physical address.
   • This needs a special table (virtual address -> physical address).
   • The table may indicate that the desired address is on disk, but not in physical memory; then read the location from disk into memory (this may require moving something else out of memory to make room).
2. Do the memory access using the physical address.
   • Check the cache first (note: the cache uses only physical addresses).
   • Update the cache if needed.

Page 18

Structure of Virtual Memory

[Figure: the processor issues a virtual address to an address translator, which sends the resulting physical address to memory; a page fault invokes an elaborate software page-fault-handling algorithm.]

Returning to our library analogy:
• A virtual address is like the title of a book.
• A physical address is like the location of that book in the library.

Page 19

Translation

Hardware translates virtual addresses to physical addresses. Since the hardware accesses memory, we need to convert from a logical address to a physical address in hardware. The Memory Management Unit (MMU) provides this functionality.

[Figure: the CPU sends a virtual (logical) address to the MMU, which produces a physical (real) address into physical memory, spanning addresses 0 through 2^n - 1.]

Page 20

Address Translation

In virtual memory, blocks of memory (called pages) are mapped from one set of addresses (called virtual addresses) to another set (called physical addresses).

Page 21

Page Faults

If the valid bit for a virtual page is off, a page fault occurs, and the operating system must be given control. Once the operating system gets control, it must find the page in the next level of the hierarchy (usually magnetic disk) and decide where to place the requested page in main memory.

Page 22

Terminology

page: The unit of memory transferred between disk and the main memory.

page fault: when a program accesses a virtual memory location that is not currently in the main memory.

address translation: the process of finding the physical address that corresponds to a virtual address.

Cache               Virtual memory
Block            ⇒  Page
Cache miss       ⇒  Page fault
Block addressing ⇒  Address translation

Page 23

Differences between virtual and cache memory

• The miss penalty is huge (millions of clock cycles).
  • Solution: increase the block size (page size) to around 8 KB, because transfers have a large startup time but data transfer is relatively fast once started.
• Even on faults (misses), the VM system must provide the disk location, so the VM system must have an entry for every possible page.
• On a hit, the VM system provides the physical address in memory, not the actual data (in a cache, we have the data itself).
  • This saves room: one address rather than 8 KB of data.
• Since the miss penalty is so huge, VM systems typically have miss (page fault) rates of only 0.00001% to 0.0001%.
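To see why the penalty runs to millions of cycles (and why the fault rate must be kept so low), take the 15 ms disk access time from the memory-hierarchy slide and assume a 1 GHz clock, i.e., 1 ns per cycle (the clock rate is an assumption for illustration):

    Miss penalty ≈ 15,000,000 ns / 1 ns per cycle = 15,000,000 cycles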

Page 24

In Virtual Memory Systems

• Pages should be large enough to amortize the high access time (4 KB to 16 KB are typical, and some designers are considering sizes as large as 64 KB).
• Organizations that reduce the page fault rate are attractive. The primary technique used here is to allow flexible placement of pages (e.g., fully associative placement).
• A sophisticated LRU replacement policy is preferable.
• Page faults can be handled in software.
• Write-back is used (a write-through scheme does not work): we need a scheme that reduces the number of disk writes.

Page 25

Keeping track of pages: the page table

• All programs use the same virtual address space.
• Each program must have its own memory mapping.
• Each program therefore has its own page table to map virtual addresses to physical addresses.
• The page table resides in memory and is pointed to by the page table register.
• The page table has an entry for every possible page (in principle, not in practice...), so no tags are necessary.
• A valid bit indicates whether the page is in memory or on disk (a sketch of an entry follows below).
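As a minimal sketch (field widths hypothetical), a page table entry holding what the slides describe (a valid bit, plus either a physical page number or a disk address, along with the dirty and use bits introduced on later pages) might look like:

    #include <stdint.h>

    /* One page table entry. If valid == 1 the page is in main memory at
       physical page ppn; otherwise disk_addr gives its location on disk. */
    struct pte {
        uint32_t valid : 1;   /* page present in main memory?              */
        uint32_t dirty : 1;   /* written since it was loaded? (write-back) */
        uint32_t use   : 1;   /* reference bit for approximate LRU         */
        uint32_t ppn   : 12;  /* physical page number (width varies)       */
        uint32_t disk_addr;   /* backing location on disk                  */
    };

    /* One table per program, indexed directly by virtual page number
       (so no tags); the page table register points at the active one. */
    struct pte page_table[1 << 19];   /* e.g. 2^19 entries: 32-bit VA, 8 KB pages */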

Page 26

Virtual to Physical Mapping

Virtual address (32 bits): bits 31-13 hold the virtual page number, bits 12-0 the page offset.
Physical address (25 bits): bits 24-13 hold the physical page number, bits 12-0 the page offset.

Example:
• 4 GB (32-bit) virtual address space
• 32 MB (25-bit) physical address space
• 8 KB (13-bit) page size (block size)

Translation (a software sketch follows below):
• A 32-bit virtual address is given to the VM hardware.
• The virtual page number (index) is derived from it by removing the page (block) offset.
• The virtual page number is looked up in a page table stored in main memory. Since it is an index, no tag is needed: all entries are unique.
• When found, the entry is either:
  • the physical page number, if the page is in memory (valid bit = 1), or
  • the disk address, if not in memory, i.e., a page fault (valid bit = 0). Note: handling this may involve reading from disk.
• If not found, the address is invalid.
• Both the virtual and the physical address are broken down into a page number and a page offset.
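A software sketch of the lookup just described, using this slide's parameters (32-bit virtual address, 8 KB pages, hence a 13-bit offset and a 19-bit virtual page number); the structures and the fault handler are hypothetical stand-ins for what the hardware and OS do:

    #include <stdbool.h>
    #include <stdint.h>

    #define OFFSET_BITS 13                      /* 8 KB pages            */
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)
    #define NUM_VPNS    (1u << 19)              /* 32-bit VA: 2^19 pages */

    struct pte { bool valid; uint32_t ppn; uint32_t disk_addr; };
    static struct pte page_table[NUM_VPNS];

    /* Stub: a real OS reads the page in from disk (possibly evicting
       another page to make room) and returns the chosen frame. */
    static uint32_t handle_page_fault(uint32_t vpn) { (void)vpn; return 0; }

    /* Translate a virtual address to a physical address. */
    uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> OFFSET_BITS;   /* drop the page offset */
        uint32_t offset = vaddr &  OFFSET_MASK;

        struct pte *e = &page_table[vpn];         /* direct index: no tag */
        if (!e->valid) {
            /* Page fault: the entry holds a disk address, not a PPN. */
            e->ppn   = handle_page_fault(vpn);
            e->valid = true;
        }
        /* Physical address = physical page number, then the offset. */
        return (e->ppn << OFFSET_BITS) | offset;
    }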

Page 27

Virtual Memory (32-bit system): 8 KB page size, 16 MB memory

[Figure: a page table with one row per virtual page number (0, 1, 2, ..., up to 512K-1), each row holding a valid bit, a physical page number, and a disk address.]

• Virtual address: bits 31-13 hold the virtual page number (a 19-bit index), bits 12-0 the page offset.
• Physical address (24 bits for 16 MB): bits 23-13 hold the physical page number (11 bits), bits 12-0 the page offset.
• The page table needs 4 GB / 8 KB = 512K entries (2^19 = 512K).

Page 28

Virtual Memory Parameters

A virtual memory design consists of working out:
• Bits for the page address (offset)
• Bits for the virtual page number
• Number of virtual pages
• Entries in the page table
• Bits for the physical page number
• Number of physical pages
• Bits per page table line
• Total page table size

A worked instance follows below.
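As a worked instance, using the earlier 32-bit example system (8 KB pages, 16 MB and hence 24-bit physical addresses; the entry width below assumes just a valid bit plus the physical page number, while real entries also store a disk address):

    Bits for the page offset:       log2(8 KB) = 13
    Bits for virtual page number:   32 - 13 = 19
    Number of virtual pages:        2^19 = 512K = entries in the page table
    Bits for physical page number:  24 - 13 = 11
    Number of physical pages:       2^11 = 2K
    Bits per page table line:       1 + 11 = 12 (valid bit + PPN)
    Total page table size:          2^19 entries × 12 bits = 768 KB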

Page 29

Write issues

Write-through: update both disk and memory.
  + Easy to implement
  - Requires a write buffer
  - Requires a separate disk write for every write to memory
  - A write miss requires reading in the page first, then writing back the single word

Write-back: write only to main memory; write to the disk only when the block is replaced.
  + Writes are fast
  + Multiple writes to a page are combined into one disk write
  - Must keep track of when a page has been written (dirty bit)

Page 30

Page replacement policy

Exact least recently used (LRU) is too expensive, so use approximate LRU:
• A use bit (or reference bit) is added to every page table line.
• On a hit, the PPN is used to form the address and the reference bit is turned on, so the bit is set at every access.
• The OS periodically clears all use bits.
• The page to replace is chosen among the entries whose use bit is zero; pick one of them as the victim at random.

If the OS chooses to replace a page, the dirty bit indicates whether the page must be written out before its location in memory can be given to another page. A sketch of the policy follows below.
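A minimal software sketch of this policy (all structures and names hypothetical), including the dirty-bit check from the write-back discussion:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define NUM_PAGES 512

    struct pte { bool valid; bool use; bool dirty; uint32_t ppn; uint32_t disk_addr; };
    static struct pte page_table[NUM_PAGES];

    /* Stub for the disk I/O a real OS would perform. */
    static void write_page_to_disk(uint32_t ppn, uint32_t disk_addr)
    {
        (void)ppn; (void)disk_addr;
    }

    /* Run periodically by the OS: clear every use bit, so only pages
       touched after this point look "recently used". */
    void clear_use_bits(void)
    {
        for (int i = 0; i < NUM_PAGES; i++)
            page_table[i].use = false;
    }

    /* Approximate LRU: collect the valid pages whose use bit is still
       zero and pick one at random; if every page was referenced since
       the last sweep, fall back to any valid page. Assumes memory is
       full, i.e., at least one valid entry exists. */
    int choose_victim(void)
    {
        int candidates[NUM_PAGES], n = 0;
        for (int i = 0; i < NUM_PAGES; i++)
            if (page_table[i].valid && !page_table[i].use)
                candidates[n++] = i;
        if (n == 0)
            for (int i = 0; i < NUM_PAGES; i++)
                if (page_table[i].valid)
                    candidates[n++] = i;
        return candidates[rand() % n];
    }

    /* Evict the victim: the dirty bit tells us whether the page must be
       written back to disk before its frame can be reused. */
    void evict(int vpn)
    {
        struct pte *e = &page_table[vpn];
        if (e->dirty)
            write_page_to_disk(e->ppn, e->disk_addr);
        e->valid = false;
    }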

Page 31

Virtual memory example

System with 20-bit virtual addresses, 16 KB pages, and 256 KB of physical memory.
The page offset takes 14 bits, leaving 6 bits for the V.P.N. and 4 bits for the P.P.N.

Page table:

  Virtual Page # (index)   Valid bit   Physical Page # / Disk address
  000000                   1           1001
  000001                   0           sector 5000...
  000010                   1           0010
  000011                   0           sector 4323...
  000100                   1           1011
  000101                   1           1010
  000110                   0           sector 1239...
  000111                   1           0001

Access to 0000 1000 1100 1010 1010:
  V.P.N. = 000010, valid, so P.P.N. = 0010.
  Physical address: 00 1000 1100 1010 1010.

Access to 0001 1001 0011 1100 0000:
  V.P.N. = 000110, not valid: page fault to sector 1239...
  Pick a page to "kick out" of memory (use LRU); assume the LRU page is V.P.N. 000101 (P.P.N. 1010) for this example.
  Read the data from sector 1239 into P.P.N. 1010.
  Update the page table: entry 000110 becomes valid with P.P.N. 1010; entry 000101 becomes invalid, keeping its disk address (sector xxxx...).