Computer Architecture - Memory Management Units

Page 1:

Computer Architecture

Memory Management Units

Iolanthe II - Reefed down, heading for Great Barrier Island

Page 2:

Memory Management Unit

• Physical and Virtual Memory
  • Physical memory presents a flat address space
    • Addresses 0 to 2^p - 1, where p = number of bits in an address word
  • User programs accessing this space conflict in multi-user (eg Unix) and multi-process (eg real-time) systems
• Virtual Address Space
  • Each user has a “private” address space
    • Addresses 0 to 2^q - 1
    • q and p are not necessarily the same! (a worked size comparison follows)
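For a concrete sense of scale (a worked comparison only: p = 32 is assumed here for illustration, and q = 46 is the x86 virtual-address width quoted later in these slides):

    2^{p}\big|_{p=32} = 2^{32}\ \text{bytes} = 4\ \text{Gbyte of physical space}
    \qquad
    2^{q}\big|_{q=46} = 2^{46}\ \text{bytes} = 64\ \text{Tbyte of virtual space}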

Page 3:

Memory Management Unit - Virtual Address Space

• Each user has a “private” address space

[Diagram: each user's virtual address space (eg User D's address space) mapped onto pages of physical memory]

Page 4:

Virtual Addresses

• Memory Management Unit
  • CPU (running user programs) emits a Virtual Address
  • MMU translates it to a Physical Address
  • Operating System allocates pages of physical memory to users

Page 5:

Virtual Addresses

• Mappings between user space and physical memory created by OS

Page 6:

Memory Management Unit (MMU)

• Responsible for VIRTUAL → PHYSICAL address mapping
• Sits between CPU and cache
• Cache operates on Physical Addresses (mostly - some research on VA caches)

[Diagram: CPU sends a Virtual Address (VA) to the MMU; the MMU sends the Physical Address (PA) on to the cache and main memory; data or instructions (D or I) flow back to the CPU]

Page 7:

MMU - operation

• OS constructs page tables - one for each user
• The page address field of the memory address selects a page table entry
• The page table entry contains the physical page address (sketched in C below)
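A minimal one-level lookup in C, as a sketch of these three bullets (the pte_t layout, pageTable array and PAGE_SHIFT value are illustrative assumptions, not the lecture's notation):

    #include <stdint.h>

    #define PAGE_SHIFT 13                      /* k = 13 -> 8 kbyte pages (used in a later example) */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)

    /* One page table entry: here just the physical page number. */
    typedef struct { uint32_t phys_page; } pte_t;

    /* pageTable is the table the OS built for this user. */
    uint64_t translate(const pte_t *pageTable, uint64_t vaddr)
    {
        uint64_t vpn    = vaddr >> PAGE_SHIFT;     /* upper q-k bits select the page table entry */
        uint64_t offset = vaddr & (PAGE_SIZE - 1); /* lower k bits: offset within the page       */
        return ((uint64_t)pageTable[vpn].phys_page << PAGE_SHIFT) | offset;
    }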

Page 8:

MMU - operation

[Diagram: MMU operation - the upper q-k bits of the virtual address (the virtual page number) index the page table; the selected entry supplies the physical page address, which is concatenated with the k-bit offset]

Page 9:

MMU - Virtual memory space

• Page Table Entries can also point to disc blocks
• Valid bit (see the sketch below)
  • Set: page is in memory - the address is the physical page address
  • Cleared: page is “swapped out” - the address is a disc block address
• MMU hardware generates a page fault when a swapped-out page is requested
• Allows the virtual memory space to be larger than physical memory
  • Only the “working set” is in physical memory
  • The remainder is on the paging disc
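Extending the earlier translation sketch with a valid bit (the bit-field layout and the handle_page_fault hook are assumptions standing in for the OS machinery described on the next slides):

    #include <stdint.h>

    #define PAGE_SHIFT 13

    typedef struct {
        uint32_t valid    : 1;   /* 1: addr is a physical page number; 0: addr is a disc block */
        uint32_t modified : 1;   /* page must be written back before its frame is reused       */
        uint32_t addr     : 30;
    } pte_t;

    extern uint32_t handle_page_fault(pte_t *pte);   /* hypothetical: OS swaps the page in, returns its new frame */

    uint64_t translate(pte_t *pageTable, uint64_t vaddr)
    {
        pte_t   *pte = &pageTable[vaddr >> PAGE_SHIFT];
        uint32_t ppn = pte->valid ? pte->addr               /* in memory: physical page number */
                                  : handle_page_fault(pte); /* swapped out: fault, then retry  */
        return ((uint64_t)ppn << PAGE_SHIFT) | (vaddr & ((1u << PAGE_SHIFT) - 1));
    }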

Page 10:

MMU - Page Faults

• Page Fault Handler
  • Part of the OS kernel
  • Finds a usable physical page
    • LRU algorithm
  • Writes it back to disc if modified
  • Reads the requested page from the paging disc
  • Adjusts page table entries
  • Memory access is re-tried

Page 11:

Page Fault

[Diagram: a page fault - the selected page table entry's valid bit is clear, so its address field holds a disc block number rather than a physical page address]

Page 12:

MMU - Page Faults

• Page Fault Handler (sketched below)
  • Part of the OS kernel
  • Finds a usable physical page
    • LRU algorithm
  • Writes it back to disc if modified
  • Reads the requested page from the paging disc
  • Adjusts page table entries
  • Memory access is re-tried
• Can be an expensive process!
• Usual to allow page tables to be swapped out too!
  • A page fault can be generated on the page tables!
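A hedged C-style sketch of those steps (choose_victim_lru, pte_of_physical_page and the disc I/O helpers are assumed names, not a real kernel API; the PTE layout matches the earlier sketch):

    #include <stdint.h>

    typedef struct { uint32_t valid : 1, modified : 1, addr : 30; } pte_t;

    /* Hypothetical helpers standing in for real kernel machinery. */
    extern uint32_t choose_victim_lru(void);
    extern pte_t   *pte_of_physical_page(uint32_t ppn);
    extern void     write_page_to_disc(uint32_t ppn);
    extern void     read_page_from_disc(uint32_t disc_block, uint32_t ppn);

    uint32_t handle_page_fault(pte_t *faulting_pte)
    {
        uint32_t victim_ppn = choose_victim_lru();             /* find a usable physical page (LRU) */
        pte_t   *victim_pte = pte_of_physical_page(victim_ppn);

        if (victim_pte->modified)                              /* write it back to disc if modified */
            write_page_to_disc(victim_ppn);
        victim_pte->valid = 0;                                 /* its old owner will now fault      */

        read_page_from_disc(faulting_pte->addr, victim_ppn);   /* addr held a disc block number     */
        faulting_pte->addr     = victim_ppn;                   /* adjust the page table entry       */
        faulting_pte->modified = 0;
        faulting_pte->valid    = 1;
        return victim_ppn;                                     /* the memory access is then retried */
    }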

Page 13:

MMU - practicalities

• Page size
  • 8 kbyte pages → k = 13
  • q = 32 → q - k = 19
• So page table size
  • 2^19 ≈ 0.5 × 10^6 entries
  • Each entry is 4 bytes
  • 0.5 × 10^6 × 4 ≈ 2 Mbytes!
• Page tables can take a lot of memory!
• Larger page sizes reduce page table size but can waste space
  • Access one byte → the whole page is needed!
• Repeat this calculation with q = 46, k = 13 (worked out below)
  • Need 2^(46-13) = 2^33 entries, or 2^35 bytes!!
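Writing the calculation out in full (plain arithmetic restating the slide's figures, with 4-byte entries):

    \text{page table size} = 2^{\,q-k} \times 4\ \text{bytes}
    q = 32,\ k = 13:\quad 2^{19} \times 4 = 2^{21}\ \text{bytes} = 2\ \text{Mbytes}
    q = 46,\ k = 13:\quad 2^{33} \times 4 = 2^{35}\ \text{bytes} = 32\ \text{Gbytes}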

Page 14:

MMU - practicalities

• Page tables are stored in main memory
  • They’re too large to fit in smaller memories!
• The MMU needs to read the page table for address translation
  → Address translation can require additional memory accesses!
• >32-bit address space?
  • How does a 32-bit datapath handle large addresses?
  • q = 46 or 52

Page 15:

MMU - Virtual Address Space Size

• Segments, etc
• Virtual address spaces > 2^32 bytes are common
  • Intel x86 - 46-bit address space
  • PowerPC - 52-bit address space

[Diagram: PowerPC address formation]

Page 16:

MMU - Protection

• Page table entries
  • Extra bits are added to specify access rights
  • Set by the OS (software) but checked by the MMU hardware! (illustrated below)
• Access control bits
  • Read
  • Write
  • Read/Write
  • Execute only
• Unix (Linux) programs contain 3 sections:

  Name   Content        Access Rights
  Text   Program        Execute only
  Bss    Constants      Read only
  Data   Program data   Read/Write
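A user-level illustration of the same protection machinery, using POSIX mmap/mprotect on Linux (a sketch only; the slides describe the MMU-level mechanism, not this API):

    #include <sys/mman.h>

    int main(void)
    {
        size_t page = 8192;   /* assume 8 kbyte pages, as in the earlier example */

        /* Get an anonymous, page-aligned region with read/write access. */
        void *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;

        /* The OS marks the page read-only in its page table entry;
           the MMU then enforces it - a later write raises a protection fault. */
        if (mprotect(buf, page, PROT_READ) != 0)
            return 1;

        /* ((char *)buf)[0] = 1;   <- would now fault (SIGSEGV) */
        return 0;
    }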

Page 17:

MMU - Alternative Page Table Styles

• Inverted Page Tables
  • One page table entry (PTE) per page of physical memory
  • MMU has to search for the correct VA entry
  • PowerPC hashes the VA to a PTE address
    • PTE address = h( VA )
    • h - a hash function
    • Hashing → collisions
• Hash functions in hardware
  • “hash” of n bits to produce m bits
  • Usually m < n
  • Fewer bits reduces the information content

Page 18:

MMU - Alternative Page Table Styles

• Hash functions in hardware
  • “hash” of n bits to produce m bits
  • Usually m < n
  • Fewer bits reduces the information content
    • There are only 2^m distinct values now!
    • Some n-bit patterns will reduce to the same m-bit pattern
• Trivial example
  • 2 bits → 1 bit with xor
  • h(x1 x0) = x1 xor x0

      y    h(y)
      00   0
      01   1
      10   1
      11   0

  • Collisions: 00 and 11 both map to 0; 01 and 10 both map to 1

Page 19:

MMU - Alternative Page Table Styles

• Inverted Page Tables
  • One page table entry (PTE) per page of physical memory
  • MMU has to search for the correct VA entry
  • PowerPC hashes the VA to a PTE address
    • PTE address = h( VA )
    • h - a hash function
    • Hashing → collisions → PTEs are linked together
  • A PTE contains tags (like a cache) and link bits
  • The MMU searches the linked list to find the correct entry
  • Smaller page tables / longer searches (see the sketch below)
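A sketch of the hashed lookup with collision chains (HPT_SIZE, the field names and the toy hash are illustrative assumptions; the real PowerPC PTE format differs):

    #include <stdint.h>
    #include <stddef.h>

    #define HPT_SIZE 4096                  /* number of hash buckets: illustrative only */

    typedef struct hpte {
        uint64_t     vpn_tag;              /* tag: which virtual page this entry maps    */
        uint32_t     phys_page;            /* physical page number                       */
        uint32_t     valid;
        struct hpte *next;                 /* "link bits": next entry in collision chain */
    } hpte_t;

    static hpte_t *hpt[HPT_SIZE];          /* one chain of PTEs per hash bucket */

    /* Toy hash of the virtual page number down to a bucket index (m < n bits). */
    static size_t h(uint64_t vpn) { return (vpn ^ (vpn >> 12)) % HPT_SIZE; }

    /* Search the chain selected by h(vpn) for a matching tag; NULL means page fault. */
    hpte_t *hpt_lookup(uint64_t vpn)
    {
        for (hpte_t *e = hpt[h(vpn)]; e != NULL; e = e->next)
            if (e->valid && e->vpn_tag == vpn)
                return e;
        return NULL;
    }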

Page 20:

MMU - Sharing

• Can map virtual addresses from different user spaces to the same physical pages
  → Sharing of data (illustrated below)
• Commonly done for frequently used programs
  • Unix “sticky bit”
  • Text editors, eg vi
  • Compilers
  • Any other program used by multiple users
→ More efficient memory use
→ Less paging
→ Better use of cache
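A user-level sketch of sharing: two processes that map the same file MAP_SHARED end up with page table entries pointing at the same physical frames (POSIX mmap shown purely as an illustration; the file name is hypothetical):

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/shared.dat", O_RDONLY);   /* hypothetical file name */
        if (fd < 0)
            return 1;

        /* Each process mapping this file MAP_SHARED is given virtual pages
           that translate to the same physical copy of the file's pages. */
        void *p = mmap(NULL, 8192, PROT_READ, MAP_SHARED, fd, 0);
        close(fd);
        return (p == MAP_FAILED) ? 1 : 0;
    }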

Page 21:

Address Translation - Speeding it up

• Two+ memory accesses for each datum?
  • Page table: 1 - 3 (single-level to 3-level tables)
  • Actual data: 1
  System running slower than an 8088?
• Solution: the Translation Look-Aside Buffer
  • Acronym: TLB (or TLAB)
  • A small cache of recently-used page table entries
  • Usually fully-associative
  • Can be quite small! (see the sketch below)
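A fully-associative TLB as a C sketch (sizes and names are assumptions; real hardware compares all entries in parallel rather than in a loop):

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 64                 /* small - cf the 48-128 entry parts quoted on the next slide */

    typedef struct {
        bool     valid;
        uint64_t vpn;                      /* virtual page number: the tag          */
        uint64_t ppn;                      /* cached physical page number (the PTE) */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Fully associative: any entry can hold any page, so check them all. */
    bool tlb_lookup(uint64_t vpn, uint64_t *ppn)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *ppn = tlb[i].ppn;
                return true;               /* hit: no page table access needed              */
            }
        }
        return false;                      /* miss: walk the page table, then refill a slot */
    }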

Page 22:

Address Translation - Speeding it up

• TLB sizes

  MIPS R4000            1992   48 entries
  MIPS R10000           1996   64 entries
  HP PA7100             1993   120 entries
  Pentium 4 (Prescott)  2006   64 entries

• One page table entry / page of data
• Locality of reference
  • Programs spend a lot of time in the same memory region
  → TLB hit rates tend to be very high - 98%
  → This compensates for the cost of a miss (many memory accesses - but for only 2% of references to memory!)

Page 23:

TLB - Coverage

• How much of the address space is ‘covered’ by the TLB?
  • ie how large is the address space for which the TLB will ensure fast translations?
  • All others will need to fetch PTEs from cache, main memory or (hopefully not too often!) disc
• Coverage = number of TLB entries × page size
  • eg 128-entry TLB, 8 kbyte pages:
    Coverage = 2^7 × 2^13 = 2^20 ≈ 1 Mbyte (another worked case below)
• A critical factor in program performance
  • Working data set size > TLB coverage could be bad news!
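The same formula applied to a 64-entry TLB with the same 8 kbyte pages (assuming one page mapped per entry, as above):

    \text{Coverage} = 2^{6} \times 2^{13} = 2^{19}\ \text{bytes} = 512\ \text{kbyte}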

Page 24:

TLB – Sequential access

• Luckily, sequential access is fine!
• Example: a large (several Mbyte) matrix of doubles (8-byte floating point values)
  • 8 kbyte pages => 1024 doubles/page
• Sequential access, eg sum all values:

    for (j = 0; j < n; j++)
        sum = sum + x[j];

• First access to each page → TLB miss
  but the TLB is then loaded with the entry for this page, so
• The next 1023 accesses → TLB hits!
  TLB hit rate = 1023/1024 ≈ 99.9%
  Note this hit rate is independent of the matrix size!

Page 25:

Memory Hierarchy - Operation