
Memory Management

Jinkyu Jeong (jinkyu@skku.edu)
Computer Systems Laboratory
Sungkyunkwan University
http://csl.skku.edu

Topics
§ Why is memory management difficult?

§ Old memory management techniques:
• Fixed partitions
• Variable partitions

§ Introduction to virtual memory

Memory Management (1)
§ Goals

• To provide a convenient abstraction for programming.

• To allocate scarce memory resources among competing processes to maximize performance with minimal overhead.

• To provide isolation between processes.

Single/Batch Programming
§ An OS with one user process
• Programs use physical addresses directly.
• OS loads a job, runs it, unloads it.

[Figure: three memory layouts for a single user process — OS in RAM with the user program above it, OS in ROM with the user program below it, and OS in RAM plus device drivers in ROM with the user program in between; addresses run from 0 to 0xFFFF….]

Multiprogramming (1)
§ Example

• What happens if two users simultaneously run this application?

#include <stdint.h>
#include <stdio.h>

int n = 0;

int main(void)
{
    /* Print the address of the global n; the cast keeps the slide's
       0x%08x format (a 32-bit build is assumed, as in the output below). */
    printf("&n = 0x%08x\n", (unsigned int)(uintptr_t)&n);
    return 0;
}

% ./a.out
&n = 0x08049508
% ./a.out
&n = 0x08049508

Multiprogramming (2)
§ Multiprogramming
• Need multiple processes in memory at once.
– To overlap I/O and CPU of multiple jobs
– Each process requires variable-sized and contiguous space.
• Requirements
– Protection: restrict which addresses processes can use.
– Fast translation: memory lookups must be fast, in spite of the protection scheme.
– Fast context switching: updating memory hardware (for protection and translation) should be quick.


Fixed Partitions (1)

[Figure: physical memory with the OS at address 0 and fixed Partitions 0–4 starting at 0x1000, 0x2000, 0x3000, 0x4000, 0x5000; the virtual address 0x0362 added to the base register value 0x2000 yields the physical address 0x2362.]

Fixed Partitions (2)
§ Physical memory is broken up into fixed partitions
• The size of each partition is the same and fixed
• The number of partitions = degree of multiprogramming
• Hardware requirements: base register
– Physical address = virtual address + base register (a small sketch follows below)
– Base register loaded by the OS when it switches to a process
§ Advantages
• Easy to implement, fast context switch
§ Problems
• Internal fragmentation: memory in a partition not used by a process is not available to other processes
• Partition size: one size does not fit all
– Fragmentation vs. fitting large programs
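The relocation arithmetic above is simple enough to show directly. The following is a minimal C sketch of base-register relocation, not the slide's hardware description: the base value and virtual address are taken from the figure, and the function name is made up.

#include <stdint.h>
#include <stdio.h>

/* Physical address = virtual address + base register (loaded at context switch). */
static uint32_t translate(uint32_t base_register, uint32_t virtual_addr)
{
    return base_register + virtual_addr;
}

int main(void)
{
    uint32_t base = 0x2000;   /* process runs in the partition starting at 0x2000 */
    uint32_t va   = 0x0362;
    printf("virtual 0x%04x -> physical 0x%04x\n", va, translate(base, va));  /* 0x2362 */
    return 0;
}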

Fixed Partitions (3)
§ Improvement
• Partition sizes need not be equal.
• Allocation strategies
– Maintain a separate queue for each partition size
– Maintain a single queue and allocate the closest job whose size fits an empty partition (first fit)
– Search the whole input queue and pick the largest job that fits an empty partition (best fit)
• IBM OS/MFT (Multiprogramming with a Fixed number of Tasks)

[Figure: unequal fixed partitions above the OS, with partition boundaries at 0x1000, 0x2000, 0x4000, and 0x8000.]


Variable Partitions (1)

[Figure: the virtual address (offset) is compared against P1's limit register; if it is smaller, it is added to P1's base register to form the physical address, otherwise a protection fault is raised.]

Variable Partitions (2)
§ Physical memory is broken up into variable-sized partitions
• IBM OS/MVT
• Hardware requirements: base register and limit register (sketched below)
– Physical address = virtual address + base register
– Base register loaded by the OS when it switches to a process
• The role of the limit register: protection
– If (physical address > base + limit), then raise a protection fault.
§ Allocation strategies
• First fit: allocate the first hole that is big enough
• Best fit: allocate the smallest hole that is big enough
• Worst fit: allocate the largest hole
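As a rough illustration of the base/limit scheme above, here is a small C sketch; the register values, the limit semantics (limit = partition size), and the error convention are assumptions for the example.

#include <stdint.h>
#include <stdio.h>

/* Returns 0 and fills *pa on success, -1 on a protection fault.
   The limit holds the partition size, so any offset >= limit is out of range. */
static int translate(uint32_t base, uint32_t limit, uint32_t va, uint32_t *pa)
{
    if (va >= limit)
        return -1;            /* would raise a protection fault in hardware */
    *pa = base + va;
    return 0;
}

int main(void)
{
    uint32_t pa;
    if (translate(0x2000, 0x0800, 0x0362, &pa) == 0)
        printf("physical = 0x%04x\n", pa);       /* 0x2362 */
    if (translate(0x2000, 0x0800, 0x0900, &pa) != 0)
        printf("protection fault\n");
    return 0;
}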

Variable Partitions (3)
§ Advantages
• No internal fragmentation
– Simply allocate the partition size to be just big enough for the process
– But, if we break physical memory into fixed-sized blocks and allocate memory in units of the block size (to reduce bookkeeping), we have internal fragmentation.
§ Problems
• External fragmentation
– As we load and unload jobs, holes are left scattered throughout physical memory
• Solutions to external fragmentation:
– Compaction
– Paging and segmentation

Virtual Memory (1)
§ Example
• What happens if two users simultaneously run this application?

#include <stdint.h>
#include <stdio.h>

int n = 0;

int main(void)
{
    /* Same example as before: both runs print the same virtual address. */
    printf("&n = 0x%08x\n", (unsigned int)(uintptr_t)&n);
    return 0;
}

% ./a.out
&n = 0x08049508
% ./a.out
&n = 0x08049508

Virtual Memory (2)
§ Virtual Memory (VM)
• Use virtual addresses for memory references.
– Large and contiguous
• The CPU performs address translation at run time.
– Instructions executed by the CPU issue virtual addresses.
– Virtual addresses are translated by hardware into physical addresses (with help from the OS).
– Virtual addresses are independent of the actual physical location of the data referenced.
– The OS determines the location of data in physical memory.

Virtual Memory (3)
§ Virtual Memory (VM)
• Physical memory is dynamically allocated or released on demand.
– Programs execute without requiring their entire address space to be resident in physical memory.
– Lazy loading
• Virtual addresses are private to each process.
– Each process has its own isolated virtual address space.
– One process cannot name addresses visible to others.
• There are many ways to translate virtual addresses into physical addresses…

Virtual Memory (4)
§ Advantages
• Separates the user's logical memory from physical memory.
– Abstracts main memory into an extremely large, uniform array of storage.
– Frees programmers from the concerns of memory-storage limitations.
• Allows the execution of processes that may not be completely in memory.
– Programs can be larger than physical memory.
– More programs can run at the same time.
– Less I/O is needed to load or swap each user program into memory.
• Allows processes to easily share files and address spaces.
• Provides an efficient mechanism for protection and process creation.

Virtual Memory (5)
§ Disadvantages
• Performance!!!
– In terms of both time and space
§ Implementation
• Paging
• Segmentation

Paging

Jinkyu Jeong (jinkyu@skku.edu)
Computer Systems Laboratory
Sungkyunkwan University
http://csl.skku.edu

Topics
§ Virtual memory implementation
• Paging
• Demand paging

§ Advanced VM techniques
• Shared memory
• Copy-on-write
• Memory-mapped files

Paging (1)
§ Paging
• Permits the physical address space of a process to be noncontiguous.
• Divide physical memory into fixed-sized blocks called frames.
• Divide logical memory into blocks of the same size called pages.
– Page (or frame) size is a power of 2 (typically 512 B – 8 KB)
• To run a program of size n pages, find n free frames and load the program.
• The OS keeps track of all free frames.
• Set up a page table to translate virtual to physical addresses.

Paging (2)
[Figure: pages 0–5 of Process A and pages 0–3 of Process B in virtual memory are mapped through their per-process page tables to scattered frames 0–11 in physical memory.]

Paging (3)
§ User's perspective
• Users (and processes) view memory as one contiguous address space from 0 through N.
– Virtual address space (VAS)
• In reality, pages are scattered throughout physical memory.
– Virtual-to-physical mapping
– This mapping is invisible to the program.
• Protection is provided because a program cannot reference memory outside of its VAS.
– The virtual address 0xdeadcafe maps to different physical addresses in different processes.

Paging (4)
§ Logical to physical memory

Paging (5)
§ Translating addresses
• A virtual address has two parts: <virtual page number (VPN) :: offset>
• VPN is an index into a page table
• The page table determines the page frame number (PFN)
• Physical address is <PFN :: offset> (a short sketch follows below)
§ Page tables
• Managed by the OS
• Map VPN to PFN
– VPN is the index into the table that determines the PFN
• One page table entry (PTE) per page in the virtual address space, i.e., one PTE per VPN
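The VPN/offset split described above can be sketched in a few lines of C. The page size, the tiny table, and the example mapping below are invented for illustration; a real MMU does this in hardware.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12u                    /* 4 KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NPAGES     16u                    /* tiny address space for the example */

static uint32_t page_table[NPAGES];       /* VPN -> PFN (flat, one-level table) */

static uint32_t translate(uint32_t va)
{
    uint32_t vpn    = va >> PAGE_SHIFT;          /* upper bits: virtual page number */
    uint32_t offset = va & (PAGE_SIZE - 1);      /* lower bits: offset within page  */
    uint32_t pfn    = page_table[vpn];           /* PTE lookup (valid bits omitted) */
    return (pfn << PAGE_SHIFT) | offset;         /* physical address = <PFN::offset> */
}

int main(void)
{
    page_table[4] = 0x46;                        /* map VPN 4 to PFN 0x46 */
    printf("0x%05x\n", translate(0x4AFE));       /* prints 0x46afe */
    return 0;
}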

Paging (6)
§ Address translation hardware

Paging (7)
§ Paging example
• Virtual address: 32 bits
• Physical address: 20 bits
• Page size: 4 KB
• Offset: 12 bits
• VPN: 20 bits
• Page table entries: 2^20

[Figure: worked example — the 32-bit virtual address 0x00004AFE splits into VPN 0x00004 and offset 0xAFE; the page table entry for VPN 4 holds PFN 0x46, giving the 20-bit physical address 0x46AFE.]

Paging (8)
§ Protection
• Memory protection is implemented by associating protection bits with each frame.
• Valid / invalid bit
– "Valid" indicates that the associated page is in the process' virtual address space, and is thus a legal page.
– "Invalid" indicates that the page is not in the process' virtual address space.
• A finer level of protection is possible for valid pages.
– Provide read-only, read-write, or execute-only protection.

Paging (9)
§ Page Table Entries (PTEs)
• Valid bit (V) says whether or not the PTE can be used.
– It is checked each time a virtual address is used.
• Reference bit (R) says whether the page has been accessed.
– It is set when a read or write to the page occurs.
• Modify bit (M) says whether or not the page is dirty.
– It is set when a write to the page occurs.
• Protection bits (Prot) control which operations are allowed on the page.
– Read, Write, Execute, etc.
• Page frame number (PFN) determines the physical page. (One possible bit packing is sketched below.)

Example PTE layout (field : width in bits): V : 1 | R : 1 | M : 1 | Prot : 2 | Page Frame Number (PFN) : 20
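One possible way to pack these fields into a 32-bit word is sketched below. The exact bit positions are an assumption for illustration; real architectures define their own layouts.

#include <stdint.h>
#include <stdio.h>

/* Assumed packing: bit 0 = V, bit 1 = R, bit 2 = M, bits 3-4 = Prot,
   bits 5-24 = PFN; remaining bits unused. */
#define PTE_V            (1u << 0)
#define PTE_R            (1u << 1)
#define PTE_M            (1u << 2)
#define PTE_PROT(p)      (((uint32_t)(p) & 0x3u) << 3)
#define PTE_PFN(f)       (((uint32_t)(f) & 0xFFFFFu) << 5)
#define PTE_GET_PFN(pte) (((pte) >> 5) & 0xFFFFFu)

int main(void)
{
    uint32_t pte = PTE_V | PTE_PROT(0x2) | PTE_PFN(0x46);   /* valid, prot = 2, PFN = 0x46 */
    printf("valid=%u pfn=0x%x\n", pte & PTE_V, PTE_GET_PFN(pte));
    return 0;
}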

Paging (10)
§ Advantages
• Easy to allocate physical memory.
– Physical memory is allocated from a free list of frames
– To allocate a frame, just remove it from the free list
• No external fragmentation.
• Easy to "page out" chunks of a program.
– All chunks are the same size (the page size).
– Use the valid bit to detect references to "paged-out" pages.
– The page size is usually chosen to be a convenient multiple of the disk block size.
• Easy to protect pages from illegal accesses.
• Easy to share pages.

Paging (11)
§ Disadvantages
• Can still have internal fragmentation
– A process may not use memory in an exact multiple of pages.
• Memory reference overhead
– 2 references per address lookup (page table, then memory)
– Solution: hardware support (TLB)
• Memory required to hold page tables can be large
– Need one PTE per page in the virtual address space
– 32-bit address space with 4 KB pages = 2^20 PTEs
– 4 bytes/PTE = 4 MB per page table
– OSes typically have separate page tables per process (25 processes = 100 MB of page tables)
– Solution: page the page tables, multi-level page tables, inverted page tables, etc.

Demand Paging (1)
§ Demand paging
• Bring a page into memory only when it is needed.
– Less I/O needed
– Less memory needed
– Faster response
– More users
• The OS uses main memory as a (page) cache of all of the data allocated by processes in the system.
– Initially, pages are allocated from physical memory frames.
– When physical memory fills up, allocating a page requires some other page to be evicted from its physical memory frame.
• Evicted pages go to disk (only need to write them if they are dirty)
– To a swap file
– Movement of pages between memory and disk is done by the OS
– Transparent to the application

Demand Paging (2)
§ Page faults
• What happens to a process that references a virtual address in a page that has been evicted?
– When the page was evicted, the OS set the PTE as invalid and stored (in the PTE) the location of the page in the swap file.
– When the process accesses the page, the invalid PTE causes an exception (fault).
• The page fault handler (in the kernel) is invoked by the fault (see the sketch below).
– The handler uses the invalid PTE to locate the page in the swap file.
– The handler reads the page into a physical frame and updates the PTE to point to it and to be valid.
– The handler restarts the faulted process.
• Where does the page that's read in go?
– Have to evict something else (page replacement algorithm)
– The OS typically tries to keep a pool of free pages around so that allocations don't inevitably cause evictions.

Demand Paging (3)
§ Handling a page fault

Demand Paging (4)
§ Why does this work?
• Locality
– Temporal locality: locations referenced recently tend to be referenced again soon.
– Spatial locality: locations near recently referenced locations are likely to be referenced soon.
• Locality means paging can be infrequent.
– Once you've paged something in, it will be used many times.
– On average, you use the things that are paged in.
– But this depends on many things:
- Degree of locality in the application
- Page replacement policy
- Amount of physical memory
- The application's reference pattern and memory footprint

Demand Paging (5)
§ Why is this "demand" paging?
• When a process first starts up, it has a brand new page table, with all PTE valid bits set to "false".
– No pages are yet mapped to physical memory.
• When the process starts executing:
– Instructions immediately fault on both code and data pages.
– Faults stop when all necessary code/data pages are in memory.
– Only the code/data that is needed (demanded!) by the process needs to be loaded.
– What is needed changes over time, of course…

Advanced VM Functionality
§ Virtual memory tricks
• Shared memory
• Copy on write
• Memory-mapped files

Shared Memory (1)
§ Shared memory
• Private virtual address spaces protect applications from each other.
• But this makes it difficult to share data.
– Parents and children in a forking Web server or proxy will want to share an in-memory cache without copying.
– Read/Write (access to shared data)
– Execute (shared libraries)
• We can use shared memory to allow processes to share data using direct memory references.
– Both processes see updates to the shared memory segment.
– How are we going to coordinate access to shared data?

Shared Memory (2)
§ Implementation
• How can we implement shared memory using page tables? (One POSIX example follows below.)
– Have PTEs in both tables map to the same physical frame.
– Each PTE can have different protection values.
– Must update both PTEs when the page becomes invalid.
• Can map shared memory at the same or different virtual addresses in each process' address space
– Different: flexible (no address space conflicts), but pointers inside the shared memory segment are invalid.
– Same: less flexible, but shared pointers are valid.
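The slide does not prescribe an API, but on POSIX systems one common way to get two processes' PTEs pointing at the same frames is shm_open() plus mmap(MAP_SHARED), sketched below; the segment name and size are arbitrary.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 4096;

    /* Create a named shared-memory object and size it to one page. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, size) < 0) { perror("shm"); return 1; }

    /* MAP_SHARED makes both processes' PTEs point at the same physical frames. */
    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                      /* child writes into the segment */
        strcpy(p, "hello from the child");
        _exit(0);
    }
    wait(NULL);                             /* parent sees the child's update */
    printf("parent reads: %s\n", p);

    munmap(p, size);
    shm_unlink("/demo_shm");
    return 0;
}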

Copy On Write (1)
§ Process creation
• Requires copying the entire address space of the parent process to the child process.
• Very slow and inefficient!
§ Solution 1: Use threads
• Sharing an address space is free.
§ Solution 2: Use the vfork() system call (sketched below)
• vfork() creates a process that shares the memory address space of its parent.
• To prevent the parent from overwriting data needed by the child, the parent's execution is blocked until the child exits or executes a new program.
• Any change by the child is visible to the parent once it resumes.
• Useful when the child immediately calls exec().
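A minimal sketch of the vfork()-then-exec pattern described above; the program being exec'ed is arbitrary, and, following the vfork() rules, the child does nothing except exec or _exit.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = vfork();          /* child borrows the parent's address space */
    if (pid == 0) {
        /* The child must only exec or _exit; it must not modify parent data. */
        execl("/bin/echo", "echo", "child ran via vfork+exec", (char *)NULL);
        _exit(127);               /* only reached if exec failed */
    }
    /* The parent was blocked until the exec above; now it continues. */
    waitpid(pid, NULL, 0);
    printf("parent resumes after the child exec'ed\n");
    return 0;
}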

Copy On Write (2)
§ Solution 3: Copy On Write (COW)
• Instead of copying all pages, create shared mappings of the parent's pages in the child's address space.
• Shared pages are protected as read-only in the child.
– Reads happen as usual.
– Writes generate a protection fault and trap to the OS; the OS copies the page, changes the page mapping in the child's page table, and restarts the write instruction.

[Figure: after fork(), the parent's and the child's page tables map the same physical pages, marked COW; when the child writes to one of them, the page is copied and the write proceeds on the copy.]

Memory-Mapped Files (1)
§ Memory-mapped files
• Mapped files enable processes to do file I/O using memory references.
– Instead of open(), read(), write(), close()
• mmap(): bind a file to a virtual memory region (example below)
– PTEs map virtual addresses to physical frames holding file data
– <virtual address base + N> refers to offset N in the file
• Initially, all pages in the mapped region are marked invalid.
– The OS reads a page from the file whenever an invalid page is accessed.
– The OS writes a page to the file when it is evicted from physical memory.
– If the page is not dirty, no write is needed.
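A small example of reading a file through mmap() on a POSIX system, in the spirit of the slide; the file name is arbitrary and error handling is minimal.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);      /* any readable, non-empty file works */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* Map the whole file; <base + N> now refers to offset N in the file. */
    char *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching base[i] demand-pages the file data in; no read() calls needed. */
    fwrite(base, 1, st.st_size, stdout);

    munmap(base, st.st_size);
    close(fd);
    return 0;
}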

Memory-Mapped Files (2)
§ Note:
• The file is essentially the backing store for that region of the virtual address space (instead of the swap file).
• Virtual address space not backed by a "real" file is also called "anonymous VM".
§ Advantages
• Uniform access for files and memory (just use pointers)
• Less copying
• Several processes can map the same file, allowing the pages in memory to be shared.
§ Drawbacks
• The process has less control over data movement.
• Does not generalize to streamed I/O (pipes, sockets, etc.)

Address Translation

Jinkyu Jeong (jinkyu@skku.edu)
Computer Systems Laboratory
Sungkyunkwan University
http://csl.skku.edu

Topics
§ How to reduce the size of page tables?

§ How to reduce the time for address translation?

Page Tables
§ Managing page tables
• Space overhead of page tables
– The size of the page table for a 32-bit address space with 4 KB pages = 4 MB (per process)
• How can we reduce this overhead?
– Observation: we only need to map the portion of the address space actually being used (a tiny fraction of the entire address space)
• How do we map only what is being used?
– Make the page table structure dynamically extensible
– Use another level of indirection:
» Two-level, hierarchical, hashed, etc.


Two-level Page Tables (1)

Two-level Page Tables (2)
§ Two-level page tables
• Virtual addresses have 3 parts: <master page # :: secondary page # :: offset>
– Master page table: master page number → secondary page table.
– Secondary page table: secondary page number → page frame number.

Two-level Page Tables (3)
§ Example
• 32-bit address space, 4 KB pages, 4 bytes/PTE
• Want the master page table to fit in one page
• Address split: master page # (10 bits) | secondary page # (10 bits) | offset (12 bits) — see the sketch after the figure below

[Figure: the master page number indexes the master page table, which points to a secondary page table; the secondary page number indexes that table to get the page frame, which is combined with the offset to form the physical address into page frames 0…N of physical memory.]
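A C sketch of the 10/10/12 walk from the example above, using heap-allocated arrays as the master and secondary tables; the example mapping is invented, and validity checks are omitted.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define ENTRIES 1024u                     /* 10 bits of index per level */

/* Each master entry points at a secondary table; each secondary entry is a PFN. */
static uint32_t *master[ENTRIES];

static uint32_t translate(uint32_t va)
{
    uint32_t m   = (va >> 22) & 0x3FFu;   /* bits 31..22: master page #    */
    uint32_t s   = (va >> 12) & 0x3FFu;   /* bits 21..12: secondary page # */
    uint32_t off =  va        & 0xFFFu;   /* bits 11..0 : offset           */

    uint32_t *secondary = master[m];      /* first lookup                  */
    uint32_t pfn = secondary[s];          /* second lookup                 */
    return (pfn << 12) | off;
}

int main(void)
{
    /* Only the one secondary table that is actually used gets allocated. */
    master[1] = calloc(ENTRIES, sizeof(uint32_t));
    master[1][0x2A] = 0x46;               /* map VA 0x0042A000..fff to PFN 0x46 */

    printf("0x%08x -> 0x%05x\n", 0x0042A123u, translate(0x0042A123u));
    return 0;
}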

Multi-level Page Tables
§ Address translation in the Alpha AXP architecture
• Three-level page tables
• 64-bit address divided into 3 segments (coded in bits 63/62)
– seg0 (0x): user code
– seg1 (11): user stack
– kseg (10): kernel
• Alpha 21064
– Page size: 8 KB
– Virtual address: 43 bits
– Each page table is one page long.

TLBs (1)
§ Making address translation efficient
• The original page table scheme doubled the cost of memory lookups
– One lookup into the page table, another to fetch the data
• Two-level page tables triple the cost!
– Two lookups into the page tables, a third to fetch the data
– And this assumes the page table is in memory
• How can we make this more efficient?
– Goal: make fetching from a virtual address about as efficient as fetching from a physical address
– Solutions:
- Cache the virtual-to-physical translation in hardware
- Translation Lookaside Buffer (TLB)
- TLB managed by the Memory Management Unit (MMU)

TLBs (2)
§ Translation Lookaside Buffers
• Translate virtual page numbers into PTEs (not physical addresses)
• Can be done in a single machine cycle

TLBs (3)
§ The TLB is implemented in hardware
• Fully associative cache (all entries looked up in parallel)
• Cache tags are virtual page numbers.
• Cache values are PTEs (entries from page tables).
• With the PTE + offset, the MMU can directly calculate the physical address.
§ TLBs exploit locality
• Processes only use a handful of pages at a time.
– 16–48 entries in a TLB is typical (covering 64–192 KB)
– Can hold the "hot set" or "working set" of a process
• Hit rates are therefore really important. (A toy lookup is sketched below.)
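A toy, purely software TLB in C to show the tag/value idea (VPN tags, PTE-like values); a real TLB compares all tags in parallel in hardware, and the sizes here are arbitrary.

#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 16

/* Tag = virtual page number, value = the cached PTE (here just a PFN + valid bit). */
struct tlb_entry { int valid; uint32_t vpn; uint32_t pfn; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Returns 1 on a hit and fills *pfn; 0 on a miss (the walker/OS would then refill). */
static int tlb_lookup(uint32_t vpn, uint32_t *pfn)
{
    for (int i = 0; i < TLB_ENTRIES; i++)       /* hardware checks all entries at once */
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pfn = tlb[i].pfn;
            return 1;
        }
    return 0;
}

int main(void)
{
    tlb[0] = (struct tlb_entry){ 1, 0x00004, 0x46 };   /* pretend a refill already happened */

    uint32_t pfn;
    uint32_t va = 0x4AFE;
    if (tlb_lookup(va >> 12, &pfn))
        printf("hit: pa = 0x%05x\n", (pfn << 12) | (va & 0xFFF));
    else
        printf("miss: walk the page tables\n");
    return 0;
}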

TLBs (4)
§ Address translation with a TLB

TLBs (5)
§ Handling TLB misses
• Address translations are mostly handled by the TLB
– > 99% of translations, but there are occasional TLB misses
– On a miss, who places the translation into the TLB?
• Hardware (MMU): Intel x86
– Knows where the page tables are in memory
– The OS maintains the tables, the hardware accesses them directly
– Page tables have to be in a hardware-defined format
• Software-loaded TLB (OS)
– A TLB miss faults to the OS; the OS finds the right PTE and loads the TLB
– Must be fast (typically 20–200 cycles)
– The CPU ISA has instructions for TLB manipulation
– Page tables can be in any format convenient for the OS (flexible)

TLBs (6)
§ Managing TLBs
• The OS ensures that the TLB and page tables are consistent.
– When the OS changes the protection bits of a PTE, it needs to invalidate the PTE if it is in the TLB.
• Reload the TLB on a process context switch.
– Remember, each process typically has its own page tables.
– Need to invalidate all the entries in the TLB (flush the TLB).
– In IA-32, the TLB is flushed automatically when the contents of CR3 (the page directory base register) are changed.
– (cf.) Alternatively, we can store the PID as part of the TLB entry, but this is expensive.
• When the TLB misses and a new PTE is loaded, a cached PTE must be evicted.
– Choosing a victim PTE is called the "TLB replacement policy".
– Implemented in hardware, usually simple (e.g., LRU)

Memory Reference (1)
§ Situation
• A process is executing on the CPU, and it issues a read to a (virtual) address.
[Figure: the virtual address (VA) goes to the TLB; on a TLB hit the physical address (PA) is used to fetch the data from memory; on a TLB miss the page tables are walked, returning a PTE or raising a page fault or protection fault.]

Memory Reference (2)
§ The common case
• The read goes to the TLB in the MMU.
• The TLB does a lookup using the page number of the address.
• The page number matches, returning a PTE.
• The TLB validates that the PTE protection allows reads.
• The PTE specifies which physical frame holds the page.
• The MMU combines the physical frame and offset into a physical address.
• The MMU then reads from that physical address and returns the value to the CPU.

Memory Reference (3)
§ TLB misses: two possibilities
(1) The MMU loads the PTE from the page table in memory.
– Hardware-managed TLB; the OS is not involved in this step.
– The OS has already set up the page tables so that the hardware can access them directly.
(2) Trap to the OS.
– Software-managed TLB; the OS intervenes at this point.
– The OS does the lookup in the page tables and loads the PTE into the TLB.
– The OS returns from the exception, and the TLB continues.
• At this point, there is a valid PTE for the address in the TLB.

Memory Reference (4)
§ TLB misses
• The page table lookup (by hardware or the OS) can cause a recursive fault if the page table is paged out.
– Assuming the page tables are in the OS virtual address space.
– Not a problem if the tables are in physical memory.
• When the TLB has the PTE, it restarts the translation.
– The common case is that the PTE refers to a valid page in memory.
– The uncommon case is that the TLB faults again on the PTE because of the PTE protection bits (e.g., the page is invalid).

Memory Reference (5)
§ Page faults
• The PTE can indicate a protection fault
– Read/Write/Execute – operation not permitted on the page
– Invalid – virtual page not allocated, or page not in physical memory
• The TLB traps to the OS (software takes over)
– Read/Write/Execute – the OS usually sends the fault back to the process, or might be playing tricks (e.g., copy on write, mapped files).
– Invalid (not allocated) – the OS sends the fault to the process (e.g., segmentation fault).
– Invalid (not in physical memory) – the OS allocates a frame, reads the page from disk, and maps the PTE to the physical frame.

Page Replacement

Jinkyu Jeong (jinkyu@skku.edu)
Computer Systems Laboratory
Sungkyunkwan University
http://csl.skku.edu

Topics
§ What if the physical memory becomes full?

• Page replacement algorithms

§ How to manage memory among competing processes?

Page Replacement (1)
§ Page replacement
• When a page fault occurs, the OS loads the faulted page from disk into a page frame of memory.
• At some point, the process has used all of the page frames it is allowed to use.
• When this happens, the OS must replace a page for each page faulted in.
– It must evict a page to free up a page frame.
• The page replacement algorithm determines how this is done.

Page Replacement (2)
§ Evicting the best page
• The goal of the replacement algorithm is to reduce the fault rate by selecting the best victim page to remove.
• The best page to evict is the one that will never be touched again.
– The process will never again fault on it.
• "Never" is a long time, so picking the page closest to "never" is the next best thing.
– Belady's proof: evicting the page that won't be used for the longest period of time minimizes the number of page faults.

Belady's Algorithm
§ Optimal page replacement
• Replace the page that will not be used for the longest time in the future.
• Has the lowest fault rate for any page reference stream.
• Problem: you have to predict the future.
• Why is Belady's algorithm useful? – Use it as a yardstick!
– Compare other algorithms with the optimal to gauge the room for improvement.
– If the optimal is not much better, the algorithm is pretty good; otherwise the algorithm could use some work.
– The lower bound depends on the workload, but random replacement is pretty bad.

FIFO (1)
§ First-In First-Out
• Obvious and simple to implement
– Maintain a list of pages in the order they were paged in
– On replacement, evict the one brought in longest ago
• Why might this be good?
– Maybe the page brought in longest ago is no longer being used.
• Why might this be bad?
– Maybe that's not the case.
– We don't have any information either way.
• FIFO suffers from "Belady's Anomaly"
– The fault rate might increase when the algorithm is given more memory.

FIFO (2)
§ Example: Belady's anomaly
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
• 3 frames: 9 faults
• 4 frames: 10 faults
(The short simulation below reproduces these counts.)

[Table omitted: frame contents at each step for 3 frames and for 4 frames, showing 9 vs. 10 faults.]
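The anomaly is easy to reproduce. The short, self-contained C simulation below runs FIFO replacement over the slide's reference string and prints 9 faults with 3 frames and 10 with 4; the circular-buffer implementation is just one convenient way to code FIFO.

#include <stdio.h>

/* Count page faults for FIFO replacement with a given number of frames. */
static int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16];                  /* resident pages (assumes nframes <= 16) */
    int used = 0, next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit)
            continue;
        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];        /* free frame available         */
        } else {
            frames[next] = refs[i];          /* evict the oldest page (FIFO) */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}

int main(void)
{
    const int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    const int n = sizeof(refs) / sizeof(refs[0]);

    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));   /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));   /* 10 */
    return 0;
}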

LRU (1)
§ Least Recently Used
• LRU uses reference information to make a more informed replacement decision.
– Idea: past experience gives us a guess of future behavior.
– On replacement, evict the page that has not been used for the longest time in the past.
– LRU looks at the past; Belady's algorithm looks at the future.
• Implementation
– Counter implementation: record a timestamp on every reference
– Stack implementation: maintain a stack of pages
• Why do we need an approximation?

LRU (2)
§ Approximating LRU
• Many LRU approximations use the PTE reference (R) bit.
– The R bit is set whenever the page is referenced (read or written).
• Counter-based approach (sketched below)
– Keep a counter for each page.
– At regular intervals, for every page, do:
- If R = 0, increment the counter (the page hasn't been used)
- If R = 1, zero the counter (the page has been used)
- Zero the R bit
– The counter will contain the number of intervals since the last reference to the page.
– The page with the largest counter is the least recently used.
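A sketch of that interval scan in C; the page count, the source of the R bits, and the victim-selection helper are all illustrative.

#include <stdio.h>

#define NPAGES 4

static int r_bit[NPAGES];          /* set by "hardware" when a page is referenced   */
static unsigned counter[NPAGES];   /* intervals since the page was last referenced  */

/* Run once per interval (e.g., from a timer): age unreferenced pages, clear R bits. */
static void scan_interval(void)
{
    for (int p = 0; p < NPAGES; p++) {
        if (r_bit[p])
            counter[p] = 0;        /* used during this interval        */
        else
            counter[p]++;          /* not used: one more idle interval */
        r_bit[p] = 0;
    }
}

/* Victim = page with the largest counter (least recently used, approximately). */
static int pick_victim(void)
{
    int victim = 0;
    for (int p = 1; p < NPAGES; p++)
        if (counter[p] > counter[victim])
            victim = p;
    return victim;
}

int main(void)
{
    r_bit[0] = r_bit[2] = 1;  scan_interval();   /* interval 1: pages 0 and 2 used */
    r_bit[0] = 1;             scan_interval();   /* interval 2: only page 0 used   */
    printf("evict page %d\n", pick_victim());    /* pages 1 and 3 idle the longest */
    return 0;
}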

Second Chance (1)
§ Second chance or LRU clock
• FIFO that gives a second chance to a recently referenced page.
• Arrange all physical page frames in a big circle (a clock).
• A clock hand is used to select a good LRU candidate (see the sketch below).
– Sweep through the pages in circular order like a clock.
– If the R bit is off, the page hasn't been used recently and we have a victim.
– If the R bit is on, turn it off and go to the next page.
• The hand moves quickly when pages are needed.
– Low overhead if we have plenty of memory.
– If memory is large, the "accuracy" of the information degrades.


Second Chance (2)

Not Recently Used (1)
§ NRU or enhanced second chance
• Use the R (reference) and M (modify) bits
– Periodically (e.g., on each clock interrupt), R is cleared, to distinguish pages that have not been referenced recently from those that have been.

[Figure: state diagram of the four page classes — Class 0 (R=0, M=0), Class 1 (R=0, M=1), Class 2 (R=1, M=0), Class 3 (R=1, M=1) — with transitions on reads, writes, clock interrupts, and page-in.]

Not Recently Used (2)
§ Algorithm
• Remove a page at random from the lowest-numbered nonempty class.
• It is better to remove a modified page that has not been referenced in at least one clock tick than a clean page that is in heavy use.
• Used in the Macintosh.
§ Advantages
• Easy to understand.
• Moderately efficient to implement.
• Gives performance that, while certainly not optimal, may be adequate.

LFU (1)
§ Counting-based page replacement
• A software counter is associated with each page.
• At each clock interrupt, for each page, the R bit is added to the counter.
– The counters denote how often each page has been referenced.
§ Least Frequently Used (LFU)
• The page with the smallest count is replaced.
• (cf.) Most Frequently Used (MFU) page replacement
– The page with the largest count is replaced
– Based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
• LFU never forgets anything.
– A page may be heavily used during the initial phase of a process, but then never used again.

LFU (2)
§ Aging
• The counters are shifted right by 1 bit before the R bit is added to the leftmost bit (sketched below).
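A sketch of the aging update in C, using 8-bit counters; the counter width and page count are arbitrary choices for the example.

#include <stdint.h>
#include <stdio.h>

#define NPAGES 3

static uint8_t age[NPAGES];     /* aging counter per page (8 bits here)          */
static int     r_bit[NPAGES];   /* reference bit gathered since the last tick    */

/* On each clock tick: shift right, add R into the leftmost bit, then clear R. */
static void aging_tick(void)
{
    for (int p = 0; p < NPAGES; p++) {
        age[p] = (uint8_t)(age[p] >> 1);
        if (r_bit[p])
            age[p] |= 0x80;               /* the most recent reference weighs the most */
        r_bit[p] = 0;
    }
}

int main(void)
{
    r_bit[0] = 1;              aging_tick();   /* tick 1: page 0 referenced */
    r_bit[1] = 1;              aging_tick();   /* tick 2: page 1 referenced */
    r_bit[0] = r_bit[1] = 1;   aging_tick();   /* tick 3: pages 0 and 1     */

    for (int p = 0; p < NPAGES; p++)
        printf("page %d: age = 0x%02x\n", p, age[p]);
    /* Page 2 ends at 0x00 (never referenced) and would be evicted first. */
    return 0;
}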

Allocation of Frames
§ Problem
• In a multiprogramming system, we need a way to allocate physical memory among competing processes.
– What if a victim page belongs to another process?
– How do we determine how much memory to give to each process?
• Fixed-space algorithms
– Each process is given a limit of pages it can use.
– When it reaches its limit, it replaces from its own pages.
– Local replacement: some processes may do well, others suffer.
• Variable-space algorithms
– A process's set of pages grows and shrinks dynamically.
– Global replacement: one process can ruin it for the rest (Linux).

Thrashing (1)
§ Thrashing
• What happens when page replacement algorithms fail.
• Most of the time is spent by the OS paging data back and forth from disk.
– No time is spent doing useful work.
– The system is overcommitted.
– There's no idea which pages should be in memory to reduce faults.
– It could be that there just isn't enough physical memory for all the processes.
• Possible solutions
– Swapping – write out all pages of a process
– Buy more memory.


Thrashing (2)

Working Set Model (1)
§ Working set
• The working set of a process is used to model the dynamic locality of its memory usage.
– i.e., working set = set of pages the process currently "needs"
– Peter Denning, 1968.
• Definition
– WS(t, w) = { pages P such that P was referenced in the time interval (t, t - w) }
– t: time, w: working set window size (measured in page references)
• A page is in the working set only if it was referenced in the last w references.

Working Set Model (2)
§ Working set size (WSS)
• The number of pages in the working set = the number of pages referenced in the interval (t, t - w)
• The working set size changes with program locality.
– During periods of poor locality, more pages are referenced.
– Within such a period of time, the working set is larger.
• Intuitively, the working set must be in memory to prevent heavy faulting (thrashing).

Working Sets and Page Fault Rates
§ There is a direct relationship between the working set of a process and its page-fault rate
§ The working set changes over time
§ Peaks and valleys over time

Summary (1)
§ VM mechanisms
• Physical and virtual addressing
• Partitioning, paging, segmentation
• Page table management, TLBs, etc.
§ VM policies
• Page replacement algorithms
• Memory allocation policies
§ VM requires hardware and OS support
• MMU (Memory Management Unit)
• TLB (Translation Lookaside Buffer)
• Page tables, etc.

Summary (2)
§ VM optimizations
• Demand paging (space)
• Managing page tables (space)
• Efficient translation using TLBs (time)
• Page replacement policy (time)
§ Advanced functionality
• Shared memory
• Copy on write
• Mapped files
