Memory Management Chapter 5




Page 1: Chapter5 Memory Management

Memory Management

Chapter 5

Page 2: Chapter5 Memory Management

Memory Management – Early Systems: Single-User Contiguous Scheme

• Each program is loaded in its entirety into memory and is allocated as much contiguous memory space as needed.

• If a program was too large, it couldn't be executed.

• Minimal amount of work done by Memory Manager.

• Hardware needed:
  1) a register to store the base address;
  2) an accumulator to track the size of the program as it is loaded into memory.

Page 3: Chapter5 Memory Management

[Figure: main memory holding the OS (addresses 0–255) and Jobs 1–4, with boundaries at 256, 300, 420, and 880 (memory ends at 1024). The job loaded at address 300 is 120 locations long, so its base register holds 300 and its limit register holds 120.]
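The base/limit pair in the figure amounts to a bounds check followed by an offset. A minimal sketch (the `to_physical` name and the use of `MemoryError` are illustrative), using the figure's values of base 300 and limit 120:

```python
def to_physical(logical, base, limit):
    """Translate a logical address to a physical one.

    Every address the program issues is checked against the limit
    register; valid addresses are offset by the base register.
    """
    if logical < 0 or logical >= limit:
        raise MemoryError(f"address {logical} outside 0..{limit - 1}")
    return base + logical

# The job from the figure: loaded at base 300, 120 locations long.
print(to_physical(100, base=300, limit=120))  # 400
```

Any address at or beyond the limit (here, 120) is rejected before translation, which is what protects the OS and the other jobs.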

Page 4: Chapter5 Memory Management

Fixed (Static) Partitions

• Allows multiprogramming by using fixed partitions - one partition for each job

• The size of each partition remains static once the system is in operation. Each partition can only be reconfigured when the computer system is shut down, reconfigured and restarted.

• The partition sizes are critical. If the partition sizes are too small, larger jobs will be rejected. If partition sizes are too big, memory can be wasted if a job does not occupy the entire partition.

• Entire program is stored contiguously in memory during entire execution.

• Internal fragmentation is a problem. Internal fragmentation occurs when there are unused memory spaces within the partition itself.

Page 5: Chapter5 Memory Management

[Figure: fixed partitions of 300k each. Job 1 (250k) leaves 50k of internal fragmentation in its partition, Job 2 (30k) leaves 270k, and Job 3 (200k) leaves 100k.]

Page 6: Chapter5 Memory Management

[Figure: a user program of 720k loaded into memory, leaving 180k of internal fragmentation in its partition.]

Page 7: Chapter5 Memory Management

Simplified Fixed Partition Memory Table (Table 2.1)

Partition size   Memory address   Access   Partition status
100K             200K             Job 1    Busy
25K              300K             Job 4    Busy
25K              325K                      Free
50K              350K             Job 2    Busy

Page 8: Chapter5 Memory Management

• As each job terminates, the status of its memory partition is changed from busy to free so that an incoming job can be assigned to that partition.

• The fixed partition scheme works well if all of the jobs run on the system are of the same size, or if the sizes are known ahead of time and don't vary between reconfigurations.

Page 9: Chapter5 Memory Management

• Job 3 must wait even though 70K of free space is available in Partition 1, where Job 1 occupies only 30K of the 100K available. Jobs are allocated space on the basis of "first available partition of required size."

Page 10: Chapter5 Memory Management

[Figure: fixed partitions of 100k, 25k, 25k, and 50k. J1 (30k) occupies the 100k partition (70k of internal fragmentation), J2 (50k) the 50k partition, and J4 (25k) a 25k partition; J3 (30k) must wait.]

Page 11: Chapter5 Memory Management

Dynamic Partitions

• Available memory is kept in contiguous blocks, and jobs are given only as much memory as they request when loaded.

• Improves memory use over fixed partitions.

• Performance deteriorates as new jobs enter the system.

• Fragments of free memory are created between blocks of allocated memory (external fragmentation).

Page 12: Chapter5 Memory Management
Page 13: Chapter5 Memory Management

[Figure: dynamic partitions over time. The OS (10k) is followed by J1 (10k), J2 (15k), J3 (20k), and J4 (50k), loaded contiguously. As jobs terminate, J5 (5k), J6 (30k), and J7 (10k) are placed into the freed areas, leaving small free fragments scattered between allocated blocks. J8 (30k) cannot be loaded even though enough total free space exists: external fragmentation.]

Page 14: Chapter5 Memory Management

• In this example, eight jobs are submitted for processing and allocated space on a first-come-first-served basis. Job 8 has to wait even though there's enough free memory between partitions to accommodate it, because that free space is not contiguous and jobs must be loaded in a contiguous manner.

Page 15: Chapter5 Memory Management

Dynamic Partition Allocation Schemes

• First-fit: Allocate the first partition that is big enough.
  – Keep free/busy lists organized by memory location (low-order to high-order).
  – Faster in making the allocation.

• Best-fit: Allocate the smallest partition that is big enough.
  – Keep free/busy lists ordered by size (smallest to largest).
  – Produces the smallest leftover partition.
  – Makes best use of memory.

Page 16: Chapter5 Memory Management

First-Fit Allocation Example (Table 2.2)

Page 17: Chapter5 Memory Management

• Using a first-fit scheme, Job 1 claims the first available space. Job 2 then claims the first partition large enough to accommodate it, but by doing so it takes the last block large enough to accommodate Job 3.

• Therefore, Job 3 (indicated by the asterisk) must wait until a large enough block becomes available, even though there's 75K of unused memory space (internal fragmentation).

• Notice that the memory list is ordered according to memory location.

Page 18: Chapter5 Memory Management

Fixed Partition Memory

First-Fit Allocation

[Figure: first-fit with fixed partitions of 30k, 50k, 15k, and 20k. J1 (10k) goes into the 30k partition (20k wasted), J2 (20k) into the 50k partition (30k wasted), and J4 (15k) is also loaded; J3 (30k) is waiting because no remaining partition is large enough.]

Page 19: Chapter5 Memory Management

Dynamic Partition Memory

First-Fit Allocation

[Figure: the same jobs with dynamic partitions. Each job is given exactly the memory it requests, so J1 (10k), J2 (20k), J4 (15k), and J3 (30k) are all loaded, and the remaining memory stays free.]

Page 20: Chapter5 Memory Management
Page 21: Chapter5 Memory Management

• Note: A request for a block of 200 spaces has just been given to the Memory Manager. Using the first-fit algorithm and starting from the top of the list, the Memory Manager locates the first block of memory large enough to accommodate the job, which is at location 6785.

• The job is then loaded, starting at location 6785 and occupying the next 200 spaces.

• The next step is to adjust the free list to indicate that the block of free memory now starts at location 6985 (not 6785 as before) and that it contains only 400 spaces (not 600 as before).
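The walk just described can be sketched in a few lines. Only the 600-space block at 6785 comes from the example; the earlier, smaller free-list entries are invented for illustration:

```python
def first_fit(free_list, size):
    """Allocate `size` spaces from the first free block large enough.

    free_list: list of (start_address, block_size) ordered by address.
    Returns the allocated start address and shrinks (or removes) the
    chosen free block in place; returns None if the job must wait.
    """
    for i, (start, block) in enumerate(free_list):
        if block >= size:
            if block == size:
                free_list.pop(i)                 # exact fit: drop entry
            else:
                free_list[i] = (start + size, block - size)
            return start
    return None

# Hypothetical free list; only (6785, 600) is taken from the slide.
free = [(4075, 105), (5225, 5), (6785, 600)]
print(first_fit(free, 200))   # 6785
print(free[-1])               # (6985, 400): the entry shrinks as described
```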

Page 22: Chapter5 Memory Management

First-Fit Allocation

[Figure: the free list before and after serving the 200-space request for J1. The first block large enough begins at location 6785 with 600 spaces; after loading, the entry becomes location 6985 with 400 spaces.]

Page 23: Chapter5 Memory Management
Page 24: Chapter5 Memory Management

Fixed Partition Memory

Best-Fit Allocation

[Figure: best-fit with fixed partitions of 15k, 30k, 20k, and 50k. J1 (10k) goes into the 15k partition (5k wasted), J2 (20k) into the 20k partition, J3 (30k) into the 30k partition, and J4 (10k) into the 50k partition (40k wasted). J5 (5k) is waiting even though 45k is lost to internal fragmentation.]

Page 25: Chapter5 Memory Management

Best-Fit Allocation

Dynamic Partition Memory

[Figure: the same jobs with dynamic partitions. Each job receives exactly the memory it requests, so J5 (5k) can be loaded as well.]

Page 26: Chapter5 Memory Management
Page 27: Chapter5 Memory Management

• A request for a block of 200 spaces has just been given to the Memory Manager. Using the best-fit algorithm and starting from the top of the list, the Memory Manager searches the entire list and locates a block of memory starting at location 7600, which is the smallest block that’s large enough to accommodate the job. The choice of this block minimizes the wasted space (only 5 spaces are wasted, which is less than in the four alternative blocks).

• The job is then stored, starting at location 7600 and occupying the next 200 spaces.

• Now the free list must be adjusted to show that the block of free memory starts at location 7800 (not 7600 as before) and that it contains only 5 spaces (not 205 as before).
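The best-fit search can be sketched the same way. Only the 205-space block at 7600 is taken from the example; the other entries are invented for illustration:

```python
def best_fit(free_list, size):
    """Allocate from the smallest free block that is large enough.

    The whole list must be scanned, which is why best-fit is slower
    than first-fit. Returns None if no block fits (the job waits).
    """
    candidates = [(block, i) for i, (_, block) in enumerate(free_list)
                  if block >= size]
    if not candidates:
        return None
    _, i = min(candidates)                   # smallest adequate block
    start, block = free_list[i]
    if block == size:
        free_list.pop(i)
    else:
        free_list[i] = (start + size, block - size)
    return start

free = [(4075, 105), (6785, 600), (7600, 205)]
print(best_fit(free, 200))   # 7600: only 5 spaces are wasted
print(free[-1])              # (7800, 5), matching the adjusted free list
```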

Page 28: Chapter5 Memory Management

Best-Fit Allocation

[Figure: the free list before and after serving the 200-space request for J1. The smallest adequate block begins at location 7600 with 205 spaces; after loading, the entry becomes location 7800 with 5 spaces.]

Page 29: Chapter5 Memory Management

• Best-Fit vs. First-Fit

First-Fit                                     Best-Fit
Faster to implement, but may not make         Uses memory efficiently, but slower to
efficient use of memory space.                implement: the entire free list must be
                                              searched before an allocation can be made.
Algorithm is less complex.                    Algorithm is more complex: it must find the
                                              smallest block of memory into which the job
                                              can fit.
Memory list organized by memory location,     Memory list organized by size, smallest to
low-order to high-order.                      largest.

Page 30: Chapter5 Memory Management

• Release of Memory Space: Deallocation

• Deallocation for fixed partitions is simple:
  – The Memory Manager resets the status of the memory block to "free".

• Deallocation for dynamic partitions is more complex, because the Memory Manager tries to combine free areas of memory whenever possible.

• Example: if the block to be deallocated is adjacent to another free block, then:
  – The deallocated block is combined with the free block.
  – The memory list is changed to reflect the starting address of the new free block (if the starting address of the new free block has changed).
  – The free memory block size is changed to show its new size.
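The merge-on-deallocation rule can be sketched as follows (the `deallocate` name and the block layout in the usage example are illustrative):

```python
def deallocate(free_list, start, size):
    """Return a block to the free list, merging adjacent free blocks.

    free_list: list of (start_address, block_size) ordered by address.
    """
    free_list.append((start, size))
    free_list.sort()
    merged = []
    for s, b in free_list:
        if merged and merged[-1][0] + merged[-1][1] == s:
            ps, pb = merged[-1]
            merged[-1] = (ps, pb + b)   # adjacent: combine into one block
        else:
            merged.append((s, b))
    free_list[:] = merged

# Hypothetical layout: a free block at 100 (50 spaces); the job at
# 150 (30 spaces) terminates and its block merges with the neighbour.
free = [(100, 50)]
deallocate(free, 150, 30)
print(free)   # [(100, 80)]
```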

Page 31: Chapter5 Memory Management
Page 32: Chapter5 Memory Management

• Relocatable Dynamic Partitions
  – The Memory Manager relocates programs to gather together all of the empty blocks and compacts them into one block of memory large enough to accommodate some or all of the jobs waiting to get in.

• Compaction
  – Used to consolidate all external fragments (free areas in memory) into one contiguous block. In some cases, compaction enhances throughput by allowing more programs to be active at the same time.

• Compaction Steps
  – Relocate every program in memory so they're contiguous.
  – Adjust every address, and every reference to an address, within each program to account for the program's new location in memory.
  – Leave alone all other values within the program (e.g., data values).

Page 33: Chapter5 Memory Management

• Relocation
  – The process by which programs are repositioned in main memory to allow compaction of free memory areas. When relocation takes place, the addresses specified by a program for either branching or data reference are modified, during execution, so that the program executes correctly.

• The Memory Manager relocates programs to gather all empty blocks and compact them into one memory block.

• Memory compaction (also called garbage collection or defragmentation) is performed by the OS to reclaim fragmented sections of memory space.

• The Memory Manager optimizes the use of memory and improves throughput by compacting and relocating.

• Relocation can be time-consuming and should be done sparingly. Options for how often to do relocation and compaction include:
  – When a certain percentage of main memory is used up (e.g., 75%).
  – When the number of programs waiting for execution reaches a prescribed upper limit.
  – When a prescribed amount of time has elapsed.
  – Combinations of the above.
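The compaction steps can be sketched as a single pass over the job list. The job sizes below match the example figure (OS 10k, J1 8k, J4 32k, J2 16k, J5 48k); the starting addresses are illustrative, and adjusting addresses inside each program is represented only by the returned offsets:

```python
def compact(jobs, os_size):
    """Slide every job down toward the OS boundary so all free space
    ends up in one contiguous block at the top of memory.

    jobs: list of (name, start_address, size) ordered by address.
    Returns the relocated job list and, per job, the offset that must
    be applied to every address reference inside that program.
    """
    next_free = os_size
    relocated, offsets = [], {}
    for name, start, size in jobs:
        offsets[name] = next_free - start      # negative: moved down
        relocated.append((name, next_free, size))
        next_free += size
    return relocated, offsets

jobs = [("J1", 30, 8), ("J4", 50, 32), ("J2", 92, 16), ("J5", 110, 48)]
new_jobs, offsets = compact(jobs, os_size=10)
print(new_jobs)  # [('J1', 10, 8), ('J4', 18, 32), ('J2', 50, 16), ('J5', 66, 48)]
```

After the pass, the jobs sit contiguously after the OS and everything above `next_free` is one free block, which is exactly what lets the waiting job in the example be loaded.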

Page 34: Chapter5 Memory Management

Example

Page 35: Chapter5 Memory Management

[Figure: memory (210k) before compaction. The OS (10k) is followed by J1 (8k), J4 (32k), J5 (48k), and J2 (16k), with free fragments scattered between them; J6 (84k) is waiting because no single free block is large enough.]

Page 36: Chapter5 Memory Management

[Figure: compaction. All jobs are relocated so that J1, J4, J5, and J2 sit contiguously after the OS, gathering the scattered fragments into one free block of 96k.]

Page 37: Chapter5 Memory Management

[Figure: after compaction, the free space forms a single contiguous block and J6 (84k) can be loaded.]

Page 38: Chapter5 Memory Management

Memory Management – Recent Systems

• Early schemes were limited to storing the entire program in memory.
  – Fragmentation.
  – Overhead due to relocation.

• More sophisticated memory schemes now exist that:
  – Eliminate the need to store programs contiguously.
  – Eliminate the need for the entire program to reside in memory during execution.

• More recent memory management schemes include:
  – Paged Memory Allocation
  – Demand Paging Memory Allocation
  – Segmented Memory Allocation

Page 39: Chapter5 Memory Management

Paged Memory Allocation

• Divides each incoming job into pages of equal size.

• Works well if page size = size of a memory block (page frame) = size of a disk section (sector, block).

• Before executing a program, the Memory Manager:
  1. Determines the number of pages in the program.
  2. Locates enough empty page frames in main memory.
  3. Loads all of the program's pages into them.

Page 40: Chapter5 Memory Management
Page 41: Chapter5 Memory Management

• The program has 350 lines, referred to by the system as line 0 through line 349.

• At compilation time the job is divided into pages:
  – Page 0 contains the first hundred lines.
  – Page 1 contains the second hundred lines.
  – Page 2 contains the third hundred lines.
  – Page 3 contains the remaining fifty lines.
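The page count in step 1 above is just a ceiling division; a minimal sketch using the 350-line job:

```python
import math

def pages_needed(job_lines, page_size):
    """Number of equal-size pages a job is divided into; the last
    page may be only partly full (internal fragmentation)."""
    return math.ceil(job_lines / page_size)

print(pages_needed(350, 100))  # 4: three full pages plus one with 50 lines
```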

Page 42: Chapter5 Memory Management

[Figure: Job 1 (350 lines, numbered 0–349) divided into Page 0 (lines 0–99), Page 1 (lines 100–199), Page 2 (lines 200–299), and Page 3 (the remaining 50 lines). Main memory is divided into equal-sized page frames numbered 0–12; the OS occupies the first frames, and Job 1's pages are loaded into free frames, which need not be contiguous.]

Page 43: Chapter5 Memory Management

• Paging requires 3 tables to track a job's pages:

• 1. Job Table (JT) – 2 entries for each active job:
  – The size of the job and the memory location of its Page Map Table.
  – Dynamic: grows and shrinks as jobs are loaded and completed.

• 2. Page Map Table (PMT) – 1 entry per page:
  – The page number and its corresponding page frame memory address.
  – Page numbers are sequential (Page 0, Page 1, …).

• 3. Memory Map Table (MMT) – 1 entry for each page frame:
  – Its location and free/busy status.

Page 44: Chapter5 Memory Management

Job Table (JT)

Job     Size
Job 1   360k
Job 2   200k

Page Map Table (PMT) for Job 1

Page No   Frame No
Page 0    Frame 8
Page 1    Frame 10
Page 2    Frame 5
Page 3    Frame 11

Memory Map Table (MMT), frames 8–12

Frame No   Status
Frame 8    busy
Frame 9    free
Frame 10   busy
Frame 11   busy
Frame 12   free

Page 45: Chapter5 Memory Management

• Displacement (Figure 3.2)

• The displacement (offset) of a line is how far away the line is from the beginning of its page.
  – Used to locate that line within its page frame.
  – A relative factor.

• For example, lines 0, 100, 200, and 300 are the first lines of pages 0, 1, 2, and 3 respectively, so each has a displacement of zero.

Page 46: Chapter5 Memory Management
Page 47: Chapter5 Memory Management
Page 48: Chapter5 Memory Management

• Example: using 100 lines as the page size, the page number and displacement (the location within that page) of line 214 are found by dividing the line number to be located by the page size:

  214 ÷ 100 = quotient 2, remainder 14

  – Quotient (2) = page number
  – Remainder (14) = displacement

• Line 214 is therefore located on Page 2, 15 lines down (line 14) from the top of the page.
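Combining this calculation with Job 1's PMT from the earlier tables (Page 0 → Frame 8, Page 1 → Frame 10, Page 2 → Frame 5, Page 3 → Frame 11) gives the full address resolution. A sketch (the `resolve` name is illustrative; it assumes frames are the same size as pages, as the scheme requires):

```python
PAGE_SIZE = 100   # lines per page, as in the running example

# Job 1's Page Map Table from the tables above: page -> frame.
pmt_job1 = {0: 8, 1: 10, 2: 5, 3: 11}

def resolve(line, pmt, page_size=PAGE_SIZE):
    """Map a job's line number to (frame, displacement) plus the
    physical line number inside memory."""
    page, displacement = divmod(line, page_size)
    frame = pmt[page]            # a missing entry would be a page fault
    return frame, displacement, frame * page_size + displacement

print(resolve(214, pmt_job1))   # (5, 14, 514): Page 2 lives in Frame 5
```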

Page 49: Chapter5 Memory Management

[Figure: a 215-line job divided into Page 0 (lines 0–99), Page 1 (lines 100–199), and Page 2 (lines 200–214). Line 214 has displacement 14: it is the 15th line of Page 2, which holds only the remaining 15 lines.]

Page 50: Chapter5 Memory Management

Demand Paging

• Bring a page into memory only when it is needed, so less I/O and less memory are required.
  – Faster response.

• Takes advantage of the fact that programs are written sequentially, so not all pages are needed at once. For example:
  – User-written error-handling modules.
  – Mutually exclusive modules.
  – Certain program options that are either mutually exclusive or not always accessible.
  – Many tables assigned a fixed amount of address space even though only a fraction of each table is actually used.

• Demand paging has made virtual memory widely available.

Page 51: Chapter5 Memory Management

Demand paging

[Figure: pages of Program A and Program B being moved between main memory and disk; a resident page is swapped out to make room, and the needed page is swapped in.]

Page 52: Chapter5 Memory Management

Demand paging

[Figure: a subsequent reference causes another of Program B's pages to be swapped in on demand.]

Page 53: Chapter5 Memory Management

• Requires a high-speed direct-access storage device that can work directly with the CPU.

• How and when the pages are passed (or "swapped") depends on predefined policies that determine when to make room for needed pages and how to do so.

• Thrashing is a problem with demand paging:
  – Thrashing – an excessive amount of page swapping back and forth between main memory and secondary storage.
  – Operation becomes inefficient.
  – Caused when a page is removed from memory but is called back shortly thereafter.
  – Can occur across jobs, when a large number of jobs are vying for a relatively small number of free pages.
  – Can happen within a job (e.g., in loops that cross page boundaries).

• Page fault – a failure to find a page in memory.

Page 54: Chapter5 Memory Management

Tables in Demand Paging

• Job Table.

• Page Map Table (with 3 new fields that):
  – Determine if the requested page is already in memory.
  – Determine if the page contents have been modified.
  – Determine if the page has been referenced recently.
  – These are used to decide which pages should remain in main memory and which should be swapped out.

• Memory Map Table.

Page 55: Chapter5 Memory Management

Page Fault

[Figure: a page fault. The process executes "Load M", but page M's entry in the Page Map Table is marked invalid (i): only pages A, C, and F are resident (frames 0, 2, and 5), so page M must be brought in from disk.]

Page 56: Chapter5 Memory Management

Page Fault

[Figure: the same "Load M" reference seen from the Page Map Table: the invalid (i) entry for page M triggers the fault, and after M is fetched from disk the entry is updated with its frame number.]

Page 57: Chapter5 Memory Management


Page 58: Chapter5 Memory Management

Page Replacement Policies

• The policy that selects the page to be removed is crucial to system efficiency. Policies used include:
  – First-in first-out (FIFO) policy – the best page to remove is the one that has been in memory the longest.
  – Least-recently-used (LRU) policy – chooses the page least recently accessed to be swapped out.

Page 59: Chapter5 Memory Management
Page 60: Chapter5 Memory Management
Page 61: Chapter5 Memory Management

A B A C A B D B A C D

a) First-In-First-Out (FIFO), with 2 page frames:

Request:  A  B  A  C  A  B  D  B  A  C  D
Frame 1:  A  A  A  C  C  B  B  B  A  A  D
Frame 2:     B  B  B  A  A  D  D  D  C  C
Fault:    *  *     *  *  *  *     *  *  *

9 page faults

Page 62: Chapter5 Memory Management
Page 63: Chapter5 Memory Management

A B A C A B D B A C D

b) Least Recently Used (LRU), with 2 page frames:

Request:  A  B  A  C  A  B  D  B  A  C  D
Frame 1:  A  A  A  A  A  A  D  D  A  A  D
Frame 2:     B  B  C  C  B  B  B  B  C  C
Fault:    *  *     *     *  *     *  *  *

8 page faults

Page 64: Chapter5 Memory Management

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

Example: 3 frames (3 pages can be in memory at once for the process)

a) First-In-First-Out (FIFO):

Request:  7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1
Frame 1:  7  7  7  2  2  2  2  4  4  4  0  0  0  0  0  0  0  7  7  7
Frame 2:     0  0  0  0  3  3  3  2  2  2  2  2  1  1  1  1  1  0  0
Frame 3:        1  1  1  1  0  0  0  3  3  3  3  3  2  2  2  2  2  1
Fault:    *  *  *  *     *  *  *  *  *  *        *  *        *  *  *

15 page faults
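The FIFO trace above can be checked with a short simulation (function name illustrative). It reproduces the 15 faults of this example and the 9 faults of the earlier A–D string with 2 frames:

```python
from collections import deque

def fifo_faults(reference, frames):
    """Count page faults under FIFO replacement: on a fault with
    memory full, evict the page that has been resident longest."""
    memory, queue, faults = set(), deque(), 0
    for page in reference:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())   # oldest page out
            memory.add(page)
            queue.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(ref, 3))                  # 15
print(fifo_faults(list("ABACABDBACD"), 2))  # 9
```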

Page 65: Chapter5 Memory Management

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1   Example: 3 frames

b) Least Recently Used (LRU) – replace the page that has not been used for the longest period of time:

Request:  7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1
Frame 1:  7  7  7  2  2  2  2  4  4  4  0  0  0  1  1  1  1  1  1  1
Frame 2:     0  0  0  0  0  0  0  0  3  3  3  3  3  3  0  0  0  0  0
Frame 3:        1  1  1  3  3  3  2  2  2  2  2  2  2  2  2  7  7  7
Fault:    *  *  *  *     *     *  *  *  *        *     *     *

12 page faults
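LRU can be simulated the same way by tracking each page's last use (function name illustrative). It reproduces the 12 faults above and the 8 faults of the earlier A–D string with 2 frames:

```python
def lru_faults(reference, frames):
    """Count page faults under LRU replacement: on a fault with
    memory full, evict the page whose last use is furthest back."""
    memory, last_use, faults = set(), {}, 0
    for t, page in enumerate(reference):
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                victim = min(memory, key=lambda p: last_use[p])
                memory.discard(victim)
            memory.add(page)
        last_use[page] = t            # every access refreshes recency
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref, 3))                  # 12
print(lru_faults(list("ABACABDBACD"), 2))  # 8
```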

Page 66: Chapter5 Memory Management

• LRU
  – The efficiency of LRU is only slightly better than that of FIFO.
  – LRU is a stack algorithm removal policy: increasing main memory (adding page frames) causes either a decrease in, or the same number of, page interrupts.
  – LRU therefore doesn't have the anomaly that FIFO does.

Page 67: Chapter5 Memory Management

Belady's anomaly problem (in FIFO)

[Figure: number of page faults (y-axis, 2–18) plotted against number of frames (x-axis, 1–7). Under FIFO, increasing the number of frames can sometimes increase, rather than decrease, the number of page faults.]
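The anomaly can be demonstrated with a classic reference string (not from the slides): under FIFO, a fourth frame produces more faults than three frames do.

```python
from collections import deque

def fifo_faults(reference, frames):
    """Page-fault count under FIFO replacement."""
    memory, queue, faults = set(), deque(), 0
    for page in reference:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))   # 9
print(fifo_faults(ref, 4))   # 10: more frames, yet more faults
```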

Page 68: Chapter5 Memory Management

• Pros and Cons of Demand Paging

• A job is no longer constrained by the size of physical memory (virtual memory). (Pro)

• Uses memory more efficiently than previous schemes because sections of a job used seldom or not at all aren’t loaded into memory unless specifically requested. (Pro)

• Increased overhead caused by tables and page interrupts. (Con)

Page 69: Chapter5 Memory Management

• Segmented Memory Allocation

• Programmers commonly structure their programs in modules (logical groupings of code).
  – A segment is a logical unit such as: main program, subroutine, procedure, function, local variables, global variables, common block, stack, symbol table, or array.
  – Main memory is not divided into page frames, because the size of each segment is different.

• In a segmented memory allocation scheme, jobs are divided into a number of distinct logical units called segments, one for each module, each containing pieces that perform related functions.

• Memory is allocated dynamically.

Page 70: Chapter5 Memory Management

• Segment Map Table (SMT)

• When a program is compiled, segments are set up according to the program's structural modules.

• Each segment is numbered, and a Segment Map Table (SMT) is generated for each job.
  – It contains segment numbers, their lengths, access rights, status, and (when each is loaded into memory) its location in memory.

Page 71: Chapter5 Memory Management

• Tables Used in Segmentation

• Memory Manager needs to track segments in memory:

• Job Table (JT) lists every job in process (one for whole system).

• Segment Map Table lists details about each segment (one for each job).

• Memory Map Table monitors allocation of main memory (one for whole system).

Page 72: Chapter5 Memory Management

[Figure: a job made up of a main program (Seg 0), Subroutine A (Seg 1), and Subroutine B (Seg 2). Seg 0 is loaded at memory address 4000 and Seg 1 at 7000.]

Segment Map Table (SMT)

Seg No   Size   Status   Access   Memory Address
0        200    busy     E        4000
1        400    busy     E        7000
2        240    free     E        6700
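Address translation through the SMT pairs a segment number with a displacement and checks the displacement against the segment's length. A sketch using the loaded segments from the table above (the `seg_to_physical` name and use of `MemoryError` are illustrative):

```python
# Loaded segments from the SMT above: seg -> (size, base address).
smt = {0: (200, 4000), 1: (400, 7000)}

def seg_to_physical(seg, displacement, table):
    """Translate (segment number, displacement) to a physical address,
    rejecting displacements beyond the segment's length."""
    size, base = table[seg]
    if not 0 <= displacement < size:
        raise MemoryError(f"displacement {displacement} exceeds size {size}")
    return base + displacement

print(seg_to_physical(1, 50, smt))   # 7050: 50 into Subroutine A
```

Unlike paging, the bound differs per segment, which is why the SMT must store each segment's length.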

Page 73: Chapter5 Memory Management

• Pros and Cons of Segmentation
  – Compaction.
  – External fragmentation.
  – Secondary storage handling.
  – Memory is allocated dynamically.

[Figure: free areas of 200k, 100k, 80k, and 40k remain between processes P0–P3. Program B (340k) cannot fit into any single free area even though 420k is free in total: external fragmentation.]

Page 74: Chapter5 Memory Management

How paging overcomes segmentation's external fragmentation:

[Figure: the same memory divided into eight 100k page frames. Program B (340k) is split into Page 0–Page 3 (three full 100k pages plus a final page using only 40k) and loaded into non-contiguous free frames. The cost is internal fragmentation in the last frame (60k unused).]

Page 75: Chapter5 Memory Management

• Virtual Memory (VM)

• Even though only a portion of each program is stored in memory, virtual memory gives the appearance that programs are completely loaded in main memory during their entire processing time.

• Shared programs and subroutines are loaded "on demand," reducing the storage requirements of main memory.

• VM is implemented through demand paging and segmentation schemes.

Page 76: Chapter5 Memory Management

Comparison of VM with Paging and with Segmentation

Paging                                        Segmentation
Allows internal fragmentation within          Doesn't allow internal fragmentation.
page frames.
Doesn't allow external fragmentation.         Allows external fragmentation.
Programs are divided into equal-sized         Programs are divided into unequal-sized
pages.                                        segments.
Absolute address calculated using page        Absolute address calculated using segment
number and displacement.                      number and displacement.
Requires PMT.                                 Requires SMT.

Page 77: Chapter5 Memory Management

Advantages of VM

1. Works well in a multiprogramming environment because most programs spend a lot of time waiting.

2. Job’s size is no longer restricted to the size of main memory (or the free space within main memory).

3. Memory is used more efficiently.

4. Allows an unlimited amount of multiprogramming.

5. Eliminates external fragmentation when used with paging and eliminates internal fragmentation when used with segmentation.

6. Allows a program to be loaded multiple times occupying a different memory location each time.

7. Allows the sharing of code and data.

8. Facilitates dynamic linking of program segments.

Page 78: Chapter5 Memory Management

• Disadvantages of VM

• Increased processor hardware costs.

• Increased overhead for handling paging interrupts.

• Increased software complexity to prevent thrashing.