Operating System Unit-3


  • 7/31/2019 Operating System Unit-3

    1/50

    UNIT 3 MEMORY MANAGEMENT

    DEF: Memory:

Memory consists of a large array of words or bytes, each with its own address. The CPU fetches instructions from memory according to the value of the program counter.

Before going into depth, let us look at the basic concepts relevant to memory management: address binding, logical vs. physical address space, dynamic loading, dynamic linking and shared libraries, and overlays.

    3.1 Address Binding:

Generally, programs are stored on a disk; those that are to be executed are brought into memory, where they form an input queue.

    Procedure for Execution:

Select a process from the input queue. Load that process into memory. During execution, the process accesses both data and instructions from memory. Eventually, the process terminates.

Binding of instructions and data to memory addresses can be done at any of the following stages:

    Compile Time:

If it is known at compile time where the process will reside in memory, the compiler generates absolute code. Compilation then starts from the starting address at which the process will be stored.

    Load Time:


If it is not known at load time where the process will reside in memory, the compiler generates relocatable code.

If the starting address changes, we need only reload the user code to incorporate the changed value.

Execution Time:

If a process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.

The execution of a user program is shown below:

    Fig 3.1 Execution of a user Program


    Logical Address:

An address generated by the CPU is called a logical address.

Physical Address:

The address seen by the memory unit, that is, the address loaded into the memory-address register (MAR), is called a physical address.

    3.2 Logical Address Vs Physical Address:

The compile-time and load-time binding methods generate identical logical and physical addresses; with execution-time binding, the logical and physical addresses differ. The logical address is also called a virtual address. The set of all logical addresses generated by a program is called the logical-address space.

Similarly, the set of all physical addresses corresponding to these logical addresses is called the physical-address space.

The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU).

An example of a memory-management unit is shown below. Here the base register is called a relocation register. The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory.

For example, if the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000; an access to location 346 is mapped to location 14346.
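The relocation arithmetic in this example is simple enough to sketch in code (a minimal illustration; the function name is ours, not part of any real MMU interface):

```python
def mmu_relocate(logical_address, relocation_register):
    """Map a CPU-generated logical address to a physical address
    by adding the relocation (base) register value, as the MMU does."""
    return relocation_register + logical_address

# Base register set to 14000, as in the example above.
print(mmu_relocate(0, 14000))    # location 0   -> 14000
print(mmu_relocate(346, 14000))  # location 346 -> 14346
```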


    The dynamic relocation is shown below,

Fig 3.2 Dynamic relocation using a relocation register

    3.3 Dynamic Loading:

Dynamic loading is mainly used to improve memory-space utilization. A routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed. When a routine needs to call another routine, the calling routine first checks whether the other routine has been loaded.

If it has not, the relocatable linking loader is called to load the desired routine into memory and to update the program's address table to reflect this change. Finally, control is passed to the newly loaded routine.

    Advantage of Dynamic Loading:


Unused routines are never loaded, so both time and memory are saved.

    3.4 Dynamic Linking and Shared Libraries:

    Dynamic Linking:

Dynamic linking is similar to dynamic loading. With dynamic linking, however, it is linking (rather than loading) that is postponed until execution time.

With dynamic linking, a stub is included in the image for each library-routine reference. The stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present.

When a stub is executed, it first checks whether the needed routine is already available in memory.

If it is not, the stub loads the routine into memory. Either way, the stub then executes the routine.

    Shared Libraries:

A library can be replaced by a new version, and all programs that reference the library automatically use the new version.

Without dynamic linking, all such programs would have to be relinked to gain access to the new library.

More than one version of a library can be loaded into memory. Each program holds its version information, so that it knows which version it is currently using.


If only minor changes are made, the library keeps the same version number. If major changes are made, the version number is incremented. Only programs compiled with the new library version are affected by any incompatible changes. Other programs, linked before the new version was installed, continue without errors to use the older library. This system is known as shared libraries.

    3.5 Overlays:

Overlays can be used to execute a process whose size is larger than the physical memory.

Only the needed instructions and data are kept in memory at any time. When other instructions are needed, they are loaded into the space previously occupied by instructions and data that are no longer needed.

Let us see an example of overlays with a two-pass assembler. During pass 1 it constructs the symbol table. During pass 2 it generates the machine-language code. Assume that the sizes of these components are as follows:

Pass 1: 70 KB

Pass 2: 80 KB

Symbol table: 20 KB

Common routines: 30 KB

To load everything at once, we would need 200 KB of memory. If only 150 KB is available, we cannot run our process. Notice, however, that pass 1 and pass 2 do not need to be in memory at the same time.

We therefore define two overlays:


Overlay A: symbol table, common routines, pass 1.

Overlay B: symbol table, common routines, pass 2.

We add an overlay driver (10 KB) and start with overlay A in memory. When we finish pass 1, we jump to the overlay driver, which reads overlay B into memory, overwriting overlay A, and then transfers control to pass 2.

Overlay A needs only 120 KB, whereas overlay B needs 130 KB (refer to Figure 3.3). We can now run our assembler in 150 KB of memory. It will even load somewhat faster, because less data needs to be transferred before execution starts.
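The overlay size arithmetic above can be checked quickly (variable names are ours):

```python
pass1, pass2 = 70, 80          # component sizes in KB
symtab, common, driver = 20, 30, 10

overlay_a = symtab + common + pass1          # symbol table + common + pass 1
overlay_b = symtab + common + pass2          # symbol table + common + pass 2
peak = driver + max(overlay_a, overlay_b)    # most memory needed at any time
print(overlay_a, overlay_b, peak)            # 120 130 140 -> fits in 150 KB
```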

    Fig 3.3 Overlays for a two-pass assembler

The code for overlay A and the code for overlay B are kept on disk as absolute memory images and are read by the overlay driver when needed.

    Special relocation and linking algorithms are required to construct the overlays.

    3.6 Swapping:

Swapping is the process of temporarily moving a process out of main memory to a backing store (disk), and later bringing it back into memory for continued execution.


For execution, the process is moved from disk to memory. After completing execution, the process is moved back from memory to disk.

    This process is shown pictorially,

    Fig 3.4 Swapping of two processes using a disk as a backing store

Round-robin scheduling is a good example of when swapping is used: when a quantum expires, the memory manager swaps out the process that has just finished and swaps in another process; in the meantime, the CPU scheduler allocates a time slice to some other process already in memory.

When each process finishes its quantum, it is swapped with another process.

A swapping policy is also used with priority-based scheduling algorithms. If a higher-priority process arrives and requests service, the memory manager swaps out the lower-priority process so that it can load and execute the higher-priority process.

When the higher-priority process finishes, the lower-priority process can be swapped back in and continued. This variant of swapping is sometimes called roll out (swap out), roll in (swap in).


Roll out: moving a process from memory to disk.

Roll in: moving a process from disk to memory.

Normally, a process that is swapped out is swapped back into the same memory space that it occupied previously.

This restriction depends on the method of address binding. If binding is done at assembly or load time, the process cannot be moved to a different location.

If execution-time binding is used, the process can be swapped into a different memory location.

Swapping requires a backing store. The backing store is a fast disk, large enough to hold copies of all memory images for all users, and it must provide direct access to these memory images.

The system maintains a ready queue consisting of all processes whose memory images are on the backing store or in memory.

Whenever the CPU scheduler decides to execute a process, it calls the dispatcher.

The dispatcher checks whether the next process in the queue is in memory.

If it is not, and there is no free memory region, the dispatcher swaps out a process currently in memory and swaps in the desired process.

It then reloads the registers as normal and transfers control to the selected process.

The performance of swapping depends mainly on the transfer time.


The total transfer time is directly proportional to the amount of memory swapped.

3.7 Contiguous Memory Allocation

The memory is usually divided into two partitions: one for the operating system and one for user processes. We can place the operating system in either low memory or high memory. The major factor affecting this decision is the location of the interrupt vector. Since the interrupt vector is often in low memory, programmers usually place the operating system in low memory as well. In contiguous memory allocation, each process is contained in a single contiguous section of memory.

    3.8 Memory Protection

We must protect the operating system from user processes, and protect user processes from one another. We can provide this protection using a relocation register and a limit register.

The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses. For example, relocation = 100040 and limit = 74600. With relocation and limit registers, each logical address must be less than the limit register; the MMU then maps the logical address dynamically by adding the value in the relocation register.
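A minimal sketch of the check-then-relocate logic, using the register values from the example above (the function and its trap are illustrative, not a real OS interface):

```python
def translate(logical_address, relocation=100040, limit=74600):
    """Compare a logical address against the limit register; if legal,
    add the relocation register.  An out-of-range address traps to the OS."""
    if logical_address >= limit:
        raise MemoryError("trap: addressing error")  # illustrative OS trap
    return relocation + logical_address

print(translate(0))      # -> 100040
print(translate(74599))  # last legal address -> 174639
```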


This mapped address is sent to memory, as shown pictorially below:

    Fig 3.5 Hardware support for relocation and limit registers

When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values.

Because every address generated by the CPU is checked against these registers, we can protect both the operating system and the other users' programs.

The relocation-register scheme provides an effective way to allow the operating system to change its size dynamically.

For example, the operating system contains code and buffer space for device drivers.

If a device driver is not currently in use, there is no need to keep its code and data in memory; that space can be used for other purposes. Such code is sometimes called transient operating-system code.

This code can be brought in or removed as needed, so the size of the operating system can change during program execution.

    3.9 Memory Allocation


One simple scheme divides memory into several fixed-sized partitions, where each partition may contain exactly one process.

In this multiple-partition method, when a partition is free, a process is selected from the input queue and loaded into the free partition.

As processes enter the system, they are put into an input queue. The operating system takes into account the memory requirements of each process and the amount of available memory space when determining which processes can be allocated memory.

When a process is allocated space, it is loaded into memory and can then compete for the CPU. When a process terminates, it releases its memory, which the operating system may then fill with another process from the input queue.

In general, a set of holes of various sizes is scattered throughout memory at any given time. When a process arrives and needs memory, the system searches the set for a hole that is large enough for the process.

If the hole is too large, it is split into two parts: one part is allocated to the arriving process; the other is returned to the set of holes.

When a process terminates, it releases its block of memory, which is then placed back in the set of holes.

If the new hole is adjacent to other holes, the adjacent holes are merged to form one larger hole. At this point, the system checks whether there are processes waiting for memory and whether this newly freed and recombined memory could satisfy the demands of any of these waiting processes.

The set of holes is searched to determine which hole is best to allocate. The first-fit, best-fit, and worst-fit strategies are the ones most commonly used to select a free hole from the set of available holes.


First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or from where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.

    Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is kept ordered by size. This strategy produces the smallest leftover hole.

Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
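The three strategies can be sketched as a toy simulation over a list of hole sizes (function name and sample sizes are ours):

```python
def allocate(holes, request, strategy):
    """Return the index of the hole chosen for `request` KB under the
    given strategy, or None if no hole is large enough."""
    candidates = [i for i, h in enumerate(holes) if h >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0]                            # first big-enough hole
    if strategy == "best":
        return min(candidates, key=lambda i: holes[i])  # smallest leftover
    if strategy == "worst":
        return max(candidates, key=lambda i: holes[i])  # largest leftover
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]        # free-hole sizes in KB
print(allocate(holes, 212, "first"))     # -> 1 (the 500 KB hole)
print(allocate(holes, 212, "best"))      # -> 3 (the 300 KB hole)
print(allocate(holes, 212, "worst"))     # -> 4 (the 600 KB hole)
```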

Both first fit and best fit are better than worst fit in terms of decreasing time and storage utilization.

Neither first fit nor best fit is clearly better in terms of storage utilization, but first fit is generally faster. All these algorithms, however, suffer from external fragmentation. As processes are loaded into and removed from memory, the free memory space is broken into little pieces.

In the worst case, we could have a block of free (wasted) memory between every two processes. If all this memory were in one big free block, we might be able to run several more processes.

The selection of first fit versus best fit can affect the amount of fragmentation.


First fit is better for some systems, whereas best fit is better for others.

Another factor is which end of a free block is allocated: whether the leftover piece is at the top or at the bottom.

Depending on the total amount of memory storage and the average process size, external fragmentation may be a minor or a major problem.

Statistical analysis of first fit reveals that, even with some optimization, given N allocated blocks, another 0.5N blocks will be lost to fragmentation.

That is, one-third of memory may be unusable. This property is known as the 50-percent rule.

    3.10 Fragmentation

Memory fragmentation can be internal as well as external. Consider a multiple-partition allocation scheme with a hole of 18,464 bytes. Suppose the next process requests 18,462 bytes. If we allocate exactly the requested block, we are left with a hole of 2 bytes.

The general approach is to break physical memory into fixed-sized blocks and allocate memory in units of block size.

With this approach, the memory allocated to a process may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation: memory that is internal to a partition but is not being used.
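The internal-fragmentation arithmetic can be checked with a quick calculation, assuming a hypothetical 4,096-byte block size:

```python
block = 4096                      # assumed fixed block size in bytes
request = 18462                   # bytes requested by the process

# Memory is handed out in whole blocks, so round the request up.
blocks_needed = -(-request // block)          # ceiling division -> 5 blocks
allocated = blocks_needed * block             # 20480 bytes handed out
internal_fragmentation = allocated - request  # 2018 bytes allocated but unused
print(blocks_needed, allocated, internal_fragmentation)
```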

One solution to the problem of external fragmentation is compaction, that is, grouping all free memory together in one large block.

Compaction is not always possible: if relocation is static, compaction cannot be done; if relocation is dynamic, compaction can be done at run time.


If addresses are relocated dynamically, relocation requires only moving the program and data and then changing the base register to reflect the new base address.

When compaction is possible, we must determine its cost. The simplest compaction algorithm is to move all processes toward one end of memory; all holes move in the other direction, producing one large hole of available memory.

This scheme can be quite expensive. Another possible solution to the external-fragmentation problem is to permit the logical-address space of a process to be noncontiguous, thus allowing a process to be allocated physical memory wherever it is available.

Two complementary techniques achieve this solution: paging and segmentation.

    4. PAGING

DEF: Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous.

    Basic Method

Physical memory is broken into fixed-sized blocks called frames. Logical memory is also broken into blocks of the same size, called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store. The backing store is divided into fixed-sized blocks that are the same size as the memory frames.


    The hardware support for paging is shown below,

    Fig 4.1 Paging hardware

Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).

The page number is used as an index into a page table. The page table contains the base address of each page in physical memory.

The physical address sent to the memory unit is calculated by adding the base address to the page offset.

Paging has no external fragmentation. The page size (equal to the frame size) is defined by the hardware. Generally, the size of a page is a power of 2.

    The paging model of memory is shown below,


    Fig 4.2 Paging model of logical and physical memory

    Consider the memory which is shown below,


    Fig 4.3 Paging example for a 32-byte memory with 4-byte pages

Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), let us show how the user's view of memory can be mapped into physical memory, as shown in Fig 4.3.

Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20 (= (5 x 4) + 0).

Logical address 3 (page 0, offset 3) maps to physical address 23 (= (5 x 4) + 3).


Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24 (= (6 x 4) + 0).

Logical address 13 maps to physical address 9. Paging itself is a form of dynamic relocation. When a process is to be executed, its size, expressed in pages, is examined. One frame is needed for each page of the process. If the process needs n pages, at least n frames must be available in memory. If n frames are available, they are allocated to the process. The first page of the process is loaded into one of the allocated frames, and the frame number is put in the page table for that process. The next page is loaded into another frame, its frame number is put into the page table, and so on, as shown in Fig 4.4.
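The translations worked above can be reproduced in a few lines. The page table below follows the example (page 0 in frame 5, page 1 in frame 6, page 3 in frame 2); the entry for page 2 is an assumption taken from the usual figure and is not exercised here:

```python
PAGE_SIZE = 4                     # bytes per page, as in Fig 4.3
page_table = [5, 6, 1, 2]         # page number -> frame number

def to_physical(logical):
    """Split a logical address into (page, offset) and look up the frame."""
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(to_physical(0))    # page 0, offset 0 -> 20
print(to_physical(3))    # page 0, offset 3 -> 23
print(to_physical(4))    # page 1, offset 0 -> 24
print(to_physical(13))   # page 3, offset 1 -> 9
```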

The operating system maintains a frame table, analogous to the page table. The frame table records the following information about physical memory:

Allocated frames

Available frames

Total number of frames

The frame table has one entry for each physical page frame, indicating whether the frame is free or allocated and, if it is allocated, to which page of which process.


    Fig 4.4 Free frames. (a) Before allocation. (b) After allocation

The operating system maintains a copy of the page table for each process, just as it maintains a copy of the instruction counter and register contents.

This copy is used to translate logical addresses to physical addresses whenever the operating system must map a logical address to a physical address manually.

    Hardware Support:

Each operating system has its own methods for storing page tables. A pointer to the page table is stored with the other register values in the process control block (PCB).

When the dispatcher starts a process, it reloads these registers and defines the correct hardware page-table values from the stored user page table.


The hardware implementation of the page table can be done in several ways. In the simplest case, the page table is implemented as a set of dedicated registers.

These registers should be built with very high-speed logic to make the paging-address translation efficient. However, dedicated registers are not practical for large page tables; to overcome this, a page-table base register (PTBR) is used.

The page-table base register (PTBR) points to the (large) page table in memory. With this scheme, however, the time to access a user memory location increases. For example, suppose we want to access memory location i:

We must first index into the page table, using the value in the PTBR offset by the page number for i, to get the frame number.

The frame number is then combined with the page offset to produce the actual address.

Thus, to access a single byte, two memory accesses are needed: one for the page-table entry and one for the byte itself.

To avoid this problem, a special, fast-lookup hardware cache is used, called the translation look-aside buffer (TLB).

The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts:

A key (or tag) and

A value.

The TLB is used with page tables in the following way. When a logical address is generated by the CPU, its page number is presented to the TLB.


If the page number is found (a TLB hit), its frame number is immediately available and can be used to access memory.

If the page number is not found (a TLB miss), a memory reference to the page table must be made, as shown pictorially below:

    Fig 4.5 Paging hardware with TLB

This whole task adds little overhead (typically less than 10 percent) to the memory-access time. We add the page number and frame number to the TLB, so that they will be found quickly on the next reference.

If the TLB is already full of entries, the operating system must select one for replacement.

Some TLBs store address-space identifiers (ASIDs) in each TLB entry. An ASID uniquely identifies each process and provides address-space protection for that process.


The percentage of times that a particular page number is found in the TLB is called the hit ratio.

To find the effective access time, let us take an example. An 80-percent hit ratio means that we find the desired page number in the TLB 80 percent of the time.

If it takes 20 nanoseconds to search the TLB and 100 nanoseconds to access memory, then a mapped memory access takes 120 nanoseconds when the page number is in the TLB.

If we fail to find the page number in the TLB (20 nanoseconds), then we must first access memory for the page table and frame number (100 nanoseconds) and then access the desired byte in memory (100 nanoseconds), for a total of 220 nanoseconds.

    Effective access time = 0.80 x 120 + 0.20 x 220

    = 140 nanoseconds.
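The computation above generalizes to any hit ratio (function name and defaults are ours, taken from the example):

```python
def effective_access_time(hit_ratio, tlb_ns=20, mem_ns=100):
    """Weighted average of TLB-hit and TLB-miss access times."""
    hit = tlb_ns + mem_ns              # TLB hit: 120 ns
    miss = tlb_ns + mem_ns + mem_ns    # TLB miss: extra page-table access, 220 ns
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.80))     # approx. 140 ns, as derived above
```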

    Protection:

Memory protection is accomplished by protection bits associated with each frame; these bits are kept in the page table.

A protection bit can define a page to be read-only, read-write, or execute-only.

One more bit, a valid-invalid bit, is attached to each entry in the page table to improve protection. When this bit is set to valid, the associated page is in the process's logical-address space and is thus a legal page.


When the bit is set to invalid, the associated page is not in the process's logical-address space, and the page is illegal.

    An example for Memory Protection is shown below,

    Fig 4.6 Valid (v) or invalid (i) bit in a page table

Much of such a table would be unused but would still take up valuable memory space. Some systems provide hardware, in the form of a page-table length register (PTLR), to indicate the size of the page table.

This value is checked against every logical address to verify that the address is in the valid range for the process. Failure of this test causes an error trap to the operating system.

    Structure of the Page Table

The common structures for the page table are listed below:


Hierarchical Paging

Hashed Page Tables

Inverted Page Tables

    Hierarchical Paging:

Hierarchical paging represents the page table as a tree structure. Consider a system with a 32-bit logical-address space. If the page size in such a system is 4 KB (2^12), then the page table may consist of up to 1 million entries (2^32 / 2^12 = 2^20).

One simple solution to this problem is to divide the page table into smaller pieces. There are several ways to accomplish this division. One way is to use a two-level paging algorithm, in which the page table itself is also paged, as shown pictorially in Fig 4.7.

Consider a 32-bit machine with a page size of 4 KB. A logical address is divided into a page number consisting of 20 bits and a page offset consisting of 12 bits.

Because we page the page table, the page number is further divided into a 10-bit page number and a 10-bit page offset.

Thus, a logical address consists of a 10-bit outer page number p1, a 10-bit inner page number p2, and a 12-bit offset d.


    Fig 4.7 A two-level page-table scheme

Here p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table. The address-translation method for this architecture is shown below:

    Fig 4.8 Address translation for a two-level 32-bit paging architecture
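The bit-splitting for this architecture can be sketched directly; the masks follow the 10/10/12 split described above (function name is ours):

```python
def split_32bit(addr):
    """Split a 32-bit logical address into p1 (10 bits),
    p2 (10 bits), and offset d (12 bits)."""
    d = addr & 0xFFF             # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into inner page table
    p1 = (addr >> 22) & 0x3FF    # top 10 bits: index into outer page table
    return p1, p2, d

print(split_32bit(0xFFFFFFFF))   # -> (1023, 1023, 4095)
print(split_32bit(0x00403004))   # -> (1, 3, 4)
```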


Because address translation works from the outer page table inward, this scheme is also known as a forward-mapped page table.

    Hashed Page Tables

For handling address spaces larger than 32 bits, we can use a hashed page table, with the hash value being the virtual page number.

Each entry in the hash table contains a linked list of elements that hash to the same location (to handle collisions).

Each element consists of three fields:

(a) the virtual page number,

(b) the value of the mapped page frame, and

(c) a pointer to the next element in the linked list.

The algorithm works as follows: The virtual page number in the virtual address is hashed into the hash table. The virtual page number is compared to field (a) of the first element in the linked list. If there is a match, the corresponding page frame (field (b)) is used to form the desired physical address.

If there is no match, subsequent entries in the linked list are searched for a matching virtual page number. This scheme is shown below:

    Fig 4.9 Hashed page table
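A toy version of the hash-then-walk-the-chain lookup described above, with an illustrative 16-slot table (a real implementation would size the table to the machine and store real chain pointers):

```python
TABLE_SIZE = 16   # illustrative; real tables are far larger

class HashedPageTable:
    """Hash table of (virtual page number, frame) pairs with chaining."""
    def __init__(self):
        self.buckets = [[] for _ in range(TABLE_SIZE)]

    def map(self, vpn, frame):
        # Hash the virtual page number and append to that slot's chain.
        self.buckets[vpn % TABLE_SIZE].append((vpn, frame))

    def lookup(self, vpn):
        # Walk the chain at the hashed slot for a matching VPN.
        for entry_vpn, frame in self.buckets[vpn % TABLE_SIZE]:
            if entry_vpn == vpn:
                return frame
        return None   # would be a page fault in a real system

pt = HashedPageTable()
pt.map(3, 42)
pt.map(19, 7)         # 19 % 16 == 3: collides with vpn 3, chained in same slot
print(pt.lookup(3))   # -> 42
print(pt.lookup(19))  # -> 7
```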


A variation of this scheme, proposed for 64-bit address spaces, uses clustered page tables.

Clustered page tables are similar to hashed page tables, except that each entry in the hash table refers to several pages (such as 16) rather than a single page. Therefore, a single page-table entry can store the mappings for multiple physical page frames.

Clustered page tables are particularly useful for sparse address spaces, where memory references are noncontiguous and scattered throughout the address space.

    Inverted Page Table

Usually, each process has a page table associated with it, with one entry for each page.

Processes reference pages through the pages' virtual addresses. The operating system must then translate each reference into a physical memory address.

Since the table is sorted by virtual address, the operating system is able to calculate where in the table the associated physical-address entry is and to use that value directly.

One drawback of this method is that each page table may consist of millions of entries. These tables may consume large amounts of physical memory.

To solve this problem, we can use an inverted page table. An inverted page table has one entry for each real page (or frame) of memory.

Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns the page.


Thus, only one page table is in the system, and it has only one entry for each page of physical memory.

    The operation of an inverted page table is shown below,

    Fig 4.10 Inverted page table

    Fig 4.11 Paging Hardware


Compare Fig 4.10 with Fig 4.11, which depicts a standard page-table operation. Because only one page table is in the system, yet there are usually several different address spaces mapping physical memory, inverted page tables often require an address-space identifier stored in each entry of the page table.

    Shared Pages

With the help of paging, common code can be shared by more than one process. For sharing, reentrant code (pure code) is used. Reentrant code is non-self-modifying code; it never changes during execution.

    Example:

Consider a system that supports 40 users, each of whom executes a text editor. If the text editor consists of 150 KB of code and 50 KB of data space, we need 8,000 KB to support the 40 users.

If we use reentrant code, however, only one copy of the editor needs to be kept in physical memory.

In Fig 4.12 we see a three-page editor (each page of size 50 KB; the large page size is used to simplify the figure) being shared among three processes.

    Each process has its own data page. Thus, two or more processes can execute the same code at the same time. Each process has its own copy of registers and data storage to hold the data for the

    process' execution.


    Fig 4.12 Sharing of code in a paging environment

    The data for two different processes will, of course, vary for each process. Only one copy of the editor needs to be kept in physical memory. Each user's page

    table maps onto the same physical copy of the editor, but data pages are mapped

    onto different frames.

    So to support 40 users, we need only one copy of the editor (150 KB), plus 40 copies of the 50 KB of data space per user.

    The total space required is now 2,150 KB instead of 8,000 KB, a significant saving of memory.
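    The arithmetic behind this saving is worth checking explicitly; the short calculation below reproduces the 8,000 KB and 2,150 KB figures from the example.

```python
# Memory needed for 40 editor users, with and without code sharing.
users = 40
code_kb, data_kb = 150, 50

without_sharing = users * (code_kb + data_kb)   # each user gets a private copy
with_sharing = code_kb + users * data_kb        # one shared copy of the editor

assert without_sharing == 8000
assert with_sharing == 2150
assert without_sharing - with_sharing == 5850   # KB saved by reentrant code
```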


    Segmentation

    Segmentation is a memory management scheme that supports the user view of the memory.

    The collection of segments forms a logical address space, which is shown below,

    Fig 4.13 User's view of a program

    Each segment has a name and a length. A segment address specifies both the segment name and the offset within the

    segment.

    For simplicity the segments are numbered instead of named, so a logical address consists of a two tuple:

    <segment-number, offset>

    Hardware Support

    Physical memory is a one-dimensional address space, but logical addresses here are two-dimensional.


    So to access any program we must map the two-dimensional logical address into a one-dimensional physical address.

    This mapping is effected by the segment table. The operating system keeps a segment table similar to the page table. Each entry of the segment table contains a segment base and a segment limit. The segment base specifies the starting physical address where the segment resides in

    memory.

    The limit specifies the length of the segment. A logical address consists of 2 parts:

    A segment number, s
    An offset into that segment, d

    The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit. If the offset is legal, it is added to the segment base to generate the

    appropriate physical address.

    If the offset is illegal, a trap is sent to the operating system. The segmentation hardware is shown below,


    Fig 4.14 Segmentation Hardware

    Consider an example: we have five segments numbered from 0 through 4. The segments are stored in physical memory as shown below,


    Fig 4.15 Example for segmentation

    The segment table has a separate entry for each segment, giving the beginning address of the segment in physical memory (the base) and the length of the

    segment (the limit).

    For example, segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.

    A reference to segment 3, byte 852, is mapped to 3200 (the base of segment 3) + 852 = 4052.

    A reference to byte 1222 of segment 0 would result in a trap to the operating system, as this segment is only 1,000 bytes long.
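    The translation just worked through can be sketched in Python. The (base, limit) values for segments 0, 2 and 3 are the ones stated in the example; the entries for segments 1 and 4 are assumed placeholder values added only to make the table complete.

```python
# Segment table for the worked example: segment -> (base, limit).
# Segments 0, 2, 3 match the text; 1 and 4 are assumed values.
segment_table = {
    0: (1400, 1000),
    1: (6300, 400),   # assumed
    2: (4300, 400),
    3: (3200, 1100),
    4: (4700, 1000),  # assumed
}

def translate(segment, offset):
    """Map a two-dimensional (segment, offset) address to a physical address."""
    base, limit = segment_table[segment]
    if not 0 <= offset < limit:
        raise MemoryError("trap: offset beyond segment limit")  # addressing error
    return base + offset

assert translate(2, 53) == 4353    # byte 53 of segment 2
assert translate(3, 852) == 4052   # byte 852 of segment 3
try:
    translate(0, 1222)             # segment 0 is only 1,000 bytes long
    assert False, "expected a trap"
except MemoryError:
    pass
```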


    5. VIRTUAL MEMORY

    DEF: Virtual memory is a technique that allows the execution of processes that may not

    be completely in memory; it allows execution of a program that is larger than the

    physical memory.

    Background:

    Larger programs can be split into pieces called overlays. An overlay can be kept on disk and swapped in and out of memory by

    the operating system dynamically.

    Virtual memory allows processes to easily share files and address spaces, and it provides an efficient mechanism for process creation.

    Benefits of Virtual Memory:

    We can execute a program that is not entirely available in memory. Less I/O will be needed to swap or load each program into memory. Users can write large programs. CPU utilization and throughput can be improved by executing more programs at

    the same time.

    Virtual memory can be implemented either by demand paging or by demand segmentation. Paged segmentation can also be used to implement virtual memory; in this

    scheme the segments are broken into pages.

    Virtual memory allows files and memory to be shared among different processes.

    An example for virtual memory is shown below,


    Fig 5.1 Diagram showing virtual memory that is larger than physical memory

    6. DEMAND PAGING

    The concept of demand paging is similar to paging with swapping. In demand paging all the processes are kept in secondary memory. Only the pages that are needed for execution are swapped into memory,

    instead of swapping the entire process.

    A lazy swapper never swaps a page into memory unless the page is needed. A swapper swaps an entire process, but a pager swaps the individual pages of a process. So in demand paging we use the term pager rather than swapper.


    Basic Concepts:

    The pager brings only the necessary pages into memory instead of swapping the whole process, as shown below,

    Fig 6.1 Transfer of a paged memory to contiguous disk space

    The pages that are in memory and those on disk are distinguished through a valid-invalid bit scheme.

    In this scheme a valid-invalid bit is kept in each page-table entry to distinguish pages in memory from pages on disk.

    When the bit is set to valid, the associated page is both legal and available in memory.

    When the bit is set to invalid, the page either is not legal (not in the logical address space of the process) or is legal but currently resides on the disk.


    An example of valid and invalid bits is shown below,

    Fig 6.2 Page table when some pages are not in main memory

    During execution, if the process accesses a page that is available in memory, execution proceeds normally.

    If the page is not available in memory, a page-fault trap occurs and is sent to the operating system.

    The operating system then brings the desired page into memory. The procedure for handling a page fault is shown below,


    Fig 6.3 Steps in handling a page fault

    If a page fault occurs, the following sequence takes place, as per Fig 6.3:

    1. The trap is sent to the operating system: save the user registers and the current process state.

    2. Determine that the interrupt was a page fault.

    3. Check that the page reference was legal, and determine the location of that page on the backing store, i.e. the disk.

    4. Read the page from the disk into a free frame; this involves:

       a. The page has to wait in a queue until the read request is serviced.
       b. Wait for the device seek and/or latency time.
       c. Begin the transfer of the page to the free frame.

    5. Correct the page table to show that the desired page is now available in memory.

    6. Restart the instruction: restore the user registers, process state, and new page table, then resume the interrupted instruction.

    The three major components of the page-fault service time are:

    1. Service the page-fault interrupt.

    2. Read in the page.

    3. Restart the process.

    7. REPLACEMENT ALGORITHMS

    When a process starts execution and accesses a page that is not in memory, a page fault occurs.

    The page fault can be serviced by swapping the desired page from disk to memory when free space is available in memory. When no free space is available in memory, an unneeded page is

    swapped out from memory to disk, and then the desired page is swapped

    into memory from disk. This mechanism is known as the page replacement policy.



    Fig 7.1 Need for Page Replacement

    Steps for page Replacement:

    1. Find the location of the desired page on the disk.

    2. Find a free frame:

       a. If there is a free frame, use it.
       b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
       c. Write the victim page to the disk; change the page and frame tables accordingly.

    3. Read the desired page into the (newly) free frame; change the page and frame tables.

    4. Restart the user process.


    Fig 7.2 Page replacement

    NOTE:

    If no frames are free, two page transfers are needed (one out, one in), which increases the page-fault service time and the effective access time.

    This can be reduced by using a modify bit.

    Each page or frame has a modify bit that is set by the hardware whenever there is any change to that page, indicating that the page has been modified.

    If the modify bit is not set, the page has not been modified since it was read into memory, so it need not be written back to disk when it is replaced.

    The number of page faults is inversely related to the number of available frames.


    If the number of available frames increases, then the number of page faults generally

    decreases, as shown in the graph below,

    Fig 7.3 Graph of page faults versus the number of frames

    PAGE REPLACEMENT ALGORITHMS

    The commonly used page replacement algorithms are listed below,

    1. FIFO Page replacement Algorithm

    2. Optimal Page replacement Algorithm

    3. LRU Page replacement Algorithm

    4. LRU Approximation Page replacement Algorithm

       Additional Reference Bits Algorithm

       Second Chance Algorithm

       Enhanced Second Chance Algorithm

    5. Counting Based Page replacement Algorithm

    6. Page Buffering Algorithms

    Let us see them one by one...


    FIFO Page replacement Algorithm:

    It is the simplest algorithm compared to the others. When a page has to be replaced, the oldest page is chosen, i.e. the page inserted first is swapped out first. All pages are stored in a FIFO queue. When a page fault occurs, the page at the head of the queue is replaced. When a page is brought into memory, it is inserted at the tail of the

    queue.

    Consider the reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
    Frame size: 3. Total number of page faults: 15.

    ref:    7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1
            7  7  7  2  2  2  2  4  4  4  0  0  0  0  0  0  0  7  7  7
               0  0  0  0  3  3  3  2  2  2  2  2  1  1  1  1  1  0  0
                  1  1  1  1  0  0  0  3  3  3  3  3  2  2  2  2  2  1
    fault:  F  F  F  F     F  F  F  F  F  F        F  F        F  F  F

    Explanation:

    Initially the frames are empty. The first reference, 7, is inserted first; it is followed by 0 and 1. Now the frames are full. To insert reference 2, one of the pages has to be swapped out. Since we

    are dealing with FIFO, the first-inserted page 7 is swapped out and 2 is swapped in.
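    The FIFO behaviour just described can be sketched in a few lines of Python (an illustrative sketch; the function name `fifo_faults` is invented here, not from the notes):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement for a reference string."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page in resident:
            continue                       # hit: no fault, queue unchanged
        faults += 1
        if len(resident) == frames:        # no free frame: evict oldest page
            resident.remove(queue.popleft())
        queue.append(page)                 # newcomer goes to the tail
        resident.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
assert fifo_faults(refs, 3) == 15          # matches the count shown above
```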



    Now 0 has to be inserted, but 0 is already available in memory, so no page fault will occur.

    And this process goes on as per the FIFO technique.

    Advantage: easy to implement.
    Drawback: performance will not always be good.

    Belady's Anomaly Rule:

    In general, if we increase the number of frames, the total number of page faults should become lower.

    But with FIFO, in some cases increasing the number of frames makes the total number of page faults higher, and decreasing it makes the page faults lower; this behaviour is called Belady's anomaly.

    Consider an example of Belady's anomaly.

    Reference string: 0 1 2 3 0 1 4 0 1 2 3 4

    Frame size: 3. Total number of page faults: 9.

    ref:    0  1  2  3  0  1  4  0  1  2  3  4
            0  0  0  3  3  3  4  4  4  4  4  4
               1  1  1  0  0  0  0  0  2  2  2
                  2  2  2  1  1  1  1  1  3  3
    fault:  F  F  F  F  F  F  F        F  F

    Frame size: 4. Total number of page faults: 10.

    ref:    0  1  2  3  0  1  4  0  1  2  3  4
            0  0  0  0  0  0  4  4  4  4  3  3
               1  1  1  1  1  1  0  0  0  0  4
                  2  2  2  2  2  2  1  1  1  1
                     3  3  3  3  3  3  2  2  2
    fault:  F  F  F  F        F  F  F  F  F  F

    Hence, even though we increased the number of frames, the number of page faults increased.
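    The anomaly is easy to reproduce with a short simulation (an illustrative sketch; `fifo_faults` is a name invented here):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement for a reference string."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page not in resident:           # miss: page fault
            faults += 1
            if len(resident) == frames:    # evict the oldest resident page
                resident.remove(queue.popleft())
            queue.append(page)
            resident.add(page)
    return faults

refs = [0,1,2,3,0,1,4,0,1,2,3,4]
assert fifo_faults(refs, 3) == 9
assert fifo_faults(refs, 4) == 10   # more frames, yet more faults: Belady's anomaly
```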



    Optimal Page Replacement Algorithm:

    Optimal page replacement has the lowest number of page faults compared with the

    other algorithms.

    It never suffers from Belady's anomaly.

    Optimal page replacement is based on the following principle:

    Replace the page that will not be used for the longest period of time

    Consider an example,

    Reference String: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

    Frame size: 3. Total number of page faults: 9.

    ref:    7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1
            7  7  7  2  2  2  2  2  2  2  2  2  2  2  2  2  2  7  7  7
               0  0  0  0  0  0  4  4  4  0  0  0  0  0  0  0  0  0  0
                  1  1  1  3  3  3  3  3  3  3  3  1  1  1  1  1  1  1
    fault:  F  F  F  F     F     F        F        F           F

    Frame size: 4. Total number of page faults: 8.

    Explanation for frame Size 3:

    Initially the frames are empty. The first reference, 7, is inserted first; it is followed by 0 and 1.



    Now the frames are full. To insert reference 2, one of the pages in the frames has to be swapped out. Since we are dealing with the optimal algorithm, the page that will not be used for the

    longest period of time has to be swapped out, so the resident pages 7, 0 and 1 are checked forward through the rest of the reference string:

    (7 0 1)  forward checking, left to right:  0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

    Of the three resident pages, 0 is reached first (1st position), then 1 (2nd position), and 7 last (3rd position).

    So according to the optimal principle, 7 will not be used for the longest period of time; page 7 is swapped out and 2 is swapped in.

    And this process goes on as per the optimal technique.
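    The optimal policy can also be simulated, since in a simulation the whole reference string is known in advance (an illustrative sketch; `optimal_faults` and `next_use` are names invented here):

```python
def optimal_faults(refs, frames):
    """Count page faults under optimal (farthest-future-use) replacement."""
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue                       # hit: no fault
        faults += 1
        if len(resident) == frames:
            # evict the resident page whose next use lies farthest in the future
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else len(refs)
            resident.remove(max(resident, key=next_use))
        resident.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
assert optimal_faults(refs, 3) == 9   # matches the count above
```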

    LRU Page Replacement Algorithm:

    LRU stands for Least Recently Used.

    LRU page replacement is based on the following principle:

    Replace the page that has not been used for the longest period of time

    Consider an example,

    Reference String: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

    Frame size: 3



    Frame size: 3. Total number of page faults: 12.

    ref:    7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1
            7  7  7  2  2  2  2  4  4  4  0  0  0  1  1  1  1  1  1  1
               0  0  0  0  0  0  0  0  3  3  3  3  3  3  0  0  0  0  0
                  1  1  1  3  3  3  2  2  2  2  2  2  2  2  2  7  7  7
    fault:  F  F  F  F     F     F  F  F  F        F     F     F

    Explanation for frame Size 3:

    Initially the frames are empty. The first reference, 7, is inserted first; it is followed by 0 and 1. Now the frames are full. To insert reference 2, one of the pages in the frames has to be swapped out.

    Since we are dealing with LRU, the page that has not been used for the longest

    period of time has to be swapped out, so the resident pages 7, 0 and 1 are checked in reverse through the references already made:

    (7 0 1)  reverse checking, right to left:  7 0 1 | 2

    Looking backward, 1 was used most recently (1st position), then 0 (2nd position), and 7 least recently (3rd position).

    So according to the LRU principle, 7 has not been used for the longest period of time;

    page 7 is swapped out and 2 is swapped in.
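    LRU, too, fits in a few lines of Python (an illustrative sketch; `lru_faults` is a name invented here, and the recency list is the simplest possible bookkeeping, not how real hardware tracks it):

```python
def lru_faults(refs, frames):
    """Count page faults under least-recently-used replacement."""
    resident, faults = [], 0           # list ordered from LRU (front) to MRU (back)
    for page in refs:
        if page in resident:
            resident.remove(page)      # hit: refresh its recency
        else:
            faults += 1
            if len(resident) == frames:
                resident.pop(0)        # evict the least recently used page
        resident.append(page)          # this page is now the most recently used
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
assert lru_faults(refs, 3) == 12       # matches the count above
```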

    And this process goes on as per the LRU technique.

    END OF UNIT - 3

