1
Chapter 8, Main Memory
2
8.1 Background
• When a machine language program executes, it may cause memory address reads or writes
• From the point of view of memory, it is of no interest what the program is doing
• All that is of concern is how the program/operating system/machine manage access to the memory
3
Address binding
• The O/S manages an input queue in secondary storage of jobs that have been submitted but not yet scheduled
• The long term scheduler takes jobs from the input queue, triggers memory allocation, and puts jobs into physical memory
• PCB’s representing the jobs go into the scheduling system’s ready queue
4
• The term memory address binding refers to the system for determining how memory references in programs are related to the actual physical memory addresses where the program resides
• In short, this aspect of system operation stretches from the contents of high level language programs to the hardware the system is running on
5
Variables and memory
• 1. In high level language programs, memory addresses are symbolic.
• Variable names make no reference to an address space in the program
• But in the compiled, loaded code, the variable name is associated with a memory address that doesn’t change during the course of a program run
• This memory location is where the value of the variable is stored
6
Relative memory addresses
• 2. When a high level language program is compiled, typically the compiler generates relative addresses.
• This means that the loaded code contains address references into the data and code space starting with the value 0
• Instructions which have variable operands, for example, refer to the variables in terms of offsets into the allocated memory space beginning at 0
7
Loader/linkers
• 3. An operating system includes a loader/linker.
• This is part of the long term scheduler functionality.
• When the program is placed in memory, assuming (as is likely) that its base load address is not 0, the relative addresses it contains don’t agree with the physical addresses it occupies
8
Resolving relative addresses to absolute addresses
• A simple approach to solving this problem is to have the loader/linker convert the relative addresses of a program to absolute addresses at load time.
• Absolute addresses are the actual physical addresses where the program resides
9
• Note the underlying assumptions of this scenario
• 1. Programs can be loaded into arbitrary memory locations
• 2. Once loaded, the locations of programs in memory don’t change
10
Compile time address binding
• There are several different approaches to binding memory access in programs to actual locations
• 1. Binding can be done at compile time
• If it’s known in advance where in memory a program will be loaded, the compiler can generate absolute code
11
Load time address binding
• 2. Binding can be done at load time
• This was the simple approach described earlier
• The compiler generates relocatable code
• The loader converts the relative addresses to actual addresses at the time the program is placed into memory.
12
Run time address binding
• 3. Binding can be done at execution time
• This is the most flexible approach
• Relocatable code (containing relative addresses) is actually loaded
• At run time, the system converts each relative memory address reference to a real, absolute address
13
• Implementing such a system removes the restriction that a program is always in the same address space
• This kind of system supports advanced memory management systems like paging and virtual memory, which are the topics of chapters 8 and 9, on memory
14
• In simple terms, you see that run time, or dynamic address binding supports medium term scheduling
• A job can be offloaded and reloaded without needing either to reload it to the same address or go through the address binding process again
15
• The diagram on the following overhead shows the various steps involved in getting a user written piece of high level code into a system and running
16
17
Logical vs. physical address space
• The address generated by a program running on the CPU is a logical address
• The address that actually gets manipulated in the memory management unit of the CPU—that ends up in the memory management unit memory address register—is a physical address
• Under compile time or load time binding, the logical and physical addresses are the same
18
• Under run time/execution time binding, the logical and physical addresses differ
• Logical addresses can be called virtual addresses.
• The book uses the terms interchangeably
• However, for the time being, it’s better to refer to logical addresses, so you don’t confuse this concept with the broader concept of virtual memory, the topic of Chapter 9
19
• Overall, the physical memory belonging to a program can be called its physical address space
• The complete set of possible memory references that a program would generate when running can be called its logical address space (or virtual address space)
20
• For efficiency, memory management in real systems is supported in hardware
• The mapping from logical to physical is done by the memory management unit (MMU)
• In the simplest of schemes, the MMU contains a relocation register
• Suppose you are doing run time address binding
21
• The MMU relocation register contains the base address, or offset into main memory, where a program is loaded
• Converting from a relative address to an absolute address means adding the relative address generated by the running program to the contents of the relocation register
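• As a minimal sketch in C (illustrative names, not from the book), the conversion is a single addition:

#include <stdint.h>
#include <stdio.h>

uint32_t relocation_register = 0x4000;  /* base address where the program was loaded */

/* absolute address = relative address + relocation register contents */
uint32_t to_physical(uint32_t relative) {
    return relative + relocation_register;
}

int main(void) {
    printf("relative 0x24 -> physical 0x%x\n", to_physical(0x24));  /* prints 0x4024 */
    return 0;
}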
22
• When a program is running, every time an instruction makes reference to a memory address, the relative address is passed to the MMU
• The MMU is transparent.
• It does everything necessary to convert the address
23
• For a simple read, for example, given a relative address, the MMU returns the data value found at the converted address
• For a simple write, the MMU takes the given data value and relative address, and writes the value to the converted address
• All other memory access instructions are handled similarly
• An illustrative diagram of MMU functionality follows
24
Memory management unit functionality with relative addresses
25
• Although the simple diagram doesn’t show it, logical address references generated by a program can be out of range
• In principle, these would cause the MMU to generate out of range physical addresses
• However, the point is that under relative addressing, the program lives in its own virtual world
26
• The program deals only in logical addresses while the system handles mapping them to physical addresses
• It will be shown shortly how the possibility of out of range references can be handled by the MMU
27
• The previous discussion illustrated addressing in a very basic way
• What follows are some historical enhancements, some of which led to the characteristics of complete, modern memory management schemes:
– Dynamic loading
– Dynamic linking and shared libraries
– Overlays
28
Dynamic loading
• Dynamic loading is a precursor to paging, but it isn’t efficient enough for a modern environment
• It is reminiscent of medium term scheduling
• One of the assumptions so far has been that a complete program had to be loaded into memory in order to run
• Consider the alternative scenario given on the next overhead
29
• 1. Separate routines of an application are stored on the disk in relocatable format
• 2. When a routine is called, first it’s necessary to check if it’s already been loaded.
– If so, control is transferred to it
• 3. If not, the loader immediately loads it and updates its address table
– An application/routine address table entry contains the value that would go into the relocation register for each of the routines, when it’s running
30
Dynamic linking and shared libraries
• To understand dynamic linking, consider what static linking would mean
• If every user program that used a system library had to have a copy of the system code bound into it, that would be static linking
• This is clearly inefficient.
• Why make multiple copies of shared code in loaded program images?
31
• Under dynamic linking, a user program contains a special stub where system code is called
• At run time, when the stub is encountered, a system call checks to see whether the needed code has already been loaded by another program
• If not, the code is loaded and execution continues
• If the code was already loaded, then execution continues at the address where the system had loaded it
32
• Dynamic linking of system libraries supports both transparent library updates and the use of different library versions
• If user code is dynamically linked to system code, if the system code changes, there is no need to recompile the user code.
• The user code doesn’t contain a copy of the system code
33
• If different versions of libraries are needed, this is straightforward
• Old user code will use whatever version was in effect when it was written
• New versions of libraries need new names (i.e., names with version numbers) and new user code can be written to use the new version
34
• If it is desirable for old user code to use the new library version, the old user code will have to be changed so that the stub refers to the new rather than the old
• Obviously, the ability to do this is all supported by system functionality
35
• The fundamental functionality, from the point of view of memory management, is shared access to common memory
• In general, the memory space belonging to one process is disjoint from the memory space belonging to another
• However, the system may support access to a shared system library in the virtual memory space of more than one user process
36
Overlays
• This is a technique that is very old and has little modern use
• It is possible that it would have some application in modern environments where physical memory was extremely limited
37
• Suppose a program ran sequentially and could be broken into two halves, where no loop or if statement reached from the second half back to the first
• Suppose that the system provided a facility so that a running program could load an executable image into its memory space
• This is reminiscent of forking where the fork() is followed by an exec()
38
• Suppose those requirements were met and memory was large enough to hold half of the program but not all of the program
• Write the first half and have it conclude by loading the second half
39
• This is not simple to do, it requires system support, it certainly won’t solve all of your problems, and it would be prone to mistakes
• However, something like this may be necessary if memory is tiny and the system doesn’t support advanced techniques like paging and virtual memory
40
8.2 Swapping
• Swapping was mentioned before as the action taken by the medium term scheduler
• Remember to keep the term distinct from switching, which refers to switching loaded processes on and off of the CPU
• In this section, swapping will refer to the approach used to support multi-programming in systems with limited memory
41
• Elements of swapping existed in early versions of Windows
• Swapping continues to exist in Unix environments
42
• This is the scenario for swapping:
• Execution images for >1 job may be in memory
• The long term scheduler picks a job from the input queue
• There isn’t enough memory for it
• So the image of a currently inactive job is swapped out and the new job is swapped in
43
• Medium term scheduling does swapping on the grounds that the multi-programming level is too high
• In other words, the CPU is the limiting resource
• Swapping as discussed now is implemented because memory space is limited
• Note that swapping for either reason isn’t suitable for interactive type processes
44
• Swapping is slow because it writes to a swap space in secondary storage
• Swapping can be useful as a protection against limited resources, whether CPU (medium term scheduling) or memory (swapping as described here)
• However, transferring back and forth from the disk is definitely not a time-effective strategy for supporting multi-programming, let alone multi-tasking, on a modern system
45
8.3 Contiguous Memory Allocation
• Along with the other assumptions made so far, such as the fact that all of a program has to be loaded into memory, another assumption is made
• In simple systems, the whole program is loaded, in order, from beginning to end, in one block of physical memory
46
• Referring back to earlier chapters, the interrupt vector table is assigned a fixed memory location
• O/S code is assigned a fixed location
• User processes are allocated contiguous blocks in the remaining free memory
• Valid memory address references for relocatable code are determined by a base address and a limit value
47
• The base address corresponds to relative address 0
• The limit tells the amount of memory allocated to the program
• In other words, the limit corresponds to the largest valid relative address
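• A rough C sketch of the limit check plus relocation described above (all names are illustrative; a real MMU does this in hardware):

#include <stdint.h>

uint32_t limit_register = 0x2000;       /* amount of memory allocated to the program */
uint32_t relocation_register = 0x14000; /* base physical address of the allocation */

/* Returns 0 and sets *phys on success; returns -1 to model a trap to the O/S. */
int translate(uint32_t relative, uint32_t *phys) {
    if (relative >= limit_register)
        return -1;                      /* largest valid relative address exceeded */
    *phys = relative + relocation_register;
    return 0;
}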
48
• The limit register contains the maximum relative address value.
• The relocation register contains the base address allocated to the program
• Keep in mind that when context switching, these registers are among those that the dispatcher sets
• The following diagram illustrates the MMU in more detail under these assumptions
49
MMU functionality with relative addresses, contiguous memory allocation, and limit and relocation registers
50
Memory allocations
• A simple scheme for allocating memory is to give processes fixed size partitions
• A slightly more efficient scheme would vary the partition size according to the program size
• The O/S keeps a table or list of free and allocated memory
51
• Part of scheduling becomes determining whether there is enough memory to load a new job
• Under contiguous allocation, that means finding out whether there is a “hole” (window of free memory) large enough for the job
• If there is a large enough hole, in principle, that makes things “easy” (stay tuned)
52
• If there isn’t a large enough hole you have two choices:
• A. Wait and schedule the new process when a large enough hole becomes available
• B. Set the current new job aside and have the scheduler search for jobs in the input queue that are small enough to fit into available holes
53
The dynamic storage allocation problem
• This is a classic problem of memory management
• The assumption is that scattered throughout memory are various holes of contiguous memory large enough for the process to be loaded into them
• The question is how to choose which of those holes to load the process into
54
• Historically, three algorithms have been considered
• 1. First fit: Put a process into the first hole found that’s big enough for it.
– This is fast and allocates memory efficiently
• 2. Best fit: Look for the hole closest in size to what’s needed.
– This is not as fast and it’s not clearly better in allocation
55
• 3. Worst fit: This essentially means, load the job into the largest available hole.
– In practice it performs as well as its name, but see the following bullets
• Note that for any of these three choices, the question is not where in the hole to load the process
• For the sake of argument, assume that it will be loaded at the beginning of the hole
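• As a rough sketch (hypothetical data structure, not from the book), first fit over a linked list of holes looks like this; best fit would instead scan the whole list for the smallest adequate hole:

#include <stddef.h>

struct hole { size_t start, size; struct hole *next; };

/* Return the first hole big enough for the request, or NULL if none exists. */
struct hole *first_fit(struct hole *free_list, size_t request) {
    for (struct hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= request)
            return h;   /* caller loads at h->start and shrinks or removes the hole */
    return NULL;        /* no hole large enough: wait, or try a smaller job */
}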
56
External fragmentation
• External fragmentation describes the situation when memory has been allocated to processes leaving lots of scattered, small holes
• If sufficiently small, the holes are wasted memory space under contiguous loading
• Even though worst fit doesn’t work, the idea behind it was to leave usable size holes
57
• Empirical studies have shown that for an amount of allocated memory measured as N, an amount of memory approximately equal to .5N will be lost due to external fragmentation
• This is known as the 50% rule
• In other words, under contiguous memory allocation about 1/3 of memory is wasted due to unusable, small memory holes external to the blocks that are successfully allocated
58
Block allocation
• In reality, memory is typically allocated in fixed size blocks rather than exact byte counts corresponding to process size
• Keeping track of arbitrary, varying amounts of memory allocation is not practical due to the overhead involved
• A block may consist of 1KB or some other measure of similar magnitude or larger
59
• Under block allocation, a process is allocated enough contiguous blocks to contain the whole program
• External fragmentation still results under block allocation
• The smallest possible hole will be one block
60
Internal fragmentation
• Something called internal fragmentation also results from block allocation
• This refers to the wasted memory in the last block allocated to a process
• Internal fragmentation on average is equal to ½ of the size of one block
61
Picking a block size
• Picking a block size is a classic case of balancing extremes
• If block size is large enough, each process will only need one block.
• This degenerates into fixed partitions for processes, with large waste due to internal fragmentation
62
• If block size is small enough, you approach allocating byte by byte, which is undesirable due to record keeping overhead
• If the blocks are small, internal fragmentation is insignificant, but this is not an overriding advantage
63
• Block allocation is a desirable enhancement of contiguous memory allocation, but it’s still contiguous memory allocation
• It is reasonable to assume that external fragments, even if measured in units of blocks rather than bytes, can become small enough to be unusable
64
Memory compaction
• Memory compaction is an approach to solving the fragmentation resulting from contiguous memory allocation
• Compaction refers to relocating programs loaded in memory in order to reduce fragmentation
• Relocation is a system process that happens dynamically, without unloading the user processes
65
• If programs use absolute memory addresses, they simply can’t be relocated.
• Memory couldn’t be compacted without recompiling the programs.
• This would require unloading them and loading the recompiled code
• This is out of the question
• It would not be a dynamic process
66
• If programs use relative memory addresses, they are relocatable.
• Even during run time, they can be moved to new memory locations
• The system relocation process accomplishes this by doing the relocating and updating the relocation (base) register values for user processes
• Relocation makes it possible to squeeze the loaded programs together in memory, squeezing out the unusable fragments
67
8.4 Paging
• Paging is a big deal
• Fundamentally, paging is a memory management technique that makes it possible to load a program into non-contiguous memory
• A page is a fixed-size block
• A program may be large enough that it has to be loaded into more than one page
• But the program does not have to be loaded into a contiguous set of pages
68
• Paging solves two problems:
• 1. Under paging, external fragmentation is not a problem.
– Even a single, isolated, unallocated page is still usable
– It can be allocated as part of a non-contiguous allocation
• Another way of putting this is that with paging, memory compaction will never be needed
69
• 2. Under paging, fragmentation in the swap space in secondary storage is also eliminated
– When memory compaction was discussed above, its relationship to swapping was not mentioned
– It turns out that memory compaction and swapping are incompatible, because compacting the secondary storage space to match the reorganized memory space would take too long
– Memory compaction is not necessary with paging, so this problem is solved
70
How paging is implemented
• Paging is based on the idea that the O/S can maintain data structures that match given blocks in physical memory with given ranges of virtual addresses in programs
• Physical memory is conceptually broken into fixed size frames
• Logical memory is broken into pages of the same size
71
• In essence, the O/S maintains a lookup table telling which logical page matches with which physical frame
• In contiguous memory allocation there was a limit register and a relocation register
• In paging there are special registers for placing the logical address and forming the corresponding physical address
72
• In paging, fixed page sizes mean that the limits are always the same, but there is a table containing the relocation values telling which frame each page address is relocated to
• It is important to understand that under paging, allocation isn’t contiguous, but complete programs do have to be loaded
• For a program of x pages, x frames will be needed
• The number of frames allocated will differ for programs of different sizes
73
Implementation Details
• Every (logical) address generated by the CPU takes this form:
• Page part (p) | offset part (d)
• The page part is a page id
• The offset part is the location of a given word within the page that contains it
• More specifically, let an address consist of m bits
• Then a logical address can be pictured as shown on the next overhead
74
75
• The addresses are binary numbers
• There are m bits for the address overall
• That means the logical address space consists of 2^m addresses
• The range of valid addresses goes from 0 to 2^m – 1
76
• The components of the address fall neatly into two parts
• The (m – n) digits for p can be treated separately as a page number in the range from 0 to 2^(m – n) – 1
• The n digits for d can be treated separately as an offset in the range from 0 to 2^n – 1
• The fact that n bits are reserved for the offset into a page implies that the size of a page is 2^n bytes
77
Forming an address from a page table
• Paging is based on maintaining a page table
• For some page value p, the corresponding frame value f is looked up in the page table
• The lookup is done at offset p in the table
• The offset, d, is unchanged
78
• The physical address is formed by appending the binary value for d to the binary value for f
• The result is f | d
• The formation of a physical address from a logical address, p | d, using a page table, is illustrated in the following diagram
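• A minimal C sketch of this translation, assuming m = 32 address bits and n = 12 offset bits (4 KB pages); the table and names are illustrative:

#include <stdint.h>

#define OFFSET_BITS 12
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

uint32_t page_table[1u << 20];   /* entry at offset p holds the frame id f */

uint32_t to_physical(uint32_t logical) {
    uint32_t p = logical >> OFFSET_BITS;   /* page part of the address */
    uint32_t d = logical & OFFSET_MASK;    /* offset part, unchanged */
    uint32_t f = page_table[p];            /* look up f at offset p */
    return (f << OFFSET_BITS) | d;         /* physical address is f | d */
}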
79
80
The contents of a page table
• In theory you could have a global page table containing entries for all processes
• In practice, each process may have its own page table which is used when that process is scheduled
• When a process is initially scheduled by the long term scheduler, its page table would be populated with the frames allocated to it
81
• When the short term scheduler context switches between processes, an address register pointing to the page table would be changed
• The use of the page table for a single process can be illustrated with a simple example
• Each page table entry is like a base and offset for a given page in the process
82
83
• Note again that under paging there is no external fragmentation
• Every empty physical memory space is a usable frame
• Internal fragmentation will average one half of a frame per process
84
Page sizes
• In modern systems page sizes vary in the range of around 512 bytes to 16MB
• The smaller the page size, the smaller the internal fragmentation
• However, if the memory space is large, there is overhead in allocating small pages and maintaining a page table with lots of entries
85
• As hardware resources have become less costly, larger memory spaces have become available
• Page sizes have grown correspondingly large
• Page sizes of 2 KB – 8 KB may be considered representative of an average, modern system
86
Summary of paging ideas
• 1. The logical view of the address space is separate from the physical view.
– This means that code is relocatable, not absolute
• 2. The logical view is of contiguous memory.
• Paging is completely hidden by the MMU.
– Allocation of frames is not contiguous
– However, programs have to be loaded in their entirety
87
• 3. Although the discussion has been in terms of the page table, in reality there is also a global frame table.
– The frame table provides the system with ready look-up of which frames have been allocated, and which are free and still available for allocation
• 4. There is a page table for each process.
– It keeps track of memory allocation from the process point of view and supports the translation from logical to physical addresses
88
Hardware support for paging
• A page table has to hold the mapping from logical pages to physical frames for a single process
• Note that the page table resides in memory
• The minimum hardware support for paging is a dedicated register on the chip which holds the address of the page table of the currently running process
89
• With this minimal support, for each logical memory address generated by a program, two accesses to actual memory would be necessary
• The first access would be to the page table, the second to the physical address located there
• This is expensive
• In order to support non-contiguous allocation, the cost of a memory access is doubled
90
• In order to be viable, paging needs additional hardware support.
• There are two basic choices
• 1. Dedicated registers
• 2. Translation look-aside buffers
91
• 1. Have a complete set of dedicated registers for the page table.
– That is, each page table entry would reside in a register
– There would have to be as many registers as the maximum number of frames that could be allocated per process
– This is fast, but the hardware cost (monetary and real estate on the chip) becomes impractical if the memory space is large
92
• 2. The chip will contain hardware elements known as translation look-aside buffers (TLB’s).
– This is the current state of the art, and it will be explained below
93
Translation look-aside buffers
• Translation look-aside buffers are in essence a special set of registers which support look-up.
• In other words, they are table-like.
• They are designed to contain keys, p, page identifiers, and values, f, the matching frame identifiers
• They are different from dedicated registers
• They are designed to hold a subset of the page table
94
• TLB’s have an additional, special characteristic.
• They are not independent buffers.
• They come as a collection
• The “look-aside” part of the name is meant to suggest that when a search value is “dropped” onto the TLB, for all practical purposes, all of the buffers are searched for that value simultaneously.
95
• If the search value is present, the matching value is found within a fixed number of clock cycles
• In other words, look-up in a TLB does not involve linear search or any other software search algorithm.
• There is no order of complexity to searching depending on the number of entries in the collection of TLB’s.
• Response time is fixed and small
96
• TLB’s are like a highly specialized cache
• The set of TLB’s doesn’t store a whole page table
• When a process starts accessing pages, this requires reading the page table and finding the frame
• Once a page has been read the first time, it’s entered into the TLB
• Subsequent reads to that page will not require reading from the page table in memory
97
• Just like with caching, some process memory accesses will be a TLB “hit” and some will be a TLB “miss”
• A hit is very economical
• With a hit, a memory access requires a reference to the TLB followed by one main memory access
98
• A miss requires reading the page table and replacing an entry in the TLB (e.g., the LRU entry) with the most recent page accessed
• In other words, a miss incurs the full “double” cost of accessing memory twice
• The first access updates the TLB and the second finds the desired memory address
• Memory management with TLB’s is shown in the following diagrams
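• Before the diagrams, a rough C model of the hit/miss logic (a real TLB searches all entries in parallel in hardware; the loop and the round-robin replacement below are only stand-ins):

#include <stdint.h>

#define TLB_SIZE 64

struct tlb_entry { uint32_t p, f; int valid; };
static struct tlb_entry tlb[TLB_SIZE];
static int next_victim = 0;          /* crude replacement, standing in for LRU */
extern uint32_t page_table[];        /* as in the earlier sketch */

uint32_t lookup_frame(uint32_t p) {
    for (int i = 0; i < TLB_SIZE; i++)          /* models the parallel search */
        if (tlb[i].valid && tlb[i].p == p)
            return tlb[i].f;                    /* hit: one TLB reference */
    uint32_t f = page_table[p];                 /* miss: extra memory access */
    tlb[next_victim] = (struct tlb_entry){ p, f, 1 };
    next_victim = (next_victim + 1) % TLB_SIZE;
    return f;
}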
99
100
• In the following diagram, the page table is shown in memory, where it’s located.
• The ALU, TLB’s, and logical and physical address registers are all in the CPU.
• The TLB’s and address registers are in the MMU of the CPU.
• A program running in the ALU generates a logical memory address which is passed to the MMU, which translates it to a physical address and reads from or writes to it.
101
102
• Note the following things about the diagram
• The page table is complete, so a search of the page table simply means jumping to offset p in the table
• The TLB is a subset, so it has to have both key, p, and look-up, f, values in it
• It shows addressing, but it doesn’t attempt to show, through arrows or other notation, the replacement of TLB entries on a miss
103
TLB hits and misses
• Paging costs can be summarized in this way
• On a hit: TLB access + memory access
• On a miss: TLB access + memory access to page table + memory access to desired page
• The book states that typical TLB’s are in the range from 16 to 512 entries
• With this number of entries, a hit ratio of 80%-98% can be achieved
104
Calculating the cost of paging
• Given a hit ratio and some sample values for the time needed for TLB and memory access, weighted averages for the cost of paging can be calculated
• For example, let the time needed for a TLB search be 20 ns.
• Let the time needed for a main memory access be 100 ns.
105
• Cost of TLB hit: 20 + 100 = 120
• Cost of TLB miss: 20 + 100 + 100 = 220
• Let the hit ratio be 80%
• Then the overall, weighted cost of paging is: .8(120) + .2(220) = 140
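• The same weighted average as a small C check (values from the example above):

#include <stdio.h>

int main(void) {
    double tlb = 20.0, mem = 100.0, hit_ratio = 0.8;
    double hit_cost  = tlb + mem;             /* 120 ns */
    double miss_cost = tlb + mem + mem;       /* 220 ns */
    double eat = hit_ratio * hit_cost + (1.0 - hit_ratio) * miss_cost;
    printf("effective access time = %.0f ns\n", eat);   /* prints 140 */
    return 0;
}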
106
• In other words, if you could always access memory directly, it would take 100 ns.
• With paging, it takes on average 140 ns.
• Paging imposes a 40% overhead on memory access
• On the other hand, without TLB’s, every memory access would cost 100 ns. + 100 ns., which would mean a 100% overhead on memory access
107
Justification for paging
• Why would you live with a 40% overhead cost on memory accesses?
• Remember the reasons for introducing the idea of paging:
• It allows for non-contiguous memory allocation
108
• This solves the problem of external fragmentation in memory
• As long as the page size strikes a balance between large and small, internal fragmentation is not great
• There is also a potential benefit in reducing fragmentation in swap space, but supporting non-contiguous memory allocation is the main event
109
Having a global page table
• The previous discussion has referred to a page table as belonging to one process
• This would mean there would be many page tables
• When a new process was scheduled, the TLB would be flushed so that pages belonging to the new process would be loaded.
110
• The alternative is to have a single, unified page table
• This means that each page table entry, in addition to a value for f, would have to identify which process it belonged to
• The identifier is known as an ASID, an address space id
111
• Such a table would work like this:
• When a process generated a page id, the TLB would be searched for that page
• If found, it would further be checked to see if the page belonged to the process
• If so, everything is good
112
• If not, this is simply a page miss
• Replacement would occur using the usual algorithm for replacement on a miss
• With a page table like this, there is no need for flushing when a new process is scheduled
• In effect, the TLB is flushed entry by entry as misses occur
113
Implementing protection in the page table with valid and invalid bits
• Recall that a page table functions like a set of base and limit registers
• Each page address is a base, and the fixed page size functions as a limit
• If a system maintains page tables of length n, then the maximum amount of memory that could theoretically be allocated to a process is n pages, or n * (page length) bytes
114
• In practice, processes do not always need the maximum amount of memory and will not be allocated that much
• This information can be maintained in the page table by the inclusion of a valid/invalid bit
115
• If a page table entry is marked “i”, this means that if a process generates that logical page, it is trying to access an address outside of the memory space that was allocated to it
• A diagram of the page table follows
116
117
A page table length register
• An alternative to valid/invalid bits is a page table length register (PTLR)
• The idea is simple—this register is like a limit register for the page table
• The range of logical addresses for a given process begins at page 0 and goes to some maximum which is less than the absolute maximum size allowed for a page table
• When a process generates a page, it is checked against the PTLR to see if it’s valid
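• A rough sketch of both checks (valid/invalid bit and PTLR); the entry layout is illustrative, not a real page table format:

#include <stdint.h>

struct pt_entry { uint32_t frame : 20; uint32_t valid : 1; };
struct pt_entry page_table_entries[1024];
uint32_t ptlr = 8;                      /* this process may use pages 0..7 only */

int access_ok(uint32_t p) {
    if (p >= ptlr)
        return 0;                       /* PTLR check: page id out of range */
    return page_table_entries[p].valid; /* valid/invalid bit check */
}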
118
• The valid/invalid bit scheme can be extended to support finer protections
• For example, read/write/execute protections can be represented by three bits
• You typically think of these protections as being related to a file system
119
• In theory, different pages of a process could have different attributes
• This may be especially important if you are dealing with shared memory accessible to >1 process
• It is also likely to be complicated in practice, and the idea won’t be pursued further here
120
8.5 Structure of the Page Table
121
• The topic of this section is the structure of page tables
• Before considering the structure, it’s helpful to consider the sizes of address spaces that a page table may have to support
• Modern systems may support address spaces in the range of 2^32 to 2^64 bytes
• 2^32 is 4 Gigabytes
• 2^64 ≈ 18.4 x 10^18
122
• The higher value is what you get if you allow all 64 bits of a 64 bit architecture to be used as an address
• Note that this is 16 x 2^60, but by this stage the powers of 2 and the powers of 10 do not match up the way they do where we casually equate 2^10 to 10^3
123
This is just a digression
• According to Wikipedia, these are the standard prefixes for the SI units of measure
• Multiples
– deca- (da): 10^1; hecto- (h): 10^2; kilo- (k): 10^3; mega- (M): 10^6; giga- (G): 10^9; tera- (T): 10^12; peta- (P): 10^15; exa- (E): 10^18; zetta- (Z): 10^21; yotta- (Y): 10^24
• Subdivisions
– deci- (d): 10^-1; centi- (c): 10^-2; milli- (m): 10^-3; micro- (µ): 10^-6; nano- (n): 10^-9; pico- (p): 10^-12; femto- (f): 10^-15; atto- (a): 10^-18; zepto- (z): 10^-21; yocto- (y): 10^-24
124
• The reality is that modern systems support logical address spaces too large for simple page tables
• In order to support these address spaces, hierarchical or multi-level paging is used
• Take the lower of the address spaces given above, 2^32
• Let the page size be 2^12 or 4 KB
125
• 2^32 bytes of memory divided into pages of size 2^12 bytes means a total of 2^20 pages
• The corresponding physical address space would consist of 2^20 frames
• That means that each page table entry would have to be at least 20 bits long, in order to hold the frame id
126
• Suppose each page table entry is 4 bytes, or 32 bits, long
• This would allow for validity and protection bits in addition to the frame id
• It’s also simpler to argue using powers of 2 rather than speaking in terms of a table entry of length 3 bytes
127
• A page table with 2^20 entries, each of size 2^2 bytes, means the page table is of length 2^22, or 4 MB
• But a page itself under this scenario was only 2^12 bytes, or 4 KB
• In other words, it would take 1K pages to hold the complete page table for a process that had been allocated the theoretical maximum amount of memory possible
128
• To restate the result in another way, the page table won’t fit into a single page
• In theory, it might be possible to devise a hybrid system where the memory for page tables was allocated and addressed by the O/S as a monolithic block instead of in pages
• Then that page table would support paging of user memory
129
• Having two different addressing schemes in the same system would be a mess and leads to questions like, could there be fragmentation in the monolithic page table block?
• It is preferable not to have the page table consist of monolithic (and contiguous) memory
130
• The practical solution to the problem is hierarchical or multi-level paging
• The underlying idea is to come up with a scheme where a large page table can be managed as a collection of individual pages
• In one of its forms, multi-level paging is similar to indexing
• The book refers to this as a forward-mapped page table
131
• Under multi-level paging, given a logical page value, you don’t look up the frame id directly
• You look up another page that contains a page id for the page containing the desired frame id
• The book mentions that this kind of scheme was used by the Pentium II
132
• The multi-level paging scheme will be illustrated in the following diagrams
• A logical address of 32 bits can be divided into blocks of 10, 10, and 12 bits
• 10 + 10 = 20 bits correspond to the page identifier
• The remaining 12 bits correspond to d, the offset into a page of size 2^12 bytes
133
• There is a reason for treating the first 20 bits as two blocks of 10 bits
• The example illustrates a two level page table scheme
• The size of a page is 2^12 bytes
• If a page table entry is 4 bytes (2^2 bytes) wide, then a page can hold 2^10 page table entries
134
• Conceptually, the first 10 bits in an address will be used as an offset into an outer page table
• The entry found in that table will refer to one of 2^10 inner page tables
• The second 10 bits in the address will be used as an offset into that inner page table
• The entry found there will refer to a page that is in the address space of the process
135
• The last 12 bits of the address will be the offset into the memory page (frame) allocated to the process
• 12 bits are used for this instead of 10
• That’s because addressing of allocated memory pages is byte-by-byte, and a page contains 2^12 bytes
• These ideas are graphically illustrated on the following overheads
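• Before the overheads, the same 10/10/12 mapping as a rough C sketch (structures illustrative; in reality each table lookup is a separate memory access):

#include <stdint.h>

uint32_t *outer_page_table[1 << 10];   /* each entry points to an inner page table */

uint32_t to_physical(uint32_t logical) {
    uint32_t p1 = logical >> 22;             /* first 10 bits: outer table offset */
    uint32_t p2 = (logical >> 12) & 0x3FF;   /* next 10 bits: inner table offset */
    uint32_t d  = logical & 0xFFF;           /* last 12 bits: offset into the frame */
    uint32_t f  = outer_page_table[p1][p2];  /* two table reads to get the frame id */
    return (f << 12) | d;
}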
136
This is the form of a page address
137
This is how a logical address maps to a physical address through multiple levels
138
This shows the multiple layers of the page table
139
Calculating the cost of paging using a multi-level page table
• The cost of a page miss will be higher for a two level page table than for a one level table
• This is because a miss requires three memory accesses (one to each of the two page table levels and one to the desired address) rather than two
140
• As before, let the time needed for a TLB search be 20 ns.
• Let the time needed for a main memory access be 100 ns.
• Cost of TLB hit: 20 + 100 = 120
• Cost of TLB miss: 20 + 100 + 100 + 100 = 320
141
• In the calculation for the miss, the first 100 is the outer page table, the second 100 is the inner page table, the third 100 is the access to the desired address
• Let the hit ratio be 98%
• Then the overall, weighted cost of paging is: .98(120) + .02(320) = 124
• The overhead cost of paging under this scheme is 24%
142
Larger address spaces
• Observe what happens if you go to a 64 bit address space and a page size of 4KB
• Sample address breakdowns are shown on the next overhead for two and three level paging
• If you only break the address into three or four parts, the number of bits for one of the parts is so high that you again have the problem that a level of the page table won’t fit into a single page
143
144
• Depending on page size, some 32 bit systems go to 3 or 4 levels
• To implement multi-level paging in a 64 bit system, you would need 6 levels
• This is too deep to be practical
• A page miss would involve seven accesses to memory
• This makes the cost of paging too high
145
Hashed page tables--Hashing
• Hashed page tables provide an alternative approach to multi-level paging in a large address space
• The first thing you need to keep in mind is what hashing is, how it works, and what it accomplishes
146
How hashing works
• You may have a widely dispersed set of n different x values in a given domain
• You have a specific, compact set of y values that you want to map to in the range.
• You need a hashing function, y = f(x), that converts x values into the desired set of y values in the range
147
• In the ideal case, there would be a set of exactly n different, contiguous y values that the x’s map to
• That would mean that no two x values would ever collide
• However, this doesn’t typically happen
• f() needs to be devised so that the likelihood that any two x values will give the same y value is small
148
• f() also has to be quick and easy to compute
• In practice the range will be somewhat larger than n and collisions may occur
• The most common kind of hashing function is based on division and remainders
149
• Choose z to be the smallest prime number larger than n
• Then let f(x) = x % z
• f(x) will fall into the range [0, z – 1]
150
• Hashing makes it possible to create a look-up table that doesn’t require an index or any sorting or searching
• Let there be z entries in the table, at offsets 0 through z – 1
• Store the entry for x at the offset y = f(x) in the table
• When x occurs again and you want to look up the corresponding value in the table, compute y = f(x) and read the entry at that offset y
151
• Note that the value, x, has to be repeated as part of the table entry, along with value that goes along with it that you’re trying to look up
• This is necessary in order to resolve collisions
• An example of a hashing algorithm and the resulting hash table is illustrated in the following diagram
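• First, a rough C sketch of division/remainder hashing with the key stored in each entry and collisions chained (sizes and names illustrative):

#include <stdlib.h>

#define Z 101   /* a prime just above the expected number of keys */

struct node { unsigned key, value; struct node *next; };
struct node *hash_table[Z];   /* Z entries, at offsets 0 through Z - 1 */

void put(unsigned x, unsigned v) {
    unsigned y = x % Z;                 /* y = f(x) */
    struct node *n = malloc(sizeof *n);
    n->key = x;                         /* x is stored to resolve collisions */
    n->value = v;
    n->next = hash_table[y];            /* chain on collision */
    hash_table[y] = n;
}

struct node *get(unsigned x) {
    for (struct node *n = hash_table[x % Z]; n != NULL; n = n->next)
        if (n->key == x)
            return n;
    return NULL;
}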
152
153
Hashed page tables—Why?
• Consider again the background of multi-level paging and its disadvantages
• Conceivably you could be maintaining a global page table or a page table for each process
• Since memory is being accessed page by page, it’s desirable for a large page table itself to be accessible by page
• As the address space grows large, it becomes impossible to store a complete page table in one page
154
• A multi-level page table provides a tree-like way of using pages to access the whole memory address space
• Each level in the tree corresponds to a block of bits in an address
• The larger the address space, the more levels in the tree, the more memory accesses to arrive at the desired address
155
• Consider the large memory case, a 64 bit architecture, for example
• You would not expect to have processes that required all of the memory made addressable under such a scheme
• The purpose of a memory of this size would be to support multi-tasking, with each process getting a portion of the memory
156
• Even if a process got only a part of memory, with non-contiguous memory allocation, its frames could be dispersed across the whole address space
• There is nothing about the allocation process that would confine it to a fixed subset of neighboring frames
• In other words, a single process might use the whole address space, but use it very sparsely
157
• The scenario used to illustrate hashed page tables will be this:
• We’d like the complete page table for a process to fit into a single page
• If the memory is large enough, page size can be large
• Also, if the memory is large enough, the amount of memory allocated to a single process may be a small fraction of the total
158
• The process may access addresses in the range of 0 to 2^64 – 1
• If the page size were still 2^12, that would mean the logical address space could contain as many as 2^52 pages
159
• Suppose that the page size of a system is large enough that a page table that can be contained in one page would be the maximum amount of memory that could be allocated to one process
• The system would still have to maintain a global record of all process/page/frame assignments
• However, hashing makes it possible to store the mapping for a single process in one page
160
• In summary, making a hashed page table involves the following:
• When a virtual page is allocated a frame, the virtual page id, p, is hashed to a location in the hash table
• The hash table entry contains p, to account for collisions, and the id of the allocated frame
• See the following diagram
161
162
• In this illustration, a collision is shown
• Collisions are handled with links rather than overflow
• The two logical pages, q and p, hash to the same location
• Their corresponding frames are s and r, respectively
163
• The book doesn’t give any details on the organization of a hash table on a page
• In general, if you’re doing division/remainder hashing, you might expect that the divisor is chosen so that the size of a hash table node times the number of possible hash values is less than the size of a whole page
164
Clustered page tables
• The book doesn’t give a very detailed explanation of this
• The general idea appears to be that memory can be allocated so that these properties hold:
• Several different (say 16) page id’s, p, will hash to the same entry in the page table
• This entry will then have no fewer than 16 linked nodes, one for each page, (and possibly more, due to collisions)
165
• Honestly, it’s not clear to me what advantage this gives
• The length of the page table would be reduced by a factor of 16, but it seems that its width would be increased by a factor of 16
• I have no more to say about this, and there will be no test questions on it
166
Inverted page tables
• Inverted page tables are an important alternative to multi-level page tables and hashed page tables
• Recall that with (non-inverted) page tables:
• 1. The system has to maintain a global frame table that tells which frames are allocated to which processes
167
• 2. The system has to maintain a page table for each process, that makes it possible to look up the physical frame that is allocated to a given logical address
• Simple illustrations of both of these things are given on the next overhead
168
169
• An inverted page table is an extension of the frame table
• Instead of many page tables, one for each process, there is one master table
• The offsets into the table represent the frame id’s for the whole physical memory space
• The table has two columns, one for pid, the process that the frame/page belongs to, and one for p, the logical page id of the page
170
171
• The use of an inverted page table to resolve a logical address is shown in the diagram on the next overhead
• The key thing to notice about the process is that it is necessary to do linear search through the inverted page table, looking for a match on the pid that generated the address and the logical address that was generated
• The offset into the table identifies the frame that was allocated to it
172
173
• Searching the inverted page table is the cost of this approach
• There is no choice except for simple, linear search because the random allocation of frames means that the table entries are not in any order
• It is not possible to do binary search or anything else
174
• This is where hashing and inverted page tables come together
• The way to get direct access to a set of values in random order is to hash
• Let n be the total number of pages/frames and devise a hashing function that will provide this mapping:
• f(pid, p) → [0, n – 1]
• Use this function to allocate frames to processes
175
• Then, when the logical address (pid, p) is generated, hash it
• In theory, the hash function value itself could be the frame id, f, but you still have to do table look-up because of the possibility of collisions
• You can go directly to offset f in the table and check there for the key values (pid, p).
176
• You don’t have to do linear search
• If the values are not found, check for overflow or linking until you find the desired values
• Note that if you don’t find the desired values, the process has tried to access an address that is out of range.
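• A rough C sketch of look-up in a hashed inverted page table (the hash function, sizes, and chaining scheme are all illustrative):

#include <stdint.h>

#define NFRAMES 1024

struct frame_entry { uint32_t pid, p; int next; };  /* next chains collisions, -1 ends */
struct frame_entry frame_table[NFRAMES];
int hash_anchor[NFRAMES];      /* hash value -> first frame id to try */

int find_frame(uint32_t pid, uint32_t p) {
    unsigned h = (pid * 31u + p) % NFRAMES;   /* illustrative hash of (pid, p) */
    for (int i = hash_anchor[h]; i != -1; i = frame_table[i].next)
        if (frame_table[i].pid == pid && frame_table[i].p == p)
            return i;          /* the offset in the table is the frame id */
    return -1;                 /* not found: out-of-range reference */
}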
177
• The most recent discussions have left TLB’s behind, but they are still relevant as hardware support for addressing
• A diagram of the use of a hashed inverted page table with TLB’s is shown on the next overhead
• In looking at the picture, remember that since the table is stored in memory, that adds an extra memory access to the overall cost of addressing
• Also note that in reality the table would probably be bigger than a page
178
• The table would be stored in system space and might be addressed using a special scheme
• In other words, we’ve come back around to the problem which motivated multi-level paging in the first place
• The table that supports paging is bigger than a page itself
• And the solution has something in common with the solution that was rejected earlier—supporting paging through a non-paged mechanism
179
180
• The previous discussion included the assumption that you could allocate frames based on hashing
• This simplified things and made the diagram easier to draw
• In reality, you would have a frame table that recorded which frame was allocated to which process and page
• You would then have a separate hash table that supported look-up into the frame table
181
• The idea is that the hash value, h, takes you to an offset in the hash table.
• What you look up in the hash table is the value i, which is the frame id that was assigned to pid|p—and which is the offset to the correct entry in the frame table.
182
183
Shared pages
• The basic idea is this: Shared memory between processes can be implemented by mapping their logical addresses to the same physical pages (frames)
• An operating system may support interprocess communication (IPC) this way
• It is also a convenient way to share (read only) data
• It’s also possible to share code, such as libraries which >1 process need to run
184
Reentrant code is shareable
• In order for code to be shareable, it has to be reentrant
• Reentrant means that there is nothing in the code which causes it to modify itself
• Consider the MISC sumtenV1.txt example
• It is divided into a data segment and a code segment
• Two processes could share the code as long as the accesses to memory variables were mapped to separate copies of the variables
185
• Every memory access that a program makes has to pass through the O/S
• This means that the O/S is responsible for detecting incorrect memory accesses and for detecting when shared code may be being misused
186
• Threads are a good, concrete example of shared code
• We have considered some of the problems that can occur when threads share references to common objects
• If they share no references, then they are completely trouble free
187
Inverted page tables don’t support shared memory very well
• Keep in mind that an inverted page table is a global structure that effectively maps one logical page to one physical frame
• This kind of arrangement makes it difficult to support memory pages (frames) shared between different processes
• To support shared memory, it would be necessary to add linking to the table or add other data structures to the system
188
8.6 Segmentation
• The idea behind segmentation is that the user view of memory is not simply a linear array of bytes
• Users tend to think of their applications in terms of program units
• The relative locations of different modules or classes are not important
• Each separate unit can be identified by its offset from some base and its length, where the length of each is variable
189
• Segmentation supports the user view of memory
• An address is conceptually of the form <segment id, offset into segment>
• An address isn’t simply a pure logical address or a page plus offset
190
Implementation of segmentation
• The system would have to support segmented addresses in software
• It would then be necessary to map from segmented addresses to physical addresses
191
• Segments may be reminiscent of simple contiguous memory allocation
• They may also be thought of, very roughly, as (comparatively large) pages of varying size
• Just like with paging, hardware support in the MMU makes the translation possible
• The diagram on the next overhead shows how segmented addresses are resolved
192
193
• This is similar to one of the earliest diagrams showing in general how page addresses were resolved
• The segment table is like a set of base-limit pairs, one for each segment
• Just like with pages, in the long run you would probably want some sort of TLB support
• For the time being, segments and pages are treated separately
• In real, modern systems with segmentation, the segments are subdivided into pages which are accessed through a paging mechanism
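• A rough C sketch of the segment table lookup in the diagram (illustrative structure; real hardware also folds in paging, as noted above):

#include <stdint.h>

struct segment { uint32_t base, limit; };  /* one base-limit pair per segment */
struct segment seg_table[16];

/* Resolve <s, offset>; returns -1 to model a trap on an out-of-range offset. */
int translate(uint32_t s, uint32_t offset, uint32_t *phys) {
    if (offset >= seg_table[s].limit)
        return -1;
    *phys = seg_table[s].base + offset;
    return 0;
}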
194
Protection and sharing with segmentation
• The theory is that protection and sharing make more logical sense under a segmented scheme
• Instead of worrying about protection and sharing at a page level, the assumption is that the same protection and sharing decisions would logically apply to a complete segment
195
• In other words, protection is applied to semantic constructs like “data block” or “program block”
• Under a segmented scheme, semantically different blocks would be stored in different segments
• Similarly with sharing
• If two processes need to share the same block, have the block stored in a given segment, and give both processes access to the segment
196
• Although perhaps clearer than paged sharing, segmented sharing doesn’t solve all of the problems of sharing
• If code is shared and two processes access it, the system still has to resolve addresses when processes cross the boundary from unshared to shared code
197
• In other words, two processes may know the same code by different symbolic names
• Potentially, ifs or jumps across boundaries have to be supported (from one address space to another) and the return from shared code has to go to the address space of whichever process called it
198
Segmentation and fragmentation
• Segmentation, in the sense that it’s like contiguous memory allocation, suffers from the problem of external fragmentation
• The difference is that a single process consists of multiple segments and each segment is loaded into contiguous memory
• The ultimate solution to this problem is to break the segments into pages
199
8.7 Example: The Intel Pentium
• The reality is that the Intel 8086 architecture has had segmented addressing from the beginning.
• The Motorola 68000 didn’t.
• The following details are given in the same spirit that the information about scheduling and priorities was given in the chapter on scheduling
• Namely, to show that real systems tend to have many disparate features, and overall they can be somewhat complex
200
• Some information about Intel addressing
• The maximum number of segments per process is 16K (2^14)
• Each segment can be as large as 4GB (2^32)
• A page is 4KB (2^12), so a segment may consist of up to 2^20 or 1M of pages
201
• The logical address space of a process is divided into two partitions, each of up to 8K segments
• Partition 1 is private to the process.
– Information about its segments is stored in the local descriptor table
• Partition 2 contains segments shared among processes.
– Information about these segments is stored in the global descriptor table
202
• The first part of a logical address is known as a selector
• It consists of these parts:
– 13 bits for segment id, s
– 1 bit for global vs local, g
– 2 bits for protections
– (14 bits total for segment id)
203
• Within each segment, an address is paged
• It takes two levels to hold the page table
• The page address takes the form described earlier:
– 10 bits for outer page of page table
– 10 bits for inner page of page table
– 12 bits for offset
– (At 4 bytes per page table entry, you can fit 2^10 entries into a 4KB page)
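• As a sketch, the selector and paged offset described above could be pulled apart with shifts and masks like this (the field order follows the bullet lists; treat the exact layout as illustrative):

#include <stdint.h>

void split_selector(uint16_t sel, unsigned *s, unsigned *g, unsigned *prot) {
    *s    = sel >> 3;            /* 13 bits: segment id */
    *g    = (sel >> 2) & 1;      /* 1 bit: global vs. local descriptor table */
    *prot = sel & 3;             /* 2 bits: protection */
}

void split_page_address(uint32_t addr, unsigned *p1, unsigned *p2, unsigned *d) {
    *p1 = addr >> 22;            /* 10 bits: outer page table */
    *p2 = (addr >> 12) & 0x3FF;  /* 10 bits: inner page table */
    *d  = addr & 0xFFF;          /* 12 bits: offset into a 4KB page */
}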
204
• Notice that you’ve got both 14 bits for segment id and 32 bits for page id
• This means that in a 32 bit architecture you can’t “use” all of the bits
• There is a limit on how many segments total you can have, but there is flexibility in where they’re located in memory
205
• The diagram shown on the next overhead is supposed to summarize how a segmented logical address is resolved to a physical address
• Read it and weep
206
207
The End