
Page 1

Previous lecture

chapter 8

NWI-IBC019: Operating systems

2017-2018

Page 2

Why memory management?

• Relocation: avoid collisions in the memory addresses in use.

• Protection: avoid unintended interference with a process's memory.

• Sharing: allow certain areas of memory to be shared.

• Physical organisation: account for the hardware, and a possibly unknown number of memory banks.

Page 3

Hardware support of Memory Management Unit (§8.1.3)

[Diagram: the CPU places a logical address on the address bus; the MMU translates it into a physical address for memory (data travels over the data bus) and signals an error on an invalid access.]

Page 4

Paging (§ 8.5)

Memory is divided into small partitions of fixed size, called frames. Currently mostly 4 KiB.

A process is divided into small partitions of the same fixed size, called pages. In contrast with the partitioning schemes seen earlier, one process uses multiple small partitions.

Every process has a mapping, called the page table, from pages to frames.

Advantages: • little internal fragmentation; • no external fragmentation; • hidden from the programmer.

Drawbacks: • the page table can take up (a lot of) memory.

Page 5

Paging (§ 8.5)

[Figure 7.9 Assignment of Process Pages to Free Frames: (a) fifteen available frames, numbered 0-14; (b) load process A (pages A.0-A.3); (c) load process B (B.0-B.2); (d) load process C (C.0-C.3); (e) swap out B; (f) load process D (D.0-D.4 into the freed and remaining frames).]

Page 6

Page table (§ 8.5)

Process A page table: page 0 → frame 0, 1 → 1, 2 → 2, 3 → 3.
Process B page table: all entries empty (swapped out).
Process C page table: page 0 → frame 7, 1 → 8, 2 → 9, 3 → 10.
Process D page table: page 0 → frame 4, 1 → 5, 2 → 6, 3 → 11, 4 → 12.
Free frame list: 13, 14.

Figure 7.10 Data Structures for the Example of Figure 7.9 at Time Epoch (f)

Page 7

Paging address translation (§ 8.5)

(a) Paging: a 16-bit logical address consists of a 6-bit page number and a 10-bit offset. The page number indexes the process page table, and the frame number found there is combined with the unchanged offset to form the 16-bit physical address. In the example, logical address 0000010111011110 (page 1, offset 0111011110) maps through the page table to frame 000110, giving physical address 0001100111011110.

(b) Segmentation: a 16-bit logical address consists of a 4-bit segment number and a 12-bit offset. The segment number indexes the process segment table, which stores a length and a base per segment; the offset is checked against the length and added to the base. In the example, logical address 0001001011110000 (segment 1, offset 001011110000) is added to base 0010000000100000, giving physical address 0010001100010000.

Figure 7.12 Examples of Logical-to-Physical Address Translation
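A minimal C sketch of the paging translation in part (a), assuming the same 6-bit page number and 10-bit offset; the page-table array stands in for whatever the OS filled in for this process:

#include <stdint.h>

#define OFFSET_BITS 10                              /* 1 KiB pages, as in the figure */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1u)

/* page_table[p] holds the frame number for page p of this process. */
uint16_t translate(uint16_t logical, const uint8_t page_table[])
{
    uint16_t page   = logical >> OFFSET_BITS;       /* upper 6 bits  */
    uint16_t offset = logical & OFFSET_MASK;        /* lower 10 bits */
    uint16_t frame  = page_table[page];
    return (uint16_t)((frame << OFFSET_BITS) | offset);
}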

Page 8

Structure of the page table

chapter 8.6

NWI-IBC019: Operating systems

2017-2018

Page 9

Hierarchical paging (§8.6.1)

Per process there is a page table. This page table should be resident (= in physical memory).

E.g. 4 GiB of memory (a 32-bit address space), with a page size of 4 KiB and a page table entry of 4 bytes. How large are the page tables?

4 MiB per process. With my 402 processes running this morning, all page tables together would be 1.5 GiB! (For a 64-bit machine, a page table of 32768 TiB per process would be needed.)
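As a check on these numbers, a small C sketch of the arithmetic (the 402 processes are the count mentioned on the slide, not a measurement):

#include <stdio.h>

int main(void)
{
    unsigned long long entries = (1ULL << 32) / (1ULL << 12); /* 2^20 pages of 4 KiB   */
    unsigned long long table   = entries * 4;                  /* 4-byte entries: 4 MiB */
    unsigned long long total   = table * 402;                  /* all processes         */
    printf("%llu entries, %llu MiB per process, %.2f GiB in total\n",
           entries, table >> 20, (double)total / (1ULL << 30));
    return 0;
}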

Adjustments necessary: • store the page table in virtual memory (but not accessible to user processes); • use a hierarchical page table, as used by Pentium processors, or an inverted page table, as used by PowerPC, UltraSPARC and others.

Page 10

Plain paging

Page 11

Two level page-table scheme


8.6 Structure of the Page Table

In this section, we explore some of the most common techniques for structuring the page table, including hierarchical paging, hashed page tables, and inverted page tables.

8.6.1 Hierarchical Paging

Most modern computer systems support a large logical address space (2^32 to 2^64). In such an environment, the page table itself becomes excessively large. For example, consider a system with a 32-bit logical address space. If the page size in such a system is 4 KB (2^12), then a page table may consist of up to 1 million entries (2^32/2^12). Assuming that each entry consists of 4 bytes, each process may need up to 4 MB of physical address space for the page table alone. Clearly, we would not want to allocate the page table contiguously in main memory. One simple solution to this problem is to divide the page table into smaller pieces. We can accomplish this division in several ways.

One way is to use a two-level paging algorithm, in which the page table itself is also paged (Figure 8.17). For example, consider again the system with a 32-bit logical address space and a page size of 4 KB. A logical address is divided into a page number consisting of 20 bits and a page offset consisting of 12 bits. Because we page the page table, the page number is further divided [...]

[Figure 8.17 A two-level page-table scheme: entries in the outer page table point to pages of the page table, whose entries in turn point to the frames holding the process's pages in memory.]
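A minimal C sketch of how a 32-bit address is split for the two-level scheme described in the excerpt above: 10 bits index the outer page table, 10 bits index a page of the page table, and 12 bits are the offset (field widths follow the textbook example):

#include <stdint.h>

/* Split a 32-bit logical address for two-level paging with 4 KiB pages. */
void split_address(uint32_t logical, uint32_t *p1, uint32_t *p2, uint32_t *d)
{
    *d  = logical & 0xFFFu;          /* 12-bit offset within the page       */
    *p2 = (logical >> 12) & 0x3FFu;  /* 10-bit index into a page-table page */
    *p1 = logical >> 22;             /* 10-bit index into the outer table   */
}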

Page 12

Hierarchical page table

Page 13

Hierarchical paging

Advantages: • sharing large chunks of memory is easy (part of the page table can be shared); • unused memory costs almost no extra space in the page table.

Disadvantages: • the indirection can incur a lot of overhead (one page table per process); • management is difficult; • behaviour is less predictable.

Page 14

Inverted page table (§ 8.6.3)

[Figure 8.20 Inverted page table: the CPU issues a logical address <pid, p, d>; the pair <pid, p> is searched for in the inverted page table, and a match at entry i yields the physical address <i, d> into physical memory.]

To illustrate this method, we describe a simplified version of the inverted page table used in the IBM RT. IBM was the first major company to use inverted page tables, starting with the IBM System 38 and continuing through the RS/6000 and the current IBM Power CPUs. For the IBM RT, each virtual address in the system consists of a triple:

<process-id, page-number, offset>.

Each inverted page-table entry is a pair <process-id, page-number> where the process-id assumes the role of the address-space identifier. When a memory reference occurs, part of the virtual address, consisting of <process-id, page-number>, is presented to the memory subsystem. The inverted page table is then searched for a match. If a match is found—say, at entry i—then the physical address <i, offset> is generated. If no match is found, then an illegal address access has been attempted.

Although this scheme decreases the amount of memory needed to store each page table, it increases the amount of time needed to search the table when a page reference occurs. Because the inverted page table is sorted by physical address, but lookups occur on virtual addresses, the whole table might need to be searched before a match is found. This search would take far too long. To alleviate this problem, we use a hash table, as described in Section 8.6.2, to limit the search to one—or at most a few—page-table entries. Of course, each access to the hash table adds a memory reference to the procedure, so one virtual memory reference requires at least two real memory reads—one for the hash-table entry and one for the page table. (Recall that the TLB is searched first, before the hash table is consulted, offering some performance improvement.)

Systems that use inverted page tables have difficulty implementing shared memory. Shared memory is usually implemented as multiple virtual addresses (one for each process sharing the memory) that are mapped to one physical address. This standard method cannot be used with inverted page tables; because there is only one virtual page entry for every physical page, one [...]
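A minimal C sketch of the lookup just described, with one entry per physical frame; the linear search is exactly what the hash table mentioned above is meant to avoid, and the struct layout is illustrative rather than the IBM RT format:

#include <stddef.h>
#include <stdint.h>

struct ipt_entry {
    uint32_t pid;   /* process-id owning the frame             */
    uint32_t page;  /* virtual page number stored in the frame */
};

/* Return the frame index i whose entry matches <pid, page>,
   or -1 if the access is illegal. */
long ipt_lookup(const struct ipt_entry table[], size_t nframes,
                uint32_t pid, uint32_t page)
{
    for (size_t i = 0; i < nframes; i++)
        if (table[i].pid == pid && table[i].page == page)
            return (long)i;
    return -1;
}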

Page 15

Inverted page table

Advantages: • the number of page table entries corresponds to the size of installed memory; • there is only one page table on the system.

Disadvantages: • sharing is hard to realise, as there is only one page table entry per frame (solved by using page faults and resetting the table entry associated with a frame).

Page 16

Virtual-memory management

chapter 9

NWI-IBC019: Operating systems

2017-2018

Page 17

Why virtual memory? (§ 9.1)

We want the OS to manage the address space of a program automatically. In effect, only the parts of memory that are actually active need to be loaded in the memory banks.

This allows: • more processes to fit in memory at the same time; • a process to run even if it needs more memory than is physically available.

Therefore we get a higher utilisation.

Page 18

Types of memory (§ 9.1)

• real memory: every running process resides entirely in hardware memory.

• virtual memory: a combination of hardware memory and secondary storage that appears as one memory to the programmer.

Page 19

What is virtual memory? (§ 9.1)

We move from logical addresses to virtual addresses (note: the book uses them interchangeably).

In chapter 8, a logical address claimed by a process is actually in memory whenever the process is running.

With virtual memory/addresses, a page can also be temporarily not in memory. This requires additional hardware support in the MMU, which triggers an interrupt in that case; the OS handles it automatically.

Page 20

Demand paging (§ 9.2)

If a page is requested but is currently not in memory, a page fault is triggered by the memory management unit (MMU).

The OS will load the page, possibly throwing away or swapping out other pages if there is no room.

The process/thread goes to the blocked state (in the process model).

The OS can decide to execute a different process/thread.

Page 21

Demand paging (§ 9.2)

[Figure 9.6 Steps in handling a page fault: (1) a reference (e.g. "load M") traps because the page-table entry is marked invalid (i); (2) the trap enters the operating system; (3) the page is located on the backing store; (4) the missing page is brought into a free frame of physical memory; (5) the page table is reset to mark the page present; (6) the interrupted instruction is restarted.]

But what happens if the process tries to access a page that was not brought into memory? Access to a page marked invalid causes a page fault. The paging hardware, in translating the address through the page table, will notice that the invalid bit is set, causing a trap to the operating system. This trap is the result of the operating system’s failure to bring the desired page into memory. The procedure for handling this page fault is straightforward (Figure 9.6):

1. We check an internal table (usually kept with the process control block) for this process to determine whether the reference was a valid or an invalid memory access.

2. If the reference was invalid, we terminate the process. If it was valid but we have not yet brought in that page, we now page it in.

3. We find a free frame (by taking one from the free-frame list, for example).

4. We schedule a disk operation to read the desired page into the newly allocated frame.

5. When the disk read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory.

6. We restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.

In the extreme case, we can start executing a process with no pages in memory. When the operating system sets the instruction pointer to the first [...]
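The trap, fix, restart cycle can be mimicked in user space. The sketch below (POSIX; it assumes MAP_ANONYMOUS and calls mprotect() from a signal handler, a common demo shortcut rather than strictly portable practice, with error checks omitted) maps pages with no access rights, and the SIGSEGV handler plays the role of steps 3-5 before the faulting instruction is restarted. It is an analogue of the mechanism, not how a kernel implements it.

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static char  *region;      /* pages we manage ourselves           */
static size_t page_size;   /* system page size, queried in main() */

/* "Page-fault handler": make the touched page accessible and return;
   the kernel then restarts the faulting instruction (step 6 above). */
static void on_fault(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    char *addr = (char *)info->si_addr;
    if (addr < region || addr >= region + 4 * page_size)
        _exit(1);                                       /* a genuine crash: give up */
    char *page = (char *)((uintptr_t)addr & ~(uintptr_t)(page_size - 1));
    mprotect(page, page_size, PROT_READ | PROT_WRITE);  /* "bring the page in" */
}

int main(void)
{
    page_size = (size_t)sysconf(_SC_PAGESIZE);
    region = mmap(NULL, 4 * page_size, PROT_NONE,       /* all pages "not present" */
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa;
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = on_fault;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    region[0] = 'x';                                    /* traps once, then succeeds */
    printf("after the fault: %c\n", region[0]);
    return 0;
}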

Page 22

Copy on Write (§ 9.3)

Advanced memory management also supports a new mechanism: copy-on-write allows sharing of writable data between processes while keeping the isolation property.

If a page is about to be modified, it is duplicated, and the page table is updated to point to the new copy.

The same mechanism is used to allocate new memory quickly, in a lazy manner: a zero-filled page is referenced and marked copy-on-write. This way gigabytes of memory can be allocated almost instantly.
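On Linux and most Unix systems this lazy zero-filled allocation is what an anonymous private mmap() gives you; a minimal sketch (the 1 GiB size is arbitrary, error handling trimmed):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 1UL << 30;                      /* "allocate" 1 GiB */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    p[0] = 1;        /* first touch: only now is a zero-filled frame actually allocated */
    printf("mapped %zu bytes at %p\n", len, (void *)p);
    munmap(p, len);
    return 0;
}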

Page 23

Copy on Write (§ 9.3)


9.3 Copy-on-Write

In Section 9.2, we illustrated how a process can start quickly by demand-paging in the page containing the first instruction. However, process creation using the fork() system call may initially bypass the need for demand paging by using a technique similar to page sharing (covered in Section 8.5.4). This technique provides rapid process creation and minimizes the number of new pages that must be allocated to the newly created process.

Recall that the fork() system call creates a child process that is a duplicate of its parent. Traditionally, fork() worked by creating a copy of the parent’s address space for the child, duplicating the pages belonging to the parent. However, considering that many child processes invoke the exec() system call immediately after creation, the copying of the parent’s address space may be unnecessary. Instead, we can use a technique known as copy-on-write, which works by allowing the parent and child processes initially to share the same pages. These shared pages are marked as copy-on-write pages, meaning that if either process writes to a shared page, a copy of the shared page is created. Copy-on-write is illustrated in Figures 9.7 and 9.8, which show the contents of the physical memory before and after process 1 modifies page C.

For example, assume that the child process attempts to modify a page containing portions of the stack, with the pages set to be copy-on-write. The operating system will create a copy of this page, mapping it to the address space of the child process. The child process will then modify its copied page and not the page belonging to the parent process. Obviously, when the copy-on-write technique is used, only the pages that are modified by either process are copied; all unmodified pages can be shared by the parent and child processes. Note, too, that only pages that can be modified need be marked as copy-on-write. Pages that cannot be modified (pages containing executable code) can be shared by the parent and child. Copy-on-write is a common technique used by several operating systems, including Windows XP, Linux, and Solaris.

When it is determined that a page is going to be duplicated using copy-on-write, it is important to note the location from which the free page will be allocated. Many operating systems provide a pool of free pages for such requests. These free pages are typically allocated when the stack or heap for a process must expand or when there are copy-on-write pages to be managed.

[Figure 9.7 Before process 1 modifies page C: process 1 and process 2 both map pages A, B and C to the same frames in physical memory.]

Page 24

Copy on Write (§ 9.3)


[Figure 9.8 After process 1 modifies page C: process 1 now points to its own copy of page C, while process 2 keeps pointing to the original; pages A and B remain shared.]

Operating systems typically allocate these pages using a technique known as zero-fill-on-demand. Zero-fill-on-demand pages have been zeroed-out before being allocated, thus erasing the previous contents.

Several versions of UNIX (including Solaris and Linux) provide a variation of the fork() system call—vfork() (for virtual memory fork)—that operates differently from fork() with copy-on-write. With vfork(), the parent process is suspended, and the child process uses the address space of the parent. Because vfork() does not use copy-on-write, if the child process changes any pages of the parent’s address space, the altered pages will be visible to the parent once it resumes. Therefore, vfork() must be used with caution to ensure that the child process does not modify the address space of the parent. vfork() is intended to be used when the child process calls exec() immediately after creation. Because no copying of pages takes place, vfork() is an extremely efficient method of process creation and is sometimes used to implement UNIX command-line shell interfaces.

9.4 Page Replacement

In our earlier discussion of the page-fault rate, we assumed that each page faults at most once, when it is first referenced. This representation is not strictly accurate, however. If a process of ten pages actually uses only half of them, then demand paging saves the I/O necessary to load the five pages that are never used. We could also increase our degree of multiprogramming by running twice as many processes. Thus, if we had forty frames, we could run eight processes, rather than the four that could run if each required ten frames (five of which were never used).

If we increase our degree of multiprogramming, we are over-allocating memory. If we run six processes, each of which is ten pages in size but actually uses only five pages, we have higher CPU utilization and throughput, with ten frames to spare. It is possible, however, that each of these processes, for a particular data set, may suddenly try to use all ten of its pages, resulting in a need for sixty frames when only forty are available.

Further, consider that system memory is not used only for holding program pages. Buffers for I/O also consume a considerable amount of memory. This use [...]

Page 25

Copy on Write (§ 9.3)

fork() can be implemented efficiently using copy-on-write: by marking all pages as copy-on-write, both processes can continue.

Therefore no memory is duplicated unless it is written to.

If frames are referenced by only one process, the copy-on-write bit is automatically unset by the operating system.

In practice almost no memory duplication is needed if the child quickly executes an execvp() call: execvp() gives the process a new page table, so the copy-on-write bits on the original pages can be cleared again.

(note that vfork() is even more efficient, as the marking of copy-on-write bits is not needed)
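A minimal sketch of the fork()-plus-execvp() pattern the slide refers to ("ls" is just an example command); thanks to copy-on-write, almost no pages are duplicated before the child replaces its address space:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                     /* parent and child share pages copy-on-write */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                         /* child: replace the address space */
        char *argv[] = { "ls", "-l", NULL };
        execvp("ls", argv);
        perror("execvp");                   /* reached only if exec fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);                  /* parent: wait for the child */
    return 0;
}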

Page 26

Page table entries (§ 9.2 + 9.3)

The foundation is the same as in chapter 8.

Control bits are added to all entries of a page table, among others:

• present bit (P): is this page loaded in physical memory? If not, the frame number is empty.

• modified bit (M): records changes to the memory page. If unchanged, the page can simply be discarded when it is removed from memory; if modified, it must be written to disk (to the swap or to a file).

• copy-on-write bit: if set, changes are possible, but a copy of the page is made before writing (see the sketch below).

Control bits were first used in 1962, in the Atlas computer.

Easy to realise sharing between processes: the same frame can occur in multiple page table entries.
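A hedged C sketch of what such a page table entry could look like; the field names and widths are purely illustrative, not any particular architecture's layout:

#include <stdint.h>

struct pte {
    uint32_t frame    : 20;  /* frame number; meaningless when present == 0              */
    uint32_t present  : 1;   /* P: the page is loaded in physical memory                 */
    uint32_t modified : 1;   /* M: the page was written; write it back before evicting   */
    uint32_t cow      : 1;   /* copy-on-write: duplicate the frame before a write        */
    uint32_t reserved : 9;   /* room for further control bits (referenced, protection, ...) */
};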

Page 27

Thrashing and locality (§ 9.6)

The worst-case scenario is bad: if the operating system makes the wrong choices, page faults occur so often that almost no useful work is performed and (almost) no progress is made.

Paging relies on the principle of locality: the memory that must be active to make progress is clustered.

This is a basic assumption of virtual memory!

If we push the limits, we will eventually hit a progress barrier: no progress can be made.

Page 28

Memory-mapped files: mmap() (§ 9.7)

Memory-map a file on disk into memory. In this way we can use the virtual memory system for file access.

The file is mapped onto a certain region of memory.

Unlike data read from a file with read() and then left in memory as ordinary pages, these pages can be replaced by others without being written to swap: the OS knows the relation between the memory and its file backing. Only the pages that are actually used are loaded (not the whole file), which is called lazy loading.

A big advantage is that caching can be managed globally (= over the whole system), which results in better utilisation of the system.

This method is used a lot by modern applications, for example in mobile data stores such as Realm (Android and iOS) and the reverse proxy Varnish. This allows them to achieve higher efficiency (both latency and throughput) compared to apps with the same functionality that use ‘traditional’ memory management techniques.
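A minimal POSIX sketch of mapping a file read-only and letting demand paging pull it in lazily ("data.txt" is a made-up file name; error handling trimmed):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDONLY);
    struct stat st;
    fstat(fd, &st);                                   /* file size for the mapping */
    char *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                                        /* the mapping stays valid   */
    fwrite(p, 1, (size_t)st.st_size, stdout);         /* pages fault in as touched */
    munmap(p, (size_t)st.st_size);
    return 0;
}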

Page 29

Memory-mapped files: mmap() (§ 9.7)

Sharing files between processes can be done:

[Figure 9.22 Memory-mapped files: pages 1-6 of a disk file are mapped into the virtual memories of process A and process B; both virtual memory maps point to the same copies of the disk blocks held in physical memory.]

Solaris still memory-maps the file; however, the file is mapped to the kernel address space. Regardless of how the file is opened, then, Solaris treats all file I/O as memory-mapped, allowing file access to take place via the efficient memory subsystem.

Multiple processes may be allowed to map the same file concurrently, to allow sharing of data. Writes by any of the processes modify the data in virtual memory and can be seen by all others that map the same section of the file. Given our earlier discussions of virtual memory, it should be clear how the sharing of memory-mapped sections of memory is implemented: the virtual memory map of each sharing process points to the same page of physical memory—the page that holds a copy of the disk block. This memory sharing is illustrated in Figure 9.22. The memory-mapping system calls can also support copy-on-write functionality, allowing processes to share a file in read-only mode but to have their own copies of any data they modify. So that access to the shared data is coordinated, the processes involved might use one of the mechanisms for achieving mutual exclusion described in Chapter 5.

Quite often, shared memory is in fact implemented by memory mapping files. Under this scenario, processes can communicate using shared memory by having the communicating processes memory-map the same file into their virtual address spaces. The memory-mapped file serves as the region of shared memory between the communicating processes (Figure 9.23). We have already seen this in Section 3.4.1, where a POSIX shared memory object is created and each communicating process memory-maps the object into its address space. In the following section, we illustrate support in the Windows API for shared memory using memory-mapped files.
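A minimal sketch of the POSIX shared-memory pattern mentioned above, producer side; "/demo_shm" is a hypothetical object name, error handling is trimmed, and a second process would shm_open() the same name and mmap() it to read the message (on some systems, link with -lrt):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);   /* shared-memory object   */
    ftruncate(fd, 4096);                                       /* size it to one page    */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(p, "hello from the producer");                      /* visible to all mappers */
    munmap(p, 4096);
    close(fd);
    /* shm_unlink("/demo_shm") would remove the object once it is no longer needed. */
    return 0;
}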

Page 30

Memory-mapped files: mmap() (§ 9.7)

Shared memory can be implemented using memory maps:

[Figure 9.23 Shared memory using memory-mapped I/O: process 1 and process 2 each map the same memory-mapped file into their address space, and that shared mapping serves as their shared-memory region.]

9.7.2 Shared Memory in the Windows API

The general outline for creating a region of shared memory using memory-mapped files in the Windows API involves first creating a file mapping for the file to be mapped and then establishing a view of the mapped file in a process’s virtual address space. A second process can then open and create a view of the mapped file in its virtual address space. The mapped file represents the shared-memory object that will enable communication to take place between the processes.

We next illustrate these steps in more detail. In this example, a producer process first creates a shared-memory object using the memory-mapping features available in the Windows API. The producer then writes a message to shared memory. After that, a consumer process opens a mapping to the shared-memory object and reads the message written by the producer.

To establish a memory-mapped file, a process first opens the file to be mapped with the CreateFile() function, which returns a HANDLE to the opened file. The process then creates a mapping of this file HANDLE using the CreateFileMapping() function. Once the file mapping is established, the process then establishes a view of the mapped file in its virtual address space with the MapViewOfFile() function. The view of the mapped file represents the portion of the file being mapped in the virtual address space of the process—the entire file or only a portion of it may be mapped. We illustrate this sequence in the program shown in Figure 9.24. (We eliminate much of the error checking for code brevity.)

The call to CreateFileMapping() creates a named shared-memory object called SharedObject. The consumer process will communicate using this shared-memory segment by creating a mapping to the same named object. The producer then creates a view of the memory-mapped file in its virtual address space. By passing the last three parameters the value 0, it indicates that the mapped view is the entire file. It could instead have passed values specifying an offset and size, thus creating a view containing only a subsection of the file. (It is important to note that the entire mapping may not be loaded into memory when the mapping is established. Rather, the mapped file may be demand-paged, thus bringing pages into memory only as they are accessed.) The MapViewOfFile() function returns a pointer to the shared-memory object; any accesses to this memory location are thus accesses to the memory-mapped [...]

Page 31

Memory-mapped files: mmap() (§ 9.7)

Advantages: • (much) less copying between kernel and user space; • less memory used (sharing between the filesystem cache and application data).

Disadvantages: • depending on permissions, a dangling pointer can modify files in an unpredictable way; • error handling is difficult (because signals are used instead of explicit return codes of function calls); • modifying the size of a file is hard (a remap is necessary; see the sketch below).
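For the last disadvantage, a hedged sketch of one portable way to grow a file that is mapped shared: extend the file with ftruncate() and then replace the mapping (Linux additionally offers mremap(), not used here). The function name and the minimal error handling are for illustration only.

#include <sys/mman.h>
#include <unistd.h>

/* Grow the file behind a MAP_SHARED mapping from old_len to new_len bytes.
   Returns the new mapping address, or MAP_FAILED on error. */
void *grow_mapping(int fd, void *old_addr, size_t old_len, size_t new_len)
{
    if (ftruncate(fd, (off_t)new_len) != 0)   /* 1. extend the file on disk      */
        return MAP_FAILED;
    munmap(old_addr, old_len);                /* 2. drop the old, too-small view */
    return mmap(NULL, new_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}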

Page 32

This week

Read chapters 8.6, 9 - 9.3, 9.6 - 9.9.

Please install UPPAAL on your laptop!