Class Notes in O.S.

Web Pages

There is a web page for the course. You can find it from my home page.

Can find these notes there. Let me know if you can't find it. They will be updated as bugs are found.

Will also have each lecture available as a separate page. I will produce the page after the lecture is given. These individual pages might not get updated.

Textbook

Text is Tanenbaum, "Modern Operating Systems".

Available in bookstore. We will do part 1, starting with chapter 1.

Computer Accounts and Mailman mailing list

You are entitled to a computer account, get it. Sign up for a Mailman mailing list for the course. http://www.cs.nyu.edu/mailman/listinfo/g22_2250_001_fl00

If you want to send mail to me, use [email protected] not the mailing list.

You may do assignments on any system you wish, but ...

o You are responsible for the machine. I extend deadlines if the nyu machines are down, not if yours are.

o Be sure to upload your assignments to the nyu systems.

If somehow your assignment is misplaced by me or a grader, we need to have a copy ON AN NYU SYSTEM that can be used to verify the date the lab was completed.

When you complete a lab (and have it on an nyu system), do not edit those files. Indeed, put the lab in a separate directory and keep out of that directory. You do not want to alter the dates.


Homework and Labs

I make a distinction between homework and labs.

Labs are:

o Required

o Due several lectures later (date given on assignment)

o Graded and form part of your final grade

o Penalized for lateness

Homeworks are:

o Optional

o Due at the beginning of the next lecture

o Not accepted late

o Mostly from the book

o Collected and returned

o Can help, but not hurt, your grade

Upper left board for assignments and announcements.

Interlude on Linkers


Originally called linkage editors by IBM.

This is an example of a utility program included with an operating system distribution. Like a compiler, it is not part of the operating system per se, i.e. it does not run in supervisor mode. Unlike a compiler it is OS dependent (what object/load file format is used) and is not (normally) language dependent.

What does a Linker Do?

Link of course.

When the assembler has finished it produces an object module that is almost runnable. There are two primary problems that must be solved for the object module to be runnable. Both are involved with linking (that word, again) together multiple object modules.

1. Relocating relative addresses.

o Each module (mistakenly) believes it will be loaded at location zero (or some other fixed location). We will use zero.

o So when there is an internal jump in the program, say jump to location 100, this means jump to location 100 of the current module.

o To convert this relative address to an absolute address, the linker adds the base address of the module to the relative address. The base address is the address at which this module will be loaded.

o Example: Module A is to be loaded starting at location 2300 and contains the instruction

jump 120

The linker changes this instruction to

jump 2420

o How does the linker know that Module A is to be loaded starting at location 2300?

o It processes the modules one at a time. The first module is to be loaded at location zero. So the relocating is trivial (adding zero). We say the relocation constant is zero.


o When finished with the first module (say M1), the linker knows the length of M1 (say that length is L1).

o Hence the next module is to be loaded starting at L1, i.e., the relocation constant is L1.

o In general the linker keeps track of the sum of the lengths of all the modules it has already processed and this is the location at which the next module is to be loaded (see the sketch following this list).

2. Resolving external references.

o If a C (or Java, or Pascal) program contains a function call

f(x)

to a function f() that is compiled separately, the resulting object module will contain some kind of jump to the beginning of f.

o But this is impossible!

o When the C program is compiled, the compiler (and assembler) do not know the location of f() so there is no way they can supply the starting address.

o Instead a dummy address is supplied and a notation made that this address needs to be filled in with the location of f(). This is called a use of f.

o The object module containing the definition of f() indicates that f is being defined and gives its relative address (which the linker will convert to an absolute address). This is called a definition of f.
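To make item 1 above concrete, here is a minimal C sketch of pass 1 of a linker. The structure layout and names (module, symbol, pass1) are my own hypothetical scaffolding, not the lab's required format; the point is only how the relocation constants and the symbol table are accumulated.

#include <string.h>

struct symbol { char name[16]; int absaddr; };

struct module {
    int ndefs;
    struct { char name[16]; int reladdr; } defs[8];
    int ntext;                          /* length of the module */
};

struct symbol symtab[100];
int nsyms = 0;

void pass1(struct module *mods, int nmods)
{
    int base = 0;                       /* the first module loads at 0 */
    for (int m = 0; m < nmods; m++) {
        for (int d = 0; d < mods[m].ndefs; d++) {
            strcpy(symtab[nsyms].name, mods[m].defs[d].name);
            symtab[nsyms].absaddr = base + mods[m].defs[d].reladdr;
            nsyms++;
        }
        base += mods[m].ntext;          /* relocation constant for the next module */
    }
}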

The output of a linker is called a load module because it is now ready to be loaded and run.

To see how a linker works let’s consider the following example, which is the first dataset from lab #1. The description in lab1 is more detailed.


The target machine is word addressable and has a memory of 1000 words, each consisting of 4 decimal digits. The first (leftmost) digit is the opcode and the remaining three digits form an address.

Each object module contains three parts, a definition list, a use list, and the program text itself. Each definition is a pair (sym, loc). Each use is also a pair (sym, loc); the address field of the word at loc points to the next use in the chain, or is 999 to end the chain.

For those text entries that do not form part of a use chain a fifth (leftmost) digit is added. If it is 8, the address in the word is relocatable. If it is 0 (and hence omitted), the address is absolute.

Sample input

1 xy 2
1 z 4
5 10043 56781 29994 80023 70024
0
1 z 3
6 80013 19994 10014 30024 10023 10102
0
1 z 1
2 50013 49994
1 z 2
1 xy 2
3 80002 19994 20014

I will illustrate a two-pass approach: The first pass simply produces the symbol table giving the values for xy and z (2 and 15 respectively). The second pass does the real work (using the values in the symbol table).

It is faster (less I/O) to do a one pass approach, but it is harder since you need ``fix-up code'' whenever a use occurs in a module that precedes the module with the definition.
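Here is a hedged C sketch of the pass-2 step that resolves one use chain under the format described above; resolve_chain and its argument names are hypothetical, and relocation of relocatable words is handled separately.

/* Follow a use chain through the module's text, patching each word on
   the chain with the symbol's absolute address.  Each text word is
   opcode*1000 + address; an address of 999 ends the chain. */
void resolve_chain(int *text, int start, int symaddr)
{
    int loc = start;
    while (loc != 999) {
        int next = text[loc] % 1000;           /* old address = next use */
        text[loc] = (text[loc] / 1000) * 1000  /* keep the opcode */
                  + symaddr;                   /* patch in the definition */
        loc = next;
    }
}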

xy=2
z=15


+0
0:      10043   1004+0 = 1004
1:      56781   5678
2: xy:  29994   ->z     2015
3:      80023   8002+0 = 8002
4:      ->z     70024   7015

+5
0       80013   8001+5 = 8006
1       19994   ->z     1015
2       10014   ->z     1015
3       ->z     30024   3015
4       10023   1002+5 = 1007
5       10102   1010

+11
0       50013   5001+11 = 5012
1       ->z     49994    4015

+13
0       80002   8000
1       19994   ->xy     1002
2  z:   ->xy    20014    2002

The linker on UNIX is mistakenly called ld (for loader), which is unfortunate since it links but does not load.

Lab #1: Implement a linker. The specific assignment is detailed on the sheet handed out in class and is due in two weeks, 2 October 2000. The content of the handout is available on the web as well (see the class home page).

Chapter 1. Introduction


Levels of abstraction (virtual machines)

Software (and hardware, but that is not this course) is often implemented in layers. The higher layers use the facilities provided by lower layers.

Alternatively said, the upper layers are written using a more powerful and more abstract virtual machine than the lower layers.

Alternatively said, each layer is written as though it runs on the virtual machine supplied by the lower layer and in turn provides a more abstract (pleasant) virtual machine for the higher layer to run on.

Using a broad brush, the layers are:

1. Scripts (e.g. shell scripts)

2. Applications and utilities

3. Libraries

4. The OS proper (the kernel)

5. Hardware

The kernel itself is normally layered, e.g.

1. ...

2. File systems

3. Machine independent I/O

4. Machine dependent device drivers

The machine independent I/O part is written assuming ``virtual (i.e. idealized) hardware''. For example, the machine independent I/O portion simply reads a block from a ``disk''. But in reality one must deal with the specific disk controller.

Often the machine independent part is more than one layer.


The term OS is not well defined. Is it just the kernel? How about the libraries? The utilities? All these are certainly system software, but it is not clear how much of it is part of the OS.

1.1: What is an operating system?

The kernel itself raises the level of abstraction and hides details. For example, a user (of the kernel) can write to a file (a concept not present in hardware) and ignore whether the file resides on a floppy, a CD-ROM, or a hard magnetic disk.

The kernel is a resource manager (so users don't conflict).

How is an OS fundamentally different from a compiler (say)?

Answer: Concurrency! Per Brinch Hansen, in Operating Systems Principles (Prentice Hall, 1973), writes:

The main difficulty of multiprogramming is that concurrent activities can interact in a time-dependent manner, which makes it practically impossible to locate programming errors by systematic testing. Perhaps, more than anything else, this explains the difficulty of making operating systems reliable.

1.2 History of Operating Systems

1. Single user (no OS).

2. Batch, uniprogrammed, run to completion.

o The OS now must be protected from the user program so that it is capable of starting (and assisting) the next program in the batch.

3. Multiprogrammed

o The purpose was to overlap CPU and I/O


o Multiple batches

IBM OS/MFT (Multiprogramming with a Fixed number of Tasks)

The (real) memory is partitioned and a batch is assigned to a fixed partition.

The memory assigned to a partition does not change

IBM OS/MVT (Multiprogramming with a Variable number of Tasks) (then other names)

Each job gets just the amount of memory it needs. That is, the partitioning of memory changes as jobs enter and leave

MVT is a more ``efficient'' user of resources but is more difficult.

When we study memory management, we will see that with varying size partitions questions like compaction and ``holes'' arise.

o Time sharing

This is multiprogramming with rapid switching between jobs (processes). Deciding when to switch and which process to switch to is called scheduling.

We will study scheduling when we do processor management

4. Multiple computers

o Multiprocessors: These date almost from the beginning of the computer age, but are now no longer exotic.

o Network OS: Make use of the multiple PCs/workstations on a LAN.

o Distributed OS: A ``seamless'' version of the above.

o Not part of this course (but often in G22.2251).

5. Real time systems

o Often in embedded systems

o Soft vs hard real time. In the latter missing a deadline is a fatal error--sometimes literally.


o Very important commercially, but not covered much in this course.

1.3: Operating System Concepts

This will be very brief. Much of the rest of the course will consist in ``filling in the details''.

1.3.1: Processes

A program in execution. If you run the same program twice, you have created two processes. For example if you have two editors running in two windows, each instance of the editor is a separate process.

Often one distinguishes the state or context (memory image, open files) from the thread of control. Then if one has many threads running in the same task, the result is a ``multithreaded process''.

The OS keeps information about all processes in the process table. Indeed, the OS views the process as its entry in that table. An example of an active entity being viewed as a data structure (cf. discrete event simulations). An observation made by Finkel in his (out of print) OS textbook.

The set of processes forms a tree via the fork system call. The forker is the parent of the forkee. If the parent stops running until the child finishes, the ``tree'' is quite simple, just a line. But the parent (in many OSes) is free to continue executing and in particular is free to fork again producing another child.

A process can send a signal to another process to cause the latter to execute a predefined function (the signal handler). This can be tricky to program since the programmer does not know when in his ``main'' program the signal handler will be invoked.
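As a small illustration, here is a hedged C sketch of installing and receiving a signal handler (SIGUSR1 is just an example choice); note that main cannot tell in advance when the handler will run.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t got_signal = 0;

/* The handler may be invoked at an unpredictable point in main. */
void handler(int signo) { got_signal = 1; }

int main(void)
{
    signal(SIGUSR1, handler);   /* install the signal handler */
    printf("send me SIGUSR1: kill -USR1 %d\n", (int)getpid());
    while (!got_signal)
        pause();                /* sleep until some signal arrives */
    printf("signal handled\n");
    return 0;
}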

1.3.2: Files

Modern systems have a hierarchy of files. A file system tree.


In MSDOS the hierarchy is a forest, not a tree. There is no file or directory that is an ancestor of both a:\ and c:\. In Unix the existence of symbolic links weakens the tree to a DAG.

Files and directories normally have permissions.

Normally have at least rwx for user, group, world.

More general are access control lists.

Often files have ``attributes'' as well. For example the Linux ext2 file system supports a ``d'' attribute that is a hint to the dump program not to back up this file.

When a file is opened, permissions are checked and, if the open is permitted, a file descriptor is returned that is used for subsequent operations.

Devices (mouse, tape drive, CD-ROM) are often viewed as ``special files''. In a UNIX system these are normally found in the /dev directory. Often utilities that are normally applied to (ordinary) files can be applied as well to some special files. For example, when you are accessing a UNIX system and do not have anything serious going on (e.g., right after you log in), type the following command

cat /dev/mouse

and then move the mouse. You kill the cat by typing cntl-C. I tried this on my Linux box and no damage occurred. Your mileage may vary.

Many systems have standard files that are automatically made available to a process upon startup. These (initial) file descriptors are fixed:

standard input: fd=0
standard output: fd=1
standard error: fd=2
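A short C sketch of the descriptor mechanics just described; /etc/motd is merely a hypothetical file to read, and fd 1 is standard output.

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[100];
    int fd = open("/etc/motd", O_RDONLY);   /* permissions checked here */
    if (fd < 0)
        return 1;                           /* the open was not permitted */
    ssize_t n = read(fd, buf, sizeof buf);  /* fd names the open file */
    if (n > 0)
        write(1, buf, n);                   /* fd 1 is standard output */
    close(fd);
    return 0;
}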

A convenience offered by some command interpreters is a pipe or pipeline. The pipeline


ls | wc

will give the number of files in the directory (plus other info).

================ Start Lecture #2 ================

Note: Lab 1 is assigned and due 2 October.

1.3.3: System Calls

System calls are the way a user (i.e. a program) directly interfaces with the OS. Some textbooks use the term envelope for the component of the OS responsible for fielding system calls and dispatching them. Here is a picture showing some of the OS components and the external events for which they are the interface.

Note that the OS serves two masters. The hardware (below) asynchronously sends interrupts and the user makes system calls and generates page faults.


What happens when a user executes a system call such as read()? We discuss this in much more detail later, but briefly what happens is:

1. Normal function call (in C, Ada, etc.).

2. Library routine (in C).

3. Small assembler routine.

   1. Move arguments to predefined place (perhaps registers).

   2. Poof (a trap instruction) and then the OS proper runs in supervisor mode.

   3. Fix up result (move to correct place).
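As an illustration of step 3, here is a hedged sketch for 32-bit Linux on x86 of that era: the syscall number and arguments go in registers and "int $0x80" is the ``poof''. (Syscall number 3 is read on that platform; other systems use different traps and conventions.)

/* A sketch of the small assembler routine for read() on Linux/i386. */
long my_read(int fd, void *buf, unsigned long count)
{
    long ret;
    asm volatile ("int $0x80"                        /* poof: trap to the OS */
                  : "=a" (ret)                       /* result comes back in eax */
                  : "a" (3),                         /* syscall number 3 = read */
                    "b" (fd), "c" (buf), "d" (count) /* args in ebx, ecx, edx */
                  : "memory");
    return ret;   /* negative values encode errno (the fix-up step) */
}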

1.4: OS Structure

I must note that Tanenbaum is a big advocate of the so-called microkernel approach, in which as much as possible is moved out of the (protected) microkernel into separate processes.

In the early 90s this was popular. Digital UNIX (now called Tru64) and Windows NT were examples. Digital Unix is based on Mach, a research OS from Carnegie Mellon University. Lately, the growing popularity of Linux has called into question the belief that ``all new operating systems will be microkernel based''.

1.4.1: Monolithic approach

The previous picture: one big program

The system switches from user mode to kernel mode during the poof and then back when the OS does a ``return''.

But of course we can structure the system better, which brings us to.


1.4.2: Layered Systems

Some systems have more layers and are more strictly structured.

An early layered system was ``THE'' operating system by Dijkstra. The layers were.


1. The operator

2. User programs

3. I/O mgt

4. Operator-process communication

5. Memory and drum management

The layering was done by convention, i.e., there was no enforcement by hardware and the entire OS is linked together as one program. This is true of many modern OSes as well (e.g., Linux).

The Multics system was layered in a more formal manner. The hardware provided several protection layers and the OS used them. That is, arbitrary code could not jump to or access data in a more protected layer.

1.4.3: Virtual machines

Use a ``hypervisor'' (beyond supervisor, i.e. beyond a normal OS) to switch between multiple Operating Systems

Each App/CMS runs on a virtual 370.

CMS is a single user OS.

A system call in an App traps to the corresponding CMS

CMS believes it is running on the machine so issues I/O instructions but ...

... I/O instructions in CMS trap to VM/370

1.4.4: Client Server


When implemented on one computer, a client server OS is the microkernel approach in which the microkernel just supplies interprocess communication and the main OS functions are provided by a number of separate processes.

This does have advantages. For example an error in the file server cannot corrupt memory in the process server. This makes errors easier to track down.

But it does mean that when a (real) user process makes a system call there are more process switches. These are not free.

A distributed system can be thought of as an extension of the client server concept where the servers are remote.


Chapter 2: Process Management

Tanenbaum's chapter title is ``processes''. I prefer process management. The subject matter is processes, process scheduling, interrupt handling, and IPC (Interprocess communication--and coordination).


2.1: Processes

Definition: A process is a program in execution.

We are assuming a multiprogramming OS that can switch from one process to another. Sometimes this is called pseudoparallelism since one has the illusion of a parallel processor.

The other possibility is real parallelism in which two or more processes are actually running at once because the computer system is a parallel processor, i.e., has more than one processor.

We do not study real parallelism (parallel processing, distributed systems, multiprocessors, etc) in this course.

2.1.1: The Process Model

Even though in actuality there are many processes running at once, the OS gives each process the illusion that it is running alone.

Virtual time: The time used by just this process. Virtual time progresses at a rate independent of other processes. Actually, this is not quite true: the virtual time is typically incremented a little during system calls used for process switching; so if there are more other processes, more ``overhead'' virtual time occurs.

Virtual memory: The memory as viewed by the process. Each process typically believes it has a contiguous chunk of memory starting at location zero. Of course this can't be true of all processes (or they would be using the same memory) and in modern systems it is actually true of no processes (the memory assigned is not contiguous and does not include location zero).

Virtual time and virtual memory are examples of abstractions provided by the operating system to the user processes so that the latter ``sees'' a more pleasant virtual machine than actually exists.

Process Hierarchies

Modern general purpose operating systems permit a user to create and destroy processes. In unix this is done by the fork system call, which creates a child process, and the exit system call, which terminates the current process.

After a fork both parent and child keep running (indeed they have the same program text) and each can fork off other processes.
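A minimal C sketch of the fork/exit pattern just described; here the parent chooses to wait, which collapses the ``tree'' to a line, but it could equally keep computing or fork again.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* create the child */
    if (pid == 0) {                  /* fork returns 0 in the child */
        printf("child %d running\n", (int)getpid());
        return 0;                    /* exiting terminates the child */
    }
    waitpid(pid, NULL, 0);           /* parent blocks until the child finishes */
    printf("parent %d reaped child %d\n", (int)getpid(), (int)pid);
    return 0;
}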


A process tree results. The root of the tree is a special process created by the OS during startup.

MS-DOS is not multiprogrammed, so when one process starts another, the first process is blocked and waits until the second is finished.

Process states and transitions


The above diagram contains a great deal of information.


Consider a running process P that issues an I/O request.

o The process blocks.

o At some later point, a disk interrupt occurs and the driver detects that P's request is satisfied.

o P is unblocked, i.e. is moved from blocked to ready

o At some later time the operating system looks for a ready job to run and picks P.

A preemptive scheduler has the dotted line preempt; a non-preemptive scheduler doesn't.

The number of processes changes only for two arcs: create and terminate.

Suspend and resume are medium term scheduling

o Done on a longer time scale.

o Involves memory management as well.

o Sometimes called two level scheduling.

One can organize an OS around the scheduler.

Write a minimal ``kernel'' consisting of the scheduler, interrupt handlers, and IPC (interprocess communication). The rest of the OS consists of kernel processes (e.g. memory, filesystem) that act as servers for the user processes (which of course act as clients).

The system processes also act as clients (of other system processes).


The above is called the client-server model and is one Tanenbaum likes. His ``Minix'' operating system works this way. Indeed, there was reason to believe that it would dominate. But that hasn't happened.

Such an OS is sometimes called server based.

Systems like traditional unix or linux would then be called self-service since the user process serves itself.

o That is, the user process switches to kernel mode and performs the system call.

o To repeat: the same process changes back and forth from/to user<-->system mode and services itself.

2.1.3: Implementation of Processes

The OS organizes the data about each process in a table naturally called the process table. Each entry in this table is called a process table entry or PTE.

One entry per process. The central data structure for process management.


A process state transition (e.g., moving from blocked to ready) is reflected by a change in the value of one or more fields in the PTE.

We have converted an active entity (process) into a data structure (PTE). Finkel calls this the level principle ``an active entity becomes a data structure when looked at from a lower level''.

The PTE contains a great deal of information about the process. For example,

o Saved value of registers when process not running

o Stack pointer

o CPU time used

o Process id (PID)

o Process id of parent (PPID)

o User id (uid and euid)

o Group id (gid and egid)

o Pointer to text segment (memory for the program text)

o Pointer to data segment

o Pointer to stack segment

o UMASK (default permissions for new files)

o Current working directory

o Many others
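As a concrete (and entirely hypothetical) illustration of the process as ``just a data structure'', a process table entry might look like this in C; real systems differ in detail.

struct process_table_entry {
    unsigned long regs[16];    /* saved registers when not running */
    unsigned long sp;          /* stack pointer */
    long cpu_time;             /* CPU time used */
    int pid, ppid;             /* process id and parent's process id */
    int uid, euid;             /* user id and effective user id */
    int gid, egid;             /* group id and effective group id */
    void *text, *data, *stack; /* pointers to the three memory segments */
    int umask;                 /* default permissions for new files */
    char cwd[256];             /* current working directory */
    int state;                 /* e.g., running, ready, or blocked */
};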

An aside on Interrupts

In a well defined location in memory (specified by the hardware) the OS stores an interrupt vector, which contains the address of the (first level) interrupt handler.


Tanenbaum calls the interrupt handler the interrupt service routine. Actually one can have different priorities of interrupts and the interrupt vector contains one pointer for each level. This is why it is called a vector.

Assume a process P is running and a disk interrupt occurs for the completion of a disk read previously issued by process Q, which is currently blocked. Note that interrupts are unlikely to be for the currently running process (because the process waiting for the interrupt is likely blocked).

1. The hardware stacks the program counter etc (possibly some registers).

2. Hardware loads new program counter from the interrupt vector.

o Loading the program counter causes a jump

o Steps 1 and 2 are similar to a procedure call. But the interrupt is asynchronous

3. Assembly language routine saves registers.

4. Assembly routine sets up new stack.

o These last two steps can be called setting up the C environment

5. Assembly routine calls C procedure (Tanenbaum forgot this one)

6. C procedure does the real work

o Determines what caused the interrupt (in this case a disk completed an I/O)

How does it figure out the cause?

Which priority interrupt was activated.

The controller can write data in memory before the interrupt

The OS can read registers in the controller


o Mark process Q as ready to run.

That is move Q to the ready list (note that again we are viewing Q as a data structure).

The state of Q is now ready (it was blocked before).

The code that Q needs to run initially is likely to be OS code. For example, Q probably needs to copy the data just read from a kernel buffer into user space.

o Now we have at least two processes ready to run: P and Q

o The scheduler decides which process to run (P or Q or something else). Let's assume that the decision is to run P.

7. The C procedure (that did the real work in the interrupt processing) continues and returns to the assembly code.

8. Assembly language restores P's state (e.g., registers) and starts P at the point it was when the interrupt occurred.

2.2: Inter Process Communication (IPC) and Process Coordination and Synchronization

2.2.1: Race Conditions

A race condition occurs when two processes can interact and the outcome depends on the order in which the processes execute.


Imagine two processes both accessing x, which is initially 10.

o One process is to execute x <-- x+1

o The other is to execute x <-- x-1

o When both are finished x should be 10

o But we might get 9 and might get 11!

o Show how this can happen (x <-- x+1 is not atomic)

o Tanenbaum shows how this can lead to disaster for a printer spooler
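Here is a minimal C sketch of the race, using threads instead of processes so that x is trivially shared; x = x + 1 compiles to a load, an add, and a store, and interleaving two such sequences can lose an update.

#include <pthread.h>
#include <stdio.h>

int x = 10;                          /* shared variable */

void *inc(void *a) { for (int i = 0; i < 1000000; i++) x = x + 1; return NULL; }
void *dec(void *a) { for (int i = 0; i < 1000000; i++) x = x - 1; return NULL; }

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, inc, NULL);
    pthread_create(&t2, NULL, dec, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("x = %d (10 only if we were lucky)\n", x);   /* often not 10 */
    return 0;
}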

Homework: 2

2.2.2: Critical sections

We must prevent interleaving sections of code that need to be atomic with respect to each other. That is, the conflicting sections need mutual exclusion. If process A is executing its critical section, it excludes process B from executing its critical section. Conversely if process B is executing its critical section, it excludes process A from executing its critical section.

Requirements for a critical section implementation.

1. No two processes may be simultaneously inside their critical section.

2. No assumption may be made about the speeds or the number of CPUs.

3. No process outside its critical section may block other processes

4. No process should have to wait forever to enter its critical section

o I do NOT make this last requirement.

o I just require that the system as a whole make progress (so not all processes are blocked)

o I refer to solutions that do not satisfy Tanenbaum's last condition as unfair, but nonetheless correct, solutions


o Stronger fairness conditions can also be defined

2.2.3 Mutual exclusion with busy waiting

The operating system can choose not to preempt itself. That is, no preemption for system processes (if the OS is client server) or for processes running in system mode (if the OS is self service). Forbidding preemption for system processes would prevent the problem above where x<--x+1 not being atomic crashed the printer spooler if the spooler is part of the OS.

But this is not adequate

Does not work for user programs. So the Unix printer spooler would not be helped.

Does not prevent conflicts between the main line OS and interrupt handlers.

o This conflict could be prevented by blocking interrupts while the main line is in its critical section.

o Indeed, blocking interrupts is often done for exactly this reason.

o Do not want to block interrupts for too long or the system will seem unresponsive

Does not work if the system has several processors

o Both main lines can conflict

o One processor cannot block interrupts on the other

Software solutions for two processes

Initially P1wants=P2wants=false

Code for P1                          Code for P2

Loop forever {                       Loop forever {
    P1wants <-- true        ENTRY        P2wants <-- true
    while (P2wants) {}      ENTRY        while (P1wants) {}
    critical-section                     critical-section
    P1wants <-- false       EXIT         P2wants <-- false
    non-critical-section }               non-critical-section }

Explain why this works.

But it is wrong! Why?

Let's try again. The trouble was that setting want before the loop permitted us to get stuck. We had them in the wrong order!

Initially P1wants=P2wants=false

Code for P1                          Code for P2

Loop forever {                       Loop forever {
    while (P2wants) {}      ENTRY        while (P1wants) {}
    P1wants <-- true        ENTRY        P2wants <-- true
    critical-section                     critical-section
    P1wants <-- false       EXIT         P2wants <-- false
    non-critical-section }               non-critical-section }

Explain why this works.

But it is wrong again! Why?

So let's be polite and really take turns. None of this wanting stuff.

Initially turn=1

Code for P1                          Code for P2

Loop forever {                       Loop forever {
    while (turn = 2) {}                  while (turn = 1) {}
    critical-section                     critical-section
    turn <-- 2                           turn <-- 1
    non-critical-section }               non-critical-section }

This one forces alternation, so is not general enough. Specifically, it does not satisfy condition three, which requires that no process in its non-critical section can stop another process from entering its critical section. With alternation, if one process is in its non-critical section (NCS) then the other can enter the CS once but not again.

In fact, it took years (way back when) to find a correct solution. Many earlier ``solutions'' were found and several were published, but all were wrong. The first true solution was found by Dekker. It is very clever, but I am skipping it (I cover it when I teach OS II). Subsequently, algorithms with better fairness properties were found (e.g. no task has to wait for another task to enter the CS twice).

================ Start Lecture #3 ================

Note: Lab 1 due date extended 1 week (too many typos!). It is now due 9 October. Added 30 minutes to office hours 5-5:30 wed, mostly for this class.

Will do some more on lab1 in about 10 minutes.

Jesse Wei-Yeh Chu writes:

> The last program text in Module no. 4 of input set 2 has only 4 digits
> (4999). Is this a typo or an intended error for our linker to catch?

Typo!

> Also for input no. 2, there is a discrepancy between the pdf file and the
> text file. Module 5 in pdf indicates both definition and use list as 00,
> whereas the text version contains only one 00.

The text version is wrong. A filter I used to save space squashed duplicate lines (thinking they were blanks). Alas it doesn't work here.

I fixed both errors.


What follows is Peterson's solution. When it was published, it was a surprise to see such a simple solution. In fact Peterson gave a solution for any number of processes. A proof that the algorithm satisfies our properties (including a strong fairness condition) can be found in Operating Systems Review, Jan 1990, pp. 18-22.

Initially P1wants=P2wants=false and turn=1

Code for P1                              Code for P2

Loop forever {                           Loop forever {
    P1wants <-- true                         P2wants <-- true
    turn <-- 2                               turn <-- 1
    while (P2wants and turn=2) {}            while (P1wants and turn=1) {}
    critical-section                         critical-section
    P1wants <-- false                        P2wants <-- false
    non-critical-section                     non-critical-section
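A hedged C rendering of Peterson's algorithm for process i (0 or 1). The C11 sequentially consistent atomics matter: with plain variables a modern compiler or CPU may reorder the stores and break the argument.

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool wants[2];       /* both initially false */
atomic_int  turn;

void enter_cs(int i)                     /* ENTRY code for process i */
{
    int other = 1 - i;
    atomic_store(&wants[i], true);
    atomic_store(&turn, other);          /* politely defer to the other */
    while (atomic_load(&wants[other]) && atomic_load(&turn) == other)
        ;                                /* busy wait */
}

void leave_cs(int i)                     /* EXIT code for process i */
{
    atomic_store(&wants[i], false);
}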

Hardware assist (test and set)

TAS(b), where b is a binary variable, ATOMICALLY sets b<--true and returns the OLD value of b. Of course it would be silly to return the new value of b since we know the new value is true.

Now implementing a critical section for any number of processes is trivial.

loop forever {
    while (TAS(s)) {}     ENTRY
    CS
    s <-- false           EXIT
    NCS
}
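A sketch of the same thing in C, with a GCC atomic builtin standing in for the hardware instruction (real hardware exposes TAS, or an equivalent such as x86's xchg, directly):

#include <stdbool.h>

bool s = false;                    /* false plays the role of "open" */

/* Atomically set the flag and return its OLD value. */
bool TAS(bool *b) { return __atomic_test_and_set(b, __ATOMIC_SEQ_CST); }

void entry(void)   { while (TAS(&s)) /* busy wait */ ; }      /* ENTRY */
void exit_cs(void) { __atomic_clear(&s, __ATOMIC_SEQ_CST); }  /* EXIT  */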

P and V and Semaphores

Note: Tanenbaum does both busy waiting (like above) and blocking (process switching) solutions. We will only do busy waiting.


The entry code is often called P and the exit code V (Tanenbaum only uses P and V for blocking, but we use it for busy waiting). So the critical section problem is to write P and V so that

loop forever
    P
    critical-section
    V
    non-critical-section

satisfies

1. Mutual exclusion.

2. No speed assumptions.

3. No blocking by processes in NCS.

4. Forward progress (my weakened version of Tanenbaum's last condition).

Note that I use indenting carefully and hence do not need (and sometimes omit) the braces {}

A binary semaphore abstracts the TAS solution we gave for the critical section problem.

A binary semaphore S takes on two possible values, ``open'' and ``closed''. Two operations are supported.

P(S) is

while (S=closed) {}
S <-- closed          <== This is NOT the body of the while

where finding S=open and setting S<--closed is atomic

That is, wait until the gate is open, then run through and atomically close the gate. Said another way, it is not possible for two processes doing P(S) simultaneously to both see S=open (unless a V(S) is also simultaneous with both of them).


V(S) is simply S<--open

The above code is not real, i.e., it is not an implementation of P. It is, instead, a definition of the effect P is to have.

To repeat: for any number of processes, the critical section problem can be solved by

loop forever
    P(S)
    CS
    V(S)
    NCS

The only specific solution we have seen for an arbitrary number of processes is the one just above with P(S) implemented via test and set.

Remark: Peterson's solution requires each process to know its process number. The TAS solution does not. Thus, strictly speaking Peterson did not provide an implementation of P and V. He did solve the critical section problem.

To solve other coordination problems we want to extend binary semaphores.

With binary semaphores, two consecutive Vs do not permit two subsequent Ps to succeed (the gate cannot be doubly opened). We might want to limit the number of processes in the section to 3 or 4, not always just 1.

The solution to both of these shortcomings is to remove the restriction to a binary variable and define a generalized or counting semaphore.

A counting semaphore S takes on non-negative integer values. Two operations are supported.

P(S) is

while (S=0) {}
S--

where finding S>0 and decrementing S is atomic


That is, wait until the gate is open (positive), then run through and atomically close the gate one unit. Said another way, it is not possible for two processes doing P(S) simultaneously to both see the same positive value of S unless a V(S) is also simultaneous.

V(S) is simply S++

These counting semaphores can solve what I call the semi-critical-section problem, where you permit up to k processes in the section. When k=1 we have the original critical-section problem.

initially S=k

loop forever
    P(S)
    SCS       <== semi-critical-section
    V(S)
    NCS

Producer-consumer problem

Two classes of processes.

o Producers, which produce items and insert them into a buffer.

o Consumers, which remove items and consume them.

What if the producer encounters a full buffer?
Answer: Block it.

What if the consumer encounters an empty buffer?
Answer: Block it.

Also called the bounded buffer problem.

o Another example of active entities being replaced by a data structure when viewed at a lower level (Finkel's level principle).

Initially e=k, f=0 (counting semaphore); b=open (binary semaphore)


Producer                                 Consumer

loop forever                             loop forever
    produce-item                             P(f)
    P(e)                                     P(b); take item from buf; V(b)
    P(b); add item to buf; V(b)              V(e)
    V(f)                                     consume-item

k is the size of the buffer

e represents the number of empty buffer slots

f represents the number of full buffer slots

We assume the buffer itself is only serially accessible. That is, only one operation at a time.

o This explains the P(b) V(b) around buffer operations

o I use ; and put three statements on one line to suggest that a buffer insertion or removal is viewed as one atomic operation.

o Of course this writing style is only a convention, the enforcement of atomicity is done by the P/V.

The P(e), V(f) motif is used to force bounded alternation. If k=1 it gives strict alternation.
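A hedged C sketch of the same solution using POSIX semaphores, where sem_wait plays P and sem_post plays V; the circular-buffer details (in, out, K) are my additions, not part of the pseudocode above.

#include <semaphore.h>

#define K 10                       /* buffer size */
int buf[K], in = 0, out = 0;
sem_t e, f, b;                     /* empty slots, full slots, binary */

void init(void)
{
    sem_init(&e, 0, K);            /* initially e = k */
    sem_init(&f, 0, 0);            /* initially f = 0 */
    sem_init(&b, 0, 1);            /* initially b = open */
}

void produce(int item)
{
    sem_wait(&e);                                            /* P(e) */
    sem_wait(&b); buf[in] = item; in = (in+1) % K; sem_post(&b);
    sem_post(&f);                                            /* V(f) */
}

int consume(void)
{
    int item;
    sem_wait(&f);                                            /* P(f) */
    sem_wait(&b); item = buf[out]; out = (out+1) % K; sem_post(&b);
    sem_post(&e);                                            /* V(e) */
    return item;
}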

Dining Philosophers

A classical problem from Dijkstra

5 philosophers sitting at a round table.

Each has a plate of spaghetti.

There is a fork between each two

Need two forks to eat

What algorithm do you use for access to the shared resource (the forks)? The obvious solution (pick up right; pick up left) deadlocks. Big lock around everything serializes.


Good code in the book.

The purpose of mentioning the Dining Philosophers problem without giving the solution is to give a feel of what coordination problems are like. The book gives others as well. We are skipping these (again this material would be covered in a sequel course). If you are interested look, for example, at http://allan.ultra.nyu.edu/gottlieb/courses/1997-98-spring/os/class-notes.html.

Homework: 14, 15 (these have short answers but are not easy).

Readers and writers

Two classes of processes.

o Readers, which can work concurrently.

o Writers, which need exclusive access.

Must prevent 2 writers from being concurrent.

Must prevent a reader and a writer from being concurrent.

Must permit readers to be concurrent when no writer is active.

Perhaps want fairness (i.e., freedom from starvation).

Variants

1. Writer-priority readers/writers.

2. Reader-priority readers/writers.

Quite useful in multiprocessor operating systems. The ``easy way out'' is to treat all processes as writers in which case the problem reduces to mutual exclusion (P and V). The disadvantage of the easy way out is that you give up reader concurrency. Again for more information see the web page referenced above.

================ Start Lecture #4 ================


2.4: Process Scheduling

Scheduling the processor is often called ``process scheduling'' or simply ``scheduling''.

The objectives of a good scheduling policy include

Fairness.

Efficiency.

Low response time (important for interactive jobs).

Low turnaround time (important for batch jobs).

High throughput [the above are from Tanenbaum].

Repeatability. Dartmouth (DTSS) ``wasted cycles'' and limited logins for repeatability.

Fair across projects.

o ``Cheating'' in unix by using multiple processes.

o TOPS-10.

o Fair share research project.

Degrade gracefully under load.


Recall the basic diagram describing process states

For now we are discussing short-term scheduling, i.e., the arcs connecting running <--> ready.

Medium term scheduling is discussed later.

Preemption

It is important to distinguish preemptive from non-preemptive scheduling algorithms.

Preemption means the operating system moves a process from running to ready without the process requesting it.

Without preemption, the system implements ``run to completion (or yield or block)''.

The ``preempt'' arc in the diagram. We do not consider yield (a solid arrow from running to ready).

Preemption needs a clock interrupt (or equivalent).

Preemption is needed to guarantee fairness.


Found in all modern general purpose operating systems.

Even non-preemptive systems can be multiprogrammed (e.g., when processes block for I/O).

Deadline scheduling

This is used for real time systems. The objective of the scheduler is to find a schedule for all the tasks (there are a fixed set of tasks) so that each meets its deadline. The run time of each task is known in advance.

Actually it is more complicated.

Periodic tasks.

What if we can't schedule all tasks so that each meets its deadline (i.e., what should be the penalty function)?

What if the run-time is not constant but has a known probability distribution?

We do not cover deadline scheduling in this course.

The name game

There is an amazing inconsistency in naming the different (short-term) scheduling algorithms. Over the years I have used primarily 4 books: In chronological order they are Finkel, Deitel, Silberschatz, and Tanenbaum. The table just below illustrates the name game for these four books. After the table we discuss each scheduling policy in turn.

Finkel   Deitel   Silberschatz   Tanenbaum
-------------------------------------------
FCFS     FIFO     FCFS           --          unnamed in Tanenbaum
RR       RR       RR             RR
PS       **       PS             PS
SRR      **       SRR            **          not in Tanenbaum
SPN      SJF      SJF            SJF
PSPN     SRT      PSJF/SRTF      --          unnamed in Tanenbaum
HPRN     HRN      **             **          not in Tanenbaum
**       **       MLQ            **          only in Silberschatz
FB       MLFQ     MLFQ           MQ

First Come First Served (FCFS, FIFO, FCFS, --)

If the OS ``doesn't'' schedule, it still needs to store the PTEs somewhere. If it is a queue you get FCFS. If it is a stack (strange), you get LCFS. Perhaps you could get some sort of random policy as well.

Only FCFS is considered.

The simplest scheduling policy.

Non-preemptive.

Round Robin (RR, RR, RR, RR)

An important preemptive policy. Essentially the preemptive version of FCFS.

The key parameter is the quantum size q.

When a process is put into the running state a timer is set to q.

If the timer goes off and the process is still running, the OS preempts the process.

o This process is moved to the ready state (the preempt arc in the diagram), where it is placed at the rear of the ready list (a queue).

o The process at the front of the ready list is removed from the ready list and run (i.e., moves to state running).

When a process is created, it is placed at the rear of the ready list.

As q gets large, RR approaches FCFS

As q gets small, RR approaches PS (Processor Sharing, described next)

What value of q should we choose?


o Tradeoff

o Small q makes system more responsive.

o Large q makes system more efficient since less process switching.

Consider the set of processes in the table below. When does each process finish if RR scheduling is used with q=1, if q=2, if q=3, if q=100? First assume (unrealistically) that context switch time is zero. Then assume it is .1. Each process performs no I/O (i.e., no process ever blocks). All times are in milliseconds. The CPU time is the total time required for the process (excluding context switch time). The creation time is the time when the process is created. So P1 is created when the problem begins and P2 is created 3 milliseconds later.

Process   CPU Time   Creation Time
P1        20         0
P2        3          3
P3        2          5
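For experimenting with the quantum, here is a minimal C sketch of an RR simulation with zero context-switch time. It assumes one common tie-breaking convention (a new arrival is queued ahead of a process preempted at the same instant), which a real assignment might specify differently.

#include <stdio.h>

struct proc { int cpu, created, left, finish; };

int main(void)
{
    struct proc p[3] = {{20,0,20,0}, {3,3,3,0}, {2,5,2,0}};
    int q = 2;                           /* quantum: try 1, 2, 3, 100 */
    int queue[64], head = 0, tail = 0;   /* FIFO ready list */
    int arrived = 0, done = 0, t = 0;

    while (done < 3) {
        while (arrived < 3 && p[arrived].created <= t)
            queue[tail++] = arrived++;       /* admit new arrivals */
        if (head == tail) {                  /* ready list empty: idle */
            t = p[arrived].created;
            continue;
        }
        int i = queue[head++];
        int run = p[i].left < q ? p[i].left : q;
        t += run;                            /* run one quantum (or less) */
        p[i].left -= run;
        while (arrived < 3 && p[arrived].created <= t)
            queue[tail++] = arrived++;       /* arrivals during the run */
        if (p[i].left == 0) { p[i].finish = t; done++; }
        else queue[tail++] = i;              /* preempted: rear of the queue */
    }
    for (int i = 0; i < 3; i++)
        printf("P%d finishes at time %d\n", i + 1, p[i].finish);
    return 0;
}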

Processor Sharing (PS, **, PS, PS)

Merge the ready and running states and permit all ready jobs to be run at once. However, the processor slows down so that when n jobs are running at once each progresses at a speed 1/n as fast as it would if it were running alone.

Clearly impossible as stated due to the overhead of process switching. Of theoretical interest (easy to analyze).

Approximated by RR when the quantum is small. Make sure you understand this last point. For example, consider the last homework assignment (with zero context switch time) and consider q=1, q=.1, q=.01, etc.

Variants of Round Robin

State dependent RR

o Same as RR but q is varied dynamically depending on the state of the system.


o Favor processes holding important resources.

For example, non-swappable memory.

Perhaps this should be considered medium term scheduling since you probably do not recalculate q each time.

External priorities: RR but a user can pay more and get bigger q. That is one process can be given a higher priority than another. But this is not an absolute priority, i.e., the lower priority (i.e., less important) process does get to run, but not as much as the high priority process.

Priority Scheduling

Each job is assigned a priority (externally, perhaps by charging more for higher priority) and the highest priority ready job is run.

Similar to ``External priorities'' above.

If many processes have the highest priority, use RR among them.

Can easily starve processes (see aging below for fix).

Can have the priorities changed dynamically to favor processes holding important resources (similar to state dependent RR).

Many policies can be thought of as priority scheduling in which we run the job with the highest priority (with different notions of priority for different policies).

Priority aging

As a job is waiting, raise its priority so eventually it will have the maximum priority.

This prevents starvation (assuming all jobs terminate). There may be many processes with the maximum priority.

If so, use RR among those with max priority.

Can apply priority aging to many policies, in particular to priority scheduling described above.


Selfish RR (SRR, **, SRR, **)

Preemptive. Perhaps it should be called ``snobbish RR''.

``Accepted processes'' run RR.

Accepted processes have their priority increase at rate b>=0.

A new process starts at priority 0; its priority increases at rate a>=0.

A new process becomes an accepted process when its priority reaches that of an accepted process (or when there are no accepted processes). Note that at any time all accepted processes have the same priority.

If b>=a, get FCFS.

If b=0, get RR.

If a>b>0, it is interesting.

Shortest Job First (SPN, SJF, SJF, SJF)

Sort jobs by total execution time needed and run the shortest first.

Non-preemptive.

First consider a static situation where all jobs are available in the beginning, we know how long each one takes to run, and we implement ``run-to-completion'' (i.e., we don't even switch to another process on I/O). In this situation, SJF has the shortest average waiting time.

o Assume you have a schedule with a long job right before a short job.


o Consider swapping the two jobs.

o This decreases the wait for the short job by the length of the long job and increases the wait of the long job by the length of the short job.

o This decreases the total waiting time for these two.

o Hence decreases the total waiting for all jobs and hence decreases the average waiting time as well.

o Hence, whenever a long job is right before a short job, we can swap them and decrease the average waiting time.

o Thus the lowest average waiting time occurs when there are no short jobs right before long jobs.

o This is SJF.

In the more realistic case where the scheduler switches to a new process when the currently running process blocks (say for I/O), we should call the policy shortest next-CPU-burst first.

The difficulty is predicting the future (i.e., knowing in advance the time required for the job or next-CPU-burst).

Preemptive Shortest Job First (PSPN, SRT, PSJF/SRTF, --)

Preemptive version of above

Permit a process that enters the ready list to preempt the running process if the time for the new process (or for its next burst) is less than the remaining time for the running process (or for its current burst).

It will never happen that a process in the ready list will require less time than the remaining time for the currently running process. Why?
Ans: When the process joined the ready list it would have started running if the current process had more time remaining. Since that didn't happen, the current job had less time remaining, and now it has even less.

Can starve processes that require a long burst.

o This is fixed by the standard technique.

o What is that technique?
Ans: Priority aging.


Highest Penalty Ratio Next (HPRN, HRN, **, **)

Run the process that has been ``hurt'' the most. For each process, let r = T/t, where T is the wall clock time this process has been in the system and t is the running time of the process to date. If r=5, the job has been running 1/5 of the time it has been in the system.

We call r the penalty ratio and run the process having the highest r value.

HPRN is normally defined to be non-preemptive (i.e., the system only checks r when a burst ends), but there is a preemptive analogue.

o Do not worry about a process that just enters the system (its ratio is undefined)

o When putting process into the run state compute the time at which it will no longer have the highest ratio and set a timer.

o When a process is moved into the ready state, compute its ratio and preempt if needed.

HRN stands for highest response ratio next and means the same thing.

This policy is another example of priority scheduling

Multilevel Queues (**, **, MLQ, **)

Put different classes of processes in different queues

Process does not move from one queue to another. Can have different policies on the different queues.

For example, might have a background (batch) queue that is FCFS and one or more foreground queues that are RR.

Must also have a policy among the queues.

For example, might have two queues, foreground and background, and give the first absolute priority over the second.

o Might apply aging to prevent background starvation.


o But might not, i.e., no guarantee of service for background processes. View a background process as a ``cycle soaker''.

Multilevel Feedback Queues (FB, MFQ, MLFBQ, MQ)

Many queues and processes move from queue to queue in an attempt to dynamically separate ``batch-like'' from interactive processes.

Run processes from the highest priority nonempty queue in a RR manner. When a process uses its full quantum (looks like a batch process), move it to a lower priority queue.

When a process doesn't use its full quantum (looks like an interactive process), move it to a higher priority queue.

A long process with (perhaps spurious) I/O will remain in the upper queues.

Might have the bottom queue FCFS

Many variants. For example, might let a process stay in the top queue 1 quantum, next queue 2 quanta, next queue 4 quanta (i.e. return a process to the rear of the same queue it was in if the quantum expires).

Theoretical Issues

Considerable theory has been developed.

NP completeness results abound. Much work in queuing theory to predict performance.

Not covered in this course.

Medium Term Scheduling

Decisions made at a coarser time scale.

Called two-level scheduling by Tanenbaum.

Suspend (swap out) some processes if memory is over-committed.


Criteria for choosing a victim:

o How long since previously suspended

o How much CPU time used recently

o How much memory does it use

o External priority (pay more, get swapped out less)

We will discuss medium term scheduling again next chapter (memory management).

Long Term Scheduling

``Job scheduling''. Decide when to start jobs (i.e., do not necessarily start them when submitted). Force users to log out and/or block logins if over-committed.

o CTSS (an early time sharing system at MIT) did this to insure decent interactive response time.

o Unix does this if out of processes (i.e., out of PTEs)

o ``LEM jobs during the day'' (Grumman).

Some supercomputer sites.

================ Start Lecture #5 ================

Chapter 3: Memory Management

Also called storage management or space management.


Memory management must deal with the storage hierarchy present in modern machines.

Registers, cache, central memory, disk, tape (backup).

Move data from level to level of the hierarchy.

How should we decide when to move data up to a higher level?

o Fetch on demand (e.g. demand paging, which is dominant now).

o Prefetch

Read-ahead for file I/O.

Large cache lines and pages.

Extreme example. Entire job present whenever running.

We will see in the next few lectures that there are three independent decisions:

1. Segmentation (or no segmentation)

2. Paging (or no paging)

3. Fetch on demand (or no fetching on demand)

Memory management implements address translation.

Convert virtual addresses to physical addresses.

o Also called logical to real address translation.

o A virtual address is the address expressed in the program.

o A physical address is the address understood by the computer.


The translation from virtual to physical addresses is performed by the Memory Management Unit or (MMU).

Another example of address translation is the conversion of relative addresses to absolute addresses by the linker.

The translation might be trivial (e.g., the identity) but not in a modern general purpose OS.

The translation might be difficult (i.e., slow).

o Often includes addition/shifts/mask--not too bad.

o Often includes memory references.

VERY serious.

Solution is to cache translations in a Translation Lookaside Buffer (TLB). Sometimes called a translation buffer (TB).

When is address translation performed?

1. At compile time

o Primitive.

o Compiler generates physical addresses.

o Requires knowledge of where the compilation unit will be loaded.

o Rarely used (MSDOS .COM files).


2. At link-edit time (the ``linker lab'')

o Compiler

Generates relocatable addresses for each compilation unit.

References external addresses.

o Linkage editor

Converts the relocatable address to absolute.

Resolves external references.

Misnamed LD by UNIX.

Also converts virtual to physical addresses by knowing where the linked program will be loaded. Linker lab ``does'' this, but it is trivial since we assume the linked program will be loaded at 0.

o Loader is simple.

o Hardware requirements are small.

o A program can be loaded only where specified and cannot move once loaded.

o Not used much anymore.

3. At load time

o Similar to at link-edit time, but do not fix the starting address.

o Program can be loaded anywhere.

o Program can move but cannot be split.

o Need modest hardware: base/limit registers.

o Loader sets the base/limit registers.


4. At execution time

o Addresses translated dynamically during execution.

o Hardware needed to perform the virtual to physical address translation quickly.

o Currently dominates.

o Much more information later.

Extensions

Dynamic Loading

o When executing a call, check if module is loaded.

o If not loaded, call linking loader to load it and update tables.

o Slows down calls (indirection) unless you rewrite code dynamically.

o Not used much.

Dynamic Linking

o The traditional linking described above is today often called static linking.

o With dynamic linking, frequently used routines are not linked into the program. Instead, just a stub is linked.

o When the routine is called, the stub checks to see if the real routine is loaded (it may have been loaded by another program).

If not loaded, load it.

If already loaded, share it. This needs some OS help so that different jobs sharing the library don't overwrite each other's private memory.

o Advantages of dynamic linking.

Saves space: Routine only in memory once even when used many times.


Bug fix to dynamically linked library fixes all applications that use that library, without having to relink the application.

o Disadvantages of dynamic linking.

New bugs in dynamically linked library infect all applications.

Applications ``change'' even when they haven't changed.

Note: I will place ** before each memory management scheme.

3.1: Memory management without swapping or paging

Entire process remains in memory from start to finish.

The sum of the memory requirements of all jobs in the system cannot exceed the size of physical memory.

** 3.1.1: Monoprogramming without swapping or paging (Single User)

The ``good old days'' when everything was easy.

No address translation done by the OS (i.e., address translation is not performed dynamically during execution).

Either reload the OS for each job (or don't have an OS, which is almost the same), or protect the OS from the job.

o One way to protect (part of) the OS is to have it in ROM.

o Of course, must have the data in RAM.


o Can have a separate OS address space only accessible in supervisor mode.

o Might just put some drivers in ROM (BIOS).

The user employs overlays if the memory needed by a job exceeds the size of physical memory.

o Programmer breaks program into pieces.

o A ``root'' piece is always memory resident.

o The root contains calls to load and unload various pieces.

o Programmer's responsibility to ensure that a piece is already loaded when it is called.

o No longer used, but we couldn't have gotten to the moon in the 60s without it (I think).

o Overlays have been replaced by dynamic address translation and other features (e.g., demand paging) that have the system support logical address sizes greater than physical address sizes.

o Fred Brooks (leader of IBM's OS/360 project and author of ``The mythical man month'') remarked that the OS/360 linkage editor was terrific, especially in its support for overlays, but by the time it came out, overlays were no longer used.

3.1.2: Multiprogramming and Memory Usage

Goal is to improve CPU utilization, by overlapping CPU and I/O

Consider a job that is unable to compute (i.e., it is waiting for I/O) a fraction p of the time. Then, with monoprogramming, the CPU utilization is 1-p.

Note that p is often > .5 so CPU utilization is poor.

But, if the probability that a job is waiting for I/O is p and n jobs are in memory, then the probability that all n are waiting for I/O is approximately p^n.

So, with a multiprogramming level (MPL) of n, the CPU utilization is approximately 1-p^n.

If p=.5 and n=4, then 1-p^n = 15/16, which is much better than 1/2, which would occur for monoprogramming (n=1).
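A few lines of C make the model concrete (a minimal sketch; the numbers come from the model's independence assumption, not from any measurement):

#include <stdio.h>
#include <math.h>

int main(void) {
    double p = 0.5;                      /* fraction of time a job waits for I/O */
    for (int n = 1; n <= 6; n++)         /* n = multiprogramming level (MPL) */
        printf("MPL %d: utilization %.4f\n", n, 1.0 - pow(p, n));
    return 0;
}

For p=.5 it prints 0.5000 for MPL 1 (monoprogramming) and 0.9375 = 15/16 for MPL 4, as computed above.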


This is a crude model, but it is correct that increasing MPL does increase CPU utilization up to a point.

The limitation is memory, which is why we discuss it here instead of in process management. That is, we must have many jobs loaded at once, which means we must have enough memory for them. There are other issues as well, and we will discuss them.

Some of the CPU utilization is time spent in the OS executing context switches, so the gains are not as great as the crude model predicts.

Homework: 1, 3.

3.1.3: Multiprogramming with fixed partitions

This was used by IBM for system 360 OS/MFT (multiprogramming with a fixed number of tasks). Can have a single input queue instead of one for each partition.

o So that if there are no big jobs, the big partition can be used for little jobs.

o But I don't think IBM did this.

o Can think of the input queue(s) as the ready list(s) with a scheduling policy of FCFS in each partition.

The partition boundaries are not movable (must reboot to move a job).

o MFT can have large internal fragmentation, i.e., wasted space inside a region.

Each process has a single ``segment'' (we will discuss segments later)

o No sharing between processes.

o No dynamic address translation.

o At load time must ``establish addressability'', i.e., must set a base register to the location at which the process was loaded (the bottom of the partition).


The base register is part of the programmer visible register set.

This is an example of address translation during load time.

Also called relocation.

Storage keys are adequate for protection (IBM method).

Alternative protection method is base/limit registers.

An advantage of base/limit is that it is easier to move a job.

But MFT didn't move jobs so this disadvantage of storage keys is moot.

Tanenbaum says jobs were ``run to completion''. This must be wrong as that would mean mono programming.

He probably means that jobs not swapped out and each queue is FCFS without preemption.

3.2: Swapping

Moving entire processes between disk and memory is called swapping.

3.2.1: Multiprogramming with variable partitions

Both the number and size of the partitions change with time. IBM OS/MVT (multiprogramming with a varying number of tasks).

Also early PDP-10 OS.

Job still has only one segment (as with MFT) but now can be of any size up to the size of the machine and can change with time.

A single ready list.

Job can move (might be swapped back in a different place).

This is dynamic address translation (during run time).


Must perform an addition on every memory reference (i.e. on every address translation) to add the start address of the partition.

Called a DAT (dynamic address translation) box by IBM.
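In C, the work the DAT box does on every reference looks roughly like this (a sketch; the variable names and the trap behavior are illustrative assumptions):

#include <stdio.h>
#include <stdlib.h>

unsigned base = 2300, limit = 800;   /* set by the OS when the job is placed (or moved) */

unsigned translate(unsigned vaddr) {
    if (vaddr >= limit) {            /* protection: reference outside the partition */
        fprintf(stderr, "addressing exception\n");
        exit(1);
    }
    return base + vaddr;             /* the addition performed on every reference */
}

Since only base and limit must change, moving (or swapping back in) a job is cheap for the OS: copy the memory and reload two registers.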

Eliminates internal fragmentation.

o Find a region the exact right size (leave a hole for the remainder).

o Not quite true; can't get a piece with, say, 10,755 bytes. Would get, say, 10,760. But internal fragmentation is much reduced compared to MFT. Indeed, we say that internal fragmentation has been eliminated.

Introduces external fragmentation, i.e., holes outside any region.

What do you do if no hole is big enough for the request?

o Can compact

Transition from bar 3 to bar 4 in diagram below.

This is expensive.

Not suitable for real time (MIT ping pong).

o Can swap out one process to bring in another

Bars 5-6 and 6-7 in diagram


There are more processes than holes. Why?

o Because next to a process there might be a process or a hole, but next to a hole there must be a process.

o So can have ``runs'' of processes but not of holes.

o Above actually shows that there are about twice as many processes as holes.

Base and limit registers are used

o Storage keys not good since compactifying would require changing many keys.

o Storage keys might need a fine granularity to permit the boundaries to move by small amounts. Hence many keys would need to be changed.

MVT introduces the ``Placement Question'': which hole (partition) to choose.


Best fit, worst fit, first fit, circular first fit, quick fit, buddy.

o Best fit doesn't waste big holes, but does leave slivers and is expensive to run.

o Worst fit avoids slivers, but eliminates all big holes so a big job will require compaction. Even more expensive than best fit (best fit stops if it finds a perfect fit).

o Quick fit keeps lists of some common sizes (but has other problems, see Tanenbaum).

o Buddy system

Round request to next highest power of two (causes internal fragmentation).

Look in list of blocks this size (as with quick fit).

If list empty, go higher and split into buddies.

When returning coalesce with buddy.

Do splitting and coalescing recursively, i.e. keep coalescing until you can't and keep splitting until successful.

See Tanenbaum for more details (or an algorithms book); a toy version is sketched below.
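Here is a toy buddy allocator in C over an arena of 2^10 abstract units (a sketch for illustration only; offsets stand in for addresses and the free lists are plain arrays):

#include <stdio.h>

#define MAX_ORDER 10                       /* arena holds 2^MAX_ORDER units */
static int free_list[MAX_ORDER + 1][64];   /* offsets of free blocks, per order */
static int n_free[MAX_ORDER + 1];

static int order_for(int size) {           /* round request up to a power of two */
    int k = 0;
    while ((1 << k) < size) k++;
    return k;
}

int buddy_alloc(int size) {                /* returns a block offset, or -1 */
    int k = order_for(size), j = k;
    while (j <= MAX_ORDER && n_free[j] == 0) j++;  /* find a big enough block */
    if (j > MAX_ORDER) return -1;
    int off = free_list[j][--n_free[j]];
    while (j > k) {                        /* split, freeing the upper buddy */
        j--;
        free_list[j][n_free[j]++] = off + (1 << j);
    }
    return off;
}

void buddy_free(int off, int size) {
    int k = order_for(size);
    for (;;) {                             /* coalesce with buddy while possible */
        int buddy = off ^ (1 << k), found = -1;
        for (int i = 0; i < n_free[k]; i++)
            if (free_list[k][i] == buddy) { found = i; break; }
        if (found < 0 || k == MAX_ORDER) break;
        free_list[k][found] = free_list[k][--n_free[k]];  /* unlink the buddy */
        if (buddy < off) off = buddy;      /* merged block starts at the lower buddy */
        k++;
    }
    free_list[k][n_free[k]++] = off;
}

int main(void) {
    free_list[MAX_ORDER][n_free[MAX_ORDER]++] = 0;  /* initially one big free block */
    int a = buddy_alloc(3);                /* rounded up to 4 units */
    int b = buddy_alloc(4);
    printf("a=%d b=%d\n", a, b);
    buddy_free(a, 3);
    buddy_free(b, 4);                      /* coalesces all the way back up */
    return 0;
}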

A current favorite is circular first fit (also known as next fit)

o Use the first hole that is big enough (first fit) but start looking where you left off last time.

o Doesn't waste time constantly trying to use small holes that have failed before, but does tend to use many of the big holes, which can be a problem.

Buddy comes with its own implementation. How about the others?

o Bit map

Only question is how much memory does one bit represent.

Big: Serious internal fragmentation

Small: Many bits to store and process


o Linked list

Each item on list says whether Hole or Process, length, starting location

The items on the list are not taken from the memory to be used by processes

Keep in order of starting address

Doubly linked

o Boundary tag

Knuth

Use the same memory for list items as for processes

Don't need an entry in linked list for blocks in use, just the avail blocks are linked

For the blocks currently in use, just need a hole/process bit at each end and the length. Keep this in the block itself.

See Knuth, The Art of Computer Programming, Vol. 1.

MVT also introduces the ``Replacement Question'': which victim to swap out.

Considerations in choosing a victim

Cannot replace a job that is pinned, i.e. whose memory is tied down. For example, if Direct Memory Access (DMA) I/O is scheduled for this process, the job is pinned until the DMA is complete.

Victim selection is a medium term scheduling decision

o Job that has been in a wait state for a long time is a good candidate.

o Often choose as a victim a job that has been in memory for a long time.

o Another point is how long should it stay swapped out.


For demand paging, where swapping out a page is not as drastic as swapping out a job, choosing the victim is an important memory management decision, and we shall study several policies.

Notes:

1. So far the schemes presented have had two properties:

i. Each job is stored contiguously in memory. That is, the job is contiguous in physical addresses.

ii. Each job cannot use more memory than exists in the system. That is, the virtual address space cannot exceed the physical address space.

2. Tanenbaum now attacks the second item. I wish to do both and start with the first.

3. Tanenbaum (and most of the world) uses the term ``paging'' to mean what I call demand paging. This is unfortunate as it mixes together two concepts:

i. Paging (dicing the address space) to solve the placement problem and essentially eliminate external fragmentation.

ii. Demand fetching, to permit the total memory requirements of all loaded jobs to exceed the size of physical memory.

4. Tanenbaum (and most of the world) uses the term virtual memory as a synonym for demand paging. Again I consider this unfortunate.

i. Demand paging is a fine term and is quite descriptive.

ii. Virtual memory ``should'' be used in contrast with physical memory to describe any virtual to physical address translation.


** (non-demand) Paging

Simplest scheme to remove the requirement of contiguous physical memory.

Chop the program into fixed size pieces called pages (invisible to the programmer). Chop the real memory into fixed size pieces called page frames or simply frames.

Size of a page (the page size) = size of a frame (the frame size).

Sprinkle the pages into the frames.

Keep a table (called the page table) having an entry for each page. The page table entry or PTE for page p contains the number of the frame f that contains page p.


Example: Assume a decimal machine with page size = frame size = 1000. Assume PTE 3 contains 459. Then virtual address 3372 corresponds to physical address 459372.
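In C the example is a division, a remainder, and an addition (a sketch; the decimal page size keeps the digits visible):

#include <stdio.h>

#define PAGE_SIZE 1000              /* the decimal machine from the example */

int page_table[100];                /* PTE[p] = frame holding page p */

int translate(int vaddr) {
    int p   = vaddr / PAGE_SIZE;    /* page number: 3 for vaddr 3372 */
    int off = vaddr % PAGE_SIZE;    /* offset: 372 */
    return page_table[p] * PAGE_SIZE + off;
}

int main(void) {
    page_table[3] = 459;            /* PTE 3 contains 459 */
    printf("%d\n", translate(3372)); /* prints 459372 */
    return 0;
}

On a binary machine with a power-of-two page size, the division and remainder are just bit operations.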

Properties of (non-demand) paging.

Entire job must be memory resident to run. No holes, i.e. no external fragmentation.

If there are 50 frames available and the page size is 4KB, then a job requiring <= 200KB will fit, even if the available frames are scattered over memory.

Hence (non-demand) paging is useful.

Introduces internal fragmentation approximately equal to 1/2 the page size for every process (really every segment).

Can have a job unable to run due to insufficient memory and have some (but not enough) memory available. This is not called external fragmentation since it is not due to memory being fragmented.

Eliminates the placement question. All pages are equally good since don't have external fragmentation.

Replacement question remains.

Since page boundaries occur at ``random'' points and can change from run to run (the page size can change with no effect on the program--other than performance), pages are not appropriate units of memory to use for protection and sharing. This is discussed further when we introduce segmentation.


================ Start Lecture #6 ================

Notes:

Lab 2 due 6 November, but lab3 will be handed out 30 October, so you may want to finish lab 2 early. Lab 2 is being handed out 16 October and is available on the web.

It is NOW DEFINITE that on Monday 23 Oct, my office hours will have to move from 2:30--3:30 to 1:30-2:30 due to a departmental committee meeting.

Address translation

Each memory reference turns into 2 memory references:

1. Reference the page table

2. Reference central memory

This would be a disaster!

Hence the MMU caches page#-->frame# translations. This cache is kept near the processor and can be accessed rapidly.

This cache is called a translation lookaside buffer (TLB) or translation buffer (TB).

For the above example, after referencing virtual address 3372, entry 3 in the TLB would contain 459.


Hence a subsequent access to virtual address 3881 would be translated to physical address 459881 without a memory reference.
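A sketch of the lookup order in C (the tiny fully associative TLB and its fields are invented for illustration; real hardware compares all entries in parallel):

#define TLB_SIZE 4
struct tlb_entry { int valid, page, frame; };
struct tlb_entry tlb[TLB_SIZE];

int page_table[1024];                       /* in memory: consulting it costs a reference */

int translate(int p, int off, int page_size) {
    for (int i = 0; i < TLB_SIZE; i++)      /* hardware does these comparisons in parallel */
        if (tlb[i].valid && tlb[i].page == p)
            return tlb[i].frame * page_size + off;  /* hit: no extra memory reference */
    int f = page_table[p];                  /* miss: the extra memory reference */
    tlb[0] = (struct tlb_entry){1, p, f};   /* install, crudely evicting slot 0 */
    return f * page_size + off;
}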

3.3: Virtual Memory (meaning fetch on demand)

Idea is that a program can execute even if only the active portion of its address space is memory resident. That is, swap in and swap out portions of a program. In a crude sense this can be called ``automatic overlays''.

Advantages

Can run a program larger than the total physical memory.

Can increase the multiprogramming level since the total size of the active, i.e. loaded, programs (running + ready + blocked) can exceed the size of the physical memory.

Since some portions of a program are rarely if ever used, it is an inefficient use of memory to have them loaded all the time. Fetch on demand will not load them if not used and will unload them during replacement if they are not used for a long time (hopefully).

3.3.1: Paging (meaning demand paging)

Fetch pages from disk to memory when they are referenced, with a hope of getting the most actively used pages in memory.

Very common: dominates modern operating systems. Started by the Atlas system at Manchester University in the 60s (Fotheringham).

Each PTE continues to have the frame number if the page is loaded.


But what if the page is not loaded (exists only on disk)?

o The PTE has a flag indicating if it is loaded (can think of the X in the diagram on the right as indicating that this flag is not set).

o If not loaded, the location on disk could be kept in the PTE (not shown in the diagram). But normally it is not (discussed below).

o When a reference is made to a non-loaded page (sometimes called a non-existent page, but that is a bad name), the system has a lot of work to do. We give more details below.

1. Choose a free frame if one exists.

2. If not:

a. Choose a victim frame.

More later on how to choose a victim.

Called the replacement question

b. Write victim back to disk if dirty.

c. Update the victim PTE to show that it is not loaded and where on disk it has been put (perhaps the disk location is already there).

3. Copy the referenced page from disk to the free frame.

4. Update the PTE of the referenced page to show that it is loaded and give the frame number.

5. Do the standard paging address translation (p#,off)-->(f#,off).

Really not done quite this way

o There is ``always'' a free frame because ...

o There is a daemon active that checks the number of free frames and if this is too low, chooses victims and ``pages them out'' (writing them back to disk if dirty).


Choice of page size is discussed below.

Homework: 11.

3.3.2: Page tables

A discussion of page tables is also appropriate for (non-demand) paging, but the issues are more acute with demand paging since the tables can be much larger. Why?

1. The total size of the active processes is no longer limited to the size of physical memory. Since the total size of the processes is greater, the total size of the page tables is greater and hence concerns over the size of the page table are more acute.

2. With demand paging an important question is the choice of a victim page to page out. Data in the page table can be useful in this choice.

We must be able to access the page table very quickly since it is needed for every memory access.

Unfortunate laws of hardware.

Big and fast are essentially incompatible. Big and fast and low cost is hopeless.

So we can't just say, put the page table in fast processor registers, and let it be huge, and sell the system for $1500.

For now, put the (one-level) page table in main memory.


Seems too slow since all memory references require two references. A TLB is very helpful in reducing the average delay, as mentioned above (discussed later in more detail).

It might be too big.

o Currently we are considering contiguous virtual addresses ranges (i.e. the virtual addresses have no holes).

o Typically put the stack at one end of the virtual address space and the global (or static) data at the other end and let them grow towards each other.

o The memory in between is unused.

o This unused memory can be huge (in address range) and hence the page table will mostly contain unneeded PTEs.

Works fine if the maximum virtual address size is small, which was once true (e.g., the PDP-11 as discussed by Tanenbaum) but is no longer the case.

Contents of a PTE

Each page has a corresponding page table entry (PTE). The information in a PTE is for use by the hardware. Information set by and used by the OS is normally kept in other OS tables. The page table format is determined by the hardware so access routines are not portable. The following fields are often present.

1. The valid bit. This tells if the page is currently loaded (i.e., is in a frame). If set, the frame pointer is valid. It is also called the presence or presence/absence bit. If a page is accessed with the valid bit zero, a page fault is generated by the hardware.

2. The frame number. This is the main reason for the table. It is needed for virtual to physical address translation.

3. The Modified bit. Indicates that some part of the page has been written since it was loaded. This is needed when the page is evicted so the OS can know that the page must be written back to disk.

4. The referenced bit. Indicates that some word in the page has been referenced. Used to select a victim: unreferenced pages make good victims by the locality property.


5. Protection bits. For example one can mark text pages as execute only. This requires that boundaries between regions with different protection are on page boundaries. Normally many consecutive (in logical address) pages have the same protection so many page protection bits are redundant. Protection is more naturally done with segmentation.

Multilevel page tables

Recall the previous diagram. Most of the virtual memory is the unused space between the data and stack regions. However, with demand paging this space does not waste real memory. But the single large page table does waste real memory.

The idea of multi-level page tables (a similar idea is used in UNIX inode-based file systems) is to add a level of indirection and have a page table containing pointers to page tables.

Imagine one big page table. Call it the second level page table and cut it into pieces each the size of a page.

Note that you can get many PTEs in one page, so you will have far fewer of these pages than PTEs.

Now construct a first level page table containing PTEs that point to these pages.

This first level PT is small enough to store in memory.

But since we still have the 2nd level PT, we have made the world bigger not smaller!

Don't store in memory those 2nd level page tables all of whose PTEs refer to unused memory. That is, use demand paging on the (second level) page table.

For a two level page table the virtual address is divided into three pieces

+-----+-----+-------+
| P#1 | P#2 | Offset|
+-----+-----+-------+

P#1 gives the index into the first level page table.

Follow the pointer in the corresponding PTE to reach the frame containing the relevant 2nd level page table.

P#2 gives the index into this 2nd level page table

Follow the pointer in the corresponding PTE to reach the frame containing the (originally) requested page.

Offset gives the offset in this frame where the requested word is located.

Do an example on the board
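Here is the example in C rather than on the board (a sketch; the tiny 4/4/8-bit split of the virtual address is an assumption made just to keep the tables small):

#include <stdio.h>

#define P1(v)  (((v) >> 12) & 0xf)     /* top 4 bits: index into 1st level table */
#define P2(v)  (((v) >>  8) & 0xf)     /* next 4 bits: index into 2nd level table */
#define OFF(v) ((v) & 0xff)            /* low 8 bits: offset in the page */

int level1[16];                        /* entry = which 2nd level table to use */
int level2[16][16];                    /* level2[t][p2] = frame number */

int translate(int vaddr) {
    int t     = level1[P1(vaddr)];     /* 1st memory reference: level-1 PTE */
    int frame = level2[t][P2(vaddr)];  /* 2nd memory reference: level-2 PTE */
    return (frame << 8) | OFF(vaddr);  /* the data access itself is the 3rd */
}

int main(void) {
    level1[1] = 3;                     /* tables filled in by the OS */
    level2[3][2] = 7;
    printf("0x%x\n", translate(0x1234));  /* P#1=1, P#2=2, off=0x34 --> 0x734 */
    return 0;
}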

The VAX used a 2-level page table structure, but with some wrinkles (see Tanenbaum for details).

Naturally, there is no need to stop at 2 levels. In fact the SPARC has 3 levels and the Motorola 68030 has 4 (and the number of bits of Virtual Address used for P#1, P#2, P#3, and P#4 can be varied).

3.3.4: Associative memory (TLBs)

Note: Tanenbaum suggests that ``associative memory'' and ``translation lookaside buffer'' are synonyms. This is wrong. Associative memory is a general structure and translation lookaside buffer is a special case.

An associative memory is a content addressable memory. That is, you access the memory by giving the value of some field, and the hardware searches all the records and returns the record whose field contains the requested value.

For example

Name  | Animal | Mood     | Color
======+========+==========+======
Moris | Cat    | Finicky  | Grey
Fido  | Dog    | Friendly | Black
Izzy  | Iguana | Quiet    | Brown
Bud   | Frog   | Smashed  | Green

If the index field is Animal and Iguana is given, the associative memory returns Izzy | Iguana | Quiet | Brown


A translation lookaside buffer (TLB) is an associative memory where the index field is the page number. The other fields include the frame number, dirty bit, valid bit, and others.

A TLB is small and expensive but at least it is fast. When the page number is in the TLB, the frame number is returned very quickly. On a miss, the page number is looked up in the page table. The record found is placed in the TLB and a victim is discarded. There is no placement question since all entries are accessed at the same time. But there is a replacement question.

Homework: 15.

3.3.5: Inverted page tables

Keep a table indexed by frame number with the entry f containing the number of the page currently loaded in frame f.

Since modern machines have a smaller physical address space than virtual address space, the table is smaller. But on a TLB miss, must search the inverted page table.

Would be hopelessly slow except that some tricks are employed.

The book mentions some but not all of the tricks; we are skipping this topic.

3.4: Page Replacement Algorithms

These are solutions to the replacement question.

Good solutions take advantage of locality.

Temporal locality: If a word is referenced now, it is likely to be referenced in the near future.

o This argues for caching referenced words, i.e. keeping the referenced word near the processor for a while.

Spatial locality: If a word is referenced now, nearby words are likely to be referenced in the near future.


o This argues for prefetching words around the currently referenced word.

These are lumped together into locality: If any word in a page is referenced, each word in the page is ``likely'' to be referenced.

o So it is good to bring in the entire page on a miss and to keep the page in memory for a while.

When programs begin there is no history so nothing to base locality on. At this point the paging system is said to be undergoing a ``cold start''.

Programs exhibit ``phase changes'', when the set of pages referenced changes abruptly (similar to a cold start). At the point of a phase change, many page faults occur because locality is poor.

Pages belonging to processes that have terminated are of course perfect choices for victims.

Pages belonging to processes that have been blocked for a long time are good choices as well.

Random

A lower bound on performance. Any decent scheme should do better.

3.4.1: The optimal page replacement algorithm (opt PRA) (aka Belady's min PRA)

Replace the page whose next reference will be furthest in the future.

Also called Belady's min algorithm. Provably optimal. That is, it generates the fewest page faults.

Unimplementable: Requires predicting the future.

Good upper bound on performance.

3.4.2: The not recently used (NRU) PRA

Divide the frames into four classes and make a random selection from the lowest nonempty class.


1. Not referenced, not modified

2. Not referenced, modified

3. Referenced, not modified

4. Referenced, modified

Assumes that in each PTE there are two extra flags R (sometimes called U, for used) and M (often called D, for dirty).

Also assumes that a page in a lower priority class is cheaper to evict.

If not referenced, probably not referenced again soon so not so important. If not modified, do not have to write it out so the cost of the eviction is lower.

When a page is brought in, OS resets R and M (i.e. R=M=0)

On a read, hardware sets R.

On a write, hardware sets R and M.
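Victim selection is then a two-line class computation plus a search (a sketch in C; the PTE layout is invented):

#include <stdlib.h>

struct pte { int r, m; };                  /* referenced and modified bits */

int class_of(struct pte *p) { return 2 * p->r + p->m; }   /* 0..3; lower = better victim */

int nru_choose_victim(struct pte pt[], int n) {
    for (int c = 0; c < 4; c++) {          /* lowest nonempty class wins */
        int members[1024], k = 0;
        for (int i = 0; i < n; i++)
            if (class_of(&pt[i]) == c) members[k++] = i;
        if (k > 0) return members[rand() % k];   /* random page within the class */
    }
    return -1;                             /* unreachable if n > 0 */
}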

We again have the prisoner problem; we do a good job of making little ones out of big ones, but not the reverse. Need more resets.

Every k clock ticks reset all R bits

Why not reset M? Answer: Must have M accurate to know if the victim needs to be written back.

Could have two M bits one accurate and one reset, but I don't know of any system (or proposal) that does so.

What if the hardware doesn't set these bits?

The OS can use tricks. When the bits are reset, make the PTE indicate the page is not resident (i.e. lie). On the page fault, set the appropriate bit(s).


3.4.3: FIFO PRA

Simple but poor since usage of the page is ignored.

Belady's Anomaly: Can have more frames yet generate more faults. Example given later.

================ Start Lecture #7 ================

3.4.4: Second chance PRA

Similar to the FIFO PRA, but when choosing a victim, if the page at the head of the queue has been referenced (R bit set), don't evict it. Instead reset R and move the page to the rear of the queue (so it looks new). The page is given a second chance.

What if all frames have been referenced? Becomes the same as FIFO (but takes longer).

Might want to turn off the R bit more often (say every k clock ticks).

3.4.5: Clock PRA

Same algorithm as 2nd chance, but a better (and I would say obvious) implementation: Use a circular list.
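The implementation is a few lines of C (a sketch; the R bits live in an array and the hand is just an index into the circular list of frames):

#define NFRAMES 64
int r[NFRAMES];                        /* reference bits, set by the hardware */
int hand = 0;                          /* the clock hand */

int clock_choose_victim(void) {
    for (;;) {
        if (r[hand] == 0) {            /* not recently referenced: evict it */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        r[hand] = 0;                   /* referenced: give it a second chance */
        hand = (hand + 1) % NFRAMES;   /* and advance past it */
    }
}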

Do an example.

LIFO PRA

This is terrible! Why? Ans: All but the last frame are frozen once loaded, so you can replace only one frame. This is especially bad after a phase shift in the program, when it is using all new pages.


3.4.6: Least Recently Used (LRU) PRA

When a page fault occurs, choose as victim that page that has been unused for the longest time, i.e. that has been least recently used.

LRU is definitely

Implementable: The past is knowable.

Good: Simulation studies have shown this.

Difficult. Essentially need to either:

1. Keep a time stamp in each PTE, updated on each reference and scan all the PTEs when choosing a victim to find the PTE with the oldest timestamp.

2. Keep the PTEs in a linked list in usage order, which means on each reference moving the PTE to the end of the list

3.4.7: Approximating LRU in Software

The Not Frequently Used (NFU) PRA

Include a counter in each PTE (and have R in each PTE). Set counter to zero when page is brought into memory.

For each PTE, every k clock ticks.

1. Add R to counter.

2. Clear R.

Choose as victim the PTE with lowest count.

The Aging PRA

NFU doesn't distinguish between old references and recent ones. The following modification does distinguish.

Include a counter in each PTE (and have R in each PTE).


Set counter to zero when page is brought into memory.

For each PTE, every k clock ticks.

1. Shift counter right one bit.

2. Insert R as new high order bit (HOB).

3. Clear R.

Choose as victim the PTE with lowest count.

R | counter
--+---------
1 | 10000000
0 | 01000000
1 | 10100000
1 | 11010000
0 | 01101000
0 | 00110100
1 | 10011010
1 | 11001101
0 | 01100110
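The tick is a shift and an OR per PTE (a sketch in C; the 8-bit counter matches the table above):

#include <stdint.h>

struct pte { uint8_t counter; int r; };

void aging_tick(struct pte pt[], int n) {          /* run every k clock ticks */
    for (int i = 0; i < n; i++) {
        pt[i].counter >>= 1;                       /* 1. shift counter right one bit */
        pt[i].counter |= (uint8_t)(pt[i].r << 7);  /* 2. insert R as the HOB */
        pt[i].r = 0;                               /* 3. clear R */
    }
}

int aging_choose_victim(struct pte pt[], int n) {
    int v = 0;                                     /* lowest counter = least recently used */
    for (int i = 1; i < n; i++)
        if (pt[i].counter < pt[v].counter) v = i;
    return v;
}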

3.5: Modeling Paging Algorithms

3.5.1: Belady's anomaly

Consider a system that has no pages loaded and that uses the FIFO PRA. Consider the following ``reference string'' (sequence of pages referenced):

0 1 2 3 0 1 4 0 1 2 3 4

If we have 3 frames this generates 9 page faults (do it).


If we have 4 frames this generates 10 page faults (do it).

Theory has been developed and certain PRAs (so-called ``stack algorithms'') cannot suffer this anomaly for any reference string. FIFO is clearly not a stack algorithm. LRU is.

Repeat the above calculations for LRU.
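A small FIFO simulator in C reproduces the anomaly (a sketch; run it with 3 frames, then 4):

#include <stdio.h>

int fifo_faults(const int *refs, int n, int nframes) {
    int frame[16], filled = 0, next = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < filled; j++)
            if (frame[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (filled < nframes) frame[filled++] = refs[i];              /* free frame */
        else { frame[next] = refs[i]; next = (next + 1) % nframes; }  /* oldest out */
    }
    return faults;
}

int main(void) {
    int refs[] = {0,1,2,3,0,1,4,0,1,2,3,4};
    printf("3 frames: %d faults\n", fifo_faults(refs, 12, 3));   /* prints 9 */
    printf("4 frames: %d faults\n", fifo_faults(refs, 12, 4));   /* prints 10 */
    return 0;
}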

3.6: Design issues for (demand) Paging

3.6.1 & 3.6.2: The Working Set Model and Local vs. Global Policies

I will do these in the reverse order (which makes more sense). Also Tanenbaum doesn't actually define the working set model, but I shall.

A local PRA is one in which a victim page is chosen among the pages of the same process that requires a new page. That is, the number of pages for each process is fixed. So LRU means the page least recently used by this process.

Of course we can't have a purely local policy. Why? Answer: A new process has no pages, and even if we didn't apply this for the first page loaded, the process would remain with only one page.

Perhaps wait until a process has been running a while.

A global policy is one in which the choice of victim is made among all pages of all processes.

If we apply global LRU indiscriminately with some sort of RR processor scheduling policy, and memory is somewhat over-committed, then by the time we get around to a process, all the others have run and have probably paged out this process.

If this happens each process will need to page fault at a high rate; this is called thrashing. It is therefore important to get a good idea of how many pages a process needs, so that we can balance the local and global desires.

The working set policy (Peter Denning)

The goal is to specify which pages a given process needs to have memory resident in order for the given process to run without too many page faults.

But this is impossible since it requires predicting the future.


So we make the assumption that the immediate future is well approximated by the immediate past.

Measure time in units of memory references, so t=1045 means the time when the 1045th memory reference is issued.

In fact we measure time separately for each process, so t=1045 really means the time when this process made its 1045th memory reference.

W(t,ω) is the set of pages referenced (by the given process) from time t-ω to time t.

That is, W(t,ω) is the set pages referenced during the window of size ω ending at time t.

That is, W(t,ω) is the set of pages referenced by the last ω memory references ending at reference t.

W(t,ω) is called the working set at time t (with window ω).


w(t,ω) is the size of the set W(t,ω), i.e. is the number of pages referenced in the window.
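Computing w(t,ω) from a reference string is direct (a sketch in C; pages are numbered 0..MAXPAGE and time is the 0-based index of a reference):

#define MAXPAGE 1000

/* w(t, omega): number of distinct pages among the last omega references
   ending at reference t of the string refs[]. */
int working_set_size(const int *refs, int t, int omega) {
    int seen[MAXPAGE + 1] = {0}, w = 0;
    int start = (t - omega + 1 < 0) ? 0 : t - omega + 1;
    for (int i = start; i <= t; i++)
        if (!seen[refs[i]]) { seen[refs[i]] = 1; w++; }  /* page is in W(t, omega) */
    return w;
}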

The idea of the working set policy is to ensure that each process keeps its working set in memory.

One possibility is to allocate w(t,ω) frames to each process (this number differs for each process and changes with time) and then use a local policy.

What if there aren't enough frames to do this?


Ans: Reduce the multiprogramming level (MPL)! That is, we have a connection between memory management and process management. These are the suspend/resume arcs we saw way back when.

Interesting questions include:

What value should be used for ω? Experiments have been done and ω is surprisingly robust (i.e., for a given system a fixed value works reasonably for a wide variety of job mixes).

How should we calculate W(t,ω)? Hard to do exactly, so various approximations to the working set have been devised.

1. Wsclock

o Use the aging algorithm above to maintain a counter for each PTE and declare a page whose counter is above a certain threshold to be part of the working set.

o Apply the clock algorithm globally (i.e. to all pages) but refuse to page out any page in a working set, the resulting algorithm is called wsclock.

o What if we find there are no pages we can page out? Answer: Reduce the MPL.

2. Page Fault Frequency (PFF)

o For each process keep track of the page fault frequency: the number of faults divided by the number of references.

o Actually, must use a window or a weighted calculation since you are really interested in the recent page fault frequency.

o If the PFF is too high, allocate more frames to this process. Either

1. Raise its number of frames and use local policy; or

2. Bar its frames from eviction (for a while) and use a global policy.


o What if there are not enough frames? Answer: Reduce the MPL.

3.6.3: Page size

Page size ``must'' be a multiple of the disk block size. Why? Answer: When copying out a page, if you have a partial disk block, you must do a read/modify/write (i.e., 2 I/Os).

An important property of I/O that we will learn later this term is that eight I/Os of 1KB each take considerably longer than one 8KB I/O.

Characteristics of a large page size.

o Good for user I/O.

If I/O done using physical addresses, then I/O crossing a page boundary is not contiguous and hence requires multiple I/Os

If I/O uses virtual addresses, then page size doesn't affect this aspect of I/O. That is, the addresses are contiguous in virtual address space and hence one I/O is done.

o Good for demand paging I/O.

Better to swap in/out one big page than several small pages.

But if page is too big you will be swapping in data that is really not local and hence might well not be used.

o Large internal fragmentation (1/2 page size).

o Small page table.

o A very large page size leads to very few pages. Process will have many faults if using demand paging and the process frequently references more regions than frames.

A small page size has the opposite characteristics.

3.6.4: Implementation Issues

Don't worry about instruction backup. Very machine dependent and modern implementations tend to get it right.


Locking (pinning) pages

We discussed pinning jobs already. The same (mostly I/O) considerations apply to pages.

Shared pages

Really should share segments.

Must keep reference counts or something so that when a process terminates, pages (even dirty pages) it shares with another process are not automatically discarded.

Similarly, a reference count would make a widely shared page (correctly) look like a poor choice for a victim.

A good place to store the reference count would be in a structure pointed to by both PTEs. If stored in the PTEs, must keep them consistent between processes

================ Start Lecture #8 ================


Note: Lab 2 handed out, due in three weeks, 20 November 2000. It is available on the web.

Backing Store

The issue is where on disk we put pages.

For program text, which is presumably read only, a good choice is the file itself. What if we decide to keep the data and stack each contiguous on the backing store? Data and stack grow, so we must be prepared to grow the space on disk, which leads to the same issues and problems as we saw with MVT.

If those issues/problems are painful, we can scatter the pages on the disk.

o That is we employ paging!

o This is NOT demand paging.

o Need a table to say where the backing space for each page is located.

This corresponds to the page table used to tell where in real memory a page is located.

The format of the ``memory page table'' is determined by the hardware since the hardware modifies/accesses it.

The format of the ``disk page table'' is decided by the OS designers and is machine independent.

If the format of the memory page table were flexible, then we might well keep the disk information in it as well.

Paging Daemons

Page Fault Handling

What happens when a process, say process A, gets a page fault?

1. The hardware detects the fault and traps to the kernel (switches to supervisor mode and saves state).


2. Some assembly language code saves more state, establishes the C-language (or another programming language) environment, and ``calls'' the OS.

3. The OS determines that a page fault occurred and which page was referenced.

4. If the virtual address is invalid, process A is killed. If the virtual address is valid, the OS must find a free frame. If there are no free frames, the OS selects a victim frame. Call the process owning the victim frame, process B. (If the page replacement algorithm is local process B is process A.)

5. If the victim frame is dirty, the OS schedules an I/O write to copy the frame to disk. Thus, if the victim frame is dirty, process B is blocked (it might already be blocked for some other reason). Process A is also blocked since it needs to wait for this frame to be free. The process scheduler is invoked to perform a context switch.

o Tanenbaum ``forgot'' some here.

o The process selected by the scheduler (say process C) runs.

o Perhaps C is preempted for D or perhaps C blocks and D runs and then perhaps D is blocked and E runs, etc.

o When the I/O to write the victim frame completes, a disk interrupt occurs. Assume process C is running at the time.

o Hardware trap / assembly code / OS determine I/O done.

o Process A is moved from blocked to ready, as is process B (unless B is also blocked for some other reason).

o The scheduler picks a process to run, maybe A, maybe B, maybe C, maybe another process.

o At some point the scheduler does pick process A to run. Recall that at this point A is still executing OS code.

6. Now the O/S has a clean frame (this may be much later in wall clock time if a victim frame had to be written). The O/S schedules an I/O to read the desired page into this clean frame. Process A is blocked (perhaps for the second time) and hence the process scheduler is invoked to perform a context switch.

7. A Disk interrupt occurs when the I/O completes (trap / asm / OS determines I/O done). The PTE is updated.

8. The O/S may need to fix up process A (e.g. reset the program counter to re-execute the instruction that caused the page fault).

9. Process A is placed on the ready list and eventually is chosen by the scheduler to run. Recall that process A is executing O/S code.


10. The OS returns to the first assembly language routine.

11. The assembly language routine restores registers, etc. and ``returns'' to user mode.

Process A is unaware that all this happened.

3.7: Segmentation

Up to now, the virtual address space has been contiguous.

Among other issues this makes memory management difficult when there are more than two dynamically growing regions. With two regions you start them on opposite sides of the virtual space as we did before.

Better is to have many virtual address spaces each starting at zero.

This split up is user visible.

Without segmentation (equivalently said with just one segment) all procedures are packed together so if one changes in size all the virtual addresses following are changed and the program must be re-linked.

Eases flexible protection and sharing (share a segment). For example, can have a shared library.

Homework: 29.

** Two Segments

Late PDP-10s and TOPS-10

One shared text segment, that can also contain shared (normally read only) data.

One (private) writable data segment.

Permission bits on each segment.


Which kind of segment is better to evict?

o Swap out shared segment hurts many tasks.

o The shared segment is read only (probably) so no write back is needed.

``One segment'' is OS/MVT done above.

** Three Segments

Traditional UNIX shown at right.

1. Shared text marked execute only.

2. Data segment (global and static variables).

3. Stack segment (automatic variables).

** Four Segments

Just kidding.

** General (not necessarily demand) Segmentation

Permits fine grained sharing and protection. Visible division of program.

Variable size segments.

Address = (seg#, offset).

Does not mandate how stored in memory.

o One possibility is that the entire program must be in memory in order to run it. Use whole process swapping. Very early versions of Unix did this.

o Can also implement demand segmentation.


o Can combine with demand paging (done below).

Requires a segment table with a base and limit value for each segment. Similar to a page table.

Entries are called STEs, Segment Table Entries.

(seg#, offset) --> if (offset<limit) base+offset else error.
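In C (a sketch; an STE here is nothing but the base and limit just described):

#include <stdio.h>
#include <stdlib.h>

struct ste { unsigned base, limit; };   /* one segment table entry per segment */
struct ste seg_table[64];

unsigned translate(unsigned seg, unsigned offset) {
    if (offset >= seg_table[seg].limit) {   /* reference beyond the segment */
        fprintf(stderr, "segmentation violation\n");
        exit(1);
    }
    return seg_table[seg].base + offset;
}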

Segmentation exhibits external fragmentation, just as whole program swapping. Since segments are smaller than programs (several segments make up one program), the external fragmentation is not as bad.

** Demand Segmentation

Same idea as demand paging applied to segments.

If a segment is loaded, base and limit are stored in the STE and the valid bit is set in the STE. The STE is accessed for each memory reference (not really, TLB).

If the segment is not loaded, the valid bit is unset. The base and limit as well as the disk address of the segment are stored in an OS table.

A reference to a non-loaded segment generates a segment fault (analogous to page fault).

To load a segment, we must solve both the placement question and the replacement question (for demand paging, there is no placement question).

I believe demand segmentation was once implemented but am not sure. It is not used in modern systems.

The following table mostly from Tanenbaum compares demand paging with demand segmentation.

Consideration                              | Demand Paging                       | Demand Segmentation
===========================================+=====================================+==============================================
Programmer aware                           | No                                  | Yes
How many addr spaces                       | 1                                   | Many
VA size > PA size                          | Yes                                 | Yes
Protect individual procedures separately   | No                                  | Yes
Accommodate elements with changing sizes   | No                                  | Yes
Ease user sharing                          | No                                  | Yes
Why invented                               | Let the VA size exceed the PA size  | Sharing, protection, independent addr spaces
Internal fragmentation                     | Yes                                 | No, in principle
External fragmentation                     | No                                  | Yes
Placement question                         | No                                  | Yes
Replacement question                       | Yes                                 | Yes

** 3.7.2: Segmentation with paging

Combines both segmentation and paging to get advantages of both at a cost in complexity. This is very common now.

A virtual address becomes a triple: (seg#, page#, offset). Each segment table entry (STE) points to the page table for that segment. Compare this with a multilevel page table.

The size of each segment is a multiple of the page size (since the segment consists of pages). Perhaps not. Can keep the exact size in the STE (limit value) and shoot the process if it referenced beyond the limit. In this case the last page of each segment is partially valid.


The page# field in the address gives the entry in the chosen page table and the offset gives the offset in the page. From the limit field, one can easily compute the size of the segment in pages (which equals the size of the corresponding page table in PTEs).

Implementations may require the size of a segment to be a multiple of the page size in which case the STE would store the number of pages in the segment.

A straightforward implementation of segmentation with paging would require 3 memory references (STE, PTE, referenced word), so a TLB is crucial.
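The three references in C (a sketch; the page size and table shapes are assumptions for illustration):

#define PAGE_SIZE 256

struct ste { int limit; int *page_table; };  /* limit = exact segment size in bytes */
struct ste seg_table[16];

int translate(int seg, int page, int off) {
    struct ste *s = &seg_table[seg];                     /* reference 1: the STE */
    if (page * PAGE_SIZE + off >= s->limit) return -1;   /* beyond the segment */
    int frame = s->page_table[page];                     /* reference 2: the PTE */
    return frame * PAGE_SIZE + off;                      /* reference 3: the word */
}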

Some books carelessly say that segments are of fixed size. This is wrong. They are of variable size with a fixed maximum and possibly with the requirement that the size of a segment is a multiple of the page size.

The first example of segmentation with paging was Multics.

Keep protection and sharing information on segments. This works well for a number of reasons.

1. A segment is variable size.

2. Segments and their boundaries are user (i.e., linker) visible.

3. Segments are shared by sharing their page tables. This eliminates the problem mentioned above with shared pages.

Do replacement on pages so there is no placement question and no external fragmentation.

Do fetch-on-demand with pages (i.e., do demand paging).

In general, segmentation with demand paging works well and is widely used. The only problems are the complexity and the resulting 3 memory references for each user memory reference. The complexity is real, but can be managed. The three memory references would be fatal were it not for TLBs, which considerably ameliorate the problem. TLBs have high hit rates and for a TLB hit there is essentially no penalty.

Homework: 30.

Some last words on memory management.

Segmentation / Paging / Demand Loading (fetch-on-demand)

o Each is a yes or no alternative.

o Gives 8 possibilities.

Placement and Replacement.


Internal and External Fragmentation.

Page Size and locality of reference.

Multiprogramming level and medium term scheduling.

Chapter 4: File Systems

Requirements

1. Size: Store very large amounts of data.

2. Persistence: Data survives the creating process.

3. Access: Multiple processes can access the data concurrently.

Solution: Store data in files that together form a file system.

4.1: Files

4.1.1: File Naming

Very important. A major function of the file system.

Does each file have a unique name? Answer: Often no. We will discuss this below when we study links.

Extensions, e.g. the ``html'' in ``class-notes.html''.

1. Conventions just for humans: letter.teq (my convention).


2. Conventions giving default behavior for some programs.

The emacs editor thinks .html files should be edited in html mode but can edit them in any mode and can edit any file in html mode.

Netscape thinks .html means an html file, but <html> ... </html> works as well.

Gzip thinks .gz means a compressed file but accepts a --suffix flag

3. Required extensions for programs. The new C compiler (and probably others) requires C programs be named *.c and assembler programs be named *.s.

4. Required extensions by operating systems

MS-DOS treats .com files specially

Windows 95 requires (as far as I can tell) shortcuts to end in .lnk.

Case sensitive? UNIX: yes. Windows: no.

4.1.2: File structure

A file is a

1. Byte stream

o UNIX, dos, windows (I think).

o Maximum flexibility.

o Minimum structure.

2. (fixed size) Record stream: Out of date


o 80-character records for card images.

o 133-character records for line printer files. Column 1 was for control (e.g., new page); the remaining 132 characters were printed.

3. Varied and complicated beast.

o Indexed sequential.

o B-trees.

o Supports rapidly finding a record with a specific key.

o Supports retrieving (varying size) records in key order.

o Treated in depth in database courses.

================ Start Lecture #9 ================

4.1.3: File types

Examples

1. (Regular) files.

2. Directories: studied below.


3. Special files (for devices). Uses the naming power of files to unify many actions. For example:

dir              # prints on screen
dir > file       # result put in a file
dir > /dev/tape  # results written to tape

4. ``Symbolic'' Links (similar to ``shortcuts''): Also studied below.

``Magic number'': Identifies an executable file.

There can be several different magic numbers for different types of executables. UNIX: #!/usr/bin/perl

Strongly typed files:

The type of the file determines what you can do with the file. This makes the easy and (hopefully) common case easier and, more importantly, safer.

It tends to make the unusual case harder. For example, you have a program that turns out data (.dat) files, but you want to use it to turn out a java file; the type of the output is data and cannot be easily converted to type java.

4.1.4: File access

There are basically two possibilities, sequential access and random access (a.k.a. direct access). Previously, files were declared to be sequential or random. Modern systems do not do this. Instead all files are random and optimizations are applied when the system dynamically determines that a file is (probably) being accessed sequentially.

1. With sequential access the bytes (or records) are accessed in order (i.e., n-1, n, n+1, ...). Sequential access is the most common and gives the highest performance. For some devices (e.g. tapes) access ``must'' be sequential.

2. With random access, the bytes are accessed in any order. Thus each access must specify which bytes are desired.

4.1.5: File attributes


A laundry list of properties that can be specified for a file. For example:

hidden

do not dump

owner

key length (for keyed files)

4.1.6: File operations

o Create: Essential if a system is to add files. Need not be a separate system call (can be merged with open).

o Delete: Essential if a system is to delete files.

o Open: Not essential. An optimization in which the translation from file name to disk locations is performed only once per file rather than once per access.

o Close: Not essential. Free resources.

o Read: Essential. Must specify filename, file location, number of bytes, and a buffer into which the data is to be placed. Several of these parameters can be set by other system calls and in many OS's they are.

o Write: Essential if updates are to be supported. See read for parameters.

o Seek: Not essential (could be in read/write). Specify the offset of the next (read or write) access to this file.

o Get attributes: Essential if attributes are to be used.

o Set attributes: Essential if attributes are to be user settable.

o Rename: Tanenbaum has strange words. Copy and delete is not acceptable for big files. Moreover copy-delete is not atomic. Indeed link-delete is not atomic so even if link (discussed below) is provided, renaming a file adds functionality.

Homework: 2, 3, 4. Read and understand ``copy file'' on page 155.


Notes on copy file

o Normally in UNIX one wouldn't call read and write directly.

o Indeed, for copy file, getchar() and putchar() would be nice since they take care of the buffering (standard I/O, stdio).

o Tanenbaum is correct that the error reporting is atrocious. The worst is exiting the loop on error and thus generating an exit(0) as if nothing happened.

4.1.7: Memory mapped files

Conceptually simple and elegant. Associate a segment with each file and then normal memory operations take the place of I/O.

Thus copy file does not have fgetc/fputc (or read/write). Instead it is just like memcpy:

while ( *dest++ = *src++ );

The implementation is via segmentation with demand paging but the backing store for the pages is the file itself. This all sounds great but...

1. How do you tell the length of a newly created file? You know which pages were written but not which words in those pages. So a file of one byte and a file of ten bytes both look like one page.

2. What if same file is accessed by both I/O and memory mapping?

3. What if the file is bigger than the size of virtual memory (will not be a problem for systems built 3 years from now as all will have enormous virtual memory sizes).
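For flavor, on a POSIX system a memory-mapped copy might look as follows (a sketch with all error checking omitted; mmap, ftruncate, etc. are POSIX calls, not part of Tanenbaum's example):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int copyfile(const char *from, const char *to) {
    int in = open(from, O_RDONLY);
    struct stat st;
    fstat(in, &st);                               /* learn the source length */
    int out = open(to, O_RDWR | O_CREAT | O_TRUNC, 0644);
    ftruncate(out, st.st_size);                   /* destination needs the full size */
    char *src = mmap(NULL, st.st_size, PROT_READ,  MAP_PRIVATE, in, 0);
    char *dst = mmap(NULL, st.st_size, PROT_WRITE, MAP_SHARED,  out, 0);
    memcpy(dst, src, st.st_size);                 /* the ``I/O'' is just a memory copy */
    munmap(src, st.st_size);
    munmap(dst, st.st_size);
    close(in);
    close(out);
    return 0;
}

Note how problem 1 above shows up here: the length comes from fstat and ftruncate, not from the mapping itself.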

4.2: Directories

Unit of organization.


4.2.1: Hierarchical directory systems

Possibilities

o One directory in the system

o One per user

o One tree

o One tree per user

o One forest

o One forest per user

These are not as wildly different as they sound.

o If the system only has one directory, but allows the character / in a file name, then one could fake a tree by having a file named /allan/gottlieb/courses/arch/class-notes.html rather than a directory allan, a subdirectory gottlieb, ..., a file class-notes.html.

o Dos (windows) is a forest, UNIX a tree. In dos there is no common parent of a:\ and c:\.

o But windows explorer makes the dos forest look quite a bit like a tree. Indeed, the gnome file manager for Linux, looks A LOT like windows explorer.

o You can get an effect similar to (but not the same as) one X per user by having just one X in the system and having permissions that permits each user to visit only a subset. Of course if the system doesn't have permissions, this is not possible.

o Today's systems have a tree per system or a forest per system.

4.2.2: Path Names

You can specify the location of a file in the file hierarchy by using either an absolute or a relative path to the file.

o An absolute path starts at the (or if we have a forest) root.


o A relative path starts at the current (a.k.a working) directory.

o The special directories . and .. represent the current directory and the parent of the current directory, respectively.

4.3: File System Implementation

4.3.1: Implementing Files

o A disk cannot read or write a single word. Instead it can read or write a sector, which is often 512 bytes.

o Disks are written in blocks whose size is a multiple of the sector size.

o When we study I/O in the next chapter I will bring in some physically large (and hence old) disks so that we can see what they look like and understand better sectors (and tracks, and cylinders, and heads, etc.).

Contiguous allocation

o This is like OS/MVT.

o The entire file is stored as one piece.

o Simple and fast for access, but ...

o Problem with growing files

Must either evict the file itself or the file it is bumping into.

Same problem with an OS/MVT kind of system if jobs grow.

o Problem with external fragmentation.

o Not used for general purpose systems. Ideal for systems where files do not change size.

Linked allocation

o The directory entry contains a pointer to the first block of the file.

o Each block contains a pointer to the next.


o Horrible for random access.

o Not used.

FAT (file allocation table)

o Used by dos and windows (but not windows/NT).

o Directory entry points to first block (i.e. specifies the block number).

o A FAT is maintained in memory having one (word) entry for each disk block. The entry for block N contains the block number of the next block in the same file as N.

o This is linked, but the links are stored separately.

o Time to access a random block is still linear in the size of the file, but now all the references are to this one table, which is in memory. So it is bad but not horrible for random access.

o Size of table is one word per disk block. If all blocks are of size 4K and words are 4 bytes, the table is one megabyte for each disk gigabyte. Large but not prohibitive.

o If blocks are of size 512 bytes (the sector size of most disks), then the table is 8 megs per gig, which might be prohibitive.
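Following a file's chain through the in-memory FAT (a sketch in C; -1 marks end of file, where a real FAT uses a reserved value):

#define FAT_EOF (-1)

int fat[1 << 20];        /* fat[b] = block that follows block b in the same file */

/* Disk block holding the k-th block of the file whose first block
   (taken from the directory entry) is `first`. */
int nth_block(int first, int k) {
    int b = first;
    while (k-- > 0 && b != FAT_EOF)
        b = fat[b];      /* each step is a memory lookup, not a disk I/O */
    return b;            /* still linear in k: bad but not horrible */
}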

Inodes

o Used by UNIX.

o Directory entry points to inode (index-node).

o Inode points to first few data blocks, often called direct blocks.

o Inode also points to an indirect block, which points to disk blocks.

o Inode also points to a double indirect, which points to an indirect ...

o For some implementations there are triple indirect as well.

o The inode is in memory for open files. So references to direct blocks take just one I/O.


o For big files most references require two I/Os (indirect + data).

o For huge files most references require three I/Os (double indirect, indirect, and data).
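Mapping a file block number to a disk block through an inode (a sketch in C; 10 direct pointers, 1024 pointers per indirect block, and the read_ptr_block helper are all assumptions for illustration):

#define NDIRECT 10
#define NPTRS   1024                 /* block pointers per indirect block */

struct inode {
    int direct[NDIRECT];             /* the first few data blocks */
    int indirect;                    /* block of NPTRS pointers to data blocks */
    int double_indirect;             /* block of NPTRS pointers to indirect blocks */
};

int read_ptr_block(int disk_block, int index);   /* assumed helper: one disk I/O */

int block_of(struct inode *ip, int n) {
    if (n < NDIRECT)
        return ip->direct[n];                         /* inode in memory: 1 I/O total */
    n -= NDIRECT;
    if (n < NPTRS)
        return read_ptr_block(ip->indirect, n);       /* indirect + data: 2 I/Os */
    n -= NPTRS;
    int ind = read_ptr_block(ip->double_indirect, n / NPTRS);
    return read_ptr_block(ind, n % NPTRS);            /* double + indirect + data: 3 I/Os */
}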


4.3.2: Implementing Directories


Recall that a directory is a mapping that converts file (or subdirectory) names to the files (or subdirectories) themselves.

Trivial File System (CP/M)

o Only one directory in the system.

o Directory entry contains pointers to disk blocks.

o If need more blocks, get another directory entry.

MS-DOS and Windows (FAT)

o Subdirectories supported.

o Directory entry contains metadata such as date and size as well as pointer to first block.

Unix

o Each entry contains a name and a pointer to the corresponding inode.

o Metadata is in the inode.

o Early UNIX had limit of 14 character names.

o Name field now is varying length.

o To go down a level in directory takes two steps: get inode, get file (or subdirectory).

o Do on the blackboard the steps for /allan/gottlieb/courses/os/class-notes.html


================ Start Lecture #10 ================

4.3.3: Shared files (links)

o ``Shared'' files is Tanenbaum's terminology.

o More descriptive would be ``multinamed files''.

o If a file exists, one can create another name for it (quite possibly in another directory).

o This is often called creating a (or another) link to the file.

o Unix has two flavors of links, hard links and symbolic links or symlinks.

o Dos/windows have symlinks, but I don't believe they have hard links.

o These links often cause confusion, but I really believe that the diagrams I created make it all clear.

Hard Links

o Symmetric multinamed files.

o When a hard link is created, another name is created for the same file.

o The two names have equal status.

o It is not, I repeat NOT true that one name is the ``real name'' and the other is ``just a link''.

Start with an empty file system (i.e., just the root directory) and then execute:


cd /
mkdir /A; mkdir /B
touch /A/X; touch /B/Y

We have the situation shown on the right.

Note that names are on edges not nodes. When there are no multinamed files, it doesn't much matter.

Now execute

ln /B/Y /A/New

This gives the new diagram to the right.

At this point there are two equally valid names for the right hand yellow file, /B/Y and /A/New. The fact that /B/Y was created first is NOT detectable.

o Both point to the same inode.

o Only one owner (the one who created the file initially).

o One date, one set of permissions, one ... .

Assume Bob created /B and /B/Y and Alice created /A, /A/X, and /A/New. Later Bob tires of /B/Y and removes it by executing

rm /B/Y

The file /A/New is still fine (see third diagram on the right). But it is owned by Bob, who can't find it! If the system enforces quotas, Bob will likely be charged (as the owner), but he can neither find nor delete the file (since Bob cannot unlink, i.e. remove, files from /A).

Since hard links are only permitted to files (not directories) the resulting file system is a dag (directed acyclic graph). That is there are no directed cycles. We will now proceed to give away this useful property by studying symlinks, which can point to directories.

Symlinks

o Asymmetric multinamed files.

o When a symlink is created, another file is created, one that points to the original file.

Again start with an empty file system and this time execute

cd /
mkdir /A; mkdir /B
touch /A/X; touch /B/Y
ln -s /B/Y /A/New

We now have an additional file /A/New, which is a symlink to /B/Y.

o The file named /A/New has the name /B/Y as its data (not metadata).

o The system notices that /A/New is a diamond (a symlink in the diagrams), so reading /A/New will return the contents of /B/Y (assuming the reader has read permission for /B/Y).

o If /B/Y is removed /A/New becomes invalid.

o If a new /B/Y is created, /A/New is once again valid.

o Removing /A/New has no effect on /B/Y.

o If a user has write permission for /B/Y, then writing /A/New is possible and writes /B/Y.

The bottom line is that, with a hard link, a new name is created that has equal status to the original name. This can cause some surprises (e.g., you create a link but I own the file). With a symbolic link a new file is created (owned by the creator naturally) that points to the original file.
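In Unix the shell's ln is a thin wrapper over the link(2) and symlink(2) system calls, and rm over unlink(2). A minimal sketch using the paths from the examples above (the symlink target name is changed to /A/New2 only to avoid colliding with the hard link; error handling abbreviated):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Hard link: create a second name of equal status for the same file. */
    if (link("/B/Y", "/A/New") != 0)
        perror("link");

    /* Symbolic link: create a NEW file whose data is the name "/B/Y". */
    if (symlink("/B/Y", "/A/New2") != 0)
        perror("symlink");

    /* rm is unlink(2): it removes a name; the file itself lives on
       until its link count drops to zero (and nobody has it open). */
    if (unlink("/B/Y") != 0)
        perror("unlink");
    return 0;
}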


Question: Consider the hard link setup above. If Bob removes /B/Y and then creates another /B/Y, what happens to /A/New?
Answer: Nothing. /A/New is still a file with the same contents as the original /B/Y.

Question: What about with a symlink?
Answer: /A/New becomes invalid and then valid again, this time pointing to the new /B/Y. (It can't point to the old /B/Y as that is completely gone.)

What about symlinking a directory?

cd /
mkdir /A; mkdir /B
touch /A/X; touch /B/Y
ln -s /B /A/New

Is there a file named /A/New/Y? Yes.

What happens if you execute cd /A/New/.. ?

o Answer: Not clear!

o Clearly you are changing directory to the parent directory of /A/New. But is that /A or /?

o The command interpreter I use offers both possibilities.

cd -L /A/New/.. takes you to /A (L for logical).

cd -P /A/New/.. takes you to / (P for physical).

cd /A/New/.. takes you to /A (logical is the default).

What did I mean when I said the pictures made it all clear?
Answer: From the file system perspective it is clear. It is not always so clear what programs will do.

4.3.4: Disk space management


All general purpose systems use a (non-demand) paging algorithm for file storage. Files are broken into fixed-size pieces, called blocks, that can be scattered over the disk. Note that although this is paging, it is never called paging.

The file is completely stored on the disk, i.e., it is not demand paging.

o Actually it is more complicated as various optimizations are performed to try to have consecutive blocks of a single file stored consecutively on the disk.

o One can imagine systems that store only parts of the file on disk with the rest on tertiary storage (some kind of tape).

o This would be just like demand paging.

o Perhaps NASA does this with their huge datasets.

o Caching (as done for example in microprocessors) is also the same as demand paging.

o We unify these concepts in the computer architecture course.

Choice of block size

o We discussed this before when studying page size.

o Current commodity disk characteristics (not for laptops) result in about 15ms to transfer the first byte and 10KB per ms for subsequent bytes (if contiguous).

We will explain the following terms in the I/O chapter.

Rotation rate is 5400, 7200, or 10,000 RPM (15K just now available).

Recall that 6000 RPM is 100 rev/sec or one rev per 10ms. So half a rev (the average time to rotate to a given point) is 5ms.

Transfer rates around 10MB/sec = 10KB/ms.

Seek time around 10ms.

o This favors large blocks, 100KB or more.

o But the internal fragmentation would be severe since many files are small.


o Multiple block sizes have been tried as have techniques to try to have consecutive blocks of a given file near each other.

o Typical block sizes are 4KB and 8KB.
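A quick back-of-the-envelope computation with the figures above (15ms to the first byte, 10KB/ms thereafter) shows why the startup cost pushes toward large blocks:

#include <stdio.h>

int main(void) {
    double startup_ms = 15.0;    /* seek + rotational latency, from above */
    double kb_per_ms  = 10.0;    /* streaming transfer rate, from above   */
    int sizes_kb[] = {1, 4, 8, 100, 1000};

    for (int i = 0; i < 5; i++) {
        double t = startup_ms + sizes_kb[i] / kb_per_ms;
        printf("%5d KB block: %7.1f ms, effective %5.2f KB/ms\n",
               sizes_kb[i], t, sizes_kb[i] / t);
    }
    /* A 4KB block achieves only ~0.26 of the 10 KB/ms peak rate; a 100KB
       block achieves 4 KB/ms -- hence the pull toward large blocks. */
    return 0;
}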

Storing free blocks

There are several possibilities.

1. An in-memory bit map, one bit per block.

If blocksize = 4KB, that is 1 bit per 32K bits of disk.

So a 32GB disk (potentially all free) needs 1MB of RAM.

2. Bit map paged in.

3. Linked list with each free block pointing to next: Extra disk access per block.

4. Linked list with links stored contiguously, i.e. an array of pointers to free blocks. Store this in free blocks and keep one in memory.
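A minimal sketch of option 1, the in-memory bit map, with one bit per block (1 = free); the sizes match the arithmetic above, and the first-fit scan is illustrative:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define NBLOCKS (1L << 23)              /* 32GB disk / 4KB blocks */
static uint8_t freemap[NBLOCKS / 8];    /* exactly the 1MB of RAM computed above */

static int  is_free (long b) { return (freemap[b >> 3] >> (b & 7)) & 1; }
static void set_free(long b) { freemap[b >> 3] |=  (uint8_t)(1u << (b & 7)); }
static void set_used(long b) { freemap[b >> 3] &= (uint8_t)~(1u << (b & 7)); }

/* First-fit: scan for a free block and mark it used. */
static long alloc_block(void) {
    for (long b = 0; b < NBLOCKS; b++)
        if (is_free(b)) { set_used(b); return b; }
    return -1;                           /* disk full */
}

int main(void) {
    memset(freemap, 0xFF, sizeof freemap);           /* everything free        */
    set_used(0);                                     /* pretend block 0 is used */
    printf("allocated block %ld\n", alloc_block());  /* prints 1               */
    return 0;
}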

4.3.5: File System reliability

Bad blocks on disks

Not so much of a problem now. Disks are more reliable and, more importantly, disks take care of bad blocks themselves; no OS support is needed to map out bad blocks. But if a block does go bad, the data on it is lost (though not always).

Backups


All modern systems support full and incremental dumps.

o A level 0 dump is called a full dump (i.e., it dumps everything).

o A level n dump (n>0) is called an incremental dump; the standard unix utility dumps all files that have changed since the previous level n-1 dump.

o Other dump utilities dump all files that have changed since the last level n dump.

o Keep on the disk the dates of the most recent level i dumps for all i. In Unix this is traditionally in /etc/dumpdates.

o What about the nodump attribute?

Default policy (for Linux at least) is to dump such files anyway when doing a full dump, but not dump them for incremental dumps.

Another way to say this is that the nodump attribute is honored for level n dumps if n>0.

The dump command has an option to override the default policy (can specify k so that nodump is honored for level n dumps if n>k).

Consistency

o Fsck (file system check) and chkdsk (check disk): If the system crashed, it is possible that not all metadata was written to disk. As a result the file system may be inconsistent. These programs check, and often correct, inconsistencies.

Scan all inodes (or the FAT) to check that each block is in exactly one file or on the free list, but not both.

Also check that the number of links to each file (part of the metadata in the file's inode) is correct (by looking at all directories).

Other checks as well.

Offers to ``fix'' the errors found (for most errors).

o ``Journaling'' file systems

An idea from database theory (transaction logs).

Eliminates the need for fsck.


NTFS has had journaling from day 1.

Many Unix systems have it. IBM's AIX converted to journaling in the early 90s.

Linux does not yet have journaling, a serious shortcoming. It is under very active development.

FAT does not have journaling.

4.3.6 File System Performance

Buffer cache or block cache

An in-memory cache of disk blocks

o Demand paging again!

o Clearly good for reads as it is much faster to read memory than to read a disk.

o What about writes?

Must update the buffer cache (otherwise subsequent reads will return the old value).

The major question is whether the system should also update the disk block.

The simplest alternative is write through, in which each write is performed at the disk before it is declared complete.

Since floppy disk drivers adopt a write through policy, one can remove a floppy as soon as an operation is complete.

Write through results in heavy I/O write traffic.

If a block is written many times, all the writes are sent to the disk. Only the last one was ``needed''.

If a temporary file is created, written, read, and deleted, all the disk writes were wasted.

Write through is used by DOS.

The other alternative is write back in which the disk is not updated until the in-memory copy is evicted (i.e., the replacement question).


Much less write traffic than write through.

Trouble if a crash occurs.

Used by UNIX and others for hard disks.

Can write dirty blocks periodically, say every minute. This limits the possible damage, but also the possible gain.

Ordered writes. Do not write a block containing pointers until the block pointed to has been written. This matters especially if the block pointed to itself contains pointers, since the on-disk version of those pointers may be wrong, and a crash would leave a file pointing to random blocks.

o Research in ``log-structured'' file systems tries to make all writes sequential (i.e., writes are treated as if going to a log file).
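A minimal sketch contrasting the two write policies; the cache entry structure and the disk_write stub are illustrative stand-ins, not any particular system's API:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096

struct cache_entry {
    long block_no;
    bool dirty;                  /* in-memory copy newer than the disk? */
    char data[BLOCK_SIZE];
};

/* Stand-in for the real disk I/O. */
static void disk_write(long block_no, const char *data) {
    (void)data;
    printf("disk write of block %ld\n", block_no);
}

/* Write-through: update memory AND the disk before declaring the write
   complete. Safe (a crash loses nothing) but every write hits the disk. */
void write_through(struct cache_entry *e, const char *new_data) {
    memcpy(e->data, new_data, BLOCK_SIZE);
    disk_write(e->block_no, e->data);
}

/* Write-back: update memory only and mark the block dirty; the disk is
   written at eviction (or by a periodic flusher), so many writes to one
   block cost a single disk I/O -- but a crash can lose data. */
void write_back(struct cache_entry *e, const char *new_data) {
    memcpy(e->data, new_data, BLOCK_SIZE);
    e->dirty = true;
}

void evict(struct cache_entry *e) {
    if (e->dirty)
        disk_write(e->block_no, e->data);
    e->dirty = false;
}

int main(void) {
    static struct cache_entry e = { .block_no = 7 };
    char buf[BLOCK_SIZE] = "hello";
    write_back(&e, buf);    /* no disk traffic yet        */
    write_back(&e, buf);    /* still none                 */
    evict(&e);              /* one disk write for both    */
    return 0;
}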

4.4: Security

Very serious subject. Could easily be a course in itself. My treatment is very brief.

4.4.1: Security environment

1. Accidental data loss

Fires, floods, etc.

System errors

Human errors

2. Intruders

Sadly an enormous problem.

The NYU ``greeting'' no longer includes the word ``welcome'' since that was somehow interpreted as some sort of license to break in.

Indeed, the greeting is not friendly.


It once was.

Below I have a nasty version from a few years ago.

WARNING: UNAUTHORIZED PERSONS ........ DO NOT PROCEED

This computer system is operated by New York University (NYU) and may be accessed only by authorized users. Authorized users are granted specific, limited privileges in their use of the system. The data and programs in this system may not be accessed, copied, modified, or disclosed without prior approval of NYU. Access and use, or causing access and use, of this computer system by anyone other than as permitted by NYU are strictly prohibited by NYU and by law and may subject an unauthorized user, including unauthorized employees, to criminal and civil penalties as well as NYU-initiated disciplinary proceedings. The use of this system is routinely monitored and recorded, and anyone accessing this system consents to such monitoring and recording.

Questions regarding this access policy or other topics should be directed (by e-mail) to [email protected] or (by phone) to 212-998-3333.

================ Start Lecture #11 ================

Note: There was a typo or two in the symlinking directories section (picture should show /B not /B/new, and cd -P goes to / not /B). The giant page and the lecture-10 page have been fixed.

o Privacy: An enormously serious (societal) subject.

4.4.2: Famous flaws


o Good bathroom reading.

o Trojan horse attack: Planting a program of your choosing in place of a well-known program and having an unsuspecting user execute it.

o Some trivial examples:

o Install a new version of login that does everything normally, but then mails the username and plaintext password to [email protected].

o Put a new version of ls in your home directory and ask the sysadmin for help. ``Hopefully he types ls while in your directory and has . early in his path''.

4.4.3: The internet worm

o A worm divides itself and sends one portion to another machine.

o Different from a virus (see below).

o The famous internet (Morris) worm exploited silly bugs in UNIX to crack systems automatically.

o Specifically, it exploited careless use of gets(), which does not check the length of its input against the size of its buffer argument (see the sketch after this list).

o Attacked Sun and Vax UNIX systems.

o NYU was hit hard; but not our lab, which had IBM RTs.
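A minimal illustration of why gets() was exploitable and what the bounded alternative looks like. (gets() has since been removed from the C standard for exactly this reason; it is shown here only to make the flaw concrete.)

#include <stdio.h>

void vulnerable(void) {
    char buf[64];
    gets(buf);              /* NO length check: a long line overruns buf
                               (on the stack, this can overwrite the
                               return address); removed in C11 */
    printf("%s\n", buf);
}

void safe(void) {
    char buf[64];
    if (fgets(buf, sizeof buf, stdin) != NULL)   /* reads at most 63 chars */
        printf("%s", buf);
}

int main(void) {
    safe();                 /* vulnerable() is deliberately never called */
    return 0;
}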

4.4.4: Generic Security attacks

More bathroom reading

Viruses


o A virus attaches itself to (``infects'') a part of the system so that it remains until explicitly removed. In particular, rebooting the system does not remove it.

o It attaches to an existing program or to a portion of the disk that is used for booting.

o When the virus is run it tries to attach itself to other files.

o Often implemented the same way as a binary patch: change the first instruction to jump to somewhere you put the original first instruction, followed by your patch, then a jump back to the second instruction.

4.4.5: Design principles for security

More bathroom reading

4.4.6: User authentication

Passwords

o Software to crack passwords is publicly available.

o Use this software for prevention.

o One way to prevent the cracking of passwords is to use one-time passwords instead, e.g., SecurID.

o Current practice here and elsewhere is that when you telnet to a remote machine, your password is sent in the clear along the Ethernet. So maybe .rhosts isn't that bad after all.

Physical identification

Opens up a bunch of privacy questions. For example, should we require fingerprinting for entering the subway?

4.5: Protection mechanisms


4.5.1: Protection domains

o We distinguish between objects, which are passive, and subjects, which are active.

o For example, processes (subjects) examine files (objects).

o Protection domain: A collection of (object, rights) pairs.

o At any given time a subject is given a protection domain that specifies its rights.

o In UNIX a subject's domain is determined by its (uid, gid) (and whether it is in kernel mode).

o This generates a matrix called the protection or permission matrix.

Each row corresponds to a domain (i.e. a subject at some time).

Each column corresponds to an object (e.g., a file or device).

Each entry gives the rights the domain/subject has on this object.

o One can model UNIX suid/sgid by permitting columns whose headings are domains, where the only right possible in the corresponding entries is enter. If this right is present, the subject corresponding to the row can s[ug]id to the new domain, which corresponds to the column.

4.5.2: Access Control Lists (ACLs)

Keep the columns of the matrix separate and drop the null entries.

4.5.3: Capabilities

Keep the rows of the matrix separate and drop the null entries.

4.5.4: Protection models

Give objects and subjects security levels and enforce:

1. A subject may read only those objects whose level is at or below her own.


2. A subject may write only those objects whose level is at or above her own.
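The two rules (``read down, write up'') are small enough to state as code; a sketch assuming integer levels where a higher number means more secret:

#include <stdio.h>

typedef int level_t;   /* higher number = more secret (an assumption here) */

int may_read (level_t subject, level_t object) { return object <= subject; }  /* read down */
int may_write(level_t subject, level_t object) { return object >= subject; }  /* write up  */

int main(void) {
    printf("%d\n", may_read (2, 0));  /* 1: a secret subject may read public data */
    printf("%d\n", may_write(2, 0));  /* 0: writing down would leak secrets       */
    return 0;
}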

4.5.5: Covert channels

The bad guys are getting smart and use other means of getting out information. For example give good service for a zero and bad for a one. The figure of merit is the rate at which bits can be sent, i.e. the bandwidth of the covert channel.

Homework: 20.

Chapter 5: Input/Output

5.1: Principles of I/O Hardware

5.1.1: I/O Devices

o Not much to say. Devices are varied.

o Block versus character devices:

Devices, such as disks and CDROMs, with addressable chunks (sectors in this case) are called block devices. These devices support seeking.

Devices, such as Ethernet and modem connections, that are a stream of characters are called character devices. These devices do not support seeking.

Some cases, like tapes, are not so clear.

5.1.2: Device Controllers

These are the ``real devices'' as far as the OS is concerned. That is, the OS code is written with the controller spec in hand, not the device spec.


The figure in the book is so oversimplified as to be borderline false. The following picture is closer to the truth (but really there are several I/O buses of different speeds).


o The controller abstracts away some of the low level features of the device.

o For disks, the controller does error checking, buffering and handles interleaving of sectors. (Sectors are interleaved if the controller or CPU cannot handle the data rate and would otherwise have to wait a full revolution. This is not a concern with modern systems since the electronics have increased in speed faster than the devices.)

o For analog monitors (CRTs) the controller does a great deal. Analog video is far from a bunch of ones and zeros.

o Controllers are also called adaptors.

Using a controller

Think of a disk controller and a read request. The goal is to copy data from the disk to some portion of the central memory. How do we do this?

o The controller contains a microprocessor and memory and is connected to the disk (by a cable).

o When the controller asks the disk to read a sector, the contents come to the controller via the cable and are stored by the controller in its memory.

o The question is how the OS, which is running on another processor, lets the controller know that a disk read is desired, and how the data is eventually moved from the controller's memory to the general system memory.

o Typically the interface the OS sees consists of some device registers located on the controller.

These are memory locations into which the OS writes information such as sector to access, read vs. write, length, where in system memory to put the data (for a read) or from where to take the data (for a write).

There is also typically a device register that acts as a ``go button''.

There are also device registers that the OS reads, such as the status of the controller, errors found, etc.

o So now the question is how the OS reads and writes the device registers.

With Memory-mapped I/O the device registers appear as normal memory. All that is needed is to know at which address each device register appears. Then the OS uses normal load and store instructions to write the registers.


Some systems instead have a special ``I/O space'' into which the registers are mapped and require the use of special I/O instructions to accomplish the load and store. From a conceptual point of view there is no difference between the two models.
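Here is a sketch of what memory-mapped device registers might look like from a driver. The register layout, field meanings, and base address are all invented for illustration (a real driver takes them from the controller's spec); volatile keeps the compiler from optimizing away or reordering the accesses.

#include <stdint.h>

/* Hypothetical register block for a disk controller. */
struct disk_regs {
    volatile uint32_t sector;    /* which sector to access                 */
    volatile uint32_t count;     /* how many sectors                       */
    volatile uint32_t mem_addr;  /* where in system memory (for a read)    */
    volatile uint32_t command;   /* the "go button": e.g., 1=read, 2=write */
    volatile uint32_t status;    /* read by the OS: busy, done, error bits */
};

#define DISK_REGS ((struct disk_regs *)0xFEB00000)  /* invented address */

void start_read(uint32_t sector, uint32_t count, uint32_t dest) {
    DISK_REGS->sector   = sector;
    DISK_REGS->count    = count;
    DISK_REGS->mem_addr = dest;
    DISK_REGS->command  = 1;     /* press the go button */
}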

5.1.3: Direct Memory Access (DMA)

o With or without DMA, the disk controller pulls the desired data from the disk to its buffer (and pushes data from the buffer to the disk).

o Without DMA, i.e., with programmed I/O (PIO), the cpu then does loads and stores (or I/O instructions) to copy the data from the buffer to the desired memory location.

o With a DMA controller, the controller writes the memory without intervention of the CPU.

o Clearly DMA saves CPU work. But this might not be important if the CPU is limited by the memory or by system buses.

o Very important is that there is less data movement so the buses are used less and the entire operation takes less time.

o Since PIO is pure software it is easier to change, which is an advantage.

o DMA does need a number of bus transfers from the CPU to the controller to specify the DMA. So DMA is most effective for large transfers where the setup is amortized.

o Why have the buffer? Why not just go from the disk straight to the memory?
Answer: Speed matching. The disk supplies data at a fixed rate, which might exceed the rate at which the memory can accept it. In particular, the memory might be busy servicing a request from the processor or from another DMA controller.


================ Start Lecture #12 ================

Note: I improved the presentation of disk controllers given last time (unfortunately, this was very easy to do). In particular, I suggest you read the new small section entitled ``Using a controller''. Both the giant page and the page for lecture-11 now include this new section.

5.2: Principles of I/O Software

As with any large software system, good design and layering is important.

5.2.1: Goals of the I/O Software

Device independence

We want most of the OS to be unaware of the characteristics of the specific devices attached to the system. Indeed we also want the OS to be largely unaware of the CPU type itself.

Due to this device independence, programs are written to read and write generic devices and then at run time specific devices are assigned. Writing to a disk has differences from writing to a terminal, but Unix cp and DOS copy do not see these differences. Indeed, most of the OS, including the file system code, is unaware of whether the device is a floppy or hard disk.

Uniform naming


Recall that we discussed the value of the name space implemented by file systems. There is no dependence between the name of the file and the device on which it is stored. So a file called I Am Stored on a Hard Disk might well be stored on a floppy disk.

Error handling

There are several aspects to error handling including: detection, correction (if possible) and reporting.

1. Detection should be done as close to where the error occurred as possible, before more damage is done (fault containment). This is not trivial.

2. Correction is sometimes easy; for example, ECC memory does this automatically (but the OS wants to know about the error so that it can schedule replacement of the faulty chips before unrecoverable double errors occur).

Other easy cases include successful retries for failed ethernet transmissions. In this example, while logging is appropriate, it is quite reasonable for no action to be taken.

3. Error reporting tends to be awful. The trouble is that the error occurs at a low level, but by the time it is reported the context is lost. Unix/Linux in particular is horrible in this area.

Creating the illusion of synchronous I/O

o I/O must be asynchronous for good performance. That is the OS cannot simply wait for an I/O to complete. Instead, it proceeds with other activities and responds to the notification when the I/O has finished.

o Users (mostly) want no part of this. The code sequence

Read X
Y <-- X+1
Print Y

should print a value one greater than that read. But if the assignment is performed before the read completes, the wrong value is assigned.


o Performance junkies sometimes do want the asynchrony so that they can have another portion of their program executed while the I/O is underway. That is they implement a mini-scheduler in their application code.

Sharable vs. dedicated devices

For devices like printers and tape drives, only one user at a time is permitted. These are called serially reusable devices and are studied in the next chapter. Devices like disks and Ethernet ports can be shared by processes running concurrently.

Layering

Layers of abstraction as usual prove to be effective. Most systems are believed to use the following layers (but for many systems, the OS code is not available for inspection).

1. User level I/O routines.

2. Device independent I/O software.

3. Device drivers.

4. Interrupt handlers.

We give a bottom up explanation.

5.2.2: Interrupt Handlers

We discussed an interrupt handler before when studying page faults. Then it was called ``assembly language code''.

In the present case, we have a process blocked on I/O and the I/O event has just completed. So the goal is to make the process ready. Possible methods are:

o Releasing a semaphore on which the process is waiting.

o Sending a message to the process.

o Inserting the process table entry onto the ready list.


Once the process is ready, it is up to the scheduler to decide when it should run.

5.2.3: Device Drivers

The portion of the OS that ``knows'' the characteristics of the controller.

The driver has two ``parts'' corresponding to its two access points. Recall the following figure from the beginning of the course.


1. Accessed by the main line OS via the envelope in response to an I/O system call. The portion of the driver accessed in this way is sometimes called the ``top'' part.

2. Accessed by the interrupt handler when the I/O completes (this completion is signaled by an interrupt). The portion of the driver accessed in this way is sometimes called the ``bottom'' part.


Tanenbaum describes the actions of the driver assuming it is implemented as a process (which he recommends). I give both that viewpoint and the self-service paradigm, in which the driver is invoked by the OS acting on behalf of a user process (more precisely, the process shifts into kernel mode).

Driver in a self-service paradigm

1. The user (A) issues an I/O system call.

2. The main line, machine independent, OS prepares a generic request for the driver and calls (the top part of) the driver.

o If the driver was idle (i.e., the controller was idle), the driver writes device registers on the controller ending with a command for the controller to begin the actual I/O.

o If the controller was busy (doing work the driver gave it previously), the driver simply queues the current request (the driver dequeues this request below).

3. The driver jumps to the scheduler indicating that the current process should be blocked.

4. The scheduler blocks A and runs (say) B.

5. B starts running.

6. An interrupt arrives (i.e., an I/O has been completed).

7. The interrupt handler invokes (the bottom part of) the driver.

o The driver informs the main line, perhaps passing data and surely passing status (error, OK).

o The top part is called to start another I/O if the queue is nonempty. We know the controller is free. Why?
Answer: We just received an interrupt saying so.


8. The driver jumps to the scheduler indicating that process A should be made ready.

9. The scheduler picks a ready process to run. Assume it picks A.

10. A resumes in the driver, which returns to the main line, which returns to the user code.

Driver as a process (Tanenbaum) (less detailed than above)

The user issues an I/O request. The main line OS prepares a generic request (e.g., read, not ``read using the BusLogic BT-958 SCSI controller'') for the driver, and the driver is awakened (perhaps a message is sent to the driver to do both jobs).

1. The driver wakes up.

A. If the driver was idle (i.e., the controller is idle), the driver writes device registers on the controller ending with a command for the controller to begin the actual I/O.

B. If the controller is busy (doing work the driver gave it), the driver simply queues the current request.

2. The driver blocks waiting for an interrupt or for more requests.

An interrupt arrives (i.e., an I/O has been completed).

1. The driver wakes up.

2. The driver informs the main line perhaps passing data and surely passing status (error, OK).

3. The driver finds the next work item or blocks.

A. If the queue of requests is non-empty, dequeue one and proceed as if a request had just arrived from the main line.

B. If queue is empty, the driver blocks waiting for an interrupt or a request from the main line.

5.2.4: Device-Independent I/O Software


The device-independent code does most of the functionality, but not necessarily most of the code since there can be many drivers all doing essentially the same thing in slightly different ways due to slightly different controllers.

Naming. Again an important O/S functionality. Must offer a consistent interface to the device drivers.

o In UNIX this is done by associating each device with a (special) file in the /dev directory.

o The inodes for these files contain an indication that these are special files and also contain so called major and minor device numbers.

o The major device number gives the number of the driver. (These numbers are rather ad hoc; they correspond to the position of the function pointer to the driver in a table of function pointers.)

o The minor number indicates for which device (e.g., which scsi cdrom drive) the request is intended.
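These numbers are visible from user space with stat(2): for a device special file the st_rdev field encodes both, and glibc provides major() and minor() macros (in <sys/sysmacros.h>) to pull them apart:

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

int main(int argc, char *argv[]) {
    struct stat st;
    const char *path = argc > 1 ? argv[1] : "/dev/null";
    if (stat(path, &st) != 0) { perror("stat"); return 1; }
    /* major() selects the driver; minor() selects the device instance. */
    printf("%s: major=%u minor=%u\n",
           path, major(st.st_rdev), minor(st.st_rdev));
    return 0;
}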

Protection. A wide range of possibilities is seen in real systems, including both extremes: everything is permitted and nothing is permitted (directly).

o In ms-dos any process can write to any file. Presumably, our offensive nuclear missile launchers do not run dos.

o In IBM and other mainframe OS's, normal processors do not access devices. Indeed the main CPU doesn't issue the I/O requests. Instead an I/O channel is used and the mainline constructs a channel program and tells the channel to invoke it.

o UNIX uses the normal rwx bits on files in /dev (I don't believe x is used).

Buffering is necessary since requests come in a size specified by the user and data is delivered in a size specified by the device.

Enforce exclusive access for non-shared devices like tapes.

5.2.5: User-Space Software


A good deal of I/O code is actually executed in user space. Some is in library routines linked into user programs and some is in daemon processes.

Some library routines are trivial and just move their arguments into the correct place (e.g., a specific register) and then issue a trap to the correct system call to do the real work.

Some, notably standard I/O (stdio) in UNIX, are definitely not trivial. For example consider the formatting of floating point numbers done in printf and the reverse operation done in scanf.

Printing to a local printer is often performed in part by a regular program (lpr in UNIX) and part by a daemon (lpd in UNIX). The daemon might be started when the system boots or might be started on demand. I guess it is called a daemon because it is not under the control of any user. Does anyone know the reason?

Printing uses spooling, i.e., the file to be printed is copied somewhere by lpr and then the daemon works with this copy. Mail uses a similar technique (but generally it is called queuing, not spooling).

Homework: 6, 7, 8.

5.3: Disks


The ideal storage device is:

Fast

Big (in capacity)

Cheap

Impossible! Disks are big and cheap, but slow.

5.3.1: Disk Hardware

Show a real disk opened up and illustrate the components:

Platter

Surface

Head

Track

Sector

Cylinder

Seek time

Rotational latency

Transfer time

Overlapping I/O operations is important. Many controllers can do overlapped seeks, i.e. issue a seek to one disk while another is already seeking.

Despite what Tanenbaum says, modern disks cheat and do not have the same number of sectors on outer cylinders as on inner ones. Often the controller ``covers for them'' and protects the lie.

Again contrary to Tanenbaum, it is not true that when one head is reading from cylinder C, all the heads can read from cylinder C with no penalty.


Choice of block size

We discussed this before when studying page size. Current commodity disk characteristics (not for laptops) result in about 15ms to transfer the first byte and 10KB per ms for subsequent bytes (if contiguous).

o Rotation rate is 5400, 7200, or 10,000 RPM (15K just now available).

o Recall that 6000 RPM is 100 rev/sec or one rev per 10ms. So half a rev (the average time to rotate to a given point) is 5ms.

o Transfer rates around 10MB/sec = 10KB/ms.

o Seek time around 10ms.

This favors large blocks, 100KB or more.

But the internal fragmentation would be severe since many files are small.

Multiple block sizes have been tried as have techniques to try to have consecutive blocks of a given file near each other.

Typical block sizes are 4KB-8KB.

5.3.2: Disk Arm Scheduling Algorithms

These algorithms are relevant only if there are several I/O requests pending. For many PCs this is not the case. For most commercial applications, I/O is crucial.

1. FCFS (First Come First Served): Simple but has long delays.

2. Pick: Same as FCFS but pick up requests for cylinders that are passed on the way to the next FCFS request.

3. SSTF (Shortest Seek Time First): Greedy algorithm. Can starve requests for outer cylinders and almost always favors middle requests.

4. Scan (Look, Elevator): The method used by an old fashioned jukebox (remember ``Happy Days'') and by elevators. The disk arm proceeds in one direction picking up all requests until there are no more requests in this direction at which point it goes back the other direction. This favors requests in the middle, but can't starve any requests.


5. C-Scan (C-look, Circular Scan/Look): Similar to Scan but only service requests when moving in one direction. When going in the other direction, go directly to the furthest away request. This doesn't favor any spot on the disk. Indeed, it treats the cylinders as though they were a clock, i.e. after the highest numbered cylinder comes cylinder 0.

6. N-step Scan: This is what the natural implementation of Scan gives.

o While the disk is servicing a Scan direction, the controller gathers up new requests and sorts them.

o At the end of the current sweep, the new list becomes the next sweep.
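A minimal sketch of the Scan ordering: sort the pending cylinder numbers, sweep from the current head position in the current direction, then reverse. The request queue in main() is illustrative.

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Print the service order for pending requests 'req' given the current
   head cylinder and direction (up = toward higher cylinder numbers). */
void scan_order(int *req, int n, int head, int up) {
    qsort(req, n, sizeof(int), cmp_int);
    int i = 0;
    while (i < n && req[i] < head) i++;       /* first request at/above head */
    if (up) {
        for (int j = i; j < n; j++)  printf("%d ", req[j]);   /* sweep up   */
        for (int j = i - 1; j >= 0; j--) printf("%d ", req[j]); /* then down */
    } else {
        for (int j = i - 1; j >= 0; j--) printf("%d ", req[j]);
        for (int j = i; j < n; j++)  printf("%d ", req[j]);
    }
    printf("\n");
}

int main(void) {
    int req[] = {98, 183, 37, 122, 14, 124, 65, 67};
    scan_order(req, 8, 53, 1);   /* head at cylinder 53, moving up */
    return 0;
}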

Minimizing Rotational Latency

Use Scan, which is the same as C-Scan here. Why?
Answer: Because the disk rotates in only one direction.

================ Start Lecture #13 ================

RAID (Redundant Array of Inexpensive Disks)

Tanenbaum's treatment is not very good.

The name RAID is from Berkeley.

IBM changed the name to Redundant Array of Independent Disks. I wonder why?

A simple form is mirroring, where two disks contain the same data.


Another simple form is striping (interleaving) where consecutive blocks are spread across multiple disks. This helps bandwidth, but is not redundant. Thus it shouldn't be called RAID, but it sometimes is.

One of the normal RAID methods is to have N (say 4) data disks and one parity disk. Data is striped across the data disks and the bitwise parity of these sectors is written in the corresponding sector of the parity disk.

On a read if the block is bad (e.g., if the entire disk is bad or even missing), the system automatically reads the other blocks in the stripe and the parity block in the stripe. Then the missing block is just the bitwise exclusive or of all these blocks.

For reads this is very good. The failure free case has no penalty (beyond the space overhead of the parity disk). The error case requires N+1 (say 5) reads.

A serious concern is the small write problem. Writing a single sector requires 4 I/Os: read the old data sector, read the old parity, compute the change and the new parity, then write the new parity and the new data sector. Hence one sector I/O became 4, a 300% penalty.
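The reason four I/Os suffice, rather than re-reading the whole stripe, is the XOR identity: new_parity = old_parity XOR old_data XOR new_data. A minimal sketch (tiny sector size for the demo):

#include <stdio.h>
#include <stddef.h>

#define SECTOR 8   /* tiny sector for the demo; really 512 or 4096 */

/* Update the stripe's parity in place, given only the old and new
   contents of the one sector being rewritten. */
void update_parity(unsigned char *parity,
                   const unsigned char *old_data,
                   const unsigned char *new_data) {
    for (size_t i = 0; i < SECTOR; i++)
        parity[i] ^= old_data[i] ^ new_data[i];
}

int main(void) {
    unsigned char d0[SECTOR] = {1,2,3,4,5,6,7,8};  /* the sector being rewritten */
    unsigned char d1[SECTOR] = {9,9,9,9,9,9,9,9};  /* the other data sector      */
    unsigned char nw[SECTOR] = {0};                /* d0's new contents          */
    unsigned char parity[SECTOR];

    for (size_t i = 0; i < SECTOR; i++) parity[i] = d0[i] ^ d1[i];
    update_parity(parity, d0, nw);                 /* the "read old + old parity" step */
    for (size_t i = 0; i < SECTOR; i++)            /* parity now covers nw and d1      */
        if (parity[i] != (nw[i] ^ d1[i])) return 1;
    puts("parity consistent after small write");
    return 0;
}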

Writing a full stripe is not bad. Compute the parity of the N (say 4) data sectors to be written and then write the data sectors and the parity sector. Thus 4 sector I/Os become 5, which is only a 25% penalty and is smaller for larger N, i.e., larger stripes.

A variation is to rotate the parity. That is, for some stripes disk 1 has the parity, for others disk 2, etc. The purpose is to not have a single parity disk since that disk is needed for all small writes and could become a point of contention.

5.3.3: Error Handling

Disk error rates have dropped in recent years. Moreover, bad block forwarding is done by the controller (or the disk electronics), so this topic is no longer as important for the OS.

5.3.4: Track Caching

Often the disk/controller caches a track, since the seek penalty has already been paid. In fact modern disks have megabyte caches that hold recently read blocks. Since modern disks cheat and don't have the same number of blocks on each track, it is better for the disk electronics (and not the OS or controller) to do the caching since it is the only part of the system to know the true geometry.

5.3.5: Ram Disks


Fairly clear. Organize a region of memory as a set of blocks and pretend it is a disk. A problem is that memory is volatile.

Often used during OS installation, before disk drivers are available (there are many types of disk but all memory looks the same so only one ram disk driver is needed).

5.4: Clocks

Also called timers.

5.4.1: Clock Hardware

Generates an interrupt when the timer reaches zero. The counter reload can be automatic or under software (OS) control.

If done automatically, the interrupt occurs periodically and thus is perfect for generating a clock interrupt at a fixed period.

5.4.2: Clock Software

1. TOD: Bump a counter each tick (clock interrupt). If the counter is only 32 bits, one must worry about overflow, so keep two counters: low order and high order.

2. Time quantum for RR: Decrement a counter at each tick. The quantum expires when counter is zero. Load this counter when the scheduler runs a process.

3. Accounting: At each tick, bump a counter in the process table entry for the currently running process.

4. Alarm system call and system alarms:

o Users can request an alarm at some future time, and the system also needs to do things at specific future times (e.g., turn off the floppy motor).


o The conceptually simplest solution is to have one timer for each event. Instead, we simulate many timers with just one.

o The data structure on the right works well.

o The time in each list entry is the time after the preceding entry that this entry's alarm is to ring. For example, if the time is zero, this event occurs at the same time as the previous event. The other entry is a pointer to the action to perform.

o At each tick, decrement next-signal.

o When next-signal goes to zero, process the first entry on the list and any others following immediately after with a time of zero (which means they are to be simultaneous with this alarm). Then set next-signal to the value in the next alarm.
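A minimal sketch of this ``delta list'' (names illustrative). Each node stores its firing time relative to its predecessor, so the tick handler only ever touches the head:

#include <stdio.h>
#include <stdlib.h>

struct alarm {
    int delta;                 /* ticks AFTER the previous entry fires */
    void (*action)(void);
    struct alarm *next;
};

static struct alarm *head;     /* head->delta plays the role of next-signal */

/* Called on every clock tick. */
void tick(void) {
    if (!head) return;
    head->delta--;
    /* Fire the head and any simultaneous (delta 0) entries after it. */
    while (head && head->delta <= 0) {
        struct alarm *a = head;
        head = head->next;
        a->action();
        free(a);
    }
}

/* Insert an alarm t ticks from now, adjusting deltas along the way. */
void set_alarm(int t, void (*action)(void)) {
    struct alarm **pp = &head;
    while (*pp && (*pp)->delta <= t) {
        t -= (*pp)->delta;
        pp = &(*pp)->next;
    }
    struct alarm *a = malloc(sizeof *a);
    a->delta = t;
    a->action = action;
    a->next = *pp;
    if (a->next) a->next->delta -= t;   /* successor is now relative to us */
    *pp = a;
}

static void beep(void) { puts("  alarm!"); }

int main(void) {
    set_alarm(3, beep);
    set_alarm(5, beep);
    set_alarm(3, beep);                 /* simultaneous with the first */
    for (int t = 1; t <= 6; t++) {
        printf("tick %d\n", t);
        tick();                          /* two alarms at tick 3, one at 5 */
    }
    return 0;
}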

5. Profiling

o Want a histogram giving how much time was spent in each 1KB (say) block of code.

o At each tick check the PC and bump the appropriate counter.

o At the end of the run can assign the 1K blocks to software modules.

o If you use fine granularity (say 10B instead of 1KB), you get higher accuracy but more memory overhead.
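The per-tick work is tiny; a sketch of the interrupt-time step, with hypothetical text size and granularity:

#include <stdint.h>

#define GRANULARITY 1024                 /* histogram bucket: 1KB of code */
#define TEXT_SIZE   (1 << 20)            /* assume 1MB of program text    */

static unsigned long histogram[TEXT_SIZE / GRANULARITY];

/* Called from the clock interrupt handler with the interrupted PC. */
void profile_tick(uintptr_t pc) {
    if (pc < TEXT_SIZE)
        histogram[pc / GRANULARITY]++;
}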

5.5: Terminals

5.5.1: Terminal Hardware

Quite dated. It is true that modern systems can communicate with a hardwired ascii terminal, but most don't. Serial ports are used, but they are normally connected to modems, and then some protocol (SLIP, PPP) is used, not just a stream of ascii characters.


5.5.2: Memory-Mapped Terminals

Less dated. But it still discusses the character interface, not the graphics interface. Today the idea is to have the software write into video memory the bits to be put on the screen; the graphics controller then converts these bits to analog signals for the monitor (actually, laptop displays and very modern monitors are digital).

But it is much more complicated than this. The graphics controllers can do a great deal of video themselves (like filling).

This is a subject that would take many lectures to do well.

Keyboards

Tanenbaum's description of keyboards is correct.

At each key press and key release a code is written into the keyboard controller and the computer is interrupted. By remembering which keys have been depressed and not released the software can determine Cntl-A, Shift-B, etc.

5.5.3: Input Software

We are just looking at keyboard input. Once again graphics is too involved to be treated here. There are two fundamental modes of input, sometimes called raw and cooked.

In raw mode the application sees every ``character'' the user types. Indeed, raw mode is character oriented.

o All the OS does is convert the keyboard ``scan codes'' to ``characters'' and pass these characters to the application.

o Some examples

1. down-cntl down-x up-x up-cntl is converted to cntl-x

2. down-cntl up-cntl down-x up-x is converted to x

3. down-cntl down-x up-cntl up-x is converted to cntl-x (I just tried it to be sure).

4. down-x down-cntl up-x up-cntl is converted to x


o Full screen editors use this mode.

Cooked mode is line oriented. The OS delivers lines to the application program.

o Special characters are interpreted as editing characters (erase-previous-character, erase-previous-word, kill-line, etc).

o Erased characters are not seen by the application but are erased by the keyboard driver.

o Need an escape character so that the editing characters can be passed to the application if desired.

o The cooked characters must be echoed (what should one do if the application is also generating output at this time?)

The (possibly cooked) characters must be buffered until the application issues a read (and an end-of-line EOL has been received for cooked mode).

5.5.4: Output Software

Again too dated and the truth is too complicated to deal with in a few minutes.

Chapter 6: Deadlocks

A deadlock occurs when every member of a set of processes is waiting for an event that can only be caused by a member of the set.

Often the event waited for is the release of a resource.

In the automotive world deadlocks are called gridlocks.

The processes are the cars. The resources are the spaces occupied by the cars.

Reward: One point extra credit on the final exam for anyone who brings a real (e.g., newspaper) picture of an automotive deadlock. You must bring the clipping to the final and it must be in good condition. Hand it in with your exam paper.


For a computer science example consider two processes A and B that each want to print a file currently on tape.

1. A has obtained ownership of the printer and will release it after printing one file.

2. B has obtained ownership of the tape drive and will release it after reading one file.

3. A tries to get ownership of the tape drive, but is told to wait for B to release it.

4. B tries to get ownership of the printer, but is told to wait for A to release the printer.

Bingo: deadlock!

6.1: Resources

The resource is the object granted to a process.

Resources come in two types:

1. Preemptable, meaning that the resource can be taken away from its current owner (and given back later). An example is memory.

2. Non-preemptable, meaning that the resource cannot be taken away. An example is a printer.

The interesting issues arise with non-preemptable resources so those are the ones we study.

The life history of a resource is a sequence of:

1. Request

2. Allocate

3. Use

4. Release

Processes make requests, use the resources, and release the resources. The allocate decisions are made by the system and we will study policies used to make these decisions.


6.2: Deadlocks

To repeat: A deadlock occurs when every member of a set of processes is waiting for an event that can only be caused by a member of the set.

Often the event waited for is the release of a resource.

6.2.1: (Necessary) Conditions for Deadlock

The following four conditions (Coffman; Havender) are necessary but not sufficient for deadlock. Repeat: They are not sufficient.

1. Mutual exclusion: A resource can be assigned to at most one process at a time (no sharing).

2. Hold and wait: A process holding a resource is permitted to request another.

3. No preemption: A process must release its resources; they cannot be taken away.

4. Circular wait: There must be a chain of processes such that each member of the chain is waiting for a resource held by the next member of the chain.

6.2.2: Deadlock Modeling

On the right is the Resource Allocation Graph, also called the Reusable Resource Graph.

The processes are circles. The resources are squares.

An arc (directed line) from a process P to a resource R signifies that process P has requested (but not yet been allocated) resource R.

An arc from a resource R to a process P indicates that process P has been allocated resource R.

Homework: 1.


Consider two concurrent processes P1 and P2 whose programs are.

P1: request R1        P2: request R2
    request R2            request R1
    release R2            release R1
    release R1            release R2

On the board draw the resource allocation graph for various possible executions of the processes, indicating when deadlock occurs and when deadlock is no longer avoidable.

There are four strategies used for dealing with deadlocks.

1. Ignore the problem.

2. Detect deadlocks and recover from them.

3. Avoid deadlocks by carefully deciding when to allocate resources.

4. Prevent deadlocks by violating one of the 4 necessary conditions.

6.3: Ignoring the problem--The Ostrich Algorithm

The ``put your head in the sand approach''.

If the likelihood of a deadlock is sufficiently small and the cost of avoiding a deadlock is sufficiently high, it might be better to ignore the problem. For example, if each PC deadlocked once per 100 years, the one reboot may be less painful than the restrictions needed to prevent it.

Clearly not a good philosophy for nuclear missile launchers.

For embedded systems (e.g., missile launchers) the programs run are fixed in advance so many of the questions Tanenbaum raises (such as many processes wanting to fork at the same time) don't occur.


6.4: Detecting Deadlocks and Recovering from them

6.4.1: Detecting Deadlocks with single unit resources

Consider the case in which there is only one instance of each resource.

So a request can be satisfied by only one specific resource. In this case the 4 necessary conditions for deadlock are also sufficient.

Remember we are making an assumption (single unit resources) that is often invalid. For example, many systems have several printers and a request is given for ``a printer'' not a specific printer. Similarly, one can have many tape drives.

So the problem comes down to finding a directed cycle in the resource allocation graph. Why?
Answer: Because the other three conditions are either satisfied by the system we are studying or are not, in which case deadlock is not a question. That is, conditions 1, 2, and 3 are conditions on the system in general, not on what is happening right now.

To find a directed cycle in a directed graph is not hard. The algorithm is in the book. The idea is simple.

1. For each node in the graph do a depth first traversal (hoping the graph is a DAG, a directed acyclic graph), building a list as you go down the DAG.

2. If you ever find the same node twice on your list, you have found a directed cycle and the graph is not a DAG and deadlock exists among the processes in your current list.

3. If you never find the same node twice, the graph is a DAG and no deadlock occurs.

4. The searches are finite since the list size is bounded by the number of nodes.
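A minimal sketch of this traversal over an adjacency matrix; marking nodes as ``on the current list'' detects the repeat, and marking finished nodes keeps the whole search bounded:

#include <stdio.h>
#include <stdbool.h>

#define N 6                      /* nodes: processes and resources together */
static bool edge[N][N];          /* edge[u][v]: a request or allocation arc */
static bool on_path[N], done[N];

static bool dfs(int u) {
    if (on_path[u]) return true;          /* same node twice on the list: cycle */
    if (done[u]) return false;            /* already fully explored             */
    on_path[u] = true;
    for (int v = 0; v < N; v++)
        if (edge[u][v] && dfs(v)) return true;
    on_path[u] = false;
    done[u] = true;
    return false;
}

bool has_deadlock(void) {
    for (int u = 0; u < N; u++)           /* step 1: start from every node */
        if (dfs(u)) return true;
    return false;
}

int main(void) {
    /* P0 -> R1 -> P1 -> R0 -> P0: a directed cycle, hence deadlock
       (processes are nodes 0-1, resources nodes 2-3; data invented). */
    edge[0][3] = edge[3][1] = edge[1][2] = edge[2][0] = true;
    printf("%s\n", has_deadlock() ? "deadlock" : "no deadlock");
    return 0;
}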

6.4.2: Detecting Deadlocks with multiple unit resources

This is more difficult.

The figure on the right shows a resource allocation graph with multiple unit resources.


Each unit is represented by a dot in the box.

Request edges are drawn to the box since they represent a request for any dot in the box.

Allocation edges are drawn from the dot to represent that this unit of the resource has been assigned (but all units of a resource are equivalent and the choice of which one to assign is arbitrary).

Note that there is a directed cycle in black, but there is no deadlock. Indeed the middle process might finish, erasing the magenta arc and permitting the blue dot to satisfy the rightmost process.

The book gives an algorithm for detecting deadlocks in this more general setting. The idea is as follows.

1. Look for a process that might be able to terminate (i.e., all its request arcs can be satisfied).

2. If one is found pretend that it does terminate (erase all its arcs), and repeat step 1.

3. If any processes remain, they are deadlocked.
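A sketch of that idea using request and allocation matrices (the data here is invented so that two of the three processes end up deadlocked):

#include <stdio.h>
#include <stdbool.h>

#define NPROC 3
#define NRES  2

int request[NPROC][NRES] = {{0,1},{1,0},{0,0}};  /* outstanding requests */
int alloc  [NPROC][NRES] = {{1,0},{0,1},{0,0}};  /* current allocations  */
int avail  [NRES]        = {0,0};                /* currently free units */

int main(void) {
    bool finished[NPROC] = {false};
    for (bool progress = true; progress; ) {
        progress = false;
        for (int p = 0; p < NPROC; p++) {
            if (finished[p]) continue;
            bool ok = true;                       /* can all of p's requests be met? */
            for (int r = 0; r < NRES; r++)
                if (request[p][r] > avail[r]) ok = false;
            if (!ok) continue;
            for (int r = 0; r < NRES; r++)        /* pretend p terminates:        */
                avail[r] += alloc[p][r];          /* its holdings become available */
            finished[p] = progress = true;
        }
    }
    for (int p = 0; p < NPROC; p++)
        if (!finished[p]) printf("process %d is deadlocked\n", p);
    return 0;
}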

We will soon do in detail an algorithm (the Banker's algorithm) that has some of this flavor.

6.4.3: Recovery from deadlock

Preemption

Perhaps you can temporarily preempt a resource from a process. Not likely.

Rollback

Database (and other) systems take periodic checkpoints. If the system does take checkpoints, one can roll back to a checkpoint whenever a deadlock is detected. Somehow must guarantee forward progress.

Kill processes


Can always be done but might be painful. For example, some processes have had effects that can't simply be undone: printing, launching a missile, etc.

================ Start Lecture #14 ================

6.6: Deadlock Prevention

Attack one of the Coffman/Havender conditions.

6.6.1: Attacking Mutual Exclusion

The idea is to use spooling instead of mutual exclusion. Not possible for many kinds of resources.

6.6.2: Attacking Hold and Wait

Require each process to request all resources at the beginning of the run. This is often called One Shot.

6.6.3: Attacking No Preempt

Normally not possible.

6.6.4: Attacking Circular Wait

Establish a fixed ordering of the resources and require that they be requested in this order. So if a process holds resources #34 and #54, it can request only resources #55 and higher.

It is easy to see that a cycle is no longer possible.


6.5: Deadlock Avoidance

Let's see if we can tiptoe through the tulips and avoid deadlock states even though our system does permit all four of the necessary conditions for deadlock.

An optimistic resource manager is one that grants every request as soon as it can. To avoid deadlocks with all four conditions present, the manager must be smart not optimistic.

6.5.1 Resource Trajectories

We have two processes H (horizontal) and V (vertical). The origin represents them both starting.

Their combined state is a point on the graph.

The parts where the printer and plotter are needed by each process are indicated.

The dark green is where both processes have the plotter and hence execution cannot reach this point.

Light green represents both having the printer; also impossible.

Pink is both having both printer and plotter; impossible.

Gold is possible (H has plotter, V has printer), but you can't get there.

The upper right corner is the goal both processes finished.

The red dot is ... (cymbals) deadlock. We don't want to go there.

The cyan is safe. From anywhere in the cyan we have horizontal and vertical moves to the finish line without hitting any impossible area.

The magenta interior is very interesting. It is

o Possible: each process has a different resource

o Not deadlocked: each process can move within the magenta


o Deadly: deadlock is unavoidable. You will hit a magenta-green boundary and then have no choice but to turn and go to the red dot.

The cyan-magenta border is the danger zone.

The dashed line represents a possible execution pattern.

With a uniprocessor no diagonals are possible. We either move to the right meaning H is executing or move up indicating V.

The trajectory shown represents:

1. H executing a little.

2. V executing a little.

3. H executes; requests the printer; gets it; executes some more.

4. V executes; requests the plotter.

The crisis is at hand!

If the resource manager gives V the plotter, the magenta has been entered and all is lost. ``Abandon all hope ye who enter here'' --Dante.

The right thing to do is to deny the request, let H execute moving horizontally under the magenta and dark green. At the end of the dark green, no danger remains, both processes will complete successfully. Victory!

This procedure is not practical for a general purpose OS since it requires knowing the programs in advance. That is, the resource manager would have to know in advance what requests each process will make, and in what order.

6.5.2: Safe States

Avoiding deadlocks given some extra knowledge.

Not surprisingly, the resource manager knows how many units of each resource it had to begin with. Also it knows how many units of each resource it has given to each process.

It would be great to see all the programs in advance and thus know all future requests, but that is asking for too much.


Instead, each process when it starts gives its maximum usage. That is each process at startup states, for each resource, the maximum number of units it can possibly ask for. This is called the claim of the process.

o If during the run the process asks for more than its claim, abort it.

o If it claims more than it needs, the result is that the resource manager will be more conservative than need be and there will be more waiting.

Definition: A state is safe if one can find an ordering of the processes such that, if the processes are run in this order, they will all terminate (assuming none exceeds its claim).

Give an example of all four possibilities. A state that is:

1. Safe and deadlocked

2. Safe and not deadlocked

3. Not safe and deadlocked

4. Not safe and not deadlocked

A manager can determine if a state is safe.

Since the manager knows all the claims, it can determine the maximum amount of additional resources each process can request. The manager knows how many units of each resource it has left.

The manager then follows the procedure below, which is part of the Banker's Algorithm discovered by Dijkstra, to determine if the state is safe.

1. If there are no processes remaining, the state is safe.

2. Seek a process P whose max additional requests is less than what remains (for each resource).

o If no such process can be found, then the state is not safe.


o The banker (manager) knows that if it refuses all requests except those from P, then it will be able to satisfy all of P's requests. Why? Look at how P was chosen.

3. The banker now pretends that P has terminated (since the banker knows that it can guarantee this will happen). Hence the banker pretends that all of P's currently held resources are returned. This makes the banker richer and hence perhaps a process that was not eligible to be chosen as P previously, can now be chosen.

4. Repeat these steps.
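A minimal sketch of this safety check for a single resource, preloaded with the numbers from Example 1 below (claims 3, 11, 19; current holdings 1, 5, 10; 22 units in total):

#include <stdio.h>
#include <stdbool.h>

#define NPROC 3

int claim[NPROC]   = {3, 11, 19};
int current[NPROC] = {1, 5, 10};

bool is_safe(int total) {
    int avail = total;
    bool done[NPROC] = {false};
    for (int p = 0; p < NPROC; p++) avail -= current[p];   /* 6 units free */

    for (int finished = 0; finished < NPROC; finished++) {
        int p;
        /* Seek a process whose max additional request fits in what's free. */
        for (p = 0; p < NPROC; p++)
            if (!done[p] && claim[p] - current[p] <= avail) break;
        if (p == NPROC) return false;    /* no process can be guaranteed */
        avail += current[p];             /* pretend p terminates and pays back */
        done[p] = true;
    }
    return true;
}

int main(void) {
    printf("state is %s\n", is_safe(22) ? "safe" : "unsafe");  /* safe */
    return 0;
}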

Example 1

One resource type R with 22 units. Three processes X, Y, and Z with claims 3, 11, and 19 respectively. Currently the processes have 1, 5, and 10 units respectively:

process   claim   current
X           3        1
Y          11        5
Z          19       10
total               16

Hence the manager currently has 6 units left. Also note that the max additional needs of the processes are 2, 6, and 9. So the manager cannot assure (with its current remaining supply of 6 units) that Z can terminate. But that is not the question.

This state is safe:

1. Use 2 units to satisfy X; now the manager has 7 units.

2. Use 6 units to satisfy Y; now the manager has 12 units.

3. Use 9 units to satisfy Z; done!

Example 2

Assume that Z now requests 2 units and we grant them. Currently the processes have 1, 5, and 12 units respectively, and the manager has 4 units left:

process   claim   current
X           3        1
Y          11        5
Z          19       12
total               18

The max additional needs are 2, 6, and 7.

This state is unsafe:

1. Use 2 units to satisfy X; now the manager has 5 units.

2. Y needs 6 and Z needs 7, so we cannot guarantee satisfying either.

Note that we were able to find a process that can terminate (X), but then we were stuck. So it is not enough to find one process; we must find a sequence of all the processes.

Remark: An unsafe state is not necessarily a deadlocked state. Indeed, if one gets lucky all processes may terminate successfully. A safe state means that the manager can guarantee that no deadlock will occur.

6.5.3: The Banker's Algorithm (Dijkstra) for a Single Resource

The algorithm is simple: Stay in safe states.

Check before any process starts that the state is safe (this means that no process claims more than the manager has). If not, then this process is trying to claim more than the system has so cannot be run.



When the manager receives a request, it pretends to grant it and checks if the resulting state is safe. If it is safe the request is granted, if not the process is blocked.

When a resource is returned, the manager checks to see if any of the pending requests can be granted (i.e., if the result would now be safe). If so the request is granted and the manager checks to see if another can be granted.

6.5.4: The Banker's Algorithm for Multiple Resources

At a high level the algorithm is identical: Stay in safe states.

What is a safe state? The same definition (if processes are run in a certain order they will all terminate).

Checking for safety is the same idea as above. The difference is that, to tell if there are enough free resources for a process to terminate, the manager must check that, for each resource, the number of free units is at least the max additional need of the process.

Limitations of the banker's algorithm

Often users don't know the maximum requests a process will make. They can estimate conservatively (i.e., use big numbers for the claim) but then the manager becomes very conservative.

New processes arriving cause a problem (but not so bad as Tanenbaum suggests).

o The process's claim must be less than the total number of units of the resource in the system. If not, the process is not accepted by the manager.

o Since the state without the new process is safe, so is the state with the new process! Just use the order you had originally and put the new process at the end.

o Ensuring fairness (starvation freedom) needs a little more work, but isn't too hard either (once an hour, stop taking new processes until all current processes finish).

A resource becoming unavailable (e.g., a tape drive breaking), can result in an unsafe state.


6.7: Other Issues

6.7.1: Two-phase locking

This is covered (MUCH better) in a database text. We will skip it.

6.7.2: Non-resource deadlocks

You can get deadlock from semaphores as well as resources. This is trivial. Semaphores can be considered resources. P(S) is request S and V(S) is release S. The manager is the module implementing P and V. When the manager returns from P(S), it has granted the resource S.

6.7.3: Starvation

As usual FCFS is a good cure. Often this is done by priority aging and picking the highest priority process to get the resource. Also can periodically stop accepting new processes until all old ones get their resources.

The End: Good luck on the final


Compiled by:

Bhong F.Jr,