
Operating Systems

Midterm Preparation Summary

DEEDS Group

Important Note: These slides cover only a selected set of topics for midterm review. The exam covers all slides/topics from the class lectures, exercises, and labs.

Processes

Process: program in execution. A process includes:
- Program counter (PC)
- Registers
- Process stack (function parameters, return addresses, local variables)
- Heap (may be included)

Process: active entity, with a PC specifying the next instruction to execute and a set of associated resources.
Program: passive entity; a file containing a list of instructions.

Process States

- new: process is being created
- running: instructions are being executed
- waiting: waiting for an event (I/O completion or reception of a signal)
- ready: waiting to be assigned to a processor
- terminated: finished execution

Process Control Block (PCB)

- Process state
- Program counter
- CPU registers
- CPU scheduling information
- Memory management information
- Accounting information
- I/O status information

Process Scheduling

Objective of multiprogramming: some process running at all times

Objective of time sharing: switch the CPU among processes

To meet these objectives, the process scheduler selects an available process for program execution on the CPU.

Scheduling queues: ready queue, device queues
Selection of a process is done by a scheduler: long-term and short-term.

Context Switch

When an interrupt occurs, the system needs to save the current context of the process running on the CPU.
"Context": value of the CPU registers, process state, memory management information.
Context switch: state save of the current process, then state restore of a different process.

Process Creation

A process may create several new processes via a create-process system call.
- Creating process: parent process
- New processes: children
- Each process has a process identifier (pid)
When a process is created, initialization data may be passed along by the parent process to the child process.

Process Creation & Termination

Two possibilities exist in terms of creation:
- Parent executes concurrently with its children
- Parent waits until some or all of its children have terminated
Two possibilities exist in terms of the address space of the new process:
- Child process is a duplicate of the parent process (same program and data as parent)
- Child process has a new program loaded into it

Process termination: all the resources of the process (physical and virtual memory, open files, and I/O buffers) are deallocated by the operating system.
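A minimal POSIX sketch (an assumed illustration, not from the slides) of both ideas: fork() makes the child a duplicate of the parent, exec loads a new program into it, and wait() lets the parent wait for the child's termination:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               /* child starts as a duplicate of the parent */
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {                   /* child: load a new program into it */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");             /* reached only if exec fails */
        exit(1);
    }
    waitpid(pid, NULL, 0);            /* parent waits until the child terminates */
    return 0;
}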

Interprocess Communication

Processes can be cooperating or independent. Cooperating processes require an interprocess communication (IPC) mechanism.
Two fundamental models of interprocess communication:
- Shared memory
- Message passing

Interprocess Communication

(Diagram: message passing vs. shared memory)
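As a sketch of the message-passing model (an assumed example, not from the slides), a parent and child can exchange data through a POSIX pipe:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[32];
    if (pipe(fd) == -1) return 1;     /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                /* child: sender */
        close(fd[0]);
        write(fd[1], "hello", 6);     /* pass a message to the parent */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                     /* parent: receiver */
    read(fd[0], buf, sizeof buf);
    printf("received: %s\n", buf);
    wait(NULL);
    return 0;
}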

Threads

Thread: basic unit of CPU utilization. Each thread has its own:
- Thread ID
- PC
- Register set
- Stack
Threads of the same process share:
- Code section
- Data section
- Other OS resources
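A small pthreads sketch (assumed example, not from the slides) of two threads sharing the data section while each runs on its own stack:

#include <pthread.h>
#include <stdio.h>

int shared = 0;                       /* lives in the shared data section */

void *worker(void *arg) {
    int local = *(int *)arg;          /* lives on this thread's own stack */
    shared += local;                  /* unsynchronized update: see race conditions later */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);  /* usually 3, but the race makes no guarantee */
    return 0;
}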

Threads

Benefits of multithreaded programming:
- Responsiveness: allows a program to continue even if parts of it are blocked. Ex: tabs in Firefox/Opera, text/image web server streams, etc.
- Resource sharing (but also less protection!): threads share memory and process resources, allowing an application to perform several different activities within the same address space.
- Efficiency/performance: more economical to context-switch threads than processes. Solaris: thread creation is 30-100 times faster than process creation; a context switch is 5 times faster for threads than for processes.
- Utilization of multiprocessor architectures: worker threads dispatching to different processors.

Multithreading Models

Support for threads:
- User threads: supported above the kernel, managed without kernel support
- Kernel threads: supported and managed by the OS
Three common ways of establishing a relationship between user and kernel threads:
- Many-to-one model
- One-to-one model
- Many-to-many model

1. Many-to-one Model

Maps many user-level threads to one kernel thread.
+ Thread management is done in user space
- The entire process blocks if a thread makes a blocking system call
- Multiple threads are unable to run in parallel on multiprocessors (only one thread can access the kernel at a time)
Examples: Green threads in Solaris, GNU Portable Threads

2. One-to-one Model

Maps each user thread to a kernel thread.
+ Provides more concurrency than the many-to-one model
+ Allows multiple threads to run in parallel on multiprocessors
- Drawback: creating a user thread requires creating the corresponding kernel thread
Examples: Windows, Linux, Solaris 9 and newer

3. Many-to-many Model

Multiplexes many user-level threads to a smaller or equal number of kernel threads.
Less concurrency than one-to-one, but an easier scheduler and different kernel threads for different server types.
Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
When a user thread blocks, the kernel schedules another for execution.
Examples: Solaris prior to v9, Windows NT family with the ThreadFiber package

4. Two-level model

A popular variation on the many-to-many model, except that it also allows a user thread to be bound to a kernel thread.
Examples: HP-UX, 64-bit UNIX, Solaris 8 and earlier

Memory Management

Where in the memory should we place our programs?
- Limited amount of memory!
- More than one program!
- Programs have different sizes!
- Program size might grow (or shrink)!
Memory allocation changes as processes come into memory and leave memory: swapping.

Virtual Memory

Separates:
- Virtual (logical) addresses
- Physical addresses
Requires a translation at run time: virtual → physical, handled in HW (MMU).

Paging

The relation between virtual addresses and physical memory addresses is given by the page table.
One page table per process is needed.
The page table needs to be reloaded at each context switch.

Paging

Every memory lookup:
- Find the page in the page table
- Find the (physical) memory location
Now we have two memory accesses (per reference).
Solution: Translation Lookaside Buffer (TLB) (again, a cache...)

TLBs – Translation Lookaside Buffers

Memory lookup:
- Look for the page in the TLB (fast)
- If hit, fine, go ahead!
- If miss, find it and put it in the TLB:
  - Find the page in the page table (hit), or
  - Reload the page from disk (miss)
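A toy sketch of this lookup (assumed example, not from the slides), with a small direct-mapped TLB in front of a single-level page table; 4 KiB pages, so the low 12 bits are the offset:

#include <stdint.h>

#define PAGE_BITS 12                         /* 4 KiB pages (assumed) */
#define TLB_SIZE  16

static uint32_t page_table[1024];            /* VPN -> frame number */
static struct { uint32_t vpn, frame; int valid; } tlb[TLB_SIZE];

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn = vaddr >> PAGE_BITS;
    uint32_t off = vaddr & ((1u << PAGE_BITS) - 1);
    int i = vpn % TLB_SIZE;                  /* direct-mapped TLB slot */
    if (!tlb[i].valid || tlb[i].vpn != vpn) {/* TLB miss:                */
        tlb[i].vpn   = vpn;                  /* fill the entry from the  */
        tlb[i].frame = page_table[vpn];      /* page table (extra access)*/
        tlb[i].valid = 1;
    }
    return (tlb[i].frame << PAGE_BITS) | off;/* TLB hit avoids that access */
}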

What if the physical memory is full?

Page Replacement Algorithms

Optimal Page Replacement Algorithm
- Replace the page needed at the farthest point in the future
- Optimal but unrealizable

Not Recently Used (NRU)
- Each page has a Reference bit and a Modified bit
- Bits are set by HW when the page is referenced/modified
- The Reference bit is periodically unset (at clock ticks)
- Pages are classified:
  - Class 0: not referenced, not modified
  - Class 1: not referenced, modified
  - Class 2: referenced, not modified
  - Class 3: referenced, modified
- NRU removes a page at random from the lowest-numbered non-empty class
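A minimal sketch of NRU victim selection (assumed example, not from the slides); a page's class is simply 2*R + M:

#include <stdlib.h>

struct page { int r, m; };               /* Reference and Modified bits */

/* Return the index of a random page from the lowest non-empty class. */
int nru_victim(struct page *pages, int n) {
    for (int c = 0; c < 4; c++) {        /* classes 0..3, lowest first */
        int count = 0;
        for (int i = 0; i < n; i++)
            if (2 * pages[i].r + pages[i].m == c) count++;
        if (count == 0) continue;
        int k = rand() % count;          /* pick at random within the class */
        for (int i = 0; i < n; i++)
            if (2 * pages[i].r + pages[i].m == c && k-- == 0) return i;
    }
    return -1;                           /* no pages at all */
}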

Page Replacement Algorithms

FIFO Page Replacement Algorithm
- Maintain a linked list of all pages, in the order they came into memory
- The page at the beginning of the list is replaced
- Disadvantage: the page that has been in memory the longest may still be used often

Second Chance Page Replacement Algorithm
- Pages sorted in FIFO order
- Inspect the R bit; give the page a second chance if R = 1

Page Replacement Algorithms

The Clock Page Replacement Algorithm: keep the pages in a circular list with a "hand"; the hand skips pages with R = 1 (clearing R as it passes) and evicts the first page it finds with R = 0.

Page Replacement Algorithms

Least Recently Used (LRU)
- Locality: pages used recently will be used again soon
- Throw out the page that has been unused the longest
- Keep a linked list of pages, or a counter

Not Frequently Used (NFU) - simulating LRU in software
- A counter is associated with each page
- At each clock interrupt, add R to the counter

Small modification to NFU: Aging
1) The counters are each shifted right 1 bit before the R bit is added in
2) The R bit is added to the leftmost rather than the rightmost bit
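One aging step might look like this (assumed sketch, not from the slides), with an 8-bit counter per page:

#include <stdint.h>

void aging_tick(uint8_t counter[], int r_bit[], int n) {
    for (int i = 0; i < n; i++) {
        counter[i] >>= 1;                   /* 1) shift right 1 bit first      */
        if (r_bit[i]) counter[i] |= 0x80;   /* 2) R goes into the leftmost bit */
        r_bit[i] = 0;                       /* R is cleared for the next tick  */
    }
    /* On a page fault, evict the page with the smallest counter. */
}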

Segmentation

In a one-dimensional address space with growing tables, one table may bump into another. Segmentation provides many independent address spaces (segments) that can grow or shrink independently.


I/O

Goals for I/O handling:
- Enable use of peripheral devices
- Present a uniform interface for users (files etc.) and for devices (respective drivers)
- Hide the details of devices from users (and the OS)

I/O

Most device controllers provide:
- Buffers (in/out)
- Control registers
- Status registers
These are accessed from the OS/apps via:
- I/O ports
- Memory-mapped I/O
- A hybrid of both
- Direct Memory Access (DMA)

I/O Handling

Three kinds of I/O handling:
- Programmed I/O
- Interrupt-driven I/O
- DMA-based I/O

Programmed I/O
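A hypothetical sketch (not from the slides) of programmed I/O: the CPU busy-polls a made-up memory-mapped status register and feeds the device one character at a time; the register addresses are invented for illustration:

#define STATUS_READY 0x01

volatile unsigned char *status = (unsigned char *)0xFFFF0000; /* assumed MMIO */
volatile unsigned char *data   = (unsigned char *)0xFFFF0004; /* addresses    */

void print_string(const char *p) {
    while (*p) {
        while (!(*status & STATUS_READY))
            ;                  /* busy-wait: the CPU is tied up polling */
        *data = *p++;          /* hand the next character to the device */
    }
}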

Interrupt-driven I/O

(Figures: code for the system call; code for the interrupt handler)

I/O Using DMA

Printing a string using DMA:
a) code executed when the print system call is made
b) interrupt service procedure

Deadlock

A set of processes, each holding a resource and waiting to acquire a resource held by another.
None of the processes can run, release resources, or be awakened.

(Diagram: P1 and P2 each hold one of the resources A and B and need the other)

Deadlock Modeling

- Process A holding resource R: in-arrow to the process
- Process B waiting for (requesting) resource S: out-arrow from the process
- Processes C and D are in deadlock over resources T and U: C has U and wants T; D has T and wants U
(Diagram: resource-allocation graph with "has"/"wants" arrows)

Deadlock Detection

1. Detection with One Resource of Each Type

(Diagram: resource graph with "holds" and "wants" edges)
• Develop a resource ownership and requests graph
• If a cycle can be found within the graph → deadlock
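A minimal sketch of that cycle check (assumed example, not from the slides): depth-first search over the combined ownership/request graph, flagging any back edge:

#define N 8                          /* number of graph nodes (assumed) */
int edge[N][N];                      /* edge[u][v] = 1: arrow from u to v */
static int color[N];                 /* 0 = unvisited, 1 = on stack, 2 = done */

static int dfs(int u) {
    color[u] = 1;
    for (int v = 0; v < N; v++)
        if (edge[u][v]) {
            if (color[v] == 1) return 1;           /* back edge -> cycle -> deadlock */
            if (color[v] == 0 && dfs(v)) return 1;
        }
    color[u] = 2;
    return 0;
}

int has_deadlock(void) {
    for (int u = 0; u < N; u++)
        if (color[u] == 0 && dfs(u)) return 1;
    return 0;
}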

Deadlock Detection

2. Detection with Multiple Resources of Each Type

Deadlock Avoidance

Safe and Unsafe States

Safe: there is a scheduling order that satisfies all processes even if they request their maximum resources at the same time. (Keep in mind that only one process can execute at a given time!)
(Figure: demonstration states (a)-(e), with 10 available resources in total)

Safe and Unsafe States

Note: This is not a deadlock - just that the "potential" for a deadlock exists IF A or C asks for the max. If they ask for less than the max, the system works just fine!
(Figure: states (a)-(d) - a "potential" deadlock state, as both A and C can ask for 5 resources while only 4 are currently free!)

Banker's (State) Algorithm for a Single Resource

Banker’s Algorithm for Multiple Resources
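A hedged sketch of the safety check at the heart of Banker's algorithm for multiple resources (assumed example; the state arrays are hypothetical):

#define P 3   /* processes (assumed) */
#define R 2   /* resource types (assumed) */

int need[P][R], alloc[P][R], avail[R];

int is_safe(void) {
    int work[R], done[P] = {0};
    for (int j = 0; j < R; j++) work[j] = avail[j];
    for (int pass = 0; pass < P; pass++) {
        int progress = 0;
        for (int i = 0; i < P; i++) {
            if (done[i]) continue;
            int ok = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                        /* process i can run to completion */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];  /* ...and then releases its resources */
                done[i] = 1;
                progress = 1;
            }
        }
        if (!progress) break;                /* nobody can finish: stop early */
    }
    for (int i = 0; i < P; i++)
        if (!done[i]) return 0;              /* unsafe: the request must be refused */
    return 1;                                /* safe: a completion order exists */
}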

Scheduling

Task/Process/Thread types:
- Non-preemptive (NP): an ongoing task cannot be displaced
- Preemptive: ongoing tasks can be switched in/out as needed
Scheduling algorithms:
- First Come, First Served (FCFS)
- Shortest Job First (SJF)
- Round Robin (RR)
- Priority Based (PB)

First-Come, First-Served (FCFS) Scheduling (Non-Preemptive)

Process  Length (CPU burst time)
P1       24
P2       3
P3       3

Processes arrive (and get executed) in their arrival order: P1, P2, P3
Waiting times: P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
Gantt chart: | P1 | P2 | P3 | with boundaries at 0, 24, 27, 30

Shortest-Job-First (SJF) Scheduling (Non-Preemptive)

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (non-preemptive) order: P1, then P3, then P2, then P4
Average waiting time = (0 + [8-2] + [7-4] + [12-5])/4 = 4.00
Gantt chart: | P1 | P3 | P2 | P4 | with boundaries at 0, 7, 8, 12, 16

Shortest-Job-First (SJF) Scheduling (Preemptive)

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (preemptive):
Waiting times: P1 = 11-2 = 9; P2 = 5-4 = 1; P3 = 0; P4 = 7-5 = 2
Average waiting time = (9 + 1 + 0 + 2)/4 = 3 (cf. 6.75 and 4.00)
Gantt chart: | P1 | P2 | P3 | P2 | P4 | P1 | with boundaries at 0, 2, 4, 5, 7, 11, 16
(At t=2, P1 is preempted with 5 left; at t=5, P2 has 2 left and the ready set is P2:2, P4:4, P1:5)

Round Robin

Each process gets a fixed slice of CPU time (time quantum q), typically 10-100 ms.
After this time has elapsed, the process is "forcefully" preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance?
- q too large → FIFO (poor response time & poor CPU utilization)
- q too small → context-switching overhead reduces efficiency; q must be large with respect to the context-switch overhead
(Diagram: ready queue cycling through processes a, b, c)

Dynamic RR with Quantum = 20

Process  Burst Time
P1       53
P2       17
P3       68
P4       24

Typically higher average turnaround than SJF, but better response.
- Turnaround time: amount of time to execute a particular process (minimize)
- Response time: amount of time from request submission until the first response (minimize)

Gantt chart: P1 P2 P3 P4 P1 P3 P4 P1 P3 P3 with boundaries at 0, 20, 37, 57, 77, 97, 117, 121, 134, 154, 162
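A small sketch (assumed, not from the slides) that replays this schedule; because all four processes are ready at t=0, a cyclic scan matches queue-based RR and reproduces the chart above:

#include <stdio.h>

int main(void) {
    int burst[4] = {53, 17, 68, 24};      /* remaining time per process */
    int q = 20, t = 0, left = 4;
    while (left > 0) {
        for (int i = 0; i < 4; i++) {
            if (burst[i] == 0) continue;  /* already finished */
            int run = burst[i] < q ? burst[i] : q;
            printf("t=%3d: P%d runs for %d\n", t, i + 1, run);
            t += run;
            burst[i] -= run;
            if (burst[i] == 0) left--;
        }
    }
    return 0;                             /* ends at t = 162 */
}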

Priority Scheduling

A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority.
- Preemptive (priority and order changes possible at runtime)
- Non-preemptive (initial execution order chosen by static priorities)
Problem - Starvation: low-priority processes may never execute.
Solution - Aging: as time progresses, increase the priority of the process.

Race Condition

With shared memory, shared files, and shared address spaces, how do we prohibit more than one process from accessing shared data at the same time, and how do we provide ordered access?
Race condition: concurrent access to shared data (the critical section, CS) by more than one process, where the outcome can depend solely on the order of accesses as seen by the resource instead of the proper request order.

Mutual Exclusion

Basis of a mutual exclusion (ME) solution: exclusive access to the critical section (CS).

ME: Lock Variables

A flag (lock) as a global variable guards access to the shared section:
- lock = 0: resource_free
- lock = 1: resource_in_use
Check the lock; if free (0), set the lock to 1 and then access the CS. But:
- A reads lock (0) and initiates setting it to 1
- B comes in before A's set-to-1 has finished; B sees lock = 0 and sets lock = 1
- Both A and B now have access rights → race condition
This happens because "locking" (setting the global variable) is not an atomic action.
"Atomic": all sub-actions finish for the action to finish, or nothing happens (all or nothing).

(Flow: if unlocked, acquire the lock and enter the CS; when done, release the lock and do non-CS work)
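A hedged sketch (not from the slides) of how the flawed lock becomes correct once acquisition is atomic, using a C11 atomic exchange:

#include <stdatomic.h>

atomic_int lock = 0;                    /* 0 = resource_free, 1 = resource_in_use */

void acquire(void) {
    /* Atomically swap in 1 and inspect the old value: the test and the set
       now happen as one indivisible action, closing the race window. */
    while (atomic_exchange(&lock, 1) == 1)
        ;                               /* busy-wait while someone holds it */
}

void release(void) {
    atomic_store(&lock, 0);             /* mark the resource free again */
}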

ME with “Busy Waiting”

A sees turn = 0 and enters the CS. B sees turn = 0 and busy-waits (CPU waste). A exits the CS and sets turn = 1. B sees turn = 1 and enters the CS.
B finishes its CS and sets turn = 0. A enters the CS, finishes it quickly, and sets turn = 1. Now B is in its non-CS and A is in its non-CS. A finishes its non-CS and wants the CS again, BUT turn = 1, so A waits. (This violates condition 3 of ME: a process seeking the CS should not be blocked by a process not using the CS! But there is no race condition, given the strict alternation.)

Peterson's ME

int turn;          /* whose "turn" it is to enter the CS */
boolean flag[2];   /* TRUE indicates readiness to enter the CS */

do {
    flag[i] = TRUE;                 /* announce interest */
    turn = j;                       /* set access for the next CS access */
    while (flag[j] && turn == j)
        ;                           /* enter the CS only if flag[j] == FALSE or turn == i */
    /* critical section */
    flag[i] = FALSE;
    /* non-critical section */
} while (TRUE);

Sleep and Wakeup

Shared fixed-size buffer:
- Producer puts info IN
- Consumer takes info OUT

Semaphores

Semaphores provide both synchronization and mutual exclusion.
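A bounded-buffer producer/consumer sketch with POSIX semaphores (assumed example, not from the slides): empty and full handle synchronization (sleep/wakeup), mutex handles mutual exclusion:

#include <semaphore.h>

#define N 8                      /* buffer size (assumed) */
int buffer[N], in = 0, out = 0;
sem_t empty, full, mutex;        /* init: empty = N, full = 0, mutex = 1 */

void produce(int item) {
    sem_wait(&empty);            /* sleep if there is no free slot */
    sem_wait(&mutex);            /* exclusive access to the buffer */
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full);             /* wake up a waiting consumer */
}

int consume(void) {
    sem_wait(&full);             /* sleep if there is nothing to take */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);            /* wake up a waiting producer */
    return item;
}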
