NWI-IBC019: Operating Systems
Bernard van Gastel and Nils Jansen
Process scheduling and Synchronization (Part 1)
Recap: Threads
Why use threads and not just processes?
• Responsiveness – may allow continued execution if part of process is blocked, especially important for user interfaces
• Resource Sharing – threads share resources of process, easier than shared memory or message passing
• Economy – cheaper than process creation because allocating memory and resources is costly, thread switching has a lower overhead than context switching
• Scalability – process can take advantage of multiprocessor architectures
Recap: Threads
• Identifying tasks - Examine applications where areas can be divided into separate concurrent tasks
• Balance - Tasks should perform roughly equal work; they may not be of equal value
• Data splitting - Data must be divided to run on separate cores
• Data dependency - Tasks may depend on data from other tasks
• Testing and debugging - Many possible execution paths make testing and debugging more difficult
Challenges of Threading?
Objectives — Scheduling
• CPU scheduling
• CPU-scheduling algorithms
• Evaluation
Basic Concepts
• Maximum CPU utilization obtained with multiprogramming
• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait
• CPU burst followed by I/O burst
Histogram of CPU-burst Times
A large number of short CPU bursts and a small number of long CPU bursts
CPU Scheduler
• Short-term scheduler selects a process from the ready queue and allocates the CPU to it
• Queue may be a FIFO queue, priority queue, tree, …
• CPU scheduling decisions take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive
• process keeps the CPU until it terminates or waits
• All other scheduling is preemptive
• Consider access to shared data
• Consider preemption while in kernel mode
• Consider interrupts occurring during crucial OS activities
Dispatcher
• Gives control of the CPU to the process selected by the short-term scheduler:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that program
• Dispatch latency – time it takes for the dispatcher to stop one process and start another running
• Should be as fast as possible!
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting to get into memory + waiting in the ready queue + CPU execution + I/O
• Waiting time – amount of time a process waits in the ready queue
• Response time – from submission of request until first response
Scheduling Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
• Optimize the average measure
What are reasonable measures?
First-Come, First-Served Scheduling (FCFS)
Process   Burst Time
P1        24
P2        3
P3        3
• Waiting times: P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
Gantt Chart, Order P1, P2, P3:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
FCFS Scheduling - Problems
Gantt Chart, Order P2, P3, P1:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |
• Waiting times: P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
• Convoy effect – short processes stuck behind a long process
• Typical scenario: one CPU-bound process (long CPU bursts) and many I/O-bound processes (short CPU bursts)
preemptive or nonpreemptive?
Shortest-Job-First Scheduling (SJF)
• Associate with each process the length of its next CPU burst and schedule the process with the shortest time
• SJF is optimal – gives the minimum average waiting time for a given set of processes
Process   Burst Time
P1        6
P2        8
P3        7
P4        3

Gantt Chart:
| P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |
• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
How to predict length of CPU burst?
Length of Next CPU Burst
• Estimate the length – it should be similar to the previous one
• Then pick the process with the shortest predicted next CPU burst
• Can be done by using the length of previous CPU bursts, using exponential averaging
• Commonly, α set to ½
Define:
1. t_n = actual length of the n-th CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. τ_{n+1} = α·t_n + (1 − α)·τ_n
• Preemptive version called shortest-remaining-time-first
preemptive version?
τ_{n+1} = α·t_n + (1 − α)·τ_n
Properties of Exponential Averaging
• α = 0
• τ_{n+1} = τ_n
• Recent history does not count
• α = 1
• τ_{n+1} = α·t_n = t_n
• Only the actual last CPU burst counts
• If we expand the formula, we get:
• τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0
• Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor
Priority Scheduling
• Associate priority number with each process
• CPU allocated to process with highest priority (smallest number = highest priority)
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Gantt Chart:
| P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |
• Average waiting time = 8.2 msec
Priority Scheduling - Quiz
• SJF is priority scheduling where priority is the inverse of predicted next CPU burst time
• Problem: Starvation – low-priority processes may never execute
• Solution: Aging – as time progresses, increase the priority of the process
relation to SJF scheduling?
problems?
preemptive vs. nonpreemptive?
• When a process with higher priority arrives at the ready queue:
• preemptive: preempt the CPU
• nonpreemptive: put the new process at the head of the ready queue
Round-Robin Scheduling (RR)
• Each process gets a time quantum q (unit of CPU time), usually 10–100 milliseconds
• Ready queue is treated as a FIFO queue
• After q has elapsed, the process is preempted and added to the end of the ready queue
• Timer interrupts every quantum to schedule the next process
Properties of RR?
• Performance depends on the quantum:
• q large ⇒ behaves like FIFO
• q small ⇒ overhead is too high; q should be large compared to context-switch time
Round-Robin Scheduling with Time Quantum = 4
Process   Burst Time
P1        24
P2        3
P3        3
• Typically, higher average turnaround than SJF, but better response
| P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30) |
Multilevel Queue Scheduling
• Partition processes into groups with different response-time requirements:
• foreground (interactive)
• background (batch)
• Multilevel queue partitions the ready queue into separate queues
• Each queue has its own scheduling algorithm:
• foreground – RR (better response time)
• background – FCFS (fewer context switches)
• Scheduling between the queues:
• Fixed-priority scheduling (first foreground, then background)
• Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes
preemptive, starvation
Multilevel Queue Scheduling
Scheduling Algorithm Evaluation
• First, define evaluation criteria:
• Maximize CPU utilization with a threshold on maximal response time
• Maximize throughput (# of processes that complete execution per time unit)
• Deterministic modeling
• a form of analytic evaluation
• takes a predetermined workload and determines the performance of each algorithm for that workload
Problems?
• requires exact input numbers; the evaluation applies only to those cases
Scheduling Algorithm Evaluation
• Simulation
• program a model of the computer system
• random-number generator to generate processes
• may be expensive; accuracy at the cost of computing time
• Implementation
• program the scheduling algorithm and test it in the operating system
• accurate but very costly
• changing environment
Objectives — Synchronization
• The Critical-Section Problem
• Peterson’s Solution
Synchronization
• Processes may execute concurrently
• May be interrupted at any time, partially completing execution
• Concurrent access to shared data may result in data inconsistency
• Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
Producer-consumer Problem
• Producer process produces information that is consumed by a consumer
• Use shared memory for the information
• (Bounded) buffer of items that can be filled by the producer and emptied by the consumer
• Access must be synchronized
• Introduce an integer counter that keeps track of the number of items in the buffer
• Initially, counter is set to 0. It is incremented by the producer after it produces a new item and decremented by the consumer after it consumes an item
Producer
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Consumer
while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Race Condition
• counter++ could be implemented as
    register1 = counter
    register1 = register1 + 1
    counter = register1
• counter-- could be implemented as
    register2 = counter
    register2 = register2 - 1
    counter = register2
• Consider this execution interleaving with counter = 5 initially:
    S0: producer executes register1 = counter        {register1 = 5}
    S1: producer executes register1 = register1 + 1  {register1 = 6}
    S2: consumer executes register2 = counter        {register2 = 5}
    S3: consumer executes register2 = register2 - 1  {register2 = 4}
    S4: producer executes counter = register1        {counter = 6}
    S5: consumer executes counter = register2        {counter = 4}
Critical Section Problem
• Consider system of n processes {p0, p1, … pn-1}
• Each process has a critical section segment of code
• Common variables, updating a table, writing a file, …
• When one process is in its critical section, no other may be in its critical section
• Critical section problem is to design protocol to solve this
• Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, then the remainder section
Requirements for Solution of Critical Section Problem
• Mutual Exclusion - If process Pi is executing in its critical section, no other processes can be executing in their critical sections
• Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely
• Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
What are reasonable requirements?
Peterson’s Solution
• Two-process solution sharing two variables:
• int turn;
• boolean flag[2];
• turn indicates whose turn it is to enter the critical section
• The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true implies that process Pi is ready!
Peterson’s Solution for Process i
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ; /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
• Mutual exclusion is preserved• Progress requirement is satisfied• Bounded-waiting requirement is met
Are the requirements satisfied?
• flag[i] = true: process i wants to enter the critical section
• turn = j: sets turn to the other process
• while loop: busy waiting; i enters only if j doesn't want to
• flag[i] = false: process i doesn't want to enter anymore
Summary
Scheduling
• Criteria
• Algorithms
• Evaluation
Synchronization
• Critical Section Problem
• Peterson’s Solution
Scheduling — to read
• Thread Scheduling
Synchronization — to read
• Synchronization Hardware
Next week: Synchronization continued