
Lecture 3

CPU Scheduling

Lecture Highlights
- Introduction to CPU scheduling: what is CPU scheduling
- Related concepts: starvation, context switching, and preemption
- Scheduling algorithms
- Parameters involved
- Parameter-performance relationships
- Some sample results

Introduction: What is CPU Scheduling?

An operating system must select processes for execution in some fashion.

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.

The selection process is carried out by an appropriate scheduling algorithm.

Related Concepts: Starvation

This is the situation where a process waits endlessly for CPU attention.

As a result of starvation, the starved process may never complete its designated task.

Related Concepts: Context Switch

Switching the CPU to another process requires saving the state of the old process and loading the state of the new process. This task is known as a context switch.

Frequent context switching affects performance adversely because the CPU spends more time switching between tasks than it spends on the tasks themselves.

Related Concepts: Preemption

Preemption occurs when the CPU suspends the process currently being served in favor of another process.

The new process may be favored for a variety of reasons, such as a higher priority or a smaller execution time compared to the currently executing process.

Scheduling Algorithms

Scheduling algorithms deal with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.

Some commonly used scheduling algorithms:
- First In First Out (FIFO)
- Shortest Job First (SJF) without preemption
- Preemptive SJF
- Priority-based without preemption
- Preemptive priority-based
- Round robin
- Multilevel feedback queue

Scheduling Algorithms: A Process Mix

When studying scheduling algorithms, we take a set of processes and their parameters and calculate certain performance measures.

This set of processes, also termed a process mix, is described by some of the PCB parameters.

Scheduling Algorithms: A Sample Process Mix

Following is a sample process mix which we'll use to study the various scheduling algorithms.

Process ID   Time of Arrival   Priority   Execution Time
1            0                 20         10
2            2                 10         1
3            4                 58         2
4            8                 40         4
5            12                30         3

Scheduling Algorithms: First In First Out (FIFO)

This is the simplest algorithm: processes are served in the order in which they arrive.

The timeline on the following slide should give you a better idea of how the FIFO algorithm works.

Scheduling Algorithms: First In First Out (FIFO)

[Timeline: P1 runs 0-10, P2 runs 10-11, P3 runs 11-13, P4 runs 13-17, P5 runs 17-20. Each process starts when the previous one ends, in arrival order.]

FIFO Scheduling Algorithm: Calculating Performance Measures

Turnaround Time = End Time – Time of Arrival

Turnaround Time for P1 = 10 – 0 = 10
Turnaround Time for P2 = 11 – 2 = 9
Turnaround Time for P3 = 13 – 4 = 9
Turnaround Time for P4 = 17 – 8 = 9
Turnaround Time for P5 = 20 – 12 = 8
Total Turnaround Time = 10 + 9 + 9 + 9 + 8 = 45
Average Turnaround Time = 45/5 = 9

FIFO Scheduling Algorithm: Calculating Performance Measures

Waiting Time = Turnaround Time – Execution Time

Waiting Time for P1 = 10 – 10 = 0
Waiting Time for P2 = 9 – 1 = 8
Waiting Time for P3 = 9 – 2 = 7
Waiting Time for P4 = 9 – 4 = 5
Waiting Time for P5 = 8 – 3 = 5
Total Waiting Time = 0 + 8 + 7 + 5 + 5 = 25
Average Waiting Time = 25/5 = 5

FIFO Scheduling Algorithm: Calculating Performance Measures

Throughput

Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
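To make the arithmetic concrete, here is a minimal sketch (not part of the original lecture) that replays the FIFO schedule over the sample process mix and recomputes the three measures above. The tuple layout and function name are illustrative choices.

```python
# Minimal FIFO scheduling sketch over the sample process mix.
# Each process is (pid, arrival, priority, burst); names are illustrative.
procs = [(1, 0, 20, 10), (2, 2, 10, 1), (3, 4, 58, 2), (4, 8, 40, 4), (5, 12, 30, 3)]

def fifo(processes):
    time, stats = 0, {}
    for pid, arrival, _, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival) + burst              # run to completion in arrival order
        turnaround = time - arrival                    # turnaround = end - arrival
        stats[pid] = (turnaround, turnaround - burst)  # waiting = turnaround - burst
    return stats, time

stats, end = fifo(procs)
n = len(stats)
print("avg turnaround:", sum(t for t, _ in stats.values()) / n)  # 9.0
print("avg waiting   :", sum(w for _, w in stats.values()) / n)  # 5.0
print("throughput    :", n / end)                                # 0.25
```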

FIFO Scheduling Algorithm: Pros and Cons

Pros
- FIFO is an easy algorithm to understand and implement.

Cons
- The average waiting time under the FIFO scheme is not minimal and may vary substantially if the process execution times vary greatly.
- CPU and device utilization is low.
- It is troublesome for time-sharing systems.

Scheduling Algorithms: Shortest Job First (SJF) without Preemption

The CPU chooses the shortest job available in its queue and processes it to completion. If a shorter job arrives during processing, the CPU does not preempt the current process in favor of the new process.

The timeline on the following slide should give you a better idea of how the SJF algorithm without preemption works.

Scheduling Algorithms: Shortest Job First (SJF) without Preemption

[Timeline: P1 runs 0-10, P2 runs 10-11, P3 runs 11-13, P5 runs 13-16, P4 runs 16-20. At each completion, the shortest job among those that have arrived is chosen.]

SJF without Preemption: Calculating Performance Measures

Turnaround Time = End Time – Time of Arrival

Turnaround Time for P1 = 10 – 0 = 10
Turnaround Time for P2 = 11 – 2 = 9
Turnaround Time for P3 = 13 – 4 = 9
Turnaround Time for P4 = 20 – 8 = 12
Turnaround Time for P5 = 16 – 12 = 4
Total Turnaround Time = 10 + 9 + 9 + 12 + 4 = 44
Average Turnaround Time = 44/5 = 8.8

SJF without Preemption: Calculating Performance Measures

Waiting Time = Turnaround Time – Execution Time

Waiting Time for P1 = 10 – 10 = 0
Waiting Time for P2 = 9 – 1 = 8
Waiting Time for P3 = 9 – 2 = 7
Waiting Time for P4 = 12 – 4 = 8
Waiting Time for P5 = 4 – 3 = 1
Total Waiting Time = 0 + 8 + 7 + 8 + 1 = 24
Average Waiting Time = 24/5 = 4.8

SJF without Preemption: Calculating Performance Measures

Throughput

Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
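A sketch of the same calculation for non-preemptive SJF: at every completion, the shortest burst among the processes that have already arrived is dispatched. It assumes the same illustrative tuple layout as before.

```python
# Non-preemptive SJF sketch: pick the shortest burst among arrived processes.
procs = [(1, 0, 20, 10), (2, 2, 10, 1), (3, 4, 58, 2), (4, 8, 40, 4), (5, 12, 30, 3)]

def sjf_nonpreemptive(processes):
    pending = sorted(processes, key=lambda p: p[1])    # ordered by arrival time
    time, stats = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time] or [pending[0]]
        job = min(ready, key=lambda p: p[3])           # shortest execution time wins
        pending.remove(job)
        pid, arrival, _, burst = job
        time = max(time, arrival) + burst
        stats[pid] = (time - arrival, time - arrival - burst)
    return stats, time

stats, end = sjf_nonpreemptive(procs)
print(sum(t for t, _ in stats.values()) / 5)   # avg turnaround: 8.8
print(sum(w for _, w in stats.values()) / 5)   # avg waiting:    4.8
print(5 / end)                                 # throughput:     0.25
```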

SJF without Preemption: Pros and Cons

Pros
- SJF scheduling is provably optimal, in that it gives the minimum average waiting time for a given set of processes.

Cons
- Since there is no way of knowing execution times in advance, they need to be estimated. This makes the implementation more complicated.
- The scheme suffers from possible starvation.

Scheduling Algorithms: Shortest Job First (SJF) with Preemption

The CPU chooses the shortest job available in its queue and begins processing it. If a shorter job arrives during processing, the CPU preempts the current process in favor of the new process. Execution times are compared by time remaining, not by total execution time.

The timeline on the following slide should give you a better idea of how the SJF algorithm with preemption works.

Scheduling Algorithms: Shortest Job First (SJF) with Preemption

[Timeline: P1 runs 0-2, P2 runs 2-3, P1 runs 3-4, P3 runs 4-6, P1 runs 6-8, P4 runs 8-12, P5 runs 12-15, P1 runs 15-20. Each arrival with a shorter remaining time preempts the running process.]

SJF with Preemption: Calculating Performance Measures

Turnaround Time = End Time – Time of Arrival

Turnaround Time for P1 = 20 – 0 = 20
Turnaround Time for P2 = 3 – 2 = 1
Turnaround Time for P3 = 6 – 4 = 2
Turnaround Time for P4 = 12 – 8 = 4
Turnaround Time for P5 = 15 – 12 = 3
Total Turnaround Time = 20 + 1 + 2 + 4 + 3 = 30
Average Turnaround Time = 30/5 = 6

SJF with Preemption: Calculating Performance Measures

Waiting Time = Turnaround Time – Execution Time

Waiting Time for P1 = 20 – 10 = 10
Waiting Time for P2 = 1 – 1 = 0
Waiting Time for P3 = 2 – 2 = 0
Waiting Time for P4 = 4 – 4 = 0
Waiting Time for P5 = 3 – 3 = 0
Total Waiting Time = 10 + 0 + 0 + 0 + 0 = 10
Average Waiting Time = 10/5 = 2

SJF with Preemption: Calculating Performance Measures

Throughput

Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
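A sketch of preemptive SJF (shortest-remaining-time-first), stepped one time unit at a time so that a newly arrived shorter job can take over immediately. The data layout is the same illustrative one used earlier.

```python
# Preemptive SJF (shortest-remaining-time-first) sketch, one time unit per step.
procs = [(1, 0, 20, 10), (2, 2, 10, 1), (3, 4, 58, 2), (4, 8, 40, 4), (5, 12, 30, 3)]

def sjf_preemptive(processes):
    arrival = {pid: a for pid, a, _, _ in processes}
    burst = {pid: b for pid, _, _, b in processes}
    remaining = dict(burst)
    time, stats = 0, {}
    while remaining:
        ready = [pid for pid in remaining if arrival[pid] <= time]
        if not ready:                                   # CPU idles until the next arrival
            time += 1
            continue
        pid = min(ready, key=lambda p: remaining[p])    # shortest remaining time wins
        remaining[pid] -= 1
        time += 1
        if remaining[pid] == 0:
            del remaining[pid]
            t = time - arrival[pid]
            stats[pid] = (t, t - burst[pid])            # (turnaround, waiting)
    return stats

stats = sjf_preemptive(procs)
print(sum(t for t, _ in stats.values()) / 5)   # avg turnaround: 6.0
print(sum(w for _, w in stats.values()) / 5)   # avg waiting:    2.0
```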

SJF with Preemption: Pros and Cons

Pros
- Preemptive SJF scheduling further reduces the average waiting time for a given set of processes.

Cons
- Since there is no way of knowing execution times in advance, they need to be estimated. This makes the implementation more complicated.
- Like its non-preemptive counterpart, the scheme suffers from possible starvation.
- Too much preemption may result in higher context switching times, adversely affecting system performance.

Scheduling Algorithms: Priority-Based without Preemption

The CPU chooses the highest-priority job available in its queue and processes it to completion. If a higher-priority job arrives during processing, the CPU does not preempt the current process in favor of the new process.

The timeline on the following slide should give you a better idea of how the priority based algorithm without preemption works.

Scheduling Algorithms: Priority-Based without Preemption

[Timeline: P1 runs 0-10, P2 runs 10-11, P4 runs 11-15, P5 runs 15-18, P3 runs 18-20. At each completion, the highest-priority job among those that have arrived is chosen; here a lower priority number is treated as a higher priority, consistent with the calculations that follow.]

Priority-Based without Preemption: Calculating Performance Measures

Turnaround Time = End Time – Time of Arrival

Turnaround Time for P1 = 10 – 0 = 10
Turnaround Time for P2 = 11 – 2 = 9
Turnaround Time for P3 = 20 – 4 = 16
Turnaround Time for P4 = 15 – 8 = 7
Turnaround Time for P5 = 18 – 12 = 6
Total Turnaround Time = 10 + 9 + 16 + 7 + 6 = 48
Average Turnaround Time = 48/5 = 9.6

Priority-Based without Preemption: Calculating Performance Measures

Waiting Time = Turnaround Time – Execution Time

Waiting Time for P1 = 10 – 10 = 0
Waiting Time for P2 = 9 – 1 = 8
Waiting Time for P3 = 16 – 2 = 14
Waiting Time for P4 = 7 – 4 = 3
Waiting Time for P5 = 6 – 3 = 3
Total Waiting Time = 0 + 8 + 14 + 3 + 3 = 28
Average Waiting Time = 28/5 = 5.6

Priority-Based without Preemption: Calculating Performance Measures

Throughput

Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
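Non-preemptive priority scheduling is the same loop as non-preemptive SJF with the selection key changed from burst time to priority. Based on the worked timeline, the sketch below assumes a smaller priority number means a higher priority; that convention is an inference, not something the slides state explicitly.

```python
# Non-preemptive priority sketch; smaller priority number = higher priority (assumed).
procs = [(1, 0, 20, 10), (2, 2, 10, 1), (3, 4, 58, 2), (4, 8, 40, 4), (5, 12, 30, 3)]

def priority_nonpreemptive(processes):
    pending = sorted(processes, key=lambda p: p[1])
    time, stats = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time] or [pending[0]]
        job = min(ready, key=lambda p: p[2])           # highest priority (smallest number)
        pending.remove(job)
        pid, arrival, _, burst = job
        time = max(time, arrival) + burst
        stats[pid] = (time - arrival, time - arrival - burst)
    return stats

stats = priority_nonpreemptive(procs)
print(sum(t for t, _ in stats.values()) / 5)   # avg turnaround: 9.6
print(sum(w for _, w in stats.values()) / 5)   # avg waiting:    5.6
```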

Priority-Based without Preemption: Pros and Cons

Performance measures in a priority scheme have no meaning.

Cons
- A major problem with priority-based algorithms is indefinite blocking, or starvation.

Scheduling Algorithms: Priority-Based with Preemption

The CPU chooses the highest-priority job available in its queue and begins processing it. If a higher-priority job arrives during processing, the CPU preempts the current process in favor of the new process.

The timeline on the following slide should give you a better idea of how the priority based algorithm with preemption works.

Scheduling Algorithms: Priority-Based with Preemption

[Timeline: P1 runs 0-2, P2 runs 2-3, P1 runs 3-11, P4 runs 11-12, P5 runs 12-15, P4 runs 15-18, P3 runs 18-20. An arriving process with a higher priority (lower number) preempts the running process.]

Priority-Based with Preemption: Calculating Performance Measures

Turnaround Time = End Time – Time of Arrival

Turnaround Time for P1 = 11 – 0 = 11
Turnaround Time for P2 = 3 – 2 = 1
Turnaround Time for P3 = 20 – 4 = 16
Turnaround Time for P4 = 18 – 8 = 10
Turnaround Time for P5 = 15 – 12 = 3
Total Turnaround Time = 11 + 1 + 16 + 10 + 3 = 41
Average Turnaround Time = 41/5 = 8.2

Priority-Based with Preemption: Calculating Performance Measures

Waiting Time = Turnaround Time – Execution Time

Waiting Time for P1 = 11 – 10 = 1
Waiting Time for P2 = 1 – 1 = 0
Waiting Time for P3 = 16 – 2 = 14
Waiting Time for P4 = 10 – 4 = 6
Waiting Time for P5 = 3 – 3 = 0
Total Waiting Time = 1 + 0 + 14 + 6 + 0 = 21
Average Waiting Time = 21/5 = 4.2

Priority-Based with Preemption: Calculating Performance Measures

Throughput

Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
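A sketch of preemptive priority scheduling, again stepped one time unit at a time and again assuming a smaller priority number means a higher priority (inferred from the worked example).

```python
# Preemptive priority sketch, one time unit per step.
# Smaller priority number = higher priority (assumption inferred from the example).
procs = [(1, 0, 20, 10), (2, 2, 10, 1), (3, 4, 58, 2), (4, 8, 40, 4), (5, 12, 30, 3)]

def priority_preemptive(processes):
    arrival = {pid: a for pid, a, _, _ in processes}
    prio = {pid: pr for pid, _, pr, _ in processes}
    burst = {pid: b for pid, _, _, b in processes}
    remaining = dict(burst)
    time, stats = 0, {}
    while remaining:
        ready = [pid for pid in remaining if arrival[pid] <= time]
        if not ready:
            time += 1
            continue
        pid = min(ready, key=lambda p: prio[p])        # run the highest-priority ready job
        remaining[pid] -= 1
        time += 1
        if remaining[pid] == 0:
            del remaining[pid]
            t = time - arrival[pid]
            stats[pid] = (t, t - burst[pid])           # (turnaround, waiting)
    return stats

stats = priority_preemptive(procs)
print(sum(t for t, _ in stats.values()) / 5)   # avg turnaround: 8.2
print(sum(w for _, w in stats.values()) / 5)   # avg waiting:    4.2
```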

Priority-Based with Preemption: Pros and Cons

Performance measures in a priority scheme have no meaning.

Cons
- A major problem with priority-based algorithms is indefinite blocking, or starvation.
- Too much preemption may result in higher context switching times, adversely affecting system performance.

Scheduling Algorithms: Round Robin

The CPU cycles through its process queue and, in succession, gives its attention to every process for a predetermined time slot.

A very large time slot results in FIFO behavior, while a very small time slot results in high context switching and a tremendous decrease in performance.

The timeline on the following slide should give you a better idea of how the round robin algorithm works.

Scheduling Algorithms: Round Robin

[Timeline with a time slot of 2: P1 runs 0-2, P2 runs 2-3, P1 runs 3-5, P3 runs 5-7, P1 runs 7-9, P4 runs 9-11, P1 runs 11-13, P4 runs 13-15, P5 runs 15-17, P1 runs 17-19, P5 runs 19-20.]

Round Robin Algorithm: Calculating Performance Measures

Turnaround Time = End Time – Time of Arrival

Turnaround Time for P1 = 19 – 0 = 19
Turnaround Time for P2 = 3 – 2 = 1
Turnaround Time for P3 = 7 – 4 = 3
Turnaround Time for P4 = 15 – 8 = 7
Turnaround Time for P5 = 20 – 12 = 8
Total Turnaround Time = 19 + 1 + 3 + 7 + 8 = 38
Average Turnaround Time = 38/5 = 7.6

Round Robin Algorithm: Calculating Performance Measures

Waiting Time = Turnaround Time – Execution Time

Waiting Time for P1 = 19 – 10 = 9
Waiting Time for P2 = 1 – 1 = 0
Waiting Time for P3 = 3 – 2 = 1
Waiting Time for P4 = 7 – 4 = 3
Waiting Time for P5 = 8 – 3 = 5
Total Waiting Time = 9 + 0 + 1 + 3 + 5 = 18
Average Waiting Time = 18/5 = 3.6

Round Robin Algorithm: Calculating Performance Measures

Throughput

Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
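A round robin sketch that reproduces the timeline above. The 2-unit time slot is an assumption consistent with the worked end times, and a newly arrived process is assumed to join the queue before a preempted process is re-queued.

```python
from collections import deque

# Round robin sketch; QUANTUM = 2 is assumed (it reproduces the worked end times).
procs = [(1, 0, 20, 10), (2, 2, 10, 1), (3, 4, 58, 2), (4, 8, 40, 4), (5, 12, 30, 3)]
QUANTUM = 2

def round_robin(processes, quantum):
    arrivals = sorted(processes, key=lambda p: p[1])
    arrival = {pid: a for pid, a, _, _ in processes}
    burst = {pid: b for pid, _, _, b in processes}
    remaining = dict(burst)
    queue, time, i, stats = deque(), 0, 0, {}

    def admit():                       # move processes that have arrived into the queue
        nonlocal i
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i][0])
            i += 1

    admit()
    while len(stats) < len(processes):
        if not queue:
            time += 1
            admit()
            continue
        pid = queue.popleft()
        for _ in range(min(quantum, remaining[pid])):   # run one unit at a time so that
            time += 1                                   # arrivals during the slot join
            remaining[pid] -= 1                         # the queue before the re-queue
            admit()
        if remaining[pid] == 0:
            t = time - arrival[pid]
            stats[pid] = (t, t - burst[pid])            # (turnaround, waiting)
        else:
            queue.append(pid)
    return stats, time

stats, end = round_robin(procs, QUANTUM)
print(sum(t for t, _ in stats.values()) / 5)   # avg turnaround: 7.6
print(sum(w for _, w in stats.values()) / 5)   # avg waiting:    3.6
print(5 / end)                                 # throughput:     0.25
```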

Round Robin Algorithm: Pros and Cons

Pros
- It is a fair algorithm.

Cons
- The performance of the scheme depends heavily on the size of the time quantum.
- If the context-switch time is about 10% of the time quantum, then about 10% of the CPU time will be spent on context switches (the next example gives further insight into this).

Comparing Scheduling Algorithms

The following table summarizes the performance measures of the different algorithms:

Algorithm                  Av. Waiting Time   Av. Turnaround Time   Throughput
FIFO                       5.0                9.0                   0.25
SJF (no preemption)        4.8                8.8                   0.25
SJF (preemptive)           2.0                6.0                   0.25
Priority (no preemption)   5.6                9.6                   0.25
Priority (preemptive)      4.2                8.2                   0.25
Round robin                3.6                7.6                   0.25

Comparing Scheduling Algorithms

The preemptive counterparts perform better in terms of waiting time and turnaround time. However, they suffer from the disadvantage of possible starvation.

FIFO and round robin ensure no starvation. FIFO, however, suffers from poor performance measures.

The performance of round robin depends on the time slot value. It is also the most affected once context switching time is taken into account, as you'll see in the next example (in the cases above, context switching time = 0).

Round Robin Algorithm: An Example with Context Switching

[Timeline: the same round robin schedule, but with a context switch consuming CPU time between process slots. P2 ends at 4, P3 at 10, P4 at 25, P5 at 27, and P1 at 30; the five processes take 30 time units in total, of which 20 are useful work.]

Round Robin Algorithm: An Example with Context Switching

Turnaround Time = End Time – Time of Arrival

Turnaround Time for P1 = 30 – 0 = 30
Turnaround Time for P2 = 4 – 2 = 2
Turnaround Time for P3 = 10 – 4 = 6
Turnaround Time for P4 = 25 – 8 = 17
Turnaround Time for P5 = 27 – 12 = 15
Total Turnaround Time = 30 + 2 + 6 + 17 + 15 = 70
Average Turnaround Time = 70/5 = 14

Round Robin Algorithm: An Example with Context Switching

Waiting Time = Turnaround Time – Execution Time

Waiting Time for P1 = 30 – 10 = 20
Waiting Time for P2 = 2 – 1 = 1
Waiting Time for P3 = 6 – 2 = 4
Waiting Time for P4 = 17 – 4 = 13
Waiting Time for P5 = 15 – 3 = 12
Total Waiting Time = 20 + 1 + 4 + 13 + 12 = 50
Average Waiting Time = 50/5 = 10

Round Robin Algorithm: An Example with Context Switching

Throughput

Total time for completion of 5 processes = 30
Therefore, Throughput = 5/30 = 0.17 processes per unit time

CPU Utilization

% CPU Utilization = 20/30 * 100 = 66.67%
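A sketch of the same round robin loop with a context-switch cost charged before every dispatch after the first. The 2-unit time slot and 1-unit switch time are assumptions, but they are consistent with the figures above (20 units of useful work in 30 elapsed units).

```python
from collections import deque

# Round robin with a context-switch cost; quantum 2 and switch time 1 are assumptions
# consistent with the worked figures (20 units of work, 30 units elapsed).
procs = [(1, 0, 20, 10), (2, 2, 10, 1), (3, 4, 58, 2), (4, 8, 40, 4), (5, 12, 30, 3)]

def round_robin_cs(processes, quantum, switch_cost):
    arrivals = sorted(processes, key=lambda p: p[1])
    arrival = {pid: a for pid, a, _, _ in processes}
    remaining = {pid: b for pid, _, _, b in processes}
    queue, time, i, stats, first = deque(), 0, 0, {}, True

    def admit():
        nonlocal i
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i][0])
            i += 1

    admit()
    while len(stats) < len(processes):
        if not queue:
            time += 1
            admit()
            continue
        pid = queue.popleft()
        if not first:                       # the CPU does no useful work while switching
            for _ in range(switch_cost):
                time += 1
                admit()
        first = False
        for _ in range(min(quantum, remaining[pid])):
            time += 1
            remaining[pid] -= 1
            admit()
        if remaining[pid] == 0:
            stats[pid] = time - arrival[pid]        # turnaround time
        else:
            queue.append(pid)
    return stats, time

stats, end = round_robin_cs(procs, quantum=2, switch_cost=1)
busy = sum(b for _, _, _, b in procs)
print(sum(stats.values()) / 5)     # avg turnaround: 14.0
print(end, 5 / end)                # total time 30, throughput ~0.17
print(100 * busy / end)            # CPU utilization ~66.67 %
```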

Comparing Performance Measures of Round Robin Algorithms

The following table summarizes the performance measures of the round robin algorithm with and without context switching:

Performance Measure    Round Robin without Context Switching   Round Robin with Context Switching
Av. Waiting Time       3.6                                     10
Av. Turnaround Time    7.6                                     14
Throughput             0.25                                    0.17
% CPU Utilization      100%                                    66.67%

Context Switching Remarks

The comparison of performance measures for the round robin algorithm with and without context switching clearly indicates:
- Context switching time is pure overhead, because the system does no useful work while switching.
- Thus, the time quantum should be large with respect to the context switching time. However, if it is too big, round robin scheduling degenerates to a FIFO policy.
- Consequently, finding the optimal value of the time quantum for a system is crucial.

Scheduling Algorithms: Multilevel Feedback Queue

In the previous algorithms, we had a single process queue and applied the same scheduling algorithm to every process.

A multi-level feedback queue involves having multiple queues with different levels of priority and different scheduling algorithms for each.

The figure on the following slide depicts a multi-level feedback queue.

Scheduling Algorithms: Multilevel Feedback Queue

Queue 1 (system jobs): round robin
Queue 2 (computation-intensive jobs): SJF with preemption
Queue 3 (less intense calculation): priority-based
Queue 4 (multimedia tasks): FIFO

Scheduling Algorithms: Multilevel Feedback Queue

As depicted in the diagram, different kinds of processes are assigned to queues with different scheduling algorithms. The idea is to separate processes with different CPU-burst characteristics and different relative priorities.

Thus, system jobs are assigned to the highest-priority queue with round robin scheduling. Multimedia tasks, on the other hand, are assigned to the lowest-priority queue; that queue uses FIFO scheduling, which is well-suited for processes with short CPU-burst characteristics.

Scheduling Algorithms: Multilevel Feedback Queue

Processes are assigned to a queue depending on their importance. Each queue has a different scheduling algorithm that schedules the processes in that queue.

To prevent starvation, we use the concept of aging: processes are upgraded to the next queue after they spend a predetermined amount of time in their original queue.

This algorithm, though optimal, is the most complicated, and it involves a high amount of context switching.
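A structural sketch along the lines described above: four priority levels, a placeholder policy per level, and aging that promotes a waiting process once it exceeds its level's threshold. All class, method, and parameter names here are illustrative, not a prescribed design or the assignment's required interface.

```python
from collections import deque

# Illustrative multilevel feedback queue skeleton: level 0 is the highest priority.
# Per-level policies and aging thresholds are placeholders.
class MLFQ:
    def __init__(self, aging_thresholds=(8, 8, 8)):    # per-level wait before promotion
        self.levels = [deque() for _ in range(4)]       # e.g. RR, preemptive SJF, priority, FIFO
        self.aging_thresholds = aging_thresholds        # no promotion out of level 0
        self.wait = {}                                  # pid -> time spent waiting at its level

    def admit(self, pid, level):
        self.levels[level].append(pid)
        self.wait[pid] = 0

    def pick_next(self):
        # Jobs run only if every higher-priority level is empty.
        for level, queue in enumerate(self.levels):
            if queue:
                return level, queue.popleft()
        return None

    def age(self):
        # Called once per time step: promote jobs that waited past their threshold.
        for level in range(1, 4):
            threshold = self.aging_thresholds[level - 1]
            for pid in list(self.levels[level]):
                self.wait[pid] += 1
                if self.wait[pid] >= threshold:
                    self.levels[level].remove(pid)
                    self.levels[level - 1].append(pid)  # move up one priority level
                    self.wait[pid] = 0

# Tiny usage example: a low-priority job is promoted after waiting long enough.
mlfq = MLFQ(aging_thresholds=(3, 3, 3))
mlfq.admit("multimedia-job", level=3)
for _ in range(3):
    mlfq.age()
print(mlfq.pick_next())   # (2, 'multimedia-job') -> the job now sits one level higher
```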

α-Value and Initial Time Estimates

Many algorithms use the execution time of a process to determine which job is processed next.

Since it is impossible to know the execution time of a process before it begins execution, this value has to be estimated.

α, a first-degree filter, is used to estimate the execution time of a process.

Estimating Execution Time

The estimated execution time is obtained using the following formula:

z_n = α · z_(n-1) + (1 − α) · t_(n-1)

where z is the estimated execution time, t is the actual time, and α is the first-degree filter with 0 ≤ α ≤ 1.

The following example provides a more complete picture.

Estimating Execution Time

Here, α = 0.5 and z_0 = 10.

Then, by the formula,
z_1 = α · z_0 + (1 − α) · t_0 = (0.5)(10) + (1 − 0.5)(6) = 8

and z_2, z_3, ..., z_5 are calculated similarly.

Process   z_n   t_n
P0        10    6
P1        8     4
P2        6     6
P3        6     4
P4        5     17
P5        11    13
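A few lines that reproduce the estimate column of the table from the formula, assuming α = 0.5 and z_0 = 10 as in the example.

```python
# Reproduce the estimates: z_n = alpha * z_(n-1) + (1 - alpha) * t_(n-1).
alpha, z = 0.5, 10.0
actual = [6, 4, 6, 4, 17, 13]           # t_0 .. t_5 from the table
estimates = [z]
for t in actual[:-1]:                   # each new estimate uses the previous actual time
    z = alpha * z + (1 - alpha) * t
    estimates.append(z)
print(estimates)                        # [10.0, 8.0, 6.0, 6.0, 5.0, 11.0]
```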

Estimating Execution Time: Effect of α

The value of α affects the estimation process.
- α = 0 implies that z_n does not depend on z_(n-1) and is equal to t_(n-1).
- α = 1 implies that z_n does not depend on t_(n-1) and is equal to z_(n-1).

Consequently, it is more favorable to start off with a random value of α and use an updating scheme, as described on the next slide.

Estimating Execution Time: α-Update

Steps of the α-updating scheme:
- Start with a random value of α (here, 0.5).
- Obtain the sum of squared differences, i.e., f(α).
- Differentiate f(α) and find the new value of α.

The following slide shows these steps for our example.

z_n                                        t_n   Squared Difference
10                                         6     –
α·10 + (1 − α)·6 = 6 + 4α                  4     [(6 + 4α) − 4]² = (2 + 4α)²
α·(6 + 4α) + (1 − α)·4 = 4α² + 2α + 4      6     [(4α² + 2α + 4) − 6]² = (4α² + 2α − 2)²

Estimating Execution Time: α-Update

In the above example, at the time of estimating the execution time of P3, we update α as follows.

The sum of squared differences is given by
SSD = (2 + 4α)² + (4α² + 2α − 2)² = 16α⁴ + 16α³ + 4α² + 8α + 8

Setting d/dα [SSD] = 0 gives us
8α³ + 6α² + α + 1 = 0 (Equation 1)

Solving Equation 1, we get α = 0.7916.

Now, z_3 = α · z_2 + (1 − α) · t_2. Substituting values, we have
z_3 = (0.7916)(6) + (1 − 0.7916)(6) = 6
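The polynomial algebra above can be checked symbolically. This sketch (which assumes the sympy package is available) expands the sum of squared differences and the polynomial used in Equation 1.

```python
import sympy as sp

# Check the expansion of the sum of squared differences and its derivative.
a = sp.symbols("alpha")
ssd = sp.expand((2 + 4 * a) ** 2 + (4 * a ** 2 + 2 * a - 2) ** 2)
print(ssd)                              # 16*alpha**4 + 16*alpha**3 + 4*alpha**2 + 8*alpha + 8
print(sp.expand(sp.diff(ssd, a) / 8))   # 8*alpha**3 + 6*alpha**2 + alpha + 1
```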

Multilevel Feedback Queue: Parameters Involved

- Round robin queue time slot
- FIFO, priority, and SJF queue aging thresholds
- Preemptive vs. non-preemptive scheduling (2 switches, in the priority and SJF queues)
- Context switching time
- α-values and initial execution time estimates

Multilevel Feedback Queue: Effect of the Round Robin Time Slot

A small round robin quantum value lowers system performance through increased context switching time.

Large quantum values result in FIFO behavior with effective CPU utilization and lower turnaround and waiting times, but also the potential for starvation.

Finding an optimal time slot value is therefore imperative for maximum CPU utilization with a reduced starvation problem.

Multilevel Feedback Queue: Effect of Aging Thresholds

A very large value for the aging thresholds makes the waiting and turnaround times unacceptable; these are signs of processes nearing starvation.

On the other hand, a very small value makes the scheme equivalent to a single round robin queue.

Multilevel Feedback Queue: Effect of the Preemption Choice

Preemption undoubtedly increases the number of context switches, and an increased number of context switches adversely affects the efficiency of the system.

However, preemptive scheduling has been shown to decrease waiting and turnaround time measures in certain instances.

In SJF scheduling, the advantage of choosing preemption over non-preemption depends largely on the CPU burst time predictions, which are themselves difficult to make.

Multilevel Feedback Queue: Effect of Context Switching Time

An increasing context switching time degrades system performance in an almost linear fashion.

As the context switching time increases, so do the average turnaround and average waiting times, and this in turn pulls down the CPU utilization.

Multilevel Feedback Queue: Effect of α-Values and Initial Execution Time Estimates

There is no proven trend for this parameter.

In one study, for the given process mix, the turnaround time obtained from predicted burst times was reported to be significantly lower than the one obtained from randomly generated burst time estimates. [Courtesy: Su's project report]

Multilevel Feedback Queue: Performance Measures

The performance measures used in this module are:
- Average turnaround time
- Average waiting time
- CPU utilization
- CPU throughput

Multilevel Feedback Queue: Implementation

As part of Assignment 1, you'll implement a multilevel feedback queue within an operating system satisfying the given requirements. (For complete details, refer to Assignment 1.)

We’ll see a brief explanation of the assignment in the following slides.

Multilevel Feedback Queue: Implementation Details

Following are some specifications of the scheduling system you'll implement:
- The scheduler consists of 4 linear queues: the first queue is FIFO, the second is priority-based, the third is SJF, and the fourth (highest priority) is round robin.
- Feedback occurs through aging. Aging parameters differ, i.e., each queue has a different aging threshold (time) before a process can migrate to a higher-priority queue. (This should be a variable parameter for each queue.)
- The time slot for the round robin queue is a variable parameter.
- You should allow a variable context switching time to be specified by the user.

Multilevel Feedback Queue: Implementation Details

- Implement a switch (variable parameter) to allow choosing preemptive or non-preemptive scheduling in the SJF and priority-based queues.
- Jobs cannot execute in a queue unless there are no jobs in higher-priority queues.
- Jobs are to be created with the following fields in their PCB: job number, arrival time, actual execution time, priority, and queue number (process type 1-4). Creation is done randomly; choose your own ranges for the different fields in the PCB definition.
- Output should indicate a timeline, i.e., at every time step indicate which processes are created (if any), which ones are completed (if any), which processes aged in the different queues, etc.

Multilevel Feedback Queue: Implementation Details

Once you're done with the implementation, think about the problem from an algorithmic design point of view. The implementation involves several performance measures, such as:
- Average waiting time
- Average turnaround time
- CPU utilization
- Maximum turnaround time
- Maximum wait time
- CPU throughput

Multilevel Feedback Queue: Implementation Details

The eventual goal is to optimize several performance measures (listed earlier).

Perform several test runs and write a summary indicating how sensitive some of the performance measures are to some of the above parameters.

Multilevel Feedback Queue: Sample Screenshot of Simulation

Sample Run 1: round robin queue time slot = 2

Multilevel Feedback Queue: Sample Tabulated Data from Simulation

RR Time Slot   Av. Turnaround Time   Av. Waiting Time   CPU Utilization   Throughput
2              19.75                 17                 66.67 %           0.026
3              22.67                 20                 75.19 %           0.023
4              43.67                 41                 80.00 %           0.024
5              26.5                  25                 83.33 %           0.017
6              38.5                  37                 86.21 %           0.017

Multilevel Feedback Queue: Sample Graphs (Using Data from the Simulation)

[Graphs: RR time slot vs. average turnaround time, RR time slot vs. average waiting time, and RR time slot vs. CPU utilization.]

Multilevel Feedback Queue: Conclusions from the Sample Simulation

The following emerged from the simulation for the given process mix:
- an optimal value of the round robin quantum
- the smallest possible context switching time
- the α update had no effect on the performance measures in this case

Lecture Summary

- Introduction to CPU scheduling: what is CPU scheduling
- Related concepts: starvation, context switching, and preemption
- Scheduling algorithms
- Parameters involved
- Parameter-performance relationships
- Some sample results

Preview of Next Lecture

The following topics will be covered in the next lecture:
- Introduction to process synchronization and deadlock handling
- Synchronization methods
- Deadlocks: what deadlocks are, and the four necessary conditions
- Deadlock handling
