Scheduling
operating systems
There are a number of issues that affect the way work is scheduled on the CPU.
Batch vs. Interactive
Scheduling Issues: Batch System vs. Interactive System
In a batch system, there are no users impatiently waiting at terminals for a quick response.
On large mainframe systems where batch jobs usually run, CPU time is still a precious resource.
Metrics for a batch system include
 * Throughput – number of jobs per hour that can be run
 * Turnaround – the average time for a job to complete
 * CPU utilization – keep the CPU busy all of the time
Scheduling Issues: Batch System vs. Interactive System
In interactive systems the goal is to minimize response times.
Proportionality – complex things should take more time than simple things. Closing a window should be immediate. Making a dial-up connection would be expected to take a longer time.
One or two users should not be able to hog the CPU.
Single vs Multiple User
Scheduling Issues: Single User vs. Multi-User Systems
Scheduling is far less complex on a single user system:
* In today’s Personal computers (single user systems), it is rare for a user to run multiple processes at the same time.
* On a personal computer most wait time is for user input.
* CPU cycles on a personal computer are cheap.
Scheduling Issues: Compute vs. I/O Bound Programs
Most programs exhibit a common behavior: they compute for a while, then they do some I/O.
Compute Bound:
Relatively long bursts of CPU activity with short intervals waiting for I/O
I/O Bound:
Relatively short bursts of CPU activity with frequent long waits for I/O
Scheduling Issues: When to Schedule
Job Scheduling: When new jobs enter the system, select jobs from a queue of incoming jobs and place them on a process queue, where they will be subject to Process Scheduling.
The goal of the job scheduler is to put jobs in a sequence that will use all of the system’s resources as fully as possible.
Example: What happens if several I/O bound jobs are scheduled at the same time?
Scheduling Issues: When to Schedule
Process Scheduling or Short Term Scheduling:
For all jobs on the process queue, Process Scheduling determines which job gets the CPU next, and for how long. It decides when processing should be interrupted, and when a process completes or should be terminated.
Scheduling Issues
Preemptive vs. non-Preemptive Scheduling
Non-preemptive scheduling starts a process running and lets it run until it either blocks, or voluntarily gives up the CPU.
Preemptive scheduling starts a process and only lets it run for a maximum of some fixed amount of time.
Scheduling Criteria
Turnaround time – complete programs quickly
Response Time – quickly respond to interactive requests
Deadlines – meet deadlines
Predictability – simple jobs should run quickly, complex jobs longer
Throughput – run as many jobs as possible over a time period
Pick the criteria that are important to you. One algorithm cannot maximize all criteria.

Scheduling Criteria (continued)

CPU Utilization – maximize how the CPU is used
Fairness – give everyone an equal share of the CPU
Enforcing Priorities – give CPU time based on priority
Enforcing Installation Policies – give CPU time based on policy
Balancing Resources – maximize use of files, printers, etc.
[Diagram: the Ready List holds the PCBs of ready processes; the Scheduler picks one and hands it to the CPU; the Resource Manager allocates resources to waiting PCBs.]

[Diagram: process states – a new process is allocated and enters the ready state; the scheduler moves it to running; a resource request moves it to blocked; pre-emption or a voluntary yield returns it to ready.]

[Diagram: a process entering the ready state has its PCB placed on the Ready List by the Enqueuer; the Dispatcher, using the Context Switcher, moves the selected process to the CPU.]
The cost of a context switch

Assume that your machine has 32-bit registers.

The context switch uses normal load and store ops. Let's assume that it takes 50 nanoseconds to store the contents of a register in memory.

If our machine has 32 general purpose registers and 8 status registers, it takes

(32 + 8) * 50 nanoseconds = 2 microseconds

to store all of the registers.

Another 2 microseconds are required to load the registers for the new process.

Keep in mind that the dispatcher itself is a process that requires a context switch. So we could estimate the total time required to do a context switch as 8+ microseconds.

On a 1 GHz machine, register operations take about 2 nanoseconds. If we divide our 8 microseconds by 2 nanoseconds, we could execute upwards of 4000 register instructions while a context switch is going on.

This only accounts for saving and restoring registers. It does not account for any time required to load memory.
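The arithmetic above can be sketched in a few lines. Everything here uses the figures assumed on the slide (50 ns per register store, 32 general plus 8 status registers, ~2 ns register operations); the variable names are mine.

```python
# Back-of-the-envelope context switch cost, using the slide's assumptions.
STORE_NS = 50          # time to store one register to memory (assumption)
REGISTERS = 32 + 8     # general purpose + status registers (assumption)

save_us = REGISTERS * STORE_NS / 1000   # save old process: 2 us
restore_us = save_us                    # load new process: 2 us
# The dispatcher itself needs a save/restore too, so roughly double it.
total_us = 2 * (save_us + restore_us)   # ~8 us

# On a 1 GHz machine with ~2 ns register operations:
lost_ops = total_us * 1000 / 2          # ~4000 register instructions

print(save_us, total_us, lost_ops)      # 2.0 8.0 4000.0
```

As the slide notes, this ignores memory traffic, so a real context switch would cost even more.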
Optimal Scheduling
Given a set of processes where the cpu time required for each to complete is known beforehand, it is possible to select the best possible scheduling of the jobs, if
 - we assume that no other jobs will enter the system
 - we have a pre-emptive scheduler
 - we have a specific goal (e.g., throughput) to meet
This is done by considering every possible ordering of time slices for each process, and picking the “best” one.
But this is not very realistic – why not?
Are there any examples where these requirements hold?
This could take more time than actually running the thread!
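To see why this explodes, here is a toy sketch that restricts itself to non-preemptive orderings (already n! candidates, and full preemptive time-slice orderings are far worse). The helper name and the assumption that all jobs arrive at time 0 are mine.

```python
from itertools import permutations

def best_order(service_times):
    """Try every non-preemptive ordering and return the one with the
    lowest average turnaround time (all jobs assumed present at t=0)."""
    def avg_turnaround(order):
        clock, total = 0, 0
        for s in order:
            clock += s
            total += clock
        return total / len(order)
    return min(permutations(service_times), key=avg_turnaround)

# With service times 24, 3 and 3 ms, the short jobs should go first:
print(best_order([24, 3, 3]))   # (3, 3, 24)
```

Even for a handful of jobs the search is exponential, which is exactly why this could take more time than running the threads.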
Scheduling Model
P = {pi | 0 < i < n}
P is a set of processes
Each process pi in the set is represented by a descriptor {pi, j} that specifies a list of threads.
Each thread contains a state field S(pi, j) such that S(pi, j) is one of {running, blocked, ready}
Service Time τ(pi,j) – the amount of time a thread needs to be in the running state until it is completed.

Wait Time W(pi,j) – the time the thread spends waiting in the ready state before its first transition to the running state. *

Turnaround Time T(pi,j) – the amount of time between the moment the thread first enters the ready state and the moment the thread exits the running state for the last time.
Some Common Performance Metrics
* Silberschatz uses a different definition of wait time.
Response Time – in an interactive system, one of the most important performance metrics is response time: the time that it takes for the system to respond to some user action.
System Load
If λ is the mean arrival rate of new jobs into the system, and μ is the mean service rate, then the fraction of the time that the CPU is busy can be calculated as

ρ = λ / μ

This assumes no time for context switching and that the CPU has sufficient capacity to service the load.
Note: This is not the same lambda (λ) we saw a few slides back.
For example, given an average arrival rate of 10 threads per minute and an average service time of 3 seconds,
λ = 10 threads per minute
μ = 20 threads per minute (60 / 3)

ρ = 10 / 20 = 50%
What can you say about this system if the arrival rate, λ, is greater than the service rate, μ?
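The worked example above is a one-line ratio; a small sketch (function name is mine):

```python
def utilization(arrival_rate, service_rate):
    """rho = lambda / mu: fraction of time the CPU is busy.
    Only meaningful when the arrival rate is below the service rate."""
    return arrival_rate / service_rate

# 10 threads/minute arriving, 3 s average service time -> mu = 60/3 = 20/min
lam = 10
mu = 60 / 3
print(utilization(lam, mu))   # 0.5, i.e. the CPU is busy 50% of the time
```

If λ exceeds μ the ratio exceeds 1, meaning work arrives faster than it can be served and the ready queue grows without bound.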
Scheduling Algorithms
First-Come First-Served
Shortest Job First
Priority Scheduling
Deadline Scheduling
Shortest Remaining Time Next
Round Robin
Multi-level Queue
Multi-level Feedback Queue
First-Come First-Served
The simplest of scheduling algorithms.
The Ready List is a FIFO queue.
When a process enters the ready queue, its PCB is linked to the tail of the queue. When the CPU is free, the scheduler picks the process that is at the head of the queue.
First-come first-served is a non-preemptive scheduling algorithm. Once a process gets the CPU it keeps it until it either finishes, blocks for I/O, or voluntarily gives up the CPU.
When a process blocks, the next process in the queue is run. When the blocked process becomes ready, it is added back in to the end of the ready list, just as if it were a new process.
Waiting times in a First-Come First-Served System can vary substantially and can be very long. Consider three jobs with the following service times (no blocking):
i    τi
1    24ms
2    3ms
3    3ms

If the processes arrive in the order p1, p2, and then p3
P1 P2 P3
0 24 27 30
Gantt Chart
Compute each thread’s turnaround time
T(p1) = τ(p1) = 24ms
T(p2) = τ(p2) + T(p1) = 3ms + 24ms = 27ms
T(p3) = τ(p3) + T(p2) = 3ms + 27ms = 30ms
Average turnaround time = (24 + 27 + 30)/3 = 81 / 3 = 27ms
Waiting times in a First-Come First-Served System can vary substantially and can be very long. Consider three jobs with the following run times (no blocking):
i    τi
1    24ms
2    3ms
3    3ms

If the processes arrive in the order p1, p2, and then p3
P1 P2 P3
0 24 27 30
Compute each thread’s wait time
W(p1) = 0
W(p2) = T(p1) = 24ms
W(p3) = T(p2) = 27ms
Average wait time = (0 + 24 + 27 ) / 3 = 51 / 3 = 17ms
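The FCFS bookkeeping above can be checked with a small simulation. This is a sketch of the slide's model (all jobs ready at time 0, run in list order, no blocking); the function name is mine.

```python
def fcfs(service_times):
    """Average (turnaround, wait) for jobs run back-to-back in list order."""
    clock, turnaround, wait = 0, [], []
    for s in service_times:
        wait.append(clock)        # time spent behind the jobs ahead of it
        clock += s
        turnaround.append(clock)  # completion time = turnaround from t=0
    n = len(service_times)
    return sum(turnaround) / n, sum(wait) / n

print(fcfs([24, 3, 3]))   # (27.0, 17.0) -- matches the slide
print(fcfs([3, 3, 24]))   # (13.0, 3.0)  -- the reordered arrivals
```

Reversing the arrival order reproduces the improvement shown on the next slides.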
Note how re-ordering the arrival times can significantly alter the average turnaround time and average wait time!
i    τi
1    3ms
2    3ms
3    24ms
P1 P2 P3
0 3 6 30
Compute each thread’s turnaround time
T(p1) = τ(p1) = 3ms
T(p2) = τ(p2) + T(p1) = 3ms + 3ms = 6ms
T(p3) = τ(p3) + T(p2) = 24ms + 6ms = 30ms
Average turnaround time = (3 + 6 + 30)/3 = 39 / 3 = 13ms
Note how re-ordering the arrival times can significantly alter the average turnaround and average wait times.
i    τi
1    3ms
2    3ms
3    24ms
P1 P2 P3
0 3 6 30
Compute each thread’s wait time
W(p1) = 0
W(p2) = T(p1) = 3ms
W(p3) = T(p2) = 6ms
Average wait time = (0 + 3 + 6 ) / 3 = 9 / 3 = 3ms
Try your hand at calculating average turnaround and average wait times.
i    τi
1    350ms
2    125ms
3    475ms
4    250ms
5    75ms

[Blank Gantt chart, marked at 100ms intervals from 100 to 1200, for working the exercise.]
p1 p2 p3 p4 p5
0 350 475 950 1200 1275
Average turnaround = (350 + 475 + 950 + 1200 + 1275) / 5 = 850
Average wait time = (0 + 350 + 475 + 950 + 1200) / 5 = 595
The Convoy Effect
Assume a situation where there is one CPU bound process and many I/O bound processes. What effect does this have on the utilization of system resources?
[Animation: a CPU-bound process runs a long time and holds the CPU while the I/O-bound processes finish their I/O and pile up in the ready queue; the I/O devices go idle. When the CPU-bound process finally blocks, the I/O-bound processes each take a quick turn on the CPU, start their I/O, and the convoy forms again behind the next long CPU burst.]

Remember, first-come first-served scheduling is non-preemptive.
Shortest Job Next Scheduling
Shortest Job Next scheduling is also a non-preemptive algorithm. The scheduler picks the job from the ready list that has the shortest expected CPU time.
It can be shown that the Shortest Job Next algorithm gives the shortest average waiting time. However, there is a danger. What is it?
Starvation for longer processes, as long as there is a supply of short jobs.
Consider the case where the following jobs are in the ready list:

Process   τi
1         6ms
2         8ms
3         7ms
4         3ms

Scheduling according to predicted processor time:

P4  P1  P3  P2
0   3   9   16   24

Average turnaround time = (3 + 9 + 16 + 24)/4 = 13ms
Scheduling according to predicted processor time:

P4  P1  P3  P2
0   3   9   16   24

Average wait time = (0 + 3 + 9 + 16)/4 = 7ms

If we were using FCFS scheduling the average wait time would have been 10.25ms.
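Both averages fall out of the same back-to-back run, just with the jobs in a different order. A sketch under the slide's assumptions (all four jobs in the ready list at time 0; helper name is mine):

```python
def avg_wait(order):
    """Average wait time when jobs run back-to-back in the given order
    (all jobs present in the ready list at t=0, no blocking)."""
    clock, total = 0, 0
    for s in order:
        total += clock   # this job waited for everything before it
        clock += s
    return total / len(order)

jobs = [6, 8, 7, 3]            # p1..p4 from the slide
print(avg_wait(sorted(jobs)))  # SJN order (3, 6, 7, 8) -> 7.0
print(avg_wait(jobs))          # FCFS arrival order     -> 10.25
```

Sorting by service time is all SJN does once every job is already present, which is why it minimizes average wait.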
There is a practical issue involved in actually implementing SJN, can you guess what it is?
You don’t know how long the job will really take!
For batch jobs the user can estimate how long the process will take and provide this as part of the job parameters. Users are motivated to be as accurate as possible, because if a job exceeds the estimate, it could be kicked out of the system.
In a production environment, the same jobs are run over and over again (for example, a payroll program), so you could easily base the estimate on previous runs of the same program.
For interactive systems, it is possible to predict the timefor the next CPU burst of a process based on its history.
Consider the following:
Sn+1 = (1/n) Σ(i=1..n) Ti

where
Ti = processor execution time for the ith burst
Si = predicted time for the ith instance
To avoid recalculating the average each time, we can write this as

Sn+1 = (1/n) Tn + ((n-1)/n) Sn
It is common to weight more recent instances more heavily than earlier ones, because they are better predictors of future behavior. This is done with a technique called exponential averaging.
Sn+1 = α Tn + (1 − α) Sn
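The recurrence is a one-liner in code. A sketch; the choice of α = 0.5 and the initial guess are my assumptions, not values from the slides.

```python
def predict(bursts, alpha=0.5, s0=10.0):
    """Exponential averaging: S(n+1) = alpha*Tn + (1 - alpha)*Sn.
    alpha weights recent bursts; s0 is an arbitrary initial guess."""
    s = s0
    for t in bursts:
        s = alpha * t + (1 - alpha) * s
    return s

# The initial guess decays away and recent bursts dominate:
print(predict([6.0, 4.0, 6.0, 4.0]))   # 5.0
```

With α near 1 the predictor tracks only the last burst; with α near 0 history dominates and the estimate adapts slowly.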
Priority Scheduling

In priority scheduling, each job has a given priority, and the scheduler always picks the job with the highest priority to run next. If two jobs have equal priority, they are run in FCFS order.
Priorities range across some fixed set of values. It is up to the scheduler to define whether or not the lowest value is also the lowest priority.
Note that shortest job next scheduling is really a case of priority scheduling, where the priority is the inverse of the predicted time of the next cpu burst.
Priority scheduling can either be preemptive or not.
Priorities can be assigned internally or externally.
Internally assigned priorities are based on various characteristics of the process, such as * memory required * number of open files * average cpu burst time * i/o bound or cpu bound
Externally assigned priorities are based on things such as * the importance of the job * the funds being used to pay for the job * the department that owns the job * other, often political, factors
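A priority ready list is naturally a heap. This is a minimal sketch (names and the smallest-number-is-highest-priority convention are mine; as the slides note, that convention is up to the scheduler), using a sequence counter so equal priorities fall back to FCFS order.

```python
import heapq
from itertools import count

ready = []          # heap of (priority, arrival sequence, process name)
seq = count()       # tie-breaker: equal priorities run in FCFS order
for prio, name in [(3, "student"), (0, "system"),
                   (1, "interactive"), (1, "batch")]:
    heapq.heappush(ready, (prio, next(seq), name))

# The scheduler always pops the highest-priority (lowest-number) PCB:
order = [heapq.heappop(ready)[2] for _ in range(len(ready))]
print(order)   # ['system', 'interactive', 'batch', 'student']
```

Note that "interactive" runs before "batch" only because it was enqueued first; that is the FCFS tie-break.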
Deadline Scheduling
Real-time systems are characterized by having threads that must complete execution prior to some time deadline.
The critical performance measurement of such a system is whether the system will be able to meet the scheduling deadlines for all such threads. Measures of turnaround time and wait time are irrelevant.
In order to manage deadline scheduling, the system must have complete knowledge of the maximum service time for each process.
In periodic scheduling, a thread has a recurring service time and deadline, so the deadline must be met for each period in the thread’s life. A process is admitted to the ready list only if it can be guaranteed that the system can supply the specified service time before each deadline imposed by all of the processes.
Example: In a streaming media application, a recurring thread must meet its service criteria and deadline in order to prevent jitter and latency in audio or video processing.
What problem do you think priority scheduling suffers from?
Starvation!
There is a legend that says when MIT shut down their IBM 7094 in 1973, they found a low priority job that had been in the queue since 1967, and had never been run!
Can you think of a scheme to deal with this problem?
Round Robin Scheduling
With round robin scheduling, jobs are stored in a FCFS queue.
A clock is provided that generates an interrupt at periodic intervals. When an interrupt occurs, the running process is stopped and placed at the end of the ready list. The scheduler then picks the front job from the ready list and dispatches it.
Round Robin is pre-emptive!
Time Quantums
The most significant design factor in round robin scheduling is the Time Quantum, the amount of time that each process is allowed to run. Why?
The rules are that
1. No process ever gets more than 1 time quantum in a row.
2. If the process takes more than one time quantum, it is stopped when its time quantum runs out and another process is run. The process moves to the tail of the list.
3. If a process does not use up its time quantum (normally because it blocks), then the scheduler returns it to the tail of the list when it is unblocked and picks another process to run.
If the time quantum is very long, then the performance of round robin scheduling approaches that of FCFS scheduling. Interactive users would complain bitterly!
What happens if the time quantum is very, very long?
What happens if the time quantum is very short?
Too short: efficiency goes down because of context switches.

Too long: interactive users lose responsiveness and the system appears sluggish.
Suppose that the time to do a context switch on a machine is 1ms and the quantum time is 4ms. Remember that no useful work gets done during a context switch, so 20% of the CPU time is wasted doing context switches.
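That 20% figure is just the ratio of switch time to the whole switch-plus-quantum cycle. A sketch (function name is mine), assuming every process uses its full quantum:

```python
def switch_overhead(quantum_ms, switch_ms):
    """Fraction of CPU time lost to context switches when every
    process runs a full quantum between switches."""
    return switch_ms / (quantum_ms + switch_ms)

print(switch_overhead(4, 1))     # 0.2 -> 20% wasted, as on the slide
print(switch_overhead(100, 1))   # under 1% wasted, but sluggish response
```

Growing the quantum drives the overhead toward zero, which is exactly the tension with responsiveness described above.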
Relative Treatment of CPU Bound and I/O Bound Processes
The I/O bound processes tend to run in short bursts, not using their entire time quantum. CPU bound processes use their entire time quantum every time they are run. The net effect is that CPU bound processes get an unfair proportion of the CPU (but maybe that's the way it should be).
Round Robin Scheduling
In the following example, we will assume a time quantum (time slice) of 4 ms. We will not take the time for a context switch into account.
Process   τi
1         24ms
2         3ms
3         3ms

Time slice is 4ms

[Animation: Process 1 starts; its time slice expires at 4ms. Process 2 starts and finishes its work at 7ms. Process 3 starts and finishes its work at 10ms. Process 1 then gets the CPU again, one time slice at a time, until it completes.]
Round Robin Scheduling
Process   τi
1         24ms
2         3ms
3         3ms
0 4 7 10 14 18 22 26 30
p1 p2 p3 p1 p1 p1 p1 p1
Average wait time = (0 + 4 + 7) / 3 = 3.66 ms
Significant improvement in wait time! Wait time for this set of jobs using FCFS was 17ms.
Round Robin Scheduling
Process   τi
1         24ms
2         3ms
3         3ms
0 4 7 10 14 18 22 26 30
p1 p2 p3 p1 p1 p1 p1 p1
Average turnaround time = (30 + 7 + 10) / 3 = 15.66ms

Round robin does little to improve average turnaround time. With FCFS, turnaround for this set of jobs was 27ms (13ms with the reordered arrivals).
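A short simulation reproduces both round robin numbers. This is a sketch of the slide's model (all jobs ready at time 0, no blocking, no context switch cost; the wait measured is time until a job first runs, matching the W defined earlier). The function name is mine.

```python
from collections import deque

def round_robin(service_times, quantum):
    """Return (average turnaround, average wait) for round robin with
    all jobs ready at t=0 and zero context switch cost."""
    queue = deque((i, s) for i, s in enumerate(service_times))
    clock, first_run, finish = 0, {}, {}
    while queue:
        i, left = queue.popleft()
        first_run.setdefault(i, clock)      # wait ends at first dispatch
        run = min(quantum, left)
        clock += run
        if left - run:                      # quantum expired: back to tail
            queue.append((i, left - run))
        else:
            finish[i] = clock
    n = len(service_times)
    return sum(finish.values()) / n, sum(first_run.values()) / n

print(round_robin([24, 3, 3], 4))   # (15.66..., 3.66...)
```

With a huge quantum the same function degenerates into FCFS, as the slides predict.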
Round Robin Scheduling
Now consider the case where the context switch plus the scheduler time takes 2ms.
Process   τi
1         24ms
2         3ms
3         3ms

0   4  6  9  11  14  16  20  22  26  28  32  34  38  40  44
p1  cs p2 cs p3  cs  p1  cs  p1  cs  p1  cs  p1  cs  p1

Average wait time = (0 + 6 + 11) / 3 = 5.66 ms

Without context switching this was 3.66ms
Round Robin Scheduling
Now consider the case where the context switch plus the scheduler time takes 2ms.

Process   τi
1         24ms
2         3ms
3         3ms

0   4  6  9  11  14  16  20  22  26  28  32  34  38  40  44
p1  cs p2 cs p3  cs  p1  cs  p1  cs  p1  cs  p1  cs  p1

Average turnaround time = (44 + 9 + 14) / 3 = 22.33 ms

Without context switching this was 15.66ms – about a 42% difference
Shortest Remaining Time Next
This is the pre-emptive version of Shortest Job Next. It allocates the processor to the job closest to completion. Note that a running job could be pre-empted if a new job with a shorter completion time arrives on the queue.
This algorithm is not suited for interactive systems since it requires knowledge of the time required for a process to complete.
Job #          1     2     3     4
Arrival Time   0     1     2     3
Service Time   6ms   3ms   1ms   4ms

[Animation:
Job 1 arrives at time 0, gets scheduled, and starts to run.
Job 1 runs for 1 ms (5ms left). When Job 2 arrives on the queue, its time to completion (3ms) is less than the time required to complete Job 1, so Job 1 is pre-empted by Job 2.
Job 2 runs for 1 ms (2ms left). When Job 3 arrives on the queue, its time to completion (1ms) is less than the time required to complete Job 2, so Job 2 is pre-empted by Job 3.
Job 3 runs to completion at time 3. The scheduler now looks to see which job has the shortest time to completion. Job 4 has arrived, but Job 2 has the shortest remaining time (2ms), so Job 2 runs to completion at time 5.
Job 4 runs to completion at time 9. There is only 1 job left, Job 1. It is scheduled and runs to completion at time 14.]

P1  P2  P3  P2  P4  P1
0   1   2   3   5   9   14

Average turnaround time = (14 + 4 + 1 + 6) / 4 = 6.25ms
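The walkthrough above can be replayed by simulating one millisecond at a time and always running the ready job with the least work left. A sketch (function name and the 1ms step are mine):

```python
def srtn(jobs):
    """Shortest Remaining Time Next, simulated 1 ms at a time.
    jobs: list of (arrival_time, service_time); returns avg turnaround."""
    remaining = {i: s for i, (_, s) in enumerate(jobs)}
    finish, clock = {}, 0
    while remaining:
        ready = [i for i in remaining if jobs[i][0] <= clock]
        if not ready:             # nothing has arrived yet; idle 1 ms
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])   # least work left
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            del remaining[i]
            finish[i] = clock
    return sum(finish[i] - jobs[i][0] for i in finish) / len(jobs)

print(srtn([(0, 6), (1, 3), (2, 1), (3, 4)]))   # 6.25
```

Pre-emption falls out for free: each millisecond the scheduler re-asks "who has the least remaining time?", so a newly arrived shorter job immediately displaces the running one.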
Multi-Level Queues
Multi level queues are useful when all processes to be run can be put into one of a small set of categories.
System Processes
Interactive Processes
Batch Processes
Student Processes
Each queue may have its own scheduling algorithm. For example, interactive processes may run round robin, while batch processes are run FCFS. Scheduling between queues is usually fixed-priority pre-emptive scheduling. For example, student jobs won't run at all unless all of the other queues are empty.
Multilevel Feedback Queues
Like multilevel queues, but with the ability of jobs to move from one queue to another. This separates jobs according to their CPU burst characteristics. For example, jobs that tend to use too much CPU time will be moved to a lower priority queue.

Similarly, jobs that wait too long to get the CPU will move to a higher priority queue. This type of aging tends to eliminate starvation.
Lottery Scheduling
Processes get lottery tickets for various system resources, such as the CPU. When a scheduling decision has to be made, the OS picks a lottery ticket at random. The holder of the ticket gets the resource.

More important (higher priority) processes can be given more lottery tickets to increase their chance of winning.

Cooperative processes can exchange tickets to boost one process's chance of winning and lessen another's.
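A lottery pick is a weighted random draw. A minimal sketch (the process names, ticket counts, and function name are all hypothetical):

```python
import random

def hold_lottery(tickets, rng=random):
    """Pick the next process to run: tickets maps each process name to
    its ticket count, and the draw is proportional to that count."""
    names = list(tickets)
    winner, = rng.choices(names, weights=[tickets[n] for n in names])
    return winner

tickets = {"editor": 75, "compiler": 20, "indexer": 5}
rng = random.Random(1)                 # seeded for repeatability
wins = {n: 0 for n in tickets}
for _ in range(10_000):
    wins[hold_lottery(tickets, rng)] += 1
print(wins)   # roughly proportional to 75 / 20 / 5
```

Over many draws each process gets CPU time in proportion to its tickets, which is how priorities are expressed in this scheme.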
Sun Solaris (a Unix OS)
Solaris has four scheduling classes:
 * Time Sharing
 * Real-Time
 * System
 * Interactive
Time Sharing and Interactive use a multi-level feedback queue – scheduling is done on threads.

Each priority is assigned a different time quantum.

At the end of a time quantum the priority of a thread is lowered by 10.
Windows XP
Windows XP uses a quantum-based, multiple-priority feedback scheduling algorithm.
Scheduling is done on threads, not processes.
When a thread's time quantum expires, its priority is lowered by 1.
When a thread moves to the ready list after being blocked, its priority is boosted. The largest boost is for threads waiting for keyboard input.
Linux 2.5
Linux 2.5 uses a quantum-based, multiple-priority feedback scheduling algorithm.
When a thread's time quantum expires, its priority is lowered by some amount.
When a thread moves to the ready list after being blocked, its priority is boosted, based on how long it was in the waiting state.