Operating Systems, Fall 2002

Processes
What is a process?
- An instance of an application execution
- The process is the most basic abstraction provided by the OS
- An isolated computation context for each application
- Computation context: CPU state + address space + environment
CPU state
- Register contents
- Process Status Word (PSW): execution mode, last operation outcome, interrupt level
- Instruction Register (IR): the current instruction being executed
- Program Counter (PC)
- Stack Pointer (SP)
- General-purpose registers
Address space
- Text: program code
- Data: predefined data (known at compile time)
- Heap: dynamically allocated data
- Stack: supports function calls
Environment
- External entities: terminal, open files
- Communication channels: local, or with other machines
Process control block (PCB)
[Figure: the PCB records the process's state, memory, files, accounting, priority, user, and storage information, together with its saved CPU registers (PSW, IR, PC, SP, general-purpose registers). The CPU holds the register context of the running process; the PCB holds it for a process that is not running. The kernel maintains the PCB; user code does not touch it.]
Process states
[Figure: state diagram. A created process enters the ready state; the scheduler dispatches it to running. A running process may be preempted back to ready, may wait for an event (blocked), or may terminate. When the event is done, a blocked process returns to ready.]
OS Fall’02
UNIX Process Statesrunning
user
runningkernel
readyuser
readykernel blocked
zombie
sys. callinterrupt
schedule
created
return
terminated
wait for event
event done
schedule
preempt
interrupt
Threads
- Thread: an execution within a process
- A multithreaded process consists of many co-existing executions
- Separate per thread: CPU state, stack
- Shared: everything else (text, data, heap, environment)
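The separate-versus-shared split can be seen directly in code. This minimal sketch (the worker function and counts are made up for illustration) shows two threads sharing the same heap object while each keeps its own stack-allocated locals:

```python
import threading

# Threads in one process share the data/heap segments: a global list
# mutated by one thread is visible to all the others. Each thread still
# has its own stack, i.e., its own local variables.
shared = []                  # lives in the shared heap
lock = threading.Lock()

def worker(thread_id):
    local_count = 3          # local variable: on this thread's private stack
    for i in range(local_count):
        with lock:           # shared data needs synchronization
            shared.append((thread_id, i))

threads = [threading.Thread(target=worker, args=(t,)) for t in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))           # both threads' appends are visible: 6
```

Note that the lock is needed precisely because the list is shared; the locals need no protection.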
Thread support
- Operating-system (kernel-level) threads
  - Advantage: thread scheduling is done by the OS, giving better CPU utilization
  - Disadvantage: overhead if there are many threads
- User-level threads
  - Advantage: low overhead
  - Disadvantage: not known to the OS; e.g., a thread blocked on I/O blocks all the other threads within the same process
Multiprogramming
- Multiprogramming: having multiple jobs (processes) in the system
  - Interleaved (time-sliced) on a single CPU
  - Concurrently executed on multiple CPUs
  - Both of the above
- Why multiprogramming? Responsiveness, utilization, concurrency
- Why not? Overhead, complexity
Responsiveness
[Figure: two timelines of jobs 1, 2, and 3 arriving at different times. Top: each job runs to completion in arrival order (Job1, then Job2, then Job3), so later arrivals wait for earlier jobs to finish. Bottom: the CPU is shared among the jobs, changing when each job terminates.]
Workload matters!
- Would CPU sharing improve responsiveness if all jobs took the same amount of time?
  - No, it makes things worse!
- For a given workload, the answer depends on the coefficient of variation (CV) of the distribution of job runtimes
  - CV = standard deviation / mean
  - CV < 1 => CPU sharing does not help
  - CV > 1 => CPU sharing does help
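The CV rule of thumb is easy to check numerically. A small sketch (the runtime lists are made-up examples, not measured workloads):

```python
from statistics import mean, pstdev

def coefficient_of_variation(runtimes):
    """CV = standard deviation / mean of the job runtimes."""
    return pstdev(runtimes) / mean(runtimes)

# Identical runtimes: CV = 0 < 1, so CPU sharing would not help.
print(coefficient_of_variation([10, 10, 10, 10]))                 # 0.0

# A heavy-tail-like mix of many short jobs and one huge one:
# CV > 1, so CPU sharing helps responsiveness.
print(coefficient_of_variation([1, 1, 1, 1, 1, 1, 1, 100]) > 1)   # True
```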
Real workloads
- Exponential distribution: CV = 1; heavy-tailed distribution: CV > 1
- The distribution of job runtimes in real systems is heavy tailed: CV ranges from 3 to 70
- Conclusion: CPU sharing does improve responsiveness
- CPU sharing is approximated by time slicing: interleaved execution
Utilization
[Figure: timelines of CPU and disk activity. Top: with a single job, the CPU is idle during each of the job's I/O operations (1st, 2nd, 3rd) and the disk is idle during its CPU bursts. Bottom: with two jobs, Job2 runs on the CPU while Job1 waits for I/O and vice versa, so both the CPU and the disk are idle far less.]
Workload matters!
- Does it really matter? Yes, of course:
  - If all jobs are CPU bound (or all are I/O bound), multiprogramming does not help to improve utilization
- A suitable job mix is created by long-term scheduling
  - Jobs are classified on-line as CPU bound or I/O bound according to each job's history
Concurrency
- Concurrent programming: several processes interact to work on the same problem
  - ls -l | more
- Simultaneous execution of related applications
  - Word + Excel + PowerPoint
- Background execution
  - Polling/receiving email while working on something else
The cost of multiprogramming
- Switching overhead
  - Saving/restoring context wastes CPU cycles
- Degraded performance
  - Resource contention, cache misses
- Complexity
  - Synchronization, concurrency control, deadlock avoidance/prevention
Short-term scheduling
[Figure: the process state diagram again (created, ready, running, blocked, terminated); short-term scheduling governs the ready-to-running transition.]
Short-term scheduling
- A process's execution pattern consists of alternating CPU cycles and I/O waits: CPU burst - I/O burst - CPU burst - I/O burst...
- Processes ready for execution are held in a ready (run) queue
- The STS schedules a process from the ready queue once the CPU becomes idle
Metrics: response time
[Figure: timeline. A job arrives/becomes ready to run, waits for Twait, starts running, runs for Trun, and then terminates or blocks waiting for I/O.]
- Tresp = Twait + Trun
- Response time (turnaround time) is the average over the jobs' Tresp
Other metrics
- Wait time: average of Twait
  - This parameter is under the system's control
- Response ratio, or slowdown: slowdown = Tresp / Trun
- Throughput and utilization depend on the workload => less useful
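The metrics above can be computed mechanically from per-job timestamps. A minimal sketch (the `metrics` helper and its input format are mine, not from the slides):

```python
def metrics(jobs):
    """jobs: list of (arrival, start, finish) times per job.
    Twait = start - arrival, Trun = finish - start,
    Tresp = Twait + Trun = finish - arrival."""
    waits = [s - a for a, s, f in jobs]
    runs = [f - s for a, s, f in jobs]
    resps = [f - a for a, s, f in jobs]
    n = len(jobs)
    return {
        "wait": sum(waits) / n,
        "response": sum(resps) / n,
        "slowdown": sum(r / t for r, t in zip(resps, runs)) / n,
    }

# Job 1 arrives at t=0 and runs 0-10; job 2 also arrives at t=0 but
# must wait until t=10 and runs 10-12.
m = metrics([(0, 0, 10), (0, 10, 12)])
print(m["wait"], m["response"])   # 5.0 11.0
```

Note how the short second job has slowdown 12/2 = 6 even though the long first job's slowdown is 1: slowdown penalizes making short jobs wait.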
Running time (Trun)
- The length of the CPU burst
  - When a process requests I/O it is still "running" in the system, but it is not a part of the STS workload
- STS view: I/O-bound processes are short processes
  - Although a text editor session may last hours!
Off-line vs. on-line scheduling
- Off-line algorithms
  - Get all the information about all the jobs to schedule as their input
  - Output the scheduling sequence
  - Preemption is never needed
- On-line algorithms
  - Jobs arrive at unpredictable times
  - Very little information is available in advance
  - Preemption compensates for the lack of knowledge
First-Come-First-Serve (FCFS)
- Schedules the jobs in the order in which they arrive
  - Off-line FCFS schedules in the order the jobs appear in the input
- Runs each job to completion
- Both on-line and off-line
- Simple; a base case for analysis
- Poor response time
Shortest Job First (SJF)
- Best response time
[Figure: running the short job before the long job yields a lower average response time than running the long job first.]
- Inherently off-line
  - All the jobs and their run-times must be available in advance
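The advantage of SJF over FCFS shows up even with two jobs. A sketch under simplifying assumptions (all jobs arrive at t=0, run to completion; the job names and runtimes are invented):

```python
def avg_response(order, runtimes):
    """Average response time when jobs (all arriving at t=0)
    run to completion in the given order."""
    t, total = 0, 0
    for job in order:
        t += runtimes[job]
        total += t           # completion time = response time here
    return total / len(order)

runtimes = {"short": 1, "long": 10}

# FCFS with the long job arriving first: responses 10 and 11.
print(avg_response(["long", "short"], runtimes))                   # 10.5
# SJF runs the short job first: responses 1 and 11 -- better on average.
print(avg_response(sorted(runtimes, key=runtimes.get), runtimes))  # 6.0
```

The total work is identical either way; SJF only reorders it so that the short job does not wait behind the long one.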
Preemption
- Preemption is the action of stopping a running job and scheduling another in its place
- Context switch: switching from one job to another
Using preemption
- Enables on-line short-term scheduling algorithms
  - Adapting to changing conditions, e.g., new jobs arriving
  - Compensating for lack of knowledge, e.g., of job run-times
- Periodic preemption keeps the system in control
- Improves fairness
  - Gives I/O-bound processes a chance to run
Shortest Remaining Time first (SRT)
- Job run-times are known; job arrival times are not known
- When a new job arrives:
  - If its run-time is shorter than the remaining time of the currently executing process, preempt the currently executing process and schedule the newly arrived job
  - Else, continue the current job and insert the new job into the sorted queue
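The rule above can be simulated in a few lines. This sketch (my own simplified model: integer times, unit time steps) re-picks the shortest-remaining job every step, which makes the preemption rule fall out naturally:

```python
def srt(jobs):
    """jobs: list of (arrival_time, runtime). Simulates Shortest
    Remaining Time first in unit time steps; returns finish times
    indexed like the input."""
    remaining = [r for _, r in jobs]
    finish = [None] * len(jobs)
    t = 0
    while any(f is None for f in finish):
        # ready jobs: arrived and not yet finished
        ready = [i for i, (a, _) in enumerate(jobs)
                 if a <= t and finish[i] is None]
        if not ready:
            t += 1
            continue
        # re-picking the minimum each step == preempt whenever a
        # newly arrived job has less remaining time
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    return finish

# Job 0 (runtime 5) starts at t=0; job 1 (runtime 2) arrives at t=1
# with 2 < 4 remaining for job 0, so it preempts and finishes first.
print(srt([(0, 5), (1, 2)]))   # [7, 3]
```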
Round Robin (RR)
- Both job arrival times and job run-times are unknown
- Run each job cyclically for a short time quantum
  - Approximates CPU sharing
[Figure: timeline of jobs 1, 2, and 3 arriving at different times and running interleaved in quantum-sized slices (1 2 3 1 2 3 ...) until each terminates.]
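A round-robin timeline like the one in the figure can be reproduced with a small simulator (my own sketch, assuming integer times and jobs listed in arrival order):

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (arrival, runtime), sorted by arrival.
    Runs each ready job for up to `quantum` time units, cycling
    through the ready queue; returns finish times."""
    remaining = [r for _, r in jobs]
    finish = [None] * len(jobs)
    queue = deque()
    t, next_job = 0, 0
    while next_job < len(jobs) or queue:
        # admit jobs that have arrived by time t
        while next_job < len(jobs) and jobs[next_job][0] <= t:
            queue.append(next_job)
            next_job += 1
        if not queue:
            t = jobs[next_job][0]   # CPU idle until the next arrival
            continue
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        # admit arrivals during this slice before re-queueing i
        while next_job < len(jobs) and jobs[next_job][0] <= t:
            queue.append(next_job)
            next_job += 1
        if remaining[i] == 0:
            finish[i] = t
        else:
            queue.append(i)         # quantum expired: back of the queue
    return finish

# Three jobs arriving at t=0 with runtimes 3, 2, 1 and quantum 1:
# the shortest job finishes first, as CPU sharing would predict.
print(round_robin([(0, 3), (0, 2), (0, 1)], 1))   # [6, 5, 3]
```

Shrinking the quantum makes the schedule approach ideal CPU sharing, at the cost of more context switches.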
Responsiveness (revisited)
[Figure: the earlier responsiveness timelines again: jobs 1-3 run to completion in arrival order vs. time-sliced, RR-style execution of the same jobs.]
Priority scheduling
- RR is oblivious to the process's past
  - I/O-bound processes are treated equally with the CPU-bound processes
- Solution: prioritize processes according to their past CPU usage
- Estimate the next CPU burst by exponential averaging:
    E(n+1) = a * T(n) + (1 - a) * E(n),  0 <= a <= 1
  where T(n) is the duration of the n-th CPU burst and E(n+1) is the estimate of the next CPU burst
- For a = 1/2 this expands to:
    E(n+1) = T(n)/2 + T(n-1)/4 + T(n-2)/8 + T(n-3)/16 + ...
  i.e., recent bursts dominate the estimate
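The exponential-averaging recurrence is one line of code. A sketch (the initial estimate e0 = 10 and the burst values are arbitrary assumptions for illustration):

```python
def next_burst_estimate(bursts, alpha=0.5, e0=10.0):
    """E(n+1) = alpha * T(n) + (1 - alpha) * E(n).
    bursts: measured CPU burst durations T(1)..T(n);
    e0 is the initial guess E(1)."""
    e = e0
    for t in bursts:
        e = alpha * t + (1 - alpha) * e
    return e

# With alpha = 1/2, each older burst's weight halves: after bursts
# 6, 4, 6, 4 the estimate has moved from the initial 10 down to 5.
print(next_burst_estimate([6, 4, 6, 4]))   # 5.0
```

With alpha = 0 the history is frozen at e0; with alpha = 1 only the most recent burst counts.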
Multilevel feedback queues
[Figure: new jobs enter the highest-priority queue (quantum=10); jobs that exhaust their quantum drop to lower-priority queues (quantum=20, then quantum=40), with the lowest level scheduled FCFS; jobs may terminate from any level.]
Multilevel feedback queues
- Priorities are implicit in this scheme
- Very flexible
- Starvation is possible: short jobs keep arriving => long jobs get starved
- Solutions:
  - Let it be
  - Aging
Priority scheduling in UNIX
- Multilevel feedback queues
  - The same quantum at each queue; a queue per priority
- Priority is based on past CPU usage: pri = cpu_use + base + nice
  - cpu_use is dynamically adjusted: incremented at each clock interrupt (100 per second), and halved for all processes once per second
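The increment-and-halve dynamics can be sketched as a toy model (the base value 50 and the per-second accounting granularity are assumptions; real UNIX variants differ in the details):

```python
def unix_priority(ticks_per_second, seconds, base=50, nice=0):
    """Toy model of pri = cpu_use + base + nice: cpu_use gains one
    unit per clock interrupt while the process runs (100/sec for a
    CPU hog) and is halved once per second. Lower pri = better."""
    cpu_use = 0
    for _ in range(seconds):
        cpu_use += ticks_per_second   # charged at each clock interrupt
        cpu_use //= 2                 # decayed once per second
    return cpu_use + base + nice

# A CPU hog's cpu_use converges toward ~100 (100/2 + 100/4 + ...),
# so its priority value keeps worsening relative to an idle process.
print(unix_priority(100, 5))   # 146: the hog
print(unix_priority(0, 5))     # 50: a process that did not run
```

The halving is exactly the exponential averaging idea again: recent CPU usage counts most, and a process that stops running sees its cpu_use decay away.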
Fair-share scheduling
- Achieving pre-defined goals
- Administrative considerations
  - Paying for machine usage, importance of the project, personal importance, etc.
- Quality of service, soft real-time
  - Video, audio
Fair-share scheduling: algorithms
- A term in the priority formula reflects cumulative CPU usage, plus aging when the process is not used
- Lottery scheduling:
  - Each process gets a number of lottery tickets proportional to its CPU allocation
  - The scheduler picks a ticket at random and schedules the client holding it
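Lottery scheduling is probabilistic but converges to the ticket proportions. A sketch (process names and ticket counts are invented; a fixed seed is used so the run is reproducible):

```python
import random

def lottery_pick(allocations, rng):
    """allocations: dict process -> number of tickets (proportional
    to its CPU share). Draws one ticket uniformly at random; the
    process holding it runs next."""
    tickets = [p for p, n in allocations.items() for _ in range(n)]
    return rng.choice(tickets)

# Process "a" holds 3 of the 4 tickets, "b" holds 1: over many
# drawings "a" wins about 75% of the time.
rng = random.Random(0)               # fixed seed for reproducibility
wins = sum(lottery_pick({"a": 3, "b": 1}, rng) == "a"
           for _ in range(10_000))
print(0.7 < wins / 10_000 < 0.8)     # True (0.75 in expectation)
```

Randomization makes the scheme starvation-free in expectation: any process with at least one ticket eventually wins a drawing.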
Fair-share scheduling: VTRR
- Virtual-Time Round Robin (VTRR):
  - Order the ready queue by decreasing shares (the highest share at the head)
  - Run Round Robin as usual
  - Once a process that has exhausted its share is encountered, go back to the head of the queue
Multiprocessor scheduling
- Homogeneous vs. heterogeneous processors
  - Homogeneity allows for load sharing
- A separate ready queue for each processor, or a common ready queue?
- Scheduling: symmetric, or master/slave
A bank or a supermarket?
[Figure: two queueing configurations for four CPUs. Left (M/M/4, the "bank"): arriving jobs join a single shared queue feeding CPU1-CPU4. Right (4 x M/M/1, the "supermarket"): the arrival stream is split (λ/4 to each) among four separate queues, one per CPU.]
It is a bank!
[Figure: plot of average response time (0-30) vs. utilization. As utilization grows, the shared queue (M/M/4) yields a much lower average response time than the four separate queues (4 x M/M/1), since no CPU sits idle while jobs wait elsewhere.]
Next: Concurrency