Sporadic Server Scheduling in Linux: Theory vs. Practice
Mark Stanovich, Theodore Baker, Andy Wang


Page 1: Sporadic Server Scheduling in Linux Theory vs. Practice

Sporadic Server Scheduling in Linux: Theory vs. Practice

Mark Stanovich
Theodore Baker
Andy Wang

Page 2: Sporadic Server Scheduling in Linux Theory vs. Practice

Real-Time Scheduling Theory

• Analysis techniques to design a system to meet timing constraints

• Schedulability analysis
  – Workload models
  – Processor models
  – Scheduling algorithms


Page 4: Sporadic Server Scheduling in Linux Theory vs. Practice

Periodic Task

[Figure: timeline of a task's jobs (j1, j2, j3, …), each with a release time; Period = T, Computation time (WCET) = C, Deadline = D]

Task = {T, C, D}

Page 5: Sporadic Server Scheduling in Linux Theory vs. Practice

Periodic Task

[Figure: timeline of a periodic task implemented with sched_setscheduler(SCHED_FIFO) and clock_nanosleep() to sleep until each release time]

Page 6: Sporadic Server Scheduling in Linux Theory vs. Practice

Periodic Task

• Assumptions
  – WCET is reliable
  – Arrivals are periodic

• Not realistic for most tasks

Page 7: Sporadic Server Scheduling in Linux Theory vs. Practice

Polling Server

[Figure: polling-server timeline; job arrivals on top, server budget below, with the initial budget refilled at every replenishment period]

Page 8: Sporadic Server Scheduling in Linux Theory vs. Practice

Polling Server

• Type of aperiodic server
• CPU usage no worse than an equivalent periodic task
  – Can be modeled as a periodic task
    • WCET = Initial Budget
    • Period = Replenishment Period
• Budget consumed as CPU time is used
  – CPU time forfeited if not used
• Replenish budget every period

Page 9: Sporadic Server Scheduling in Linux Theory vs. Practice

Polling Server

• Good
  – Bounds CPU usage
  – Analyzable workload model
  – Simplicity
• Can be better
  – Faster response time if budget is not forfeited

Page 10: Sporadic Server Scheduling in Linux Theory vs. Practice

Sporadic Server

[Figure: sporadic-server timeline; job arrivals on top, budget below, with replenishments posted one replenishment period after the budget is consumed]

Page 11: Sporadic Server Scheduling in Linux Theory vs. Practice

Sporadic Server

• Originally proposed by Sprunt et al.
• Parameters
  – Initial budget
  – Replenishment period
• Bounds max CPU interference for other tasks
• Fits into the periodic task workload model
• Better avg. response time than polling server

Page 12: Sporadic Server Scheduling in Linux Theory vs. Practice

Sporadic Server

• Scheduling algorithm for fixed-task-priority systems
  – Can be used in the UNIX priority model
• SCHED_SPORADIC is a version of SS defined in the POSIX standard
  – POSIX variant has some errors
  – Corrected version in another paper

Page 13: Sporadic Server Scheduling in Linux Theory vs. Practice

Implementation

• Linux 2.6.38
  – Sporadic server implementation
    • Corrected version
  – Softirq threading patch ported from earlier RT patch
• Only tested on uniprocessor

Page 14: Sporadic Server Scheduling in Linux Theory vs. Practice

Sporadic Server Performance

• Metrics
  – CPU interference for lower-priority tasks
  – Average response time

Page 15: Sporadic Server Scheduling in Linux Theory vs. Practice

An Experiment

[Figure: two machines, A and B. A sends UDP packets containing the current timestamp; B receives them. Response time is calculated from arrival at the UDP layer; CPU time is measured over a 10-second burst.]

Page 16: Sporadic Server Scheduling in Linux Theory vs. Practice

Measuring CPU Time

• Regehr's “hourglass” technique
  – Constantly read the time stamp counter
  – Detect preemptions by a large difference between successive reads
  – Sum the execution chunks
• Hourglass thread runs at lower priority than the net-rx thread
• Measures interference from the net-rx thread

Page 17: Sporadic Server Scheduling in Linux Theory vs. Practice

Measuring CPU Time

• Network receive thread
  – SCHED_FIFO
  – Sporadic and polling server
    • Budget = 1 msec
    • Period = 10 msec
• Hourglass thread
  – SCHED_FIFO
  – Lower priority than network receive thread

Page 18: Sporadic Server Scheduling in Linux Theory vs. Practice

CPU Utilization

Page 19: Sporadic Server Scheduling in Linux Theory vs. Practice

Average Response Time

Page 20: Sporadic Server Scheduling in Linux Theory vs. Practice

Average Response Time

Page 21: Sporadic Server Scheduling in Linux Theory vs. Practice

Interference

• CPU usage not limited properly
• Additional overheads
  – Context switch time
  – Cache eviction and reloading
• Not in the theoretical workload model
• Guarantees of theory require interference to be included in the analysis

Page 22: Sporadic Server Scheduling in Linux Theory vs. Practice

Polling Server

[Figure: polling-server timeline; per period the budget must also cover 2 × (CS + SS) of context-switch overhead; legend: aperiodic job CPU time, aperiodic job arrival]

Page 23: Sporadic Server Scheduling in Linux Theory vs. Practice

Sporadic Server

[Figure: sporadic-server timeline; with up to max_repl replenishments per period, the budget must also cover max_repl × 2 × (CS + SS) of context-switch overhead; legend: aperiodic job CPU time, aperiodic job arrival, replenishment period]

Page 24: Sporadic Server Scheduling in Linux Theory vs. Practice

Over Provisioning

• All context switch time may not be used
  – e.g., one replenishment per period
• Better solution
  – Account for CS time on-line
  – Charge SS for each preemption

Page 25: Sporadic Server Scheduling in Linux Theory vs. Practice

CPU Utilization

Page 26: Sporadic Server Scheduling in Linux Theory vs. Practice

Average Response Time

Page 27: Sporadic Server Scheduling in Linux Theory vs. Practice

Average Response Time

Page 28: Sporadic Server Scheduling in Linux Theory vs. Practice

Can we get the best of both?

• Sporadic Server (light load)
  – Low response time
• Polling Server (heavy load)
  – Low response time
  – No dropped pkts

Page 29: Sporadic Server Scheduling in Linux Theory vs. Practice

Hybrid Server

• How to switch
  – SS with 1 replenishment is the same as a polling server
  – Coalesce replenishments
  – Ensure bounded interference
    • Push replenishments further into the future
• Switching point
  – Server has work but no budget

Page 30: Sporadic Server Scheduling in Linux Theory vs. Practice

Coalescing Replenishments

[Figure: replenishment timeline before coalescing; the server is out of budget while work is available]

Page 31: Sporadic Server Scheduling in Linux Theory vs. Practice

Coalescing Replenishments

[Figure: replenishment timeline after coalescing; pending replenishments merged so the budget arrives in one chunk]

Page 32: Sporadic Server Scheduling in Linux Theory vs. Practice

Average Response Time

Page 33: Sporadic Server Scheduling in Linux Theory vs. Practice

CPU Utilization

Page 34: Sporadic Server Scheduling in Linux Theory vs. Practice

Switching Between Modes

• Immediate coalescing may be too extreme
  – CPU time could be used for better response time
• Gradual approach
  – Coalesce fewer replenishments at a time

Page 35: Sporadic Server Scheduling in Linux Theory vs. Practice

Gradual Coalescing

[Figure: gradual coalescing, step 1; the server is out of budget while work is available]

Page 36: Sporadic Server Scheduling in Linux Theory vs. Practice

Gradual Coalescing

[Figure: gradual coalescing, step 2; a few replenishments merged at a time]

Page 37: Sporadic Server Scheduling in Linux Theory vs. Practice

Gradual Coalescing

[Figure: gradual coalescing, final timeline]

Page 38: Sporadic Server Scheduling in Linux Theory vs. Practice

Average Response Time

Page 39: Sporadic Server Scheduling in Linux Theory vs. Practice

CPU Utilization

Page 40: Sporadic Server Scheduling in Linux Theory vs. Practice

Conclusion

• Theoretical analysis provides solid guarantees
• Implementation must match abstract models
  – Additional interference terms need to be considered
  – SS can fit into the theoretical analysis
    • CPU interference experienced by both SS and the preempted task

Page 41: Sporadic Server Scheduling in Linux Theory vs. Practice

Questions?

Page 42: Sporadic Server Scheduling in Linux Theory vs. Practice


Differences Break Model

• Budget amplification
• Premature replenishment
• Incomplete temporal isolation

Page 43: Sporadic Server Scheduling in Linux Theory vs. Practice


Budget Amplification

• Accounting error
  – Overruns not always charged to the server
• Max execution ≤ server budget + clock resolution
• “if the available execution capacity would become negative ... it shall be set to zero”

Page 44: Sporadic Server Scheduling in Linux Theory vs. Practice


Budget Amplification

Page 45: Sporadic Server Scheduling in Linux Theory vs. Practice


Premature Replenishment

Page 46: Sporadic Server Scheduling in Linux Theory vs. Practice


Defect #3: Incomplete Temporal Isolation

• With temporal isolation, a failure in one task does not prevent others from meeting their timing constraints
• Problem: execution at low priority
  – Still preempts non-“real-time” work

Page 47: Sporadic Server Scheduling in Linux Theory vs. Practice


Unreliable Temporal Isolation

[Figure: priority ranges, highest to lowest: SCHED_FIFO / SCHED_RR / SCHED_SPORADIC at the highest priorities, SCHED_OTHER at the lowest]

Page 48: Sporadic Server Scheduling in Linux Theory vs. Practice

Deferrable Server

Page 49: Sporadic Server Scheduling in Linux Theory vs. Practice

Deferrable Server: Bandwidth Preserving

• Allow server to retain budget
• Periodically replenish budget
• WCET != Budget

Page 50: Sporadic Server Scheduling in Linux Theory vs. Practice

Response Time

Page 51: Sporadic Server Scheduling in Linux Theory vs. Practice

Replenishment Policy

[Figure: replenishment-policy timeline; the initial budget is replenished one replenishment period after the arrival time (when work becomes available for the server)]

Page 52: Sporadic Server Scheduling in Linux Theory vs. Practice

Bandwidth Preservation

[Figure: bandwidth-preservation timeline; the initial budget is retained until work arrives, and the replenishment is posted one replenishment period after the arrival time]

Page 53: Sporadic Server Scheduling in Linux Theory vs. Practice

Sporadic Server

[Figure: sporadic-server timeline]

Page 54: Sporadic Server Scheduling in Linux Theory vs. Practice

Analysis

• Light load
  – Sporadic Server
    • Low response time
  – Polling Server
    • High response time
• Heavy load
  – Sporadic Server
    • High response time
    • Dropped packets
  – Polling Server
    • Low response time
    • No dropped packets