Optimal and Adaptive Multiprocessor Real-Time Scheduling: The Quasi-Partitioned Approach

Γ = { { E. Massa, G. Lima, P. Regnier }, { G. Levin, S. Brandt } }

Real-Time Systems, Andrei Petrov
Padua, October 13th, 2015
Outline

+ Motivation
+ QPS Algorithm
+ Evaluation
+ Conclusion
Introduction

QPS
# optimal
# partitioned proportionate fairness
# preserves space & time locality of tasks
# gentle off-line bin-packing
Motivation

QPS
# on-line adaptation
# efficient switching mechanism P-EDF <--> G-Scheduling rules
# low preemption and migration overhead
QPS Algorithm

Off-line phase
❏ Builds the major/minor execution sets (quasi-partition)
❏ Allocates processors to execution sets

On-line phase
❏ Generates the schedule
❏ Manages the activation/deactivation of QPS servers

Quasi-partitioning is the cornerstone in dealing with sporadic tasks
QPS Algorithm /1

System Model
❏ Γ = { τi:(Ri, Di) } for i = 1,…,n is a sporadic task set
❏ independent, implicit-deadline tasks
❏ Ri ≤ 1: execution rate
❏ Di: infinite set of deadlines
❏ m identical processors
❏ migrations and preemptions allowed
❏ A job J:(r,c,d) of τi
❏ J.r ∈ { 0 } ∪ Di
❏ J.d = min{ d ∈ Di : d > J.r }
❏ J.c = Ri · (J.d − J.r)
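The job parameters above can be sketched in code. A minimal illustration (Python); for simplicity the infinite deadline set Di is stood in for by a finite list, and the function name is my own:

```python
# Minimal sketch of the job model above. For illustration, the (infinite)
# deadline set D_i is represented by a finite sorted list.

def make_job(rate, deadlines, release):
    """Build J:(r, c, d) with d = min{d in D_i : d > J.r}, c = R_i * (d - r)."""
    d = min(x for x in deadlines if x > release)  # next deadline after release
    c = rate * (d - release)                      # budget proportional to rate
    return (release, c, d)
```

For example, a task with rate 0.5 and deadlines {3, 6, 9} releasing a job at time 0 gets budget 1.5 and deadline 3.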
QPS Algorithm /2

Definition
❏ A server S is a scheduling abstraction
❏ acts as a proxy for a collection of client tasks/servers
❏ schedules its clients according to the EDF policy
❏ Fixed-Rate EDF Server S:(RS, DS)
❏ RS = Σ_{τ∈T} R(τ)
❏ RS ≤ 1
❏ DS = ⋃_{τ∈T} D(τ)
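The aggregation in this definition is straightforward to express; a small sketch (Python, with finite deadline sets for illustration, function name my own):

```python
# Sketch of a fixed-rate EDF server's parameters, aggregated from its
# clients as defined above: R_S is the sum of client rates, D_S the union
# of client deadline sets.

def server_params(clients):
    """clients: list of (rate, deadline_set) pairs; returns (R_S, D_S)."""
    R_S = sum(rate for rate, _ in clients)
    assert R_S <= 1, "a fixed-rate server must fit on one processor"
    D_S = set().union(*(ds for _, ds in clients))
    return R_S, D_S
```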
QPS Algorithm /3

Definition
❏ A quasi-partition Q(Γ,m) is a partition of Γ on m processors s.t.
1. |Q(Γ,m)| ≤ m
➢ restricts the cardinality of execution sets to the number of available processors
2. ∀P∈Q(Γ,m), 0 < R(P) < 2
➢ each element P in Q(Γ,m) is
○ a minor execution set if R(P) ≤ 1
○ a major execution set if R(P) > 1
➢ P requires at most two processors to be correctly scheduled
3. ∀P∈Q(Γ,m), ∀σ∈P: R(P) > 1 ⟹ R(σ) > R(P) − 1
➢ if P is a major execution set, its excess rate R(P) − 1 must be less than the rate of every server σ in P
➢ keystone of QPS's flexibility

N.B. A graphical example follows
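The three conditions can be checked mechanically. A minimal sketch (Python, task rates as plain floats; the function name is my own):

```python
# A minimal checker for the three quasi-partition conditions in the
# definition above. Each execution set is a list of task/server rates.

def is_quasi_partition(partition, m):
    """partition: list of execution sets, each a list of rates."""
    # 1. at most m execution sets
    if len(partition) > m:
        return False
    for P in partition:
        rate = sum(P)
        # 2. each execution set needs at most two processors
        if not (0 < rate < 2):
            return False
        # 3. a major set's excess rate must be below every member's rate
        if rate > 1 and any(r <= rate - 1 for r in P):
            return False
    return True
```

On the example that follows, {{0.8, 0.2}, {0.6, 0.3}} is a valid quasi-partition on 2 processors, while a single set {1.5, 0.4} violates the third condition (excess 0.9 exceeds the rate 0.4).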
QPS Algorithm /4

Example
➢ given 2 processors and a task set P = { τ1:(0.6), τ2:(0.3), τ3:(0.8), τ4:(0.2) }
➔ R(P) = 1.9 ≤ 2
◆ 1st condition is OK
◆ 2 execution sets, Proc1 and Proc2
◆ max(capacity(Proci∈{1,2})) = 1
➔ R(P) = 1.9 > 1
◆ 2nd condition is OK
◆ more than 1 processor is required to schedule P
➔ let P = {PA, PB} such that
◆ PA = { τ3, τ4 }, PB = { τ1, τ2 }
◆ R(PA) = 1, R(PB) = 0.9
● no excess rate: fully partitioned
● the slack 1 − R(PB) may be filled by an idle task on Proc2
➔ if τ4's rate is modified to 0.3, the processor system becomes 100% utilized
◆ the 3rd condition holds
◆ the excess rate (red in the figure) is less than the rate of all tasks in Proc1's execution set
● otherwise a better re-allocation of tasks would exist
QPS Algorithm /5

Definitions
❏ QPS servers
➔ P is a major execution set with rate R(P) = 1 + x
➔ P = {PA, PB} is a bi-partition
➔ dedicated servers: σA:(R(PA) − x, PA), σB:(R(PB) − x, PB)
➔ master/slave servers: σM:(x, P), σS:(x, P), where x is the excess rate
➢ at any time t, all QPS servers associated with P share the same deadline D(P,t)
➢ each QPS server σ is a Fixed-Rate EDF Server
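Deriving the four QPS servers from a major execution set follows directly from these definitions; a minimal sketch (Python, assuming the bi-partition (PA, PB) is given; names are illustrative, and real QPS deadline bookkeeping is omitted):

```python
# Sketch of the four QPS servers for a major execution set P = PA ∪ PB
# with R(P) = 1 + x, per the definitions above.

def qps_servers(PA, PB):
    """PA, PB: lists of task rates forming a bi-partition of P."""
    x = sum(PA) + sum(PB) - 1          # excess rate: R(P) = 1 + x
    assert x > 0, "P must be a major execution set (R(P) > 1)"
    return {
        "sigma_A": (sum(PA) - x, PA),  # dedicated server for PA
        "sigma_B": (sum(PB) - x, PB),  # dedicated server for PB
        "sigma_M": (x, PA + PB),       # master server (shared processor)
        "sigma_S": (x, PA + PB),       # slave server, mirrors the master
    }
```

With the rates of the later example (0.4, and 0.4 + 0.5), this yields the σA:0.1, σB:0.6, σM:0.3, σS:0.3 quoted there.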
QPS Algorithm /6

❏ Γ = { τi:(2/3, 3) } for i = 1,2,3 on two processors
❏ P1 = { τ1, τ2 }, P2 = { τ3 }
❏ R(P1) = 4/3, R(P2) = 2/3
❏ the [2,3] time interval denotes parallel execution

(Figure: schedules of the fixed-rate servers and of the master/slave servers)
QPS Algorithm /7

Off-line phase
❏ Γ = { σi } for i = 1,2,…,5, a server set on three processors
❏ R(σi) = 0.6
❏ Proc.1 and Proc.2: dedicated processors
❏ Proc.3: shared processor
➢ σ6, σ7: external servers reserve computing capacity for the exceeding parts

Processor Hierarchy
Lemma IV.1. Any server set Γ0 with ⌈R(Γ0)⌉ ≤ m will be allocated no more than m processors
QPS Algorithm /8

On-line phase: Scheduling
❏ servers and tasks are scheduled according to the following rules:
❏ visit each processor in reverse order of its allocation
❏ select via EDF the highest-priority server/task
❏ if a master server is selected, also select its associated slave server
❏ for all selected servers, select their highest-priority clients
❏ dispatch all selected tasks to execute
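The selection rules above can be sketched as a toy EDF pick plus a resolution step that follows a selected server down to an actual task. The Entity class and all names are my own; real QPS budget accounting and the master/slave pairing are omitted:

```python
# Toy sketch of the dispatching rules: EDF selection among servers/tasks,
# then resolving a selected server down to its highest-priority client task.

from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    deadline: float
    clients: list = field(default_factory=list)  # empty => an actual task

def edf_pick(entities):
    """Select the entity with the earliest deadline (highest EDF priority)."""
    return min(entities, key=lambda e: e.deadline)

def resolve(entity):
    """Follow a selected server down to its highest-priority client task."""
    while entity.clients:
        entity = edf_pick(entity.clients)
    return entity
```

For instance, choosing between a server with deadline 10 and a task with deadline 5 picks the task; resolving the server drills into its earliest-deadline client.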
QPS Algorithm /9

On-line phase: Scheduling
❏ Γ = { τ1:(6,15), τ2:(12,30), τ3:(5,10), τ4:(3.5,5) }, a set of sporadic tasks
❏ to be scheduled by QPS servers on two processors
❏ all tasks release their first job at time 0
❏ the second job of τ3 arrives at time t = 16, whereas the other tasks behave periodically
❏ Q(Γ,2) = {P1, P2} with P1 = { τ1, τ2, τ3 } and P2 = { τ4 }
❏ R(P1) = 1.3 > 1 ⟹ σA:(0.1, {τ1}), σB:(0.6, {τ2, τ3}), σM:(0.3, P1), σS:(0.3, P1)
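The rates quoted in this example can be re-derived from the (C, T) pairs; a quick check (Python, dictionary keys are my own shorthand for τ1..τ4):

```python
# Re-deriving the rates above: each task's rate is C/T, and the server
# rates follow from the excess x = R(P1) - 1.

tasks = {"t1": (6, 15), "t2": (12, 30), "t3": (5, 10), "t4": (3.5, 5)}
rate = {name: c / t for name, (c, t) in tasks.items()}

R_P1 = rate["t1"] + rate["t2"] + rate["t3"]  # 0.4 + 0.4 + 0.5 = 1.3
x = R_P1 - 1                                 # excess 0.3: rates of sigma_M, sigma_S
sigma_A = rate["t1"] - x                     # 0.1, dedicated server for {t1}
sigma_B = rate["t2"] + rate["t3"] - x        # 0.6, dedicated server for {t2, t3}
```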
QPS Correctness /1

Assumptions: Time Definitions
❏ Δ = [t, t*) is a time interval and P a major execution set
❏ Δ is a complete EDF interval if the Manager activates EDF mode for P at time t and next activates QPS mode at t*
❏ Δ is a complete QPS interval if all tasks in P are active
❏ Δ is a QPS job interval if some task in P releases a job at time t and t* = D(Γ,t)

QPS components: the Partitioner, the Manager, the Dispatcher
QPS Correctness /2

Theorem V.1. QPS produces a valid schedule for a set of implicit-deadline sporadic tasks Γ on m ≥ ⌈R(Γ)⌉ identical processors.

Proof sketch: let P be a major execution set.

Lemma V.1. Consider a complete QPS interval Δ. If the master server σM in charge of P is scheduled on its shared processor so that it meets all its deadlines in Δ, then the other three QPS servers will also meet theirs on P's dedicated processor.

Lemma V.2. The individual tasks and servers in P will meet all their deadlines provided that the master server in charge of P meets its deadlines.
Implementation

❏ QPS implemented¹ on top of LITMUS^RT

¹ Compagnin D., Mezzetti E., Vardanega T., "Experimental evaluation of optimal schedulers based on partitioned proportionate fairness", ECRTS 2015

# off-line decisions may influence run-time performance
# RUN vs QPS comparison
# empirical evaluation
Evaluation /1

QPS vs RUN¹ vs U-EDF²

¹ Regnier P., Lima G., Massa E., Levin G., Brandt S., "RUN: Optimal Multiprocessor Real-Time Scheduling via Reduction to Uniprocessor", RTSS 2011
² Nelissen G., Berten V., Nelis V., Goossens J., Milojevic D., "U-EDF: An Unfair but Optimal Multiprocessor Scheduling Algorithm for Sporadic Tasks", ECRTS 2012

# optimal scheduling algorithms
# the performance of the algorithms was assessed via simulation
# QPS's performance is influenced by the "processor hierarchy": tasks running on the k-th processor may migrate to any of the m − k processors
Evaluation /2

QPS vs RUN vs U-EDF: developing intuition about run-time overhead
❏ given
❏ m processors
❏ m+1 periodic tasks that fully utilize the m processors
❏ the average hierarchy size is (m−1)/2
❏ for instance, with m = 5:
❏ the average hierarchy size is 2
❏ m+1 = 5+1 = 6 periodic tasks
❏ in the worst case the hierarchy has as many levels as the m available processors (i.e. 5 processor levels)
❏ "bin over-packing" propagation phenomenon
Evaluation /3

QPS vs RUN vs U-EDF: continuing the case study
❏ as the size of the task set grows
❏ for example from m+1 tasks to 2m
❏ the run-time overhead drops because of
❏ lighter tasks
❏ a more linear task-set partitioning
❏ EDF mode, which behaves like Partitioned EDF
Conclusion

QPS
❏ compares favorably against other state-of-the-art global schedulers
★ while showing a manageable run-time overhead
❏ outperforms similar schedulers
★ when the processor hierarchy is shallow
★ in the presence of a fully partitioned task system (i.e. P-EDF behavior)
➔ the off-line phase may induce significant run-time overhead
◆ inter-server coordination may be non-trivial
❏ depicts a simple server-based scheduling abstraction model
★ dynamic adaptation as a function of system-load variations
★ parallel execution needs are assured by the master/slave relation
Future Work

❏ extend QPS to broader problem constraints
❏ arbitrary deadlines, heterogeneous multiprocessors, etc.
❏ explore different adaptation strategies, extending the current master/slave abstraction model towards less expensive inter-processor communication mechanisms
❏ improve the quasi-partitioning implementation
❏ extend it with different implementations
❏ evaluate the performance differences between them