April 29, 2008, UC Berkeley EECS, Berkeley, CA
Anytime Control Algorithms for Embedded Real-Time Systems

L. Greco, D. Fontanelli, A. Bicchi
Interdepartmental Research Center “E. Piaggio”, University of Pisa
Introduction

General tendency in embedded systems: implementation of many concurrent real-time tasks on the same platform, for a reduction of overall HW cost and development time.

Highly time-critical control tasks are traditionally scheduled with very conservative approaches, yielding a rigid, hardly reconfigurable, underperforming architecture.

Modern multitasking RTOSs (e.g., in automotive ECUs) schedule their tasks dynamically, adapting to varying load conditions and QoS requirements.
Introduction

Real-time preemptive algorithms (e.g., RM or EDF) can suspend task execution on higher-priority interrupts.

Guarantees of schedulability, based on estimates of Worst-Case Execution Time (WCET), are obtained at the cost of HW underexploitation: e.g., RM can only guarantee schedulability if less than about 70% of the CPU is utilized.

In other terms: in most CPU cycles, a longer time is available than the worst-case guarantee. The problem of Anytime Control is to make good use of that extra time.
Anytime Paradigm

Anytime algorithms and filters:
- execution can be interrupted at any time, always producing a valid output;
- increasing the computational time increases the accuracy of the output (imprecise computation).

Can we apply this to controllers?

[Block diagram: a bank of filters F1(z), F2(z), F3(z) between input r and output y; the switch position i selects which filter stages contribute to the output, trading accuracy for computation time.]
Example (I)

[Block diagram: a feedback loop around the plant G(z); the switch i selects one of three controllers C1(z), C2(z), C3(z) acting on the tracking error r - y, with additional signals entering at the summing junctions.]
Example (II) – Regulation Problem: RMS comparison

[Plots, over 50 time periods: the (maximum) available execution time scheduled for the control task, and the RMS error obtained with each fixed controller 1, 2, 3. Running controller 3 alone is not feasible.]

Conservative policy: stable, but poor performance.
Example (III) – Regulation Problem: RMS comparison

[Plots, over 50 time periods: the scheduled execution time, and the RMS error under the greedy policy (always run the maximum allowed controller i), which grows to the order of 10^5.]

Greedy policy: unstable!
Issues in Anytime Control

- Hierarchical Design: controllers must be ordered in a hierarchy of increasing performance;
- Switched System Performance: stability and performance of the switched system must be addressed;
- Practicality: implementation of both control and scheduling algorithms must be simple (limited resources);
- Composability: computation of higher controllers should exploit computations of lower controllers (recommended).
Problem Formulation

Consider a linear, discrete-time, time-invariant plant

    x(t+1) = A x(t) + B u(t),    y(t) = C x(t),

and a family of stabilizing feedback controllers {C_i, i = 1, ..., n}. Controller i provides better performance than controller j if i > j (but WCET_i > WCET_j).

With controller i in the loop, the closed-loop system is x̂(t+1) = Â_i x̂(t), where x̂ collects plant and controller states; under switching, it is a jump linear system.
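The switched closed-loop dynamics can be sketched numerically. The matrices below are toy stand-ins, not the talk's models: three contractions of decreasing norm play the roles of Â_1, Â_2, Â_3, and an i.i.d. scheduler picks which controller runs at each sampling instant.

```python
import numpy as np

def rot(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

# Toy closed-loop matrices A_hat[i] (assumed for illustration):
# a higher index means a better controller, i.e., a faster contraction.
A_hat = [0.95 * rot(0.3), 0.70 * rot(0.3), 0.40 * rot(0.3)]

rng = np.random.default_rng(0)
p = [0.05, 0.25, 0.70]           # assumed i.i.d. scheduler distribution
x = np.array([1.0, 0.0])
for _ in range(300):
    i = rng.choice(3, p=p)       # controller executed in this slot
    x = A_hat[i] @ x             # one step of the jump linear system
print(np.linalg.norm(x))         # decays toward 0: this switched loop is stable
```

Since every matrix here contracts the state, the trajectory converges regardless of the schedule; the interesting case on the following slides is when switching itself can destroy stability.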
Scheduler Description

- Sampling instants: t = 0, 1, 2, ...
- Time allotted to the control task in each sampling interval: τ(t)
- Worst-Case Execution Times of the controllers: δ_1 ≤ δ_2 ≤ ... ≤ δ_n
- Time map: s(t) = max { i : δ_i ≤ τ(t) }, the largest controller index executable in the slot at time t
Scheduler Description – Stochastic Scheduler as an I.I.D. Process

At time t, the time slot is such that all controllers j ≤ i, but no controller k > i, can be executed: s(t) = i.

A simple stochastic description of the random sequence {s(t)} is an i.i.d. process:

    Pr{ s(t) = i } = p_i,    i = 1, ..., n.
Scheduler Description – Stochastic Scheduler as a Markov Chain

A more general description uses a finite-state, discrete-time, homogeneous, irreducible, aperiodic Markov chain.

[Diagram: a three-state chain with transition probabilities p_11, p_12, ..., p_33 between states 1, 2, 3.]

- Transition probability matrix: P = [p_ij]
- Steady-state probabilities: the row vector π with π = π P
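A minimal sketch of the steady-state computation; the slide does not give the p_ij values, so the 3-state transition matrix below is an assumption.

```python
import numpy as np

# Hypothetical 3-state scheduler transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Steady state: the row vector pi with pi = pi P. For an irreducible
# aperiodic chain, power iteration converges to it from any distribution.
pi = np.full(3, 1 / 3)
for _ in range(500):
    pi = pi @ P
print(pi)  # stationary probabilities of the scheduler states
```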
Almost Sure Stability

Definition: The MJLS is exponentially AS-stable if there exists a rate λ > 0 such that, for any initial state and any initial distribution π_0, the state satisfies

    limsup_{t → ∞} (1/t) log ||x(t)|| ≤ -λ    almost surely.

Sufficient conditions:
- 1-step (average contractivity): the expected one-step logarithmic contraction, E[ log ||Â_{s(t)}|| ], is negative;
- m-step (lifted system): the same condition applied to the m-step lifted system.

Theorem [P. Bolzern, P. Colaneri, G. De Nicolao – CDC ’04]: The MJLS is exponentially AS-stable if and only if there exists an m such that the m-step condition holds.
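The 1-step test can be sketched numerically in its simplest i.i.d. form, E[log ||Â_s||] < 0; both the matrices and the stationary distribution below are assumptions for illustration.

```python
import numpy as np

# Toy closed-loop matrices and an assumed stationary scheduler distribution.
A_hat = [np.diag([1.05, 1.05]),   # weakest controller: slightly expanding
         np.diag([0.60, 0.60]),
         np.diag([0.30, 0.30])]
pi = np.array([0.05, 0.25, 0.70])

# 1-step average contractivity (i.i.d. sketch of the sufficient condition):
# E[log ||A_hat_s||] < 0  implies exponential AS-stability.
avg_log_norm = sum(p * np.log(np.linalg.norm(A, 2)) for p, A in zip(pi, A_hat))
print(avg_log_norm)  # negative here, so the condition is satisfied
```

Note that the weakest matrix is expanding, yet the average is contractive because the scheduler rarely forces it; this is exactly the situation anytime control exploits.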
Switching Policy – Preliminaries and Analysis

- s(t), with 1 ≤ s(t) ≤ n, is an upper bound on the index of the executable controller.
- Controller σ(t) ≤ s(t) is computed, unless a preemption event forces a lower index.
- Switching policy: the map assigning the executed controller σ(t) to the available slot s(t).

Examples:
- Conservative Policy (non-switching): always run controller 1, which is always available;
- Greedy Policy: σ(t) = s(t), the maximum allowed index (sound only if the resulting switched system is already AS-stable).
Switching Policy – Synthesis Problem Formulation

Problem: Given the closed-loop matrices {Â_i, i ∈ I} and the invariant scheduler distribution π_s, find a switching policy such that the resulting system is an AS-stable MJLS with a desired invariant probability distribution π_d.

Constraints:
- The computational time allotted by the scheduler cannot be increased;
- The probability of the i-th controller can be increased only by reducing the probabilities of the more complex controllers j > i.

How can we build a switching policy ensuring π_d?
Stochastic Policy

Use of an independent, conditioning Markov chain:
- same structure (number of states) as the scheduler chain;
- when the conditioning chain is in state i, in the next sampling interval at most the i-th controller is computed (if no preemption occurs).

[Diagram: a three-state conditioning chain with transition probabilities p_11, ..., p_33.]

How does the conditioning chain interact with the scheduler’s one?
Merging Markov Chains – Mixing

Theorem: Consider two independent finite-state homogeneous irreducible aperiodic Markov chains with their respective state spaces. The joint stochastic process is a finite-state homogeneous irreducible aperiodic Markov chain, characterized by the Kronecker product of the two transition matrices.

Note: the extended chain has n² states.
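The merging step can be checked numerically: for independent chains, the joint process has the Kronecker product of the two transition matrices, and its stationary distribution is the Kronecker product of the stationary distributions. A sketch with two toy 2-state chains (the matrices are assumptions):

```python
import numpy as np

def stationary(P):
    """Stationary row vector of an irreducible aperiodic chain (power iteration)."""
    pi = np.full(P.shape[0], 1 / P.shape[0])
    for _ in range(1000):
        pi = pi @ P
    return pi

# Two independent chains, e.g., scheduler and conditioning (assumed toys).
Ps = np.array([[0.6, 0.4], [0.3, 0.7]])
Pc = np.array([[0.8, 0.2], [0.5, 0.5]])

# The merged process is a Markov chain on n^2 states whose transition
# matrix is the Kronecker product of the two.
Pm = np.kron(Ps, Pc)
print(stationary(Pm))
print(np.kron(stationary(Ps), stationary(Pc)))  # same vector
```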
Merging Markov Chains – Aggregating

The goal is to produce a process with a desired stationary probability π_d with cardinality n_d. After mixing, use an aggregation function derived from the schedulability constraints: the executed controller index is the minimum of the scheduler state and the conditioning state (aggregated process).

The i-th controller is executed if and only if:
- the conditioning chain is in state i and the scheduler allows it (i.e., i is the limiting controller), or
- the scheduler allows at most i while the conditioning chain asks for more (i.e., preemption).
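Reading the aggregation as σ = min(s, θ), the stationary distribution of the executed controller follows from the joint (product) distribution of the two independent chains. The sketch below uses the stationary distributions reported in the Furuta pendulum example of this talk:

```python
import numpy as np

# Stationary distributions (from the Furuta pendulum slide).
ps = np.array([0.05, 0.25, 0.70])     # scheduler: P(s = i)
pc = np.array([0.016, 0.982, 0.002])  # conditioning: P(theta = i)

# Aggregation: the executed controller is sigma = min(s, theta).
n = len(ps)
pd = np.zeros(n)
for j in range(n):
    for k in range(n):
        pd[min(j, k)] += ps[j] * pc[k]  # mass of the joint state (j, k)
print(pd)  # -> approximately [0.0652, 0.9334, 0.0014]
```

The result matches the resulting distribution π_d listed on that slide, which is consistent with the min-aggregation reading.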
Merging Markov Chains – Aggregating (II)

Remark: The aggregated process is obtained by a linear combination of the two chains; hence its invariant distribution follows directly from the invariant distributions of the scheduler and conditioning chains.

Remark: The state evolution of the JLS driven by the aggregated process is the same as the one produced by an equivalent MJLS driven by the merged Markov chain, constructed by associating to each pair of states (j, k) the index i = min(j, k), hence the controlled system Â_i. Therefore, the AS-stability tools for MJLSs apply to the aggregated system.
Markov Policy – 1-step contractive formulation

Anytime Problem (Linear Programming): Find a probability vector for the conditioning chain such that the aggregated process has the desired invariant distribution π_d, subject to the schedulability constraints and the 1-step average contractivity condition.
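In the simplest i.i.d. reading of this synthesis problem (a sketch, not the Markov-chain LP of the slides), the conditioning distribution achieving a desired π_d has a closed form via survival functions, since independence gives P(min(s, θ) ≥ i) = P(s ≥ i) · P(θ ≥ i):

```python
import numpy as np

# Given: scheduler distribution ps and a desired distribution pd for the
# executed controller sigma = min(s, theta). Numbers are taken from the
# Furuta pendulum example of this talk.
ps = np.array([0.05, 0.25, 0.70])        # scheduler
pd = np.array([0.0652, 0.9334, 0.0014])  # desired executed-controller distribution

Ss = ps[::-1].cumsum()[::-1]   # survival P(s >= i)
Sd = pd[::-1].cumsum()[::-1]   # survival P(min >= i)
Sc = Sd / Ss                   # P(theta >= i); feasible iff nonincreasing and <= 1
pc = Sc - np.append(Sc[1:], 0.0)
print(pc)  # -> approximately [0.016, 0.982, 0.002]
```

The feasibility requirement on Sc mirrors the slide's constraint that probability mass can only be moved from more complex controllers to simpler ones.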
Example (Reprise)

[Block diagram: the same switched-controller loop as before; the switch i selects C1(z), C2(z), or C3(z) in feedback with the plant G(z).]
Example – Furuta Pendulum: Regulation Problem, RMS comparison

- Scheduler distribution: [0.05, 0.25, 0.70]
- Conditioning distribution: [0.016, 0.982, 0.002]
- Resulting distribution π_d: [0.0652, 0.9334, 0.0014]

[Plots, over 50 time periods: the scheduled, conditioning, and conditioned execution times for the Markov policy, and the RMS comparison of the fixed controllers 1, 2, 3 against the Markov policy.]

Improvement: > 55%
Markov Policy – m-step contractive formulation (I)

A 1-step contractive solution may not exist, but an m-step solution always exists for some m, since the minimal controller is always executable. Hence: look for a solution to the Anytime Problem for increasing m.

Key Idea: the switching policy supervises the controller choice so that some control patterns are preferred w.r.t. others.
Markov Policy – m-step contractive formulation (II)

- Lifted scheduler chain: n^m states, i.e., strings of m controller symbols;
- Conditioning chain defined directly on the same n^m strings (not obtained by lifting);
- Mixing: as in the 1-step case, merge the lifted scheduler chain and the conditioning chain;
- Aggregating: the executed string is the elementwise minimum of the scheduler string and the conditioning string;
- The synthesis problem is then the same as the 1-step one, with π_d matched elementwise on strings;
- Switching policy: every m steps, a bet placed in advance on an m-string of controllers.
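The lifting step can be sketched as follows: the m-step lifted chain has n^m states, one per m-string, and its transition probabilities are products of the original one-step probabilities. The 2-state chain below is an assumed toy.

```python
import numpy as np
from itertools import product

def lift(P, m):
    """m-step lifted chain: states are strings (i_1, ..., i_m); string s is
    followed by t = (j_1, ..., j_m) with probability
    P[s_m, j_1] * P[j_1, j_2] * ... * P[j_{m-1}, j_m]."""
    n = P.shape[0]
    states = list(product(range(n), repeat=m))
    L = np.zeros((n**m, n**m))
    for a, s in enumerate(states):
        for b, t in enumerate(states):
            pr = P[s[-1], t[0]]
            for k in range(m - 1):
                pr *= P[t[k], t[k + 1]]
            L[a, b] = pr
    return L

P = np.array([[0.6, 0.4], [0.3, 0.7]])  # toy scheduler chain (assumed)
L = lift(P, 2)
print(L.shape)        # (4, 4): n^m lifted states
print(L.sum(axis=1))  # each row sums to 1, so L is a valid Markov chain
```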
Example (TORA) (I)

[Block diagram: the same switched-controller structure; the switch i selects C1(z), C2(z), or C3(z) in feedback with the plant G(z).]
Example (TORA) (II) – Regulation Problem: RMS comparison

[Plots, over 50 time periods: the (maximum) available execution time scheduled for the control task, and the RMS comparison of the fixed controllers 1, 2, 3 and the greedy policy. Running controller 3 alone is not feasible.]

Conservative policy: stable, but poor performance.
Example (TORA) (III) – Regulation Problem: RMS comparison

4-step solution.

[Plots, over 50 time periods: the scheduled, conditioning, and conditioned execution times for the Markov policy, and the RMS comparison of controllers 1, 2, 3, the greedy policy, and the Markov policy, with the most likely control pattern annotated.]
Tracking and Bumpless

- In tracking tasks, performance can be severely impaired by switching between different controllers.
- The activation of a higher-level controller abruptly introduces the dynamics of the re-activated (sleeping) controller states (low-to-high-level switching).
- The use of bumpless-like techniques can assist in making smoother transitions.
- Practicality considerations must be taken into account in developing a bumpless transfer method.
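One simple bumpless-like reset, sketched here under assumptions (it is not necessarily the method used in the talk): when a controller with realization x+ = F x + G e, u = H x + J e is re-activated, initialize its state so that its first output matches the last applied control, avoiding a jump in u.

```python
import numpy as np

def bumpless_reset(H, J, e_now, u_prev):
    """Pick the re-activated controller's state x0 so that its first output
    H x0 + J e_now equals the last applied control u_prev (least squares,
    minimum-norm solution if the system is underdetermined)."""
    x0, *_ = np.linalg.lstsq(H, np.atleast_1d(u_prev - J * e_now), rcond=None)
    return x0

# Hypothetical second-order controller realization (assumed numbers).
F = np.array([[0.9, 0.1], [0.0, 0.8]])
G = np.array([[1.0], [0.5]])
H = np.array([[1.0, 1.0]])
J = 0.2

x0 = bumpless_reset(H, J, e_now=0.3, u_prev=1.5)
print(H @ x0 + J * 0.3)  # equals u_prev = 1.5: no output jump at switch-in
```

F and G do not enter the reset itself; they govern how the re-initialized state evolves afterwards, which is where the practicality trade-offs mentioned above arise.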
Example (F.P.) (V) – Tracking Problem: RMS comparison

[Plots, over 200 time periods: the (maximum) available execution time scheduled for the control task, and the RMS comparison of the fixed controllers 1, 2, 3. Running controller 3 alone is not feasible.]

Conservative policy: stable, but poor performance.
Example (F.P.) (VI) – Tracking Problem: reference & output comparison

[Plots, over 200 time periods: the reference signal, and the output comparison of controllers 1, 2, 3 against the Markov policy and the Markov bumpless policy.]
Example (F.P.) (VII) – Tracking Problem: Greedy Policy

[Plots, over 200 time periods: the scheduled execution time, and the output under the greedy policy (always run the maximum allowed controller i), which diverges to the order of 10^35.]

Greedy policy: unstable!
Example (F.P.) (VIII) – Tracking Problem: RMS comparison

[Plots, over 200 time periods: the scheduled, conditioning, and conditioned execution times for the Markov policy, and the RMS comparison of controllers 1, 2, 3 against the Markov policy and the Markov bumpless policy.]
Example (TORA) (IV) – Tracking Problem: RMS comparison

[Plots, over 200 time periods: the (maximum) available execution time scheduled for the control task, and the RMS comparison of controllers 1, 2, 3 and the greedy policy. Running controller 3 alone is not feasible.]

Conservative policy: stable, but poor performance.
Example (TORA) (V) – Tracking Problem: reference & output comparison

[Plots, over 200 time periods: the reference signal, and the output comparison of controllers 1, 2, 3 against the greedy policy, the Markov policy, and the Markov bumpless policy.]
Example (TORA) (VI) – Tracking Problem: RMS comparison

[Plots, over 200 time periods: the scheduled, conditioning, and conditioned execution times for the Markov policy, and the RMS comparison of controllers 1, 2, 3 against the greedy policy, the Markov policy, and the Markov bumpless policy.]
Conclusions

- Performance (not just stability) under switching must be considered for tracking.
- Ongoing work is addressing:
  - hierarchical design of (composable) controllers for anytime control;
  - numerical aspects of the m-step solution;
  - implementation on real systems.