Lecture 13: Multivariable Control of Robot Manipulators

• PD-Control
• Feedback Linearization
• Robust and Adaptive Motion Control

©Anton Shiriaev. 5EL158: Lecture 13 – p. 1/17
Improved Dynamical Model

Given a mechanical system with n degrees of freedom

    D(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau, \quad q = [q_1, \ldots, q_n]^T, \quad \tau = [\tau_1, \ldots, \tau_n]^T

we suppose that each degree of freedom is controlled by a geared DC motor

    J_{m_k}\ddot{\theta}_{m_k} + B_k\dot{\theta}_{m_k} = \frac{K_{m_k}}{R_k} V_k - \frac{1}{r_k}\tau_k, \quad k = 1, \ldots, n

where
• \theta_{m_k} is the k-th motor angle;
• r_k is a gear ratio;
• J_{m_k}, B_k, K_{m_k}, R_k are parameters of (or are computed from parameters of) the k-th DC motor,

and that the motor and link angles are related by

    \theta_{m_k} = r_k q_k, \quad k = 1, \ldots, n

Then the actuator equations become

    r_k^2 J_{m_k}\ddot{q}_k + r_k^2 B_k\dot{q}_k = r_k\frac{K_{m_k}}{R_k} V_k - \tau_k, \quad k = 1, \ldots, n

©Anton Shiriaev. 5EL158: Lecture 13 – p. 2/17
Improved Dynamical Model (Cont'd)

The dynamical systems

    D(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau, \quad q = [q_1, \ldots, q_n]^T, \quad \tau = [\tau_1, \ldots, \tau_n]^T

    r_k^2 J_{m_k}\ddot{q}_k + r_k^2 B_k\dot{q}_k = r_k\frac{K_{m_k}}{R_k} V_k - \tau_k, \quad k = 1, \ldots, n

can be rewritten as a single system if we eliminate \tau. Indeed, adding the actuator equations to the link dynamics gives

    M(q)\ddot{q} + C(q,\dot{q})\dot{q} + B\dot{q} + g(q) = u

where
• M(q) = D(q) + J with J = diag{r_1^2 J_{m_1}, \ldots, r_n^2 J_{m_n}}
• B = diag{r_1^2 B_1, r_2^2 B_2, \ldots, r_n^2 B_n}
• u = [u_1, u_2, \ldots, u_n]^T with u_k = r_k\frac{K_{m_k}}{R_k} V_k, k = 1, \ldots, n

©Anton Shiriaev. 5EL158: Lecture 13 – p. 3/17
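The combined model above can be assembled numerically. The sketch below does this for a single geared joint (n = 1); all numeric parameter values are assumed for illustration and are not taken from the lecture.

```python
import numpy as np

# Assumed parameters of one geared DC-motor joint (illustrative values)
r = 50.0            # gear ratio r_k
Jm = 1e-4           # motor inertia J_mk
Bm = 1e-3           # motor damping B_k
Km, R = 0.05, 1.2   # torque constant K_mk and armature resistance R_k

def D(q):
    # link-side inertia D(q); constant for this 1-DOF example
    return np.array([[0.8]])

J = np.diag([r**2 * Jm])   # reflected motor inertia, J = diag{r_k^2 J_mk}
B = np.diag([r**2 * Bm])   # reflected damping,       B = diag{r_k^2 B_k}

def M(q):
    # combined inertia M(q) = D(q) + J
    return D(q) + J

def u_from_voltage(V):
    # generalized input u_k = r_k * (K_mk / R_k) * V_k
    return r * Km / R * np.atleast_1d(V)
```

With these numbers the reflected motor inertia r^2 J_mk = 0.25 is comparable to the link inertia, which is typical for highly geared joints.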
PD-Controller Design

Given a mechanical system

    M(q)\ddot{q} + C(q,\dot{q})\dot{q} + B\dot{q} + g(q) = u

we are interested
• in designing a controller that stabilizes a particular configuration of the robot, q = q_d;
• in analyzing the closed-loop system.

Assume that B = 0 and g(q) = 0. The first controller to analyze is

    u = -K_p(q - q_d) - K_d\dot{q}

where K_p and K_d are diagonal matrices with positive elements.

©Anton Shiriaev. 5EL158: Lecture 13 – p. 4/17
PD-Controller Design (Cont'd)

To analyze the behavior of the closed-loop system

    M(q)\ddot{q} + C(q,\dot{q})\dot{q} = u = -K_p(q - q_d) - K_d\dot{q}

consider the scalar function

    V(q,\dot{q}) = \frac{1}{2}\dot{q}^T M(q)\dot{q} + \frac{1}{2}(q - q_d)^T K_p (q - q_d)

Its time derivative along solutions of the closed-loop system is

    \dot{V} = \dot{q}^T M(q)\ddot{q} + \frac{1}{2}\dot{q}^T\dot{M}(q)\dot{q} + \dot{q}^T K_p(q - q_d)
            = \dot{q}^T[u - C(q,\dot{q})\dot{q}] + \frac{1}{2}\dot{q}^T\dot{M}(q)\dot{q} + \dot{q}^T K_p(q - q_d)
            = \dot{q}^T[u + K_p(q - q_d)] + \dot{q}^T\Big\{\frac{1}{2}\dot{M}(q) - C(q,\dot{q})\Big\}\dot{q}

The last term is zero: M(q) = D(q) + J with J constant, and \frac{1}{2}\dot{D}(q) - C(q,\dot{q}) is skew-symmetric. Hence

    \dot{V} = \dot{q}^T[-K_p(q - q_d) - K_d\dot{q} + K_p(q - q_d)] = -\dot{q}^T K_d\dot{q} \le 0

Therefore
• V is positive definite and V(q,\dot{q}) = 0 ⇔ {q = q_d, \dot{q} = 0};
• V(q(t),\dot{q}(t)) is monotonically non-increasing, so

    ∃ \lim_{t\to+\infty} V(q(t),\dot{q}(t)) = V_\infty   and   ∃ \lim_{t\to+\infty} \dot{q}(t) = \dot{q}_\infty = 0

If we substitute this limit trajectory into the dynamics, we obtain

    M(q_\infty)\ddot{q}_\infty + C(q_\infty,\dot{q}_\infty)\dot{q}_\infty = -K_p(q_\infty - q_d) - K_d\dot{q}_\infty,   with \ddot{q}_\infty = \dot{q}_\infty = 0,

that is,

    0 = -K_p(q_\infty - q_d)

Since K_p = diag{K_{p1}, K_{p2}, \ldots, K_{pn}} > 0, this yields q_\infty = q_d.

©Anton Shiriaev. 5EL158: Lecture 13 – p. 5/17
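A minimal numerical sketch of this result, for a single joint with constant inertia M (so C = 0), B = 0 and g = 0; the gains, initial state and target q_d are assumed values chosen for illustration.

```python
import numpy as np

Mi, Kp, Kd = 1.0, 4.0, 2.0   # assumed inertia and PD gains
qd = 0.0                     # target configuration
q, dq = 1.0, 0.0             # initial state
dt, steps = 0.002, 5000      # 10 s of explicit-Euler simulation

def V(q, dq):
    # Lyapunov function from the slide: kinetic + "spring" energy
    return 0.5 * Mi * dq**2 + 0.5 * Kp * (q - qd)**2

V0 = V(q, dq)
for _ in range(steps):
    u = -Kp * (q - qd) - Kd * dq   # PD controller
    ddq = u / Mi                   # plant: M ddq = u
    q, dq = q + dt * dq, dq + dt * ddq
```

After the simulation q has converged to q_d and V has decreased, matching \dot{V} = -\dot{q}^T K_d \dot{q} \le 0.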
PD-Controller Design with Gravity Compensation

Given a mechanical system

    M(q)\ddot{q} + C(q,\dot{q})\dot{q} + B\dot{q} + g(q) = u

how should the controller

    u = -K_p(q - q_d) - K_d\dot{q}

be modified if B ≠ 0 and g(q) ≠ 0?

The controller

    u = -K_p(q - q_d) - K_d\dot{q} + g(q)

is stabilizing if K_p and K_d are diagonal matrices such that

    K_p > 0, \quad K_d - B > 0

©Anton Shiriaev. 5EL158: Lecture 13 – p. 6/17
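A sketch of the gravity-compensated PD controller on an assumed 1-DOF pendulum, M\ddot{q} + B\dot{q} + g(q) = u with M = ml^2 and g(q) = mgl\sin q; all numeric values are illustrative.

```python
import numpy as np

# Assumed pendulum parameters
m, l, grav, b = 1.0, 1.0, 9.81, 0.1
M = m * l**2
g = lambda q: m * grav * l * np.sin(q)

kp, kd = 9.0, 4.0        # Kp > 0 and Kd - B = 3.9 > 0, as required above
qd = 1.0                 # desired configuration
q, dq, dt = 0.0, 0.0, 1e-3
for _ in range(10000):   # 10 s of explicit-Euler simulation
    u = -kp * (q - qd) - kd * dq + g(q)   # PD + gravity compensation
    ddq = (u - b * dq - g(q)) / M         # plant: M ddq + B dq + g(q) = u
    q, dq = q + dt * dq, dq + dt * ddq
```

Without the g(q) term the arm would settle with a steady-state offset where the spring torque K_p(q - q_d) balances gravity; with it, q settles at q_d.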
Lecture 13: Multivariable Control of Robot Manipulators
• PD-Control
• Feedback Linearization
• Robust and Adaptive Motion Control
©Anton Shiriaev. 5EL158: Lecture 13 – p. 7/17
Feedback Linearization

Given a mechanical system

    M(q)\ddot{q} + C(q,\dot{q})\dot{q} + B\dot{q} + g(q) = u

how can we define the control variable

    u = \alpha(q,\dot{q}) + \beta(q,\dot{q})v

so that the closed-loop system is
• linear, i.e. equivalent to \dot{x} = Ax + Bv?
• linear and stabilizable, i.e. such that the pair (A, B) is stabilizable?

What if

    u = M(q)v + C(q,\dot{q})\dot{q} + B\dot{q} + g(q)?

Then

    M(q)\ddot{q} = M(q)v  ⇒  \ddot{q} = v

so that

    \dot{x} = \begin{bmatrix} 0_n & I_n \\ 0_n & 0_n \end{bmatrix} x + \begin{bmatrix} 0_n \\ I_n \end{bmatrix} v, \quad x = [q^T, \dot{q}^T]^T

©Anton Shiriaev. 5EL158: Lecture 13 – p. 8/17
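The cancellation can be checked numerically: applying u = M(q)v + C\dot{q} + B\dot{q} + g(q) to the plant returns exactly \ddot{q} = v. The check below uses an assumed 1-DOF pendulum (so C = 0 and B = b), with illustrative parameters.

```python
import numpy as np

# Assumed 1-DOF pendulum: M ddq + b dq + g(q) = u
m, l, grav, b = 1.2, 0.7, 9.81, 0.3
M = m * l**2
g = lambda q: m * grav * l * np.sin(q)

rng = np.random.default_rng(0)
max_err = 0.0
for _ in range(100):
    q, dq, v = rng.normal(size=3)     # random state and commanded acceleration
    u = M * v + b * dq + g(q)         # feedback-linearizing input
    ddq = (u - b * dq - g(q)) / M     # plant response
    max_err = max(max_err, abs(ddq - v))
```

The residual max_err is at rounding-error level: every nonlinear term is cancelled exactly when the model is exact.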
Feedback Linearization

Given a mechanical system

    M(q)\ddot{q} + C(q,\dot{q})\dot{q} + B\dot{q} + g(q) = u

and a desired trajectory q_d = q_d(t), introduce the controller

    u = M(q)v + C(q,\dot{q})\dot{q} + B\dot{q} + g(q)
    v = \ddot{q}_d(t) - K_p(q - q_d(t)) - K_d(\dot{q} - \dot{q}_d(t))

Then the closed-loop system is

    \ddot{q} = v = \ddot{q}_d(t) - K_p(q - q_d(t)) - K_d(\dot{q} - \dot{q}_d(t))

which can be rewritten in the error variable e = q - q_d(t) as

    \ddot{e} + K_d\dot{e} + K_p e = 0

©Anton Shiriaev. 5EL158: Lecture 13 – p. 9/17
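A tracking sketch of this computed-torque controller on the same kind of assumed 1-DOF pendulum: since the model used by the controller matches the plant exactly, the tracking error obeys \ddot{e} + K_d\dot{e} + K_p e = 0 and decays. The trajectory and all numbers are illustrative choices.

```python
import numpy as np

# Assumed 1-DOF pendulum (C = 0, B = b)
m, l, grav, b = 1.0, 1.0, 9.81, 0.2
M = m * l**2
g = lambda q: m * grav * l * np.sin(q)

kp, kd = 9.0, 4.0
qd   = lambda t: np.sin(t)          # assumed desired trajectory
dqd  = lambda t: np.cos(t)
ddqd = lambda t: -np.sin(t)

q, dq, t, dt = 0.5, 0.0, 0.0, 1e-3
for _ in range(10000):              # 10 s of explicit-Euler simulation
    v = ddqd(t) - kp * (q - qd(t)) - kd * (dq - dqd(t))
    u = M * v + b * dq + g(q)       # computed-torque input
    ddq = (u - b * dq - g(q)) / M   # plant response (= v, model is exact)
    q, dq, t = q + dt * dq, dq + dt * ddq, t + dt
```

The initial error e(0) = 0.5 decays through the stable error dynamics, and the joint tracks sin(t).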
Lecture 13: Multivariable Control of Robot Manipulators
• PD-Control
• Feedback Linearization
• Robust and Adaptive Motion Control
©Anton Shiriaev. 5EL158: Lecture 13 – p. 10/17
Robust and Adaptive Motion Control

Given a mechanical system

    M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = u

and a desired trajectory q_d = q_d(t), the controller

    u = M(q)v + C(q,\dot{q})\dot{q} + g(q)
    v = \ddot{q}_d(t) - K_p(q - q_d(t)) - K_d(\dot{q} - \dot{q}_d(t))

cannot be implemented, since the exact model terms M, C and g are not available. Instead, the approximation

    u = [M(q) + ΔM]v + [C(q,\dot{q}) + ΔC]\dot{q} + [g(q) + Δg]
    v = \ddot{q}_d(t) - K_p(q - q_d(t)) - K_d(\dot{q} - \dot{q}_d(t))

might be used.

©Anton Shiriaev. 5EL158: Lecture 13 – p. 11/17
Robust and Adaptive Motion Control

The uncertainty in the parameters of the controller

    u = [M(q) + ΔM]v + [C(q,\dot{q}) + ΔC]\dot{q} + [g(q) + Δg]
    v = \ddot{q}_d(t) - K_p(q - q_d(t)) - K_d(\dot{q} - \dot{q}_d(t))

motivates two approaches:

• Robust Control: design K_p, K_d and q_d(t) such that the error signal satisfies

    e(t) = q(t) - q_d(t) ≈ 0  ∀ {ΔM, ΔC, Δg} ∈ W

• Adaptive Control: improve the estimates of

    M(q),  C(q,\dot{q}),  g(q)

  in the course of regulating the system.

©Anton Shiriaev. 5EL158: Lecture 13 – p. 12/17
Robust Motion Control Based on Feedback Linearization

Given a trajectory q = q_d(t), consider the closed-loop system

    M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = u
    u = [M(q) + ΔM]v + [C(q,\dot{q}) + ΔC]\dot{q} + [g(q) + Δg]
    v = \ddot{q}_d(t) - K_p(q - q_d(t)) - K_d(\dot{q} - \dot{q}_d(t))

Substituting u into the dynamics,

    M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = [M(q) + ΔM]v + [C(q,\dot{q}) + ΔC]\dot{q} + [g(q) + Δg]

⇒  M(q)\ddot{q} = [M(q) + ΔM]v + ΔC\dot{q} + Δg

⇒  \ddot{q} = M(q)^{-1}[M(q) + ΔM]v + M(q)^{-1}[ΔC\dot{q} + Δg]

Splitting off M^{-1}Mv = v,

⇒  \ddot{q} = v + M(q)^{-1}[ΔM\,v + ΔC\dot{q} + Δg] = v + η(q,\dot{q},v)

©Anton Shiriaev. 5EL158: Lecture 13 – p. 13/17
Robust Motion Control Based on Feedback Linearization

Given a trajectory q = q_d(t), consider the closed-loop system

    M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = u
    u = [M(q) + ΔM]v + [C(q,\dot{q}) + ΔC]\dot{q} + [g(q) + Δg]
    v = \ddot{q}_d(t) - K_p(q - q_d(t)) - K_d(\dot{q} - \dot{q}_d(t)) + w

It can be rewritten as

    \ddot{q} = v + η(q,\dot{q},v)

or

    (\ddot{q} - \ddot{q}_d) + K_d(\dot{q} - \dot{q}_d) + K_p(q - q_d) = w + η(q,\dot{q},v)

or

    \frac{d}{dt}e = \begin{bmatrix} 0_n & I_n \\ -K_p & -K_d \end{bmatrix} e + \begin{bmatrix} 0_n \\ I_n \end{bmatrix}(w + η) = Ae + B(w + η), \quad e = \begin{bmatrix} q - q_d \\ \dot{q} - \dot{q}_d \end{bmatrix}

©Anton Shiriaev. 5EL158: Lecture 13 – p. 14/17
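For positive diagonal gains the error matrix A is Hurwitz, so the unforced error dynamics are exponentially stable. A quick numerical check, with assumed gains for n = 2:

```python
import numpy as np

# Error-system matrix A = [[0, I], [-Kp, -Kd]] for assumed diagonal gains
n = 2
Kp = np.diag([4.0, 9.0])
Kd = np.diag([2.0, 3.0])
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Kp, -Kd]])

# Largest real part of the eigenvalues; negative means A is Hurwitz
max_re = max(ev.real for ev in np.linalg.eigvals(A))
```

With diagonal gains A decouples into per-joint blocks with characteristic polynomials s^2 + K_{dk}s + K_{pk}, all stable here.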
Robust Motion Control Based on Feedback Linearization

To continue with the design of w for the system

    \frac{d}{dt}e = Ae + B[w + η(e,w)]

we need to impose some assumptions on η(·), namely

    ‖η(e,w)‖ ≤ α‖w‖ + γ_1‖e‖ + γ_2‖e‖^2 + γ_3, \quad α < 1

The matrix A is stable; therefore for every Q > 0 there exists P = P^T > 0 such that

    A^T P + PA = -Q

Consider the Lyapunov function candidate V = e^T Pe. Then

    \dot{V} = \dot{e}^T Pe + e^T P\dot{e} = e^T(A^T P + PA)e + 2e^T PB[w + η(e,w)]
            = -e^T Qe + 2e^T PB[w + η(e,w)]

How can \dot{V} ≤ 0 be achieved by choosing w? Let us look at the second term when w has the form

    w = -ρ(·)\frac{z}{\sqrt{z^T z}}, \quad z = B^T Pe,   with ρ a scalar function to be chosen.

Then

    z^T\Big(-ρ\frac{z}{\sqrt{z^T z}} + η\Big) ≤ -ρ‖z‖ + ‖z‖‖η‖ = ‖z‖(-ρ + ‖η‖)

©Anton Shiriaev. 5EL158: Lecture 13 – p. 15/17
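The key step above is that z^T(w + η) is non-positive whenever ρ dominates ‖η‖. The check below verifies this on random vectors; the vectors and the margin added to ρ are assumed test data, only the inequality itself comes from the slide.

```python
import numpy as np

rng = np.random.default_rng(1)
worst = -np.inf
for _ in range(1000):
    z = rng.normal(size=4)
    eta = rng.normal(size=4)
    rho = np.linalg.norm(eta) + rng.uniform(0.0, 1.0)   # enforce rho >= ||eta||
    w = -rho * z / np.sqrt(z @ z)                       # w = -rho * z / ||z||
    # cross term z^T (w + eta) = -rho ||z|| + z^T eta <= ||z|| (-rho + ||eta||)
    worst = max(worst, z @ (w + eta))
```

Every sample of the cross term is negative, so the choice of w makes the sign-indefinite part of \dot{V} harmless.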
Robust Motion Control Based on Feedback Linearization

To sum up, we search for a scalar function ρ(·) such that

    (-ρ + ‖η‖) ≤ 0  ⇔  ‖η‖ ≤ ρ

where

    ‖η(e,w)‖ ≤ α‖w‖ + γ_1‖e‖ + γ_2‖e‖^2 + γ_3, \quad α < 1

and

    w = -ρ(·)\frac{z}{\sqrt{z^T z}}

Since ‖w‖ = ρ(·), these two inequalities imply

    αρ(·) + γ_1‖e‖ + γ_2‖e‖^2 + γ_3 ≤ ρ(·)

which holds for any

    ρ(e) ≥ \frac{1}{1-α}\big[γ_1‖e‖ + γ_2‖e‖^2 + γ_3\big]

©Anton Shiriaev. 5EL158: Lecture 13 – p. 16/17
Final Form of the Controller

Given a trajectory q = q_d(t), consider the closed-loop system

    M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = u
    u = [M(q) + ΔM]v + [C(q,\dot{q}) + ΔC]\dot{q} + [g(q) + Δg]
    v = \ddot{q}_d(t) - K_p(q - q_d(t)) - K_d(\dot{q} - \dot{q}_d(t)) + w

    w = -ρ(e)\frac{z}{\sqrt{z^T z}}   if z = B^T Pe ≠ 0;   w = 0   if z = B^T Pe = 0

where ρ(·) is any function that satisfies the inequality

    ρ(e) ≥ \frac{1}{1-α}\big[γ_1‖e‖ + γ_2‖e‖^2 + γ_3\big]

with the constants α, γ_1, γ_2, γ_3 taken from the bound

    ‖η(·)‖ ≤ α‖w‖ + γ_1‖e‖ + γ_2‖e‖^2 + γ_3, \quad α < 1

and

    η(q,\dot{q},v) = M^{-1}[ΔM\,v + ΔC\dot{q} + Δg], \quad e = \begin{bmatrix} q - q_d \\ \dot{q} - \dot{q}_d \end{bmatrix}

©Anton Shiriaev. 5EL158: Lecture 13 – p. 17/17
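A sketch of the complete controller on an assumed 1-DOF pendulum with deliberate model mismatch. The true plant is M\ddot{q} + g(q) = u, while the controller only knows perturbed estimates M + ΔM and g + Δg; the masses, gains and the bound constants α, γ_1, γ_3 are all illustrative choices (γ_2 = 0 here since η is affine in the state for this example).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# True plant (unknown to the controller): M ddq + g(q) = u
m, l, grav = 1.0, 1.0, 9.81
M_true = m * l**2
g_true = lambda q: m * grav * l * np.sin(q)

# Controller's perturbed model: M + dM, g + dg (assumed mismatch)
M_hat = 1.3                          # dM = 0.3
g_hat = lambda q: 8.5 * np.sin(q)    # dg = -1.31 sin(q)

kp, kd = 16.0, 8.0
A = np.array([[0.0, 1.0], [-kp, -kd]])
Bmat = np.array([[0.0], [1.0]])
P = solve_continuous_lyapunov(A.T, -np.eye(2))   # A^T P + P A = -Q, Q = I

alpha = 0.3                          # |dM| / M_true < 1
def rho(e):
    # rho(e) >= (gamma1 ||e|| + gamma3) / (1 - alpha); coarse assumed bounds
    gamma1 = alpha * max(kp, kd) * np.sqrt(2.0)
    gamma3 = 1.31                    # max |dg| / M_true
    return (gamma1 * np.linalg.norm(e) + gamma3) / (1.0 - alpha)

qd = 0.5                             # constant target, so ddqd = dqd = 0
q, dq, dt = 1.5, 0.0, 1e-3
for _ in range(10000):               # 10 s of explicit-Euler simulation
    e = np.array([q - qd, dq])
    z = (Bmat.T @ P @ e).item()
    # scalar case: z / sqrt(z^T z) reduces to sign(z)
    w = -rho(e) * np.sign(z) if z != 0.0 else 0.0
    v = -kp * e[0] - kd * e[1] + w
    u = M_hat * v + g_hat(q)         # implementable (model-based) input
    ddq = (u - g_true(q)) / M_true   # true plant response
    q, dq = q + dt * dq, dq + dt * ddq

final_err = abs(q - qd)
```

Despite the 30% inertia error and the gravity mismatch, the robust term w keeps the regulation error small; the fixed-step integration of the discontinuous w produces only minor chattering near z = 0.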