

Chapter 1

ANALYSIS OF LINEAR SYSTEMS IN STATE SPACE FORM

This course focuses on the state space approach to the analysis and design of control systems. The idea of the state of a system dates back to classical physics. Roughly speaking, the state of a system is that quantity which, together with knowledge of future inputs to the system, determines the future behaviour of the system. For example, in mechanics, knowledge of the current position and velocity (equivalently, momentum) of a particle, and of the future forces acting on it, determines the future trajectory of the particle. Thus, the position and the velocity together qualify as the state of the particle.

In the case of a continuous-time linear system, its state space representation corresponds to a system of first order differential equations describing its behaviour. The fact that this description fits the idea of state will become evident after we discuss the solution of linear systems of differential equations.

To illustrate how we can obtain state equations from simple differential equation models, consider the second order linear differential equation

\[ \ddot{y}(t) = u(t) \tag{1.1} \]

This is basically Newton's law $F = ma$ for a particle moving on a line, where we have used $u$ to denote $F/m$. It also corresponds to the transfer function from $u$ to $y$ being $1/s^2$. Let
\[ x_1(t) = y(t), \qquad x_2(t) = \dot{y}(t) \]

Set $x^T(t) = [x_1(t)\ \ x_2(t)]$. Then we can write

\[ \frac{dx(t)}{dt} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t) \tag{1.2} \]

which is a system of first order linear differential equations. The above construction can be readily extended to higher order linear differential equations of the form

\[ y^{(n)}(t) + a_1 y^{(n-1)}(t) + \cdots + a_{n-1} \dot{y}(t) + a_n y(t) = u(t) \]

Define $x(t) = \begin{bmatrix} y & \frac{dy}{dt} & \cdots & \frac{d^{n-1}y}{dt^{n-1}} \end{bmatrix}^T$. It is straightforward to check that $x(t)$ satisfies the differential equation
\[ \frac{dx(t)}{dt} = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & & 0 & 1 \\ -a_n & -a_{n-1} & \cdots & & -a_2 & -a_1 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u(t) \]
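As a quick computational aside (a sketch, not part of the original notes), this companion-form construction is easy to automate. The function below builds the $A$ matrix and $b$ vector from the coefficients $a_1, \ldots, a_n$; numpy is assumed to be available.

```python
import numpy as np

def companion_state_matrices(a):
    """State matrices for y^(n) + a1*y^(n-1) + ... + an*y = u,
    with state x = [y, dy/dt, ..., d^{n-1}y/dt^{n-1}]^T.
    `a` is the coefficient list [a1, ..., an]."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)        # ones on the superdiagonal
    A[-1, :] = -np.array(a[::-1])     # bottom row: [-an, ..., -a1]
    b = np.zeros((n, 1))
    b[-1, 0] = 1.0
    return A, b

# Newton's law (1.1) is the case n = 2 with a1 = a2 = 0:
A, b = companion_state_matrices([0.0, 0.0])
print(A)          # [[0. 1.] [0. 0.]] -- the matrix in (1.2)
print(b.ravel())  # [0. 1.]
```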



On the other hand, it is not quite obvious how we should define the state $x$ for a differential equation of the form

\[ \ddot{y}(t) + 3\dot{y}(t) + 2y(t) = \dot{u}(t) + 4u(t) \tag{1.3} \]

The general problem of connecting state space representations with transfer function representations will be discussed later in this chapter. We shall only consider differential equations with constant coefficients in this course, although many results can be extended to differential equations whose coefficients vary with time.

Once we have obtained a state equation, we need to solve it to determine the state. For example, in the case of Newton's law (1.1), solving the corresponding state equation (1.2) gives the position and velocity of the particle. We now turn our attention, in the next few sections, to the solution of state equations.

1.1 The Transition Matrix for Linear Differential Equations

We begin by discussing the solution of the linear constant coefficient homogeneous differential equation

\[ \dot{x}(t) = Ax(t), \qquad x(t) \in \mathbb{R}^n \tag{1.4} \]
\[ x(0) = x_0 \]

Very often, for simplicity, we shall suppress the time dependence in the notation. We shall now develop a systematic method for solving (1.4) using a matrix-valued function called the transition matrix.

It is known from the theory of ordinary differential equations that a linear system of differential equations of the form

\[ \dot{x}(t) = Ax(t), \qquad t \ge 0 \tag{1.5} \]
\[ x(0) = x_0 \]

has a unique solution passing through $x_0$ at $t = 0$. To obtain the explicit solution for $x$, recall that the scalar linear differential equation

\[ \dot{y} = ay, \qquad y(0) = y_0 \]

has the solution

\[ y(t) = e^{at} y_0 \]

where the scalar exponential function $e^{at}$ has the infinite series expansion
\[ e^{at} = \sum_{k=0}^{\infty} \frac{a^k t^k}{k!} \tag{1.6} \]

and satisfies
\[ \frac{d}{dt} e^{at} = a e^{at} \]

By analogy, let us define the matrix exponential
\[ e^{A} = \sum_{n=0}^{\infty} \frac{A^n}{n!} \]


where $A^0$ is defined to be $I$, the identity matrix (analogous to $a^0 = 1$). We can then define a function $e^{At}$, which we call the transition matrix of (1.4), by

\[ e^{At} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!} \tag{1.7} \]

The infinite sum can be shown to converge always, and its derivative can be evaluated by differentiating the infinite sum term by term. The analogy with the solution of the scalar equation suggests that $e^{At} x_0$ is the natural candidate for the solution of (1.4).
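Since the series (1.7) converges for every $A$ and $t$, truncating it gives a crude way to approximate $e^{At}$ numerically. The sketch below (assuming numpy; not a robust general-purpose method) illustrates the definition directly.

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Approximate e^{At} by truncating the defining series (1.7)."""
    n = A.shape[0]
    total = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ (A * t) / k     # builds A^k t^k / k! incrementally
        total = total + term
    return total

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # the Newton's-law matrix
print(expm_series(A, 2.0))               # [[1. 2.] [0. 1.]], exact since A^2 = 0
```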

1.2 Properties of the Transition Matrix and Solution of Linear ODEs

We now prove several useful properties of the transition matrix.

(Property 1)
\[ e^{At}\big|_{t=0} = I \tag{1.8} \]

This follows immediately from the definition of the transition matrix (1.7).

(Property 2)
\[ \frac{d}{dt} e^{At} = A e^{At} \tag{1.9} \]

To prove this, differentiate the infinite series in (1.7) term by term to get

\[ \frac{d}{dt} e^{At} = \frac{d}{dt} \sum_{k=0}^{\infty} \frac{A^k t^k}{k!} = \sum_{k=1}^{\infty} \frac{A^k t^{k-1}}{(k-1)!} = A \sum_{k=0}^{\infty} \frac{A^k t^k}{k!} = A e^{At} \]

Properties (1) and (2) together can be considered the defining equations of the transition matrix. In other words, $e^{At}$ is the unique solution of the linear matrix differential equation

\[ \frac{dF}{dt} = AF, \quad \text{with } F(0) = I \tag{1.10} \]

To see this, note that (1.10) is a linear differential equation. Hence there exists a unique solution to the initial value problem. Properties (1) and (2) show that $e^{At}$ is a solution to (1.10). By uniqueness, $e^{At}$ is the only solution. This shows that (1.10) can be considered the defining equation for $e^{At}$.

(Property 3) If $A$ and $B$ commute, i.e. $AB = BA$, then
\[ e^{(A+B)t} = e^{At} e^{Bt} \]

Proof: Consider $e^{At} e^{Bt}$:
\[ \frac{d}{dt}\left( e^{At} e^{Bt} \right) = A e^{At} e^{Bt} + e^{At} B e^{Bt} \tag{1.11} \]


If $A$ and $B$ commute, $e^{At} B = B e^{At}$, so that the right hand side of (1.11) becomes $(A + B) e^{At} e^{Bt}$. Furthermore, at $t = 0$, $e^{At} e^{Bt} = I$. Hence $e^{At} e^{Bt}$ satisfies the same differential equation and initial condition as $e^{(A+B)t}$. By uniqueness, they must be equal.

Setting $t = 1$ gives
\[ e^{A+B} = e^A e^B \quad \text{whenever } AB = BA \]

In particular, since $At$ commutes with $As$ for all $t, s$, we have
\[ e^{A(t+s)} = e^{At} e^{As} \quad \forall\, t, s \]

This is often referred to as the semigroup property.

(Property 4) $\left( e^{At} \right)^{-1} = e^{-At}$

Proof: We have
\[ e^{-At} e^{At} = e^{A(t-t)} = I \]

Since $e^{At}$ is a square matrix, this shows that (Property 4) is true. Thus, $e^{At}$ is invertible for all $t$.

Using properties (1) and (2), we immediately verify that
\[ x(t) = e^{At} x_0 \tag{1.12} \]

satisfies the differential equation
\[ \dot{x}(t) = Ax(t) \tag{1.13} \]

and the initial condition $x(0) = x_0$. Hence it is the unique solution.

More generally,

x(t) = eA(t−t0)x0 t ≥ t0 (1.14)

is the unique solution to (1.13) with initial condition $x(t_0) = x_0$.

1.3 Computing eAt: Linear Algebraic Methods

The above properties of $e^{At}$ do not provide a direct way for its computation. In this section, we develop linear algebraic methods for computing $e^{At}$.

Let us first consider the special case where A is a diagonal matrix given by

\[ A = \begin{bmatrix} \lambda_1 & & & 0 \\ & \lambda_2 & & \\ & & \ddots & \\ 0 & & & \lambda_n \end{bmatrix} \]

The $\lambda_i$'s are not necessarily distinct and they may be complex, provided we interpret (1.13) as a differential equation in $\mathbb{C}^n$. Since $A$ is diagonal,

\[ A^k = \begin{bmatrix} \lambda_1^k & & & 0 \\ & \lambda_2^k & & \\ & & \ddots & \\ 0 & & & \lambda_n^k \end{bmatrix} \]


If we directly evaluate the sum of the infinite series in the definition of $e^{At}$ in (1.7), we find for example that the $(1,1)$ entry of $e^{At}$ is given by

\[ \sum_{k=0}^{\infty} \frac{\lambda_1^k t^k}{k!} = e^{\lambda_1 t} \]

Hence, in the diagonal case,

\[ e^{At} = \begin{bmatrix} e^{\lambda_1 t} & & & 0 \\ & e^{\lambda_2 t} & & \\ & & \ddots & \\ 0 & & & e^{\lambda_n t} \end{bmatrix} \tag{1.15} \]

We can also interpret this result by examining the structure of the state equation. In this case (1.13) is completely decoupled into $n$ differential equations

\[ \dot{x}_i(t) = \lambda_i x_i(t), \qquad i = 1, \ldots, n \tag{1.16} \]

so that
\[ x_i(t) = e^{\lambda_i t} x_{0i} \tag{1.17} \]

where $x_{0i}$ is the $i$th component of $x_0$. It follows that

\[ x(t) = \begin{bmatrix} e^{\lambda_1 t} & & & 0 \\ & e^{\lambda_2 t} & & \\ & & \ddots & \\ 0 & & & e^{\lambda_n t} \end{bmatrix} x_0 \tag{1.18} \]

Since $x_0$ is arbitrary, $e^{At}$ must be given by (1.15). We now extend the above results to a general $A$. We examine two cases.

Case I: A Diagonalizable

By definition, $A$ diagonalizable means there exists a nonsingular matrix $P$ (in general complex) such that $P^{-1}AP$ is a diagonal matrix, say

\[ P^{-1} A P = \Lambda = \begin{bmatrix} \lambda_1 & & & 0 \\ & \lambda_2 & & \\ & & \ddots & \\ 0 & & & \lambda_n \end{bmatrix} \tag{1.19} \]

where the $\lambda_i$'s are the eigenvalues of the $A$ matrix. Then
\[ e^{At} = e^{P \Lambda P^{-1} t} \tag{1.20} \]

Now notice that for any matrix B and nonsingular matrix P ,

\[ (P B P^{-1})^2 = P B P^{-1} P B P^{-1} = P B^2 P^{-1} \]
so that in general
\[ (P B P^{-1})^k = P B^k P^{-1} \tag{1.21} \]

We then have the following:
\[ e^{P B P^{-1} t} = P e^{Bt} P^{-1} \quad \text{for any } n \times n \text{ matrix } B \tag{1.22} \]


Proof:
\[ e^{P B P^{-1} t} = \sum_{k=0}^{\infty} \frac{(P B P^{-1})^k t^k}{k!} = \sum_{k=0}^{\infty} P \frac{B^k t^k}{k!} P^{-1} = P e^{Bt} P^{-1} \]

Combining (1.15), (1.20) and (1.22), we find that whenever A is diagonalizable,

\[ e^{At} = P \begin{bmatrix} e^{\lambda_1 t} & & & 0 \\ & e^{\lambda_2 t} & & \\ & & \ddots & \\ 0 & & & e^{\lambda_n t} \end{bmatrix} P^{-1} \tag{1.23} \]

From linear algebra, we know that a matrix $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors. Each of the following two conditions is sufficient, though not necessary, to guarantee diagonalizability:

(i) A has distinct eigenvalues

(ii) A is a symmetric matrix

In these two cases, $A$ has a complete set of $n$ independent eigenvectors $v_1, v_2, \cdots, v_n$, with corresponding eigenvalues $\lambda_1, \lambda_2, \cdots, \lambda_n$. Write

\[ P = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \tag{1.24} \]

Then

\[ P^{-1} A P = \Lambda = \begin{bmatrix} \lambda_1 & & & 0 \\ & \lambda_2 & & \\ & & \ddots & \\ 0 & & & \lambda_n \end{bmatrix} \tag{1.25} \]

The equations (1.24), (1.25), and (1.23) together provide a linear algebraic method for computing $e^{At}$. From (1.23), we also see that the dynamic behaviour of $e^{At}$ is completely characterized by the behaviour of $e^{\lambda_i t}$, $i = 1, 2, \cdots, n$.

Example 1:

\[ A = \begin{bmatrix} 1 & 2 \\ -1 & 4 \end{bmatrix} \]

The characteristic equation is given by

\[ \det(sI - A) = s^2 - 5s + 6 = 0 \]

giving eigenvalues 2, 3. For the eigenvalue 2, its eigenvector v satisfies

\[ (2I - A)v = \begin{bmatrix} 1 & -2 \\ 1 & -2 \end{bmatrix} v = 0 \]


We can choose $v = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$. For the eigenvalue 3, its eigenvector $w$ satisfies

\[ (3I - A)w = \begin{bmatrix} 2 & -2 \\ 1 & -1 \end{bmatrix} w = 0 \]

We can choose $w = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. The diagonalizing matrix $P$ is given by

\[ P = \begin{bmatrix} v & w \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}, \qquad P^{-1} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} \]

Finally

\[ e^{At} = P e^{\Lambda t} P^{-1} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} e^{2t} & 0 \\ 0 & e^{3t} \end{bmatrix} \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 2e^{2t} - e^{3t} & -2e^{2t} + 2e^{3t} \\ e^{2t} - e^{3t} & -e^{2t} + 2e^{3t} \end{bmatrix} \]
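The computation in Example 1 can be cross-checked numerically. The sketch below (assuming numpy) forms $P e^{\Lambda t} P^{-1}$ from a computed eigendecomposition and compares it with the closed form just derived.

```python
import numpy as np

A = np.array([[1.0, 2.0], [-1.0, 4.0]])
lam, P = np.linalg.eig(A)            # eigenvalues 2, 3 with eigenvectors in P
t = 0.5
eAt = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)

expected = np.array([
    [2*np.exp(2*t) - np.exp(3*t), -2*np.exp(2*t) + 2*np.exp(3*t)],
    [np.exp(2*t) - np.exp(3*t),   -np.exp(2*t) + 2*np.exp(3*t)],
])
print(np.allclose(eAt, expected))    # True
```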

Example 2:

\[ A = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ 1 & 0 & -1 \end{bmatrix} \]

Since $A$ is lower triangular, we see right away that the eigenvalues are 1, 2 and $-1$, so that

\[ \Lambda = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -1 \end{bmatrix} \]

The eigenvector corresponding to the eigenvalue 2 is given by $\begin{bmatrix} 0 & 1 & 0 \end{bmatrix}^T$, while that of $-1$ is $\begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^T$. To find the eigenvector for the eigenvalue 1, we have

\[ \begin{bmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ 1 & 0 & -1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} \]

so that $\begin{bmatrix} 2 & -2 & 1 \end{bmatrix}^T$ is an eigenvector. The diagonalizing matrix $P$ is then

\[ P = \begin{bmatrix} 2 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}, \qquad P^{-1} = \begin{bmatrix} \frac{1}{2} & 0 & 0 \\ 1 & 1 & 0 \\ -\frac{1}{2} & 0 & 1 \end{bmatrix} \]


\[ e^{At} = \begin{bmatrix} 2 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} e^t & 0 & 0 \\ 0 & e^{2t} & 0 \\ 0 & 0 & e^{-t} \end{bmatrix} \begin{bmatrix} \frac{1}{2} & 0 & 0 \\ 1 & 1 & 0 \\ -\frac{1}{2} & 0 & 1 \end{bmatrix} = \begin{bmatrix} e^t & 0 & 0 \\ -e^t + e^{2t} & e^{2t} & 0 \\ \frac{1}{2}(e^t - e^{-t}) & 0 & e^{-t} \end{bmatrix} \]

Modal Decomposition:

We can give a dynamical interpretation to the above results. Suppose $A$ has distinct eigenvalues $\lambda_1, \ldots, \lambda_n$ so that it has a set of linearly independent eigenvectors $v_1, \ldots, v_n$. If $x_0 = v_j$, then

\[ x(t) = e^{At} x_0 = e^{At} v_j = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!} v_j = \sum_{k=0}^{\infty} \frac{\lambda_j^k t^k}{k!} v_j = e^{\lambda_j t} v_j \]

This means that if we start along an eigenvector of $A$, the solution $x$ will stay in the direction of the eigenvector, with length being stretched or shrunk by $e^{\lambda_j t}$. In general, because the $v_i$'s are linearly independent, an arbitrary initial condition $x_0$ can be expressed as

\[ x_0 = \sum_{j=1}^{n} \xi_j v_j = P \xi \tag{1.26} \]

where
\[ P = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \quad \text{and} \quad \xi = \begin{bmatrix} \xi_1 \\ \vdots \\ \xi_n \end{bmatrix} \]

Using this representation, we have

\[ x(t) = e^{At} x_0 = e^{At} \sum_{j=1}^{n} \xi_j v_j = \sum_{j=1}^{n} \xi_j e^{At} v_j = \sum_{j=1}^{n} \xi_j e^{\lambda_j t} v_j \tag{1.27} \]

so that x(t) is expressible as a (time-varying) linear combination of the eigenvectors of A.

We can connect this representation of $x(t)$ with the representation of $e^{At}$ in (1.23). Note that the $P$ in (1.26) is the diagonalizing matrix for $A$, i.e.

\[ P^{-1} A P = \Lambda \]

and

\[ e^{At} x_0 = P e^{\Lambda t} P^{-1} x_0 = P e^{\Lambda t} \xi = P \begin{bmatrix} \xi_1 e^{\lambda_1 t} \\ \xi_2 e^{\lambda_2 t} \\ \vdots \\ \xi_n e^{\lambda_n t} \end{bmatrix} = \sum_j \xi_j e^{\lambda_j t} v_j \]


the same result as in (1.27). The representation (1.27) of the solution $x(t)$ in terms of the eigenvectors of $A$ is often called the modal decomposition of $x(t)$.
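The modal decomposition translates directly into a few lines of code. The following sketch (assuming numpy; the matrix and closed form are those of Example 2) expands $x_0$ in the eigenvector basis and propagates each mode by $e^{\lambda_j t}$.

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0], [1.0, 2.0, 0.0], [1.0, 0.0, -1.0]])
lam, P = np.linalg.eig(A)          # columns of P are the eigenvectors v_j
x0 = np.array([1.0, -1.0, 2.0])
xi = np.linalg.solve(P, x0)        # coordinates of x0 in the eigenvector basis

def x_of_t(t):
    return P @ (xi * np.exp(lam * t))    # sum_j xi_j e^{lambda_j t} v_j

# Compare against e^{At} x0 using the closed form from Example 2:
t = 1.0
et, e2t, emt = np.exp(t), np.exp(2*t), np.exp(-t)
eAt = np.array([[et, 0, 0],
                [-et + e2t, e2t, 0],
                [0.5*(et - emt), 0, emt]])
print(np.allclose(x_of_t(t), eAt @ x0))  # True
```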

Case II: A not Diagonalizable

If $A$ is not diagonalizable, then the above procedure cannot be carried out. If $A$ does not have distinct eigenvalues, it is in general not diagonalizable, since it may not have $n$ linearly independent eigenvectors. For example, the matrix

\[ A_{nl} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \]

which arises in the state space representation of Newton's law, has 0 as a repeated eigenvalue and is not diagonalizable. On the other hand, note that this $A_{nl}$ satisfies $A_{nl}^2 = 0$, so that we can just sum the infinite series
\[ e^{A_{nl} t} = I + A_{nl} t = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \]

The general situation is quite complicated, and we shall focus on the $3 \times 3$ case for illustration. Let us consider the following $3 \times 3$ matrix $A$ which has the eigenvalue $\lambda$ with multiplicity 3:

\[ A = \begin{bmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{bmatrix} \tag{1.28} \]

Write A = λI + N where

\[ N = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \tag{1.29} \]

Direct calculation shows that

\[ N^2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad \text{and} \quad N^3 = 0 \]

A matrix $B$ satisfying $B^k = 0$ for some $k \ge 1$ is called nilpotent. The smallest integer $k$ such that $B^k = 0$ is called the index of nilpotence. Thus the matrix $N$ in (1.29) is nilpotent with index 3. But then the transition matrix $e^{Nt}$ is easily evaluated to be

\[ e^{Nt} = \sum_{j=0}^{2} \frac{N^j t^j}{j!} = \begin{bmatrix} 1 & t & \frac{t^2}{2!} \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix} \tag{1.30} \]

Once we understand the 3 × 3 case, it is not difficult to check that an l × l matrix of the form

\[ N = \begin{bmatrix} 0 & 1 & & & 0 \\ & \ddots & \ddots & & \\ & & \ddots & \ddots & \\ & & & & 1 \\ 0 & & \cdots & & 0 \end{bmatrix} \]
i.e., 1's along the superdiagonal, and 0 everywhere else, is nilpotent and satisfies $N^l = 0$.
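Because the series terminates for a nilpotent matrix, $e^{Nt}$ can be summed exactly in finitely many steps. A minimal sketch (assuming numpy):

```python
import numpy as np

def expm_nilpotent(N, t):
    """Exact e^{Nt} for a nilpotent n x n matrix N, since N^n = 0."""
    n = N.shape[0]
    total = np.eye(n)
    term = np.eye(n)
    for j in range(1, n):
        term = term @ (N * t) / j    # N^j t^j / j!
        total = total + term
    return total

N = np.array([[0.0, 1, 0], [0, 0, 1], [0, 0, 0]])   # the N of (1.29)
print(expm_nilpotent(N, 2.0))  # [[1. 2. 2.] [0. 1. 2.] [0. 0. 1.]], cf. (1.30)
```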


To compute $e^{At}$ when $A$ is of the form (1.28), recall Property 3 of matrix exponentials: if $A$ and $B$ commute, i.e. $AB = BA$, then $e^{(A+B)t} = e^{At} e^{Bt}$.

If $A$ is of the form (1.28), then since $\lambda I$ commutes with $N$, we can write, using Property 3,

\[ e^{At} = e^{(\lambda I + N)t} = e^{(\lambda I)t} e^{Nt} = (e^{\lambda t} I)(e^{Nt}) = e^{\lambda t} \begin{bmatrix} 1 & t & \frac{t^2}{2!} \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix} \tag{1.31} \]
using (1.30).

We are now in a position to sketch the general structure of the matrix exponential. More details are given in the Appendix to this chapter. The idea is to transform a general $A$ using a nonsingular matrix $P$ into
\[ P^{-1} A P = J = \begin{bmatrix} J_1 & & & 0 \\ & J_2 & & \\ & & \ddots & \\ 0 & & & J_k \end{bmatrix} \]

where each $J_i = \lambda_i I + N_i$, with $N_i$ nilpotent. $J$ is called the Jordan form of $A$. With the block diagonal structure, we can verify

\[ J^m = \begin{bmatrix} J_1^m & & & 0 \\ & J_2^m & & \\ & & \ddots & \\ 0 & & & J_k^m \end{bmatrix} \]

By applying the definition of the transition matrix, we obtain

\[ e^{Jt} = \begin{bmatrix} e^{J_1 t} & & & 0 \\ & e^{J_2 t} & & \\ & & \ddots & \\ 0 & & & e^{J_k t} \end{bmatrix} \tag{1.32} \]

Each $e^{J_i t}$ can be evaluated by the method described above. Finally,

\[ e^{At} = P e^{Jt} P^{-1} \tag{1.33} \]

The above procedure in principle enables us to evaluate $e^{At}$, the only problem being to find the matrix $P$ which transforms $A$ either to diagonal or Jordan form. In general, this can be a very tedious task (see the Appendix to Chapter 1 for an explanation of how it can be done). The above formula is most useful as a means for studying the qualitative dependence of $e^{At}$ on $t$, and will be particularly important in our discussion of stability.

We remark that the above results hold regardless of whether the $\lambda_i$'s are real or complex. In the latter case, we shall take the underlying vector space to be complex and the results then go through without modification.

1.4 Computing eAt: Laplace Transform Method

Another method of evaluating eAt analytically is to use Laplace transforms.


If we let $G(t) = e^{At}$, then the Laplace transform of $G(t)$, denoted by $G(s)$, satisfies

\[ s G(s) = A G(s) + I \]
or
\[ G(s) = (sI - A)^{-1} \tag{1.34} \]

If we denote the inverse Laplace transform operation by $\mathcal{L}^{-1}$, then
\[ e^{At} = \mathcal{L}^{-1}\left[ (sI - A)^{-1} \right] \tag{1.35} \]

The Laplace transform method thus involves two steps:

(a) Find $(sI - A)^{-1}$. The entries of $(sI - A)^{-1}$ will be strictly proper rational functions.

(b) Determine the inverse Laplace transform for each entry of $(sI - A)^{-1}$. This is usually accomplished by doing a partial fractions expansion and using the fact that $\mathcal{L}^{-1}\left[\frac{1}{s-\lambda}\right] = e^{\lambda t}$ (or looking up a mathematical table!).

Determining $(sI - A)^{-1}$ using the cofactor expansion method from linear algebra can be quite tedious. Here we give an alternative method. Let $\det(sI - A) = s^n + p_1 s^{n-1} + \cdots + p_n = p(s)$. Write

\[ (sI - A)^{-1} = \frac{B(s)}{p(s)} = \frac{s^{n-1} B_1 + s^{n-2} B_2 + \cdots + B_n}{p(s)} \tag{1.36} \]

Then the Bi matrices can be determined recursively as follows:

\[ B_1 = I \]
\[ B_{k+1} = A B_k + p_k I, \qquad 1 \le k \le n-1 \tag{1.37} \]
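This recursion is straightforward to implement. The sketch below (assuming numpy; np.poly supplies the characteristic polynomial coefficients) computes the $B_k$ matrices, and can be checked against the worked example that follows.

```python
import numpy as np

def resolvent_numerators(A):
    """B_1, ..., B_n of (1.36) via the recursion (1.37).
    Returns the list of B_k and p = [1, p1, ..., pn]."""
    n = A.shape[0]
    p = np.poly(A)                   # characteristic polynomial coefficients
    B = [np.eye(n)]                  # B_1 = I
    for k in range(1, n):
        B.append(A @ B[-1] + p[k] * np.eye(n))   # B_{k+1} = A B_k + p_k I
    return B, p

A = np.array([[1.0, 0, 0], [1, 2, 0], [1, 0, -1]])
B, p = resolvent_numerators(A)
print(np.round(p, 6))   # [ 1. -2. -1.  2.]  i.e. p(s) = s^3 - 2s^2 - s + 2
print(B[1])             # equals A - 2I
print(B[2])             # equals A B_2 - I
```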

To illustrate the Laplace transform method for evaluating $e^{At}$, again consider

\[ A = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ 1 & 0 & -1 \end{bmatrix} \]
\[ p(s) = \det \begin{bmatrix} s-1 & 0 & 0 \\ -1 & s-2 & 0 \\ -1 & 0 & s+1 \end{bmatrix} = (s-1)(s-2)(s+1) = s^3 - 2s^2 - s + 2 \]

The matrix polynomial B(s) can then be determined using the recursive procedure (1.37).

\[ B_2 = A - 2I = \begin{bmatrix} -1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 0 & -3 \end{bmatrix} \]
\[ B_3 = A B_2 - I = \begin{bmatrix} -2 & 0 & 0 \\ 1 & -1 & 0 \\ -2 & 0 & 2 \end{bmatrix} \]


giving

\[ B(s) = \begin{bmatrix} s^2 - s - 2 & 0 & 0 \\ s + 1 & s^2 - 1 & 0 \\ s - 2 & 0 & s^2 - 3s + 2 \end{bmatrix} \]

Substituting into (1.36) and doing a partial fractions expansion, we obtain

\[ (sI - A)^{-1} = \frac{1}{(s-1)(-2)} \begin{bmatrix} -2 & 0 & 0 \\ 2 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix} + \frac{1}{(s-2)(3)} \begin{bmatrix} 0 & 0 & 0 \\ 3 & 3 & 0 \\ 0 & 0 & 0 \end{bmatrix} + \frac{1}{(s+1)(6)} \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -3 & 0 & 6 \end{bmatrix} \]

\[ e^{At} = \begin{bmatrix} e^t & 0 & 0 \\ -e^t & 0 & 0 \\ \frac{1}{2} e^t & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 \\ e^{2t} & e^{2t} & 0 \\ 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -\frac{1}{2} e^{-t} & 0 & e^{-t} \end{bmatrix} = \begin{bmatrix} e^t & 0 & 0 \\ -e^t + e^{2t} & e^{2t} & 0 \\ \frac{1}{2}(e^t - e^{-t}) & 0 & e^{-t} \end{bmatrix} \]

the same result as before.

As a second example, let $A = \begin{bmatrix} \sigma & \omega \\ -\omega & \sigma \end{bmatrix}$. Then
\[ A = \begin{bmatrix} \sigma & 0 \\ 0 & \sigma \end{bmatrix} + \begin{bmatrix} 0 & \omega \\ -\omega & 0 \end{bmatrix} \]

But

\[ \exp\left( \begin{bmatrix} 0 & \omega \\ -\omega & 0 \end{bmatrix} t \right) = \mathcal{L}^{-1}\left\{ \begin{bmatrix} s & -\omega \\ \omega & s \end{bmatrix}^{-1} \right\} \]
where $\mathcal{L}^{-1}$ is the inverse Laplace transform operator

\[ = \mathcal{L}^{-1} \begin{bmatrix} \frac{s}{s^2+\omega^2} & \frac{\omega}{s^2+\omega^2} \\ -\frac{\omega}{s^2+\omega^2} & \frac{s}{s^2+\omega^2} \end{bmatrix} = \begin{bmatrix} \cos \omega t & \sin \omega t \\ -\sin \omega t & \cos \omega t \end{bmatrix} \]

\[ e^{At} = e^{(\sigma I)t} \begin{bmatrix} \cos \omega t & \sin \omega t \\ -\sin \omega t & \cos \omega t \end{bmatrix} = \begin{bmatrix} e^{\sigma t} \cos \omega t & e^{\sigma t} \sin \omega t \\ -e^{\sigma t} \sin \omega t & e^{\sigma t} \cos \omega t \end{bmatrix} \]

In this and the previous sections, we have described analytical procedures for computing $e^{At}$. Of course, when the dimension $n$ is large, these procedures would be virtually impossible to carry out by hand. In general, we must resort to numerical techniques. Numerically stable and efficient methods of evaluating the matrix exponential can be found in the research literature.
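In practice one rarely forms $e^{At}$ by hand; scipy's expm (a scaling-and-squaring implementation) is the usual tool. A one-line cross-check against Example 2 of Section 1.3 (assuming scipy is available):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 0, 0], [1, 2, 0], [1, 0, -1]])
t = 0.3
print(expm(A * t))   # agrees with the closed-form e^{At} computed above
```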

1.5 Differential Equations with Inputs and Outputs

The solution of the homogeneous equation (1.4) can be easily generalized to differential equations with inputs. Consider the equation

\[ \dot{x}(t) = Ax(t) + Bu(t), \qquad x(0) = x_0 \tag{1.38} \]


where $u$ is a piecewise continuous $\mathbb{R}^m$-valued function. Define a function $z(t) = e^{-At} x(t)$. Then $z(t)$ satisfies
\[ \dot{z}(t) = -e^{-At} A x(t) + e^{-At} A x(t) + e^{-At} B u(t) = e^{-At} B u(t) \]

Since the above equation does not depend on $z(t)$ on the right hand side, it can be directly integrated to give

\[ z(t) = z(0) + \int_0^t e^{-As} B u(s)\, ds = x(0) + \int_0^t e^{-As} B u(s)\, ds \]

Hence
\[ x(t) = e^{At} z(t) = e^{At} x_0 + \int_0^t e^{A(t-s)} B u(s)\, ds \tag{1.39} \]

More generally, we have

\[ x(t) = e^{A(t-t_0)} x(t_0) + \int_{t_0}^t e^{A(t-s)} B u(s)\, ds \tag{1.40} \]

This is the variation of parameters formula for solving (1.38). We now consider linear systems with inputs and outputs:

\[ \dot{x}(t) = Ax(t) + Bu(t), \qquad x(0) = x_0 \tag{1.41} \]
\[ y(t) = Cx(t) + Du(t) \tag{1.42} \]

where the output $y$ is a piecewise continuous $\mathbb{R}^p$-valued function. Using (1.39) in (1.42) gives the solution immediately:

\[ y(t) = C e^{At} x_0 + \int_0^t C e^{A(t-s)} B u(s)\, ds + D u(t) \tag{1.43} \]

In the case x0 = 0 and D = 0, we find

\[ y(t) = \int_0^t C e^{A(t-s)} B u(s)\, ds \]

which is of the form $y(t) = \int_0^t h(t-s) u(s)\, ds$, a convolution integral. The function $h(t) = C e^{At} B$ is called the impulse response of the system. If we allow generalized functions, we can incorporate a nonzero $D$ term as

\[ h(t) = C e^{At} B + D \delta(t) \tag{1.44} \]

where $\delta(t)$ is the Dirac $\delta$-function. Noting that the Laplace transform of $e^{At}$ is $(sI - A)^{-1}$, the transfer function of the linear system from $u$ to $y$, which is the Laplace transform of the impulse response, is given by

\[ H(s) = C(sI - A)^{-1} B + D \tag{1.45} \]

The entries of $H(s)$ are proper rational functions, i.e., ratios of polynomials with degree of numerator $\le$ degree of denominator. If $D = 0$ (no direct transmission from $u$ to $y$), the entries are strictly proper. This is an important property of transfer functions arising from a state space representation of the form given by (1.41), (1.42).

As an example, consider the standard circuit below.


[Figure: a series RLC circuit driven by a voltage source $u$, with $x_1$ the voltage across the capacitor $C$ and $x_2$ the current through the inductor $L$, in series with the resistor $R$.]

The choice of $x_1$ to denote the voltage across the capacitor and $x_2$ the current through the inductor gives a state description for the circuit. The equations are

\[ C \dot{x}_1 = x_2 \tag{1.46} \]
\[ L \dot{x}_2 + R x_2 + x_1 = u \tag{1.47} \]

Converting into standard form, we have

\[ \dot{x} = \begin{bmatrix} 0 & \frac{1}{C} \\ -\frac{1}{L} & -\frac{R}{L} \end{bmatrix} x + \begin{bmatrix} 0 \\ \frac{1}{L} \end{bmatrix} u \tag{1.48} \]

Take $C = 0.5$, $L = 1$, $R = 3$. The circuit state equation becomes

\[ \dot{x} = \begin{bmatrix} 0 & 2 \\ -1 & -3 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u = Ax + Bu \tag{1.49} \]

We have

\[ (sI - A)^{-1} = \begin{bmatrix} s & -2 \\ 1 & s+3 \end{bmatrix}^{-1} = \frac{1}{s^2 + 3s + 2} \begin{bmatrix} s+3 & 2 \\ -1 & s \end{bmatrix} = \begin{bmatrix} \frac{2}{s+1} - \frac{1}{s+2} & \frac{2}{s+1} - \frac{2}{s+2} \\ -\frac{1}{s+1} + \frac{1}{s+2} & -\frac{1}{s+1} + \frac{2}{s+2} \end{bmatrix} \]

Upon inversion, we get

\[ e^{At} = \begin{bmatrix} 2e^{-t} - e^{-2t} & 2e^{-t} - 2e^{-2t} \\ -e^{-t} + e^{-2t} & -e^{-t} + 2e^{-2t} \end{bmatrix} \tag{1.50} \]

If we are interested in the voltage across the capacitor as the output, we then have

\[ y = x_1 = \begin{bmatrix} 1 & 0 \end{bmatrix} x \]

The impulse response from the input voltage source to the output voltage is given by

\[ h(t) = C e^{At} B = 2e^{-t} - 2e^{-2t} \]
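The passage from state space to transfer function in (1.45) can be cross-checked with scipy.signal.ss2tf, as in this sketch (assuming scipy) for the circuit above; partial fractions of the result give back the impulse response just computed.

```python
import numpy as np
from scipy.signal import ss2tf

A = np.array([[0.0, 2.0], [-1.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
num, den = ss2tf(A, B, C, D)
print(num)   # [[0. 0. 2.]] -> numerator 2
print(den)   # [1. 3. 2.]   -> s^2 + 3s + 2, so H(s) = 2/((s+1)(s+2))
```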


If we have a constant input $u = 1$, the output voltage using (1.39), assuming initial rest ($x_0 = 0$), is given by

\[ y(t) = \int_0^t \left[ 2e^{-(t-s)} - 2e^{-2(t-s)} \right] ds = 2(1 - e^{-t}) - (1 - e^{-2t}) \]

Armed with (1.39) and (1.43), we can see why (1.38) deserves the name of state equation. The solutions for $x(t)$ and $y(t)$ from (1.39) and (1.43) show that indeed knowledge of the current state ($x_0$) and future inputs $u(t)$, $t \ge 0$, determines the future states $x(t)$, $t \ge 0$, and outputs $y(t)$, $t \ge 0$.
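As a final check on this section, the step response just computed can be reproduced by numerical simulation of (1.38). The sketch below (assuming scipy.signal.lsim) integrates the circuit equations from rest and compares with the closed form.

```python
import numpy as np
from scipy.signal import lsim

A = np.array([[0.0, 2.0], [-1.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

t = np.linspace(0.0, 5.0, 501)
u = np.ones_like(t)                    # constant input u = 1
_, y, _ = lsim((A, B, C, D), u, t)     # zero initial state by default

y_closed = 2*(1 - np.exp(-t)) - (1 - np.exp(-2*t))
print(np.max(np.abs(y - y_closed)))    # small discretization error
```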

1.6 Stability

Let us consider the unforced state equation first:

\[ \dot{x} = Ax, \qquad x(0) = x_0 \]

The vector x(t) → 0 as t → ∞ for every x0 ⇔ all the eigenvalues of A lie in the open left half-plane.

Idea of proof: From (1.33),

\[ x(t) = e^{At} x_0 = P e^{Jt} P^{-1} x_0 \]

The entries in the matrix $e^{Jt}$ are of the form $e^{\lambda_i t} p(t)$ where $p(t)$ is a polynomial in $t$, and $\lambda_i$ is an eigenvalue of $A$. These entries converge to 0 if and only if $\mathrm{Re}\,\lambda_i < 0$. Thus

\[ x(t) \to 0 \ \ \forall x_0 \iff \text{all eigenvalues of } A \text{ lie in } \{\lambda : \mathrm{Re}\,\lambda < 0\} \]

Now let us look at the full system model:

\[ \dot{x} = Ax + Bu \]
\[ y = Cx + Du \]

The transfer matrix from u to y is

\[ H(s) = C(sI - A)^{-1} B + D \]

We can write this as

\[ H(s) = \frac{1}{\det(sI - A)}\, C \cdot \mathrm{adj}(sI - A) \cdot B + D \]

Notice that the elements of the matrix $\mathrm{adj}(sI - A)$ are all polynomials in $s$; consequently, they have no poles. Notice also that $\det(sI - A)$ is the characteristic polynomial of $A$. We can therefore conclude from the preceding equation that

{eigenvalues of A} ⊃ {poles of H(s)} .

Hence, if all the eigenvalues of $A$ are in the open left half-plane, then $H(s)$ is a stable transfer matrix. The converse is not necessarily true.


Example:

\[ A = \begin{bmatrix} -1 & 0 \\ 0 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad D = 0 \]
\[ H(s) = \frac{1}{s+1} \]

So the transfer function has stable poles, but the system is obviously unstable. Any initial condition $x_0$ with a nonzero second component will cause $x(t)$ to grow without bound.

A more subtle example is the following

Example:

\[ A = \begin{bmatrix} 0 & 1 & 1 \\ -2 & -2 & 0 \\ 2 & 1 & -1 \end{bmatrix}, \quad B = \begin{bmatrix} -1 & 0 \\ 2 & 1 \\ -1 & -1 \end{bmatrix}, \quad C = \begin{bmatrix} 2 & 2 & 1 \end{bmatrix}, \quad D = \begin{bmatrix} 0 & 0 \end{bmatrix} \]
\[ \{\text{eigenvalues of } A\} = \{0, -1, -2\} \]
\[ H(s) = \frac{1}{\det(sI - A)}\, C \cdot \mathrm{adj}(sI - A) \cdot B \]
\[ = \frac{1}{s^3 + 3s^2 + 2s} \begin{bmatrix} 2 & 2 & 1 \end{bmatrix} \begin{bmatrix} s^2 + 3s + 2 & s + 2 & s + 2 \\ -2s - 2 & s^2 + s - 2 & -2 \\ 2s + 2 & s + 2 & s^2 + 2s + 2 \end{bmatrix} \begin{bmatrix} -1 & 0 \\ 2 & 1 \\ -1 & -1 \end{bmatrix} \]
\[ = \frac{1}{s^3 + 3s^2 + 2s} \begin{bmatrix} 2s^2 + 4s + 2 & 2s^2 + 5s + 2 & s^2 + 4s + 2 \end{bmatrix} \begin{bmatrix} -1 & 0 \\ 2 & 1 \\ -1 & -1 \end{bmatrix} \]
\[ = \frac{1}{s^3 + 3s^2 + 2s} \begin{bmatrix} s^2 + 2s & s^2 + s \end{bmatrix} = \begin{bmatrix} \dfrac{s+2}{s^2 + 3s + 2} & \dfrac{s+1}{s^2 + 3s + 2} \end{bmatrix} \]

Thus $\{\text{poles of } H(s)\} = \{-1, -2\}$. Hence the eigenvalue of $A$ at $\lambda = 0$ does not appear as a pole of $H(s)$. Note that there is a cancellation of a factor of $s$ in the numerator and denominator of $H(s)$. This corresponds to a pole-zero cancellation in the determination of $H(s)$, and indicates that something internal in the system cannot be seen from the input-output behaviour.
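The cancellation can be seen numerically by comparing the eigenvalues of $A$ with the roots of the numerator and denominator polynomials of an entry of $H(s)$. A sketch (assuming scipy) for the first input channel:

```python
import numpy as np
from scipy.signal import ss2tf

A = np.array([[0.0, 1, 1], [-2, -2, 0], [2, 1, -1]])
B = np.array([[-1.0, 0], [2, 1], [-1, -1]])
C = np.array([[2.0, 2, 1]])
D = np.array([[0.0, 0]])

print(np.round(np.linalg.eigvals(A), 6))  # 0, -1, -2
num, den = ss2tf(A, B, C, D, input=0)     # entry of H(s) from the first input
print(np.roots(den))                      # 0, -1, -2: all eigenvalues appear
print(np.roots(num[0]))                   # includes s = 0, cancelling that pole
```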

1.7 Transfer Function to State Space

We have already seen, in Section 1.5, how we can determine the impulse response and the transfer function of a linear time-invariant system from its state space description. In this section, we discuss the converse,


harder problem of determining a state space description of a linear time-invariant system from its transfer function. This is also called the realization problem in control theory. We shall only solve the single-input single-output case, as the general multivariable problem is much harder and beyond the scope of this course.

Suppose we are given the scalar-valued transfer function $H(s)$ of a single-input single-output linear time-invariant system. We assume $H(s)$ is a proper rational function; otherwise we cannot find a state space representation of the form (1.41), (1.42) for $H(s)$. We also assume that there are no common factors in the numerator and denominator of $H(s)$. After long division, if necessary, we can write

\[ H(s) = d + \frac{q(s)}{p(s)} \]

where the degree of $q(s)$ is less than the degree of $p(s)$. The constant $d$ corresponds to the direct transmission term from $u$ to $y$, so the realization problem reduces to finding a triple $(c^T, A, b)$ such that
\[ c^T (sI - A)^{-1} b = \frac{q(s)}{p(s)} \]
so that the transfer function $H(s)$ is given by
\[ H(s) = c^T (sI - A)^{-1} b + d = \frac{q(s)}{p(s)} + d \]

Note that we have used the notation $c^T$ for the $1 \times n$ matrix $C$, in keeping with our normal convention that lower case letters denote column vectors.

We now give two solutions to the realization problem, for
\[ \frac{q(s)}{p(s)} \]
with
\[ p(s) = s^n + p_1 s^{n-1} + \cdots + p_n \]
\[ q(s) = q_1 s^{n-1} + q_2 s^{n-2} + \cdots + q_n \]

Then the following $(c^T, A, b)$ triple realizes $\frac{q(s)}{p(s)}$:
\[ A = \begin{bmatrix} 0 & \cdots & 0 & -p_n \\ 1 & 0 & \cdots & -p_{n-1} \\ \vdots & \ddots & & \vdots \\ 0 & \cdots & 1 & -p_1 \end{bmatrix}, \qquad b = \begin{bmatrix} q_n \\ \vdots \\ q_1 \end{bmatrix}, \qquad c^T = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix} \]

To see this, let
\[ \xi^T(s) = \begin{bmatrix} \xi_1(s) & \xi_2(s) & \cdots & \xi_n(s) \end{bmatrix} = c^T (sI - A)^{-1} \]

Then
\[ \xi^T(s)(sI - A) = c^T \tag{1.51} \]

Writing out (1.51) in component form, we have

\[ s\xi_1(s) - \xi_2(s) = 0 \]
\[ s\xi_2(s) - \xi_3(s) = 0 \]
\[ \vdots \]
\[ s\xi_{n-1}(s) - \xi_n(s) = 0 \]
\[ s\xi_n(s) + \sum_{i=1}^{n} \xi_{n-i+1}(s)\, p_i = 1 \]

Successively solving these equations, we find

\[ \xi_k(s) = s^{k-1} \xi_1(s), \qquad 2 \le k \le n \]

and
\[ s^n \xi_1(s) + \sum_{i=1}^{n} p_i s^{n-i} \xi_1(s) = 1 \tag{1.52} \]

so that
\[ \xi_1(s) = \frac{1}{p(s)} \]

Hence
\[ \xi^T(s) = \frac{1}{p(s)} \begin{bmatrix} 1 & s & s^2 & \cdots & s^{n-1} \end{bmatrix} \tag{1.53} \]

and
\[ c^T (sI - A)^{-1} b = \xi^T(s)\, b = \frac{q(s)}{p(s)} \]

The resulting system is often said to be in observable canonical form. We shall discuss the meaning of the word "observable" in control theory later in the course.

As an example, we can now write down the state space representation for the equation given in (1.3). The transfer function of (1.3) is

\[ H(s) = \frac{s + 4}{s^2 + 3s + 2} \]

The observable representation is therefore given by

\[ \dot{x} = \begin{bmatrix} 0 & -2 \\ 1 & -3 \end{bmatrix} x + \begin{bmatrix} 4 \\ 1 \end{bmatrix} u \]
\[ y = \begin{bmatrix} 0 & 1 \end{bmatrix} x \]


Now note that because $H(s)$ is scalar-valued, $H(s) = H^T(s)$. We can immediately conclude that the following $(c_1^T, A_1, b_1)$ triple also realizes $\frac{q(s)}{p(s)}$:

\[ A_1 = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & & 0 & 1 \\ -p_n & -p_{n-1} & \cdots & & -p_2 & -p_1 \end{bmatrix}, \qquad b_1 = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}, \qquad c_1^T = \begin{bmatrix} q_n & \cdots & q_1 \end{bmatrix} \]

so that $H(s)$ can also be written as $H(s) = d + c_1^T (sI - A_1)^{-1} b_1$.

The resulting system is said to be in controllable canonical form. We shall discuss the meaning of the word "controllable" in the next chapter. The controllable and observable canonical forms will be used later for control design.
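Both canonical forms are mechanical to construct from the coefficients of $p(s)$ and $q(s)$. The sketch below (assuming numpy and scipy) builds the controllable canonical form for the example above and verifies the transfer function with ss2tf.

```python
import numpy as np
from scipy.signal import ss2tf

def controllable_canonical(p, q):
    """Controllable canonical form (A1, b1, c1) for q(s)/p(s),
    with p = [p1, ..., pn] (p monic) and q = [q1, ..., qn]."""
    n = len(p)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)
    A[-1, :] = -np.array(p[::-1])    # last row [-pn, ..., -p1]
    b = np.zeros((n, 1)); b[-1, 0] = 1.0
    c = np.array(q[::-1], dtype=float).reshape(1, -1)   # [qn, ..., q1]
    return A, b, c

# H(s) = (s + 4)/(s^2 + 3s + 2): p1 = 3, p2 = 2, q1 = 1, q2 = 4
A1, b1, c1 = controllable_canonical([3.0, 2.0], [1.0, 4.0])
num, den = ss2tf(A1, b1, c1, np.zeros((1, 1)))
print(num, den)   # numerator ~ [0, 1, 4] (s + 4), denominator [1, 3, 2]
```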

1.8 Linearization

So far, we have only considered systems described by linear constant coefficient differential equations. Many systems are nonlinear, with dynamics described by nonlinear differential equations of the form

\[ \dot{x} = f(x, u) \tag{1.54} \]

where $f$ has $n$ components, each of which is a continuously differentiable function of $x_1, \cdots, x_n, u_1, \cdots, u_m$. Suppose the constant vectors $(x_p, u_p)$ correspond to a desired operating point, also called an equilibrium point, for the system, i.e.

\[ f(x_p, u_p) = 0 \]

If the initial condition $x_0 = x_p$, then by using $u(t) = u_p$ for all $t$, we get $x(t) = x_p$ for all $t$, i.e. the desired operating point is maintained. However, if $x_0 \ne x_p$, then to satisfy (1.54), $(x, u)$ will be different from $(x_p, u_p)$. However, if $x_0$ is close to $x_p$, then by continuity we expect $x$ and $u$ to be close to $x_p$ and $u_p$ as well. Let

\[ x = x_p + \delta x, \qquad u = u_p + \delta u \]

Taylor’s series expansion shows that

\[ f_i(x, u) = f_i(x_p + \delta x,\, u_p + \delta u) = \left[ \frac{\partial f_i}{\partial x_1}\ \frac{\partial f_i}{\partial x_2}\ \cdots\ \frac{\partial f_i}{\partial x_n} \right]_{(x_p, u_p)} \delta x + \left[ \frac{\partial f_i}{\partial u_1}\ \frac{\partial f_i}{\partial u_2}\ \cdots\ \frac{\partial f_i}{\partial u_m} \right]_{(x_p, u_p)} \delta u + \text{h.o.t.} \]


If we drop higher order terms (h.o.t.) involving $\delta x_j$ and $\delta u_k$, we obtain a linear differential equation for $\delta x$:

\[ \frac{d}{dt} \delta x = A\, \delta x + B\, \delta u \tag{1.55} \]

where $[A]_{jk} = \frac{\partial f_j}{\partial x_k}(x_p, u_p)$ and $[B]_{jk} = \frac{\partial f_j}{\partial u_k}(x_p, u_p)$. $A$ is called the Jacobian matrix of $f$ with respect to $x$, sometimes simply written as $\frac{\partial f}{\partial x}(x_p, u_p)$. Similarly, $B = \frac{\partial f}{\partial u}(x_p, u_p)$.

The resulting system (1.55) is called the linearized system of the nonlinear system (1.54) about the operating point $(x_p, u_p)$. The process is called linearization of the nonlinear system about $(x_p, u_p)$. Keeping the original system near the operating point can often be achieved by keeping the linearized system near 0.

Example:

Consider a pendulum subjected to an applied torque. The normalized equation of motion is

\[ \ddot{\theta} + \frac{g}{l} \sin \theta = u \tag{1.56} \]

where $g$ is the gravitational constant and $l$ is the length of the pendulum. Let $x = \begin{bmatrix} \theta & \dot{\theta} \end{bmatrix}^T$. We have the following nonlinear differential equation:

\[ \dot{x}_1 = x_2 \]
\[ \dot{x}_2 = -\frac{g}{l} \sin x_1 + u \tag{1.57} \]

We recognize
\[ f(x, u) = \begin{bmatrix} x_2 \\ -\frac{g}{l} \sin x_1 + u \end{bmatrix} \]

Clearly (x, u) = (0, 0) is an equilibrium point. The linearized system is given by

\[ \dot{x} = \begin{bmatrix} 0 & 1 \\ -\frac{g}{l} & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u = A_0 x + Bu \]

The system matrix $A_0$ has purely imaginary eigenvalues, corresponding to harmonic motion. Note, however, that $(x_1, x_2, u) = (\pi, 0, 0)$ is also an equilibrium point, corresponding to the pendulum in the inverted vertical position. The linearized system is then given by

\[ \dot{x} = \begin{bmatrix} 0 & 1 \\ \frac{g}{l} & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u = A_\pi x + Bu \]

The system matrix $A_\pi$ has an unstable eigenvalue, and the behaviour of the linearized system about the two equilibrium points is quite different.
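Linearization is also easy to carry out numerically when symbolic Jacobians are inconvenient. The sketch below (assuming numpy, with $g/l = 9.8$ chosen purely for illustration) recovers $A_0$ and $A_\pi$ by finite differences and shows their very different eigenvalues.

```python
import numpy as np

def numerical_jacobian(f, z, eps=1e-6):
    """Finite-difference Jacobian of f at z (a rough sketch)."""
    fz = f(z)
    J = np.zeros((len(fz), len(z)))
    for k in range(len(z)):
        dz = np.zeros_like(z)
        dz[k] = eps
        J[:, k] = (f(z + dz) - fz) / eps
    return J

g_over_l = 9.8                  # illustrative value of g/l

def f(z):                       # z = (x1, x2, u), dynamics (1.57)
    x1, x2, u = z
    return np.array([x2, -g_over_l * np.sin(x1) + u])

for xp in (0.0, np.pi):         # hanging and inverted equilibria
    J = numerical_jacobian(f, np.array([xp, 0.0, 0.0]))
    A, B = J[:, :2], J[:, 2:]
    print(xp, np.round(np.linalg.eigvals(A), 3))
# prints ~ [0+3.13j, 0-3.13j] at theta = 0 and [3.13, -3.13] at theta = pi
```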


Appendix to Chapter 1: Jordan Forms

This appendix provides more details on the Jordan form of a matrix and its use in computing the transition matrix. The results are deep and complicated, and we cannot give a complete exposition here. Standard textbooks on linear algebra should be consulted for a more thorough treatment.

When a matrix $A$ has repeated eigenvalues, it is usually not diagonalizable. However, there always exists a nonsingular matrix $P$ such that

\[ P^{-1} A P = J = \begin{bmatrix} J_1 & & & 0 \\ & J_2 & & \\ & & \ddots & \\ 0 & & & J_k \end{bmatrix} \tag{A.1} \]

where Ji is of the form

\[ J_i = \begin{bmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ & \lambda_i & 1 & & \vdots \\ & & \ddots & \ddots & 0 \\ & & & & 1 \\ 0 & & & & \lambda_i \end{bmatrix}, \quad \text{an } m_i \times m_i \text{ matrix} \tag{A.2} \]

The $\lambda_i$'s need not be all distinct, but $\sum_{i=1}^{k} m_i = n$. The special form on the right hand side of (A.1) is called the Jordan form of $A$, and $J_i$ is the Jordan block associated with eigenvalue $\lambda_i$. Note that if $m_i = 1$ for all $i$, i.e., each Jordan block is $1 \times 1$ and $k = n$, the Jordan form specializes to the diagonal form.

There are immediately some natural questions which arise in connection with the Jordan form: for an eigenvalue $\lambda_i$ with multiplicity $n_i$, how many Jordan blocks can there be associated with $\lambda_i$, and how do we determine the size of the various blocks? Also, how do we determine the nonsingular $P$ which transforms $A$ into Jordan form?

First recall that the nullspace of $A$, denoted by $\mathcal{N}(A)$, is the set of vectors $x$ such that $Ax = 0$. $\mathcal{N}(A)$ is a subspace and has a dimension. If $\lambda$ is an eigenvalue of $A$, the dimension of $\mathcal{N}(A - \lambda I)$ is called the geometric multiplicity of $\lambda$. The geometric multiplicity of $\lambda$ corresponds to the number of independent eigenvectors associated with $\lambda$.

Fact: Suppose $\lambda$ is an eigenvalue of $A$. The number of Jordan blocks associated with $\lambda$ is equal to the geometric multiplicity of $\lambda$.

The number of times $\lambda$ is repeated as a root of the characteristic equation $\det(sI - A) = 0$ is called its algebraic multiplicity (commonly simplified to just multiplicity). The geometric multiplicity of $\lambda$ is always less than or equal to its algebraic multiplicity. If the geometric multiplicity is strictly less than the algebraic multiplicity, $\lambda$ has generalized eigenvectors. A generalized eigenvector is a nonzero vector $v$ such that for some positive integer $k$, $(A - \lambda I)^k v = 0$ but $(A - \lambda I)^{k-1} v \ne 0$. Such a vector $v$ defines a chain of generalized eigenvectors $\{v_1, v_2, \cdots, v_k\}$ through the equations $v_k = v$, $v_{k-1} = (A - \lambda I)v_k$, $v_{k-2} = (A - \lambda I)v_{k-1}$, $\cdots$, $v_1 = (A - \lambda I)v_2$. Note that $(A - \lambda I)v_1 = (A - \lambda I)^k v = 0$, so that $v_1$ is an eigenvector. This set of equations is often written as

\[ A v_2 = \lambda v_2 + v_1 \]
\[ A v_3 = \lambda v_3 + v_2 \]
\[ \vdots \]
\[ A v_k = \lambda v_k + v_{k-1} \]

It can be shown that the generalized eigenvectors associated with an eigenvalue $\lambda$ are linearly independent. The nonsingular matrix $P$ which transforms $A$ into its Jordan form is constructed from the set of eigenvectors and generalized eigenvectors.

The complete procedure for determining all the generalized eigenvectors, though too complex to describe precisely here, can be well illustrated by an example.

Example:

Consider the matrix A given by

\[ A = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \end{bmatrix} \]
\[ \det(sI - A) = s^3(s + 3) \]

so that the eigenvalues are $-3$ and $0$, with $0$ having algebraic multiplicity 3. Let us first determine the geometric multiplicity of the eigenvalue 0. The equation

\[ Aw = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \end{bmatrix} = 0 \tag{A.3} \]

gives $w_3 = 0$, $w_4 = 0$, and $w_1, w_2$ arbitrary. Thus the geometric multiplicity of 0 is 2, i.e., there are 2 Jordan blocks associated with 0. Since 0 has algebraic multiplicity 3, this means there are 2 eigenvectors and 1 generalized eigenvector. To determine the generalized eigenvector, we solve the homogeneous equation $A^2 v = 0$ for solutions $v$ such that $Av \ne 0$. Now

\[ A^2 = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \end{bmatrix} \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \end{bmatrix} = \begin{bmatrix} 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \\ 0 & 0 & 3 & -3 \\ 0 & 0 & -6 & 6 \end{bmatrix} \]


It is readily seen that
\[ v = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix} \]
solves $A^2 v = 0$, and
\[ Av = \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \]
so that we have the chain of generalized eigenvectors

\[ v_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}, \qquad v_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \]
with $v_1$ an eigenvector. From (A.3), a second independent eigenvector is $w = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}^T$. This completes the determination of the eigenvectors and generalized eigenvectors associated with 0. Finally, the eigenvector for the eigenvalue $-3$ is given by the solution of

\[ (A + 3I)z = \begin{bmatrix} 3 & 0 & 1 & 0 \\ 0 & 3 & 0 & 1 \\ 0 & 0 & 2 & 1 \\ 0 & 0 & 2 & 1 \end{bmatrix} z = 0 \]

yielding $z = \begin{bmatrix} 1 & -2 & -3 & 6 \end{bmatrix}^T$. The nonsingular matrix bringing $A$ to Jordan form is then given by

\[ P = \begin{bmatrix} w & v_1 & v_2 & z \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & -2 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 1 & 6 \end{bmatrix} \]

Note that the chain of generalized eigenvectors $v_1, v_2$ goes together. One can verify that

\[ P^{-1} = \begin{bmatrix} 1 & -1 & \frac{1}{3} & -\frac{1}{3} \\ 0 & 1 & -\frac{2}{9} & \frac{2}{9} \\ 0 & 0 & \frac{2}{3} & \frac{1}{3} \\ 0 & 0 & -\frac{1}{9} & \frac{1}{9} \end{bmatrix} \]


and

\[ P^{-1} A P = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -3 \end{bmatrix} \tag{A.4} \]

The ordering of the Jordan blocks is not unique. If we choose

\[ M = \begin{bmatrix} v_1 & v_2 & w & z \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 1 \\ 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & -3 \\ 0 & 1 & 0 & 6 \end{bmatrix} \]
\[ M^{-1} A M = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -3 \end{bmatrix} \tag{A.5} \]

although it is common to choose the blocks ordered in decreasing size, i.e., that given in (A.5). Once we have transformed $A$ into its Jordan form (A.1), we can readily determine the transition matrix $e^{At}$. As before, we have
\[ A^m = (P J P^{-1})^m = P J^m P^{-1} \]

By the block diagonal nature of J , we readily see

\[ J^m = \begin{bmatrix} J_1^m & & & 0 \\ & J_2^m & & \\ & & \ddots & \\ 0 & & & J_k^m \end{bmatrix} \]

Thus,

\[ e^{At} = P \begin{bmatrix} e^{J_1 t} & & & 0 \\ & e^{J_2 t} & & \\ & & \ddots & \\ 0 & & & e^{J_k t} \end{bmatrix} P^{-1} \]

Since the $m_i \times m_i$ block $J_i = \lambda_i I + N_i$, using the methods of Section 1.3 we can immediately write down

\[ e^{J_i t} = e^{\lambda_i t} \begin{bmatrix} 1 & t & \cdots & \frac{t^{m_i - 1}}{(m_i - 1)!} \\ & \ddots & \ddots & \vdots \\ & & \ddots & t \\ 0 & & & 1 \end{bmatrix} \]

We now have a complete linear algebraic procedure for determining $e^{At}$ when $A$ is not diagonalizable. For

\[ A = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \end{bmatrix} \]


and using the M matrix for transformation to Jordan form, we have

\[ e^{At} = M \begin{bmatrix} 1 & t & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{-3t} \end{bmatrix} M^{-1} = \begin{bmatrix} 1 & 0 & \frac{1}{9} + \frac{2}{3}t - \frac{1}{9}e^{-3t} & -\frac{1}{9} + \frac{1}{3}t + \frac{1}{9}e^{-3t} \\ 0 & 1 & -\frac{2}{9} + \frac{2}{3}t + \frac{2}{9}e^{-3t} & \frac{2}{9} + \frac{1}{3}t - \frac{2}{9}e^{-3t} \\ 0 & 0 & \frac{2}{3} + \frac{1}{3}e^{-3t} & \frac{1}{3} - \frac{1}{3}e^{-3t} \\ 0 & 0 & \frac{2}{3} - \frac{2}{3}e^{-3t} & \frac{1}{3} + \frac{2}{3}e^{-3t} \end{bmatrix} \]
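The entire appendix computation can be cross-checked symbolically. The sketch below (assuming sympy is available) asks for the Jordan form and the symbolic matrix exponential of the same $A$.

```python
import sympy as sp

A = sp.Matrix([[0, 0, 1, 0],
               [0, 0, 0, 1],
               [0, 0, -1, 1],
               [0, 0, 2, -2]])
P, J = A.jordan_form()      # A = P J P^{-1}
sp.pprint(J)                # blocks for eigenvalue 0 (sizes 1 and 2) and -3

t = sp.symbols('t')
eAt = sp.simplify((A * t).exp())   # symbolic e^{At}
sp.pprint(eAt)                     # matches the matrix above
```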