
Matrix Analysis

Collection Editor: Steven Cox

Authors: Steven Cox, Doug Daniels, CJ Ganier

Online: <http://cnx.org/content/col10048/1.4/>

C O N N E X I O N S

Rice University, Houston, Texas


©2008 Steven Cox

This selection and arrangement of content is licensed under the Creative Commons Attribution License:

http://creativecommons.org/licenses/by/1.0


Table of Contents

1 Preface
1.1 Preface to Matrix Analysis

2 Matrix Methods for Electrical Systems
2.1 Nerve Fibers and the Strang Quartet
2.2 CAAM 335 Chapter 1 Exercises

3 Matrix Methods for Mechanical Systems
3.1 A Uniaxial Truss
3.2 A Small Planar Truss
3.3 The General Planar Truss
3.4 CAAM 335 Chapter 2 Exercises

4 The Fundamental Subspaces
4.1 Column Space
4.2 Null Space
4.3 The Null and Column Spaces: An Example
4.4 Left Null Space
4.5 Row Space
4.6 Exercises: Columns and Null Spaces
4.7 Appendices/Supplements

5 Least Squares
5.1 Least Squares

6 Matrix Methods for Dynamical Systems
6.1 Nerve Fibers and the Dynamic Strang Quartet
6.2 The Laplace Transform
6.3 The Inverse Laplace Transform
6.4 The Backward-Euler Method
6.5 Exercises: Matrix Methods for Dynamical Systems
6.6 Supplemental

7 Complex Analysis 1
7.1 Complex Numbers, Vectors and Matrices
7.2 Complex Functions
7.3 Complex Differentiation
7.4 Exercises: Complex Numbers, Vectors, and Functions
Solutions

8 Complex Analysis 2
8.1 Cauchy's Theorem
8.2 Cauchy's Integral Formula
8.3 The Inverse Laplace Transform: Complex Integration
8.4 Exercises: Complex Integration

9 The Eigenvalue Problem
9.1 Introduction
9.2 The Resolvent
9.3 The Partial Fraction Expansion of the Resolvent
9.4 The Spectral Representation
9.5 The Eigenvalue Problem: Examples
9.6 The Eigenvalue Problem: Exercises

10 The Symmetric Eigenvalue Problem
10.1 The Spectral Representation of a Symmetric Matrix
10.2 Gram-Schmidt Orthogonalization
10.3 The Diagonalization of a Symmetric Matrix

11 The Matrix Exponential
11.1 Overview
11.2 The Matrix Exponential as a Limit of Powers
11.3 The Matrix Exponential as a Sum of Powers
11.4 The Matrix Exponential via the Laplace Transform
11.5 The Matrix Exponential via Eigenvalues and Eigenvectors
11.6 The Mass-Spring-Damper System

12 Singular Value Decomposition
12.1 The Singular Value Decomposition

Glossary
Bibliography
Index
Attributions


Chapter 1

Preface

1.1 Preface to Matrix Analysis

Figure 1.1: Matrix Analysis

This content is available online at <http://cnx.org/content/m10144/2.8/>.


Bellman has called matrix theory 'the arithmetic of higher mathematics.' Under the influence of Bellman and Kalman, engineers and scientists have found in matrix theory a language for representing and analyzing multivariable systems. Our goal in these notes is to demonstrate the role of matrices in the modeling of physical systems and the power of matrix theory in the analysis and synthesis of such systems.

Beginning with modeling of structures in static equilibrium we focus on the linear nature of the relationship between relevant state variables and express these relationships as simple matrix-vector products. For example, the voltage drops across the resistors in a network are linear combinations of the potentials at each end of each resistor. Similarly, the current through each resistor is assumed to be a linear function of the voltage drop across it. And, finally, at equilibrium, a linear combination (in minus out) of the currents must vanish at every node in the network. In short, the vector of currents is a linear transformation of the vector of voltage drops, which is itself a linear transformation of the vector of potentials. A linear transformation of n numbers into m numbers is accomplished by multiplying the vector of n numbers by an m-by-n matrix. Once we have learned to spot the ubiquitous matrix-vector product we move on to the analysis of the resulting linear systems of equations. We accomplish this by stretching your knowledge of three-dimensional space. That is, we ask what does it mean that the m-by-n matrix X transforms R^n (real n-dimensional space) into R^m? We shall visualize this transformation by splitting both R^n and R^m each into two smaller spaces between which the given X behaves in very manageable ways. An understanding of this splitting of the ambient spaces into the so-called four fundamental subspaces of X permits one to answer virtually every question that may arise in the study of structures in static equilibrium.

In the second half of the notes we argue that matrix methods are equally effective in the modeling and analysis of dynamical systems. Although our modeling methodology adapts easily to dynamical problems we shall see, with respect to analysis, that rather than splitting the ambient spaces we shall be better served by splitting X itself. The process is analogous to decomposing a complicated signal into a sum of simple harmonics oscillating at the natural frequencies of the structure under investigation. For we shall see that (most) matrices may be written as weighted sums of matrices of very special type. The weights are eigenvalues, or natural frequencies, of the matrix while the component matrices are projections composed from simple products of eigenvectors. Our approach to the eigendecomposition of matrices requires a brief exposure to the beautiful field of Complex Variables. This foray has the added benefit of permitting us a more careful study of the Laplace Transform, another fundamental tool in the study of dynamical systems.

Steve Cox


Chapter 2

Matrix Methods for Electrical Systems

2.1 Nerve Fibers and the Strang Quartet

2.1.1 Nerve Fibers and the Strang Quartet

We wish to confirm, by example, the prefatory claim that matrix algebra is a useful means of organizing (stating and solving) multivariable problems. In our first such example we investigate the response of a nerve fiber to a constant current stimulus. Ideally, a nerve fiber is simply a cylinder of radius a and length l that conducts electricity both along its length and across its lateral membrane. Though we shall, in subsequent chapters, delve more deeply into the biophysics, here, in our first outing, we shall stick to its purely resistive properties. The latter are expressed via two quantities:

1. ρi, the resistivity, in Ω·cm, of the cytoplasm that fills the cell, and
2. ρm, the resistivity, in Ω·cm², of the cell's lateral membrane.

This content is available online at <http://cnx.org/content/m10145/2.7/>.


Figure 2.1: A 3 compartment model of a nerve cell

Although current surely varies from point to point along the fiber, it is hoped that these variations are regular enough to be captured by a multicompartment model. By that we mean that we choose a number N and divide the fiber into N segments, each of length l/N. Denoting a segment's

Definition 1: axial resistance

$$R_i = \frac{\rho_i\, l/N}{\pi a^2}$$

and

Definition 2: membrane resistance

$$R_m = \frac{\rho_m}{2\pi a\, l/N}$$

we arrive at the lumped circuit model of Figure 2.1 (A 3 compartment model of a nerve cell). For a fiber in culture we may assume a constant extracellular potential, e.g., zero. We accomplish this by connecting and grounding the extracellular nodes; see Figure 2.2 (A rudimentary circuit model).


Figure 2.2: A rudimentary circuit model

Figure 2.2 (A rudimentary circuit model) also incorporates the exogenous disturbance, a current stimulus between ground and the left end of the fiber. Our immediate goal is to compute the resulting currents through each resistor and the potential at each of the nodes. Our long-range goal is to provide a modeling methodology that can be used across the engineering and science disciplines. As an aid to computing the desired quantities we give them names. With respect to Figure 2.3 (The fully dressed circuit model), we label the vector of potentials

x = (x1, x2, x3, x4)

and the vector of currents

y = (y1, y2, y3, y4, y5, y6).

We have also (arbitrarily) assigned directions to the currents as a graphical aid in the consistent application of the basic circuit laws.


Figure 2.3: The fully dressed circuit model

We incorporate the circuit laws in a modeling methodology that takes the form of a Strang Quartet [1]:

• (S1) Express the voltage drops via e = −(Ax).
• (S2) Express Ohm's Law via y = Ge.
• (S3) Express Kirchhoff's Current Law via A^T y = −f.
• (S4) Combine the above into A^T G A x = f.

The A in (S1) is the node-edge adjacency matrix; it encodes the network's connectivity. The G in (S2) is the diagonal matrix of edge conductances; it encodes the physics of the network. The f in (S3) is the vector of current sources; it encodes the network's stimuli. The culminating A^T GA in (S4) is the symmetric matrix whose inverse, when applied to f, reveals the vector of potentials, x. In order to make these ideas our own we must work many, many examples.

2.1.2 Example

2.1.2.1 Strang Quartet, Step 1

With respect to the circuit of Figure 2.3 (The fully dressed circuit model), in accordance with step (S1), we express the six potential differences (always tail minus head):

e1 = x1 − x2
e2 = x2
e3 = x2 − x3
e4 = x3
e5 = x3 − x4
e6 = x4

Such long, tedious lists cry out for matrix representation, to wit e = −(Ax) where

$$A = \begin{pmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{pmatrix}$$

2.1.2.2 Strang Quartet, Step 2

Step (S2), Ohm's Law, states:

Law 2.1: Ohm's Law
The current along an edge is equal to the potential drop across the edge divided by the resistance of the edge.

In our case,

$$y_j = \frac{e_j}{R_i},\ j = 1, 3, 5 \quad\text{and}\quad y_j = \frac{e_j}{R_m},\ j = 2, 4, 6$$

or, in matrix notation, y = Ge where

$$G = \begin{pmatrix} 1/R_i & 0 & 0 & 0 & 0 & 0 \\ 0 & 1/R_m & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/R_i & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/R_m & 0 & 0 \\ 0 & 0 & 0 & 0 & 1/R_i & 0 \\ 0 & 0 & 0 & 0 & 0 & 1/R_m \end{pmatrix}$$

2.1.2.3 Strang Quartet, Step 3

Step (S3), Kirchhoff's Current Law ("Kirchhoff's Laws", Section Kirchhoff's Current Law, <http://cnx.org/content/m0015/latest/#current>), states:

Law 2.2: Kirchhoff's Current Law
The sum of the currents into each node must be zero.

In our case,

i0 − y1 = 0
y1 − y2 − y3 = 0
y3 − y4 − y5 = 0
y5 − y6 = 0

or, in matrix terms, By = −f where

$$B = \begin{pmatrix} -1 & 0 & 0 & 0 & 0 & 0 \\ 1 & -1 & -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 & -1 \end{pmatrix} \quad\text{and}\quad f = \begin{pmatrix} i_0 \\ 0 \\ 0 \\ 0 \end{pmatrix}$$

2.1.2.4 Strang Quartet, Step 4

Looking back at A:

$$A = \begin{pmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{pmatrix}$$

we recognize in B the transpose of A. Calling it such, we recall our main steps:

• (S1) e = −(Ax),
• (S2) y = Ge, and
• (S3) A^T y = −f.

On substitution of the first two into the third we arrive, in accordance with (S4), at

A^T G A x = f.   (2.1)

This is a system of four equations for the 4 unknown potentials, x1 through x4. As you know, the system (2.1) may have either one, zero, or infinitely many solutions, depending on f and A^T GA. We shall devote Chapters 3 and 4 to an unraveling of the previous sentence. For now, we cross our fingers and 'solve' by invoking the Matlab program fib1.m, available at <http://www.caam.rice.edu/~caam335/cox/lectures/fib1.m>.
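Since fib1.m itself is not reproduced in these notes, here is a minimal sketch of what such a program must do for the circuit of Figure 2.3; the resistor and stimulus values below are illustrative placeholders, not those of fib1.m.

% A minimal sketch of the Strang Quartet for the circuit of Figure 2.3.
% Ri, Rm and i0 are placeholder values, not those used in fib1.m.
Ri = 1; Rm = 2; i0 = 1;
A = [-1  1  0  0;            % the node-edge adjacency matrix of (S1)
      0 -1  0  0;
      0 -1  1  0;
      0  0 -1  0;
      0  0 -1  1;
      0  0  0 -1];
G = diag([1/Ri 1/Rm 1/Ri 1/Rm 1/Ri 1/Rm]);   % edge conductances, (S2)
f = [i0; 0; 0; 0];           % the current source enters at node 1, (S3)
x = (A'*G*A) \ f             % (S4): the four potentials
e = -A*x;  y = G*e;          % the voltage drops and currents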

Figure 2.4: Results of a 64 compartment simulation

Figure 2.5: Results of a 64 compartment simulation, panels (a) and (b)

This program is a bit more ambitious than the above in that it allows us to specify the number of compartments and that rather than just spewing the x and y values it plots them as a function of distance along the fiber. We note that, as expected, everything tapers off with distance from the source and that the axial current is significantly greater than the membrane, or leakage, current.

2.1.3 Example

We have seen in the previous example (Section 2.1.2: Example) how a current source may produce a potential difference across a cell's membrane. We note that, even in the absence of electrical stimuli, there is always a difference in potential between the inside and outside of a living cell. In fact, this difference is the biologist's definition of 'living.' Life is maintained by the fact that the cell's interior is rich in potassium ions, K+, and poor in sodium ions, Na+, while in the exterior medium it is just the opposite. These concentration differences beget potential differences under the guise of the Nernst potentials:

Definition 3: Nernst potentials

$$E_{Na} = \frac{RT}{F} \log\left(\frac{[Na]_o}{[Na]_i}\right) \quad\text{and}\quad E_K = \frac{RT}{F} \log\left(\frac{[K]_o}{[K]_i}\right)$$

where R is the gas constant, T is temperature, and F is the Faraday constant.
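To get a feel for the sizes involved, one may evaluate the Nernst potentials numerically; in the sketch below the ionic concentrations are illustrative assumptions and are not taken from these notes.

% Evaluating the Nernst potentials. The concentrations (in mM) are
% assumed, illustrative values, not data from the text.
R = 8.314;                 % gas constant, J/(mol K)
T = 310;                   % temperature, K
F = 96485;                 % Faraday constant, C/mol
Nao = 145; Nai = 12;       % assumed [Na]_o and [Na]_i
Ko  = 4;   Ki  = 140;      % assumed [K]_o and [K]_i
ENa = (R*T/F)*log(Nao/Nai) % roughly +0.067 V
EK  = (R*T/F)*log(Ko/Ki)   % roughly -0.095 V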

Associated with these potentials are membrane resistances ρm,Na and ρm,K that together produce the ρm above (item 2 in the list of resistive quantities) via

$$\frac{1}{\rho_m} = \frac{1}{\rho_{m,Na}} + \frac{1}{\rho_{m,K}}$$

and produce the aforementioned rest potential

$$E_m = \rho_m \left(\frac{E_{Na}}{\rho_{m,Na}} + \frac{E_K}{\rho_{m,K}}\right)$$

With respect to our old circuit model (Figure 2.3: The fully dressed circuit model), each compartment now sports a battery in series with its membrane resistance, as shown in Figure 2.6 (Circuit model with resting potentials).

Figure 2.6: Circuit model with resting potentials

Revisiting steps (S1-4) we note that in (S1) the even numbered voltage drops are now

e2 = x2 − Em
e4 = x3 − Em
e6 = x4 − Em

We accommodate such things by generalizing (S1) to:

• (S1') Express the voltage drops as e = b − Ax where b is the vector of batteries.

No changes are necessary for (S2) and (S3). The final step now reads:

• (S4') Combine (S1'), (S2), and (S3) to produce A^T G A x = A^T G b + f.

Returning to Figure 2.6 (Circuit model with resting potentials), we note that

$$b = -E_m \begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \\ 0 \\ 1 \end{pmatrix} \quad\text{and}\quad A^T G b = \frac{E_m}{R_m} \begin{pmatrix} 0 \\ 1 \\ 1 \\ 1 \end{pmatrix}$$


This requires only minor changes to our old code. The new program is called fib2.m, available at <http://www.caam.rice.edu/~caam335/cox/lectures/fib2.m>, and results of its use are indicated in the next two figures.

Figure 2.7: Results of a 64 compartment simulation with batteries
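The 'minor changes' amount to one new vector and one new right-hand side. A sketch, continuing the placeholder A, G and f of the earlier sketch, with an assumed rest potential Em:

% Sketch of the battery modification, (S1') and (S4').
% Em is an assumed value; A, G and f are as in the previous sketch.
Em = -0.07;                      % assumed rest potential
b  = -Em*[0; 1; 0; 1; 0; 1];     % batteries on the membrane edges only
x  = (A'*G*A) \ (A'*G*b + f)     % (S4')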

Figure 2.8: Results of a 64 compartment simulation with batteries, panels (a) and (b)

2.2 CAAM 335 Chapter 1 Exercises

This content is available online at <http://cnx.org/content/m10299/2.8/>.

Exercise 2.1

2.2.1 Question 1

In order to refresh your matrix-vector multiply skills please calculate, by hand, the product A^T GA in the 3 compartment case and write out the 4 equations in the vector equation we arrived at in step (S4), (2.1): A^T G A x = f.

Exercise 2.2

2.2.2 Question 2

We began our discussion with the 'hope' that a multicompartment model could indeed adequately capture the fiber's true potential and current profiles. In order to check this one should run fib1.m, <http://www.caam.rice.edu/~caam335/cox/lectures/fib1.m>, with increasing values of N until one can no longer detect changes in the computed potentials.

• (a) Please run fib1.m with N = 8, 16, 32, and 64. Plot all of the potentials on the same graph (use hold), using different line types for each. (You may wish to alter fib1.m so that it accepts N as an argument.)


Let us now interpret this convergence. The main observation is that the difference equation, (2.2), approaches a differential equation. We can see this by noting that

dz ≡ l/N

acts as a spatial 'step' size and that xk, the potential at (k − 1) dz, is approximately the value of the true potential at (k − 1) dz. In a slight abuse of notation, we denote the latter

x((k − 1) dz)

Applying these conventions to (2.2) and recalling the definitions of Ri and Rm we see (2.2) become

$$\frac{\pi a^2}{\rho_i} \frac{-x(0) + 2x(dz) - x(2\,dz)}{dz} + \frac{2\pi a\, dz}{\rho_m}\, x(dz) = 0$$

or, after multiplying through by ρm/(πa dz),

$$\frac{a \rho_m}{\rho_i} \frac{-x(0) + 2x(dz) - x(2\,dz)}{dz^2} + 2 x(dz) = 0.$$

We note that a similar equation holds at each node (save the ends) and that as N → ∞, and therefore dz → 0, we arrive at

$$\frac{d^2}{dz^2} x(z) - \frac{2\rho_i}{a \rho_m}\, x(z) = 0 \qquad (2.3)$$

• (b) With µ ≡ 2ρi/(aρm), show that

$$x(z) = \alpha \sinh(\sqrt{\mu}\, z) + \beta \cosh(\sqrt{\mu}\, z) \qquad (2.4)$$

satisfies (2.3) regardless of α and β.

We shall determine α and β by paying attention to the ends of the fiber. At the near end we find

$$\frac{\pi a^2}{\rho_i} \frac{x(0) - x(dz)}{dz} = i_0$$

which, as dz → 0, becomes

$$\frac{d}{dz} x(0) = -\frac{\rho_i i_0}{\pi a^2} \qquad (2.5)$$

At the far end, we interpret the condition that no axial current may leave the last node to mean

$$\frac{d}{dz} x(l) = 0 \qquad (2.6)$$

• (c) Substitute (2.4) into (2.5) and (2.6), solve for α and β, and write out the final x(z).
• (d) Substitute into x the l, a, ρi, and ρm values used in fib1.m, plot the resulting function (using, e.g., ezplot), and compare this to the plot achieved in part (a).
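For parts (c) and (d), conditions (2.5) and (2.6) determine α and β, after which the result may be plotted directly. The sketch below carries this out with placeholder parameter values; the actual l, a, ρi, and ρm must be read out of fib1.m.

% Sketch of parts (c)-(d); all parameter values are placeholders.
a = 1e-4; l = 0.1;                  % assumed radius and length
rhoi = 0.3; rhom = 2.5; i0 = 1e-5;  % assumed resistivities and stimulus
mu = 2*rhoi/(a*rhom);
alpha = -rhoi*i0/(pi*a^2*sqrt(mu));                % from (2.5)
beta  = -alpha*cosh(sqrt(mu)*l)/sinh(sqrt(mu)*l);  % from (2.6)
syms z
ezplot(alpha*sinh(sqrt(mu)*z) + beta*cosh(sqrt(mu)*z), [0 l])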


Chapter 3

Matrix Methods for Mechanical Systems

3.1 A Uniaxial Truss

This content is available online at <http://cnx.org/content/m10146/2.6/>.

3.1.1 Introduction

We now investigate the mechanical prospection of tissue, an application extending techniques developed in the electrical analysis of a nerve cell (Section 2.1). In this application, one applies traction to the edges of a square sample of planar tissue and seeks to identify, from measurement of the resulting deformation, regions of increased 'hardness' or 'stiffness.' For a sketch of the associated apparatus, visit the Biaxial Test site <http://health.upenn.edu/orl/research/bioengineering/proj2b.jpg>.

3.1.2 A Uniaxial Truss

Figure 3.1: A uniaxial truss

As a precursor to the biaxial problem (Section 3.2) let us first consider the uniaxial case. We connect 3 masses with four springs between two immobile walls, apply forces at the masses, and measure the associated displacement. More precisely, we suppose that a horizontal force, fj, is applied to each mj, and produces a displacement xj, with the sign convention that rightward means positive. The bars at the ends of the figure indicate rigid supports incapable of movement. The kj denote the respective spring stiffnesses. The analog of potential difference (see the electrical model (Section 2.1.2.1: Strang Quartet, Step 1)) is here elongation. If ej denotes the elongation of the jth spring then naturally,

e1 = x1

e2 = x2 − x1

e3 = x3 − x2

e4 = −x3

or, in matrix terms, e = Ax, where

$$A = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & -1 \end{pmatrix}$$

We note that ej is positive when the spring is stretched and negative when compressed. This observation, Hooke's Law, is the analog of Ohm's Law in the electrical model (Law 2.1, Ohm's Law).

Definition 4: Hooke's Law
1. The restoring force in a spring is proportional to its elongation. We call the constant of proportionality the stiffness, kj, of the spring, and denote the restoring force by yj.
2. The mathematical expression of this statement is: yj = kj ej, or,
3. in matrix terms: y = Ke where

$$K = \begin{pmatrix} k_1 & 0 & 0 & 0 \\ 0 & k_2 & 0 & 0 \\ 0 & 0 & k_3 & 0 \\ 0 & 0 & 0 & k_4 \end{pmatrix}$$

The analog of Kirchhoff's Current Law (Section 2.1.2.3: Strang Quartet, Step 3) is here typically called 'force balance.'

Definition 5: force balance
1. Equilibrium is synonymous with the fact that the net force acting on each mass must vanish.
2. In symbols,

y1 − y2 − f1 = 0
y2 − y3 − f2 = 0
y3 − y4 − f3 = 0

3. or, in matrix terms, By = f where

$$f = \begin{pmatrix} f_1 \\ f_2 \\ f_3 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \end{pmatrix}$$

As in the electrical example (Section 2.1.2.4: Strang Quartet, Step 4) we recognize in B the transpose of A. Gathering our three important steps:

e = Ax (3.1)

y = Ke (3.2)

A^T y = f (3.3)

we arrive, via direct substitution, at an equation for x. Namely,

A^T y = f ⇒ A^T Ke = f ⇒ A^T KAx = f

Assembling A^T KAx we arrive at the final system:

$$\begin{pmatrix} k_1 + k_2 & -k_2 & 0 \\ -k_2 & k_2 + k_3 & -k_3 \\ 0 & -k_3 & k_3 + k_4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 \\ f_3 \end{pmatrix} \qquad (3.4)$$

3.1.3 Gaussian Elimination and the Uniaxial Truss

Although Matlab solves systems like the one above with ease, our aim here is to develop a deeper understanding of Gaussian Elimination and so we proceed by hand. This aim is motivated by a number of important considerations. First, not all linear systems have solutions, and even those that do do not necessarily possess unique solutions. A careful look at Gaussian Elimination will provide the general framework for not only classifying those systems that possess unique solutions but also for providing detailed diagnoses of those defective systems that lack solutions or possess too many.

In Gaussian Elimination one first uses linear combinations of preceding rows to eliminate nonzeros below the main diagonal and then solves the resulting triangular system via back-substitution. To firm up our understanding let us take up the case where each kj = 1 and so (3.4) takes the form

$$\begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 \\ f_3 \end{pmatrix} \qquad (3.5)$$

We eliminate the (2,1) (row 2, column 1) element by implementing

new row 2 = old row 2 + (1/2) row 1,

bringing

$$\begin{pmatrix} 2 & -1 & 0 \\ 0 & 3/2 & -1 \\ 0 & -1 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 + f_1/2 \\ f_3 \end{pmatrix}$$

We eliminate the current (3,2) element by implementing

new row 3 = old row 3 + (2/3) row 2,

bringing the upper-triangular system

$$\begin{pmatrix} 2 & -1 & 0 \\ 0 & 3/2 & -1 \\ 0 & 0 & 4/3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 + f_1/2 \\ f_3 + 2f_2/3 + f_1/3 \end{pmatrix}$$

One now simply reads off

$$x_3 = \frac{f_1 + 2f_2 + 3f_3}{4}$$

This in turn permits the solution of the second equation

$$x_2 = \frac{2\left(x_3 + f_2 + f_1/2\right)}{3} = \frac{f_1 + 2f_2 + f_3}{2}$$

and, in turn,

$$x_1 = \frac{x_2 + f_1}{2} = \frac{3f_1 + 2f_2 + f_3}{4}$$

One must say that Gaussian Elimination has succeeded here. For, regardless of the actual elements of f, we have produced an x for which A^T KAx = f.
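A quick numerical check of this hand computation, as a sketch:

% Verifying the back-substitution formulas for an arbitrary f.
S = [2 -1 0; -1 2 -1; 0 -1 2];         % A'*K*A with each k_j = 1
f = [1; 2; 3];
x = S \ f;                             % MATLAB's Gaussian Elimination
x_hand = [3*f(1) + 2*f(2) +   f(3);
          2*f(1) + 4*f(2) + 2*f(3);
            f(1) + 2*f(2) + 3*f(3)]/4; % the formulas derived above
norm(x - x_hand)                       % essentially zero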

3.1.4 Alternate Paths to a Solution

Although Gaussian Elimination remains the most efficient means for solving systems of the form Sx = f it pays, at times, to consider alternate means. At the algebraic level, suppose that there exists a matrix that 'undoes' multiplication by S in the sense that multiplication by 2^{−1} undoes multiplication by 2. The matrix analog of 2^{−1} 2 = 1 is

S^{−1} S = I

where I denotes the identity matrix (all zeros except the ones on the diagonal). We refer to S^{−1} as:

Definition 6: Inverse of S
Also dubbed "S inverse" for short, the value of this matrix stems from watching what happens when it is applied to each side of Sx = f. Namely,

Sx = f ⇒ S^{−1}Sx = S^{−1}f ⇒ Ix = S^{−1}f ⇒ x = S^{−1}f

Hence, to solve Sx = f for x it suffices to multiply f by the inverse of S.

3.1.5 Gauss-Jordan Method: Computing the Inverse of a Matrix

Let us now consider how one goes about computing S^{−1}. In general this takes a little more than twice the work of Gaussian Elimination, for we interpret

S^{−1} S = I

as n (the size of S) applications of Gaussian elimination, with f running through the n columns of the identity matrix. The bundling of these n applications into one is known as the Gauss-Jordan method. Let us demonstrate it on the S appearing in (3.5). We first augment S with I:

$$\left(\begin{array}{rrr|rrr} 2 & -1 & 0 & 1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array}\right)$$


We then eliminate down, being careful to address each of the three f vectors. This produces

$$\left(\begin{array}{rrr|rrr} 2 & -1 & 0 & 1 & 0 & 0 \\ 0 & 3/2 & -1 & 1/2 & 1 & 0 \\ 0 & 0 & 4/3 & 1/3 & 2/3 & 1 \end{array}\right)$$

Now, rather than simple back-substitution we instead eliminate up. Eliminating first the (2,3) element we find

$$\left(\begin{array}{rrr|rrr} 2 & -1 & 0 & 1 & 0 & 0 \\ 0 & 3/2 & 0 & 3/4 & 3/2 & 3/4 \\ 0 & 0 & 4/3 & 1/3 & 2/3 & 1 \end{array}\right)$$

Now, eliminating the (1,2) element we achieve

$$\left(\begin{array}{rrr|rrr} 2 & 0 & 0 & 3/2 & 1 & 1/2 \\ 0 & 3/2 & 0 & 3/4 & 3/2 & 3/4 \\ 0 & 0 & 4/3 & 1/3 & 2/3 & 1 \end{array}\right)$$

In the final step we scale each row in order that the matrix on the left takes on the form of the identity. This requires that we multiply row 1 by 1/2, row 2 by 2/3, and row 3 by 3/4, with the result

$$\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 3/4 & 1/2 & 1/4 \\ 0 & 1 & 0 & 1/2 & 1 & 1/2 \\ 0 & 0 & 1 & 1/4 & 1/2 & 3/4 \end{array}\right)$$

Now in this transformation of S into I we have, ipso facto, transformed I to S^{−1}; i.e., the matrix that appears on the right after applying the method of Gauss-Jordan is the inverse of the matrix that began on the left. In this case,

$$S^{-1} = \begin{pmatrix} 3/4 & 1/2 & 1/4 \\ 1/2 & 1 & 1/2 \\ 1/4 & 1/2 & 3/4 \end{pmatrix}$$

One should check that S^{−1}f indeed coincides with the x computed above.
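In MATLAB the entire Gauss-Jordan procedure may be reproduced in one line via rref, as in this sketch:

% Gauss-Jordan via rref: row reduce [S I] and read off the right half.
S = [2 -1 0; -1 2 -1; 0 -1 2];
R = rref([S eye(3)]);
Sinv = R(:, 4:6)    % equals [3/4 1/2 1/4; 1/2 1 1/2; 1/4 1/2 3/4]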

3.1.6 Invertibility

Not all matrices possess inverses:

Definition 7: singular matrix
A matrix that does not have an inverse.
Example: A simple example is

$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$$

Alternately, there are

Definition 8: Invertible, or Nonsingular Matrices
Matrices that do have an inverse.
Example: The matrix S that we just studied is invertible. Another simple example is

$$\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}$$

3.2 A Small Planar Truss

This content is available online at <http://cnx.org/content/m10147/2.6/>.

We return once again to the biaxial testing problem, introduced in the uniaxial truss module (Section 3.1.1: Introduction). It turns out that singular matrices are typical in the biaxial testing problem. As our initial step into the world of such planar structures let us consider the simple truss in the figure of a simple swing (Figure 3.2: A simple swing).

Figure 3.2: A simple swing

We denote by x1 and x2 the respective horizontal and vertical displacements of m1 (positive is right and down). Similarly, f1 and f2 will denote the associated components of force. The corresponding displacements and forces at m2 will be denoted by x3, x4 and f3, f4. In computing the elongations of the three springs we shall make reference to their unstretched lengths, L1, L2, and L3.

Now, if spring 1 connects (0, −L1) to (0, 0) when at rest and (0, −L1) to (x1, x2) when stretched, then its elongation is simply

$$e_1 = \sqrt{x_1^2 + (x_2 + L_1)^2} - L_1 \qquad (3.6)$$

The price one pays for moving to higher dimensions is that lengths are now expressed in terms of square roots. The upshot is that the elongations are not linear combinations of the end displacements as they were in the uniaxial case (Section 3.1.2: A Uniaxial Truss). If we presume, however, that the loads and stiffnesses are matched in the sense that the displacements are small compared with the original lengths, then we may effectively ignore the nonlinear contribution in (3.6). In order to make this precise we need only recall the

Rule 3.1: Taylor development of the square root of (1 + t)
The Taylor development of √(1 + t) about t = 0 is

$$\sqrt{1 + t} = 1 + \frac{t}{2} + O(t^2)$$

where the latter term signifies the remainder.

With regard to e1 this allows

$$e_1 = \sqrt{x_1^2 + x_2^2 + 2x_2 L_1 + L_1^2} - L_1 = L_1\sqrt{1 + \frac{x_1^2 + x_2^2}{L_1^2} + \frac{2x_2}{L_1}} - L_1 \qquad (3.7)$$

$$e_1 = L_1 + \frac{x_1^2 + x_2^2}{2L_1} + x_2 + L_1 O\!\left(\left(\frac{x_1^2 + x_2^2}{L_1^2} + \frac{2x_2}{L_1}\right)^{\!2}\right) - L_1 = x_2 + \frac{x_1^2 + x_2^2}{2L_1} + L_1 O\!\left(\left(\frac{x_1^2 + x_2^2}{L_1^2} + \frac{2x_2}{L_1}\right)^{\!2}\right) \qquad (3.8)$$

If we now assume that

$$\frac{x_1^2 + x_2^2}{2L_1} \text{ is small compared to } x_2 \qquad (3.9)$$

then, as the O term is even smaller, we may neglect all but the first terms in the above and so arrive at

e1 = x2

To take a concrete example, if L1 is one meter and x1 and x2 are each one centimeter, then x2 is one hundred times (x1² + x2²)/(2L1).

With regard to the second spring, arguing as above, its elongation is (approximately) its stretch along its initial direction. As its initial direction is horizontal, its elongation is just the difference of the respective horizontal end displacements, namely,

e2 = x3 − x1

Finally, the elongation of the third spring is (approximately) the difference of its respective vertical end displacements, i.e.,

e3 = x4

We encode these three elongations in e = Ax where

$$A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$


Hooke's law (Definition 4: Hooke's Law) is an elemental piece of physics and is not perturbed by our leap from uniaxial to biaxial structures. The upshot is that the restoring force in each spring is still proportional to its elongation, i.e., yj = kj ej where kj is the stiffness of the jth spring. In matrix terms, y = Ke where

$$K = \begin{pmatrix} k_1 & 0 & 0 \\ 0 & k_2 & 0 \\ 0 & 0 & k_3 \end{pmatrix}$$

Balancing horizontal and vertical forces at m1 brings

−y2 − f1 = 0 and y1 − f2 = 0

while balancing horizontal and vertical forces at m2 brings

y2 − f3 = 0 and y3 − f4 = 0

We assemble these into By = f where

$$B = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

and recognize, as expected, that B is nothing more than A^T. Putting the pieces together, we find that x must satisfy Sx = f where

$$S = A^T KA = \begin{pmatrix} k_2 & 0 & -k_2 & 0 \\ 0 & k_1 & 0 & 0 \\ -k_2 & 0 & k_2 & 0 \\ 0 & 0 & 0 & k_3 \end{pmatrix}$$

Applying one step of Gaussian Elimination (Section 3.1.3: Gaussian Elimination and the Uniaxial Truss) brings

$$\begin{pmatrix} k_2 & 0 & -k_2 & 0 \\ 0 & k_1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & k_3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 \\ f_1 + f_3 \\ f_4 \end{pmatrix}$$

and back substitution delivers

x4 = f4/k3
0 = f1 + f3
x2 = f2/k1
x1 − x3 = f1/k2

The second of these is remarkable in that it contains no components of x. Instead, it provides a condition on f. In mechanical terms, it states that there can be no equilibrium unless the horizontal forces on the two masses are equal and opposite. Of course one could have observed this directly from the layout of the truss. In modern, three-dimensional structures with thousands of members meant to shelter or convey humans one should not, however, be satisfied with the 'visual' integrity of the structure. In particular, one desires a detailed description of all loads that can, and, especially, all loads that can not, be equilibrated by the proposed truss. In algebraic terms, given a matrix S, one desires a characterization of

1. all those f for which Sx = f possesses a solution
2. all those f for which Sx = f does not possess a solution

We will eventually provide such a characterization in our later discussion of the column space (Section 4.1) of a matrix.

Supposing now that f1 + f3 = 0, we note that although the system above is consistent it still fails to uniquely determine the four components of x. In particular, it specifies only the difference between x1 and x3. As a result both

$$x = \begin{pmatrix} f_1/k_2 \\ f_2/k_1 \\ 0 \\ f_4/k_3 \end{pmatrix} \quad\text{and}\quad x = \begin{pmatrix} 0 \\ f_2/k_1 \\ -f_1/k_2 \\ f_4/k_3 \end{pmatrix}$$

satisfy Sx = f. In fact, one may add to either an arbitrary multiple of

$$z \equiv \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix} \qquad (3.10)$$

and still have a solution of Sx = f. Searching for the source of this lack of uniqueness we observe some redundancies in the columns of S. In particular, the third is simply the opposite of the first. As S is simply A^T KA we recognize that the original fault lies with A, where again, the first and third columns are opposites. These redundancies are encoded in z in the sense that

Az = 0

Interpreting this in mechanical terms, we view z as a displacement and Az as the resulting elongation. In Az = 0 we see a nonzero displacement producing zero elongation. One says in this case that the truss deforms without doing any work and speaks of z as an unstable mode. Again, this mode could have been observed by a simple glance at Figure 3.2 (A simple swing). Such is not the case for more complex structures and so the engineer seeks a systematic means by which all unstable modes may be identified. We shall see later that all these modes are captured by the null space (Section 4.2) of A.

From

Sz = 0

one easily deduces that S is singular (Definition 7: singular matrix). More precisely, if S^{−1} were to exist then S^{−1}Sz would equal S^{−1}·0, i.e., z = 0, contrary to (3.10). As a result, Matlab will fail to solve Sx = f even when f is a force that the truss can equilibrate. One way out is to use the pseudo-inverse, as we shall see in the General Planar Truss (Section 3.3) module.
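A sketch of this failure and of the pseudo-inverse remedy, with unit stiffnesses and an illustrative load satisfying f1 + f3 = 0:

% The swing of Figure 3.2 with k1 = k2 = k3 = 1 and a balanced load.
S = [1 0 -1 0; 0 1 0 0; -1 0 1 0; 0 0 0 1];   % A'*K*A from above
f = [1; 2; -1; 3];         % note f(1) + f(3) = 0
x = pinv(S)*f              % one solution; S\f warns that S is singular
% x + t*[1; 0; 1; 0] also solves Sx = f for every t, cf. (3.10)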


3.3 The General Planar Truss

This content is available online at <http://cnx.org/content/m10148/2.9/>.

Let us now consider something that resembles the mechanical prospection problem introduced in the introduction to matrix methods for mechanical systems (Section 3.1.1: Introduction). In the figure below we offer a crude mechanical model of a planar tissue, say, e.g., an excised sample of the wall of a vein.

Figure 3.3: A crude tissue model

Elastic fibers, numbered 1 through 20, meet at nodes, numbered 1 through 9. We limit our observation to the motion of the nodes by denoting the horizontal and vertical displacements of node j by x2j−1 (horizontal) and x2j (vertical), respectively. Retaining the convention that down and right are positive we note that the elongation of fiber 1 is

e1 = x2 − x8

while that of fiber 3 is

e3 = x3 − x1.

As fibers 2 and 4 are neither vertical nor horizontal, their elongations, in terms of nodal displacements, are not so easy to read off. This is more a nuisance than an obstacle, however, for noting our discussion of elongation in the small planar truss module, the elongation is approximately just the stretch along its undeformed axis. With respect to fiber 2, as it makes the angle −(π/4) with respect to the positive horizontal axis, we find

$$e_2 = (x_9 - x_1)\cos\left(-\frac{\pi}{4}\right) + (x_{10} - x_2)\sin\left(-\frac{\pi}{4}\right) = \frac{x_9 - x_1 + x_2 - x_{10}}{\sqrt{2}}.$$

Similarly, as fiber 4 makes the angle −(3π/4) with respect to the positive horizontal axis, its elongation is

$$e_4 = (x_7 - x_3)\cos\left(-\frac{3\pi}{4}\right) + (x_8 - x_4)\sin\left(-\frac{3\pi}{4}\right) = \frac{x_3 - x_7 + x_4 - x_8}{\sqrt{2}}.$$



These are both direct applications of the general formula

$$e_j = (x_{2n-1} - x_{2m-1})\cos(\theta_j) + (x_{2n} - x_{2m})\sin(\theta_j) \qquad (3.11)$$

for fiber j, as depicted in Figure 3.4 below, connecting node m to node n and making the angle θj with the positive horizontal axis when node m is assumed to lie at the point (0,0). The reader should check that our expressions for e1 and e3 indeed conform to this general formula and that e2 and e4 agree with one's intuition. For example, visual inspection of the specimen suggests that fiber 2 can not be supposed to stretch (i.e., have positive e2) unless x9 > x1 and/or x2 > x10. Does this jibe with (3.11)?

Figure 3.4: Elongation of a generic bar, see (3.11).

Applying (3.11) to each of the remaining fibers we arrive at e = Ax where A is 20-by-18, one row for each fiber, and one column for each degree of freedom. For systems of such size with such a well defined structure one naturally hopes to automate the construction. We have done just that in the accompanying M-file, fiber.m <http://cnx.rice.edu/modules/m10148/latest/fiber.m>, and diary <http://cnx.rice.edu/modules/m10148/latest/lec2adj>. The M-file begins with a matrix of raw data that anyone with a protractor could have keyed in directly from Figure 3.3 (A crude tissue model):

data = [ % one row of data for each fiber, the

1 4 -pi/2 % first two columns are starting and ending

1 5 -pi/4 % node numbers, respectively, while the third is the

1 2 0 % angle the fiber makes with the positive horizontal axis

2 4 -3*pi/4

...and so on... ]

This data is precisely what (3.11) requires in order to know which columns of A receive the proper cos or sin. The final A matrix is displayed in the diary <http://cnx.rice.edu/modules/m10148/latest/lec2adj>.


The next two steps are now familiar. If K denotes the diagonal matrix of fiber stiffnesses and f denotes the vector of nodal forces then

y = Ke and A^T y = f

and so one must solve Sx = f where S = A^T KA. In this case there is an entire three-dimensional class of z for which Az = 0 and therefore Sz = 0. The three indicates that there are three independent unstable modes of the specimen, e.g., two translations and a rotation. As a result S is singular and x = S\f in MATLAB will get us nowhere. The way out is to recognize that S has 18 − 3 = 15 stable modes and that if we restrict S to 'act' only in these directions then it 'should' be invertible. We will begin to make these notions precise in discussions on the Fundamental Theorem of Linear Algebra (Section 4.5). For now let us note that every matrix possesses such a pseudo-inverse and that it may be computed in MATLAB via the pinv command. Supposing the fiber stiffnesses to each be one and the edge traction to be of the form

f = (−1 1 0 1 1 1 −1 0 0 0 1 0 −1 −1 0 −1 1 −1)^T,

we arrive at x via x=pinv(S)*f and offer below its graphical representation.
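The assembly that fiber.m automates follows directly from (3.11); a sketch, assuming data holds the completed fiber table begun above:

% Assembling A row by row from the fiber table via (3.11), then
% solving with the pseudo-inverse (unit stiffnesses, so K = I).
nfib = size(data, 1);                 % 20 fibers
A = zeros(nfib, 18);                  % 18 degrees of freedom
for j = 1:nfib
    m = data(j,1); n = data(j,2); th = data(j,3);
    A(j, [2*m-1, 2*m]) = [-cos(th), -sin(th)];  % node m end
    A(j, [2*n-1, 2*n]) = [ cos(th),  sin(th)];  % node n end
end
S = A'*A;                             % A'*K*A with K = eye(20)
f = [-1 1 0 1 1 1 -1 0 0 0 1 0 -1 -1 0 -1 1 -1]';  % the traction above
x = pinv(S)*f;                        % the displacements plotted below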

3.3.1 Before-After Plot

Figure 3.5: Before and after shots of the truss in Figure 3.3 (A crude tissue model). The solid (dashed) circles correspond to the nodal positions before (after) the application of the traction force, f.

3.4 CAAM 335 Chapter 2 Exercises

This content is available online at <http://cnx.org/content/m10300/2.6/>.

Exercise 3.1
With regard to the uniaxial truss figure (Figure 3.1: A uniaxial truss),

• (i) Derive the A and K matrices resulting from the removal of the fourth spring.
• (ii) Compute the inverse, by hand via Gauss-Jordan (Section 3.1.5: Gauss-Jordan Method: Computing the Inverse of a Matrix), of the resulting A^T KA with k1 = k2 = k3 = k.
• (iii) Use the result of (ii) to find the displacement corresponding to the load f = (0, 0, F)^T.

Exercise 3.2
Generalize example 3, the general planar truss (Section 3.3), to the case of 16 nodes connected by 42 fibers. Introduce one stiff (say k = 100) fiber and show how to detect it by 'properly' choosing f. Submit your well-documented M-file as well as the plots, similar to the before-after plot (Figure 3.6) in the general planar module (Section 3.3), from which you conclude the presence of a stiff fiber.

Figure 3.6: A copy of the before-after figure from the general planar module.


Chapter 4

The Fundamental Subspaces

4.1 Column Space

This content is available online at <http://cnx.org/content/m10266/2.9/>.

4.1.1 The Column Space

We begin with the simple geometric interpretation of matrix-vector multiplication. Namely, the multiplication of the n-by-1 vector x by the m-by-n matrix A produces a linear combination of the columns of A. More precisely, if aj denotes the jth column of A, then

$$Ax = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n \qquad (4.1)$$

The picture that I wish to place in your mind's eye is that Ax lies in the subspace spanned (Definition: "Span") by the columns of A. This subspace occurs so frequently that we find it useful to distinguish it with a definition.

Definition 9: Column Space
The column space of the m-by-n matrix S is simply the span of its columns, i.e., Ra(S) ≡ {Sx | x ∈ R^n}. This is a subspace (Section 4.7.2) of R^m. The notation Ra stands for range in this context.

4.1.2 Example

Let us examine the matrix:

$$A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (4.2)$$

The column space of this matrix is:

$$Ra(A) = \left\{ x_1 \begin{pmatrix} 0 \\ -1 \\ 0 \end{pmatrix} + x_2 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + x_3 \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + x_4 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \,\middle|\, x \in \mathbb{R}^4 \right\} \qquad (4.3)$$

As the third column is simply a multiple of the first, we may write:

$$Ra(A) = \left\{ x_1 \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + x_2 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + x_3 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \,\middle|\, x \in \mathbb{R}^3 \right\} \qquad (4.4)$$

As the three remaining columns are linearly independent (Definition: "Linear Independence") we may go no further. In this case, Ra(A) comprises all of R^3.

4.1.3 Method for Finding a Basis

To determine the basis for Ra(A) (where A is an arbitrary matrix) we must find a way to discard its dependent columns. In the example above, it was easy to see that columns 1 and 3 were colinear. We seek, of course, a more systematic means of uncovering these, and perhaps other less obvious, dependencies. Such dependencies are more easily discerned from the row reduced form. In the reduction of the above problem, we come very easily to the matrix

$$A_{red} = \begin{pmatrix} -1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (4.5)$$

Once we have done this, we can recognize that the pivot columns are the linearly independent columns of Ared. One now asks how this might help us distinguish the independent columns of A. For, although the rows of Ared are linear combinations of the rows of A, no such thing is true with respect to the columns. The answer is: pay attention to the indices of the pivot columns. In our example, columns 1, 2, 4 are the pivot columns of Ared and hence the first, second, and fourth columns of A, i.e.,

$$\begin{pmatrix} 0 \\ -1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \qquad (4.6)$$

comprise a basis for Ra(A). In general:

Definition 10: A Basis for the Column Space
Suppose A is m-by-n. If columns {cj | j = 1, ..., r} are the pivot columns of Ared then columns {cj | j = 1, ..., r} of A constitute a basis for Ra(A).
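In MATLAB the pivot indices come for free as the second output of rref; a sketch, on the example above:

% A basis for Ra(A) via the pivot columns of Ared.
A = [0 1 0 0; -1 0 1 0; 0 0 0 1];
[Ared, pivots] = rref(A);   % pivots = [1 2 4]
basis = A(:, pivots)        % the first, second, and fourth columns of A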

4.2 Null Space

This content is available online at <http://cnx.org/content/m10293/2.9/>.

4.2.1 Null Space

Definition 11: Null Space
The null space of an m-by-n matrix A is the collection of those vectors in R^n that A maps to the zero vector in R^m. More precisely,

$$N(A) = \{ x \in \mathbb{R}^n \mid Ax = 0 \}$$

4.2.2 Null Space Example

As an example, we examine the matrix A:

$$A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (4.7)$$

It is fairly easy to see that the null space of this matrix is:

$$N(A) = \left\{ t \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix} \,\middle|\, t \in \mathbb{R} \right\} \qquad (4.8)$$

This is a line in R^4.

The null space answers the question of uniqueness of solutions to Sx = f. For, if Sx = f and Sy = f then S(x − y) = Sx − Sy = f − f = 0 and so x − y ∈ N(S). Hence, a solution to Sx = f will be unique if, and only if, N(S) = {0}.

4.2.3 Method for Finding the Basis

Let us now exhibit a basis for the null space of an arbitrary matrix A. We note that to solve Ax = 0 is to solve Ared x = 0. With respect to the latter, we suppose that

{cj | j = 1, ..., r} (4.9)

are the indices of the pivot columns and that

{cj | j = r + 1, ..., n} (4.10)

are the indices of the nonpivot columns. We accordingly define the r pivot variables

{x_{cj} | j = 1, ..., r} (4.11)

and the n − r free variables

{x_{cj} | j = r + 1, ..., n} (4.12)

One solves Ared x = 0 by expressing each of the pivot variables in terms of the nonpivot, or free, variables. In the example above, x1, x2, and x4 are pivot while x3 is free. Solving for the pivot in terms of the free, we find x4 = 0, x1 = x3, x2 = 0, or, written as a vector,

$$x = x_3 \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix} \qquad (4.13)$$

where x3 is free. As x3 ranges over all real numbers the x above traces out a line in R^4. This line is precisely the null space of A. Abstracting these calculations we arrive at:

Definition 12: A Basis for the Null Space
Suppose that A is m-by-n with pivot indices {cj | j = 1, ..., r} and free indices {cj | j = r + 1, ..., n}. A basis for N(A) may be constructed of n − r vectors

z1, z2, ..., z_{n−r}

where zk, and only zk, possesses a nonzero in its c_{r+k} component.

4.2.4 A MATLAB Observation

As usual, MATLAB has a way to make our lives simpler. If you have defined a matrix A and want to find a basis for its null space, simply call the function null(A). One small note about this function: if one adds an extra flag, 'r', as in null(A, 'r'), then the basis is displayed "rationally" as opposed to purely mathematically. The MATLAB help pages define the difference between the two modes as the rational mode being useful pedagogically and the mathematical mode of more value (gasp!) mathematically.
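A sketch of the two modes on the example matrix (4.7):

% The two display modes of null.
A = [0 1 0 0; -1 0 1 0; 0 0 0 1];
Zr = null(A, 'r')   % rational: [1; 0; 1; 0], matching (4.13)
Z  = null(A)        % orthonormal: [1; 0; 1; 0]/sqrt(2), up to sign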

4.2.5 Final thoughts on null spaces

There is a great deal more to finding null spaces; enough, in fact, to warrant another module. One important aspect and use of null spaces is their ability to inform us about the uniqueness of solutions. We use the column space (<http://cnx.org/content/columnspace/latest/>) to determine the existence of a solution x to the equation Ax = b. Once we know that a solution exists it is a perfectly reasonable question to want to know whether or not this solution is the only solution to this problem. The hard and fast rule is that a solution x is unique if and only if the null space of A is trivial, i.e., N(A) = {0}. One way to think about this is to consider that if Ax = 0 does not have a unique solution then, by linearity, neither does Ax = b. Conversely, if Az = 0 and z ≠ 0 and Ay = b then A(z + y) = b as well.

4.3 The Null and Column Spaces: An Example4

4.3.1 Preliminary Information

Let us compute bases for the null and column spaces of the adjacency matrix associated with the ladder below.

3 <http://cnx.org/content/columnspace/latest/>
4 This content is available online at <http://cnx.org/content/m10368/2.4/>.


Figure 4.1: An unstable ladder?

The ladder has 8 bars and 4 nodes, so 8 degrees of freedom. Denoting the horizontal and vertical displacements of node j by x_{2j−1} and x_{2j}, respectively, we arrive at the A matrix

A = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0
\end{pmatrix}

4.3.2 Finding a Basis for the Column Space

To determine a basis for R(A) we must find a way to discard its dependent columns. A moment's reflection reveals that columns 2 and 6 are colinear, as are columns 4 and 8. We seek, of course, a more systematic means of uncovering these and perhaps other less obvious dependencies. Such dependencies are more easily discerned from the row reduced form (Section 4.7.3)

A_red = rref(A) = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & -1 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}

Recall that rref performs the elementary row operations necessary to eliminate all nonzeros below the diagonal. For those who can't stand to miss any of the action I recommend rrefmovie.

Each nonzero row of A_red is called a pivot row. The first nonzero in each row of A_red is called a pivot. Each column that contains a pivot is called a pivot column. On account of the staircase nature of A_red we find that there are as many pivot columns as there are pivot rows. In our example there are six of each and, again on account of the staircase nature, the pivot columns are the linearly independent (Definition 19: "Linear Independence") columns of A_red. One now asks how this might help us distinguish the independent columns of A. For, although the rows of A_red are linear combinations of the rows of A, no such thing is true with respect to the columns. In our example, columns 1, 2, 3, 4, 5, 7 are the pivot columns. In general:

Proposition 4.1:
Suppose A is m-by-n. If columns { c_j | j = 1, …, r } are the pivot columns of A_red, then columns { c_j | j = 1, …, r } of A constitute a basis for R(A).

Proof: Note that the pivot columns of A_red are, by construction, linearly independent. Suppose, however, that columns { c_j | j = 1, …, r } of A are linearly dependent. In this case there exists a nonzero x ∈ R^n for which Ax = 0 and

x_k = 0,  k ∉ { c_j | j = 1, …, r }   (4.14)

Now Ax = 0 necessarily implies that A_red x = 0, contrary to the fact that columns { c_j | j = 1, …, r } are the pivot columns of A_red.

We now show that the span of columns { c_j | j = 1, …, r } of A indeed coincides with R(A). This is obvious if r = n, i.e., if all of the columns are linearly independent. If r < n, there exists a q ∉ { c_j | j = 1, …, r }. Looking back at A_red we note that its qth column is a linear combination of the pivot columns with indices not exceeding q. Hence, there exists an x satisfying (4.14) and A_red x = 0, with x_q = 1. This x then necessarily satisfies Ax = 0. This states that the qth column of A is a linear combination of columns { c_j | j = 1, …, r } of A.
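MATLAB's rref also reports the pivot column indices, so the basis of Proposition 4.1 may be harvested directly (a sketch; jb is the name rref's documentation gives the pivot-index output):

[R, jb] = rref(A);   % jb lists the pivot columns, here [1 2 3 4 5 7]
basis = A(:, jb)     % the corresponding columns of A span R(A)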

4.3.3 Finding a Basis for the Null Space

Let us now exhibit a basis for N(A). We exploit the already mentioned fact that N(A) = N(A_red). Regarding the latter, we partition the elements of x into so called pivot variables,

{ x_{c_j} | j = 1, …, r }

and free variables

{ x_k | k ∉ { c_j | j = 1, …, r } }

There are evidently n − r free variables. For convenience, let us denote these in the future by

{ x_{c_j} | j = r + 1, …, n }

One solves A_red x = 0 by expressing each of the pivot variables in terms of the nonpivot, or free, variables. In the example above, x_1, x_2, x_3, x_4, x_5, and x_7 are pivot while x_6 and x_8 are free. Solving for the pivot variables in terms of the free we find

x_7 = 0,  x_5 = 0,  x_4 = x_8,  x_3 = 0,  x_2 = x_6,  x_1 = 0

or, written as a vector,

x = x_6 (0, 1, 0, 0, 0, 1, 0, 0)^T + x_8 (0, 0, 0, 1, 0, 0, 0, 1)^T   (4.15)

where x_6 and x_8 are free. As x_6 and x_8 range over all real numbers, the x above traces out a plane in R^8. This plane is precisely the null space of A and (4.15) describes a generic element as the linear combination of two basis vectors. Compare this to what MATLAB returns when faced with null(A,'r'). Abstracting these calculations we arrive at

Proposition 4.2:
Suppose that A is m-by-n with pivot indices { c_j | j = 1, …, r } and free indices { c_j | j = r + 1, …, n }. A basis for N(A) may be constructed of n − r vectors z_1, z_2, …, z_{n−r} where z_k, and only z_k, possesses a nonzero in its c_{r+k} component.
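To make the comparison with MATLAB concrete (a sketch; the ladder's A is entered exactly as displayed above):

A = [ 1  0  0  0  0  0  0  0;
     -1  0  1  0  0  0  0  0;
      0  0 -1  0  0  0  0  0;
      0 -1  0  0  0  1  0  0;
      0  0  0 -1  0  0  0  1;
      0  0  0  0  1  0  0  0;
      0  0  0  0 -1  0  1  0;
      0  0  0  0  0  0 -1  0];
null(A, 'r')   % two columns, matching the basis vectors in (4.15)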

4.3.4 The Physical Meaning of Our Calculations

Let us not end on an abstract note however. We ask what R(A) and N(A) tell us about the ladder. Regarding R(A) the answer will come in the next chapter. The null space calculation however has revealed two independent motions against which the ladder does no work! Do you see that the two vectors in (4.15) encode rigid vertical motions of bars 4 and 5 respectively? As each of these lies in the null space of A, the associated elongation is zero. Can you square this with the ladder as pictured in Figure 4.1 (An unstable ladder?)? I hope not, for vertical motion of bar 4 must 'stretch' bars 1, 2, 6, and 7. How does one resolve this (apparent) contradiction?


4.4 Left Null Space5

4.4.1 Left Null Space

If one understands the concept of a null space (Section 4.2), the left null space is extremely easy to understand.

Definition 13: Left Null Space
The left null space of a matrix is the null space (Section 4.2) of its transpose, i.e.,

N(A^T) = { y ∈ R^m | A^T y = 0 }

The word "left" in this context stems from the fact that A^T y = 0 is equivalent to y^T A = 0, where y "acts" on A from the left.

4.4.2 Example

As A_red was the key to identifying the null space (Section 4.2) of A, we shall see that (A^T)_red is the key to the null space of A^T. If

A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix}   (4.16)

then

A^T = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \end{pmatrix}   (4.17)

and so

(A^T)_red = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \end{pmatrix}   (4.18)

We solve (A^T)_red y = 0 by recognizing that y_1 and y_2 are pivot variables while y_3 is free. Solving for the pivot variables in terms of the free we find y_2 = −2y_3 and y_1 = y_3, hence

N(A^T) = { y_3 (1, −2, 1)^T | y_3 ∈ R }   (4.19)
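Since the left null space is just a null space in disguise, the MATLAB check is one line (a sketch; recall that the prime operator is the (conjugate) transpose):

A = [1 1; 1 2; 1 3];
null(A', 'r')   % returns [1; -2; 1], matching (4.19)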

4.4.3 Finding a Basis for the Left Null Space

The procedure is no different than that used to compute the null space (Section 4.2) of A itself. In fact

Definition 14: A Basis for the Left Null Space
Suppose that A^T is n-by-m with pivot indices { c_j | j = 1, …, r } and free indices { c_j | j = r + 1, …, m }. A basis for N(A^T) may be constructed of m − r vectors

z_1, z_2, …, z_{m−r}

where z_k, and only z_k, possesses a nonzero in its c_{r+k} component.

5This content is available online at <http://cnx.org/content/m10292/2.7/>.


4.5 Row Space6

4.5.1 The Row Space

As the columns of A^T are simply the rows of A we call Ra(A^T) the row space of A. More precisely

Definition 15: Row Space
The row space of the m-by-n matrix A is simply the span of its rows, i.e.,

Ra(A^T) ≡ { A^T y | y ∈ R^m }

This is a subspace of R^n.

4.5.2 Example

Let us examine the matrix:

A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}   (4.20)

The row space of this matrix is:

Ra(A^T) = { y_1 (0, 1, 0, 0)^T + y_2 (−1, 0, 1, 0)^T + y_3 (0, 0, 0, 1)^T | y ∈ R^3 }   (4.21)

As these three rows are linearly independent (Definition 19: "Linear Independence") we may go no further. We "recognize" then Ra(A^T) as a three-dimensional subspace (Section 4.7.2) of R^4.

4.5.3 Method for Finding the Basis of the Row Space

Regarding a basis for Ra(A^T) we recall that the rows of A_red, the row reduced form (Section 4.7.3) of the matrix A, are merely linear combinations of the rows of A and hence

Ra(A^T) = Ra(A_red)   (4.22)

This leads immediately to:

Definition 16: A Basis for the Row Space
Suppose A is m-by-n. The pivot rows of A_red constitute a basis for Ra(A^T).

With respect to our example,

(0, 1, 0, 0)^T,  (−1, 0, 1, 0)^T,  (0, 0, 0, 1)^T   (4.23)

comprises a basis for Ra(A^T).
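The same recipe in MATLAB (a sketch; per Definition 16 we keep the nonzero rows of rref(A)):

A = [0 1 0 0; -1 0 1 0; 0 0 0 1];
R = rref(A);
R(any(R, 2), :)   % the pivot rows; their transposes give a basis for Ra(A')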

6This content is available online at <http://cnx.org/content/m10296/2.7/>.


4.6 Exercises: Columns and Null Spaces7

Exercises

1. I encourage you to use rref and null for the following.

• (i) Add a diagonal crossbar between nodes 3 and 2 in the unstable ladder figure (Figure 4.1: An unstable ladder?) and compute bases for the column and null spaces of the new adjacency matrix. As this crossbar fails to stabilize the ladder, we shall add one more bar.

• (ii) To the 9 bar ladder of (i) add a diagonal cross bar between nodes 1 and the left end of bar 6. Compute bases for the column and null spaces of the new adjacency matrix.

2. We wish to show that N(A) = N(A^T A) regardless of A.

• (i) We first take a concrete example. Report the findings of null when applied to A and A^T A for the A matrix associated with the unstable ladder figure (Figure 4.1: An unstable ladder?).

• (ii) Show that N(A) ⊆ N(A^T A), i.e., that if Ax = 0 then A^T Ax = 0.

• (iii) Show that N(A^T A) ⊆ N(A), i.e., that if A^T Ax = 0 then Ax = 0. (Hint: if A^T Ax = 0 then x^T A^T Ax = 0.)

3. Suppose that A is m-by-n and that N(A) = R^n. Argue that A must be the zero matrix.

4.7 Appendices/Supplements

4.7.1 Vector Space8

4.7.1.1 Introduction

You have long taken for granted the fact that the set of real numbers, R, is closed under addition and multiplication, that each number has a unique additive inverse, and that the commutative, associative, and distributive laws were right as rain. The set, C, of complex numbers also enjoys each of these properties, as do the sets R^n and C^n of columns of n real and complex numbers, respectively.

To be more precise, we write x and y in R^n as

x = (x_1, x_2, …, x_n)^T,  y = (y_1, y_2, …, y_n)^T

and define their vector sum as the elementwise sum

x + y = (x_1 + y_1, x_2 + y_2, …, x_n + y_n)^T   (4.24)

and similarly, the product of a complex scalar, z ∈ C, with x as:

z x = (z x_1, z x_2, …, z x_n)^T   (4.25)

7 This content is available online at <http://cnx.org/content/m10367/2.4/>.
8 This content is available online at <http://cnx.org/content/m10298/2.6/>.


4.7.1.2 Vector Space

These notions lead naturally to the concept of vector space. A set V is said to be a vector space if

1. x + y = y + x for each x and y in V
2. (x + y) + z = x + (y + z) for each x, y, and z in V
3. There is a unique "zero vector" 0 such that x + 0 = x for each x in V
4. For each x in V there is a unique vector −x such that x + (−x) = 0
5. 1x = x for each x in V
6. (c_1 c_2) x = c_1 (c_2 x) for each x in V and c_1 and c_2 in C
7. c (x + y) = cx + cy for each x and y in V and c in C
8. (c_1 + c_2) x = c_1 x + c_2 x for each x in V and c_1 and c_2 in C

4.7.2 Subspaces9

4.7.2.1 Subspace

A subspace is a subset of a vector space (Section 4.7.1) that is itself a vector space. The simplest example is a line through the origin in the plane. For the line is definitely a subset and if we add any two vectors on the line we remain on the line and if we multiply any vector on the line by a scalar we remain on the line. The same could be said for a line or plane through the origin in 3-space. As we shall be travelling in spaces with many, many dimensions it pays to have a general definition.

Definition 17: Subspace
A subset S of a vector space V is a subspace of V when
1. if x and y belong to S then so does x + y, and
2. if x belongs to S and t is real then tx belongs to S.

As these are oftentimes unwieldy objects it pays to look for a handful of vectors from which the entire subset may be generated. For example, the set of x for which x_1 + x_2 + x_3 + x_4 = 0 constitutes a subspace of R^4. Can you 'see' this set? Do you 'see' that

(−1, 1, 0, 0)^T  and  (−1, 0, 1, 0)^T  and  (−1, 0, 0, 1)^T

not only belong to the set but in fact generate all possible elements? More precisely, we say that these vectors span the subspace of all possible solutions.

9This content is available online at <http://cnx.org/content/m10297/2.6/>.


Definition 18: Span
A finite collection s_1, s_2, …, s_n of vectors in the subspace S is said to span S if each element of S can be written as a linear combination of these vectors. That is, if for each s ∈ S there exist n reals x_1, x_2, …, x_n such that s = x_1 s_1 + x_2 s_2 + · · · + x_n s_n.

When attempting to generate a subspace as the span of a handful of vectors it is natural to ask what is the fewest number possible. The notion of linear independence helps us clarify this issue.

Definition 19: Linear Independence
A finite collection s_1, s_2, …, s_n of vectors is said to be linearly independent when the only reals x_1, x_2, …, x_n for which x_1 s_1 + x_2 s_2 + · · · + x_n s_n = 0 are x_1 = x_2 = · · · = x_n = 0. In other words, when the null space (Section 4.2) of the matrix whose columns are s_1, s_2, …, s_n contains only the zero vector.

Combining these definitions, we arrive at the precise notion of a 'generating set.'

Definition 20: Basis
Any linearly independent spanning set of a subspace S is called a basis of S.

Though a subspace may have many bases they all have one thing in common:

Definition 21: Dimension
The dimension of a subspace is the number of elements in its basis.

4.7.3 Row Reduced Form10

4.7.3.1 Row Reduction

A central goal of science and engineering is to reduce the complexity of a model without sacrificing its integrity. Applied to matrices, this goal suggests that we attempt to eliminate nonzero elements and so 'uncouple' the rows. In order to retain its integrity the elimination must obey two simple rules.

Definition 22: Elementary Row Operations
1. You may swap any two rows.
2. You may add to a row a constant multiple of another row.

With these two elementary operations one can systematically eliminate all nonzeros below the diagonal. For example, given

\begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 2 & 3 & 4 \end{pmatrix}   (4.26)

it seems wise to swap the first and fourth rows and so arrive at

\begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}   (4.27)

10This content is available online at <http://cnx.org/content/m10295/2.6/>.


adding the first row to the third now produces

\begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 1 & 0 & 0 \\ 0 & 2 & 4 & 4 \\ 0 & 0 & 0 & 1 \end{pmatrix}   (4.28)

subtracting twice the second row from the third yields

\begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 4 & 4 \\ 0 & 0 & 0 & 1 \end{pmatrix}   (4.29)

a matrix with zeros below its diagonal. This procedure is not restricted to square matrices. For example, given

\begin{pmatrix} 1 & 1 & 1 & 1 \\ 2 & 4 & 4 & 2 \\ 3 & 5 & 5 & 3 \end{pmatrix}   (4.30)

we start at the bottom left then move up and right. Namely, we subtract 3 times the first row from the third and arrive at

\begin{pmatrix} 1 & 1 & 1 & 1 \\ 2 & 4 & 4 & 2 \\ 0 & 2 & 2 & 0 \end{pmatrix}   (4.31)

and then subtract twice the first row from the second,

\begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 2 & 2 & 0 \\ 0 & 2 & 2 & 0 \end{pmatrix}   (4.32)

and finally subtract the second row from the third,

\begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 2 & 2 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}   (4.33)

It helps to label the before and after matrices.

Definition 23: The Row Reduced Form
Given the matrix A we apply elementary row operations until each nonzero below the diagonal is eliminated. We refer to the resulting matrix as A_red.


4.7.3.2 Uniqueness and Pivots

As there is a certain amount of flexibility in how one carries out the reduction it must be admitted that the reduced form is not unique. That is, two people may begin with the same matrix yet arrive at different reduced forms. The differences however are minor, for both will have the same number of nonzero rows and the nonzeros along the diagonal will follow the same pattern. We capture this pattern with the following suite of definitions,

Definition 24: Pivot Row
Each nonzero row of A_red is called a pivot row.

Definition 25: Pivot
The first nonzero term in each row of A_red is called a pivot.

Definition 26: Pivot Column
Each column of A_red that contains a pivot is called a pivot column.

Definition 27: Rank
The number of pivots in a matrix is called the rank of that matrix.

Regarding our example matrices, the first (4.26) has rank 4 and the second (4.30) has rank 2.

4.7.3.3 Row Reduction in MATLAB

MATLAB's rref command goes full-tilt and attempts to eliminate ALL off-diagonal terms and to leave nothing but ones on the diagonal. I recommend you try it on our two examples. You can watch its individual decisions by using rrefmovie instead.
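For example (a sketch to try at the prompt; the displayed ranks are those computed above):

A1 = [0 1 0 0; -1 0 1 0; 0 0 0 1; 1 2 3 4];
A2 = [1 1 1 1; 2 4 4 2; 3 5 5 3];
rref(A1)   % rank 4, so the full reduction is the 4-by-4 identity
rref(A2)   % rank 2: two pivot rows atop one row of zeros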


Chapter 5

Least Squares

5.1 Least Squares1

5.1.1 Introduction

We learned in the previous chapter that Ax = b need not possess a solution when the number of rows of A exceeds its rank, i.e., r < m. As this situation arises quite often in practice, typically in the guise of 'more equations than unknowns,' we establish a rationale for the absurdity Ax = b.

5.1.2 The Normal Equations

The goal is to choose x such that Ax is as close as possible to b. Measuring closeness in terms of the sum of the squares of the components we arrive at the 'least squares' problem of minimizing

‖Ax − b‖² = (Ax − b)^T (Ax − b)   (5.1)

over all x ∈ R^n. The path to the solution is illuminated by the Fundamental Theorem. More precisely, we write b = b_R + b_N, where b_R ∈ R(A) and b_N ∈ N(A^T). On noting that (i) Ax − b_R ∈ R(A) for every x ∈ R^n and (ii) R(A) ⊥ N(A^T), we arrive at the Pythagorean Theorem.

Pythagorean Theorem

‖Ax − b‖² = ‖Ax − (b_R + b_N)‖² = ‖Ax − b_R‖² + ‖b_N‖²   (5.2)

It is now clear from the Pythagorean Theorem (5.2) that the best x is the one that satisfies

Ax = b_R   (5.3)

As b_R ∈ R(A) this equation indeed possesses a solution. We have yet however to specify how one computes b_R given b. Although an explicit expression for b_R, the so called orthogonal projection of b onto R(A), in terms of A and b is within our grasp we shall, strictly speaking, not require it. To see this, let us note that if x satisfies the above equation (5.3) then

Ax − b = Ax − (b_R + b_N) = −b_N   (5.4)

1This content is available online at <http://cnx.org/content/m10371/2.9/>.


As b_N is no more easily computed than b_R you may claim that we are just going in circles. The 'practical' information in the above equation (5.4) however is that Ax − b ∈ N(A^T), i.e., A^T (Ax − b) = 0, i.e.,

A^T A x = A^T b   (5.5)

As A^T b ∈ R(A^T) = R(A^T A) regardless of b this system, often referred to as the normal equations, indeed has a solution. This solution is unique so long as the columns of A^T A are linearly independent, i.e., so long as N(A^T A) = {0}. Recalling Chapter 2, Exercise 2, we note that this is equivalent to N(A) = {0}. We summarize our findings in

Theorem 5.1:
The set of x ∈ R^n for which the misfit ‖Ax − b‖² is smallest is composed of those x for which A^T A x = A^T b. There is always at least one such x. There is exactly one such x if N(A) = {0}.

As a concrete example, suppose with reference to Figure 5.1 that

A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}  and  b = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.

Figure 5.1: The decomposition of b.


As b ∉ R(A) there is no x such that Ax = b. Indeed,

‖Ax − b‖² = (x_1 + x_2 − 1)² + (x_2 − 1)² + 1 ≥ 1,

with the minimum uniquely attained at x = (0, 1)^T, in agreement with the unique solution of the above equation (5.5), for

A^T A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}  and  A^T b = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.

We now recognize, a posteriori, that

b_R = Ax = (1, 1, 0)^T

is the orthogonal projection of b onto the column space of A.
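The example is easily checked numerically (a sketch; for rectangular A one may equally well write x = A\b, which solves the least squares problem directly):

A = [1 1; 0 1; 0 0];
b = [1; 1; 1];
x  = (A'*A) \ (A'*b)   % normal equations: x = [0; 1]
bR = A*x               % the projection of b onto R(A): [1; 1; 0]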

5.1.3 Applying Least Squares to the Biaxial Test Problem

We shall formulate the identification of the 20 fiber stiffnesses in this previous figure (Figure 3.3: A crude tissue model) as a least squares problem. We envision loading, f, the 9 nodes and measuring the associated 18 displacements, x. From knowledge of x and f we wish to infer the components of K = diag(k) where k is the vector of unknown fiber stiffnesses. The first step is to recognize that

A^T K A x = f   (5.6)

may be written as

B k = f,  B = A^T diag(Ax)   (5.7)

Though conceptually simple this is not of great use in practice, for B is 18-by-20 and hence the above equation (5.7) possesses many solutions. The way out is to compute k as the result of more than one experiment. We shall see that, for our small sample, 2 experiments will suffice. To be precise, we suppose that x^1 is the displacement produced by loading f^1 while x^2 is the displacement produced by loading f^2.

We then piggyback the associated pieces in

B = \begin{pmatrix} A^T diag(Ax^1) \\ A^T diag(Ax^2) \end{pmatrix}  and  f = \begin{pmatrix} f^1 \\ f^2 \end{pmatrix}

This B is 36-by-20 and so the system Bk = f is overdetermined and hence ripe for least squares.

We proceed then to assemble B and f. We suppose f^1 and f^2 to correspond to horizontal and vertical stretching

f^1 = (−1, 0, 0, 0, 1, 0, −1, 0, 0, 0, 1, 0, −1, 0, 0, 0, 1, 0)^T   (5.8)

f^2 = (0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, −1, 0, −1, 0, −1)^T   (5.9)

respectively. For the purpose of our example we suppose that each k_j = 1 except k_8 = 5. We assemble A^T K A as in Chapter 2 and solve

A^T K A x^j = f^j   (5.10)

with the help of the pseudoinverse. In order to impart some 'reality' to this problem we taint each x^j with 10 percent noise prior to constructing B. Regarding

B^T B k = B^T f   (5.11)

we note that MATLAB solves this system when presented with k=B\f when B is rectangular. We have plotted the results of this procedure in Figure 5.2. The stiff fiber is readily identified.
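In outline the identification reads as follows (a sketch; it assumes A, the displacements x1, x2 and loadings f1, f2 have been assembled as described above):

B = [A' * diag(A*x1); A' * diag(A*x2)];   % the piggybacked 36-by-20 system
f = [f1; f2];
k = B \ f;   % backslash solves B'*B*k = B'*f for rectangular B
bar(k)       % the stiff fiber stands out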


Figure 5.2: Results of a successful biaxial test.

5.1.4 Projections

From an algebraic point of view (5.5) is an elegant reformulation of the least squares problem. Though easy to remember it unfortunately obscures the geometric content, suggested by the word 'projection,' of (5.4). As projections arise frequently in many applications we pause here to develop them more carefully. With respect to the normal equations we note that if N(A) = {0} then

x = (A^T A)^{−1} A^T b   (5.12)

and so the orthogonal projection of b onto R(A) is:

b_R = Ax = A (A^T A)^{−1} A^T b   (5.13)

Defining

P = A (A^T A)^{−1} A^T   (5.14)


(5.13) takes the form b_R = Pb. Commensurate with our notion of what a 'projection' should be we expect that P map vectors not in R(A) onto R(A) while leaving vectors already in R(A) unscathed. More succinctly, we expect that P b_R = b_R, i.e., P(Pb) = Pb. As the latter should hold for all b ∈ R^m we expect that

P² = P   (5.15)

With respect to (5.14) we find that indeed

P² = A (A^T A)^{−1} A^T A (A^T A)^{−1} A^T = A (A^T A)^{−1} A^T = P   (5.16)

We also note that the P in (5.14) is symmetric. We dignify these properties through

Definition 28: Orthogonal Projection
A matrix P that satisfies P² = P is called a projection. A symmetric projection is called an orthogonal projection.

We have taken some pains to motivate the use of the word 'projection.' You may be wondering however what symmetry has to do with orthogonality. We explain this in terms of the tautology

b = Pb + (I − P) b   (5.17)

Now, if P is a projection then so too is I − P. Moreover, if P is symmetric then the dot product of b's two constituents is

(Pb)^T (I − P) b = b^T P^T (I − P) b = b^T (P − P²) b = b^T 0 b = 0   (5.18)

i.e., Pb is orthogonal to (I − P)b. As examples of nonorthogonal projections we offer

P = \begin{pmatrix} 1 & 0 & 0 \\ -1/2 & 0 & 0 \\ -1/4 & -1/2 & 1 \end{pmatrix}

and I − P. Finally, let us note that the central formula, P = A (A^T A)^{−1} A^T, is even a bit more general than advertised. It has been billed as the orthogonal projection onto the column space of A. The need often arises however for the orthogonal projection onto some arbitrary subspace M. The key to using the old P is simply to realize that every subspace is the column space of some matrix. More precisely, if

x_1, …, x_m   (5.19)

is a basis for M then clearly if these x_j are placed into the columns of a matrix called A then R(A) = M. For example, if M is the line through (1, 1)^T then

P = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \frac{1}{2} \begin{pmatrix} 1 & 1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}   (5.20)

is the orthogonal projection onto M.
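A quick numerical sanity check of (5.20)'s defining properties (a sketch):

a = [1; 1];
P = a * ((a'*a) \ a');   % P = a (a'a)^{-1} a' = ones(2)/2
P*P - P                  % zero matrix: P is a projection
P - P'                   % zero matrix: symmetric, hence orthogonal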


5.1.5 Exercises

1. Gilbert Strang was stretched on a rack to lengths ℓ = 6, 7, and 8 feet under applied forces of f = 1, 2, and 4 tons. Assuming Hooke's law ℓ − L = cf, find his compliance, c, and original height, L, by least squares.

2. With regard to the example of Section 5.1.3, note that, due to the random generation of the noise that taints the displacements, one gets a different 'answer' every time the code is invoked.

(a) Write a loop that invokes the code a statistically significant number of times and submit bar plots of the average fiber stiffness and its standard deviation for each fiber, along with the associated M-file.

(b) Experiment with various noise levels with the goal of determining the level above which it becomes difficult to discern the stiff fiber. Carefully explain your findings.

3. Find the matrix that projects R^3 onto the line spanned by (1, 0, 1)^T.

4. Find the matrix that projects R^3 onto the plane spanned by (1, 0, 1)^T and (1, 1, −1)^T.

5. If P is the projection of R^m onto a k-dimensional subspace M, what is the rank of P and what is R(P)?


Chapter 6

Matrix Methods for Dynamical Systems

6.1 Nerve Fibers and the Dynamic Strang Quartet1

6.1.1 Introduction

Up to this point we have largely been concerned with

1. Deriving linear systems of algebraic equations (from considerations of static equilibrium) and
2. The solution of such systems via Gaussian elimination.

In this module we hope to begin to persuade the reader that our tools extend in a natural fashion to the class of dynamic processes. More precisely, we shall argue that

1. Matrix Algebra plays a central role in the derivation of mathematical models of dynamical systems and that,
2. With the aid of the Laplace transform in an analytical setting or the Backward Euler method in the numerical setting, Gaussian elimination indeed produces the solution.

6.1.2 Nerve Fibers and the Dynamic Strang Quartet

6.1.2.1 Gathering Information

A nerve fiber's natural electrical stimulus is not direct current but rather a short burst of current, the so-called nervous impulse. In such a dynamic environment the cell's membrane behaves not only like a leaky conductor but also like a charge separator, or capacitor.

1This content is available online at <http://cnx.org/content/m10168/2.6/>.


Figure 6.1: An RC model of a nerve fiber

The typical value of a cell's membrane capacitance is

c = 1 µF/cm²

where µF denotes micro-Farad. Recalling our variable conventions (Section 2.1.1: Nerve Fibers and the Strang Quartet), the capacitance of a single compartment is

C_m = 2πalc / N

and runs parallel to each R_m, see Figure 6.1 (An RC model of a nerve fiber). This figure also differs from the simpler circuit (Figure 2.3: The fully dressed circuit model) from the introductory electrical modeling module in that it possesses two edges to the left of the stimuli. These edges serve to mimic that portion of the stimulus current that is shunted by the cell body. If A_cb denotes the surface area of the cell body, then it has

Definition 29: Capacitance of cell body
C_cb = A_cb c

Definition 30: Resistance of cell body
R_cb = ρ_m / A_cb

6.1.2.2 Updating the Strang Quartet

We ask now how the static Strang Quartet of the introductory electrical module should be augmented.

6.1.2.2.1 Updating (S1')

Regarding (S1') we proceed as before. The voltage drops are

e1 = x1

Page 59: Matrix Analysis - The Free Information Society · Chapter 2 Matrix Methods for Electrical Systems 2.1 Nerve Fibers and the Strang Quartet 1 2.1.1 Nerve Fibers and the Strang Quartet

53

e2 = x1 − Em

e3 = x1 − x2

e4 = x2

e5 = x2 − Em

e6 = x2 − x3

e7 = x3

e8 = x3 − Em

and so e = b − Ax, where

b = −E_m (0, 1, 0, 0, 1, 0, 0, 1)^T

and

A = \begin{pmatrix}
-1 & 0 & 0 \\
-1 & 0 & 0 \\
-1 & 1 & 0 \\
0 & -1 & 0 \\
0 & -1 & 0 \\
0 & -1 & 1 \\
0 & 0 & -1 \\
0 & 0 & -1
\end{pmatrix}

6.1.2.2.2 Updating (S2)

To update (S2) (Section 2.1.2.2: Strang Quartet, Step 2) we must now augment Ohm's law with

Definition 31: Voltage-current law obeyed by a capacitor
The current through a capacitor is proportional to the time rate of change of the potential across it.

This yields (denoting the time derivative by ′):

y_1 = C_cb e_1′,  y_2 = e_2 / R_cb,  y_3 = e_3 / R_i,
y_4 = C_m e_4′,  y_5 = e_5 / R_m,  y_6 = e_6 / R_i,
y_7 = C_m e_7′,  y_8 = e_8 / R_m

or, in matrix terms,

y = Ge + Ce′

where

G = diag(0, 1/R_cb, 1/R_i, 0, 1/R_m, 1/R_i, 0, 1/R_m)

and

C = diag(C_cb, 0, 0, C_m, 0, 0, C_m, 0)

are the (diagonal) conductance and capacitance matrices.

6.1.2.2.3 Updating (S3)

As Kirchhoff's Current Law is insensitive to the type of device occupying an edge, step (S3) proceeds exactly as before (Section 2.1.2.3: Strang Quartet, Step 3).

i0 − y1 − y2 − y3 = 0

y3 − y4 − y5 − y6 = 0

y6 − y7 − y8 = 0

or, in matrix terms,

A^T y = −f, where f = (i_0, 0, 0)^T


6.1.2.2.4 Step (S4): Assembling

Step (S4) remains one of assembling,

A^T y = −f  ⇒  A^T (Ge + Ce′) = −f  ⇒  A^T (G(b − Ax) + C(b′ − Ax′)) = −f

becomes

A^T C A x′ + A^T G A x = A^T G b + f + A^T C b′.   (6.1)

This is the general form of the potential equations for an RC circuit. It presumes of the user knowledge of the initial value of each of the potentials,

x(0) = X   (6.2)

Regarding the circuit of Figure 6.1 (An RC model of a nerve fiber), and letting G_cb = 1/R_cb, G_i = 1/R_i, and G_m = 1/R_m, we find

A^T C A = \begin{pmatrix} C_cb & 0 & 0 \\ 0 & C_m & 0 \\ 0 & 0 & C_m \end{pmatrix},
A^T G A = \begin{pmatrix} G_cb + G_i & -G_i & 0 \\ -G_i & 2G_i + G_m & -G_i \\ 0 & -G_i & G_i + G_m \end{pmatrix},

A^T G b = E_m (G_cb, G_m, G_m)^T  and  A^T C b′ = (0, 0, 0)^T,

and an initial (rest) potential of

x(0) = E_m (1, 1, 1)^T

6.1.2.3 Modes of Attack

We shall now outline two modes of attack on such problems. The Laplace Transform (Section 6.2) is an analytical tool that produces exact, closed-form solutions for small tractable systems and therefore offers insight into how larger systems 'should' behave. The Backward-Euler method (Section 6.4) is a technique for solving a discretized (and therefore approximate) version of (6.1). It is highly flexible, easy to code, and works on problems of great size. Both the Backward-Euler and Laplace Transform methods require, at their core, the algebraic solution of a linear system of equations. In deriving these methods we shall find it more convenient to proceed from the generic system

x′ = Bx + g   (6.3)

With respect to our fiber problem

B = −(A^T C A)^{−1} A^T G A = \begin{pmatrix}
-(G_cb + G_i)/C_cb & G_i/C_cb & 0 \\
G_i/C_m & -(2G_i + G_m)/C_m & G_i/C_m \\
0 & G_i/C_m & -(G_i + G_m)/C_m
\end{pmatrix}   (6.4)

and

g = (A^T C A)^{−1} (A^T G b + f) = ( (G_cb E_m + i_0)/C_cb, E_m G_m/C_m, E_m G_m/C_m )^T


6.2 The Laplace Transform2

The Laplace Transform is typically credited with taking dynamical problems into static problems. Recall that the Laplace Transform of the function h is

(Lh)(s) ≡ ∫_0^∞ e^{−st} h(t) dt.

MATLAB is very adept at such things. For example:

Example 6.1: The Laplace Transform in MATLAB

syms t
laplace(exp(t))
ans = 1/(s-1)
laplace(t*exp(-t))
ans = 1/(s+1)^2

The Laplace Transform of a matrix of functions is simply the matrix of Laplace transforms of the individual elements.

Example 6.2: Laplace Transform of a matrix of functions

L \begin{pmatrix} e^t \\ t e^{−t} \end{pmatrix} = \begin{pmatrix} 1/(s − 1) \\ 1/(s + 1)² \end{pmatrix}

Now, in preparing to apply the Laplace transform to our equation from the dynamic Strang Quartet module (6.3):

x′ = Bx + g,

we write it as

L{x′} = L{Bx + g}   (6.5)

and so must determine how L acts on derivatives and sums. With respect to the latter it follows directly from the definition that

L{Bx + g} = L{Bx} + L{g} = B L{x} + L{g}.   (6.6)

Regarding its effect on the derivative we find, on integrating by parts, that

L{x′} = ∫_0^∞ e^{−st} x′(t) dt = x(t) e^{−st} |_0^∞ + s ∫_0^∞ e^{−st} x(t) dt.

Supposing that x and s are such that x(t) e^{−st} → 0 as t → ∞ we arrive at

L{x′} = s L{x} − x(0).   (6.7)

2This content is available online at <http://cnx.org/content/m10169/2.5/>.


Now, upon substituting (6.6) and (6.7) into (6.5) we find

s L{x} − x(0) = B L{x} + L{g},

which is easily recognized to be a linear system for L{x}, namely

(sI − B) L{x} = L{g} + x(0).   (6.8)

The only thing that distinguishes this system from those encountered since our first brush (Section 2.1) with these systems is the presence of the complex variable s. This complicates the mechanical steps of Gaussian Elimination or the Gauss-Jordan Method but the methods indeed apply without change. Taking up the latter method, we write

L{x} = (sI − B)^{−1} (L{g} + x(0)).

The matrix (sI − B)^{−1} is typically called the transfer function or resolvent, associated with B, at s. We turn to MATLAB for its symbolic calculation (for more information, see the tutorial3 on MATLAB's symbolic toolbox). For example,

Example 6.3

syms s
B = [2 -1; -1 2];
R = inv(s*eye(2) - B)

R =
[ (s-2)/(s*s-4*s+3), -1/(s*s-4*s+3)]
[ -1/(s*s-4*s+3), (s-2)/(s*s-4*s+3)]

We note that (sI − B)^{−1} is well defined except at the roots of the quadratic, s² − 4s + 3. This quadratic is the determinant of sI − B and is often referred to as the characteristic polynomial of B. Its roots are called the eigenvalues of B.

Example 6.4
As a second example let us take the B matrix of the dynamic Strang Quartet module (6.4) with the parameter choices specified in fib3.m4, namely

B = \begin{pmatrix} -0.135 & 0.125 & 0 \\ 0.5 & -1.01 & 0.5 \\ 0 & 0.5 & -0.51 \end{pmatrix}   (6.9)

The associated (sI − B)^{−1} is a bit bulky (please run fib3.m5) so we display here only the denominator of each term, i.e.,

s³ + 1.655s² + 0.4078s + 0.0039.   (6.10)

Assuming a current stimulus of the form i_0(t) = t³ e^{−t/6} / 10000 and E_m = 0 brings

(Lg)(s) = ( 0.191/(s + 1/6)⁴, 0, 0 )^T

3 http://www.mathworks.com/access/helpdesk/help/toolbox/symbolic/symbolic.shtml
4 http://www.caam.rice.edu/∼caam335/cox/lectures/b3.m
5 http://www.caam.rice.edu/∼caam335/cox/lectures/b3.m


and so (6.10) persists in

L{x} = (sI − B)^{−1} L{g} = \frac{0.191}{(s + 1/6)^4 (s³ + 1.655s² + 0.4078s + 0.0039)} ( s² + 1.5s + 0.27, 0.5s + 0.26, 0.2497 )^T

Now comes the rub. A simple linear solve (or inversion) has left us with the Laplace transform of x. The accursed

Theorem 6.1: No Free Lunch Theorem
We shall have to do some work in order to recover x from L{x}.

confronts us. We shall face it down in the Inverse Laplace module (Section 6.3).

6.3 The Inverse Laplace Transform6

6.3.1 To Come

In The Transfer Function (Section 9.2) we shall establish that the inverse Laplace transform of a function h is

(L^{−1} h)(t) = \frac{1}{2π} ∫_{−∞}^{∞} e^{(c + yi)t} h(c + yi) dy   (6.11)

where i ≡ √−1 and the real number c is chosen so that all of the singularities of h lie to the left of the line of integration.

6.3.2 Proceeding with the Inverse Laplace Transform

With the inverse Laplace transform one may express the solution of x′ = Bx + g as

x(t) = L^{−1}( (sI − B)^{−1} (L{g} + x(0)) )   (6.12)

As an example, let us take the first component of L{x}, namely

L{x_1}(s) = \frac{0.19 (s² + 1.5s + 0.27)}{(s + 1/6)^4 (s³ + 1.655s² + 0.4078s + 0.0039)}.

We define:

Definition 32: poles
Also called singularities, these are the points s at which L{x_1}(s) blows up.

These are clearly the roots of its denominator, namely

−1/100,  −329/400 ± √73/16,  and  −1/6.   (6.13)

All four being negative, it suffices to take c = 0 and so the integration in (6.11) proceeds up the imaginary axis. We don't suppose the reader to have already encountered integration in the complex plane but hope that this example might provide the motivation necessary for a brief overview of such. Before that however we note that MATLAB has digested the calculus we wish to develop. Referring again to fib3.m7 for details we note that the ilaplace command produces

6 This content is available online at <http://cnx.org/content/m10170/2.8/>.
7 http://www.caam.rice.edu/∼caam335/cox/lectures/b3.m


x_1(t) = 211.35 e^{−t/100} − (0.0554t³ + 4.5464t² + 1.085t + 474.19) e^{−t/6} + e^{−329t/400} ( 262.842 cosh(√73 t/16) + 262.836 sinh(√73 t/16) )

Figure 6.2: The 3 potentials associated with the RC circuit model figure (Figure 6.1: An RC model of a nerve fiber).

The other potentials, see the figure above, possess similar expressions. Please note that each of the poles of L{x_1} appears as an exponent in x_1 and that the coefficients of the exponentials are polynomials whose degrees are determined by the order of the respective pole.
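The symbolic inversion itself is a one-liner (a sketch; the transform is the one displayed above):

syms s t
Lx1 = 0.19*(s^2 + 1.5*s + 0.27) / ...
      ((s + 1/6)^4 * (s^3 + 1.655*s^2 + 0.4078*s + 0.0039));
x1 = ilaplace(Lx1)   % returns the exponential-polynomial expression for x_1(t)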

6.4 The Backward-Euler Method8

Where in the Inverse Laplace Transform (Section 6.3) module we tackled the derivative in

x′ = Bx + g,   (6.14)

via an integral transform, we pursue in this section a much simpler strategy, namely, replace the derivative with a finite difference quotient. That is, one chooses a small dt and 'replaces' (6.14) with

(x̃(t) − x̃(t − dt))/dt = B x̃(t) + g(t).   (6.15)

The utility of (6.15) is that it gives a means of solving for x̃ at the present time, t, from knowledge of x̃ in the immediate past, t − dt. For example, as x̃(0) = x(0) is supposed known we write (6.15) as

(I/dt − B) x̃(dt) = x̃(0)/dt + g(dt).

8This content is available online at <http://cnx.org/content/m10171/2.6/>.


Solving this for x̃(dt) we return to (6.15) and find

(I/dt − B) x̃(2dt) = x̃(dt)/dt + g(2dt)

and solve for x̃(2dt). The general step from past to present,

x̃(jdt) = (I/dt − B)^{−1} ( x̃((j − 1)dt)/dt + g(jdt) )   (6.16)

is repeated until some desired final time, T dt, is reached. This equation has been implemented in fib3.m9 with dt = 1 and B and g as in the dynamic Strang module (6.4). The resulting x̃ (run fib3.m10 yourself!) is indistinguishable from the plot we obtained (Figure 6.2) in the Inverse Laplace module.
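In code the general step is one backslash per time step (a sketch; B and g are those of (6.3), here with g frozen in time, and x0, dt, and the step count T are assumed given):

x = x0;
M = eye(size(B)) / dt - B;   % the shifted matrix of (6.16)
for j = 1:T
    x = M \ (x/dt + g);      % march from t - dt to t
end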

Comparing the two representations, this equation (6.12) and (6.16), we see that they both produce the solution to the general linear system of ordinary differential equations, see this eqn (6.3), by simply inverting a shifted copy of B. The former representation is hard but exact while the latter is easy but approximate. Of course we should expect the approximate solution, x̃, to approach the exact solution, x, as the time step, dt, approaches zero. To see this let us return to (6.16) and assume, for now, that g ≡ 0. In this case, one can reverse the above steps and arrive at the representation

x̃(jdt) = ((I − dtB)^{−1})^j x(0)   (6.17)

Now, for a fixed time t we suppose that dt = t/j and ask whether

x(t) = lim_{j→∞} ((I − (t/j) B)^{−1})^j x(0)

This limit, at least when B is one-by-one, yields the exponential

x(t) = e^{Bt} x(0)

clearly the correct solution to this equation (6.3). A careful explication of the matrix exponential and its relationship to this equation (6.12) will have to wait until we have mastered the inverse Laplace transform.

6.5 Exercises: Matrix Methods for Dynamical Systems11

1. Compute, without the aid of a machine, the Laplace transforms of e^t and t e^{−t}. Show ALL of your work.

2. Extract from fib3.m analytical expressions for x_2 and x_3.

3. Use eig to compute the eigenvalues of B as given in this equation (6.9). Use det to compute the characteristic polynomial of B. Use roots to compute the roots of this characteristic polynomial. Compare these to the results of eig. How does MATLAB compute the roots of a polynomial? (type help roots for the answer).

4. Adapt the Backward Euler portion of fib3.m so that one may specify an arbitrary number of compartments, as in fib1.m. Submit your well documented M-file along with a plot of x_1 and x_10 versus time (on the same well labeled graph) for a nine compartment fiber of length l = 1 cm.

5. Derive this equation (6.17) from a previous equation (6.16) by working backwards toward x(0). Along the way you should explain why

((I/dt − B)^{−1}) / dt = (I − dtB)^{−1}.

9 http://www.caam.rice.edu/∼caam335/cox/lectures/b3.m

10 http://www.caam.rice.edu/∼caam335/cox/lectures/b3.m
11 This content is available online at <http://cnx.org/content/m10526/2.4/>.


6. Show, for scalar B, that ((1 − (t/j) B)^{−1})^j → e^{Bt} as j → ∞. Hint: By definition

((1 − (t/j) B)^{−1})^j = e^{j log( 1 / (1 − (t/j) B) )};

now use L'Hopital's rule to show that j log( 1 / (1 − (t/j) B) ) → Bt.

6.6 Supplemental

6.6.1 Matrix Analysis of the Branched Dendrite Nerve Fiber12

6.6.1.1 Introduction

In the prior modules on static (Section 2.1) and dynamic electrical systems, we analyzed basic, hypothetical one-branch nerve fibers using a modeling methodology we dubbed the Strang Quartet. You may be asking yourself whether this method is stout enough to handle the real fiber of our minds. Indeed, can we use our tools in a real-world setting?

6.6.1.2 Presentation

Figure 6.3: An actual nerve fiber — a pyramidal neuron from the CA3 region of a rat's hippocampus, scanned at (FIX ME) X magnification.

To answer your question, the above is a rendering of a neuron from a rat's hippocampus. The tools we have refined will enable us to model the electrical properties of a dendrite leaving the neuron's cell body. A three-branch model of such a dendrite, traced out with painstaking accuracy, appears in the diagram below.

12This content is available online at <http://cnx.org/content/m10177/2.7/>.


Figure 6.4: 3-branch dendrite model — a multi-compartment electrical model of a rendered dendrite fiber.

Our multi-compartment model reveals a 3 branch, 10 node, 27 edge structure to the fiber. Note that we have included the Nernst potentials (Definition: "Nernst potentials"), the nervous impulse as a current source, and the additional leftmost edges depicting stimulus current shunted by the cell body.

We will continue using our previous notation, namely: R_i and R_m denoting axial (Definition: "axial resistance") and membrane (Definition: "membrane resistance") resistances, respectively; x representing the vector of potentials x_1, …, x_10; and y denoting the vector of currents y_1, …, y_27. Using the typical value for a cell's membrane capacitance,

c = 1 µF/cm²,

we derive (see variable conventions (Section 2.1.1: Nerve Fibers and the Strang Quartet)):

Definition 33: Capacitance of a Single Compartment
C_m = 2πalc / N

This capacitance is modeled in parallel with the cell's membrane resistance. Additionally, letting A_cb denote the cell body's surface area, we recall that its capacitance and resistance are

Definition 34: Capacitance of cell body
C_cb = A_cb c

Definition 35: Resistance of cell body
R_cb = ρ_m / A_cb


6.6.1.3 Applying the Strang Quartet

6.6.1.3.1 Step (S1'): Voltage Drops

Let's begin filling out the Strang Quartet. For Step (S1'), we first observe the voltage drops in the figure. Since there are a whopping 27 of them, we include only the first six, which are slightly more than we need to cover all variations in the set:

e_1 = x_1
e_2 = x_1 − E_m
e_3 = x_1 − x_2
e_4 = x_2
e_5 = x_2 − E_m
e_6 = x_2 − x_3
…
e_27 = x_10 − E_m


In matrix form, letting b denote the vector of batteries,

e = b − Ax, where

b = −E_m (0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1)^T

and

A = \begin{pmatrix}
-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1
\end{pmatrix}

Although our adjacency matrix A is appreciably larger than our previous examples, we have captured the same phenomena as before.

6.6.1.3.2 Applying (S2): Ohm's Law Augmented with Voltage-Current Law for Capacitors

Now, recalling Ohm's Law and remembering that the current through a capacitor varies proportionately with the time rate of change of the potential across it, we assemble our vector of currents. As before, we list only enough of the 27 currents to fully characterize the set:

y_1 = C_cb e_1′
y_2 = e_2 / R_cb
y_3 = e_3 / R_i
y_4 = C_m e_4′
y_5 = e_5 / R_m
…
y_27 = e_27 / R_m

In matrix terms, this compiles to

y = Ge + Ce′,

where


Conductance matrix:

G = diag(0, 1/R_cb, 1/R_i, 0, 1/R_m, 1/R_i, 0, 1/R_m, 1/R_i, 1/R_i, 0, 1/R_m, 1/R_i, 0, 1/R_m, 1/R_i, 0, 1/R_m, 1/R_i, 0, 1/R_m, 1/R_i, 0, 1/R_m, 1/R_i, 0, 1/R_m)   (6.18)

(a 27-by-27 diagonal matrix; a zero sits on the diagonal wherever the edge is occupied by a capacitor)

and


Capacitance matrix:

C = diag(C_cb, 0, 0, C_m, 0, 0, C_m, 0, 0, 0, C_m, 0, 0, C_m, 0, 0, C_m, 0, 0, C_m, 0, 0, C_m, 0, 0, C_m, 0)   (6.19)


6.6.1.3.3 Step (S3): Applying Kirchhoff's Law

Our next step is to write out the equations for Kirchhoff's Current Law. We see:

i_0 − y_1 − y_2 − y_3 = 0
y_3 − y_4 − y_5 − y_6 = 0
y_6 − y_7 − y_8 − y_9 = 0
y_9 − y_10 − y_19 = 0
y_10 − y_11 − y_12 − y_13 = 0
y_13 − y_14 − y_15 − y_16 = 0
y_16 − y_17 − y_18 = 0
y_19 − y_20 − y_21 − y_22 = 0
y_22 − y_23 − y_24 − y_25 = 0
y_25 − y_26 − y_27 = 0

Since the coefficient matrix we'd form here is equal to A^T, we can say in matrix terms:

A^T y = −f

where the vector f ∈ R^10 is composed of f_1 = i_0 and f_2 = · · · = f_10 = 0.

6.6.1.3.4 Step (S4): Stirring the Ingredients Together

Step (S4) directs us to assemble our previous toils together into a final equation, which we will then endeavor to solve. Using the process (Section 6.1.2.2.4: Step (S4): Assembling) derived in the dynamic Strang module, we arrive at the equation

A^T C A x′ + A^T G A x = A^T G b + f + A^T C b′,   (6.20)

which is the general form for RC circuit potential equations. As we have mentioned, this equation presumes knowledge of the initial value of each of the potentials, x(0) = X.


Observing our circuit (Figure 6.4: 3-branch Dendrite Model), and letting 1/R_foo = G_foo, we calculate the necessary quantities to fill out (6.20)'s pieces (for these calculations, see dendrite.m13):

A^T C A = diag(C_cb, C_m, C_m, 0, C_m, C_m, C_m, C_m, C_m, C_m)

A^T G A = \begin{pmatrix}
G_i + G_cb & -G_i & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-G_i & 2G_i + G_m & -G_i & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -G_i & 2G_i + G_m & -G_i & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -G_i & 3G_i & -G_i & 0 & 0 & -G_i & 0 & 0 \\
0 & 0 & 0 & -G_i & 2G_i + G_m & -G_i & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -G_i & 2G_i + G_m & -G_i & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -G_i & G_i + G_m & 0 & 0 & 0 \\
0 & 0 & 0 & -G_i & 0 & 0 & 0 & 2G_i + G_m & -G_i & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -G_i & 2G_i + G_m & -G_i \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -G_i & G_i + G_m
\end{pmatrix}

A^T G b = E_m (G_cb, G_m, G_m, 0, G_m, G_m, G_m, G_m, G_m, G_m)^T

13http://www.ece.rice.edu/∼rainking/dendrite/matlab/dendrite.m


A^T C b′ = 0,

and an initial (rest) potential of

x(0) = E_m (1, 1, 1, 1, 1, 1, 1, 1, 1, 1)^T.

6.6.1.4 Applying the Backward-Euler Method

Since our system is so large, the Backward-Euler method is the best path to a solution. Looking at the matrix A^T C A, we observe that it is singular and therefore non-invertible. This singularity arises from the node connecting the three branches of the fiber and prevents us from using the simple equation x′ = Bx + g we used in earlier Backward-Euler-ings (Section 6.4). However, we will see that a modest generalization to our previous form yields (6.21):

Dx′ = Ex + g (6.21)

capturing the form of our system and allowing us to solve for x (t). We manipulate (6.21) as follows:

Dx′ = Ex + g

D (x(t) − x(t − dt))/dt = Ex(t) + g

(D − E dt) x(t) = D x(t − dt) + g dt

x(t) = (D − E dt)^{−1} (D x(t − dt) + g dt),

where in our case D = A^T C A, E = −A^T G A, and g = A^T G b + f.

This method is implemented in dendrite.m14 with typical cell dimensions and resistivity properties, yielding the following graph of potentials.
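The time-stepping loop mirrors the earlier Backward-Euler code (a sketch; D, E, g are the assembled quantities above, and x0, dt, and the step count T are assumed given):

x = x0;
M = D - E*dt;               % no need to invert D, which is singular
for j = 1:T
    x = M \ (D*x + g*dt);   % one step of the generalized scheme
end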

14http://it.is.rice.edu/∼rainking/dendrite/matlab/dendrite.m


Figure 6.5: Graph of Dendrite Potentials. (a) Large view of potentials. (b) Zoomed view of potentials showing maxima.


Chapter 7

Complex Analysis 1

7.1 Complex Numbers, Vectors and Matrices^1

7.1.1 Complex Numbers

A complex number is simply a pair of real numbers. In order to stress however that the two arithmetics differ we separate the two real pieces by the symbol +i. More precisely, each complex number z may be uniquely expressed by the combination x + iy, where x and y are real and i denotes sqrt(-1). We call x the real part and y the imaginary part of z. We now summarize the main rules of complex arithmetic.

If z_1 = x_1 + iy_1 and z_2 = x_2 + iy_2 then

Definition 36: Complex Addition
z_1 + z_2 ≡ x_1 + x_2 + i(y_1 + y_2)

Definition 37: Complex Multiplication
z_1 z_2 ≡ (x_1 + iy_1)(x_2 + iy_2) = x_1 x_2 - y_1 y_2 + i(x_1 y_2 + x_2 y_1)

Definition 38: Complex Conjugation
\bar{z}_1 ≡ x_1 - iy_1

Definition 39: Complex Division
z_1/z_2 ≡ (z_1 \bar{z}_2)/(z_2 \bar{z}_2) = (x_1 x_2 + y_1 y_2 + i(x_2 y_1 - x_1 y_2))/(x_2^2 + y_2^2)

Definition 40: Magnitude of a Complex Number
|z_1| ≡ sqrt(z_1 \bar{z}_1) = sqrt(x_1^2 + y_1^2)

7.1.2 Polar Representation

In addition to the Cartesian representation z = x + iy one also has the polar form

z = |z| (cos(θ) + i sin(θ))    (7.1)

where θ = arctan(y/x).

This form is especially convenient with regards to multiplication. More precisely,

z_1 z_2 = |z_1||z_2| (cos(θ_1)cos(θ_2) - sin(θ_1)sin(θ_2) + i(cos(θ_1)sin(θ_2) + sin(θ_1)cos(θ_2)))
        = |z_1||z_2| (cos(θ_1 + θ_2) + i sin(θ_1 + θ_2))    (7.2)

As a result,

z^n = |z|^n (cos(nθ) + i sin(nθ))    (7.3)
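These identities are easy to confirm numerically; in Matlab, abs and angle return |z| and θ, so a quick check with arbitrary sample points reads:

z1 = 1 + 1i;  z2 = 2 - 1i;                                      % arbitrary sample points
abs(z1*z2 - abs(z1)*abs(z2)*exp(1i*(angle(z1) + angle(z2))))    % zero up to roundoff, per (7.2)
z = 1 + 1i;  n = 5;
abs(z^n - abs(z)^n*exp(1i*n*angle(z)))                          % zero up to roundoff, per (7.3)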

1This content is available online at <http://cnx.org/content/m10504/2.5/>.


7.1.3 Complex Vectors and Matrices

A complex vector (matrix) is simply a vector (matrix) of complex numbers. Vector and matrix addition proceed, as in the real case, from elementwise addition. The dot or inner product of two complex vectors requires, however, a little modification. This is evident when we try to use the old notion to define the length of a complex vector. To wit, note that if

z = ( 1 + i, 1 - i )^T

then

z^T z = (1 + i)^2 + (1 - i)^2 = 1 + 2i - 1 + 1 - 2i - 1 = 0

Now length should measure the distance from a point to the origin and should only be zero for the zero vector. The fix, as you have probably guessed, is to sum the squares of the magnitudes of the components of z. This is accomplished by simply conjugating one of the vectors. Namely, we define the length of a complex vector via:

‖z‖ = sqrt( \bar{z}^T z )    (7.4)

In the example above this produces sqrt( |1 + i|^2 + |1 - i|^2 ) = sqrt(4) = 2.

As each real number is the conjugate of itself, this new definition subsumes its real counterpart. The notion of magnitude also gives us a way to define limits and hence will permit us to introduce complex calculus. We say that the sequence of complex numbers, {z_n | n = 1, 2, ...}, converges to the complex number z_0 and write

z_n -> z_0   or   z_0 = lim_{n->∞} z_n

when, presented with any ε > 0 one can produce an integer N for which |z_n - z_0| < ε when n ≥ N. As an example, we note that (i/2)^n -> 0.

7.1.4 Examples

Example 7.1: As an example both of a complex matrix and some of the rules of complex arithmetic, let us examine the following matrix:

F =
[ 1   1   1   1
  1   i  -1  -i
  1  -1   1  -1
  1  -i  -1   i ]    (7.5)

Let us attempt to find F \bar{F}. One option is simply to multiply the two matrices by brute force, but this particular matrix has some remarkable qualities that make the job significantly easier.


Specifically, we can note that every element not on the diagonal of the resultant matrix is equal to 0. Furthermore, each element on the diagonal is 4. Hence, we quickly arrive at the matrix

F \bar{F} =
[ 4 0 0 0
  0 4 0 0
  0 0 4 0
  0 0 0 4 ] = 4I    (7.6)

This final observation, that this matrix multiplied by its conjugate yields a constant times the identity matrix, is indeed remarkable. This particular matrix is an example of a Fourier matrix, and enjoys a number of interesting properties. The property outlined above can be generalized for any F_n, where F_n refers to a Fourier matrix with n rows and columns:

F_n \bar{F}_n = nI    (7.7)
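A quick Matlab check of (7.7), assuming the natural generalization in which F_n has entries ω^{(j-1)(k-1)} with ω = e^{2πi/n} (for n = 4 this reproduces the matrix above, since e^{2πi/4} = i):

n = 4;
w = exp(2i*pi/n);                  % primitive n-th root of unity
F = w.^((0:n-1)'*(0:n-1));         % F(j,k) = w^((j-1)(k-1))
norm(F*conj(F) - n*eye(n))         % essentially zero, confirming (7.7)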

7.2 Complex Functions^2

7.2.1 Complex Functions

A complex function is merely a rule for assigning certain complex numbers to other complex numbers. The simplest (nonconstant) assignment is the identity function f(z) ≡ z. Perhaps the next simplest function assigns to each number its square, i.e., f(z) ≡ z^2. As we decomposed the argument of f, namely z, into its real and imaginary parts, we shall also find it convenient to partition the value of f, z^2 in this case, into its real and imaginary parts. In general, we write

f (x + iy) = u (x, y) + iv (x, y) (7.8)

where u and v are both real-valued functions of two real variables. In the case that f(z) ≡ z^2 we find

u(x, y) = x^2 - y^2   and   v(x, y) = 2xy

With the tools of complex numbers (Section 7.1), we may produce complex polynomials

f(z) = z^m + c_{m-1} z^{m-1} + ... + c_1 z + c_0    (7.9)

We say that such an f is of order m. We shall often find it convenient to represent polynomials as the product of their factors, namely

f(z) = (z - λ_1)^{d_1} (z - λ_2)^{d_2} ... (z - λ_h)^{d_h}    (7.10)

Each λ_j is a root of degree d_j. Here h is the number of distinct roots of f. We call λ_j a simple root when d_j = 1. We can observe the appearance of ratios of polynomials, or so called rational functions. Suppose

q(z) = f(z)/g(z)

2This content is available online at <http://cnx.org/content/m10505/2.7/>.


is rational, that f is of order at most m - 1 while g is of order m with the simple roots λ_1, ..., λ_m. It should come as no surprise that such a q should admit a Partial Fraction Expansion

q(z) = Σ_{j=1}^{m} q_j/(z - λ_j)    (7.11)

One uncovers the q_j by first multiplying each side by z - λ_j and then letting z tend to λ_j. For example, if

1/(z^2 + 1) = q_1/(z + i) + q_2/(z - i)    (7.12)

then multiplying each side by z + i produces

1/(z - i) = q_1 + q_2 (z + i)/(z - i)    (7.13)

Now, in order to isolate q_1 it is clear that we should set z = -i. So doing we find that q_1 = i/2. In order to find q_2 we multiply (7.12) by z - i and then set z = i. So doing we find q_2 = -i/2, and so

1/(z^2 + 1) = (i/2)/(z + i) + (-i/2)/(z - i)    (7.14)

Returning to the general case, we encode the above in the simple formula

q_j = lim_{z->λ_j} (z - λ_j) q(z)    (7.15)

You should be able to use this to confirm that

z/(z^2 + 1) = (1/2)/(z + i) + (1/2)/(z - i)    (7.16)
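This uncovering is precisely what Matlab's residue command automates; a quick check of (7.16):

[r, p, k] = residue([1 0], [1 0 1])   % f(z) = z over g(z) = z^2 + 1
% r lists the residues (both 1/2) and p the poles (i and -i);
% k is empty since the fraction is proper.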

Recall that the transfer function we met in The Laplace Transform (p. 56) module was in fact a matrix of rational functions. Now, the partial fraction expansion of a matrix of rational functions is simply the matrix of partial fraction expansions of each of its elements. This is easier done than said. For example, the transfer function of

B =
[ 0  1
 -1  0 ]

is

(zI - B)^{-1} = 1/(z^2 + 1) [ z  1; -1  z ]
             = 1/(z + i) [ 1/2  i/2; -i/2  1/2 ] + 1/(z - i) [ 1/2  -i/2; i/2  1/2 ]    (7.17)

The first line comes from either Gauss-Jordan by hand or via the symbolic toolbox in Matlab. More importantly, the second line is simply an amalgamation of (7.12) and (7.14). Complex matrices have finally entered the picture. We shall devote all of Chapter 10 to uncovering the remarkable properties enjoyed by the matrices that appear in the partial fraction expansion of (zI - B)^{-1}. Have you noticed that, in our example, the two matrices are each projections, and they sum to I, and that their product is 0? Could this be an accident?

In The Laplace Transform (Section 6.2) module we were confronted with the complex exponential. By analogy to the real exponential we define

e^z ≡ Σ_{n=0}^{∞} z^n/n!    (7.18)


and find that

e^{iθ} = 1 + iθ + (iθ)^2/2 + (iθ)^3/3! + (iθ)^4/4! + ...
       = 1 - θ^2/2 + θ^4/4! - ... + i(θ - θ^3/3! + θ^5/5! - ...)
       = cos(θ) + i sin(θ)    (7.19)

With this observation, the polar form (Section 7.1.2: Polar Representation) is now simply z = |z| e^{iθ}. One may just as easily verify that

cos(θ) = (e^{iθ} + e^{-iθ})/2   and   sin(θ) = (e^{iθ} - e^{-iθ})/(2i)

These suggest the definitions, for complex z, of

cos(z) ≡ (e^{iz} + e^{-iz})/2    (7.20)

and

sin(z) ≡ (e^{iz} - e^{-iz})/(2i)    (7.21)

As in the real case the exponential enjoys the property that

e^{z_1 + z_2} = e^{z_1} e^{z_2}

and in particular

e^{x+iy} = e^x e^{iy} = e^x cos(y) + i e^x sin(y)    (7.22)

Finally, the inverse of the complex exponential is the complex logarithm,

ln(z) ≡ ln(|z|) + iθ    (7.23)

for z = |z| e^{iθ}. One finds that ln(-1 + i) = ln(sqrt(2)) + i 3π/4.
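Matlab's elementary functions accept complex arguments and return these principal values, so the closing example may be checked directly:

log(-1 + 1i)                  % 0.3466 + 2.3562i
log(sqrt(2)) + 1i*3*pi/4      % the same: ln(sqrt(2)) + i*3*pi/4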

7.3 Complex Differentiation^3

7.3.1 Complex Differentiation

The complex f is said to be differentiable at z_0 if

lim_{z->z_0} (f(z) - f(z_0))/(z - z_0)

exists, by which we mean that

(f(z_n) - f(z_0))/(z_n - z_0)

converges to the same value for every sequence {z_n} that converges to z_0. In this case we naturally call the limit (d/dz) f(z_0).

3This content is available online at <http://cnx.org/content/m10276/2.7/>.


To illustrate the concept of 'for every' mentioned above, we utilize the following picture. We assume f is differentiable at the point z_0, which means the difference quotients must agree along every conceivable sequence converging to z_0. We outline three sequences in the picture: real numbers, imaginary numbers, and a spiral pattern of both.

Sequences Approaching A Point In The Complex Plane

Figure 7.1: The green is real, the blue is imaginary, and the red is the spiral.


7.3.2 Examples

Example 7.2: The derivative of z^2 is 2z.

lim_{z->z_0} (z^2 - z_0^2)/(z - z_0) = lim_{z->z_0} (z - z_0)(z + z_0)/(z - z_0) = 2z_0    (7.24)

Example 7.3: The exponential is its own derivative.

lim_{z->z_0} (e^z - e^{z_0})/(z - z_0) = e^{z_0} lim_{z->z_0} (e^{z - z_0} - 1)/(z - z_0) = e^{z_0} lim_{z->z_0} Σ_{n=0}^{∞} (z - z_0)^n/(n+1)! = e^{z_0}    (7.25)

Example 7.4: The real part of z is not a differentiable function of z.

We show that the limit depends on the angle of approach. First, when z_n -> z_0 on a line parallel to the real axis, e.g., z_n = x_0 + 1/n + iy_0, we find

lim_{n->∞} (x_0 + 1/n - x_0)/(x_0 + 1/n + iy_0 - (x_0 + iy_0)) = 1    (7.26)

while if z_n -> z_0 in the imaginary direction, e.g., z_n = x_0 + i(y_0 + 1/n), then

lim_{n->∞} (x_0 - x_0)/(x_0 + i(y_0 + 1/n) - (x_0 + iy_0)) = 0    (7.27)
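One can watch this direction dependence numerically; a small Matlab sketch with the two approach directions used above:

z0 = 1 + 1i;  f = @(z) real(z);
h = 10.^-(1:6)';                         % shrinking real increments
(f(z0 + h) - f(z0)) ./ h                 % along the real axis: all ones
(f(z0 + 1i*h) - f(z0)) ./ (1i*h)         % along the imaginary axis: all zeros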

7.3.3 Conclusion

This last example suggests that when f is differentiable a simple relationship must bind its partial derivatives in x and y.

Proposition 7.1: Partial Derivative Relationship
If f is differentiable at z_0 then (d/dz) f(z_0) = (∂/∂x) f(z_0) = -i (∂/∂y) f(z_0).

Proof: With z = x + iy_0,

(d/dz) f(z_0) = lim_{z->z_0} (f(z) - f(z_0))/(z - z_0) = lim_{x->x_0} (f(x + iy_0) - f(x_0 + iy_0))/(x - x_0) = (∂/∂x) f(z_0)    (7.28)

Alternatively, when z = x0 + iy then

(d/dz) f(z_0) = lim_{z->z_0} (f(z) - f(z_0))/(z - z_0) = lim_{y->y_0} (f(x_0 + iy) - f(x_0 + iy_0))/(i(y - y_0)) = -i (∂/∂y) f(z_0)    (7.29)


7.3.4 Cauchy-Riemann Equations

In terms of the real and imaginary parts of f this result brings the Cauchy-Riemann equations.

(∂/∂x) u = (∂/∂y) v    (7.30)

and

(∂/∂x) v = -(∂/∂y) u    (7.31)

Regarding the converse proposition we note that when f has continuous partial derivatives in a region obeying the Cauchy-Riemann equations then f is in fact differentiable in the region.
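For f(z) = z^2, with u = x^2 - y^2 and v = 2xy as computed earlier, the Symbolic Toolbox (assuming it is available) verifies both equations at once:

syms x y real
u = x^2 - y^2;  v = 2*x*y;
simplify(diff(u,x) - diff(v,y))     % 0, the first Cauchy-Riemann equation
simplify(diff(v,x) + diff(u,y))     % 0, the second Cauchy-Riemann equation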

We remark that with no more energy than that expended on their real cousins one may uncover the rules for differentiating complex sums, products, quotients, and compositions.

As one important application of the derivative let us attempt to expand in partial fractions a rational function whose denominator has a root with degree larger than one. As a warm-up let us try to find q_{1,1} and q_{1,2} in the expression

(z + 2)/(z + 1)^2 = q_{1,1}/(z + 1) + q_{1,2}/(z + 1)^2

Arguing as above, it seems wise to multiply through by (z + 1)^2 and so arrive at

z + 2 = q_{1,1}(z + 1) + q_{1,2}    (7.32)

On setting z = -1 this gives q_{1,2} = 1. With q_{1,2} computed, (7.32) takes the simple form z + 1 = q_{1,1}(z + 1) and so q_{1,1} = 1 as well. Hence,

(z + 2)/(z + 1)^2 = 1/(z + 1) + 1/(z + 1)^2

This latter step grows more cumbersome for roots of higher degrees. Let us consider

(z + 2)^2/(z + 1)^3 = q_{1,1}/(z + 1) + q_{1,2}/(z + 1)^2 + q_{1,3}/(z + 1)^3

The first step is still correct: multiply through by the factor at its highest degree, here 3. This leaves us with

(z + 2)^2 = q_{1,1}(z + 1)^2 + q_{1,2}(z + 1) + q_{1,3}    (7.33)

Setting z = -1 again produces the last coefficient, here q_{1,3} = 1. We are left however with one equation in two unknowns. Well, not really one equation, for (7.33) is to hold for all z. We exploit this by taking two derivatives, with respect to z, of (7.33). This produces

2(z + 2) = 2q_{1,1}(z + 1) + q_{1,2}

and 2 = 2q_{1,1}. The latter of course needs no comment: q_{1,1} = 1. We derive q_{1,2} = 2 from the former by setting z = -1. This example will permit us to derive a simple expression for the partial fraction expansion of the general proper rational function q = f/g where g has h distinct roots λ_1, ..., λ_h of respective degrees d_1, ..., d_h. We write

q(z) = Σ_{j=1}^{h} Σ_{k=1}^{d_j} q_{j,k}/(z - λ_j)^k    (7.34)

and note, as above, that q_{j,k} is the coefficient of (z - λ_j)^{d_j - k} in the rational function

r_j(z) ≡ q(z) (z - λ_j)^{d_j}


Hence, q_{j,k} may be computed by setting z = λ_j in the ratio of the (d_j - k)th derivative of r_j to (d_j - k)!. That is,

q_{j,k} = lim_{z->λ_j} (1/(d_j - k)!) (d^{d_j - k}/dz^{d_j - k}) [ (z - λ_j)^{d_j} q(z) ]    (7.35)

As a second example, let us take

B =
[ 1 0 0
  1 3 0
  0 1 1 ]    (7.36)

and compute the Φ_{j,k} matrices in the expansion

(zI - B)^{-1} =
[ 1/(z-1)             0                0
  1/((z-1)(z-3))      1/(z-3)          0
  1/((z-1)^2 (z-3))   1/((z-1)(z-3))   1/(z-1) ]
= (1/(z-1)) Φ_{1,1} + (1/(z-1)^2) Φ_{1,2} + (1/(z-3)) Φ_{2,1}

The only challenging term is the (3, 1) element. We write

1/((z - 1)^2 (z - 3)) = q_{1,1}/(z - 1) + q_{1,2}/(z - 1)^2 + q_{2,1}/(z - 3)

It follows from (7.35) that

q_{1,1} = (d/dz)( 1/(z - 3) ) |_{z=1} = -1/(z - 3)^2 |_{z=1} = -1/4    (7.37)

and

q_{1,2} = 1/(z - 3) |_{z=1} = -1/2    (7.38)

and

q_{2,1} = 1/(z - 1)^2 |_{z=3} = 1/4    (7.39)

It now follows that

(zI - B)^{-1} = (1/(z - 1)) [ 1     0    0
                              -1/2  0    0
                              -1/4 -1/2  1 ]
             + (1/(z - 1)^2) [ 0    0 0
                               0    0 0
                               -1/2 0 0 ]
             + (1/(z - 3)) [ 0    0   0
                             1/2  1   0
                             1/4  1/2 0 ]    (7.40)

In closing, let us remark that the method of partial fraction expansions has been implemented in Matlab. In fact, (7.37), (7.38), and (7.39) all follow from the single command: [r,p,k] = residue([0 0 0 1],[1 -5 7 -3]). The first input argument is Matlab-speak for the polynomial f(z) = 1 while the second argument corresponds to the denominator

g(z) = (z - 1)^2 (z - 3) = z^3 - 5z^2 + 7z - 3.
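Alternatively, (7.35) itself may be evaluated with the Symbolic Toolbox (assuming it is available); a sketch for the (3,1) element above:

syms z
q = 1/((z - 1)^2*(z - 3));
q11 = limit(diff((z - 1)^2*q, z), z, 1)   % -1/4, matching (7.37)
q12 = limit((z - 1)^2*q, z, 1)            % -1/2, matching (7.38)
q21 = limit((z - 3)*q, z, 3)              %  1/4, matching (7.39)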


7.4 Exercises: Complex Numbers, Vectors, and Functions^4

7.4.1 Exercises

Exercise 7.1 (Solution on p. 83.)

Express |e^z| in terms of x and/or y.

Exercise 7.2 (Solution on p. 83.)

Confirm that e^{ln z} = z and ln(e^z) = z.

Exercise 7.3 (Solution on p. 83.)

Find the real and imaginary parts of cos(z) and sin(z). Express your answers in terms of regular and hyperbolic trigonometric functions.

Exercise 7.4 (Solution on p. 83.)

Show that cos^2(z) + sin^2(z) = 1.

Exercise 7.5 (Solution on p. 83.)

With z^w ≡ e^{w ln z} for complex z and w, compute sqrt(i).

Exercise 7.6 (Solution on p. 83.)

Verify that cos(z) and sin(z) satisfy the Cauchy-Riemann equations and use the proposition to evaluate their derivatives.

Exercise 7.7 (Solution on p. 83.)

Submit a Matlab diary documenting your use of residue in the partial fraction expansion of the transfer function of

B =
[ 2  0 0
 -1  4 0
  0 -1 2 ].

4This content is available online at <http://cnx.org/content/m10506/2.5/>.


Solutions to Exercises in Chapter 7

Solution to Exercise 7.1 (p. 82): Pending completion of assignment.
Solution to Exercise 7.2 (p. 82): Pending completion of assignment.
Solution to Exercise 7.3 (p. 82): Pending completion of assignment.
Solution to Exercise 7.4 (p. 82): Pending completion of assignment.
Solution to Exercise 7.5 (p. 82): Pending completion of assignment.
Solution to Exercise 7.6 (p. 82): Pending completion of assignment.
Solution to Exercise 7.7 (p. 82): Pending completion of assignment.


Chapter 8

Complex Analysis 2

8.1 Cauchy's Theorem^1

8.1.1 Introduction

Our main goal is a better understanding of the partial fraction expansion of a given transfer function. With respect to the example that closed the discussion of complex differentiation, (7.36) through (7.40), we found

(zI - B)^{-1} = (1/(z - λ_1)) P_1 + (1/(z - λ_1)^2) D_1 + (1/(z - λ_2)) P_2

where the Pj and Dj enjoy the amazing properties

1. B P_1 = P_1 B = λ_1 P_1 + D_1  and  B P_2 = P_2 B = λ_2 P_2    (8.1)

2. P_1 + P_2 = I,  P_1^2 = P_1,  P_2^2 = P_2,  and  D_1^2 = 0    (8.2)

3. P_1 D_1 = D_1 P_1 = D_1  and  P_2 D_1 = D_1 P_2 = 0    (8.3)

In order to show that this always happens, i.e., that it is not a quirk produced by the particular B in (7.36), we require a few additional tools from the theory of complex variables. In particular, we need the fact that partial fraction expansions may be carried out through complex integration.

1This content is available online at <http://cnx.org/content/m10264/2.8/>.


8.1.2 Integration of Complex Functions Over Complex Curves

We shall be integrating complex functions over complex curves. Such a curve is parameterized by one complex valued or, equivalently, two real valued, function(s) of a real parameter (typically denoted by t). More precisely,

C ≡ { z(t) = x(t) + iy(t) | a ≤ t ≤ b }

For example, if x(t) = y(t) = t while a = 0 and b = 1, then C is the line segment joining 0 + i0 to 1 + i. We now define

∫_C f(z) dz ≡ ∫_a^b f(z(t)) z'(t) dt

For example, if C = { t + it | 0 ≤ t ≤ 1 } as above and f(z) = z then

∫_C z dz = ∫_0^1 (t + it)(1 + i) dt = ∫_0^1 (t - t) + i2t dt = i

while if C is the unit circle { e^{it} | 0 ≤ t ≤ 2π } then

∫_C z dz = ∫_0^{2π} e^{it} i e^{it} dt = i ∫_0^{2π} e^{i2t} dt = i ∫_0^{2π} cos(2t) + i sin(2t) dt = 0

Remaining with the unit circle but now integrating f(z) = 1/z we find

∫_C z^{-1} dz = ∫_0^{2π} e^{-it} i e^{it} dt = 2πi

We generalize this calculation to arbitrary (integer) powers over arbitrary circles. More precisely, for integer m and fixed complex a we integrate (z - a)^m over

C(a, r) ≡ { a + re^{it} | 0 ≤ t ≤ 2π },

the circle of radius r centered at a. We find

∫_{C(a,r)} (z - a)^m dz = ∫_0^{2π} (a + re^{it} - a)^m r i e^{it} dt = i r^{m+1} ∫_0^{2π} e^{i(m+1)t} dt    (8.4)

∫_{C(a,r)} (z - a)^m dz = i r^{m+1} ∫_0^{2π} cos((m+1)t) + i sin((m+1)t) dt = { 2πi if m = -1;  0 otherwise }
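This calculation is easily corroborated numerically. A sketch approximating the contour integral by trapezoidal quadrature over the parameterization above (the center, radius, and number of quadrature points are arbitrary choices):

a = 0.5 + 0.5i;  r = 2;
t  = linspace(0, 2*pi, 2000);
z  = a + r*exp(1i*t);                    % the circle C(a,r)
dz = 1i*r*exp(1i*t);                     % z'(t)
for m = [-3 -2 -1 0 1 2]
    I = trapz(t, (z - a).^m .* dz);      % 2*pi*1i when m = -1, else 0
    fprintf('m = %2d: %8.4f + %8.4fi\n', m, real(I), imag(I))
end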

When integrating more general functions it is often convenient to express the integral in terms of its real and imaginary parts. More precisely,

∫_C f(z) dz = ∫_C (u(x,y) + iv(x,y)) dx + i ∫_C (u(x,y) + iv(x,y)) dy
            = ∫_C u(x,y) dx - ∫_C v(x,y) dy + i ∫_C v(x,y) dx + i ∫_C u(x,y) dy
            = ∫_a^b ( u(x(t),y(t)) x'(t) - v(x(t),y(t)) y'(t) ) dt + i ∫_a^b ( u(x(t),y(t)) y'(t) + v(x(t),y(t)) x'(t) ) dt


The second line should invoke memories of:

Theorem 8.1: Green's Theorem
If C is a closed curve and M and N are continuously differentiable real-valued functions on C_in, the region enclosed by C, then

∫_C M dx + ∫_C N dy = ∫∫_{C_in} ( (∂/∂x) N - (∂/∂y) M ) dx dy

Applying this to the situation above, we find, so long as C is closed, that

∫_C f(z) dz = -∫∫_{C_in} ( (∂/∂x) v + (∂/∂y) u ) dx dy + i ∫∫_{C_in} ( (∂/∂x) u - (∂/∂y) v ) dx dy

At first glance it appears that Green's Theorem only serves to muddy the waters. Recalling the Cauchy-Riemann equations (Section 7.3.4) however we find that each of these double integrals is in fact identically zero! In brief, we have proven:

Theorem 8.2: Cauchy's Theorem
If f is differentiable on and in the closed curve C then

∫_C f(z) dz = 0.

Strictly speaking, in order to invoke Green's Theorem we require not only that f be differentiable but that its derivative in fact be continuous. This however is simply a limitation of our simple mode of proof; Cauchy's Theorem is true as stated.

This theorem, together with (8.4), permits us to integrate every proper rational function. More precisely, if q = f/g where f is a polynomial of degree at most m - 1 and g is an mth degree polynomial with h distinct zeros at {λ_j | j = 1, ..., h} with respective multiplicities {m_j | j = 1, ..., h}, we found that

q(z) = Σ_{j=1}^{h} Σ_{k=1}^{m_j} q_{j,k}/(z - λ_j)^k    (8.5)

Observe now that if we choose r_j so small that λ_j is the only zero of g encircled by C_j ≡ C(λ_j, r_j) then by Cauchy's Theorem

∫_{C_j} q(z) dz = Σ_{k=1}^{m_j} q_{j,k} ∫_{C_j} 1/(z - λ_j)^k dz

In (8.4) we found that each, save the first, of the integrals under the sum is in fact zero. Hence,

∫_{C_j} q(z) dz = 2πi q_{j,1}    (8.6)

With q_{j,1} in hand, say from (7.35) or residue, one may view (8.6) as a means for computing the indicated integral. The opposite reading, i.e., that the integral is a convenient means of expressing q_{j,1}, will prove just as useful. With that in mind, we note that the remaining residues may be computed as integrals of the product of q and the appropriate factor. More precisely,

∫_{C_j} q(z) (z - λ_j)^{k-1} dz = 2πi q_{j,k}    (8.7)

One may be led to believe that the precision of this result is due to the very special choice of curve and function. We shall see ...


8.2 Cauchy's Integral Formula^2

8.2.1 The Residue Theorem

After (8.6) and (8.7), perhaps the most useful consequence of Cauchy's Theorem (Theorem 8.2) is the

Lemma 8.1: The Curve Replacement Lemma
Suppose that C_2 is a closed curve that lies inside the region encircled by the closed curve C_1. If f is differentiable in the annular region outside C_2 and inside C_1 then

∫_{C_1} f(z) dz = ∫_{C_2} f(z) dz.

Proof: With reference to Figure 8.1 (Curve Replacement Figure) we introduce two vertical segments and define the closed curves C_3 = abcda (where the bc arc is clockwise and the da arc is counter-clockwise) and C_4 = adcba (where the ad arc is counter-clockwise and the cb arc is clockwise). By merely following the arrows we learn that

∫_{C_1} f(z) dz = ∫_{C_2} f(z) dz + ∫_{C_3} f(z) dz + ∫_{C_4} f(z) dz

As Cauchy's Theorem (Theorem 8.2) implies that the integrals over C_3 and C_4 each vanish, we have our result.

2This content is available online at <http://cnx.org/content/m10246/2.8/>.


Curve Replacement Figure

Figure 8.1: The Curve Replacement Lemma

This Lemma says that in order to integrate a function it suffices to integrate it over regions where it is singular, i.e., nondifferentiable.

Let us apply this reasoning to the integral

∫_C z/((z - λ_1)(z - λ_2)) dz

where C encircles both λ_1 and λ_2 as depicted in Figure 8.2. We find that

∫_C z/((z - λ_1)(z - λ_2)) dz = ∫_{C_1} z/((z - λ_1)(z - λ_2)) dz + ∫_{C_2} z/((z - λ_1)(z - λ_2)) dz

Developing the integrand in partial fractions we find

∫_{C_1} z/((z - λ_1)(z - λ_2)) dz = (λ_1/(λ_1 - λ_2)) ∫_{C_1} 1/(z - λ_1) dz + (λ_2/(λ_2 - λ_1)) ∫_{C_1} 1/(z - λ_2) dz = 2πi λ_1/(λ_1 - λ_2)

Similarly,

∫_{C_2} z/((z - λ_1)(z - λ_2)) dz = 2πi λ_2/(λ_2 - λ_1)


Putting things back together we find

∫_C z/((z - λ_1)(z - λ_2)) dz = 2πi ( λ_1/(λ_1 - λ_2) + λ_2/(λ_2 - λ_1) ) = 2πi    (8.8)

Figure 8.2: Concentrating on the poles.

We may view (8.8) as a special instance of integrating a rational function around a curve that encircles all of the zeros of its denominator. In particular, recalling (8.5) and (8.6), we find

∫_C q(z) dz = Σ_{j=1}^{h} Σ_{k=1}^{m_j} ∫_{C_j} q_{j,k}/(z - λ_j)^k dz = 2πi Σ_{j=1}^{h} q_{j,1}    (8.9)

To take a slightly more complicated example let us integrate f(z)/(z - a) over some closed curve C inside of which f is differentiable and a resides. Our Curve Replacement Lemma now permits us to claim that

∫_C f(z)/(z - a) dz = ∫_{C(a,r)} f(z)/(z - a) dz

It appears that one can go no further without specifying f. The alert reader however recognizes that the integral over C(a, r) is independent of r and so proceeds to let r -> 0, in which case z -> a and f(z) -> f(a). Computing the integral of 1/(z - a) along the way we are led to the hope that

∫_C f(z)/(z - a) dz = 2πi f(a)

In support of this conclusion we note that

∫_{C(a,r)} f(z)/(z - a) dz = ∫_{C(a,r)} ( f(z)/(z - a) + f(a)/(z - a) - f(a)/(z - a) ) dz = f(a) ∫_{C(a,r)} 1/(z - a) dz + ∫_{C(a,r)} (f(z) - f(a))/(z - a) dz

Now the first term is 2πi f(a) regardless of r while, as r -> 0, the integrand of the second term approaches (d/da) f(a) and the region of integration approaches the point a. Regarding this second term, as the integrand remains bounded while the perimeter of C(a, r) approaches zero, the value of the integral must itself be zero. This result is typically known as

Formula 8.1: Cauchy's Integral Formula
If f is differentiable on and in the closed curve C then

f(a) = (1/(2πi)) ∫_C f(z)/(z - a) dz    (8.10)


for each a lying inside C.

The consequences of such a formula run far and deep. We shall delve into only one or two. First, we note that, as a does not lie on C, the right hand side is a perfectly smooth function of a. Hence, differentiating each side, we find

(d/da) f(a) = (1/(2πi)) ∫_C f(z)/(z - a)^2 dz    (8.11)

for each a lying inside C. Applying this reasoning n times we arrive at a formula for the n-th derivative of f at a,

(d^n/da^n) f(a) = (n!/(2πi)) ∫_C f(z)/(z - a)^{1+n} dz    (8.12)

for each a lying inside C. The upshot is that once f is shown to be differentiable it must in fact be infinitely differentiable. As a simple extension let us consider

(1/(2πi)) ∫_C f(z)/((z - λ_1)(z - λ_2)^2) dz

where f is still assumed differentiable on and in C and C encircles both λ_1 and λ_2. By the curve replacement lemma this integral is the sum

(1/(2πi)) ∫_{C_1} f(z)/((z - λ_1)(z - λ_2)^2) dz + (1/(2πi)) ∫_{C_2} f(z)/((z - λ_1)(z - λ_2)^2) dz

where λ_j now lies in only C_j. As f(z)/(z - λ_2)^2 is well behaved in C_1 we may use (8.10) to conclude that

(1/(2πi)) ∫_{C_1} f(z)/((z - λ_1)(z - λ_2)^2) dz = f(λ_1)/(λ_1 - λ_2)^2

Similarly, as f(z)/(z - λ_1) is well behaved in C_2 we may use (8.11) to conclude that

(1/(2πi)) ∫_{C_2} f(z)/((z - λ_1)(z - λ_2)^2) dz = (d/da)( f(a)/(a - λ_1) ) |_{a=λ_2}

These calculations can be read as a concrete instance of

Theorem 8.3: The Residue Theorem
If g is a polynomial with roots {λ_j | j = 1, ..., h} of degrees {d_j | j = 1, ..., h}, C is a closed curve encircling each of the λ_j, and f is differentiable on and in C, then

∫_C f(z)/g(z) dz = 2πi Σ_{j=1}^{h} res(λ_j)

where

res(λ_j) = lim_{z->λ_j} (1/(d_j - 1)!) (d^{d_j - 1}/dz^{d_j - 1}) ( (z - λ_j)^{d_j} f(z)/g(z) )

is called the residue of f/g at λ_j.

One of the most important instances of this theorem is the formula for the Inverse Laplace Transform.


8.3 The Inverse Laplace Transform: Complex Integration^3

8.3.1 The Inverse Laplace Transform

If q is a rational function with poles {λ_j | j = 1, ..., h} then the inverse Laplace transform of q is

(L^{-1} q)(t) ≡ (1/(2πi)) ∫_C q(z) e^{zt} dz    (8.13)

where C is a curve that encloses each of the poles of q. As a result

(L^{-1} q)(t) = Σ_{j=1}^{h} res(λ_j)    (8.14)

Let us put this lovely formula to the test. We take our examples from the discussion of the Laplace Transform (Section 6.2) and the inverse Laplace Transform (Section 6.3). Let us first compute the inverse Laplace Transform of

q(z) = 1/(z + 1)^2

According to (8.14) it is simply the residue of q(z) e^{zt} at z = -1, i.e.,

res(-1) = lim_{z->-1} (d/dz)( e^{zt} ) = t e^{-t}

This closes the circle on the example begun in the discussion of the Laplace Transform (Section 6.2) and continued in exercise one for chapter 6.
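For those with the Symbolic Toolbox (assuming it is available), ilaplace corroborates the residue computation:

syms s
ilaplace(1/(s + 1)^2)       % returns t*exp(-t), as computed above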

For our next example we recall

L(x_1)(s) = 0.19 (s^2 + 1.5s + 0.27) / ( (s + 1/6)^4 (s^3 + 1.655s^2 + 0.4978s + 0.0039) )

from the Inverse Laplace Transform (Section 6.3). Using numden, sym2poly, and residue (see fib4.m for details) returns

r1 = ( 0.0029, 262.8394, -474.1929, -1.0857, -9.0930, -0.3326, 211.3507 )^T

and

p1 = ( -1.3565, -0.2885, -0.1667, -0.1667, -0.1667, -0.1667, -0.0100 )^T

3This content is available online at <http://cnx.org/content/m10523/2.3/>.


You will be asked in the exercises to show that this indeed jibes with the

x_1(t) = 211.35 e^{-t/100} - ( (0.0554 t^3 + 4.5464 t^2 + 1.085 t + 474.19) e^{-t/6} + e^{-329t/400} ( 262.842 cosh(sqrt(73) t/16) + 262.836 sinh(sqrt(73) t/16) ) )

achieved in the Laplace Transform (Section 6.2) via ilaplace.

8.4 Exercises: Complex Integration^4

1. Let us confirm the representation (8.7) in the matrix case. More precisely, if Φ(z) ≡ (zI - B)^{-1} is the transfer function associated with B, then (8.7) states that

Φ(z) = Σ_{j=1}^{h} Σ_{k=1}^{d_j} Φ_{j,k}/(z - λ_j)^k

where

Φ_{j,k} = (1/(2πi)) ∫_{C_j} Φ(z) (z - λ_j)^{k-1} dz    (8.15)

Compute the Φ_{j,k} per (8.15) for the B in (7.36) from the discussion of Complex Differentiation. Confirm that they agree with those appearing in (7.40).

2. Use (8.14) to compute the inverse Laplace transform of 1/(s^2 + 2s + 2).
3. Use the result of the previous exercise to solve, via the Laplace transform, the differential equation

(d/dt) x(t) + x(t) = e^{-t} sin(t),   x(0) = 0

Hint: Take the Laplace transform of each side.
4. Explain how one gets from r1 and p1 to x_1(t).
5. Compute, as in fib4.m, the residues of L(x_2)(s) and L(x_3)(s) and confirm that they give rise to the x_2(t) and x_3(t) you derived in the discussion of Chapter 1 (Section 2.1).

4This content is available online at <http://cnx.org/content/m10524/2.4/>.


Chapter 9

The Eigenvalue Problem

9.1 Introduction^1

9.1.1 Introduction

Harking back to our previous discussion of The Laplace Transform (Section 6.2) we labeled the complex number λ an eigenvalue of B if λI - B was not invertible. In order to find such λ one has only to find those s for which (sI - B)^{-1} is not defined. To take a concrete example we note that if

B =
[ 1 1 0
  0 1 0
  0 0 2 ]    (9.1)

then

(sI - B)^{-1} = 1/((s - 1)^2 (s - 2)) [ (s-1)(s-2)   s-2           0
                                        0            (s-1)(s-2)    0
                                        0            0             (s-1)^2 ]    (9.2)

and so λ_1 = 1 and λ_2 = 2 are the two eigenvalues of B. Now, to say that λ_j I - B is not invertible is to say that its columns are linearly dependent, or, equivalently, that the null space N(λ_j I - B) contains more than just the zero vector. We call N(λ_j I - B) the jth eigenspace and call each of its nonzero members a jth eigenvector. The dimension of N(λ_j I - B) is referred to as the geometric multiplicity of λ_j. With respect to B above, we compute N(λ_1 I - B) by solving (I - B) x = 0, i.e.,

[ 0 -1  0 ] [ x_1 ]   [ 0 ]
[ 0  0  0 ] [ x_2 ] = [ 0 ]
[ 0  0 -1 ] [ x_3 ]   [ 0 ]

Clearly

N(λ_1 I - B) = { c (1, 0, 0)^T | c ∈ R }

Arguing along the same lines we also find

N(λ_2 I - B) = { c (0, 0, 1)^T | c ∈ R }
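Matlab's eig bears this out. In a sketch like the following the computed eigenvector matrix may differ in sign or scaling, but it contains only two independent directions:

B = [1 1 0; 0 1 0; 0 0 2];
[V, D] = eig(B)     % eigenvalues 1, 1, 2 on the diagonal of D
rank(V)             % 2: the two columns for lambda = 1 coincide (up to sign)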

1This content is available online at <http://cnx.org/content/m10405/2.4/>.


That B is 3-by-3 but possesses only 2 linearly independent eigenvectors leads us to speak of B as defective. The cause of its defect is most likely the fact that λ_1 is a double pole of (sI - B)^{-1}. In order to flesh out that remark and uncover the missing eigenvector we must take a much closer look at the transfer function

R(s) ≡ (sI - B)^{-1}

In the mathematical literature this quantity is typically referred to as the Resolvent of B.

9.2 The Resolvent^2

9.2.1 The Transfer Function

One means by which to come to grips with R(s) is to treat it as the matrix analog of the scalar function

1/(s - b)    (9.3)

This function is a scaled version of the even simpler function 1/(1 - z). This latter function satisfies the identity (just multiply across by 1 - z to check it)

1/(1 - z) = 1 + z + z^2 + ... + z^{n-1} + z^n/(1 - z)    (9.4)

for each positive integer n. Furthermore, if |z| < 1 then z^n -> 0 as n -> ∞ and so (9.4) becomes, in the limit,

1/(1 - z) = Σ_{n=0}^{∞} z^n,

the familiar geometric series. Returning to (9.3) we write

1/(s - b) = (1/s)/(1 - b/s) = 1/s + b/s^2 + ... + b^{n-1}/s^n + (b^n/s^n) (1/(s - b))

and hence, so long as |s| > |b|, we find

1/(s - b) = (1/s) Σ_{n=0}^{∞} (b/s)^n

This same line of reasoning may be applied in the matrix case. That is,

(sI - B)^{-1} = s^{-1} (I - B/s)^{-1} = (1/s) I + B/s^2 + ... + B^{n-1}/s^n + (B^n/s^n)(sI - B)^{-1}    (9.5)

and hence, so long as |s| > ‖B‖, where ‖B‖ is the magnitude of the largest element of B, we find

(sI - B)^{-1} = s^{-1} Σ_{n=0}^{∞} (B/s)^n    (9.6)

Although (9.6) is indeed a formula for the transfer function you may, regarding computation, not find it any more attractive than the Gauss-Jordan method. We view (9.6) however as an analytical rather than

2This content is available online at <http://cnx.org/content/m10490/2.3/>.


computational tool. More precisely, it facilitates the computation of integrals of R(s). For example, if C_ρ is the circle of radius ρ centered at the origin and ρ > ‖B‖, then

∫_{C_ρ} (sI - B)^{-1} ds = Σ_{n=0}^{∞} B^n ∫_{C_ρ} s^{-1-n} ds = 2πi I    (9.7)

This result is essential to our study of the eigenvalue problem, as are the two resolvent identities. Regarding the first, we deduce from the simple observation

(s_2 I - B)^{-1} - (s_1 I - B)^{-1} = (s_2 I - B)^{-1} (s_1 I - B - s_2 I + B) (s_1 I - B)^{-1}

that

R(s_2) - R(s_1) = (s_1 - s_2) R(s_2) R(s_1)    (9.8)

The second identity is simply a rewriting of

(sI - B)(sI - B)^{-1} = (sI - B)^{-1} (sI - B) = I,

namely,

B R(s) = R(s) B = s R(s) - I    (9.9)

9.3 The Partial Fraction Expansion of the Resolvent^3

9.3.1 Partial Fraction Expansion of the Transfer Function

The Gauss-Jordan method informs us that R will be a matrix of rational functions with a common denominator. In keeping with the notation of the previous chapters, we assume the denominator to have the h distinct roots {λ_j | j = 1, ..., h} with associated multiplicities {m_j | j = 1, ..., h}.

Now, assembling the partial fraction expansions of each element of R we arrive at

R(s) = Σ_{j=1}^{h} Σ_{k=1}^{m_j} R_{j,k}/(s - λ_j)^k    (9.10)

where, recalling (8.7), the matrix R_{j,k} equals

R_{j,k} = (1/(2πi)) ∫_{C_j} R(z) (z - λ_j)^{k-1} dz    (9.11)

Example 9.1: Concrete Example
As we look at this example with respect to (9.1) and (9.2) from the eigenvalue problem discussion, we find

R_{1,1} = [ 1 0 0; 0 1 0; 0 0 0 ],   R_{1,2} = [ 0 1 0; 0 0 0; 0 0 0 ],   and   R_{2,1} = [ 0 0 0; 0 0 0; 0 0 1 ]

One notes immediately that these matrices enjoy some amazing properties. For example

R_{1,1}^2 = R_{1,1},   R_{2,1}^2 = R_{2,1},   R_{1,1} R_{2,1} = 0,   and   R_{1,2}^2 = 0    (9.12)

3This content is available online at <http://cnx.org/content/m10491/2.5/>.


Below we will now show that this is no accident. As a consequence of (9.11) and the first resolvent identity, we shall find that these results are true in general.

Proposition 9.1: R_{j,1}^2 = R_{j,1}, as seen above in (9.12).

Proof: Recall that the C_j appearing in (9.11) is any circle about λ_j that neither touches nor encircles any other root. Suppose that C_j and C_j' are two such circles and C_j' encloses C_j. Now

R_{j,1} = (1/(2πi)) ∫_{C_j} R(z) dz = (1/(2πi)) ∫_{C_j'} R(z) dz

and so

R_{j,1}^2 = (1/(2πi))^2 ∫_{C_j} R(z) dz ∫_{C_j'} R(w) dw
          = (1/(2πi))^2 ∫_{C_j} ∫_{C_j'} R(z) R(w) dw dz
          = (1/(2πi))^2 ∫_{C_j} ∫_{C_j'} (R(z) - R(w))/(w - z) dw dz
          = (1/(2πi))^2 ( ∫_{C_j} R(z) ∫_{C_j'} 1/(w - z) dw dz - ∫_{C_j'} R(w) ∫_{C_j} 1/(w - z) dz dw )
          = (1/(2πi)) ∫_{C_j} R(z) dz = R_{j,1}

We used the first resolvent identity, (9.8), in moving from the second to the third line. In moving from the fourth to the fifth we used only

∫_{C_j'} 1/(w - z) dw = 2πi    (9.13)

and

∫_{C_j} 1/(w - z) dz = 0

The latter integrates to zero because C_j does not encircle w. From the definition of orthogonal projections (Definition: "orthogonal projection", p. 49), which states that matrices that equal their squares are projections, we adopt the abbreviation

P_j ≡ R_{j,1}

With respect to the product P_j P_k, for j ≠ k, the calculation runs along the same lines. The difference comes in (9.13) where, as C_j lies completely outside of C_k, both integrals are zero. Hence,

Proposition 9.2: If j ≠ k then P_j P_k = 0.


Along the same lines we define

D_j ≡ R_{j,2}

and prove

Proposition 9.3: If 1 ≤ k ≤ m_j - 1 then D_j^k = R_{j,k+1}, and D_j^{m_j} = 0.

Proof: For k and l greater than or equal to one,

R_{j,k+1} R_{j,l+1} = (1/(2πi))^2 ∫_{C_j} R(z)(z - λ_j)^k dz ∫_{C_j'} R(w)(w - λ_j)^l dw
 = (1/(2πi))^2 ∫_{C_j} ∫_{C_j'} R(z) R(w) (z - λ_j)^k (w - λ_j)^l dw dz
 = (1/(2πi))^2 ∫_{C_j} ∫_{C_j'} ( (R(z) - R(w))/(w - z) ) (z - λ_j)^k (w - λ_j)^l dw dz
 = (1/(2πi))^2 ∫_{C_j} R(z)(z - λ_j)^k ∫_{C_j'} (w - λ_j)^l/(w - z) dw dz - (1/(2πi))^2 ∫_{C_j'} R(w)(w - λ_j)^l ∫_{C_j} (z - λ_j)^k/(w - z) dz dw
 = (1/(2πi)) ∫_{C_j} R(z)(z - λ_j)^{k+l} dz = R_{j,k+l+1}

because

∫_{C_j'} (w - λ_j)^l/(w - z) dw = 2πi (z - λ_j)^l    (9.14)

and

∫_{C_j} (z - λ_j)^k/(w - z) dz = 0

With k = l = 1 we have shown R_{j,2}^2 = R_{j,3}, i.e., D_j^2 = R_{j,3}. Similarly, with k = 1 and l = 2 we find R_{j,2} R_{j,3} = R_{j,4}, i.e., D_j^3 = R_{j,4}. Continuing in this fashion we find R_{j,k+1} R_{j,2} = R_{j,k+2}, or D_j^{k+1} = R_{j,k+2}. Finally, at k = m_j - 1 this becomes

D_j^{m_j} = R_{j,m_j+1} = (1/(2πi)) ∫_{C_j} R(z)(z - λ_j)^{m_j} dz = 0

by Cauchy's Theorem. With this we now have the sought-after expansion

R(z) = Σ_{j=1}^{h} ( P_j/(z - λ_j) + Σ_{k=1}^{m_j - 1} D_j^k/(z - λ_j)^{k+1} )    (9.15)

along with the verification of a number of the properties laid out in (8.1) through (8.3).


9.4 The Spectral Representation^4

With just a little bit more work we shall arrive at a similar expansion for B itself. We begin by applying the second resolvent identity (9.9) to P_j. More precisely, we note that (9.9) implies that

B P_j = P_j B = (1/(2πi)) ∫_{C_j} ( z R(z) - I ) dz    (9.16)

P_j B = (1/(2πi)) ∫_{C_j} z R(z) dz

P_j B = (1/(2πi)) ∫_{C_j} R(z)(z - λ_j) dz + λ_j (1/(2πi)) ∫_{C_j} R(z) dz

P_j B = D_j + λ_j P_j

Summing this over j we find

B Σ_{j=1}^{h} P_j = Σ_{j=1}^{h} λ_j P_j + Σ_{j=1}^{h} D_j    (9.17)

We can go one step further, namely the evaluation of the first sum. This stems from (9.7), where we integrated R(s) over a circle C_ρ with ρ > ‖B‖. The connection to the P_j is made by the residue theorem. More precisely,

∫_{C_ρ} R(z) dz = 2πi Σ_{j=1}^{h} P_j

Comparing this to (9.7) we find

Σ_{j=1}^{h} P_j = I    (9.18)

and so (9.17) takes the form

B = Σ_{j=1}^{h} λ_j P_j + Σ_{j=1}^{h} D_j    (9.19)

It is this formula that we refer to as the Spectral Representation of B. To the numerous connections between the P_j and D_j we wish to add one more. We first write (9.16) as

(B - λ_j I) P_j = D_j

and then raise each side to the m_j power. As P_j^{m_j} = P_j and D_j^{m_j} = 0 we find

(B - λ_j I)^{m_j} P_j = 0    (9.20)

For this reason we call the range of P_j the jth generalized eigenspace, call each of its nonzero members a jth generalized eigenvector, and refer to the dimension of R(P_j) as the algebraic multiplicity of λ_j. With regard to the first example (p. 95) from the discussion of the eigenvalue problem, we note that although it has only two linearly independent eigenvectors the span of the associated generalized eigenspaces indeed fills out R^3. One may view this as a consequence of P_1 + P_2 = I, or, perhaps more concretely, as appending the generalized first eigenvector (0, 1, 0)^T to the original two eigenvectors (1, 0, 0)^T and (0, 0, 1)^T. In still other words, the algebraic multiplicities sum to the ambient dimension (here 3), while the sum of geometric multiplicities falls short (here 2).

4This content is available online at <http://cnx.org/content/m10492/2.3/>.


9.5 The Eigenvalue Problem: Examples^5

We take a look back at our previous examples in light of the results of two previous sections, The Spectral Representation (Section 9.4) and The Partial Fraction Expansion of the Resolvent (Section 9.3). With respect to the rotation matrix

B =
[ 0  1
 -1  0 ]

we recall, see (7.17), that

R(s) = 1/(s^2 + 1) [ s  1; -1  s ]
     = (1/(s - i)) [ 1/2  -i/2; i/2  1/2 ] + (1/(s + i)) [ 1/2  i/2; -i/2  1/2 ]    (9.21)
     = (1/(s - λ_1)) P_1 + (1/(s - λ_2)) P_2

and so

B = λ_1 P_1 + λ_2 P_2 = i [ 1/2  -i/2; i/2  1/2 ] + (-i) [ 1/2  i/2; -i/2  1/2 ]

From m_1 = m_2 = 1 it follows that R(P_1) and R(P_2) are actual (as opposed to generalized) eigenspaces. These column spaces are easily determined. In particular, R(P_1) is the span of

e_1 = ( 1, i )^T

while R(P_2) is the span of

e_2 = ( 1, -i )^T

To recapitulate, from partial fraction expansion one can read off the projections, from which one can read off the eigenvectors. The reverse direction, producing projections from eigenvectors, is equally worthwhile. We laid the groundwork for this step in the discussion of Least Squares (Section 5.1). In particular, the Least Squares projection equation (5.14) stipulates that

P_1 = e_1 (e_1^T e_1)^{-1} e_1^T   and   P_2 = e_2 (e_2^T e_2)^{-1} e_2^T

As e_1^T e_1 = e_2^T e_2 = 0 these formulas can not possibly be correct. Returning to the Least Squares discussion we realize that it was, perhaps implicitly, assumed that all quantities were real. At root is the notion of the length of a complex vector. It is not the square root of the sum of squares of its components but rather the square root of the sum of squares of the magnitudes of its components. That is, recalling that the magnitude of a complex quantity z is sqrt(z \bar{z}),

‖e_1‖^2 ≠ e_1^T e_1,   rather   ‖e_1‖^2 = \bar{e}_1^T e_1

5This content is available online at <http://cnx.org/content/m10493/2.4/>.

Page 108: Matrix Analysis - The Free Information Society · Chapter 2 Matrix Methods for Electrical Systems 2.1 Nerve Fibers and the Strang Quartet 1 2.1.1 Nerve Fibers and the Strang Quartet

102 CHAPTER 9. THE EIGENVALUE PROBLEM

Yes, we have had this discussion before; recall complex numbers, vectors, and matrices (Section 7.1). The upshot of all of this is that, when dealing with complex vectors and matrices, one should conjugate before every transpose. Matlab (of course) does this automatically, i.e., the ' symbol conjugates and transposes simultaneously. We use x^H to denote 'conjugate transpose', i.e.,

x^H ≡ \bar{x}^T

All this suggests that the desired projections are more likely

P_1 = e_1 (e_1^H e_1)^{-1} e_1^H   and   P_2 = e_2 (e_2^H e_2)^{-1} e_2^H    (9.22)

Please check that (9.22) indeed jibes with (9.21).
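A quick Matlab check, recalling that ' is the conjugate transpose:

e1 = [1; 1i];  e2 = [1; -1i];
P1 = e1*(e1'*e1)^(-1)*e1'      % [1/2 -i/2; i/2 1/2], the first matrix in (9.21)
P2 = e2*(e2'*e2)^(-1)*e2'      % [1/2  i/2; -i/2 1/2], the second
1i*P1 - 1i*P2                  % lambda_1 P_1 + lambda_2 P_2 recovers B = [0 1; -1 0]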

9.6 The Eigenvalue Problem: Exercises^6

9.6.1 Exercises

1. Argue as in Proposition 9.1 in the discussion of the partial fraction expansion of the resolvent that if j ≠ k then D_j P_k = P_j D_k = 0.
2. Argue from (9.17) in the discussion of the Spectral Representation that D_j P_j = P_j D_j = D_j.
3. The two previous exercises come in very handy when computing powers of matrices. For example, suppose B is 4-by-4, that h = 2 and m_1 = m_2 = 2. Use the spectral representation of B together with the first two exercises to arrive at simple formulas for B^2 and B^3.

4. Compute the spectral representation of the circulant matrix

B =
[ 2 8 6 4
  4 2 8 6
  6 4 2 8
  8 6 4 2 ]

Carefully label all eigenvalues, eigenprojections, and eigenvectors.

6This content is available online at <http://cnx.org/content/m10494/2.3/>.


Chapter 10

The Symmetric Eigenvalue Problem

10.1 The Spectral Representation of a Symmetric Matrix^1

10.1.1 Introduction

Our goal is to show that if B is symmetric then

• each λ_j is real,
• each P_j is symmetric, and
• each D_j vanishes.

Let us begin with an example.

Example 10.1: The transfer function of

B =
[ 1 1 1
  1 1 1
  1 1 1 ]

is

R(s) = 1/(s(s - 3)) [ s-2  1    1
                      1    s-2  1
                      1    1    s-2 ]
     = (1/s) [ 2/3  -1/3 -1/3
               -1/3  2/3 -1/3
               -1/3 -1/3  2/3 ]
     + (1/(s - 3)) [ 1/3 1/3 1/3
                     1/3 1/3 1/3
                     1/3 1/3 1/3 ]
     = (1/(s - λ_1)) P_1 + (1/(s - λ_2)) P_2

and so indeed each of the bullets holds true. With each of the D_j falling by the wayside you may also expect that the respective geometric and algebraic multiplicities coincide.

1This content is available online at <http://cnx.org/content/m10382/2.4/>.

103


10.1.2 The Spectral Representation

We have amassed anecdotal evidence in support of the claim that each Dj in the spectral representation

B = Σ_{j=1}^{h} λ_j P_j + Σ_{j=1}^{h} D_j    (10.1)

is the zero matrix when B is symmetric, i.e., when B = BT , or, more generally, when B = BH where

B^H ≡ \bar{B}^T. Matrices for which B = B^H are called Hermitian. Of course real symmetric matrices are Hermitian.

Matrices for which B = BH are called Hermitian. Of course real symmetric matrices areHermitian.

Taking the conjugate transpose throughout (10.1) we nd

BH =h∑

j=1

(λjPj

H)

+h∑

j=1

(Dj

H)

(10.2)

That is, the λj are the eigenvalues of BH with corresponding projections PjH and nilpotents Dj

H Hence,if B = BH , we nd on equating terms that

λj = λj

Pj = PjH

andDj = Dj

H

The former states that the eigenvalues of an Hermitian matrix are real. Our main concern however is withthe consequences of the latter. To wit, notice that for arbitrary x,(

‖ Djmj−1x ‖

)2= xH

(Dj

mj−1)H

Djmj−1x

(‖ Dj

mj−1x ‖)2

= xHDjmj−1Dj

mj−1x

(‖ Dj

mj−1x ‖)2

= xHDjmj−2Dj

mj x

(‖ Dj

mj−1x ‖)2

= 0

As Djmj−1x = 0 for every x it follows (recall this previous exercise (list, item 3, p. 40)) that Dj

mj−1 = 0.Continuing in this fashion we nd Dj

mj−2 = 0 and so, eventually, Dj = 0. If, in addition, B is real then asthe eigenvalues are real and all the Dj vanish, the Pj must also be real. We have now established

Proposition 10.1:If B is real and symmetric then

B =h∑

j=1

(λjPj) (10.3)

where the λj are real and the Pj are real orthogonal projections that sum to the identity andwhose pairwise products vanish.Proof: One indication that things are simpler when using the spectral representation is

B100 =h∑

j=1

(λj

100Pj

)(10.4)

Page 111: Matrix Analysis - The Free Information Society · Chapter 2 Matrix Methods for Electrical Systems 2.1 Nerve Fibers and the Strang Quartet 1 2.1.1 Nerve Fibers and the Strang Quartet

105

As this holds for all powers it even holds for power series. As a result,

eB =h∑

j=1

(eλj Pj

)It is also extremely useful in attempting to solve

Bx = b

for x. Replacing B by its spectral representation and b by Ib or, more to the point by∑

j (Pjb) wend

h∑j=1

(λjPjx) =h∑

j=1

(Pjb)

Multiplying through by P1 gives λ1P1x = P1b or P1x = P1bλ1

. Multiplying through by the subsequent

Pj 's gives Pjx = Pjbλj

. Hence,

x =∑h

j=1 (Pjx)

=∑h

j=1

(1λj

Pjb) (10.5)

We clearly run in to trouble when one of the eigenvalues vanishes. This, of course, is to be expected.For a zero eigenvalue indicates a nontrivial null space which signies dependencies in the columnsof B and hence the lack of a unique solution to Bx = b.

Another way in which (10.5) may be viewed is to note that, when B is symmetric, this previousequation (9.15) takes the form

(zI −B)−1 =h∑

j=1

(1

z − λjPj

Now if 0 is not an eigenvalue we may set z = 0 in the above and arrive at

B^{-1} = Σ_{j=1}^{h} (1/λ_j) P_j    (10.6)

)(10.6)

Hence, the solution to Bx = b is

x = B−1b =h∑

j=1

(1λj

Pjb

)as in (10.5). With (10.6) we have nally reached a point where we can begin to dene an inverseeven for matrices with dependent columns, i.e., with a zero eigenvalue. We simply exclude theoending term in (10.6). Supposing that λh = 0 we dene the pseudo-inverse of B to be

B+ ≡h−1∑j=1

(1λj

Pj

B B^+ b = B Σ_{j=1}^{h-1} (1/λ_j) P_j b = Σ_{j=1}^{h-1} (1/λ_j) B P_j b = Σ_{j=1}^{h-1} (1/λ_j) λ_j P_j b = Σ_{j=1}^{h-1} P_j b


It remains to argue that the latter sum really is b. We know that

b = Σ_{j=1}^{h} P_j b,   b ∈ R(B)

The latter informs us that b ⊥ N(B^T). As B = B^T, we have, in fact, that b ⊥ N(B). As P_h is nothing but orthogonal projection onto N(B), it follows that P_h b = 0 and so B(B^+ b) = b; that is, x = B^+ b is a solution to Bx = b. The representation (10.4) is unarguably terse and in fact is often written out in terms of individual eigenvectors. Let us see how this is done. Note that if x ∈ R(P_1) then x = P_1 x and so,

Bx = B P_1 x = Σ_{j=1}^{h} λ_j P_j P_1 x = λ_1 P_1 x = λ_1 x

i.e., x is an eigenvector of B associated with λ_1. Similarly, every (nonzero) vector in R(P_j) is an eigenvector of B associated with λ_j.

Next let us demonstrate that each element of R(P_j) is orthogonal to each element of R(P_k) when j ≠ k. If x ∈ R(P_j) and y ∈ R(P_k) then

x^T y = (P_j x)^T P_k y = x^T P_j P_k y = 0

With this we note that if {x_{j,1}, x_{j,2}, ..., x_{j,n_j}} constitutes a basis for R(P_j), then in fact the union of such bases, {x_{j,p} | 1 ≤ j ≤ h and 1 ≤ p ≤ n_j}, forms a linearly independent set. Notice now that this set has

Σ_{j=1}^{h} n_j

elements. That these dimensions indeed sum to the ambient dimension, n, follows directly from the fact that the underlying P_j sum to the n-by-n identity matrix. We have just proven

Proposition 10.2: If B is real and symmetric and n-by-n, then B has a set of n linearly independent eigenvectors.

Proof: Getting back to a more concrete version of (10.4) we now assemble matrices from the individual bases

E_j ≡ ( x_{j,1}, x_{j,2}, ..., x_{j,n_j} )

and note, once again, that P_j = E_j (E_j^T E_j)^{-1} E_j^T, and so

B = Σ_{j=1}^{h} λ_j E_j (E_j^T E_j)^{-1} E_j^T

I understand that you may feel a little overwhelmed with this formula. If we work a bit harder we can remove the presence of the annoying inverse. What I mean is that it is possible to choose a basis for each R(P_j) for which each of the corresponding E_j satisfies E_j^T E_j = I. As this construction is fairly general let us devote a separate section to it (see Gram-Schmidt Orthogonalization (Section 10.2)).


10.2 Gram-Schmidt Orthogonalization^2

Suppose that M is an m-dimensional subspace with basis

{x_1, ..., x_m}

We transform this into an orthonormal basis

{q_1, ..., q_m}

for M via the following steps.

1. Set y_1 = x_1 and q_1 = y_1/‖y_1‖.
2. y_2 = x_2 minus the projection of x_2 onto the line spanned by q_1. That is

y_2 = x_2 - q_1 (q_1^T q_1)^{-1} q_1^T x_2 = x_2 - q_1 q_1^T x_2

Set q_2 = y_2/‖y_2‖ and Q_2 = [q_1, q_2].
3. y_3 = x_3 minus the projection of x_3 onto the plane spanned by q_1 and q_2. That is

y_3 = x_3 - Q_2 (Q_2^T Q_2)^{-1} Q_2^T x_3 = x_3 - q_1 q_1^T x_3 - q_2 q_2^T x_3

Set q_3 = y_3/‖y_3‖ and Q_3 = [q_1, q_2, q_3]. Continue in this fashion through step (m).

(m) y_m = x_m minus its projection onto the subspace spanned by the columns of Q_{m-1}. That is

y_m = x_m - Q_{m-1} (Q_{m-1}^T Q_{m-1})^{-1} Q_{m-1}^T x_m = x_m - Σ_{j=1}^{m-1} q_j q_j^T x_m

Set q_m = y_m/‖y_m‖.

To take a simple example, let us orthogonalize the following basis for R^3,

x_1 = ( 1, 0, 0 )^T,   x_2 = ( 1, 1, 0 )^T,   x_3 = ( 1, 1, 1 )^T

1. q_1 = y_1 = x_1.
2. y_2 = x_2 - q_1 q_1^T x_2 = ( 0, 1, 0 )^T and so q_2 = y_2.
3. y_3 = x_3 - q_1 q_1^T x_3 - q_2 q_2^T x_3 = ( 0, 0, 1 )^T and so q_3 = y_3.

We have arrived at

q_1 = ( 1, 0, 0 )^T,   q_2 = ( 0, 1, 0 )^T,   q_3 = ( 0, 0, 1 )^T.
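A minimal Matlab rendering of steps (1) through (m), assuming the basis vectors arrive as the columns of X (this is the classical, unpolished form of the procedure):

function Q = gramschmidt(X)
% Orthonormalize the columns of X by the steps above.
[n, m] = size(X);
Q = zeros(n, m);
for j = 1:m
    y = X(:,j) - Q(:,1:j-1)*(Q(:,1:j-1)'*X(:,j));   % subtract the projections
    Q(:,j) = y/norm(y);                             % normalize
end

Applied to the basis above, gramschmidt([1 1 1; 0 1 1; 0 0 1]) returns the identity matrix, in agreement with the hand calculation.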

Once the idea is grasped the actual calculations are best left to a machine. Matlab accomplishes this via the orth command. Its implementation is a bit more sophisticated than a blind run through our steps (1) through (m). As a result, there is no guarantee that it will return the same basis. For example

2This content is available online at <http://cnx.org/content/m10509/2.4/>.


X=[1 1 1;0 1 1 ;0 0 1];

Q=orth(X)

Q=

0.7370 -0.5910 0.3280

0.5910 0.3280 -0.7370

0.3280 0.7370 0.5910

This ambiguity does not bother us, for one orthogonal basis is as good as another. Let us put this into practice, via (10.8).

10.3 The Diagonalization of a Symmetric Matrix^3

By choosing an orthogonal basis {q_{j,k} | 1 ≤ k ≤ n_j} for each R(P_j) and collecting the basis vectors in

Q_j = ( q_{j,1}  q_{j,2}  ...  q_{j,n_j} )

we find that

P_j = Q_j Q_j^T = Σ_{k=1}^{n_j} q_{j,k} q_{j,k}^T

As a result, the spectral representation (Section 10.1) takes the form

B = Σ_{j=1}^{h} λ_j Q_j Q_j^T = Σ_{j=1}^{h} λ_j Σ_{k=1}^{n_j} q_{j,k} q_{j,k}^T    (10.7)

This is the spectral representation in perhaps its most detailed dress. There exists, however, still another form! It is a form that you are likely to see in future engineering courses and is achieved by assembling the Q_j into a single n-by-n orthonormal matrix

Q = ( Q_1  ...  Q_h )

Having orthonormal columns it follows that Q^T Q = I. Q being square, it follows in addition that Q^T = Q^{-1}. Now

Bqj,k = λjqj,k

may be encoded in matrix terms viaBQ = QΛ (10.8)

where Λ is the n-by- n diagonal matrix whose rst n1 diagonal terms are λ1, whose next n2 diagonal termsare λ2, and so on. That is, each λj is repeated according to its multiplicity. Multiplying each side of (10.8),from the right, by QT we arrive at

B = QΛQT (10.9)

3This content is available online at <http://cnx.org/content/m10558/2.4/>.

Page 115: Matrix Analysis - The Free Information Society · Chapter 2 Matrix Methods for Electrical Systems 2.1 Nerve Fibers and the Strang Quartet 1 2.1.1 Nerve Fibers and the Strang Quartet

109

Because one may just as easily writeQT BQ = Λ (10.10)

one says that Q diagonalizes B.Let us return the our example

B =

1 1 1

1 1 1

1 1 1

of the last chapter. Recall that the eigenspace associated with λ1 = 0 had

e1,1 =

−1

1

0

and

e1,2 =

−1

0

1

for a basis. Via Gram-Schmidt we may replace this with

q1,1 =1√2

−1

1

0

and

q1,2 =1√6

−1

−1

2

Normalizing the vector associated with λ2 = 3 we arrive at

q2,1 =1√3

1

1

1

and hence

Q =(

q11 q2

1 q2

)=

1√6

−(√

3)

−1√

2√

3 −1√

2

0 2√

2

and

Λ =

0 0 0

0 0 0

0 0 3
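As a quick numerical sanity check of (10.9), one may enter Q and Λ from the display above and verify both claims. A sketch:

Q   = (1/sqrt(6))*[-sqrt(3) -1 sqrt(2); sqrt(3) -1 sqrt(2); 0 2 sqrt(2)];
Lam = diag([0 0 3]);
Q'*Q        % the identity, so the columns of Q are indeed orthonormal
Q*Lam*Q'    % the all-ones matrix, confirming B = Q*Lam*Q'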


Chapter 11

The Matrix Exponential

11.1 Overview

The matrix exponential is a powerful means for representing the solution to n linear, constant coefficient, differential equations. The initial value problem for such a system may be written

x'(t) = A x(t),
x(0) = x_0,

where A is the n-by-n matrix of coefficients. By analogy to the 1-by-1 case we might expect

x(t) = e^{At} x_0    (11.1)

to hold. Our expectations are granted if we properly define e^{At}. Do you see why simply exponentiating each element of At will not suffice?

There are at least 4 distinct (but of course equivalent) approaches to properly defining e^{At}. The first two are natural analogs of the single variable case while the latter two make use of heavier matrix algebra machinery.

1. The Matrix Exponential as a Limit of Powers (Section 11.2)
2. The Matrix Exponential as a Sum of Powers (Section 11.3)
3. The Matrix Exponential via the Laplace Transform (Section 11.4)
4. The Matrix Exponential via Eigenvalues and Eigenvectors (Section 11.5)

Please visit each of these modules to see the definition and a number of examples. For a concrete application of these methods to a real dynamical system, please visit the Mass-Spring-Damper module (Section 11.6).

Regardless of the approach, the matrix exponential may be shown to obey the 3 lovely properties

1. d/dt (e^{At}) = A e^{At} = e^{At} A
2. e^{A(t_1+t_2)} = e^{At_1} e^{At_2}
3. e^{At} is nonsingular and (e^{At})^{-1} = e^{−At}

Let us confirm each of these on the suite of examples used in the submodules.

This content is available online at <http://cnx.org/content/m10677/2.9/>.

Example 11.1
If

A = [1, 0; 0, 2]

then

e^{At} = [e^t, 0; 0, e^{2t}].

1. d/dt (e^{At}) = [e^t, 0; 0, 2e^{2t}] = [1, 0; 0, 2] [e^t, 0; 0, e^{2t}]
2. [e^{t_1+t_2}, 0; 0, e^{2t_1+2t_2}] = [e^{t_1} e^{t_2}, 0; 0, e^{2t_1} e^{2t_2}] = [e^{t_1}, 0; 0, e^{2t_1}] [e^{t_2}, 0; 0, e^{2t_2}]
3. (e^{At})^{-1} = [e^{−t}, 0; 0, e^{−2t}] = e^{−At}

Example 11.2
If

A = [0, 1; −1, 0]

then

e^{At} = [cos(t), sin(t); −sin(t), cos(t)].

1. d/dt (e^{At}) = [−sin(t), cos(t); −cos(t), −sin(t)], and A e^{At} = [−sin(t), cos(t); −cos(t), −sin(t)]
2. You will recognize this statement as a basic trig identity:

[cos(t_1+t_2), sin(t_1+t_2); −sin(t_1+t_2), cos(t_1+t_2)] = [cos(t_1), sin(t_1); −sin(t_1), cos(t_1)] [cos(t_2), sin(t_2); −sin(t_2), cos(t_2)]

3. Using the cofactor expansion we find

(e^{At})^{-1} = [cos(t), −sin(t); sin(t), cos(t)] = [cos(−t), sin(−t); −sin(−t), cos(−t)] = e^{−At}

Example 11.3
If

A = [0, 1; 0, 0]

then

e^{At} = [1, t; 0, 1].

1. d/dt (e^{At}) = [0, 1; 0, 0] = A e^{At}
2. [1, t_1 + t_2; 0, 1] = [1, t_1; 0, 1] [1, t_2; 0, 1]
3. [1, t; 0, 1]^{-1} = [1, −t; 0, 1] = e^{−At}
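These identities are also easy to spot-check numerically. The sketch below leans on Matlab's built-in expm; the times t1 and t2 and the step h are arbitrary choices of ours.

A  = [0 1; -1 0];
t1 = 0.3;  t2 = 0.5;  h = 1e-6;
norm((expm(A*(t1+h)) - expm(A*t1))/h - A*expm(A*t1))   % property 1: ~0, up to O(h)
norm(expm(A*(t1+t2)) - expm(A*t1)*expm(A*t2))          % property 2: ~0
norm(inv(expm(A*t1)) - expm(-A*t1))                    % property 3: ~0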

11.2 The Matrix Exponential as a Limit of Powers

You may recall from Calculus that for any numbers a and t one may achieve e^{at} via

e^{at} = lim_{k→∞} (1 + at/k)^k.    (11.2)

The natural matrix definition is therefore

e^{At} = lim_{k→∞} (I + At/k)^k    (11.3)

where I is the n-by-n identity matrix.

Example 11.4
The easiest case is the diagonal case, e.g.,

A = [1, 0; 0, 2],

for then

(I + At/k)^k = [(1 + t/k)^k, 0; 0, (1 + 2t/k)^k]

and so (recalling (11.2) above)

e^{At} = [e^t, 0; 0, e^{2t}].

Note that this is NOT the exponential of each element of A.

Example 11.5
As a concrete example let us suppose

A = [0, 1; −1, 0].

From

I + At = [1, t; −t, 1]

we compute

(I + At/2)^2 = [1, t/2; −t/2, 1] [1, t/2; −t/2, 1] = [1 − t^2/4, t; −t, 1 − t^2/4]

(I + At/3)^3 = [1 − t^2/3, t − t^3/27; −t + t^3/27, 1 − t^2/3]

(I + At/4)^4 = [1 − 3t^2/8 + t^4/256, t − t^3/16; −t + t^3/16, 1 − 3t^2/8 + t^4/256]

(I + At/5)^5 = [1 − 2t^2/5 + t^4/125, t − 2t^3/25 + t^5/3125; −t + 2t^3/25 − t^5/3125, 1 − 2t^2/5 + t^4/125]

We discern a pattern: the diagonal elements are equal even polynomials while the off-diagonal elements are equal but opposite odd polynomials. The degree of the polynomial will grow with k and in the limit we 'recognize'

e^{At} = [cos(t), sin(t); −sin(t), cos(t)].

Example 11.6
If

A = [0, 1; 0, 0]

then

(I + At/k)^k = [1, t; 0, 1]

for each value of k, and so

e^{At} = [1, t; 0, 1].

This content is available online at <http://cnx.org/content/m10683/2.7/>.
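The convergence in (11.3) can be watched directly. In this sketch expm plays the role of the limit, and the values of k are arbitrary choices of ours.

A = [0 1; -1 0];  t = 1;  I = eye(2);
for k = [1 10 100 1000]
    fprintf('k = %4d   error = %.2e\n', k, norm((I + A*t/k)^k - expm(A*t)))
end
% The error decays roughly like 1/k, as the scalar case would suggest.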

11.3 The Matrix Exponential as a Sum of Powers

You may recall from Calculus that for any numbers a and t one may achieve e^{at} via

e^{at} = ∑_{k=0}^{∞} (at)^k / k!.    (11.4)

The natural matrix definition is therefore

e^{At} = ∑_{k=0}^{∞} (At)^k / k!    (11.5)

where A^0 = I is the n-by-n identity matrix.

Example 11.7
The easiest case is the diagonal case, e.g.,

A = [1, 0; 0, 2],

for then

(At)^k = [t^k, 0; 0, (2t)^k]

and so (recalling (11.4) above)

e^{At} = [e^t, 0; 0, e^{2t}].

Note that this is NOT the exponential of each element of A.

Example 11.8
As a second example let us suppose

A = [0, 1; −1, 0].

We recognize that its powers cycle, i.e.,

A^2 = [−1, 0; 0, −1],  A^3 = [0, −1; 1, 0],  A^4 = [1, 0; 0, 1],  A^5 = [0, 1; −1, 0] = A,

and so

e^{At} = [1 − t^2/2! + t^4/4! − ⋯, t − t^3/3! + t^5/5! − ⋯; −t + t^3/3! − t^5/5! + ⋯, 1 − t^2/2! + t^4/4! − ⋯] = [cos(t), sin(t); −sin(t), cos(t)].

Example 11.9
If

A = [0, 1; 0, 0]

then

A^2 = A^3 = ⋯ = A^k = [0, 0; 0, 0],

so the series terminates and

e^{At} = I + tA = [1, t; 0, 1].

This content is available online at <http://cnx.org/content/m10678/2.15/>.
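A truncated version of (11.5) is the obvious computational sketch; the cutoff of 20 terms below is an arbitrary choice of ours, ample for this small A and t.

A = [0 1; -1 0];  t = 1;
E = eye(2);  term = eye(2);
for k = 1:20
    term = term*(A*t)/k;     % term now holds (A*t)^k / k!
    E = E + term;
end
norm(E - expm(A*t))          % ~0: the partial sum has converged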

11.4 The Matrix Exponential via the Laplace Transform

You may recall from the Laplace Transform module ("The Laplace Transform" <http://cnx.org/content/m10731/latest/>) that one may achieve e^{at} via

e^{at} = L^{-1}(1/(s − a)).    (11.6)

The natural matrix definition is therefore

e^{At} = L^{-1}((sI − A)^{-1})    (11.7)

where I is the n-by-n identity matrix.

Example 11.10
The easiest case is the diagonal case, e.g.,

A = [1, 0; 0, 2],

for then

(sI − A)^{-1} = [1/(s−1), 0; 0, 1/(s−2)]

and so (recalling (11.6) above)

e^{At} = [L^{-1}(1/(s−1)), 0; 0, L^{-1}(1/(s−2))] = [e^t, 0; 0, e^{2t}].

Example 11.11
As a second example let us suppose

A = [0, 1; −1, 0]

and compute, in Matlab,

inv(s*eye(2)-A)

ans = [ s/(s^2+1), 1/(s^2+1)]
      [-1/(s^2+1), s/(s^2+1)]

ilaplace(ans)

ans = [ cos(t), sin(t)]
      [-sin(t), cos(t)]

Example 11.12
If

A = [0, 1; 0, 0]

then

inv(s*eye(2)-A)

ans = [ 1/s, 1/s^2]
      [   0,   1/s]

ilaplace(ans)

ans = [ 1, t]
      [ 0, 1]

This content is available online at <http://cnx.org/content/m10679/2.10/>.
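For completeness, the session above needs the symbolic variable s declared first. A minimal self-contained sketch, assuming the Symbolic Math Toolbox (as the ilaplace calls in the text already do):

syms s t
A = [0 1; -1 0];
R = inv(s*eye(2) - A);   % the resolvent (sI - A)^{-1}
E = ilaplace(R, s, t)    % entrywise inverse transform: e^{At}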

11.5 The Matrix Exponential via Eigenvalues and Eigenvectors

In this module we exploit the fact that the matrix exponential of a diagonal matrix is the diagonal matrix of element exponentials. In order to exploit it we need to recall that all matrices are almost diagonalizable. Let us begin with the clean case: if A is n-by-n and has n distinct eigenvalues, λ_j, and therefore n linearly independent eigenvectors, s_j, then we note that

A s_j = λ_j s_j,  j ∈ {1, ..., n},

may be written

A = S Λ S^{-1}    (11.8)

where S = [s_1, s_2, ..., s_n] is the full matrix of eigenvectors and Λ = diag(λ_1, λ_2, ..., λ_n) is the diagonal matrix of eigenvalues. One cool reason for writing A as in (11.8) is that

A^2 = S Λ S^{-1} S Λ S^{-1} = S Λ^2 S^{-1}

and, more generally,

A^k = S Λ^k S^{-1}.

If we now plug this into the definition in The Matrix Exponential as a Sum of Powers (Section 11.3), we find

e^{At} = S e^{Λt} S^{-1}

where e^{Λt} is simply

diag(e^{λ_1 t}, e^{λ_2 t}, ..., e^{λ_n t}).

Let us exercise this on our standard suite of examples.

Example 11.13
If

A = [1, 0; 0, 2]

then S = I and Λ = A, and so e^{At} = e^{Λt}. This was too easy!

Example 11.14
As a second example let us suppose

A = [0, 1; −1, 0]

and compute, in Matlab,

[S, Lam] = eig(A)

S =   0.7071        0.7071
      0 + 0.7071i   0 - 0.7071i

Lam = 0 + 1.0000i   0
      0             0 - 1.0000i

Si = inv(S)

Si =  0.7071   0 - 0.7071i
      0.7071   0 + 0.7071i

simple(S*diag(exp(diag(Lam)*t))*Si)

ans = [ cos(t), sin(t)]
      [-sin(t), cos(t)]

Example 11.15
If

A = [0, 1; 0, 0]

then Matlab delivers

[S, Lam] = eig(A)

S = 1.0000   -1.0000
    0         0.0000

Lam = 0   0
      0   0

So zero is a double eigenvalue with but one eigenvector. Hence S is not invertible and we can not invoke (11.8). The generalization of (11.8) is often called the Jordan Canonical Form or the Spectral Representation (Section 9.4). The latter reads

A = ∑_{j=1}^{h} (λ_j P_j + D_j)

where the λ_j are the distinct eigenvalues of A while, in terms of the resolvent R(z) = (zI − A)^{-1},

P_j = (1/(2πi)) ∫_{C_j} R(z) dz

is the associated eigen-projection and

D_j = (1/(2πi)) ∫_{C_j} R(z)(z − λ_j) dz

is the associated eigen-nilpotent. In each case, C_j is a small circle enclosing only λ_j.

Conversely, we express the resolvent as

R(z) = ∑_{j=1}^{h} ( (1/(z − λ_j)) P_j + ∑_{k=1}^{m_j−1} (1/(z − λ_j)^{k+1}) D_j^k )

where

m_j = dim R(P_j).

With this preparation we recall Cauchy's integral formula (Section 8.2) for a smooth function f,

f(a) = (1/(2πi)) ∫_{C(a)} f(z)/(z − a) dz,

where C(a) is a curve enclosing the point a. The natural matrix analog is

f(A) = (1/(2πi)) ∫_{C(r)} f(z) R(z) dz

where C(r) encloses ALL of the eigenvalues of A. For f(z) = e^{zt} we find

e^{At} = ∑_{j=1}^{h} e^{λ_j t} ( P_j + ∑_{k=1}^{m_j−1} (t^k/k!) D_j^k ).    (11.9)

With regard to our example we find h = 1, λ_1 = 0, P_1 = I, m_1 = 2, D_1 = A, so

e^{At} = I + tA.

Let us consider a slightly bigger example. If

A = [1, 1, 0; 0, 1, 0; 0, 0, 2]

then

R = inv(s*eye(3)-A)

R = [ 1/(s-1), 1/(s-1)^2,       0]
    [       0,   1/(s-1),       0]
    [       0,         0, 1/(s-2)]

and so λ_1 = 1 and λ_2 = 2, while

P_1 = [1, 0, 0; 0, 1, 0; 0, 0, 0]

and so m_1 = 2,

D_1 = [0, 1, 0; 0, 0, 0; 0, 0, 0],

and

P_2 = [0, 0, 0; 0, 0, 0; 0, 0, 1]

and m_2 = 1 and D_2 = 0. Hence

e^{At} = e^t (P_1 + t D_1) + e^{2t} P_2 = [e^t, t e^t, 0; 0, e^t, 0; 0, 0, e^{2t}].

This content is available online at <http://cnx.org/content/m10680/2.13/>.
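As a cross-check, the spectral formula can be compared against Matlab's expm. A sketch; the time t = 0.7 is an arbitrary choice of ours.

A  = [1 1 0; 0 1 0; 0 0 2];
P1 = diag([1 1 0]);  D1 = [0 1 0; 0 0 0; 0 0 0];  P2 = diag([0 0 1]);
t  = 0.7;
E  = exp(t)*(P1 + t*D1) + exp(2*t)*P2;   % e^{At} per (11.9)
norm(E - expm(A*t))                      % ~0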

11.6 The Mass-Spring-Damper System

Figure 11.1: Mass, spring, damper system

If one provides an initial displacement, x_0, and velocity, v_0, to the mass depicted in Figure 11.1 then one finds that its displacement, x(t), at time t satisfies

m x''(t) + 2c x'(t) + k x(t) = 0,    (11.10)
x(0) = x_0,  x'(0) = v_0,

where prime denotes differentiation with respect to time. It is customary to write this single second order equation as a pair of first order equations. More precisely, we set

u_1(t) = x(t),  u_2(t) = x'(t),

and note that (11.10) becomes

m u_2'(t) = −k u_1(t) − 2c u_2(t),    (11.11)
u_1'(t) = u_2(t).

Denoting u(t) ≡ (u_1(t), u_2(t))^T, we write (11.11) as

u'(t) = A u(t),  A = [0, 1; −k/m, −2c/m].    (11.12)

We recall from The Matrix Exponential module that

u(t) = e^{At} u(0).

We shall proceed to compute the matrix exponential along the lines of The Matrix Exponential via Eigenvalues and Eigenvectors module (Section 11.5). To begin we record the resolvent

R(z) = (1/(mz^2 + 2cz + k)) [mz + 2c, m; −k, mz].

The eigenvalues are the roots of mz^2 + 2cz + k, namely

λ_1 = (−c − d)/m,  λ_2 = (−c + d)/m,  where d = √(c^2 − mk).

We naturally consider two cases, the first being:

1. d ≠ 0. In this case the partial fraction expansion of R(z) yields

R(z) = (1/(z − λ_1)) (1/(2d)) [d − c, −m; k, c + d] + (1/(z − λ_2)) (1/(2d)) [c + d, m; −k, d − c] = (1/(z − λ_1)) P_1 + (1/(z − λ_2)) P_2,

and so e^{At} = e^{λ_1 t} P_1 + e^{λ_2 t} P_2. If we now suppose a negligible initial velocity, i.e., v_0 = 0, it follows that

x(t) = (x_0/(2d)) (e^{λ_1 t} (d − c) + e^{λ_2 t} (c + d)).    (11.13)

If d is real, i.e., if c^2 > mk, then both λ_1 and λ_2 are negative real numbers and x(t) decays to 0 without oscillation. If, on the contrary, d is imaginary, i.e., c^2 < mk, then

x(t) = x_0 e^{−ct/m} ( cos(|d|t/m) + (c/|d|) sin(|d|t/m) )    (11.14)

and so x decays to 0 in an oscillatory fashion. When (11.13) holds with real d the system is said to be overdamped, while when (11.14) governs we speak of the system as underdamped. It remains to discuss the case of critical damping.

2. d = 0. In this case λ_1 = λ_2 = −√(k/m), and so we need only compute P_1 and D_1. As there is but one P_j and the P_j are known to sum to the identity, it follows that P_1 = I. Similarly, equation (9.16) dictates that

D_1 = A P_1 − λ_1 P_1 = A − λ_1 I = [√(k/m), 1; −k/m, −√(k/m)].

On substitution of this into equation (11.9) we find

e^{At} = e^{−t√(k/m)} [1 + t√(k/m), t; −tk/m, 1 − t√(k/m)].    (11.15)

Under the assumption, as above, that v_0 = 0, we deduce from (11.15) that

x(t) = e^{−t√(k/m)} (1 + t√(k/m)) x_0.

This content is available online at <http://cnx.org/content/m10691/2.4/>.
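A short simulation sketch ties the three cases together via u(t) = e^{At} u(0). The values m = 1, c = 0.2, k = 1 below are our own illustrative choices (underdamped, since c^2 < mk), as are x0 = 1 and v0 = 0.

m = 1; c = 0.2; k = 1;
A = [0 1; -k/m -2*c/m];
u0 = [1; 0];                          % x0 = 1, v0 = 0
tt = linspace(0, 30, 600);  x = zeros(size(tt));
for i = 1:numel(tt)
    u = expm(A*tt(i))*u0;             % u(t) = e^{At} u(0)
    x(i) = u(1);                      % displacement
end
plot(tt, x), xlabel('t'), ylabel('x(t)')   % decaying oscillation, per (11.14)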


Chapter 12

Singular Value Decomposition

12.1 The Singular Value Decomposition

The singular value decomposition is another name for the spectral representation of a rectangular matrix. Of course if A is m-by-n and m ≠ n then it does not make sense to speak of the eigenvalues of A. We may, however, rely on the previous section to give us relevant spectral representations of the two symmetric matrices

• A^T A
• A A^T

That these two matrices together may indeed tell us 'everything' about A can be gleaned from

N(A^T A) = N(A),
N(A A^T) = N(A^T),
R(A^T A) = R(A^T),
R(A A^T) = R(A).

You have proven the first of these in a previous exercise. The proof of the second is identical. The row and column space results follow from the first two via orthogonality.

On the spectral side, we shall now see that the eigenvalues of A A^T and A^T A are nonnegative and that their nonzero eigenvalues coincide. Let us first confirm this on the adjacency matrix associated with the unstable swing (see figure in another module (Figure 3.2: A simple swing)),

A = [0, 1, 0, 0; −1, 0, 1, 0; 0, 0, 0, 1].    (12.1)

The respective products are

A A^T = [1, 0, 0; 0, 2, 0; 0, 0, 1]

and

A^T A = [1, 0, −1, 0; 0, 1, 0, 0; −1, 0, 1, 0; 0, 0, 0, 1].

Analysis of the first is particularly simple. Its null space is clearly just the zero vector while λ_1 = 2 and λ_2 = 1 are its eigenvalues. Their geometric multiplicities are n_1 = 1 and n_2 = 2. In A^T A we recognize the S matrix from the exercise in another module ("Symmetric Matrix Spectral Representation Exercises" <http://cnx.org/content/m10557/latest/#para1>) and recall that its eigenvalues are λ_1 = 2, λ_2 = 1, and λ_3 = 0 with multiplicities n_1 = 1, n_2 = 2, and n_3 = 1. Hence, at least for this A, the eigenvalues of A A^T and A^T A are nonnegative and their nonzero eigenvalues coincide. In addition, the geometric multiplicities of the nonzero eigenvalues sum to 3, the rank of A.

This content is available online at <http://cnx.org/content/m10739/2.4/>.

Proposition 12.1: The eigenvalues of A A^T and A^T A are nonnegative. Their nonzero eigenvalues, including geometric multiplicities, coincide. The geometric multiplicities of the nonzero eigenvalues sum to the rank of A.

Proof: If A^T A x = λx then x^T A^T A x = λ x^T x, i.e., ‖Ax‖^2 = λ‖x‖^2, and so λ ≥ 0. A similar argument works for A A^T.

Now suppose that λ_j > 0 and that {x_{j,k}}_{k=1}^{n_j} constitutes an orthonormal basis for the eigenspace R(P_j). Starting from

A^T A x_{j,k} = λ_j x_{j,k}    (12.2)

we find, on multiplying through (from the left) by A, that

A A^T A x_{j,k} = λ_j A x_{j,k},

i.e., λ_j is an eigenvalue of A A^T with eigenvector A x_{j,k}, so long as A x_{j,k} ≠ 0. It follows from the first paragraph of this proof that ‖A x_{j,k}‖ = √λ_j, which, by hypothesis, is nonzero. Hence,

y_{j,k} ≡ A x_{j,k} / √λ_j,  1 ≤ k ≤ n_j,    (12.3)

is a collection of unit eigenvectors of A A^T associated with λ_j. Let us now show that these vectors are orthonormal for fixed j. For i ≠ k,

y_{j,i}^T y_{j,k} = (1/λ_j) x_{j,i}^T A^T A x_{j,k} = x_{j,i}^T x_{j,k} = 0.

We have now demonstrated that if λ_j > 0 is an eigenvalue of A^T A of geometric multiplicity n_j, then it is an eigenvalue of A A^T of geometric multiplicity at least n_j. Reversing the argument, i.e., generating eigenvectors of A^T A from those of A A^T, we find that the geometric multiplicities must indeed coincide.

Regarding the rank statement, we discern from (12.2) that if λ_j > 0 then x_{j,k} ∈ R(A^T A). The union of these vectors indeed constitutes a basis for R(A^T A), for anything orthogonal to each of these x_{j,k} necessarily lies in the eigenspace corresponding to a zero eigenvalue, i.e., in N(A^T A). As R(A^T A) = R(A^T) it follows that dim R(A^T A) = r and hence the n_j, for λ_j > 0, sum to r.

Let us now gather together some of the separate pieces of the proof. For starters, we order the eigenvalues of A^T A from high to low,

λ_1 > λ_2 > ⋯ > λ_h,

and write

A^T A = X Λ_n X^T    (12.4)

where

X = [X_1, ..., X_h],  X_j = [x_{j,1}, ..., x_{j,n_j}],

and Λ_n is the n-by-n diagonal matrix with λ_1 in the first n_1 slots, λ_2 in the next n_2 slots, etc. Similarly,

A A^T = Y Λ_m Y^T    (12.5)

where

Y = [Y_1, ..., Y_h],  Y_j = [y_{j,1}, ..., y_{j,n_j}],

and Λ_m is the m-by-m diagonal matrix with λ_1 in the first n_1 slots, λ_2 in the next n_2 slots, etc. The y_{j,k} were defined in (12.3) under the assumption that λ_j > 0. If λ_j = 0, let Y_j denote an orthonormal basis for N(A A^T). Finally, call

σ_j = √λ_j

and let Σ denote the m-by-n diagonal matrix with σ_1 in the first n_1 slots, σ_2 in the next n_2 slots, etc. Notice that

Σ^T Σ = Λ_n    (12.6)
Σ Σ^T = Λ_m    (12.7)

Now recognize that (12.3) may be written

A x_{j,k} = σ_j y_{j,k}

and that this is simply the column by column rendition of

A X = Y Σ.

As X X^T = I we may multiply through (from the right) by X^T and arrive at the singular value decomposition of A,

A = Y Σ X^T.    (12.8)

Let us confirm this on the A matrix in (12.1). We have

Λ_4 = [2, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 0; 0, 0, 0, 0],

X = (1/√2) [−1, 0, 0, 1; 0, √2, 0, 0; 1, 0, 0, 1; 0, 0, √2, 0],

Λ_3 = [2, 0, 0; 0, 1, 0; 0, 0, 1],

Y = [0, 1, 0; 1, 0, 0; 0, 0, 1].

Hence,

Σ = [√2, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 0]    (12.9)

and so A = Y Σ X^T says that A should coincide with

[0, 1, 0; 1, 0, 0; 0, 0, 1] [√2, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 0] [−1/√2, 0, 1/√2, 0; 0, 1, 0, 0; 0, 0, 0, 1; 1/√2, 0, 1/√2, 0].

This indeed agrees with A. It also agrees (up to sign changes on the columns of X) with what one receives upon typing [Y, SIG, X] = svd(A) in Matlab.

You now ask what we get for our troubles. I express the first dividend as a proposition that looks to me like a quantitative version of the fundamental theorem of linear algebra.

Proposition 12.2: If Y Σ X^T is the singular value decomposition of A then

1. The rank of A, call it r, is the number of nonzero elements in Σ.
2. The first r columns of X constitute an orthonormal basis for R(A^T). The last n − r columns of X constitute an orthonormal basis for N(A).
3. The first r columns of Y constitute an orthonormal basis for R(A). The last m − r columns of Y constitute an orthonormal basis for N(A^T).
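Proposition 12.2 translates directly into a few lines of Matlab. A sketch; the variable names are ours, and svd orders the singular values from largest to smallest, so the leading columns are the ones attached to nonzero σ.

A = [0 1 0 0; -1 0 1 0; 0 0 0 1];
[Y, SIG, X] = svd(A);
r = rank(A);               % the number of nonzero singular values (here 3)
colspace  = Y(:, 1:r);     % orthonormal basis for R(A)
rowspace  = X(:, 1:r);     % orthonormal basis for R(A^T)
nullspace = X(:, r+1:end)  % orthonormal basis for N(A)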

Let us now 'solve' Ax = b with the help of the pseudo-inverse of A. You know the 'right' thing to do, namely reciprocate all of the nonzero singular values. Because m is not necessarily n we must also be careful with dimensions. To be precise, let Σ⁺ denote the n-by-m matrix whose first n_1 diagonal elements are 1/σ_1, whose next n_2 diagonal elements are 1/σ_2, and so on. In the case that σ_h = 0, set the final n_h diagonal elements of Σ⁺ to zero. Now, one defines the pseudo-inverse of A to be

A⁺ ≡ X Σ⁺ Y^T.

In the case that A is that appearing in (12.1) we find

Σ⁺ = [1/√2, 0, 0; 0, 1, 0; 0, 0, 1; 0, 0, 0],

and so

A⁺ = X Σ⁺ Y^T = (1/√2) [−1, 0, 0, 1; 0, √2, 0, 0; 1, 0, 0, 1; 0, 0, √2, 0] [1/√2, 0, 0; 0, 1, 0; 0, 0, 1; 0, 0, 0] [0, 1, 0; 1, 0, 0; 0, 0, 1],

therefore

A⁺ = [0, −1/2, 0; 1, 0, 0; 0, 1/2, 0; 0, 0, 1],

in agreement with what appears from pinv(A). Let us now investigate the sense in which A⁺ is the inverse of A. Suppose that b ∈ R^m and that we wish to solve Ax = b. We suspect that A⁺b should be a good candidate. Observe by (12.4) that

(A^T A) A⁺ b = X Λ_n X^T X Σ⁺ Y^T b
            = X Λ_n Σ⁺ Y^T b      (because X^T X = I)
            = X Σ^T Σ Σ⁺ Y^T b    (by (12.6) and (12.7))
            = X Σ^T Y^T b         (because Σ^T Σ Σ⁺ = Σ^T)
            = A^T b               (by (12.8)),

that is, A⁺b satisfies the least-squares problem A^T A x = A^T b.
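A one-line confirmation of this last claim, sketched with an arbitrary right-hand side b of our choosing:

A  = [0 1 0 0; -1 0 1 0; 0 0 0 1];
Ap = pinv(A);
b  = [1; 2; 3];
norm(A'*A*(Ap*b) - A'*b)   % ~0: Ap*b solves the normal equations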


Glossary

A

A Basis for the Column Space: Suppose A is m-by-n. If columns {c_j | j = 1, ..., r} are the pivot columns of A_red then columns {c_j | j = 1, ..., r} of A constitute a basis for Ra(A).

A Basis for the Left Null Space: Suppose that A^T is n-by-m with pivot indices {c_j | j = 1, ..., r} and free indices {c_j | j = r+1, ..., m}. A basis for N(A^T) may be constructed of m − r vectors {z_1, z_2, ..., z_{m−r}} where z_k, and only z_k, possesses a nonzero in its c_{r+k} component.

A Basis for the Null Space: Suppose that A is m-by-n with pivot indices {c_j | j = 1, ..., r} and free indices {c_j | j = r+1, ..., n}. A basis for N(A) may be constructed of n − r vectors {z_1, z_2, ..., z_{n−r}} where z_k, and only z_k, possesses a nonzero in its c_{r+k} component.

A Basis for the Row Space: Suppose A is m-by-n. The pivot rows of A_red constitute a basis for Ra(A^T).

A subset S of a vector space V is a subspace of V when
1. if x and y belong to S then so does x + y, and
2. if x belongs to S and t is real then tx belongs to S.

axial resistance: R_i = ρ_i (l/N) / (πa^2)

B

Basis: Any linearly independent spanning set of a subspace S is called a basis of S.

C

Capacitance of a Single Compartment: C_m = 2πa (l/N) c

capacitance of cell body: C_cb = A_cb c

Column Space: The column space of the m-by-n matrix S is simply the span of its columns, i.e., Ra(S) ≡ {Sx | x ∈ R^n}. This is a subspace (Section 4.7.2) of R^m. The notation Ra stands for range in this context.

Complex Addition: z_1 + z_2 ≡ x_1 + x_2 + i(y_1 + y_2)

Complex Conjugation: z̄_1 ≡ x_1 − iy_1

Complex Division: z_1/z_2 ≡ (z_1/z_2)(z̄_2/z̄_2) = (x_1x_2 + y_1y_2 + i(x_2y_1 − x_1y_2)) / (x_2^2 + y_2^2)

Complex Multiplication: z_1 z_2 ≡ (x_1 + iy_1)(x_2 + iy_2) = x_1x_2 − y_1y_2 + i(x_1y_2 + x_2y_1)

D

Dimension: The dimension of a subspace is the number of elements in its basis.

E

Elementary Row Operations:
1. You may swap any two rows.
2. You may add to a row a constant multiple of another row.

F

force balance:
1. Equilibrium is synonymous with the fact that the net force acting on each mass must vanish.
2. In symbols, y_1 − y_2 − f_1 = 0, y_2 − y_3 − f_2 = 0, y_3 − y_4 − f_3 = 0,
3. or, in matrix terms, By = f where f = (f_1, f_2, f_3)^T and B = [1, −1, 0, 0; 0, 1, −1, 0; 0, 0, 1, −1].

H

Hooke's Law:
1. The restoring force in a spring is proportional to its elongation. We call the constant of proportionality the stiffness, k_j, of the spring, and denote the restoring force by y_j.
2. The mathematical expression of this statement is y_j = k_j e_j, or,
3. in matrix terms, y = Ke where K = diag(k_1, k_2, k_3, k_4).

I

Inverse of S: Also dubbed "S inverse" for short, the value of this matrix stems from watching what happens when it is applied to each side of Sx = f. Namely, Sx = f ⇒ S^{-1}Sx = S^{-1}f ⇒ Ix = S^{-1}f ⇒ x = S^{-1}f.

Invertible, or Nonsingular Matrices: Matrices that do have an inverse. Example: The matrix S that we just studied is invertible. Another simple example is [0, 1; 1, 1].

L

Left Null Space: The left null space of a matrix is the null space (Section 4.2) of its transpose, i.e., N(A^T) = {y ∈ R^m | A^T y = 0}.

Linear Independence: A finite collection {s_1, s_2, ..., s_n} of vectors is said to be linearly independent when the only reals x_1, x_2, ..., x_n for which x_1 s_1 + x_2 s_2 + ⋯ + x_n s_n = 0 are x_1 = x_2 = ⋯ = x_n = 0. In other words, when the null space (Section 4.2) of the matrix whose columns are s_1, s_2, ..., s_n contains only the zero vector.

M

Magnitude of a Complex Number: |z_1| ≡ √(z_1 z̄_1) = √(x_1^2 + y_1^2)

membrane resistance: R_m = ρ_m / (2πa (l/N))

N

Nernst potentials: E_Na = (RT/F) log([Na]_o/[Na]_i) and E_K = (RT/F) log([K]_o/[K]_i), where R is the gas constant, T is temperature, and F is the Faraday constant.

Null Space: The null space of an m-by-n matrix A is the collection of those vectors in R^n that A maps to the zero vector in R^m. More precisely, N(A) = {x ∈ R^n | Ax = 0}.

O

orthogonal projection: A matrix P that satisfies P^2 = P is called a projection. A symmetric projection is called an orthogonal projection.

P

Pivot Column: Each column of A_red that contains a pivot is called a pivot column.

Pivot Row: Each nonzero row of A_red is called a pivot row.

Pivot: The first nonzero term in each row of A_red is called a pivot.

poles: Also called singularities, these are the points s at which Lx_1(s) blows up.

R

Rank: The number of pivots in a matrix is called the rank of that matrix.

resistance of cell body: R_cb = ρ_m / A_cb

Row Space: The row space of the m-by-n matrix A is simply the span of its rows, i.e., Ra(A^T) ≡ {A^T y | y ∈ R^m}.

S

singular matrix: A matrix that does not have an inverse. Example: A simple example is [1, 1; 1, 1].

Span: A finite collection {s_1, s_2, ..., s_n} of vectors in the subspace S is said to span S if each element of S can be written as a linear combination of these vectors. That is, if for each s ∈ S there exist n reals x_1, x_2, ..., x_n such that s = x_1 s_1 + x_2 s_2 + ⋯ + x_n s_n.

T

The Row Reduced Form: Given the matrix A we apply elementary row operations until each nonzero below the diagonal is eliminated. We refer to the resulting matrix as A_red.

V

Voltage-current law obeyed by a capacitor: The current through a capacitor is proportional to the time rate of change of the potential across it.



Attributions

Collection: Matrix Analysis. Edited by: Steven Cox. URL: http://cnx.org/content/col10048/1.4/

Each module below is copyright its listed authors and licensed under the Creative Commons Attribution License: http://creativecommons.org/licenses/by/1.0

Module: "Preface to Matrix Analysis". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10144/2.8/
Module: "Matrix Methods for Electrical Systems" (used here as "Nerve Fibers and the Strang Quartet"). By: Doug Daniels. URL: http://cnx.org/content/m10145/2.7/
Module: "CAAM 335 Chapter 1 Exercises". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10299/2.8/
Module: "Matrix Methods for Mechanical Systems: A Uniaxial Truss" (used here as "A Uniaxial Truss"). By: Doug Daniels. URL: http://cnx.org/content/m10146/2.6/
Module: "Matrix Methods for Mechanical Systems: A Small Planar Truss" (used here as "A Small Planar Truss"). By: Doug Daniels. URL: http://cnx.org/content/m10147/2.6/
Module: "Matrix Methods for Mechanical Systems: The General Planar Truss" (used here as "The General Planar Truss"). By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10148/2.9/
Module: "CAAM 335 Chapter 2 Exercises". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10300/2.6/
Module: "Column Space". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10266/2.9/
Module: "Null Space". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10293/2.9/
Module: "The Null and Column Spaces: An Example". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10368/2.4/
Module: "Left Null Space". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10292/2.7/
Module: "Row Space". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10296/2.7/
Module: "Exercises: Columns and Null Spaces". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10367/2.4/
Module: "Vector Space". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10298/2.6/
Module: "Subspaces". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10297/2.6/
Module: "Row Reduced Form". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10295/2.6/
Module: "Least Squares". By: CJ Ganier. URL: http://cnx.org/content/m10371/2.9/
Module: "Nerve Fibers and the Dynamic Strang Quartet". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10168/2.6/
Module: "The Old Laplace Transform" (used here as "The Laplace Transform"). By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10169/2.5/
Module: "The Inverse Laplace Transform". By: Steven Cox. URL: http://cnx.org/content/m10170/2.8/
Module: "The Backward-Euler Method". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10171/2.6/
Module: "Exercises: Matrix Methods for Dynamical Systems". By: Steven Cox. URL: http://cnx.org/content/m10526/2.4/
Module: "Matrix Analysis of the Branched Dendrite Nerve Fiber". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10177/2.7/
Module: "Complex Numbers, Vectors and Matrices". By: Steven Cox. URL: http://cnx.org/content/m10504/2.5/
Module: "Complex Functions". By: Steven Cox. URL: http://cnx.org/content/m10505/2.7/
Module: "Complex Differentiation". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10276/2.7/
Module: "Exercises: Complex Numbers, Vectors, and Functions". By: Steven Cox. URL: http://cnx.org/content/m10506/2.5/
Module: "Cauchy's Theorem". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10264/2.8/
Module: "Cauchy's Integral Formula". By: Doug Daniels, Steven Cox. URL: http://cnx.org/content/m10246/2.8/
Module: "The Inverse Laplace Transform: Complex Integration". By: Steven Cox. URL: http://cnx.org/content/m10523/2.3/
Module: "Exercises: Complex Integration". By: Steven Cox. URL: http://cnx.org/content/m10524/2.4/
Module: "The Eigenvalue Problem" (used here as "Introduction"). By: Steven Cox. URL: http://cnx.org/content/m10405/2.4/
Module: "Eigenvalue Problem: The Transfer Function" (used here as "The Resolvent"). By: Steven Cox. URL: http://cnx.org/content/m10490/2.3/
Module: "The Partial Fraction Expansion of the Transfer Function" (used here as "The Partial Fraction Expansion of the Resolvent"). By: Steven Cox. URL: http://cnx.org/content/m10491/2.5/
Module: "The Spectral Representation". By: Steven Cox. URL: http://cnx.org/content/m10492/2.3/
Module: "The Eigenvalue Problem: Examples". By: Steven Cox. URL: http://cnx.org/content/m10493/2.4/
Module: "The Eigenvalue Problem: Exercises". By: Steven Cox. URL: http://cnx.org/content/m10494/2.3/
Module: "The Spectral Representation of a Symmetric Matrix". By: Steven Cox. URL: http://cnx.org/content/m10382/2.4/
Module: "Gram-Schmidt Orthogonalization". By: Steven Cox. URL: http://cnx.org/content/m10509/2.4/
Module: "The Diagonalization of a Symmetric Matrix". By: Steven Cox. URL: http://cnx.org/content/m10558/2.4/
Module: "The Matrix Exponential" (used here as "Overview"). By: Steven Cox. URL: http://cnx.org/content/m10677/2.9/
Module: "The Matrix Exponential as a Limit of Powers". By: Steven Cox. URL: http://cnx.org/content/m10683/2.7/
Module: "The Matrix Exponential as a Sum of Powers". By: Steven Cox. URL: http://cnx.org/content/m10678/2.15/
Module: "The Matrix Exponential via the Laplace Transform". By: Steven Cox. URL: http://cnx.org/content/m10679/2.10/
Module: "The Matrix Exponential via Eigenvalues and Eigenvectors". By: Steven Cox. URL: http://cnx.org/content/m10680/2.13/
Module: "The Mass-Spring-Damper System". By: Steven Cox. URL: http://cnx.org/content/m10691/2.4/
Module: "The Singular Value Decomposition". By: Steven Cox. URL: http://cnx.org/content/m10739/2.4/

Matrix Analysis

Equilibria and the solution of linear and linear least squares problems. Dynamical systems and the eigenvalue problem with the Jordan form and Laplace transform via complex integration.
