University of Sydney MATH3063 LECTURE NOTES Shane Leviton 6-7-2016


Contents

Introduction
    Order of ODE
    Explicit ODEs
    Notation
        Example
Systems of ODEs
    Example 0: 𝑓(π‘₯) = 0
    Eg 2: 𝑓(π‘₯) = 𝑐 (or 𝑓(π‘₯) = 𝑓(𝑑))
    But what when 𝑓(π‘₯) is a standard function
Matrix ODEs (lecture 2)
    Linear equation of π‘₯
    Solving: finding the eigenvalues/eigenvectors
    Why do we care about eigenvectors/values?
Matrix exponentials
    When does 𝑒^𝐴 exist?
    exp(𝐴) first class: full set of linearly independent eigenvectors
    Diagonalizable or semi-simple matrix
    How big of a restriction is a full set of LI eigenvectors?
    Matrix and other term exponentials
    Complex numbers and matrices
    Real normal form of matrices
    Repeated eigenvalues
        Generalized eigenspace
        Primary decomposition theorem
        Theorems
        General algorithm for finding 𝑒^𝐴
        Derivative of 𝑒^(𝐴𝑑)
Phase Plots
    Example
    Phase Plot/Phase line
        Remarks
        Constant solution
    Spectrum
        Definition of spectrum
        Spectrally stable definition
    Stable/unstable/centre subspaces
        Primary decomposition theorem
        Generally
        Hyperbolic
        Bounded for forward time: definition
        Linearly stable definition
        Asymptotically linearly stable definition
    Restricted dynamics
        Generally
        The theory
    Classification of 2D linear systems
        Our approach
        Discriminant
        Overall picture
    Linear ODEs with non-constant coefficients
        Principal fundamental solution matrix
        Liouville's/Abel's Formula (Theorem)
PART 2: METRIC SPACES
    Start of thinking
        Key idea
    Formal definition of metric spaces
        Metric (distance function)
        Metric space
    Topology of metric spaces
        Open ball
        Examples
        Open subset of metric space
        Closed subset of metric space
        Interior
        Closure
        Boundary
        Limit point/accumulation point
    Sequences and Completeness
        Definition: complete sequence
    Contraction Mapping Principle
        A contraction is a map
        Definition: Lipschitz function
        Definition: locally Lipschitz
    Existence and Uniqueness of solutions of ODEs
        Equation *
        Solution
    Dependence on initial conditions and parameters
        Theorem (P-L again)
        Theorem: families of solutions
    Maximal intervals of existence
        Definition: maximal interval of existence
PART 3: NON-LINEAR PLANAR AUTONOMOUS ODES
    Terminology
        Function vector
    Talking about what solutions are
        Eg: pendulum equation
        Phase portraits
        Isoclines
    Linearisation
        Generally for linearisation
        Hyperbolic good, non-hyperbolic = bad
    More qualitative results
        Theorem: unique phase curves
        Closed phase curves/periodic solutions
        Hamiltonian systems
    Lotka-Volterra/Predator-Prey system
        Definition of predator-prey
        Analysis of system
        Modifications to equations
    Epidemics
        SIR model
    Abstraction, global existence and reparametrisation
        Definition: 1-parameter family
        Definition: orbit/trajectory
        Existence of complete flow
    Flows on ℝ
        Theorem: types of flows in ℝ
    𝛼 (alpha) and πœ” (omega) limit sets
        Asymptotic limit point
    Non-linear stability (of critical points)
        Lyapunov stable definition
        Asymptotically stable: definition
        Lyapunov functions
        LaSalle's Invariance Principle
    Hilbert's 16th problem and Poincare-Bendixson theorem
        Question: Hilbert's 16th problem
    Poincare-Bendixson theorem proof
        Proof
        Definitions of sets: disconnected
        Proof of Poincare-Bendixson theorem
        Bendixson's Negative Criterion
INDEX THEORY (Bonus)
    Set up
        Theta
    Index: definition
        Theorem: on Index
        Poincare Index
    Index at infinity
        Computing the index at infinity
        Theorem: Poincare-Hopf Index theorem
Part 4: Bifurcation theory
    Let's start with an example in 1D
    Bifurcations
        Bifurcation diagram
        Bifurcation
        Example 2 (in 2D) (saddle-node) bifurcation
    Transcritical bifurcation
        1D example
    Supercritical pitchfork bifurcation
        Example
    Logistic growth with harvesting
    Hopf bifurcation
        Definition of Hopf
    Some more examples of bifurcations
    FIREFLIES AND FLOWS ON A CIRCLE
        Non-uniform oscillator
    Glycolysis
        Model: Sel'kov 1968
    Catastrophe
        Eg 1: fold bifurcation
        Eg 2: swallowtail catastrophe
    Hysteresis and van der Pol oscillator
        Van der Pol oscillator example


MATH3963 NON LINEAR DIFFERENTIAL EQUATIONS WITH APPLICATIONS (biomaths) NOTES

- Robert Marangell [email protected]

Assessment:

- Midterm (Thursday April 21) 25%

- 2 assignments 10% each (due 4/4 and 30/5)

- 55% exam

Assignments are to be typed (eg LaTeX)

PART 1: LINEAR SYSTEMS

Introduction:

- What is an ODE; how to understand them

A DE is a functional relationship between a function and its derivatives.

An ODE is one where the function has only one independent variable.

Eg: 𝑑 independent; π‘₯(𝑑), 𝑦(𝑑) dependent

Not just scalar, but also vector-valued functions (systems of ODEs):

π‘₯(𝑑) ∈ ℝ^𝑑

π‘₯(𝑑) = (π‘₯1(𝑑), π‘₯2(𝑑), …, π‘₯𝑑(𝑑)), with each π‘₯𝑗: ℝ β†’ ℝ

Order of ODE: - Highest derivative appearing in the equation.

In general, an 𝑛th order ODE can be written implicitly as

𝐹(𝑑, 𝑦(𝑑), 𝑑𝑦(𝑑)/𝑑𝑑, …, 𝑑^(𝑛)𝑦(𝑑)/𝑑𝑑^(𝑛)) = 0

Explicit ODEs: If you can isolate the highest derivative, it is an explicit ODE:

𝑑^(𝑛)𝑦(𝑑)/𝑑𝑑^(𝑛) = 𝐺(𝑑, 𝑦(𝑑), 𝑑𝑦(𝑑)/𝑑𝑑, …, 𝑑^(π‘›βˆ’1)𝑦(𝑑)/𝑑𝑑^(π‘›βˆ’1))

(explicit is easier; implicit is very hard)

Notation: Use a dot for 𝑑/𝑑𝑑, i.e. 𝑦̇ = 𝑑𝑦/𝑑𝑑.

Example: 𝑑^3𝑦/𝑑𝑑^3 + 2𝑦̈𝑦 + 3(𝑦̇)^2 + 𝑦 = 0

You can reduce the order of an ODE (to 1), in exchange for making it a system of ODEs:

- Introduce new variables into the system

𝑦0 = 𝑦
𝑦1 = 𝑦̇0 = 𝑦̇
𝑦2 = 𝑦̇1 = 𝑦̈

∴ 𝑦̇0 = 𝑦1
𝑦̇1 = 𝑦2
𝑦̇2 = βˆ’2𝑦2𝑦0 βˆ’ 3𝑦1^2 βˆ’ 𝑦0

which is a first-order system in 3 variables, equivalent to the original 3rd order ODE.
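The reduction above can also be checked numerically; a minimal Python sketch (the step size, final time and initial data here are arbitrary illustrative choices, not from the notes):

```python
# Reduce y''' + 2*y''*y + 3*(y')**2 + y = 0 to a first-order system with
# y0 = y, y1 = y', y2 = y'', then integrate with Euler's method.

def rhs(y):
    y0, y1, y2 = y
    return [y1, y2, -2.0 * y2 * y0 - 3.0 * y1 ** 2 - y0]

def euler(y, t_end, h=1e-4):
    """Crude fixed-step Euler integration from t = 0 to t_end."""
    t = 0.0
    while t < t_end:
        f = rhs(y)
        y = [yi + h * fi for yi, fi in zip(y, f)]
        t += h
    return y

# Hypothetical initial data: y(0) = 1, y'(0) = 0, y''(0) = 0
print(euler([1.0, 0.0, 0.0], 0.1))
```

For these initial data 𝑦⃛(0) = βˆ’1, so 𝑦2 β‰ˆ βˆ’π‘‘ for small 𝑑, which the output reflects.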

Systems of ODEs - We can play the same game for systems of ODEs:

𝑦̈ + 𝐴𝑦 = 0 (with 𝑦 ∈ ℝ^2; 𝐴 = (π‘Ž 𝑏; 𝑐 𝑑))

∴ introduce 𝑦3, 𝑦4:

𝑦̇1 = 𝑦3
𝑦̇2 = 𝑦4
𝑦̇3 = 𝑦̈1 = βˆ’π‘Žπ‘¦1 βˆ’ 𝑏𝑦2
𝑦̇4 = 𝑦̈2 = βˆ’π‘π‘¦1 βˆ’ 𝑑𝑦2

∴ in matrix form:

( 𝑦̇1 )   (  0   0   1   0 ) ( 𝑦1 )
( 𝑦̇2 ) = (  0   0   0   1 ) ( 𝑦2 )
( 𝑦̇3 )   ( βˆ’π‘Ž  βˆ’π‘   0   0 ) ( 𝑦3 )
( 𝑦̇4 )   ( βˆ’π‘  βˆ’π‘‘   0   0 ) ( 𝑦4 )

Let 𝑦 = (𝑦1, 𝑦2, 𝑦3, 𝑦4). The system of equations is now:

𝑦̇ = 𝐡𝑦

More generally, the systems we study have the form π‘₯Μ‡ = 𝑓(π‘₯).


Example 0: 𝑓(π‘₯) = 0 ∴ οΏ½οΏ½ = 0

β†’ π‘₯οΏ½οΏ½ = 0βˆ€π‘– οΏ½οΏ½ = π‘π‘“π‘œπ‘Ÿπ‘₯, 𝑐 ∈ ℝ𝑛

This is all the solutions to οΏ½οΏ½ = 0; and the solutions form an 𝑛 dimensional vector space

Eg 2: 𝑓(π‘₯) = 𝑐 (or 𝑓(π‘₯) = 𝑓(𝑑) ∴ οΏ½οΏ½ = 𝑐

β†’ οΏ½οΏ½(𝑑) = 𝑐𝑑 + 𝑑

But what about when 𝑓(π‘₯) is a more general function? Suppose 𝑓(π‘₯) is linear in π‘₯.

Remember: 𝑓 is "linear" if:

𝑓(𝑐π‘₯ + 𝑦) = 𝑐𝑓(π‘₯) + 𝑓(𝑦)

Recall any linear map can be represented by a matrix, by choosing a basis:

∴ π‘₯Μ‡ = 𝐴(𝑑)π‘₯

If 𝐴(𝑑) = 𝐴 (independent of 𝑑), it is called a constant coefficient system:

π‘₯Μ‡ = 𝐴π‘₯

Eg:

𝐴 = ( 0 1 0 0
      1 0 1 0
      0 1 0 1
      0 0 1 0 )

β†’ π‘₯Μ‡ = 𝐴π‘₯ gives

π‘₯Μ‡1 = π‘₯2
π‘₯Μ‡2 = π‘₯1 + π‘₯3
π‘₯Μ‡3 = π‘₯2 + π‘₯4
π‘₯Μ‡4 = π‘₯3

which is the system of ODEs we now have to solve.


Matrix ODEs (lecture 2)

π‘₯Μ‡ = 𝑓(π‘₯)

Linear equation of π‘₯:

π‘₯Μ‡ = 𝐴π‘₯ (π‘₯ ∈ ℝ^𝑛 and 𝐴 ∈ 𝑀𝑛(ℝ))

Eg:

𝐴 = ( 0 1 0
      1 0 1
      0 1 0 )

π‘₯Μ‡ = 𝐴π‘₯; π‘₯ = (π‘₯1, π‘₯2, π‘₯3)

∴ π‘₯Μ‡1 = π‘₯2
π‘₯Μ‡2 = π‘₯1 + π‘₯3
π‘₯Μ‡3 = π‘₯2

So a solution is a vector of three functions which solves this system.

Solving: FINDING THE EIGENVALUES/EIGENVECTORS!!

Recall: a value πœ† ∈ β„‚ is an eigenvalue of an 𝑛 Γ— 𝑛 matrix 𝐴 if there exists a non-zero vector 𝑣 ∈ β„‚^𝑛 such that

𝐴𝑣 = πœ†π‘£

𝑣 is the corresponding eigenvector.

β†’ (𝐴 βˆ’ πœ†πΌ)𝑣 = 0 β†’ 𝑣 ∈ ker(𝐴 βˆ’ πœ†πΌ)
β†’ 𝐴 βˆ’ πœ†πΌ is not invertible
β†’ det(𝐴 βˆ’ πœ†πΌ) = 0

det(𝐴 βˆ’ πœ†πΌ) is a polynomial in πœ† of degree 𝑛, called the characteristic polynomial of 𝐴: 𝜌𝐴(πœ†).

Eigenvalues are roots of 𝜌𝐴(πœ†) = 0.

Fundamental theorem of algebra: 𝜌𝐴(πœ†) has exactly 𝑛 (possibly complex) roots, counted according to algebraic multiplicity.


Algebraic multiplicity: the algebraic multiplicity of a root π‘Ÿ of 𝜌𝐴(πœ†) = 0 is the largest π‘˜ such that 𝜌𝐴(πœ†) = (πœ† βˆ’ π‘Ÿ)^π‘˜ π‘ž(πœ†) with π‘ž(π‘Ÿ) β‰  0. The algebraic multiplicity of an eigenvalue πœ† of 𝐴 is its algebraic multiplicity as a root of 𝜌𝐴.

Geometric multiplicity: the geometric multiplicity of an eigenvalue is the maximum number of linearly independent eigenvectors for that eigenvalue.

Algebraic vs geometric multiplicity:

algebraic multiplicity β‰₯ geometric multiplicity

Returning to the example:

𝐴 = ( 0 1 0
      1 0 1
      0 1 0 )

det(𝐴 βˆ’ πœ†πΌ) = | βˆ’πœ†  1   0 |
              |  1  βˆ’πœ†  1 | = βˆ’πœ†(πœ†^2 βˆ’ 2) = 0
              |  0   1  βˆ’πœ† |

β†’ πœ† = 0, ±√2

Eigenvectors are found by computing ker(𝐴 βˆ’ πœ†πΌ) for each eigenvalue, giving:

πœ†1 = 0 β†’ 𝑣1 = (1, 0, βˆ’1)
πœ†2 = √2 β†’ 𝑣2 = (1, √2, 1)
πœ†3 = βˆ’βˆš2 β†’ 𝑣3 = (1, βˆ’βˆš2, 1)
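These eigenpairs are easy to check numerically; a quick sketch using numpy (not part of the notes):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

# Eigenvalues come back in no particular order; sort for comparison.
evals, evecs = np.linalg.eig(A)
print(np.sort(evals))  # approximately [-sqrt(2), 0, sqrt(2)]

# Check A v = lambda v for the hand-computed pair (sqrt(2), (1, sqrt(2), 1)).
v = np.array([1.0, np.sqrt(2.0), 1.0])
assert np.allclose(A @ v, np.sqrt(2.0) * v)
```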

Why do we care about eigenvectors/values?

π‘₯Μ‡ = 𝐴π‘₯

Suppose that 𝑣 is an eigenvector of 𝐴 ∈ 𝑀𝑛(β„‚), with eigenvalue πœ†. Consider the vector of functions:

π‘₯(𝑑) = 𝑐(𝑑)𝑣

where 𝑐 is some unknown function 𝑐: ℝ β†’ β„‚.

If π‘₯ solves π‘₯Μ‡ = 𝐴π‘₯, then

𝑐̇(𝑑)𝑣 = 𝐴𝑐(𝑑)𝑣 = 𝑐(𝑑)𝐴𝑣 = πœ†π‘(𝑑)𝑣 (as 𝑣 is an eigenvector)

As 𝑣 β‰  0:

𝑐̇(𝑑) = πœ†π‘(𝑑)

which is a differential equation in one variable:

∴ 𝑐(𝑑) = 𝑒^(πœ†π‘‘)

SO:

π‘₯(𝑑) = 𝑒^(πœ†π‘‘)𝑣

is a solution vector to π‘₯Μ‡ = 𝐴π‘₯.

So we have a 1D subspace of solutions: π‘₯(𝑑) = 𝑐0𝑒^(πœ†π‘‘)𝑣

Notes:

1. Solutions to π‘₯Μ‡ = 𝐴π‘₯ of the form π‘₯ = 𝑐0𝑒^(πœ†π‘‘)𝑣, where πœ† is an eigenvalue of 𝐴 and 𝑣 is the corresponding eigenvector, are called EIGENSOLUTIONS.

2. What we just did has a name: an "ansatz" (guess what the solution could be, and check that it is correct).

Back to the example:

π‘₯(𝑑) = 𝑒^(0𝑑) (1, 0, βˆ’1); 𝑦(𝑑) = 𝑒^(√2𝑑) (1, √2, 1); 𝑧(𝑑) = 𝑒^(βˆ’βˆš2𝑑) (1, βˆ’βˆš2, 1)

So, as the equation is linear, any linear combination of these is a solution:

𝑋(𝑑) = 𝑐1 (1, 0, βˆ’1) + 𝑐2𝑒^(√2𝑑) (1, √2, 1) + 𝑐3𝑒^(βˆ’βˆš2𝑑) (1, βˆ’βˆš2, 1)

IS OUR SOLUTION!!

The set of solutions 𝑋(𝑑) is actually a 3D vector space.
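One can verify numerically that such a combination really solves π‘₯Μ‡ = 𝐴π‘₯, by comparing a finite-difference derivative of 𝑋(𝑑) against 𝐴𝑋(𝑑); a sketch (the coefficients 𝑐1, 𝑐2, 𝑐3 and the test time are arbitrary choices):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
r2 = np.sqrt(2.0)

def X(t, c1=1.0, c2=2.0, c3=-0.5):
    """General solution as a combination of the three eigensolutions."""
    return (c1 * np.array([1.0, 0.0, -1.0])
            + c2 * np.exp(r2 * t) * np.array([1.0, r2, 1.0])
            + c3 * np.exp(-r2 * t) * np.array([1.0, -r2, 1.0]))

# Central finite-difference approximation of dX/dt at t = 0.3
t, h = 0.3, 1e-6
dX = (X(t + h) - X(t - h)) / (2 * h)
assert np.allclose(dX, A @ X(t), atol=1e-4)
```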


Matrix exponentials

For an 𝑛 Γ— 𝑛 matrix 𝐴, what is 𝑒^𝐴 (= exp(𝐴))?

For a matrix: DEFINE

𝑒^𝐴 = exp(𝐴) = βˆ‘_(π‘˜=0)^∞ (1/π‘˜!) 𝐴^π‘˜ = 𝐼 + 𝐴 + (1/2)𝐴^2 + (1/3!)𝐴^3 + β‹―

If the entries converge, then 𝑒^𝐴 IS an 𝑛 Γ— 𝑛 matrix.
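The definition can be seen in action by summing the series numerically; a minimal pure-Python sketch (the choice of matrix and the truncation order are illustrative assumptions):

```python
import math

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def expm_series(A, terms=30):
    """Partial sum I + A + A^2/2! + ... of the matrix exponential."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    power = [row[:] for row in result]                              # A^0
    for k in range(1, terms):
        power = mat_mul(power, A)                                   # A^k
        result = mat_add(result, [[x / math.factorial(k) for x in row]
                                  for row in power])
    return result

# For a diagonal matrix the exponential is just exp of each diagonal entry.
E = expm_series([[1.0, 0.0], [0.0, 2.0]])
assert abs(E[0][0] - math.e) < 1e-12 and abs(E[1][1] - math.e ** 2) < 1e-9
```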

When does 𝑒^𝐴 exist?

exp(𝐴) first class: full set of linearly independent eigenvectors

- Hypothesis: Suppose 𝐴 has a full set of linearly independent eigenvectors
- Assume that hypothesis for the rest of today (we'll look at what happens if it fails later)

Let 𝑣1, …, 𝑣𝑛 be the eigenvectors, with eigenvalues πœ†1, …, πœ†π‘›. Form the following matrix:

𝑃 = (𝑣1, 𝑣2, …, 𝑣𝑛)

(whose columns are the eigenvectors)

∴ 𝐴𝑃 = (𝐴𝑣1, 𝐴𝑣2, …, 𝐴𝑣𝑛) = (πœ†1𝑣1, …, πœ†π‘›π‘£π‘›) (by column-wise multiplication)

= 𝑃 diag(πœ†1, …, πœ†π‘›) ≔ 𝑃Λ

∴ 𝑃⁻¹𝐴𝑃 = Ξ› (= diag(πœ†1, …, πœ†π‘›))

∴ 𝑒^Ξ› = 𝐼 + Ξ› + (1/2)Ξ›^2 + β‹― = diag(𝑒^πœ†1, …, 𝑒^πœ†π‘›)

Lemma: (𝑃⁻¹𝐴𝑃)^π‘˜ = 𝑃⁻¹𝐴^π‘˜π‘ƒ, so applying the series term by term, 𝑒^(𝑃⁻¹𝐴𝑃) = 𝑃⁻¹𝑒^𝐴𝑃.

So:

∴ 𝑒^𝐴 = 𝑃𝑒^Λ𝑃⁻¹ (with 𝑒^Ξ› = diag(𝑒^πœ†1, …, 𝑒^πœ†π‘›))

AND: the eigenvalues of 𝑒^𝐴 are 𝑒^πœ†π‘—, where πœ†π‘— are the eigenvalues of 𝐴 (with the same eigenvectors).


Example:

𝐴 = ( 0 1 0
      1 0 1
      0 1 0 )

with eigenvectors (1, 0, βˆ’1), (1, √2, 1), (1, βˆ’βˆš2, 1):

𝑃 = (  1   1    1
       0   √2  βˆ’βˆš2
      βˆ’1   1    1 )

Ξ› = ( 0   0    0
      0   √2   0
      0   0   βˆ’βˆš2 )

∴ 𝑒^𝐴 = 𝑃𝑒^Λ𝑃⁻¹ = 𝑃 diag(1, 𝑒^√2, 𝑒^(βˆ’βˆš2)) 𝑃⁻¹ = (ugly answer)

Diagonalizable or semi-simple matrix

A matrix 𝐴 for which there exists an invertible matrix 𝑃 such that

𝑃⁻¹𝐴𝑃 = Ξ›

(where Ξ› is diagonal) is called diagonalizable or semisimple.

This says: if 𝐴 is semisimple, then 𝑒^𝐴 exists, and we can compute it.

How big of a restriction is a full set of LI eigenvectors?

Defective matrices: if 𝐴 does not have a full set of linearly independent eigenvectors, it is called defective.

- How big of a restriction is being semisimple?
- Are there many semisimple matrices?

Proposition: Suppose πœ†1 β‰  πœ†2 are eigenvalues of a matrix 𝐴 with eigenvectors 𝑣1 and 𝑣2. Then 𝑣1 and 𝑣2 are linearly independent.


Proof:

If there were a dependence relation, then there would exist scalars π‘˜1, π‘˜2 β‰  0 such that

π‘˜1𝑣1 + π‘˜2𝑣2 = 0

Applying 𝐴:

𝐴(π‘˜1𝑣1 + π‘˜2𝑣2) = 0 β†’ π‘˜1πœ†1𝑣1 + π‘˜2πœ†2𝑣2 = 0 (as eigenvectors)

Subtracting πœ†1 times the original relation (πœ†1π‘˜1𝑣1 + πœ†1π‘˜2𝑣2 = 0):

∴ (πœ†2 βˆ’ πœ†1)π‘˜2𝑣2 = 0

β†’ π‘˜2 = 0 (∴ contradiction)

Example:

𝐴 = (π‘Ž 𝑏𝑐 𝑑

)

𝜌𝐴(πœ†) = πœ†2 βˆ’ (π‘Ž + 𝑑)πœ† + π‘Žπ‘‘ βˆ’ 𝑏𝑐

= πœ†2 βˆ’ πœπœ† + 𝛿(π‘‘π‘Žπ‘˜π‘–π‘›π‘”πœ = π‘Ž + 𝑑; 𝛿 = π‘Žπ‘‘ βˆ’ 𝑏𝑐)

(𝜏 = π‘‡π‘Ÿπ‘Žπ‘π‘’(𝐴) =βˆ‘π‘Žπ‘—π‘—

𝑛

𝑗=1

(π‘ π‘’π‘šπ‘œπ‘“π‘‘β„Žπ‘’π‘‘π‘–π‘Žπ‘”π‘œπ‘›π‘Žπ‘™π‘ ))

∴ πœ† =𝜏 Β± √𝜏2 βˆ’ 4𝛿

2π‘Žπ‘Ÿπ‘’π‘Ÿπ‘œπ‘œπ‘‘π‘ 

∴ πœ†+ β‰  πœ†βˆ’π‘–π‘“πœ2 βˆ’ 4𝛿 = 0

∴ 𝜏2 βˆ’ 4𝛿 = 0

β†’ 𝛿 =𝜏2

4

Therefore: any matrix which does not lie on this parabola in 𝜏, 𝛿 space will be diagonalizable

𝜏2 βˆ’ 4𝛿 = (π‘Ž βˆ’ 𝑑)2 + 4𝑏𝑐

The set of matricies with repeated eigenvalues, is the locus of points with

(π‘Ž βˆ’ 𝑑)2 + 4𝑏𝑐 = 0

A 3D hypersurface, in 4D space
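A quick numerical check of the two equivalent expressions for the discriminant (a sketch in Python; the function name `discriminant` is my own, not from the notes):

```python
def discriminant(a, b, c, d):
    """tau^2 - 4*delta for the 2x2 matrix [[a, b], [c, d]];
    zero exactly on the locus (a - d)^2 + 4*b*c = 0 of repeated eigenvalues."""
    tau = a + d            # trace
    delta = a * d - b * c  # determinant
    return tau * tau - 4 * delta

# the two expressions for the discriminant agree:
assert discriminant(1.0, 2.0, 3.0, 4.0) == (1.0 - 4.0)**2 + 4 * 2.0 * 3.0
# a shear [[1, 1], [0, 1]] lies on the parabola (repeated eigenvalue 1):
assert discriminant(1.0, 1.0, 0.0, 1.0) == 0.0
```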

So what if A is defective? Can we still compute e^A?

- Yes

Two n Γ— n matrices A, B commute if

AB = BA


Commutator

We define the commutator as:

[A, B] ≔ AB βˆ’ BA

so [A, B] = 0 iff A and B commute.

Proposition

If A and B commute, then

e^(A+B) = e^A e^B

- Aside: the converse needs extra hypotheses; for example, if e^(A+B) = e^A e^B and A and B are symmetric, then AB = BA.

Proof

e^(A+B) = I + (A + B) + Β½(A + B)Β² + β‹―

= I + A + B + Β½(AΒ² + AB + BA + BΒ²) + β‹―

= I + A + B + Β½AΒ² + AB + Β½BΒ² + β‹―   (as A and B commute)

And:

e^A e^B = (I + A + Β½AΒ² + β‹―)(I + B + Β½BΒ² + β‹―)

= I + (A + B) + Β½AΒ² + AB + Β½BΒ² + β‹―

so the two series agree term by term.
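The proposition can be checked numerically. This is a sketch, assuming numpy and scipy (not part of the notes) are available; a matrix always commutes with any polynomial in itself, which gives an easy commuting pair:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# A commutes with any polynomial in A
A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = 2 * A + A @ A
assert np.allclose(A @ B, B @ A)
assert np.allclose(expm(A + B), expm(A) @ expm(B))

# a non-commuting pair generally breaks the identity
C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])
assert not np.allclose(expm(C + D), expm(C) @ expm(D))
```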

Example:

A = (a 0; 0 b) (a β‰  b);   B = (0 1; 0 0)

∴ e^A = (e^a 0; 0 e^b);   e^B = (1 1; 0 1)

e^(A+B) = ( e^a   (e^a βˆ’ e^b)/(a βˆ’ b) ; 0   e^b )

while e^A e^B = (e^a e^a; 0 e^b), so

e^A e^B = e^(A+B) β†’ (a βˆ’ b)e^a = e^a βˆ’ e^b

If x = a βˆ’ b:

∴ x = 1 βˆ’ e^(βˆ’x)

One example of a (complex) solution is

x β‰ˆ βˆ’6.6022 βˆ’ 736.693i

Nilpotent matrix:

Definition:

A matrix N is nilpotent if some power of it is 0 (N^k = 0 for some k).

Eg: N = (0 1 0; 0 0 1; 0 0 0)

β†’ NΒ³ = 0 (calculate!)

This means that the power series for e^N terminates:

e^N = I + N + Β½NΒ² + 0 + 0 + β‹―
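The terminating series can be verified directly (a numpy/scipy sketch; these libraries are my assumption, not part of the notes):

```python
import numpy as np
from scipy.linalg import expm

# 3x3 upper-shift matrix: ones on the superdiagonal, so N^3 = 0
N = np.diag([1.0, 1.0], k=1)
assert np.allclose(np.linalg.matrix_power(N, 3), 0)

# the exponential series terminates: e^N = I + N + N^2/2
series = np.eye(3) + N + (N @ N) / 2
assert np.allclose(expm(N), series)
```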

So, how do we compute the exponential of a defective matrix?

Jordan Canonical Form:

Let A be an n Γ— n matrix with complex entries. Then there exists a basis of β„‚βΏ, and an invertible matrix B (of basis vectors), such that

B⁻¹AB = S + N

where S is diagonal (semisimple), N is nilpotent, and S and N commute (SN βˆ’ NS = 0).

This means that e^A can be computed for all matrices:

B⁻¹AB = S + N

β†’ B⁻¹ e^A B = e^S e^N

e^A = B e^S e^N B⁻¹

This is cool.

Matrix exponentials with a scalar parameter:

So now consider At, where A ∈ Mβ‚™(ℝ) and t ∈ ℝ. Then:

e^(At) = Ξ£β‚–β‚Œβ‚€^∞ (tᡏ/k!) Aᡏ

For a fixed A, the exponential is a map from the real numbers to matrices:

e^(At): ℝ β†’ Mβ‚™(ℝ)

exp_A(t): t ↦ e^(At)

exp_A(t + s) = e^(A(t+s)) = e^(At) e^(As) = exp_A(t) exp_A(s)

e^(At) is invertible! Its inverse is e^(βˆ’At).
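Both properties, the homomorphism law and invertibility, can be checked numerically (a sketch assuming numpy/scipy; note At and As always commute, which is why the law holds):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.0, -1.0]])
t, s = 0.4, 1.1

# exp_A is a homomorphism: exp_A(t + s) = exp_A(t) exp_A(s)
assert np.allclose(expm(A * (t + s)), expm(A * t) @ expm(A * s))

# e^{At} is invertible with inverse e^{-At}
assert np.allclose(expm(A * t) @ expm(-A * t), np.eye(2))
```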

So, what is the derivative of this map from the real numbers to the matrices?

Complex numbers and matrices:

Let J = (0 βˆ’1; 1 0)

β†’ JΒ² = βˆ’I; JΒ³ = βˆ’J; J⁴ = I

e^(Jt) = I + Jt + Β½JΒ²tΒ² + β‹―

= I + Jt βˆ’ Β½tΒ²I βˆ’ (1/3!)tΒ³J + β‹―

= (1 βˆ’ tΒ²/2 + t⁴/4! βˆ’ β‹―)I + (t βˆ’ tΒ³/3! + β‹―)J

= cos t I + sin t J

= (cos t βˆ’sin t; sin t cos t)

Which is the rotation matrix!

So the matrix J plays the role of the complex number i, and we have re-derived Euler’s formula:

e^(it) = cos t + i sin t
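A numerical sketch of this identity (assuming numpy/scipy, which the notes do not use), with J as defined above:

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, -1.0], [1.0, 0.0]])
t = 0.7

# e^{Jt} is rotation by angle t
rotation = np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])
assert np.allclose(expm(J * t), rotation)

# J behaves like i: J^2 = -I, mirroring i^2 = -1
assert np.allclose(J @ J, -np.eye(2))
```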

- We can actually take any complex number z = x + iy and find its corresponding matrix

M_z ≔ xI + yJ = (x βˆ’y; y x)

(the technical term is a field isomorphism)

Complex number algebra:

M_(z₁+zβ‚‚) = M_z₁ + M_zβ‚‚

M_(z₁zβ‚‚) = M_z₁ M_zβ‚‚ = M_zβ‚‚ M_z₁ = M_(zβ‚‚z₁)

Complex exponentials:

M_(e^z) = e^(M_z)

For z = x + iy:

e^z = e^x cos y + i e^x sin y ↔ e^x (cos y βˆ’sin y; sin y cos y)

= e^x cos y I + e^x sin y J

Complex number things:

det(M_z) = xΒ² + yΒ² = |z|Β²

M_zΜ„ = (x y; βˆ’y x) = M_z^T

M_r = rI (if r ∈ ℝ)

M_zΜ„ M_z = M_(|z|Β²) = |z|Β² I

If z β‰  0, M_z is invertible, and

M_z⁻¹ = (1/|z|Β²) M_z^T = (1/det(M_z)) M_z^T
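These identities are easy to spot-check (a numpy sketch; the helper `M` is my own name for the map z ↦ M_z):

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])

def M(z):
    """Matrix representation x*I + y*J of z = x + iy."""
    return z.real * np.eye(2) + z.imag * J

z1, z2 = 1.0 + 2.0j, 3.0 - 1.0j
assert np.allclose(M(z1 + z2), M(z1) + M(z2))        # addition
assert np.allclose(M(z1 * z2), M(z1) @ M(z2))        # multiplication
assert np.isclose(np.linalg.det(M(z1)), abs(z1)**2)  # det = |z|^2
assert np.allclose(np.linalg.inv(M(z1)), M(z1).T / abs(z1)**2)  # inverse
```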

So let’s compute e^(Jt) = (cos t βˆ’sin t; sin t cos t) by diagonalizing:

Eigenvalues of Jt are Β±it

Eigenvectors are (Β±i; 1)

P = (i βˆ’i; 1 1);   P⁻¹ = (1/2i)(1 i; βˆ’1 i)

Ξ› = (it 0; 0 βˆ’it)

β†’ e^(Jt) = (1/2i)(i βˆ’i; 1 1)(e^(it) 0; 0 e^(βˆ’it))(1 i; βˆ’1 i)

= (cos t βˆ’sin t; sin t cos t)

So we got what we started with! (which is good)

Real normal form of matrices

Moving towards a real β€œnormal” form for matrices with real entries but complex eigenvalues.

Fact: if A ∈ Mβ‚™(ℝ), then if Ξ» is an eigenvalue, so is λ̄ (complex conjugates come in pairs). Eigenvectors come in conjugate pairs as well:

Av = Ξ»v β†’ AvΜ„ = λ̄vΜ„

- If Ξ» = a + ib is an eigenvalue, then λ̄ = a βˆ’ ib is also an eigenvalue

- If v = u + iw (u, w ∈ ℝⁿ), then vΜ„ = u βˆ’ iw

So:

2u = v + vΜ„

A(2u) = Ξ»v + λ̄vΜ„ = 2(au βˆ’ bw)

β†’ Au = au βˆ’ bw

Similarly,

Aw = aw + bu

So if P is the n Γ— 2 matrix

P = (u w)

β†’ AP = P (a b; βˆ’b a)

(we are close to finding the algorithm for computing the matrix exponential of any matrix)

Eg:

A = ( 0  1  0  0
     βˆ’1  0  1  0
      0 βˆ’1  0  1
      0  0 βˆ’1  0 )

Characteristic polynomial:

x⁴ + 3xΒ² + 1

The eigenvalues of A are Β±iΟ† and Β±i/Ο† (with Ο† = (1+√5)/2 the golden ratio).


so eigenvectors:

v₁ = (i; βˆ’Ο†; βˆ’iΟ†; 1) = (0; βˆ’Ο†; 0; 1) + i(1; 0; βˆ’Ο†; 0) = u₁ + iw₁;   vβ‚‚ = (iΟ†; βˆ’1; i; βˆ’Ο†)

So

AP₁ = P₁ (0 βˆ’Ο†; Ο† 0)

With P = (u₁ w₁ uβ‚‚ wβ‚‚):

P⁻¹AP = ( 0 βˆ’Ο†  0    0
          Ο†  0  0    0
          0  0  0  βˆ’1/Ο†
          0  0 1/Ο†   0 )

= Ο†J βŠ• (1/Ο†)J

This is called a block diagonal matrix (it has square blocks down the diagonal, and 0s everywhere else).

Direct sum:

A βŠ• B = (A 0; 0 B)

A and B don’t need to be the same size.

Lemma: if

A = B βŠ• C

then

e^A = e^B βŠ• e^C

Proof:

A = (B 0; 0 0) + (0 0; 0 C), and these two blocks commute,

β†’ e^A = e^(B 0; 0 0) e^(0 0; 0 C) = (e^B 0; 0 I)(I 0; 0 e^C) = (e^B 0; 0 e^C) = e^B βŠ• e^C


Repeated eigenvalues

So what do we do if we have repeated eigenvalues? How do we calculate the matrix exponential?

Generalized eigenspace: Definition

Suppose λⱼ is an eigenvalue of a matrix A with algebraic multiplicity nβ±Ό. We define the generalized eigenspace of λⱼ as:

Eβ±Ό ≔ ker[(A βˆ’ λⱼI)^(nβ±Ό)]

There are basically two reasons why this definition is important. The first is that these generalized eigenspaces are INVARIANT UNDER MULTIPLICATION BY A: if 𝒗 ∈ Eβ±Ό, then A𝒗 is as well.

Proposition (invariance of generalized eigenspaces):

Suppose that Eβ±Ό is a generalized eigenspace of the matrix A. If v ∈ Eβ±Ό, then so is Av.

Proof of proposition:

- If v ∈ Eβ±Ό, the generalized eigenspace of some eigenvalue Ξ», then v ∈ ker((A βˆ’ Ξ»I)ᡏ) for some k. Then we have

(A βˆ’ Ξ»I)ᡏ(Av) = (A βˆ’ Ξ»I)ᡏ(A βˆ’ Ξ»I + Ξ»I)v = (A βˆ’ Ξ»I)^(k+1)v + Ξ»(A βˆ’ Ξ»I)ᡏv = 0

so Av ∈ Eβ±Ό as well.

The second reason we care about this is that it gives us a full set of linearly independent generalized eigenvectors, which we can use to find e^A.

Primary decomposition theorem

Let T: V β†’ V be a linear transformation of an n-dimensional vector space over the complex numbers. If λ₁, …, Ξ»β‚– are the distinct eigenvalues (k need not equal n), and Eβ±Ό is the generalized eigenspace for λⱼ, then dim(Eβ±Ό) = the algebraic multiplicity of λⱼ, and the generalized eigenvectors span V. In terms of vector spaces, we have:

V = E₁ βŠ• Eβ‚‚ βŠ• … βŠ• Eβ‚–

Combining this with the previous proposition: T decomposes the vector space into a direct sum of invariant subspaces.

The next thing to do is explicitly determine the semisimple + nilpotent decomposition. Before, we had the diagonal matrix Ξ› which was

- Block diagonal

- In normal form for complex eigenvalues

- Diagonal for real eigenvalues

Suppose we have a real matrix A with n eigenvalues λ₁, …, Ξ»β‚™ listed with multiplicity. Break them up into complex ones and real ones, so we have λ₁, λ̄₁, …, Ξ»β‚–, λ̄ₖ complex and Ξ»β‚‚β‚–β‚Šβ‚, …, Ξ»β‚™ real. Let λⱼ = aβ±Ό + ibβ±Ό; then:

Ξ› ≔ (a₁ b₁; βˆ’b₁ a₁) βŠ• … βŠ• (aβ‚– bβ‚–; βˆ’bβ‚– aβ‚–) βŠ• diag(Ξ»β‚‚β‚–β‚Šβ‚, …, Ξ»β‚™)

Let v₁, v̄₁, …, vβ‚‚β‚–β‚Šβ‚, …, vβ‚™ be a basis of generalized eigenvectors; for the complex ones, write

vβ±Ό = uβ±Ό + iwβ±Ό

Let

P ≔ (u₁ w₁ uβ‚‚ wβ‚‚ … uβ‚– wβ‚– vβ‚‚β‚–β‚Šβ‚ … vβ‚™)

Returning to our construction: the matrix P is invertible (because of the primary decomposition theorem), so we can define the matrix

S = PΞ›P⁻¹

and the matrix

N = A βˆ’ S

We have the following:

Theorem: Let A, N, S, Ξ› and P be the matrices defined above; then

1. A = S + N

2. S is semisimple

3. S, N and A commute

4. N is nilpotent

Proofs of theorems:

1. From the definition of N.

2. From the construction of S.

3. [S, A] = 0: suppose that v is a generalized eigenvector of A associated to the eigenvalue Ξ». By construction, v is a genuine eigenvector of S, so Sv = Ξ»v; and since the generalized eigenspace is A-invariant, S(Av) = Ξ»(Av) as well. Hence [S, A]v = SAv βˆ’ ASv = Ξ»Av βˆ’ Ξ»Av = 0, and since such vectors span the space, S and A commute. N and S commute because N = A βˆ’ S, and A and N commute for the same reason.

4. Suppose the maximum algebraic multiplicity of an eigenvalue of A is m. For any v ∈ Eβ±Ό, the generalized eigenspace of the eigenvalue λⱼ, the operator A βˆ’ S acts on Eβ±Ό as A βˆ’ λⱼI, so

Nᡐv = (A βˆ’ S)ᡐv = (A βˆ’ λⱼI)ᡐv = 0

General algorithm for finding e^A

1. Find the eigenvalues.

2. Find the generalized eigenspace for each eigenvalue; assemble the eigenvalues into Ξ›.

3. Put the generalized eigenvectors into P.

4. S = PΞ›P⁻¹

5. N = A βˆ’ S

6. e^(At) = e^(St) e^(Nt) = P e^(Ξ›t) P⁻¹ e^((Aβˆ’S)t)

Example:

A = (βˆ’1 1; βˆ’4 3)

Eigenvalues:

det(βˆ’1βˆ’Ξ» 1; βˆ’4 3βˆ’Ξ») = 0

β†’ Ξ» = 1, 1

Ξ› = π‘‘π‘–π‘Žπ‘”(1,1) = 𝐼

Generalized eigenspace:

ker((𝐴 βˆ’ πœ†πΌ)𝑛𝑗) = ker((𝐴 βˆ’ 𝐼)2) = ker (βˆ’2 1βˆ’4 2

)2

= ker ((0 00 0

)) = ℝ2

So our vectors must span ℝ 𝑛; choose

(10); (

01)

𝑃 = (1 00 1

) = 𝐼

β†’ 𝑆 = πΌπΌπΌβˆ’1 = 𝐼

β†’ 𝑁 = 𝐴 βˆ’ 𝑆 = 𝐴 βˆ’ 𝐼 = (βˆ’2 1βˆ’4 2

)

So then:

e^(At) = e^(St) e^(Nt) = eα΅— (I + Nt)   (since NΒ² = 0 here)
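For this example the answer at t = 1 simplifies to e(I + N). A numerical sketch of the S + N decomposition, assuming numpy/scipy:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0], [-4.0, 3.0]])

# eigenvalue 1 repeated, so S = I and N = A - I
S = np.eye(2)
N = A - S
assert np.allclose(N @ N, 0)  # N is nilpotent (N^2 = 0)

# e^A = e^S e^N = e * (I + N), since the N-series terminates
assert np.allclose(expm(A), np.e * (np.eye(2) + N))
```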


Last time:

Algorithm for computing 𝑒𝐴 and 𝑒𝐴𝑑

Eg: let

A = ( 1  0  0  2
     βˆ’2 βˆ’2  3  4
     βˆ’1 βˆ’2  3  2
     βˆ’1 βˆ’2  1  4 )

Evals ar πœ† = 2,2,1,1

Generalized eigenspace for πœ† = 2

𝐸2 = ker((𝐴 βˆ’ 2𝐼)2)

𝐸2 is spanned by:

(

2001

) ;(

βˆ’2100

)

𝐸1 = ker((𝐴 βˆ’ 𝐼)2) spanned by

(

1010

) ;(

βˆ’1100

)

Ξ› = π‘‘π‘–π‘Žπ‘”(2,2,1,1); 𝑆 = π‘ƒΞ›π‘ƒβˆ’1

𝑃 = (

2 βˆ’2 1 βˆ’10 1 0 10 0 1 01 0 0 0

)

π‘ƒβˆ’1 = β‹―

𝑆 = π‘ƒΞ›π‘ƒβˆ’1

𝑁 = 𝐴𝑆

Note, if not using a computer:

Check that N is nilpotent (Nᡃ = 0 for some a at most the size of the matrix);

check NS βˆ’ SN = 0 (that they commute).

∴ At = (S + N)t = St + Nt

β†’ e^(At) = e^(St) e^(Nt) = P e^(Ξ›t) P⁻¹ e^(Nt)

= P e^(Ξ›t) P⁻¹ (I + Nt)   (since NΒ² = 0 here)


e^(At): ℝ β†’ GL(n, ℝ) (the space of invertible n Γ— n matrices with real entries)

Derivative of e^(At)

Proposition:

d/dt (e^(At)) = A e^(At) = e^(At) A

(A e^(At) = e^(At) A follows from the power series representation of e^(At).)

Proof 1:

d/dt (e^(At)) = d/dt (I + tA + Β½tΒ²AΒ² + (tΒ³/3!)AΒ³ + β‹―)

= A + tAΒ² + (tΒ²/2)AΒ³ + (tΒ³/3!)A⁴ + β‹―

= A(I + tA + (tΒ²/2)AΒ² + (tΒ³/3!)AΒ³ + β‹―)

(the series converges uniformly on compact sets, so we may differentiate it term by term)

= A e^(At)

Proof 2:

d/dt (e^(At)) = lim_{hβ†’0} (e^(A(t+h)) βˆ’ e^(At)) / h

= lim_{hβ†’0} e^(At) (e^(Ah) βˆ’ I) / h

= e^(At) lim_{hβ†’0} (Ah + AΒ²hΒ²/2 + β‹―) / h

= e^(At) lim_{hβ†’0} (A + (AΒ²/2)h + β‹―)

= e^(At) A
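The derivative formula can be sanity-checked with a finite difference (a numpy/scipy sketch; the step size h and tolerance are my own choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t, h = 0.5, 1e-6

# central-difference approximation of d/dt e^{At}
deriv = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)
assert np.allclose(deriv, A @ expm(A * t), atol=1e-6)

# A and e^{At} commute, so both forms of the derivative agree
assert np.allclose(A @ expm(A * t), expm(A * t) @ A)
```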

This proposition implies that e^(At) is a solution to the matrix differential equation:

Ξ¦Μ‡(t) = AΞ¦(t)

(note: A is a constant coefficient matrix)

A cannot be a function of t, as A(t) may not commute with its integral.

Comparing this to the 1 Γ— 1 differential equation:

αΊ = ay

β†’ y = Ce^(at)