University of Sydney
MATH3063 LECTURE NOTES
Shane Leviton 6-7-2016
Contents
Introduction
Order of ODE
Explicit ODEs
Notation
Example
Systems of ODEs
Example 0: f(x) = 0
Eg 2: f(x) = b (or f(x) = b(t))
But what if f(x) is a more general function
Matrix ODEs (lecture 2)
Linear equation of x
Solving: finding the eigenvalues/eigenvectors
Why do we care about eigenvectors/values?
Matrix exponentials
When does e^A exist?
exp(A) first class: full set of linearly independent eigenvectors
Diagonalizable or semi-simple matrix
How big of a restriction is a full set of LI eigenvectors?
Matrix and other term exponentials
Complex numbers and matrices
Real normal form of matrices
Repeated eigenvalues
Generalized eigenspace
Primary decomposition theorem
Theorems
General algorithm for finding e^A
Derivative of e^{At}
Phase plots
Example
Phase plot/phase line
Remarks
Constant solution
Spectrum
Definition of spectrum
Spectrally stable definition
Stable/unstable/centre subspaces
Primary decomposition theorem
Generally
Hyperbolic
Bounded for forward time: definition
Linearly stable definition
Asymptotically linearly stable definition
Restricted dynamics
Generally
The theory
Classification of 2D linear systems
Our approach
Discriminant
Overall picture
Linear ODEs with non-constant coefficients
Principal fundamental solution matrix
Liouville's/Abel's formula (theorem)
PART 2: METRIC SPACES
Start of thinking
Key idea
Formal definition of metric spaces
Metric (distance function)
Metric space
Topology of metric spaces
Open ball
Examples
Open subset of metric space
Closed subset of metric space
Interior
Closure
Boundary
Limit point/accumulation point
Sequences and completeness
Definition: complete sequence
Contraction mapping principle
A contraction is a map
Definition: Lipschitz function
Definition: locally Lipschitz
Existence and uniqueness of solutions of ODEs
Equation *
Solution
Dependence on initial conditions and parameters
Theorem (P-L again)
Theorem: families of solutions
Maximal intervals of existence
Definition: maximal interval of existence
PART 3: NON-LINEAR PLANAR AUTONOMOUS ODES
Terminology
Function vector
Talking about what solutions are
Eg: pendulum equation
Phase portraits
Isoclines
Linearisation
Generally for linearisation
Hyperbolic good, non-hyperbolic = bad
More qualitative results
Theorem: unique phase curves
Closed phase curves/periodic solutions
Hamiltonian systems
Lotka-Volterra/predator-prey system
Definition of predator-prey
Analysis of system
Modifications to equations
Epidemics
SIR model
Abstraction, global existence and reparametrisation
Definition: 1-parameter family
Definition: orbit/trajectory
Existence of complete flow
Flows on ℝ
Theorem: types of flows on ℝ
α (alpha) and ω (omega) limit sets
Asymptotic limit point
Non-linear stability (of critical points)
Lyapunov stable definition
Asymptotically stable: definition
Lyapunov functions
LaSalle's invariance principle
Hilbert's 16th problem and Poincaré-Bendixson theorem
Question: Hilbert's 16th problem
Poincaré-Bendixson theorem proof
Proof
Definitions of sets: disconnected
Proof of Poincaré-Bendixson theorem
Bendixson's negative criterion
INDEX THEORY (Bonus)
Set up
Theta
Index: definition
Theorem on index
Poincaré index
Index at infinity
Computing the index at infinity
Theorem: Poincaré-Hopf index theorem
PART 4: Bifurcation theory
Let's start with an example in 1D
Bifurcations
Bifurcation diagram
Bifurcation
Example 2 (in 2D): (saddle node) bifurcation
Transcritical bifurcation
1D example
Supercritical pitchfork bifurcation
Example
Logistic growth with harvesting
Hopf bifurcation
Definition of Hopf
Some more examples of bifurcations
FIREFLIES AND FLOWS ON A CIRCLE
Non-uniform oscillator
Glycolysis
Model: Sel'kov 1968
Catastrophe
Eg 1: fold bifurcation
Eg 2: swallowtail catastrophe
Hysteresis and the van der Pol oscillator
Van der Pol oscillator example
MATH3963 NON LINEAR DIFFERENTIAL EQUATIONS WITH APPLICATIONS (biomaths) NOTES
- Robert Marangell [email protected]
Assessment:
- Midterm (Thursday April 21) 25%
- 2 assignments 10% each (due 4/4 and 30/5)
- 55% exam
Assignments are typed (e.g. LaTeX)
PART 1: LINEAR SYSTEMS
Introduction:
- What is an ODE; how to understand them
A DE is a functional relationship between a function and its derivatives.
An ODE is one where the function has only 1 independent variable.
Eg: t independent; (x(t), y(t)) dependent.
Not just scalar, but also vector valued functions (systems of ODEs):
x(t) ∈ ℝⁿ
x(t) = (x₁(t), x₂(t), …, xₙ(t))
xᵢ: ℝ → ℝ
Order of ODE:
- Highest derivative appearing in the equation
In general an ODE can be written implicitly as
F(t, y(t), dy/dt, …, d^(n)y/dt^(n)) = 0
Explicit ODEs
If you can isolate the highest derivative, it is an explicit ODE:
d^(n)y/dt^(n) = G(t, y(t), dy/dt, …, d^(n−1)y/dt^(n−1))
(explicit is easier; implicit is very hard)
Notation: use a dot for d/dt (so ẏ = dy/dt).
Example: d³y/dt³ + 2ÿy + 3(ẏ)² + y = 0
You can reduce the order of an ODE (to 1), in exchange for making it a system of ODEs
- Introduce new variables into the system:
y₀ = y
y₁ = ẏ₀ = ẏ
y₂ = ẏ₁ = ÿ
∴ ẏ₀ = y₁
ẏ₁ = y₂
ẏ₂ = −2y₂y₀ − 3y₁² − y₀
Which is a first order system in 3 variables, equivalent to the original 3rd order ODE
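The reduction can be sketched numerically. A minimal illustration (variable names ours; a real computation would hand `rhs` to a library integrator such as scipy.integrate.solve_ivp):

```python
# First-order system equivalent to y''' + 2y''y + 3(y')^2 + y = 0,
# with state (y0, y1, y2) = (y, y', y'').
def rhs(y):
    y0, y1, y2 = y
    return (y1, y2, -2.0 * y2 * y0 - 3.0 * y1 ** 2 - y0)

# Crude forward-Euler stepper, purely to show the reduced system can now
# be integrated like any first-order ODE.
def euler(y, h, steps):
    for _ in range(steps):
        f = rhs(y)
        y = tuple(yi + h * fi for yi, fi in zip(y, f))
    return y
```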
Systems of ODEs
- We can play the same game for systems of ODEs
ÿ + Ay = 0  (with y ∈ ℝ²;  A = ( a b
                                 c d ))
∴ ( ÿ₁ ) + ( a b )( y₁ ) = 0
  ( ÿ₂ )   ( c d )( y₂ )
∴ introduce:
ẏ₁ = y₃
ẏ₂ = y₄
ẏ₃ = ÿ₁ = −ay₁ − by₂
ẏ₄ = ÿ₂ = −cy₁ − dy₂
∴ ( ẏ₁ )   (  0  0  1  0 )( y₁ )
  ( ẏ₂ ) = (  0  0  0  1 )( y₂ )
  ( ẏ₃ )   ( −a −b  0  0 )( y₃ )
  ( ẏ₄ )   ( −c −d  0  0 )( y₄ )
Let y = (y₁, y₂, y₃, y₄)
The system of equations is now:
ẏ = By
More generally we study equations ẋ = f(x).
Example 0: f(x) = 0 ∴ ẋ = 0
⇒ x(t) = c, a constant, c ∈ ℝⁿ
This is all the solutions to ẋ = 0; and the solutions form an n dimensional vector space
Eg 2: f(x) = b (or f(x) = b(t)) ∴ ẋ = b
⇒ x(t) = bt + c
But what if f(x) is a more general function?
- If f(x) is linear in x
Remember: f is "linear" if:
f(ax + y) = af(x) + f(y)
Recall any linear map can be represented by a matrix, by choosing a basis
∴ ẋ = A(t)x
If A(t) = A (independent of t), it is called a constant coefficient system:
ẋ = Ax
Eg:
A = ( 0 1 0 0
      1 0 1 0
      0 1 0 1
      0 0 1 0 )
⇒ ẋ = Ax gives
ẋ₁ = x₂
ẋ₂ = x₁ + x₃
ẋ₃ = x₂ + x₄
ẋ₄ = x₃
which is the system of ODEs we now have to solve.
Matrix ODEs (lecture 2)
ẋ = f(x)
Linear equation of x:
ẋ = Ax  (x ∈ ℝⁿ; and A ∈ Mₙ(ℝ))
Eg:
A = ( 0 1 0
      1 0 1
      0 1 0 )
ẋ = Ax;  x = (x₁, x₂, x₃)
∴ ẋ₁ = x₂
ẋ₂ = x₁ + x₃
ẋ₃ = x₂
So a solution is a triple of functions (a vector) which solves this equation.
Solving: FINDING THE EIGENVALUES/EIGENVECTORS!!!
Recall:
A value λ ∈ ℂ is an eigenvalue of an n × n matrix A if there is a non-zero vector v ∈ ℂⁿ such that
Av = λv
v is the corresponding eigenvector
⇒ (A − λI)v = 0 ⇒ v ∈ ker(A − λI)
⇒ A − λI is not invertible
⇒ det(A − λI) = 0
det(A − λI) is a polynomial in λ of degree n called the characteristic polynomial of A:
p_A(λ)
Eigenvalues are roots of p_A(λ) = 0
Fundamental theorem of algebra:
p_A(λ) has exactly n (possibly complex) roots counted according to algebraic multiplicity.
Algebraic multiplicity:
The algebraic multiplicity of a root r of p_A(λ) = 0 is the largest k such that p_A(λ) = (λ − r)^k q(λ)
and q(r) ≠ 0
The algebraic multiplicity of an eigenvalue λ of A is its algebraic multiplicity as a root of p_A(λ)
Geometric multiplicity:
The geometric multiplicity of an eigenvalue is the maximum number of linearly independent
eigenvectors
Algebraic and geometric multiplicity:
algebraic multiplicity ≥ geometric multiplicity
Returning to example:
A = ( 0 1 0
      1 0 1
      0 1 0 )
det(A − λI) = | −λ  1   0 |
              |  1  −λ  1 |  = −λ(λ² − 2) = 0
              |  0   1  −λ |
⇒ λ = 0, ±√2
Eigenvectors are found by finding the kernel ker(A − λI) for each eigenvalue, giving eigenvectors:
λ₁ = 0 ⇒ v₁ = (1, 0, −1)
λ₂ = √2 ⇒ v₂ = (1, √2, 1)
λ₃ = −√2 ⇒ v₃ = (1, −√2, 1)
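This eigenvalue computation is easy to sanity-check numerically (a numpy sketch, names ours):

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

# claimed eigenpairs from the worked example
v1 = np.array([1., 0., -1.])            # lambda = 0
v2 = np.array([1., np.sqrt(2.0), 1.])   # lambda = +sqrt(2)
v3 = np.array([1., -np.sqrt(2.0), 1.])  # lambda = -sqrt(2)
evals = np.sort(np.linalg.eigvals(A).real)
```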
Why do we care about eigenvectors/values?
ẋ = Ax
Suppose that v is an eigenvector of A ∈ Mₙ(ℝ), with eigenvalue λ.
Consider the vector of functions:
x(t) = f(t)v
where f is some unknown function f: ℝ → ℝ.
If x solves ẋ = Ax, then
ḟ(t)v = Af(t)v = f(t)Av = λf(t)v  (as v is an eigenvector)
As v ≠ 0,
⇒ ḟ(t) = λf(t)
Which is a differential equation of one variable
∴ f(t) = e^{λt}
SO:
x(t) = e^{λt}v
is a solution vector to
ẋ = Ax
So we have a 1D subspace of solutions x(t) = c₀e^{λt}v
Notes:
1. Solutions to ẋ = Ax of the form x = c₀e^{λt}v, where λ is an eigenvalue of A and v is the
corresponding eigenvector, are called EIGENSOLUTIONS
2. What we just did has a name: "ansatz" (guess what the solution could be, and see if it is
correct)
Back to example:
∴ x(t) = e^{0t}(1, 0, −1);  y(t) = e^{√2 t}(1, √2, 1);  z(t) = e^{−√2 t}(1, −√2, 1)
So, as the equation is linear, any linear combination of these is a solution.
So:
φ(t) = c₁e^{0t}(1, 0, −1) + c₂e^{√2 t}(1, √2, 1) + c₃e^{−√2 t}(1, −√2, 1)
IS OUR SOLUTION!!!
The set of solutions φ(t) is actually a 3D vector space.
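We can check that such a combination really solves ẋ = Ax, comparing a finite-difference derivative with Aφ(t) (numpy sketch; the coefficients cᵢ are arbitrary choices of ours):

```python
import numpy as np

A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
s2 = np.sqrt(2.0)
v1 = np.array([1., 0., -1.])
v2 = np.array([1., s2, 1.])
v3 = np.array([1., -s2, 1.])

def phi(t, c=(0.5, 1.0, -2.0)):
    # linear combination of the three eigensolutions
    return c[0] * v1 + c[1] * np.exp(s2 * t) * v2 + c[2] * np.exp(-s2 * t) * v3

# central-difference approximation of phi'(t)
t, h = 0.3, 1e-6
dphi = (phi(t + h) - phi(t - h)) / (2 * h)
```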
Matrix exponentials
A ∈ M_{n×n}; what is
e^A  (exp(A))?
For a matrix: DEFINE
e^A = exp(A) = Σ_{k=0}^∞ (1/k!)A^k = I + A + (1/2)A² + (1/3!)A³ + ⋯
If the entries converge, then e^A IS an n × n matrix
When does e^A exist?
exp(A) first class: full set of linearly independent eigenvectors
- Hypothesis: Suppose A has a full set of linearly independent eigenvectors
- Assume that hypothesis for the rest of today (we'll look at the case where it fails later)
Let v₁, …, vₙ be the eigenvectors, with eigenvalues λ₁, …, λₙ.
Form the following matrix:
P = (v₁, v₂, …, vₙ)
(whose columns are the eigenvectors)
∴ AP = (Av₁, Av₂, …, Avₙ) = (λ₁v₁, …, λₙvₙ)  (multiplying column by column)
= P diag(λ₁, …, λₙ)
≡ PΛ
∴ P⁻¹AP = Λ  (= diag(λ₁, …, λₙ))
∴ e^Λ = I + Λ + (1/2)Λ² + ⋯ = diag(e^{λ₁}, …, e^{λₙ})
On the other hand, e^Λ = e^{P⁻¹AP}.
Lemma:
(P⁻¹AP)^k = P⁻¹A^kP
so e^{P⁻¹AP} = P⁻¹e^AP, and:
∴ e^A = Pe^ΛP⁻¹  (e^Λ = diag(e^{λ₁}, …, e^{λₙ}))
AND: the eigenvalues of e^A are e^{λᵢ}, where the λᵢ are the eigenvalues of A (with the same eigenvectors).
Example:
A = ( 0 1 0
      1 0 1
      0 1 0 )
with eigenvectors (1, 0, −1), (1, √2, 1), (1, −√2, 1):
P = (  1   1    1
       0   √2  −√2
      −1   1    1 )
Λ = ( 0   0    0
      0   √2   0
      0   0   −√2 )
∴ e^A = Pe^ΛP⁻¹ = P diag(1, e^{√2}, e^{−√2}) P⁻¹
= (an ugly answer)
Diagonalizable or semi-simple matrix
A matrix A for which there exists an invertible matrix P such that
P⁻¹AP = Λ
(where Λ is diagonal) is called diagonalizable or semisimple.
This says: if A is semisimple, then e^A exists, and we can compute it.
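For the example above this can be checked numerically: build e^A = Pe^ΛP⁻¹ and compare with a truncated power series (numpy sketch, names ours):

```python
import numpy as np

A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
s2 = np.sqrt(2.0)

P = np.array([[1.,  1.,   1.],
              [0.,  s2,  -s2],
              [-1., 1.,   1.]])          # eigenvectors as columns
expA_diag = P @ np.diag([1.0, np.exp(s2), np.exp(-s2)]) @ np.linalg.inv(P)

# truncated series I + A + A^2/2! + ... (converges fast here)
expA_series = np.zeros((3, 3))
term = np.eye(3)
for k in range(1, 30):
    expA_series += term
    term = term @ A / k
```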
How big of a restriction is a full set of LI eigenvectors?
Defective matrices
If A does not have a full set of linearly independent eigenvectors, it is called defective
- How big of a restriction is being semisimple?
- Are there many semisimple matrices?
Proposition:
Suppose λ₁ ≠ λ₂ are eigenvalues of a matrix A with eigenvectors v₁ and v₂. Then v₁ and v₂ are
linearly independent.
Proof:
If there were a dependence relation, then there would exist scalars c₁ and c₂ such that
c₁v₁ + c₂v₂ = 0  (c₁, c₂ ≠ 0)
Applying A:
A(c₁v₁ + c₂v₂) = 0
c₁Av₁ + c₂Av₂ = 0 ∴ c₁λ₁v₁ + c₂λ₂v₂ = 0  (as eigenvectors)
Multiplying the dependence relation by λ₁ gives c₁λ₁v₁ + c₂λ₁v₂ = 0; subtracting:
∴ (λ₂ − λ₁)c₂v₂ = 0
⇒ c₂ = 0  (∴ contradiction)
Example:
A = ( a b
      c d )
p_A(λ) = λ² − (a + d)λ + ad − bc
= λ² − τλ + δ  (trace τ = a + d;  δ = ad − bc)
(τ = trace(A) = Σᵢ aᵢᵢ, the sum of the diagonal entries)
∴ λ = (τ ± √(τ² − 4δ))/2
∴ λ₊ = λ₋ iff τ² − 4δ = 0
⇒ δ = τ²/4
Therefore: any matrix which does not lie on this parabola in (τ, δ) space will be diagonalizable.
τ² − 4δ = (a − d)² + 4bc
The set of matrices with repeated eigenvalues is the locus of points with
(a − d)² + 4bc = 0
A 3D hypersurface in 4D space.
So what if A is defective? Can we compute e^A?
- Yes
Two n × n matrices A, B commute if
AB = BA
Commutator
The commutator we define as:
[A, B] ≡ AB − BA  (= 0 iff A and B commute)
Proposition
If A and B commute, then
e^{A+B} = e^A e^B
- Aside: if e^{A+B} = e^A e^B and A and B are symmetric, then AB = BA
Proof
e^{A+B} = I + (A + B) + (1/2)(A + B)² + ⋯
= I + A + B + (1/2)(A² + AB + BA + B²) + ⋯
= I + A + B + (1/2)A² + AB + (1/2)B² + ⋯  (as A and B commute)
And:
e^A e^B = (I + A + (1/2)A² + ⋯)(I + B + (1/2)B² + ⋯)
= I + (A + B) + (1/2)A² + AB + (1/2)B² + ⋯
So the two series agree term by term.
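A numerical illustration of the proposition, and of how it fails without commutativity (numpy sketch; `expm_series` is our own truncated-series helper, and the matrices are arbitrary choices):

```python
import numpy as np

def expm_series(M, terms=60):
    # truncated power series for exp(M); fine for the small matrices here
    S = np.zeros_like(M, dtype=float)
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        S = S + term
        term = term @ M / k
    return S

A = np.array([[1., 2.], [0., 3.]])
B = A @ A                             # a polynomial in A always commutes with A
C = np.array([[0., 1.], [0., 0.]])    # C does not commute with A
```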
Example:
A = ( a 0
      0 b )  (a ≠ b);   B = ( 0 1
                              0 0 )
∴ e^A = ( e^a  0
          0   e^b );   e^B = ( 1 1
                               0 1 )
e^{A+B} = ( e^a  (e^a − e^b)/(a − b)
            0    e^b )
e^A e^B = ( e^a  e^a
            0    e^b )
So e^A e^B = e^{A+B} ⇔ (a − b)e^a = e^a − e^b
If x = a − b:
∴ x = 1 − e^{−x}
One example of a (complex) solution is
x ≈ −6.6022 − 736.693i
Nilpotent matrix:
Definition:
A matrix N is nilpotent if some power of it is 0 (N^k = 0 for some k)
Eg: N = ( 0 1 0
          0 0 1
          0 0 0 )
⇒ N³ = 0  (nilpotent)
This means that the power series for e^N terminates:
e^N = I + N + (1/2)N² + 0 + 0 + ⋯
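Numerically, for this N (numpy sketch):

```python
import numpy as np

N = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

# N^3 = 0, so the exponential series stops after the N^2 term:
expN = np.eye(3) + N + (N @ N) / 2.0
```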
So how do we compute the exponential in general?
Jordan Canonical Form:
Let A be an n × n matrix with complex entries. Then there exists a basis of ℂⁿ and an invertible matrix B (of basis
vectors) such that
B⁻¹AB = S + N
where S is diagonal (semisimple), N is nilpotent, and S and N commute (SN − NS = 0).
So this means that e^A exists for all matrices:
B⁻¹AB = S + N
⇒ B⁻¹e^AB = e^S e^N
e^A = B e^S e^N B⁻¹
This is cool.
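A small worked instance (our own example, not from the lecture): a single 2×2 Jordan block is already in canonical form, so B = I and A = S + N directly:

```python
import numpy as np

lam = 0.5
A = np.array([[lam, 1.0],
              [0.0, lam]])        # a single Jordan block: defective
S = lam * np.eye(2)               # semisimple part
N = A - S                         # nilpotent part, N @ N = 0
# S and N commute, so exp(A) = exp(S) exp(N) = e^lam (I + N)
expA = np.exp(lam) * (np.eye(2) + N)
```

The independent check below uses the limit definition exp(A) = lim (I + A/n)^n.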
Matrix and other term exponentials:
So now consider At, where A ∈ Mₙ(ℝ) and t ∈ ℝ:
e^{At} = Σ_{k=0}^∞ (t^k/k!)A^k
For a fixed A, the exponential is a map from the real numbers to the matrices:
exp_A: ℝ → Mₙ(ℝ)
exp_A(t): t ↦ e^{At}
exp_A(t + s) = e^{A(t+s)} = e^{At}e^{As} = exp_A(t) exp_A(s)
e^{At} is invertible! Its inverse is e^{−At}.
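These group properties are easy to check numerically (numpy sketch; A, t, s are arbitrary choices and `expm_series` is our truncated-series helper):

```python
import numpy as np

def expm_series(M, terms=40):
    # truncated exponential series, adequate for these small matrices
    S = np.zeros_like(M, dtype=float)
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        S = S + term
        term = term @ M / k
    return S

A = np.array([[0., 1.], [-2., -3.]])
t, s = 0.7, 0.4
```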
So, what is the derivative of this map from the real numbers to the matrices?
Complex numbers and matrices:
Let J = ( 0 −1
          1  0 )
⇒ J² = −I;  J³ = −J;  J⁴ = I
e^{Jt} = I + Jt + (1/2)J²t² + ⋯
= I + Jt − (1/2)t²I − (1/3!)t³J + ⋯
= (1 − t²/2 + t⁴/4! − ⋯)I + (t − t³/3! + ⋯)J
= cos t I + sin t J
= ( cos t  −sin t
    sin t   cos t )
Which is the rotation matrix.
So the matrix J behaves like the complex number i, and we have written Euler's formula
e^{it} = cos t + i sin t
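Summing the series numerically reproduces the rotation matrix (numpy sketch, names ours):

```python
import numpy as np

J = np.array([[0., -1.],
              [1.,  0.]])   # J^2 = -I

def expm_series(M, terms=30):
    # truncated exponential series
    S = np.zeros_like(M, dtype=float)
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        S = S + term
        term = term @ M / k
    return S

t = 1.2
rot = np.array([[np.cos(t), -np.sin(t)],
                [np.sin(t),  np.cos(t)]])
```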
- We can actually take any complex number z = x + iy and find its corresponding matrix
M_z ≡ xI + yJ = ( x −y
                  y  x )
(the technical term is a field isomorphism)
Complex number algebra:
ππ§1+π§2 = ππ§1 +ππ§2
ππ§1π§2 = ππ§1ππ§2 = ππ§2ππ§1 = ππ§2π§1
Complex exponentials:
πππ§ = πππ‘
π§ = π₯ + ππ¦:
ππ§ = ππ₯ cos π¦ + πππ₯ sinπ¦ = ππ₯ (cosπ¦ β sinπ¦sinπ¦ cosπ¦
)
= ππ₯πΌ + ππ¦π½
Complex number things:
det(M_z) = x² + y² = |z|²
M_z̄ = (x y; −y x) = M_zᵀ
M_r = rI (for r ∈ ℝ)
M_z̄ M_z = M_{|z|²} = |z|² I
If z ≠ 0, then M_z is invertible, and
M_z⁻¹ = (1/|z|²) M_z̄ = (1/det(M_z)) M_zᵀ
So let's compute e^{Jt} = (cos t  −sin t; sin t  cos t) a second way, by diagonalising:
Eigenvalues of Jt are ±it
Eigenvectors are (±i; 1)
P = (i −i; 1 1);  P⁻¹ = (1/2i)(1 i; −1 i)
Λ = (it 0; 0 −it)
⟹ e^{Jt} = P e^Λ P⁻¹ = (1/2i) (i −i; 1 1) (e^{it} 0; 0 e^{−it}) (1 i; −1 i)
= (cos t  −sin t; sin t  cos t)
So we got what we started with! (which is good)
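The diagonalisation route goes through complex matrices but lands back on the real rotation; a sketch (t = 1.1 arbitrary):

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[0., -1.], [1., 0.]])
t = 1.1

P = np.array([[1j, -1j], [1., 1.]])   # eigenvectors (i,1) and (-i,1) as columns
eLam = np.diag(np.exp([1j * t, -1j * t]))   # e^Lambda for eigenvalues +-it
E = P @ eLam @ np.linalg.inv(P)

# the imaginary parts cancel and the real part is the rotation matrix
print(np.allclose(E.imag, 0), np.allclose(E.real, expm(J * t)))
```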
Real normal form of matrices:
Moving towards a real "normal" form for matrices with real entries but complex eigenvalues.
Fact: if A ∈ Mₙ(ℝ) and λ is an eigenvalue, then so is λ̄ (complex conjugates). The eigenvectors also come in conjugate pairs:
A v = λ v ⟹ A v̄ = λ̄ v̄
- If λ = a + ib is an eigenvalue, then λ̄ = a − ib is also an e-value
- If v = u + iw (u, w ∈ ℝⁿ), then v̄ = u − iw
So:
2u = v + v̄
A(2u) = λv + λ̄v̄ = 2(au − bw)
⟹ Au = au − bw
Similarly:
Aw = bu + aw
So if P is the n × 2 matrix
P = (u  w)
⟹ AP = P (a b; −b a)
(we are close to finding the algorithm for computing the matrix exponential of any matrix)
Eg:
A = (0 1 0 0; −1 0 1 0; 0 −1 0 1; 0 0 −1 0)
Characteristic polynomial is:
x⁴ + 3x² + 1
e-values of A are ±iφ and ±i/φ (with φ = (1+√5)/2 the golden ratio)
so eigenvectors (for λ1 = iφ and λ2 = i/φ):
v1 = (i; −φ; −iφ; 1) = (0; −φ; 0; 1) + i(1; 0; −φ; 0) = u1 + iw1
v2 = (−i; 1/φ; −i/φ; 1) = (0; 1/φ; 0; 1) + i(−1; 0; −1/φ; 0) = u2 + iw2
So, with P1 = (u1 w1),
A P1 = P1 (0 φ; −φ 0)
and with P = (u1 w1 u2 w2):
P⁻¹AP = (0 φ 0 0; −φ 0 0 0; 0 0 0 1/φ; 0 0 −1/φ 0)
this is called a block diagonal matrix (square blocks down the diagonal, and 0s everywhere else)
= (φJᵀ) ⊕ ((1/φ)Jᵀ)   (in terms of the matrix J from before)
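The real/imaginary parts of the eigenvectors above can be checked with numpy; a sketch (the vectors are those computed in the example, entered by hand):

```python
import numpy as np

A = np.array([[ 0., 1., 0., 0.],
              [-1., 0., 1., 0.],
              [ 0.,-1., 0., 1.],
              [ 0., 0.,-1., 0.]])
phi = (1 + np.sqrt(5)) / 2  # golden ratio

# real and imaginary parts of the eigenvectors for i*phi and i/phi
u1 = np.array([0., -phi, 0., 1.]); w1 = np.array([1., 0., -phi, 0.])
u2 = np.array([0., 1/phi, 0., 1.]); w2 = np.array([-1., 0., -1/phi, 0.])
P = np.column_stack([u1, w1, u2, w2])

block = np.linalg.inv(P) @ A @ P
expected = np.array([[ 0.,   phi,  0.,     0.],
                     [-phi,  0.,   0.,     0.],
                     [ 0.,   0.,   0.,     1/phi],
                     [ 0.,   0.,  -1/phi,  0.]])
print(np.allclose(block, expected))
```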
Direct sum:
A ⊕ B ≔ (A 0; 0 B)
A and B don't need to be the same size.
Lemma: if
A = B ⊕ C
Then
e^A = e^B ⊕ e^C
Proof:
A = (B 0; 0 0) + (0 0; 0 C), and these two pieces commute, so
e^A = e^{(B 0; 0 0)} e^{(0 0; 0 C)} = (e^B 0; 0 I)(I 0; 0 e^C) = (e^B 0; 0 e^C) = e^B ⊕ e^C
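The lemma is easy to test with scipy's block_diag helper; the two blocks below are arbitrary (and deliberately of different sizes):

```python
import numpy as np
from scipy.linalg import expm, block_diag

B = np.array([[1., 2.], [0., 3.]])   # arbitrary 2x2 block
C = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 2.]])        # arbitrary 3x3 block

A = block_diag(B, C)                 # A = B (+) C
print(np.allclose(expm(A), block_diag(expm(B), expm(C))))
```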
Repeated eigenvalues
So what do we do if we have repeated eigenvalues? How do we calculate the matrix exponential?
Generalized eigenspace: Definition
Suppose λ_j is an eigenvalue of a matrix A with algebraic multiplicity n_j. We define the generalized eigenspace of λ_j as:
E_j ≔ ker[(A − λ_j I)^{n_j}]
There are basically two reasons why this definition is important. The first is that these generalized eigenspaces are INVARIANT UNDER MULTIPLICATION BY A (they do not change). That is: if v ∈ E_j, then Av is as well.
Proposition (invariance of generalized eigenspaces):
Suppose that E_j is a generalized eigenspace of the matrix A. Then if v ∈ E_j, so is Av.
Proof of proposition:
- If v ∈ E_j, the generalized eigenspace for the eigenvalue λ, then v ∈ ker((A − λI)^m) for some m. Then we have
(A − λI)^m (Av) = (A − λI)^m (A − λI + λI)v = (A − λI)^{m+1}v + λ(A − λI)^m v = 0
so Av ∈ E_j.
The second reason we care about this is that it will give us our set of linearly independent generalized eigenvectors, which we can use to find e^A.
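Both the kernel and the invariance can be computed directly; a sketch using scipy.linalg.null_space on a small matrix with a repeated eigenvalue (the matrix is an illustrative choice):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[-1., 1.], [-4., 3.]])  # eigenvalue 1 with algebraic multiplicity 2
lam, m = 1.0, 2

# generalized eigenspace E = ker((A - lam I)^m)
K = np.linalg.matrix_power(A - lam * np.eye(2), m)
E = null_space(K)   # columns form an orthonormal basis of E

# invariance: A maps E into E, i.e. (A - lam I)^m (A v) = 0 for v in E
print(E.shape[1], np.allclose(K @ (A @ E), 0))
```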
Primary decomposition theorem
Let T: V → V be a linear transformation of an n-dimensional vector space over the complex numbers. If λ1, …, λr are the distinct eigenvalues (not necessarily r = n), and E_j is the generalized eigenspace for λ_j, then dim(E_j) = the algebraic multiplicity of λ_j, and the generalized eigenvectors span V. In terms of vector spaces, we have:
V = E1 ⊕ E2 ⊕ … ⊕ Er
Combining this with the previous proposition: T decomposes the vector space into a direct sum of invariant subspaces.
The next thing to do is explicitly determine the semisimple + nilpotent decomposition. Before, we had the diagonal matrix Λ which was
- Block diagonal
- In normal form for complex eigenvalues
- Diagonal for real eigenvalues
Suppose we have a real matrix A with n eigenvalues λ1, …, λn listed with multiplicity, and suppose we break them up into complex ones and real ones, so we have λ1, λ̄1, …, λk, λ̄k complex and λ_{2k+1}, …, λn real. Let λ_j = a_j + i b_j; then:
Λ ≔ (a1 b1; −b1 a1) ⊕ … ⊕ (ak bk; −bk ak) ⊕ diag(λ_{2k+1}, …, λn)
Let v1, v̄1, …, v_{2k+1}, …, vn be a basis of ℂⁿ of generalised eigenvectors; for the complex ones, write
v_j = u_j + i w_j
Let
P ≔ (u1 w1 u2 w2 … uk wk v_{2k+1} … vn)
Returning to our construction: the matrix P is invertible (because of the primary decomposition theorem), so we can define the matrix
S = PΛP⁻¹
and the matrix
N = A − S
We have the following:
Theorem: Let A, S, N, Λ and P be the matrices defined above; then
1. A = S + N
2. S is semisimple
3. S, N and A commute
4. N is nilpotent
Proofs of theorems:
1. From the definition of N.
2. From the construction of S.
3. [S, A] = 0: suppose that v is a generalized eigenvector of A associated to the eigenvalue λ. By construction, v is a genuine eigenvector of S; and since the generalized eigenspace is invariant under A, the vector Av is also an eigenvector of S for λ. Hence [S, A]v = SAv − ASv = λAv − λAv = 0, and since this holds on a basis, S and A commute. Then N = A − S commutes with S (and with A) from its definition.
4. Suppose the maximum algebraic multiplicity of an eigenvalue of A is m. Then for any v ∈ E_j, a generalized eigenspace corresponding to the eigenvalue λ_j, S acts on E_j as λ_j I, so
N^m v = (A − S)^m v = (A − λ_j I)^m v = 0
General algorithm for finding e^A
1. Find the eigenvalues of A
2. Find the generalized eigenspace for each eigenvalue; put the eigenvalues into Λ
3. Turn these basis vectors into P
4. S = PΛP⁻¹
5. N = A − S
6. e^{At} = e^{St}e^{Nt} = P e^{Λt} P⁻¹ e^{(A−S)t}
Example:
A = (−1 1; −4 3)
Eigenvalues:
det((−1−λ  1; −4  3−λ)) = 0
⟹ λ = 1, 1
Λ = diag(1, 1) = I
Generalized eigenspace:
ker((A − λI)^{n₁}) = ker((A − I)²) = ker((−2 1; −4 2)²) = ker((0 0; 0 0)) = ℝ²
So our vectors just need to span ℝ²; choose
(1; 0), (0; 1)
P = (1 0; 0 1) = I
⟹ S = IΛI⁻¹ = I
⟹ N = A − S = A − I = (−2 1; −4 2)
So then (N² = 0, so e^N = I + N):
e^A = e^S e^N = e^I(I + N) = e(I + N) = eA   (since I + N = A here)
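This worked example is easy to verify end-to-end against scipy:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1., 1.], [-4., 3.]])
S = np.eye(2)     # S = P Lambda P^{-1} = I for this matrix
N = A - S

assert np.allclose(N @ N, 0)   # N is nilpotent, so e^N = I + N

# e^A = e^S e^N = e (I + N) = e A, since I + N = A here
print(np.allclose(expm(A), np.e * A))
```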
Last time:
Algorithm for computing e^A and e^{At}
Eg: let
A = (1 0 0 2; −2 −2 3 4; −1 −2 3 2; −1 −2 1 4)
E-values are λ = 2, 2, 1, 1
Generalized eigenspace for λ = 2:
E₂ = ker((A − 2I)²)
E₂ is spanned by:
(2; 0; 0; 1), (−2; 1; 0; 0)
E₁ = ker((A − I)²), spanned by
(1; 0; 1; 0), (−1; 1; 0; 0)
Λ = diag(2, 2, 1, 1); S = PΛP⁻¹
P = (2 −2 1 −1; 0 1 0 1; 0 0 1 0; 1 0 0 0)
P⁻¹ = ⋯
S = PΛP⁻¹
N = A − S
Note: if not using a computer:
- Check that N is nilpotent (Nᵏ = 0 for some k; N need not itself be the zero matrix)
- Check SN − NS = 0 (that they commute)
∴ At = (S + N)t = St + Nt
⟹ e^{At} = e^{St}e^{Nt} = P e^{Λt} P⁻¹ e^{Nt}
= P e^{Λt} P⁻¹ (I + Nt)   (when N² = 0)
e^{At}: ℝ → GL(n, ℝ)   (the group of invertible n × n matrices under matrix multiplication)
Derivative of e^{At}
Proposition:
d/dt (e^{At}) = A e^{At} = e^{At} A
(A e^{At} = e^{At} A because e^{At} is a power series in A, and A commutes with every power of itself)
Proof 1:
d/dt (e^{At}) = d/dt (I + tA + (1/2)t²A² + (t³/3!)A³ + …)
= A + tA² + (t²/2)A³ + (t³/3!)A⁴ + …
= A(I + tA + (t²/2)A² + (t³/3!)A³ + …)
(we will show the series converges uniformly, and so we can differentiate it term by term)
= A e^{At}
Proof 2:
d/dt (e^{At}) = lim_{h→0} (e^{A(t+h)} − e^{At}) / h
= lim_{h→0} e^{At}(e^{Ah} − I) / h
= e^{At} lim_{h→0} (Ah + (A²h²)/2 + …) / h
= e^{At} lim_{h→0} (A + (A²/2)h + …)
= e^{At} A
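The proposition can be sanity-checked with a central finite difference (the matrix, t, and step size h are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1.], [-2., -3.]])
t, h = 0.8, 1e-6

# finite-difference approximation of d/dt e^{At}
deriv = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)

left = A @ expm(A * t)     # A e^{At}
right = expm(A * t) @ A    # e^{At} A
print(np.allclose(deriv, left, atol=1e-4), np.allclose(left, right))
```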
This proposition implies that e^{At} is a solution to the matrix differential equation:
Φ′(t) = AΦ(t)
(note: A is constant, independent of t)
A cannot be a function of t here, as A(t) may not commute with its own integral.
Comparing this to the 1 × 1 differential equation:
y′ = ay
⟹ y = Ce^{at}
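In the same way, x(t) = e^{At}x₀ solves the vector equation x′ = Ax. A sketch comparing this against a direct numerical integration (the harmonic-oscillator matrix is an illustrative choice):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# x' = Ax with x(0) = x0 has solution x(t) = e^{At} x0,
# just as y' = ay has y = C e^{at}
A = np.array([[0., 1.], [-1., 0.]])   # sample system (harmonic oscillator)
x0 = np.array([1., 0.])
t = 2.0

exact = expm(A * t) @ x0
numeric = solve_ivp(lambda s, x: A @ x, (0, t), x0,
                    rtol=1e-10, atol=1e-12).y[:, -1]
print(np.allclose(exact, numeric, atol=1e-6))
```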