
Dynamical Systems and Chaos

Grégory Faye

School of Mathematics, University of Minnesota,

206 Church Street S.E.,

Minneapolis, MN 55455, USA

[email protected]

December 3, 2012

Abstract

These are some notes related to the one-semester course Math 5535 Dynamical Systems and Chaos given at the University of Minnesota during Fall 2012, with an emphasis on the study of continuous and discrete dynamical systems of dimension one and two. An ambitious list of topics to be covered includes phase portraits, fixed points, stability, bifurcations, limit sets, periodic orbits, Poincaré maps and chaotic attractors. Many ideas from analysis and topology will be introduced along the way. The course will follow some chapters of the textbook of Hirsch, Smale and Devaney [1], but not necessarily in the same order. Other interesting references, complementary to the course and suitable for extra reading, are listed at the end of these notes [2, 3, 4, 5].

Keywords: Continuous and discrete dynamical systems; Flow; Equilibrium; Phase portrait;

Stability; Bifurcation; Periodic orbit; Chaos; Cantor set.

Contents

1 Dynamical systems: definitions and first results
  1.1 Introduction: some motivating examples
    1.1.1 Continuous time
    1.1.2 Discrete time
  1.2 Ordinary differential equations
  1.3 Flow
  1.4 Existence, uniqueness and regularity of solutions
    1.4.1 Theorems
    1.4.2 Examples
2 Fixed points
  2.1 Definition
  2.2 Stability of fixed points
  2.3 Linear systems
    2.3.1 Linear algebra
    2.3.2 Linear differential systems
    2.3.3 Application to planar linear systems
  2.4 Linearized stability of fixed points
    2.4.1 Main result
    2.4.2 Nullclines
    2.4.3 Conjugacy of phase portraits
    2.4.4 Invariant manifolds and phase portraits
  2.5 Center manifold
  2.6 Phase portraits using energy and other test functions
    2.6.1 An introductory example: prey-predator system
    2.6.2 Undamped forces
    2.6.3 Liapunov functions and stability
    2.6.4 Gradient systems
  2.7 Introduction to bifurcations of dimension 1
    2.7.1 Saddle-node bifurcation
    2.7.2 Pitchfork bifurcation
    2.7.3 Transcritical bifurcation
3 Periodic orbits
  3.1 Definitions
  3.2 Poincaré–Bendixson theorem for planar systems
  3.3 Local sections
  3.4 Stability of periodic orbits and the Poincaré map
4 Discrete dynamical systems
  4.1 One-dimensional iteration maps
  4.2 Linearized stability of fixed points
  4.3 Bifurcation of one-dimensional iteration maps
    4.3.1 Saddle-node bifurcation
    4.3.2 Pitchfork and transcritical bifurcation
    4.3.3 Period-doubling bifurcation
  4.4 Example
5 Elementary examples of chaotic dynamical systems
  5.1 Topological transitivity
    5.1.1 Dense sets
    5.1.2 Invariant sets and topological transitivity
    5.1.3 The doubling map
    5.1.4 More examples
  5.2 Sensitive dependence on initial conditions
  5.3 Conjugacy
  5.4 Chaos
  5.5 Cantor sets

1 Dynamical systems: definitions and first results

1.1 Introduction: some motivating examples

1.1.1 Continuous time

System: the Rössler attractor

ẋ = −y − z
ẏ = x + ay
ż = b + z(x − c)
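As a quick illustration, the sketch below integrates this system numerically with a standard fixed-step RK4 scheme; the parameter values a = b = 0.2, c = 5.7 and the initial condition (0.1, 0.2, 0.3) are the ones quoted in the caption of Figure 1, while the step size and number of steps are arbitrary illustrative choices.

```python
# Illustrative sketch: integrate the Rossler system with a fixed-step RK4 scheme.
# Parameters and initial condition follow Figure 1 (a = b = 0.2, c = 5.7).
import numpy as np

def rossler(X, a=0.2, b=0.2, c=5.7):
    x, y, z = X
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_step(f, X, dt):
    k1 = f(X)
    k2 = f(X + 0.5 * dt * k1)
    k3 = f(X + 0.5 * dt * k2)
    k4 = f(X + dt * k3)
    return X + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

X = np.array([0.1, 0.2, 0.3])
dt, n_steps = 0.01, 50_000
trajectory = np.empty((n_steps, 3))
for i in range(n_steps):
    X = rk4_step(rossler, X, dt)
    trajectory[i] = X
# 'trajectory' can then be plotted in 3D to reproduce a picture like Figure 1.
```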

1.1.2 Discrete time

System: the logistic map x_{n+1} = r x_n (1 − x_n).
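The discrete-time dynamics is even simpler to explore on a computer. The sketch below iterates the logistic map; the values r = 3.9 and x0 = 0.2 are arbitrary illustrative choices, not prescribed by the notes.

```python
# Illustrative sketch: iterate the logistic map x_{n+1} = r x_n (1 - x_n).
# The values r = 3.9 and x0 = 0.2 are arbitrary illustrative choices.
def logistic_orbit(x0, r, n):
    orbit = [x0]
    for _ in range(n):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

print(logistic_orbit(0.2, 3.9, 10))
```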

1.2 Ordinary differential equations

We consider the system of autonomous differential equations of the form

Ẋ = F(X) (1.1)

where F : U ⊂ Rn → Rn, U an open set.


Figure 1: Rössler attractor in three-dimensional Euclidean space. The initial condition is (0.1, 0.2, 0.3) and a = b = 0.2, c = 5.7.

Figure 2: Plot of the logistic map.


Remark 1.1. We may also use the notation X′ or dX/dt for Ẋ. In each case, this stands for differentiation with respect to the time variable t.

Definition 1.1. A solution of (1.1) is a function X : J → Rn, defined on some interval

J ⊂ R, such that for all t ∈ J, Ẋ(t) = F(X(t)).

Geometrically, X(t) is a curve in Rn whose tangent vector Ẋ(t) exists for all time t ∈ J and

equals F (X(t)). The map F : U → Rn defines a vector field on U .

Definition 1.2. An initial condition for a solution X : J → Rn is a specification of the form

X(t0) = X0 where t0 ∈ J and X0 ∈ U .

One challenge in the theory of differential equations is to find the solution of any Cauchy problem; that is, to determine the solution of the system (1.1) that satisfies the initial condition X(t0) = X0. When it is more convenient we will assume that t0 = 0.

1.3 Flow

Let X0 ∈ U be an initial condition and X(t) a solution of (1.1) such that X(0) = X0. We define the map φt : U → Rn that takes X0 to X(t) at time t, that is, φt(X0) = X(t). We can define the one-parameter family of maps (φt)t≥0. It is reasonable to expect that:

1. φ : R× U → Rn where φ(t,X0) = φt(X0) is smooth,

2. φ0 is the identity function: φ0(X0) = X0,

3. φt has an inverse φ−t,

4. φt ◦ φs = φt+s for all t, s ∈ R.

When all these properties are met we say that (φt)t∈R is a flow for system (1.1). System

(1.1) is then called a smooth dynamical system.

1.4 Existence, uniqueness and regularity of solutions

1.4.1 Theorems

Theorem 1.1 (Existence and uniqueness of solutions). Let U be an open set of Rn. Consider

the Cauchy problem

Ẋ = F(X), X(t0) = X0 (1.2)

where X0 ∈ U . Suppose that F : U → Rn is C1. Then there exists an a > 0 and a unique

solution X :]t0 − a; t0 + a[→ Rn of (1.2).


Proof. For the proof of the theorem, we set t0 = 0 in order to simplify notations. We first

start the proof of this theorem with some definitions from functional analysis.

Let X ∈ Rn with the notation X = (x1, . . . , xn). We consider a C1 vector field F : Rn → Rn

that we write

F(X) = F(x1, . . . , xn) = (F1(x1, . . . , xn), . . . , Fn(x1, . . . , xn)).

The Jacobian matrix of F at a given point X ∈ Rn is the following matrix of partial

derivatives:

DFX = (∂Fi/∂xj(X))1≤i,j≤n. (1.3)

Definition 1.3 (Lipschitz function). Let O ⊂ Rn be an open set. A function F : O → Rn

is said to be Lipschitz on O if there exists a constant K, called the Lipschitz constant, such

that

‖F (X)− F (Y )‖ ≤ K‖X − Y ‖, ∀X, Y ∈ O.

We say that F is locally Lipschitz if each point in O has a neighborhood O′ in O such that

the restriction F to O′ is Lipschitz.

Lemma 1.1. We have the following properties:

1. A set K ⊂ Rn is compact if and only if it is closed and bounded.

2. A continuous function F : K → Rn, with K compact, is bounded on K and attains its

maximum on K.

3. If F : O → Rn is C1, then F is locally Lipschitz.

Proof. We only give a proof of the last point. Suppose that F : O → Rn is C1 and X0 ∈ O.

Let ε > 0 be small enough that the closed ball Oε of radius ε centered at X0 (which is then compact) is contained in O. Let K be an upper bound of ‖DFX‖ on Oε. As the set Oε is convex, then

for all X, Y ∈ Oε and s ∈ [0, 1], Y + sU = Y + s(X − Y ) ∈ Oε. Let ψ(s) = F (Y + sU). We

have

ψ′(s) = DFY+sUU.

Therefore:

F(X) − F(Y) = ψ(1) − ψ(0) = ∫₀¹ DFY+sU U ds.

Thus we have

‖F (X)− F (Y )‖ ≤ K‖X − Y ‖.

Existence:


In order to prove the theorem we will use the integral form of (1.2). Suppose that J is an open interval containing zero; a solution X(t) of (1.2) satisfies

X(t) = X0 + ∫₀ᵗ F(X(s)) ds. (1.4)

If X is a solution of (1.4), then X is automatically a solution of (1.2). To prove the existence of solutions, we will work with this integral form.

We have the following assumptions:

1. Oρ is a closed ball of radius ρ > 0 centered at X0.

2. F is Lipschitz on Oρ with constant K.

3. F is bounded by M on Oρ.

4. We set 0 < a < min{ρ/M, 1/K} and J = [−a, a].

We construct a sequence of functions Uk recursively, using Picard iteration, that converges uniformly to a solution of (1.4). Let

U0(t) = X0 ∈ Oρ.

For t ∈ J define

U1(t) = X0 + ∫₀ᵗ F(U0(s)) ds = X0 + tF(X0).

Since |t| ≤ a and ‖F (X0)‖ ≤M it follows that

‖U1(t)−X0‖ ≤ aM < ρ

so that U1(t) ∈ Oρ for all t ∈ J . By induction, assume that Uk(t) has been defined and that

‖Uk(t)−X0‖ < ρ for all t ∈ J . Then let

Uk+1(t) = X0 + ∫₀ᵗ F(Uk(s)) ds.

It is straightforward to see that

‖Uk+1(t)−X0‖ ≤ aM < ρ

so that Uk+1(t) ∈ Oρ for all t ∈ J and we can continue the sequence. Furthermore, we can

easily prove by induction that there exists a constant L ≥ 0 such that for all k ≥ 0 we have

‖Uk+1(t)− Uk(t)‖ ≤ (aK)kL.

As a < 1/K, aK < 1 and given any ε > 0 there exists N large enough so that for any

p ≥ q ≥ N we have

‖Up(t) − Uq(t)‖ ≤ Σ_{k=N}^{∞} ‖Uk+1(t) − Uk(t)‖ ≤ Σ_{k=N}^{∞} (aK)^k L ≤ ε.


Then the sequence (Uk)k∈N of continuous functions converges uniformly to a continuous function X : J → Oρ that satisfies equation (1.4).
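The Picard iteration used in this proof is also a practical numerical scheme. The sketch below implements it on a uniform time grid for the scalar field F(x) = x with X0 = 1 (exact solution e^t); the grid, the example field and the number of iterations are illustrative choices.

```python
# Illustrative sketch of Picard iteration on a uniform time grid, for the
# scalar field F(x) = x with X0 = 1 (exact solution exp(t)).
import numpy as np

def picard(F, X0, t, n_iter=20):
    U = np.full_like(t, X0, dtype=float)        # U_0(t) = X0
    for _ in range(n_iter):
        integrand = F(U)
        # cumulative trapezoidal approximation of the integral of F(U_k) from 0 to t
        integral = np.concatenate(([0.0],
            np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        U = X0 + integral                        # U_{k+1}(t)
    return U

t = np.linspace(0.0, 0.5, 101)
approx = picard(lambda x: x, 1.0, t)
print(np.max(np.abs(approx - np.exp(t))))        # small residual expected
```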

Uniqueness:

Suppose that we have X, Y : J → O two solutions of (1.4) satisfying X(0) = Y (0) = X0.

As X, Y are continuous on the bounded interval J = [−a, a] let

Q = max_{t∈J} ‖X(t) − Y(t)‖.

This maximum is attained at some point t1 ∈ J . Then

Q = ‖X(t1) − Y(t1)‖ = ‖ ∫₀^{t1} (X′(s) − Y′(s)) ds ‖ ≤ ∫₀^{t1} ‖F(X(s)) − F(Y(s))‖ ds.

Then Q ≤ aKQ which is impossible unless Q = 0. Therefore X(t) = Y (t) for all t ∈ J .

Remark 1.2. In Theorem 1.1, the assumption that F is C1 is stronger than necessary: the same result holds with F only locally Lipschitz.

Lemma 1.2 (On the uniqueness). Let X1 and X2 be two solutions of (1.2) defined on the

open intervals J1 and J2 respectively with X1(t0) = X2(t0) = X0. Then

• X1 and X2 are equal on J = J1 ∩ J2.

• There exists a solution Y defined on J1∪J2 which coincides with X1 and X2 on J1 and

J2 respectively.

Proof. Consider the subset

F = {t ∈ J | X1(t) = X2(t)}.

As X1 and X2 are continuous, F is closed, and it is non-empty since t0 ∈ F. Let s ∈ F; then X1 and X2 are solutions of Ẋ = F(X) with the same initial condition X1(s) = X2(s). From theorem

1.1, X1 and X2 coincide on a small neighborhood of s. Then F is open. As F is not empty,

open and closed and J is an interval, then J = F .

Lemma 1.3 (Maximal solutions). There always exists a so-called maximal solution X̃ of (1.2), defined on an interval J̃ (also called the maximal interval), with the following property: if s ∈ J̃ and if Y is a solution with initial condition Y(s) = X̃(s) defined on an interval J, then J ⊂ J̃ and for all t ∈ J, Y(t) = X̃(t).

We can then always assume that we have a unique solution defined on a maximal time

interval. There is, of course, no guarantee that a solution X(t) can be defined for all time,

no matter how nice F is, as we will see in the examples.


Lemma 1.4 (Extending solutions). Let U be an open set of Rn and let F : U → Rn be C1. Let X(t) be a solution of (1.1) defined on the interval J = ]α, β[ ⊂ R. We suppose that there exists a sequence (tn)n∈N which converges toward β such that:

lim_{n→∞} X(tn) = X0 ∈ U.

Then there exists ε > 0 such that X can be extended to a solution defined on ]α, β + ε[.

Proof. From theorem 1.1, there exists a neighborhood V of X0 and ε > 0 such that for all (s0, Z0) ∈ ]β − ε, β + ε[ × V, there exists a solution Z with Z(s0) = Z0 defined on ]β − ε, β + ε[.

We apply this result to (s0, Z0) = (tn, X(tn)) for n large enough such that |tn− β| ≤ ε/2. In

particular Z is defined on ]β − ε/2, β + ε/2[. As X is also a solution with initial condition

(tn, X(tn)), we deduce from the uniqueness part of the theorem that X can be extended to

]α, β + ε/2[.

Corollary 1.1 (Exit from any compact set). Let U be an open set of Rn and let F : U → Rn be C1. Let X(t) be a solution of (1.1) defined on a maximal open interval J = ]α, β[ ⊂ R with

β < ∞. Then given any closed and bounded set K ⊂ U (K is then compact), there is some

t ∈]α, β[ with X(t) /∈ K.

Proof. Suppose that X(t) ∈ K for all t ∈ ]α, β[. Since F is continuous on the compact set K, there exists M > 0 such that ‖F(X)‖ ≤ M for all X ∈ K. Let γ ∈ ]α, β[. For t0 < t1 ∈ J we have

‖X(t0) − X(t1)‖ ≤ (t1 − t0)M

and X is uniformly continuous on J . Hence we can define:

X(β) = lim_{t→β} X(t).

Then X is continuous on [γ, β] and then differentiable as

X(t) = X(γ) + ∫_γ^t F(X(s)) ds, ∀t ∈ [γ, β]

and X ′(β) = F (X(β)). Then X is a solution on [γ, β]. Since there must be a solution on an

interval [β, δ[ for some δ > β, we can extend X to the interval ]α, δ[ which contradicts the

fact that X is a maximal solution on J.

Interpretation: this theorem says that if a solution X(t) cannot be extended to a larger time interval, then the solution leaves any compact set in U.

Lemma 1.5 (Gronwall’s inequality). Let u : [0, α] → R be a continuous and nonnegative function.

Suppose C ≥ 0 and K ≥ 0 are such that

u(t) ≤ C + K ∫₀ᵗ u(s) ds

for all t ∈ [0, α]. Then, for all t in this interval,

u(t) ≤ CeKt.


Proof. Let us define the intermediate function U(t) = e^{−Kt}(C + K ∫₀ᵗ u(s) ds). U is differentiable for all t ∈ [0, α] and we have:

U′(t) = −K U(t) + K e^{−Kt} u(t) ≤ −K U(t) + K U(t) = 0
⇒ U(t) ≤ U(0) = C
⇒ e^{−Kt} u(t) ≤ e^{−Kt} (C + K ∫₀ᵗ u(s) ds) = U(t) ≤ C.

This gives the result.

Theorem 1.2. Consider the system (1.1) where F : U → Rn is C1 and has Lipschitz

constant K. Suppose that X(t) and Y(t) are solutions of Ẋ = F(X) which remain in U and

are defined on the closed interval [t0, t1]. Then we have

‖Y (t)−X(t)‖Rn ≤ ‖Y0 −X0‖Rn exp (K(t− t0))

for all t ∈ [t0, t1].

Proof. Let u(t) = ‖Y (t)−X(t)‖. It is straightforward to see that for all t ∈ [t0, t1]

u(t) ≤ u(t0) + K ∫_{t0}^{t} u(s) ds.

By applying Gronwall’s inequality we obtain

u(t) ≤ u(t0) exp(K(t− t0)).

Corollary 1.2 (Continuous dependence on initial conditions). Let φt(X0) be the flow of the

Cauchy problem (1.2) where F is C1. Then φ is a continuous function of X0.

Theorem 1.3 (Continuous dependence on parameters). Let Ẋ = Fλ(X) be a system

of differential equations for which Fλ is C1 in both X and λ. Then the flow of this system

depends continuously on λ as well.

Theorem 1.4 (Smoothness of the flow). Consider system (1.2) with F of class C1. Then the flow φt(X0) of this system is a C1 function; that is, ∂φ/∂t and ∂φ/∂X exist and are continuous in t and X.

Note that we have:

∂φ/∂t(t, X0) = F(φ(t, X0))   and   ∂φ/∂X(t, X0) = Dφt(X0),

where Dφt is the Jacobian of the function X → φt(X).


1.4.2 Examples

• ẋ = x² with x(0) = x0:

In order to solve this simple equation we separate the x variable to the left-hand side and obtain ẋ/x² = 1. Recalling that ẋ = dx/dt, we can use a formal approach and write

dx/x² = dt.

Then we can integrate with respect to time: ∫ dx/x² = ∫ dt

which gives

−1/x(t) + 1/x0 = t   and   x(t) = x0/(1 − t x0).

If x0 = 0, then the only solution is x(t) = 0, defined on R. If x0 ≠ 0, we first remark that x(t) is not defined for all t ∈ R; indeed x(t) → ±∞ as t → 1/x0. In the case x0 > 0, J = ]−∞, 1/x0[ is the maximal time interval, and we recover the result stated in Corollary 1.1; J = ]1/x0, +∞[ for x0 < 0. It is also easy to verify the group property of the flow.
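A minimal numerical sketch of this blow-up, assuming a simple forward-Euler discretization and x0 = 1 (so that the blow-up time is t = 1/x0 = 1):

```python
# Illustrative sketch: forward-Euler integration of x' = x^2 versus the exact
# solution x(t) = x0 / (1 - t*x0), which blows up at t = 1/x0 (here t = 1).
x0, dt = 1.0, 1e-4
x, t = x0, 0.0
while t < 0.99:                     # stop just before the blow-up time
    x += dt * x * x
    t += dt
exact = x0 / (1.0 - t * x0)
print(t, x, exact)                  # both values become very large near t = 1
```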

• ẋ = x^{1/3} with x(0) = 0:

One obvious solution is x1(t) = 0 for all t. On the other hand, we can also apply the

separation of variables:

dx/x^{1/3} = dt

and integrate with respect to time:

(3/2) x(t)^{2/3} = t ⇒ x(t) = (2t/3)^{3/2}.

We can then define a family of solutions of the form

x(t, t0) = 0 for t ≤ t0,   x(t, t0) = (2(t − t0)/3)^{3/2} for t ≥ t0,

for all t0 ≥ 0.

Thus the continuity of the map F alone is not sufficient to provide the uniqueness of solutions in Theorem 1.1 (x → x^{1/3} is continuous but not differentiable at x = 0).


2 Fixed points

Throughout this section, we consider smooth dynamical systems and to avoid any compli-

cations we suppose that F is at least C2 in (1.1).

2.1 Definition

Definition 2.1 (Fixed point or equilibrium point). A point X∗ is called a fixed point provided

that F (X∗) = 0.

The solution starting at a fixed point has zero velocity so it just stays there and φ(t,X∗) = X∗

for all t. Traditionally, such a point was called an equilibrium point because the forces were

in equilibrium and the mass did not move. Note that the origin is always a fixed point for a linear system of the form Ẋ = AX where A ∈ Mn(R). This is the only fixed point of a

linear system, unless 0 is an eigenvalue.

Definition 2.2 (Invariant manifold). For a fixed point X∗, the stable manifold Ws(X∗) is

the set of all points which tend to the fixed point as t goes to plus infinity:

Ws(X∗) = {X0 ∈ Rn : φt(X0) → X∗ as t → +∞}. (2.1)

If Ws(X∗) is an open set, then Ws(X∗) is called the basin of attraction of X∗.

In the same way, the unstable manifold Wu(X∗) is the set of all points which tend to the

fixed point as t goes to minus infinity:

Wu(X∗) = {X0 ∈ Rn : φt(X0) → X∗ as t → −∞}. (2.2)

2.2 Stability of fixed points

We now proceed to give several different ways of designating that a fixed point is stable.

Definition 2.3. A fixed point is said to be stable if nearby solutions stay nearby for all

future time. More precisely, X∗ is stable if for every neighborhood O of X∗ in Rn there is

a neighborhood O1 ⊂ O such that every solution X(t) with X(0) = X0 ∈ O1 is defined and

remains in O for all t > 0.

The same definition can be phrased with ε’s: X∗ is stable if for any ε > 0 there is a δ > 0 such that if ‖X∗ − X0‖ < δ, then ‖φt(X0) − X∗‖ < ε for all t > 0.

Definition 2.4. A fixed point X∗ that is not stable is called unstable. This means there is a

neighborhood O of X∗ such that for every neighborhood O1 ⊂ O of X∗, there is at least one

solution X(t) starting at X(0) = X0 ∈ O1 that does not lie entirely in O for all t > 0.


A different form of stability is asymptotic stability which is a stronger stability than the one

stated above.

Definition 2.5. A fixed point X∗ is asymptotically stable if O1 can be chosen above so that,

in addition to the properties of stability, we have φt(X0)→ X∗ as t→ +∞ for all X0 ∈ O1.

An asymptotically stable fixed point is also called a sink. Sometimes, we will use the word

attracting to mean asymptotically stable.

Remark 2.1. We will see in the next subsection an example of a stable fixed point which is not asymptotically stable.

Definition 2.6. A fixed point is called repelling or a source, provided that it is asymptotically

stable backward in time.

2.3 Linear systems

2.3.1 Linear algebra

Eigenvalues and eigenvectors:

Definition 2.7. A vector V is said to be an eigenvector of A ∈ Mn(R) if V is a nonzero

solution to the system (A− λIn)V = 0. λ is called an eigenvalue of A.

Proposition 2.1. Let A ∈Mn(R). Then there is a change of coordinates T such that

T⁻¹AT = diag(B1, . . . , Bk)

where each of the Bj's is a square matrix (and all the other entries are zero) of one of the following forms:

(i) a Jordan block

    [ λ 1         ]
    [   λ 1       ]
    [     ⋱ ⋱     ]
    [       ⋱ 1   ]
    [         λ   ]

(ii) a block-bidiagonal matrix

    [ C2 I2            ]
    [    C2 I2          ]
    [       ⋱   ⋱       ]
    [           ⋱  I2   ]
    [              C2   ]

where

    C2 = [ α β ; −β α ],   I2 = [ 1 0 ; 0 1 ],

and where α, β, λ ∈ R with β ≠ 0. The special cases where Bj = (λ) or Bj = C2 are, of course, allowed.


Exponential of a matrix:

Let A ∈ Mn(R). We equip Mn(R) with the norm (recall that all norms are

equivalent on Mn(R)):

‖A‖ = sup {‖AX‖ | ‖X‖ ≤ 1} .

We immediately have the properties

‖AX‖ ≤ ‖A‖‖X‖, ‖AB‖ ≤ ‖A‖‖B‖.

Definition 2.8 (Exponential of a matrix). Let A ∈Mn(R), the exponential of A is defined

through

exp(A) = Σ_{n=0}^{+∞} Aⁿ/n!. (2.3)

This sum is uniformly convergent for A in a bounded domain of Mn(R).

Proposition 2.2 (Algebraic properties). We have

exp(P−1AP ) = P−1 exp(A)P,

exp(−A) = exp(A)−1.

If A and B commute: AB = BA, then

exp(A+B) = exp(A) exp(B) = exp(B) exp(A).

Proposition 2.3. If V ∈ Rn is an eigenvector of A associated to the eigenvalue λ, then V

is an eigenvector of exp(A) associated to eλ.

Proposition 2.4. The map t → exp(tA), from R to Mn(R), is differentiable and

d/dt exp(tA) = A exp(tA) = exp(tA) A.
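These algebraic and differential properties are easy to check numerically. The sketch below uses scipy.linalg.expm on a random 3 × 3 matrix; it is a numerical sanity check, not a proof.

```python
# Illustrative numerical check of some properties of the matrix exponential,
# using scipy.linalg.expm on a random matrix (not a proof).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# exp(-A) = exp(A)^{-1}
print(np.allclose(expm(-A), np.linalg.inv(expm(A))))

# d/dt exp(tA) = A exp(tA), checked with a central finite difference at t = 1
h = 1e-6
deriv = (expm((1 + h) * A) - expm((1 - h) * A)) / (2 * h)
print(np.allclose(deriv, A @ expm(A), atol=1e-4))
```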

2.3.2 Linear differential systems

Form of the solutions:

Theorem 2.1. Consider the linear differential equation Ẋ = AX with A ∈ Mn(R) with

initial value X(0) = X0. Then the unique solution is

X(t) = exp(tA)X0.

Examples


1. Compute exp(tA) when A is the n× n matrix of the form:

A =
    [ λ 1         ]
    [   λ 1       ]
    [     ⋱ ⋱     ]
    [       ⋱ 1   ]
    [         λ   ]
  = λIn + J.

J is a nilpotent matrix: Jⁿ = 0 and Jⁿ⁻¹ ≠ 0. We first remark that In (the identity

matrix) commutes with J such that

exp(tA) = exp(tλIn) exp(tJ) = eλt exp(tJ).

In order to compute exp(tJ), we use the definition of the exponential of a matrix:

exp(tJ) = Σ_{k=0}^{+∞} (tJ)^k/k! = Σ_{k=0}^{n−1} (tJ)^k/k!

which is a finite sum because of the nilpotency of J. It is now a straightforward

computation to see that:

exp(tJ) =
    [ 1  t  t²/2  ⋯  tⁿ⁻¹/(n−1)! ]
    [    1   t    ⋱       ⋮      ]
    [        ⋱    ⋱     t²/2     ]
    [              1      t      ]
    [                     1      ].

Then we have:

exp(tA) = e^{λt} exp(tJ), with exp(tJ) the upper triangular matrix computed above.

2. Compute exp(tA) when A is the 2× 2 matrix of the form:

A = [ α β ; −β α ] = αI2 + βJ   with   J = [ 0 1 ; −1 0 ].

We first remark that J enjoys the property J² = −I2 and that I2 commutes with J.

Once again, we apply the definition of the exponential of a matrix:

exp(tβJ) = Σ_{k=0}^{+∞} (tβJ)^k/k! = I2 + tβJ − ((tβ)²/2) I2 − ((tβ)³/3!) J + ((tβ)⁴/4!) I2 + · · ·

= (Σ_{k=0}^{+∞} (−1)^k (tβ)^{2k}/(2k)!) I2 + (Σ_{k=0}^{+∞} (−1)^k (tβ)^{2k+1}/(2k + 1)!) J


= cos(βt)I2 + sin(βt)J.

Finally we have:

exp(tA) = e^{αt} [ cos(βt) sin(βt) ; −sin(βt) cos(βt) ].

3. Consider the following differential system

Ẋ = [ 1 2 −1 ; 0 3 −2 ; 0 2 −2 ] X = AX.

The matrix A has three distinct real eigenvalues with corresponding eigenvectors:

2 : V2 = (3, 2, 1)T , 1 : V1 = (1, 0, 0)T , −1 : V−1 = (0, 1, 2)T .

The general form of the solution is:

X(t) = c1 e^{2t} V2 + c2 e^{t} V1 + c3 e^{−t} V−1.

X∗ = (0, 0, 0)ᵀ is the only fixed point; it is hyperbolic and its stable and unstable manifolds are:

Ws(X∗) = Span(V−1) and Wu(X∗) = Span(V2, V1).
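A short numerical cross-check of this example, using numpy to recover the eigenvalues 2, 1, −1 and to verify the stated eigenvectors (up to scaling):

```python
# Illustrative cross-check of Example 3 with numpy: eigenvalues 2, 1, -1 and
# the eigenvectors V2 = (3,2,1), V1 = (1,0,0), V_{-1} = (0,1,2) (up to scaling).
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [0.0, 3.0, -2.0],
              [0.0, 2.0, -2.0]])
eigenvalues, _ = np.linalg.eig(A)
print(np.sort(eigenvalues.real))        # expected: [-1., 1., 2.]
for lam, V in [(2, [3, 2, 1]), (1, [1, 0, 0]), (-1, [0, 1, 2])]:
    V = np.array(V, dtype=float)
    print(lam, np.allclose(A @ V, lam * V))
```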

Stability of the origin:

Theorem 2.2. Consider the linear differential equation Ẋ = AX with A ∈ Mn(R). If all

the eigenvalues of A have negative real parts, then the origin is asymptotically stable.

More precisely, there exists ε > 0 and T > 0, such that if X(t) is solution then

∀t > t0 + T, ‖X(t)‖ ≤ ‖X(t0)‖e−εt.

If one of the eigenvalue has a positive real part, then the origin is unstable.

Proof. The proof of this theorem relies on a classic result of linear algebra which uses the

properties of the exponential of a matrix and the result of proposition 2.1 of the reduction

of a matrix.

Definition 2.9. The origin is called hyperbolic if none of the eigenvalues of A has zero real part.

2.3.3 Application to planar linear systems

Elementary properties:

Proposition 2.5. The planar linear system Ẋ = AX with A ∈ M2(R) has:

(i) a unique equilibrium point (0, 0) if det(A) ≠ 0,


(ii) a straight line of equilibrium points if det(A) = 0 and A is not the zero matrix.

Proof. Left as an exercise.

Proposition 2.6. Suppose that V is an eigenvector for A with associated eigenvalue λ. Then

the function X(t) = e^{λt}V is a solution of Ẋ = AX.

Proof. Left as an exercise.

Proposition 2.7 (Linearity principle). Consider the linear differential equation Ẋ = AX

with A ∈ M2(R). Suppose that X1(t) and X2(t) are solutions of this system and that the

vectors X1(0) and X2(0) are linearly independent. Then

X(t) = αX1(t) + βX2(t)

is the unique solution of this system that satisfies the initial condition X(0) = αX1(0) +

βX2(0).

Proof. Left as an exercise.

Phase portraits:

See chapter 3 of the textbook [1] for illustrations.

Classification:

We can summarize these results so we can quickly tell the stability type from the trace and

the determinant.

Theorem 2.3 (Classification for planar linear systems). Consider the linear differential equation Ẋ = AX with A ∈ M2(R). Let ∆ and τ be the determinant and the trace of A.

(i) If τ² − 4∆ < 0: we have a spiral sink if τ < 0, a spiral source if τ > 0 and a center if τ = 0 (stable but not asymptotically stable).

(ii) If τ² − 4∆ > 0: if ∆ < 0, then we have a saddle and the origin is unstable. If τ > 0 and ∆ > 0, then we have a source. If τ < 0 and ∆ > 0, then we have a sink and the origin is asymptotically stable.

(iii) If τ² − 4∆ = 0: we have a degenerate sink for τ < 0 and a degenerate source for τ > 0.

See Figure 3.


Figure 3: Topological classification of hyperbolic equilibria on the plane.
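A minimal sketch of this trace/determinant classification, written as a small helper function; the tolerance and the treatment of the borderline cases ∆ = 0 are illustrative choices not spelled out in the theorem.

```python
# Illustrative helper implementing the trace/determinant classification of
# Theorem 2.3 for a planar linear system X' = AX (degenerate cases det = 0 aside).
import numpy as np

def classify(A, tol=1e-12):
    tau, delta = np.trace(A), np.linalg.det(A)
    disc = tau**2 - 4 * delta
    if delta < -tol:
        return "saddle (unstable)"
    if disc < -tol:
        if abs(tau) <= tol:
            return "center (stable, not asymptotically stable)"
        return "spiral sink" if tau < 0 else "spiral source"
    if disc > tol:
        return "sink" if tau < 0 else "source"
    return "degenerate sink" if tau < 0 else "degenerate source"

print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # center
print(classify(np.array([[1.0, 0.0], [0.0, -1.0]])))   # saddle
```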

2.4 Linearized stability of fixed points

2.4.1 Main result

The linearized system of a differential system Ẋ = F(X), with F a C1 vector field, at a point X∗ is defined by:

U̇ = DFX∗ U. (2.4)

If X∗ is a fixed point for Ẋ = F(X), then we refer to the eigenvalues of the Jacobian matrix DFX∗ as the eigenvalues of the fixed point X∗.

Suppose that X∗ is a fixed point for Ẋ = F(X) with F of class C2. In a neighborhood of X∗ we

have:

F (X) = F (X∗) +DFX∗(X −X∗) + · · · = DFX∗(X −X∗) + · · ·

If we set U = X −X∗ then we have

U̇ = DFX∗ U + O(‖U‖²).

Then sufficiently close to X∗, we have O(‖U‖2) ≈ 0 and the above equation can be approx-

imated by the linearized system (2.4).

Definition 2.10. A fixed point X∗ is called hyperbolic provided that the real parts of all the

eigenvalues of the matrix DFX∗ are nonzero.


Theorem 2.4 (Linearized stability of fixed points). Consider a differential equation Ẋ = F(X) where F is C2 with a hyperbolic fixed point X∗. Then, the stability type of the fixed

point for the nonlinear system is the same as that for the linearized system at that fixed point.

In particular, if the real parts of all the eigenvalues of DFX∗ are negative, then the fixed point

is asymptotically stable for the nonlinear system.

If at least one eigenvalue of DFX∗ has a positive real part, then the fixed point X∗ is unstable

for the nonlinear system.

Proof. We prove here the fact that if the real parts of all the eigenvalues of DFX∗ are nega-

tive, then the fixed point is asymptotically stable for the nonlinear system.

We consider a Taylor expansion of F about X∗, we denote A = DFX∗ and U = X−X∗. We

have

U̇ = AU + g(U),

where ‖g(U)‖ ≤ M‖U‖² for ‖U‖ ≤ δ. By the variation of constants formula, the solution satisfies:

U(t) = e^{tA} U0 + ∫₀ᵗ e^{A(t−s)} g(U(s)) ds.

Since all the eigenvalues of A have negative real part, there are a scalar K ≥ 1 and ε > 0 such that ‖e^{tA}‖ ≤ K e^{−εt}, so

‖e^{tA} U0‖ ≤ K e^{−εt} ‖U0‖.

If ‖U‖ is small enough then we have M‖U‖2 ≤ m‖U‖, where m is small too. So

‖U(t)‖ ≤ K e^{−εt} ‖U0‖ + ∫₀ᵗ mK e^{−ε(t−s)} ‖U(s)‖ ds.

We set u(t) = eεt‖U(t)‖ ≥ 0 and we have

u(t) ≤ K‖U0‖ + mK ∫₀ᵗ u(s) ds

and by Gronwall’s inequality we have:

u(t) ≤ K‖U0‖ e^{mKt}

and

‖X(t) − X∗‖ ≤ K‖U0‖ e^{t(mK−ε)} → 0 as t → ∞

provided that mK − ε < 0 which is always possible.

Example: Determine the fixed points of the following system:

ẋ = y
ẏ = x − x³ − εy

For each fixed point, discuss its stability as a function of the parameter ε.
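A small sketch that can be used to check the hand computation: it evaluates the Jacobian of this system at the fixed points (0, 0) and (±1, 0) and prints its eigenvalues for a sample value ε = 0.5 (an arbitrary illustrative choice).

```python
# Illustrative check of the example: Jacobian eigenvalues at the fixed points
# (0,0) and (+/-1,0) of x' = y, y' = x - x^3 - eps*y, for a sample eps = 0.5.
import numpy as np

def jacobian(x, y, eps):
    return np.array([[0.0, 1.0],
                     [1.0 - 3.0 * x**2, -eps]])

eps = 0.5
for point in [(0.0, 0.0), (1.0, 0.0), (-1.0, 0.0)]:
    print(point, np.linalg.eigvals(jacobian(*point, eps)))
```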


2.4.2 Nullclines

For an n-dimensional differential system Ẋ = F(X) with F of the form:

F(X) = F(x1, . . . , xn) = (F1(x1, . . . , xn), . . . , Fn(x1, . . . , xn))

we define the xj-nullcline to be the set of points of Rn where ẋj = 0, that is, the hypersurface

Fj(x1, · · · , xn) = 0.

Remark 2.2. The intersections of the nullclines give the fixed points of the system.

2.4.3 Conjugacy of phase portraits

Definition 2.11. Let U and V be two open sets of Rn. The map h from U to V is called a

homeomorphism if it is a continuous map onto V with a continuous inverse k from V to U. The map h from U to V is called a C1 diffeomorphism if it is a homeomorphism and both h and k have continuous partial derivatives.

Figure 4: Topological conjugacy.

Definition 2.12. Two flows φ(t, ·) and ψ(t, ·) are called topologically conjugate on open sets U and V provided there is a homeomorphism h from U to V such that

h ◦ φ(t, X0) = ψ(t, h(X0)).

These two flows are called C1 conjugate provided that the homeomorphism h is a C1 diffeo-

morphism. See figure 4 for an example.

Theorem 2.5 (Hartman-Grobman). Let Ẋ = F(X) be a C1 dynamical system that has a hyperbolic fixed point X∗ and flow φ(t, ·). Let ψ(t, X0) = e^{t DFX∗} X0 be the flow of the

linearized system at the fixed point X∗. Then, there are open sets U containing X∗ and V

containing the origin, such that the flow φ(t, ·) on U is topologically conjugate to the flow

ψ(t, ·) on V .


Proof. See [6].

The Hartman-Grobman theorem says that for a hyperbolic fixed point the nonlinear system

is topologically conjugate to its linearized flow. Unfortunately, this topological conjugacy

does not tell us much about the features of the phase portrait. For example, any two linear

systems in the same dimension for which the origin is asymptotically stable are topologically

conjugate on Rn. In particular, we cannot distinguish between a sink or a spiral sink (see

figure 5). Thus, a continuous change of coordinates does not preserve the property that

trajectories spiral in toward the fixed point or do not spiral. A differentiable change of

coordinates does preserve such features.

Figure 5: Sink / Spiral Sink conjugacy.

Theorem 2.6 (Belickii). Let Ẋ = F(X) be a C2 dynamical system that has a hyperbolic fixed point X∗ and flow φ(t, ·). Assume that the eigenvalues satisfy a non-resonance assumption: λk ≠ λi + λj for any three eigenvalues. Then, there are open sets U containing

X∗ and V containing the origin, such that the flow φ(t, ·) on U is C1 conjugate to the flow

of the linearized system on V .

Proof. See [7].

2.4.4 Invariant manifolds and phase portraits

We can define local invariant manifolds by restricting the definition of invariant manifolds to a neighborhood of the fixed point. Let U be a neighborhood of the fixed point X∗. Then we

define the local stable and unstable manifold as

Ws_loc(X∗) = {X0 ∈ U : φt(X0) → X∗ as t → +∞ and φt(X0) ∈ U for all t ≥ 0}. (2.5)

Wu_loc(X∗) = {X0 ∈ U : φt(X0) → X∗ as t → −∞ and φt(X0) ∈ U for all t ≤ 0}. (2.6)


Definition 2.13 (Stable/Unstable subspace). The stable subspace (respectively unstable) at

a fixed point X∗ is the linear subspace spanned by the set of all the generalized eigenvec-

tors of the linearized equations at the fixed point associated with eigenvalues having negative

(respectively positive) real parts:

Es = span{V | V is a generalized eigenvector of DFX∗ for an eigenvalue λ with Re(λ) < 0}, (2.7)

Eu = span{V | V is a generalized eigenvector of DFX∗ for an eigenvalue λ with Re(λ) > 0}. (2.8)

We denote by ns the dimension of Es and by nu the dimension of Eu. If X∗ is a hyperbolic fixed point

then n = ns + nu.

Theorem 2.7. Let Ẋ = F(X) be a C1 dynamical system that has a hyperbolic fixed point X∗. Then, there exist local stable and unstable manifolds Ws_loc and Wu_loc, with dim Ws_loc = ns and dim Wu_loc = nu, that are tangent to Es and Eu at X∗. Ws_loc and Wu_loc have the same regularity as F.

Remark 2.3. This theorem is particularly useful for drawing phase portraits in a neighbor-

hood of a saddle for planar systems. See figures 6 and 7.

Figure 6: Saddle-sink in R3.

Examples:

We first consider:

ẋ = x
ẏ = −y + x²

which has (0, 0) as a fixed point. The Jacobian matrix is

DFX = [ 1 0 ; 2x −1 ]

such that

DF(0,0) = [ 1 0 ; 0 −1 ]


Figure 7: Spiral saddle-sink in R3.

has ±1 as eigenvalues with corresponding subspaces:

Es = {(x, y) ∈ R2 | x = 0}, Eu = {(x, y) ∈ R2 | y = 0}.

We know that Wu is a one dimensional manifold tangent to the line y = 0 at (0, 0). It can

be represented by a function y = h(x) with h(0) = h′(0) = 0. By differentiating y = h(x)

with respect to time we have:

dy/dt = h′(x) dx/dt = x h′(x) = −h(x) + x².

This is equivalent to:

d/dx (x h(x)) = x²

which can be integrated to give

x h(x) = x³/3 + C.

The constant C = 0 and then h(x) = x²/3. Finally we have

Wu(0, 0) = {(x, y) ∈ R² | y = x²/3}.

Let’s now compute the stable manifold Ws. We apply the same technique with x = g(y)

and g(0) = g′(0) = 0

dx/dt = g′(y) dy/dt = g′(y)(−y + g(y)²) = g(y).

which is equivalent to:

d/dy (y g(y)) = d/dy (g(y)³/3).

We have:

y g(y) = g(y)³/3.


Then either g(y) = 0 or g(y)² = 3y. The last condition is not possible as g is defined in a neighborhood of y = 0. This implies that x = g(y) = 0, and

Ws(0, 0) = {(x, y) ∈ R² | x = 0}.
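A quick numerical sanity check of the computed unstable manifold: starting from a point on the curve y = x²/3 and integrating the system, the quantity y − x²/3 should remain (numerically) zero. The starting point and step size below are illustrative.

```python
# Illustrative check that the curve y = x^2/3 is invariant for
# x' = x, y' = -y + x^2: start on the curve and monitor y - x^2/3.
import numpy as np

def field(X):
    x, y = X
    return np.array([x, -y + x**2])

def rk4_step(f, X, dt):
    k1 = f(X); k2 = f(X + 0.5 * dt * k1)
    k3 = f(X + 0.5 * dt * k2); k4 = f(X + dt * k3)
    return X + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x0 = 0.1
X = np.array([x0, x0**2 / 3.0])
for _ in range(200):
    X = rk4_step(field, X, 0.01)
print(X[1] - X[0]**2 / 3.0)          # remains close to zero
```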

The second example is

ẋ = y
ẏ = −sin(x)

which has (0, 0) and (±kπ, 0), k ∈ N, as fixed points. The phase portrait is 2π-periodic such

that we restrict the study to x ∈ [−π, π]. The Jacobian matrix is:

DFX = [ 0 1 ; −cos(x) 0 ]

such that

DF(0,0) = [ 0 1 ; −1 0 ] ⇒ λ = ±i (center)

DF(±π,0) = [ 0 1 ; 1 0 ] ⇒ λ = ±1 (saddle)

The two corresponding subspaces for the saddles are:

Es = {(x, y) ∈ R2 | y = −x}, Eu = {(x, y) ∈ R2 | y = x}.

Stable manifold of (−π, 0). We can obtain integral curves:

dy/dx = −sin(x)/y ⇒ y dy = −sin(x) dx

which gives

y²/2 + C = cos(x).

The constant C is determined by the initial condition, which we take on the x-axis at (α, 0), so that cos(α) = C. Then

y²/2 = cos(x) − cos(α)

then necessarily cos(x) ≥ cos(α) and x ∈ [−α, α].

y = ±√(2(cos(x) − cos(α))) = ±h(x).

The stable manifold must satisfy h(−π) = 0 and h′(−π) = −1. Then α = −π and h(x) = ±√(2(cos(x) + 1)). We compute the derivative of h:

h′(x) = ∓ sin(x)/√(2(cos(x) + 1)).

h′(−π) = −1 implies that:

Ws(−π, 0) = {(x, y) ∈ R² | y = −√(2(cos(x) + 1))}.


In a similar way

Wu(−π, 0) = {(x, y) ∈ R² | y = +√(2(cos(x) + 1))}.

We easily remark that

Ws(−π, 0) =Wu(π, 0), Wu(−π, 0) =Ws(π, 0).

Homoclinic and heteroclinic orbits

Definition 2.14. An orbit Γ0 starting at a point X ∈ RN is called homoclinic to the equi-

librium point X∗ of Ẋ = F(X) if φt(X) → X∗ as t → ±∞.

Definition 2.15. An orbit Γ0 starting at a point X ∈ RN is called heteroclinic to the

equilibrium points X∗₁ and X∗₂ of Ẋ = F(X) if φt(X) → X∗₁ as t → −∞ and φt(X) → X∗₂ as t → +∞.

Figure 8: (a) Homoclinic and (b) heteroclinic orbits on the plane.

The pendulum has heteroclinic orbits between (−π, 0) and (π, 0).

2.5 Center manifold

We suppose that X∗ = 0 is a fixed point of the differential system Ẋ = F(X) (F is Ck) and

we denote L = DF0. Let σ ⊂ C be the spectrum of L, the set of all (complex) eigenvalues of

L. σ consists of at most n points. We split σ into three disjoint parts σs, σu and σc, where

σs = {λ ∈ σ | Re(λ) < 0}, σu = {λ ∈ σ | Re(λ) > 0}, σc = {λ ∈ σ | Re(λ) = 0}.

Let Es, Eu and Ec be the L-invariant subspaces of Rn corresponding to the above splitting

of σ. We have

Rn = Es ⊕ Eu ⊕ Ec.

Let πs, πu and πc be the projections from Rn onto Es, Eu and Ec. We set Ls = L|Es, Lu = L|Eu and Lc = L|Ec. The asymptotic behavior of solutions of the linearized system U̇ = LU is summarized as follows:


• if U0 ∈ Es, then ‖φ(t, U0)‖ = ‖e^{tLs}U0‖ tends exponentially to 0 as t → +∞ and to ∞ as t → −∞,

• if U0 ∈ Eu, then ‖φ(t, U0)‖ = ‖e^{tLu}U0‖ tends exponentially to ∞ as t → +∞ and to 0 as t → −∞,

• all bounded solutions, in particular all steady and periodic ones, lie in Ec.

We have seen that, provided that σc = ∅, there exists a relation between the solutions of the nonlinear system Ẋ = F(X) and those of U̇ = LU in a neighborhood of X∗. This was the Hartman-Grobman theorem and its differentiable version, Belickii’s theorem. When σc ≠ ∅, it is difficult to establish such a relationship in general. One might rather ask the following question:

Question: Does Ẋ = F(X) possess a manifold having similar properties as Ec has for U̇ = LU?

The answer is yes!

Theorem 2.8 (Center manifold theorem). There exists a map

Ψ ∈ Ck(Ec, Es ⊕ Eu),Ψ(0) = 0, DΨ(0) = 0,

and a neighborhood U of X = 0 in Rn such that the manifold

Wc = {x+ Ψ(x) | x ∈ Ec}

has the following properties.

• Wc is locally invariant with respect to Ẋ = F(X). More precisely, if Z0 ∈ Wc ∩ U and φ(t, Z0) ∈ U for t ∈ I, then φ(t, Z0) ∈ Wc for t ∈ I, where I is an interval containing t = 0.

• If σs and σu are non void, then Wc contains all solutions of Ẋ = F(X) staying in U

for all t ∈ R. That is, if Z0 ∈ U and φ(t, Z0) ∈ U for all t ∈ R, then Z0 ∈ Wc.

Moreover, if σu is void, then one has:

• Wc is locally attractive. More precisely, all solutions of Ẋ = F(X) staying in U for all t > 0 tend exponentially to some solution of Ẋ = F(X) on Wc.

Wc is a Ck-manifold of Rn parametrized by x ∈ Ec. Hence Wc has the same dimension

as Ec. Wc passes through X = 0, and is tangent to Ec at X = 0. We say that Wc is a

center manifold of Ẋ = F(X) at X = 0.

Remark 2.4. Wc is not unique.


Reduced equation. If Z0 ∈ Wc ∩ U , then φ(t, Z0) ∈ Wc close to t = 0. Defining,

x0 = πcZ0, x(t) = πcφ(t, Z0),

we write

φ(t, Z0) = x(t) + Ψ(x(t)).

Using Ẋ = F(X) we obtain the following characterizations.

• x(t) satisfies the nonlinear differential equation

ẋ = πc F(x + Ψ(x)) = f(x), (2.9)

where x ∈ Ec.

• The map Ψ satisfies

DΨ(x)πcF (x+ Ψ(x)) = πhF (x+ Ψ(x)), πh = πs + πu (2.10)

for all x ∈ Ec.

Equation (2.9) is called the reduced equation. Equation (2.10) is a quasilinear partial differ-

ential equation of order 1 for Ψ. There exist techniques to compute an approximation of Ψ

(Taylor expansions of F and Ψ around X = 0).

Example. Consider the dynamical system

ẋ = xy
ẏ = −y − x²

where (x, y) ∈ R2. We have

L = DF(0) = [ 0 0 ; 0 −1 ].

Ec is equal to the x-axis and Es is equal to the y-axis. There exists a center manifold:

Wc = {x+ Ψ(x) | x ∈ R}

where Ψ : R→ R satisfies:

Ψ′(x)(xΨ(x)) = −Ψ(x)− x2, Ψ(0) = Ψ′(0) = 0

Setting Ψ(x) = c₂x² + c₃x³ + o(x³), we get −c₂x² − x² = 0 and −c₃x³ = 0. Consequently:

Ψ(x) = −x² + o(x³)

The reduced system is

dx/dt = −x³ + o(x⁴).
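A numerical illustration of this center-manifold picture, assuming a standard RK4 integrator: starting from a generic small initial condition, y(t) rapidly approaches Ψ(x(t)) ≈ −x(t)², while x(t) decays slowly, consistent with the reduced equation dx/dt = −x³ + · · ·. The initial condition and step size are illustrative choices.

```python
# Illustrative check of the center-manifold picture for x' = x*y, y' = -y - x^2:
# y(t) quickly approaches Psi(x) = -x^2 while x(t) decays slowly (x' ~ -x^3).
import numpy as np

def field(X):
    x, y = X
    return np.array([x * y, -y - x**2])

def rk4_step(f, X, dt):
    k1 = f(X); k2 = f(X + 0.5 * dt * k1)
    k3 = f(X + 0.5 * dt * k2); k4 = f(X + dt * k3)
    return X + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

X, dt = np.array([0.3, 0.2]), 0.01
for i in range(1, 3001):
    X = rk4_step(field, X, dt)
    if i % 1000 == 0:
        x, y = X
        print(f"t = {i*dt:5.1f}   x = {x:+.4f}   y + x^2 = {y + x**2:+.2e}")
```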


Figure 9: One-dimensional center manifold.

2.6 Phase portraits using energy and other test functions

For the moment, we have seen how dynamical systems could be analyzed using nullclines and

linearization at fixed points in order to draw their phase portrait. In the previous section,

we have introduced an abstract and theoretical result (center manifold theorem) in order to

consider systems with non hyperbolic fixed points. We continue in this direction by analyzing

systems through “energy functions”.

2.6.1 An introductory example: prey-predator system

In this section we introduce the Lotka-Volterra equations for a prey-predator system. The Lotka-Volterra predator-prey model was initially proposed by Alfred J. Lotka “in the theory of autocatalytic chemical reactions” in 1910. Vito Volterra, who made a statistical analysis

of fish catches in the Adriatic, after the first world war, independently investigated the

equations in 1926. The simplest system of equations of this type is given by

ẋ = x(a − by)
ẏ = y(−c + dx),

with all parameters positive a, b, c and d. Notice that x is the prey population and y is

the predator population. Without any of the predators around, the prey grows at a linear

positive rate a. Without any prey, the predator population has negative growth rate of

−c. These equations have fixed points (0, 0) and (c/d, a/b). We first remark that (0, 0) is a saddle point. The eigenvalues of the Jacobian matrix at the fixed point (c/d, a/b) are purely

imaginary and given by ±i√ac. Thus, (c/d, a/b) is a center. For this system, there is a way

to show that the orbits close when they go around the fixed point. Notice first that the

quantity:

K(x, y) = f(x) + g(y) = dx− c ln(x) + by − a ln(y)


is a conserved quantity; that is, (d/dt)K(x(t), y(t)) = 0 along the trajectories. K can be considered

as a function of x and y on the plane (a surface). Each level set of the surface K(x, y) is

a trajectory of the dynamical system. So if we show that K(x, y) has level sets which are

closed curves, then we will have proved that the system has periodic solutions.

The graph of the function f is concave up and has a unique minimum at x = c/d. Similarly,

the graph of g is concave up and has a unique minimum at y = a/b. Thus the minimum

value of K(x, y) is K0 = f(c/d) + g(a/b). The level curve f(x) + g(y) = K0 is just the fixed

point (c/d, a/b). If C is a constant with C < K0, then the level curve K(x, y) = C is empty.

Finally, consider C > K0 and let C̃ = C − K0 > 0. Therefore, for a given x, we need to find y that satisfies

g(y) − g(a/b) = C̃ − (f(x) − f(c/d)).

For x with f(x) − f(c/d) > C̃, the right-hand side is negative and there are no solutions for y. There are two values of x, x1 and x2, for which f(x) − f(c/d) = C̃. For these xj, the right

hand side is zero and the only solution is y = a/b. For x1 < x < x2, the right hand side is

positive and there are two values of y which satisfy the equation. One of these values of y

will be less than a/b and the other will be greater. Therefore, the level curve for C > K0 is

a closed curve that surrounds the fixed point.

There is another way to prove the existence of periodic solutions which does not require the

study of the level curves of K. We decompose the first quadrant into 4 different regions:

• RI = {(x, y) ∈ R2 | 0 < x < c/d & 0 < y < a/b}

• RII = {(x, y) ∈ R2 | c/d < x & 0 < y < a/b}

• RIII = {(x, y) ∈ R2 | c/d < x & a/b < y}

• RIV = {(x, y) ∈ R2 | 0 < x < c/d & a/b < y}.

Let (x0, y0) be in the first region RI. Then there exists t1 > 0 such that M(t) = (x(t), y(t)) enters region RII. Indeed, if M stays in region RI for all time, then x and y are monotone and bounded, hence converge to limits x∞, y∞; thus ẋ and ẏ also converge, and the limit (x∞, y∞) has to be a fixed point. As ẋ > 0 and ẏ < 0 in RI, we have x∞ > 0 and y∞ < a/b. But this is impossible as there exist only two fixed points.

In fact we can show the following result.

Lemma 2.1. If (x(t), y(t)) is a maximal solution then it is bounded.

Proof. There exist two constants A > 0 and B > 0 such that

∀x > A, c ln(x) < dx/2   and   ∀y > B, a ln(y) < by/2.

Then for all (x, y) outside the compact [0, A]× [0, B], we have

C = K(x, y) > (dx + by)/2


Then, if (x0, y0) is the initial condition we have that

0 < x(t) < max{A, (2/d) K(x0, y0)}   and   0 < y(t) < max{B, (2/b) K(x0, y0)}.

Remark 2.5. This lemma shows that the solutions of the Lotka-Volterra system are defined

for all time t > 0.

With this lemma we can prove that there exists t2 > t1 such that M(t) enters region RIII. Thus we can define times t5 > t4 > t3 > t2 > t1 such that x(t1) = x(t5) = c/d. As these are points on the same trajectory we have:

K(x(t1), y(t1)) = K(x(t5), y(t5)).

Then we have g(y(t1)) = g(y(t5)) with y(t1) < a/b and y(t5) < a/b. As g is injective on

(0, a/b) we have y(t1) = y(t5) and the solutions are periodic.
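A numerical check of the conserved quantity K and of the closed orbits discussed above, assuming an RK4 integrator; the parameter values a = b = c = d = 1 and the initial condition are illustrative choices.

```python
# Illustrative check that K(x,y) = d*x - c*ln(x) + b*y - a*ln(y) is conserved
# along Lotka-Volterra trajectories (a = b = c = d = 1 chosen for illustration).
import numpy as np

a = b = c = d = 1.0

def field(X):
    x, y = X
    return np.array([x * (a - b * y), y * (-c + d * x)])

def K(X):
    x, y = X
    return d * x - c * np.log(x) + b * y - a * np.log(y)

def rk4_step(f, X, dt):
    k1 = f(X); k2 = f(X + 0.5 * dt * k1)
    k3 = f(X + 0.5 * dt * k2); k4 = f(X + dt * k3)
    return X + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

X, dt = np.array([1.5, 1.0]), 0.01
K0 = K(X)
for _ in range(5000):
    X = rk4_step(field, X, dt)
print(abs(K(X) - K0))     # drift is tiny, consistent with conservation
```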

2.6.2 Undamped forces

In this subsection, we analyze systems that model the motion of a particle of mass m. We

assume the motion is determined by forces f(x) that depend only on the position and not on

the velocity (there is no damping or friction). Since the mass times the acceleration equals

the forces acting on the particle, the differential equation determining the motion is given by mẍ = f(x). This equation can be written as a system of differential equations with only first derivatives by setting ẋ = y:

ẋ = y
ẏ = (1/m) f(x).

We want to find a quantity which is conserved along the trajectories. In this case, we denote:

E(x, ẋ) = (1/2) m ẋ² + V(x)

where V (x) is the potential energy:

V(x) = −∫_{x0}^{x} f(y) dy.

Then E(x, ẋ) is constant along the solutions; this is the total energy.

The pendulum: Consider the system of equations for the pendulum:

ẋ = y
ẏ = −sin(x).

The potential function is V (x) = 1 − cos(x). The potential function has a minimum at

x = 0 and multiples of 2π. It has maxima at x = ±π and other odd integer multiples


Figure 10: Plot of the surface (x, y, E(x, y)).

of π. The local maxima and minima of the potential energy V give the fixed points of the

system. We have already studied this problem and showed that (±π, 0) are saddles and (0, 0)

is a center when restricted to [−π, π]. We have been able, for the moment, to determine

the stable and unstable manifolds of the two saddles. We are going to present a method

which allows us to draw the full phase portrait by understanding the level sets of the energy E(x, y) = (1/2)y² + V(x), which can be thought of as a surface in R³ (see Figure 10). We recall

that for a fixed constant c, the level set of the energy E(x, y) at constant c is defined by:

E−1(c) = {(x, y) ∈ R2 | E(x, y) = c}.

First, we notice that V (0) = 0 and V (±π) = 2. Therefore, we need to understand the level

sets of the energy for c with 0 < c < 2, c = 2 and 2 < c. Indeed, if c < 0, then E⁻¹(c) = ∅. Note that for c = 2, we have:

y² = 2(cos(x) + 1),

which gives two curves y = ±√(2(cos(x) + 1)) which correspond to the stable and unstable

manifolds that we have already computed. These form heteroclinic connections between the

two fixed points (−π, 0) and (π, 0). For 0 < c < 2, the level set E−1(c) in −π < x < π

is a level curve surrounding (0, 0). Thus (0, 0) is surrounded by periodic orbits and it is a

nonlinear center. For c > 2, there are two values of y for each x such that (x, y) ∈ E⁻¹(c). Thus E⁻¹(c)

is the union of two curves, one with y > 0 and one with y < 0. These two trajectories are

called rotary solutions. See Figure 11, for a description of the phase portrait.
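The level-set picture described here is easy to reproduce with a contour plot of E. The sketch below (which assumes matplotlib is available) draws level sets below, at and above the separatrix value c = 2; the particular levels and plotting window are illustrative choices.

```python
# Illustrative sketch of the pendulum phase portrait via level sets of the
# energy E(x, y) = y^2/2 + 1 - cos(x), as described above (requires matplotlib).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2 * np.pi, 2 * np.pi, 400)
y = np.linspace(-3.0, 3.0, 400)
X, Y = np.meshgrid(x, y)
E = 0.5 * Y**2 + 1.0 - np.cos(X)

# levels below, at, and above the separatrix value c = 2
plt.contour(X, Y, E, levels=[0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
plt.xlabel("x"); plt.ylabel("y = x'")
plt.title("Level sets of the pendulum energy")
plt.show()
```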


Figure 11: Phase portrait of the undamped pendulum.

2.6.3 Liapunov functions and stability

As we have already seen, determining the stability of a fixed point is straightforward if it is hyperbolic. When this is not the case (a center for example), the determination becomes more problematic. Let L : O → R be a differentiable function defined on an open set O containing a fixed point X∗ of the system Ẋ = F(X). We consider the function

L̇(X) = DLX(F(X)).

We can also write L̇(X) as

L̇(X) = (d/dt)|_{t=0} L ◦ φt(X);

consequently, if L̇(X) is negative, then L decreases along the solution curve through X.

Theorem 2.9 (Liapunov stability). Let X∗ be an equilibrium point of Ẋ = F(X). Let L : O → R be a differentiable function defined on an open set O containing X∗. Suppose

that:

(i) L(X∗) = 0 and L(X) > 0 if X ≠ X∗,

(ii) L̇ ≤ 0 in O \ {X∗}.

Then X∗ is stable. Furthermore, if L also satisfies

(iii) L̇ < 0 in O \ {X∗},

then X∗ is asymptotically stable.

Definition 2.16. A function L satisfying (i) and (ii) is called a Liapunov function for X∗.

If (iii) also holds, we call L a strict Liapunov function.

Examples

1. Prey-predator system. If we set:

L(x, y) = K(x, y)−K(c/d, a/b)

then L(c/d, a/b) = 0 and L(x, y) > 0 for all (x, y) ≠ (c/d, a/b). Furthermore, we have already seen that L̇ = 0 for all (x, y). Then (c/d, a/b) is stable.


Figure 12: (a) Lyapunov stability versus (b) strict Lyapunov stability.

2. Undamped pendulum. If we set:

L(x, y) = E(x, y) = (1/2) y² + 1 − cos(x)

then L(0, 0) = 0 and L(x, y) > 0 in a neighborhood of (x, y) = (0, 0). Furthermore, we

have already seen that L̇ = 0 for all (x, y). Then (0, 0) is stable.

3. Consider the following system:

ẋ = (εx + 2y)(z + 1)
ẏ = (−x + εy)(z + 1)
ż = −z³

with ε ∈ R. There exists a unique fixed point (0, 0, 0). The linearization of the system

at (0, 0, 0) is

[ ε 2 0 ; −1 ε 0 ; 0 0 0 ].

The eigenvalues are 0 and ε ± i√2. So the origin is unstable if ε > 0. When ε ≤ 0, the

origin is not hyperbolic and we cannot conclude. The idea is to search for a Liapunov

function for (0, 0, 0) of the form:

L(x, y, z) = (1/2)(ax² + by² + cz²)

with a, b, c to be determined later. For such an L we have

L̇ = axẋ + byẏ + czż
   = ax(εx + 2y)(z + 1) + by(−x + εy)(z + 1) − cz⁴
   = ε(ax² + by²)(z + 1) + (2a − b)xy(z + 1) − cz⁴.


We set a = 1, b = 2 and c = 1, such that if ε = 0 we have L̇ = −z⁴ ≤ 0, so the origin is stable. If ε < 0, then we find that

L̇ = ε(x² + 2y²)(z + 1) − z⁴

so that L̇ < 0 in the region O given by z > −1 (minus the origin). We conclude that

the origin is asymptotically stable in this case. This is our first example of a bifurcation

at ε = 0, the origin has changed stability (transcritical bifurcation).

2.6.4 Gradient systems

We now turn to a particular type of system for which Liapunov functions are natural.

Definition 2.17 (Gradient system). A gradient system on Rn is a system of the form

Ẋ = −grad V(X) (2.11)

where V : Rn → R is a C∞ function and

grad V = (∂V/∂x1, · · · , ∂V/∂xn).

Proposition 2.8. The function V is a Liapunov function for the system (2.11). Moreover, V̇(X) = 0 if and only if X is a fixed point.

Proof. By the chain rule we have

V̇(X) = (d/dt) V(X(t)) = DV_{X(t)}(X′(t)) = grad V(X(t)) · (−grad V(X(t)))

then

V̇(X) = −‖grad V(X(t))‖² ≤ 0.

In particular V̇(X) = 0 if and only if grad V(X) = 0.

Corollary 2.1. If X∗ is an isolated minimum of V , then X∗ is asymptotically stable.

To understand a gradient flow geometrically, we look at the level surfaces of the function

V : Rn → R. These are the subsets V⁻¹(c) with c ∈ R. A point X ∈ V⁻¹(c) is a regular point if grad V(X) ≠ 0; near a regular point, V⁻¹(c) looks like a surface of dimension n − 1 (a curve for n = 2). If all the points in V⁻¹(c) are regular points then we call c a regular value for V. Furthermore, grad V(X) is perpendicular to every tangent vector to the level set V⁻¹(c) at X provided

that c is a regular value of V .

Remark 2.6. The critical points of V are the equilibrium points of the system.

Example: the nonlinear pendulum.


2.7 Introduction to bifurcations of dimension 1

Definition 2.18. In dynamical systems, a bifurcation occurs when a small smooth change

made to the parameter values (the bifurcation parameters) of a system causes a sudden “qual-

itative” or topological change in its behaviour. Generally, at a bifurcation, the local stability

properties of equilibria, periodic orbits or other invariant sets change.

In this section, we consider scalar differential equations of the form

du/dt = f(u, µ). (2.12)

Here the unknown u is a real-valued function of the time t, and the vector field f is real-

valued depending, besides u, upon a parameter µ. The parameter µ is the bifurcation

parameter. We suppose that equation (2.12) is well-defined and satisfies the hypotheses

of the Cauchy-Lipschitz theorem, such that for each initial condition there exists a unique

solution of equation (2.12). Furthermore we assume that the vector field is of class Ck, k ≥ 2,

in a neighborhood of (0, 0) satisfying:

f(0, 0) = 0,   ∂f/∂u(0, 0) = 0. (2.13)

The first condition shows that u = 0 is an equilibrium of equation (2.12) at µ = 0. We are

interested in local bifurcations that occur in the neighborhood of this equilibrium when we

vary the parameter µ. The second condition is a necessary, but not sufficient, condition for

the appearance of local bifurcations at µ = 0.

Remark 2.7. Suppose that the second condition is not satisfied: ∂f/∂u(0, 0) ≠ 0. A direct

application of the implicit function theorem shows that the equation f(u, µ) = 0 possesses a

unique solution u = u(µ) in a neighborhood of 0, for small enough µ. In particular u = 0

is the only equilibrium of equation (2.12) in a neighborhood of 0 when µ = 0, and the same

property holds for µ small enough. Futhermore, the dynamics of (2.12) in a neighborhood of

0 is qualitatively the same for all sufficiently small values of the parameter µ: no bifurcation

occurs for small values of µ.

2.7.1 Saddle-node bifurcation

Theorem 2.10 (Saddle-node bifurcation). Assume that the vector field f is of class Ck, k ≥ 2, in a neighborhood of (0, 0) and satisfies:

∂f/∂µ(0, 0) =: a ≠ 0,   ∂²f/∂u²(0, 0) =: 2b ≠ 0. (2.14)

The following properties hold in neighborhood of 0 in R for small enough µ:

(i) if ab < 0 (resp. ab > 0) the differential equation has no equilibria for µ < 0 (resp. for

µ > 0),


(ii) if ab < 0 (resp. ab > 0) the differential equation possesses two equilibria u±(ε), ε =√|µ| for µ > 0 (resp. µ < 0), with opposite stabilities. Furthermore, the map ε →

u±(ε) is of class Ck−2 in a neighborhood of 0 in R, and u±(ε) = O(ε).

Then for equation (2.12), a saddle-node bifurcation occurs at µ = 0.

A direct consequence of conditions (2.14) is that f has the expansion:

f(u, µ) = aµ+ bu2 + o(|µ|+ u2) as (u, µ)→ (0, 0)

Figure 13: Saddle-node bifurcation: bifurcation diagrams, in the (µ, u)-plane, of the trun-

cated equation (2.15) for different values of a and b. The solid lines represent branches of

stable equilibria, the dashed lines branches of unstable equilibria, and the arrows indicate

the sense of increasing time t.

Exercise 2.1. Consider the truncated equation

du/dt = aµ + bu². (2.15)

Plot bifurcation diagrams in the (u, µ)-plane of this truncated equation for different values of a and b.
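A minimal numerical sketch for this exercise (the plotting range and sign choices below are arbitrary, not prescribed by the notes): solve aµ + bu² = 0 for the equilibrium branches u = ±√(−aµ/b) and color them according to the sign of ∂f/∂u = 2bu.

```python
import numpy as np
import matplotlib.pyplot as plt

def equilibria(a, b, mu):
    # equilibria of du/dt = a*mu + b*u^2: u = ±sqrt(-a*mu/b), existing when a*b*mu < 0
    arg = -a * mu / b
    u = np.where(arg >= 0, np.sqrt(np.clip(arg, 0.0, None)), np.nan)
    return u, -u

mu = np.linspace(-1, 1, 400)
fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax, (a, b) in zip(axes.flat, [(1, 1), (1, -1), (-1, 1), (-1, -1)]):
    for u in equilibria(a, b, mu):
        stable = 2 * b * u < 0          # sign of f_u = 2*b*u decides the stability
        ax.plot(mu[stable], u[stable], "b-")     # stable branch
        ax.plot(mu[~stable], u[~stable], "r--")  # unstable branch
    ax.set_title(f"a = {a}, b = {b}")
    ax.set_xlabel("mu"); ax.set_ylabel("u")
plt.tight_layout(); plt.show()
```

The four panels reproduce the qualitative pictures of figure 13: the two branches exist only on one side of µ = 0 and have opposite stabilities.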

Proof. Since a ≠ 0, we apply the implicit function theorem, which implies the existence of a unique solution µ = g(u), for u close to 0, of the equation f(u, µ) = 0, where g is of class C^k, k ≥ 2, in a neighborhood of the origin with g(0) = 0. Its Taylor expansion is given by

µ = −(b/a)u² + o(u²).

Consequently, equation (2.12) has no equilibria if abµ > 0, one equilibrium u = 0 if µ = 0, and a pair of equilibria u±(µ) = ±√(−aµ/b) + o(√|µ|) if abµ < 0. Finally, in the case abµ < 0, we have

∂f/∂u(u±(µ), µ) = 2b u±(µ) + o(√|µ|),

so the equilibrium u−(µ) is attracting (asymptotically stable) when b > 0 and repelling (unstable) when b < 0, whereas the equilibrium u+(µ) has opposite stability properties.


Figure 14: Extended phase portrait, in the (t, u)-plane, of the truncated equation (2.15) for

b > 0 and (a) aµ > 0, (b) µ = 0, (c) aµ < 0. Phase portraits at a saddle node bifurcation.

2.7.2 Pitchfork bifurcation

Theorem 2.11 (Pitchfork bifurcation). Assume that the vector field f is of class C^k, k ≥ 3, in a neighborhood of (0, 0), that it satisfies conditions (2.13), and that it is odd with respect to u:

f(−u, µ) = −f(u, µ). (2.16)

Furthermore assume that:

∂²f/∂µ∂u(0, 0) =: a ≠ 0,  ∂³f/∂u³(0, 0) =: 6b ≠ 0. (2.17)

The following properties hold in a neighborhood of 0 in R for small enough µ:

(i) if ab < 0 (resp. ab > 0) the differential equation has the trivial equilibrium u = 0 as its only equilibrium for µ < 0 (resp. for µ > 0). This equilibrium is stable when b < 0 and unstable when b > 0.

(ii) if ab < 0 (resp. ab > 0) the differential equation possesses the trivial equilibrium u = 0 and two nontrivial equilibria u±(ε), ε = √|µ|, for µ > 0 (resp. µ < 0), which are symmetric, u+(ε) = −u−(ε). The map ε → u±(ε) is of class C^{k−3} in a neighborhood of 0 in R, and u±(ε) = O(ε). The nontrivial equilibria are stable when b < 0 and unstable when b > 0, whereas the trivial equilibrium has the opposite stability.

Then for equation (2.12), a pitchfork bifurcation occurs at µ = 0.

A direct consequence of conditions (2.13), (2.16) and (2.17) is that f has the Taylor expansion:

f(u, µ) = u h(u², µ),  h(u², µ) = aµ + bu² + o(|µ| + u²) as (u, µ) → (0, 0),

where h is of class C^{(k−1)/2} in a neighborhood of (0, 0).

Exercise 2.2. Consider the truncated equation

du/dt = aµu + bu³. (2.18)


• Plot bifurcation diagrams in the (u, µ)-plane of this truncated equation for different values of a and b.

• Prove the theorem.

Figure 15: Pitchfork bifurcation: bifurcation diagrams, in the (µ, u)-plane, of the truncated

equation (2.18) for different values of a and b. The solid lines represent branches of stable

equilibria, the dashed lines branches of unstable equilibria, and the arrows indicate the sense

of increasing time t.

2.7.3 Transcritical bifurcation

Theorem 2.12 (Transcritical bifurcation). Assume that the vector field f is of class C^k, k ≥ 2, in a neighborhood of (0, 0), that it satisfies conditions (2.13), and also:

∂²f/∂µ∂u(0, 0) =: a ≠ 0,  ∂²f/∂u²(0, 0) =: 2b ≠ 0. (2.19)

The following properties hold in a neighborhood of 0 in R for small enough µ:

(i) the differential equation possesses the trivial equilibrium u = 0 and the nontrivial equilibrium u0(µ), where the map µ → u0(µ) is of class C^{k−2} in a neighborhood of 0 in R, and u0(µ) = O(µ).

(ii) if aµ < 0 (resp. aµ > 0) the trivial equilibrium u = 0 is stable (resp. unstable), whereas the nontrivial equilibrium u0(µ) is unstable (resp. stable).

Then for equation (2.12), a transcritical bifurcation occurs at µ = 0.

A direct consequence of conditions (2.13) and (2.19) is that f has the Taylor expansion:

f(u, µ) = aµu + bu² + o(u|µ| + u²) as (u, µ) → (0, 0).

Exercise 2.3. Consider the truncated equation

du/dt = aµu + bu².


• Plot bifurcation diagrams in the (u, µ)-plane of this truncated equation for different values of a and b.

• Prove the theorem.

Further readings on bifurcation can be found in the book of Kuznetsov [8].

3 Periodic orbits

In the previous chapter, we concentrated our effort on the study of fixed points of systems of differential equations. There exist, of course, other types of interesting solutions. Among them are periodic solutions, or closed orbits, which we already encountered in some examples: the Lotka-Volterra system and the nonlinear pendulum. This chapter is dedicated to the study of these solutions and will allow us to establish important connections between continuous and discrete dynamical systems.

Figure 16: Periodic orbits for continuous and discrete systems.

3.1 Definitions

Definition 3.1 (Periodic point and periodic orbit). Let X(t) = φt(X0) be the solution of the differential equation Ẋ = F(X) with initial condition X0. We define a periodic point of period T to be a point X0 such that φT(X0) = X0 but φt(X0) ≠ X0 for all 0 < t < T. If X0 is a periodic point, then the set γ = {φt(X0) | 0 ≤ t ≤ T} of all the points on its orbit is called a periodic orbit or closed orbit.

Remark 3.1. γ is called a periodic orbit because the flow is periodic in time: φ_{t+T}(X0) = φt(X0) for all t. γ is called a closed orbit because it closes up on itself after time T, so that the whole orbit {φt(X0) | −∞ < t < +∞} is a closed set.


Periodic orbits in the plane can either be contained in a whole band of periodic orbits, as for the pendulum equation, or they can be isolated in the sense that nearby orbits are not periodic. The latter case is called a limit cycle.

Definition 3.2 (Limit cycle). A limit cycle is an isolated periodic orbit for a system of

differential equations in the plane.

Figure 17: Example of a limit cycle.

Definition 3.3 (Limit sets). A point Y ∈ Rn is an ω-limit point of a trajectory φt(X0) provided that there exists a sequence of times tm going to infinity such that lim_{m→∞} φ_{tm}(X0) = Y. This condition means that the orbit φt(X0) keeps coming back near Y infinitely often as t goes to infinity. The set of all ω-limit points of X0 is called the ω-limit set and is denoted ω(X0).

Similarly, a point Y ∈ Rn is an α-limit point of a trajectory φt(X0) provided that there exists a sequence of times tm going to −∞ such that lim_{m→∞} φ_{tm}(X0) = Y. The set of all α-limit points of X0 is called the α-limit set and is denoted α(X0).

Examples: If X0 is a fixed point, then ω(X0) = α(X0) = {X0}. If X0 is a periodic point

then ω(X0) = α(X0) = γ where γ is the periodic orbit.

Proposition 3.1. Assume that φt(X0) is a trajectory for all t ∈ R. Then the following

properties are true:

(i) ω(X0) and α(X0) are invariant sets: if Y ∈ ω(X0) (resp. α(X0)), then the orbit φt(Y )

is in ω(X0) (resp. α(X0)) for all t ∈ R.

(ii) ω(X0) and α(X0) are closed sets.

(iii) The limit set depends only on the trajectory and not on a particular point, so ω(X0) =

ω(φt(X0)) and α(X0) = α(φt(X0)) for all t ∈ R.

Just like a fixed point, a periodic orbit can have different types of stability.


Definition 3.4 (Orbitally stable). A periodic orbit γ = {φt(X0) | 0 ≤ t ≤ T} is called orbitally stable provided that the following condition holds:

• given any ε > 0, there is a δ > 0 such that, if X is an initial condition within a distance δ of γ, then φt(X) is within a distance ε of γ for all t ≥ 0.

Definition 3.5 (Orbitally asymptotically stable). A periodic orbit γ = {φt(X0) | 0 ≤ t ≤ T} is called orbitally asymptotically stable provided that it is orbitally stable and that the following condition further holds:

• there is a δ1 > 0 such that, for any initial condition X within a distance δ1 of γ, the distance between φt(X) and γ goes to zero as t goes to infinity, that is ω(X) = γ.

Example: consider the system of differential equations

ẋ = y + x(1 − x² − y²),
ẏ = −x + y(1 − x² − y²).

This example can be easily understood by using polar coordinates. If r² = x² + y², differentiating each side with respect to t we have:

rṙ = xẋ + yẏ = xy + x²(1 − r²) − xy + y²(1 − r²) = r²(1 − r²),

or ṙ = r(1 − r²). Similarly the angle θ satisfies tan θ = y/x, so again differentiating with respect to t we get

θ̇/cos²θ = (−x² + xy(1 − r²) − y² − xy(1 − r²))/x² = −r²/x²,

and therefore, since cos²θ = x²/r², θ̇ = −1.

The two equations are then decoupled. From this, we deduce that the origin r = 0 is a fixed point of the system and is repelling. The differential equation for r also has r = 1 as a fixed point, and it is attracting. So the planar differential equation has an attracting periodic orbit of radius one (which is a limit cycle!), and the origin is a repelling fixed point.

For all X0 = (x0, y0) ≠ (0, 0), ω(X0) = γ = {(x, y) ∈ R² | x² + y² = 1}. If ‖X0‖ < 1, then α(X0) = {(0, 0)}. If ‖X0‖ > 1, then α(X0) = ∅.

From now on, we restrict ourselves to planar dynamical systems.
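A short numerical check of this picture (a sketch, not part of the notes; the time span and solver tolerances are arbitrary choices): integrate the planar system and verify that r(t) = √(x² + y²) approaches 1 from initial conditions inside and outside the unit circle.

```python
import numpy as np
from scipy.integrate import solve_ivp

def F(t, X):
    x, y = X
    s = 1.0 - x**2 - y**2
    return [y + x * s, -x + y * s]

for X0 in [(0.1, 0.0), (2.0, 1.0)]:
    sol = solve_ivp(F, (0.0, 30.0), X0, rtol=1e-9, atol=1e-9)
    r_end = np.hypot(sol.y[0, -1], sol.y[1, -1])
    print(f"X0 = {X0}: r(30) = {r_end:.6f}")   # both values should be close to 1
```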

3.2 Poincaré-Bendixson theorem for planar systems

Definition 3.6. A set A is called positively invariant provided that whenever X0 ∈ A, then

φt(X0) ∈ A for all t > 0. See figure 18.


Figure 18: Positively invariant set.

Theorem 3.1 (Poincaré-Bendixson). Consider a differential equation Ẋ = F(X) on R².

1. Assume that F is defined on all of R² and that a forward orbit (φt(X0))_{t≥0} is bounded. Then ω(X0) either contains a fixed point or is a periodic orbit.

2. Assume that A is a closed and bounded subset of R² that is positively invariant for the differential equation. We assume that F is defined at all points of A and has no fixed point in A. Then, given any X0 ∈ A, the orbit φt(X0) is either periodic or tends toward a periodic orbit as t goes to ∞, and ω(X0) equals this periodic orbit.

Corollary 3.1. A compact set K that is positively or negatively invariant contains either a

limit cycle or an equilibrium point.

Corollary 3.2. Let γ be a closed orbit and let U be the open region in the interior of γ. Then U contains either an equilibrium point or a limit cycle.

Let γ be a closed orbit that forms the boundary of an open set U. Then U contains an equilibrium point.

Corollary 3.3. If L is a strict Liapunov function for a planar system, then there are no

limit cycles.

Example: consider the differential system

ẋ = y,
ẏ = −x + y(4 − x² − 4y²).

We use a bounding function (like a Liapunov function) L(x, y) = (1/2)(x² + y²). Its time derivative is

L̇ = y²(4 − x² − 4y²), which satisfies  L̇ ≥ 0 if 2L(x, y) = x² + y² ≤ 1,  and  L̇ ≤ 0 if 2L(x, y) = x² + y² ≥ 4.


These inequalities imply that the annulus

A = {(x, y) ∈ R² | 1/2 ≤ L(x, y) ≤ 2}

is positively invariant. The only fixed point of the system is at the origin, which is not in A. The Poincaré-Bendixson theorem then implies that there is a periodic orbit in A.
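As a numerical sanity check (a sketch with arbitrarily chosen initial data and time span, not part of the notes), one can integrate the system from points on both boundary circles of A and verify that 2L(x, y) = x² + y² stays, up to integration error, between 1 and 4 while the solution settles onto a periodic orbit.

```python
import numpy as np
from scipy.integrate import solve_ivp

def F(t, X):
    x, y = X
    return [y, -x + y * (4.0 - x**2 - 4.0 * y**2)]

t_eval = np.linspace(0.0, 50.0, 5001)
for X0 in [(1.0, 0.0), (2.0, 0.0)]:       # points on the inner and outer circles of A
    sol = solve_ivp(F, (0.0, 50.0), X0, t_eval=t_eval, rtol=1e-9, atol=1e-9)
    two_L = sol.y[0] ** 2 + sol.y[1] ** 2  # 2 L(x, y) along the trajectory
    print(X0, "min 2L =", round(two_L.min(), 4), " max 2L =", round(two_L.max(), 4))
```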

3.3 Local sections

Definition 3.7. Suppose that F(X0) ≠ 0, i.e., X0 is not a fixed point for the system Ẋ = F(X). The transverse line at X0, denoted by ℓ(X0), is the straight line through X0 which is perpendicular to the vector F(X0) based at X0.

Remark 3.2. Since F(X) is continuous, the vector field is not tangent to ℓ(X0), at least in some open interval in ℓ(X0) surrounding X0. We call such an open subinterval containing X0 a local section S at X0. For all X ∈ S, F(X) ≠ 0.

If S is a local section, the solution through a point Z0 may reach X0 ∈ S at a certain time t0. We show that, in a certain local sense, this time of first arrival at S is a continuous function of Z0.

Proposition 3.2. Let S be a local section at X0 and suppose φt0(Z0) = X0. Let W be a

neighborhood of Z0. Then there exists an open set U ⊂ W containing Z0 and a differentiable

function τ : U → R such that τ(Z0) = t0 and φτ(X)(X) ∈ S for each X ∈ U .

3.4 Stability of periodic orbits and the Poincaré map

Definition 3.8 (Poincaré map). Given a periodic orbit γ, let X0 ∈ γ and let S be a local section at X0. We define the Poincaré map as the first return map on S. This is the function P : U ⊂ S → S, with U a neighborhood of X0, such that:

P(X) = φ_{τ(X)}(X),

where τ(X) is the smallest positive time for which φ_{τ(X)}(X) ∈ S.

Remark 3.3. We always have P(X0) = X0. Close to a periodic orbit, we have thus transformed the study of the continuous dynamical system Ẋ = F(X) into the study of the iteration X_{n+1} = P(X_n) = P^{n+1}(X_0), where P is the Poincaré map. We can then use the stability theorems for fixed points of discrete systems.

Theorem 3.2 (Stability of periodic orbits). Let Ẋ = F(X) be a planar system and suppose that X0 lies on a closed orbit γ. Let P be a Poincaré map defined on a neighborhood of X0 in some local section. If |P′(X0)| < 1, then γ is orbitally asymptotically stable.


Figure 19: Poincaré map P.

Remark 3.4. All these results can be generalized to higher dimensions.

Example: let us consider again the system

ẋ = y + x(1 − x² − y²),
ẏ = −x + y(1 − x² − y²),

and prove that γ = {(x, y) ∈ R² | x² + y² = 1} is asymptotically stable. We need to construct the Poincaré map. We consider the first return of trajectories from the half line {(x, 0), x > 0} to itself. In polar coordinates, this amounts to following solutions from θ = 0 to θ = −2π.

After a separation of variables, we have:

2 ∫_{r0}^{r(t)} dr/(r(1 − r²)) = ∫_0^t 2 dt
⇒ ln(r(t)²/(1 − r(t)²)) − ln(r0²/(1 − r0²)) = 2t
⇒ r(t) = (1 + e^{−2t}(r0^{−2} − 1))^{−1/2}.

The solution for θ is

θ(t) = θ0 − t.

So it takes a time 2π to go once around the origin from θ0 = 0 to θ(2π) = −2π. Evaluating r(t) at t = 2π gives the radius after one revolution in terms of the original radius r0:

r1 = r(2π) = P(r0) = (1 + e^{−4π}(r0^{−2} − 1))^{−1/2}.

From this, it is straightforward to see that P(0) = 0 and P(1) = 1, while for all other initial values P(r0) ≠ r0. The derivative of the Poincaré map P is

P′(r) = e^{−4π}/(r² + e^{−4π}(1 − r²))^{3/2} = e^{−4π} P(r)³/r³ > 0.

Then P′(1) = e^{−4π} < 1 and γ is asymptotically stable.
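A quick numerical cross-check of this computation (a sketch; the finite-difference step and tolerances are arbitrary): since θ̇ = −1, one revolution takes time 2π, so the return map is obtained by integrating ṙ = r(1 − r²) over a time 2π, and a central difference at r = 1 should reproduce e^{−4π} ≈ 3.49 × 10^{−6}.

```python
import numpy as np
from scipy.integrate import solve_ivp

def P(r0, T=2.0 * np.pi):
    # first-return map in the radial variable: integrate dr/dt = r(1 - r^2) for time 2*pi
    sol = solve_ivp(lambda t, r: r * (1.0 - r**2), (0.0, T), [r0],
                    rtol=1e-12, atol=1e-12)
    return sol.y[0, -1]

h = 1e-3
dP = (P(1.0 + h) - P(1.0 - h)) / (2.0 * h)   # central difference at r = 1
print("numerical P'(1) =", dP)
print("exp(-4*pi)      =", np.exp(-4.0 * np.pi))
```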


4 Discrete dynamical systems

4.1 One-dimensional iteration maps

Figure 20: Staircase diagrams for stable fixed points.

4.2 Linearized stability of fixed points

How are all the notions and results on stability translated to discrete dynamical systems? First of all, let us define a discrete dynamical system as

X_{n+1} = F(X_n), (4.1)

where F : Rn → Rn is, at least, continuous. Given an initial condition X0 ∈ Rn, we naturally have X_{n+1} = F^{n+1}(X0). This allows us to define the flow of (4.1) as the one-parameter family (F^n)_{n∈N}. If F is invertible then F^{−n} exists and the flow is defined for n ∈ Z.

A fixed point or equilibrium is a point invariant under the flow, which leads to the following definition.

Definition 4.1. A point X∗ is a fixed point for (4.1) if X∗ = F (X∗).

We can also define local and global invariant stable and unstable manifolds. Let U be a neighborhood of the fixed point X∗.

W^s_loc(X∗) = {X0 ∈ U : F^n(X0) → X∗ as n → +∞ and F^n(X0) ∈ U for all n ≥ 0}, (4.2)

W^u_loc(X∗) = {X0 ∈ U : F^{−n}(X0) → X∗ as n → +∞ and F^{−n}(X0) ∈ U for all n ≥ 0}, (4.3)

and

W^s(X∗) = ⋃_{n∈N} F^{−n}(W^s_loc(X∗)), (4.4)

W^u(X∗) = ⋃_{n∈N} F^n(W^u_loc(X∗)). (4.5)

Remark 4.1. Contrary to the continuous case, invariant manifolds are made of discrete

points. See figures 21 and 22.

Figure 21: Invariant manifold of a saddle fixed point on the plane with positive multipliers.

Figure 22: Invariant manifold of a saddle fixed point on the plane with negative multipliers.

Definition 4.2. A fixed point X∗ for a map F is said to be stable if for all r > 0 there exists a δ > 0 such that if X is in the open ball B(X∗, δ) of radius δ and center X∗, then F^j(X) ∈ B(X∗, r) for all j ≥ 0.

A fixed point X∗ for a map F is said to be unstable if it is not stable.

A fixed point X∗ for a map F is called attracting (or asymptotically stable or a sink) provided that X∗ is stable and there is a δ1 > 0 such that if X ∈ B(X∗, δ1), then ‖F^j(X) − X∗‖ → 0 as j → +∞.

A fixed point X∗ for a map F is called repelling (or a source) provided that there is r1 > 0 such that, if X ≠ X∗ is in B(X∗, r1), then there exists j such that ‖F^j(X) − X∗‖ ≥ r1.

Theorem 4.1 (Linearized stability of a fixed point). Consider a discrete dynamical system X_{n+1} = F(X_n) where F is C². Let λ1, · · · , λn be the eigenvalues of DF_{X∗}.

1. If all the eigenvalues λj of DF_{X∗} have |λj| < 1, then X∗ is attracting.

2. If one eigenvalue λ_{j0} of DF_{X∗} has |λ_{j0}| > 1, then X∗ is unstable.


3. If all the eigenvalues λj of DF_{X∗} have |λj| > 1, then X∗ is repelling.

Definition 4.3. A fixed point of (4.1) is called hyperbolic provided that all the eigenvalues λj of DF_{X∗} satisfy |λj| ≠ 1.

Figure 23: Stable fixed points of one-dimensional systems.

Definition 4.4 (Periodic point). A point X∗ is a period-N point for F provided that F^N(X∗) = X∗ but F^j(X∗) ≠ X∗ for 0 < j < N.

Definition 4.5. A period-N point X∗ for a map F is said to be stable if for all r > 0 there exists a δ > 0 such that if X is in the open ball B(X∗, δ) of radius δ and center X∗, then F^j(X) ∈ B(F^j(X∗), r) for all j ≥ 0.

A period-N point X∗ for a map F is said to be unstable if it is not stable.

A period-N point X∗ for a map F is called attracting (or asymptotically stable or a sink) provided that X∗ is stable and there is a δ1 > 0 such that if X ∈ B(X∗, δ1), then ‖F^j(X) − F^j(X∗)‖ → 0 as j → +∞.

A period-N point X∗ for a map F is called repelling (or a source) provided that there is r1 > 0 such that, if X ≠ X∗ is in B(X∗, r1), then there exists j such that ‖F^j(X) − F^j(X∗)‖ ≥ r1.

Theorem 4.2 (Linearized stability of a period-N point). Consider a discrete dynamical system X_{n+1} = F(X_n) where F is C², with X∗ a period-N point. Let λ1, · · · , λn be the eigenvalues of D(F^N)_{X∗}.

1. If all the eigenvalues λj of D(F^N)_{X∗} have |λj| < 1, then X∗ is attracting.

2. If one eigenvalue λ_{j0} of D(F^N)_{X∗} has |λ_{j0}| > 1, then X∗ is unstable.

3. If all the eigenvalues λj of D(F^N)_{X∗} have |λj| > 1, then X∗ is repelling.

Theorem 4.3. Theorems 2.5 and 2.7 are directly transferable to the discrete case.

Example: F(x) = −x³.
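A small numerical sketch of this example (the iteration counts are arbitrary): the fixed point 0 has F′(0) = 0, so it is attracting, while {1, −1} is a period-2 orbit with (F²)′(±1) = F′(−1)F′(1) = 9 > 1, hence repelling.

```python
def F(x):
    return -x**3

# fixed point 0: F'(0) = 0, so nearby orbits converge to 0 very fast
x = 0.9
for _ in range(20):
    x = F(x)
print("orbit of 0.9 after 20 iterates:", x)

# {1, -1} is a period-2 orbit: F(1) = -1 and F(-1) = 1
print("F(1), F(F(1)) =", F(1.0), F(F(1.0)))

# (F^2)'(1) = 9 > 1, so nearby orbits move away from the period-2 orbit
x = 1.001
for _ in range(10):
    x = F(x)
print("orbit of 1.001 after 10 iterates:", x)
```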


4.3 Bifurcation of one-dimensional iteration maps

Consider a discrete dynamical system depending on a parameter:

x_{n+1} = f(x_n, α), x_n ∈ R^N, α ∈ R,

where f is smooth with respect to both x and α. There exist only three ways in which the hyperbolicity condition of a fixed point can be violated. Either a simple positive multiplier approaches the unit circle and we have µ1 = 1 (see figure 24(a)), or a simple negative multiplier approaches the unit circle and we have µ1 = −1 (see figure 24(b)), or a pair of simple complex multipliers reaches the unit circle and we have µ1,2 = e^{±iθ0}, 0 < θ0 < π (see figure 24(c)).

Figure 24: Codimension 1 critical cases.

Definition 4.6. The bifurcation corresponding to the presence of µ1,2 = e^{±iθ0}, 0 < θ0 < π, is called a Neimark-Sacker bifurcation. Notice that for the Neimark-Sacker bifurcation we need N ≥ 2.

4.3.1 Saddle-node bifurcation

Consider the following one-dimensional dynamical system depending on one parameter:

x → α + x + x² = f(x, α) = fα(x). (4.6)

The map fα is invertible for |α| small in a neighborhood of the origin. The system has at α = 0 a nonhyperbolic fixed point x∗ = 0 with µ = fx(0, 0) = 1. The behavior of the system near x = 0 for small |α| is shown in figure 25.

For α < 0 there are two fixed points in the system: x1,2(α) = ±√−α, the left of which is

stable, while the right one is unstable. For α > 0 there are no fixed points in the system.

While α crosses zero from negative to positive values, the two fixed points (stable and

unstable) “collide”, forming at α = 0 a fixed point with µ = 1, and disappear. This is a fold

(tangent) bifurcation in the discrete-time dynamical system.

There is, as usual, another way of presenting this bifurcation: plotting a bifurcation diagram

in the direct product of the phase and parameter spaces, namely, in the (x, α)-plane.


Figure 25: Fold bifurcation.

Figure 26: Fixed point manifold.


The fixed-point manifold x − f(x, α) = 0 is simply the parabola α = −x² (see figure 26). Fixing some α, we can easily determine the number of fixed points in the system for this parameter value. At (x, α) = (0, 0), a map projecting the fixed-point manifold onto the α-axis has a singularity of the fold type.

Lemma 4.1. The system

x → α + x + x² + O(x³)

is locally topologically equivalent near the origin to the system

x → α + x + x².

Theorem 4.4. Suppose that a one-dimensional system

x_{n+1} = f(x_n, α), x_n ∈ R, α ∈ R,

with smooth f, has at α = 0 the fixed point x∗ = 0 and let µ = fx(0, 0) = 1. Assume that the following conditions are satisfied:

1. fxx(0, 0) ≠ 0,

2. fα(0, 0) ≠ 0.

Then there are smooth invertible coordinate and parameter changes transforming the system into

η → β + η ± η² + O(η³).

Proof. Expand f in a Taylor series with respect to x at x = 0:

f(x, α) = f0(α) + f1(α)x + f2(α)x² + O(x³).

Two conditions are satisfied: f0(0) = f(0, 0) = 0 (fixed-point condition) and f1(0) = fx(0, 0) = 1 (fold bifurcation condition). Since f1(0) = 1, we may write

f(x, α) = f0(α) + (1 + g(α))x + f2(α)x² + O(x³),

where g(α) is smooth and g(0) = 0.

We introduce a new variable (coordinate shift):

ξ = x + δ,

where δ = δ(α) is to be defined suitably. In the new variable the map x → f(x, α) becomes ξ → f(ξ − δ, α) + δ, that is,

ξ → (f0(α) − g(α)δ + f2(α)δ² + O(δ³)) + (1 + g(α) − 2f2(α)δ + O(δ²))ξ + (f2(α) + O(δ))ξ² + O(ξ³).


Assume that f2(0) = (1/2) fxx(0, 0) ≠ 0. Then there is a smooth function δ(α) which annihilates the parameter-dependent linear term in the above map for all sufficiently small |α|. Indeed, the condition for that term to vanish can be written as

F(α, δ) = g(α) − 2f2(α)δ + δ²ϕ(α, δ) = 0

for some smooth function ϕ. We have

F(0, 0) = 0,  F_δ(0, 0) = −2f2(0) ≠ 0,  F_α(0, 0) = g′(0),

which implies (local) existence and uniqueness of a smooth function δ = δ(α) such that δ(0) = 0 and F(α, δ(α)) = 0. It follows that

δ(α) = (g′(0)/(2f2(0))) α + O(α²).

The map written in terms of ξ is then given by

ξ → (f0′(0)α + α²ψ(α)) + ξ + (f2(0) + O(α))ξ² + O(ξ³),

where ψ is some smooth function. Consider a new parameter µ = µ(α) given by the constant term of the previous map:

µ = f0′(0)α + α²ψ(α).

We have

µ(0) = 0,  µ′(0) = f0′(0) = fα(0, 0).

If we assume fα(0, 0) ≠ 0, then the inverse function theorem implies local existence and uniqueness of a smooth function α = α(µ) with α(0) = 0. Therefore we have

ξ → µ + ξ + a(µ)ξ² + O(ξ³),

where a(µ) is a smooth function with a(0) = f2(0) ≠ 0. Let η = |a(µ)|ξ and β = |a(µ)|µ. Then we get

η → β + η + sη² + O(η³),

where s = sign a(0) = ±1.

Theorem 4.5. Any scalar one-parameter system

x → f(x, α),

having at α = 0 the fixed point x∗ = 0 with µ = fx(0, 0) = 1 and satisfying conditions 1-2 of theorem 4.4, is locally topologically equivalent near the origin to one of the following normal forms:

η → β + η ± η².


Theorem 4.6 (Saddle-node bifurcation). Consider a one-dimensional system

x_{n+1} = f(x_n, α), x_n ∈ R, α ∈ R,

with smooth f. We say that there is a saddle-node bifurcation at (x∗, αc) if the following conditions are satisfied:

(SN1) f(x∗, αc) = x∗,

(SN2) fx(x∗, αc) = 1,

(SN3) fxx(x∗, αc) ≠ 0,

(SN4) fα(x∗, αc) ≠ 0.
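As a hedged numerical helper (not from the notes; the function name check_saddle_node, the finite-difference step and the tolerance are illustrative choices), conditions (SN1)-(SN4) can be checked approximately for a given map with central finite differences:

```python
def check_saddle_node(f, x, alpha, h=1e-5, tol=1e-6):
    """Approximately check (SN1)-(SN4) for x_{n+1} = f(x_n, alpha) at (x, alpha).

    Derivatives are estimated by central finite differences of step h;
    tol is the tolerance used for the equality and nondegeneracy tests.
    """
    fx = (f(x + h, alpha) - f(x - h, alpha)) / (2 * h)
    fxx = (f(x + h, alpha) - 2 * f(x, alpha) + f(x - h, alpha)) / h**2
    fa = (f(x, alpha + h) - f(x, alpha - h)) / (2 * h)
    return {
        "SN1 (fixed point)": abs(f(x, alpha) - x) < tol,
        "SN2 (fx = 1)": abs(fx - 1.0) < tol,
        "SN3 (fxx != 0)": abs(fxx) > tol,
        "SN4 (falpha != 0)": abs(fa) > tol,
    }

# example: f(x, alpha) = alpha + x^2 at (1/2, 1/4), treated in section 4.4
print(check_saddle_node(lambda x, a: a + x**2, 0.5, 0.25))
```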

4.3.2 Pitchfork and transcritical bifurcation

Theorem 4.7 (Pitchfork bifurcation). Consider a one-dimensional system

x_{n+1} = f(x_n, α), x_n ∈ R, α ∈ R,

with smooth f. We say that there is a pitchfork bifurcation at (x∗, αc) if the following conditions are satisfied:

(P1) f(x∗, α) = x∗ and ∂^{2k}f/∂x^{2k}(x∗, α) = 0 for all α,

(P2) fx(x∗, αc) = 1,

(P3) fxα(x∗, αc) ≠ 0,

(P4) fxxx(x∗, αc) ≠ 0.

Theorem 4.8 (Transcritical bifurcation). Consider a one-dimensional system

x_{n+1} = f(x_n, α), x_n ∈ R, α ∈ R,

with smooth f. We say that there is a transcritical bifurcation at (x∗, αc) if the following conditions are satisfied:

(T1) f(x∗, α) = x∗ for all α,

(T2) fx(x∗, αc) = 1,

(T3) fxα(x∗, αc) ≠ 0,

(T4) fxx(x∗, αc) ≠ 0.


4.3.3 Period-doubling bifurcation

Consider the following one-dimensional dynamical system depending on one parameter:

x → −(1 + α)x + x³ = f(x, α) = fα(x). (4.7)

The map fα is invertible for small |α| in a neighborhood of the origin. System (4.7) has the fixed point x∗ = 0 for all α, with multiplier µ = −(1 + α). The point is linearly stable for small α < 0 and linearly unstable for α > 0. At α = 0 the point is not hyperbolic, since the multiplier is µ = fx(0, 0) = −1, but it is nevertheless (nonlinearly) stable. There are no other fixed points near the origin for small |α|.

Figure 27: Second iterate map near a period-doubling bifurcation.

Consider now the second iterate f²α(x) of the map (4.7). If y = fα(x), then

f²α(x) = −(1 + α)y + y³
= −(1 + α)(−(1 + α)x + x³) + (−(1 + α)x + x³)³
= (1 + α)²x − ((1 + α)(2 + 2α + α²))x³ + O(x⁵).

The map f²α(x) obviously has the trivial fixed point x∗ = 0. It also has two nontrivial fixed points for small α > 0,

x_{1,2} = f²α(x_{1,2}),  where x_{1,2} = ±(√α + O(α)),

see figure 27. These two points are stable and constitute a cycle of period two for the original map fα. This means that

x2 = fα(x1), x1 = fα(x2), with x1 ≠ x2.

Figure 28 shows the complete bifurcation diagram of system (4.7) with the help of a staircase diagram. As α approaches zero from above, the period-two cycle "shrinks" and disappears. This is a flip bifurcation. The other way to present this bifurcation is to use the (x, α)-plane (see figure 29).


Figure 28: Period-doubling bifurcation.

In this figure, the horizontal axis corresponds to the fixed point of (4.7) (stable for α < 0 and unstable for α > 0), while the "parabola" represents the stable cycle of period two {x1, x2} existing for α > 0.

Lemma 4.2. The system

x → −(1 + α)x + x³ + O(x⁴)

is locally topologically equivalent near the origin to the system

x → −(1 + α)x + x³.

Theorem 4.9. Suppose that a one-dimensional system

x_{n+1} = f(x_n, α), x_n ∈ R, α ∈ R,

with smooth f, has at α = 0 the fixed point x∗ = 0 and let µ = fx(0, 0) = −1. Assume that the following conditions are satisfied:

1. (1/2)(fxx(0, 0))² + (1/3)fxxx(0, 0) ≠ 0,

2. fxα(0, 0) ≠ 0.

Then there are smooth invertible coordinate and parameter changes transforming the system into

η → −(1 + β)η ± η³ + O(η⁴).

Theorem 4.10 (Period-doubling bifurcation). Consider a one-dimensional system

x_{n+1} = f(x_n, α), x_n ∈ R, α ∈ R,

with smooth f. We say that there is a period-doubling bifurcation at (x∗, αc) if the following conditions are satisfied:


Figure 29: A period-doubling bifurcation corresponds to a pitchfork bifurcation of the second iterate.

(PD1) f(x∗, αc) = x∗,

(PD2) fx(x∗, αc) = −1,

(PD3) fxα(x∗, αc) + (1/2) fα(x∗, αc) fxx(x∗, αc) ≠ 0,

(PD4) (1/2)(fxx(x∗, αc))² + (1/3)fxxx(x∗, αc) ≠ 0.

4.4 Example

x_{n+1} = α + x_n² = f(x_n, α) = fα(x_n).

Let us first study the fixed points of this system with respect to the parameter α. As the discriminant of x² − x + α = 0 is ∆ = 1 − 4α, there exist three distinct cases:

1. α > 1/4: there is no fixed point;

2. α = 1/4: there is a unique fixed point x = 1/2;

3. α < 1/4: there are two fixed points x±(α) = (1 ± √∆)/2.

We expect that a saddle-node (SN) bifurcation occurs at (x∗, αc) = (1/2, 1/4). We need to check the four conditions stated in theorem 4.6:

(SN1) f(1/2, 1/4) = 1/2,

(SN2) fx(1/2, 1/4) = 1,

(SN3) fxx(1/2, 1/4) = 2 ≠ 0,

(SN4) fα(1/2, 1/4) = 1 ≠ 0.


So, there is a SN bifurcation at (1/2, 1/4). Let us determine the stability of the two branches of fixed points x±(α). We have

∂x f(x±(α), α) = 1 ± √∆,

so the branch x+(α) is always unstable. The branch x−(α) is stable only if −1 < ∂x f(x−(α), α) < 1, which is satisfied only if α ∈ (−3/4, 1/4). For α < −3/4, the branch x−(α) is unstable. At α = −3/4 we expect to have a bifurcation, as

∂x f(x−(−3/4), −3/4) = −1.

Note that x−(−3/4) = −1/2. Based on the previous remark, we expect to find a period-doubling bifurcation at (−1/2, −3/4). We need to check the four conditions stated in theorem 4.10.

Figure 30: Bifurcation diagram for the map f(x, α) = α + x², for α ∈ [−1.367, 1/4].

(PD1) f(−1/2, −3/4) = −1/2,

(PD2) fx(−1/2, −3/4) = −1,

(PD3) fxα(−1/2, −3/4) + (1/2) fα(−1/2, −3/4) fxx(−1/2, −3/4) = 0 + 1 = 1 ≠ 0,

(PD4) (1/2)(fxx(−1/2, −3/4))² + (1/3)fxxx(−1/2, −3/4) = 2 + 0 = 2 ≠ 0.

So we have a PD bifurcation at (−1/2, −3/4). Let us compute the period-2 points for fα. We first need to find the fixed points of f²α:

f²α(x) = fα(fα(x)) = α + (α + x²)² = α + α² + 2αx² + x⁴.


The fixed points are given by:

x±(α) = (1 ± √(1 − 4α))/2  and  x_{2±}(α) = (−1 ± √(−3 − 4α))/2, the latter for α ≤ −3/4.

This implies that the fixed points of f²α that are the period-2 points for fα are x_{2±}(α). Note that at α = −3/4, we have x_{2±} = −1/2. Let us now evaluate the stability of these two period-2 points:

(f²α)′(x_{2±}(α)) = f′α(fα(x_{2±}(α))) × f′α(x_{2±}(α))
= ∂x f(fα(x_{2±}(α)), α) × ∂x f(x_{2±}(α), α)
= 2 fα(x_{2±}(α)) × 2 x_{2±}(α)
= 4 x_{2±}(α)(α + (x_{2±}(α))²)
= 4 x_{2+}(α) x_{2−}(α) = 4 + 4α,

where we used that x_{2+} x_{2−} = α + 1 is the product of the roots of x² + x + α + 1 = 0.

We see that the two branches of period-2 points are stable for α ∈ (−5/4, −3/4) and lose their stability at α = −5/4, where (f²α)′(x_{2±}(α)) = −1 and another period-doubling bifurcation occurs. In fact, it is possible to show that there exists a cascade of period-doubling bifurcations. This usually leads to chaotic dynamics, as we are going to see in the next chapter.
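A numerical sketch of this cascade (the parameter range, transient length and sample counts below are arbitrary choices, roughly matching figure 30): iterate the map, discard a transient, and plot the remaining iterates against α.

```python
import numpy as np
import matplotlib.pyplot as plt

alphas = np.linspace(-1.367, 0.25, 800)
n_transient, n_plot = 500, 200

A, X = [], []
for alpha in alphas:
    x = 0.0
    for i in range(n_transient + n_plot):
        x = alpha + x * x            # the map f(x, alpha) = alpha + x^2
        if i >= n_transient:
            A.append(alpha)
            X.append(x)

plt.plot(A, X, ",k")
plt.xlabel("alpha"); plt.ylabel("x")
plt.title("Attractor of x -> alpha + x^2 (period-doubling cascade)")
plt.show()
```

For α just below 1/4 a single branch (the stable fixed point x−) is visible, it splits into two branches at α = −3/4, into four at α = −5/4, and so on, until chaotic bands appear.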

5 Elementary examples of chaotic dynamical systems

Under construction. The number of examples presented here will depend upon the time that

we still have at the end of the semester.

5.1 Topological transitivity

5.1.1 Dense sets

Definition 5.1. A set A ⊂ [a, b] is dense in the closed interval [a, b], provided that arbitrarily

close to each point z ∈ [a, b], there is a point in A. In other words, for each z ∈ [a, b] and

ε > 0, the intersection

(z − ε, z + ε) ∩ A ≠ ∅.

As an example, if Q is the set of rational numbers then Q ∩ [0, 1] is dense in [0, 1].

5.1.2 Invariant sets and topological transitivity

Since we are going to consider an orbit that is dense in a set which is taken to itself by the

map, we start by defining an invariant set. In this section, the space is usually the interval

[0, 1], but we give the definition for a general space X.

Definition 5.2. Let f be a function from a set X to itself. A subset A of X is called

invariant provided that:


1. if x ∈ A, then f(x) ∈ A,

2. for any b ∈ A, there is some a ∈ A with f(a) = b (f(A) = A).

A subset is called positively invariant provided that if x ∈ A then f(x) ∈ A (f(A) ⊂ A).

Definition 5.3. A map f from a space X to itself is called topologically transitive provided that there is a point x∗ such that the forward orbit O_f^+(x∗) is dense in X.

5.1.3 The doubling map

Figure 31: (xn, xn+1)-plot of the doubling map when x0 ∈ [0, 1[ is rational.

We show in this subsection that the doubling map D(x) = 2x (mod 1) is topologically transitive by using binary expansions. The doubling map can also be defined as:

D(x) = 2x for 0 ≤ x < 1/2,  D(x) = 2x − 1 for 1/2 ≤ x < 1,  and D(1) = 0.

Note that this map has a single fixed point at x = 0. The graph of D^n consists of 2^n straight lines with slope 2^n, each extending over the entire interval [0, 1]. More precisely, D^n maps an interval of the form [k/2^n, (k + 1)/2^n] for k = 0, 1, · · · , 2^n − 2 onto [0, 1]. Thus the graph of D^n crosses the diagonal y = x at some point in this interval, and so there is a periodic point in any such interval. Since the length of these intervals is 1/2^n, it follows that periodic points are dense in [0, 1].

Any number x in [0, 1] can be represented as a binary expansion of the form

x = Σ_{j=1}^∞ a_j/2^j,

where each a_j is either 0 or 1. Using this expansion, we have

D(Σ_{j=1}^∞ a_j/2^j) = a_1 + Σ_{j=2}^∞ a_j/2^{j−1} (mod 1) = Σ_{k=1}^∞ a_{k+1}/2^k.


The nth iterate simply shifts the expansion by n places:

D^n(Σ_{j=1}^∞ a_j/2^j) = Σ_{k=1}^∞ a_{k+n}/2^k.

If a point x has an expansion that repeats every n places, a_{j+n} = a_j for all j, then D^n(x) = x and x is a period-n point. Note that the first n places in the expansion of a number also determine which subinterval of length 2^{−n} contains the point x: if x = Σ_{j=1}^∞ a_j/2^j and x_n = Σ_{j=1}^n a_j/2^j, then x_n ≤ x ≤ x_n + 2^{−n}.

We can now show that D has a point with a dense orbit in the interval [0, 1].

Theorem 5.1. The doubling map D is topologically transitive on [0, 1]. In other words,

there is a point x∗ in [0, 1] such that the orbit of x∗ by the doubling map D is dense in [0, 1]

(i.e. the orbit of x∗ comes arbitrarily close to every point in [0, 1]).

Proof. We describe the point x∗ by giving its binary expansion. Let b∗_1 = 0 and b∗_2 = 1; this lists both possible bits. Then let b∗_3 b∗_4 = 00, b∗_5 b∗_6 = 10, b∗_7 b∗_8 = 01 and b∗_9 b∗_10 = 11. Notice that we listed all the strings of bits of length two. At the third stage, we use b∗_11 through b∗_34 to list all the strings of bits of length three. Continuing by induction, at the jth stage we list all the strings of bits of length j. Let x∗ be the point with this particular binary expansion, that is

x∗ = Σ_{j=1}^∞ b∗_j/2^j.

Now, take any point x = Σ_{j=1}^∞ a_j/2^j, given by its binary expansion. For any k, the first k bits a_1 a_2 · · · a_k appear as a string of bits in the expansion of x∗; that is, a_j = b∗_{j+m} for some fixed m and 1 ≤ j ≤ k. Then the binary expansion of D^m(x∗) agrees with that of x in these first k bits, and

|D^m(x∗) − x| = |Σ_{j=k+1}^∞ (b∗_{j+m} − a_j)/2^j| ≤ Σ_{j=k+1}^∞ 1/2^j = 1/2^k.

Thus the orbit of x∗ comes within a distance of at most 2^{−k} of x. Since 2^{−k} is arbitrarily small, we have proved that this is a dense orbit.
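A small computational illustration of this construction (a sketch; the depth, targets and truncation length are arbitrary): build the first bits of x∗ by concatenating all bit strings of lengths 1, 2, 3, . . . and check that a suitable shift of the expansion lands close to any chosen target.

```python
from itertools import product

def star_bits(max_len):
    # concatenate all bit strings of length 1, 2, ..., max_len (the expansion of x*)
    bits = []
    for L in range(1, max_len + 1):
        for word in product("01", repeat=L):
            bits.extend(int(b) for b in word)
    return bits

def value(bits):
    # the number in [0, 1] whose binary expansion starts with these bits
    return sum(b / 2**(j + 1) for j, b in enumerate(bits))

bits = star_bits(12)                 # contains every bit string of length <= 12
for target in [0.3, 0.71, 0.123456]:
    tbits, t = [], target
    for _ in range(10):              # first 10 bits of the target
        t *= 2
        tbits.append(int(t))
        t -= int(t)
    s = "".join(map(str, bits))
    m = s.find("".join(map(str, tbits)))   # where this string occurs in x*
    approx = value(bits[m:m + 40])         # D^m(x*), truncated to 40 bits
    print(f"target {target}: shift m = {m}, D^m(x*) ~ {approx:.6f}")
```

Each printed value agrees with its target to within 2^{−10}, as the proof predicts.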

5.1.4 More examples

The tent map T

Let T : [0, 1] → [0, 1] be the tent map defined by

T(x) = 2x for 0 ≤ x ≤ 1/2,  and  T(x) = 2(1 − x) for 1/2 ≤ x ≤ 1.

Theorem 5.2. The tent map T is topologically transitive on the interval [0, 1].


Figure 32: Plot of the tent map.

The logistic map G

Let G : [0, 1] → [0, 1] be the logistic map defined by

G(x) = 4x(1 − x).

Theorem 5.3. The logistic map G is topologically transitive on the interval [0, 1].

5.2 Sensitive dependence on initial conditions

Definition 5.4. Let f be a map on a space X. The map has sensitive dependence on initial conditions at x0 provided that there exists r > 0 such that for any δ > 0, there is a y0 within δ of x0 (|y0 − x0| < δ) and an iterate k > 0 such that

|f^k(y0) − f^k(x0)| ≥ r.

The map has sensitive dependence on initial conditions on a set A provided that it has sensitive dependence on initial conditions at every point x0 ∈ A; in particular, the same value r > 0 should work for all points x0 in A.

Theorem 5.4. The doubling map D has sensitive dependence on initial conditions on the

set [0, 1].

Proof. Fix x ∈ [0, 1] and ε > 0. We choose n such that 2^{−(n+1)} < ε. If

x = Σ_{j=1}^∞ a_j/2^j,

then we take

y = Σ_{j=1}^∞ b_j/2^j,

where b_j = a_j for all j ≠ n + 1 and b_{n+1} = 1 − a_{n+1}. Then we have:

|x − y| = 2^{−(n+1)} < ε  and  |D^n(x) − D^n(y)| = 1/2.

Figure 33: Behavior of the logistic map x_{n+1} = r x_n(1 − x_n).

2.

Theorem 5.5. The tent map T and the logistic map G have sensitive dependence on initial

conditions on the set [0, 1].
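A quick numerical illustration of sensitive dependence (a sketch; the initial separation 10^{−10} and the iteration budget are arbitrary), using the logistic map G, which behaves better than D in floating-point arithmetic:

```python
def G(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10       # two initial conditions 1e-10 apart
for k in range(1, 101):
    x, y = G(x), G(y)
    if abs(x - y) >= 0.5:     # separation has grown to order one
        print(f"separation >= 0.5 after k = {k} iterates")
        break
```

The separation roughly doubles at each step, so the two orbits become macroscopically different after a few dozen iterates.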

5.3 Conjugacy

Definition 5.5. Let f from X to X and g from Y to Y be two maps. A map h from X to Y is called a topological conjugacy from f to g, or just a conjugacy, provided that:

(i) h is a homeomorphism from X onto Y,

(ii) h ◦ f(x) = g ◦ h(x) for all x ∈ X.

Remark 5.1. We can easily deduce from the definition of conjugacy the following remarks.

• A conjugacy induces a conjugacy of iterates of the maps.

• The conjugacy takes the nth iterate of x0 by f to the nth iterate of y0 by g.

Proposition 5.1. Assume that f and g are conjugate by h. Then the map h takes a periodic orbit of f to a periodic orbit of g with the same period.

Proof. Since g(y) = h ◦ f ◦ h^{−1}(y), we also have g^n(y) = h ◦ f^n ◦ h^{−1}(y). Thus if f^n(p0) = p0 and q0 = h(p0), then:

g^n(q0) = h ◦ f^n ◦ h^{−1}(q0) = h ◦ f^n(p0) = h(p0) = q0.

If q0 had a lower period for g, then it would imply that p0 had a lower period for f, so n must be the least period of q0 for g.

Theorem 5.6. The logistic map G(u) = 4u(1 − u) and the tent map T(s) are topologically conjugate on [0, 1] by the map

u = h(s) = sin²(πs/2) for all s ∈ [0, 1].


Proof. We first calculate G ◦ h(s) as follows:

G ◦ h(s) = 4h(s)(1 − h(s)) = 4 sin²(πs/2) cos²(πs/2) = sin²(πs).

We now consider h ◦ T(s). For 0 ≤ s ≤ 1/2,

h ◦ T(s) = h(2s) = sin²(πs),

and for 1/2 ≤ s ≤ 1,

h ◦ T(s) = h(2 − 2s) = sin²(π − πs) = sin²(πs).

Finally, h is a homeomorphism of [0, 1] onto [0, 1] with h^{−1}(y) = (2/π) arcsin(√y).
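A short numerical check of the conjugacy relation h ◦ T = G ◦ h on a grid of points (a sketch; the grid size is arbitrary):

```python
import numpy as np

def T(s):
    return np.where(s <= 0.5, 2.0 * s, 2.0 * (1.0 - s))

def G(u):
    return 4.0 * u * (1.0 - u)

def h(s):
    return np.sin(np.pi * s / 2.0) ** 2

s = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(h(T(s)) - G(h(s))))
print("max |h(T(s)) - G(h(s))| on the grid:", err)   # of the order of machine precision
```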

5.4 Chaos

Definition 5.6. A map f defined on X is said to be chaotic if

1. f has sensitive dependence on initial conditions on X,

2. f is topologically transitive on X,

3. periodic points of f are dense in X.

Proposition 5.2. Assume that f and g are conjugate by h. If f is chaotic on X then g is

chaotic on Y .

Proof. We only check how sensitive dependence transfers. Suppose that f has sensitive dependence on initial conditions with constant β, and let X = [α1, α2]; we may assume that β < α2 − α1. For x ∈ [α1, α2 − β], consider the function |h(x + β) − h(x)|. This is a continuous function on [α1, α2 − β] which is positive, so it has a minimum value β′ > 0. It follows that h takes intervals of length β in X to intervals of length at least β′ in Y. Then β′ is a sensitivity constant for g.

Theorem 5.7. The doubling map D, the tent map T and the logistic map G are chaotic on

[0, 1].

Proof. In the previous subsections, we have shown that the doubling map D satisfies all three conditions of definition 5.6 and thus is a chaotic map on [0, 1]. In homework 7, we have shown that the tent map T is also chaotic on [0, 1]. As the tent map T and the logistic map G are topologically conjugate on [0, 1], the logistic map is also chaotic.


5.5 Cantor sets

So far we have used symbolic dynamics for maps that have an invariant interval, with no

points leaving the interval. In this section, we consider the case in which some points leave

the interval. We first start by considering the map T3 of slope 3,

T3(x) = 3x for x ≤ 1/2,  and  T3(x) = 3(1 − x) for 1/2 ≤ x.

Lemma 5.1. If T3^j(x0) is bounded for all j ≥ 0, then

T3^j(x0) ∈ [0, 1/3] ∪ [2/3, 1]

for all j ≥ 0.

Proof. • If x0 < 0, then T3^j(x0) = 3^j x0, which goes to minus infinity as j goes to infinity.

• If x0 > 1, then T3(x0) < 0 and T3^{j+1}(x0) = 3^j T3(x0) goes to minus infinity as j goes to infinity.

• If 1/3 < x0 < 2/3, then T3(x0) > 1 and T3²(x0) < 0, and T3^{j+2}(x0) = 3^j T3²(x0) goes to minus infinity as j goes to infinity.

• Thus, if the orbit does not go to minus infinity, x0 must be in the union of the two intervals [0, 1/3] and [2/3, 1].

• Now if T3^k(x0) is not in [0, 1/3] ∪ [2/3, 1] for some k ≥ 0, then by the previous cases applied to T3^k(x0) the orbit goes to minus infinity. Therefore, if T3^j(x0) is bounded for all j ≥ 0, then each iterate T3^k(x0) must be in the union of these two intervals.

We consider the points that stay in the interval [0, 1] for a given number of iterates. Let

K_n = {x | T3^j(x) ∈ [0, 1] for 0 ≤ j ≤ n}.

Then K_0 = [0, 1] (λ(K_0) = 1) and K_1 = [0, 1/3] ∪ [2/3, 1] (λ(K_1) = 2/3). Furthermore, we have

K_2 = [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1],

and λ(K_2) = (2/3)². If we denote I_L = [0, 1/3] and I_R = [2/3, 1], then

K_n = (I_L ∩ T3^{−1}(K_{n−1})) ∪ (I_R ∩ T3^{−1}(K_{n−1})),

with length λ(K_n) = (2/3)^n → 0 as n → ∞.

Let

K = ⋂_{n≥0} K_n = {x | T3^j(x) ∈ [0, 1] for 0 ≤ j < ∞}.


Figure 34: The sets K0, · · · ,K4 for the middle-third Cantor set.

This set is called the middle-third Cantor set, since the middle third of each interval is

removed at each stage. The length or Lebesgue measure of K is zero, λ(K) = 0.

We can label each of the intervals in K_n by means of the dynamics of T3. For a finite string s_0 s_1 . . . s_{n−1}, where each s_j equals L or R, let

I_{s_0 s_1 ... s_{n−1}} = {x | T3^j(x) ∈ I_{s_j} for 0 ≤ j ≤ n − 1}.

In particular, I_{LL} = [0, 1/9], I_{LR} = [2/9, 1/3], I_{RR} = [2/3, 7/9] and I_{RL} = [8/9, 1]. K_n is the union of all the I_{s_0 s_1 ... s_{n−1}} over all the choices of the strings of symbols,

K_n = ⋃ {I_{s_0 s_1 ... s_{n−1}} | s_j ∈ {L, R} for 0 ≤ j ≤ n − 1}.

We want to understand the properties of K. At each stage, the set K_n has 2^{n+1} end points. Let E be the union of all these sets of end points; it is a countable set of points.

Theorem 5.8. 1. The points in the set K are exactly those which can be given a ternary expansion using only 0's and 2's, and such an expansion for a point in K is unique. The set K is uncountable.

2. The set E is countable. The points in E have ternary expansions which end in either repeated 0's (the expansion is finite) or repeated 2's.

3. The set K\E ≠ ∅, and points in this set have ternary expansions with only 0's and 2's which do not end in repeated 0's or repeated 2's.

Proof. We use a ternary expansion of points. A point x ∈ [0, 1] can be written as

x = Σ_{j=1}^∞ a_j/3^j,

with each a_j equal to 0, 1 or 2. For example:

1 = Σ_{j=1}^∞ 2/3^j.


The sets K_1 and K do not contain the open interval (1/3, 2/3), whose points must have a_1 = 1, i.e., the term 1/3, in their expansion. All points in [2/3, 1] start with a_1 = 2, while those in [0, 1/3) start with a_1 = 0. The point 1/3 is in K_1 and K, but it can be written using only 0's and 2's in its expansion:

1/3 = Σ_{j=2}^∞ 2/3^j.

Thus all the points in K_1 and K can be represented with an expansion with a_1 either 0 or 2. The left end points of the intervals in K_1 all have expansions which end in repeated 0's, while the right end points have expansions which end in repeated 2's.

At the second stage,

K_2 = [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1].

The points in [0, 1/9] ∪ [2/3, 7/9] have a_2 = 0, while those in [2/9, 1/3] ∪ [8/9, 1] have a_2 = 2. Again, the right end points can be expressed using repeated 2's:

1/9 = Σ_{j=3}^∞ 2/3^j  and  7/9 = 2/3 + Σ_{j=3}^∞ 2/3^j.

Thus, all points in K_2 and K can be represented with an expansion with a_2 either 0 or 2. The left end points of the intervals in K_2 all have expansions that end in repeated 0's, while the right end points have expansions that end in repeated 2's.

At the nth stage, all the points which must have a_n = 1 in their expansion are removed. The left end points of the intervals in K_n all have expansions that end in repeated 0's, while the right end points have expansions that end in repeated 2's. Thus, any point in K can be given by a ternary expansion that uses only 0's and 2's and no 1's. This implies that the ternary expansion is unique for a point in K.

Let us define the map F from K to [0, 1] by

F(x) = F(Σ_{j=1}^∞ a_j/3^j) = Σ_{j=1}^∞ a_j/2^{j+1}.

This map realizes all possible binary expansions, so F is onto [0, 1]. Since [0, 1] is uncountable, it follows that K is uncountable.

Since E is countable, there are uncountably many points in K\E and K\E 6= ∅.

Examples:

To find a point in K\E, we take a ternary expansion that does not end in either repeated 0's or repeated 2's. For example, if a_j = 0 for odd j and a_j = 2 for even j, then:

x = Σ_{k=1}^∞ 2/3^{2k} = (2/9) · 1/(1 − 1/9) = 1/4.

Then 1/4 is in K but not in E.

Let us check whether the point 9/13 is in K.


To find its ternary expansion, we repeatedly multiply by 3 and take the integer part, obtaining

3 · 9/13 = 2 + 1/13 ⇒ a_1 = 2,
3 · 1/13 = 0 + 3/13 ⇒ a_2 = 0,
3 · 3/13 = 0 + 9/13 ⇒ a_3 = 0,
3 · 9/13 = 2 + 1/13 ⇒ a_4 = 2,

and the process begins to repeat. Therefore a_{1+3j} = 2 and a_{2+3j} = a_{3+3j} = 0 for all j ≥ 0. Since the expansion uses only 2's and 0's, 9/13 is in K; since the expansion does not end in repeated 0's or repeated 2's, 9/13 is not in E.
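A small computational version of this test (a sketch; the number of digits checked is arbitrary, and checking finitely many digits is only a necessary condition): compute ternary digits with exact rational arithmetic and flag any digit equal to 1.

```python
from fractions import Fraction

def ternary_digits(x, n):
    """First n ternary digits of x in [0, 1], computed exactly for rational x."""
    digits = []
    for _ in range(n):
        x *= 3
        d = int(x)          # the integer part is the next digit
        digits.append(d)
        x -= d
    return digits

for x in [Fraction(9, 13), Fraction(1, 4), Fraction(1, 2)]:
    d = ternary_digits(x, 30)
    print(x, d[:9], "no digit 1 in the first 30 digits:", 1 not in d)
```

For 9/13 and 1/4 the digits are the periodic strings 200200... and 020202..., while 1/2 = 0.111... in base 3 lies in the removed middle thirds.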

Theorem 5.9. Consider the tent map T3. Use this to define the middle-third Cantor set K.

1. The periodic points for T3 are dense in K.

2. The map T3 has sensitive dependence on initial conditions when restricted to K.

3. The map T3 is topologically transitive on K.

As a conclusion, the tent map T3 is chaotic on K.

Proof. The proof is left as an exercise.


References

[1] M.W. Hirsch, S. Smale and R.L. Devaney. Differential Equations, Dynamical Systems,

and an Introduction to Chaos. Academic Press, 3rd Edition.

[2] R.C. Robinson. An introduction to Dynamical Systems: Continuous and Discrete.

Prentice Hall, 2004.

[3] R.L. Devaney. An Introduction to Chaotic Dynamical Systems. Addison-Wesley, The Advanced Book Program, 1989.

[4] K.T. Alligood, T.D. Sauer and J.A. Yorke. Chaos: An Introduction to Dynamical

Systems. Springer, 1996.

[5] S.H. Strogatz. Nonlinear Dynamics and Chaos, with Applications to Physics, Biology, Chemistry, and Engineering. Perseus Books, 1994.

[6] R.C. Robinson. Dynamical Systems: Stability, Symbolic Dynamics, and Chaos. CRC Press, 1999.

[7] G.R. Belickii. Functional equations and conjugacy of local diffeomorphisms of finite

smooth class. Dokl. Akad. Nauk 56-59, 1972.

[8] Y.A. Kuznetsov. Elements of Applied Bifurcation Theory. Springer-Verlag, New York,

2004.
