
IERG5300 Tutorial 1 Discrete-time Markov Chain

Peter Chen Peng. Adapted from Qiwen Wang’s Tutorial Materials.


Outline

Course Information

Discrete-time Markov Chain
  - N-step transition probability
  - Chapman-Kolmogorov Equations
  - Limiting probabilities

Miscellaneous Materials

Summary, Q&A


Course Information

Lecture notes can be found at

https://elearn.cuhk.edu.hk/webapps/portal/frameset.jsp

Or

https://course.ie.cuhk.edu.hk/~ierg5300/

Grading: 10% homework, 30% mid-term exam, 60% final exam.

Homework is coming soon.


Discrete-time Markov Chain - Definition

A sequence of discrete r.v.'s X0, X1, X2, … forms a Markov chain if

at any discrete time t, the state of Xt+1 depends only on the state of Xt, i.e.,

P(Xt+1 = j | Xt = i, Xt−1 = it−1, …, X1 = i1, X0 = i0) = P(Xt+1 = j | Xt = i) = Pij.

Pij is the 1-step transition probability, and it is time-invariant. Pij describes how likely state j is to occur tomorrow if today's state is i.

∏ is the 1-step transition matrix comprising all the Pij, where

Pij ≥ 0 for all i, j;   Σj Pij = 1 for all i.
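To make the definition concrete, here is a minimal numpy sketch of storing a 1-step transition matrix, checking the two conditions above, and sampling one step of the chain. The 3-state matrix below uses arbitrary illustrative values, not an example from the course:

```python
import numpy as np

# Hypothetical 3-state 1-step transition matrix (arbitrary illustrative values).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

# The two defining conditions: P_ij >= 0, and each row sums to 1.
assert np.all(P >= 0)
assert np.allclose(P.sum(axis=1), 1.0)

# One step of the chain: sample X_{t+1} given X_t = i, using row i as the distribution.
rng = np.random.default_rng(0)
i = 0
next_state = rng.choice(len(P), p=P[i])
print(f"From state {i} the chain moved to state {next_state}")
```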


Markov Chain - Example

An individual possesses r umbrellas, which he brings from his home to the office and vice versa. If he is at home (the office) at the beginning (end) of a day and it is raining, then he will take an umbrella from home (the office) to the office (home), provided there is one to be taken. If it is not raining, then he never takes an umbrella. Assume that, independent of the past, it rains at the beginning (end) of a day with probability p.

Q: Define a Markov chain with r+1 states.

A: Let Xt be the number of umbrellas at home at the beginning of day t; then Xt ∈ {0, 1, …, r}.
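Since the transition probabilities are not written out on this slide, here is a sketch that simulates the stated rules directly and estimates the distribution of Xt. The parameter values r = 3 and p = 0.4 are arbitrary choices, as is starting with all umbrellas at home:

```python
import numpy as np

def simulate_umbrellas(r=3, p=0.4, days=100_000, seed=0):
    """Simulate the umbrella rules directly.

    X_t = number of umbrellas at home at the beginning of day t;
    the remaining r - X_t umbrellas are at the office.
    """
    rng = np.random.default_rng(seed)
    home = r                      # arbitrary initial condition: all umbrellas at home
    counts = np.zeros(r + 1)
    for _ in range(days):
        counts[home] += 1         # record X_t at the beginning of the day
        # Morning: if raining and an umbrella is at home, he takes it to the office.
        if rng.random() < p and home > 0:
            home -= 1
        # Evening: if raining and an umbrella is at the office, he brings one home.
        if rng.random() < p and (r - home) > 0:
            home += 1
    return counts / days          # empirical distribution over {0, 1, ..., r}

print(simulate_umbrellas())
```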


N-step transition probability

A natural question is: given that the Markov chain is currently in state i, what is the chance that it is in state j after n time periods?

The n-step transition probability:

P^n_ij = P(Xt+n = j | Xt = i),   n ≥ 0, t ≥ 0.

Correspondingly, the n-step transition matrix is

P(n) = [ P^n_00  P^n_01  P^n_02  …
         P^n_10  P^n_11  P^n_12  …
         P^n_20  P^n_21  P^n_22  …
         …        …       …        ]


Chapman-Kolmogorov Equations

P(n) = ∏^n.

To compute P^(m+n)_ij = P(Xt+m+n = j | Xt = i), we can get it by conditioning on the state at some intermediate time t+m:

P(m+n) = P(m) P(n)

At any time period t, Xt is a r.v. with distribution Vt = { P(Xt = 0), P(Xt = 1), P(Xt = 2), … }; then for n ≥ 0, Vt+n = Vt ∏^n.

Derivation:

P^(m+n)_ij = P(Xt+m+n = j | Xt = i)
           = Σk P(Xt+m+n = j | Xt+m = k, Xt = i) · P(Xt+m = k | Xt = i)   // condition on Xt+m
           = Σk P^m_ik · P^n_kj   // only the newest knowledge counts; time-invariant property

Hence P(m+n) = P(m) P(n).
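A quick numerical check of both identities, P(n) = ∏^n and P(m+n) = P(m) P(n), again with an arbitrary illustrative matrix:

```python
import numpy as np
from numpy.linalg import matrix_power

# Arbitrary illustrative 3-state transition matrix.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

m, n = 2, 3
lhs = matrix_power(P, m + n)                    # P^(m+n)
rhs = matrix_power(P, m) @ matrix_power(P, n)   # P^(m) P^(n)
assert np.allclose(lhs, rhs)                    # Chapman-Kolmogorov equations
print(lhs)
```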


C-K Equations - Example

Suppose that coins 1 and 2 have probability 0.7 and 0.6 of coming up heads, respectively. If the coin flipped today comes up heads, then we select coin 1 to flip tomorrow, and if it comes up tails, then we select coin 2 to flip tomorrow. If the coin flipped on the first day is equally likely to be coin 1 or coin 2, then what is the probability that the coin flipped on the third day is coin 1?

Define the state on a day to be the label of the coin that is flipped on that day. Thus

∏ = [ 0.7  0.3
      0.6  0.4 ],   V0 = (0.5, 0.5),

and we want to find V2 = V0 ∏².
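Carrying out the computation (a numpy check; the resulting 0.665 is just the arithmetic of V2 = V0 ∏²):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.6, 0.4]])
V0 = np.array([0.5, 0.5])                 # day 1: coin 1 or coin 2 equally likely

V2 = V0 @ np.linalg.matrix_power(P, 2)    # distribution on day 3
print(V2)                                 # [0.665 0.335] -> P(coin 1 on day 3) = 0.665
```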


Limiting Probabilities

Summary of theorems for finite-state Markov chains. Each implication below goes one way only; the reverse is not true.

Ergodicity   // all entries of the matrix ∏^t are nonzero for some t ≥ 1
⇓
The limiting distribution V exists.   // regardless of the initial distribution V0
⇓
Eigenvalue 1 of the transition matrix, with a 1-dim eigenspace.
// rank(∏ − I) is full minus 1 (dim of eigenspace = 1);
// long-run distr. = the unique normalized eigenvector with eigenvalue 1
⇓
Eigenvalue 1 of the transition matrix.   // ∏ − I is singular: det(∏ − I) = 0


Limiting Probabilities

Consider a Markov chain with discrete r.v.'s X0, X1, X2, … and transition matrix ∏. At any time t ≥ 0, Xt is a r.v. with distribution

Vt = { P(Xt = 0), P(Xt = 1), P(Xt = 2), … }.

Then Vt+n = Vt ∏^n for n ≥ 0.   // Chapman-Kolmogorov equation

If the limit

lim(t→∞) Vt = V = { π0, π1, π2, … }

exists, V is referred to as the limiting distribution (stationary state) of the Markov chain, and the πi are the limiting probabilities.

To calculate the limiting probabilities, we need the following equations:

V = V∏,   Σi πi = 1.
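One way to solve V = V∏ together with Σi πi = 1 numerically is to stack the normalization constraint onto the linear system. A sketch, assuming the limiting distribution exists and is unique:

```python
import numpy as np

def limiting_distribution(P):
    """Solve V = V P together with sum(V) = 1 as one linear system."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n),   # (P^T - I) V^T = 0, i.e. V = V P
                   np.ones(n)])       # normalization: entries sum to 1
    b = np.concatenate([np.zeros(n), [1.0]])
    V, *_ = np.linalg.lstsq(A, b, rcond=None)
    return V

P = np.array([[0.7, 0.3],
              [0.6, 0.4]])
print(limiting_distribution(P))       # [2/3, 1/3]
```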


Limiting Probabilities - Ergodicity

Definition. A finite-state Markov chain is ergodic if all entries of the matrix ∏^t are nonzero for some t ≥ 1.
// For some t, you can go from anywhere to anywhere in exactly t steps.

Theorem. If a finite-state Markov chain is ergodic, then the stationary state exists.
// Ergodicity is a sufficient condition but not a necessary one.
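The definition suggests a direct check: multiply ∏ by itself and see whether some power has all entries positive. A sketch; the cutoff (n−1)² + 1 is Wielandt's bound on how far one needs to look for an n-state chain:

```python
import numpy as np

def is_ergodic(P):
    """Return True iff all entries of P^t are nonzero for some t >= 1."""
    n = P.shape[0]
    t_max = (n - 1) ** 2 + 1          # Wielandt's bound: no need to look further
    Pt = np.eye(n)
    for _ in range(t_max):
        Pt = Pt @ P
        if np.all(Pt > 0):
            return True
    return False

print(is_ergodic(np.array([[0.7, 0.3], [0.6, 0.4]])))  # True
print(is_ergodic(np.array([[0.0, 1.0], [1.0, 0.0]])))  # False: the chain is periodic
```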


Limiting Probabilities - Remarks

V does not depend on V0, the initial distribution of the Markov chain.

V is equal to ANY row of lim(n→∞) ∏^n.

V may also exist when the Markov chain is not ergodic, e.g.

∏ = [ 0.2  0.8
      0    1   ].

V may not exist even though some V satisfies V = V∏; such a V can NOT be regarded as a stationary state, e.g.

∏ = [ 0  1
      1  0 ].
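A short demo of both remarks using the two example matrices: the powers of the non-ergodic absorbing chain still converge, while the periodic chain's powers oscillate forever:

```python
import numpy as np
from numpy.linalg import matrix_power

# Non-ergodic (state 1 is absorbing), yet lim Pi^n exists: every row tends to (0, 1).
P1 = np.array([[0.2, 0.8],
               [0.0, 1.0]])
print(matrix_power(P1, 50))                       # ~[[0, 1], [0, 1]]

# Periodic chain: Pi^n alternates between I and the swap matrix, so the limit
# does not exist, even though V = (1/2, 1/2) satisfies V = V Pi.
P2 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
print(matrix_power(P2, 50), matrix_power(P2, 51))
```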


Limiting Probabilities - Remarks (cont’d)

As long as V satisfies V = V∏, Vj can be interpreted as the long-run proportion of time that the Markov chain is in state j.
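This interpretation can be checked empirically: simulate a long trajectory and count the fraction of steps spent in each state. A sketch, reusing the coin-example matrix whose stationary state is (2/3, 1/3):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.7, 0.3],
              [0.6, 0.4]])
steps, state = 200_000, 0
visits = np.zeros(2)
for _ in range(steps):
    visits[state] += 1                 # count time spent in the current state
    state = rng.choice(2, p=P[state])  # take one step of the chain
print(visits / steps)                  # approx. [0.667, 0.333]: matches V
```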


Limiting Probabilities ― Example 1

Each of two switches is either on or off during a day. On day n, each switch will independently be off with probability

p = (1 + number of on switches during day n−1) / 4.

(For instance, if both switches are on during day n−1, then each will independently be off during day n with probability 3/4.) What fraction of days are both switches on? What fraction are both off?


How to solve:

1) Model the problem as a Markov chain: define the proper states and the corresponding transition probabilities.

Define the state to be the number of on switches, which gives us a 3-state Markov chain.



2) Solve the linear equations:

(π0, π1, π2) = (π0, π1, π2) [ 1/16  3/8  9/16
                              1/4   1/2  1/4
                              9/16  3/8  1/16 ],   π0 + π1 + π2 = 1

⇒ π0 = 2/7, π1 = 3/7, π2 = 2/7.

So both switches are on for 2/7 of the days, and both are off for 2/7 of the days.
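A two-line verification that (2/7, 3/7, 2/7) indeed satisfies both equations:

```python
import numpy as np

# Transition matrix for the number of on switches (states 0, 1, 2).
P = np.array([[1/16, 3/8, 9/16],
              [1/4,  1/2, 1/4 ],
              [9/16, 3/8, 1/16]])

pi = np.array([2/7, 3/7, 2/7])
assert np.allclose(pi @ P, pi)        # pi = pi * P
assert np.isclose(pi.sum(), 1.0)      # probabilities sum to 1
print("Stationary:", pi)
```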


Geometric Random Variable

An experiment with probability of success p. Two types:

(Starting at 1) N = the total number of experiments until a success appears:
P(N = k) = (1−p)^(k−1) p, k ≥ 1;   E[N] = 1/p.

(Starting at 0) N = the number of failed experiments until a success appears:
P(N = k) = (1−p)^k p, k ≥ 0;   E[N] = 1/p − 1.

The geometric r.v. is the only discrete one that satisfies the memoryless property:

P(N = k+m | N > m) = P(N = k).
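A quick check of the memoryless property for the starting-at-1 version, written as code; p, k, m are arbitrary test values:

```python
import numpy as np

p, k, m = 0.3, 4, 2

def pmf(k):                  # P(N = k) = (1-p)^(k-1) p, for k >= 1
    return (1 - p) ** (k - 1) * p

def tail(m):                 # P(N > m) = (1-p)^m
    return (1 - p) ** m

lhs = pmf(k + m) / tail(m)   # P(N = k+m | N > m)
rhs = pmf(k)                 # P(N = k)
assert np.isclose(lhs, rhs)  # memoryless: the two agree
print(lhs, rhs)
```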


Summary

A Markov chain has two important properties: it is memoryless and time-invariant.

C-K Equation: Vt+n = Vt ∏^n.

When calculating the limiting probabilities of a Markov chain, first check whether the stationary state exists (i.e., whether the finite-state Markov chain is ergodic).

Use the two equations: V = V∏ and Σi πi = 1.

The geometric r.v. is the only discrete distribution that satisfies the memoryless property.

Questions?


Extra


1. Is it a Markov chain?
2. What kind of rule change makes it a Markov chain?
3. What are the states then?