STOCHASTIC MODELS
LECTURE 1 PART II: MARKOV CHAINS
Nan Chen
MSc Program in Financial Engineering
The Chinese University of Hong Kong (Shenzhen)
Sept. 16, 2015




Outline
1. Long-Term Behavior of Markov Chains
2. Mean Time Spent in Transient States
3. Branching Processes


1.5 LONG-RUN PROPORTIONS AND LIMITING PROBABILITIES


Long-Run Proportions of MC

• In this section, we will study the long-term behavior of Markov chains.

• Consider a Markov chain {X_n, n ≥ 0}. Let π_j denote the long-run proportion of time that the Markov chain is in state j, i.e.,

π_j = lim_{n→∞} (1/n) Σ_{m=1}^{n} 1{X_m = j}.


Long-Run Proportions of MC (Continued)
• A simple fact is that, if a state j is transient, the corresponding π_j = 0.
• For recurrent states, we have the following result: Let

N_j = min{n ≥ 1 : X_n = j},

the number of transitions until the Markov chain makes a transition into the state j. Denote m_j = E[N_j | X_0 = j] to be its expectation.


Long-Run Proportions of MC (Continued)
• Theorem: If the Markov chain is irreducible and recurrent, then for any initial state,

π_j = 1/m_j.


Positive Recurrent and Null Recurrent

• Definition: We say a state j is positive recurrent if m_j < ∞, and say that it is null recurrent if m_j = ∞.

• It is obvious from the previous theorem that, if state j is positive recurrent, we have π_j > 0.


How to Determine π_j?

• Theorem: Consider an irreducible Markov chain. If the chain is also positive recurrent, then the long-run proportions {π_j} are the unique solution of the equations:

π_j = Σ_i π_i P_{ij} for all j,  Σ_j π_j = 1.

If there is no solution of the preceding linear equations, then the chain is either transient or null recurrent, and all π_j = 0.


Example I: Rainy Days in Shenzhen

• Assume that in Shenzhen, if it rains today, then it will rain tomorrow with prob. 60%; and if it does not rain today, then it will rain tomorrow with prob. 40%. What is the average proportion of rainy days in Shenzhen?


Example I: Rainy Days in Shenzhen (Solution)

• Modeling the problem as a two-state Markov chain with states R (rain) and N (no rain):

P_{RR} = 0.6, P_{RN} = 0.4, P_{NR} = 0.4, P_{NN} = 0.6.

• Let π_R and π_N be the long-run proportions of rainy and no-rain days. We have

π_R = 0.6 π_R + 0.4 π_N,  π_R + π_N = 1.

Solving gives π_R = π_N = 1/2: on average, half of the days are rainy.
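The long-run proportions can also be checked numerically. Below is a minimal sketch (not part of the original slides; NumPy assumed) that solves the stationarity equations πP = π together with the normalization Σ_j π_j = 1:

```python
import numpy as np

# Transition matrix of the rain chain: state 0 = rain, state 1 = no rain
P = np.array([[0.6, 0.4],
              [0.4, 0.6]])

# pi P = pi  <=>  (P^T - I) pi = 0; stack a row of ones for sum(pi) = 1
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # -> [0.5 0.5]
```

The least-squares solve handles the fact that the stacked system has more equations than unknowns (one stationarity equation is redundant).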


Example II: A Model of Class Mobility

• A problem of interest to sociologists is to determine the proportion of a society that holds upper-, middle-, and lower-class occupations.

• Let us consider the transitions between social classes of the successive generations in a family. Assume that the occupation of a child depends only on his or her parent’s occupation.


Example II: A Model of Class Mobility (Continued)
• The transition matrix of this social mobility is given by (states 1 = upper, 2 = middle, 3 = lower)

        1     2     3
  1 [ 0.45  0.48  0.07 ]
  2 [ 0.05  0.70  0.25 ]
  3 [ 0.01  0.50  0.49 ]

That is, for instance, the child of a middle-class worker will attain an upper-class occupation with prob. 5%, and will move down to a lower-class occupation with prob. 25%.


Example II: A Model of Class Mobility (Continued)

• The long-run proportions thus satisfy

π_1 = 0.45π_1 + 0.05π_2 + 0.01π_3,
π_2 = 0.48π_1 + 0.70π_2 + 0.50π_3,
π_3 = 0.07π_1 + 0.25π_2 + 0.49π_3,
π_1 + π_2 + π_3 = 1.

Hence, π_1 ≈ 0.06, π_2 ≈ 0.62, π_3 ≈ 0.31.
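A quick numerical check (a sketch, not from the original slides; it assumes the three-class transition matrix as in Ross's class-mobility example) computes the stationary distribution as the left eigenvector of P for eigenvalue 1:

```python
import numpy as np

# Social-mobility transition matrix (1 = upper, 2 = middle, 3 = lower)
P = np.array([[0.45, 0.48, 0.07],
              [0.05, 0.70, 0.25],
              [0.01, 0.50, 0.49]])

# Stationary distribution = left eigenvector of P for eigenvalue 1,
# i.e., the eigenvector of P^T with eigenvalue closest to 1.
vals, vecs = np.linalg.eig(P.T)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = v / v.sum()
print(pi.round(3))  # approximately [0.062 0.623 0.314]
```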


Stationary Distribution of MC

• For a Markov chain, any set {π_j} satisfying π_j ≥ 0, Σ_j π_j = 1, and π_j = Σ_i π_i P_{ij} for all j is called a stationary probability distribution of the Markov chain.
• Theorem: If the Markov chain starts with an initial distribution {π_j}, i.e., P(X_0 = j) = π_j for all j, then

P(X_n = j) = π_j for all states j and all n ≥ 1.


Limiting Probabilities

• Reconsider Example I (Rainy days in Shenzhen):

Please calculate the probabilities that it will rain 10 days, 100 days, and 1,000 days from now, given that it does not rain today.


Limiting Probabilities (Continued)

• That transforms to the following problem: Given

P = [ 0.6  0.4
      0.4  0.6 ]   (state 1 = rain, state 2 = no rain),

what are P^{10}_{21}, P^{100}_{21}, and P^{1000}_{21}?


Limiting Probabilities (Continued)

• With the help of MATLAB, we have P^{10}_{21} ≈ 0.5000, P^{100}_{21} ≈ 0.5000, and P^{1000}_{21} ≈ 0.5000. (In fact, P^n_{21} = 0.5 − 0.5(0.2)^n, so the convergence to 0.5 is very fast.)
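The same matrix powers can be computed with a few lines of Python instead of MATLAB (a sketch, not from the original slides):

```python
import numpy as np

# Rain chain: row/column 0 = rain, 1 = no rain
P = np.array([[0.6, 0.4],
              [0.4, 0.6]])

# n-step transition probability P^n[1, 0] = P(rain in n days | no rain today)
for n in (10, 100, 1000):
    Pn = np.linalg.matrix_power(P, n)
    print(n, Pn[1, 0])  # -> 0.5 (to many decimal places) for each n
```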


Limiting Probabilities (Continued)

• Observations:
– As n → ∞, P^n_{ij} converges;
– The limit of P^n_{ij} does not depend on the initial state i;
– The limits coincide with the stationary distribution of the Markov chain.


Limiting Probabilities and Stationary Distribution
• Theorem: In a positive recurrent, aperiodic Markov chain, we have

lim_{n→∞} P^n_{ij} = π_j for all i and j,

where {π_j} is the stationary distribution of the chain.

In words, we say that a positive recurrent, aperiodic Markov chain will reach an equilibrium (a steady state) in the long run.


Periodic vs. Aperiodic

• The requirement of aperiodicity in the last theorem turns out to be essential.

• We say that a Markov chain is periodic with period d > 1 if, starting from any state, it can return to that state only in a multiple of d steps. Otherwise, we say that it is aperiodic.

• Example: A Markov chain with period 3.


Summary

• In a positive recurrent, aperiodic Markov chain, the following three concepts are equivalent:
– Long-run proportions;
– Stationary probability distribution;
– Long-term limits of transition probabilities.


The Ergodic Theorem

• The following result is quite useful.
Theorem: Let {X_n, n ≥ 0} be an irreducible, positive recurrent Markov chain with stationary distribution {π_j}, and let f be a bounded function on the state space. Then, with probability 1,

lim_{N→∞} (1/N) Σ_{n=1}^{N} f(X_n) = Σ_j f(j) π_j.
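The ergodic theorem is easy to see in simulation. The sketch below (not from the original slides; seed and sample size are arbitrary choices) runs the Example I rain chain and compares the time average of f = 1{rain} with the stationary value π_R = 0.5:

```python
import numpy as np

# Monte Carlo check of the ergodic theorem on the two-state rain chain
P = np.array([[0.6, 0.4],
              [0.4, 0.6]])
rng = np.random.default_rng(0)

x, visits, N = 0, 0, 200_000
for _ in range(N):
    x = rng.choice(2, p=P[x])   # one transition of the chain
    visits += (x == 0)          # f = indicator of state 0 (rain)

print(visits / N)  # time average, close to pi_R = 0.5
```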


Example III: Bonus-Malus Automobile Insurance System
• In most countries of Europe and Asia, automobile insurance premiums are determined by use of a Bonus-Malus (Latin for Good-Bad) system.

• Each policyholder is assigned a positive integer valued state and the annual premium is a function of this state.

• Lower-numbered states correspond to fewer claims in the past, and result in lower annual premiums.


Example III (Continued)

• A policyholder’s state changes from year to year in response to the number of claims that he/she made in the past year.

• Consider a hypothetical Bonus-Malus system having 4 states.

State   Annual Premium ($)
  1          200
  2          250
  3          400
  4          600


Example III (Continued)

• According to historical data (with the annual number of claims taken to be Poisson with mean 1/2, as in Ross), suppose that the transition probabilities between states are given by

        1       2       3       4
  1 [ 0.6065  0.3033  0.0758  0.0144 ]
  2 [ 0.6065    0     0.3033  0.0902 ]
  3 [   0     0.6065    0     0.3935 ]
  4 [   0       0     0.6065  0.3935 ]

• Find the average annual premium paid by a typical policyholder.


Example III (Solution)

• By the ergodic theorem, we need to compute the corresponding stationary distribution first. Solving π = πP with Σ_j π_j = 1 gives

π_1 ≈ 0.3692, π_2 ≈ 0.2396, π_3 ≈ 0.2103, π_4 ≈ 0.1809.

• Therefore, the average annual premium paid is

200π_1 + 250π_2 + 400π_3 + 600π_4 ≈ $326.38.
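The computation can be reproduced in a few lines (a sketch, not from the original slides; it assumes the Poisson(1/2) Bonus-Malus transition matrix as in Ross):

```python
import numpy as np

# Bonus-Malus transition matrix over premium states 1..4
P = np.array([[0.6065, 0.3033, 0.0758, 0.0144],
              [0.6065, 0.0,    0.3033, 0.0902],
              [0.0,    0.6065, 0.0,    0.3935],
              [0.0,    0.0,    0.6065, 0.3935]])
premium = np.array([200.0, 250.0, 400.0, 600.0])

# Stationary distribution: pi P = pi together with sum(pi) = 1
A = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi.round(4))   # approx [0.3692 0.2396 0.2103 0.1809]
print(pi @ premium)  # average annual premium, approx 326.39
```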


1.6 MEAN TIME SPENT IN TRANSIENT STATES


Mean Time Spent in Transient States

• Now, let us turn to transient states. We know that the chain will leave its transient states eventually. Thus, it is an interesting problem to study how long a Markov chain will spend in such states.

• Below we use an example to illustrate the idea. Please refer to Ross Ch. 4.6 (p. 231-234) for more theoretical discussion.


Example IV: Gambler’s Ruin Problem

• Consider the gambler's ruin problem. Suppose that a gambler starts with 3 dollars in the pocket and decides to exit the game once he loses all the money or reaches $7. Assume that the gambler wins each bet with probability 40%.

• What is the expected amount of time the gambler has 2 dollars?


Example IV: Gambler’s Ruin Problem (Continued)

• This is a Markov chain on the states 0, 1, …, 7. States 1, …, 6 are transient.

• Let s_{ij} denote the expected number of time periods the gambler has j dollars, given that he starts with i dollars. Then, conditioning on the initial transition, we have, for example,

s_{32} = 0.4 s_{42} + 0.6 s_{22}.


Example IV: Gambler’s Ruin Problem (Continued)

• In general,

s_{ij} = δ_{ij} + Σ_{k=1}^{6} P_{ik} s_{kj},

for i, j ∈ {1, …, 6}, where δ_{ij} = 1 if i = j and 0 otherwise.


Example IV: Gambler’s Ruin Problem (Continued)

• Putting the above equations in a matrix form, they read S = I + P_T S, hence

S = (I − P_T)^{−1},

where S = (s_{ij}), I is the 6×6 identity matrix, and P_T, the transition matrix restricted to the transient states 1, …, 6, is given by

        1    2    3    4    5    6
  1 [   0   0.4   0    0    0    0  ]
  2 [  0.6   0   0.4   0    0    0  ]
  3 [   0   0.6   0   0.4   0    0  ]
  4 [   0    0   0.6   0   0.4   0  ]
  5 [   0    0    0   0.6   0   0.4 ]
  6 [   0    0    0    0   0.6   0  ]


Example IV: Gambler’s Ruin Problem (Solution)

• We can invert the matrix I − P_T to solve for S:

S = (I − P_T)^{−1}.

In particular, s_{32} ≈ 2.37: starting with $3, the gambler expects to hold exactly $2 during about 2.37 time periods.
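The inversion is a one-liner in practice. Below is a sketch (not from the original slides) that builds P_T for the win probability 0.4 and reads off s_{32}:

```python
import numpy as np

# Transient-state transition matrix P_T for states 1..6 (win prob 0.4)
n = 6
PT = np.zeros((n, n))
for i in range(n):
    if i + 1 < n:
        PT[i, i + 1] = 0.4  # win a dollar
    if i - 1 >= 0:
        PT[i, i - 1] = 0.6  # lose a dollar

# S[i, j] corresponds to s_{i+1, j+1} (0-based indexing)
S = np.linalg.inv(np.eye(n) - PT)

print(S[2, 1])  # s_32: expected periods with $2 starting from $3, approx 2.37
```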


1.7 BRANCHING PROCESSES


Branching Processes

• In this section, we consider a special class of Markov chains, known as branching processes. They have a wide variety of applications in the biological, sociological, and engineering sciences.


Origin of the Problem

• In the 19th century, two British scientists, Francis Galton and Henry Watson, found that many aristocratic surnames in Britain were becoming extinct.

• They attributed this phenomenon to the patrilineal rule of inheritance (meaning the surname is passed from father to son).


Origin of the Problem

• An extinct family tree (figure): three generations (Generation 0, 1, 2) of a Chen family, with two male and three female members in total; by Generation 2 the male line produces only daughters, so the surname dies out.


Mathematical Formulation

• Suppose that each individual will, by the end of his/her lifetime, have produced j new offspring with probability p_j, j ≥ 0, independently of the numbers produced by other individuals.

• Suppose that p_j < 1 for all j ≥ 0.
• Let X_n be the size of the nth generation. It is easy to verify that {X_n, n ≥ 0} is a Markov chain having as its state space the set of nonnegative integers. (Exercise: please write down the transition probabilities.)


Some Simple Facts of Branching Processes
• 0 is a recurrent state, since P_{00} = 1.
• If p_0 > 0, all the other states are transient. This leads to an important conclusion: if p_0 > 0, then the population will either go extinct or explode up to infinity.


Population Expectation

• Assume that X_0 = 1; that is, initially there is a single individual present. We calculate E[X_n].

• We may write

X_n = Σ_{i=1}^{X_{n−1}} Z_i,

where Z_i represents the number of offspring of the ith individual of the (n−1)st generation.


Population Expectation (Continued)

• Let

μ = Σ_{j=0}^{∞} j p_j,

the average offspring number of each individual. We have

E[X_n] = E[E[X_n | X_{n−1}]] = μ E[X_{n−1}].

Therefore, E[X_n] = μ^n.

– When μ < 1, the expected number of individuals will converge to 0 as n → ∞.
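The identity E[X_n] = μ^n can be checked by simulation. The sketch below (not from the original slides) uses a hypothetical subcritical example with Poisson(0.8) offspring, so μ = 0.8:

```python
import numpy as np

# Monte Carlo check of E[X_n] = mu^n for a branching process
# with Poisson(0.8) offspring (hypothetical example, mu = 0.8 < 1)
rng = np.random.default_rng(0)
mu, n_gen, trials = 0.8, 5, 200_000

total = 0
for _ in range(trials):
    x = 1                                            # X_0 = 1
    for _ in range(n_gen):
        x = rng.poisson(mu, size=x).sum() if x > 0 else 0
    total += x

print(total / trials, mu ** n_gen)  # both approx 0.328
```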


Extinction Probability

• Denote by π_0 the probability that a family ultimately dies out (under the assumption X_0 = 1).

• An equation determining π_0 may be derived by conditioning on the number of offspring of the initial individual:

π_0 = Σ_{j=0}^{∞} π_0^j p_j.


Extinction Probability (Continued)

• Define a function

φ(s) = Σ_{j=0}^{∞} s^j p_j, 0 ≤ s ≤ 1.

It satisfies
– φ(1) = 1 and φ(0) = p_0;
– φ is increasing and convex on [0, 1], with φ′(1) = μ.


Extinction Probability (Continued)

• The problem is transformed to finding a solution to the following equation:

φ(s) = s.

• Two scenarios:
– When μ ≤ 1, the above equation has only one solution on [0, 1], namely s = 1. That is, the family will die out with probability 1.


Extinction Probability (Continued)

• (Continued)
– When μ > 1, the above equation has two solutions on [0, 1]: one solution is still 1, and the other is strictly less than 1.

We can show that the extinction probability π_0 should be equal to the smaller solution.


Example V: Branching Processes

• Suppose that the individuals in a family have the following offspring distribution. What is the extinction probability of this family?
–
–
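The slide's distribution is not legible here, so the sketch below (not from the original slides) uses a hypothetical supercritical example, p_0 = 1/4, p_1 = 1/4, p_2 = 1/2 (μ = 1.25 > 1), and finds the extinction probability by iterating s ← φ(s) from s = 0, which converges to the smallest root of φ(s) = s on [0, 1]:

```python
# Extinction probability via fixed-point iteration on phi(s) = s.
# Hypothetical offspring distribution: p0 = 1/4, p1 = 1/4, p2 = 1/2
p = {0: 0.25, 1: 0.25, 2: 0.5}

def phi(s):
    """Probability generating function phi(s) = sum_j p_j s^j."""
    return sum(pj * s**j for j, pj in p.items())

# Monotone iteration from 0 converges to the smallest fixed point in [0, 1]
s = 0.0
for _ in range(200):
    s = phi(s)

print(s)  # -> 0.5 (phi(s) = s has roots s = 1/2 and s = 1; the smaller wins)
```

For this distribution the equation 1/4 + s/4 + s²/2 = s factors as 2s² − 3s + 1 = 0, with roots 1/2 and 1, confirming the iteration.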


Homework Assignments

• Read Ross Chapter 4.4, 4.6, and 4.7.
• Supplementary reading: Ross Chapter 4.8-4.11.
• Answer Questions:
– Exercises 18, 20 (Page 263, Ross)
– Exercises 23, 24 (Page 264, Ross)
– Exercise 37 (Page 266, Ross)
– (Optional, Extra Bonus) Exercise 70 (Page 272, Ross)