STOCHASTIC MODELS
LECTURE 1 PART II: MARKOV CHAINS

Nan Chen
MSc Program in Financial Engineering
The Chinese University of Hong Kong (Shenzhen)
Sept. 16, 2015
Outline
1. Long-Term Behavior of Markov Chains
2. Mean Time Spent in Transient States
3. Branching Processes
1.5 LONG-RUN PROPORTIONS AND LIMITING PROBABILITIES
Long-Run Proportions of MC
• In this section, we will study the long-term behavior of Markov chains.
• Consider a Markov chain $\{X_n, n \ge 0\}$. Let $\pi_j$ denote the long-run proportion of time that the Markov chain is in state $j$, i.e.,
$$\pi_j = \lim_{n \to \infty} \frac{1}{n} \sum_{m=1}^{n} \mathbf{1}\{X_m = j\}.$$
Long-Run Proportions of MC (Continued)
• A simple fact is that, if a state $j$ is transient, the corresponding $\pi_j = 0$.
• For recurrent states, we have the following result. Let
$$T_j = \min\{n \ge 1 : X_n = j\},$$
the number of transitions until the Markov chain makes a transition into state $j$. Denote $m_j$ to be its expectation, i.e., $m_j = E[T_j \mid X_0 = j]$.
Long-Run Proportions of MC (Continued)
• Theorem: If the Markov chain is irreducible and recurrent, then for any initial state,
$$\pi_j = \frac{1}{m_j}.$$
Positive Recurrent and Null Recurrent
• Definition: we say a state $j$ is positive recurrent if $m_j < \infty$, and say that it is null recurrent if $m_j = \infty$.
• It is obvious from the previous theorem that, if state $j$ is positive recurrent, we have $\pi_j > 0$.
How to Determine $\pi_j$?
• Theorem: Consider an irreducible Markov chain. If the chain is also positive recurrent, then the long-run proportions $\{\pi_j\}$ are the unique solution of the equations:
$$\pi_j = \sum_i \pi_i P_{ij} \quad \text{for all } j, \qquad \sum_j \pi_j = 1.$$
If there is no solution of the preceding linear equations, then the chain is either transient or null recurrent and all $\pi_j = 0$.
Example I: Rainy Days in Shenzhen
• Assume that in Shenzhen, if it rains today, then it will rain tomorrow with prob. 60%; and if it does not rain today, then it will rain tomorrow with prob. 40%. What is the average proportion of rainy days in Shenzhen?
Example I: Rainy Days in Shenzhen (Solution)
• Modeling the problem as a two-state Markov chain, with states R (rain) and N (no rain) and transition matrix
$$P = \begin{pmatrix} 0.6 & 0.4 \\ 0.4 & 0.6 \end{pmatrix}.$$
• Let $\pi_R$ and $\pi_N$ be the long-run proportions of rainy and no-rain days. We have
$$\pi_R = 0.6\,\pi_R + 0.4\,\pi_N, \qquad \pi_R + \pi_N = 1,$$
which gives $\pi_R = \pi_N = 1/2$: in the long run, half of the days in Shenzhen are rainy.
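• A quick numerical check of this result, as a minimal Python/NumPy sketch (the matrix below restates the transition probabilities from the example):

```python
import numpy as np

# Two-state rain chain: states ordered (rain, no rain); rows sum to 1.
P = np.array([[0.6, 0.4],
              [0.4, 0.6]])

# Stationary equations: pi P = pi together with pi_R + pi_N = 1.
# Drop one redundant balance equation and use the normalization row.
A = np.vstack([(P.T - np.eye(2))[:-1], np.ones((1, 2))])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)  # -> [0.5 0.5], i.e., half of the days are rainy
```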
Example II: A Model of Class Mobility
• A problem of interest to sociologists is to determine the proportion of a society with upper-, middle-, and lower-class occupations.
• Let us consider the transitions between social classes of the successive generations in a family. Assume that the occupation of a child depends only on his or her parent’s occupation.
Example II: A Model of Class Mobility (Continued)
• The transition matrix of this social mobility (rows and columns ordered upper, middle, lower) is given by
$$P = \begin{pmatrix} 0.45 & 0.48 & 0.07 \\ 0.05 & 0.70 & 0.25 \\ 0.01 & 0.50 & 0.49 \end{pmatrix}.$$
That is, for instance, the child of a middle-class worker will attain an upper-class occupation with prob. 5%, and will move down to a lower-class occupation with prob. 25%.
Example II: A Model of Class Mobility (Continued)
• The long-run proportions $(\pi_u, \pi_m, \pi_l)$ thus satisfy
$$\pi_u = 0.45\,\pi_u + 0.05\,\pi_m + 0.01\,\pi_l,$$
$$\pi_m = 0.48\,\pi_u + 0.70\,\pi_m + 0.50\,\pi_l,$$
$$\pi_l = 0.07\,\pi_u + 0.25\,\pi_m + 0.49\,\pi_l,$$
$$\pi_u + \pi_m + \pi_l = 1.$$
Hence, $\pi_u \approx 0.062$, $\pi_m \approx 0.623$, $\pi_l \approx 0.314$: about 62% of the society holds middle-class occupations in the long run.
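• The same numbers can also be read off as the left eigenvector of $P$ for eigenvalue 1; a minimal Python/NumPy sketch (the matrix restates the one above):

```python
import numpy as np

# Class-mobility transition matrix, states ordered (upper, middle, lower).
P = np.array([[0.45, 0.48, 0.07],
              [0.05, 0.70, 0.25],
              [0.01, 0.50, 0.49]])

# The stationary distribution is the left eigenvector of P for
# eigenvalue 1, i.e., an eigenvector of P^T, normalized to sum to 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()
print(pi.round(3))  # -> approximately [0.062 0.623 0.314]
```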
Stationary Distribution of MC
• For a Markov chain, any set $\{\pi_j\}$ satisfying $\pi_j = \sum_i \pi_i P_{ij}$ for all $j$ and $\sum_j \pi_j = 1$ is called a stationary probability distribution
of the Markov chain.
• Theorem: If the Markov chain starts with the stationary distribution as its initial distribution, i.e., $P(X_0 = j) = \pi_j$, then
$$P(X_n = j) = \pi_j \quad \text{for all states } j \text{ and all } n \ge 1.$$
Limiting Probabilities
• Reconsider Example I (Rainy days in Shenzhen):
Please calculate the probabilities that it will rain in 10 days, in 100 days, and in 1,000 days, given that it does not rain today.
Limiting Probabilities (Continued)
• That transforms to the following problem: Given
$$P = \begin{pmatrix} 0.6 & 0.4 \\ 0.4 & 0.6 \end{pmatrix},$$
what are $P^{10}$, $P^{100}$, and $P^{1000}$?
Limiting Probabilities (Continued)
• With the help of MATLAB, we have
$$P^{10} \approx P^{100} \approx P^{1000} \approx \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix},$$
so the probability of rain in 10, 100, or 1,000 days is approximately 0.5, given no rain today.
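• The same computation in a minimal Python/NumPy sketch (standing in for the MATLAB session):

```python
import numpy as np

P = np.array([[0.6, 0.4],    # states ordered (rain, no rain)
              [0.4, 0.6]])

for n in (10, 100, 1000):
    Pn = np.linalg.matrix_power(P, n)
    # Entry [1, 0] is P(rain in n days | no rain today).
    print(n, Pn[1, 0])
# All three values are 0.5 to many decimal places: P^n has essentially
# converged to [[0.5, 0.5], [0.5, 0.5]] already at n = 10.
```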
Limiting Probabilities (Continued)
• Observations:
– As $n \to \infty$, $P^n$ converges;
– The limit of $(P^n)_{ij}$ does not depend on the initial state $i$;
– The limits coincide with the stationary distribution of the Markov chain.
Limiting Probabilities and Stationary Distribution
• Theorem: In a positive recurrent (aperiodic) Markov chain, we have
$$\lim_{n \to \infty} (P^n)_{ij} = \pi_j,$$
where $\{\pi_j\}$ is the stationary distribution of the chain.
In words, we say that a positive recurrent aperiodic Markov chain will reach an equilibrium/a steady state in the long run.
Periodic vs. Aperiodic
• The requirement of aperiodicity in the last theorem turns out to be essential.
• We say that a Markov chain is periodic with period $d > 1$ if, starting from any state, it can only return to that state in a multiple of $d$ steps. Otherwise, we say that it is aperiodic.
• Example: a Markov chain with period 3, e.g., three states with $P_{12} = P_{23} = P_{31} = 1$, so every return to a state takes a multiple of 3 steps.
Summary
• In a positive recurrent aperiodic Markov chain, the following three concepts are equivalent:
– Long-run proportion;
– Stationary probability distribution;
– Long-term limits of transition probabilities.
The Ergodic Theorem
• The following result is quite useful:
Theorem: Let $\{X_n, n \ge 0\}$ be an irreducible positive recurrent Markov chain with stationary distribution $\{\pi_j\}$, and let $f$ be a bounded function on the state space. Then, with probability 1,
$$\lim_{N \to \infty} \frac{1}{N} \sum_{m=1}^{N} f(X_m) = \sum_j f(j)\,\pi_j.$$
Example III: Bonus-Malus Automobile Insurance System
• In most countries of Europe and Asia, automobile insurance premiums are determined by use of a Bonus-Malus (Latin for Good-Bad) system.
• Each policyholder is assigned a positive integer valued state and the annual premium is a function of this state.
• Lower numbered states correspond to fewer claims in the past, and result in lower annual premiums.
Example III (Continued)
• A policyholder’s state changes from year to year in response to the number of claims that he/she made in the past year.
• Consider a hypothetical Bonus-Malus system having 4 states.
State Annual Premium ($)
1 200
2 250
3 400
4 600
Example III (Continued)
• According to historical data, suppose that the transition probabilities between states are given by
$$P = \begin{pmatrix} 0.6065 & 0.3033 & 0.0758 & 0.0144 \\ 0.6065 & 0 & 0.3033 & 0.0902 \\ 0 & 0.6065 & 0 & 0.3935 \\ 0 & 0 & 0.6065 & 0.3935 \end{pmatrix}$$
(the values of Ross's hypothetical system, in which the annual number of claims is Poisson with mean 1/2).
• Find the average annual premium paid by a typical policyholder.
Example III (Solution)
• By the ergodic theorem, we need to compute the corresponding stationary distribution first. Solving $\pi = \pi P$ with $\sum_j \pi_j = 1$ gives
$$\pi_1 \approx 0.3692, \quad \pi_2 \approx 0.2395, \quad \pi_3 \approx 0.2103, \quad \pi_4 \approx 0.1809.$$
• Therefore, the average annual premium paid is
$$200(0.3692) + 250(0.2395) + 400(0.2103) + 600(0.1809) \approx \$326.38.$$
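• A minimal Python/NumPy sketch of this computation (the matrix and premiums restate the ones above):

```python
import numpy as np

# Bonus-Malus transition matrix (states 1-4) and the premium table.
P = np.array([[0.6065, 0.3033, 0.0758, 0.0144],
              [0.6065, 0.0,    0.3033, 0.0902],
              [0.0,    0.6065, 0.0,    0.3935],
              [0.0,    0.0,    0.6065, 0.3935]])
premium = np.array([200.0, 250.0, 400.0, 600.0])

# Solve pi P = pi with the normalization sum(pi) = 1.
A = np.vstack([(P.T - np.eye(4))[:-1], np.ones((1, 4))])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

# Ergodic theorem: long-run average premium = sum_j premium[j] * pi[j].
print(pi.round(4))   # -> roughly [0.3692 0.2395 0.2103 0.1809]
print(premium @ pi)  # -> about 326.4 dollars per year
```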
1.6 MEAN TIME SPENT IN TRANSIENT STATES
Mean Time Spent in Transient States
• Now, let us turn to transient states. We know that the chain will leave transient states eventually. Thus, it will be an interesting problem to study how long a Markov chain will spend in such states.
• Below we use an example to illustrate the idea. Please refer to Ross Ch. 4.6 (p. 231-234) for more theoretical discussion.
Example IV: Gambler’s Ruin Problem
• Consider the gambler’s ruin problem. Suppose that he starts with 3 dollars in his pocket and decides to exit the game if he loses all the money or reaches $7. Assume that the winning probability for the gambler in each round is 40%.
• What is the expected amount of time the gambler has 2 dollars?
Example IV: Gambler’s Ruin Problem (Continued)
• This is a Markov chain with states $0, 1, \dots, 7$. States $1, \dots, 6$ are transient.
• Let $s_{ij}$ denote the expected number of time periods the gambler has $j$ dollars, given that he starts with $i$ dollars. Then, conditioning on the initial transition, we have, for example,
$$s_{3,2} = 0.4\, s_{4,2} + 0.6\, s_{2,2}.$$
Example IV: Gambler’s Ruin Problem (Continued)
• In general,
$$s_{ij} = \delta_{ij} + \sum_{k=1}^{6} P_{ik}\, s_{kj}$$
for $i, j \in \{1, \dots, 6\}$, where $\delta_{ij} = 1$ if $i = j$ and $0$ otherwise.
Example IV: Gambler’s Ruin Problem (Continued)
• Putting the above equations in matrix form,
$$S = I + P_T S,$$
where $S = (s_{ij})$, $I$ is a $6 \times 6$ identity matrix, and $P_T$, the matrix of transition probabilities among the transient states, is given by
1 2 3 4 5 6
1 0 0.4 0 0 0 0
2 0.6 0 0.4 0 0 0
3 0 0.6 0 0.4 0 0
4 0 0 0.6 0 0.4 0
5 0 0 0 0.6 0 0.4
6 0 0 0 0 0.6 0
Example IV: Gambler’s Ruin Problem (Solution)
• We can invert the matrix to solve for $S$:
$$S = (I - P_T)^{-1}.$$
In particular, $s_{3,2} \approx 2.3677$: starting with \$3, the gambler expects to hold exactly \$2 for about 2.37 time periods before the game ends.
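• A minimal Python/NumPy sketch of the inversion ($P_T$ is rebuilt from the random-walk structure above):

```python
import numpy as np

# Transient states 1..6 (index i-1): up one dollar w.p. 0.4,
# down one dollar w.p. 0.6; exits to 0 or 7 leave the matrix.
PT = np.zeros((6, 6))
for i in range(6):
    if i + 1 < 6:
        PT[i, i + 1] = 0.4
    if i - 1 >= 0:
        PT[i, i - 1] = 0.6

# Expected numbers of visits: S = (I - P_T)^{-1}.
S = np.linalg.inv(np.eye(6) - PT)

# s_{3,2}: expected periods with $2, starting from $3 (0-based indices).
print(S[2, 1])  # -> about 2.37
```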
1.7 BRANCHING PROCESSES
Branching Processes
• In this section, we consider a special example of Markov chains, known as branching processes. They have a wide variety of applications in the biological, sociological, and engineering sciences.
Origin of the Problem
• In the 19th century, the British scientists Francis Galton and Henry Watson observed that many aristocratic surnames in Britain were becoming extinct.
• They attributed this phenomenon to the patrilineal rule of inheritance (meaning the surname is passed from father to son).
Origin of the Problem
• An extinct family tree:
[Figure: a three-generation family tree of the Chen family (Generations 0 to 2); the final generation contains only daughters, so the surname dies out.]
Mathematical Formulation
• Suppose that each individual will, by the end of his/her lifetime, have produced $j$ new offspring with probability $P_j$, $j \ge 0$, independently of the numbers produced by other individuals.
• Suppose that $P_j < 1$ for all $j \ge 0$.
• Let $X_n$ be the size of the $n$th generation. It is easy to verify that $\{X_n, n \ge 0\}$ is a Markov chain having as its state space the set of nonnegative integers. (Exercise: please write down the transition probabilities.)
Some Simple Facts of Branching Processes
• 0 is a recurrent state, since $P_{00} = 1$ (once the population dies out, it stays extinct).
• If $P_0 > 0$, all the other states are transient. This leads to an important conclusion: if $P_0 > 0$, then the population will either die out or explode to infinity.
Population Expectation
• Assume that $X_0 = 1$, that is, initially there is a single individual present. We calculate $E[X_n]$.
• We may write
$$X_n = \sum_{i=1}^{X_{n-1}} Z_i,$$
where $Z_i$ represents the number of offspring of the $i$th individual of the $(n-1)$st generation.
Population Expectation (Continued)
• Let
$$\mu = \sum_{j=0}^{\infty} j P_j,$$
the average offspring number of each individual. Conditioning on $X_{n-1}$, we have
$$E[X_n] = \mu\, E[X_{n-1}].$$
Therefore, $E[X_n] = \mu^n$.
– When $\mu < 1$, the expected number of individuals will converge to 0 as $n \to \infty$.
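• A small simulation sketch in Python checking $E[X_n] = \mu^n$; the offspring distribution below is a hypothetical one chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical offspring distribution: P_0 = 0.25, P_1 = 0.4, P_2 = 0.35,
# so mu = 0(0.25) + 1(0.4) + 2(0.35) = 1.1.
probs = np.array([0.25, 0.40, 0.35])
mu = float(np.arange(3) @ probs)

def generation_size(n):
    """Simulate X_n of the branching process, starting from X_0 = 1."""
    x = 1
    for _ in range(n):
        if x == 0:
            break  # extinct: the population stays at 0
        # Each of the x individuals reproduces independently.
        x = int(rng.choice(3, size=x, p=probs).sum())
    return x

n, trials = 5, 20000
avg = np.mean([generation_size(n) for _ in range(trials)])
print(avg, mu ** n)  # the sample mean should be close to mu^5 = 1.61
```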
Extinction Probability
• Denote by $\pi_0$ the probability that a family ultimately dies out (under the assumption $X_0 = 1$).
• An equation determining $\pi_0$ may be derived by conditioning on the number of offspring of the initial individual:
$$\pi_0 = \sum_{j=0}^{\infty} \pi_0^{\,j} P_j.$$
Extinction Probability (Continued)
• Define a function
$$\phi(s) = \sum_{j=0}^{\infty} s^j P_j, \qquad 0 \le s \le 1.$$
It satisfies:
– $\phi(1) = 1$;
– $\phi$ is increasing and convex on $[0, 1]$;
– $\phi'(1) = \mu$.
Extinction Probability (Continued)
• The problem is transformed to finding a solution to the following equation:
$$\phi(s) = s.$$
• Two scenarios:
– When $\mu \le 1$, the above equation has only one solution in $[0, 1]$, namely $s = 1$. That is, the family will die out with probability 1.
Extinction Probability (Continued)
• (Continued)
– When $\mu > 1$, the above equation has two solutions in $[0, 1]$: one solution is still 1 and the other is strictly less than 1.
We can show that the extinction probability $\pi_0$ equals the smaller solution.
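• Numerically, the smaller root can be found by iterating $\pi \leftarrow \phi(\pi)$ starting from 0; a minimal Python sketch with a hypothetical offspring distribution:

```python
# Hypothetical offspring distribution with mu = 0.3 + 2(0.5) = 1.3 > 1.
probs = [0.2, 0.3, 0.5]  # P_0, P_1, P_2

def phi(s):
    """Generating function phi(s) = sum_j P_j s^j."""
    return sum(p * s**j for j, p in enumerate(probs))

# Iterating pi <- phi(pi) from 0 climbs monotonically to the smallest
# solution of phi(s) = s, i.e., the extinction probability.
pi0 = 0.0
for _ in range(200):
    pi0 = phi(pi0)
print(pi0)  # -> 0.4 (phi(s) = s reduces to 0.5 s^2 - 0.7 s + 0.2 = 0)
```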
Example V: Branching Processes
• Suppose that the individuals in a family have the following offspring distributions. What is the extinction probability of this family in each case?
– $P_0 = \tfrac{1}{2},\ P_1 = \tfrac{1}{4},\ P_2 = \tfrac{1}{4}$;
– $P_0 = \tfrac{1}{4},\ P_1 = \tfrac{1}{4},\ P_2 = \tfrac{1}{2}$.
Homework Assignments
• Read Ross Chapter 4.4, 4.6, and 4.7.
• Supplementary reading: Ross Chapter 4.8-4.11.
• Answer Questions:
– Exercises 18, 20 (Page 263, Ross)
– Exercises 23, 24 (Page 264, Ross)
– Exercise 37 (Page 266, Ross)
– (Optional, Extra Bonus) Exercise 70 (Page 272, Ross)