CS433 Modeling and Simulation
Lecture 07 – Part 01
Continuous Markov Chains
Dr. Anis Koubâa
http://10.2.230.10:4040/akoubaa/cs433/
14 Dec 2008
Al-Imam Mohammad Ibn Saud University
Goals for Today

Understand the Markov property in the continuous case
Understand the difference between continuous-time and discrete-time Markov Chains
Learn how to use Continuous Markov Chains for modelling stochastic processes
"Discrete Time" versus "Continuous Time"

[Figure: two timelines.] Discrete time: events occur at known points in time 0, 1, 2, 3, 4 (fixed time step, Δ = 1). Continuous time: events occur at any point in time s, u, v, t (variable durations τ1 = u − s, τ2 = v − u, τ3 = t − v).
Definition (Wikipedia): Continuous-Time Markov Chains

In probability theory, a Continuous-Time Markov Chain (CTMC) is a stochastic process { X(t) : t ≥ 0 } that satisfies the Markov property and takes values from a set called the state space.

The Markov property states that at any times s > t > 0, the conditional probability distribution of the process at time s, given the whole history of the process up to and including time t, depends only on the state of the process at time t.

In effect, the state of the process at time s is conditionally independent of the history of the process before time t, given the state of the process at time t.
Definition 1: Continuous-Time Markov Chains

A stochastic process {X(t), t ≥ 0} is a Continuous-Time Markov Chain (CTMC) if for all 0 ≤ s ≤ t and non-negative integers i, j, x(u), where 0 ≤ u < s:

Pr{X(t) = j | X(s) = i, X(u) = x(u), 0 ≤ u < s} = Pr{X(t) = j | X(s) = i} = p_ij(s, t)

In addition, if this probability is independent of s and t (it depends only on the difference t − s), then the CTMC has stationary transition probabilities:

p_ij(τ) = p_ij(t − s) = Pr{X(t) = j | X(s) = i} for all s

[Figure: timeline with the points u, s, t.] X(u) = x(u) is the past, X(s) = i is the present, X(t) = j is the future; the interval from s to t is the elapsed time duration.
Differences between Continuous-Time and Discrete-Time Markov Chains

                                          | Discrete Markov Chain                    | Continuous Markov Chain
Time                                      | t_k or k ∈ ℕ                             | s, t ∈ ℝ+
Transient transition probability          | P_ij(k), for the time interval [k, k+1]  | P_ij(s, t), for the time interval [s, t]
Stationary transition probability         | P_ij(1) = P_ij, in the time unit equal to 1 (fixed time duration) | P_ij(τ), for the time duration τ = t − s (dependent on the duration)
Transition probability to the same state  | P_ii can be different from 0             | P_ii(τ) = 0 for any τ
Definition 2: Continuous-Time Markov Chains

A stochastic process {X(t), t ≥ 0} is a Continuous-Time Markov Chain (CTMC) if:
The amount of time spent in state i before making a transition to a different state is exponentially distributed with rate parameter ν_i;
When the process leaves state i, it enters state j with a probability p_ij, where p_ii = 0 and Σ_j p_ij = 1;
All transitions and times are independent (in particular, the transition probability out of a state is independent of the time spent in the state).

Summary: the CTMC process moves from state to state according to a DTMC, and the time spent in each state is exponentially distributed. A minimal simulation sketch follows below.
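To make this concrete, here is a small Python sketch (my addition, not from the original slides) that simulates a CTMC sample path exactly as Definition 2 describes: draw an exponential holding time with parameter ν_i, then jump according to the embedded DTMC row. The rates and jump probabilities below are illustrative assumptions.

```python
import numpy as np

def simulate_ctmc(nu, P, x0, t_end, rng=None):
    """Simulate a CTMC path: exponential holding times (rate nu[i]),
    jumps drawn from the embedded DTMC row P[i]."""
    rng = rng or np.random.default_rng(0)
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        t += rng.exponential(1.0 / nu[x])   # holding time in state x ~ Exp(nu[x])
        if t >= t_end:
            break
        x = rng.choice(len(nu), p=P[x])     # jump to a different state (P[x, x] = 0)
        path.append((t, x))
    return path

# Illustrative two-state example (rates chosen arbitrarily)
nu = np.array([1.0, 2.0])                  # leaving rates nu_i
P = np.array([[0.0, 1.0],                  # embedded jump chain, p_ii = 0
              [1.0, 0.0]])
print(simulate_ctmc(nu, P, x0=0, t_end=5.0)[:5])
```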
Differences between DISCRETE and CONTINUOUS

Summary: the CTMC process moves from state to state according to a DTMC, and the time spent in each state is exponentially distributed.

[Figure: sample path of a CTMC process (jumps at random times) versus a DTMC process (jumps at each time slot).]
Five Minutes Break

You are free to discuss the previous slides with your classmates, to take a short rest, or to ask questions.
Chapman-Kolmogorov: Transition Function

Define the transition function (the analogue of the transition probability in a DTMC):

p_ij(s, t) = Pr{X(t) = j | X(s) = i}, s ≤ t

Using the Markov (memoryless) property, the continuous-time analogue of the Chapman-Kolmogorov equation is, for s ≤ u ≤ t:

p_ij(s, t) = Σ_r Pr{X(t) = j, X(u) = r | X(s) = i}
           = Σ_r Pr{X(t) = j | X(u) = r} Pr{X(u) = r | X(s) = i}
           = Σ_r p_ir(s, u) p_rj(u, t)
τ-Time Transition Probability: Transition Matrix

Define the transition matrix between s and t as H(s, t) = [p_ij(s, t)], for i, j = 1, 2, …; then

H(s, t) = H(s, u) H(u, t), s ≤ u ≤ t

Note that H(s, s) = I.

In the homogeneous case, the CTMC has stationary transition probabilities, and p_ij(τ) = p_ij(t − s) = Pr{X(t) = j | X(s) = i} for all s is called the τ-time transition probability.

p_ij(τ) is the probability that the transition from i to j occurs during a time interval of length τ. We must have Σ_j p_ij(τ) = 1.
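As a quick numerical illustration (my addition, anticipating the result P(τ) = e^{Qτ} derived later in this lecture), the factorization H(s, t) = H(s, u) H(u, t) can be checked directly with the matrix exponential in the homogeneous case; the 2-state Q below is an arbitrary example.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary 2-state rate matrix (rows sum to 0)
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])

s, u, t = 0.0, 0.7, 1.5
H = lambda a, b: expm(Q * (b - a))   # homogeneous case: H(a, b) = e^{Q(b-a)}

# Chapman-Kolmogorov: H(s,t) = H(s,u) H(u,t), and H(s,s) = I
assert np.allclose(H(s, t), H(s, u) @ H(u, t))
assert np.allclose(H(s, s), np.eye(2))
print(H(s, t))
```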
Transition Rate Matrix

In matrix form, the Chapman-Kolmogorov equation for s ≤ t ≤ t + Δt:

H(s, t + Δt) = H(s, t) H(t, t + Δt)

In scalar form, the Chapman-Kolmogorov equation for s ≤ t ≤ t + Δt:

p_ij(s, t + Δt) = Σ_k p_ik(s, t) p_kj(t, t + Δt)
Transition Rate Matrix

Consider the Chapman-Kolmogorov equation for s ≤ t ≤ t + Δt:

H(s, t + Δt) = H(s, t) H(t, t + Δt)

Subtracting H(s, t) from both sides and dividing by Δt:

[H(s, t + Δt) − H(s, t)] / Δt = H(s, t) [H(t, t + Δt) − I] / Δt

Taking the limit as Δt → 0:

∂H(s, t) / ∂t = H(s, t) Q(t)

where the Transition Rate Matrix Q(t) = [q_ij(t)] (the continuous-time counterpart of the one-step transition matrix) is given by:

Q(t) = lim_{Δt→0} [H(t, t + Δt) − I] / Δt

q_ij(t) is the rate at which the chain enters state j from state i at time t.
Time Homogeneous Case
Homogeneous Case

In the homogeneous case, the transition functions do not depend on s and t individually, but only on the difference τ = t − s; thus

p_ij(s, t) = p_ij(t − s) = p_ij(τ)

It follows that H(s, t) = H(t − s) = P(τ), and the transition rate matrix becomes a constant, Q = [q_ij]:

Q(t) = lim_{Δt→0} [H(t, t + Δt) − I] / Δt = lim_{Δt→0} [H(Δt) − I] / Δt = Q

For i ≠ j: q_ij = lim_{Δt→0} p_ij(Δt) / Δt = p′_ij(0) ≥ 0
For i = j: q_ii = lim_{Δt→0} [p_ii(Δt) − 1] / Δt = p′_ii(0) ≤ 0

q_ij is the rate at which the chain enters state j from state i; ν_i = −q_ii is the rate at which the chain leaves state i.
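A small numerical sanity check (my addition): for a constant Q, H(Δt) = e^{QΔt}, so the finite difference [H(Δt) − I]/Δt should approach Q as Δt shrinks. Q is the same arbitrary example as before.

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])   # arbitrary example rate matrix

for dt in (1e-1, 1e-3, 1e-5):
    approx = (expm(Q * dt) - np.eye(2)) / dt   # [H(dt) - I] / dt
    print(dt, np.max(np.abs(approx - Q)))      # error shrinks with dt
```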
Homogeneous Case

[Figure: comparison of transition diagrams.]

Discrete Markov Chain: states i and j are linked by transition probabilities P_ij and P_ji. P_ij is the transition probability; the transition time is deterministic (one slot).

Continuous Markov Chain: states i, j, and k are linked by transition rates q_ij = ν_i · P_ij, q_ik = ν_i · P_ik, q_ji = ν_j · P_ji, q_ki = ν_k · P_ki. P_ij is the transition probability, q_ij is the input rate from i to j, and ν_i is the output rate of state i; the transition time is random.
Continuous Markov Chain

The transition probabilities P_ij of the embedded DTMC and the transition rates q_ij are related by:

P_ii = 0 and Σ_j P_ij = 1
q_ij = ν_i P_ij for i ≠ j
q_ii = −ν_i = −Σ_{j≠i} q_ij, so that Σ_j q_ij = 0

Conversely, the jump probabilities can be recovered from the rates:

P_ij = Pr{X_{k+1} = j | X_k = i} = q_ij / (−q_ii) = q_ij / Σ_{k≠i} q_ik, for i ≠ j

P_ij is the transition probability, q_ij is the input rate of state j from state i, and ν_i is the output rate from state i toward all other neighboring states; the transition time is random.
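These relations translate directly into code. Below is a small helper (my sketch, not from the slides) that extracts the leaving rates ν_i and the embedded jump matrix P from a rate matrix Q; the 3-state Q is an invented example.

```python
import numpy as np

def jump_chain(Q):
    """Split a rate matrix Q into leaving rates nu_i = -q_ii and
    the embedded DTMC: P_ij = q_ij / nu_i for i != j, P_ii = 0."""
    Q = np.asarray(Q, dtype=float)
    nu = -np.diag(Q)                 # nu_i = -q_ii
    P = Q / nu[:, None]              # divide each row by nu_i
    np.fill_diagonal(P, 0.0)         # P_ii = 0
    return nu, P

Q = np.array([[-3.0,  2.0,  1.0],    # invented 3-state example
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])
nu, P = jump_chain(Q)
print(nu)                            # [3. 4. 4.]
print(P)                             # rows sum to 1, zero diagonal
```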
Homogeneous Case

Thus, if P(τ) is the transition matrix after a time period τ, it satisfies

P′(τ) = P(τ) Q, with P(0) = I, hence P(τ) = e^{Qτ}

p′_ij(0) = q_ij is the instantaneous transition rate from i to j, and ν_i = −q_ii = −p′_ii(0) ≥ 0.

In scalar form, it is possible to write:

Forward equation: p′_ij(τ) = Σ_k p_ik(τ) q_kj = Σ_{k≠j} p_ik(τ) q_kj − ν_j p_ij(τ)

Backward equation: p′_ij(τ) = Σ_k q_ik p_kj(τ) = Σ_{k≠i} q_ik p_kj(τ) − ν_i p_ij(τ)
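Since P(τ) = e^{Qτ}, the forward equation can be verified numerically with scipy's matrix exponential (my sketch; Q is the same invented 3-state example as above).

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])   # invented example rate matrix

tau, h = 0.8, 1e-6
P = expm(Q * tau)                    # P(tau) = e^{Q tau}
dP = (expm(Q * (tau + h)) - P) / h   # numerical derivative P'(tau)

assert np.allclose(P.sum(axis=1), 1.0)    # each row of P(tau) is a distribution
assert np.allclose(dP, P @ Q, atol=1e-4)  # forward equation: P' = P Q
print(P)
```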
Two Minutes Break

You are free to discuss the previous slides with your classmates, to take a short rest, or to ask questions.

Next: State Holding Time
State Holding and Transition Time

In a CTMC, the process makes a transition from one state to another after it has spent an amount of time in the state it starts from. This amount of time is defined as the state holding time.

Theorem (State Holding Time of a CTMC): The state holding time T_i := inf {t : X(t) ≠ i | X(0) = i} in a state i of a Continuous-Time Markov Chain
satisfies the memoryless property:
Pr{T_i > s + t | T_i > s} = Pr{T_i > t} for each state i and all s, t ≥ 0
and is exponentially distributed with parameter ν_i:
Pr{T_i ≤ τ} = 1 − exp(−ν_i τ)

Theorem (Transition Time in a CTMC): The time T_ij := inf {t : X(t) = j | X(0) = i} spent in state i before a transition to state j is exponentially distributed with parameter q_ij:
Pr{T_ij ≤ τ} = 1 − exp(−q_ij τ)
State Holding Time: Proofs

Suppose our continuous-time Markov Chain has just arrived in state i. Define the random variable T_i to be the length of time the process spends in state i before moving to a different state. We call T_i the holding time in state i.

The Markov property implies that the distribution of how much longer you will be in a given state i is independent of how long you have already been there:

Pr{T_i > s + t | T_i > s} = Pr{T_i > t} for each state i and all s, t ≥ 0

Proof (1) (by contradiction): Suppose it is time s, you are in state i, and

Pr{T_i > s + t | T_i > s} ≠ Pr{T_i > t}

i.e., the amount of time you have already been in state i is relevant in predicting how much longer you will be there. Then for any time r < s, whether or not you were in state i at time r is relevant in predicting whether you will be in state i or a different state j at some future time s + t. Thus

Pr{X(t + s) = j | X(s) = i and X(r) = k} ≠ Pr{X(t + s) = j | X(s) = i}

which violates the Markov property.

Proof (2): The only continuous distribution satisfying the memoryless property is the exponential distribution. Thus, T_i is exponentially distributed.
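The memoryless property is also easy to confirm by simulation (my addition): for exponential holding times, the conditional survival probability Pr{T > s + t | T > s} estimated from samples matches Pr{T > t}. The rate and times below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
nu, s, t = 2.0, 0.5, 0.3               # illustrative rate and times
T = rng.exponential(1.0 / nu, size=1_000_000)

survived_s = T[T > s]                  # condition on T > s
lhs = np.mean(survived_s > s + t)      # Pr{T > s+t | T > s}
rhs = np.mean(T > t)                   # Pr{T > t}
print(lhs, rhs, np.exp(-nu * t))       # all three agree closely
```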
Example: Computer System
Assume a computer system where jobs arrive according to a Poisson process with rate λ.
Each job is processed using a First In First Out (FIFO) policy.
The processing time of each job is exponential with rate μ.
The computer has a buffer to store up to two jobs that wait for processing.
Jobs that find the buffer full are lost.
Example: Computer System

Questions:
Draw the state transition diagram.
Find the rate transition matrix Q.
Find the state transition matrix P.
Example

[Figure: birth-death state diagram with states 0, 1, 2, 3; arrival transitions at rate λ and departure transitions at rate μ.]

The rate transition matrix is given by

Q = [ −λ      λ        0        0
       μ   −(λ+μ)      λ        0
       0      μ     −(λ+μ)      λ
       0      0        μ       −μ ]

The state transition matrix (the embedded jump chain, P_ij = q_ij / ν_i) is given by

P = [    0         1         0         0
      μ/(λ+μ)      0      λ/(λ+μ)      0
         0      μ/(λ+μ)      0      λ/(λ+μ)
         0         0         1         0 ]
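A short script (my sketch; the λ and μ values are arbitrary assumptions) that builds this Q and recovers the jump matrix P, matching the matrices above:

```python
import numpy as np

lam, mu = 1.5, 2.0                      # assumed arrival/service rates

# Rate matrix for the 4-state system (1 job in service + 2 in the buffer)
Q = np.array([[-lam,      lam,       0.0,      0.0],
              [  mu, -(lam+mu),      lam,      0.0],
              [ 0.0,       mu, -(lam+mu),      lam],
              [ 0.0,      0.0,        mu,      -mu]])

nu = -np.diag(Q)
P = Q / nu[:, None]
np.fill_diagonal(P, 0.0)                # embedded chain: P_ij = q_ij / nu_i
print(P)                                # rows: [0,1,0,0], [mu/(lam+mu),0,lam/(lam+mu),0], ...
```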
Transient State Probabilities
State Probabilities and Transient Analysis

Similar to the discrete-time case, we define π_j(t) = Pr{X(t) = j}

In vector form: π(t) = [π_1(t), π_2(t), …]

With initial probabilities: π(0) = [π_1(0), π_2(0), …]

Using our previous notation (for a homogeneous MC):

π(t) = π(0) P(t) = π(0) e^{Qt}

Obtaining a general solution is not easy! Differentiating with respect to t gives us more insight:

dπ(t)/dt = π(t) Q

In scalar form: dπ_j(t)/dt = q_jj π_j(t) + Σ_{i≠j} q_ij π_i(t)
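For a finite chain, the transient distribution can be computed directly as π(t) = π(0) e^{Qt}. A sketch for the computer-system example (same assumed λ and μ as above), starting from an empty system:

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1.5, 2.0                        # same assumed rates as before
Q = np.array([[-lam,      lam,       0.0,      0.0],
              [  mu, -(lam+mu),      lam,      0.0],
              [ 0.0,       mu, -(lam+mu),      lam],
              [ 0.0,      0.0,        mu,      -mu]])

pi0 = np.array([1.0, 0.0, 0.0, 0.0])      # start empty: X(0) = 0
for t in (0.1, 1.0, 10.0):
    pi_t = pi0 @ expm(Q * t)              # pi(t) = pi(0) e^{Qt}
    print(t, pi_t.round(4))               # approaches the steady state
```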
"Probability Fluid" view

We view π_j(t) as the level of a "probability fluid" that is stored at each node j (0 = empty, 1 = full).

dπ_j(t)/dt = q_jj π_j(t) + Σ_{i≠j} q_ij π_i(t)

The change in the probability fluid equals inflow minus outflow: the inflow into node j is Σ_{i≠j} q_ij π_i(t), and the outflow is −q_jj π_j(t), since q_jj = −Σ_{r≠j} q_jr.

[Figure: node j with inflow arrows q_ij from nodes i and outflow arrows q_jr to nodes r.]
Steady State Probabilities
Steady State Analysis

Often we are interested in the "long-run" probabilistic behavior of the Markov chain, i.e.,

π_j = lim_{t→∞} π_j(t)

As with the discrete-time case, we need to address the following questions:
Under what conditions do the limits exist?
If they exist, do they form legitimate probabilities?
How can we evaluate these limits?

These limits are referred to as steady state probabilities, equilibrium state probabilities, or stationary state probabilities.
Steady State Analysis

Theorem: In an irreducible continuous-time Markov Chain consisting of positive recurrent states, a unique stationary state probability vector π exists, with

π_j = lim_{t→∞} π_j(t)

These limits are independent of the initial state probabilities and can be obtained by solving

πQ = 0 and Σ_j π_j = 1

Using the "probability fluid" view, in steady state dπ_j(t)/dt = 0 (no change in the fluid level), so inflow equals outflow:

0 = q_jj π_j + Σ_{i≠j} q_ij π_i

[Figure: node j with balanced inflow arrows q_ij and outflow arrows q_jr.]
Example

For the previous example, with the above transition rate matrix, what are the steady state probabilities?

[Figure: birth-death state diagram with states 0, 1, 2, 3; arrival rate λ and departure rate μ.]

Solve

[π_0 π_1 π_2 π_3] [ −λ      λ        0        0
                     μ   −(λ+μ)      λ        0
                     0      μ     −(λ+μ)      λ
                     0      0        μ       −μ ] = 0

with π_0 + π_1 + π_2 + π_3 = 1
Example

The solution is obtained from the balance equations πQ = 0 together with the normalization π_0 + π_1 + π_2 + π_3 = 1:

−λπ_0 + μπ_1 = 0 → π_1 = (λ/μ) π_0
λπ_0 − (λ+μ)π_1 + μπ_2 = 0 → π_2 = (λ/μ)² π_0
λπ_1 − (λ+μ)π_2 + μπ_3 = 0 → π_3 = (λ/μ)³ π_0

π_0 + π_1 + π_2 + π_3 = 1 → π_0 = 1 / (1 + (λ/μ) + (λ/μ)² + (λ/μ)³)
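These closed-form probabilities can be checked numerically (my sketch, same assumed λ and μ) by solving πQ = 0 with the normalization constraint as a least-squares problem:

```python
import numpy as np

lam, mu = 1.5, 2.0                         # same assumed rates as before
Q = np.array([[-lam,      lam,       0.0,      0.0],
              [  mu, -(lam+mu),      lam,      0.0],
              [ 0.0,       mu, -(lam+mu),      lam],
              [ 0.0,      0.0,        mu,      -mu]])

# Solve pi Q = 0 with sum(pi) = 1: append the normalization as an extra equation
A = np.hstack([Q, np.ones((4, 1))])        # columns: pi Q = 0, then pi . 1 = 1
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A.T, b, rcond=None)

rho = lam / mu
closed_form = np.array([1, rho, rho**2, rho**3]) / sum(rho**k for k in range(4))
print(pi.round(6), closed_form.round(6))   # the two agree
```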
Uniformization of Markov Chains
Uniformization of Markov Chains

In general, discrete-time models are easier to work with, and the computers needed to solve such models operate in discrete time.

Thus, we need a way to turn continuous-time Markov Chains into discrete-time ones.

Uniformization Procedure:
Recall that the total rate out of state i is −q_ii = ν(i).
Pick a uniform rate γ such that γ ≥ ν(i) for all states i.
The difference γ − ν(i) implies a "fictitious" event that returns the MC back to state i (a self-loop).
Uniformization of Markov Chains

Uniformization Procedure: Let P^U_ij be the transition probability from state i to state j of the discrete-time uniformized Markov Chain; then

P^U_ij = q_ij / γ,               if i ≠ j
P^U_ii = 1 − Σ_{j≠i} q_ij / γ,   if i = j

[Figure: state i with outgoing rates q_ij, q_ik, … becomes, after uniformization, a state with outgoing probabilities q_ij/γ, q_ik/γ, … and a self-loop with probability 1 − Σ_{j≠i} q_ij/γ.]

A code sketch of this construction follows below.
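The sketch below (my addition; γ is chosen as the largest leaving rate, and Q is the computer-system example with the same assumed λ and μ) builds the uniformized DTMC. The uniformized chain has the same stationary distribution as the CTMC, which the last lines check by power iteration.

```python
import numpy as np

lam, mu = 1.5, 2.0                   # same assumed rates as before
Q = np.array([[-lam,      lam,       0.0,      0.0],
              [  mu, -(lam+mu),      lam,      0.0],
              [ 0.0,       mu, -(lam+mu),      lam],
              [ 0.0,      0.0,        mu,      -mu]])

gamma = np.max(-np.diag(Q))          # uniform rate: gamma >= nu(i) for all i
PU = np.eye(len(Q)) + Q / gamma      # P^U = I + Q/gamma: off-diagonal q_ij/gamma,
                                     # diagonal 1 - sum_{j!=i} q_ij/gamma (self-loop)
assert np.allclose(PU.sum(axis=1), 1.0) and (PU >= 0).all()

# Stationary check: pi P^U = pi is equivalent to pi Q = 0
pi = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(5000):                # power iteration on the uniformized DTMC
    pi = pi @ PU
print(pi.round(6))                   # matches the CTMC steady state
```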
End of Chapter