
Markov theory


Page 1: Markov theory


MARKOV THEORY


DEFINITION 3.1:

A stochastic process, {X(t), t ∈ T}, is a collection of random variables. That is, for each t ∈ T, X(t) is a random variable. The index t is often referred to as time and, as a result, we refer to X(t) as the state of the process at time t. The set T is called the index set of the process.

DEFINITION 3.2:

When T is a countable set, the stochastic process is said to be a discrete-time process. If T is an interval of the real line, the stochastic process is said to be a continuous-time process.

DEFINITION 3.3:

The state space of a stochastic process is defined as the set of all possible values that the random variables X(t) can assume.

THUS, A STOCHASTIC PROCESS IS A FAMILY OF RANDOM VARIABLES THAT DESCRIBES THE EVOLUTION THROUGH TIME OF SOME (PHYSICAL) PROCESS.


MARKOV THEORY EDGAR L. DE CASTRO PAGE 1


Page 2: Markov theory

DISCRETE-TIME PROCESSES

DEFINITION 3.4:

An epoch is a point in time at which the system is observed. The states correspond to the possible conditions observed. A transition is a change of state. A record of the observed states through time is called a realization of the process.

DEFINITION 3.5:

A transition diagram is a pictorial map in which the states are represented by points and transitions by arrows.

TRANSITION DIAGRAM FOR THREE STATES [figure not reproduced]


Page 3: Markov theory

DEFINITION 3.6:

The process of transition can be visualized as a random walk of the particle over the transition diagram. A virtual transition is one where the new state is the same as the old. A real transition is a genuine change of state.

THE RANDOM WALK MODEL

Consider a discrete-time process whose state space is given by the integers i = 0, ±1, ±2, ... The discrete-time process is said to be a random walk if, for some number 0 < p < 1,

Pi,i+1 = p = 1 − Pi,i−1,   i = 0, ±1, ±2, ...

The random walk may be thought of as a model for an individual walking on a straight line who at each point of time either takes one step to the right with probability p or one step to the left with probability 1 − p.
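The walk described above is easy to simulate. A minimal sketch in Python (the choice p = 0.6, the step count, and the `random_walk` helper are illustrative assumptions, not part of the text):

```python
import random

def random_walk(p, n_steps, start=0, seed=42):
    """Simulate a random walk on the integers: from state i, move to
    i + 1 with probability p and to i - 1 with probability 1 - p."""
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(n_steps):
        state += 1 if rng.random() < p else -1
        path.append(state)
    return path

# One realization of the process: 11 observed states (epochs 0..10).
print(random_walk(p=0.6, n_steps=10))
```

Each printed list is one realization of the process in the sense of Definition 3.4.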


Page 4: Markov theory

THE MARKOV CHAIN

DEFINITION 3.7:

A Markov chain is a discrete-time stochastic process in which the current state of each random variable Xi depends only on the previous state. The word chain suggests the linking of the random variables to their immediately adjacent neighbors in the sequence. Markov is the Russian mathematician who developed the process around the beginning of the 20th century.

TRANSITION PROBABILITY (Pij) - the probability of a transition from state i to state j after one period.


TRANSITION MATRIX (P) - the matrix of transition probabilities.


    | P11  P12  ...  P1n |
P = | P21  P22  ...  P2n |
    |  .    .         .  |
    | Pn1  Pn2  ...  Pnn |
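As a concrete illustration, a hypothetical 3-state transition matrix in Python (NumPy assumed available; the probabilities are made up for the example):

```python
import numpy as np

# Hypothetical 3-state chain: P[i, j] is the one-period probability
# of a transition from state i to state j.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Each row is a probability distribution over the next state,
# so every row must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
assert np.all(P >= 0)
```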

Page 5: Markov theory

ASSUMPTIONS OF THE MARKOV CHAIN

1. THE MARKOV ASSUMPTION

The knowledge of the state at any time is sufficient to predict the future of the process. Or: given the present, the future is independent of the past, and the process is "forgetful."

2. THE STATIONARITY ASSUMPTION

The probability mechanism is assumed to be stable; that is, the transition probabilities do not change over time.

CHAPMAN-KOLMOGOROV EQUATIONS

Let Pij(n) = the n-step transition probability, i.e., the probability that a process in state i will be in state j after n additional transitions.

Pij(n) = P{Xn+m = j | Xm = i},   n ≥ 0, i, j ≥ 0

The Chapman-Kolmogorov equations provide a method for calculating these n-step transition probabilities:

Pij(n+m) = Σ (k=0 to ∞) Pik(n) Pkj(m),   n, m ≥ 0, all i, j

Formally, we derive:


Page 6: Markov theory

Pij(n+m) = P{Xn+m = j | X0 = i}

= Σ (k=0 to ∞) P{Xn+m = j, Xn = k | X0 = i}

= Σ (k=0 to ∞) P{Xn+m = j | Xn = k, X0 = i} P{Xn = k | X0 = i}

= Σ (k=0 to ∞) Pkj(m) Pik(n)

If we let P(n) denote the matrix of n-step transition probabilities Pij(n), then

P(n+m) = P(n) · P(m)

where the dot represents matrix multiplication. Hence, in particular:

P(2) = P(1+1) = P · P = P^2

And by induction:

P(n) = P(n-1+1) = P(n-1) · P = P^n

That is, the n-step transition matrix is obtained by multiplying the matrix P by itself n times. Therefore the n-step transition matrix is given by P(n) = P^n.
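This can be checked numerically; a short sketch (hypothetical 2-state matrix, NumPy assumed):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# n-step transition matrices as matrix powers: P(n) = P^n.
P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)

# Chapman-Kolmogorov in matrix form: P(2+1) = P(2) . P(1).
assert np.allclose(P3, P2 @ P)

print(P2)   # [[0.61, 0.39], [0.52, 0.48]]
```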


Page 7: Markov theory

FIRST PASSAGE AND FIRST RETURN PROBABILITIES

Let fij(N) = first passage probability = probability of reaching state j from state i for the first time in N steps.

fii(N) = first return probability (the case i = j)

fij(N) = P{XN = j, XN-1 ≠ j, XN-2 ≠ j, ..., X1 ≠ j | X0 = i}

fij(1) = Pij

fij(N) = Pij(N) − Σ (k=1 to N−1) fij(k) Pjj(N−k)


       | P11(N)  P12(N)  ...  P1n(N) |
P(N) = | P21(N)  P22(N)  ...  P2n(N) |
       |   .       .            .    |
       | Pn1(N)  Pn2(N)  ...  Pnn(N) |

Page 8: Markov theory

CLASSIFICATION OF STATES

For fixed i and j, the fij(N) are nonnegative numbers such that

Σ (N=1 to ∞) fij(N) ≤ 1

When the sum does equal 1, the fij(N) can be considered as a probability distribution for the random variable: the first passage time.

If i = j and

Σ (N=1 to ∞) fii(N) = 1

then state i is called a recurrent state, because this condition implies that once the process is in state i, it will return to state i.

A special case of the recurrent state is the absorbing state. A state i is said to be an absorbing state if the one-step transition probability Pii = 1. Thus, if a state is absorbing, the process will never leave it once it enters. If

Σ (N=1 to ∞) fii(N) < 1

then state i is called a transient state, because this condition implies that once the process is in state i, there is a strictly positive probability that it will never return to i.


Page 9: Markov theory

Let Mij = expected first passage time from i to j

Mij = ∞,                        if Σ (N=1 to ∞) fij(N) < 1

Mij = Σ (N=1 to ∞) N fij(N),    if Σ (N=1 to ∞) fij(N) = 1

[Mij exists only if the states are recurrent]

Whenever

Σ (N=1 to ∞) fij(N) = 1

then

Mij = 1 + Σ (k≠j) Pik Mkj

When j = i, the expected first passage time is called the first recurrence time. If Mii = ∞, it is called a null recurrent state. If Mii < ∞, it is called a positive recurrent state. In a finite Markov chain, there are no null recurrent states (only positive recurrent states and transient states).
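For a recurrent chain, the system Mij = 1 + Σ(k≠j) Pik Mkj is linear and can be solved directly; a sketch with a hypothetical 2-state chain (NumPy assumed, helper name illustrative):

```python
import numpy as np

def mean_first_passage(P, j):
    """Solve Mij = 1 + sum_{k != j} Pik * Mkj for all start states i;
    M[j] is then the first recurrence time Mjj."""
    n = P.shape[0]
    keep = [k for k in range(n) if k != j]
    # The equations for i != j form the linear system (I - P_keep) m = 1.
    A = np.eye(n - 1) - P[np.ix_(keep, keep)]
    m = np.linalg.solve(A, np.ones(n - 1))
    M = np.zeros(n)
    for idx, i in enumerate(keep):
        M[i] = m[idx]
    M[j] = 1 + P[j, keep] @ m   # one more application gives Mjj
    return M

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
print(mean_first_passage(P, 1))   # [10/3, 7/3]
```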


Page 10: Markov theory

State j is accessible from state i if Pij(n) > 0 for some n ≥ 0. If j is accessible from i and i is accessible from j, then the two states communicate. In general:

(1) any state communicates with itself;
(2) if state i communicates with state j, then state j communicates with state i;
(3) if state i communicates with state j and state j communicates with state k, then state i communicates with state k.

If all states communicate, the Markov chain is irreducible. In a finite Markov chain, the members of a class are either all transient states or all positive recurrent states. A state i is said to have period t (t > 1) if Pii(N) = 0 whenever N is not divisible by t, and t is the largest integer with this property. If a state has period 1, it is called an aperiodic state. If state i in a class is aperiodic, then all states in the class are aperiodic. Positive recurrent states that are aperiodic are called ergodic states.
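Accessibility and communication can be checked mechanically from the zero pattern of P; a sketch using Warshall's transitive-closure algorithm (the 3-state matrix is a hypothetical example with an absorbing state):

```python
import numpy as np

def communicates(P):
    """C[i, j] is True iff states i and j communicate: each is
    accessible from the other (Pij(n) > 0 for some n >= 0)."""
    n = P.shape[0]
    reach = (P > 0) | np.eye(n, dtype=bool)   # 0- or 1-step accessibility
    for k in range(n):                        # Warshall transitive closure
        reach |= np.outer(reach[:, k], reach[k, :])
    return reach & reach.T

P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.6, 0.1],
              [0.0, 0.0, 1.0]])
C = communicates(P)
print(C[0, 1], C[0, 2])   # True False: the chain is not irreducible
```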


Page 11: Markov theory

ERGODIC MARKOV CHAINS

STEADY STATE (LIMITING) PROBABILITIES

Let πj = lim (N→∞) Pij(N)

As N grows large:

       | π1  π2  ...  πn |
P^N →  | π1  π2  ...  πn |
       |  .   .        . |
       | π1  π2  ...  πn |

As long as the process is ergodic, such a limit exists.

P(N) = P(N-1) · P

lim (N→∞) P(N) = lim (N→∞) P(N-1) · P

| π1  π2  ...  πn |   | π1  π2  ...  πn |
| π1  π2  ...  πn | = | π1  π2  ...  πn | · P
|  .   .        . |   |  .   .        . |
| π1  π2  ...  πn |   | π1  π2  ...  πn |

π = π · P

π^T = P^T · π^T

[This system possesses an infinite number of solutions.]


Page 12: Markov theory

The normalizing equation

Σ (all i) πi = 1

is used to identify the one solution which will qualify as a probability distribution.
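In practice, π = πP is solved together with the normalizing equation by replacing one redundant balance equation; a sketch (hypothetical 2-state chain, NumPy assumed):

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi P with sum(pi) = 1: the balance equations are
    linearly dependent, so one row is replaced by the normalizer."""
    n = P.shape[0]
    A = (np.eye(n) - P).T    # balance equations (I - P)^T pi^T = 0
    A[-1, :] = 1.0           # replace one equation with sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = steady_state(P)
print(pi)                    # [4/7, 3/7] ≈ [0.5714, 0.4286]
assert np.allclose(pi @ P, pi)
```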

ABSORBING MARKOV CHAINS


Let

    | P11  P12  ...  P1k | P1,k+1  ...  P1n |
    | P21  P22  ...  P2k | P2,k+1  ...  P2n |
    |  .    .         .  |   .           .  |
P = | Pk1  Pk2  ...  Pkk | Pk,k+1  ...  Pkn |
    |--------------------|------------------|
    |  0    0   ...   0  |   1     ...   0  |
    |  .    .         .  |   .           .  |
    |  0    0   ...   0  |   0     ...   1  |

The partitioned matrix is given by:

P = | Q | R |
    | 0 | I |

where Q contains the transition probabilities among the transient states, R the probabilities from transient states to absorbing states, 0 is a zero matrix, and I an identity matrix.

Page 13: Markov theory

Let eij = mean number of times that transient state j is occupied, given that the initial state is i, before absorption

E = the corresponding matrix

Then,

i ≠ j: eij = Σ (v=1 to k) Piv evj

i = j: eij = 1 + Σ (v=1 to k) Piv evj

In matrix form:

E = I + QE

E − QE = I

(I − Q)E = I

E = (I − Q)^(-1)

Let di = expected total number of transitions until absorption

di = Σ (j=1 to k) eij
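With Q in hand, E = (I − Q)^(-1) and the di are one line each; a sketch for a hypothetical symmetric walk on {0, 1, 2, 3} with absorbing barriers at 0 and 3 (transient states 1 and 2):

```python
import numpy as np

# Transient-to-transient block Q for the hypothetical walk:
# from state 1 or 2, move to a neighboring state with probability 1/2.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])

E = np.linalg.inv(np.eye(2) - Q)   # E = (I - Q)^(-1)
d = E.sum(axis=1)                  # di = sum_j eij

print(E)   # [[4/3, 2/3], [2/3, 4/3]]
print(d)   # [2., 2.]: two expected steps until absorption from either state
```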


Page 14: Markov theory

ABSORPTION PROBABILITY - the probability of entering an absorbing state

Let Aij = probability that the process ever enters absorbing state j, given that the initial state is i.

Aij = Pij + Σ (v=1 to k) Piv Avj

In matrix form

A = matrix of Aij (not necessarily square)

[where the number of rows is the number of transient states and the number of columns is the number of absorbing states]

Examining matrix A:

A = R + QA

A − QA = R

(I − Q)A = R

A = (I − Q)^(-1) R
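Continuing the same hypothetical walk on {0, 1, 2, 3} with absorbing barriers at 0 and 3, the absorption probabilities follow from A = (I − Q)^(-1) R:

```python
import numpy as np

Q = np.array([[0.0, 0.5],    # transient-to-transient block
              [0.5, 0.0]])
R = np.array([[0.5, 0.0],    # transient-to-absorbing block:
              [0.0, 0.5]])   # state 1 -> barrier 0, state 2 -> barrier 3

A = np.linalg.inv(np.eye(2) - Q) @ R

print(A)   # [[2/3, 1/3], [1/3, 2/3]]
# Absorption somewhere is certain, so each row of A sums to 1.
assert np.allclose(A.sum(axis=1), 1.0)
```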

CONDITIONAL MEAN FIRST PASSAGE TIME - the expected number of transitions which will occur before an absorbing state is entered

Aij Mij = Aij + Σ (k≠j) Pik Akj Mkj
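The conditional equation becomes linear under the substitution Bij = Aij Mij, which gives B = A + QB and hence B = (I − Q)^(-1) A; a sketch, again on the hypothetical walk with absorbing barriers:

```python
import numpy as np

Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])

E = np.linalg.inv(np.eye(2) - Q)
A = E @ R          # absorption probabilities
B = E @ A          # Bij = Aij * Mij solves B = A + Q B
M = B / A          # conditional mean first passage times

print(M)           # [[5/3, 8/3], [8/3, 5/3]]
# Consistency check: sum_j Aij * Mij recovers the unconditional di.
assert np.allclose((A * M).sum(axis=1), E.sum(axis=1))
```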
