Presented by: Anil Kumar H A (13MVD1002), Naveen Chaubey (13MVD1037), Aditya Dwivedi (13MVD1062)

Stochastic Process


DESCRIPTION

A stochastic process is an indexed collection of random variables.



  • Indexed collection of random variables {Xt}, t ∈ T; for each t ∈ T, Xt is a random variable.
    T = index set
    State space = range (possible values) of all the Xt
    Stationary process: the joint distribution of the X's depends only on their relative positions (it is not affected by a time shift):
    (Xt1, ..., Xtn) has the same distribution as (Xt1+h, Xt2+h, ..., Xtn+h)

    e.g. (X8, X11) has the same distribution as (X20, X23)

  • Markov process: the probability of any future event, given the present, does not depend on the past: for t0 < t1 < ... < tn-1 < tn < t,
    P(a ≤ Xt ≤ b | Xtn = xtn, ..., Xt0 = xt0) = P(a ≤ Xt ≤ b | Xtn = xtn)
    (Xtn = xtn is the present; the remaining conditions Xtn-1 = xtn-1, ..., Xt0 = xt0 are the past)

    Another way of writing this:
    P{Xt+1 = j | X0 = k0, X1 = k1, ..., Xt-1 = kt-1, Xt = i} = P{Xt+1 = j | Xt = i}
    for t = 0, 1, ... and every sequence i, j, k0, k1, ..., kt-1
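    A minimal sketch (mine, not from the slides) of what the Markov property means computationally: to simulate the chain, only the current state and the one-step transition probabilities are needed, never the earlier history. The 2-state matrix below is an arbitrary example chosen for illustration.

```python
import numpy as np

# Hypothetical 2-state chain, used only to illustrate the Markov property:
# the next state is sampled from the current state's row alone, never from
# the earlier history X0, ..., Xt-1.
P = np.array([[0.9, 0.1],        # P[i, j] = P(X_{t+1} = j | X_t = i)
              [0.3, 0.7]])
initial = np.array([1.0, 0.0])   # P(X_0 = i)

rng = np.random.default_rng(0)
state = rng.choice(2, p=initial)       # draw X0 from the initial distribution
path = [state]
for _ in range(10):
    state = rng.choice(2, p=P[state])  # depends only on the current state
    path.append(state)
print(path)
```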

  • Markov chains: state space {0, 1, ...}

    Discrete time: T = {0, 1, 2, ...}        Continuous time: T = [0, ∞)
    A finite number of states
    The Markovian property
    Stationary transition probabilities
    A set of initial probabilities P{X0 = i} for every state i

  • Note: Pij = P(Xt+1 = j | Xt = i) = P(X1 = j | X0 = i)
    Only depends on going ONE step

  • Stage (t) → Stage (t + 1): from state i, the chain enters state j with probability Pij.
    These are conditional probabilities!
    Note that, given Xt = i, the chain must enter some state 0, 1, 2, ..., j, ..., m at stage t + 1, so
    Pi0 + Pi1 + Pi2 + ... + Pim = 1
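    A quick sanity check, assuming the transition probabilities are stored as a NumPy matrix as in the earlier sketch: every row must be a probability distribution.

```python
import numpy as np

# Same arbitrary 2-state matrix as in the sketch above.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Each row i is the conditional distribution P(X_{t+1} = . | X_t = i),
# so its entries must be non-negative and sum to 1: Pi0 + ... + Pim = 1.
assert np.all(P >= 0)
assert np.allclose(P.sum(axis=1), 1.0)
```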


  • Example:
    t = day index 0, 1, 2, ...
    Xt = 0: high defective rate on the t-th day
    Xt = 1: low defective rate on the t-th day
    Two states ==> n = 1, state space (0, 1)

    P00 = P(Xt+1 = 0 | Xt = 0) = 1/4
    P01 = P(Xt+1 = 1 | Xt = 0) = 3/4
    P10 = P(Xt+1 = 0 | Xt = 1) = 1/2
    P11 = P(Xt+1 = 1 | Xt = 1) = 1/2

    P = | 1/4  3/4 |
        | 1/2  1/2 |
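    As a sketch of my own (NumPy assumed), the matrix of this example can be entered directly and multi-step transition probabilities obtained as matrix powers:

```python
import numpy as np

# Transition matrix of the defective-rate example:
# state 0 = high defective rate, state 1 = low defective rate.
P = np.array([[0.25, 0.75],   # P00, P01
              [0.50, 0.50]])  # P10, P11

# n-step transition probabilities are the entries of P^n.
P2 = np.linalg.matrix_power(P, 2)
print(P2[0, 0])   # P(X_{t+2} = 0 | X_t = 0) = 0.4375
```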


  • Properties:

    Homogeneous, irreducible, aperiodic
    Limiting state probabilities Pj (j = 0, 1, 2, ...) exist and are independent of the initial probabilities Pj(0)

  • If all states of the chain are recurrent and their mean recurrence time is finite, the Pj form a stationary probability distribution and can be determined by solving the equations
    Pj = Σi Pi Pij  (j = 0, 1, 2, ...)   and   Σi Pi = 1
    Solution ==> equilibrium state probabilities
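    A sketch of my own (the helper name `stationary_distribution` is hypothetical) for solving these equilibrium equations numerically: p = pP together with Σ Pi = 1 becomes a small linear system.

```python
import numpy as np

def stationary_distribution(P):
    """Solve Pj = sum_i Pi * Pij together with sum_j Pj = 1."""
    n = P.shape[0]
    # Rewrite p = p P as (P^T - I) p = 0 and append the normalization row.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Defective-rate example from the earlier slide: equilibrium is (0.4, 0.6).
P = np.array([[0.25, 0.75],
              [0.50, 0.50]])
print(stationary_distribution(P))
```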


  • Example: Consider a communication system which transmits the digits 0 and 1 through several stages. At each stage the probability that the same digit will be received by the next stage, as transmitted, is 0.75. What is the probability that a 0 that is entered at the first stage is received as a 0 by the 5th stage?

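    A small numerical check (my own sketch, NumPy assumed): passing from the 1st stage to the 5th stage takes four transitions, so the required probability is the (0, 0) entry of P^4.

```python
import numpy as np

# Per-stage transition matrix: a digit is passed on unchanged with
# probability 0.75 and flipped with probability 0.25.
P = np.array([[0.75, 0.25],
              [0.25, 0.75]])

# 1st stage -> 5th stage is 4 transitions, so the answer is (P^4)[0, 0].
P4 = np.linalg.matrix_power(P, 4)
print(P4[0, 0])   # 0.53125
```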


  • We have the equations
    p0 + p1 = 1,  p0 = 0.75 p0 + 0.25 p1,  p1 = 0.25 p0 + 0.75 p1.
    The unique solution of these equations is p0 = 0.5, p1 = 0.5. This means that if data are passed through a large number of stages, the output is independent of the original input and each digit received is equally likely to be a 0 or a 1.

  • Note that as the number of stages n grows, every row of P^n approaches (0.5, 0.5).

    Note also that
    p P = (0.5, 0.5) = p,
    so p is a stationary distribution.
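    A short check of both statements, assuming NumPy:

```python
import numpy as np

P = np.array([[0.75, 0.25],
              [0.25, 0.75]])
p = np.array([0.5, 0.5])

print(p @ P)                          # [0.5 0.5] -- unchanged, so p is stationary
print(np.linalg.matrix_power(P, 20))  # every row is close to (0.5, 0.5)
```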

  • Problem: the CPU of a multiprogramming system is, at any time, executing instructions from one of the following:

    a user program                                            ==> Problem state (S3)
    an OS routine explicitly called by a user program (S2)
    an OS routine performing a system-wide control task (S1)  ==> Supervisor state
    the wait loop                                             ==> Idle state (S0)

  • Assume the time spent in each state is 50 ms.

    Note: S1 should be split into 3 states (S3, S1), (S2, S1), (S0, S1) so that a distinction can be made regarding entering S0.

  • State transition diagram of the discrete-time Markov model of a CPU (figure)

  • Transition probability matrix (rows = from state, columns = to state):

              S0     S1     S2     S3
        S0   0.99   0.01   0      0
        S1   0.02   0.92   0.02   0.04
        S2   0      0.01   0.90   0.09
        S3   0      0.01   0.01   0.98

  • Equilibrium state probabilities can be computed by solving the system of equations
    P0 = 0.99 P0 + 0.02 P1
    P1 = 0.01 P0 + 0.92 P1 + 0.01 P2 + 0.01 P3
    P2 = 0.02 P1 + 0.90 P2 + 0.01 P3
    P3 = 0.04 P1 + 0.09 P2 + 0.98 P3
    1  = P0 + P1 + P2 + P3
    So we have:
    P0 = 2/9, P1 = 1/9, P2 = 8/99, P3 = 58/99
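    These equations are just p = pP with the normalization Σ Pj = 1 for the matrix above; a sketch of my own that solves the system with NumPy recovers the stated fractions.

```python
import numpy as np
from fractions import Fraction

# CPU model transition matrix (rows = from state S0..S3, columns = to state).
P = np.array([[0.99, 0.01, 0.00, 0.00],
              [0.02, 0.92, 0.02, 0.04],
              [0.00, 0.01, 0.90, 0.09],
              [0.00, 0.01, 0.01, 0.98]])

n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])   # p = p P  plus  sum(p) = 1
b = np.append(np.zeros(n), 1.0)
p, *_ = np.linalg.lstsq(A, b, rcond=None)

print(p)  # ~ [0.2222, 0.1111, 0.0808, 0.5859]
print([Fraction(float(x)).limit_denominator(99) for x in p])  # 2/9, 1/9, 8/99, 58/99
```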

  • Utilization of the CPU: 1 - P0 = 7/9 ≈ 77.8%
    58.6% of the total time is spent processing user programs (S3)
    19.2% of the time is spent in the supervisor state (11.1% in S1, 8.1% in S2)
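    The percentages follow directly from the equilibrium probabilities; a quick arithmetic check (my own):

```python
# Quick check of the utilization figures from the equilibrium probabilities.
P0, P1, P2, P3 = 2/9, 1/9, 8/99, 58/99

print(f"CPU utilization     1 - P0 = {1 - P0:.1%}")   # 77.8%
print(f"user programs (S3)      P3 = {P3:.1%}")       # 58.6%
print(f"supervisor state   P1 + P2 = {P1 + P2:.1%}")  # 19.2%
print(f"  S1 = {P1:.1%},  S2 = {P2:.1%}")             # 11.1%, 8.1%
```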
