processus martingales



  • 8/18/2019 processus martingales

    1/46

    STAT-F-407

    Martingales

    Thomas Verdebout

    Université Libre de Bruxelles

    2015


    Outline of the course

    1. A short introduction.

    2. Basic probability review.

    3. Martingales.

    4. Markov chains.

    5. Markov processes, Poisson processes.

    6. Brownian motions.


    Outline of the course

1. A short introduction.

    2. Basic probability review.

    3. Martingales.

    3.1. Definitions and examples.

    3.2. Sub- and super- martingales, gambling strategies.

    3.3. Stopping times and the optional stopping theorem.

    3.4. Limiting results.

    4. Markov chains.

    5. Markov processes, Poisson processes.

    6. Brownian motions.


    Definitions and basic comments

Let (Ω, A, P) be a probability space.

    Definition: a filtration is a sequence {A_n | n ∈ N} of σ-algebras such that A_0 ⊂ A_1 ⊂ ... ⊂ A.

    Definition: the SP {Y_n | n ∈ N} is adapted to the filtration {A_n | n ∈ N} ⇔ Y_n is A_n-measurable for all n.

    Intuition: the information A_n grows with n, and the value of Y_n is known as soon as the information A_n is available.

    The SP {Y_n | n ∈ N} is a martingale w.r.t. the filtration {A_n | n ∈ N} ⇔

      (i) {Y_n | n ∈ N} is adapted to the filtration {A_n | n ∈ N}.

      (ii) E[|Y_n|] < ∞ for all n.

      (iii) E[Y_{n+1} | A_n] = Y_n a.s. for all n.


    Definitions and basic comments

    Remarks:

     (iii) shows that a martingale can be thought of as the

    fortune of a gambler betting on a fair game.

  (iii) ⇒ E[Y_n] = E[Y_0] for all n (mean-stationarity). Using (iii), we also have (for k = 2, 3, ...)

        E[Y_{n+k} | A_n] = E[ E[Y_{n+k} | A_{n+k−1}] | A_n ] = E[Y_{n+k−1} | A_n] = ... = E[Y_{n+1} | A_n] = Y_n

    a.s. for all n.


    Definitions and basic comments

Let σ(X_1, ..., X_n) be the smallest σ-algebra containing

        {X_i^{−1}(B) | B ∈ B, i = 1, ..., n}.

    The SP {Y_n | n ∈ N} is a martingale w.r.t. the SP {X_n | n ∈ N} ⇔ {Y_n | n ∈ N} is a martingale w.r.t. the filtration {σ(X_1, ..., X_n) | n ∈ N} ⇔

      (i) Y_n is σ(X_1, ..., X_n)-measurable for all n.

      (ii) E[|Y_n|] < ∞ for all n.

      (iii) E[Y_{n+1} | X_1, ..., X_n] = Y_n a.s. for all n.

    Remark:

 (i) just states that "Y_n is a function of X_1, ..., X_n only".


    Definitions and basic comments

    A lot of examples...


    Example 1

Let X_1, X_2, ... be ⊥⊥ integrable r.v.'s, with common mean 0.

    Let Y_n := Σ_{i=1}^n X_i. Then {Y_n} is a martingale w.r.t. {X_n}.

    Indeed,

  (i) is trivial.

      (ii) is trivial.

      (iii): with A_n := σ(X_1, ..., X_n), we have

        E[Y_{n+1} | A_n] = E[Y_n + X_{n+1} | A_n] = E[Y_n | A_n] + E[X_{n+1} | A_n]
                         = Y_n + E[X_{n+1}] = Y_n + 0 = Y_n a.s. for all n,

    where we used that Y_n is A_n-measurable and that X_{n+1} ⊥⊥ A_n.

Similarly, if X_1, X_2, ... are ⊥⊥ and integrable with means µ_1, µ_2, ..., respectively, {Σ_{i=1}^n (X_i − µ_i)} is a martingale w.r.t. {X_n}.
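This mean-stationarity is easy to check numerically. A minimal sketch in Python, assuming uniform increments on [−1, 1] (an illustrative choice; the slides only require independent, integrable, mean-zero X_i):

```python
import random

random.seed(42)

def centered_sum_path(n_steps):
    """One path of Y_n = X_1 + ... + X_n with X_i ~ Uniform[-1, 1] (mean 0)."""
    y = 0.0
    for _ in range(n_steps):
        y += random.uniform(-1.0, 1.0)
    return y

# Mean-stationarity (a consequence of the martingale property):
# E[Y_n] = E[Y_0] = 0 for every n.
finals = [centered_sum_path(30) for _ in range(20000)]
mean_final = sum(finals) / len(finals)
print(mean_final)  # close to 0
```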


    Example 2

Let X_1, X_2, ... be ⊥⊥ integrable r.v.'s, with common mean 1.

    Let Y_n := Π_{i=1}^n X_i. Then {Y_n} is a martingale w.r.t. {X_n}.

    Indeed,

      (i) is trivial.

  (ii) is trivial.

      (iii): with A_n := σ(X_1, ..., X_n), we have

        E[Y_{n+1} | A_n] = E[Y_n X_{n+1} | A_n] = Y_n E[X_{n+1} | A_n] = Y_n E[X_{n+1}] = Y_n

    a.s. for all n, where we used that Y_n is A_n-measurable (and hence behaves as a constant in E[ · | A_n]).

Similarly, if X_1, X_2, ... are ⊥⊥ and integrable with means µ_1, µ_2, ..., respectively, {Π_{i=1}^n (X_i/µ_i)} is a martingale w.r.t. {X_n}.


    Example 2

    The previous example is related to basic models for stock

    prices.

Assume X_1, X_2, ... are positive, ⊥⊥, and integrable r.v.'s. Let Y_n := c Π_{i=1}^n X_i, where c is the initial price.

The quantity X_i − 1 is the change in the value of the stock over a fixed time interval (one day, say) as a fraction of its current value. This multiplicative model (i) ensures nonnegativity of the price and (ii) is compatible with the fact that fluctuations in the value of a stock are roughly proportional to its price.

    Various models are obtained for various distributions of the  X i ’s:

 Discrete Black-Scholes model: X_i = e^{η_i}, where η_i ∼ N(µ, σ²) for all i.

      Binomial model: X_i = e^{−r}(1 + a)^{2η_i − 1}, where η_i ∼ Bin(1, p) for all i (r = interest rate, by which one discounts future rewards).
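A minimal simulation sketch of the multiplicative model, assuming lognormal factors with mean log −σ²/2 so that E[X_i] = 1 and the price is a martingale (the values c = 100, σ = 0.02 and the 50-step horizon are illustrative choices, not from the slides):

```python
import math
import random

random.seed(7)

def price_path(c, n_steps, sigma):
    """Discrete Black-Scholes-type price: P_n = c * prod_i exp(eta_i),
    with eta_i ~ N(-sigma**2/2, sigma**2) so that E[exp(eta_i)] = 1."""
    p = c
    for _ in range(n_steps):
        p *= math.exp(random.gauss(-sigma**2 / 2, sigma))
    return p

# Martingale mean-stationarity: E[P_n] stays at the initial price c.
prices = [price_path(100.0, 50, 0.02) for _ in range(20000)]
mean_price = sum(prices) / len(prices)
print(mean_price)  # close to 100
```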


    Example 3: random walks

Let X_1, X_2, ... be i.i.d., with P[X_i = 1] = p and P[X_i = −1] = q = 1 − p. Let Y_n := Σ_{i=1}^n X_i.

      The SP {Y n } is called a random walk.

    Remarks:

 If p = q, the RW is said to be symmetric.

     Of course, from Example 1, we know that {(Σ_{i=1}^n X_i) − n(p − q)} is a martingale w.r.t. {X_n}. But other martingales exist for RWs...


    Example 3: random walks

Consider the non-symmetric case (p ≠ q) and let S_n := (q/p)^{Y_n}.

    Then {S_n} is a martingale w.r.t. {X_n}.

    Indeed,

  (i) is trivial.

      (ii): |S_n| ≤ max((q/p)^n, (q/p)^{−n}). Hence, E[|S_n|] < ∞.

      (iii): with A_n := σ(X_1, ..., X_n), we have

        E[S_{n+1} | A_n] = E[(q/p)^{Y_n} (q/p)^{X_{n+1}} | A_n] = (q/p)^{Y_n} E[(q/p)^{X_{n+1}} | A_n]
                         = S_n E[(q/p)^{X_{n+1}}] = S_n ((q/p)^1 × p + (q/p)^{−1} × q) = S_n a.s. for all n,

    where we used that Y_n is A_n-measurable and that X_{n+1} ⊥⊥ A_n.


    Example 3: random walks

Consider the symmetric case (p = q) and let S_n := Y_n² − n.

    Then {S_n} is a martingale w.r.t. {X_n}. Indeed,

      (i) is trivial.

      (ii):  |S n | ≤ n 2 − n . Hence,  E[|S n |] < ∞.

      (iii): with An   := σ(X 1, . . . ,X n ), we have

    E[S_{n+1} | A_n] = E[(Y_n + X_{n+1})² − (n + 1) | A_n]
                     = E[(Y_n² + X_{n+1}² + 2 Y_n X_{n+1}) − (n + 1) | A_n]
                     = Y_n² + E[X_{n+1}² | A_n] + 2 Y_n E[X_{n+1} | A_n] − (n + 1)
                     = Y_n² + E[X_{n+1}²] + 2 Y_n E[X_{n+1}] − (n + 1)
                     = Y_n² + 1 + 0 − (n + 1) = S_n a.s. for all n,

    where we used that Y_n is A_n-measurable and that X_{n+1} ⊥⊥ A_n.
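A quick empirical check that S_n = Y_n² − n has constant mean E[S_0] = 0 (a sketch; the horizon n = 25 and the sample size are arbitrary choices):

```python
import random

random.seed(11)

def s_n(n_steps):
    """S_n = Y_n**2 - n for one symmetric random walk path."""
    y = 0
    for _ in range(n_steps):
        y += random.choice((-1, 1))
    return y * y - n_steps

vals = [s_n(25) for _ in range(40000)]
mean_s = sum(vals) / len(vals)
print(mean_s)  # close to 0
```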


    Example 4: De Moivre’s martingales

Let X_1, X_2, ... be i.i.d., with P[X_i = 1] = p and P[X_i = −1] = q = 1 − p.
    Let Y_0 := k ∈ {1, 2, ..., m − 1} be the initial state.
    Let Y_{n+1} := (Y_n + X_{n+1}) I[Y_n ∉ {0, m}] + Y_n I[Y_n ∈ {0, m}].

  The SP {Y_n} is called a random walk with absorbing barriers.

    Remarks:

 Before being caught either in 0 or m, this is just a RW.

     As soon as you get in 0 or m, you stay there forever.



    Example 4: De Moivre’s martingales

Let X_1, X_2, ... be i.i.d., with P[X_i = 1] = p and P[X_i = −1] = q = 1 − p.
    Let Y_0 := k ∈ {1, 2, ..., m − 1} be the initial state.
    Let Y_{n+1} := (Y_n + X_{n+1}) I[Y_n ∉ {0, m}] + Y_n I[Y_n ∈ {0, m}].

In this new setup and with this new definition of Y_n,

      In the non-symmetric case, {S_n := (q/p)^{Y_n}} is still a martingale w.r.t. {X_n} (exercise).



    Example 5: branching processes

Consider some population, in which each individual i of the Z_n individuals in the n-th generation gives birth to X_{n+1,i} children (the X_{n,i}'s are i.i.d., take values in N, and have common mean µ < ∞).

    Assume that Z 0 = 1.

Then {Z_n/µ^n} is a martingale w.r.t. {A_n := σ(the X_{m,i}'s, m ≤ n)}.

    Indeed,

  (i), (ii): exercise...

      (iii): E[Z_{n+1}/µ^{n+1} | A_n] = (1/µ^{n+1}) E[Σ_{i=1}^{Z_n} X_{n+1,i} | A_n]
           = (1/µ^{n+1}) Σ_{i=1}^{Z_n} E[X_{n+1,i} | A_n] = (1/µ^{n+1}) Σ_{i=1}^{Z_n} E[X_{n+1,i}]
           = Z_n µ/µ^{n+1} = Z_n/µ^n a.s. for all n.

    In particular, E[Z_n/µ^n] = E[Z_0/µ^0] = 1. Hence, E[Z_n] = µ^n for all n.
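The identity E[Z_n] = µ^n can be checked by simulation. A sketch assuming Poisson(µ) offspring counts (an illustrative choice; the slides only assume i.i.d. offspring counts with mean µ), with µ = 1.1 and n = 10 picked arbitrarily:

```python
import math
import random

random.seed(3)

def poisson(lam):
    """Poisson sampler (Knuth's multiplication method; fine for small lam)."""
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return k
        k += 1

def branching_final(mu, n_gens):
    """One Galton-Watson path with Z_0 = 1 and Poisson(mu) offspring counts."""
    z = 1
    for _ in range(n_gens):
        z = sum(poisson(mu) for _ in range(z))
    return z

mu, n = 1.1, 10
sims = [branching_final(mu, n) for _ in range(20000)]
mean_z = sum(sims) / len(sims)
print(mean_z)  # close to mu**n ≈ 2.59
```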



    Example 6: Polya’s urn

Consider an urn containing b blue balls and r red ones. Pick randomly some ball in the urn and put it back in the urn with an extra ball of the same color. Repeat this procedure.

    This is a so-called contamination process.

Let X_n be the number of red balls in the urn after n steps. Let

        R_n = X_n/(b + r + n)

    be the proportion of red balls in the urn after n steps.

    Then {R_n} is a martingale w.r.t. {X_n}.



    Example 6: Polya’s urn

    Indeed,

      (i) is trivial.

      (ii): 0 ≤ |R n | ≤ 1. Hence,  E[|R n |] < ∞.

      (iii): with An   := σ(X 1, . . . ,X n ), we have

    E[X_{n+1} | A_n] = (X_n + 1) × X_n/(r + b + n) + (X_n + 0) × (1 − X_n/(r + b + n))

                     = ((X_n + 1) X_n + X_n ((r + b + n) − X_n))/(r + b + n)

                     = (r + b + n + 1) X_n/(r + b + n) = (r + b + n + 1) R_n a.s. for all n,

    so that E[R_{n+1} | A_n] = R_n a.s. for all n.
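A simulation sketch of the urn: by mean-stationarity, E[R_n] should stay at R_0 = r/(b + r) for every n (the counts b = 3, r = 2 and the 100 draws are illustrative choices):

```python
import random

random.seed(5)

def polya_proportion(b, r, n_steps):
    """Run a Polya urn for n_steps draws; return the final proportion of red."""
    blue, red = b, r
    for _ in range(n_steps):
        if random.random() < red / (blue + red):
            red += 1
        else:
            blue += 1
    return red / (blue + red)

props = [polya_proportion(3, 2, 100) for _ in range(20000)]
mean_prop = sum(props) / len(props)
print(mean_prop)  # close to R_0 = 2/5
```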



    Outline of the course

    1. A short introduction.

    2. Basic probability review.

    3. Martingales.

    3.1. Definitions and examples.

    3.2. Stopping times and the optional stopping theorem.

    3.3. Sub- and super- martingales, Limiting results.

    4. Markov chains.

    5. Markov processes, Poisson processes.

    6. Brownian motions.



    Stopping times

Let (Y_n) be a martingale w.r.t. (A_n).
    Let T : (Ω, A, P) → N̄ := N ∪ {∞} be a r.v.

    Definition: T is a stopping time w.r.t. (A_n) ⇔

    (i) T is a.s. finite (i.e., P[T < ∞] = 1).

    (ii) [T = n] ∈ A_n for all n.

    Remarks:

     (ii) is the crucial assumption:

it says that one knows, at time n, on the basis of the "information" A_n, whether T = n or not, that is, whether one should stop at n or not.

      (i) just makes (almost) sure that one will stop at some point.



    Stopping times (examples)

    (Kind of) examples...

    Let (Y n ) be a martingale w.r.t.  (An ). Let B  ∈ B .

(A) Let T := inf{n ∈ N | Y_n ∈ B} be the time of first entry of (Y_n) into B. Then,

        [T = n] = [Y_0 ∉ B, Y_1 ∉ B, ..., Y_{n−1} ∉ B, Y_n ∈ B] ∈ A_n.

    Hence, provided that T   is a.s. finite,  T  is a ST.

(B) Let T := sup{n ∈ N | Y_n ∈ B} be the time of last escape of (Y_n) out of B. Then,

        [T = n] = [Y_n ∈ B, Y_{n+1} ∉ B, Y_{n+2} ∉ B, ...] ∉ A_n.

    Hence, T  is not a ST.



    Stopping times (examples)

(C) Let T := k a.s. (for some fixed integer k). Then, of course, (i) T < ∞ a.s. and (ii)

        [T = n] = ∅ if n ≠ k,   Ω if n = k,

    which is in A_n for all n. Hence, T is a ST.



    Stopping times (properties)

    Properties:

  [T = n] ∈ A_n ∀n   (1)

    ⇔ [T ≤ n] ∈ A_n ∀n   (2)

    ⇔ [T > n] ∈ A_n ∀n.   (3)

    Indeed,
    (1)⇒(2) follows from [T ≤ n] = ∪_{k≤n} [T = k].
    (2)⇒(1) follows from [T = n] = [T ≤ n] \ [T ≤ n − 1].
    (2)⇔(3) follows from [T ≤ n] = Ω \ [T > n].

  T_1, T_2 are ST ⇒ T_1 + T_2, max(T_1, T_2), and min(T_1, T_2) are ST (exercise).

  Let (Y_n) be a martingale w.r.t. (A_n). Let T be a ST w.r.t. (A_n). Then Y_T := Σ_{n=0}^∞ Y_n I[T = n] is a r.v.

    Indeed, [Y_T ∈ B] = ∪_{n=0}^∞ {[T = n] ∩ [Y_n ∈ B]} ∈ A.



    Stopped martingale

    A key lemma:

Lemma: Let (Y_n) be a martingale w.r.t. (A_n). Let T be a ST w.r.t. (A_n). Then {Z_n := Y_{min(n,T)}} is a martingale w.r.t. (A_n).

Proof: note that

        Z_n = Y_{min(n,T)} = Σ_{k=0}^{n−1} Y_k I[T = k] + Y_n I[T ≥ n].

    So

      (i): Z_n is A_n-measurable for all n.

      (ii): |Z_n| ≤ Σ_{k=0}^{n−1} |Y_k| I[T = k] + |Y_n| I[T ≥ n] ≤ Σ_{k=0}^n |Y_k|. Hence, E[|Z_n|] < ∞.



    Stopped martingale

  (iii): we have

        E[Z_{n+1} | A_n] − Z_n = E[Z_{n+1} − Z_n | A_n]
                               = E[(Y_{n+1} − Y_n) I[T ≥ n+1] | A_n]
                               = E[(Y_{n+1} − Y_n) I[T > n] | A_n]
                               = I[T > n] E[Y_{n+1} − Y_n | A_n]
                               = I[T > n] (E[Y_{n+1} | A_n] − Y_n)
                               = 0 a.s. for all n,

    where we used that I[T > n] is A_n-measurable.



Stopped martingale

Corollary: Let (Y_n) be a martingale w.r.t. (A_n). Let T be a ST w.r.t. (A_n). Then E[Y_{min(n,T)}] = E[Y_0] for all n.

    Proof: the lemma and the mean-stationarity of martingales yield

        E[Y_{min(n,T)}] = E[Z_n] = E[Z_0] = E[Y_{min(0,T)}] = E[Y_0] for all n.

    In particular, if the ST is such that T ≤ k a.s. for some k, we have that, for n ≥ k,

        Y_{min(n,T)} = Y_T a.s.,

    so that

        E[Y_T] = E[Y_0].
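A sketch of the corollary for a symmetric random walk stopped at T = min(first time |Y_n| hits a barrier, cap): T is bounded by construction, so E[Y_T] = E[Y_0] = 0 (the barrier 5 and the cap 50 are illustrative choices):

```python
import random

random.seed(9)

def stopped_value(bound, cap):
    """Symmetric RW stopped at T = min(first time |Y_n| = bound, cap).
    T <= cap a.s., so the corollary gives E[Y_T] = E[Y_0] = 0."""
    y = 0
    for _ in range(cap):
        if abs(y) == bound:
            break
        y += random.choice((-1, 1))
    return y

vals = [stopped_value(5, 50) for _ in range(40000)]
mean_stopped = sum(vals) / len(vals)
print(mean_stopped)  # close to 0
```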



Stopped martingale

    E[Y T ] =  E[Y 0] does not always hold.

Example: the doubling strategy, for which the winnings are

        Y_n = Σ_{i=1}^n C_i X_i,

    where the X_i's are i.i.d. with P[X_i = 1] = P[X_i = −1] = 1/2 and C_i = 2^{i−1} b.

    The SP (Y_n) is a martingale w.r.t. (X_n) (exercise).
    Let T = inf{n ∈ N | X_n = 1} (exercise: T is a ST).

    As we have seen, Y_T = b a.s., so that E[Y_T] = b ≠ 0 = E[Y_0].

    However, as shown by the following result, E[Y_T] = E[Y_0] holds under much broader conditions than "T ≤ k a.s."



Optional stopping theorem (examples)

    Consider again the symmetric RW with absorbing barriers 0 and m of Example 4, started at Y_0 = k, with T := inf{n ∈ N | Y_n ∈ {0, m}}; the optional stopping theorem applies and gives E[Y_T] = E[Y_0].

    Let p_k := P[Y_T = 0].

    Then

    E[Y T ] = 0× p k  + m × (1− p k )

    and

    E[Y 0] =  E[k ] = k ,

    so that  E[Y T ] =  E[Y 0] yields

    m (1− p k ) = k ,

    that is, solving for p k ,

    p_k = (m − k)/m.
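A simulation sketch of the symmetric case (k = 3, m = 10 chosen for illustration): the frequency of absorption at 0 should match p_k = (m − k)/m = 0.7:

```python
import random

random.seed(13)

def absorbed_at_zero(k, m):
    """Symmetric RW with absorbing barriers 0 and m, started at k.
    True iff absorption happens at 0."""
    y = k
    while y not in (0, m):
        y += random.choice((-1, 1))
    return y == 0

k, m = 3, 10
hits = [absorbed_at_zero(k, m) for _ in range(20000)]
pk_hat = sum(hits) / len(hits)
print(pk_hat)  # close to (m - k)/m = 0.7
```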



Optional stopping theorem (examples)

Let X_1, X_2, ... be i.i.d., with P[X_i = 1] = p and P[X_i = −1] = q = 1 − p.
    Let Y_0 := k ∈ {1, 2, ..., m − 1} be the initial state.
    Let Y_{n+1} := (Y_n + X_{n+1}) I[Y_n ∉ {0, m}] + Y_n I[Y_n ∈ {0, m}].

      The SP (Y n ) is called a random walk with absorbing barriers.

In the non-symmetric case, S_n := (q/p)^{Y_n} is a martingale w.r.t. (X_n). Let T := inf{n ∈ N | Y_n ∈ {0, m}} (exercise: T is a stopping time, and the assumptions of the optional stopping thm are satisfied). ⇒ E[S_T] = E[S_0].

    Optional stopping theorem (examples)


Let again p_k := P[Y_T = 0]. Then, since

        E[S_T] = E[(q/p)^{Y_T}] = (q/p)^0 × p_k + (q/p)^m × (1 − p_k)

    and

        E[S_0] = E[(q/p)^{Y_0}] = E[(q/p)^k] = (q/p)^k,

    we deduce that

        (q/p)^0 × p_k + (q/p)^m × (1 − p_k) = (q/p)^k,

    that is, solving for p_k,

        p_k = ((q/p)^k − (q/p)^m)/(1 − (q/p)^m).

    Optional stopping theorem (examples)


    Is there a way to get  E[T ] here as well?

In the non-symmetric case, (R_n := Y_n − min(n, T)(p − q)) is a martingale w.r.t. (X_n) (exercise: check this, and check that the optional stopping thm applies with (R_n) and T).

  ⇒ E[R_T] = E[R_0], where

        E[R_T] = E[Y_T − T(p − q)] = E[Y_T] − (p − q) E[T] = 0 × p_k + m × (1 − p_k) − (p − q) E[T]

    and

        E[R_0] = E[Y_0 − min(0, T)(p − q)] = E[Y_0] = E[k] = k.

    Hence,

        E[T] = (m(1 − p_k) − k)/(p − q) = (m(1 − (q/p)^k) − k(1 − (q/p)^m)) / ((p − q)(1 − (q/p)^m)).
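Both formulas can be checked by simulation (a sketch; the values p = 0.55, k = 3, m = 10 are illustrative choices):

```python
import random

random.seed(17)

def ruin_run(k, m, p):
    """Non-symmetric RW with absorbing barriers 0 and m, started at k.
    Returns (absorbed_at_zero, absorption_time)."""
    y, t = k, 0
    while y not in (0, m):
        y += 1 if random.random() < p else -1
        t += 1
    return y == 0, t

k, m, p = 3, 10, 0.55
q = 1 - p
r = q / p
pk_theory = (r**k - r**m) / (1 - r**m)
et_theory = (m * (1 - pk_theory) - k) / (p - q)

runs = [ruin_run(k, m, p) for _ in range(20000)]
pk_hat = sum(z for z, _ in runs) / len(runs)
et_hat = sum(t for _, t in runs) / len(runs)
print(pk_hat, pk_theory)  # both near 0.48
print(et_hat, et_theory)  # both near 22
```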

    Outline of the course


    1. A short introduction.

    2. Basic probability review.

    3. Martingales.

3.1. Definitions and examples.
    3.2. Stopping times and the optional stopping theorem.

    3.3. Sub- and super- martingales, Limiting results.

    4. Markov chains.

    5. Markov processes, Poisson processes.

    6. Brownian motions.

    Sub- and super-martingales


Not every game is fair...

    There are also favourable and unfavourable games.

    Therefore, we introduce the following concepts:

The SP (Y_n)_{n∈N} is a submartingale w.r.t. the filtration (A_n)_{n∈N} ⇔

      (i) (Y_n)_{n∈N} is adapted to the filtration (A_n)_{n∈N}.

      (ii)' E[Y_n^+] < ∞ for all n.

      (iii)' E[Y_{n+1} | A_n] ≥ Y_n a.s. for all n.

    The SP (Y_n)_{n∈N} is a supermartingale w.r.t. the filtration (A_n)_{n∈N} ⇔

      (i) (Y_n)_{n∈N} is adapted to the filtration (A_n)_{n∈N}.

      (ii)'' E[Y_n^−] < ∞ for all n.

      (iii)'' E[Y_{n+1} | A_n] ≤ Y_n a.s. for all n.

    Sub- and super-martingales


    Remarks:

 (iii)' shows that a submartingale can be thought of as the fortune of a gambler betting on a favourable game.

      (iii)' ⇒ E[Y_n] ≥ E[Y_0] for all n.

     (iii)'' shows that a supermartingale can be thought of as the fortune of a gambler betting on an unfavourable game.

      (iii)'' ⇒ E[Y_n] ≤ E[Y_0] for all n.

  (Y_n) is a submartingale w.r.t. (A_n) ⇔ (−Y_n) is a supermartingale w.r.t. (A_n).

      (Y_n) is a martingale w.r.t. (A_n) ⇔ (Y_n) is both a sub- and a supermartingale w.r.t. (A_n).

    Sub- and super-martingales


    Consider the following strategy for the fair version of roulette

    (without the "0" slot):

    Bet b  euros on an even result. If you win, stop.

    If you lose, bet 2b  euros on an even result. If you win, stop.

    If you lose, bet 4b  euros on an even result. If you win, stop...

    And so on...

    How good is this strategy?

(a) If you first win in the n-th game, your total winning is

        − Σ_{i=0}^{n−2} 2^i b + 2^{n−1} b = b.

    ⇒ Whatever the value of n is, you win b euros with this strategy.

    Sub- and super-martingales


(b) You will a.s. win. Indeed, let T be the time index of first success. Then

        P[T < ∞] = Σ_{n=1}^∞ P[n − 1 first results are "odd", then "even"]
                 = Σ_{n=1}^∞ (1/2)^{n−1} × (1/2) = Σ_{n=1}^∞ (1/2)^n = 1.

    But

(c) The expected amount you lose just before you win is

        0 × (1/2) + b × (1/2)² + (b + 2b) × (1/2)³ + ... + (Σ_{i=0}^{n−2} 2^i b) × (1/2)^n + ... = ∞.

    ⇒ Your expected loss is infinite!

    (d) You need an unbounded wallet...
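A sketch of the doubling strategy in action (unit initial stake b = 1, and an unlimited wallet assumed, as point (d) warns): every run nets exactly b, while the loss carried just before the win is occasionally huge:

```python
import random

random.seed(21)

def doubling_strategy(b):
    """Play the doubling strategy once: stake b, 2b, 4b, ... until the first win.
    Returns (net_winnings, amount_lost_before_the_win)."""
    lost, stake = 0, b
    while random.random() < 0.5:  # lose this game with probability 1/2
        lost += stake
        stake *= 2
    return stake - lost, lost     # the winning game pays the current stake

results = [doubling_strategy(1) for _ in range(10000)]
print(all(net == 1 for net, _ in results))  # True: you always win exactly b
print(max(lost for _, lost in results))     # but some runs lose a lot first
```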

    Sub- and super-martingales


    Let us try to formalize strategies...

Consider the SP (X_n)_{n∈N}, where X_n is your winning per unit stake in game n. Denote by (A_n)_{n∈N} the corresponding filtration (A_n = σ(X_1, ..., X_n)).

Definition: A gambling strategy (w.r.t. (X_n)) is a SP (C_n)_{n∈N} such that C_n is A_{n−1}-measurable for all n.

    Remarks:

  C_n = C_n(X_1, ..., X_{n−1}) is what you will bet in game n.

      A_0 = {∅, Ω}.

    Sub- and super-martingales


Using some strategy (C_n), your total winning after n games is

        Y_n^{(C)} = Σ_{i=1}^n C_i X_i.

    A natural question:

Is there any way to choose (C_n) so that (Y_n^{(C)}) is "nice"?

Consider the "blind" strategy C_n = 1 (for all n), that consists in betting 1 euro in each game, and denote by (Y_n = Σ_{i=1}^n X_i) the corresponding process of winnings.

    Then, here is the answer:

Theorem: Let (C_n) be a gambling strategy with nonnegative and bounded r.v.'s. Then if (Y_n) is a martingale, so is (Y_n^{(C)}). If (Y_n) is a submart., so is (Y_n^{(C)}). And if (Y_n) is a supermart., so is (Y_n^{(C)}).

    Sub- and super-martingales


    Proof:

  (i) is trivial.

      (ii): |Y_n^{(C)}| ≤ Σ_{i=1}^n a_i |X_i| (with a_i a bound on C_i). Hence, E[|Y_n^{(C)}|] < ∞.

      (iii),(iii)',(iii)'': with A_n := σ(X_1, ..., X_n), we have

        E[Y_{n+1}^{(C)} | A_n] = E[Y_n^{(C)} + C_{n+1} X_{n+1} | A_n]
                               = Y_n^{(C)} + C_{n+1} E[X_{n+1} | A_n]
                               = Y_n^{(C)} + C_{n+1} E[Y_{n+1} − Y_n | A_n]
                               = Y_n^{(C)} + C_{n+1} (E[Y_{n+1} | A_n] − Y_n),

    where we used that C_{n+1} is A_n-measurable. Since C_{n+1} ≥ 0, the result follows.

    Remark: The second part was checked for martingales only.

    Exercise: check (ii)’ and (ii)”...

    Convergence of martingales


Theorem: let (Y_n) be a submartingale w.r.t. (A_n). Assume that, for some M, E[Y_n^+] ≤ M for all n. Then

    (i) ∃Y_∞ such that Y_n → Y_∞ a.s. as n → ∞.
    (ii) If E[|Y_0|] < ∞, E[|Y_∞|] < ∞.

    The following results directly follow:

Corollary 1: let (Y_n) be a submartingale or a supermartingale w.r.t. (A_n). Assume that, for some M, E[|Y_n|] ≤ M for all n. Then ∃Y_∞ (satisfying E[|Y_∞|] < ∞) such that Y_n → Y_∞ a.s. as n → ∞.

Corollary 2: let (Y_n) be a negative submartingale or a positive supermartingale w.r.t. (A_n). Then ∃Y_∞ such that Y_n → Y_∞ a.s. as n → ∞.

    Example: products of r.v.’s


Let X_1, X_2, ... be i.i.d. r.v.'s, with common distribution

        values of X_i:    0    2
        probabilities:   1/2  1/2

    The X_i's are integrable r.v.'s with common mean 1, so that (Y_n = Π_{i=1}^n X_i) is a (positive) martingale w.r.t. (X_n) (example 2 in the previous lecture).

Consequently, ∃Y_∞ such that Y_n → Y_∞ a.s. as n → ∞.

    We showed, in Lecture #4, that Y_n → 0 in probability as n → ∞, so that Y_∞ = 0 a.s.

    But we also showed there that convergence in L1 does not

    hold. To ensure L1-convergence, one has to require uniform

    integrability of (Y n ).
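A simulation sketch of this example (the horizon n = 30 and the sample size are arbitrary choices): almost every path hits a zero factor, so the sample is essentially all zeros, even though E[Y_n] = 1 for every n, which is exactly the failure of L1 convergence:

```python
import random

random.seed(2)

def product_path(n_steps):
    """Y_n = product of i.i.d. factors equal to 0 or 2, each with prob. 1/2."""
    y = 1
    for _ in range(n_steps):
        y *= random.choice((0, 2))
    return y

sims = [product_path(30) for _ in range(50000)]
frac_zero = sum(v == 0 for v in sims) / len(sims)
print(frac_zero)  # close to 1: Y_n -> 0 a.s., yet E[Y_n] = 1 (mass on a rare 2**30)
```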

    Example: Polya’s urn


    Consider an urn containing b  blue balls and r  red ones.

    Pick randomly some ball in the urn and put it back in the urn

    with an extra ball of the same color. Repeat this procedure.

    Let X n  be the number of red balls in the urn after n  steps.

Let R_n = X_n/(b + r + n) be the proportion of red balls after n steps.

    We know that (R_n) is a martingale w.r.t. (X_n).

    Now, |R_n| ≤ 1 (⇒ E[|R_n|] ≤ 1), so that ∃R_∞ (satisfying E[|R_∞|] < ∞) such that R_n → R_∞ a.s. as n → ∞.

    Clearly, uniform integrability holds. Hence, R_n → R_∞ in L1 as n → ∞.

Remark: it can be shown that R_∞ has a beta distribution (with parameters r and b):

        P[R_∞ ≤ u] = (1/B(r, b)) ∫_0^u x^{r−1} (1 − x)^{b−1} dx,   u ∈ (0, 1),

    where B denotes the beta function.

    Example: branching processes


Consider some population, in which each individual i of the Z_n individuals in the n-th generation gives birth to X_{n,i} children (the X_{n,i}'s are i.i.d., take values in N, and have common mean µ < ∞).

    Assume that Z_0 = 1.

    Then (Y_n := Z_n/µ^n) is a martingale w.r.t. (A_n := σ(the X_{m,i}'s, m ≤ n)).

    What can be said about the long-run state?

E[|Y_n|] = E[Y_0] = 1 for all n ⇒ ∃Y_∞ (satisfying E[|Y_∞|] < ∞) such that Y_n → Y_∞ a.s. as n → ∞.

      Z_n ≈ Y_∞ µ^n for large n.

    Example: branching processes


Case 1: µ ≤ 1.

    If p_0 := P[X_{n,i} = 0] > 0, Z_n → Z_∞ := 0 a.s. as n → ∞.

    Proof:
    Let k ∈ N_0. Then

        P[Z_∞ = k] = P[Z_n = k ∀n ≥ n_0] = P[Z_{n_0} = k] × (P[Z_{n_0+1} = k | Z_{n_0} = k])^∞ = 0,

    since

        P[Z_{n_0+1} = k | Z_{n_0} = k] ≤ P[Z_{n_0+1} ≠ 0 | Z_{n_0} = k] = 1 − (p_0)^k < 1.


Case 2: µ > 1.

    Then Z_n → Z_∞ a.s. as n → ∞, where the distribution of Z_∞ is given by

        values of Z_∞:    0     ∞
        probabilities:    π   1 − π

    where π ∈ (0, 1) is the unique solution of g(s) = s, where, defining p_k := P[X_{n,i} = k], we let g(s) := Σ_{k=0}^∞ p_k s^k.

    Proof:
    Clearly, since Z_n ≈ Y_∞ µ^n for large n, Z_n → Z_∞ a.s. as n → ∞, where Z_∞ only assumes the values 0 and ∞, with proba a and 1 − a, respectively, say.

    We may assume that p_0 > 0 (clearly, if p_0 = 0, Z_∞ = ∞ a.s.).

    Example: branching processes


There is indeed a unique π ∈ (0, 1) such that g(π) = π, since g is monotone increasing, g(0) = p_0 > 0, and g′(1) = µ > 1.

      (π^{Z_n}) is a martingale w.r.t. (A_n), since

        E[π^{Z_{n+1}} | A_n] = E[π^{Σ_{i=1}^{Z_n} X_{n+1,i}} | A_n] = (E[π^{X_{n+1,1}} | A_n])^{Z_n}
                             = (E[π^{X_{n+1,1}}])^{Z_n} = (Σ_{k=0}^∞ π^k p_k)^{Z_n} = (g(π))^{Z_n} = π^{Z_n}

    a.s. for all n.

      (π^{Z_n}) is also uniformly integrable, so that π^{Z_n} → π^{Z_∞} in L1 as n → ∞.

      Therefore, E[π^{Z_∞}] = π^0 × a + π^∞ × (1 − a) = a is equal to E[π^{Z_0}] = E[π^1] = π.

    Hence, P[Z_∞ = 0] = π and P[Z_∞ = ∞] = 1 − π.
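The extinction probability π can be computed by iterating the pgf from 0, which increases monotonically to the smallest fixed point of g. A sketch with an illustrative offspring pmf p_0 = 1/4, p_1 = 1/4, p_2 = 1/2 (mean µ = 1.25 > 1), for which g(s) = s solves exactly at π = 1/2:

```python
def extinction_prob(pmf, tol=1e-12):
    """Smallest fixed point of the pgf g(s) = sum_k p_k s**k, obtained by
    iterating s <- g(s) from s = 0 (the iterates increase to pi)."""
    def g(s):
        return sum(p * s**k for k, p in enumerate(pmf))
    s = 0.0
    while abs(g(s) - s) > tol:
        s = g(s)
    return s

# Illustrative offspring pmf: p_0 = 1/4, p_1 = 1/4, p_2 = 1/2, so mu = 1.25 > 1
# and g(s) = s has roots s = 1/2 and s = 1; the extinction probability is 1/2.
pi = extinction_prob([0.25, 0.25, 0.5])
print(pi)  # ~0.5
```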