TRANSITION PROBABILITIES BASED ON KOLMOGOROV EQUATIONS FOR PURE BIRTH PROCESSES
By Mutothya Nicholas Mwilu
Reg.No: I56/70030/2011
Dissertation Submitted to the School of Mathematics of the University of Nairobi in
Partial Fulfillment of the Requirements for the Degree of Master of Science in the School
of Mathematics
July 2013
DECLARATION
I declare that this is my original work and it has not been presented for the award of a degree in any other university.
Signature ………………………… Date ………………..
Mutothya Nicholas Mwilu
This thesis is submitted for examination with my approval as the university
supervisor
Signature ………………………………… Date……………………………..
Prof J.A.M Ottieno
EXECUTIVE SUMMARY
The aim of this thesis is to identify the probability distributions that emerge from solving the Kolmogorov forward differential equation for the continuous time non-homogeneous pure birth process. This equation is
$$\frac{\partial}{\partial t}p_{k,k+n}(s,t) + \lambda_{k+n}(t)\,p_{k,k+n}(s,t) = \lambda_{k+n-1}(t)\,p_{k,k+n-1}(s,t).$$
The equation has been solved for four different processes: the Poisson process ($\lambda_n = \lambda$), the simple birth process ($\lambda_n = n\lambda$), the simple birth process with immigration ($\lambda_n = n\lambda + v$) and the Polya process ($\lambda_n(t) = \lambda\,\frac{1+an}{1+\lambda a t}$).
Three different methods have been applied in solving the forward Kolmogorov equation above, with the initial conditions $p_{kk}(s,s) = 1$ and $p_{k,k+n}(s,s) = 0$ for $n > 0$.
These methods are:
(1) The integrating factor technique
(2) Lagrange's method
(3) The generator matrix technique.
The results from the three methods were similar.
In addition, the first passage time distribution arising from the solution of the basic difference-differential equations has also been derived for each of the four processes.
From the Poisson process, we found the distribution of the increments to be a Poisson distribution with parameter $\lambda(t-s)$:
$$p_{k,k+n}(s,t) = e^{-\lambda(t-s)}\,\frac{[\lambda(t-s)]^n}{n!}; \qquad n = 0, 1, 2, \ldots$$
This is independent of the initial state and depends only on the length of the time interval; thus for a Poisson process the increments are independent and stationary. The first passage distribution was
$$f_n(t) = \frac{e^{-\lambda t}\,\lambda^n\, t^{n-1}}{(n-1)!}; \qquad t > 0;\ n = 1, 2, 3, \ldots$$
which is gamma$(n, \lambda)$.
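These two Poisson-process results can be checked empirically. The sketch below is only an illustration (the rate $\lambda$, target state $n$ and sample sizes are assumed values, not taken from the thesis): it simulates the first passage time to state $n$ as a sum of $n$ independent exponential holding times and compares its mean and variance with those of the gamma$(n,\lambda)$ distribution.

```python
import random
from math import exp, factorial

random.seed(42)
lam = 2.0        # birth rate λ (illustrative value)
n = 5            # target state
trials = 20000

# First passage time to state n: sum of n independent Exp(λ) holding times.
samples = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((x - mean) ** 2 for x in samples) / trials

# Gamma(n, λ) has mean n/λ and variance n/λ².
print(round(mean, 2), round(var, 2))   # should be close to 2.5 and 1.25

# Increment over an interval of length t - s = 1: Poisson(λ) pmf at 0..3
t = 1.0
for k in range(4):
    print(round(exp(-lam * t) * (lam * t) ** k / factorial(k), 4))
```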
From the simple birth process, we found the distribution of the increments to be a negative binomial distribution with $p = e^{-\lambda(t-s)}$ and $q = 1 - e^{-\lambda(t-s)}$:
$$p_{k,k+n}(s,t) = \binom{k+n-1}{n}\, e^{-\lambda k(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^{n}.$$
It depends on the length of the time interval $t-s$ and on $k$; the increments are thus stationary but not independent. The first passage distribution is
$$f_n(t) = \lambda\,(n-1)\, e^{-\lambda t}\left(1 - e^{-\lambda t}\right)^{n-2}; \qquad t > 0;\ n = 2, 3, \ldots$$
which is an exponentiated exponential distribution with parameters $\lambda$ and $n-1$.
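The exponentiated exponential form can be checked by simulation, using the standard representation of the passage time from state 1 to state $n$ as a sum of independent $\mathrm{Exp}(j\lambda)$ holding times, $j = 1, \ldots, n-1$, whose CDF is $(1-e^{-\lambda t})^{n-1}$. Parameter values below are illustrative, not from the thesis.

```python
import random
from math import exp

random.seed(1)
lam = 1.0   # per-individual birth rate λ (illustrative)
n = 4       # target population size, starting from state 1
trials = 20000

# Time to reach n from 1: sum of Exp(jλ) holding times, j = 1..n-1.
samples = [sum(random.expovariate(j * lam) for j in range(1, n)) for _ in range(trials)]

# Exponentiated exponential CDF F(t) = (1 - e^{-λt})^{n-1}
t0 = 1.5
empirical = sum(1 for x in samples if x <= t0) / trials
theoretical = (1 - exp(-lam * t0)) ** (n - 1)
print(round(empirical, 3), round(theoretical, 3))   # the two should agree closely
```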
From the simple birth process with immigration, we found the distribution of the increments to be a negative binomial distribution with $p = e^{-\lambda(t-s)}$ and $q = 1 - e^{-\lambda(t-s)}$:
$$p_{k,k+n}(s,t) = \binom{k + \frac{v}{\lambda} + n - 1}{n}\, e^{-\left(k + \frac{v}{\lambda}\right)\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^{n}.$$
It depends on the length of the time interval $t-s$ and on $k$; the increments are thus stationary but not independent.
The first passage distribution, for a process starting in state $n_0$, is
$$f_n(t) = \left[(n-1)\lambda + v\right]\binom{n + \frac{v}{\lambda} - 2}{\,n - n_0 - 1\,}\, e^{-(n_0\lambda + v)t}\left(1 - e^{-\lambda t}\right)^{n - n_0 - 1}; \qquad t > 0;\ n = n_0 + 1, n_0 + 2, \ldots
$$
From the Polya process, we found the distribution of the increments to be a negative binomial distribution with
$$p = \frac{1+\lambda a s}{1+\lambda a t} \qquad \text{and} \qquad q = 1 - \frac{1+\lambda a s}{1+\lambda a t} = \frac{\lambda a (t-s)}{1+\lambda a t}:$$
$$p_{k,k+j}(s,t) = \binom{j + k + \frac{1}{a} - 1}{j}\left(\frac{1+\lambda a s}{1+\lambda a t}\right)^{k+\frac{1}{a}}\left(\frac{\lambda a (t-s)}{1+\lambda a t}\right)^{j}.$$
It depends on the length of the time interval $t-s$ and on $k$; the increments are thus stationary but not independent. The first passage distribution is
$$f_n(t) = \frac{\Gamma\!\left(n+\frac{1}{a}\right)}{\Gamma\!\left(\frac{1}{a}\right)\,(n-1)!}\left(\frac{1}{\lambda a}\right)^{\frac{1}{a}} \frac{t^{n-1}}{\left(t+\frac{1}{\lambda a}\right)^{n+\frac{1}{a}}}; \qquad t > 0;\ n = 1, 2, \ldots$$
which is a generalized Pareto distribution.
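A well-known route to these negative binomial probabilities is the representation of the Polya process as a mixed Poisson process with a gamma-distributed random rate; this representation is a standard fact assumed here, not derived in this summary, and the parameter values below are purely illustrative. The sketch checks the mean $\lambda t$ and the zero-increment probability $(1+\lambda a t)^{-1/a}$ implied by the negative binomial form above.

```python
import random
from math import exp

random.seed(7)
lam, a, t = 1.0, 0.5, 2.0   # illustrative parameter values
trials = 20000

def poisson(mu):
    # Knuth's method: count uniforms until their product drops below e^{-mu}
    L, k, p = exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Polya process as a gamma-mixed Poisson:
#   Λ ~ Gamma(shape 1/a, scale λa), then N_t | Λ ~ Poisson(Λ t)
samples = [poisson(random.gammavariate(1.0 / a, lam * a) * t) for _ in range(trials)]

mean = sum(samples) / trials    # should be ≈ λt = 2
p0 = samples.count(0) / trials  # should be ≈ (1/(1+λat))^{1/a} = 0.25
print(round(mean, 2), round(p0, 3))
```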
ACKNOWLEDGEMENT
I am grateful to Professor J.A.M. Otieno for serving as my adviser during this research. He provided a never-ending source of ideas and direction.
I have received much support from friends and colleagues through the years, which has enabled me to reach this point. Special thanks must go to Mr. Karanja, with whom I have worked for the past one year. His contribution and encouragement are highly appreciated.
Finally, I must acknowledge the love and understanding extended to me by my family, particularly my wife Christine and my daughters Sasha and Melisa; they were always there to reassure me.
TABLE OF CONTENTS
Declaration .......................................................................................................... ii
Executive Summary .......................................................................................... iii
CHAPTER ONE ............................................................................................. 1
GENERAL INTRODUCTION ...................................................................... 1
1.1 Background Information ............................................................................... 1
1.2 Stochastic Processes ...................................................................................... 2
1.3 Pure Birth Processes ...................................................................................... 3
1.4 Literature Review .......................................................................................... 3
1.5 Problem Statement And Objectives Of The Study ....................................... 5
1.6 Areas Of Application ..................................................................................... 6
CHAPTER TWO ............................................................................................ 7
TRANSITION PROBABILITIES FOR A GENERAL PURE BIRTH
PROCESS ....................................................................................................... 7
2.1 Introduction ................................................................................................... 7
2.2 Chapman-Kolmogorov Equations ................................................................. 8
2.2.1 Time Homogeneous Markov Process ............................................................... 9
2.2.2 Time Non-Homogeneous Markov Process ..................................................... 10
2.3 Forward And Backward Kolmogorov Differential Equations .................... 11
2.3.1 Forward Kolmogorov Differential Equation .................................................. 11
2.3.2 Backward Kolmogorov Differential Equations .............................................. 12
2.4 Methods Of Solving Continuous Time Non-Homogenous Forward
Kolmogorov Differential Equations .................................................................. 13
2.4.1 Integrating Factor Technique .......................................................................... 13
2.4.2 Lagrange’s Method ......................................................................................... 14
2.4.3 Laplace Transforms ........................................................................................ 15
2.4.4 Generator Matrix Method ............................................................................... 17
CHAPTER THREE ...................................................................................... 18
TRANSITIONAL PROBABILITIES BASED ON THE INTEGRATING
FACTOR TECHNIQUE .............................................................................. 18
3.1 Introduction ................................................................................................. 18
3.2 Determining $p_{kk}(s,t)$ ................................................................... 18
3.3 Special Cases Of Non-Homogeneous Markov Processes. .......................... 19
3.3.1 Poisson Process .............................................................................................. 19
3.3.2 Simple Birth Process ....................................................................................... 23
3.3.3 Simple Birth Process With Immigration ........................................................ 30
3.3.4 Polya Process ................................................................................................. 36
CHAPTER FOUR ........................................................................................ 44
Probability Distributions For Transitional Probabilities Using Lagrange's
Method .......................................................................................................... 44
4.1 Introduction ................................................................................................. 44
4.2 General Definitions ..................................................................................... 44
4.3 Poisson Process ........................................................................................... 45
4.4 Simple Birth................................................................................................. 47
4.5 Simple Birth With Immigration ................................................................. 51
4.6 Polya Process .............................................................................................. 56
CHAPTER FIVE .......................................................................................... 61
INFINITESIMAL GENERATOR ............................................................... 61
5.1 Introduction ................................................................................................. 61
5.2 Solving Matrix Equation ............................................................................. 62
5.3 Simple Birth Process .................................................................................. 72
5.4 Simple Birth With Immigration .................................................................. 74
5.5 Polya Process ............................................................................................... 77
CHAPTER SIX ............................................................................................ 80
FIRST PASSAGE DISTRIBUTIONS FOR PURE BIRTH PROCESSES 80
6.1 Derivation Of $F_n(t)$ From $p_n(t)$ ................................................... 80
6.2 Poisson Process .......................................................................... 81
6.2.1 Determining $F_n(t)$ For The Poisson Process ......................................... 81
6.2.2 Mean And Variance Of $f_n(t)$ For The Poisson Process ................................ 82
6.3 Simple Birth Process ................................................................... 84
6.3.1 Determining $f_n(t)$ For The Simple Birth Process .......................... 84
6.3.2 Mean And Variance Of $f_n(t)$ For The Simple Birth Process ......................... 86
6.4 Pure Birth Process With Immigration ........................................ 92
6.4.1 Determining $F_n(t)$ For Pure Birth Process With Immigration ..................... 92
6.4.2 Mean And Variance ........................................................................ 95
6.5 Polya Process .............................................................................. 96
6.5.1 Determining $f_n(t)$ For The Polya Process .................................... 96
6.5.2 Mean And Variance Of $f_n(t)$ For The Polya Process ................................. 99
CHAPTER SEVEN .................................................................................... 101
SUMMARY AND CONCLUSION ........................................................... 101
7.1 Summary .................................................................................................... 101
7.1.1 General Kolmogorov Forward And Backward Equations .......................... 101
7.1.2 Poisson Process ............................................................................................. 101
7.1.3 Simple Birth Process ..................................................................................... 102
7.1.4 Simple Birth With Immigration .................................................................... 102
7.1.5 Polya Process ................................................................................................ 102
7.2 Conclusion ................................................................................................. 103
7.3 Recommendation For Further Research .................................................... 103
7.4 References ................................................................................................. 104
CHAPTER ONE
GENERAL INTRODUCTION
1.1 Background Information
Consider a case of infectious disease transmission. When an infected person makes contact with a susceptible individual, there is potential transmission. A susceptible person makes the potential transition to the infected state through contact with infected persons. A newly infected person then makes further transitions to other infected states, with the possibility of removal. The removal stage represents death caused by the disease or eventual recovery. Those individuals who have recovered re-enter the susceptible stage. The average number of people that an infected person infects during his or her infectious period is the reproductive number, and the mean number of affected individuals is the outbreak size. These transitions, for both susceptible and infected persons, evolve over time. Such a complex process can be described mathematically through what are called continuous time stochastic processes.
Banks loan money to customers on trust that they will pay it back. The financial circumstances of a customer servicing a loan may change within the loan period. If the financial status improves or remains stable, the customer repays the loan without default and the loan status is said to be always current. When a customer makes a full and final settlement before the end of the loan period, attrition is said to have occurred. If the customer is financially constrained, he defaults on the loan. A customer who has defaulted makes further transitions to higher default states, namely early delinquency, mild delinquency and hard delinquency, with the possibility of repaying the loan and falling back to current or a lower delinquency status. When the customer defaults for a very long period, the loan is considered irrecoverable, the bank writes it off as bad debt, and the customer is moved from the loan book to the written-off book. In addition, some customers die before clearing the loan. This process can be modeled using a continuous time non-homogeneous Markov process with discrete states.
When pollen grains are suspended in water they move erratically. This erratic movement of the pollen grains is thought to be due to bombardment by the water molecules that surround them. These bombardments occur many times in each small interval of time, they are independent of each other, and the impact of a single hit is very small compared to the total effect. This suggests that the motion of pollen grains can be studied as a continuous time stochastic process. The botanist Robert Brown was the first to observe it, in 1827, and the motion was subsequently named 'Brownian motion'.
1.2 Stochastic Processes
Definition and classifications
A stochastic process is a family of random variables $\{X(t);\ t \ge 0\}$ on a probability space, with $t$ ranging over a suitable parameter set $T$ ($t$ often represents time). The state space of the process is the set $S$ in which the possible values of each $X(t)$ lie. $X(t)$ can be either discrete or continuous. The set of values of $t$ is called the parameter space $T$; the parameter $t$ can also be either discrete or continuous. The parameter space is said to be discrete if the set $T$ is countable, and continuous otherwise.
Thus, we have the following four classifications of stochastic processes.
1 Discrete parameter and discrete state
2 Discrete parameter and continuous state
3 Continuous parameter and discrete state
4 Continuous parameter and continuous state
Further, stochastic processes are broadly described according to the nature of dependence
relationships existing among the members of the family. Some of the relationships are
characterized by
(i) Stationary process
A process $\{X(t),\ t \in T\}$ is said to be stationary if different observations on time intervals of the same length have the same distribution, i.e. for any $s, t \in T$, $X(t+s) - X(t)$ has the same distribution as $X(s) - X(0)$.
(ii) Markov Processes
A Markov process is a stochastic process whose dynamic behavior is such that the probability distribution of its future development depends only on its present state, and not on the past history of the process or the manner in which the present state was reached. This property is referred to as the Markov or memoryless property.
A Markov chain is a discrete time Markov process with a discrete state space. A Markov jump process is a continuous time Markov process with a discrete state space.
A Markov process is said to be time homogeneous if the behavior of the system does not depend on when it is observed. In a time homogeneous Markov process the transition rates between states are independent of the time at which the transition occurs. On the other hand, a Markov process is said to be time non-homogeneous if the behavior of the system depends on the time when it is observed.
1.3 Pure Birth Processes
A pure birth process is a continuous time, discrete state Markov process. Specifically, we deal with a family of random variables $\{X(t);\ 0 < t < \infty\}$ where the possible values of $X(t)$ are non-negative integers. $X(t)$ represents the population size at time $t$, and transitions are limited to births. When a birth occurs, the process goes from state $n$ to state $n+1$. If no birth occurs, the process remains in the current state. The process cannot move from a higher state to a lower state since there is no death. The birth process is characterized by the birth rate $\lambda_n$, which varies according to the state $n$ of the system.
The birth-death process is a continuous time Markov process where the states are the population size and transitions are restricted to births and deaths. When a birth occurs the process jumps to the immediately higher state; similarly, when a death occurs the process jumps to the immediately lower state. In this process births and deaths are assumed to be independent of each other. The process is characterized by a birth rate and a death rate which vary according to the state of the system. A pure birth process is a birth-death process with death rate equal to zero for all states.
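The description above translates directly into a simulation: in state $n$ the process waits an exponential holding time with rate $\lambda_n$ and then jumps to $n+1$. The following is a minimal sketch (function names and parameter values are illustrative assumptions, not from the thesis), demonstrated on the simple birth (Yule) process, for which $E[X(t)] = e^{\lambda t}$ when $X(0) = 1$.

```python
import random

random.seed(3)

def simulate_pure_birth(rate, n0, t_end):
    """Simulate one path of a pure birth process.
    rate(n) is the birth rate λ_n in state n; returns the state at time t_end."""
    t, n = 0.0, n0
    while True:
        r = rate(n)
        if r <= 0:
            return n                   # absorbing state: no further births
        t += random.expovariate(r)     # Exp(λ_n) holding time in state n
        if t > t_end:
            return n
        n += 1                         # a birth: n -> n + 1

# Yule process: λ_n = nλ; starting from 1, E[X(t)] = e^{λt}
lam, t_end, trials = 1.0, 1.0, 5000
mean = sum(simulate_pure_birth(lambda n: n * lam, 1, t_end) for _ in range(trials)) / trials
print(round(mean, 2))   # should be close to e ≈ 2.72
```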
1.4 Literature Review
The negative binomial distribution arises from several stochastic processes. The time-homogeneous birth and immigration process with zero initial population was first obtained by McKendrick (1914). The non-homogeneous process with zero initial population, known as the Polya process, was developed by Lundberg (1940) in the context of risk theory.
Other stochastic processes that lead to the negative binomial distribution include the simple birth process with non-zero initial population size (Yule, 1925; Furry, 1937).
Kendall (1948) considered the non-homogeneous birth-and-death process with zero death rate. He also worked on the simple birth-death-and-immigration process with zero initial population (Kendall, 1949).
A remarkable early derivation, as the solution of the simple birth-and-immigration process, was given by McKendrick (1914).
Kendall (1948) formed Lagrange's equation from the differential difference equations for the distribution of the population and, via the auxiliary equation, obtained a complete solution of the equations governing the generalized birth and death process, in which the birth and death rates may be any specified functions of time.
Karlin and McGregor (1958) expressed the transition probabilities of birth and death processes (BDPs) in terms of a sequence of orthogonal polynomials and spectral measures. The birth and death rates uniquely determine the measure on the real axis with respect to the sequence of orthogonal polynomials. Their work gave valuable insights into the existence of a unique solution for a given process.
Gani and Swift (2008) derived equations for the probability generating function of the Poisson, pure birth and death processes subject to mass-movement immigration and emigration. They considered mass immigration and emigration as positive and negative mass movements. The resulting probability generating functions turned out to be products of the probability generating functions of the original processes modified by the immigration process.
The simple birth process was originally introduced by Yule (1925) to model the evolution of new species and by Furry (1937) to model particle creation.
Devaraj and Fu (2008) developed a non-homogeneous Markov chain to describe the deterioration of bridge elements. They showed that the new model could better predict bridge element deterioration trends.
Wolter et al. (1998) presented and compared three different uniformization-based algorithms (the base algorithm, the correction algorithm and the interval splitting algorithm) for numerically computing the transient state distribution of a non-homogeneous model. They investigated the numerical solution of the transient state probability vector of a non-homogeneous Markov process by algorithms based on uniformization, following Grassmann (1991) and Jensen (1953). The three algorithms numerically solve the integral equations that constitute the uniformization equation as derived by van Dijk (1992).
1.5 Problem Statement and Objectives of the Study
Problem Statement
Most of the literature concerning pure birth processes concentrates on deriving and solving the basic difference-differential equations for time homogeneous processes, and rarely presents non-homogeneous birth processes. The few works that have presented non-homogeneous birth processes only sketch them in outline form or scatter the details across different sections. In the analysis, emphasis is often laid on steady state solutions, while transient or time dependent analysis has received less attention. The assumptions required to derive steady state solutions are seldom satisfied in the analysis of real systems. Some systems never approach equilibrium, and for such systems steady state measures of performance do not make sense. In addition, for systems with a finite time horizon of operation, steady state results are inappropriate. Generally, in real life situations, knowledge of the time dependent behavior of the system is needed rather than the more easily obtained steady state solution.
Objectives
The aim of this thesis is to derive and solve the differential equations for the continuous time non-homogeneous pure birth process by applying four alternative approaches: the integrating factor method; solving directly the Lagrange partial differential equation for the generating function by means of auxiliary equations; the Laplace transform method; and the generator matrix approach. It is hoped that this will demonstrate to statistics scholars the variety of mathematical tools that can be used to solve non-homogeneous pure birth processes. At the same time it is also hoped that students of mathematics will find in these processes interesting applications of standard mathematical methods.
1.6 Areas of application
Pure birth and death processes play a fundamental role in theory and in applications that involve population growth. Examples include the spread of new infections of a disease, where each new infection is considered a birth.
Pure birth and death processes have many applications in the following areas.
(a) Biological field
The transmission of infectious disease outlined in Section 1.1 is a typical example: a susceptible person becomes infected through contact with infected persons, a newly infected person makes further transitions to other infected states, and the removal stage represents death caused by the disease or eventual recovery. These transitions, for both susceptible and infected persons, evolve over time and can be described through continuous time stochastic processes.
(b) Radioactivity
Radioactive atoms are unstable and disintegrate stochastically. Each of the new atoms is
also unstable. By the emission of radioactive particles these new atoms pass through a
number of physical states with specified decay rates from one state to the adjacent. Thus
radioactive transformation can be modeled as a birth process.
(c) Communication
Suppose that calls arrive at a single channel telephone exchange such that the times between successive call arrivals are independent exponential random variables.
realized if the incoming call finds an idle channel. If the channel is busy, then the
incoming call joins the queue. When the caller is through, the next caller is connected.
Assuming that the successive service times are independent exponential variables, the
number of callers in the system at time t is described by a birth and death process.
CHAPTER TWO
TRANSITION PROBABILITIES FOR A GENERAL PURE BIRTH
PROCESS
2.1 Introduction
A stochastic process $\{N_t : t > 0\}$ is a collection of random variables, indexed by the variable $t$ (which often represents time).
A counting process is a stochastic process in which $N_t$ must be a non-negative integer and, for $t > s$, $N_t \ge N_s$.
Our interest is in the variable $N_t - N_s$. We refer to this variable as an increment over the interval $(s, t]$. Consider the states $j$ and $k$ of the process. The probability of moving from state $j$ to state $k$ is denoted by $p_{jk}$ and is called the transition probability. Further, $p_{jk} = \text{Prob}(N_t = k \mid N_s = j)$.
Definition 1
A stochastic process has stationary increments if the distribution of $N_t - N_s$ for $t \ge s$ depends only on the length of the interval.
Remark
One consequence of stationary increments is that the distribution of the increments does not change over time. Some researchers also call such a process time homogeneous.
Definition 2
A stochastic process has independent increments if the increments over any set of disjoint intervals are independent.
We shall examine counting processes that are Markovian. A loose definition is that, for $t > s$, the distribution of $N_t - N_s$ given $N_s$ is the same as if any of the values $N_s, N_{u_1}, N_{u_2}, \ldots$ with all $u_1, u_2, \ldots \le s$ were given.
For a Markovian counting process, the probabilities of greatest interest are the transition probabilities, given by
$$p_{k,k+n}(s,t) = \text{Prob}(N_t - N_s = n \mid N_s = k) \qquad (2.1)$$
for $0 \le s \le t$.
The marginal distribution of the increment $N_t - N_s$ may be obtained by an application of the law of total probability. That is,
$$\text{Prob}(N_t - N_s = n) = \sum_{k=0}^{\infty} \text{Prob}(N_t - N_s = n \mid N_s = k)\,\text{Prob}(N_s = k) = \sum_{k=0}^{\infty} p_{k,k+n}(s,t)\, p_k(s) \qquad (2.2)$$
2.2 Chapman-Kolmogorov Equations
“Feller (1968) states that the transition probabilities of a time homogeneous Markov process satisfy the Chapman-Kolmogorov equation
$$p_{in}(s+t) = \sum_{v} p_{iv}(s)\, p_{vn}(t) \qquad (2.3)$$
The transition probabilities of a time non-homogeneous Markov process satisfy the Chapman-Kolmogorov equation, namely
$$p_{in}(\tau, t) = \sum_{v} p_{iv}(\tau, s)\, p_{vn}(s, t) \qquad (2.4)$$
valid for $\tau \le s \le t$.
This relation expresses the fact that a transition from the state $E_i$ at epoch (time) $\tau$ to $E_n$ at epoch $t$ occurs via some state $E_v$ at the intermediate epoch $s$, and for a Markov process the probability $p_{vn}(s,t)$ of the transition from $E_v$ to $E_n$ is independent of the previous state $E_i$. The transition probabilities of Markov processes with countably many states are therefore solutions of the Chapman-Kolmogorov identity (2.4) satisfying the side conditions
$$p_{ik}(\tau, t) \ge 0, \qquad \sum_{k} p_{ik}(\tau, s) = 1. \qquad (2.5)”$$
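As a concrete check, for the Poisson process the transition probabilities have the closed form $p_{in}(s,t) = e^{-\lambda(t-s)}[\lambda(t-s)]^{n-i}/(n-i)!$, and the Chapman-Kolmogorov identity (2.4) can be verified numerically. The rate, states and times below are illustrative choices, not values from the thesis.

```python
from math import exp, factorial

lam = 1.5   # illustrative rate λ

def p(i, n, s, t):
    """Poisson-process transition probability p_{in}(s,t)."""
    if n < i:
        return 0.0
    m = lam * (t - s)
    return exp(-m) * m ** (n - i) / factorial(n - i)

# Chapman-Kolmogorov: p_{in}(τ,t) = Σ_v p_{iv}(τ,s) p_{vn}(s,t)
tau, s, t, i, n = 0.0, 1.0, 3.0, 2, 7
lhs = p(i, n, tau, t)
rhs = sum(p(i, v, tau, s) * p(v, n, s, t) for v in range(i, n + 1))
print(round(lhs, 6), round(rhs, 6))   # the two sides agree
```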
2.2.1 Time Homogeneous Markov Process
Proof of formula (2.3)
The transition probability $p_{ij}(s+t)$ is the probability of moving from state $E_i$ at time $0$ to state $E_j$ at time $s+t$, passing via some state $E_v$ at time $t$:
$$p_{ij}(s+t) = \sum_{v} \text{Prob}(X(s+t) = j,\, X(t) = v \mid X(0) = i)$$
$$= \sum_{v} \frac{\text{Prob}(X(s+t) = j,\, X(t) = v,\, X(0) = i)}{\text{Prob}(X(0) = i)}$$
$$= \sum_{v} \frac{\text{Prob}(X(s+t) = j \mid X(t) = v,\, X(0) = i)\,\text{Prob}(X(t) = v \mid X(0) = i)\,\text{Prob}(X(0) = i)}{\text{Prob}(X(0) = i)}$$
Therefore
$$p_{ij}(s+t) = \sum_{v} \text{Prob}(X(s+t) = j \mid X(t) = v,\, X(0) = i)\,\text{Prob}(X(t) = v \mid X(0) = i)$$
Because of the Markov property,
$$p_{ij}(s+t) = \sum_{v} \text{Prob}(X(s+t) = j \mid X(t) = v)\,\text{Prob}(X(t) = v \mid X(0) = i)$$
The equation above can also be written in the form
$$p_{ij}(s+t) = \sum_{v} p_{vj}(s)\, p_{iv}(t) = \sum_{v} p_{iv}(t)\, p_{vj}(s)$$
2.2.2 Time Non-homogeneous Markov Process
Proof of formula (2.4)
Consider a transition from the state $E_i$ at epoch (time) $\tau$ to state $E_j$ at epoch $t$, occurring via some state $E_v$ at the intermediate epoch $s$.
Diagrammatically,
Time:  $\tau$   $s$   $t$
State: $E_i$   $E_v$   $E_j$
$$p_{in}(\tau, t) = \sum_{v} \text{Prob}(X(t) = n,\, X(s) = v \mid X(\tau) = i)$$
$$= \sum_{v} \frac{\text{Prob}(X(t) = n,\, X(s) = v,\, X(\tau) = i)}{\text{Prob}(X(\tau) = i)}$$
$$= \sum_{v} \frac{\text{Prob}(X(t) = n \mid X(s) = v,\, X(\tau) = i)\,\text{Prob}(X(s) = v,\, X(\tau) = i)}{\text{Prob}(X(\tau) = i)}$$
$$= \sum_{v} \text{Prob}(X(t) = n \mid X(s) = v,\, X(\tau) = i)\,\text{Prob}(X(s) = v \mid X(\tau) = i)$$
Therefore, by the Markov property,
$$p_{in}(\tau, t) = \sum_{v} \text{Prob}(X(t) = n \mid X(s) = v)\,\text{Prob}(X(s) = v \mid X(\tau) = i) = \sum_{v} p_{vn}(s,t)\, p_{iv}(\tau,s) = \sum_{v} p_{iv}(\tau,s)\, p_{vn}(s,t)$$
2.3 Forward and Backward Kolmogorov Differential Equations
2.3.1 Forward Kolmogorov Differential Equation
Using the notation of Klugman, Panjer and Willmot (2008), replace $i$ by $k$ and $n$ by $k+n$. Therefore,
$$p_{k,k+n}(\tau, t) = \sum_{v} p_{kv}(\tau, s)\, p_{v,k+n}(s, t)$$
Next, replace $\tau$ by $s$, $s$ by $t$ and $t$ by $t+h$. Thus we have
$$p_{k,k+n}(s, t+h) = \sum_{v} p_{kv}(s, t)\, p_{v,k+n}(t, t+h)$$
Finally, replace $v$ by $k+j$:
$$p_{k,k+n}(s, t+h) = \sum_{j=0}^{n} p_{k,k+j}(s, t)\, p_{k+j,k+n}(t, t+h) \qquad (2.6)$$
This can also be re-written as
$$p_{k,k+n}(s,t+h) = \sum_{j=0}^{n-2} p_{k,k+j}(s,t)\, p_{k+j,k+n}(t,t+h) + p_{k,k+n-1}(s,t)\, p_{k+n-1,k+n}(t,t+h) + p_{k,k+n}(s,t)\, p_{k+n,k+n}(t,t+h) \qquad (2.7)$$
But for a pure birth process
$$p_{k,k+1}(t, t+h) = \lambda_k(t)h + o(h); \qquad k = 0, 1, 2, \ldots \qquad (2.8)$$
$$p_{kk}(t, t+h) = 1 - \lambda_k(t)h + o(h); \qquad k = 0, 1, 2, \ldots \qquad (2.9)$$
and
$$p_{k,k+j}(t, t+h) = o(h); \qquad j = 2, 3, \ldots, n \qquad (2.10)$$
Applying (2.8), (2.9) and (2.10) in (2.7) gives
$$p_{k,k+n}(s,t+h) = o(h)\sum_{j=0}^{n-2} p_{k,k+j}(s,t) + \left[\lambda_{k+n-1}(t)h + o(h)\right] p_{k,k+n-1}(s,t) + \left[1 - \lambda_{k+n}(t)h + o(h)\right] p_{k,k+n}(s,t)$$
Therefore,
$$\lim_{h \to 0} \frac{p_{k,k+n}(s,t+h) - p_{k,k+n}(s,t)}{h} = \lim_{h \to 0} \frac{o(h)}{h}\sum_{j=0}^{n-2} p_{k,k+j}(s,t) + \lim_{h \to 0} \frac{\lambda_{k+n-1}(t)h + o(h)}{h}\, p_{k,k+n-1}(s,t) + \lim_{h \to 0} \frac{-\lambda_{k+n}(t)h + o(h)}{h}\, p_{k,k+n}(s,t)$$
i.e.,
$$\frac{\partial}{\partial t} p_{k,k+n}(s,t) + \lambda_{k+n}(t)\, p_{k,k+n}(s,t) = \lambda_{k+n-1}(t)\, p_{k,k+n-1}(s,t) \qquad (2.11)$$
where
$$p_{k,k-1}(s,t) = 0 \qquad (2.12)$$
In general,
$$\frac{\partial}{\partial t} p_{ij}(s,t) + \lambda_j(t)\, p_{ij}(s,t) = \lambda_{j-1}(t)\, p_{i,j-1}(s,t)$$
where $p_{j,j-1}(s,t) = 0$.
For this non-homogeneous birth process, the initial conditions are:
$$p_{kk}(s,s) = 1 \quad \text{and} \quad p_{k,k+n}(s,s) = 0 \ \text{for} \ n > 0. \qquad (2.13)$$
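As a concrete check of the forward equations: for the Poisson process ($\lambda_n(t) = \lambda$), (2.11) with the initial conditions (2.13) reduces to $dp_n/dt = \lambda p_{n-1} - \lambda p_n$, and a simple explicit Euler integration recovers the Poisson probabilities. The rate, horizon and step size below are illustrative choices.

```python
from math import exp, factorial

lam, T, steps, N = 1.0, 2.0, 20000, 10
h = T / steps

# Initial conditions (2.13): p_0 = 1, p_n = 0 for n > 0 (n is the increment).
p = [1.0] + [0.0] * N
for _ in range(steps):
    new = p[:]
    for n in range(N + 1):
        inflow = lam * p[n - 1] if n > 0 else 0.0
        new[n] = p[n] + h * (inflow - lam * p[n])   # dp_n/dt = λ p_{n-1} - λ p_n
    p = new

# Compare with the closed-form Poisson pmf with mean λT.
exact = [exp(-lam * T) * (lam * T) ** n / factorial(n) for n in range(N + 1)]
err = max(abs(a - b) for a, b in zip(p, exact))
print(err)   # small discretization error
```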
2.3.2 Backward Kolmogorov Differential Equations
Consider the following diagram:
Time:  $s$   $s+h$   $t$
State: $E_i$   $E_v$   $E_j$
$$p_{ij}(s,t) = \sum_{k} p_{ik}(s, s+h)\, p_{kj}(s+h, t)$$
But for a pure birth process
$$p_{i,i+1}(s, s+h) = \lambda_i(s)h + o(h)$$
$$p_{ii}(s, s+h) = 1 - \lambda_i(s)h + o(h)$$
and $p_{ik}(s, s+h) = o(h)$ for $k \ge i+2$.
Therefore,
$$p_{ij}(s,t) = p_{ii}(s,s+h)\, p_{ij}(s+h,t) + p_{i,i+1}(s,s+h)\, p_{i+1,j}(s+h,t) + \sum_{k=i+2}^{j} p_{ik}(s,s+h)\, p_{kj}(s+h,t)$$
$$= \left[1 - \lambda_i(s)h + o(h)\right] p_{ij}(s+h,t) + \left[\lambda_i(s)h + o(h)\right] p_{i+1,j}(s+h,t) + o(h)\sum_{k=i+2}^{j} p_{kj}(s+h,t)$$
Therefore,
$$p_{ij}(s,t) - p_{ij}(s+h,t) = \left[-\lambda_i(s)h + o(h)\right] p_{ij}(s+h,t) + \left[\lambda_i(s)h + o(h)\right] p_{i+1,j}(s+h,t) + o(h)\sum_{k=i+2}^{j} p_{kj}(s+h,t)$$
Dividing by $h$ and letting $h \to 0$,
$$-\frac{\partial}{\partial s} p_{ij}(s,t) = \lim_{h \to 0} \frac{p_{ij}(s,t) - p_{ij}(s+h,t)}{h} = -\lambda_i(s)\, p_{ij}(s,t) + \lambda_i(s)\, p_{i+1,j}(s,t)$$
i.e.,
$$\frac{\partial}{\partial s} p_{ij}(s,t) = \lambda_i(s)\left[p_{ij}(s,t) - p_{i+1,j}(s,t)\right] \qquad (2.14)$$
NB: The duration of the interval $(s, t)$ exceeds that of $(s+h, t)$.
This is the Kolmogorov backward differential equation.
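Equation (2.14) can be checked against a process with known transition probabilities. For the Poisson process ($\lambda_i(s) = \lambda$), $p_{ij}(s,t)$ depends on $s$ only through $t-s$, and a central finite difference in $s$ reproduces the right-hand side. The rate, states and times below are illustrative.

```python
from math import exp, factorial

lam = 1.2   # illustrative rate λ

def p(i, j, s, t):
    """Poisson transition probability p_{ij}(s,t) = e^{-λ(t-s)} [λ(t-s)]^{j-i}/(j-i)!"""
    if j < i:
        return 0.0
    m = lam * (t - s)
    return exp(-m) * m ** (j - i) / factorial(j - i)

# Backward equation (2.14): ∂p_{ij}/∂s = λ_i(s) [p_{ij}(s,t) - p_{i+1,j}(s,t)]
i, j, s, t, h = 0, 4, 0.5, 3.0, 1e-6
lhs = (p(i, j, s + h, t) - p(i, j, s - h, t)) / (2 * h)   # central difference in s
rhs = lam * (p(i, j, s, t) - p(i + 1, j, s, t))
print(round(lhs, 6), round(rhs, 6))   # the two sides agree
```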
2.4 Methods of Solving Continuous Time Non-homogeneous Forward Kolmogorov Differential Equations
Equation (2.11) can be solved using the following techniques. In Chapter Three we shall use the integrating factor method; in Chapter Four, the Lagrange method; and in Chapter Five, the generator matrix method. Highlights of the key steps in each of these three methods are given below. Some comments on the Laplace transform method have also been given, even though it has not been used.
2.4.1 Integrating Factor Technique

Let a differential equation be of the form

dy/dx + a y = R     (2.15)

where y is a function of x, and a and R are constants or functions of x.

To solve for y in this kind of differential equation, given the initial conditions, we use the following procedure.

Procedure

 - Let the integrating factor be IF = e^{∫a dx}.
 - Multiply both sides of the equation (of the form (2.15)) you want to solve by the integrating factor.
 - Express the new equation in the form d/dx [e^{∫a dx} y] = R e^{∫a dx}.
 - Integrate both sides with respect to x. Use the initial condition to find the constant of integration.
 - Solve for y.
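The procedure above can be sketched numerically. This is a minimal check with an illustrative equation that is not taken from the thesis: for dy/dx + 2y = e^{-x}, y(0) = 0, the integrating factor e^{2x} yields the closed form y(x) = e^{-x} - e^{-2x}, which we compare against a direct Runge-Kutta integration of the same ODE.

```python
import math

# Illustrative ODE (not from the thesis): dy/dx + 2y = e^{-x}, y(0) = 0.
# Integrating factor e^{2x}: d/dx(e^{2x} y) = e^{x}, so e^{2x} y = e^{x} - 1,
# giving the closed form below.
def closed_form(x):
    return math.exp(-x) - math.exp(-2 * x)

# Independent cross-check: integrate y' = e^{-x} - 2y directly with RK4.
def rk4(x_end, h=1e-3):
    f = lambda x, y: math.exp(-x) - 2 * y
    x, y = 0.0, 0.0
    for _ in range(round(x_end / h)):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y

print(abs(closed_form(1.0) - rk4(1.0)) < 1e-9)  # True
```

The agreement confirms that the integrating-factor solution satisfies the original equation.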
2.4.2 Lagrange's Method

Let P, Q and R be functions of x, y and z. Suppose we have an equation of the form

P ∂z/∂x + Q ∂z/∂y = R     (2.16)

subject to some appropriate boundary conditions. Such an equation is called a linear partial differential equation. The Lagrange method of solving this kind of equation is described below.

The probability generating function (pgf) is one of the major analytical tools used to work with stochastic processes on discrete state spaces.

Definition

Let X be a non-negative integer-valued random variable such that P[X = k] = p_k, k = 0, 1, 2, ..., is the probability mass function (pmf). Then the probability generating function is given by

G(s) = E[s^X] = Σ_{k=0}^{∞} p_k s^k

If we are able to obtain the pgf of a stochastic process, then we can find the pmf of the process: the pmf p_k is the coefficient of s^k.

Procedure

 - Define G(s,t,z) = Σ_n p_{jk}(s,t) z^k = Σ_{k=j}^{∞} p_{jk}(s,t) z^k. Consequently define ∂G(s,t,z)/∂s and ∂G(s,t,z)/∂z.
 - Multiply both sides of the second difference-differential equation by z^n and sum over n; taking advantage of the initial conditions, write the resulting equation in terms of the definitions above.
 - Summarize the results as a single Lagrange partial differential equation for the generating function, of the form

   P ∂G(s,t,z)/∂t + Q ∂G(s,t,z)/∂z = R

 - Form the resulting auxiliary equations from the equation above. They will be of the form

   dt/P = dz/Q = dG(s,t,z)/R

 - We can form three subsidiary equations from the equation above. These are

   (i)   dt/P = dz/Q
   (ii)  dt/P = dG(s,t,z)/R
   (iii) dz/Q = dG(s,t,z)/R

 - Consider any two of these equations and solve them. Solutions of the two considered subsidiary equations are of the form

   u(s,t,z) = constant     (2.17)

   and

   v(s,t,z) = constant     (2.18)

The most general solution of (2.16) is now given by

u = ψ(v)

where ψ is an arbitrary function. The precise form of this function is determined when the boundary conditions have been inserted.
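As a numerical illustration of the method (with an assumed rate λ = 1.3 and an arbitrary smooth ψ, neither taken from the thesis): for the simple birth process treated later, the auxiliary equations yield the first integral u = e^{-λt} z/(z-1), and every function of the form G = ψ(u) should solve G_t - λz(z-1)G_z = 0.

```python
import math

# First integral of the auxiliary equations for the simple-birth pgf PDE:
# u(t, z) = e^{-lambda t} z / (z - 1). Any G = psi(u) should satisfy
# G_t - lambda * z * (z - 1) * G_z = 0. Check with central differences.
lam = 1.3
u = lambda t, z: math.exp(-lam * t) * z / (z - 1)
psi = math.sin                      # any smooth psi works
G = lambda t, z: psi(u(t, z))

t0, z0, h = 0.4, 0.3, 1e-6
G_t = (G(t0 + h, z0) - G(t0 - h, z0)) / (2 * h)
G_z = (G(t0, z0 + h) - G(t0, z0 - h)) / (2 * h)
residual = G_t - lam * z0 * (z0 - 1) * G_z
print(abs(residual) < 1e-6)  # True
```

The vanishing residual shows why the boundary condition alone is what pins down the particular ψ.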
2.4.3 Laplace Transforms

The Laplace transform of a function f(t) is defined by

L{f(t)} = ∫_0^∞ e^{-st} f(t) dt     (2.19)

Using integration by parts,

∫ v du = uv - ∫ u dv     (2.20)

let

v = e^{-st},  dv/dt = -s e^{-st}

Also, let du = f'(t) dt, so that u = f(t).

Substituting in equation (2.20),

∫_0^∞ e^{-st} f'(t) dt = [f(t) e^{-st}]_0^∞ - ∫_0^∞ f(t) (-s e^{-st}) dt
    = [f(t) e^{-st}]_0^∞ + s ∫_0^∞ f(t) e^{-st} dt
    = 0 - f(0) + s ∫_0^∞ f(t) e^{-st} dt

Thus

L{f'(t)} = s L{f(t)} - f(0)

Substituting p_{ij}(t) for f(t), the equation above becomes

L{p'_{ij}(t)} = s L{p_{ij}(t)} - p_{ij}(0)     (2.21)

Procedure

 - Take the Laplace transform of the second of the two basic difference-differential equations.
 - Apply the relation L{p'_{ij}(t)} = s L{p_{ij}(t)} - p_{ij}(0) to replace L{p'_{ij}(t)} and simplify, leaving L{p_{ij}(t)} as the subject of the formula.
 - Starting with the conditions at t = 0, generate a recursive relation and use it to generalize for L{p_{ij}(t)}.
 - Find the Laplace inverse of the L{p_{ij}(t)} so obtained.
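The derivative rule (2.21) can be checked numerically. This is an illustrative check with the assumed choices f(t) = e^{-2t} and s = 1 (not values from the thesis); both sides of L{f'} = sL{f} - f(0) are evaluated with the same trapezoidal quadrature.

```python
import math

# Numerical check of L{f'}(s) = s L{f}(s) - f(0) for f(t) = e^{-2t}, s = 1.
# Closed forms for reference: L{f} = 1/(s+2), L{f'} = -2/(s+2).
def laplace(g, s, T=40.0, n=200_000):
    # trapezoidal rule on [0, T]; the integrand is negligible beyond T
    h = T / n
    total = 0.5 * (g(0.0) + g(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * h
        total += g(t) * math.exp(-s * t)
    return total * h

f = lambda t: math.exp(-2 * t)
fprime = lambda t: -2 * math.exp(-2 * t)

s = 1.0
lhs = laplace(fprime, s)
rhs = s * laplace(f, s) - f(0.0)
print(abs(lhs - rhs) < 1e-6)  # True
```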
2.4.4 Generator Matrix Method

 - Express the differential equations in matrix form.
 - Obtain the eigenvalues and the corresponding eigenvectors.
 - If the generator has distinct eigenvalues μ_1, μ_2, ..., μ_k, say, then it factors as U D V, where V = U^{-1}, D = diag(μ_1, μ_2, ..., μ_k), and the ith column of U is the right eigenvector associated with μ_i.
 - Work out p(t) = U diag(e^{μ_1 t}, ..., e^{μ_k t}) V.
 - Substitute n in the resulting expression,

   p_{j,j+n}(s,t) = Σ_{v=0}^{n} e^{-(j+v)λ(t-s)} Π_{k=0}^{n-1} (j+k)λ / Π_{k=0, k≠v}^{n} [(j+k)λ - (j+v)λ],

   and simplify.
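A small sketch of the matrix route, with an assumed truncated generator (rates λ_0 = 1, λ_1 = 2.5, top state made absorbing; all values illustrative). A Taylor series stands in for the eigen-decomposition here, since for this generator both yield e^{Qt}; the (0,1) entry is compared with the known two-rate formula λ_0 (e^{-λ_0 t} - e^{-λ_1 t}) / (λ_1 - λ_0).

```python
import math

# Truncated pure-birth generator on states {0, 1, 2} (illustrative rates):
lam = [1.0, 2.5]
Q = [[-lam[0], lam[0], 0.0],
     [0.0, -lam[1], lam[1]],
     [0.0, 0.0, 0.0]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(Q, t, terms=60):
    # P(t) = e^{Qt} via its Taylor series (adequate for this small, mild Q)
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    Qt = [[q * t for q in row] for row in Q]
    for m in range(1, terms):
        term = [[x / m for x in row] for row in mat_mul(term, Qt)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

t = 0.7
P = expm(Q, t)
# Spectral (distinct-eigenvalue) form of p_{01}(t):
p01 = lam[0] / (lam[1] - lam[0]) * (math.exp(-lam[0] * t) - math.exp(-lam[1] * t))
print(abs(P[0][1] - p01) < 1e-10)  # True
```

Each row of P sums to one, as a transition matrix must.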
CHAPTER THREE
TRANSITIONAL PROBABILITIES BASED ON THE INTEGRATING
FACTOR TECHNIQUE
3.1 Introduction

In this chapter, we shall solve equation (2.11) using the integrating factor technique. We shall solve this equation for all four pure birth processes, starting with the Poisson process, then the simple birth process, the simple birth process with immigration, and finally the Polya process. By putting n = 0 in equation (2.11), we shall find p_{kk}(s,t). Putting higher values of n in the same equation will generate a recursive relation. Each generated equation will be solved using the integrating factor technique. We shall then generalize each case by induction.
3.2 Determining p_{kk}(s,t)

When n = 0, equation (2.11) becomes

∂/∂t p_{kk}(s,t) + λ_k(t) p_{kk}(s,t) = λ_{k-1}(t) p_{k,k-1}(s,t)

But from (2.12), p_{k,k-1}(s,t) = 0. Therefore

∂/∂t p_{kk}(s,t) + λ_k(t) p_{kk}(s,t) = 0

[1/p_{kk}(s,t)] d p_{kk}(s,t)/dt = -λ_k(t)

∂/∂t log p_{kk}(s,t) = -λ_k(t)

Therefore, integrating from s to t,

[log p_{kk}(s,y)]_{y=s}^{y=t} = -∫_s^t λ_k(x) dx

Equivalently,

log [p_{kk}(s,t)/p_{kk}(s,s)] = -∫_s^t λ_k(x) dx

Therefore, since p_{kk}(s,s) = 1,

log p_{kk}(s,t) = -∫_s^t λ_k(x) dx

p_{kk}(s,t) = e^{-∫_s^t λ_k(x) dx}     (3.1)

For n ≥ 1, use the integrating factor

e^{∫ λ_{k+n}(t) dt}

in equation (2.11).

We shall now look at special cases.
3.3 Special Cases of Non-Homogeneous Markov Processes

3.3.1 Poisson Process

Here λ_k = λ for all k. Thus

p_{kk}(s,t) = e^{-∫_s^t λ dx} = e^{-λ(t-s)}     (3.2)

For n ≥ 1, the differential equation for the transition probabilities is

∂/∂t p_{k,k+n}(s,t) + λ p_{k,k+n}(s,t) = λ p_{k,k+n-1}(s,t)     (3.3)
When n = 1,

∂/∂t p_{k,k+1}(s,t) + λ p_{k,k+1}(s,t) = λ p_{kk}(s,t) = λ e^{-λ(t-s)}

Integrating factor = e^{∫λ dt} = e^{λt}

Therefore,

e^{λt} ∂/∂t p_{k,k+1}(s,t) + λ e^{λt} p_{k,k+1}(s,t) = λ e^{λt} e^{-λ(t-s)}

∂/∂t [e^{λt} p_{k,k+1}(s,t)] = λ e^{λs}

Integrating both sides from s to t,

[e^{λy} p_{k,k+1}(s,y)]_s^t = ∫_s^t λ e^{λs} dy

Therefore

e^{λt} p_{k,k+1}(s,t) - e^{λs} p_{k,k+1}(s,s) = λ(t-s) e^{λs}

e^{λt} p_{k,k+1}(s,t) - 0 = λ(t-s) e^{λs}

Equivalently,

p_{k,k+1}(s,t) = λ(t-s) e^{-λ(t-s)}     (3.4)
When n = 2, equation (2.11) becomes

∂/∂t p_{k,k+2}(s,t) + λ p_{k,k+2}(s,t) = λ^2 (t-s) e^{-λ(t-s)}

Integrating factor = e^{λt}. Multiplying the above equation by the integrating factor, we have

e^{λt} ∂/∂t p_{k,k+2}(s,t) + λ e^{λt} p_{k,k+2}(s,t) = λ^2 (t-s) e^{λt} e^{-λ(t-s)}

∂/∂t [e^{λt} p_{k,k+2}(s,t)] = λ^2 (t-s) e^{λs}

Integrating both sides,

[e^{λy} p_{k,k+2}(s,y)]_s^t = ∫_s^t λ^2 (y-s) e^{λs} dy

Therefore,

e^{λt} p_{k,k+2}(s,t) - e^{λs} p_{k,k+2}(s,s) = λ^2 e^{λs} [y^2/2 - sy]_s^t
    = λ^2 e^{λs} [(t^2/2 - st) - (s^2/2 - s^2)]
    = (λ^2 e^{λs}/2)(t^2 - 2st + s^2)
    = (λ^2 e^{λs}/2)(t-s)^2

Since p_{k,k+2}(s,s) = 0,

e^{λt} p_{k,k+2}(s,t) = [λ(t-s)]^2 e^{λs} / 2!

Therefore,

p_{k,k+2}(s,t) = [λ(t-s)]^2 / 2! · e^{-λ(t-s)}     (3.5)
By induction, assume

p_{k,k+n-1}(s,t) = [λ(t-s)]^{n-1}/(n-1)! · e^{-λ(t-s)}

Then equation (2.11) becomes

∂/∂t p_{k,k+n}(s,t) + λ p_{k,k+n}(s,t) = λ [λ(t-s)]^{n-1}/(n-1)! · e^{-λ(t-s)}

Integrating factor = e^{λt}

Multiplying the above equation by the integrating factor, we get

e^{λt} ∂/∂t p_{k,k+n}(s,t) + λ e^{λt} p_{k,k+n}(s,t) = λ^n (t-s)^{n-1}/(n-1)! · e^{λt} e^{-λ(t-s)}

∂/∂t [e^{λt} p_{k,k+n}(s,t)] = λ^n e^{λs} (t-s)^{n-1}/(n-1)!

Integrating both sides,

[e^{λy} p_{k,k+n}(s,y)]_s^t = [λ^n e^{λs}/(n-1)!] ∫_s^t (y-s)^{n-1} dy

Let u = y - s, so du/dy = 1, or equivalently du = dy. When y = s, u = 0; when y = t, u = t - s.

Then

[e^{λy} p_{k,k+n}(s,y)]_s^t = [λ^n e^{λs}/(n-1)!] ∫_0^{t-s} u^{n-1} du
    = [λ^n e^{λs}/(n-1)!] [u^n/n]_0^{t-s}
    = λ^n e^{λs} (t-s)^n / [n (n-1)!]
    = [λ(t-s)]^n e^{λs} / n!

Therefore,

e^{λt} p_{k,k+n}(s,t) - e^{λs} p_{k,k+n}(s,s) = [λ(t-s)]^n e^{λs} / n!

e^{λt} p_{k,k+n}(s,t) = [λ(t-s)]^n e^{λs} / n!

Therefore

p_{k,k+n}(s,t) = [λ(t-s)]^n/n! · e^{-λ(t-s)};  n = 0, 1, 2, ...     (3.6)
This is independent of the initial state and depends only on the length of the time interval; thus for a Poisson process the increments are independent and stationary.

Remarks

Formula (3.6) does not depend on k, which reflects the fact that the increments are independent.

From (2.2), the marginal distribution of the increment N_t - N_s is thus given by

Prob[N_t - N_s = n] = Σ_{k=0}^{∞} p_{k,k+n}(s,t) p_k(s)
    = e^{-λ(t-s)} [λ(t-s)]^n/n! · Σ_{k=0}^{∞} p_k(s)
    = e^{-λ(t-s)} [λ(t-s)]^n/n!

Therefore,

Prob[N_t - N_s = n] = [λ(t-s)]^n/n! · e^{-λ(t-s)};  n = 0, 1, 2, ...     (3.7)

which depends only on t - s. Therefore, the process is stationary.

Thus, the Poisson process is a homogeneous birth process with stationary and independent increments.

Note that

Prob[N_t - N_0 = n] = (λt)^n/n! · e^{-λt}

Since N_t is the number of events that have occurred up to time t, it is convenient to define N_0 = 0.

Therefore,

Prob[N_t = n] = (λt)^n/n! · e^{-λt}

i.e.

p_n(t) = (λt)^n/n! · e^{-λt};  n = 0, 1, 2, ...     (3.8)

N_t has a Poisson distribution with mean λt.
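The transition law (3.6) can be checked directly. A minimal sketch with assumed parameter values (λ = 1.5 and t - s = 2.0, chosen for illustration): the probabilities should sum to one and have mean λ(t-s).

```python
import math

# Poisson transition probabilities (3.6) over an interval of length t - s:
lam, dt = 1.5, 2.0   # illustrative values
p = [math.exp(-lam * dt) * (lam * dt) ** n / math.factorial(n) for n in range(60)]

print(abs(sum(p) - 1.0) < 1e-12)   # True: the p_{k,k+n} form a distribution
print(abs(sum(n * q for n, q in enumerate(p)) - lam * dt) < 1e-9)  # True: mean = lambda(t-s)
```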
3.3.2 Simple Birth Process

Here λ_k = kλ, for k = 0, 1, 2, ...

For n = 0, (2.11) becomes (3.1). Therefore,

p_{kk}(s,t) = e^{-∫_s^t kλ dx} = e^{-kλ(t-s)}     (3.9)

For n ≥ 1, (2.11) becomes

∂/∂t p_{k,k+n}(s,t) + (k+n)λ p_{k,k+n}(s,t) = (k+n-1)λ p_{k,k+n-1}(s,t)     (3.10)

For n = 1,

∂/∂t p_{k,k+1}(s,t) + (k+1)λ p_{k,k+1}(s,t) = kλ p_{kk}(s,t) = kλ e^{-kλ(t-s)}

Integrating factor = e^{∫(k+1)λ dt} = e^{(k+1)λt}

Multiplying the equation above by the integrating factor, we have

e^{(k+1)λt} ∂/∂t p_{k,k+1}(s,t) + (k+1)λ e^{(k+1)λt} p_{k,k+1}(s,t) = kλ e^{(k+1)λt} e^{-kλ(t-s)}

R.H.S. = kλ e^{(k+1)λt} e^{-kλt} e^{kλs} = kλ e^{λt} e^{kλs}

Therefore,

∂/∂t [e^{(k+1)λt} p_{k,k+1}(s,t)] = kλ e^{λt} e^{kλs}

Integrating both sides with respect to t, we have

[e^{(k+1)λy} p_{k,k+1}(s,y)]_s^t = ∫_s^t kλ e^{kλs} e^{λy} dy
    = kλ e^{kλs} [e^{λy}/λ]_s^t
    = k e^{kλs} (e^{λt} - e^{λs})

Therefore, since p_{k,k+1}(s,s) = 0,

p_{k,k+1}(s,t) = k e^{kλs} (e^{λt} - e^{λs}) e^{-(k+1)λt}
    = k e^{-kλ(t-s)} (1 - e^{-λ(t-s)})
    = C(k, 1) e^{-kλ(t-s)} (1 - e^{-λ(t-s)})     (3.11)
For n = 2,

∂/∂t p_{k,k+2}(s,t) + (k+2)λ p_{k,k+2}(s,t) = (k+1)λ p_{k,k+1}(s,t) = (k+1)λ k e^{-kλ(t-s)} (1 - e^{-λ(t-s)})

Integrating factor = e^{∫(k+2)λ dt} = e^{(k+2)λt}

Multiplying the above equation by the integrating factor, we have

∂/∂t [e^{(k+2)λt} p_{k,k+2}(s,t)] = λk(k+1) e^{(k+2)λt} e^{-kλ(t-s)} (1 - e^{-λ(t-s)})
    = λk(k+1) e^{kλs} e^{2λt} (1 - e^{-λ(t-s)})
    = λk(k+1) e^{kλs} (e^{2λt} - e^{λs} e^{λt})

Integrating both sides with respect to t, we have

e^{(k+2)λt} p_{k,k+2}(s,t) = λk(k+1) e^{kλs} ∫_s^t (e^{2λy} - e^{λs} e^{λy}) dy
    = λk(k+1) e^{kλs} [e^{2λy}/(2λ) - e^{λs} e^{λy}/λ]_s^t
    = k(k+1) e^{kλs} [e^{2λt}/2 - e^{λ(s+t)} + e^{2λs}/2]
    = [k(k+1)/2] e^{kλs} (e^{λt} - e^{λs})^2

Therefore,

p_{k,k+2}(s,t) = [k(k+1)/2] e^{kλs} e^{-(k+2)λt} (e^{λt} - e^{λs})^2
    = [k(k+1)/2] e^{-kλ(t-s)} (1 - e^{-λ(t-s)})^2

Equivalently,

p_{k,k+2}(s,t) = C(k+1, 2) e^{-kλ(t-s)} (1 - e^{-λ(t-s)})^2     (3.12)
Put n = 3; then (2.11) becomes

∂/∂t p_{k,k+3}(s,t) + λ_{k+3}(t) p_{k,k+3}(s,t) = λ_{k+2}(t) p_{k,k+2}(s,t)

∂/∂t p_{k,k+3}(s,t) + (k+3)λ p_{k,k+3}(s,t) = (k+2)λ p_{k,k+2}(s,t)

Integrating factor = exp[∫(k+3)λ dt] = exp[(k+3)λt]

Multiplying the above equation by the integrating factor, we have

∂/∂t [e^{(k+3)λt} p_{k,k+3}(s,t)] = (k+2)λ e^{(k+3)λt} [k(k+1)/2] e^{-kλ(t-s)} (1 - e^{-λ(t-s)})^2
    = 3λ C(k+2, 3) e^{kλs} e^{3λt} (1 - e^{-λ(t-s)})^2
    = 3 C(k+2, 3) e^{kλs} λ e^{λt} (e^{λt} - e^{λs})^2

Integrating both sides with respect to t, we have

e^{(k+3)λt} p_{k,k+3}(s,t) = 3 C(k+2, 3) e^{kλs} ∫_s^t λ e^{λy} (e^{λy} - e^{λs})^2 dy

Put u = e^{λy} - e^{λs}, so du/dy = λ e^{λy}, i.e. du = λ e^{λy} dy. When y = s, u = 0; when y = t, u = e^{λt} - e^{λs}.

Therefore (using p_{k,k+3}(s,s) = 0),

e^{(k+3)λt} p_{k,k+3}(s,t) = 3 C(k+2, 3) e^{kλs} ∫_0^{e^{λt}-e^{λs}} u^2 du
    = 3 C(k+2, 3) e^{kλs} [u^3/3]_0^{e^{λt}-e^{λs}}
    = C(k+2, 3) e^{kλs} (e^{λt} - e^{λs})^3
    = C(k+2, 3) e^{kλs} e^{3λt} (1 - e^{-λ(t-s)})^3

The above equation can also be written in the form

p_{k,k+3}(s,t) = C(k+2, 3) e^{kλs} e^{3λt} e^{-(k+3)λt} (1 - e^{-λ(t-s)})^3
    = C(k+2, 3) e^{-kλ(t-s)} (1 - e^{-λ(t-s)})^3     (3.13)
By induction, assume

p_{k,k+n-1}(s,t) = C(k+n-2, n-1) e^{-kλ(t-s)} (1 - e^{-λ(t-s)})^{n-1}     (3.14)

to be true. Then equation (2.11) becomes

∂/∂t p_{k,k+n}(s,t) + λ_{k+n}(t) p_{k,k+n}(s,t) = λ_{k+n-1}(t) C(k+n-2, n-1) e^{-kλ(t-s)} (1 - e^{-λ(t-s)})^{n-1}

Substituting for λ_{k+n}(t) = (k+n)λ, we have

∂/∂t p_{k,k+n}(s,t) + (k+n)λ p_{k,k+n}(s,t) = (k+n-1)λ C(k+n-2, n-1) e^{-kλ(t-s)} (1 - e^{-λ(t-s)})^{n-1}
    = nλ C(k+n-1, n) e^{-kλ(t-s)} (1 - e^{-λ(t-s)})^{n-1}

Integrating factor = e^{(k+n)λt}

Multiplying the equation above by the integrating factor, we have

∂/∂t [e^{(k+n)λt} p_{k,k+n}(s,t)] = nλ C(k+n-1, n) e^{(k+n)λt} e^{-kλ(t-s)} (1 - e^{-λ(t-s)})^{n-1}
    = n C(k+n-1, n) e^{kλs} λ e^{λt} (e^{λt} - e^{λs})^{n-1}

Integrating both sides with respect to t, we have

e^{(k+n)λt} p_{k,k+n}(s,t) = n C(k+n-1, n) e^{kλs} ∫_s^t λ e^{λy} (e^{λy} - e^{λs})^{n-1} dy

Put u = e^{λy} - e^{λs}, so du = λ e^{λy} dy. Further, when y = s, u = 0, and when y = t, u = e^{λt} - e^{λs}.

Therefore,

e^{(k+n)λt} p_{k,k+n}(s,t) = n C(k+n-1, n) e^{kλs} ∫_0^{e^{λt}-e^{λs}} u^{n-1} du
    = n C(k+n-1, n) e^{kλs} [u^n/n]_0^{e^{λt}-e^{λs}}
    = C(k+n-1, n) e^{kλs} (e^{λt} - e^{λs})^n
    = C(k+n-1, n) e^{kλs} e^{nλt} (1 - e^{-λ(t-s)})^n

Therefore,

p_{k,k+n}(s,t) = C(k+n-1, n) e^{-kλ(t-s)} (1 - e^{-λ(t-s)})^n     (3.15)

This is a negative binomial distribution with p = e^{-λ(t-s)} and q = 1 - e^{-λ(t-s)}. It depends on the length of the time interval t - s and on k; the increments are thus stationary but not independent.
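The negative binomial law (3.15) can be verified numerically. A minimal sketch with assumed values k = 3 and λ(t-s) = 0.8 (illustrative, not from the thesis): the probabilities should sum to one, and the mean population size k + E[n] should equal k e^{λ(t-s)}, the familiar Yule-process growth.

```python
import math

# Simple birth (Yule) transition law (3.15): negative binomial with
# p = e^{-lambda(t-s)}, q = 1 - p. Illustrative: k = 3, lambda(t-s) = 0.8.
k, x = 3, 0.8
p = math.exp(-x)
q = 1.0 - p

def p_kn(n):
    return math.comb(k + n - 1, n) * p ** k * q ** n

total = sum(p_kn(n) for n in range(400))
mean_pop = k + sum(n * p_kn(n) for n in range(400))   # E[N_t | N_s = k]
print(abs(total - 1.0) < 1e-9)                 # True
print(abs(mean_pop - k * math.exp(x)) < 1e-6)  # True: E[N_t] = k e^{lambda(t-s)}
```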
3.3.3 Simple Birth Process with Immigration

For simple birth with immigration, λ_k = kλ + v. From (3.1),

p_{kk}(s,t) = e^{-∫_s^t (kλ + v) dx} = e^{-(kλ+v)x |_s^t} = e^{-(kλ+v)(t-s)}

For n ≥ 1, (2.11) becomes

∂/∂t p_{k,k+n}(s,t) + [(k+n)λ + v] p_{k,k+n}(s,t) = [(k+n-1)λ + v] p_{k,k+n-1}(s,t)     (3.16)

When n = 1, equation (3.16) becomes

∂/∂t p_{k,k+1}(s,t) + [(k+1)λ + v] p_{k,k+1}(s,t) = (kλ + v) p_{kk}(s,t)
    = (kλ + v) e^{-(kλ+v)(t-s)}     (3.17)

To integrate equation (3.17),

Integrating factor = e^{∫[(k+1)λ+v] dt} = e^{[(k+1)λ+v]t}

Multiplying equation (3.17) by the integrating factor, we have

∂/∂t [e^{[(k+1)λ+v]t} p_{k,k+1}(s,t)] = (kλ + v) e^{[(k+1)λ+v]t} e^{-(kλ+v)(t-s)} = (kλ + v) e^{(kλ+v)s} e^{λt}

Integrating,

[e^{[(k+1)λ+v]y} p_{k,k+1}(s,y)]_s^t = (kλ + v) e^{(kλ+v)s} [e^{λy}/λ]_s^t

From (2.13), p_{k,k+1}(s,s) = 0. Therefore,

e^{[(k+1)λ+v]t} p_{k,k+1}(s,t) = [(kλ + v)/λ] e^{(kλ+v)s} (e^{λt} - e^{λs})

Therefore,

p_{k,k+1}(s,t) = (k + v/λ) e^{(kλ+v)s} e^{-[(k+1)λ+v]t} (e^{λt} - e^{λs})
    = (k + v/λ) e^{-(kλ+v)(t-s)} e^{-λt} (e^{λt} - e^{λs})
    = (k + v/λ) e^{-(kλ+v)(t-s)} (1 - e^{-λ(t-s)})     (3.18)
When n = 2,

∂/∂t p_{k,k+2}(s,t) + [(k+2)λ + v] p_{k,k+2}(s,t) = [(k+1)λ + v] p_{k,k+1}(s,t)
    = [(k+1)λ + v] (k + v/λ) e^{-(kλ+v)(t-s)} (1 - e^{-λ(t-s)})

Integrating factor = e^{∫[(k+2)λ+v] dt} = e^{[(k+2)λ+v]t}

Multiplying by the integrating factor, we have

∂/∂t [e^{[(k+2)λ+v]t} p_{k,k+2}(s,t)] = λ (k + v/λ)(k + v/λ + 1) e^{(kλ+v)s} e^{2λt} (1 - e^{-λ(t-s)})
    = λ (k + v/λ)(k + v/λ + 1) e^{(kλ+v)s} (e^{2λt} - e^{λ(s+t)})

Integrating both sides with respect to t, we have

e^{[(k+2)λ+v]t} p_{k,k+2}(s,t) = λ (k + v/λ)(k + v/λ + 1) e^{(kλ+v)s} ∫_s^t (e^{2λy} - e^{λs} e^{λy}) dy
    = (k + v/λ)(k + v/λ + 1) e^{(kλ+v)s} [e^{2λt}/2 - e^{λ(s+t)} + e^{2λs}/2]
    = [(k + v/λ)(k + v/λ + 1)/2] e^{(kλ+v)s} (e^{λt} - e^{λs})^2

(using p_{k,k+2}(s,s) = 0 from (2.13)). Equivalently,

p_{k,k+2}(s,t) = [(k + v/λ)(k + v/λ + 1)/2] e^{(kλ+v)s} e^{-[(k+2)λ+v]t} (e^{λt} - e^{λs})^2
    = [(k + v/λ)(k + v/λ + 1)/(1·2)] e^{-(kλ+v)(t-s)} (1 - e^{-λ(t-s)})^2

Therefore,

p_{k,k+2}(s,t) = [(k + v/λ)(k + v/λ + 1)/(1·2)] e^{-(kλ+v)(t-s)} (1 - e^{-λ(t-s)})^2     (3.19)
When n = 3, equation (3.16) becomes

d/dt p_{k,k+3}(s,t) + [(k+3)λ + v] p_{k,k+3}(s,t) = [(k+2)λ + v] p_{k,k+2}(s,t)
    = [(k+2)λ + v] [(k + v/λ)(k + v/λ + 1)/(1·2)] e^{-(kλ+v)(t-s)} (1 - e^{-λ(t-s)})^2

Integrating this equation,

Integrating factor = e^{∫[(k+3)λ+v] dt} = e^{[(k+3)λ+v]t}

Multiplying by the integrating factor, and noting that (k+2)λ + v = λ(k + v/λ + 2),

∂/∂t [e^{[(k+3)λ+v]t} p_{k,k+3}(s,t)] = [(k + v/λ)(k + v/λ + 1)(k + v/λ + 2)/2] e^{(kλ+v)s} λ e^{3λt} (1 - e^{-λ(t-s)})^2
    = [(k + v/λ)(k + v/λ + 1)(k + v/λ + 2)/2] e^{(kλ+v)s} λ e^{λt} (e^{λt} - e^{λs})^2

Integrating both sides with the substitution u = e^{λy} - e^{λs} (so du = λ e^{λy} dy; u = 0 at y = s and u = e^{λt} - e^{λs} at y = t), and using p_{k,k+3}(s,s) = 0 from (2.13),

e^{[(k+3)λ+v]t} p_{k,k+3}(s,t) = [(k + v/λ)(k + v/λ + 1)(k + v/λ + 2)/2] e^{(kλ+v)s} ∫_0^{e^{λt}-e^{λs}} u^2 du
    = [(k + v/λ)(k + v/λ + 1)(k + v/λ + 2)/(1·2·3)] e^{(kλ+v)s} (e^{λt} - e^{λs})^3

Equivalently,

p_{k,k+3}(s,t) = [(k + v/λ)(k + v/λ + 1)(k + v/λ + 2)/3!] e^{-(kλ+v)(t-s)} (1 - e^{-λ(t-s)})^3
    = C(k + v/λ + 2, 3) e^{-(kλ+v)(t-s)} (1 - e^{-λ(t-s)})^3     (3.20)
By induction, assume

p_{k,k+n-1}(s,t) = C(k + v/λ + n - 2, n-1) e^{-(kλ+v)(t-s)} (1 - e^{-λ(t-s)})^{n-1}     (3.21)

From (3.16), we have for n > 0,

∂/∂t p_{k,k+n}(s,t) + [(k+n)λ + v] p_{k,k+n}(s,t) = [(k+n-1)λ + v] p_{k,k+n-1}(s,t)
    = λ (k + v/λ + n - 1) C(k + v/λ + n - 2, n-1) e^{-(kλ+v)(t-s)} (1 - e^{-λ(t-s)})^{n-1}

Integrating factor = e^{∫[(k+n)λ+v] dt} = e^{[(k+n)λ+v]t}

Thus,

∂/∂t [e^{[(k+n)λ+v]t} p_{k,k+n}(s,t)] = λ (k + v/λ + n - 1) C(k + v/λ + n - 2, n-1) e^{(kλ+v)s} e^{nλt} (1 - e^{-λ(t-s)})^{n-1}
    = (k + v/λ + n - 1) C(k + v/λ + n - 2, n-1) e^{(kλ+v)s} λ e^{λt} (e^{λt} - e^{λs})^{n-1}

Integrating both sides, we have

e^{[(k+n)λ+v]t} p_{k,k+n}(s,t) = (k + v/λ + n - 1) C(k + v/λ + n - 2, n-1) e^{(kλ+v)s} ∫_s^t λ e^{λy} (e^{λy} - e^{λs})^{n-1} dy

Let u = e^{λy} - e^{λs}; then du = λ e^{λy} dy. Also, when y = s, u = e^{λs} - e^{λs} = 0, and when y = t, u = e^{λt} - e^{λs}. Therefore,

e^{[(k+n)λ+v]t} p_{k,k+n}(s,t) = (k + v/λ + n - 1) C(k + v/λ + n - 2, n-1) e^{(kλ+v)s} ∫_0^{e^{λt}-e^{λs}} u^{n-1} du
    = (k + v/λ + n - 1) C(k + v/λ + n - 2, n-1) e^{(kλ+v)s} (e^{λt} - e^{λs})^n / n

Equivalently, since (k + v/λ + n - 1) C(k + v/λ + n - 2, n-1)/n = C(k + v/λ + n - 1, n),

p_{k,k+n}(s,t) = C(k + v/λ + n - 1, n) e^{(kλ+v)s} e^{-[(k+n)λ+v]t} (e^{λt} - e^{λs})^n
    = C(k + v/λ + n - 1, n) e^{-(kλ+v)(t-s)} (1 - e^{-λ(t-s)})^n

Therefore,

p_{k,k+n}(s,t) = C(k + v/λ + n - 1, n) e^{-(kλ+v)(t-s)} (1 - e^{-λ(t-s)})^n     (3.22)

This is a negative binomial distribution with p = e^{-λ(t-s)} and q = 1 - e^{-λ(t-s)}. It depends on the length of the time interval t - s and on k; the increments are thus stationary but not independent.
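Because the shape parameter k + v/λ in (3.22) need not be an integer, the binomial coefficient is a generalized one; numerically it is safest to evaluate it through log-gamma functions. A minimal check with assumed values k = 2, λ = 1.0, v = 0.7, t - s = 0.5 (illustrative only):

```python
import math

# Birth-with-immigration transition law (3.22): negative binomial with
# non-integer shape r = k + v/lambda. Illustrative parameter values:
k, lam, v, dt = 2, 1.0, 0.7, 0.5
r = k + v / lam                      # shape 2.7
p = math.exp(-lam * dt)
q = 1.0 - p

def p_kn(n):
    # C(r+n-1, n) p^r q^n = Gamma(r+n)/(Gamma(r) n!) p^r q^n,
    # evaluated via log-gamma to avoid overflow for large n
    logc = math.lgamma(r + n) - math.lgamma(r) - math.lgamma(n + 1)
    return math.exp(logc + r * math.log(p) + n * math.log(q))

total = sum(p_kn(n) for n in range(300))
print(abs(total - 1.0) < 1e-9)  # True
```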
3.3.4 Polya Process

For the Polya process,

λ_k(t) = λ (1 + ak)/(1 + λat)

From equation (3.1), we have

p_{kk}(s,t) = e^{-∫_s^t λ(1+ak)/(1+λax) dx}

Let u = 1 + λax, so du = λa dx, or dx = du/(λa). Limits: when x = s, u = 1 + λas; when x = t, u = 1 + λat.

Therefore,

p_{kk}(s,t) = exp[-(1+ak)/a · ∫_{1+λas}^{1+λat} du/u]
    = exp[-(k + 1/a) ln u |_{1+λas}^{1+λat}]
    = exp[(k + 1/a) ln((1+λas)/(1+λat))]

p_{kk}(s,t) = [(1+λas)/(1+λat)]^{k + 1/a}
For n ≥ 1, equation (2.11) becomes

∂/∂t p_{k,k+n}(s,t) + λ[1 + a(k+n)]/(1+λat) · p_{k,k+n}(s,t) = λ[1 + a(k+n-1)]/(1+λat) · p_{k,k+n-1}(s,t)     (3.23)

For n = 1,

∂/∂t p_{k,k+1}(s,t) + λ[1 + a(k+1)]/(1+λat) · p_{k,k+1}(s,t) = λ(1 + ak)/(1+λat) · p_{kk}(s,t)
    = λ(1 + ak)/(1+λat) · [(1+λas)/(1+λat)]^{k + 1/a}

Integrating factor = e^{∫ λ[1+a(k+1)]/(1+λat) dt} = (1 + λat)^{k + 1/a + 1}

Multiplying the equation above by the integrating factor, we obtain

∂/∂t [(1+λat)^{k + 1/a + 1} p_{k,k+1}(s,t)] = (1+λat)^{k + 1/a + 1} · λ(1+ak)/(1+λat) · [(1+λas)/(1+λat)]^{k + 1/a}
    = λ(1 + ak)(1 + λas)^{k + 1/a}

Integrating, we have

[(1+λay)^{k + 1/a + 1} p_{k,k+1}(s,y)]_s^t = (1 + ak)(1 + λas)^{k + 1/a} λ(t - s)

(1+λat)^{k + 1/a + 1} p_{k,k+1}(s,t) = (1 + ak)(1 + λas)^{k + 1/a} λ(t - s)

Therefore,

p_{k,k+1}(s,t) = (1 + ak)(1 + λas)^{k + 1/a} λ(t - s) / (1+λat)^{k + 1/a + 1}
    = a(k + 1/a)(1 + λas)^{k + 1/a} λ(t - s) / (1+λat)^{k + 1/a + 1}

p_{k,k+1}(s,t) = (k + 1/a) [(1+λas)/(1+λat)]^{k + 1/a} · aλ(t-s)/(1+λat)     (3.24)
When n = 2,

∂/∂t p_{k,k+2}(s,t) + λ[1 + a(k+2)]/(1+λat) · p_{k,k+2}(s,t) = λ[1 + a(k+1)]/(1+λat) · p_{k,k+1}(s,t)
    = aλ(k + 1/a + 1)/(1+λat) · (k + 1/a) [(1+λas)/(1+λat)]^{k + 1/a} · aλ(t-s)/(1+λat)
    = (k + 1/a)(k + 1/a + 1) (1+λas)^{k + 1/a} (aλ)^2 (t-s) / (1+λat)^{k + 1/a + 2}

Integrating factor = e^{∫ λ[1+a(k+2)]/(1+λat) dt} = (1 + λat)^{k + 1/a + 2}

Multiplying the equation above by the integrating factor, we have

∂/∂t [(1+λat)^{k + 1/a + 2} p_{k,k+2}(s,t)] = (k + 1/a)(k + 1/a + 1)(1+λas)^{k + 1/a} (aλ)^2 (t-s)

Integrating, we have

[(1+λay)^{k + 1/a + 2} p_{k,k+2}(s,y)]_s^t = (k + 1/a)(k + 1/a + 1)(1+λas)^{k + 1/a} (aλ)^2 ∫_s^t (y-s) dy
    = (k + 1/a)(k + 1/a + 1)(1+λas)^{k + 1/a} (aλ)^2 (t-s)^2/2

Therefore,

p_{k,k+2}(s,t) = [(k + 1/a)(k + 1/a + 1)/2] (1+λas)^{k + 1/a} (aλ)^2 (t-s)^2 / (1+λat)^{k + 1/a + 2}
    = [(k + 1/a)(k + 1/a + 1)/2] [(1+λas)/(1+λat)]^{k + 1/a} [λa(t-s)/(1+λat)]^2
    = C(k + 1/a + 1, 2) [(1+λas)/(1+λat)]^{k + 1/a} [λa(t-s)/(1+λat)]^2     (3.25)
For n = 3,

∂/∂t p_{k,k+3}(s,t) + λ[1 + a(k+3)]/(1+λat) · p_{k,k+3}(s,t) = λ[1 + a(k+2)]/(1+λat) · p_{k,k+2}(s,t)
    = (k + 1/a)(k + 1/a + 1)(k + 1/a + 2) (1+λas)^{k + 1/a} (λa)^3 (t-s)^2 / [2 (1+λat)^{k + 1/a + 3}]

Integrating factor = e^{∫ λ[1+a(k+3)]/(1+λat) dt} = (1 + λat)^{k + 1/a + 3}

Multiplying the equation above by the integrating factor, we obtain

∂/∂t [(1+λat)^{k + 1/a + 3} p_{k,k+3}(s,t)] = [(k + 1/a)(k + 1/a + 1)(k + 1/a + 2)/2] (1+λas)^{k + 1/a} (λa)^3 (t-s)^2

Integrating, we have

[(1+λay)^{k + 1/a + 3} p_{k,k+3}(s,y)]_s^t = [(k + 1/a)(k + 1/a + 1)(k + 1/a + 2)/2] (1+λas)^{k + 1/a} (λa)^3 ∫_s^t (y-s)^2 dy

We wish to integrate ∫_s^t (y-s)^2 dy. Let u = y - s, so du/dy = 1. Limits: when y = s, u = 0, and when y = t, u = t - s. Therefore,

∫_s^t (y-s)^2 dy = ∫_0^{t-s} u^2 du = [u^3/3]_0^{t-s} = (t-s)^3/3

Thus,

(1+λat)^{k + 1/a + 3} p_{k,k+3}(s,t) = [(k + 1/a)(k + 1/a + 1)(k + 1/a + 2)/(2·3)] (1+λas)^{k + 1/a} (λa)^3 (t-s)^3

Rearranging,

p_{k,k+3}(s,t) = [(k + 1/a)(k + 1/a + 1)(k + 1/a + 2)/3!] [(1+λas)/(1+λat)]^{k + 1/a} [λa(t-s)/(1+λat)]^3

Therefore,

p_{k,k+3}(s,t) = C(k + 1/a + 2, 3) [(1+λas)/(1+λat)]^{k + 1/a} [λa(t-s)/(1+λat)]^3     (3.26)
By induction, we assume that when n = j - 1,

p_{k,k+j-1}(s,t) = C(k + 1/a + j - 2, j-1) [(1+λas)/(1+λat)]^{k + 1/a} [λa(t-s)/(1+λat)]^{j-1}     (3.27)

is true. When n = j,

∂/∂t p_{k,k+j}(s,t) + λ[1 + a(k+j)]/(1+λat) · p_{k,k+j}(s,t) = λ[1 + a(k+j-1)]/(1+λat) · p_{k,k+j-1}(s,t)

Equivalently, since λ[1 + a(k+j-1)] = aλ(k + 1/a + j - 1), the right-hand side is

aλ(k + 1/a + j - 1) C(k + 1/a + j - 2, j-1) (1+λas)^{k + 1/a} (λa)^{j-1} (t-s)^{j-1} / (1+λat)^{k + 1/a + j}

Now we solve the equation above by use of the integrating factor:

Integrating factor = e^{∫ λ[1+a(k+j)]/(1+λat) dt} = (1 + λat)^{k + 1/a + j}

Multiplying the equation above by the integrating factor, we have

∂/∂t [(1+λat)^{k + 1/a + j} p_{k,k+j}(s,t)] = (k + 1/a + j - 1) C(k + 1/a + j - 2, j-1) (1+λas)^{k + 1/a} (aλ)^j (t-s)^{j-1}

Integrating, we have

[(1+λay)^{k + 1/a + j} p_{k,k+j}(s,y)]_s^t = (k + 1/a + j - 1) C(k + 1/a + j - 2, j-1) (1+λas)^{k + 1/a} (aλ)^j ∫_s^t (y-s)^{j-1} dy

We wish to integrate ∫_s^t (y-s)^{j-1} dy. Let u = y - s, so du/dy = 1. Limits: when y = s, u = 0, and when y = t, u = t - s. Therefore,

∫_s^t (y-s)^{j-1} dy = ∫_0^{t-s} u^{j-1} du = [u^j/j]_0^{t-s} = (t-s)^j/j

Thus, using (k + 1/a + j - 1) C(k + 1/a + j - 2, j-1)/j = C(k + 1/a + j - 1, j),

(1+λat)^{k + 1/a + j} p_{k,k+j}(s,t) = C(k + 1/a + j - 1, j) (1+λas)^{k + 1/a} (aλ)^j (t-s)^j

Equivalently,

p_{k,k+j}(s,t) = C(k + 1/a + j - 1, j) [(1+λas)/(1+λat)]^{k + 1/a} [λa(t-s)/(1+λat)]^j
    = C(j + k + 1/a - 1, j) [(1+λas)/(1+λat)]^{k + 1/a} [λa(t-s)/(1+λat)]^j

Further, let

p = (1+λas)/(1+λat)

Then

q = 1 - p
  = 1 - (1+λas)/(1+λat)
  = (1 + λat - 1 - λas)/(1+λat)
  = (λat - λas)/(1+λat)
  = λa(t-s)/(1+λat)

Thus,

p_{k,k+j}(s,t) = C(j + k + 1/a - 1, j) [(1+λas)/(1+λat)]^{k + 1/a} [λa(t-s)/(1+λat)]^j,  j = 0, 1, 2, 3, ...,     (3.28)

is a negative binomial distribution with p = (1+λas)/(1+λat) and q = λa(t-s)/(1+λat). It depends on the length of the time interval t - s and on k and is thus stationary and not independent.
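The Polya law (3.28) can be checked numerically. A minimal sketch with assumed values k = 1, λ = 2.0, a = 0.5, s = 0.3, t = 1.1 (illustrative only): p + q should equal one by the algebra above, and the probabilities should sum to one.

```python
import math

# Polya transition law (3.28): negative binomial with shape k + 1/a,
# p = (1 + lambda*a*s)/(1 + lambda*a*t), q = lambda*a*(t-s)/(1 + lambda*a*t).
k, lam, a, s, t = 1, 2.0, 0.5, 0.3, 1.1   # illustrative values
p = (1 + lam * a * s) / (1 + lam * a * t)
q = lam * a * (t - s) / (1 + lam * a * t)
r = k + 1 / a                              # shape (here 3.0)

def p_kj(j):
    # generalized C(r+j-1, j) p^r q^j via log-gamma
    logc = math.lgamma(r + j) - math.lgamma(r) - math.lgamma(j + 1)
    return math.exp(logc + r * math.log(p) + j * math.log(q))

print(abs(p + q - 1.0) < 1e-12)                            # True
print(abs(sum(p_kj(j) for j in range(400)) - 1.0) < 1e-9)  # True
```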
CHAPTER FOUR

Probability Distributions for Transition Probabilities Using Lagrange's Method

4.1 Introduction

In this chapter, we shall solve equation (2.11) using Lagrange's method. We shall solve this equation for all four pure birth processes, starting with the Poisson process, then the simple birth process, the simple birth process with immigration, and finally the Polya process. The general definitions for this method are given in the first section.

4.2 General Definitions

Now, from (2.11), we know that

∂/∂t p_{k,k+n}(s,t) + λ_{k+n}(t) p_{k,k+n}(s,t) = λ_{k+n-1}(t) p_{k,k+n-1}(s,t).

Let p_{ij}(s,t) = p_{k,k+n}(s,t). Then equation (2.11) becomes

∂/∂t p_{ij}(s,t) = -λ_j(t) p_{ij}(s,t) + λ_{j-1}(t) p_{i,j-1}(s,t)     (4.1)

Define the transition probability generating function

G(s,t,z) = Σ_{j=0}^{∞} p_{ij}(s,t) z^j = Σ_{j=i}^{∞} p_{ij}(s,t) z^j

(since p_{ij}(s,t) = 0 for j < i). Then

∂G(s,t,z)/∂t = Σ_{j≥i} z^j ∂/∂t p_{ij}(s,t)

∂G(s,t,z)/∂z = Σ_{j≥i} j p_{ij}(s,t) z^{j-1},  so that  z ∂G(s,t,z)/∂z = Σ_{j≥i} j p_{ij}(s,t) z^j

Boundary condition: p_{ii}(s,s) = 1 and p_{ij}(s,s) = 0 for j ≠ i, so

G(s,s,z) = Σ_{j=i}^{∞} p_{ij}(s,s) z^j = z^i

Multiplying equation (4.1) by z^j and summing over j, we obtain

Σ_{j≥i} ∂/∂t p_{ij}(s,t) z^j = -Σ_{j≥i} λ_j(t) p_{ij}(s,t) z^j + z Σ_{j≥i} λ_{j-1}(t) p_{i,j-1}(s,t) z^{j-1}     (4.2)
4.3 Poisson Process

When λ_j(t) = λ, equation (4.2) becomes

Σ_{j≥i} ∂/∂t p_{ij}(s,t) z^j = -λ Σ_{j≥i} p_{ij}(s,t) z^j + zλ Σ_{j≥i} p_{i,j-1}(s,t) z^{j-1}
    = (z - 1) λ Σ_{j≥i} p_{ij}(s,t) z^j     (4.3)

Therefore,

∂G(s,t,z)/∂t = (z - 1) λ G(s,t,z)

[∂G(s,t,z)/∂t] / G(s,t,z) = (z - 1)λ

∂/∂t log G(s,t,z) = (z - 1)λ

Integrating with respect to t,

log G(s,t,z) = (z - 1)λt + c

Taking exponentials of both sides, we have

G(s,t,z) = e^{(z-1)λt + c} = K e^{(z-1)λt}

When t = s,

G(s,s,z) = K e^{(z-1)λs}

But

G(s,s,z) = Σ_{j=i}^{∞} p_{ij}(s,s) z^j = z^i

Thus

z^i = K e^{(z-1)λs},  so  K = z^i e^{-(z-1)λs}

G(s,t,z) = z^i e^{-(z-1)λs} e^{(z-1)λt} = z^i e^{(z-1)λ(t-s)}     (4.4)

p_{ik}(s,t) is the coefficient of z^k in G(s,t,z). Now

G(s,t,z) = z^i e^{(z-1)λ(t-s)} = z^i e^{-λ(t-s)} e^{λ(t-s)z}

Therefore,

G(s,t,z) = z^i e^{-λ(t-s)} Σ_{n=0}^{∞} [λ(t-s)z]^n / n! = Σ_{n=0}^{∞} e^{-λ(t-s)} [λ(t-s)]^n / n! · z^{n+i}

Let k = n + i. Then p_{ik}(s,t), the coefficient of z^{n+i}, is

p_{ik}(s,t) = e^{-λ(t-s)} [λ(t-s)]^n / n!,  n = 0, 1, 2, ...     (4.5)

which is a Poisson distribution with parameter λ(t-s).
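Two standard pgf identities give a quick check of (4.4): G(s,t,1) = 1 because the probabilities sum to one, and G'(s,t,1) = i + λ(t-s), the mean of the distribution read off the pgf. A minimal numerical sketch with assumed values i = 2 and λ(t-s) = 1.5 (illustrative only):

```python
import math

# pgf of the Poisson transition law (4.4): G(z) = z^i e^{(z-1)x},
# with x = lambda(t-s). Illustrative: i = 2, x = 1.5.
i, x = 2, 1.5
G = lambda z: z ** i * math.exp((z - 1) * x)

# G(1) = 1 (total probability) and G'(1) = i + x (mean of N_t given N_s = i)
h = 1e-6
dG1 = (G(1 + h) - G(1 - h)) / (2 * h)   # central difference for G'(1)
print(abs(G(1.0) - 1.0) < 1e-12)  # True
print(abs(dG1 - (i + x)) < 1e-6)  # True
```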
4.4 Simple Birth

When λ_j(t) = jλ, equation (4.2) becomes

Σ_{j≥i} ∂/∂t p_{ij}(s,t) z^j = -Σ_{j≥i} jλ p_{ij}(s,t) z^j + z Σ_{j≥i} (j-1)λ p_{i,j-1}(s,t) z^{j-1}
    = -λz Σ_{j≥i} j p_{ij}(s,t) z^{j-1} + λz^2 Σ_{j≥i} (j-1) p_{i,j-1}(s,t) z^{j-2}     (4.6)

Therefore,

∂G(s,t,z)/∂t = λz(z-1) ∂G(s,t,z)/∂z

∂G(s,t,z)/∂t - λz(z-1) ∂G(s,t,z)/∂z = 0

The auxiliary equations are

dt/1 = dz/[-λz(z-1)] = dG(s,t,z)/0     (4.7)

Taking the first two equations, we have

dt/1 = dz/[-λz(z-1)]

-λ dt = dz/[z(z-1)]

Integrating, we obtain

-λ ∫ dt = ∫ dz/[z(z-1)] = ∫ [-1/z + 1/(z-1)] dz

-λt = -ln z + ln(z-1) + c

c = -λt + ln z - ln(z-1) = -λt + ln[z/(z-1)]

Taking another pair, we have

dt/1 = dG(s,t,z)/0

0 dt = dG(s,t,z)

Integrating both sides,

G(s,t,z) = constant

Therefore G(s,t,z) = ψ(u), where

u = e^{-λt + ln[z/(z-1)]} = e^{-λt} z/(z-1)

Then

u(z-1) = z e^{-λt}
uz - u = z e^{-λt}
uz - z e^{-λt} = u
z(u - e^{-λt}) = u
z = u/(u - e^{-λt})

Using the boundary condition at t = s: G(s,s,z) = z^i, and z = u/(u - e^{-λs}) when t = s, so

ψ(u) = [u/(u - e^{-λs})]^i

Now, with u = e^{-λt} z/(z-1),

G(s,t,z) = [ (e^{-λt} z/(z-1)) / (e^{-λt} z/(z-1) - e^{-λs}) ]^i
    = [ z e^{-λt} / (z e^{-λt} - (z-1) e^{-λs}) ]^i
    = [ z e^{-λ(t-s)} / (z e^{-λ(t-s)} - z + 1) ]^i
    = [ z e^{-λ(t-s)} / (1 - z(1 - e^{-λ(t-s)})) ]^i     (4.8)

Let p = e^{-λ(t-s)} and q = 1 - e^{-λ(t-s)}. p_{ik}(s,t) is the coefficient of z^k in G(s,t,z):

G(s,t,z) = z^i p^i (1 - zq)^{-i}
    = z^i p^i Σ_{n=0}^{∞} C(-i, n) (-zq)^n
    = z^i p^i Σ_{n=0}^{∞} C(i+n-1, n) (-1)^n (-1)^n q^n z^n
    = Σ_{n=0}^{∞} C(i+n-1, n) p^i q^n z^{i+n}

Let k = i + n, n = 0, 1, 2, ... Then

p_{ik}(s,t) = C(i+n-1, n) p^i q^n

Thus

p_{ik}(s,t) = C(i+n-1, n) e^{-iλ(t-s)} (1 - e^{-λ(t-s)})^n     (4.9)

where k = i + n and n = 0, 1, 2, ..., which is a negative binomial distribution.
4.5 Simple Birth with Immigration

In this case λ_j(t) = jλ + v.

Equation (4.2) now becomes

Σ_{j≥i} ∂/∂t p_{ij}(s,t) z^j = -Σ_{j≥i} (jλ + v) p_{ij}(s,t) z^j + z Σ_{j≥i} [(j-1)λ + v] p_{i,j-1}(s,t) z^{j-1}
    = -λz Σ_{j≥i} j p_{ij}(s,t) z^{j-1} - v G(s,t,z) + λz^2 Σ_{j≥i} (j-1) p_{i,j-1}(s,t) z^{j-2} + zv G(s,t,z)

Therefore,

∂G(s,t,z)/∂t = λz(z-1) ∂G(s,t,z)/∂z + v(z-1) G(s,t,z)

∂G(s,t,z)/∂t - λz(z-1) ∂G(s,t,z)/∂z = v(z-1) G(s,t,z)     (4.10)

The auxiliary equations are

dt/1 = dz/[-λz(z-1)] = dG(s,t,z)/[v(z-1) G(s,t,z)]     (4.11)

Selecting the first two,

dt/1 = dz/[-λz(z-1)]

-λ dt = dz/[z(z-1)] = [-1/z + 1/(z-1)] dz

Integrating both sides,

-λt = -ln z + ln(z-1) + c

c = -λt + ln z - ln(z-1) = -λt + ln[z/(z-1)]     (i)

Taking the last two, we have

dz/[-λz(z-1)] = dG(s,t,z)/[v(z-1) G(s,t,z)]

-(v/λ) dz/z = dG(s,t,z)/G(s,t,z)

Integrating both sides, we have

-(v/λ) ln z + c = ln G(s,t,z)

so that

K = z^{v/λ} G(s,t,z)     (ii)

From (i), let

u = e^{-λt + ln[z/(z-1)]} = e^{-λt} z/(z-1)

From (i) and (ii),

z^{v/λ} G(s,t,z) = ψ(u)

G(s,t,z) = z^{-v/λ} ψ(u)

Boundary condition at t = s:

G(s,s,z) = z^i = z^{-v/λ} ψ(u),  so  ψ(u) = z^{i + v/λ},  where u = e^{-λs} z/(z-1)

As before,

u(z-1) = e^{-λs} z
uz - u = e^{-λs} z
z(u - e^{-λs}) = u
z = u/(u - e^{-λs})

Therefore ψ(u) = [u/(u - e^{-λs})]^{i + v/λ}, and, with u = e^{-λt} z/(z-1),

G(s,t,z) = z^{-v/λ} [ (e^{-λt} z/(z-1)) / (e^{-λt} z/(z-1) - e^{-λs}) ]^{i + v/λ}
    = z^{-v/λ} [ z e^{-λt} / (z e^{-λt} - (z-1) e^{-λs}) ]^{i + v/λ}
    = z^{-v/λ} [ z e^{-λ(t-s)} / (1 - z(1 - e^{-λ(t-s)})) ]^{i + v/λ}
    = z^i [e^{-λ(t-s)}]^{i + v/λ} / [1 - z(1 - e^{-λ(t-s)})]^{i + v/λ}     (4.12)

Let p = e^{-λ(t-s)} and q = 1 - p = 1 - e^{-λ(t-s)}. Then

G(s,t,z) = z^i p^{i + v/λ} / (1 - qz)^{i + v/λ}

p_{ik}(s,t) is the coefficient of z^k in G(s,t,z):

G(s,t,z) = z^i p^{i + v/λ} Σ_{n=0}^{∞} C(-(i + v/λ), n) (-qz)^n
    = z^i p^{i + v/λ} Σ_{n=0}^{∞} C(i + v/λ + n - 1, n) q^n z^n
    = Σ_{n=0}^{∞} C(i + v/λ + n - 1, n) p^{i + v/λ} q^n z^{i+n}

With k = i + n, n = 0, 1, 2, ...,

p_{ik}(s,t) = p_{i,i+n}(s,t) = C(i + v/λ + n - 1, n) p^{i + v/λ} q^n,  n = 0, 1, 2, ...

Thus

p_{ik}(s,t) = C(i + v/λ + n - 1, n) [e^{-λ(t-s)}]^{i + v/λ} (1 - e^{-λ(t-s)})^n     (4.13)

This is a negative binomial distribution function.
56
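A consistency check on (4.13): the mean of the negative binomial increment distribution must reproduce the known mean growth E[X(t)] = i e^{λ(t-s)} + (v/λ)(e^{λ(t-s)} - 1) of a linear birth process with immigration. The sketch below (parameter values are arbitrary) sums the pmf directly, handling the non-integer shape i + v/λ through log-gamma:

```python
from math import exp, lgamma, log

lam, v, i0, tau = 1.0, 0.7, 3, 0.4
r = i0 + v / lam                     # negative binomial shape in (4.13)
q = 1 - exp(-lam * tau)

def pmf(n):
    # C(r+n-1, n) * e^{-(i0*lam+v)*tau} * q^n with a real-valued shape r
    logc = lgamma(r + n) - lgamma(r) - lgamma(n + 1)
    return exp(logc - (i0 * lam + v) * tau + n * log(q))

mass = sum(pmf(n) for n in range(250))
mean_state = sum((i0 + n) * pmf(n) for n in range(250))
expected = i0 * exp(lam * tau) + (v / lam) * (exp(lam * tau) - 1)
```

The truncation at 250 terms is far beyond the tail of the distribution for these values, so the sums are effectively exact.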
4.6 Polya Process

In this case

λ_j(t) = λ(1 + aj)/(1 + λat).

Equation (4.2) now becomes

∂/∂t Σ_j p_{ij}(s,t) z^j = -(λ/(1 + λat)) Σ_j (1 + aj) p_{ij}(s,t) z^j + z (λ/(1 + λat)) Σ_j (1 + a(j-1)) p_{i,j-1}(s,t) z^{j-1}.

Expanding and summing term by term, we have

((1 + λat)/λ) ∂G/∂t = -G - az ∂G/∂z + zG + az² ∂G/∂z
                    = (z - 1) G + az(z - 1) ∂G/∂z.

Rewriting the above equation, we have

((1 + λat)/λ) ∂G(s,t,z)/∂t - az(z - 1) ∂G(s,t,z)/∂z = (z - 1) G(s,t,z).          (4.14)

Solving using Lagrange's method, the auxiliary equations are

λ dt/(1 + λat) = dz/(-az(z - 1)) = dG/((z - 1) G).          (4.15)

Taking the first two equations, we have

aλ dt/(1 + λat) = -dz/(z(z - 1)) = [ 1/z - 1/(z - 1) ] dz.

Integrating both sides, we have

ln(1 + λat) = ln z - ln(z - 1) + c,
ln[ z/((1 + λat)(z - 1)) ] = c.

So

u(t) = z/[(1 + λat)(z - 1)]          (i)

is constant along characteristics. Taking the last two, we have

dz/(-az(z - 1)) = dG/((z - 1) G).

Multiplying both sides by (z - 1), we have

-(1/a) dz/z = dG/G.

Integrating,

-(1/a) ln z + c = ln G(s,t,z),

so that z^{1/a} G(s,t,z) is constant along characteristics.          (ii)

From (i) and (ii), we have

z^{1/a} G(s,t,z) = ψ(u(t))   ⇒   G(s,t,z) = z^{-1/a} ψ(u(t)).

Using the boundary condition t = s, we have

G(s,s,z) = z^i = z^{-1/a} ψ(u(s))   ⇒   ψ(u(s)) = z^{i + 1/a}.

From (i),

u(s) = z/[(1 + λas)(z - 1)],
u(s)(1 + λas)(z - 1) = z,
z [ u(s)(1 + λas) - 1 ] = u(s)(1 + λas).

Therefore

z = u(s)(1 + λas) / [ u(s)(1 + λas) - 1 ].

Thus

ψ(u(s)) = [ u(s)(1 + λas) / ( u(s)(1 + λas) - 1 ) ]^{i + 1/a}.

Therefore

G(s,t,z) = z^{-1/a} [ u(t)(1 + λas) / ( u(t)(1 + λas) - 1 ) ]^{i + 1/a},   u(t) = z/[(1 + λat)(z - 1)]
         = z^{-1/a} [ z(1 + λas) / ( z(1 + λas) - (1 + λat)(z - 1) ) ]^{i + 1/a}.

The denominator simplifies:

z(1 + λas) - (1 + λat)(z - 1) = 1 + λat - λaz(t - s) = (1 + λat) [ 1 - z λa(t - s)/(1 + λat) ].

Therefore

G(s,t,z) = z^i [ (1 + λas)/(1 + λat) ]^{i + 1/a} [ 1 - z λa(t - s)/(1 + λat) ]^{-(i + 1/a)}.          (4.16)

Let

p = (1 + λas)/(1 + λat)   and   q = 1 - p = λa(t - s)/(1 + λat).

p_{i,k}(s,t) is the coefficient of z^k in G(s,t,z). Expanding,

G(s,t,z) = z^i p^{i + 1/a} (1 - qz)^{-(i + 1/a)}
         = Σ_{n=0}^∞ C(i + 1/a + n - 1, n) p^{i + 1/a} q^n z^{i + n}.

Let k = i + n, n = 0, 1, 2, …. Then

p_{i,i+n}(s,t) = C(i + 1/a + n - 1, n) p^{i + 1/a} q^n.

Thus

p_{i,i+n}(s,t) = C(i + 1/a + n - 1, n) [ (1 + λas)/(1 + λat) ]^{i + 1/a} [ λa(t - s)/(1 + λat) ]^n          (4.17)

for n = 0, 1, 2, …, which is a negative binomial distribution.
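Equation (4.17) can be checked in the same spirit: the sketch below (all parameter values arbitrary) confirms that the pmf sums to one and that its mean equals the negative binomial mean (i + 1/a)·q/p, again using log-gamma for the non-integer shape:

```python
from math import exp, lgamma, log

lam, a, i0, s, t = 1.0, 0.5, 2, 0.3, 1.2
r = i0 + 1 / a                            # shape i + 1/a in (4.17)
p = (1 + lam * a * s) / (1 + lam * a * t)
q = lam * a * (t - s) / (1 + lam * a * t) # equals 1 - p

def pmf(n):
    # C(r+n-1, n) p^r q^n with a real-valued shape r
    logc = lgamma(r + n) - lgamma(r) - lgamma(n + 1)
    return exp(logc + r * log(p) + n * log(q))

mass = sum(pmf(n) for n in range(400))
mean_incr = sum(n * pmf(n) for n in range(400))
```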
CHAPTER FIVE
INFINITESIMAL GENERATOR
5.1 Introduction
Now, we wish to solve equation (2.11) below, which was derived in Chapter Two. This time, we will use a matrix method.
∂/∂t p_{k,k+n}(s,t) + λ_{k+n}(t) p_{k,k+n}(s,t) = λ_{k+n-1}(t) p_{k,k+n-1}(s,t).

For n = 0, 1, 2, 3, …, with constant rates λ_j and initial state j, equation (2.11) becomes

n = 0:  ∂/∂t p_{j,j}(s,t) = -λ_j p_{j,j}(s,t)

n = 1:  ∂/∂t p_{j,j+1}(s,t) = λ_j p_{j,j}(s,t) - λ_{j+1} p_{j,j+1}(s,t)

n = 2:  ∂/∂t p_{j,j+2}(s,t) = λ_{j+1} p_{j,j+1}(s,t) - λ_{j+2} p_{j,j+2}(s,t)

n = 3:  ∂/∂t p_{j,j+3}(s,t) = λ_{j+2} p_{j,j+2}(s,t) - λ_{j+3} p_{j,j+3}(s,t)

n = 4:  ∂/∂t p_{j,j+4}(s,t) = λ_{j+3} p_{j,j+3}(s,t) - λ_{j+4} p_{j,j+4}(s,t)

and, in general,

∂/∂t p_{j,j+n}(s,t) = λ_{j+n-1} p_{j,j+n-1}(s,t) - λ_{j+n} p_{j,j+n}(s,t).

These equations form a matrix system

d/dt P(s,t) = A P(s,t),          (5.1)

where P(s,t) = ( p_{j,j}(s,t), p_{j,j+1}(s,t), p_{j,j+2}(s,t), … )ᵀ and A is the lower-bidiagonal generator matrix

      | -λ_j      0         0         0        ... |
      |  λ_j     -λ_{j+1}   0         0        ... |
A  =  |  0        λ_{j+1}  -λ_{j+2}   0        ... |
      |  0        0         λ_{j+2}  -λ_{j+3}  ... |
      |  ...      ...       ...       ...      ... |
5.2 Solving the Matrix Equation

Writing the system as dP(s,t)/dt = A P(s,t),

d log P(s,t) = A dt.

Integrating both sides from s to t, we have

∫_s^t d log P(s,y) = ∫_s^t A dy,
log P(s,t) - log P(s,s) = A (t - s).

Since P(s,s) = I, log P(s,s) = 0, and therefore

log P(s,t) = A (t - s).

Taking the exponential of both sides,

P(s,t) = e^{A(t - s)}.

Therefore, from the Taylor expansion,

P(s,t) = Σ_{k=0}^∞ ((t - s)^k / k!) A^k = I + Σ_{n=1}^∞ ((t - s)^n / n!) A^n.

If all the eigenvalues μ_i of A are distinct, then

P(s,t) = e^{A(t-s)} = U diag( e^{μ_i (t-s)} ) U^{-1},          (5.2)

where the columns of U are the eigenvectors of A.

Obtaining the eigenvalues of A from |A - μI| = 0: since A - μI is lower triangular, its determinant is the product of the diagonal entries, so for the (n+1) × (n+1) truncation

(-λ_j - μ)(-λ_{j+1} - μ)(-λ_{j+2} - μ) ⋯ (-λ_{j+n} - μ) = 0.

Solving, we obtain

μ = -λ_j  or  μ = -λ_{j+1}  or  μ = -λ_{j+2}  or … or  μ = -λ_{j+n}.

Assume that the λ_{j+i}, i = 0, 1, 2, …, n, are distinct. We then obtain n + 1 distinct eigenvalues. Next we obtain the eigenvectors corresponding to these n + 1 eigenvalues. Let z = (z_j, z_{j+1}, z_{j+2}, …, z_{j+n})ᵀ. The eigenvector e_{j+i} corresponding to μ = -λ_{j+i} satisfies A z = -λ_{j+i} z.

For μ = -λ_j, the row for z_{j+1} gives

λ_j z_j - λ_{j+1} z_{j+1} = -λ_j z_{j+1}   ⇒   z_{j+1} = (λ_j / (λ_{j+1} - λ_j)) z_j.

Let z_j = 1. Then

z_{j+1} = λ_j / (λ_{j+1} - λ_j).

Similarly, λ_{j+1} z_{j+1} - λ_{j+2} z_{j+2} = -λ_j z_{j+2} gives

z_{j+2} = (λ_{j+1} / (λ_{j+2} - λ_j)) z_{j+1} = λ_j λ_{j+1} / [(λ_{j+1} - λ_j)(λ_{j+2} - λ_j)],

and λ_{j+2} z_{j+2} - λ_{j+3} z_{j+3} = -λ_j z_{j+3} gives

z_{j+3} = λ_j λ_{j+1} λ_{j+2} / [(λ_{j+1} - λ_j)(λ_{j+2} - λ_j)(λ_{j+3} - λ_j)].

Continuing in the same way,

z_{j+n} = ∏_{k=1}^{n} λ_{j+k-1} / (λ_{j+k} - λ_j).

The eigenvector e_j is therefore

e_j = ( 1,  λ_j/(λ_{j+1} - λ_j),  λ_j λ_{j+1}/[(λ_{j+1} - λ_j)(λ_{j+2} - λ_j)],  …,  ∏_{k=1}^{n} λ_{j+k-1}/(λ_{j+k} - λ_j) )ᵀ.

In the same way,

e_{j+1} = ( 0,  1,  λ_{j+1}/(λ_{j+2} - λ_{j+1}),  λ_{j+1} λ_{j+2}/[(λ_{j+2} - λ_{j+1})(λ_{j+3} - λ_{j+1})],  …,  ∏_{k=2}^{n} λ_{j+k-1}/(λ_{j+k} - λ_{j+1}) )ᵀ,

e_{j+2} = ( 0,  0,  1,  λ_{j+2}/(λ_{j+3} - λ_{j+2}),  …,  ∏_{k=3}^{n} λ_{j+k-1}/(λ_{j+k} - λ_{j+2}) )ᵀ,

and so on. Let

U = ( e_j  e_{j+1}  e_{j+2}  …  e_{j+n} ),          (5.3)

a lower triangular matrix with unit diagonal, whose (n, v) entry is

U_{n,v} = ∏_{k=v+1}^{n} λ_{j+k-1} / (λ_{j+k} - λ_{j+v}),   n ≥ v.

The inverse U^{-1} is again lower triangular with unit diagonal; the only column of U^{-1} that will be needed is its first, whose entries are

(U^{-1})_{v,0} = ∏_{k=0}^{v-1} λ_{j+k} / (λ_{j+k} - λ_{j+v}),   v = 0, 1, …, n,          (5.4)

and

e^{diag(μ_i)(t-s)} = diag( e^{-λ_j(t-s)}, e^{-λ_{j+1}(t-s)}, …, e^{-λ_{j+n}(t-s)} ).          (5.5)

Then

P(s,t) = U e^{diag(μ_i)(t-s)} U^{-1} P(s,s),

where the initial vector is P(s,s) = (1, 0, 0, …, 0)ᵀ, since the process starts in state j. Carrying out the multiplication, row n gives

p_{j,j+n}(s,t) = Σ_{v=0}^{n} U_{n,v} e^{-λ_{j+v}(t-s)} (U^{-1})_{v,0}
              = Σ_{v=0}^{n} [ ∏_{k=v+1}^{n} λ_{j+k-1}/(λ_{j+k} - λ_{j+v}) ] e^{-λ_{j+v}(t-s)} [ ∏_{k=0}^{v-1} λ_{j+k}/(λ_{j+k} - λ_{j+v}) ].

The rate factors in the numerators combine into ∏_{k=0}^{n-1} λ_{j+k} independently of v, and the denominators combine into ∏_{k≠v} (λ_{j+k} - λ_{j+v}). Therefore,

p_{j,j+n}(s,t) = ( ∏_{k=0}^{n-1} λ_{j+k} ) Σ_{v=0}^{n} e^{-λ_{j+v}(t-s)} / ∏_{k=0, k≠v}^{n} (λ_{j+k} - λ_{j+v}).
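The spectral-decomposition result above can be verified numerically for any set of distinct rates: the sketch below (rates chosen arbitrarily for illustration) integrates dP/dt = AP directly with a Runge–Kutta step and compares each component with the closed-form sum.

```python
from math import exp

rates = [0.9, 1.7, 2.4, 3.1, 4.6, 5.2]   # arbitrary distinct rates λ_j, λ_{j+1}, ...
N = len(rates) - 1
tau = 0.8

def spectral(n):
    # (prod_{k<n} λ_{j+k}) * sum_v e^{-λ_{j+v} τ} / prod_{k≠v} (λ_{j+k} - λ_{j+v})
    prod_rates = 1.0
    for k in range(n):
        prod_rates *= rates[k]
    total = 0.0
    for v in range(n + 1):
        denom = 1.0
        for k in range(n + 1):
            if k != v:
                denom *= rates[k] - rates[v]
        total += exp(-rates[v] * tau) / denom
    return prod_rates * total

def deriv(pv):
    # lower-bidiagonal generator A applied to the probability vector pv
    return [-rates[n] * pv[n] + (rates[n - 1] * pv[n - 1] if n > 0 else 0.0)
            for n in range(N + 1)]

pv = [1.0] + [0.0] * N               # start in state j
h, steps = 1e-3, 800                 # h * steps = tau
for _ in range(steps):
    k1 = deriv(pv)
    k2 = deriv([pv[j] + h / 2 * k1[j] for j in range(N + 1)])
    k3 = deriv([pv[j] + h / 2 * k2[j] for j in range(N + 1)])
    k4 = deriv([pv[j] + h * k3[j] for j in range(N + 1)])
    pv = [pv[j] + h / 6 * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j]) for j in range(N + 1)]

max_err = max(abs(spectral(n) - pv[n]) for n in range(N + 1))
```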
5.3 Simple Birth Process

In this case λ_j = jλ. Substituting into the spectral formula,

p_{j,j+n}(s,t) = Σ_{v=0}^{n} e^{-(j+v)λ(t-s)} ( ∏_{k=0}^{n-1} (j+k)λ ) / ∏_{k=0, k≠v}^{n} ( (j+k)λ - (j+v)λ )
              = e^{-jλ(t-s)} Σ_{v=0}^{n} e^{-vλ(t-s)} ( ∏_{k=0}^{n-1} (j+k) ) / ∏_{k=0, k≠v}^{n} (k - v),

after cancelling the factors of λ. Now

∏_{k=0}^{n-1} (j+k) = (j+n-1)!/(j-1)!   and   ∏_{k=0, k≠v}^{n} (k - v) = (-1)^v v! (n-v)!.

Therefore

p_{j,j+n}(s,t) = e^{-jλ(t-s)} ((j+n-1)!/(j-1)!) Σ_{v=0}^{n} (-1)^v e^{-vλ(t-s)} / (v! (n-v)!)
              = C(j+n-1, n) e^{-jλ(t-s)} Σ_{v=0}^{n} (-1)^v C(n, v) e^{-vλ(t-s)}
              = C(j+n-1, n) e^{-jλ(t-s)} (1 - e^{-λ(t-s)})^n.

Therefore

p_{j,j+n}(s,t) = C(j+n-1, n) e^{-jλ(t-s)} (1 - e^{-λ(t-s)})^n,   n = 0, 1, 2, 3, …          (5.8)
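As a check that (5.8) agrees with the general spectral formula, the sketch below (λ, j and the horizon are arbitrary choices) evaluates both expressions for λ_k = kλ:

```python
from math import comb, exp

lam, j0, tau = 0.8, 3, 0.6
x = exp(-lam * tau)

def spectral(n):
    # Section 5.2 formula with λ_{j+k} = (j0 + k)λ
    r = [(j0 + k) * lam for k in range(n + 1)]
    prod_rates = 1.0
    for k in range(n):
        prod_rates *= r[k]
    total = 0.0
    for v in range(n + 1):
        denom = 1.0
        for k in range(n + 1):
            if k != v:
                denom *= r[k] - r[v]
        total += exp(-r[v] * tau) / denom
    return prod_rates * total

def neg_binom(n):
    # closed form (5.8)
    return comb(j0 + n - 1, n) * x ** j0 * (1 - x) ** n

max_err = max(abs(spectral(n) - neg_binom(n)) for n in range(10))
```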
5.4 Simple Birth with Immigration

Now, recall that

p_{j,j+n}(s,t) = ( ∏_{k=0}^{n-1} λ_{j+k} ) Σ_{i=0}^{n} e^{-λ_{j+i}(t-s)} / ∏_{k=0, k≠i}^{n} (λ_{j+k} - λ_{j+i}).

In this case λ_j = jλ + v. Substituting,

p_{j,j+n}(s,t) = Σ_{i=0}^{n} e^{-((j+i)λ + v)(t-s)} ( ∏_{k=0}^{n-1} ((j+k)λ + v) ) / ∏_{k=0, k≠i}^{n} λ(k - i)
              = e^{-(jλ + v)(t-s)} Σ_{i=0}^{n} e^{-iλ(t-s)} ( ∏_{k=0}^{n-1} (j + v/λ + k) ) / ∏_{k=0, k≠i}^{n} (k - i),

after cancelling the factors of λ. Using ∏_{k=0, k≠i}^{n} (k - i) = (-1)^i i!(n-i)! and

∏_{k=0}^{n-1} (j + v/λ + k) = Γ(j + v/λ + n)/Γ(j + v/λ),

we get

p_{j,j+n}(s,t) = C(j + v/λ + n - 1, n) e^{-(jλ + v)(t-s)} Σ_{i=0}^{n} (-1)^i C(n, i) e^{-iλ(t-s)}
              = C(j + v/λ + n - 1, n) e^{-(jλ + v)(t-s)} (1 - e^{-λ(t-s)})^n.

Thus

p_{j,j+n}(s,t) = C(j + v/λ + n - 1, n) e^{-(jλ + v)(t-s)} (1 - e^{-λ(t-s)})^n,   n = 0, 1, 2, …          (5.9)
5.5 Polya Process

In this case,

λ_j = λ(1 + aj)/(1 + λat).

Substituting into the spectral formula, we have

p_{j,j+n}(s,t) = Σ_{v=0}^{n} e^{-λ(1 + a(j+v))(t-s)/(1+λat)} ( ∏_{k=0}^{n-1} λ(1 + a(j+k))/(1+λat) ) / ∏_{k=0, k≠v}^{n} [ λ(1 + a(j+k))/(1+λat) - λ(1 + a(j+v))/(1+λat) ].

Each difference in the denominator reduces to λa(k - v)/(1 + λat), so the factors of λ/(1 + λat) cancel:

p_{j,j+n}(s,t) = e^{-λ(1 + aj)(t-s)/(1+λat)} Σ_{v=0}^{n} e^{-λav(t-s)/(1+λat)} ( ∏_{k=0}^{n-1} (1 + a(j+k)) ) / ( a^n ∏_{k=0, k≠v}^{n} (k - v) ).

But ∏_{k=0, k≠v}^{n} (k - v) = (-1)^v v!(n-v)!, and

∏_{k=0}^{n-1} (1 + a(j+k)) = a^n ∏_{k=0}^{n-1} (j + 1/a + k) = a^n Γ(j + 1/a + n)/Γ(j + 1/a),

so that

p_{j,j+n}(s,t) = C(j + 1/a + n - 1, n) e^{-λ(1 + aj)(t-s)/(1+λat)} Σ_{v=0}^{n} (-1)^v C(n, v) e^{-λav(t-s)/(1+λat)}
              = C(j + 1/a + n - 1, n) e^{-λ(1 + aj)(t-s)/(1+λat)} (1 - e^{-λa(t-s)/(1+λat)})^n.

Therefore

p_{j,j+n}(s,t) = C(j + 1/a + n - 1, n) e^{-λ(1 + aj)(t-s)/(1+λat)} (1 - e^{-λa(t-s)/(1+λat)})^n.          (5.10)

This is a negative binomial distribution.
CHAPTER SIX
FIRST PASSAGE DISTRIBUTIONS FOR PURE BIRTH PROCESSES

6.1 Derivation of F_n(t) from p_n(t)

Let p_n(t) = Prob[X(t) = n] and F_n(t) = Prob[T_n ≤ t], where X(t) is the population size at time t and T_n is the first time the population size is n.

Consider the following diagram:

Time:  0        T_n            t
Size:  X(0)     X(T_n) = n     X(t)

For a birth process the population size never decreases, so

{T_n ≤ t}  ⟺  {X(t) ≥ n},

and hence

Prob[T_n ≤ t] = Prob[X(t) ≥ n],

i.e.

F_n(t) = 1 - Prob[X(t) < n]
       = 1 - Prob[X(t) ≤ n - 1]
       = 1 - Σ_{j=0}^{n-1} p_j(t).

Thus

F_n(t) = 1 - Σ_{j=0}^{n-1} p_j(t).          (6.1)

Next, the first passage density is

f_n(t) = d F_n(t)/dt.
6.2 Poisson Process

6.2.1 Determining F_n(t) for the Poisson Process

p_j(t) = e^{-λt} (λt)^j / j!;   j = 0, 1, 2, 3, …

Therefore

F_n(t) = 1 - Σ_{j=0}^{n-1} e^{-λt} (λt)^j / j!
       = 1 - e^{-λt} Σ_{j=0}^{n-1} (λt)^j / j!.

But dF_n(t)/dt = f_n(t). Therefore,

f_n(t) = -d/dt [ e^{-λt} Σ_{j=0}^{n-1} (λt)^j / j! ]
       = λ e^{-λt} Σ_{j=0}^{n-1} (λt)^j/j! - e^{-λt} Σ_{j=1}^{n-1} λ j (λt)^{j-1}/j!
       = λ e^{-λt} [ Σ_{j=0}^{n-1} (λt)^j/j! - Σ_{j=1}^{n-1} (λt)^{j-1}/(j-1)! ].

Expanding the expression inside the brackets, we get

1 + λt + (λt)²/2! + … + (λt)^{n-1}/(n-1)! - 1 - λt - (λt)²/2! - … - (λt)^{n-2}/(n-2)!,

and all terms cancel except the last. Therefore

f_n(t) = e^{-λt} λ^n t^{n-1} / (n-1)! = (λ^n/Γ(n)) e^{-λt} t^{n-1},   for t > 0; n = 1, 2, 3, …          (6.2)

which is gamma(n, λ).
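Result (6.2) is the familiar duality between the Poisson counting process and gamma waiting times: Prob[T_n ≤ t] = Prob[X(t) ≥ n]. The sketch below (λ, n, t arbitrary) checks it by comparing the Poisson tail with a Simpson-rule integral of the gamma density:

```python
from math import exp, factorial

lam, n, t = 2.0, 4, 1.3

# F_n(t) from (6.1): one minus the Poisson cdf at n - 1
F_poisson = 1 - sum(exp(-lam * t) * (lam * t) ** j / factorial(j) for j in range(n))

def f(u):
    # gamma(n, lam) density (6.2)
    return lam ** n * exp(-lam * u) * u ** (n - 1) / factorial(n - 1)

m = 2000                              # even number of Simpson subintervals
h = t / m
F_gamma = h / 3 * (f(0) + f(t) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, m)))
```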
6.2.2 Mean and Variance of f_n(t) for the Poisson Process

Mean

E[T_n] = ∫₀^∞ t f_n(t) dt = (λ^n/Γ(n)) ∫₀^∞ t^n e^{-λt} dt.

Let y = λt, so t = y/λ and dt = dy/λ. Therefore,

E[T_n] = (λ^n/Γ(n)) ∫₀^∞ (y/λ)^n e^{-y} dy/λ
       = (1/(λ Γ(n))) ∫₀^∞ y^n e^{-y} dy
       = Γ(n+1)/(λ Γ(n))
       = n/λ.          (6.3)

Variance

E[T_n²] = (λ^n/Γ(n)) ∫₀^∞ t^{n+1} e^{-λt} dt
        = (1/(λ² Γ(n))) ∫₀^∞ y^{n+1} e^{-y} dy
        = Γ(n+2)/(λ² Γ(n))
        = n(n+1)/λ².

But Var(T_n) = E[T_n²] - (E[T_n])². Thus,

Var(T_n) = n(n+1)/λ² - (n/λ)².

Simplifying,

Var(T_n) = n/λ².          (6.4)
6.3 Simple Birth Process

6.3.1 Determining f_n(t) for the Simple Birth Process

Initial condition: when t = 0, X(0) = 1.

p_j(t) = e^{-λt} (1 - e^{-λt})^{j-1};   j = 1, 2, 3, …

Therefore, since p_0(t) = 0,

F_n(t) = 1 - Σ_{j=1}^{n-1} p_j(t) = 1 - Σ_{j=1}^{n-1} e^{-λt} (1 - e^{-λt})^{j-1}
       = 1 - e^{-λt} [ 1 + (1 - e^{-λt}) + (1 - e^{-λt})² + … + (1 - e^{-λt})^{n-2} ]
       = 1 - e^{-λt} [ 1 - (1 - e^{-λt})^{n-1} ] / [ 1 - (1 - e^{-λt}) ]
       = 1 - e^{-λt} [ 1 - (1 - e^{-λt})^{n-1} ] / e^{-λt}
       = (1 - e^{-λt})^{n-1}.

Differentiating both sides with respect to t, we get

f_n(t) = (n-1) (1 - e^{-λt})^{n-2} λ e^{-λt}
       = λ (n-1) e^{-λt} (1 - e^{-λt})^{n-2};   t > 0, n = 2, 3, 4, …

Definition of the Exponentiated Exponential Distribution

Let F(x) = [G(x)]^α, where G(x) is the old (parent) cdf, F(x) is the new cdf, and α > 0. Then

f(x) = dF/dx = α [G(x)]^{α-1} g(x)

is called the exponentiated pdf. If g(x) = λe^{-λx} (the exponential pdf), then G(x) = 1 - e^{-λx} and

f(x) = α λ (1 - e^{-λx})^{α-1} e^{-λx}

is called the exponentiated exponential (or generalized exponential) pdf with parameters λ and α. Put α = n - 1; then we have

f_n(t) = λ (n-1) e^{-λt} (1 - e^{-λt})^{n-2};   t > 0, n > 1.          (6.5)

Thus the first passage density is an exponentiated exponential distribution with parameters λ and n - 1.
Initial Condition: X(0) = n₀

With X(0) = n₀,

p_j(t) = C(j - 1, n₀ - 1) e^{-n₀λt} (1 - e^{-λt})^{j - n₀};   j = n₀, n₀+1, n₀+2, …

and

F_n(t) = 1 - Σ_{j=n₀}^{n-1} p_j(t).

Therefore

f_n(t) = d F_n(t)/dt = -d/dt Σ_{j=n₀}^{n-1} C(j - 1, n₀ - 1) e^{-n₀λt} (1 - e^{-λt})^{j - n₀}.

But j = n₀ + k, where k = 0, 1, 2, …. Then

f_n(t) = -d/dt Σ_{k=0}^{n-n₀-1} C(n₀ + k - 1, n₀ - 1) e^{-n₀λt} (1 - e^{-λt})^{k}.          (6.6)

When n₀ = 1,

f_n(t) = -d/dt Σ_{k=0}^{n-2} e^{-λt} (1 - e^{-λt})^k
       = -d/dt [ e^{-λt} ( 1 - (1 - e^{-λt})^{n-1} ) / ( 1 - (1 - e^{-λt}) ) ]
       = -d/dt [ 1 - (1 - e^{-λt})^{n-1} ]
       = (n-1) (1 - e^{-λt})^{n-2} λ e^{-λt}.

Thus

f_n(t) = (n-1) λ e^{-λt} (1 - e^{-λt})^{n-2},          (6.7)

in agreement with (6.5).
6.3.2 Mean and Variance of f_n(t) for the Simple Birth Process

Initial Condition: X(0) = 1

Mean

E[T_n] = ∫₀^∞ t f_n(t) dt = ∫₀^∞ t λ(n-1) e^{-λt} (1 - e^{-λt})^{n-2} dt
       = λ(n-1) ∫₀^∞ t e^{-λt} Σ_{k=0}^{n-2} C(n-2, k) (-1)^k e^{-kλt} dt
       = λ(n-1) Σ_{k=0}^{n-2} (-1)^k C(n-2, k) ∫₀^∞ t e^{-(k+1)λt} dt.

Put y = (k+1)λt, so t = y/((k+1)λ) and dt = dy/((k+1)λ). Then

∫₀^∞ t e^{-(k+1)λt} dt = (1/((k+1)λ)²) ∫₀^∞ y e^{-y} dy = Γ(2)/((k+1)λ)² = 1/((k+1)λ)².

Therefore

E[T_n] = λ(n-1) Σ_{k=0}^{n-2} (-1)^k C(n-2, k) / ((k+1)λ)²
       = (1/λ) Σ_{k=0}^{n-2} (-1)^k (n-1) C(n-2, k) / (k+1)².

Since (n-1) C(n-2, k)/(k+1) = C(n-1, k+1),

E[T_n] = (1/λ) Σ_{k=0}^{n-2} (-1)^k C(n-1, k+1) / (k+1).

Put k + 1 = j:

E[T_n] = (1/λ) Σ_{j=1}^{n-1} (-1)^{j+1} C(n-1, j) / j.

Next, we wish to prove that

Σ_{j=1}^{n-1} (-1)^{j+1} C(n-1, j) (1/j) = Σ_{j=1}^{n-1} 1/j.

Proof

From the identity

1 + x + x² + … + x^{m-1} = (1 - x^m)/(1 - x),

put x = 1 - t, so that

Σ_{k=0}^{m-1} (1 - t)^k = [ 1 - (1 - t)^m ] / t,   0 < t < 1.

Integrate this identity with respect to t over [0, 1]. For the left-hand side, put u = 1 - t, du = -dt:

L.H.S. = Σ_{k=0}^{m-1} ∫₀¹ (1 - t)^k dt = Σ_{k=0}^{m-1} ∫₀¹ u^k du = Σ_{k=0}^{m-1} 1/(k+1) = Σ_{j=1}^{m} 1/j.

For the right-hand side, expand (1 - t)^m by the binomial theorem:

R.H.S. = ∫₀¹ [ 1 - Σ_{k=0}^{m} C(m, k)(-t)^k ] t^{-1} dt
       = ∫₀¹ [ - Σ_{k=1}^{m} (-1)^k C(m, k) t^k ] t^{-1} dt
       = Σ_{k=1}^{m} (-1)^{k+1} C(m, k) ∫₀¹ t^{k-1} dt
       = Σ_{k=1}^{m} (-1)^{k+1} C(m, k) / k.

Hence, with m = n - 1,

E[T_n] = (1/λ) Σ_{j=1}^{n-1} 1/j.          (6.8)

End of proof.
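The two finite-sum identities used above can be confirmed exactly in rational arithmetic. The sketch below verifies, for small n, that (n-1) Σ_k (-1)^k C(n-2,k)/(k+1)² = Σ_j (-1)^{j+1} C(n-1,j)/j = Σ_j 1/j:

```python
from fractions import Fraction
from math import comb

all_match = True
for n in range(2, 15):
    lhs = (n - 1) * sum(Fraction((-1) ** k * comb(n - 2, k), (k + 1) ** 2)
                        for k in range(n - 1))
    alt = sum(Fraction((-1) ** (j + 1) * comb(n - 1, j), j) for j in range(1, n))
    harmonic = sum(Fraction(1, j) for j in range(1, n))
    all_match = all_match and (lhs == alt == harmonic)
```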
Variance

Next,

E[T_n²] = ∫₀^∞ t² λ(n-1) e^{-λt} (1 - e^{-λt})^{n-2} dt
        = λ(n-1) Σ_{k=0}^{n-2} (-1)^k C(n-2, k) ∫₀^∞ t² e^{-(k+1)λt} dt.

Put y = (k+1)λt, so t = y/((k+1)λ) and dt = dy/((k+1)λ). Then

∫₀^∞ t² e^{-(k+1)λt} dt = (1/((k+1)λ)³) ∫₀^∞ y² e^{-y} dy = Γ(3)/((k+1)λ)³ = 2/((k+1)λ)³.

Therefore

E[T_n²] = (2/λ²) Σ_{k=0}^{n-2} (-1)^k (n-1) C(n-2, k) / (k+1)³
        = (2/λ²) Σ_{k=0}^{n-2} (-1)^k C(n-1, k+1) / (k+1)².

Put k + 1 = j:

E[T_n²] = (2/λ²) Σ_{j=1}^{n-1} (-1)^{j+1} C(n-1, j) / j²,

and Var(T_n) = E[T_n²] - (E[T_n])².
6.4 Pure Birth Process with Immigration

6.4.1 Determining F_n(t) for the Pure Birth Process with Immigration

Initial Condition: X(0) = n₀

Now, for simple birth with immigration, writing m = n₀ + v/λ,

p_{n₀+k}(t) = C(k + m - 1, k) e^{-mλt} (1 - e^{-λt})^k;   k = 0, 1, 2, …

(note that e^{-mλt} = e^{-(n₀λ + v)t}). Therefore

F_n(t) = 1 - Σ_{j=n₀}^{n-1} p_j(t) = 1 - Σ_{k=0}^{n-n₀-1} C(k + m - 1, k) e^{-mλt} (1 - e^{-λt})^k,

and

f_n(t) = d F_n(t)/dt = - Σ_{j=n₀}^{n-1} d p_j(t)/dt.

Substituting the forward equations p_j'(t) = λ_{j-1} p_{j-1}(t) - λ_j p_j(t), with λ_j = jλ + v, the sum telescopes:

f_n(t) = - Σ_{j=n₀}^{n-1} [ λ_{j-1} p_{j-1}(t) - λ_j p_j(t) ] = λ_{n-1} p_{n-1}(t).

Therefore

f_n(t) = ((n-1)λ + v) C(n - n₀ + m - 2, n - n₀ - 1) e^{-(n₀λ + v)t} (1 - e^{-λt})^{n - n₀ - 1}.          (6.9)

When n₀ = 1, m = 1 + v/λ and

f_n(t) = ((n-1)λ + v) C(n - 2 + v/λ, n - 2) e^{-(λ + v)t} (1 - e^{-λt})^{n-2}.          (6.10)

When v = 0, m = n₀ and (6.9) reduces to the simple birth first passage density

f_n(t) = (n-1)λ C(n - 2, n - n₀ - 1) e^{-n₀λt} (1 - e^{-λt})^{n - n₀ - 1},          (6.11)

which for n₀ = 1 agrees with (6.7).
6.4.2 Mean and Variance
6.5 Polya Process

6.5.1 Determining f_n(t) for the Polya Process

Initial Condition: X(0) = 0

F_n(t) = 1 - Σ_{j=0}^{n-1} p_j(t),   where   p_j(t) = C(j + 1/a - 1, j) p^{1/a} q^j,   j = 0, 1, 2, 3, …,

with p = 1/(1 + λat) and q = λat/(1 + λat). p_j(t) can also be written as

p_j(t) = C(j + 1/a - 1, j) (λat)^j / (1 + λat)^{j + 1/a};   j = 0, 1, 2, 3, …

Therefore

F_n(t) = 1 - Σ_{j=0}^{n-1} C(j + 1/a - 1, j) (λat)^j (1 + λat)^{-(j + 1/a)};   t > 0, n = 1, 2, 3, …

Differentiating both sides with respect to t, we have

f_n(t) = d F_n(t)/dt
       = - Σ_{j=0}^{n-1} C(j + 1/a - 1, j) [ jaλ (λat)^{j-1} (1 + λat)^{-(j + 1/a)} - (j + 1/a) aλ (λat)^j (1 + λat)^{-(j + 1 + 1/a)} ]
       = λ Σ_{j=0}^{n-1} C(j + 1/a - 1, j) (j + 1/a) a (λat)^j (1 + λat)^{-(j + 1 + 1/a)}
         - λ Σ_{j=1}^{n-1} C(j + 1/a - 1, j) j a (λat)^{j-1} (1 + λat)^{-(j + 1/a)}.

But

C(j + 1/a - 1, j) (j + 1/a) a = a Γ(j + 1 + 1/a)/(Γ(j+1) Γ(1/a)) = C(j + 1/a, j)

and

C(j + 1/a - 1, j) j a = a Γ(j + 1/a)/(Γ(j) Γ(1/a)) = C(j - 1 + 1/a, j - 1).

If we take u = j - 1, the second sum becomes

λ Σ_{u=0}^{n-2} C(u + 1/a, u) (λat)^u (1 + λat)^{-(u + 1 + 1/a)},

which cancels all but the last term of the first sum. Hence

f_n(t) = λ C(n - 1 + 1/a, n - 1) (λat)^{n-1} (1 + λat)^{-(n + 1/a)}.

Since C(n - 1 + 1/a, n - 1) = a Γ(n + 1/a)/(Γ(n) Γ(1/a)),

f_n(t) = (Γ(n + 1/a)/(Γ(n) Γ(1/a))) (λa)^n t^{n-1} (1 + λat)^{-(n + 1/a)}
       = (Γ(n + 1/a)/(Γ(n) Γ(1/a))) (1/(λa))^{1/a} t^{n-1} / (t + 1/(λa))^{n + 1/a};   t > 0, n > 0, a > 0, λ > 0.          (6.12)
Definition of the generalized Pareto distribution

Let X|θ ~ Gamma(α, θ) and θ ~ Gamma(λ, β) (here λ denotes a generic shape parameter, not the birth rate). Thus a gamma mixture of a gamma distribution is given by

f(x) = ∫₀^∞ (θ^α/Γ(α)) e^{-θx} x^{α-1} g(θ) dθ
     = ∫₀^∞ (θ^α/Γ(α)) e^{-θx} x^{α-1} (β^λ/Γ(λ)) e^{-βθ} θ^{λ-1} dθ
     = (x^{α-1} β^λ)/(Γ(α) Γ(λ)) ∫₀^∞ e^{-(x+β)θ} θ^{α+λ-1} dθ.

Put y = (x + β)θ, so θ = y/(x + β) and dθ = dy/(x + β). Therefore,

f(x) = (x^{α-1} β^λ)/(Γ(α) Γ(λ)) (1/(x + β)^{α+λ}) ∫₀^∞ e^{-y} y^{α+λ-1} dy.

Therefore,

f(x) = (Γ(α+λ)/(Γ(α) Γ(λ))) β^λ x^{α-1}/(x + β)^{α+λ};   x > 0.

Back to equation (6.12): replace x by t, α by n and λ by 1/a. Further, let β = 1/(λa). Therefore

f_n(t) = (Γ(n + 1/a)/(Γ(n) Γ(1/a))) (1/(λa))^{1/a} t^{n-1} / (t + 1/(λa))^{n + 1/a};   t > 0, n > 0, a > 0, λ > 0.          (6.13)

This is a generalized Pareto (a gamma mixture of gamma) distribution.
Note that since ∫₀^∞ f_n(t) dt = 1, we have the integral identity

∫₀^∞ t^{n-1} / (t + 1/(λa))^{n + 1/a} dt = (Γ(n) Γ(1/a)/Γ(n + 1/a)) (λa)^{1/a},

which will be used in computing the moments.
6.5.2 Mean and Variance of f_n(t) for the Polya Process

Initial Condition: X(0) = 0

Mean

E[T_n] = ∫₀^∞ t f_n(t) dt
       = (Γ(n + 1/a)/(Γ(n) Γ(1/a))) (1/(λa))^{1/a} ∫₀^∞ t^n / (t + 1/(λa))^{n + 1/a} dt.

Using the Beta-type integral ∫₀^∞ t^{c-1}/(t + β)^{c+d} dt = β^{-d} Γ(c)Γ(d)/Γ(c+d) with c = n + 1, d = 1/a - 1 and β = 1/(λa),

E[T_n] = (Γ(n + 1/a)/(Γ(n) Γ(1/a))) (1/(λa))^{1/a} (λa)^{1/a - 1} Γ(n+1) Γ(1/a - 1)/Γ(n + 1/a)
       = (1/(λa)) Γ(n+1) Γ(1/a - 1)/(Γ(n) Γ(1/a))
       = (1/(λa)) n/(1/a - 1)
       = n/(λ(1 - a)).

Thus

E[T_n] = n/(λ(1 - a)),   0 < a < 1.          (6.14)

Variance

Similarly, with c = n + 2 and d = 1/a - 2,

E[T_n²] = (Γ(n + 1/a)/(Γ(n) Γ(1/a))) (1/(λa))^{1/a} (λa)^{1/a - 2} Γ(n+2) Γ(1/a - 2)/Γ(n + 1/a)
        = (1/(λa))² n(n+1)/((1/a - 1)(1/a - 2))
        = n(n+1)/(λ²(1 - a)(1 - 2a)),

for 0 < a < 1 and 0 < 2a < 1, i.e. 0 < a < 1/2. Then

Var(T_n) = E[T_n²] - (E[T_n])²
         = n(n+1)/(λ²(1 - a)(1 - 2a)) - n²/(λ²(1 - a)²)
         = [ n(n+1)(1 - a) - n²(1 - 2a) ] / (λ²(1 - a)²(1 - 2a))
         = n [ (n+1)(1 - a) - n(1 - 2a) ] / (λ²(1 - a)²(1 - 2a))
         = n (1 + (n - 1)a) / (λ²(1 - a)²(1 - 2a)).

Therefore

Var(T_n) = n(1 + (n - 1)a)/(λ²(1 - a)²(1 - 2a)),   n > 0 and 0 < a < 1/2.          (6.15)
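Results (6.14) and (6.15) can be checked numerically. Substituting u = t/(t + β) with β = 1/(λa) turns each moment of the generalized Pareto density (6.13) into a smooth Beta-type integral on [0, 1], which Simpson's rule handles easily. The sketch below uses the arbitrary choices λ = 1, a = 0.25, n = 3, for which (6.14) gives mean 4 and (6.15) gives variance 16:

```python
from math import gamma

lam, a, n = 1.0, 0.25, 3
beta = 1 / (lam * a)
C = gamma(n + 1 / a) / (gamma(n) * gamma(1 / a))

def moment(m, pieces=4000):
    # E[T^m] after the substitution u = t/(t+beta):
    # integrand becomes C * beta^m * u^{n+m-1} * (1-u)^{1/a - m - 1} on [0, 1]
    def g(u):
        return C * beta ** m * u ** (n + m - 1) * (1 - u) ** (1 / a - m - 1)
    h = 1 / pieces
    return h / 3 * (g(0) + g(1) + sum((4 if k % 2 else 2) * g(k * h) for k in range(1, pieces)))

mass = moment(0)
mean = moment(1)
var = moment(2) - mean ** 2
```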
CHAPTER SEVEN
SUMMARY AND CONCLUSION
7.1 Summary
In this thesis, we derived the Kolmogorov forward and backward differential equations for the continuous time non-homogeneous pure birth process and then solved the forward Kolmogorov differential equation using three alternative approaches. We were able to generate the distributions arising from increments of pure birth processes by applying the three alternative approaches.
7.1.1 General Kolmogorov Forward and Backward Equations

The general forward Kolmogorov differential equation is

∂/∂t p_{k,k+n}(s,t) + λ_{k+n}(t) p_{k,k+n}(s,t) = λ_{k+n-1}(t) p_{k,k+n-1}(s,t),          (2.11)

with initial conditions: p_{k,k}(s,s) = 1 and p_{k,k+n}(s,s) = 0 for n > 0.

The general backward Kolmogorov differential equation is

∂/∂s p_{ij}(s,t) = λ_i(s) p_{ij}(s,t) - λ_i(s) p_{i+1,j}(s,t).          (2.14)
7.1.2 Poisson Process

From the Poisson process, we found the distribution of the increments to be a Poisson distribution with parameter λ(t - s):

p_{k,k+n}(s,t) = e^{-λ(t-s)} (λ(t-s))^n / n!;   n = 0, 1, 2, …          (3.6, 4.5)

This is independent of the initial state and depends only on the length of the time interval; thus for a Poisson process the increments are independent and stationary.
The first passage distribution is

f_n(t) = e^{-λt} λ^n t^{n-1}/(n-1)! = (λ^n/Γ(n)) e^{-λt} t^{n-1}   for t > 0; n = 1, 2, 3, …          (6.2)

which is gamma(n, λ). The mean of the first passage distribution was found to be

E[T_n] = n/λ,          (6.3)

and the variance of the first passage distribution was found to be

Var(T_n) = n/λ².          (6.4)
7.1.3 Simple Birth Process

From the simple birth process, we found the distribution of the increments to be a negative binomial distribution with p = e^{-λ(t-s)} and q = 1 - e^{-λ(t-s)}:

p_{k,k+n}(s,t) = C(k + n - 1, n) e^{-kλ(t-s)} (1 - e^{-λ(t-s)})^n.          (3.15, 4.9, 5.8)

It depends on the length of the time interval t - s and on k; the increments are thus stationary but not independent.

The first passage distribution when n₀ = 1 is

f_n(t) = λ(n-1) e^{-λt} (1 - e^{-λt})^{n-2};   t > 0, n = 2, 3, …

which is an exponentiated exponential distribution with parameters λ and n - 1. The mean of the first passage distribution was found to be

E[T_n] = (1/λ) Σ_{j=1}^{n-1} 1/j.          (6.8)
7.1.4 Simple Birth with Immigration
From the Simple birth process with immigration, we found the distribution of the increments to
be Negative binomial distribution with -λ t - sp = e and -λ t - s
q =1- e
vλ
v n- k + +n t - s λ -λ t - sλ
k,k+n
k + +n -1p s, t = e 1-e .
n
(3.22, 4.13, 5.9)
It depends on the length of the time interval t - s and on k and is thus stationary and not
independent.
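With immigration the negative binomial shape r = k + v/λ is generally non-integer, so the coefficient C(r + n - 1, n) is read as Γ(r + n)/(Γ(r) n!). The sketch below is ours, not from the thesis; it evaluates the pmf in log space (to avoid gamma-function overflow for large n) and checks normalization and the mean rq/p.

```python
import math

# Sanity check (not from the thesis): the birth-with-immigration
# increment law is negative binomial with shape r = k + v/lam and
# p = exp(-lam*(t-s)); C(r+n-1, n) = Gamma(r+n)/(Gamma(r)*n!) is
# evaluated via lgamma so large n does not overflow.

def nb_pmf(n, r, p):
    log_term = (math.lgamma(r + n) - math.lgamma(r) - math.lgamma(n + 1)
                + r * math.log(p) + n * math.log(1 - p))
    return math.exp(log_term)

k, v, lam, s, t = 2, 0.5, 1.0, 0.0, 0.7
r = k + v / lam
p = math.exp(-lam * (t - s))
total = sum(nb_pmf(n, r, p) for n in range(300))
mean = sum(n * nb_pmf(n, r, p) for n in range(300))
```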
The first passage distribution, for an initial state n_0, is

f_n(t) = [(n - 1)λ + v] C(n + v/λ - 2, n - n_0 - 1) e^{-(n_0 λ + v)t} (1 - e^{-λt})^{n - n_0 - 1} ;  t > 0, n > n_0
7.1.5 Polya Process
From the Polya process, we found the distribution of the increments to be Negative binomial
distribution with 1 + λas
p =1 + λat
and λa t - s1 + λas
q =1- .1 + λat 1 + λat
103
1a
jk +1a
k,k + j
j + k + - 1 λa t - s1 + λasp s, t =
j 1 + λat 1 + λat
(3.28, 4.17, 5.10)
It depends on the length of the time interval t - s and on the initial state k; the
increments are thus stationary but not independent.
The first passage distribution is

f_n(t) = [Γ(n + 1/a) / (Γ(n) Γ(1/a))] (1/(λa))^{1/a} t^{n-1} / (t + 1/(λa))^{n + 1/a} ;  t > 0, n > 0, a > 0, λ > 0

which is a generalized Pareto distribution.
The mean of the first passage distribution is

E[T_n] = n / (λ(1 - a)) ,  0 < a < 1   (6.14)
The variance of the first passage distribution is

Var(T_n) = a n (n + 1/a - 1) / (λ²(1 - a)²(1 - 2a)) ,  n > 0 and 0 < a < 1/2   (6.15)
7.2 Conclusion
In each case of the birth processes, the various approaches lead to the same distribution.
The distributions that emerged from increments of pure birth processes were power series
distributions and were all discrete.
The first passage distributions that emerged from pure birth processes were continuous
distributions.
7.3 Recommendation for Further Research
(1) Determine the distribution emerging from pure birth processes when the birth rate is a
distribution function.
(2) Determine the distribution emerging from pure birth processes when the birth rate
changes over certain time intervals.
(3) Determine the distribution emerging from pure birth processes when the birth rate is a
survival function.
(4) Determine the distribution emerging from pure birth processes when more than one
birth is allowed between time t and t + ∆t.
7.4 References
Bhattacharya, R.N. and Waymire, E. (1990), Stochastic Processes with Applications. New York:
John Wiley and Sons, pp. 261-262.
Janardan, K.G. (2005), "Integral Representation of a Distribution Associated with a Pure Birth
Process". Communications in Statistics - Theory and Methods, 34:11, 2097-2103.
Morgan, B.J.T. (2006), "Four Approaches to Solving Linear Birth-and-Death (and Similar)
Processes". International Journal of Mathematical Education in Science and Technology,
vol. 10.1, pp. 51-64.
Moorsel, V. and Wolter, K. (1998), "Numerical Solution of Non-Homogeneous Markov Processes
through Uniformization". In Proc. of the European Simulation Multiconference - Simulation,
pages 710-717.