
Monte Carlo Methods and Appl., Vol. 10, No. 2, pp. 95–106 (2004) © VSP 2004

Exact simulation of nonlinear coagulation processes

Nicolas Fournier and Jean-Sébastien Giet
Institut Élie Cartan,

Campus Scientifique, BP 239, 54506 Vandoeuvre-lès-Nancy Cedex, France

[email protected], [email protected]

Abstract — The Smoluchowski equation is a nonlinear integro-differential equation describing the evolution of the concentration μt(dx) of particles of mass in (x, x + dx) in an infinite particle system where coalescence occurs. We introduce a class of algorithms which allow, under some conditions, the exact simulation of a stochastic process (Xt)t≥0 whose time marginals are given by (xμt(dx))t≥0.

Key words : Coalescence, Simulation, Nonlinear jump processes.

MSC 2000 : 45K05, 65C05.

1 Introduction

The Smoluchowski coagulation equation describes the evolution of an infinite particle system in which coalescence by pairs occurs. We assume that each particle is entirely characterized by its mass. We consider the concentration measure μt(dx) of particles having their masses in (x, x + dx) at the instant t ≥ 0. If one considers that two particles of mass x and y coalesce at rate K(x, y), for some coagulation kernel K, then (μt)t≥0

satisfies the coagulation equation.
In [4], a pure jump nonlinear stochastic Markov process (Xt)t≥0 has been associated with the coagulation equation, in the sense that for each t ≥ 0, the law of Xt is given by xμt(dx). This process may be seen as the evolution of the mass of a "typical" particle.
Our aim in this paper is to give a new existence proof of (Xt)t∈[0,∞) (and thus of a solution to the Smoluchowski equation), by exhibiting an exact simulation algorithm. Our construction is thus explicit and direct. No approximation is made. To our knowledge, the only existence proof not relying on successive approximations is due to Melzak, [11], and treats the case where K is bounded (he also allows some fragmentation, and his proof relies on the use of series).
Note that we do not obtain new existence results: our assumptions are, for example, stronger than those of Norris, [12].
We will treat in this paper both the discrete case (i.e., when the particles have their mass in N∗) and the continuous case (i.e., when the particles have their mass in (0,∞)). We will cover the case where K(x, y) grows at most linearly at infinity. We do not allow fragmentation, for simplicity, but the method could clearly be extended, at least in the discrete case and with a "reasonable" fragmentation kernel. However, we do not deal with gelling kernels (for example K(x, y) ≥ (xy)^α with α > 1/2), because our method would probably break down.
This study naturally leads to a new Monte Carlo numerical scheme for the Smoluchowski


equation, which we will compare to the stochastic finite particle method of Eibeck-Wagner [5].
The paper is organized as follows. In Section 2, we introduce the definitions of solutions to the coagulation equation, and of the nonlinear process (Xt)t≥0. In Section 3, we describe a simulation algorithm for the discrete case. Section 4 is dedicated to the extension of the previous method to the continuous case. We finally present some numerical simulations in Section 5.
The whole work is inspired by papers on the Boltzmann equation, especially those of Tanaka, [13], Graham-Méléard, [7], and Chauvin, [3].

2 Definitions

We first of all introduce some notations. For a subset A of (0,∞), we denote by M+_f(A) (resp. P(A)) the set of nonnegative finite measures (resp. probability measures) on A. For a measure ν and a function f, we denote by 〈ν, f〉 or 〈ν(dx), f(x)〉 the integral ∫ f dν.

Next we introduce some assumptions on the coagulation kernel K and on the initial condition μ0, inspired by Norris, [12].

Assumption (L):

Let E = N∗ or E = (0,∞). The initial condition μ0 belongs to M+_f(E) and satisfies 〈μ0(dx), x〉 = 1. The coagulation kernel K is a measurable map on E × E. There exists a continuous nonnegative function φ : E → [1,∞) such that x ↦ φ(x)/x is non-increasing on E, and for all x, y ∈ E,

0 ≤ K(y, x) = K(x, y) ≤ φ(x)φ(y). (2.1)

Furthermore, 〈μ0(dx), x² + φ²(x)〉 < ∞ and

either x ↦ φ²(x)/x is non-increasing on E, (2.2)

or for all x, y ∈ E, K(x, y) ≤ φ(x) + φ(y). (2.3)

The advantage of such (complicated) assumptions is that they allow the coagulation kernel to explode at 0 in the continuous case. For example, the kernel K(x, y) = (x^{1/3} + y^{1/3})(x^{-1/3} + y^{-1/3}) fulfills (L).
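As a quick sanity check, the bound (2.1) can be verified numerically for this kernel. The candidate φ(x) = x^{1/3} + x^{-1/3} is our choice, not the paper's: it is ≥ 1, x ↦ φ(x)/x is non-increasing, and K(x, y) ≤ φ(x)φ(y) since φ(x)φ(y) − K(x, y) = (xy)^{1/3} + (xy)^{-1/3} − 2 ≥ 0.

```python
import itertools

# Check (2.1) for K(x, y) = (x^{1/3} + y^{1/3})(x^{-1/3} + y^{-1/3})
# with the candidate φ(x) = x^{1/3} + x^{-1/3} (our choice, not from the paper).
def K(x, y):
    return (x ** (1 / 3) + y ** (1 / 3)) * (x ** (-1 / 3) + y ** (-1 / 3))

def phi(x):
    return x ** (1 / 3) + x ** (-1 / 3)

# Logarithmic grid covering both the explosion at 0 and the growth at infinity.
grid = [10.0 ** k for k in range(-6, 7)]
assert all(K(x, y) <= phi(x) * phi(y) + 1e-12 for x, y in itertools.product(grid, grid))
assert all(phi(x) >= 1.0 for x in grid)
print("(2.1) holds on the test grid")
```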

Remark 2.1 If E = N∗, one may assume (2.3) and choose φ(x) = Ax for some constant A ≥ 1 without loss of generality. Thus assumption (L) reduces to 〈μ0(dx), x〉 = 1, 〈μ0(dx), x²〉 < ∞, and 0 ≤ K(y, x) = K(x, y) ≤ A(x + y).

We now give a definition of weak solutions to the Smoluchowski equation.

Definition 2.2 Assume (L). A family (μt)t≥0 of measures of M+_f(E) solves the Smoluchowski equation (S) if
(a) for all t ≥ 0, 〈μt(dx), x〉 ≤ 1 and sup_{[0,t]} 〈μs(dx), φ(x)〉 < ∞,
(b) for all test functions f ∈ Cb((0,∞)), and all t ≥ 0,

〈μt, f〉 = 〈μ0, f〉 + (1/2) ∫₀ᵗ 〈μs(dx)μs(dy), [f(x + y) − f(x) − f(y)] K(x, y)〉 ds. (2.4)


Condition (a) ensures that under (L) (see (2.1)), every term in (2.4) is finite. The integral on the right-hand side counts coagulations by pairs of particles of masses x and y at rate K(x, y). We now define a related stochastic equation.

Definition 2.3 Assume (L). A stochastic process (Xt)t≥0 solves (SDE) if there exists a filtered probability space (Ω, F, (Ft)t≥0, P) such that
(i) (Xt)t≥0 is a càdlàg E ∪ {∞}-valued nondecreasing (Ft)t≥0-adapted process,
(ii) X0 is xμ0(dx)-distributed,
(iii) there exists an (Ft)t≥0-adapted Poisson measure N(ds, dy, dz) on [0,∞) × (E ∪ {∞}) × R+ with intensity measure ds Qs(dy) dz, where for each s ≥ 0, Qs is the law of Xs, such that a.s., for all t ≥ 0,

Xt = X0 + ∫₀ᵗ ∫_{E∪{∞}} ∫₀^∞ y 1{z ≤ K(Xs−, y)/y} 1{y < ∞} N(ds, dy, dz). (2.5)

Since the integrand of the Poisson integral in (2.5) is nonnegative, the Poisson integral is well-defined as a finite or infinite Stieltjes integral. The process (Xt)t≥0 describes the evolution of the mass of a typical particle (see [4]). We collect in the following lemma some a priori estimates and the link between (SDE) and (S).

Lemma 2.4 Assume (L). Consider a solution (Xt)t≥0 to (SDE). Then a.s., for all t ≥ 0, Xt < ∞. For any T < ∞, sup_{[0,T]} E[Xt + φ(Xt)] < ∞. For each t ≥ 0, denote by Qt = L(Xt), and define the nonnegative finite measure μt on E by μt(dx) = x⁻¹Qt(dx). Then (μt)t≥0 solves (S) and is conservative: for all t ≥ 0, 〈μt(dx), x〉 = 1.

Proof We give the main steps of the proof assuming (2.3), the case of (2.2) being similar.
Step 1 Note that thanks to (L), there exists a constant α such that φ(x) ≤ αx as soon as x ≥ 1. Next, since X is nondecreasing and thanks to (L), E[φ(Xt)1{Xt≤1}] ≤ E[φ(Xt)/Xt] ≤ E[φ(X0)/X0] = 〈μ0, φ〉 < ∞.
Step 2 We first check that for all T, there exists a constant CT such that for all t ≤ T, E[Xt + φ(Xt)] ≤ CT. Thanks to Step 1, it remains to prove that E[Xt] ≤ CT. For any A ∈ [1,∞), a simple computation shows that a.s.,

Xt ∧ A ≤ X0 + ∫₀ᵗ ∫_E ∫₀^∞ (y ∧ A) 1{z ≤ K(Xs−, y)/y} 1{Xs− ≤ A} N(ds, dy, dz). (2.6)

Using Step 1, (L), and the expression of the intensity of N , we obtain

E[Xt ∧ A] ≤ a + a ∫₀ᵗ { E[φ(Xs) 1{Xs≤A}] + A E[(φ(Xs)/Xs) 1{Xs≥A}] } ds, (2.7)

the constant a being independent of A. Using Step 1, we find first that E[φ(Xs)1{Xs≤A}] ≤ a + aE[Xs ∧ A], and next that AE[1{Xs≥A}φ(Xs)/Xs] ≤ aAP[Xs ≥ A] ≤ aE[Xs ∧ A]. The Gronwall Lemma allows us to conclude that for some constant CT, all t ≤ T, all A ≥ 1, E[Xt ∧ A] ≤ CT. Letting A grow to infinity ends Step 2.
Step 3 We now prove that (μt)t≥0 solves (S). It is clear from Step 2 that we may write, for any t ≥ 0, any g ∈ C¹_b((0,∞)),

E[g(Xt)] = E[g(X0)] + ∫₀ᵗ ds ∫_E Qs(dy) E[(g(Xs + y) − g(Xs)) K(Xs, y)/y]. (2.8)


For any f ∈ C¹_c((0,∞)), we may apply this formula to the function g(x) = f(x)/x. Using the fact that L(Xs) = Qs(dx) = xμs(dx), we deduce that (2.4) holds with f. Formula (2.4) can be extended to any f ∈ Cb((0,∞)), using Step 2 and the Lebesgue Theorem. Finally, (μt)t≥0 is of course conservative, since for each t ≥ 0, Xt < ∞ a.s.

3 The discrete case

Our aim in this section is to build a solution to (SDE) in the discrete case.

Assumption (LD): (L) holds with E = N∗. For x, y ∈ N∗ we set λ(x) = sup_{z∈E}[K(x, z)/z] and C(x, y) = sup_{z∈E, z≥y}[K(x, z)/z].

Note that λ and C are clearly well-defined, and that for each x ∈ N∗, the map y ↦ C(x, y) is non-increasing. Following carefully the proof below, one can check that:

Remark 3.1 If explicit computations of λ and C are impossible, one may replace them by any other functions λ̄ and C̄ such that for some constant a, for all x, y ∈ N∗, λ(x) ≤ λ̄(x) ≤ ax, and C(x, y) ≤ C̄(x, y).

Algorithm 1: The terminal time T is fixed. For any t ≥ 0, any v ∈ (0, 1) and any z ∈ E, we build the following (recursive) random function.

function mass(t, v, z):
{
  Simulate a xμ0(dx)-distributed random variable x0.
  Set s = 0 and x = x0.
  While s < t and v ≤ C(z, x)/λ(z), do
  {
    Simulate an exponential random variable u with parameter λ(x).
    Set s = s + u.
    If s ≤ t
    {
      Choose w uniformly in (0, 1).
      Set y = mass(s, w, x).
      Set x = x + y.
    }
  }
  If v ≤ [K(z, x)/x]/λ(z), set mass(t, v, z) = x.
  Else set mass(t, v, z) = 0.
}

Then the way to build the process Xt on the time interval [0, T] is the following.

Simulate a xμ0(dx)-distributed random variable x0.
Set s = 0 and x = x0.
While s < T, do
{
  Simulate an exponential random variable u with parameter λ(x).
  Set Xt = x for all t ∈ [s, (s + u) ∧ T).
  Set s = s + u.
  If s ≤ T
  {
    Choose w uniformly in (0, 1).
    Set y = mass(s, w, x).
    Set x = x + y.
  }
}
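The construction above can be sketched directly in Python. The sketch below is ours: it takes K, λ, C and a sampler of xμ0(dx) as parameters, and, for concreteness, instantiates them with the constant kernel K ≡ 1 and μ0 = δ1, for which λ(x) = sup_z K(x, z)/z = 1 and C(z, x) = 1/x; all function names are assumptions of ours, not the paper's.

```python
import random

def mass(t, v, z, K, lam, C, sample_x0, rng=random):
    """Recursive acceptance-rejection function of Algorithm 1 (a sketch)."""
    x = sample_x0()          # x0 distributed as x mu_0(dx)
    s = 0.0
    while s < t and v <= C(z, x) / lam(z):
        s += rng.expovariate(lam(x))          # exponential clock, parameter lam(x)
        if s <= t:
            w = rng.uniform(0.0, 1.0)
            x += mass(s, w, x, K, lam, C, sample_x0, rng)
    # acceptance-rejection step: accept x with probability [K(z, x)/x]/lam(z)
    return x if v <= (K(z, x) / x) / lam(z) else 0

def simulate_X(T, K, lam, C, sample_x0, rng=random):
    """One path of the typical-particle process on [0, T], as (jump time, mass) pairs."""
    x = sample_x0()
    s = 0.0
    path = [(0.0, x)]
    while True:
        s += rng.expovariate(lam(x))
        if s > T:
            return path
        w = rng.uniform(0.0, 1.0)
        x += mass(s, w, x, K, lam, C, sample_x0, rng)
        path.append((s, x))

# Constant-kernel instance: K = 1, mu_0 = delta_1, lam = 1, C(z, x) = 1/x.
if __name__ == "__main__":
    rng = random.Random(0)
    path = simulate_X(2.0, lambda x, y: 1.0, lambda x: 1.0,
                      lambda z, x: 1.0 / x, lambda: 1, rng)
    print(path[-1])
```

Note that a rejected recursive call contributes y = 0, exactly as in the algorithm; the recursion terminates a.s. by Proposition 3.2 below.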

The main idea of this algorithm is that the mass of a typical particle is obtained by adding, with well-chosen rates and acceptance-rejection procedures, the masses of other typical particles. The masses of these other typical particles are themselves obtained by adding, with well-chosen rates and acceptance-rejection procedures, the masses of yet other typical particles, and so on. This explains why the algorithm we propose is recursive.

Proposition 3.2 Assume (LD), and let T < ∞. Denote by CT the total number of times that Algorithm 1 calls the function mass. Then E[CT] < ∞. In particular, Algorithm 1 ends a.s. Denote by (Xt)t∈[0,T] the obtained process. Then (Xt)t∈[0,T] satisfies (SDE).

Proof Let T be fixed. Recall that E = N∗.
Step 0 First note that in the third line of function mass, the expression "While s < t and v ≤ C(z, x)/λ(z)" can be replaced by "While s < t". This increases the time of computation, but does not change the result, since we anyway set mass(t, v, z) = 0 if v ≥ [K(z, x)/x]/λ(z). We handle the proof with this modification.
Step 1 For any t ≥ 0, any v ∈ (0, 1), any z ∈ E, we denote by Pt,v,z(dx, dc) the law of a couple of random variables (Xt,v,z, Ct,v,z), where: Ct,v,z is the (possibly infinite) number of times that mass(t, v, z) calls the function mass, and Xt,v,z is the result of the function mass(t, v, z), with the convention that Xt,v,z = 0 if Ct,v,z = ∞.
For each t ≥ 0, we denote by Ct the (possibly infinite) total number of times that Algorithm 1 calls the function mass to obtain Xt. Then Ct is a nondecreasing N ∪ {∞}-valued process. We set Xt = ∞ if Ct = ∞.
Then, since Xt,v,z is simulated essentially in the same way as Xt, in law,

(Xt,v,z, Ct,v,z) (d)= (Xt 1{v ≤ [K(z, Xt)/Xt]/λ(z)} 1{Xt < ∞}, Ct). (3.1)

We finally denote, for each t ∈ [0, T], by Qt the law of the E ∪ {∞}-valued random variable Xt.
Step 2 The process (Xt)t∈[0,T] is now well-defined as a càdlàg, E ∪ {∞}-valued, nondecreasing process, and X0 has the law xμ0(dx). We denote by F the set (E ∪ {0}) × (N ∪ {∞}). One may check that for each t ∈ [0, T],

Xt = X0 + ∫₀ᵗ ∫₀¹ ∫_F x M(ds, dv, d(x, c)), (3.2)

Ct = ∫₀ᵗ ∫₀¹ ∫_F (1 + c) M(ds, dv, d(x, c)), (3.3)


where M(ds, dv, d(x, c)) is a random integer-valued measure (see Jacod-Shiryaev, [8]) on [0, T] × (0, 1) × F with compensator λ(Xs−) ds dv Ps,v,Xs−(dx, dc).
Step 3 We now show that (Xt)t∈[0,T] satisfies (SDE). Using (3.1), one may rewrite (3.2) using a random integer-valued measure O(ds, dv, dx) on [0, T] × (0, 1) × (E ∪ {∞}) with compensator λ(Xs−) ds dv Qs(dx), in the following way:

Xt = X0 + ∫₀ᵗ ∫₀¹ ∫_{E∪{∞}} x 1{v ≤ K(Xs−, x)/[λ(Xs−) x]} 1{x < ∞} O(ds, dv, dx), (3.4)

which can finally be rewritten, using a Poisson measure N(ds, dy, du) on [0, T] × (E ∪ {∞}) × [0,∞) with compensator ds Qs(dy) du, as

Xt = X0 + ∫₀ᵗ ∫_{E∪{∞}} ∫₀^∞ y 1{u ≤ K(Xs−, y)/y} 1{y < ∞} N(ds, dy, du). (3.5)

Thus (Xt)t∈[0,T] satisfies (SDE) in the sense of Definition 2.3. Thanks to Lemma 2.4, (LD), and since λ(x) ≤ C(1 + x) for some constant C, we deduce that

aT = sup_{[0,T]} E[Xt + λ(Xt)] < ∞. (3.6)

This implies in particular that CT < ∞, and thus that Algorithm 1 ends a.s.
Step 4 One finally needs to show that E[CT] < ∞. First of all denote, for each t ∈ [0, T], by Rt the law of the (N ∪ {∞})-valued random variable Ct. Using (3.1), one may rewrite (3.3) as

Ct = ∫₀ᵗ ∫_N (1 + c) ν(ds, dc), (3.7)

where ν is a random integer-valued measure with compensator λ(Xs−) ds Rs(dc). For each A ∈ (0,∞), we get Ct ∧ A ≤ ∫₀ᵗ ∫_N [(1 + c) ∧ A] ν(ds, dc). Taking expectation and using

(3.6), we obtain

E[Ct ∧ A] ≤ ∫₀ᵗ ds E[λ(Xs)] ∫_N [(1 + c) ∧ A] Rs(dc) ≤ aT + aT ∫₀ᵗ E[Cs ∧ A] ds. (3.8)

Using finally (3.6) and the Gronwall Lemma, we deduce that E[CT ∧ A] ≤ BT for some constant BT not depending on A. Letting A grow to infinity concludes the proof. □

4 The continuous case

We now consider the possibly continuous case. The main difficulty comes from the fact that the function λ(x) = sup_{y∈E} K(x, y)/y can be infinite, because of small particles.

Assumption (LC): (L) holds with E = (0,∞). We set a0 = 〈μ0, φ〉 and, for each x, y ∈ E, C(x, y) = sup_{z∈E, z≥y} K(x, z)/z.

Note that C is well-defined thanks to (L).

Remark 4.1 If C cannot be computed explicitly, one may replace it by any other function C̄ such that for all x, y ∈ (0,∞), C̄(x, y) ≥ C(x, y).


The main idea is that problems due to small particles are problems that "decrease" as time increases, since all the particles have nondecreasing mass. The algorithm below consists in biasing the choice of initial particles.

Algorithm 2: The terminal time T is fixed. For any t ≥ 0, any v ∈ (0, 1) and any z ∈ E, we build the following (recursive) random function.

function mass(t, v, z):
{
  Simulate an a0⁻¹φ(y)μ0(dy)-distributed random variable x0.
  Set s = 0 and x = x0.
  While s < t and v ≤ C(z, x)/[φ(z)φ(x0)/x0], do
  {
    Simulate an exponential random variable u with rate a0φ(x).
    Set s = s + u.
    If s ≤ t
    {
      Choose w uniformly in [0, 1].
      Set y = mass(s, w, x).
      Set x = x + y.
    }
  }
  If v ≤ [K(z, x)/x]/[φ(z)φ(x0)/x0], set mass(t, v, z) = x.
  Else set mass(t, v, z) = 0.
}

Then the way to build the process Xt on the time interval [0, T] is the following.

Simulate a yμ0(dy)-distributed random variable x0.
Set s = 0 and x = x0.
While s < T, do
{
  Simulate an exponential random variable u with parameter a0φ(x).
  Set Xt = x for all t ∈ [s, (s + u) ∧ T).
  Set s = s + u.
  If s ≤ T
  {
    Choose w uniformly in [0, 1].
    Set y = mass(s, w, x).
    Set x = x + y.
  }
}
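Algorithm 2 can be sketched in Python for the continuous example K ≡ 1, μ0(dx) = 4e^{-2x}dx (the setting of case (b) in Section 5). The choice φ ≡ 1 below is an assumption of ours; with it, a0 = 〈μ0, φ〉 = 2, the biased initial law a0⁻¹φ(y)μ0(dy) is the exponential law of parameter 2, yμ0(dy) is the Gamma law of shape 2 and rate 2, C(z, x) = 1/x, and both the while-test and the acceptance test reduce to v ≤ x0/x.

```python
import random

def mass(t, v, rng):
    """Recursive function of Algorithm 2, specialized to K = 1, phi = 1, a0 = 2."""
    x0 = rng.expovariate(2.0)        # x0 ~ a0^{-1} phi(y) mu_0(dy) = Exp(2)
    x, s = x0, 0.0
    while s < t and v <= x0 / x:     # C(z, x)/[phi(z) phi(x0)/x0] = x0/x here
        s += rng.expovariate(2.0)    # exponential clock with rate a0 phi(x) = 2
        if s <= t:
            x += mass(s, rng.random(), rng)
    # acceptance test [K(z, x)/x]/[phi(z) phi(x0)/x0] = x0/x
    return x if v <= x0 / x else 0.0

def typical_mass(T, rng):
    """One sample of X_T, whose law is x mu_T(dx)."""
    x, s = rng.gammavariate(2.0, 0.5), 0.0   # X_0 ~ x mu_0(dx) = Gamma(2, rate 2)
    while True:
        s += rng.expovariate(2.0)
        if s > T:
            return x
        x += mass(s, rng.random(), rng)

if __name__ == "__main__":
    rng = random.Random(0)
    print(typical_mass(2.0, rng))
```

The key design point visible here is the biased initial law in the recursive calls: small particles are over-sampled by the factor φ(x0)/x0, and the bias is compensated in the acceptance probability.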

Proposition 4.2 Assume (LC), and let T < ∞. Denote by CT the total number of times that Algorithm 2 calls the function mass. Then E[CT] < ∞. In particular, Algorithm 2 ends a.s. Denote by (Xt)t∈[0,T] the obtained process. Then (Xt)t∈[0,T] satisfies (SDE).

Proof We just give the main ideas of the proof, since it is essentially the same as that of Proposition 3.2. Let T ∈ (0,∞) be fixed, and recall that E = (0,∞). We admit that


CT < ∞ a.s., and we show that the process (Xt)t≥0 built by Algorithm 2 solves (SDE). First, X is clearly E-valued, non-decreasing, and the law of X0 is given by xμ0(dx).
For t ∈ [0, T], v ∈ (0, 1), and z ∈ E, denote by Pt,v,z(dx0, dx) the law of a couple of random variables (X^0_{t,v,z}, Xt,v,z), where X^0_{t,v,z} is the initial value "x0" used to compute mass(t, v, z), and Xt,v,z is the result of mass(t, v, z). For t ∈ [0, T], we also denote by Pt(dx0, dx) the law of (X0, Xt), and by Qt the law of Xt. On one hand, we may write Pt,v,z(dx0, dx) = a0⁻¹φ(x0)μ0(dx0)Rt,v,z(x0, dx) and Pt(dx0, dx) = x0μ0(dx0)Rt(x0, dx), and on the other hand, we have Qt(dx) = ∫_{x0∈E} Pt(dx0, dx). Then, since we simulate Xt,v,z essentially in the same way as Xt, we deduce that for each x0 ∈ E,

1{x > 0} Rt,v,z(x0, dx) = 1{v ≤ [K(z, x)/x]/[φ(z)φ(x0)/x0]} Rt(x0, dx). (4.1)

The process X can be shown to satisfy, for each t ∈ [0, T],

Xt = X0 + ∫₀ᵗ ∫₀¹ ∫_E x M(ds, dv, dx), (4.2)

where M is an integer-valued random measure on [0, T] × (0, 1) × E with compensator a0φ(Xs−) ds dv ∫_{x0∈E} Ps,v,Xs−(dx0, dx). Using (4.1), this can be rewritten as

Xt = X0 + ∫₀ᵗ ∫₀¹ ∫_{E×E} x 1{v ≤ [K(Xs−, x)/x]/[φ(Xs−)φ(x0)/x0]} O(ds, dv, d(x, x0)), (4.3)

the integer-valued random measure O having the compensator a0φ(Xs−) ds dv a0⁻¹φ(x0)μ0(dx0)Rs(x0, dx). We deduce that

Xt = X0 + ∫₀ᵗ ∫₀^∞ ∫_E x 1{v ≤ K(Xs−, x)/x} N(ds, dv, dx), (4.4)

the integer-valued random measure N having the compensator

ν(ds, du, dx) = φ(Xs−) ds du ∫_{x0∈E} φ(x0)μ0(dx0)Rs(x0, dx) [x0/(φ(Xs−)φ(x0))].

But ν(ds, du, dx) = ds du ∫_{x0∈E} x0μ0(dx0)Rs(x0, dx), and thus ν(ds, du, dx) = ds du Qs(dx).

The proof is finished. □

We conclude with some possible extensions of Algorithms 1 and 2.

Remark 4.3 1. Assume (L) with E = N∗. Then Algorithm 1 could be extended to simulate an E-valued process (Xt)t∈[0,T] whose law is of the form xμt(dx), where μt(dx) satisfies a discrete coagulation-fragmentation equation, with any reasonable fragmentation kernel.
2. Algorithm 2 cannot be extended to simulate a continuous coagulation-fragmentation process: fragmentation would create small particles, so that biasing the choice of initial particles would not suffice.
3. Assume (L) with E = N∗ or E = (0,∞), and with K(x, y) = xy (and thus φ(x) = x). Then, using the fact that K(x, y)/y does not depend on y, one may note that the acceptance-rejection procedures are nonexistent in Algorithms 1 and 2. One can thus use these algorithms to simulate (Xt ∧ A)t∈[0,T], for any A ∈ (0,∞), where for each t ∈ [0, T],

Page 9: Exact simulation of nonlinear coagulation processes

Exact simulation of nonlinear coagulation processes 103

the law of Xt is of the form xμt(dx) + (1 − 〈μt(dx), x〉)δ∞(dx), (μt)t∈[0,T] being a solution to the Flory coagulation equation: for any reasonable f : E → R,

〈μt, f〉 = 〈μ0, f〉 + (1/2) ∫₀ᵗ 〈μs(dx) ⊗ μs(dy), [f(x + y) − f(x) − f(y)] xy〉 ds − (1/2) ∫₀ᵗ 〈μs(dx), xf(x)〉 (1 − 〈μs(dx), x〉) ds.

The principle is very simple: for example in the discrete case, use Algorithm 1 with λ(x) = C(x, y) = x, and set Xt ∧ A = A as soon as the computation of Xt requires more than A calls to the function mass.
4. It is well-known that for some coagulation kernels, solutions to (S) cannot be conservative: for example, it is shown in [6] that if K(x, y) = (xy)^α, with α ∈ (1/2, 1], then there exists t0 < ∞ such that 〈μt(dx), x〉 = 1 for t < t0 but 〈μt(dx), x〉 < 1 for all t > t0. From the (SDE) point of view, this means that for t > t0, P[Xt = ∞] > 0. In such a case, it seems clear that one may simulate, using Algorithm 1 or 2, (Xt)t∈[0,T], for T < t0. Under the assumption that for all x ∈ E, lim_{y→∞} K(x, y)/y = 0, it might be possible that Algorithms 1 and 2 allow the simulation of (Xt ∧ A)t≥0, for any A < ∞. This does not seem interesting from the numerical point of view, but it is an interesting mathematical question.
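The truncation device for the multiplicative kernel K(x, y) = xy with μ0 = δ1 (where λ(x) = C(x, y) = x and the acceptance-rejection step always accepts) can be sketched as follows; the shared call-budget mechanics is our reading of "more than A calls to the function mass", and the function names are ours.

```python
import random

def flory_truncated_mass(T, A, rng):
    """Simulate X_T ∧ A for K(x, y) = xy, mu_0 = delta_1: truncate to A
    once `mass` has been called more than A times."""
    budget = [A]                         # shared countdown of calls to `mass`

    def mass(t):
        if budget[0] <= 0:
            return None                  # budget exhausted: flag truncation
        budget[0] -= 1
        x, s = 1, 0.0                    # x0 ~ x mu_0(dx) = delta_1
        while True:
            s += rng.expovariate(float(x))   # jump rate lambda(x) = x
            if s > t:
                return x
            y = mass(s)                  # no rejection: acceptance prob is 1
            if y is None:
                return None
            x += y

    x = mass(T)
    return A if x is None else min(x, A)

if __name__ == "__main__":
    rng = random.Random(0)
    samples = [flory_truncated_mass(2.0, 400, rng) for _ in range(5)]
    print(samples)  # values in {1, ..., 400}; 400 flags a truncated (possibly gelled) path
```

Keeping A below the interpreter's recursion limit (here 400) matters, since the recursion depth is bounded by the number of calls.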

5 About simulations

It is known (see Norris, [12]) that equation (S) has a unique solution (μt)t≥0 under (L). Since we are able to simulate exactly the process (Xt)t≥0, and since for each t, the law of Xt is exactly xμt(dx), one may exploit Algorithms 1 and 2 to approximate (μt)t≥0 numerically: use Algorithm 1 (or 2) n times to simulate independent copies (X^1_t)t∈[0,T], ..., (X^n_t)t∈[0,T], and approximate μt by a^n_t = n⁻¹ ∑_{i=1}^n (X^i_t)⁻¹ δ_{X^i_t}.
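The estimator a^n_t can be sketched as follows, instantiated (our choice) for the constant kernel K ≡ 1 with μ0 = δ1, so that λ ≡ 1 and the acceptance probability in Algorithm 1 is 1/x. Note that 〈a^n_t, x²〉 = n⁻¹ ∑_i (X^i_t)⁻¹ (X^i_t)² = n⁻¹ ∑_i X^i_t: the second moment of μt is estimated by the empirical mean of the samples.

```python
import math
import random

def mass(t, v, rng):
    """Recursive function of Algorithm 1 for K = 1, mu_0 = delta_1 (lambda = 1)."""
    x = typical_mass(t, rng)
    return x if v <= 1.0 / x else 0      # acceptance-rejection step, prob 1/x

def typical_mass(t, rng):
    """One sample of X_t, whose law is x mu_t(dx)."""
    x, s = 1, 0.0
    while True:
        s += rng.expovariate(1.0)        # exponential clock, lambda = 1
        if s > t:
            return x
        x += mass(s, rng.random(), rng)

def second_moment_estimate(t, n, rng):
    """Return <a^n_t, x^2> with a CLT-based 95% confidence interval."""
    xs = [typical_mass(t, rng) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    half = 1.96 * math.sqrt(var / n)
    return mean, (mean - half, mean + half)

if __name__ == "__main__":
    est, ci = second_moment_estimate(2.0, 2000, random.Random(0))
    # for this kernel, mass conservation gives <mu_t, x^2> = 1 + t (our computation)
    print(est, ci)
```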

Note that the convergence is straightforward from the standard law of large numbers, and that confidence intervals may be obtained thanks to the (classical) central limit theorem.
We would like to compare our algorithms with the so-called Mass Flow Algorithm (MFA) studied in Eibeck-Wagner, [5], following an idea of Babovsky, [2]. This scheme, based on finite stochastic particle systems, is shown to be quite efficient in [5], at least compared to the famous Marcus-Lushnikov scheme ([10], [9]). We recall the MFA algorithm, exactly in the form presented in [5]. We use the notations of (L).

Algorithm MFA: The terminal time T and the number n of particles are fixed.

Simulate i.i.d. xμ0(dx)-distributed random variables x1, ..., xn.
Set t = 0.
While t < T
{
  Compute τ = n⁻¹ [∑_{i=1}^n φ(xi)] [∑_{j=1}^n φ(xj)/xj].
  Simulate an exponential random variable s with rate τ.
  Set t = t + s.
  Choose i according to the law {φ(xi)/∑_{l=1}^n φ(xl)}_{i∈{1,...,n}}.
  Choose j according to the law {[φ(xj)/xj]/∑_{l=1}^n [φ(xl)/xl]}_{j∈{1,...,n}}.
  With probability K(xi, xj)/[φ(xi)φ(xj)], set xi = xi + xj.
}

Page 10: Exact simulation of nonlinear coagulation processes

104 Nicolas Fournier and Jean-Sebastien Giet

Approximate μT(dx) by b^n_T = n⁻¹ ∑_{i=1}^n x_i⁻¹ δ_{x_i}.
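MFA as recalled above can be sketched in Python, here with K(x, y) = x + y and φ(x) = 2x (the choice of case (c) below); the function name `mfa` is ours. For clarity the two sums are recomputed at each step, although, as noted just below, they can be updated incrementally from one step to the next.

```python
import random

def mfa(T, n, K, phi, sample_x0, rng):
    """One run of the Mass Flow Algorithm; returns the final particle masses."""
    xs = [sample_x0() for _ in range(n)]     # i.i.d. samples of x mu_0(dx)
    t = 0.0
    while True:
        s1 = sum(phi(x) for x in xs)             # sum of phi(x_i)
        s2 = sum(phi(x) / x for x in xs)         # sum of phi(x_j)/x_j
        t += rng.expovariate(s1 * s2 / n)        # event rate tau = s1 * s2 / n
        if t >= T:
            return xs
        i = rng.choices(range(n), weights=[phi(x) for x in xs])[0]
        j = rng.choices(range(n), weights=[phi(x) / x for x in xs])[0]
        if rng.random() <= K(xs[i], xs[j]) / (phi(xs[i]) * phi(xs[j])):
            xs[i] += xs[j]                       # mass-flow coagulation event

if __name__ == "__main__":
    rng = random.Random(0)
    xs = mfa(1.0, 200, lambda x, y: x + y, lambda x: 2.0 * x, lambda: 1, rng)
    # b^n_T puts weight (n x_i)^{-1} at each x_i; its total mass estimates <mu_T, 1>:
    print(sum(1.0 / x for x in xs) / len(xs))
```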

Note that ∑_{i=1}^n φ(xi) and ∑_{j=1}^n φ(xj)/xj can easily be computed at each step from the previous step. Note also that the simulation of i (or j) is simplified if φ(x) is of the form c (or cx).
In the numerical results below, we will sometimes use the following advantage of our scheme: if one is only interested in particles smaller than A, for some A, then we can stop the simulation when X becomes greater than A. This is of course not the case in the MFA scheme.
In all what follows, we consider the following mean errors (in percent):

ζ(a^n_t) = E[100 |〈a^n_t(dx), x²〉 − 〈μt(dx), x²〉| / 〈μt(dx), x²〉], (5.1)

ξ(a^n_t, z) = E[100 |〈a^n_t(dx), 1{x≤z}〉 − 〈μt(dx), 1{x≤z}〉| / 〈μt(dx), 1{x≤z}〉], (5.2)

and by ζ(b^n_t), ξ(b^n_t, z) the same quantities with a replaced by b. The expectations will be obtained by taking the mean over 10000 experiments.
In the 3 cases labelled (a), (b), (c) below, (μt)t≥0 is explicit, see Aldous [1].

(a) Consider the (discrete) case where K(x, y) = 1 and μ0 = δ1. For t = 2, the simulation of b^1000_2 using MFA with n = 1000 particles (and with φ ≡ 1) requires 0.038u (seconds). For the same time of computation, one may use Algorithm 1 to compute a^3650_2. Stopping Algorithm 1 as soon as X becomes greater than 1 (resp. 5), one may simulate ξ(a^18000_2, 1) (resp. ξ(a^8000_2, 5)) in 0.038u. We obtain the following mean relative errors in percent:

Relative error of       〈μ2(dx), x²〉          〈μ2(dx), 1{x≤1}〉         〈μ2(dx), 1{x≤5}〉
MFA                     ζ(b^1000_2) ∼ 1.6      ξ(b^1000_2, 1) ∼ 3.6      ξ(b^1000_2, 5) ∼ 1.1
Algo. 1                 ζ(a^7650_2) ∼ 0.6      ξ(a^7650_2, 1) ∼ 1.8      ξ(a^7650_2, 5) ∼ 0.6
Algo. 1 stopped at 5                           ξ(a^8000_2, 1) ∼ 1.5      ξ(a^8000_2, 5) ∼ 5.6
Algo. 1 stopped at 1                           ξ(a^18000_2, 1) ∼ 1.0

For t = 10, the simulation of b^1000_10 (resp. a^800_10) requires 2.320u. Stopping Algorithm 1 as soon as X becomes greater than 1 (resp. 5), one may simulate ξ(a^11000_10, 1) (resp. ξ(a^1800_10, 5)) in 0.110u. This gives, in percent:

Relative error of       〈μ10(dx), x²〉          〈μ10(dx), 1{x≤1}〉          〈μ10(dx), 1{x≤5}〉
MFA                     ζ(b^1000_10) ∼ 0.8      ξ(b^1000_10, 1) ∼ 12.9      ξ(b^1000_10, 5) ∼ 4.2
Algo. 1                 ζ(a^800_10) ∼ 2.8       ξ(a^800_10, 1) ∼ 18.1       ξ(a^800_10, 5) ∼ 5.8
Algo. 1 stopped at 5                            ξ(a^1800_10, 1) ∼ 10.9      ξ(a^1800_10, 5) ∼ 3.9
Algo. 1 stopped at 1                            ξ(a^11000_10, 1) ∼ 4.5

(b) Consider the (continuous) case where K(x, y) = 1 and μ0(dx) = 4e^{−2x}dx. For t = 2, the simulation of b^1000_2 using MFA with n = 1000 particles (and with φ ≡ 8) requires 7.086u. For the same time of computation, one may use Algorithm 2 (with φ ≡ 2) to compute a^3450_2. Stopping Algorithm 2 as soon as X becomes greater than


1 (resp. 5), one may simulate ξ(a^10800_2, 1) (resp. ξ(a^3700_2, 5)) in 0.125u. We obtain, in percent:

Relative error of       〈μ2(dx), x²〉          〈μ2(dx), 1{x≤1}〉         〈μ2(dx), 1{x≤5}〉
MFA                     ζ(b^1000_2) ∼ 1.8      ξ(b^1000_2, 1) ∼ 9.6      ξ(b^1000_2, 5) ∼ 4.7
Algo. 2                 ζ(a^3450_2) ∼ 0.9      ξ(a^3450_2, 1) ∼ 6.4      ξ(a^3450_2, 5) ∼ 3.0
Algo. 2 stopped at 5                           ξ(a^3700_2, 1) ∼ 6.7      ξ(a^3700_2, 5) ∼ 3.0
Algo. 2 stopped at 1                           ξ(a^10800_2, 1) ∼ 4.0

(c) Finally, consider the (discrete) case where K(x, y) = x + y and μ0 = δ1. For t = 2, the simulation of b^1000_2 using MFA with n = 1000 particles (and with φ(x) = 2x) requires 2.58u. For the same time of computation, one may use Algorithm 1 to compute a^1300_2. Stopping Algorithm 1 as soon as X becomes greater than 1 (resp. 5), one may simulate ξ(a^80500_2, 1) (resp. ξ(a^23000_2, 5)) in 0.038u. We obtain the following mean relative errors in percent:

Relative error of       〈μ2(dx), x²〉          〈μ2(dx), 1{x≤1}〉          〈μ2(dx), 1{x≤5}〉
MFA                     ζ(b^1000_2) ∼ 6.3      ξ(b^1000_2, 1) ∼ 9.5       ξ(b^1000_2, 5) ∼ 6.2
Algo. 1                 ζ(a^1800_2) ∼ 2.5      ξ(a^1800_2, 1) ∼ 3.6       ξ(a^1800_2, 5) ∼ 6.8
Algo. 1 stopped at 5                           ξ(a^23000_2, 1) ∼ 8.1      ξ(a^23000_2, 5) ∼ 1.5
Algo. 1 stopped at 1                           ξ(a^80500_2, 1) ∼ 1.5

Of course, the above numerical results deal with cases where Algorithm 1 (or 2) is efficient. It seems numerically clear that MFA is better for large times.
The main interest of our algorithms is theoretical. They might however be used for precise small-time computations. They also seem quite efficient where small particles are concerned. An advantage of our method is that we can give (theoretical) confidence intervals.

References

[1] D.J. Aldous, Deterministic and stochastic models for coalescence (aggregation, coagulation): a review of the mean-field theory for probabilists, Bernoulli, vol. 5, no 1, 3-48, 1999.

[2] H. Babovsky, On a Monte Carlo scheme for Smoluchowski's coagulation equation, Monte Carlo Methods Appl., vol. 5, no 1, 1-18, 1999.

[3] B. Chauvin, Branching processes, trees and the Boltzmann equation, Math. Comput. Simulation, vol. 38, no 1-3, 135-141, 1995.

[4] M. Deaconu, N. Fournier and E. Tanré, A pure jump Markov process associated with Smoluchowski's coagulation equation, Ann. Probab., vol. 30, no 4, 1763-1796, 2002.

[5] A. Eibeck and W. Wagner, Stochastic particle approximations for Smoluchowski's coagulation equation, Ann. Appl. Probab., vol. 11, no 4, 1137-1165, 2001.

[6] M. Escobedo, S. Mischler and B. Perthame, Gelation in coagulation and fragmentation models, preprint 01-18 of the École Normale Supérieure de Paris, 2001.


[7] C. Graham and S. Méléard, Stochastic particle approximations for generalized Boltzmann models and convergence estimates, Ann. Probab., vol. 25, no 1, 115-132, 1997.

[8] J. Jacod and A.N. Shiryaev, Limit Theorems for Stochastic Processes, Springer-Verlag, 1987.

[9] A. Lushnikov, Some new aspects of coagulation theory, Izv. Akad. Nauk SSSR, Ser. Fiz. Atmosfer. i Okeana, vol. 14, no 10, 738-743, 1978.

[10] A. Marcus, Stochastic coalescence, Technometrics, vol. 10, 133-143, 1968.

[11] Z.A. Melzak, A scalar transport equation, Trans. Amer. Math. Soc., vol. 85, 547-560, 1957.

[12] J.R. Norris, Smoluchowski's coagulation equation: uniqueness, non-uniqueness and hydrodynamic limit for the stochastic coalescent, Ann. Appl. Probab., vol. 9, no 1, 78-109, 1999.

[13] H. Tanaka, Probabilistic treatment of the Boltzmann equation of Maxwellian molecules, Z. Wahrsch. Verw. Gebiete, vol. 46, no 1, 67-105, 1978.