ODE, SDE and PDE
Shizan Fang, Université de Bourgogne, France
February 29, 2008
0 Introduction
A.1. Consider the ordinary differential equation (abbreviated as ODE) on $\mathbb{R}^d$
$$\frac{dX_t}{dt} = V(X_t), \qquad X_0 = x. \tag{0.1}$$
When the coefficient $V$ is smooth with bounded derivative, the ODE (0.1) can be solved by Picard iteration. Let $X_t(x)$ be the solution to (0.1) with the initial condition $x$. Then $X_t : \mathbb{R}^d \to \mathbb{R}^d$ is a diffeomorphism from $\mathbb{R}^d$ onto $\mathbb{R}^d$ and enjoys the flow property: $X_{t+s} = X_t \circ X_s$.
Consider now the following transport equation
$$\frac{du_t}{dt} + V \cdot \nabla u_t = 0, \qquad u_0 = \theta \in C^1(\mathbb{R}^d). \tag{0.2}$$
This is a first-order partial differential equation. The solution $(u_t)_{t\ge 0}$ to (0.2) can be expressed in terms of $X_t$, namely
$$u_t = \theta(X_t^{-1}).$$
A.2. When $V$ satisfies the following Osgood condition
$$|V(x) - V(y)| \le C\,|x-y| \log\frac{1}{|x-y|}, \qquad |x-y| \le \delta_0, \tag{0.3}$$
the ODE (0.1) defines a flow of homeomorphisms of $\mathbb{R}^d$. If $V$ admits a divergence, then $u_t = \theta(X_t^{-1})$ for $\theta \in C(\mathbb{R}^d)$ again satisfies the transport equation (0.2), but in the following sense:
$$(u_t, \varphi)_{L^2} = (\theta, \varphi)_{L^2} - \int_0^t (u_s, \operatorname{div}(\varphi V))_{L^2}\, ds, \qquad \varphi \in C_c^\infty(\mathbb{R}^d),$$
where $(\,,\,)_{L^2}$ denotes the inner product in $L^2(\mathbb{R}^d)$ with respect to the Lebesgue measure $\lambda_d$. An immediate consequence of this consideration is that, whenever $\operatorname{div}(V) = 0$, the flow $X_t$ preserves the Lebesgue measure.
A.3. Due to its linearity, the transport equation (0.2) can be uniquely solved under weak conditions on $V$, namely under the condition
$$V \in W^{1,q} \quad \text{and} \quad \operatorname{div}(V) \in L^\infty, \tag{0.4}$$
using a smoothing argument. In the opposite direction, the transport equation (0.2) allows us to solve the ODE (0.1) and obtain a flow of quasi-invariant measurable maps. To this end, we shall follow the method developed by L. Ambrosio, which consists of three steps:
(i) Consider the mass conservation equation
$$\frac{\partial \mu_t}{\partial t} + D_x \cdot (V \mu_t) = 0, \qquad \mu_0 \text{ given}. \tag{0.5}$$
(ii) Construct a coupling measure $\eta$ on the product space $\mathbb{R}^d \times W$, where $W = C([0,T], \mathbb{R}^d)$, in the sense that
$$(\pi_{\mathbb{R}^d})_* \eta = \mu_0, \qquad (e_t)_* \eta = \mu_t, \tag{0.6}$$
where $e_t : (x,\gamma) \to \gamma(t)$ and $\pi_{\mathbb{R}^d}(x,\gamma) = x$, such that
$$\gamma(t) = x + \int_0^t V(\gamma(s))\, ds \quad \text{holds } \eta\text{-a.s.} \tag{0.7}$$
(iii) Show that $\eta$ is supported by a graph: there exists $X : \mathbb{R}^d \to W$ such that
$$\eta = (I \times X)_* \mu_0.$$
Then (Xt(x)) solves the ODE (0.1).
B.1. Consider the Itô stochastic differential equation (abbreviated as SDE)
$$dX_t = \sigma(X_t)\, dw_t + V(X_t)\, dt \tag{0.8}$$
defined on a probability space $(\Omega, \mathcal{F}, P)$. When the coefficients are globally Lipschitz, (0.8) defines a stochastic flow of homeomorphisms on $\mathbb{R}^d$: there is a full-measure subset $\Omega_o \subset \Omega$ such that for $w \in \Omega_o$,
$$X_t(w, \cdot) : \mathbb{R}^d \to \mathbb{R}^d \ \text{is a homeomorphism.}$$
Under the stronger conditions $\sigma \in C_b^2$ and $V \in C_b^1$, $(X_t)_{t\ge 0}$ is a flow of diffeomorphisms of $\mathbb{R}^d$.
Let $P_x$ be the law of $w \to X_\cdot(w,x)$ on $C([0,T], \mathbb{R}^d)$. For a given probability measure $\mu_0$, consider $P_{\mu_0} = \int_{\mathbb{R}^d} P_x\, d\mu_0(x)$. Let $e_t : C([0,T], \mathbb{R}^d) \to \mathbb{R}^d$ be such that $e_t(\gamma) = \gamma(t)$. Then $\mu_t := (e_t)_* P_{\mu_0}$ solves the Fokker–Planck equation
$$\frac{\partial \mu_t}{\partial t} = L^* \mu_t, \qquad \mu_t|_{t=0} = \mu_0, \tag{0.9}$$
where $L^*$ is the formal dual of the operator
$$L = \frac{1}{2} \sum_{i,j=1}^d a_{ij}\, \frac{\partial^2}{\partial x_i \partial x_j} + \sum_{i=1}^d V_i\, \frac{\partial}{\partial x_i}, \qquad a = \sigma\sigma^*. \tag{0.10}$$
B.2. The operator $L$ is related to weak solutions of the SDE (0.8). Better knowledge of partial differential equations (abbreviated as PDE), for example elliptic estimates, makes it possible to prove that a weak solution is in fact a strong one in some situations.

B.3. In the opposite direction, there are recent works (see [5], [9]) which go from the Fokker–Planck equation (0.9) back to the SDE (0.8) for coefficients with little regularity.
1 Flow of Homeomorphisms under Osgood Conditions
1.1 Classical Case
Let $(V_t)_{t\in[0,T]}$ be a time-dependent vector field on $\mathbb{R}^d$. Suppose that
$$|V_t(x) - V_t(y)| \le C(t)\, |x-y| \quad \text{for } x, y \in \mathbb{R}^d \tag{1.1.1}$$
with
$$\int_0^T |V_s(x)|\, ds < +\infty, \qquad \int_0^T C(s)\, ds < +\infty. \tag{1.1.2}$$
The differential equation
$$\frac{dX_t}{dt} = V_t(X_t), \qquad X_0 = x \tag{1.1.3}$$
can be solved by Picard iteration: define $X_1(t) = x + \int_0^t V_s(x)\, ds$ and, for $n \ge 1$,
$$X_{n+1}(t) = x + \int_0^t V_s(X_n(s))\, ds.$$
Due to condition (1.1.2), we see that $t \to X_n(t)$ is absolutely continuous for $n \ge 1$. We have
$$|X_{n+1}(t) - X_n(t)| \le \int_0^t C(s)\, |X_n(s) - X_{n-1}(s)|\, ds.$$
By induction, we prove that there exists a constant $C > 0$ such that
$$|X_{n+1}(t) - X_n(t)| \le C\, \frac{F(t)^n}{n!}, \quad \text{where } F(t) = \int_0^t C(s)\, ds.$$
It follows that $X_n(t)$ converges to $X_t$, the solution to (1.1.3), uniformly in $t \in [0,T]$. Note that $t \to X_t$ is absolutely continuous. Denote now by $X_t(x)$ the solution to (1.1.3) with the initial condition $x$. Let $t_0 \in [0,T]$ and let $Y_t(x)$ be the solution to the ODE
$$\frac{dY_t}{dt} = -V_{t_0 - t}(Y_t), \qquad Y_0 = x.$$
We check that $X_{t_0 - t}(x)$ and $Y_t(X_{t_0}(x))$ satisfy the same ODE with the initial condition $X_{t_0}(x)$; therefore, by uniqueness of solutions, $X_{t_0 - t} = Y_t(X_{t_0})$ for $0 \le t \le t_0$. In the same way, $Y_{t_0 - t} = X_t(Y_{t_0})$. Now putting $t = t_0$ gives $X_{t_0}(Y_{t_0}) = Y_{t_0}(X_{t_0}) = I$. Hence
$$X_{t_0}^{-1} = Y_{t_0}.$$
It follows that $x \to X_t(x)$ is a homeomorphism from $\mathbb{R}^d$ onto $\mathbb{R}^d$. Moreover, if $V_t$ is supposed to be $C^1$ with bounded derivative, then $x \to X_t(x)$ is a diffeomorphism from $\mathbb{R}^d$ onto $\mathbb{R}^d$. To see the differentiability of $t \to X_t^{-1}$, it is convenient to consider the ODE
$$\frac{d}{dt} X(t,s,x) = V_t(X(t,s,x)), \qquad X(s,s,x) = x.$$
Then $X_t(x) = X(t,0,x)$ and $X_t^{-1}(x) = X(0,t,x)$.
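The Picard scheme just described is directly implementable. The following is a minimal numerical sketch (the function name, the uniform grid and the trapezoidal quadrature are our own illustrative choices): for the autonomous field $V(x) = x$ with $x = 1$, the iterates reproduce the partial sums of the exponential series, so the error at $t = 1$ decays at the factorial rate $F(t)^n/n!$ of the estimate above.

```python
import numpy as np

def picard_iterates(V, x0, T=1.0, n_steps=1000, n_iter=8):
    """Picard iteration X_{n+1}(t) = x0 + int_0^t V(X_n(s)) ds on a uniform
    time grid, the integral being computed by the trapezoidal rule."""
    t = np.linspace(0.0, T, n_steps + 1)
    X = np.full_like(t, x0)                  # X_0(t) = x0, the constant first guess
    iterates = [X.copy()]
    for _ in range(n_iter):
        f = V(X)
        # cumulative trapezoidal integral of V(X_n) from 0 to each grid point
        integral = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2.0) * (t[1] - t[0])))
        X = x0 + integral
        iterates.append(X.copy())
    return t, iterates

# V(x) = x, X_0 = 1: the exact solution is X_t = e^t.
t, its = picard_iterates(lambda x: x, 1.0)
errors = [abs(X[-1] - np.e) for X in its]    # error at t = 1 after each iteration
```

Each iteration adds one more term of the series $\sum_k t^k/k!$, so `errors` shrinks roughly like $1/n!$ until the $O(\Delta t^2)$ quadrature error dominates.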
Now let $\theta \in C^1(\mathbb{R}^d)$. We denote by $\theta'$ the differential of $\theta$; that is, for $x \in \mathbb{R}^d$, $\theta'(x)$ is a linear map from $\mathbb{R}^d$ to $\mathbb{R}$. We denote by $\nabla\theta$ the gradient of $\theta$, that is, the vector field on $\mathbb{R}^d$ such that $\langle \nabla\theta(x), v\rangle = \theta'(x)\cdot v$ for $v \in \mathbb{R}^d$. For a vector field $V$ on $\mathbb{R}^d$ and a map $F : \mathbb{R}^d \to \mathbb{R}^m$, we also denote by $D_V F$ the directional derivative of $F$ along $V$, that is, $(D_V F)(x) = \frac{d}{dt}\big|_{t=0} F(X_t)$, where $X_t$ is the flow associated to $V$. Let $u_t = \theta(X_t^{-1})$. Then
$$\frac{du_t}{dt} = \theta'(X_t^{-1}) \cdot \frac{dX_t^{-1}}{dt}. \tag{i}$$
On the other hand,
$$\langle \nabla u_t, V_t\rangle = \theta'(X_t^{-1}) \cdot D_{V_t} X_t^{-1}.$$
Now, differentiating the equality $x = X_t^{-1}(X_t(x))$ with respect to the time $t$, we have
$$0 = \frac{dX_t^{-1}}{dt}(X_t) + (X_t^{-1})'(X_t)\, \frac{dX_t}{dt} = \frac{dX_t^{-1}}{dt}(X_t) + (D_{V_t} X_t^{-1})(X_t).$$
Since $X_t$ is bijective, for each $x \in \mathbb{R}^d$ the above equality gives
$$\frac{dX_t^{-1}}{dt} + D_{V_t} X_t^{-1} = 0.$$
According to (i),
$$\frac{du_t}{dt} + V_t \cdot \nabla u_t = 0, \qquad u_0 = \theta \in C^1, \tag{1.1.4}$$
where we use $\cdot$ for the inner product. Conversely, if $u_t \in C^1$ is a solution of (1.1.4), then
$$\frac{d}{dt}\big[u_t(X_t)\big] = \frac{du_t}{dt}(X_t) + u_t'(X_t)\, \frac{dX_t}{dt} = \frac{du_t}{dt}(X_t) + (V_t \cdot \nabla u_t)(X_t) = 0,$$
so that $u_t(X_t) = \theta$, i.e. $u_t = \theta(X_t^{-1})$: it is the unique solution to (1.1.4).
Remark 1.1 The equation (1.1.3) is nonlinear while the equation (1.1.4) is linear; (1.1.3) $\Rightarrow$ (1.1.4).
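The correspondence (1.1.3) $\Rightarrow$ (1.1.4) can be checked numerically in a case where the flow is explicit. This is a sketch under our own choice $V(x) = x$, for which $X_t(x) = e^t x$ and $X_t^{-1}(x) = e^{-t}x$, hence $u(t,x) = \theta(e^{-t}x)$; we verify $\partial_t u + V\cdot\nabla u = 0$ by central finite differences.

```python
import numpy as np

theta = lambda x: np.exp(-x ** 2)          # initial datum theta in C^1
u = lambda t, x: theta(np.exp(-t) * x)     # u_t = theta(X_t^{-1}) for V(x) = x

# Check du/dt + V(x) du/dx = 0 at a few (t, x) by central differences.
h = 1e-5
for t0 in (0.3, 0.7):
    for x0 in (-1.0, 0.5, 2.0):
        du_dt = (u(t0 + h, x0) - u(t0 - h, x0)) / (2 * h)
        du_dx = (u(t0, x0 + h) - u(t0, x0 - h)) / (2 * h)
        assert abs(du_dt + x0 * du_dx) < 1e-6   # V(x) = x
```

The residual is zero analytically; the tolerance only absorbs the $O(h^2)$ finite-difference error.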
1.2 Osgood Condition
In this subsection, we suppose that $V : \mathbb{R}^d \to \mathbb{R}^d$ is a time-independent bounded continuous vector field such that
$$|V(x) - V(y)| \le C\,|x-y| \log\frac{1}{|x-y|}, \qquad |x-y| \le \delta < 1. \tag{1.2.1}$$
Then the differential equation
$$\frac{dX_t}{dt} = V(X_t), \qquad X_0 = x \tag{1.2.2}$$
admits at least one solution $(X_t)_{t\ge 0}$.
Proposition 1.2 Under (1.2.1), the differential equation (1.2.2) admits a unique solution.
Proof. Let $(X_t)_{t\ge 0}$ and $(Y_t)_{t\ge 0}$ be two solutions to (1.2.2) starting from the same point. Set $\eta_t = X_t - Y_t$ and $\xi_t = |\eta_t|^2$. Let $\varepsilon > 0$ be a parameter and define the function
$$\Psi_\varepsilon(\xi) = \int_0^\xi \frac{ds}{s \log\frac{1}{s} + \varepsilon}, \qquad 0 \le \xi \le \delta,$$
and $\Phi_\varepsilon = e^{\Psi_\varepsilon}$. Then $\Phi_\varepsilon' = \frac{\Phi_\varepsilon}{\xi \log\frac{1}{\xi} + \varepsilon}$. Define
$$\tau = \inf\{t > 0 : \xi_t \ge \delta^2\}.$$
Then by (1.2.1), for $t \le \tau$,
$$|\langle \eta_t, V(X_t) - V(Y_t)\rangle| \le |\eta_t| \cdot C\,|\eta_t| \log\frac{1}{|\eta_t|} = \frac{C}{2}\, \xi_t \log\frac{1}{\xi_t}.$$
Hence
$$\Big|\frac{d}{dt}\Phi_\varepsilon(\xi_t)\Big| = \Big|\Phi_\varepsilon'(\xi_t)\, \frac{d\xi_t}{dt}\Big| = \Big|\Phi_\varepsilon'(\xi_t) \cdot 2\big\langle \eta_t, \tfrac{d\eta_t}{dt}\big\rangle\Big| \le \Phi_\varepsilon(\xi_t) \cdot \frac{1}{\xi_t \log\frac{1}{\xi_t} + \varepsilon} \cdot C\, \xi_t \log\frac{1}{\xi_t} \le C\, \Phi_\varepsilon(\xi_t).$$
This and $\xi_0 = 0$ lead to
$$\Phi_\varepsilon(\xi_t) \le \Phi_\varepsilon(\xi_0)\, e^{Ct} = e^{Ct}, \qquad t < \tau.$$
Letting $\varepsilon \downarrow 0$, we get $\Phi_0(\xi_t) \le e^{Ct}$. If $\xi_t > 0$ for some given $t$, we have $\int_0^{\xi_t} \frac{ds}{s\log\frac{1}{s}} \le Ct$, which is impossible since the integral diverges at $0$. Therefore we must have $\xi_t = 0$, which means $X_t = Y_t$ for any $t \ge 0$. $\square$

Choose $\chi \in C_c^\infty(\mathbb{R}^d)$ such that
$$\chi \ge 0, \qquad \operatorname{supp}(\chi) \subset B(1), \qquad \int_{\mathbb{R}^d} \chi\, dx = 1,$$
where $B(r) = \{x \in \mathbb{R}^d : |x| \le r\}$. For $n \ge 1$, define $\chi_n(x) = 2^{dn}\chi(2^n x)$. Then $\operatorname{supp}(\chi_n) \subset B(2^{-n})$ and $\int_{\mathbb{R}^d} \chi_n\, dx = 1$. Set $V_n = V * \chi_n$ (convolution product); then $V_n$ is a bounded smooth vector field on $\mathbb{R}^d$.
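The regularization $V_n = V * \chi_n$ is easy to experiment with in dimension one. The sketch below (the grid, the bump profile, and the test field $V(x) = -x\log|x|$, which satisfies the Osgood condition near $0$, are all our own illustrative choices) shows the uniform error on a compact set decreasing as $n$ grows, in line with Proposition 1.3.

```python
import numpy as np

def mollify(V_vals, dx, n):
    """Discrete analogue of V_n = V * chi_n in d = 1, with
    chi_n(y) = 2^n chi(2^n y) and chi a normalized smooth bump on [-1, 1]."""
    radius = 2.0 ** (-n)
    m = max(int(radius / dx), 1)
    y = np.arange(-m, m + 1) * dx
    chi = np.zeros_like(y)
    mask = np.abs(y) < radius
    chi[mask] = np.exp(-1.0 / (1.0 - (y[mask] / radius) ** 2))
    chi /= chi.sum() * dx                     # enforce integral(chi_n) = 1
    return np.convolve(V_vals, chi, mode="same") * dx

x = np.linspace(-0.5, 0.5, 4001)
dx = x[1] - x[0]
# Osgood (non-Lipschitz at 0) field V(x) = -x log|x|, extended by V(0) = 0.
V = np.where(x != 0.0, -x * np.log(np.abs(x) + 1e-300), 0.0)

interior = slice(1000, 3001)                  # x in [-0.25, 0.25]: no boundary effects
errs = [np.max(np.abs(mollify(V, dx, n)[interior] - V[interior])) for n in (2, 3, 4, 5)]
```

On the interior window the sup-error decreases with $n$, while $V_n$ stays smooth and bounded as the proposition asserts.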
Proposition 1.3 There exists $\zeta > 1$ such that
$$\sup_{x\in\mathbb{R}^d} |V_n(x) - V(x)| \le \zeta^{-n} \quad \text{for } n \text{ big enough.} \tag{1.2.3}$$

Proof. We have
$$|V_n(x) - V(x)| \le \int_{\mathbb{R}^d} |V(x-y) - V(x)|\, \chi_n(y)\, dy \le C \int_{B(2^{-n})} |y| \log\frac{1}{|y|} \cdot \chi_n(y)\, dy \le C\, 2^{-n} \log 2^n \cdot \int_{B(2^{-n})} \chi_n(y)\, dy \le C\, \zeta^{-n}$$
for some $\zeta > 1$. $\square$
Theorem 1.4 Let $(X_n(t,x))_{t\ge 0}$ be the solution to
$$\frac{dX_n}{dt} = V_n(X_n), \qquad X_n(0) = x.$$
Then for any $T > 0$,
$$\lim_{n\to\infty}\, \sup_{t\le T}\, \sup_{x\in\mathbb{R}^d} |X_n(t,x) - X(t,x)| = 0. \tag{1.2.4}$$
Proof. For simplicity, we omit $x$ in $X_n$ as well as in $X$. Set $\xi_n(t) = |X_n(t) - X(t)|^2$ and
$$\tau_n = \inf\{t > 0 : \xi_n(t) \ge \delta^2\}.$$
Then by (1.2.1) and (1.2.3), for $t \le \tau_n$,
$$\Big|\frac{dX_n(t)}{dt} - \frac{dX(t)}{dt}\Big| \le |V_n(X_n(t)) - V(X_n(t))| + |V(X_n(t)) - V(X(t))| \le \zeta^{-n} + C\,|X_n(t) - X(t)| \log\frac{1}{|X_n(t) - X(t)|}.$$
Therefore for $t \le \tau_n$,
$$\Big|\frac{d\xi_n(t)}{dt}\Big| = 2\,\Big|\big\langle X_n(t) - X(t),\ \frac{dX_n(t)}{dt} - \frac{dX(t)}{dt}\big\rangle\Big| \le 2\delta\,\zeta^{-n} + C\, \xi_n(t) \log\frac{1}{\xi_n(t)}.$$
Using the lemma below, we get for $t \le \tau_n \wedge T$,
$$\xi_n(t) \le (2\delta\,\zeta^{-n})^{e^{-Ct}} \le (2\delta\,\zeta^{-n})^{e^{-CT}}.$$
This last quantity is less than $\delta^2$ for $n$ big enough. It follows that $\tau_n \ge T$ and
$$\sup_{0\le t\le T} \xi_n(t) \le (2\delta\,\zeta^{-n})^{e^{-CT}}.$$
Letting $n \to \infty$ yields the result (1.2.4). $\square$

A consequence of (1.2.4) is that $x \to X_t(x)$ defines a flow of homeomorphisms of $\mathbb{R}^d$. In fact, by the previous section, the inverse maps $X_n^{-1}$ as well as $X_t^{-1}$ satisfy the same type of differential equations. In the same way, $X_n^{-1}$ converges to $X_t^{-1}$ uniformly with respect to $(t,x)$ in any compact subset of $[0,+\infty) \times \mathbb{R}^d$.
Lemma 1.5 Let $\varphi : \mathbb{R}_+ \to (0,1)$ be a differentiable function such that, for some $C > 0$,
$$\varphi'(t) \le C\,\varphi(t) \log\frac{1}{\varphi(t)}; \tag{1.2.5}$$
then
$$\varphi(t) \le (\varphi(0))^{e^{-Ct}} \quad \text{for } t \ge 0.$$
Proof. Since $\log\varphi(t)$ is negative, (1.2.5) gives
$$\frac{\varphi'(t)}{\varphi(t)\log\varphi(t)} \ge -C.$$
Integrating this inequality over $(0,t)$ leads to
$$\int_{\varphi(0)}^{\varphi(t)} \frac{ds}{s\log s} \ge -Ct, \quad \text{or} \quad \log\Big(\frac{\log\varphi(t)}{\log\varphi(0)}\Big) \ge -Ct.$$
Therefore $\log\varphi(t) \le \log\varphi(0)\cdot e^{-Ct}$, or $\varphi(t) \le (\varphi(0))^{e^{-Ct}}$. $\square$
In order to apply Lemma 1.5 in Theorem 1.4, we set
$$\eta_n(t) = \frac{2\delta}{C}\,\zeta^{-n} + \xi_n(t)$$
and observe that, for $\delta$ small enough,
$$-C\,\xi_n(t)\log\xi_n(t) + 2\delta\,\zeta^{-n} \le -C\,\eta_n(t)\log\eta_n(t).$$
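As a quick sanity check of Lemma 1.5, the equality case $\varphi(t) = (\varphi(0))^{e^{-Ct}}$ solves $\varphi'(t) = C\varphi(t)\log\frac{1}{\varphi(t)}$ exactly; the sketch below (the constants $C$ and $\varphi(0)$ are arbitrary choices) verifies this identity by finite differences.

```python
import numpy as np

# Equality case of the logarithmic Gronwall lemma:
# phi(t) = phi(0)^{exp(-C t)} satisfies phi'(t) = C phi(t) log(1/phi(t)).
C, phi0 = 2.0, 0.01
phi = lambda t: phi0 ** np.exp(-C * t)

h = 1e-6
for t0 in (0.1, 0.5, 1.0):
    dphi = (phi(t0 + h) - phi(t0 - h)) / (2 * h)
    assert np.isclose(dphi, C * phi(t0) * np.log(1.0 / phi(t0)), rtol=1e-4)
```

Note the characteristic behaviour: the bound tends to $1$ (not to $\varphi(0)$) as $t$ grows, which is why the uniform estimate in Theorem 1.4 degrades only through the exponent $e^{-CT}$.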
Assume now that the divergence $\operatorname{div}(V) \in L^1_{loc}(\mathbb{R}^d)$ exists in the distribution sense:
$$\int_{\mathbb{R}^d} \operatorname{div}(V)\, \varphi\, dx = -\int_{\mathbb{R}^d} \langle \nabla\varphi, V\rangle\, dx, \qquad \varphi \in C_c^\infty(\mathbb{R}^d). \tag{1.2.6}$$
We have
$$\operatorname{div}(V_n) = \sum_{i=1}^d \frac{\partial}{\partial x_i}(V_n)_i = \sum_{i=1}^d \int_{\mathbb{R}^d} V_i(y)\, \frac{\partial}{\partial x_i}\chi_n(x-y)\, dy = -\int_{\mathbb{R}^d} V(y)\cdot\nabla_y\big(\chi_n(x-y)\big)\, dy = \int_{\mathbb{R}^d} \operatorname{div}(V)(y)\, \chi_n(x-y)\, dy = \operatorname{div}(V) * \chi_n.$$
It follows that $\operatorname{div}(V_n)$ converges to $\operatorname{div}(V)$ in $L^1_{loc}(\mathbb{R}^d)$ as $n \to +\infty$.
Theorem 1.6 Assume (1.2.1) and that $\operatorname{div}(V)$ exists. Let $\theta \in C(\mathbb{R}^d)$. Then $u_t(x) = \theta(X_t^{-1}(x))$ satisfies the transport equation
$$\frac{du_t}{dt} + V\cdot\nabla u_t = 0, \qquad u_0 = \theta,$$
in the sense that, for any $\varphi \in C_0^\infty(\mathbb{R}^d)$,
$$(u_t, \varphi)_{L^2} = (\theta, \varphi)_{L^2} - \int_0^t (u_s, \operatorname{div}(\varphi V))_{L^2}\, ds. \tag{1.2.7}$$
Proof. Step 1. Suppose first that $\theta \in C^1$. Let $X_n$ be given by Theorem 1.4 and set $u_n(t) = \theta(X_n^{-1}(t))$. Then by Section 1.1, for any $\varphi \in C_0^\infty(\mathbb{R}^d)$,
$$(u_n(t), \varphi)_{L^2} = (\theta, \varphi)_{L^2} - \int_0^t (u_n(s), \operatorname{div}(\varphi V_n))_{L^2}\, ds. \tag{1.2.8}$$
Let $K$ be the support of $\varphi$; then the support of $\operatorname{div}(\varphi V_n) = \varphi\operatorname{div}(V_n) + \langle\nabla\varphi, V_n\rangle$ is contained in $K$. Let $R = \sup_{0\le t\le T}\sup_{x\in K} |X^{-1}(t,x)|$, which is finite. By (1.2.4), for $n$ big enough, $X_n^{-1}(t,x) \in B(R+1)$ for all $0 \le t \le T$, $x \in K$. Therefore, for $\varepsilon > 0$ there exists $n_0$ such that for $n \ge n_0$,
$$\sup_{x\in K}\,\sup_{t\le T}\, |\theta(X_n^{-1}(t,x)) - \theta(X^{-1}(t,x))| < \varepsilon.$$
Letting $n \to \infty$ in (1.2.8), we get (1.2.7).
Step 2. Let $\theta \in C(\mathbb{R}^d)$ and let $\theta_n \in C^1(\mathbb{R}^d)$ converge to $\theta$ on every compact set. By Step 1, $u^n_t = \theta_n(X_t^{-1})$ satisfies (1.2.7). Now let $K = \operatorname{supp}(\varphi)$. By what was done above, $\theta_n(X_t^{-1})$ converges uniformly to $\theta(X_t^{-1})$ over $K$, so that letting $n \to \infty$ in
$$(u^n_t, \varphi)_{L^2} = (\theta_n, \varphi)_{L^2} - \int_0^t (u^n_s, \operatorname{div}(\varphi V))_{L^2}\, ds,$$
we get the result. $\square$
Corollary 1.7 If $\operatorname{div}(V) = 0$, then $X_t$ preserves the Lebesgue measure.

Proof. Take $\theta \in C_c(\mathbb{R}^d)$ and let $K = \operatorname{supp}(\theta)$. Then $K_T = \cup_{0\le t\le T} X_t(K)$ is compact. For $x \in (K_T)^c$ and any $t \in [0,T]$, $X_t^{-1}(x) \in K^c$, so that $\theta(X_t^{-1}(x)) = 0$. Now take $\varphi \in C_0^\infty(\mathbb{R}^d)$ such that $\varphi = 1$ on $K_T$. We have, for $s \le t \le T$,
$$(u_s, \operatorname{div}(\varphi V))_{L^2} = \int u_s(x)\, \langle\nabla\varphi, V(x)\rangle\, dx = 0,$$
since $u_s$ is supported in $K_T$, where $\nabla\varphi = 0$. So
$$\int_{\mathbb{R}^d} \theta(X_t^{-1})\, dx = (u_t, \varphi)_{L^2} = (\theta, \varphi)_{L^2} = \int_{\mathbb{R}^d} \theta(x)\, dx,$$
which means that $X_t^{-1}$ leaves the Lebesgue measure invariant, and so does $X_t$. $\square$
For the general case, we have (see [10])

Theorem 1.8 Assume $\operatorname{div}(V) \in L^\infty$. Then the Lebesgue measure $\lambda_d$ on $\mathbb{R}^d$ is quasi-invariant under the flow $X_t$: $(X_t)_*\lambda_d = k_t\, \lambda_d$; moreover,
$$e^{-t\|\operatorname{div}(V)\|_\infty} \le k_t(x) \le e^{t\|\operatorname{div}(V)\|_\infty}. \tag{1.2.9}$$
Proof. Take a positive function $\theta \in C_c(\mathbb{R}^d)$ and set $u_t = \theta(X_t^{-1})$. As seen in the proof of the above corollary, there exists $R > 0$ such that $u_t(x) = 0$ for $t \in [0,T]$ and $|x| > R$. Then for $\varphi \in C_c^\infty(\mathbb{R}^d)$ such that $\varphi(x) = 1$ for $|x| \le R$, we have, for $s \in [0,T]$, $(u_s, \operatorname{div}(\varphi V))_{L^2} = \int_{\mathbb{R}^d} u_s\, \operatorname{div}(V)\, dx$. The equation (1.2.7) yields
$$\int_{\mathbb{R}^d} u_t\, dx = \int_{\mathbb{R}^d} \theta\, dx - \int_0^t \Big(\int_{\mathbb{R}^d} u_s\, \operatorname{div}(V)\, dx\Big)\, ds.$$
It is easy to see that $t \to \int_{\mathbb{R}^d} u_t\, dx$ is absolutely continuous and
$$\Big|\frac{d}{dt}\int_{\mathbb{R}^d} u_t\, dx\Big| \le \|\operatorname{div}(V)\|_\infty \int_{\mathbb{R}^d} u_t\, dx.$$
We deduce that
$$e^{-t\|\operatorname{div}(V)\|_\infty} \int_{\mathbb{R}^d} \theta\, dx \le \int_{\mathbb{R}^d} \theta(X_t^{-1})\, dx \le e^{t\|\operatorname{div}(V)\|_\infty} \int_{\mathbb{R}^d} \theta\, dx.$$
It follows that $(X_t^{-1})_*\lambda_d$ is absolutely continuous with respect to $\lambda_d$, and (1.2.9) follows. $\square$
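Theorem 1.8 can be illustrated on a linear field $V(x) = Ax$, where everything is explicit: the flow is $X_t(x) = e^{tA}x$, $\operatorname{div}(V) = \operatorname{tr}(A)$ is constant, and $(X_t)_*\lambda_d$ has the constant density $k_t = e^{-t\operatorname{tr}(A)}$ by the change-of-variables formula, which saturates one side of (1.2.9). A hedged numerical sketch (the matrix $A$ is an arbitrary choice; the tolerances absorb floating-point error in the saturated case):

```python
import numpy as np

A = np.array([[0.3, 0.2],
              [0.1, -0.4]])
div_V = np.trace(A)                      # div(V) = tr(A), constant in x
t = 2.0

# matrix exponential via eigendecomposition (this A has distinct real eigenvalues)
w, P = np.linalg.eig(A)
expm_tA = (P @ np.diag(np.exp(t * w)) @ np.linalg.inv(P)).real

jac_det = np.linalg.det(expm_tA)         # det e^{tA} = e^{t tr(A)}
k_t = 1.0 / jac_det                      # density of (X_t)_* lambda_d w.r.t. lambda_d

bound = np.exp(t * abs(div_V))           # e^{t ||div V||_inf}
assert 1.0 / bound * 0.999 <= k_t <= bound * 1.001   # (1.2.9), up to roundoff
assert np.isclose(jac_det, np.exp(t * div_V))
```

For a divergence-free $A$ (trace zero) the density is identically $1$, recovering Corollary 1.7.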
2 DiPerna–Lions Theory
Let $1 \le p \le +\infty$. We denote by $L^p(\mathbb{R}^d)$ the space of measurable functions $f : \mathbb{R}^d \to \mathbb{R}$ such that $\int_{\mathbb{R}^d} |f(x)|^p\, dx < +\infty$, and by $L^p_{loc}(\mathbb{R}^d)$ the space of those functions $f$ such that $f\,1_K \in L^p(\mathbb{R}^d)$ for all compact subsets $K \subset \mathbb{R}^d$. Note that $L^p(\mathbb{R}^d)$ is a Banach space, having (for $1 \le p < +\infty$) $L^q(\mathbb{R}^d)$ as its dual space, where $q$ is the conjugate exponent of $p$: $1/p + 1/q = 1$; on the other hand, $L^p_{loc}(\mathbb{R}^d)$ is not a Banach space, but a vector space endowed with a complete metric. For a Banach space $E$, similarly, $L^p(\mathbb{R}^d, E)$ is the space of measurable $E$-valued functions such that $\int_{\mathbb{R}^d} |f(x)|_E^p\, dx < +\infty$. The space $L^1([0,T], L^q_{loc})$ denotes the space of functions $(t,x) \to u_t(x)$ such that $\int_0^T \big(\int_K |u_t(x)|^q\, dx\big)^{1/q}\, dt$ is finite for any compact subset $K \subset \mathbb{R}^d$. For a sequence $u^n \in L^1([0,T], L^q_{loc})$, we say that it converges to $u \in L^1([0,T], L^q_{loc})$ if
$$\lim_{n\to+\infty} \int_0^T \Big(\int_K |u^n_t(x) - u_t(x)|^q\, dx\Big)^{1/q}\, dt = 0 \quad \text{for all compact } K \subset \mathbb{R}^d.$$
Definition 2.1 Let $V(t,\cdot) \in L^1_{loc}(\mathbb{R}^d, \mathbb{R}^d)$ be a time-dependent vector field and $c_t \in L^1_{loc}(\mathbb{R}^d)$. Let $p \in [1,\infty]$ and $T > 0$ be given. We say that $u \in L^\infty([0,T], L^p(\mathbb{R}^d))$ is a (weak) solution to
$$\frac{\partial u_t}{\partial t} + V_t\cdot\nabla u_t + c_t u_t = 0 \quad \text{in } (0,T)\times\mathbb{R}^d, \tag{2.1}$$
with the initial function $u_0 \in L^p(\mathbb{R}^d)$, if for any $\phi \in C_0^\infty([0,T)\times\mathbb{R}^d)$,
$$-\int_0^T dt \int_{\mathbb{R}^d} u\, \frac{\partial\phi}{\partial t}\, dx - \int_{\mathbb{R}^d} u_0\, \phi(0,x)\, dx + \int_0^T dt \int_{\mathbb{R}^d} u\, \big(-\operatorname{div}(V_t\phi) + c_t\phi\big)\, dx = 0. \tag{2.2}$$
Notice that $t \to u_t$ is not necessarily continuous.
Remark 2.2 If we take $\phi(t,x) = \alpha(t)\phi(x)$ with $\alpha(0) = 0$ in (2.2), then
$$-\int_0^T \alpha'(t)\, dt \int_{\mathbb{R}^d} u\phi\, dx + \int_0^T \alpha(t) \int_{\mathbb{R}^d} u\, \big(-\operatorname{div}(V\phi) + c\,\phi\big)\, dx = 0,$$
or, in the distribution sense,
$$\frac{d}{dt}\int_{\mathbb{R}^d} u_t\phi\, dx + \int_{\mathbb{R}^d} u_t\, \big(-\operatorname{div}(V\phi) + c\,\phi\big)\, dx = 0. \tag{2.3}$$
Note that $\operatorname{div}(V\phi) = \langle V, \nabla\phi\rangle + \phi\operatorname{div}(V)$. Therefore (2.2) makes sense provided we assume
$$c - \operatorname{div}(V) \in L^1([0,T], L^q_{loc}(\mathbb{R}^d)), \qquad V \in L^1([0,T], L^q_{loc}(\mathbb{R}^d)). \tag{2.4}$$
Under the condition (2.4), there exist constants $C_\phi > 0$ and $R > 0$ such that
$$\Big|\frac{d}{dt}\int_{\mathbb{R}^d} u_t\phi\, dx\Big| \le C_\phi\,\big[\|c_t - \operatorname{div}(V_t)\|_{L^q(B(R))} + \|V_t\|_{L^q(B(R))}\big].$$
We shall denote $\xi(t) = \int_{\mathbb{R}^d} u_t\phi\, dx$, $m(t) = \|c_t - \operatorname{div}(V_t)\|_{L^q(B(R))} + \|V_t\|_{L^q(B(R))}$, and by $\dot\xi$ the distributional derivative of $\xi$. Then we have
$$\int_0^T |\dot\xi(s)|\, ds \le C_\phi \int_0^T m(s)\, ds < +\infty.$$
This means that $\xi$ belongs to the Sobolev space $W^{1,1}([0,T])$. Now, for a function $\gamma \in L^1([0,T])$, we say that $t_0 \in (0,T)$ is a Lebesgue point of $\gamma$ if $\gamma(t_0) = \lim_{\eta\to 0} \frac{1}{2\eta}\int_{t_0-\eta}^{t_0+\eta} \gamma(s)\, ds$. By the Lebesgue differentiation theorem, the set of such points has full measure. Now set
$$L = \{t \in (0,T) : t \text{ is a Lebesgue point of } \xi\}.$$
Then for $t_1 < t_2$ in $L$, we have
$$\xi(t_2) - \xi(t_1) = \int_{t_1}^{t_2} \dot\xi(s)\, ds.$$
It follows that
$$|\xi(t_2) - \xi(t_1)| \le C_\phi \int_{t_1}^{t_2} m(s)\, ds, \qquad t_1, t_2 \in L.$$
Since $L$ is dense in $[0,T]$, we deduce that $\xi$ admits a uniformly continuous extension $\bar\xi$ to $[0,T]$. Again by the above inequality, this uniformly continuous version is absolutely continuous; therefore $t \to \bar\xi(t)$ is differentiable at almost every point of $(0,T)$.
Proposition 2.3 Let $p \in [1,\infty]$ and $u_0 \in L^p(\mathbb{R}^d)$. Assume (2.4) and
$$c,\ \operatorname{div}(V) \in L^1([0,T], L^\infty(\mathbb{R}^d)).$$
Then there exists a solution $u \in L^\infty([0,T], L^p(\mathbb{R}^d))$ to (2.1) with the given $u_0$.
Proof. For simplicity, consider the case $c = 0$. For $1 < p < \infty$, the function $x \to |x|^p$ is differentiable.

A priori estimate: if (2.1) is satisfied in the classical sense, then using $(|x|^p)' = p\,\mathrm{sgn}(x)|x|^{p-1}$,
$$\frac{\partial}{\partial t}|u_t|^p + V_t\cdot\nabla|u_t|^p = 0.$$
Integrating this equation over $\mathbb{R}^d$, we deduce
$$\frac{d}{dt}\int_{\mathbb{R}^d} |u_t|^p\, dx = \int_{\mathbb{R}^d} \operatorname{div}(V_t)\,|u_t|^p\, dx \le \|\operatorname{div}(V_t)\|_\infty \int_{\mathbb{R}^d} |u_t|^p\, dx,$$
which implies, by the Gronwall lemma,
$$\int_{\mathbb{R}^d} |u_t|^p\, dx \le \Big(\int_{\mathbb{R}^d} |u_0|^p\, dx\Big)\, e^{\int_0^t \|\operatorname{div}(V_s)\|_\infty ds}. \tag{2.5}$$
The inequality (2.5) is the so-called a priori estimate on solutions to (2.1). Now we are going to prove the existence of solutions to (2.1) in the space $L^\infty([0,T], L^p)$. For simplicity, we suppose in the sequel that $V \in L^1([0,T], L^q)$ and consider the case $1 < p < +\infty$ (for the other cases, we refer to [2]).

Let $\chi_n$ be a regularizing sequence: $\operatorname{supp}(\chi_n) \subset B(2^{-n})$, $\int_{\mathbb{R}^d}\chi_n(x)\, dx = 1$ and $\chi_n \ge 0$. Define
$$V^n_t = V_t * \chi_n, \qquad u^n_0 = u_0 * \chi_n.$$
We have
$$\|\chi_n\|_{L^1} = \|\chi\|_{L^1} = 1, \qquad \|u^n_0\|_{L^p} \le \|u_0\|_{L^p}\,\|\chi_n\|_{L^1} = \|u_0\|_{L^p}.$$
By harmonic analysis, as $n \to \infty$, for $t$ fixed, $V^n_t \to V_t$ in $L^q$ and $u^n_0 \to u_0$ in $L^p$. Moreover, $V^n_t \in C^1_b(\mathbb{R}^d)$ and $u^n_0 \in C^1_b(\mathbb{R}^d)$. By Section 1, there exists a unique solution $u^n_\cdot \in C([0,T], C^1_b(\mathbb{R}^d))$ to
$$\frac{\partial u^n_t}{\partial t} + V^n_t\cdot\nabla u^n_t = 0, \qquad u^n|_{t=0} = u^n_0.$$
Since $\operatorname{div}(V^n_t) = \operatorname{div}(V_t) * \chi_n$, we have $\|\operatorname{div}(V^n_t)\|_\infty \le \|\operatorname{div}(V_t)\|_\infty$. Now, applying (2.5) to $u^n$, we see that $(u^n)_{n\ge 1}$ is bounded in $L^\infty([0,T], L^p)$. Notice that $L^\infty([0,T], L^p)$ is the dual space of $L^1([0,T], L^q)$. Applying the following result from Functional Analysis, up to a subsequence, $u^n$ converges to a function $u \in L^\infty([0,T], L^p)$ in the sense that
$$\lim_{n\to\infty} \int_0^T \int_{\mathbb{R}^d} u^n(t)\,\varphi(t)\, dx\, dt = \int_0^T \int_{\mathbb{R}^d} u(t)\,\varphi(t)\, dx\, dt$$
for each $\varphi \in L^1([0,T], L^q)$. Now, taking the limit in (2.2), we see that $u \in L^\infty([0,T], L^p)$ is a solution to (2.2). $\square$

Theorem (from Functional Analysis). Let $E$ be a separable Banach space; then the unit ball in the dual space $E'$ is compact for the weak* topology. More precisely, for any sequence $\ell_n \in E'$ such that $\|\ell_n\|_{E'} \le R$, there exist $\ell \in E'$ with $\|\ell\|_{E'} \le R$ and a subsequence $(n_k)$ such that
$$\lim_{k\to+\infty} \ell_{n_k}(x) = \ell(x) \quad \text{holds for all } x \in E.$$
It is often difficult to work with an equation in the distribution sense. The following approximation will play an important role.
Theorem 2.4 Let $1 \le p \le \infty$ and let $q$ be its conjugate exponent. Suppose furthermore that
$$V \in L^1([0,T], W^{1,\alpha}_{loc}(\mathbb{R}^d)) \quad \text{for some } \alpha \ge q. \tag{2.6}$$
Then, if we denote $u^n = u * \chi_n$, $u^n$ satisfies
$$\frac{\partial u^n}{\partial t} + V_t\cdot\nabla u^n = r_n, \tag{2.7}$$
where $r_n \to 0$ in $L^1([0,T], L^\beta_{loc}(\mathbb{R}^d))$ and $\beta$ is given by
$$\frac{1}{\beta} = \frac{1}{\alpha} + \frac{1}{p} \quad \text{if } \alpha \text{ or } p < +\infty; \qquad \text{any } \beta < +\infty \quad \text{if } \alpha = p = \infty.$$
Proof. Recall that $f \in W^{1,\alpha}_{loc}(\mathbb{R}^d)$ if $f$ and its distributional derivatives $\partial f/\partial x_i$ are in $L^\alpha_{loc}(\mathbb{R}^d)$. Let $w \in L^p_{loc}(\mathbb{R}^d)$ and let $B$ be a vector field in $W^{1,\alpha}_{loc}$. Define the term $(B\cdot\nabla w) * \chi_n$ by
$$[(B\cdot\nabla w) * \chi_n](x) = \int_{\mathbb{R}^d} (B\cdot\nabla w)\, \tau_x\chi_n\, dy = -\int_{\mathbb{R}^d} w\, \operatorname{div}(\tau_x\chi_n \cdot B)\, dy = -\int_{\mathbb{R}^d} w(y)\big(\chi_n(x-y)\operatorname{div}(B)(y) + \langle\nabla_y\chi_n(x-y), B(y)\rangle\big)\, dy,$$
where $\tau_x f(y) = f(x-y)$.
Lemma 2.5 For $B \in W^{1,\alpha}_{loc}(\mathbb{R}^d)$, set $r_n := (B\cdot\nabla w) * \chi_n - B\cdot\nabla(w * \chi_n)$. Then $r_n \to 0$ in $L^\beta_{loc}(\mathbb{R}^d)$.

Proof. We have
$$r_n = -(w\operatorname{div}(B)) * \chi_n + \int_{\mathbb{R}^d} w(y)\,\langle B(y) - B(x),\, (\nabla\chi_n)(x-y)\rangle\, dy.$$
By hypothesis, $w\operatorname{div}(B) \in L^\beta_{loc}$; therefore $(w\operatorname{div}(B)) * \chi_n \to w\operatorname{div}(B)$ in $L^\beta_{loc}$.

Next we estimate the second term on the right-hand side of the above expression. Firstly, we deal with the good case: $w$ and $B \in C^1_b$. For $n$ big enough, the integrand is concentrated in the ball $B(x, 2^{-n})$; therefore
$$I_n(x) := \int_{\mathbb{R}^d} w(y)\,\langle B(y) - B(x),\, (\nabla\chi_n)(x-y)\rangle\, dy \approx w(x) \int_{\mathbb{R}^d} \langle B(y) - B(x),\, -\nabla(\tau_x\chi_n)\rangle\, dy = w(x) \int_{\mathbb{R}^d} \operatorname{div}(B)\,\tau_x\chi_n\, dy = w(x) \int_{\mathbb{R}^d} \operatorname{div}(B)\,\chi_n(x-y)\, dy \to w(x)\operatorname{div}(B)(x) \quad \text{as } n \to \infty.$$
Secondly, fix $R > 0$ and set $Q_x(y) = \langle B(y) - B(x),\, (\nabla\chi_n)(x-y)\rangle$. Then for $x \in B(R)$,
$$|I_n(x)|^\beta = \Big|\int_{B(x,1)} w(y)\,Q_x(y)\, dy\Big|^\beta \le [\lambda(B(x,1))]^{\beta-1} \int_{B(R+1)} |w(y)\,Q_x(y)|^\beta\, dy,$$
where $\lambda$ denotes the Lebesgue measure. Let $C_\beta = \lambda(B(0,1))^{(\beta-1)/\beta}$. Then, according to the Hölder inequality,
$$|I_n(x)| \le C_\beta\, \|w\|_{L^p(B(R+1))} \Big(\int |Q_x(y)|^\alpha\, dy\Big)^{\frac{1}{\alpha}}.$$
We have
$$|Q_x(y)| \le \frac{|B(y) - B(x)|}{2^{-n}}\, \Big|\nabla\chi\Big(\frac{x-y}{2^{-n}}\Big)\Big| \le \|\nabla\chi\|_\infty\, \frac{|B(y) - B(x)|}{2^{-n}}\, 1_{|y-x|\le 2^{-n}}.$$
It follows that there exists a constant $C_{\beta,\alpha,R} > 0$ such that
$$\|I_n\|_{L^\beta(B(R))} \le C_{\beta,\alpha,R}\, \|w\|_{L^p(B(R+1))} \Big[\int_{B(R)} \Big(\int_{|x-y|\le 2^{-n}} \Big(\frac{|B(y)-B(x)|}{2^{-n}}\Big)^\alpha dy\Big)\, dx\Big]^{\frac{1}{\alpha}}.$$
Now we claim that
$$\Big[\int_{B(R)} \Big(\int_{|x-y|\le 2^{-n}} \Big(\frac{|B(y)-B(x)|}{2^{-n}}\Big)^\alpha dy\Big)\, dx\Big]^{\frac{1}{\alpha}} \le \lambda(B(1))^{\frac{1}{\alpha}}\, \|\nabla B\|_{L^\alpha(B(R+1))}. \tag{2.8}$$
If $B \in C^1$, then
$$|B(y) - B(x)| \le |y-x| \int_0^1 |\nabla B(x + t(y-x))|\, dt$$
and
$$\int_{|x-y|\le 2^{-n}} \Big(\frac{|B(y)-B(x)|}{2^{-n}}\Big)^\alpha dy \le \int_{|x-y|\le 2^{-n}} \int_0^1 |\nabla B(x + t(y-x))|^\alpha\, dt\, dy.$$
Therefore
$$\int_{B(R)} \Big(\int_{|x-y|\le 2^{-n}} \Big(\frac{|B(y)-B(x)|}{2^{-n}}\Big)^\alpha dy\Big)\, dx \le \int_{B(R)} dx \int_{|x-y|\le 2^{-n}} \int_0^1 |\nabla B(x + t(y-x))|^\alpha\, dt\, dy = \int_{B(R)} dx \int_{B(2^{-n})} \int_0^1 |\nabla B(x + tz)|^\alpha\, dt\, dz \le \int_{B(1)} \int_0^1 \Big(\int_{B(R+1)} |\nabla B(x)|^\alpha\, dx\Big)\, dt\, dz = \lambda(B(1))\, \|\nabla B\|^\alpha_{L^\alpha(B(R+1))}.$$
We get (2.8). Now, by a density argument, we see that (2.8) holds for $w \in L^p_{loc}(\mathbb{R}^d)$ and $B \in W^{1,\alpha}_{loc}$. Finally,
$$\|I_n\|_{L^\beta(B(R))} \le C_{\beta,\alpha,R}\, \|w\|_{L^p(B(R+1))}\, \|\nabla B\|_{L^\alpha(B(R+1))}.$$
Using this estimate and the first step, we can see that $I_n \to w\operatorname{div}(B)$ in $L^\beta_{loc}$ for $B \in W^{1,\alpha}_{loc}$. $\square$

End of the proof of Theorem 2.4. Since $u^n = u * \chi_n$, we have $\frac{\partial u^n}{\partial t} = \frac{\partial u}{\partial t} * \chi_n$ and
$$V_t\cdot\nabla u^n = V_t\cdot\nabla(u * \chi_n) = (V_t\cdot\nabla u) * \chi_n + r_n(t).$$
We have
$$\frac{\partial u^n}{\partial t} + V_t\cdot\nabla u^n = \Big(\frac{\partial u}{\partial t} + V_t\cdot\nabla u\Big) * \chi_n + r_n(t) = r_n(t).$$
Now, using the above observation, we get the result. $\square$
Remark 2.6 The equation (2.7) is understood in the distribution sense. However, by the equation, we see that $u^n \in W^{1,1}_{loc}((0,T)\times\mathbb{R}^d)$; therefore for a.e. $x \in \mathbb{R}^d$, $t \to u^n(t,x)$ is differentiable at a.e. $t_0 \in (0,T)$ in the classical sense.

In the sequel, we shall say that $u \in L^1([0,T], L^1) + L^1([0,T], L^\infty)$ if $u = u_1 + u_2$ with $u_1$ belonging to the first space and $u_2$ to the second.
Theorem 2.7 (Uniqueness) Under the hypotheses
$$V \in L^1([0,T], W^{1,q}_{loc}), \qquad \operatorname{div}(V) \in L^1([0,T], L^\infty(\mathbb{R}^d))$$
and
$$\frac{V}{1+|x|} \in L^1([0,T], L^1) + L^1([0,T], L^\infty),$$
uniqueness holds in $L^\infty([0,T], L^p)$, where $p \in [1,\infty]$.
Proof. We prove only the case $p > 1$; for $p = 1$ we refer to [2]. Let $u \in L^\infty([0,T], L^p)$ be a solution to (2.1) such that $u|_{t=0} = 0$, and let $u^n = u * \chi_n$. Using Theorem 2.4,
$$r_n = \frac{\partial u^n}{\partial t} + V\cdot\nabla u^n \to 0 \quad \text{in } L^1([0,T], L^1_{loc}(\mathbb{R}^d)).$$
Let $\beta \in C^1(\mathbb{R})$ with bounded derivative. We have
$$\frac{\partial}{\partial t}\beta(u^n) + V\cdot\nabla\beta(u^n) = r_n\,\beta'(u^n). \tag{2.9}$$
As $|\beta(u^n) - \beta(u)| \le \|\beta'\|_\infty |u^n - u|$, $\beta(u^n)$ converges to $\beta(u)$ in $L^p$. Letting $n \to \infty$ in (2.9) yields
$$\frac{\partial}{\partial t}\beta(u) + V\cdot\nabla\beta(u) = 0 \quad \text{in the distribution sense.} \tag{2.10}$$
Let $\phi \in C_c^\infty(\mathbb{R}^d)$. By the remark at the beginning of this section, we have
$$\frac{d}{dt}\int_{\mathbb{R}^d} \beta(u)\,\phi\, dx = \int_{\mathbb{R}^d} \operatorname{div}(V_t\phi)\,\beta(u)\, dx.$$
Take now $\Phi \in C_0^\infty(\mathbb{R}^d)$, $0 \le \Phi \le 1$, $\operatorname{supp}(\Phi) \subset B(2)$ and $\Phi \equiv 1$ on $B(1)$. Next, we consider $\Phi_R(x) = \Phi(x/R)$, $R \ge 1$. Then
$$\frac{d}{dt}\int \beta(u)\,\Phi_R\, dx - \int \beta(u)\operatorname{div}(V)\,\Phi_R\, dx = \int \beta(u)\, V\cdot\nabla\Phi_R\, dx.$$
Let $M > 0$ and put $\beta(t) = (|t| \wedge M)^p$ ($\beta$ is Lipschitz continuous but not $C^1$; this problem may be overcome by an approximation argument); then
$$\frac{d}{dt}\int (|u| \wedge M)^p\,\Phi_R\, dx \le C_t \int (|u| \wedge M)^p\,\Phi_R\, dx + \frac{\|\nabla\Phi\|_\infty}{R} \int_{R\le|x|\le 2R} (|u| \wedge M)^p\, |V(t,x)|\, dx.$$
Next we observe that $(|u| \wedge M)^p \in L^\infty([0,T], L^1\cap L^\infty)$, while
$$\frac{1}{R}\,|V(t,x)|\, 1_{R\le|x|\le 2R} \le \frac{4\,|V(t,x)|}{1+|x|}\, 1_{R\le|x|};$$
therefore, writing $V = V_1 + V_2$ according to the decomposition of $\frac{V}{1+|x|}$,
$$\frac{d}{dt}\int (|u| \wedge M)^p\,\Phi_R\, dx \le C_t \int (|u| \wedge M)^p\,\Phi_R\, dx + m(t) \int_{|x|\ge R} (|u| \wedge M)^p\, dx + C\,M^p \int_{|x|\ge R} \frac{|V_1(t,x)|}{1+|x|}\, dx,$$
where $m(t) = \big\|\frac{V_2(t,x)}{1+|x|}\big\|_\infty$. Let $D_1(R) = \int_{|x|\ge R} (|u| \wedge M)^p\, dx$ and $D_2(R) = C\,M^p \int_{|x|\ge R} \frac{|V_1(t,x)|}{1+|x|}\, dx$. Then
$$\frac{d}{dt}\int (|u| \wedge M)^p\,\Phi_R\, dx \le C_t \int (|u| \wedge M)^p\,\Phi_R\, dx + m(t)\,D_1(R) + D_2(R).$$
The Gronwall lemma yields
$$\int (|u| \wedge M)^p\,\Phi_R\, dx \le \int_0^t e^{\int_s^t C_u du}\, \big(D_1(R)\,m(s) + D_2(R)\big)\, ds.$$
Notice that $\frac{V_1}{1+|x|} \in L^1([0,T], L^1)$ and $\lim_{R\to+\infty} D_1(R) = \lim_{R\to+\infty} D_2(R) = 0$. Letting $R \to \infty$ in the above inequality yields $\int (|u| \wedge M)^p\, dx = 0$, which implies that $|u| \wedge M = 0$ almost everywhere. Letting $M \uparrow +\infty$ gives the result. $\square$

In what follows, we shall indicate how to reduce $c$ in (2.1) to the case $c = 0$. First, notice that if $u_t$ solves
$$\frac{\partial u_t}{\partial t} + V_t\cdot\nabla u_t = c_t, \tag{2.11}$$
then
$$\frac{d}{dt}\big[u_t(X_t)\big] = \Big(\frac{d}{dt}u_t\Big)(X_t) + (\nabla u_t)(X_t)\cdot\frac{dX_t}{dt} = \Big(\frac{d}{dt}u_t + \nabla u_t\cdot V_t\Big)(X_t) = c_t(X_t),$$
where $X_t$ is the flow associated to $V_t$. It follows that $u_t(X_t) = u_0 + \int_0^t c_s(X_s)\, ds$, or
$$u_t = u_0(X_t^{-1}) + \int_0^t c_s(X_{t-s}^{-1})\, ds. \tag{2.12}$$
So the flow $X_t$ also allows us to solve (2.11) via (2.12). Secondly, let $w_t$ be a solution to
$$\frac{dw_t}{dt} + V_t\cdot\nabla w_t = 0. \tag{2.13}$$
Consider $\widetilde{w}_t = e^{-u_t} w_t$. Then $\frac{\partial\widetilde{w}_t}{\partial t} = e^{-u_t}\frac{\partial w_t}{\partial t} - e^{-u_t} w_t\frac{\partial u_t}{\partial t}$ and $\nabla\widetilde{w}_t = e^{-u_t}\nabla w_t - e^{-u_t} w_t\nabla u_t$. Therefore, according to (2.11) and (2.13),
$$\frac{\partial\widetilde{w}_t}{\partial t} + V_t\cdot\nabla\widetilde{w}_t = -c_t\,\widetilde{w}_t,$$
i.e. $\widetilde{w}_t$ solves (2.1).
To conclude this section, we state the following result.

Theorem 2.8 Under the hypotheses
$$V \in L^1([0,T], W^{1,q}_{loc}), \qquad c,\ \operatorname{div}(V) \in L^1([0,T], L^\infty(\mathbb{R}^d))$$
and
$$\frac{V}{1+|x|} \in L^1([0,T], L^1(\mathbb{R}^d)),$$
the transport equation (2.1) admits a unique solution $u \in L^\infty([0,T], L^p)$ if $u_0 \in L^p$, where $p \in [1,+\infty]$.
3 Ambrosio’s Representation Formula
Denote by
$$\mathcal{M}_+(\mathbb{R}^d) = \{\text{positive regular Borel (Radon) measures on } \mathbb{R}^d\}.$$
A measure $\mu \in \mathcal{M}_+(\mathbb{R}^d)$ is locally finite; the suitable topology on this space is that of vague convergence: $\mu_n \to \mu$ vaguely if $\int f\, d\mu_n \to \int f\, d\mu$ for any $f \in C_c(\mathbb{R}^d)$. Consider now
$$\mathcal{M}^f_+(\mathbb{R}^d) = \{\mu \in \mathcal{M}_+(\mathbb{R}^d) : \mu(\mathbb{R}^d) < +\infty\}.$$
There are two natural topologies on $\mathcal{M}^f_+(\mathbb{R}^d)$:

(i) narrow convergence: $\int f\, d\mu_n \to \int f\, d\mu$ for any $f \in C_b(\mathbb{R}^d)$;

(ii) $w^*$-convergence: $\int f\, d\mu_n \to \int f\, d\mu$ for any $f \in C_0(\mathbb{R}^d)$.

It is known that $\mu_n$ converges narrowly to $\mu$ if and only if $\mu_n$ converges to $\mu$ in the $w^*$ topology and $\mu_n(\mathbb{R}^d) \to \mu(\mathbb{R}^d)$. In particular, if we denote by $\mathcal{P}(\mathbb{R}^d)$ the space of probability measures on $\mathbb{R}^d$, then weak convergence in $\mathcal{P}(\mathbb{R}^d)$ is identical to narrow convergence, as well as to convergence in the $w^*$ topology.
Note that the notion of narrow convergence can be defined on any Polish space $E$, and the following result will be used freely.

Prokhorov Theorem. Let $(\mu_n)_{n\ge 1} \subset \mathcal{M}^f_+(E)$ be such that $\sup_{n\ge 1}\mu_n(E) < +\infty$. Then tightness implies relative compactness with respect to narrow convergence.
Definition 3.1 Let $(t,x) \to V_t(x) \in \mathbb{R}^d$ be a time-dependent Borel vector field. We say that a family of measures $(\mu_t)_{t\in[0,T]}$ on $\mathbb{R}^d$ is a solution to the continuity equation
$$\frac{d\mu_t}{dt} + D_x\cdot(V_t\mu_t) = 0 \quad \text{in } [0,T]\times\mathbb{R}^d, \tag{3.1}$$
if $\int_0^T \big(\int_K |V_t|\, d\mu_t\big)\, dt < +\infty$ for any compact subset $K$ of $\mathbb{R}^d$ and, for any $\varphi \in C_c^\infty(\mathbb{R}^d)$,
$$\frac{d}{dt}\int_{\mathbb{R}^d} \varphi\, d\mu_t = \int_{\mathbb{R}^d} V_t\cdot\nabla\varphi\, d\mu_t. \tag{3.2}$$
Note: $D_x\cdot$ is the formal divergence.
Theorem 3.2 Let $(\mu_t)_{t\in(0,T)}$ be a solution to (3.1) such that $\sup_{t\in[0,T]}\mu_t(\mathbb{R}^d) < +\infty$; then it admits a version $(\tilde\mu_t)_{t\in[0,T]}$ such that $t \to \tilde\mu_t$ is continuous with respect to the $w^*$ topology.
Proof. The left-hand side of (3.2) is taken in the distribution sense, and
$$\Big|\frac{d}{dt}\int_{\mathbb{R}^d} \varphi\, d\mu_t\Big| \le C_\varphi \int_K |V_t|\, d\mu_t \in L^1([0,T]).$$
So $t \to \int\varphi\, d\mu_t$ is in $W^{1,1}([0,T])$. By the discussion in Remark 2.2, it admits a uniformly continuous version: there is a full-measure subset $L_\varphi$ of $(0,T)$ and a uniformly continuous function $\xi_\varphi$ on $[0,T]$ such that $\int\varphi\, d\mu_t = \xi_\varphi(t)$ for $t \in L_\varphi$. Take a countable subset $D \subset C_c^\infty(\mathbb{R}^d)$ which is dense in $C_c(\mathbb{R}^d)$ and set $L = \cap_{\varphi\in D} L_\varphi$. Let $\phi \in C_c(\mathbb{R}^d)$ and $\varepsilon > 0$; there is $\varphi \in D$ such that $\|\varphi - \phi\|_\infty < \varepsilon$. For $s, t \in L$,
$$\Big|\int\phi\, d\mu_t - \int\phi\, d\mu_s\Big| \le \Big|\int\varphi\, d\mu_t - \int\varphi\, d\mu_s\Big| + 2C\varepsilon,$$
where $C$ is a constant dominating all $\mu_t(\mathbb{R}^d)$. Therefore $t \to \int\phi\, d\mu_t$ is uniformly continuous over $L$. Let $\xi_\phi$ be its uniformly continuous extension over $[0,T]$. We have, for $t \in L$,
$$|\xi_\phi(t)| = \Big|\int\phi\, d\mu_t\Big| \le C\,\|\phi\|_\infty.$$
This inequality extends to $[0,T]$. By the Riesz representation theorem, for each $t \in [0,T]$ there is a Borel measure $\tilde\mu_t$ such that $\xi_\phi(t) = \int\phi\, d\tilde\mu_t$. Therefore $(\mu_t)_{t\in[0,T]}$ admits the version $(\tilde\mu_t)_{t\in[0,T]}$, which is continuous with respect to the $w^*$ topology. $\square$
Remark 3.3 Suppose that
$$\int_0^T \int_{\mathbb{R}^d} \frac{|V_t(x)|}{1+|x|}\, d\mu_t(x)\, dt < +\infty; \tag{3.3}$$
then $\mu_t(\mathbb{R}^d) = \mu_0(\mathbb{R}^d)$ for all $t \in [0,T]$.

In fact, consider a function $\varphi \in C_c^\infty(\mathbb{R}^d)$ such that $\varphi \equiv 1$ on $B(1)$ and $\varphi \equiv 0$ on $(B(2))^c$. Define $\varphi_R(x) = \varphi(x/R)$; we have
$$|\nabla\varphi_R(x)| \le \frac{\|\nabla\varphi\|_\infty}{R}\, 1_{R\le|x|\le 2R} \le \frac{4\,\|\nabla\varphi\|_\infty}{1+|x|}\, 1_{|x|\ge R}.$$
Therefore
$$\Big|\frac{d}{dt}\int\varphi_R\, d\mu_t\Big| \le 4\,\|\nabla\varphi\|_\infty \int_{|x|\ge R} \frac{|V_t(x)|}{1+|x|}\, d\mu_t(x),$$
or $\int\varphi_R\, d\mu_t = \int\varphi_R\, d\mu_0 + \varepsilon_R$, where
$$|\varepsilon_R| \le 4\,\|\nabla\varphi\|_\infty \int_0^T \int_{|x|\ge R} \frac{|V_t(x)|}{1+|x|}\, d\mu_t(x)\, dt \to 0 \quad \text{as } R \to +\infty.$$
Then letting $R \to +\infty$ gives the result. $\square$
Remark 3.4 Let $V_t$ be a smooth vector field such that
$$\frac{dX_t}{dt} = V_t(X_t)$$
is well defined over $[0,T]$. For $\mu_0$ a probability measure on $\mathbb{R}^d$, set
$$\mu_t = (X_t)_*\mu_0.$$
Let $\varphi \in C_c^\infty(\mathbb{R}^d)$. Then
$$\frac{d}{dt}\int\varphi\, d\mu_t = \frac{d}{dt}\int\varphi(X_t)\, d\mu_0 = \int\langle\nabla\varphi(X_t), V_t(X_t)\rangle\, d\mu_0 = \int\langle\nabla\varphi, V_t\rangle\, d\mu_t.$$
Therefore $(\mu_t)_{t\ge 0}$ satisfies the continuity equation
$$\frac{\partial\mu_t}{\partial t} + D_x\cdot(V_t\mu_t) = 0 \quad \text{with } \mu|_{t=0} = \mu_0. \qquad \square$$
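The computation in Remark 3.4 can be tested with an empirical measure: push $N$ samples of $\mu_0$ through the flow and compare $\frac{d}{dt}\int\varphi\,d\mu_t$ with $\int\langle\nabla\varphi, V_t\rangle\,d\mu_t$. A sketch under our own choices ($d = 1$, $V(x) = -x$ so that $X_t(x) = e^{-t}x$, standard Gaussian $\mu_0$, test function $\varphi = \cos$):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=2000)            # mu_0 = empirical measure of N points

V = lambda x: -x                           # autonomous field with explicit flow
flow = lambda t, x: np.exp(-t) * x         # X_t(x) = e^{-t} x
phi = lambda x: np.cos(x)                  # test function
dphi = lambda x: -np.sin(x)                # its gradient

def pairing(t):
    """int phi d mu_t with mu_t = (X_t)_* mu_0: the average of phi(X_t(x_i))."""
    return phi(flow(t, samples)).mean()

t0, h = 0.5, 1e-5
lhs = (pairing(t0 + h) - pairing(t0 - h)) / (2 * h)   # d/dt int phi d mu_t
xt = flow(t0, samples)
rhs = (dphi(xt) * V(xt)).mean()                       # int <grad phi, V> d mu_t
assert abs(lhs - rhs) < 1e-6
```

The two sides coincide sample by sample, so the only discrepancy is the finite-difference error; this is exactly the weak formulation (3.2) for the pushforward measure.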
Remark 3.5 If $V \in L^1([0,T], L^q_{loc})$ and $\operatorname{div}(V) \in L^1([0,T], L^\infty)$, and $\mu_t = \rho_t\lambda_d$ with $\rho_t \in L^\infty([0,T], L^p_{loc})$, then (3.1) reduces to the transport equation
$$\frac{\partial\rho_t}{\partial t} + V_t\cdot\nabla\rho_t + \operatorname{div}(V_t)\,\rho_t = 0. \tag{3.4}$$
In the sequel, we shall always consider families of probability measures $(\mu_t)_{t\in[0,T]}$. According to Theorem 3.2, such a solution to the continuity equation (3.1) admits a version $(\tilde\mu_t)_{t\in[0,T]}$ which is weakly continuous.
Now let $W = C([0,T], \mathbb{R}^d)$, $W_x = \{\gamma \in W : \gamma(0) = x\}$, and let $e_t : W \to \mathbb{R}^d$ be the evaluation map $e_t(\gamma) = \gamma(t)$. Given $\eta \in \mathcal{P}(\mathbb{R}^d\times W)$, we define $\mu^\eta_t = (e_t)_*\eta$ for $t \in [0,T]$. Let $\eta_x(d\gamma)$ be the conditional probability measure given $e_0 = x$; then
$$\int_{\mathbb{R}^d\times W} \psi(x,\gamma)\, d\eta(x,\gamma) = \int_{\mathbb{R}^d} \Big(\int_{W_x} \psi(x,\gamma)\, d\eta_x(\gamma)\Big)\, d\mu^\eta_0(x).$$
Theorem 3.6 Let $(\mu_t)_{t\in[0,T]}$ be a solution to (3.1). Assume that $\mu_0$ has finite first moment and
$$\int_0^T \int_{\mathbb{R}^d} \frac{|V_t(x)|^\alpha}{1+|x|}\, d\mu_t(x)\, dt < +\infty \quad \text{for some } \alpha > 1. \tag{3.5}$$
Then there exists $\eta \in \mathcal{P}(\mathbb{R}^d\times W)$ such that $\mu^\eta_t = \mu_t$ for $t \in [0,T]$ and, under $\eta$, it holds that
$$\gamma(t) = x + \int_0^t V_s(\gamma(s))\, ds. \tag{3.6}$$
Proof. The proof consists of several steps.

Step 1 (smoothing). Let $\rho_\varepsilon(x) = (2\pi\varepsilon)^{-\frac{d}{2}} e^{-\frac{|x|^2}{2\varepsilon}}$. Recall that $(\mu_0 * \rho_\varepsilon)(x) = \int_{\mathbb{R}^d} \rho_\varepsilon(x-y)\, d\mu_0(y)$, which is a smooth function. It is easy to check that
$$\sup_{0<\varepsilon\le 1} \Big(\int_{\mathbb{R}^d} |x|\cdot(\mu_0 * \rho_\varepsilon)(x)\, dx\Big) < +\infty. \tag{i}$$
Similarly, $\big((V_t\mu_t) * \rho_\varepsilon\big)(x) = \int_{\mathbb{R}^d} \rho_\varepsilon(x-y)\, V_t(y)\, d\mu_t(y)$, which is a smooth vector field. Define
$$\mu^\varepsilon_t = \mu_t * \rho_\varepsilon, \qquad V^\varepsilon_t = \frac{(V_t\mu_t) * \rho_\varepsilon}{\mu^\varepsilon_t}.$$
Then $\mu^\varepsilon_t \to \mu_t$ weakly as $\varepsilon \downarrow 0$. We have, for $\varphi \in C_c^\infty(\mathbb{R}^d)$,
$$\int \varphi\, D_x\cdot(V^\varepsilon_t\mu^\varepsilon_t)\, dx = -\int \langle\nabla\varphi, V^\varepsilon_t\rangle\, \mu^\varepsilon_t\, dx = -\int \langle\nabla\varphi, (V_t\mu_t) * \rho_\varepsilon\rangle\, dx = \int \varphi\, D_x\cdot[(V_t\mu_t) * \rho_\varepsilon]\, dx.$$
It follows that $D_x\cdot(V^\varepsilon_t\mu^\varepsilon_t) = D_x\cdot[(V_t\mu_t) * \rho_\varepsilon]$ and
$$\frac{d}{dt}\mu^\varepsilon_t + D_x\cdot(V^\varepsilon_t\mu^\varepsilon_t) = \Big(\frac{d}{dt}\mu_t\Big) * \rho_\varepsilon + D_x\cdot(V_t\mu_t) * \rho_\varepsilon = \Big[\frac{d}{dt}\mu_t + D_x\cdot(V_t\mu_t)\Big] * \rho_\varepsilon = 0,$$
or $\mu^\varepsilon_t$ solves the transport equation
$$\frac{\partial\mu^\varepsilon_t}{\partial t} + V^\varepsilon_t\cdot\nabla\mu^\varepsilon_t + \operatorname{div}(V^\varepsilon_t)\,\mu^\varepsilon_t = 0.$$
On the other hand, let $X^\varepsilon(t,\cdot)$ be the solution to
$$\frac{dX^\varepsilon(t,\cdot)}{dt} = V^\varepsilon_t(X^\varepsilon(t,\cdot)).$$
Consider $\nu^\varepsilon_t = (X^\varepsilon_t)_*\mu^\varepsilon_0$. By Remarks 3.4 and 3.5, the density of $\nu^\varepsilon_t$ solves the same transport equation as $\mu^\varepsilon_t$, with the same initial data. By uniqueness, we get $\mu^\varepsilon_t = \nu^\varepsilon_t$ for all $t$. Define
$$\eta^\varepsilon = (I\times X^\varepsilon)_*\mu^\varepsilon_0, \quad \text{where } I\times X^\varepsilon : x \to (x, X^\varepsilon(\cdot,x)) \in \mathbb{R}^d\times W.$$
Then
$$\int_{\mathbb{R}^d\times W} \varphi(\gamma(t))\, d\eta^\varepsilon = \int_{\mathbb{R}^d} \varphi(X^\varepsilon(t,x))\, d\mu^\varepsilon_0 = \int \varphi\, d\mu^\varepsilon_t. \tag{3.7}$$
Step 2 (tightness). Now we are going to prove that the family $\{\eta^\varepsilon : \varepsilon > 0\}$ is tight. Consider the functional $\psi$ on $\mathbb{R}^d\times W$:
$$\psi(x,\gamma) = \begin{cases} |x| + \displaystyle\int_0^T \frac{|\dot\gamma(t)|^\alpha}{1+|\gamma(t)|}\, dt, & \text{if } \gamma \text{ is absolutely continuous;}\\[2mm] +\infty, & \text{otherwise,}\end{cases}$$
where $\alpha > 1$ and $\beta$ denotes its conjugate exponent. We claim that $K_M = \{(x,\gamma) : \gamma(0) = x,\ \psi(x,\gamma) \le M\}$ is a compact subset of $\mathbb{R}^d\times W$. In fact, for $(x,\gamma) \in K_M$, we have $\gamma(t) = x + \int_0^t \dot\gamma(s)\, ds$, hence, by the Hölder inequality,
$$|\gamma(t)| \le |x| + \int_0^t |\dot\gamma(s)|\, ds \le |x| + \Big(\int_0^t \frac{|\dot\gamma(s)|^\alpha}{1+|\gamma(s)|}\, ds\Big)^{\frac{1}{\alpha}} \Big(\int_0^t (1+|\gamma(s)|)^{\frac{\beta}{\alpha}}\, ds\Big)^{\frac{1}{\beta}} \le M + M^{\frac{1}{\alpha}}\, T^{\frac{1}{\beta}}\, (1+\|\gamma\|_\infty)^{\frac{1}{\alpha}},$$
which implies that
$$\|\gamma\|_\infty \le M + M^{\frac{1}{\alpha}}\, T^{\frac{1}{\beta}}\, (1+\|\gamma\|_\infty)^{\frac{1}{\alpha}}.$$
Therefore there exists a constant $C = C(M,\alpha,T)$ such that
$$1 + \|\gamma\|_\infty \le C\,(1+\|\gamma\|_\infty)^{\frac{1}{\alpha}},$$
or $1 + \|\gamma\|_\infty \le C^{\frac{\alpha}{\alpha-1}}$. This means that $\{\gamma : (\gamma(0),\gamma) \in K_M\}$ is uniformly bounded. Now
$$|\gamma(t) - \gamma(s)| \le \Big(\int_s^t \frac{|\dot\gamma(u)|^\alpha}{1+|\gamma(u)|}\, du\Big)^{\frac{1}{\alpha}} \Big(\int_s^t (1+|\gamma(u)|)^{\frac{\beta}{\alpha}}\, du\Big)^{\frac{1}{\beta}} \le C\,|t-s|^{\frac{1}{\beta}},$$
where $C > 0$ is a constant. By the Ascoli theorem, we see that $K_M$ is compact.

Now
$$\int_{\mathbb{R}^d\times W} \psi(x,\gamma)\, d\eta^\varepsilon(x,\gamma) = \int |x|\, d\mu^\varepsilon_0 + \int_0^T \int_{\mathbb{R}^d} \frac{|\dot X^\varepsilon(t,x)|^\alpha}{1+|X^\varepsilon(t,x)|}\, d\mu^\varepsilon_0(x)\, dt.$$
The second term on the right-hand side is equal to
$$\int_0^T \int_{\mathbb{R}^d} \frac{|V^\varepsilon_t(X^\varepsilon(t,x))|^\alpha}{1+|X^\varepsilon(t,x)|}\, d\mu^\varepsilon_0(x)\, dt = \int_0^T \int_{\mathbb{R}^d} \frac{|V^\varepsilon_t(x)|^\alpha}{1+|x|}\, d\mu^\varepsilon_t(x)\, dt.$$
Note that, by the Jensen inequality,
$$|V^\varepsilon_t(x)|^\alpha \le \Big(\frac{\int \rho_\varepsilon(x-y)\,|V_t(y)|\, d\mu_t(y)}{\int \rho_\varepsilon(x-y)\, d\mu_t(y)}\Big)^\alpha \le \frac{\int \rho_\varepsilon(x-y)\,|V_t(y)|^\alpha\, d\mu_t(y)}{\mu^\varepsilon_t(x)},$$
so that the above quantity is dominated by
$$\int_0^T \int_{\mathbb{R}^d\times\mathbb{R}^d} \frac{|V_t(y)|^\alpha\,\rho_\varepsilon(x-y)}{1+|x|}\, d\mu_t(y)\, dx\, dt = \int_0^T \int_{\mathbb{R}^d} \Big(\int_{\mathbb{R}^d} \frac{\rho_\varepsilon(x-y)}{1+|x|}\, dx\Big)\, |V_t(y)|^\alpha\, d\mu_t(y)\, dt \to \int_0^T \int_{\mathbb{R}^d} \frac{|V_t(y)|^\alpha}{1+|y|}\, d\mu_t(y)\, dt < +\infty \quad \text{as } \varepsilon \downarrow 0.$$
Therefore
$$\sup_{0<\varepsilon\le 1} \int \psi(x,\gamma)\, d\eta^\varepsilon(x,\gamma) < +\infty,$$
which implies that $\{\eta^\varepsilon : \varepsilon > 0\}$ is tight. Now let $\eta$ be a limit point. By (3.7), we have
$$\int_{\mathbb{R}^d\times W} \varphi(\gamma(t))\, d\eta = \int \varphi\, d\mu_t. \tag{3.8}$$
Final step. The condition (3.5) implies that $\int_0^T \int_{\mathbb{R}^d} \frac{|V_t(x)|}{1+|x|}\, d\mu_t(x)\, dt < +\infty$. Fix $\varepsilon_0 \in (0, 1/2)$ and introduce the measure $\nu$ on $[0,T]\times\mathbb{R}^d$ by
$$\int \varphi(t,x)\, d\nu(t,x) = \int_0^T \int_{\mathbb{R}^d} \frac{\varphi(t,x)}{1+|x|-\varepsilon_0}\, d\mu_t(x)\, dt.$$
Note that $\nu$ is not a probability measure, but it is finite. It is clear that $V \in L^1([0,T]\times\mathbb{R}^d, \nu)$. Pick a sequence $(C^n)_{n\ge 1}$ of continuous functions with compact support, converging to $V$ in $L^1([0,T]\times\mathbb{R}^d, \nu)$. We have, for such a $C^n$ and $t \in [0,T]$,
$$\int_{\mathbb{R}^d\times W} \frac{|\gamma(t) - x - \int_0^t C^n_s(\gamma(s))\, ds|}{1+\|\gamma\|_\infty}\, d\eta^\varepsilon = \int_{\mathbb{R}^d} \frac{|X^\varepsilon(t,x) - x - \int_0^t C^n_s(X^\varepsilon(s,x))\, ds|}{1+\|X^\varepsilon(\cdot,x)\|_\infty}\, d\mu^\varepsilon_0(x) = \int_{\mathbb{R}^d} \frac{\big|\int_0^t \big(V^\varepsilon_s(X^\varepsilon(s,x)) - C^n_s(X^\varepsilon(s,x))\big)\, ds\big|}{1+\|X^\varepsilon(\cdot,x)\|_\infty}\, d\mu^\varepsilon_0(x) \le \int_0^t \int_{\mathbb{R}^d} \frac{|V^\varepsilon_s - C^n_s|}{1+|x|}\, d\mu^\varepsilon_s\, ds \le \int_0^t \int_{\mathbb{R}^d} \frac{|V^\varepsilon_s - C^{n,\varepsilon}_s|}{1+|x|}\, d\mu^\varepsilon_s\, ds + \int_0^t \int_{\mathbb{R}^d} \frac{|C^{n,\varepsilon}_s - C^n_s|}{1+|x|}\, d\mu^\varepsilon_s\, ds,$$
where $C^{n,\varepsilon}_s$ is defined in the same way as $V^\varepsilon_s$. Since $C^{n,\varepsilon}_s \to C^n_s$ uniformly as $\varepsilon \downarrow 0$,
$$\lim_{\varepsilon\to 0} \int_0^t \int_{\mathbb{R}^d} \frac{|C^{n,\varepsilon}_s - C^n_s|}{1+|x|}\, d\mu^\varepsilon_s\, ds = 0.$$
Remark that
$$\int_0^t\!\!\int_{\mathbb R^d}\frac{|V^\varepsilon_s-C^{n,\varepsilon}_s|}{1+|x|}\,d\mu^\varepsilon_s\,ds\le\int_0^t\!\!\int_{\mathbb R^d\times\mathbb R^d}\frac{|V_s(y)-C^n_s(y)|}{1+|x|}\,\rho_\varepsilon(x-y)\,d\mu_s(y)\,dx\,ds,$$
and for $\varepsilon\le\varepsilon_0$,
$$\int_{\mathbb R^d}\frac{\rho_\varepsilon(x-y)}{1+|x|}\,dx=\int_{\mathbb R^d}\frac{\rho_\varepsilon(x)}{1+|x+y|}\,dx\le\frac1{1+|y|-\varepsilon_0}.$$
Finally
$$\int_{\mathbb R^d\times W}\frac{\big|\gamma(t)-x-\int_0^t C^n_s(\gamma(s))\,ds\big|}{1+\|\gamma\|_\infty}\,d\eta^\varepsilon\le\|V-C^n\|_{L^1(\nu)}+r_\varepsilon,$$
with $r_\varepsilon\to0$ as $\varepsilon\to0$. Letting $\varepsilon\to0$ in the above inequality, we get
$$\int_{\mathbb R^d\times W}\frac{\big|\gamma(t)-x-\int_0^t C^n_s(\gamma(s))\,ds\big|}{1+\|\gamma\|_\infty}\,d\eta\le\|V-C^n\|_{L^1(\nu)}.$$
We have
$$\int_{\mathbb R^d\times W}\frac{\big|\gamma(t)-x-\int_0^t V_s(\gamma(s))\,ds\big|}{1+\|\gamma\|_\infty}\,d\eta
\le\int_0^t\!\!\int_{\mathbb R^d\times W}\frac{|V_s(\gamma(s))-C^n_s(\gamma(s))|}{1+|\gamma(s)|}\,d\eta\,ds+\int_0^t\!\!\int_{\mathbb R^d}\frac{|V_s-C^n_s|}{1+|x|}\,d\mu_s\,ds,$$
which is less than $2\|V-C^n\|_{L^1(\nu)}$. Now letting $n\to+\infty$, we obtain
$$\int_{\mathbb R^d\times W}\frac{\big|\gamma(t)-x-\int_0^t V_s(\gamma(s))\,ds\big|}{1+\|\gamma\|_\infty}\,d\eta=0.$$
The result follows. □
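The mollified velocity field used throughout this proof, $V^\varepsilon_t(x)=\int\rho_\varepsilon(x-y)V_t(y)\,d\mu_t(y)\big/\!\int\rho_\varepsilon(x-y)\,d\mu_t(y)$, is easy to compute when $\mu_t$ is an empirical measure. A minimal numerical sketch (all names hypothetical; a Gaussian kernel stands in for $\rho_\varepsilon$):

```python
import numpy as np

def mollified_field(x, particles, velocities, eps):
    """V_eps(x) = sum_i rho_eps(x - y_i) V(y_i) / sum_i rho_eps(x - y_i):
    the mollification of V mu_t divided by that of mu_t, for the
    empirical measure mu_t = (1/N) sum_i delta_{y_i}."""
    w = np.exp(-((x - particles) ** 2) / (2 * eps ** 2))  # Gaussian kernel rho_eps
    return np.sum(w * velocities) / np.sum(w)

# Example: particles drawn from mu_t = N(0,1), velocity field V(y) = -y.
rng = np.random.default_rng(0)
ys = rng.normal(size=2000)
vs = -ys
# As eps -> 0, V_eps(x) recovers V(x) wherever mu_t has mass.
print(mollified_field(0.5, ys, vs, 0.05))   # close to V(0.5) = -0.5
```

The same ratio-of-convolutions structure appears verbatim in the estimate on $|V^\varepsilon_t(x)|^\alpha$ above.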
Theorem 3.7 (Ambrosio) The result of Theorem 3.6 remains valid under the condition (3.3):
$$\int_0^T\!\!\int_{\mathbb R^d}\frac{|V_t(x)|}{1+|x|}\,d\mu_t(x)\,dt<+\infty.$$

Proof. The technical hypothesis in the above theorem has been removed in [1]. □
Theorem 3.8 Let $V$ be a vector field satisfying the conditions:
$$V\in L^1([0,T],W^{1,\alpha}_{loc}),\quad \operatorname{div}(V)\in L^1([0,T],L^\infty),\quad \frac{|V|}{1+|x|}\in L^1([0,T],L^1).$$
Then there exists a unique flow of measurable maps $X_t:\mathbb R^d\to\mathbb R^d$ such that for a.e. $x\in\mathbb R^d$,
$$X_t(x)=x+\int_0^t V_s(X_s(x))\,ds\quad\text{and}\quad(X_t)_*\lambda_d=k_t(x)\,\lambda_d.$$
Moreover
$$e^{-\int_0^T\|\operatorname{div}(V_s)\|_\infty ds}\le k_t(x)\le e^{\int_0^T\|\operatorname{div}(V_s)\|_\infty ds}.$$
Proof. Take $\mu_0=\gamma_d\,\lambda_d$ the standard Gaussian measure; $\gamma_d\in L^1\cap L^\infty$. By Theorem 2.8, there exists a unique $u\in L^\infty([0,T],L^1\cap L^\infty)$ which solves the transport equation
$$\frac{\partial u_t}{\partial t}+V_t\cdot\nabla u_t+\operatorname{div}(V_t)\,u_t=0.$$
Then $\mu_t:=u_t\,\lambda_d$ solves the continuity equation (3.1). It is clear that condition (3.3) holds. By Theorem 3.6, there is a probability measure $\eta$ on $\mathbb R^d\times W$ under which
$$\gamma_t=x+\int_0^t V_s(\gamma_s)\,ds.$$
To complete the proof, we need the following lemma.
Lemma 3.9 Let $\eta\in\mathcal M_+(\mathbb R^d\times W)$ be a probability measure such that
$$\gamma(t)=x+\int_0^t V_s(\gamma(s))\,ds\quad\text{holds }\eta\text{-a.s.}$$
Denote by $e_t:\mathbb R^d\times W\to\mathbb R^d$ the evaluation map $e_t(x,\gamma)=\gamma(t)$ and $\mu^\eta_t=(e_t)_*\eta$. Suppose that
$$\int_0^T\!\!\int_{\mathbb R^d}\frac{|V_t(x)|}{1+|x|}\,d\mu^\eta_t\,dt<+\infty.$$
Then $\mu^\eta_t$ satisfies the continuity equation
$$\frac{\partial\mu_t}{\partial t}+D_x\cdot(V_t\mu_t)=0.$$
Proof. We claim that for $\eta$-a.s. $\gamma$, $\int_0^T|V_s(\gamma(s))|\,ds<+\infty$; therefore $t\to\gamma(t)$ is absolutely continuous. In fact, the above condition can be rewritten in the form
$$\int_0^T\mathbb E^\eta\Big(\frac{|V_t(\gamma(t))|}{1+|\gamma(t)|}\Big)dt<+\infty.$$
It follows that for $\eta$-a.s. $\gamma$, $\int_0^T\frac{|V_t(\gamma(t))|}{1+|\gamma(t)|}\,dt<+\infty$. For $R>0$, consider the subset $A_R=\{(x,\gamma):\|\gamma\|_\infty\le R\}$. Then $\mathbb R^d\times W=\cup_{R>0}A_R$. For $(x,\gamma)\in A_R$, we have
$$\int_0^T|V_t(\gamma(t))|\,dt\le(R+1)\int_0^T\frac{|V_t(\gamma(t))|}{1+|\gamma(t)|}\,dt<+\infty.$$
Now $\gamma(t)=x+\int_0^t V_s(\gamma(s))\,ds$ yields the absolute continuity of $\gamma$. Let $\varphi\in C^\infty_c$; then
$$\frac{d}{dt}\langle\mu^\eta_t,\varphi\rangle=\int_{\mathbb R^d\times W}\langle\nabla\varphi(\gamma(t)),V_t(\gamma(t))\rangle\,d\eta=\langle\nabla\varphi\cdot V_t,\mu^\eta_t\rangle.$$
The proof of the lemma is complete. □

End of the proof of Theorem 3.8. Let $\eta_x$ be the conditional probability law of $\eta$, given $\gamma(0)=x$. Note that for given $x\in\mathbb R^d$, $\eta_x$ is concentrated on the subset of those $\gamma$ such that $\gamma(t)=x+\int_0^t V_s(\gamma(s))\,ds$. We shall prove that $\eta$ is supported by a graph: there exists a measurable map $X:\mathbb R^d\to W$ such that $\eta=(I,X)_*\mu_0$. If $\eta$ is not supported by a graph, then there exists $C\subset\mathbb R^d$ with $\mu_0(C)>0$ such that for each $x\in C$, $\eta_x$ is not a Dirac mass on $W_x$. Then there exist $t_0\in[0,T]$ and two disjoint Borel subsets $E,E'\subset\mathbb R^d$ such that
$$\eta_x\big(e_{t_0}^{-1}(E)\big)\cdot\eta_x\big(e_{t_0}^{-1}(E')\big)>0\quad\text{for }x\in C.$$
Note that we can choose $C$ such that $\eta_x\big(e_{t_0}^{-1}(E)\big)$ and $\eta_x\big(e_{t_0}^{-1}(E')\big)$ are bounded below by a positive constant $\varepsilon_0>0$ for each $x\in C$. Define
$$\eta^1_x=1_C(x)\,\frac{\eta_x\big(1_{e_{t_0}^{-1}(E)\cap\,\cdot}\big)}{\eta_x\big(e_{t_0}^{-1}(E)\big)},\qquad
\eta^2_x=1_C(x)\,\frac{\eta_x\big(1_{e_{t_0}^{-1}(E')\cap\,\cdot}\big)}{\eta_x\big(e_{t_0}^{-1}(E')\big)}.$$
Then $\eta^1_x,\eta^2_x$ are concentrated as $\eta_x$ for $x\in C$, and we can check that the conditions of Lemma 3.9 are satisfied. Therefore $\mu^{\eta^1}_t,\mu^{\eta^2}_t$ satisfy the continuity equation with the same initial measure $\mu^C_0=\mu_0(1_C\,\cdot)$. Since $\eta^1_x$ and $\eta^2_x$ are absolutely continuous with respect to $\eta_x$, it is easy to see that $\mu^{\eta^1}_t,\mu^{\eta^2}_t$ admit densities $k^1_t$ and $k^2_t$ in $L^\infty([0,T],L^1\cap L^\infty)$. But in this class uniqueness holds, due to Theorem 2.8. Then $\mu^{\eta^1}_t=\mu^{\eta^2}_t$, which is impossible. Therefore $\eta$ is supported by the graph of a measurable map $X:\mathbb R^d\to W$; then
$$(X_t)_*\mu_0=\mu_t=u_t\,\lambda_d.$$
Let $A\subset\mathbb R^d$ be such that $\lambda_d(A)=0$; then $(X_t)_*\mu_0(A)=0$, i.e. $\int1_A(X_t)\,d\mu_0=0$, which implies that $1_A(X_t)=0$ a.e.; therefore $\int1_A(X_t)\,dx=0$. In other words, $(X_t)_*\lambda_d$ admits a density $k_t$ with respect to $\lambda_d$. The estimates for $k_t$ are similar to (1.2.9). □
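The density bounds of Theorem 3.8 can be seen concretely in a hypothetical one-dimensional example: for $V(x)=-x$ one has $\operatorname{div}V=-1$, the exact flow is $X_t(x)=xe^{-t}$, and $(X_t)_*\lambda$ has constant density $k_t=e^{t}$, which indeed sits inside $[e^{-T},e^{T}]$. A minimal numerical sketch:

```python
import numpy as np

# Hypothetical 1-d check of the density bounds of Theorem 3.8.
# V(x) = -x, so div V = -1 and the exact flow is X_t(x) = x e^{-t};
# the image measure (X_t)_* lambda has constant density k_t = e^{t},
# between exp(-int_0^T ||div V_s||_inf ds) = e^{-T} and e^{T}.
V = lambda x: -x
T, n_steps = 1.0, 10000
dt = T / n_steps

xs = np.linspace(-1.0, 1.0, 201)     # grid of initial conditions
flow = xs.copy()
for _ in range(n_steps):             # explicit Euler for dX/dt = V(X)
    flow = flow + dt * V(flow)

# change of variables: density of (X_t)_* lambda at X_t(x) is 1/|dX_t/dx|
jac = np.gradient(flow, xs)
k = 1.0 / np.abs(jac)
print(k.min(), k.max())  # both close to e^T, inside the bounds up to Euler error
```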
4 Stochastic Differential Equations: Strong Solutions

Let $\sigma:\mathbb R^d\to M_{d,m}(\mathbb R)$ be a continuous function, taking values in the space of $d\times m$ matrices, and $V:\mathbb R^d\to\mathbb R^d$ a continuous function. Consider the following stochastic differential equation (abbreviated SDE) on $\mathbb R^d$:
$$dX_t(\omega)=\sigma(X_t(\omega))\,dB_t(\omega)+V(X_t(\omega))\,dt,\qquad X_0(\omega)=x\in\mathbb R^d,\tag{4.1}$$
where $t\to B_t(\omega)$ is an $\mathbb R^m$-valued standard Brownian motion. The situation is now quite different, since the Brownian motion is not unique (in the sense of paths), and the probability space $(\Omega,\mathcal F_t,\mathbb P)$ is not unique. How should (4.1) be understood?
Definition 4.1 (Strong solution) The SDE (4.1) has a strong solution if, for any given Brownian motion $(B_t)_{t\ge0}$ defined on a probability space $(\Omega,\mathbb P)$, there exists an $(\mathcal F^B_t)_{t\ge0}$-adapted process $(X_t(\omega))_{0\le t<\tau_x}$ such that a.s.
$$X_t(\omega)=x+\int_0^t\sigma(X_s(\omega))\,dB_s+\int_0^t V(X_s(\omega))\,ds,\qquad t<\tau_x(\omega),\tag{4.2}$$
where $\tau_x:\Omega\to[0,+\infty]$ is the lifetime, an $\mathcal F^B_\cdot$-stopping time.

Recall that $\mathcal F^B_t$ is the $\sigma$-field in $\Omega$ generated by $\{B_s(\cdot):s\le t\}$; $\tau_x$ is an $\mathcal F^B_\cdot$-stopping time in the sense that
$$\{\omega:\tau_x(\omega)\le t\}\in\mathcal F^B_t\quad\text{for all }t\ge0.$$
If $\tau_x(\omega)$ is finite, then
$$\lim_{t\uparrow\tau_x(\omega)}|X_t(\omega)|=+\infty.$$
Uniqueness of strong solutions to (4.1) is understood as "pathwise uniqueness": if two solutions $(X_t(\omega))_{0\le t<\tau^1_x}$ and $(Y_t(\omega))_{0\le t<\tau^2_x}$ to (4.1) satisfy $X_0=Y_0$, then $\tau^1_x=\tau^2_x$ and
$$\mathbb P\big(\omega:X_t(\omega)=Y_t(\omega)\ \text{for all }t<\tau^1_x(\omega)\big)=1.$$
The existence of strong solutions is very important in probabilistic modelling.
Definition 4.2 (Weak solution) For given coefficients $(\sigma,V)$, a weak solution consists of a filtered probability space $(\Omega,\mathcal F,\mathcal F_t,\mathbb P)$ satisfying the usual conditions, on which are defined an $\mathcal F_t$-compatible Brownian motion $(B_t)_{t\ge0}$ and an $\mathcal F_t$-adapted process $(X_t)_{0<t<\tau_x}$ such that
$$X_t(\omega)=X_0(\omega)+\int_0^t\sigma(X_s(\omega))\,dB_s(\omega)+\int_0^t V(X_s(\omega))\,ds,\qquad t<\tau_x(\omega).\tag{4.3}$$

Here are some explanations of Definition 4.2.
(i) The usual conditions mean that the sub-$\sigma$-fields $\mathcal F_t$ are completed by the null sets in $\mathcal F$ and
$$\mathcal F_t=\mathcal F_{t+}=\bigcap_{\varepsilon>0}\mathcal F_{t+\varepsilon}.$$
(ii) $(B_t)_{t\ge0}$ compatible with $(\mathcal F_t)_{t\ge0}$ means that $\mathcal F^B_t\subset\mathcal F_t$ and, for any $s<t$, $B_t-B_s$ is independent of $\mathcal F_s$. Equivalently, $\mathcal F^B_t\subset\mathcal F_t$ and
$$\mathbb E\big(e^{i\langle B_t-B_s,\xi\rangle}\,\big|\,\mathcal F_s\big)=e^{-(t-s)|\xi|^2/2},\qquad s<t.\tag{4.4}$$
(iii) $X_t$ is measurable with respect to $\mathcal F_t$, but not necessarily with respect to $\mathcal F^B_t$. If this latter situation happens, the weak solution becomes a strong one.
The suitable notion of uniqueness for weak solutions is "uniqueness in law". For simplicity of exposition, we consider the case $\tau_x=+\infty$. Then for given $\omega\in\Omega$, $t\to X_t(\omega)$ is in $C([0,+\infty),\mathbb R^d)$. The law of the solution is the law of $\omega\to X_\cdot(\omega)$ on $C([0,+\infty),\mathbb R^d)$. Uniqueness in law means that if $\tilde X$ is another solution, defined on another probability space $(\tilde\Omega,\tilde{\mathcal F},\tilde{\mathcal F}_t,\tilde{\mathbb P})$, then
$$\mathrm{law}(X_\cdot)=\mathrm{law}(\tilde X_\cdot).$$
Construction of strong solutions:
First case. We suppose that the coefficients satisfy the global Lipschitz conditions:
$$\|\sigma(x)-\sigma(y)\|\le C|x-y|,\quad|V(x)-V(y)|\le C|x-y|,\qquad\forall\,x,y\in\mathbb R^d,\tag{4.5}$$
where $\|\cdot\|$ denotes the Hilbert–Schmidt norm of matrices: $\|\sigma\|^2=\sum_{ij}(\sigma_{ij})^2$. In this case the Picard iteration does work. More precisely, let $x\in\mathbb R^d$ be given. Define
$$X_0(t,x)=x,\qquad X_n(t,x)=x+\int_0^t\sigma(X_{n-1}(s,x))\,dB_s+\int_0^t V(X_{n-1}(s,x))\,ds.$$
The condition (4.5) implies that the coefficients have linear growth:
$$\|\sigma(x)\|\le C_1(1+|x|),\quad|V(x)|\le C_1(1+|x|).\tag{4.6}$$
We have
$$|X_n(t)|^2\le3\Big(|x|^2+\Big|\int_0^t\sigma(X_{n-1}(s))\,dB_s\Big|^2+\Big|\int_0^t V(X_{n-1}(s))\,ds\Big|^2\Big).\tag{4.7}$$
Let $T>0$. By Doob's maximal inequality,
$$\mathbb E\Big(\sup_{0\le t\le T}\Big|\int_0^t\sigma(X_{n-1}(s))\,dB_s\Big|^2\Big)\le4\,\mathbb E\int_0^T\|\sigma(X_{n-1}(s))\|^2\,ds\le8C_1^2\int_0^T\big(1+\mathbb E(|X_{n-1}(s)|^2)\big)\,ds,$$
$$\mathbb E\Big(\sup_{0\le t\le T}\Big|\int_0^t V(X_{n-1}(s))\,ds\Big|^2\Big)\le2C_1^2T\int_0^T\big(1+\mathbb E(|X_{n-1}(s)|^2)\big)\,ds.$$
According to (4.7) and by induction, we see that
$$\mathbb E\Big(\sup_{0\le t\le T}|X_n(t)|^2\Big)<+\infty.\tag{4.8}$$
Now we compute
$$X_{n+1}(t)-X_n(t)=\int_0^t\big(\sigma(X_n(s))-\sigma(X_{n-1}(s))\big)\,dB_s+\int_0^t\big(V(X_n(s))-V(X_{n-1}(s))\big)\,ds=:I_1(t)+I_2(t).$$
Again by Doob's maximal martingale inequality,
$$\mathbb E\Big(\sup_{0\le s\le t}|I_1(s)|^2\Big)\le4\,\mathbb E\int_0^t\|\sigma(X_n(s))-\sigma(X_{n-1}(s))\|^2\,ds\le4C^2\int_0^t\mathbb E\big(|X_n(s)-X_{n-1}(s)|^2\big)\,ds.$$
In the same way,
$$\mathbb E\Big(\sup_{0\le s\le t}|I_2(s)|^2\Big)\le TC^2\int_0^t\mathbb E\big(|X_n(s)-X_{n-1}(s)|^2\big)\,ds.$$
Therefore
$$\mathbb E\Big(\sup_{0\le s\le t}|X_{n+1}(s)-X_n(s)|^2\Big)\le2C^2(T+4)\int_0^t\mathbb E\Big(\sup_{0\le u\le s}|X_n(u)-X_{n-1}(u)|^2\Big)\,ds.$$
By induction,
$$\mathbb E\Big(\sup_{0\le s\le T}|X_{n+1}(s)-X_n(s)|^2\Big)\le K\,\frac{\big(2C^2(T+4)\big)^n}{n!}\,T^n.$$
Hence, by the Chebyshev inequality,
$$\mathbb P\Big(\sup_{0\le t\le T}|X_{n+1}(t)-X_n(t)|>2^{-n}\Big)\le K\,\frac{\big(8C^2(T+4)T\big)^n}{n!}.$$
By the Borel–Cantelli lemma, a.s.
$$\sup_{0\le t\le T}|X_{n+1}(t)-X_n(t)|\le2^{-n}\quad\text{for }n\text{ large enough}.$$
It follows that the series
$$X_t(x,\omega):=x+\sum_{n=0}^{+\infty}\big(X_{n+1}(t)-X_n(t)\big)$$
converges uniformly with respect to $t\in[0,T]$. Now remark that
$$\mathbb E\Big|\int_0^t\sigma(X_n(s))\,dB_s-\int_0^t\sigma(X(s))\,dB_s\Big|^2\le C^2\,\mathbb E\Big(\int_0^t|X_n(s)-X_s|^2\,ds\Big)\to0\quad\text{as }n\to+\infty.$$
So letting $n\to+\infty$ in
$$X_{n+1}(t)=x+\int_0^t\sigma(X_n(s))\,dB_s+\int_0^t V(X_n(s))\,ds,$$
we get
$$X_t(x)=x+\int_0^t\sigma(X_s(x))\,dB_s+\int_0^t V(X_s(x))\,ds.\qquad\square$$

A comment on the lifetime $\tau_x$. By the above construction, we see that for each $x\in\mathbb R^d$, $\tau_x=+\infty$ a.s. More precisely, let
$$A=\{(x,\omega)\in\mathbb R^d\times\Omega:\tau_x(\omega)=+\infty\}.$$
Let $\nu$ be a Borel probability measure on $\mathbb R^d$; then
$$(\nu\otimes\mathbb P)(A)=1.$$
By the Fubini theorem,
$$\int_\Omega\Big[\int_{\mathbb R^d}1_A(x,\omega)\,d\nu(x)\Big]d\mathbb P(\omega)=1.$$
It follows that there exists $\Omega_0\subset\Omega$ of full probability such that for each $\omega\in\Omega_0$, $\tau_x(\omega)=+\infty$ for $\nu$-a.e. $x$. By a limit procedure, we see that $(x,\omega)\to X_t(x,\omega)$ is measurable with respect to $\overline{\mathcal B(\mathbb R^d)\otimes\mathcal F_t}^{\,\nu\otimes\mathbb P}$.
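The Picard scheme is easy to run numerically once the Brownian path is fixed and the stochastic integrals are discretized in the Itô (left-point) way. A minimal sketch, with hypothetical globally Lipschitz coefficients; the sup-distance between successive iterates shrinks rapidly, mirroring the factorial bound above:

```python
import numpy as np

# Picard iteration for dX = sigma(X) dB + V(X) dt on one fixed Brownian path.
# sigma, V are hypothetical globally Lipschitz coefficients.
sigma = lambda x: 0.25 * np.cos(x)
V = lambda x: 0.5 * np.sin(x)

rng = np.random.default_rng(0)
T, n = 0.5, 4000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments, frozen once

x0 = 0.3
X = np.full(n + 1, x0)                 # X_0(t) = x: the 0-th iterate
gaps = []
for _ in range(10):
    # X_{k+1}(t) = x + int_0^t sigma(X_k) dB + int_0^t V(X_k) ds
    incr = sigma(X[:-1]) * dB + V(X[:-1]) * dt
    X_new = x0 + np.concatenate(([0.0], np.cumsum(incr)))
    gaps.append(np.max(np.abs(X_new - X)))
    X = X_new

print(gaps)    # sup-norm distances between successive Picard iterates
```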
A new definition of strong solution. Let $W(\mathbb R^d)=C([0,+\infty),\mathbb R^d)$ and $W_0(\mathbb R^d)=\{w\in W(\mathbb R^d):w(0)=0\}$. Let $\mu_W$ be the law of Brownian motion on $W_0(\mathbb R^m)$. We say that the SDE (4.1) has a strong solution if there exists $F:\mathbb R^d\times W_0(\mathbb R^m)\to W(\mathbb R^d)$ such that
(i) for any Borel probability measure $\nu$ on $\mathbb R^d$, $F$ is measurable with respect to $\overline{\mathcal B(\mathbb R^d)\otimes\mathcal B(W_0(\mathbb R^m))}^{\,\nu\otimes\mu_W}$;
(ii) for any given $t,x$, $w\to F(x,w)(t)$ is $\mathcal F^W_t$-measurable, where $\mathcal F^W_t=\sigma(w(s):s\le t)$, and $X_\cdot(\omega)=F(X_0(\omega),B)$ is a strong solution to (4.1) in the sense of Definition 4.1, where $X_0$ is a random variable independent of $B$.
Second case. We now give another example for which the strong solution exists. Suppose that $\sigma$ and $V$ are bounded and
$$\|\sigma(x)-\sigma(y)\|^2\le C|x-y|^2\log\frac1{|x-y|},\qquad|V(x)-V(y)|\le C|x-y|\log\frac1{|x-y|}\tag{4.9}$$
for any $|x-y|\le\delta_0$. In this case the Euler approximation does work. For $n\ge1$, denote
$$\Phi_n(t)=k2^{-n}\quad\text{for }t\in[k2^{-n},(k+1)2^{-n}),\ k\ge0.$$
Define $X_n(0)=x$ and, for $t\in[k2^{-n},(k+1)2^{-n})$,
$$X_n(t)=X_n(k2^{-n})+\sigma(X_n(k2^{-n}))(B_t-B_{k2^{-n}})+V(X_n(k2^{-n}))(t-k2^{-n})\in\mathcal F^B_t.$$
Using $\Phi_n$, $X_n(t)$ can be expressed as
$$X_n(t)=x+\int_0^t\sigma\big(X_n(\Phi_n(s))\big)\,dB_s+\int_0^t V\big(X_n(\Phi_n(s))\big)\,ds.$$
Then one can prove that a.s. $X_n(t)$ converges to $X_t$ uniformly with respect to $t\in[0,T]$. Therefore, in this case, the strong solution exists. For a proof, we refer to [4]. □
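The Euler scheme above is simple to implement: the coefficients are frozen at the last dyadic grid point $\Phi_n(s)$. A sketch with hypothetical bounded coefficients, comparing two dyadic levels driven by the same Brownian increments, so that the successive approximations can be seen to get uniformly close on one path:

```python
import numpy as np

def euler(x0, sigma, V, dB_fine, dt_fine, step):
    """Euler scheme X_n on the dyadic grid of mesh step*dt_fine: the
    coefficients are evaluated at X_n(Phi_n(s)), i.e. the last grid point."""
    X = [x0]
    x = x0
    for k in range(0, len(dB_fine), step):
        inc_B = dB_fine[k:k + step].sum()      # Brownian increment over one cell
        x = x + sigma(x) * inc_B + V(x) * step * dt_fine
        X.append(x)
    return np.array(X)

# Hypothetical bounded (here even Lipschitz) coefficients.
sigma = lambda x: 1.0 / (1.0 + x * x)
V = lambda x: np.sin(x)

rng = np.random.default_rng(1)
T, n_fine = 1.0, 2 ** 12
dt = T / n_fine
dB = rng.normal(0.0, np.sqrt(dt), n_fine)

coarse = euler(0.0, sigma, V, dB, dt, 2 ** 4)   # mesh 2^-8
fine   = euler(0.0, sigma, V, dB, dt, 1)        # mesh 2^-12
# compare at the coarse grid points
print(np.max(np.abs(coarse - fine[::2 ** 4])))
```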
Theorem 4.3 Under (4.9), pathwise uniqueness holds.

Proof. Let $X_t$ and $Y_t$ be two solutions to (4.1) starting from the same point. Set $\eta_t=X_t-Y_t$ and $\xi_t=|\eta_t|^2$. We have
$$d\eta_t=(\sigma(X_t)-\sigma(Y_t))\,dB_t+(V(X_t)-V(Y_t))\,dt.$$
In coordinate form, for $i=1,\cdots,d$,
$$d\eta^i_t=\sum_{j=1}^m(\sigma_{ij}(X_t)-\sigma_{ij}(Y_t))\,dB^j_t+(V^i(X_t)-V^i(Y_t))\,dt.$$
The Itô stochastic contraction of $d\eta_t$ is given by
$$\sum_{i=1}^d d\eta^i_t\cdot d\eta^i_t=\sum_{i,j}(\sigma_{ij}(X_t)-\sigma_{ij}(Y_t))^2\,dt=\|\sigma(X_t)-\sigma(Y_t)\|^2\,dt.$$
Now by the Itô formula,
$$d\xi_t=2\langle\eta_t,d\eta_t\rangle+d\eta_t\cdot d\eta_t
=2\langle(\sigma(X_t)-\sigma(Y_t))^*\eta_t,dB_t\rangle+2\langle\eta_t,V(X_t)-V(Y_t)\rangle\,dt+\|\sigma(X_t)-\sigma(Y_t)\|^2\,dt.$$
We see that
$$d\xi_t\cdot d\xi_t=4\big|(\sigma(X_t)-\sigma(Y_t))^*\eta_t\big|^2\,dt.$$
Let $\varepsilon>0$. Define
$$\Psi_\varepsilon(\xi)=\int_0^\xi\frac{ds}{s\log\frac1s+\varepsilon},\qquad\Phi_\varepsilon(\xi)=e^{\Psi_\varepsilon(\xi)}.$$
Then
$$\Phi'_\varepsilon=\frac{\Phi_\varepsilon}{\xi\log\frac1\xi+\varepsilon},\qquad
\Phi''_\varepsilon=\frac{\Phi'_\varepsilon}{\xi\log\frac1\xi+\varepsilon}+\frac{\Phi_\varepsilon\,(\log\xi+1)}{(\xi\log\frac1\xi+\varepsilon)^2}
=\frac{\Phi_\varepsilon\,(\log\xi+2)}{(\xi\log\frac1\xi+\varepsilon)^2}\le0\quad\text{if }\xi\le e^{-2}.$$
Define
$$\tau=\inf\{t>0:\xi_t\ge e^{-2}\wedge\delta_0^2\}.$$
By the Itô formula,
$$\Phi_\varepsilon(\xi_{t\wedge\tau})=1+\int_0^{t\wedge\tau}\Phi'_\varepsilon(\xi_s)\,d\xi_s+\frac12\int_0^{t\wedge\tau}\Phi''_\varepsilon(\xi_s)\,d\xi_s\cdot d\xi_s$$
$$=1+2\int_0^{t\wedge\tau}\Phi'_\varepsilon(\xi_s)\langle(\sigma(X_s)-\sigma(Y_s))^*\eta_s,dB_s\rangle+2\int_0^{t\wedge\tau}\Phi'_\varepsilon(\xi_s)\langle\eta_s,V(X_s)-V(Y_s)\rangle\,ds$$
$$+\int_0^{t\wedge\tau}\Phi'_\varepsilon(\xi_s)\|\sigma(X_s)-\sigma(Y_s)\|^2\,ds+\frac12\int_0^{t\wedge\tau}\Phi''_\varepsilon(\xi_s)\,d\xi_s\cdot d\xi_s.$$
Taking the expectation and using (4.9) together with $\Phi''_\varepsilon\le0$,
$$\mathbb E\big(\Phi_\varepsilon(\xi_{t\wedge\tau})\big)\le1+2\,\mathbb E\int_0^{t\wedge\tau}\frac{\Phi_\varepsilon\cdot C\xi_s\log\frac1{\xi_s}}{\xi_s\log\frac1{\xi_s}+\varepsilon}\,ds\le1+2C\int_0^t\mathbb E\big(\Phi_\varepsilon(\xi_{s\wedge\tau})\big)\,ds.$$
By the Gronwall lemma,
$$\mathbb E\big(\Phi_\varepsilon(\xi_{t\wedge\tau})\big)\le e^{2Ct}.$$
Letting $\varepsilon\downarrow0$: since $\int_0^\xi\frac{ds}{s\log\frac1s}=+\infty$ for every $\xi>0$, $\Phi_\varepsilon(\xi)\uparrow+\infty$ on $\{\xi>0\}$, so the uniform bound $e^{2Ct}$ forces $\xi_{t\wedge\tau}=0$ a.s. If $\mathbb P(\tau<T)>0$, then by continuity of the paths we get
$$\xi_\tau=0\quad\text{on }\{\tau<T\}.$$
But $\xi_\tau=\delta_0^2\wedge e^{-2}$ on $\{\tau<+\infty\}$. The contradiction implies that $\tau\ge T$ a.s., and hence $\xi_t=0$ a.s. □
5 Stochastic Flow of Homeomorphisms

In contrast to the ODE case, even under the global Lipschitz condition
$$\|\sigma(x)-\sigma(y)\|\le C|x-y|,\quad|V(x)-V(y)|\le C|x-y|,\tag{5.1}$$
a study of the dependence $x\to X_t(x,\omega)$ requires the following Kolmogorov modification theorem, one of the fundamental tools of probability theory.

Theorem 5.1 (Kolmogorov) Let $\{X_t:t\in[0,1]^m\}$ be a family of $\mathbb R^d$-valued random variables. Suppose there exist constants $\gamma\ge1$, $C,\delta>0$ such that
$$\mathbb E\big(|X_t-X_s|^\gamma_{\mathbb R^d}\big)\le C\,|t-s|^{m+\delta}_{\mathbb R^m}.\tag{5.2}$$
Then $X$ admits a continuous version $\tilde X_t$, satisfying
$$\mathbb E\Big[\sup_{s\ne t}\Big(\frac{|\tilde X_t-\tilde X_s|}{|t-s|^\alpha}\Big)^\gamma\Big]<+\infty\quad\text{for any }\alpha\in\Big(0,\frac\delta\gamma\Big).\tag{5.3}$$
In particular, there exists $M\in L^\gamma(\Omega)$ such that
$$|\tilde X_t-\tilde X_s|\le M\,|t-s|^\alpha,\qquad\forall\,t,s\in[0,1]^m.\tag{5.4}$$
Remark 5.2 (1) "$\tilde X$ is a version of $X$" means that for each $t\in[0,1]^m$, $\tilde X_t=X_t$ a.s.
(2) $X_t$ could take values in a Banach space, but it is crucial that the exponent of $|t-s|_{\mathbb R^m}$ be strictly bigger than $m$, namely $\delta>0$.
(3) Let $\{X^n_t:t\in[0,1]^m\}$ be a sequence of such families satisfying (5.2), with $C$ independent of $n$. Then there exist $M_n\in L^\gamma$, bounded in $L^\gamma$, such that
$$|\tilde X^n_t-\tilde X^n_s|\le M_n\,|t-s|^\alpha_{\mathbb R^m},\qquad\forall\,t,s\in[0,1]^m.\qquad\square$$

The proof of Theorem 5.1 can be found in any textbook on probability theory; we refer to [6].
Let $X_t(x,\omega)$ be the solution to
$$dX_t(\omega)=\sigma(X_t(\omega))\,dB_t+V(X_t(\omega))\,dt,\qquad X_0(\omega)=x.\tag{5.5}$$

Theorem 5.3 Under the global Lipschitz condition (5.1), the solution to (5.5) admits a continuous version $\tilde X$ such that a.s. $(t,x)\to\tilde X_t(x,\omega)$ is continuous and, for each $x\in\mathbb R^d$,
$$\mathbb P\{\omega:\forall\,t\ge0,\ \tilde X_t(x,\omega)=X_t(x,\omega)\}=1.\tag{5.6}$$
Proof. Let $R>0$, $T>0$ and set $I=[0,T]\times[-R,R]^d$. Consider $\eta_t=X_t(x)-X_t(y)$. We have
$$d\eta_t=\big(\sigma(X_t(x))-\sigma(X_t(y))\big)\,dB_t+\big(V(X_t(x))-V(X_t(y))\big)\,dt.$$
The Itô stochastic contraction of $d\eta_t$ is given by
$$d\eta_t\cdot d\eta_t=\|\sigma(X_t(x))-\sigma(X_t(y))\|^2\,dt.$$
Let $\xi_t=|\eta_t|^2$. Then by the Itô formula,
$$d\xi_t=2\langle\eta_t,d\eta_t\rangle+d\eta_t\cdot d\eta_t
=2\big\langle\eta_t,\big(\sigma(X_t(x))-\sigma(X_t(y))\big)\,dB_t\big\rangle+2\langle\eta_t,V(X_t(x))-V(X_t(y))\rangle\,dt+\|\sigma(X_t(x))-\sigma(X_t(y))\|^2\,dt.$$
The Itô stochastic contraction of $d\xi_t$ is
$$d\xi_t\cdot d\xi_t=4\big|\big(\sigma(X_t(x))-\sigma(X_t(y))\big)^*\eta_t\big|^2\,dt.$$
Let $p\ge2$. By the Itô formula,
$$d\xi_t^p=p\,\xi_t^{p-1}\,d\xi_t+\tfrac12p(p-1)\,\xi_t^{p-2}\,d\xi_t\cdot d\xi_t$$
$$=2p\,\xi_t^{p-1}\big\langle\eta_t,\big(\sigma(X_t(x))-\sigma(X_t(y))\big)\,dB_t\big\rangle+2p\,\xi_t^{p-1}\langle\eta_t,V(X_t(x))-V(X_t(y))\rangle\,dt$$
$$+p\,\xi_t^{p-1}\|\sigma(X_t(x))-\sigma(X_t(y))\|^2\,dt+2p(p-1)\,\xi_t^{p-2}\big|\big(\sigma(X_t(x))-\sigma(X_t(y))\big)^*\eta_t\big|^2\,dt$$
$$=:I_1(t)+I_2(t)+I_3(t)+I_4(t).$$
By the Lipschitz conditions,
$$|I_2(t)|\le2p\,\xi_t^{p-1}\cdot C|\eta_t|^2=2pC\,\xi_t^p,\quad
|I_3(t)|\le p\,\xi_t^{p-1}\cdot C^2\xi_t=pC^2\xi_t^p,\quad
|I_4(t)|\le2p(p-1)\,\xi_t^{p-2}|\eta_t|^2C^2|\eta_t|^2=2p(p-1)C^2\xi_t^p.$$
Therefore for some constant $C_p>0$, we have
$$\mathbb E(\xi_t^p)\le|x-y|^{2p}+C_p\int_0^t\mathbb E(\xi_s^p)\,ds.$$
By the Gronwall lemma,
$$\mathbb E(\xi_t^p)\le|x-y|^{2p}e^{C_pT}\quad\text{for any }t\le T,$$
or
$$\mathbb E\big(|X_t(x)-X_t(y)|^{2p}\big)\le|x-y|^{2p}e^{C_pT}.\tag{5.7}$$
Next, there exists a $C_p>0$ such that
$$|X_t(x)|^{2p}\le C_p\Big(|x|^{2p}+\Big|\int_0^t\sigma(X_s(x))\,dB_s\Big|^{2p}+\Big|\int_0^t V(X_s(x))\,ds\Big|^{2p}\Big).\tag{5.8}$$
By the Burkholder inequality, we have
$$\mathbb E\Big(\sup_{0\le t\le T}\Big|\int_0^t\sigma(X_s(x))\,dB_s\Big|^{2p}\Big)\le C_p\,\mathbb E\Big(\int_0^T\|\sigma(X_s(x))\|^2\,ds\Big)^p\le C_{p,T}\Big[1+\mathbb E\Big(\sup_{0\le t\le T}|X_t(x)|^{2p}\Big)\Big].$$
If we denote $u_t=\mathbb E\big(\sup_{0\le s\le t}|X_s(x)|^{2p}\big)$, then by (5.8),
$$u_t\le C_{p,T}\Big(|x|^{2p}+\int_0^t u_s\,ds\Big).$$
Again by the Gronwall lemma, we get
$$u_t\le C_{p,T}\,|x|^{2p}e^{tC_{p,T}},$$
or
$$\mathbb E\Big(\sup_{0\le t\le T}|X_t(x)|^{2p}\Big)\le C_{p,T}\,|x|^{2p}e^{TC_{p,T}}.\tag{5.9}$$
Now for $t>s$,
$$X_t-X_s=\int_s^t\sigma(X_r)\,dB_r+\int_s^t V(X_r)\,dr.$$
Again by the Burkholder inequality,
$$\mathbb E\Big(\Big|\int_s^t\sigma(X_r)\,dB_r\Big|^{2p}\Big)\le C_p\,\mathbb E\Big(\int_s^t\|\sigma(X_r)\|^2\,dr\Big)^p\le C_p\,\mathbb E\Big[\int_s^t\Big(1+\sup_{0\le r\le T}|X_r|^2\Big)\,dr\Big]^p\le C_p\Big[1+\mathbb E\Big(\sup_{0\le t\le T}|X_t|^{2p}\Big)\Big](t-s)^p.$$
Finally we get
$$\mathbb E\big(|X_t(x)-X_s(x)|^{2p}\big)\le C_{p,T}\,|t-s|^p.$$
Combining with (5.7), we have
$$\mathbb E\big(|X_t(x)-X_s(y)|^{2p}\big)\le C_{p,T}\big(|t-s|^p+|x-y|^{2p}\big).\tag{5.10}$$
Now take $p>d+1$; we can apply Kolmogorov's modification theorem: there exists a continuous version $\tilde X$ such that for given $(t,x)\in[0,T]\times[-R,R]^d$, a.s. $\tilde X_t(x,\omega)=X_t(x,\omega)$. Since the two processes are continuous with respect to $t$, we obtain, for given $x\in\mathbb R^d$, a.s.
$$\tilde X_t(x,\omega)=X_t(x,\omega)\quad\text{for all }t\ge0.\qquad\square$$

Remark 5.4 By (5.7), we see that $x\to X_t(x,\omega)$ is $(1-\varepsilon)$-Hölder continuous: there is a loss of regularity with respect to the coefficients. □
The main result of this section is
Theorem 5.5 There exists a full measure subset Ω0 ⊂ Ω such that for ω ∈ Ω0 and any t ≥ 0,x → Xt(x, ω) is a global homeomorphism of Rd.
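Before the proof, a quick one-dimensional illustration: injectivity of $x\to X_t(x)$ in $d=1$ means that trajectories driven by a common Brownian path never cross, so an ordered grid of initial points stays ordered under the (Euler-discretized) flow. The coefficients below are hypothetical globally Lipschitz choices:

```python
import numpy as np

# One-dimensional illustration of Theorem 5.5: solve the SDE for many
# initial points with the SAME Brownian path; the flow keeps them ordered,
# i.e. the map x -> X_t(x) stays injective.
sigma = lambda x: 0.4 * np.cos(x)      # hypothetical Lipschitz coefficients
V = lambda x: -0.5 * x

rng = np.random.default_rng(2)
T, n = 1.0, 20000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)

X = np.linspace(-2.0, 2.0, 50)         # ordered initial conditions
for k in range(n):                     # Euler-Maruyama, common noise
    X = X + sigma(X) * dB[k] + V(X) * dt
print(np.all(np.diff(X) > 0))          # order preserved -> no crossing
```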
Proof. Step 1. Let $x\ne y$ be given; for $\alpha<0$ we estimate $\mathbb E\big(|X_t(x)-X_t(y)|^{2\alpha}\big)$.
Take $\varepsilon\in(0,|x-y|^2)$. Set $\eta_t=X_t(x)-X_t(y)$ and $\xi_t=|\eta_t|^2$. We have $\xi_0=|x-y|^2>\varepsilon$. Define the stopping time
$$\tau_\varepsilon=\inf\{t>0:\xi_t\le\varepsilon\}.$$
Then $\xi_{t\wedge\tau_\varepsilon}\ge\varepsilon$. By the Itô formula,
$$\xi^\alpha_{t\wedge\tau_\varepsilon}=|x-y|^{2\alpha}+\alpha\int_0^{t\wedge\tau_\varepsilon}\xi_s^{\alpha-1}\,d\xi_s+\frac12\alpha(\alpha-1)\int_0^{t\wedge\tau_\varepsilon}\xi_s^{\alpha-2}\,d\xi_s\cdot d\xi_s$$
$$\le|x-y|^{2\alpha}+M_{t\wedge\tau_\varepsilon}+2\alpha\int_0^{t\wedge\tau_\varepsilon}\xi_s^{\alpha-1}\langle\eta_s,V(X_s(x))-V(X_s(y))\rangle\,ds+\frac12\alpha(\alpha-1)\int_0^{t\wedge\tau_\varepsilon}\xi_s^{\alpha-2}\,d\xi_s\cdot d\xi_s$$
$$=:|x-y|^{2\alpha}+I_1+I_2+I_3.$$
By the Lipschitz condition,
$$|I_2|\le2|\alpha|\,C\int_0^{t\wedge\tau_\varepsilon}\xi_s^\alpha\,ds.$$
In the same way, there exists a constant $C_\alpha>0$ such that
$$\mathbb E\big(\xi^\alpha_{t\wedge\tau_\varepsilon}\big)\le|x-y|^{2\alpha}+C_\alpha\int_0^t\mathbb E\big(\xi^\alpha_{s\wedge\tau_\varepsilon}\big)\,ds.$$
It follows that
$$\mathbb E\big(\xi^\alpha_{t\wedge\tau_\varepsilon}\big)\le|x-y|^{2\alpha}e^{tC_\alpha}.\tag{5.11}$$
Let $\tau_0=\inf\{t>0:|X_t(x)-X_t(y)|=0\}$; we see that
$$\tau_\varepsilon\uparrow\tau_0\quad\text{as }\varepsilon\downarrow0.$$
Then letting $\varepsilon\downarrow0$ in (5.11), we get
$$\mathbb E\big(\xi^\alpha_{t\wedge\tau_0}\big)\le|x-y|^{2\alpha}e^{tC_\alpha}.$$
If $\mathbb P(\tau_0<+\infty)>0$, then $\mathbb P(\tau_0\le T)>0$ for some $T>0$. Therefore
$$+\infty=\mathbb E\big(1_{\{\tau_0\le T\}}\,\xi^\alpha_{T\wedge\tau_0}\big)\le\mathbb E\big(\xi^\alpha_{T\wedge\tau_0}\big)\le|x-y|^{2\alpha}e^{TC_\alpha}<+\infty.$$
The contradiction implies that $\tau_0=+\infty$ a.s.; in other words,
$$\text{a.s. }X_t(x)\ne X_t(y)\text{ for all }t\ge0.$$

Remark 5.6 Here the "a.s." depends on the given pair $x\ne y$. □
Step 2. Let $\delta>0$ be given. Set
$$\triangle^R_\delta=\{(x,y)\in\mathbb R^d\times\mathbb R^d:|x-y|\ge\delta,\ |x|\le R,\ |y|\le R\}.$$
Let $X_t(x)$ be the continuous version. Define
$$\eta_t(x,y)=|X_t(x)-X_t(y)|^{-1}.$$
For $p\ge2$ and $(x,y),(\bar x,\bar y)\in\triangle^R_\delta$,
$$|\eta_t(x,y)-\eta_s(\bar x,\bar y)|\le\eta_t(x,y)\,\eta_s(\bar x,\bar y)\,\big||X_s(\bar x)-X_s(\bar y)|-|X_t(x)-X_t(y)|\big|
\le\eta_t(x,y)\,\eta_s(\bar x,\bar y)\,\big(|X_s(\bar x)-X_t(x)|+|X_s(\bar y)-X_t(y)|\big).$$
By (5.11),
$$\mathbb E\big(\eta_t(x,y)^{4p}\big)\le e^{C_{-p}T}\delta^{-4p},\qquad\mathbb E\big(\eta_s(\bar x,\bar y)^{4p}\big)\le e^{C_{-p}T}\delta^{-4p}.$$
Combining with (5.10), we have
$$\mathbb E\big(|\eta_t(x,y)-\eta_s(\bar x,\bar y)|^p\big)\le C_{p,T,\delta}\big(|t-s|^{\frac p2}+|x-\bar x|^p+|y-\bar y|^p\big).$$
Taking $\frac p2>2d+1$, again by Kolmogorov's modification theorem, $\eta_t(x,y)$ admits a continuous version $\tilde\eta$: $(t,x,y)\to\tilde\eta_t(x,y)$ on $[0,T]\times\triangle^R_\delta$. Since $\delta>0$, $R>0$ are arbitrary, we have
$$\text{a.s., }(t,x,y)\to\tilde\eta_t(x,y)\text{ is continuous on }[0,T]\times\triangle_0,$$
where $\triangle_0=\{(x,y)\in\mathbb R^d\times\mathbb R^d:x\ne y\}$.
Let $D$ be a dense countable subset of $[0,T]\times\triangle_0$. Then there exists a subset $\Omega_0\subset\Omega$ of full measure such that for every $\omega\in\Omega_0$ and $(t,x,y)\in D$, $\tilde\eta_t(x,y)=\eta_t(x,y)$, or
$$|X_t(x)-X_t(y)|=(\tilde\eta_t(x,y))^{-1}.$$
By continuity, this holds for all $(t,x,y)\in[0,T]\times\triangle_0$; in particular $|X_t(x)-X_t(y)|=(\tilde\eta_t(x,y))^{-1}\ne0$. In conclusion, we get a full measure subset $\Omega_0$ (independent of $x,y$) such that for each $\omega\in\Omega_0$, $\forall\,t\ge0$, $\forall\,x\ne y$, we have $X_t(x)\ne X_t(y)$.

Step 3 (Surjectivity). By considering $\bar x=x/|x|^2$ and
$$\eta_t(x)=\begin{cases}\big(1+|X_t(\bar x)|\big)^{-1},&x\ne0,\\[2pt]0,&x=0,\end{cases}$$
the same machinery shows that $x\to X_t(x)$ is continuous at $\infty$. Set $\hat{\mathbb R}^d=\mathbb R^d\cup\{\infty\}$, which is homeomorphic to $S^d$. Then almost surely, for all $t\ge0$, $x\to X_t(x,\omega)$ is continuous from $\hat{\mathbb R}^d$ into $\hat{\mathbb R}^d$. But $X_0(\cdot,\omega)=\mathrm{Id}$, therefore $X_t(\cdot,\omega)$ is homotopic to $\mathrm{Id}$. By a result from general topology, $X_t(\hat{\mathbb R}^d)=\hat{\mathbb R}^d$. In particular,
$$X_t(\mathbb R^d)=\mathbb R^d.$$
□

Due to Remark 5.4, in order that for a.s. $\omega\in\Omega$, $x\to X_t(x,\omega)$ be $C^2$, we have to suppose that $\sigma\in C_b^{2+\delta}$, $V\in C_b^{2+\delta}$, the space of bounded functions having bounded first and second order derivatives, the second order derivatives being $\delta$-Hölder continuous; for more detail, we refer to Kunita [8]. In what follows, we assume that the coefficients satisfy these conditions. Let $\varphi\in C_c^\infty(\mathbb R^d)$ and consider $(P_t\varphi)(x)=\mathbb E\big(\varphi(X_t(x,\cdot))\big)$. Then $(t,x)\to(P_t\varphi)(x)$ is in $C_b^2$. By the Itô formula,
$$\mathbb E\big(\varphi(X_t(x))\big)=\varphi(x)+\int_0^t\mathbb E\big((L\varphi)(X_s(x))\big)\,ds,$$
where
$$(L\varphi)(x)=\frac12\sum_{i,j=1}^d a_{ij}(x)\,\frac{\partial^2\varphi}{\partial x_i\partial x_j}(x)+\sum_{i=1}^d V_i(x)\,\frac{\partial\varphi}{\partial x_i}(x)$$
with $a=\sigma\sigma^*$. Now let $\mu_0$ be a probability measure on $\mathbb R^d$ and $P_{\mu_0}$ the diffusion distribution on $W=C([0,T],\mathbb R^d)$ defined by $P_{\mu_0}=\int_{\mathbb R^d}P_x\,d\mu_0(x)$, where $P_x$ is the law of $\omega\to X_\cdot(x,\omega)$. Define $\mu_t\in\mathcal P(\mathbb R^d)$ by
$$\int_{\mathbb R^d}\varphi\,d\mu_t=\int_W\varphi(\gamma(t))\,dP_{\mu_0}=\int_{\mathbb R^d}\mathbb E\big(\varphi(X_t(x,\omega))\big)\,d\mu_0(x)\quad\text{for all }\varphi\in C_b(\mathbb R^d).$$
Therefore for $\varphi\in C_b^2(\mathbb R^d)$,
$$\frac{d}{dt}\int_{\mathbb R^d}\varphi\,d\mu_t=\int_{\mathbb R^d}\mathbb E\big((L\varphi)(X_t(x))\big)\,d\mu_0(x)=\int_{\mathbb R^d}L\varphi\,d\mu_t.\tag{5.12}$$
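The relation (5.12) can be sanity-checked by Monte Carlo: for small $t$, $\big(\mathbb E[\varphi(X_t(x))]-\varphi(x)\big)/t\approx(L\varphi)(x)$. A sketch with hypothetical one-dimensional choices $\sigma=1$ (so $a=1$), $V(x)=-x$ and $\varphi(x)=x^2$, for which $(L\varphi)(x)=1-2x^2$:

```python
import numpy as np

# Monte Carlo check of (E[phi(X_t(x))] - phi(x)) / t ~= (L phi)(x):
# sigma = 1, V(x) = -x, phi(x) = x^2, so (L phi)(x) = 1 - 2 x^2.
rng = np.random.default_rng(3)
x, t, N = 1.0, 0.01, 400000
n_sub = 10                              # a few Euler sub-steps of dX = dB - X dt
dt = t / n_sub
X = np.full(N, x)
for _ in range(n_sub):
    X = X + rng.normal(0.0, np.sqrt(dt), N) - X * dt

estimate = (np.mean(X ** 2) - x ** 2) / t
print(estimate)                         # close to (L phi)(1) = -1
```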
Definition 5.7 Let $(\mu_t)_{t\in[0,T]}$ be a family of probability measures on $\mathbb R^d$. We say that $(\mu_t)$ satisfies the Fokker–Planck equation
$$\frac{\partial\mu_t}{\partial t}=L^*\mu_t,\qquad\mu|_{t=0}=\mu_0,\tag{5.13}$$
if for $\psi\in C_c^\infty((0,T)\times\mathbb R^d)$,
$$-\int\frac{\partial\psi}{\partial t}\,d\mu_t\,dt=\int L\psi\,d\mu_t\,dt.\tag{5.14}$$

As was done in Theorem 3.2, $(\mu_t)$ admits a version $(\tilde\mu_t)$ such that $t\to\tilde\mu_t$ is weakly continuous. Therefore equation (5.14) holds for $\psi\in C_b^2([0,T]\times\mathbb R^d)$.

Theorem 5.8 Suppose that $a\in C_b^{2+\delta}$ and $V\in C_b^{2+\delta}$. Then the Fokker–Planck equation (5.13) admits a unique solution.
Proof. By linearity, it is sufficient to prove that $\mu_t=0$ for all $t\in[0,T]$ if $\mu_0=0$. Fix $\varphi\in C_c^\infty(\mathbb R^d)$ and $t_0\in[0,T]$. Then the following backward PDE admits a unique solution $f\in C_b^2$:
$$\begin{cases}\dfrac{\partial f}{\partial t}+Lf=0&\text{in }[0,t_0]\times\mathbb R^d,\\[4pt] f(t_0,\cdot)=\varphi.\end{cases}\tag{5.15}$$
In fact, $f(t,x)=(P_{t_0-t}\varphi)(x)$ is the candidate. Let $\alpha\in C_c^\infty((0,T))$. By (5.14),
$$-\int\frac{\partial}{\partial t}(\alpha f)\,d\mu_t\,dt=\int\alpha\,Lf\,d\mu_t\,dt,$$
or
$$-\int_0^T\alpha'(t)\Big(\int_{\mathbb R^d}f(t,x)\,d\mu_t(x)\Big)\,dt=\int_0^T\alpha(t)\Big(\int_{\mathbb R^d}\Big(\frac{\partial f}{\partial t}+Lf\Big)\,d\mu_t\Big)\,dt.$$
It follows that
$$\frac{d}{dt}\int_{\mathbb R^d}f(t,x)\,d\mu_t(x)=\int_{\mathbb R^d}\Big(\frac{\partial f}{\partial t}+Lf\Big)\,d\mu_t=0.$$
Therefore
$$0=\int_{\mathbb R^d}f(0,x)\,d\mu_0(x)=\int_{\mathbb R^d}f(t_0,x)\,d\mu_{t_0}(x)=\int_{\mathbb R^d}\varphi(x)\,d\mu_{t_0}(x).$$
It follows that $\mu_{t_0}=0$. As $t_0$ is arbitrary, we get the result. □
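The candidate $f(t,x)=(P_{t_0-t}\varphi)(x)$ can be tested numerically in the special (hypothetical) case $L=\frac12\,d^2/dx^2$ of one-dimensional Brownian motion, where $P_s$ is a Gaussian convolution, computable by Gauss–Hermite quadrature; the backward equation $\partial_tf+\frac12 f''=0$ then holds up to finite-difference error:

```python
import numpy as np

# f(t, x) = E[phi(x + B_{t0 - t})] should satisfy df/dt + (1/2) f'' = 0
# with f(t0, .) = phi, for L = (1/2) d^2/dx^2.
phi = np.cos                             # hypothetical terminal condition
t0 = 1.0

z, wgt = np.polynomial.hermite_e.hermegauss(80)   # nodes/weights for N(0,1)
wgt = wgt / wgt.sum()                             # normalize to probability

def f(t, x):
    s = t0 - t                           # remaining time
    return np.sum(wgt * phi(x + np.sqrt(s) * z))

t, x, h = 0.4, 0.7, 1e-3
df_dt = (f(t + h, x) - f(t - h, x)) / (2 * h)        # central differences
f_xx = (f(t, x + h) - 2 * f(t, x) + f(t, x - h)) / h ** 2
print(df_dt + 0.5 * f_xx)                # close to 0
```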
6 Stochastic Differential Equations: Weak Solutions

Let $\sigma:\mathbb R^d\to M_{d,m}(\mathbb R)$ and $V:\mathbb R^d\to\mathbb R^d$ be bounded Borel functions. Suppose that $(X_t,B_t)$ solves
$$dX_t(\omega)=\sigma(X_t(\omega))\,dB_t+V(X_t(\omega))\,dt.\tag{6.1}$$
Then by the Itô formula, for any $f\in C_c^2(\mathbb R^d)$,
$$M^f_t:=f(X_t)-f(X_0)-\int_0^t(Lf)(X_s)\,ds=\int_0^t\langle\sigma^*(X_s)\nabla f(X_s),dB_s\rangle$$
is an $L^2$-martingale, where
$$(Lf)(x)=\frac12\sum_{i,j=1}^d a_{ij}(x)\,\frac{\partial^2f}{\partial x_i\partial x_j}(x)+\sum_{i=1}^d V_i(x)\,\frac{\partial f}{\partial x_i}(x),\qquad a=\sigma\sigma^*.\tag{6.2}$$
In general, for a given operator $L$ as in (6.2), we say that a continuous adapted process $(X_t)_{t\ge0}$ defined on a probability space $(\Omega,\mathcal F,\mathcal F_t,\mathbb P)$ is a solution to the $L$-martingale problem if for any $f\in C_c^2(\mathbb R^d)$,
$$M^f_t:=f(X_t)-f(X_0)-\int_0^t(Lf)(X_s)\,ds$$
is a continuous martingale. If furthermore the matrix $a$ admits a decomposition $a=\sigma\sigma^*$ with $\sigma:\mathbb R^d\to M_{d,m}(\mathbb R)$, then there is a Brownian motion compatible with $(\mathcal F_t)$ such that $X_t$ solves (6.1). This topic is well treated in Stroock–Varadhan's book [11]. In what follows, we will construct a weak solution.
Theorem 6.1 Let $\sigma,V$ be bounded continuous and $\mu_0\in\mathcal P(\mathbb R^d)$ with compact support. Then the SDE (6.1) admits a weak solution $(X_t,B_t)$ such that $\mu_0=\mathrm{law}(X_0)$.

Proof. First, there exists a probability space $(\Omega,\mathcal F,\mathcal F_t,\mathbb P)$ on which are defined a compatible Brownian motion $(B_t)$ and a random variable $\xi_0\in\mathcal F_0$ with $\mathrm{law}(\xi_0)=\mu_0$. Next, for $n\ge1$, define $X_n(0)=\xi_0$ and, for $t\in[k2^{-n},(k+1)2^{-n})$,
$$X_n(t)=X_n(k2^{-n})+\sigma\big(X_n(k2^{-n})\big)(B_t-B_{k2^{-n}})+V(X_n(k2^{-n}))(t-k2^{-n}).$$
Using $\Phi_n(t)=k2^{-n}$ for $t\in[k2^{-n},(k+1)2^{-n})$, we express $X_n(t)$ as
$$X_n(t)=\xi_0+\int_0^t\sigma\big(X_n(\Phi_n(s))\big)\,dB_s+\int_0^t V\big(X_n(\Phi_n(s))\big)\,ds.$$
As $\sigma$ and $V$ are bounded, for $T>0$ fixed, by the Burkholder inequality,
$$\sup_n\mathbb E\Big(\sup_{0\le t\le T}|X_n(t)|^{2p}\Big)<+\infty,\qquad\sup_n\mathbb E\big(|X_n(t)-X_n(s)|^{2p}\big)\le C_{p,T}\,|t-s|^p.$$
By Kolmogorov's modification theorem, there exist $M_n\in L^{2p}$, bounded in $L^{2p}$, such that
$$|X_n(t)-X_n(s)|\le M_n\,|t-s|^\alpha,\qquad\alpha<\frac{p-1}{2p}.\tag{6.3}$$
Let
$$K_R=\{w\in C([0,T],\mathbb R^d):|w(0)|\le R,\ |w(t)-w(s)|\le R\,|t-s|^\alpha\}.$$
By the Ascoli theorem, $K_R$ is compact in $C([0,T],\mathbb R^d)$. Let $\nu_n=\mathrm{law}(X_n(\cdot))$. Then
$$\nu_n(K_R^c)\le\nu_n(|w(0)|>R)+\nu_n(\exists\,t\ne s,\ |w(t)-w(s)|>R\,|t-s|^\alpha).$$
But for $R$ big enough,
$$\nu_n(|w(0)|>R)=\mathbb P(|X_n(0)|>R)=\mu_0(|x|>R)=0,$$
and according to (6.3),
$$\nu_n(\exists\,t\ne s,\ |w(t)-w(s)|>R\,|t-s|^\alpha)=\mathbb P(\exists\,t\ne s,\ |X_n(t)-X_n(s)|>R\,|t-s|^\alpha)\le\mathbb P(M_n\ge R)\le\frac{\|M_n\|^{2p}_{L^{2p}}}{R^{2p}}\le\frac{C_p}{R^{2p}}.$$
Therefore $\sup_n\nu_n(K_R^c)<\varepsilon$ for $R$ big enough. The family $\{\nu_n:n\ge1\}$ is tight. Up to a subsequence, $\nu_n$ converges weakly to $\nu$. Now let $f\in C_c^2(\mathbb R^d)$ and let $F$ be a bounded continuous function from $C([0,T],\mathbb R^d)$ to $\mathbb R$ which is $\mathcal F^W_s$-measurable. We have
$$\mathbb E^\nu\Big[\Big(f(w(t))-f(w(s))-\int_s^t(Lf)(w_u)\,du\Big)F\Big]
=\lim_{n\to\infty}\mathbb E\Big[\Big(f(X_n(t))-f(X_n(s))-\int_s^t(Lf)(X_n(u))\,du\Big)F(X_n)\Big].\tag{6.4}$$
By the Itô formula, we have
$$f(X_n(t))-f(X_n(s))=M_{s,t}+\int_s^t(Lf)(X_n(\Phi_n(u)))\,du,$$
where $M_{s,t}$ is the martingale part, so that
$$\mathbb E\big(M_{s,t}F(X_n)\big)=0.$$
Now we will prove that
$$\lim_{n\to+\infty}\mathbb E\Big[\Big(\int_s^t\big((Lf)(X_n(\Phi_n(u)))-(Lf)(X_n(u))\big)\,du\Big)F(X_n)\Big]=0.\tag{6.5}$$
If $X_n(\cdot)$ converged to $X(\cdot)$ uniformly over $[0,T]$, it would be easy to see that (6.5) holds; such convergence can be realized by using the Skorohod representation theorem. We will give another proof, using the following basic estimate, which has independent interest.

Lemma 6.2 For $1<a<\sqrt2$, there exists $C>1$ such that
$$\mathbb P\Big(\sup_{0\le t\le T}|X_n(t)-X_n(\Phi_n(t))|\ge a^{-n}\Big)\le C_T\,e^{-C^n/4}.\tag{6.6}$$

End of the proof of Theorem 6.1. Let
$$\Omega_n=\Big\{\sup_{0\le t\le T}|X_n(t)-X_n(\Phi_n(t))|\ge a^{-n}\Big\}.$$
Let $\varepsilon>0$. Since $f$ is compactly supported, $x\to(Lf)(x)$ is uniformly continuous on the support of $f$. Therefore for $n$ big enough and $\omega\notin\Omega_n$,
$$|(Lf)(X_n(\Phi_n(u)))-(Lf)(X_n(u))|<\varepsilon,\qquad\forall\,u\in[0,T].$$
Then
$$\Big|\mathbb E\Big(\int_s^t\big((Lf)(X_n(\Phi_n(u)))-(Lf)(X_n(u))\big)\,du\Big)F(X_n)\Big|\le\varepsilon\,\mathbb P(\Omega_n^c)+C\,\mathbb P(\Omega_n).$$
Then (6.5) follows. □

To prove Lemma 6.2, we need the following preparation.
Lemma 6.3 Let $X_t=\int_0^t\sigma_s\,dB_s+\int_0^t f_s\,ds$ be a semimartingale with $\sigma_s\in M_{d,m}$, $f_s\in\mathbb R^d$. Suppose furthermore that
$$\|\sigma_s(\omega)\|\le A,\qquad|f_s(\omega)|\le B.\tag{6.7}$$
Then for $T>0$ and $R>\sqrt d\,BT$,
$$\mathbb P\Big(\sup_{0\le t\le T}|X_t|\ge R\Big)\le2d\,e^{-(R-\sqrt d\,BT)^2/2dA^2T}.\tag{6.8}$$
Proof. We have $|X(t)|=\big(\sum_{i=1}^d X_i^2(t)\big)^{1/2}\le\sqrt d\,\max_{1\le i\le d}|X_i(t)|$, thus
$$\{|X(t)|\ge R\}\subset\bigcup_{i=1}^d\Big\{|X_i(t)|\ge\frac R{\sqrt d}\Big\}.$$
But
$$X_i(t)=\int_0^t\langle\sigma_s^*\varepsilon_i,dB_s\rangle+\int_0^t\langle f_s,\varepsilon_i\rangle\,ds,$$
where $\{\varepsilon_1,\cdots,\varepsilon_d\}$ is the canonical basis of $\mathbb R^d$. Then for $t\le T$,
$$|X_i(t)|\le\Big|\int_0^t\langle\sigma_s^*\varepsilon_i,dB_s\rangle\Big|+BT.$$
Therefore
$$\Big\{\sup_{0\le t\le T}|X_i(t)|\ge\frac R{\sqrt d}\Big\}\subset\Big\{\sup_{0\le t\le T}\Big|\int_0^t\langle\sigma_s^*\varepsilon_i,dB_s\rangle\Big|\ge\frac R{\sqrt d}-BT\Big\}.\tag{6.9}$$
Let $\alpha>0$. We know that
$$M_t:=\exp\Big\{\alpha\int_0^t\langle\sigma_s^*\varepsilon_i,dB_s\rangle-\frac{\alpha^2}2\int_0^t|\sigma_s^*\varepsilon_i|^2\,ds\Big\}$$
is a martingale. For $M>0$, we have
$$\Big\{\sup_{0\le t\le T}\int_0^t\langle\sigma_s^*\varepsilon_i,dB_s\rangle\ge M\Big\}\subset\Big\{\sup_{0\le t\le T}M_t\ge\exp\Big(\alpha M-\frac{\alpha^2}2\int_0^T|\sigma_s^*\varepsilon_i|^2\,ds\Big)\Big\}\subset\Big\{\sup_{0\le t\le T}M_t\ge\exp\Big(\alpha M-\frac{\alpha^2}2A^2T\Big)\Big\}.$$
Again by Doob's maximal inequality,
$$\mathbb P\Big(\sup_{0\le t\le T}\int_0^t\langle\sigma_s^*\varepsilon_i,dB_s\rangle\ge M\Big)\le e^{-\alpha M+\frac{\alpha^2}2A^2T}\,\mathbb E(M_T)=e^{-\alpha M+\frac{\alpha^2}2A^2T}.$$
Taking the minimum over $\alpha>0$ (at $\alpha=M/A^2T$), we get
$$\mathbb P\Big(\sup_{0\le t\le T}\int_0^t\langle\sigma_s^*\varepsilon_i,dB_s\rangle\ge M\Big)\le e^{-M^2/2A^2T}.$$
Now
$$\Big\{\sup_{0\le t\le T}\Big|\int_0^t\langle\sigma_s^*\varepsilon_i,dB_s\rangle\Big|\ge M\Big\}\subset\Big\{\sup_{0\le t\le T}\int_0^t\langle\sigma_s^*\varepsilon_i,dB_s\rangle\ge M\Big\}\cup\Big\{\sup_{0\le t\le T}\Big(-\int_0^t\langle\sigma_s^*\varepsilon_i,dB_s\rangle\Big)\ge M\Big\}.$$
It follows that
$$\mathbb P\Big(\sup_{0\le t\le T}\Big|\int_0^t\langle\sigma_s^*\varepsilon_i,dB_s\rangle\Big|\ge M\Big)\le2e^{-M^2/2A^2T}.$$
Replacing $M$ by $\frac R{\sqrt d}-BT$, we get the result. □
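For $\sigma\equiv1$ in dimension one ($A=1$), the left-hand side of the one-coordinate estimate is known exactly by the reflection principle, $\mathbb P(\sup_{t\le T}B_t\ge M)=2\,\mathbb P(B_T\ge M)=\operatorname{erfc}(M/\sqrt{2T})$, which gives a quick numerical confirmation that the exponential-martingale bound really dominates it:

```python
import math

# Compare the exact reflection-principle probability with the
# exponential-martingale bound exp(-M^2 / (2 A^2 T)), A = 1.
T = 1.0
for M in [0.5, 1.0, 2.0, 3.0]:
    exact = math.erfc(M / math.sqrt(2 * T))    # P(sup_{t<=T} B_t >= M)
    bound = math.exp(-M * M / (2 * T))         # Doob / exponential bound
    print(M, exact, bound)                     # exact <= bound in each row
```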
Proof of Lemma 6.2. We have
$$X_n(t)-X_n(k2^{-n})=\int_{k2^{-n}}^t\sigma\big(X_n(\Phi_n(s))\big)\,dB_s+\int_{k2^{-n}}^t V\big(X_n(\Phi_n(s))\big)\,ds.$$
Using the estimate (6.8),
$$\mathbb P\Big(\sup_{t\in[k2^{-n},(k+1)2^{-n}]}|X_n(t)-X_n(\Phi_n(t))|\ge a^{-n}\Big)\le2d\,e^{-\frac12\left(\frac2{a^2}\right)^n}.$$
Now
$$\mathbb P\Big(\sup_{0\le t\le T}|X_n(t)-X_n(\Phi_n(t))|\ge a^{-n}\Big)\le T2^n\cdot2d\,e^{-\frac12\left(\frac2{a^2}\right)^n}\le2dT\,e^{-\frac14\left(\frac2{a^2}\right)^n}$$
for $n$ big enough such that $n\log2\le\frac14\left(\frac2{a^2}\right)^n$. □

Now we shall study uniqueness in law. First, remark that if $\mu_B$ is the law of the Brownian motion $\omega\to B_\cdot(\omega)$, then for $0<t_1<\cdots<t_n$,
$$\int_W\varphi(\gamma(t_1),\cdots,\gamma(t_n))\,d\mu_B(\gamma)=\int_{\mathbb R^{nd}}\varphi(x_1,\cdots,x_n)\,P_{t_1}(x_1)\,P_{t_2-t_1}(x_2-x_1)\cdots P_{t_n-t_{n-1}}(x_n-x_{n-1})\,dx_1\cdots dx_n,$$
where $P_t(x)=(2\pi t)^{-d/2}e^{-|x|^2/2t}$. Therefore the law of $\omega\to B_\cdot(\omega)$ is uniquely determined.
Theorem 6.4 Let $V$ be a bounded Borel vector field on $\mathbb R^d$. Then the SDE
$$dX_t=dB_t+V(X_t)\,dt\tag{6.10}$$
admits a weak solution and uniqueness in law holds.

Proof. Let
$$M_T=\exp\Big\{\int_0^T\langle V(B_s),dB_s\rangle-\frac12\int_0^T|V(B_s)|^2\,ds\Big\}.\tag{6.11}$$
Then the Girsanov theorem says that, under the new probability measure $dQ=M_T\,d\mathbb P$,
$$w_t:=B_t-\int_0^t V(B_s)\,ds$$
is a Brownian motion. Therefore $(B_t,w_t)$ solves (6.10) under $Q$. By the expression (6.11), $M_T$ is a functional of $B$: $M_T=F(B)$. If $\nu$ is the law of the solution of the SDE (6.10), then
$$\int\varphi\,d\nu=\mathbb E\big(\varphi(B)M_T\big)=\int\varphi F\,d\mu_B.$$
It follows that $\nu$ is uniquely determined. □

The following general result is due to Stroock and Varadhan [11].
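Returning to the proof of Theorem 6.4: the change-of-measure identity $\int\varphi\,d\nu=\mathbb E(\varphi(B)M_T)$ is easy to check by Monte Carlo for a hypothetical constant drift $V\equiv c$, with the weight $M_T=\exp\big(\int_0^T c\,dB_s-\frac12c^2T\big)$ (sign convention such that the reweighted paths acquire drift $+c$); the solution law then gives $\mathbb E[X_T]=cT$:

```python
import numpy as np

# Girsanov reweighting: E_P[M_T B_T] should equal E_Q[B_T] = c T,
# since under Q = M_T dP the path B acquires drift +c.
rng = np.random.default_rng(4)
c, T, n, N = 0.7, 1.0, 500, 200000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), (N, n))    # N Brownian paths
B_T = dB.sum(axis=1)
stoch_int = (c * dB).sum(axis=1)             # int_0^T V dB with V constant = c
M_T = np.exp(stoch_int - 0.5 * c * c * T)    # Girsanov density
print(np.mean(M_T * B_T))                    # close to c*T = 0.7
print(np.mean(M_T))                          # weights average to 1
```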
Theorem 6.5 Let $a(x)=\sigma(x)\sigma(x)^*$ be a bounded continuous matrix-valued function such that $a\ge c\,\mathrm{Id}$ with $c>0$, and let $V$ be a bounded Borel vector field. Then uniqueness in law holds for the SDE (6.1).

In the remainder, we shall show how the ellipticity makes a weak solution into a strong one.

Theorem 6.6 Under the same hypotheses as in Theorem 6.4, the SDE (6.10) admits a unique strong solution.
We will not give a complete proof, but emphasize the crucial role of the ellipticity. Here are two basic results in this context.

A result from PDE. Assume $V_\cdot\in L^{2d+2}(\mathbb R_+\times\mathbb R^d)$. Then the backward PDE
$$\frac{\partial u^k_t}{\partial t}+\Delta u^k_t+V\cdot\nabla u^k_t=0,\qquad u^k_T(x)=x_k\in\mathbb R\tag{6.12}$$
admits a solution $u\in W^{1,2}_{2d+2}\big((0,T)\times B(R)\big)$; moreover there exists a small $T_0$ such that
$$|u_t(x)-u_t(y)|\ge c\,|x-y|,\qquad\forall\,t\in[0,T_0],\ x,y\in\mathbb R^d.\tag{6.13}$$

Krylov estimate. Let $X^0,X^1$ be two solutions to (6.10) with the same $(B_t)$ and $X^0(0)=X^1(0)=x$. For any $\alpha\in[0,1]$, set
$$X^\alpha(t)=\alpha X^1(t)+(1-\alpha)X^0(t).$$
Then for any Borel function $f:\mathbb R_+\times\mathbb R^d\to\mathbb R_+$ and $q\ge d+1$, $\lambda>0$, $T>0$, we have
$$\mathbb E\Big(\int_0^{\tau_R\wedge T}e^{-\lambda t}f(t,X^\alpha(t))\,dt\Big)\le N\,\|f\|_{L^q((0,T)\times B(R))},\tag{6.14}$$
where $\tau_R=\inf\{t\ge0:|X^0(t)-x|+|X^1(t)-x|\ge R\}$. For $u$ given in (6.12), the Itô formula holds (with summation over $j$):
$$du^k(t,X^i(t))=\frac{\partial u^k}{\partial x_j}(t,X^i(t))\,dB^j(t),\qquad i=0,1.$$
In this way, the drift term $V_t$ disappears. Now we compute $|u(t,X^1(t))-u(t,X^0(t))|^2$. Let $\eta_t=u(t,X^1(t))-u(t,X^0(t))$ and $\xi_t=|\eta_t|^2$. By the Itô formula,
$$d\xi_t=2\langle\eta_t,d\eta_t\rangle+d\eta_t\cdot d\eta_t
=\sum_{j=1}^d2\Big\langle\eta_t,\frac{\partial u}{\partial x_j}(t,X^1(t))-\frac{\partial u}{\partial x_j}(t,X^0(t))\Big\rangle\,dB^j(t)
+\sum_{j=1}^d\Big|\frac{\partial u}{\partial x_j}(t,X^1(t))-\frac{\partial u}{\partial x_j}(t,X^0(t))\Big|^2\,dt$$
$$=\xi_t\,(dM_t+dN_t),\qquad\xi_0=u(0,x)-u(0,x)=0,$$
where
$$dM_t=2\xi_t^{-1}\sum_{j=1}^d\Big\langle\eta_t,\frac{\partial u}{\partial x_j}(t,X^1(t))-\frac{\partial u}{\partial x_j}(t,X^0(t))\Big\rangle\,dB^j_t,\qquad
dN_t=\xi_t^{-1}\sum_{j=1}^d\Big|\frac{\partial u}{\partial x_j}(t,X^1(t))-\frac{\partial u}{\partial x_j}(t,X^0(t))\Big|^2\,dt.$$
In the "good situation", $\xi_t$ satisfies a linear SDE driven by a semimartingale, with $\xi_0=0$. By pathwise uniqueness, $\xi_t=0$ a.s. Then according to (6.13), $X^0(t)=X^1(t)$ for $t\le T_0$. Now for $t\in[T_0,2T_0]$, introduce $B^{T_0}_t=B(T_0+t)-B(T_0)$. Then $t\to X^i(t+T_0)$ satisfies
$$X^i(t+T_0)=X^i(T_0)+B^{T_0}_t+\int_0^t V(X^i_{s+T_0})\,ds.$$
By what has been done above, we get
$$X^1(t+T_0)=X^0(t+T_0)\quad\text{for }t\le T_0.$$
Proceeding in this way, we see that $X^1(t)=X^0(t)$ for all $t\ge0$. Now we shall show that we are indeed in this "good situation". We have
$$dM_t\cdot dM_t=4\xi_t^{-2}\sum_{j=1}^d\Big\langle\eta_t,\frac{\partial u}{\partial x_j}(t,X^1(t))-\frac{\partial u}{\partial x_j}(t,X^0(t))\Big\rangle^2\,dt.\tag{6.15}$$
By the mean value formula,
$$\frac{\partial u}{\partial x_j}(t,X^1(t))-\frac{\partial u}{\partial x_j}(t,X^0(t))=\Big(\int_0^1\nabla_x\frac{\partial u}{\partial x_j}(t,X^\alpha(t))\,d\alpha\Big)(X^1(t)-X^0(t)),$$
where $\nabla_x$ denotes the gradient with respect to $x$; hence
$$\Big|\frac{\partial u}{\partial x_j}(t,X^1(t))-\frac{\partial u}{\partial x_j}(t,X^0(t))\Big|\le|\eta_t|\int_0^1\Big|\nabla_x\frac{\partial u}{\partial x_j}(t,X^\alpha(t))\Big|\,d\alpha.$$
Combining with (6.15), it is sufficient to show that
$$\mathbb E\Big(\int_0^{T_0\wedge\tau_R}\Big|\nabla_x\frac{\partial u}{\partial x_j}(t,X^\alpha(t))\Big|^2\,dt\Big)<+\infty.$$
By the Krylov estimate (6.14),
$$\mathbb E\Big(\int_0^{T_0\wedge\tau_R}\Big|\nabla_x\frac{\partial u}{\partial x_j}(t,X^\alpha(t))\Big|^2\,dt\Big)\le e^{T_0}N\,\Big\|\nabla_x\frac{\partial u}{\partial x_j}\Big\|_{L^q((0,T)\times B(R))}\le e^{T_0}N\,\|u\|_{W^{1,2}_q((0,T)\times B(R))},$$
which is finite due to the PDE result. Therefore
$$t\to M_{t\wedge\tau_R}+N_{t\wedge\tau_R}=:S_t$$
is a continuous $L^2$-semimartingale. We have
$$d\xi_{t\wedge\tau_R}=\xi_{t\wedge\tau_R}\,dS_t\quad\text{and}\quad\xi_0=0.$$
It follows that
$$\xi_{t\wedge\tau_R}=0.$$
Now it is not hard to prove that $\tau_R\ge T_0$ for $R$ large enough, so that we get $\xi_t=0$ a.s. for $t\in[0,T_0]$. □
7 Notes
Section 1 is essentially taken from [3] and [10]. Section 2 is taken from DiPerna and Lions [2], in which the authors treated mainly the delicate case $V\in L^1([0,T],W^{1,1}_{loc}(\mathbb R^d))$. Section 3 is taken from L. Ambrosio [1], but the proof of Theorem 3.3 is slightly different. The definitions in Section 4 follow the book of Ikeda and Watanabe [6]; the proof of Theorem 4.3 is taken from Fang and Zhang [4]. Section 5 follows the book of Kunita [8]; the proof of Kolmogorov's modification theorem can be found in [6], p. 20. The relations between martingale problems and second order partial differential operators are well studied in Stroock and Varadhan's book [11]. The proof of Theorem 6.1 does not use the Skorohod representation theorem; instead, we used elementary exponential martingale estimates. The outline of the proof of Theorem 6.6 is taken from Krylov and Röckner [7].

Acknowledgment. This note is based on a mini-course given at the Stochastic Center of BNU in July 2007. The author is grateful to the auditors for their attention: it is an encouragement to him. The author also thanks Professor CHEN MuFa for suggesting arranging the course for publication.
References

[1] L. Ambrosio, Transport equation and Cauchy problem for non-smooth vector fields. Course, Cetraro, 2005.

[2] R.J. DiPerna and P.L. Lions, Ordinary differential equations, transport theory and Sobolev spaces. Invent. Math. 98 (1989), 511–547.

[3] Shizan Fang and Dejun Luo, Flow of homeomorphisms and stochastic transport equations. Accepted by Stochastic Analysis and Applications, 2007.

[4] Shizan Fang and Tusheng Zhang, A study of a class of stochastic differential equations with non-Lipschitzian coefficients. Probab. Theory Relat. Fields 132 (2005), 356–390.

[5] A. Figalli, Existence and uniqueness of martingale solutions for SDEs with rough or degenerate coefficients. Preprint, 2007.

[6] N. Ikeda and S. Watanabe, Stochastic Differential Equations and Diffusion Processes. North-Holland, Amsterdam, 1981.

[7] N.V. Krylov and M. Röckner, Strong solutions of stochastic equations with singular time dependent drift. Probab. Theory Related Fields 131 (2005), 154–196.

[8] H. Kunita, Stochastic Flows and Stochastic Differential Equations. Cambridge University Press, 1990.

[9] C. Le Bris and P.L. Lions, Existence and uniqueness of solutions to Fokker–Planck type equations with irregular coefficients. Preprint, 2007.

[10] Dejun Luo, Regularity of solutions to differential equations with non-Lipschitz coefficients. Accepted by Bulletin des Sciences Mathématiques, 2007.

[11] D.W. Stroock and S.R.S. Varadhan, Multidimensional Diffusion Processes. Grundlehren Series 233, Springer-Verlag, 1979.