Differential Equations of Mathematical Physics:
Theory and Numerical Simulations

Rubén Flores Espinoza, Martín Gildardo García Alvarado, Georgii Omel'yanov

April 6, 2005
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1 Partial Differential Equations of the First Order 13
1.1 Introductory comments . . . . . . . . . . . . . . . . . . . 13
1.2 The method of characteristics . . . . . . . . . . . . . . . . 16
1.2.1 Conservation laws . . . . . . . . . . . . . . . . . . 16
1.2.2 General first-order partial differential equations solved for the time derivative . . . . . . 24
1.2.3 General Cauchy problem for first-order partial differential equations . . . . . . . 30
1.3 Systems of partial differential equations of the first order . 37
1.3.1 Semilinear systems . . . . . . . . . . . . . . . . . . 37
1.3.2 Quasilinear systems . . . . . . . . . . . . . . . . . 41
1.4 Singular solutions . . . . . . . . . . . . . . . . . . . . . . . 45
1.4.1 Sketch of the distributions theory . . . . . . . . . . 45
1.4.2 Applications to differential equations . . . . . . . . 52
1.4.3 Shock waves propagation . . . . . . . . . . . . . . 55
1.4.4 Propagation of weak singularities . . . . . . . . . . 58
1.4.5 The entropy condition and rarefaction waves . . . 59
1.5 Numerical simulations . . . . . . . . . . . . . . . . . . . . 62
1.5.1 Applications of the characteristics method . . . . . 62
1.5.2 Direct numerical methods . . . . . . . . . . . . . . 64
1.6 Bibliographical comments . . . . . . . . . . . . . . . . . . 66
2 Boundary Value Problems for Elliptic Equations 69
2.1 Classification of equations . . . . . . . . . . . . . . . . . . 69
2.2 Differential equations of the elliptic type . . . . . . . . . . 76
2.2.1 Boundary value conditions . . . . . . . . . . . . . . 76
2.2.2 Main properties of harmonic functions . . . . . . . 79
2.2.3 The Green formula for the Dirichlet problem . . . 81
2.2.4 The eigenvalue problem and the Fourier method . 83
2.2.5 Asymptotics for elliptic equations with a small parameter . . . . . . . 91
2.3 Numerical simulations . . . . . . . . . . . . . . . . . . . . 98
2.3.1 Discretization of domains . . . . . . . . . . . . . . 98
2.3.2 Discretization of differential operators . . . . . . . 99
2.3.3 Discretization of boundary conditions . . . . . . . 102
2.3.4 Gauss method for systems with three-diagonal matrices . . . . . . . 105
2.3.5 The eigenvalue problem for a finite-difference scheme . . . . . . . 108
2.3.6 Discretization of the Laplace operator. Iterative method for solving algebraic systems . . . 110
2.4 Discretization in the case of variable coefficients . . . . . . 113
2.5 Bibliographical comments . . . . . . . . . . . . . . . . . . 115
3 Parabolic Equations 119
3.1 Parabolic linear equations . . . . . . . . . . . . . . . . . . 119
3.1.1 Cauchy problems and boundary value problems for parabolic equations . . . . . . . 119
3.1.2 The Cauchy problem for the diffusion equation . . 121
3.1.3 The mixed problem for the diffusion equation . . . 124
3.1.4 The Fourier method for the diffusion equation . . . 126
3.1.5 A-priori estimates for mixed problems . . . . . . . 130
3.2 Finite difference schemes . . . . . . . . . . . . . . . . . . . 135
3.2.1 The one-dimensional spatial case . . . . . . . . . . 135
3.2.2 The multidimensional case . . . . . . . . . . . . . . 149
3.3 A-priori estimates and stability . . . . . . . . . . . . . . . 158
3.3.1 Auxiliary formulas . . . . . . . . . . . . . . . . . . 158
3.3.2 Some spaces of net functions and embedding theorems . . . . . . . 162
3.3.3 A-priori estimates . . . . . . . . . . . . . . . . . . 166
3.4 Bibliographical comments . . . . . . . . . . . . . . . . . . 173
4 Hyperbolic Equations 175
4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
4.1.1 The Duhamel formulas . . . . . . . . . . . . . . . . 176
4.1.2 The Cauchy problem for the wave equation . . . . 178
4.2 Mixed problems for hyperbolic equations . . . . . . . . . . 189
4.2.1 The reflection method . . . . . . . . . . . . . . . . 189
4.2.2 The Fourier method for the wave equation . . . . . 198
4.2.3 Energy relations for the mixed problem . . . . . . 204
4.3 Finite difference schemes . . . . . . . . . . . . . . . . . . . 209
4.3.1 One dimensional case . . . . . . . . . . . . . . . . 209
4.3.2 A-priori estimates and stability for the case of variable coefficients . . . . . . . 219
4.3.3 Two dimensional case . . . . . . . . . . . . . . . . 225
4.4 The WKB method . . . . . . . . . . . . . . . . . . . . . . 230
4.4.1 The Schrödinger equation . . . . . . . . . . . . . . . 230
4.4.2 The wave equation . . . . . . . . . . . . . . . . . . 235
4.5 Bibliographical comments . . . . . . . . . . . . . . . . . . 240
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Introduction
This book is the product of a series of lectures given at the Mathematics Department of the University of Sonora during the year 2002.
The original idea of the course was just to present the most convenient and popular methods of numerical simulation for partial differential equations. However, it became clear almost immediately that it is impossible to consider finite-difference schemes without a knowledge of the correctness of boundary value problems, the typical behavior of solutions for the basic classes of differential equations of mathematical physics, and so on. Therefore, the basic elements of standard courses on the theory of PDEs have been included. Moreover, in order to indicate ways to obtain examples of solutions, at least to test the computer programs, we decided to describe some methods for finding exact and asymptotic solutions. Furthermore, we believe that it is time to include nonlinear equations, at least the most important of them, in standard courses on differential equations of mathematical physics. As a result, the whole text of the textbook series includes the elements of both linear and nonlinear PDE theory, asymptotic methods and methods of exact integration, and methods of numerical simulation.
One of the authors (G. O.) is indebted to the administration of the Mathematics Department and the University of Sonora for the opportunity to stay in Hermosillo and for the kind hospitality received. He would also like to thank personally Israel Segundo Caballero and Rubén Flores Espinoza for their friendly collaboration.
Chapter 1
Partial Differential Equations of the First Order
1.1 Introductory comments
The main objective of this part is to consider the initial value problem for the first-order PDE of the form

∂u/∂t + F(u, ∂u/∂x, x, t) = 0.  (1.1)
Here, x ∈ Rⁿ, t > 0, and F(u, p, x, t) ∈ C² is a scalar function. It is natural to treat the variable t as the time and to pose the initial value condition
u|t=0 = u0(x). (1.2)
However, sometimes the original equation is not solved with respect to the first derivative, u_t. In this case we have the general form of the first-order PDE:

F(u, ∂u/∂t, ∂u/∂x, x, t) = 0,  (x, t) ∈ Rⁿ⁺¹,  (1.3)
and, instead of (1.2) we can consider the general Cauchy problem
u|_γ = u0(x′, t),  (x′, t) ∈ γ ⊂ Rⁿ⁺¹,  (1.4)
where γ is a smooth surface of codimension 1.
In the one-dimensional case, the original equation can often be written in the form

∂u/∂t + ∂Φ(u, x, t)/∂x = 0,  (1.5)
which is called a conservation law. When u and Φ are vector functions, (1.5) represents a system of conservation laws.
What is the physical meaning of these equations?
When physicists describe the real world, they usually obtain extremely complicated equations. However, after concentrating attention on some specific processes and neglecting some less important effects, they obtain simpler equations. For example, a more or less realistic model for gas dynamics is the system
∂ρ/∂t + div(ρu) = 0,
ρ(∂u/∂t + 〈u, ∇〉u) + ∇p = ε²∆u,
ρ(∂T/∂t + 〈u, ∇T〉) = ε² div(κ∇T) + E(∂p/∂t + 〈u, ∇p〉),
p = ρT,  (1.6)
where x ∈ R³, ρ is the density, u is the velocity vector, p is the pressure, T is the temperature, and ε, κ and E are parameters.
Let us assume that the dissipation is small, that is, ε ≪ 1, and that the process is isothermic, that is, T ≃ constant. Then we can pass from system (1.6) to the first-order system
∂ρ/∂t + div(ρu) = 0,
ρ(∂u/∂t + 〈u, ∇〉u) + ∇p = 0,
p = cρ^γ.  (1.7)
Let us now assume that the solution depends upon only one space variable. This means that on each plane

P = {x ∈ R³ : x₁ = x₁⁰ = constant}
the unknown functions are constant with respect to the space variables x₂, x₃. Then we can rewrite equations (1.7) as the system of conservation laws
∂ρ/∂t + ∂(ρu)/∂x = 0,
∂(ρu)/∂t + ∂(ρu² + p)/∂x = 0.  (1.8)
However, system (1.8) is still very complicated. So, let us assume, in addition, that, for some reason, the velocity and the pressure are constant. Then, we can obtain from (1.8) the transport equation:
∂ρ/∂t + ∂(uρ)/∂x = 0,  (1.9)
with not necessarily constant u. This equation appears in the modeling of the motion of pollutants in a water stream. However, when passing from the gas dynamics equations (1.6) to the transport equation (1.9) we lost the extremely important property of nonlinearity. We will see later that the behaviors of the solutions of equations (1.8) and (1.9) are qualitatively different. So, a more appropriate simplification of equation (1.8) is the Hopf equation or, what is the same, the inviscid Burgers equation

∂v/∂t + ∂v²/∂x = 0.  (1.10)
However, the passage from equation (1.8) to equation (1.10) is not so adequate because we have to assume that the density is constant and to neglect the first equation. Nevertheless, equations (1.8) and (1.10) are qualitatively very similar. Even more, such equations appear in other applications. For example, the simplest traffic flow model is the equation

∂u/∂t + v₁ ∂/∂x [u(1 − 2u/u₁)] = 0,  where u₁, v₁ are constants.
It is clear that, after a change of variables, we can rewrite this equation as equation (1.10). On the other hand, this equation is just the continuity equation (the first equation in system (1.6)). The point is that here u is just the density of cars on the road (number of cars per mile). Then, the flux function

Φ := v₁(1 − 2u/u₁)u

can be represented as the product u · v with v = v₁(1 − 2u/u₁). So, the function v = v(u) can be treated as the velocity of the traffic flow. From this viewpoint the meaning of the parameters u₁ and v₁ is clear: v₁ is the maximum possible speed and u₁ is the maximum possible density on the road.
Another source of first-order equations is asymptotic theory. For example, when we construct the WKB asymptotic solution, we can obtain a special type of nonlinear PDE, the Hamilton-Jacobi equation

∂u/∂t + F(∂u/∂x, x, t) = 0.
1.2 The method of characteristics
The main idea of the method of characteristics consists in finding a family of curves such that the PDE becomes an ODE along the curves of the family.
1.2.1 Conservation laws
Example 1.2.1. Let us consider, as our first example, the transport equation (1.9) with u = constant. Assume that the characteristic curves are
x = X(t, x0),
where x0 is a parameter (the position X at time t = 0) and the function X is sufficiently smooth. We put ρ = ρ(X(t, x0), t) and derive

dρ(X, t)/dt = (∂ρ(x, t)/∂t + (dX/dt) ∂ρ(x, t)/∂x)|_{x=X}.  (1.11)
It is clear that if we choose

dX/dt = u  (1.12)
we find that the equality (1.11) is just the left-hand side of equation (1.9). So, defining X as a solution of equation (1.12) we transform our partial differential equation into the ordinary differential equation

dρ/dt = 0,  (1.13)
but only along the curves x = X(t, x0). Now, let us consider the initial value problem, adding to equation (1.9) the initial condition
ρ|t=0 = ρ0(x). (1.14)
Relation (1.14) implies the initial condition for equation (1.13):
ρ|t=0 = ρ0(x0). (1.15)
Since

X|_{t=0} = x0,  (1.16)
we obtain the system of equations (1.12), (1.13) with conditions (1.15), (1.16). This system is called the characteristic system, and the curves x = X(t, x0) are called the characteristic curves. For our example the characteristics are the lines x = X(t, x0) = ut + x0 (see Figure 1.1), and ρ = ρ0(x0) along these lines.
Finally, solving for x0 from the equation x = X(t, x0) = ut + x0 and substituting the result into the equation ρ = ρ0(x0), we obtain the solution

ρ(t, x) = ρ0(x − ut)

to equation (1.9) with initial condition (1.14). ♦¹
¹The symbol ♦ denotes the end of an example.
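The construction of Example 1.2.1 is easy to check numerically. The minimal sketch below (with an illustrative initial profile ρ0 that is not from the text) verifies that the solution is constant along the lines x = ut + x0:

```python
import math

def rho0(x):
    # illustrative initial profile (an assumption, not from the text)
    return math.exp(-x * x)

def solve_transport(x, t, u):
    # trace the characteristic through (x, t) back to its foot: x0 = x - u*t
    return rho0(x - u * t)

u, x0 = 2.0, 0.5
# along the characteristic x = u*t + x0 the solution keeps the value rho0(x0)
for t in (0.0, 0.7, 1.3):
    assert abs(solve_transport(u * t + x0, t, u) - rho0(x0)) < 1e-14
```

The same back-tracing idea underlies the characteristics-based numerical methods discussed in Section 1.5.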
Figure 1.1: Characteristics for the equation (1.9) with u = constant.
Example 1.2.2. Now, let us consider the Hopf equation (1.10), adding the initial condition

u|_{t=0} = u0(x).  (1.17)
For the same reasons as before we obtain the characteristic system
dX/dt = 2U,  dU/dt = 0,
X|_{t=0} = x0,  U|_{t=0} = u0(x0),  (1.18)
where U = u(X(t, x0), t). The second of the equations in (1.18) shows that U = u0(x0) is constant along the characteristics X = 2Ut + x0 = 2u0(x0)t + x0. However, since the initial function u0 is not constant, the equation
x = 2u0(x0)t + x0  (1.19)
can be solved for the parameter x0, generally speaking, only for small values of t. More precisely, x0 can be found from (1.19) only if the Jacobian

J := ∂X/∂x0 = 1 + 2t (∂u0/∂x0)(x0)
is positive. Let J(t) > 0 for t < t∗. Then we obtain the function x0 = x0(x, t) and so, the solution to our initial value problem is
u = u0(x0)|_{x0=x0(x,t)},  0 ≤ t < t∗. ♦
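A minimal numerical sketch of this example (the decreasing initial datum u0 below is an illustrative choice, not from the text): the breaking time t∗ is found from the Jacobian J = 1 + 2t u0′(x0), and for t < t∗ equation (1.19) is inverted for x0 by bisection.

```python
import math

def u0(x):  return -math.tanh(x)            # decreasing data, as in (1.21)
def du0(x): return -1.0 / math.cosh(x) ** 2

# J(t, x0) = 1 + 2*t*du0(x0) first vanishes where du0 is most negative:
# min du0 = -1 at x0 = 0, so t* = 1/2 for this datum
t_star = 0.5
assert abs(1.0 + 2.0 * t_star * du0(0.0)) < 1e-14

def hopf_solution(x, t, a=-50.0, b=50.0):
    """u(t, x) = u0(x0(x, t)) for t < t*, with x0 found by bisection."""
    f = lambda x0: 2.0 * u0(x0) * t + x0 - x   # monotone in x0 while J > 0
    for _ in range(200):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return u0(0.5 * (a + b))

# before breaking (t < 1/2) the implicit relation u = u0(x - 2*u*t) holds
t, x = 0.25, 0.3
u = hopf_solution(x, t)
assert abs(u0(x - 2.0 * u * t) - u) < 1e-9
```

For t > t∗ the bisection setup loses its justification, since (1.19) then has several roots; this is exactly the breakdown of the classical solution discussed below.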
In order to give a more detailed description of the solution's dependence on the initial data we consider two cases. The first one is the
Cauchy problem (1.10), (1.17) with nondecreasing initial data,

∂u0/∂x0 ≥ 0 for all x ∈ R¹.  (1.20)
The graph of u0 looks as in Figure 1.2.
Figure 1.2: An example of nondecreasing initial data for the Hopf equation
Due to (1.19), the family of the characteristic curves looks as in Figure 1.3. Of course, this is a consequence of assumption (1.20). So,
Figure 1.3: Characteristics for the Hopf equation with nondecreasing initial data
generally speaking, we have almost the same situation as in Example 1.2.1 (see Figure 1.1).
Hence, the solution for such initial data exists globally in time.
Conversely, let

∂u0/∂x0 < 0 for all x ∈ R¹.  (1.21)
Illustrating this case, we draw the initial data as in Figure 1.4. It is
Figure 1.4: An example of decreasing initial data
clear, then, that relation (1.19) implies that the characteristic curves look as shown in Figure 1.5.
The intersection means that we can treat the pair (x0, t) as a coordinate system only for t < t∗. Even more, the classical solution to the Hopf equation exists only for t < t∗.
We can summarize our first results in the following statement of
Figure 1.5: Characteristics for the Hopf equation with decreasing initial data
the Cauchy problem for the simplest conservation law:

∂u/∂t + ∂φ(u, t)/∂x = 0,  u|_{t=0} = u0(x).  (1.22)
Theorem 1.2.1. Let φ_u ∈ C²(R¹ × R¹₊) and u0 ∈ C²(R¹). Then, there exists a t∗ > 0 such that the classical solution u ∈ C²(R¹ × [0, t∗)) to problem (1.22) exists and is unique. Moreover, this solution has the form
u = u0(x0(x, t)), (1.23)
where x0 = x0(x, t) is the solution of the equation
x = X(t, x0) (1.24)
under the condition

J := dX/dx0 > 0  (1.25)
and the function X satisfies the problem
dX/dt = (∂φ/∂u)|_{u=u0(x0)},  X|_{t=0} = x0.  (1.26)
Remark 1.2.1. Recall that the critical time t∗ is the minimal solution of the equation J(t∗) = 0.
Remark 1.2.2. Theorem 1.2.1 can be generalized to the n-dimensional case. Indeed, let us consider the initial value problem for the multidimensional conservation law
∂u/∂t + Σ_{i=1}^{n} ∂φ_i(u, t)/∂x_i = 0,  u|_{t=0} = u0(x).  (1.27)
Assuming that φ_i and u0 have the same properties as above and repeating our construction, we obtain again that u is constant along the characteristics. However, to find the characteristics, instead of (1.26) we have the system of equations
dX_i/dt = (∂φ_i/∂u)|_{u=u0(x0)},  X_i|_{t=0} = x0_i,  i = 1, 2, …, n.  (1.28)
Obviously, this is the vector form of equation (1.26), and condition (1.25) transforms into
J := det(∂X/∂x0) > 0,  0 ≤ t < t∗.  (1.29)
Consider now more general conservation laws of the form
∂u/∂t + Σ_{i=1}^{n} ∂φ_i(u, x, t)/∂x_i = 0,  u|_{t=0} = u0(x).  (1.30)
Trying again to transform the PDE (1.30) into a family of ordinary equations, we set
u = u(X, t), X = X(t, x0).
Since

du(X, t)/dt = (∂u(x, t)/∂t + Σ_{i=1}^{n} (dX_i/dt) ∂u(x, t)/∂x_i)|_{x=X},
it is clear that we have to define X as follows:

dX_i/dt = ∂φ_i(u, X, t)/∂u,  i = 1, 2, …, n.  (1.31)
However, in contrast with the previous equation (1.22), u is not constant along the characteristics, but satisfies the equation
du/dt + Σ_{i=1}^{n} (∂φ_i(z, x, t)/∂x_i)|_{z=u, x=X} = 0.  (1.32)
In order to avoid mixing u = u(x, t), with independent variables x, t, and u = u(X, t) on the characteristics, denote
U = u(X, t),
and rewrite the characteristic system (1.31), (1.32) as follows
dX_i/dt = (∂φ_i(u, X, t)/∂u)|_{u=U},
dU/dt + Σ_{i=1}^{n} (∂φ_i(z, x, t)/∂x_i)|_{z=U, x=X} = 0.  (1.33)
The initial data are chosen in the same way as before
U|_{t=0} = u0(x0),  X|_{t=0} = x0.  (1.34)
The classical solution of the Cauchy problem (1.33), (1.34) exists if φ_i ∈ C². Thus, to find the solution to the original problem (1.30), we have to solve the equation
x = X(t, x0)
with respect to x0. Obviously, the condition (1.29) appears again.
Summarizing the results for conservation laws, we stress that the condition on the non-degeneracy of the Jacobian ((1.25) for x ∈ R¹ and (1.29) for x ∈ Rⁿ) plays a central role for the possibility of applying the characteristics method and for the existence of a classical solution. At the initial instant of time J = 1, since X|_{t=0} = x0. The tendency of the Jacobian evolution is described by the following theorem.
Theorem 1.2.2 (Liouville). Consider the Cauchy problem
dX/dt = Φ(X, t),  X|_{t=0} = x0,  (1.35)
where X = (X₁, …, Xₙ), Φ = (Φ₁, …, Φₙ) ∈ C¹. Assume that the Jacobian J = J(t, x0) satisfies the condition (1.29). Then, the following equality holds:
(1/J) dJ/dt = 〈∇_x, Φ(x, t)〉|_{x=X}.  (1.36)
Example 1.2.3. Consider the multidimensional version of the transport equation (1.9):
∂ρ/∂t + Σ_{i=1}^{n} ∂(u_i ρ)/∂x_i = 0,
ρ|_{t=0} = ρ0(x),  (1.37)
where u = (u₁, …, uₙ) ∈ C²(Rⁿ × R¹₊) is a known vector function. Equality (1.37) is called the continuity equation. In this case the characteristic
system (1.33) has the form

dρ/dt + ρ div u|_{x=X} = 0,  ρ|_{t=0} = ρ0(x0),  (1.38)
dX/dt = u(X, t),  X|_{t=0} = x0.  (1.39)
Applying condition (1.29), from (1.38) we obtain
ρ(t, x0) = ρ0(x0) exp(−∫₀ᵗ div u|_{x=X(t′,x0)} dt′).  (1.40)
However, the statement of Liouville’s theorem results in the equality
div u|_{x=X} = (d/dt) ln J.
Hence

ρ(t, x0) = ρ0(x0)/J(t, x0).
Solving equation (1.39) we find the functions X = X(t, x0) and, for t < t∗, the inverse function x0 = x0(X, t). So the final form of the solution to the Cauchy problem (1.37) is as follows:
ρ(t, x) = (ρ0(x0)/J(t, x0))|_{x0=x0(X,t)}.  (1.41)
Note that for a specific class of flows u, namely those for which div u(x, t) = 0, formula (1.41) becomes extremely simple:
ρ(t, x) = ρ0(x0)|_{x0=x0(X,t)}. ♦
1.2.2 General first-order partial differential equations solved for the time derivative
All the previous examples can be summarized in the equation
∂u/∂t + F(u, ∂u/∂x, x, t) = 0,
where F (u, p, x, t) is linear in p,
F (u, p, x, t) =⟨p,∂ φ(u, x, t)
∂ u
⟩+ Φ(u, x, t), (1.42)
φ = (φ₁, …, φₙ), p = (p₁, …, pₙ) and 〈·, ·〉 denotes, as usual, the scalar product in Rⁿ,

〈f, g〉 = Σ_{i=1}^{n} f_i g_i,

and Φ = Σ_{i=1}^{n} (∂φ_i(z, x, t)/∂x_i)|_{z=u} = 〈∇_x, φ(z, x, t)〉|_{z=u}.
In this sense the characteristic system (1.33) can be written as follows:
dX_i/dt = ∂F(U, p, x, t)/∂p_i,  i = 1, 2, …, n,
dU/dt = −Φ(U, X, t) = 〈p, ∂F(U, p, x, t)/∂p〉 − F(U, p, x, t).  (1.43)
We stress that the right-hand sides of equations (1.43) do not depend on p for functions F of the form (1.42).
Let us consider the general equation (1.1) in which F(u, p, x, t) is nonlinear with respect to p. It is clear that the right-hand sides of equations (1.43) will depend on the derivative p = ∂u/∂x. In order to close the characteristic system, let us treat this variable p as a new unknown function. Then, adding the equation for p, we obtain the following characteristic system:
dX/dt = ∂F(U, p, X, t)/∂p,  X|_{t=0} = x0,
dp/dt = −∂F(U, p, x, t)/∂x − p ∂F(U, p, x, t)/∂U,  p|_{t=0} = p0,
dU/dt = 〈p, ∂F(U, p, x, t)/∂p〉 − F(U, p, x, t),  U|_{t=0} = u0(x0),  (1.44)
where X and p are vector functions if x ∈ Rn and
p0 = (∂u0(x)/∂x)|_{x=x0}.  (1.45)
The solutions to this system are also called characteristics. If F is sufficiently smooth, the characteristics exist at least for small t. We denote this solution by
X = X(t, x0), p = P (t, x0), and U = U(t, x0)
and again consider the equation
x = X(t, x0), (1.46)
that can be solved under the assumption
J := det(∂X/∂x0) > 0  for t < t∗.  (1.47)
Denoting this solution by x0 = x0(x, t), we obtain the solution to the original Cauchy problem:
u = U(t, x0)|_{x0=x0(x,t)}.  (1.48)
Theorem 1.2.3. Let F(u, p, x, t) ∈ C²(R²ⁿ⁺¹ × R¹₊) and u0 ∈ C²(Rⁿ). Then, under assumption (1.47), there exists a unique solution u(x, t) ∈ C²(Rⁿ × [0, t∗)) to problem (1.1), (1.2). This solution has the form (1.48).
Corollary 1.2.1. The following relation holds:

(d/dt) F(U(t, x0), P(t, x0), X(t, x0), t) = (∂F(u, z, x, t)/∂t)|_{u=U, z=P, x=X}.  (1.49)
Indeed, the total derivative in the left-hand side of (1.49) can be rewritten in the form
dF/dt = (∂F/∂U)(dU/dt) + 〈∂F/∂P, ∂P/∂t〉 + 〈∂F/∂X, ∂X/∂t〉 + ∂F/∂t.
Applying the characteristic system (1.44), we obtain the equality (1.49).
Corollary 1.2.2. Let F = F(U, P, X). Then F is the first integral for the characteristic system (1.44),

dF(U, P, X)/dt = 0.
In view of the applications, let us consider a special case of equations (1.1) with F = F(p, x, t). Recall that the equation
∂u/∂t + F(∂u/∂x, x, t) = 0  (1.50)
is called the Hamilton-Jacobi equation. Since the Hamiltonian F(p, x, t) does not depend on u, the first two equations from the characteristic system (1.44) transform as follows:
dX/dt = ∂F(P, X, t)/∂P,  X|_{t=0} = x0,
dP/dt = −∂F(P, X, t)/∂X,  P|_{t=0} = P0.  (1.51)
So, these equations split from the whole system (1.44) and form a closed system. This is called the Hamiltonian system or, again, the characteristic system. The functions X = X(t, x0) and P = P(t, x0) are called the bi-characteristics or, often, simply the characteristics again.
Now, since the bi-characteristics X and P can be treated as known functions, the right-hand side in the third equation in system (1.44) is a known function as well. Thus, after integration, we rewrite this equation as follows:
U(t, x0) = u0(x0) + ∫₀ᵗ [〈P, ∂F(P, X, t′)/∂P〉 − F(P, X, t′)] dt′.  (1.52)
Example 1.2.4. (The harmonic oscillator) A special case of the Hamilton-Jacobi equation, with Hamiltonian quadratic in p and x, is the harmonic oscillator. Consider the problem
∂u/∂t + (1/2)(∂u/∂x)² + (1/2)x² = 0,  u|_{t=0} = (1/2)x².  (1.53)
After performing all the calculations for our example (1.53) we obtain the Hamiltonian system

dX/dt = p,  X|_{t=0} = x0,
dp/dt = −X,  p|_{t=0} = x0,

and the equality

dU/dt = (1/2)(p² − X²).
Thus

X = x0 (cos t + sin t) = √2 x0 cos(π/4 − t),
p = x0 (cos t − sin t) = √2 x0 sin(π/4 − t),  (1.54)
and we find that the Jacobian

J = √2 cos(π/4 − t)
is positive only for 0 ≤ t < t∗ = 3π/4. Since
(1/2)(p² − X²) = −2(x0)² cos t sin t
we obtain the formula

U(t, x0) = ((x0)²/2) cos 2t
of the solution along the characteristics x = X(t, x0). Finally, solving the equation x = X(t, x0) we obtain the solution to problem (1.53):
u(t, x) = (x²/2) tan(π/4 − t),  0 ≤ t < 3π/4. ♦
Figure 1.6: Bi-characteristics and Lagrange's manifolds for the harmonic oscillator

Figure 1.7: Characteristics (projections of the bi-characteristics on the (x, t) plane) for the harmonic oscillator
Let us draw the characteristics (x = X(t, x0), p = P(t, x0)) on the phase plane (x, p) (see Figure 1.6). The characteristics are circles and they are defined for all time t. It is useful to draw also the sets Λt = {X(t, x0), P(t, x0) : t = constant, x0 ∈ R¹} (V. P. Maslov called these sets Lagrange's manifolds). For our example, the Λt are lines rotating with time. Until the time t∗ the manifolds Λt have unique projections on the line p = 0. At time t = t∗ this line becomes vertical, and its projection is the point (0, 0). This behavior of Lagrange's manifolds coincides with the behavior of the characteristics x = X(t, x0) (more precisely, of the projection of the characteristics (X, P) onto the line X). The first of the formulas in (1.54) implies that any characteristic curve starting at the point (x0, 0) tends to the point (0, t∗) as t → t∗, uniformly in x0 (see Figure 1.7). It is also clear that u → ∞ as t → t∗. Such behavior of the solution is similar to
the behavior of the trajectories of light rays in a lens. This is the reason why the described phenomenon is called the appearance of a focal point, and the point (0, t∗) is called the focal point.
1.2.3 General Cauchy problem for first-order partial differential equations
Consider briefly the most general case (1.3) of first-order equations with Cauchy data on a surface γ. For simplicity, we start with the two-dimensional case (x, t) ∈ R², but the formulation of the main theorem will be given for the general n-dimensional case.
First of all, note that now, with the initial data on the curve γ ⊂ R², the variables t and x have the same rights. So we will make the change of notation t ↔ y and rewrite our problem in the form
F(u, ∂u/∂x, ∂u/∂y, x, y) = 0,  u|_γ = u0,  γ ⊂ R²_{xy}.  (1.55)
Moreover, such problems are usually treated as stationary, and equation (1.55) is often called the eikonal equation.
Let γ be a curve without self-intersections, described parametrically by γ = {(x, y) : x = x0(ξ), y = y0(ξ), ξ ∈ R¹}. Let also F(u, p, q, x, y) and u0 = u0(ξ) be twice differentiable functions. Then, performing almost the same considerations as above, we obtain the system of characteristic equations corresponding to (1.55):
dX/dτ = ∂F/∂p,  dY/dτ = ∂F/∂q,
dp/dτ = −∂F/∂X − p ∂F/∂U,  dq/dτ = −∂F/∂Y − q ∂F/∂U,
dU/dτ = p ∂F/∂p + q ∂F/∂q.  (1.56)
In these equations F = F(U, p, q, X, Y) and τ is the parameter along the characteristics. Note that in the last of equations (1.56) the term −F
does not appear. The point is that F(U, p, q, X, Y) is the first integral for system (1.56). To prove this, we calculate the derivative
dF/dτ = (∂F/∂p)(dp/dτ) + (∂F/∂q)(dq/dτ) + (∂F/∂X)(dX/dτ) + (∂F/∂Y)(dY/dτ) + (∂F/∂U)(dU/dτ).
Substituting dp/dτ, . . . , dU/dτ from (1.56) we readily obtain dF/dτ = 0.
Now, we have to pose the initial conditions for system (1.56). At the initial "instant of time" τ = 0 the point (X, Y) has to belong to the curve γ and the function U has to be equal to the initial value u0 at this point. So
X |τ=0 = x0(ξ), Y |τ=0 = y0(ξ), U |τ=0 = u0(ξ). (1.57)
However, the initial data for p and q are not derived so easily. The point is that p0 = ∂u/∂x|_{τ=0}, q0 = ∂u/∂y|_{τ=0}, but the initial function u0 is defined only on γ and we cannot calculate these derivatives directly. At the same time,
du0/dξ = (∂u/∂x ∂x/∂ξ + ∂u/∂y ∂y/∂ξ)|_{τ=0} = p0 ∂x0/∂ξ + q0 ∂y0/∂ξ.  (1.58)
On the other hand, it is natural to assume that equation (1.55) is satisfied on γ too. Thus, we can replace Eq. (1.55) by the following equation
F (u0, p0, q0, x0, y0) = 0, (1.59)
and derive two equations for the two unknown functions. Let p0(i), q0(i), i = 1, 2, …, M, be the solutions to the system (1.58), (1.59) at the point ξ = ξ0. Then, we need to assume that there exists a neighborhood ω of ξ0 such that, for any i = 1, 2, …, M,
J0 = det ( ∂x0/∂ξ   ∂y0/∂ξ
           ∂F/∂p    ∂F/∂q )|_{ω⊆γ} ≠ 0,  (1.60)

where the arguments of the derivatives of F are u0, p0(i), q0(i), x0 and y0, and ξ ∈ ω. This assumption plays a very important role. It is obvious that under (1.60) there exist (locally) functions p0(i) = p0(i)(ξ) and
q0(i) = q0(i)(ξ). On the other hand, assumption (1.60) guarantees the correctness of the initial value problem (1.55). Indeed, this inequality means that the tangents to γ and to the characteristic (X, Y) (see formulas (1.56)) are not parallel. It is convenient to note that, from this geometrical viewpoint, assumption (1.60) means that the pair (ξ, τ) is a local coordinate system. However, and this is very important, the solution p0, q0 is not necessarily unique.
Let us choose one of these solutions and pose the initial conditions
p|_{τ=0} = p0(i),  q|_{τ=0} = q0(i).  (1.61)
Then, there exists a neighborhood Ω of γ ∩ ω in which the Cauchy problem (1.56), (1.57), (1.61) has a unique solution
X(i) = X(i)(τ, ξ), Y(i) = Y(i)(τ, ξ),
p(i) = p(i)(τ, ξ), q(i) = q(i)(τ, ξ),
U(i) = U(i)(τ, ξ).
Now, we have to return to the Euler coordinates x, y. To do this, we have to solve the equations
x = X(i)(τ, ξ), y = Y(i)(τ, ξ).
Thus, we come to the assumption
J(i) = det ( ∂X(i)/∂ξ   ∂Y(i)/∂ξ
             ∂F(i)/∂p   ∂F(i)/∂q ) ≠ 0,  (1.62)
where we took into account the first two equations in the characteristic system (1.56) and F(i) = F(U(i), p(i), q(i), X(i), Y(i)). We stress that the right-hand side of (1.62) evaluated at τ = 0 is just the same as in (1.60). So assumption (1.62) is satisfied at least for small values of τ. Thus, for such values of τ we have the functions
τ(i) = τ(i)(x, y), ξ(i) = ξ(i)(x, y)
1.2. THE METHOD OF CHARACTERISTICS 33
and, as a result, the solution
u = U(i)(τ(i)(x, y), ξ(i)(x, y)) (1.63)
of the original Cauchy problem (1.55). This construction can be summarized in the following theorem.
Theorem 1.2.4. Let F and u0 be twice differentiable functions and let γ be a nondegenerate C² curve. Let equations (1.58)-(1.59) have M solutions such that assumption (1.60) is satisfied for each of them. Then, there exists a domain Ω ⊃ γ such that the Cauchy problem (1.55) has M classical solutions of the form (1.63).
Example 1.2.5. (The eikonal equation)
Consider the equation

(∂u/∂x)² + (∂u/∂y)² = n²,
u|_γ = a,  (1.64)
where a and n are constants, and γ = {x0 = R cos ξ, y0 = R sin ξ}, R = constant > 0. Obviously, the curve γ is nondegenerate and smooth. Let us consider equations (1.58), (1.59). In our case we have
−p0 sin ξ + q0 cos ξ = 0, (p0)2 + (q0)2 = n2. (1.65)
The first of the equations in (1.65) means that the vector (p0, q0) is proportional to (x0, y0). Thus,
p0 = σ cos ξ, q0 = σ sin ξ,
where σ is an arbitrary constant. Therefore, from the second of the equations in (1.65) we find two possible roots:
σ± = ±n.
Let

p0(1) = n cos ξ,  q0(1) = n sin ξ
and

p0(2) = −n cos ξ,  q0(2) = −n sin ξ.
Then the initial Jacobian is
J0(±) = det ( −R sin ξ     R cos ξ
              2σ± cos ξ   2σ± sin ξ ) = ∓2nR ≠ 0.
Next, we write out the characteristic system for both roots σ±:
dX/dτ = 2p,  dY/dτ = 2q,  X|_{τ=0} = R cos ξ,  Y|_{τ=0} = R sin ξ,
dp/dτ = 0,  dq/dτ = 0,  p|_{τ=0} = σ± cos ξ,  q|_{τ=0} = σ± sin ξ,
dU/dτ = 2(p² + q²) = 2n²,  U|_{τ=0} = a.
Obviously, p ≡ p0 and q ≡ q0, and we obtain two systems of characteristics:
X± = (2σ±τ + R) cos ξ,  Y± = (2σ±τ + R) sin ξ,
and U = a + 2n²τ. To come back to Eulerian variables, we have to check the Jacobians
J(±) = det
∣∣∣∣∣∣
−(2σ±τ + R) sinξ (2σ±τ + R) cos ξ
2σ± cos ξ 2σ± sin ξ
∣∣∣∣∣∣= ∓2n(R± 2nτ).
So, for the first root p0(1), q0(1), we have J+ ≠ 0 for all τ ≥ 0. Since for this root

τ = (1/(2n)) ( √(x² + y²) − R ),

we find the solution

u = a + n ( √(x² + y²) − R )

defined everywhere outside the circle γ.
For the second family of roots, p0(2), q0(2), the Jacobian J− is nonzero only for τ < R/(2n). Again, solving the equations x = X, y = Y we find inside the circle γ

τ = (1/(2n)) ( R − √(x² + y²) ),   u = a − n ( √(x² + y²) − R ).
It is clear that this solution has a weak singularity at the point (0, 0). This result is obvious from the geometrical viewpoint. The characteristics (X+, Y+) go outside the circle γ and do not intersect anywhere. Conversely, the other pair of characteristics, (X−, Y−), go inside the circle and intersect each other at the point (0, 0). ♦
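The outer branch of this example is easy to check numerically. The following sketch (an illustration, not part of the book; the values a, n, R are chosen arbitrarily) reconstructs u from the characteristics and verifies the eikonal equation (1.64) by central differences.

```python
import math

# Illustrative parameters (assumed, not from the text): boundary value a,
# refraction index n, circle radius R.
a, n, R = 1.0, 2.0, 1.0

def u_outside(x, y):
    # Invert the characteristics X+ = (2n*tau + R) cos(xi), Y+ = (2n*tau + R) sin(xi):
    # sqrt(x^2 + y^2) = 2n*tau + R, so tau = (r - R)/(2n), and U = a + 2n^2*tau.
    r = math.hypot(x, y)
    tau = (r - R) / (2 * n)
    return a + 2 * n**2 * tau          # equals a + n*(r - R)

# Check (du/dx)^2 + (du/dy)^2 = n^2 outside the circle by central differences.
x, y, h = 1.3, 0.7, 1e-6
ux = (u_outside(x + h, y) - u_outside(x - h, y)) / (2 * h)
uy = (u_outside(x, y + h) - u_outside(x, y - h)) / (2 * h)
print(ux * ux + uy * uy)               # close to n^2 = 4
```

The same inversion with τ = (R − r)/(2n) reproduces the inner branch, up to its weak singularity at the origin.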
The obvious generalization of the case (1.55) to n independent variables is the following.
Consider the initial value problem

F(u, ∂u/∂x, x) = 0,   u|γ = u0,   (1.66)

where x ∈ Rn and γ ⊂ Rn is a surface of codimension 1, parameterized by ξ = (ξ1, . . . , ξn−1). Similar to (1.56), we write out the system for the vector function p0 = (p01, . . . , p0n):

⟨p0, ∂x0/∂ξi⟩ = ∂u0/∂ξi,   i = 1, . . . , n − 1,   (1.67)
F(u0, p0, x0) = 0,

where x0 = x0(ξ) ∈ γ.
We will assume that the correctness condition

J0 = det ( ∂x01/∂ξ1 · · · ∂x0n/∂ξ1 ; . . . ; ∂x01/∂ξn−1 · · · ∂x0n/∂ξn−1 ; ∂F/∂p1 · · · ∂F/∂pn ) |γ, p=p0 ≠ 0,   (1.68)

where the rows, separated by semicolons, are the n − 1 rows of derivatives ∂x0j/∂ξi followed by the row ∂F/∂pj, is satisfied in some domain ω ⊂ γ.
Then, we write out the characteristic system

dX/dτ = ∂F(U, P, X)/∂P,
dP/dτ = −∂F(U, P, X)/∂X − P ∂F(U, P, X)/∂U,
dU/dτ = ⟨P, ∂F(U, P, X)/∂P⟩,

and pose the initial conditions

X|τ=0 = X0(ξ),   P|τ=0 = p0(ξ),   U|τ=0 = U0(ξ),

choosing as p0 one of the roots of equations (1.67). Then we assume that
J = det ( ∂X1/∂ξ1 · · · ∂Xn/∂ξ1 ; . . . ; ∂X1/∂ξn−1 · · · ∂Xn/∂ξn−1 ; ∂F/∂p1 · · · ∂F/∂pn ) ≠ 0.
Solving equation x = X(τ, ξ) we obtain the solution to problem (1.66).
Theorem 1.2.5. Let F ∈ C2, γ ∈ C2, u0 ∈ C2 and assume that system (1.67) has M solutions p0(j) such that, for each of them, condition (1.68) is satisfied. Then, there exists a domain Ω ⊃ γ such that problem (1.66) has M solutions in Ω.
1.3 Systems of partial differential equations of the first order
1.3.1 Semilinear systems
By definition, the first-order PDE system

∂u/∂t + A ∂u/∂x = f,   x ∈ R1,   (1.69)

is called semilinear if A = A(x, t) is an N × N matrix with smooth entries, f = f(u, x, t) = (f1(u, x, t), . . . , fN(u, x, t)) is a sufficiently smooth vector function, and u = (u1, . . . , uN). System (1.69) is said to be hyperbolic if all the eigenvalues of A, that is, the solutions of the characteristic equation
det(A− λE) = 0, (1.70)
where E is the N ×N identity matrix, are real.
System (1.69) is said to be hyperbolic in the strong sense if all the eigenvalues λi = λi(x, t), i = 1, . . . , N, are different, uniformly in x and t.
We will consider the Cauchy problem for the hyperbolic in the strong sense system (1.69) for t > 0, posing, at t = 0, the initial value
u|t=0 = u0(x), (1.71)
where u0 is a sufficiently smooth (at least u0 ∈ C1) vector function. In order to apply the method of characteristics to the problem (1.69)-(1.71), we consider the left eigenvectors of the matrix A:
ℓ(k)A = λk ℓ(k),   k = 1, . . . , N.   (1.72)
Multiplying equation (1.69) by ℓ(k) we obtain the system in characteristic form

ℓ(k) ( ∂u/∂t + λk ∂u/∂x ) = ℓ(k)f,   k = 1, . . . , N,   (1.73)

where ℓ(k)f def= Σ_{i=1}^{N} ℓ(k)i fi.
Now, we introduce the Riemann invariants

Γk = ℓ(k)u def= Σ_{i=1}^{N} ℓ(k)i ui,   k = 1, . . . , N.   (1.74)

We stress that under our assumptions about A this system is solvable with respect to u and the solution is given by

uk = b(k)Γ def= Σ_{i=1}^{N} b(k)i Γi,   k = 1, 2, . . . , N,   (1.75)
where b(k) are smooth vector functions.
It is clear that

ℓ(k) ∂u/∂t = ∂Γk/∂t − Γkt,   ℓ(k) ∂u/∂x = ∂Γk/∂x − Γkx,   (1.76)

where

Γkt def= (∂ℓ(k)/∂t) u,   Γkx def= (∂ℓ(k)/∂x) u.
So, we can rewrite system (1.73) in terms of the Riemann invariants, as follows:

∂Γk/∂t + λk ∂Γk/∂x = gk(Γ, x, t),   k = 1, . . . , N,   (1.77)
where gk = ℓ(k)f + Γkt + λk Γkx, as follows from substituting (1.76) into (1.73), and, using (1.75), we write the right-hand side in terms of Γ.
If we introduce the characteristics x = Xk(t, x0) such that

dXk/dt = λk(Xk, t),   Xk|t=0 = x0,   k = 1, . . . , N,   (1.78)

then system (1.77) is transformed into the characteristic form

(dΓk/dt)k = gk(Γ, Xk, t),   k = 1, . . . , N,   (1.79)

where (dΓk/dt)k denotes the total derivative along the characteristic x = Xk,

(dΓk/dt)k def= ∂Γk/∂t + λk(Xk, t) ∂Γk/∂x.
In the special case A = constant and f = f(x, t), the right-hand side of (1.79) does not depend on Γ. This implies that we can split the original system (1.69) into N scalar equations. In the general case, gk depends both on Γk and on Γ1, . . . , Γk−1, Γk+1, . . . , ΓN. However, the left-hand side of the k-th equation in (1.79) includes only Γk, and we can split this system, solving the Cauchy problem numerically.
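As a minimal numerical sketch of this idea (with assumed model data, not taken from the book), one can integrate a single equation of the form (1.79) along its characteristic with Euler steps and compare with the exact answer.

```python
# Assumed model data: constant speed lam and right-hand side g(x, t) = x.
lam = 2.0
def g(x, t):
    return x

def solve_along_characteristic(x0, gamma0, T, nsteps):
    """Euler integration of dX/dt = lam, dGamma/dt = g(X, t) from t = 0 to T."""
    dt = T / nsteps
    X, G, t = x0, gamma0, 0.0
    for _ in range(nsteps):
        G += dt * g(X, t)      # advance the invariant along the characteristic
        X += dt * lam          # advance the characteristic itself
        t += dt
    return X, G

# Exact values here: X(T) = x0 + lam*T = 3, Gamma(T) = gamma0 + x0*T + lam*T^2/2 = 2.
X, G = solve_along_characteristic(x0=1.0, gamma0=0.0, T=1.0, nsteps=20000)
print(X, G)    # approximately 3.0 and 2.0
```

For a coupled system one traces all N characteristic families simultaneously and interpolates the other invariants onto each characteristic at every step.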
Example 1.3.1. Let us consider the wave equation
∂²u/∂t² − a² ∂²u/∂x² = 0,   a = constant.   (1.80)
Introducing the function v such that vt = −ux, we rewrite equation (1.80) as the system of first-order PDEs

∂/∂t (u, v)ᵀ + ( 0  a² ; 1  0 ) ∂/∂x (u, v)ᵀ = 0.   (1.81)
The eigenvalues of the matrix A = ( 0  a² ; 1  0 ) are

λ1 = a,   λ2 = −a,
and the corresponding left eigenvectors are
ℓ(1) = (1, a),   ℓ(2) = (1, −a).
Thus, the Riemann invariants have the form
Γ1 = u+ av, Γ2 = u− av.
So, we pass from system (1.81) to the following system in the Riemann invariants:

(dΓ1/dt)(1) = 0,   (dΓ2/dt)(2) = 0,   (1.82)
where the derivatives are evaluated along the characteristics
x = X1 = at+ x0, x = X2 = −at + x0. (1.83)
From (1.82) we obtain

Γi = Γ0i(x0) along x = Xi,   i = 1, 2,

where the Γ0i are arbitrary functions (initial data). Solving equations (1.83) with respect to x0 we obtain

Γ1 = Γ01(x − at),   Γ2 = Γ02(x + at).

Thus, we obtain the well-known d'Alembert formula for the solution of the wave equation (1.80):

u(x, t) = (1/2) ( Γ01(x − at) + Γ02(x + at) ).   (1.84)
In terms of the initial data for equation (1.80),

u|t=0 = u0(x),   ∂u/∂t|t=0 = v0(x),

we have

Γ01(x) = u0(x) − (1/a) ∫_{−∞}^{x} v0(s) ds,   Γ02(x) = u0(x) + (1/a) ∫_{−∞}^{x} v0(s) ds.

So, we can write the d'Alembert formula (1.84) in the following equivalent form:

u(x, t) = (1/2) ( u0(x − at) + u0(x + at) ) + (1/(2a)) ∫_{x−at}^{x+at} v0(s) ds. ♦
1.3.2 Quasilinear systems
System (1.69) is called quasilinear hyperbolic in the strong sense if A = A(u, x, t) and all roots of equation (1.70) are real and different uniformly in u, x and t. It is clear that we can write such systems in the characteristic form (1.73). At the same time, introducing the Riemann invariants in the form (1.74), we obtain that Γkt and Γkx include ∂u/∂t and ∂u/∂x again, since ℓ(k) depends on u. To avoid this obstacle we will assume that there exist multipliers µk = µk(u, x, t) such that the 1-forms ωk def= ℓ(k)du can be transformed as follows:

µk ωk def= µk Σ_{i=1}^{N} ℓ(k)i dui = Σ_{i=1}^{N} (∂Γk/∂ui) dui,   k = 1, . . . , N.   (1.85)
The functions Γk = Γk(u, x, t) are called the Riemann invariants for the quasilinear system.
If these multipliers µk exist then, multiplying equations (1.73) by µk, we rewrite these equations in the following form, similar to (1.77):

∂Γk/∂t + λk ∂Γk/∂x = gk(Γ, x, t),   k = 1, . . . , N,   (1.86)
in which the right-hand side is given by

gk(Γ, x, t) = µk ℓ(k)f + Γkt + λk Γkx,

where the derivatives Γkt and Γkx of the Riemann invariants are calculated treating u as a constant.
The obvious last step is the passage from (1.86) to a system of the form (1.79).
Let us note that all 2 × 2 systems can be rewritten in the form of Riemann invariants. For such systems, the Riemann invariants Γk can be derived very easily. Indeed, let the equations

ωk(x0, t0, u, du) = 0,   k = 1, 2,   (1.87)

have the integrals φk(x0, t0, u) = constant. Then, we can pose Γk = φk(x, t, u).
For the case N ≥ 3 the existence of the multipliers µk is an additional assumption.
Example 1.3.2. Consider the isothermal gas dynamics system

∂ρ/∂t + ∂(ρv)/∂x = 0,
∂v/∂t + v ∂v/∂x + (1/ρ) ∂p/∂x = 0,   (1.88)
in which p = cρ^γ, γ > 0. This system has the form (1.69) with the unknown vector function u = (ρ, v) and the matrix

A = ( v  ρ ; (1/ρ)p′ρ  v ),

and represents an example of a hyperbolic in the strong sense system. Here, p′ρ def= dp(ρ)/dρ.
The eigenvalues of A are

λ1 = v + c,   λ2 = v − c,   c = √(p′ρ(ρ))   (1.89)
(Note the importance of the assumption p′ρ > 0).
The corresponding left eigenvectors are

ℓ(1) = (c, ρ),   ℓ(2) = (−c, ρ).
Let us rewrite equations (1.87) in the form

dv/dρ = ± c/ρ.
These equations have the first integrals

Γ1 = v + ϕ(ρ),   Γ2 = v − ϕ(ρ),   ϕ(ρ) = ∫_{ρ0}^{ρ} ( c(ρ′)/ρ′ ) dρ′,   (1.90)

where ρ0 is an arbitrary constant.
Now, we take into account that the relations (1.85) for our example have the form

µ1 ( c dρ + ρ dv ) = (c/ρ) dρ + dv,
µ2 ( −c dρ + ρ dv ) = −(c/ρ) dρ + dv.
Thus, µ1 = µ2 = 1/ρ, and we can rewrite system (1.88) in terms of the Riemann invariants (1.90) in the following form (similar to (1.86)):
(dΓ1/dt)1 = 0,   (dΓ2/dt)2 = 0.   (1.91)
The derivatives here are taken along the characteristics x = Xi(t, x0), i = 1, 2,

dX1/dt = v + c,   dX2/dt = v − c,   Xi|t=0 = x0.   (1.92)
In order to rewrite the eigenvalues λ1,2 as functions of Γ1 and Γ2 we note that, according to (1.90),

v = (1/2)(Γ1 + Γ2),   ϕ(ρ) = (1/2)(Γ1 − Γ2).

So

ρ = ϕ⁻¹( (1/2)(Γ1 − Γ2) ),

and

c(ρ) = ψ( (1/2)(Γ1 − Γ2) ),

where ϕ⁻¹ denotes the inverse of ϕ and ψ is a smooth function that satisfies the relation

dψ/dρ = ρ c′ρ(ρ)/c(ρ),   (1.93)
where c′ρ(ρ) def= dc(ρ)/dρ. Obviously, equation (1.91) leads to the conclusion that the Riemann invariant Γ1 is the constant Γ01(x0) along the characteristic x = X1(t, x0), and the second invariant Γ2 is the constant Γ02(x0) along the characteristic x = X2(t, x0).
For a further reduction, let us assume that the initial data for (1.88) is such that

v|t=0 + ϕ(ρ)|t=0 = constant def= Γ01.   (1.94)

Then, the first invariant Γ1 is the constant Γ01. Thus,

v = (1/2)( Γ01 + Γ02(x0) ),   ρ = ϕ⁻¹( (1/2)( Γ01 − Γ02(x0) ) ),

and to find the function x0 = x0(x, t) we have to solve the equation (according to the second of the equations in (1.92))

x = g(x0) t + x0,   g(x0) = (1/2)( Γ01 + Γ02(x0) ) − ψ( (1/2)( Γ01 − Γ02(x0) ) )
(1.95)

Using formula (1.93) we obtain the solvability condition for equation (1.95):

J def= dX2/dx0 = 1 + t (1/2) ( 1 + ρ c′ρ/c ) |ρ=ρ0(x0) · dΓ02/dx0.
Then, due to assumption (1.94) we have

Γ02 = v0 − ϕ(ρ0) = 2v0 − Γ01.

Thus,

dΓ02/dx0 = 2 dv0(x0)/dx0.

Taking into account that the coefficient 1 + ρ c′ρ/c is positive, we obtain the main result: under assumption (1.94) the solution of the Cauchy problem for the gas dynamics system (1.88) will be globally smooth if dv0(x)/dx > 0 for all x ∈ R1. Conversely, the solution will lose its smoothness (at some instant of time) if dv0(x)/dx < 0 for some x.
1.4 Singular solutions of partial differential equations of the first order
1.4.1 Sketch of the distributions theory
The theory of distributions was created by S. Sobolev (in the 1930's) and by L. Schwartz (in the 1950's). For our purposes, the following facts are important.
Let Ω ⊂ Rn be a bounded domain. We denote the space of infinitely differentiable functions with compact support in Ω by D(Ω). More precisely:

D(Ω) = { ϕ ∈ C∞(Ω) : ∃ Ω′ ⊆ Ω, m(Ω′) < ∞, ϕ(x) = 0 ∀ x ∉ Ω′ },

where m(Ω′) denotes the Lebesgue measure of Ω′. The space D(Ω) is also denoted by C∞0(Ω).
Definition 1.4.1. The elements of D(Ω) are called test functions.

Definition 1.4.2. A sequence {ϕk} of test functions converges to 0 in the sense of D(Ω) if and only if
1.- There exists a bounded domain Ω′ such that supp ϕk ⊆ Ω′ ∀ k;

2.- The functions ϕk(x) and all their derivatives tend to zero uniformly in x ∈ Ω′ as k → ∞.
Definition 1.4.3. A distribution f is a linear continuous functional on D(Ω):
f : D(Ω) → R1.
The real number that a distribution f associates to a test function ϕ(x) ∈ D(Ω) is denoted by
〈f, ϕ〉
and it is called the action of f on ϕ.
The main properties of distributions are:
A. (Linearity) For any real numbers α1, α2 and any test functions ϕ1, ϕ2,
〈f, α1ϕ1 + α2ϕ2〉 = α1 〈f, ϕ1〉 + α2 〈f, ϕ2〉 .
B. (Continuity) Let {ϕk} be a sequence of test functions and assume that ϕk → 0 as k → ∞. Then the sequence of real numbers ⟨f, ϕk⟩ → 0 as k → ∞ for any distribution f.
The space of distributions is denoted by D′(Ω), or simply by D′.
Definition 1.4.4. A sequence {fn} of distributions is said to converge weakly to f ∈ D′ (or in the sense of distributions, or in the sense of D′) if and only if for any test function ϕ
〈fn, ϕ〉 → 〈f, ϕ〉 as n→ ∞.
Definition 1.4.5. A distribution f is equal to zero in the domain Ω if and only if
〈f, ϕ〉 = 0
for any ϕ ∈ D(Ω). In such a case we write f = 0 on Ω.
Definition 1.4.6. Let f ∈ D′(Ω). The support of f, denoted by supp f, is the complement of the set

{ x : f = 0 on a neighborhood of x }.
Example 1.4.1. Let F : Rn → R be a locally integrable function. Let us associate with F a distribution f ∈ D′ whose action on any test function ϕ ∈ D is defined to be

⟨f, ϕ⟩ = ∫_{Rn} F(x) ϕ(x) dx.   (1.96)

It is straightforward to check that this functional satisfies properties A and B. ♦
Definition 1.4.7. A distribution is said to be regular if it can be described as in (1.96). If there exists no locally integrable function F such that the action of the distribution f on any test function ϕ can be obtained by integrating the product Fϕ over the whole space, then f is said to be a singular distribution.
Example 1.4.2. Consider the sequence {fk} of functions

fk(x) = (1/2)( 1 + tanh(kx) ) ∈ C∞(R1).
For any test function ϕ ∈ D we have

∫_{−∞}^{∞} fk(x) ϕ(x) dx = ∫_{0}^{∞} ϕ(x) dx + ∫_{−∞}^{0} fk(x) ϕ(x) dx + ∫_{0}^{∞} ( fk(x) − 1 ) ϕ(x) dx.   (1.97)
Since

fk(x) = e^{2kx}/( 1 + e^{2kx} ) ≤ e^{2kx} for x ≤ 0,

we have

| ∫_{−∞}^{0} fk(x) ϕ(x) dx | ≤ max |ϕ(x)| ∫_{−∞}^{0} e^{2kx} dx = (1/(2k)) max |ϕ(x)|.
Similar estimates for the third integral in (1.97) imply that for any ϕ ∈ D

lim_{k→∞} ∫_{−∞}^{∞} fk(x) ϕ(x) dx = ∫_{0}^{∞} ϕ(x) dx = ∫_{−∞}^{∞} H(x) ϕ(x) dx,

where H(x) is the Heaviside function

H(x) = 1 for x > 0,   H(x) = 0 for x ≤ 0.

Thus, fk(x) → H(x) as k → ∞ in the sense of distributions. It is clear that H(x) is a regular distribution. The support of H(x) is the half line x ≥ 0. ♦
Example 1.4.3. Let us define the functions

x+(x) = x for x > 0,   x+(x) = 0 for x ≤ 0,

and

x−(x) = 0 for x > 0,   x−(x) = −x for x ≤ 0.

Since x+ and x− are locally integrable, we can associate with them the regular distributions x+ and x−:

⟨x+, ϕ⟩ = ∫_{0}^{∞} x ϕ(x) dx,   ⟨x−, ϕ⟩ = − ∫_{−∞}^{0} x ϕ(x) dx. ♦
Example 1.4.4. (The Dirac δ-function) Let ω = ω(x) ∈ C∞ be a nonnegative function decaying sufficiently fast as |x| → ∞ and such that ∫_{−∞}^{∞} ω(x) dx = 1. Consider the sequence

ωε(x) = (1/ε) ω(x/ε) as ε → 0.
If ϕ(x) is any test function, then

(1/ε) ∫_{−∞}^{∞} ω(x/ε) ϕ(x) dx = ∫_{−∞}^{∞} ω(y) ϕ(εy) dy = ϕ(0) ∫_{−∞}^{∞} ω(y) dy + ∫_{−∞}^{∞} ω(y) [ ϕ(εy) − ϕ(0) ] dy.

Since |ϕ(εy) − ϕ(0)| ≤ ε c |y|, c = constant, we conclude that

⟨ωε, ϕ⟩ → ϕ(0) as ε → 0.   (1.98)
Let us denote by δ(x) the weak limit of ωε as ε → 0. According to (1.98), δ(x) is the distribution such that

⟨δ(x), ϕ(x)⟩ = ϕ(0) ∀ ϕ ∈ D.   (1.99)

The function δ(x) is called the Dirac δ-function, and it is an example of a singular distribution, because there exists no locally integrable function F such that ϕ(0) = ∫_R F(x) ϕ(x) dx. The support of δ(x) is the point x = 0. ♦
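Numerically, property (1.98) looks as follows; this is a sketch with the assumed mollifier ω(x) = e^{−x²}/√π, which is nonnegative, decays rapidly and has unit integral, and with cos x playing the role of a test function.

```python
import math

def omega(x):
    return math.exp(-x * x) / math.sqrt(math.pi)   # unit-mass mollifier (assumed)

def action_of_omega_eps(eps, phi, a=-10.0, b=10.0, m=100000):
    """Midpoint-rule approximation of <omega_eps, phi>."""
    h = (b - a) / m
    s = 0.0
    for j in range(m):
        x = a + (j + 0.5) * h
        s += omega(x / eps) / eps * phi(x)
    return s * h

phi = math.cos
values = [action_of_omega_eps(eps, phi) for eps in (1.0, 0.1, 0.01)]
print(values)    # approaches phi(0) = 1 as eps -> 0
```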
Definition 1.4.8. (Multiplication of distributions) Let f ∈ D′ and a(x) ∈ C∞. Then af is the distribution such that for any ϕ ∈ D
〈af, ϕ〉 = 〈f, aϕ〉.
We stress that in general it is not possible to define the multiplication of two distributions in such a way that the product is a distribution.
Definition 1.4.9. (Differentiation of distributions) Let f ∈ D′(Ω), Ω ⊂ Rn. The weak derivative of order |α| of f,

∂^{|α|} f / ∂x^α,

is the distribution such that for any test function ϕ,

⟨ ∂^{|α|} f / ∂x^α, ϕ ⟩ = (−1)^{|α|} ⟨ f, ∂^{|α|} ϕ / ∂x^α ⟩.   (1.100)

In this definition, α is a multi-index α = (α1, α2, . . . , αn), where αi = 0, 1, 2, . . . , |α| = Σ_{i=1}^{n} αi, and ∂x^α = ∂x1^{α1} ∂x2^{α2} · · · ∂xn^{αn}.

The derivative ∂^{|α|} f / ∂x^α is also called the derivative in the sense of distributions, or the derivative in D′.
Note that if f ∈ C^{|α|}, Definition 1.4.9 is equivalent to the classical definition of derivative.
The operation of weak derivative has the following properties.
1.- Each distribution f is infinitely differentiable in the D′ sense.

2.- ∂/∂x1 ( ∂f/∂x2 ) = ∂/∂x2 ( ∂f/∂x1 ).
3.- (The Leibniz rule) Let f ∈ D′ and a ∈ C∞. Then

∂(af)/∂x = (∂a/∂x) f + a ∂f/∂x.   (1.101)
4.- Let f = 0 in the sense of distributions for x ∈ Ω. Then ∂^{|α|} f / ∂x^α = 0 on Ω for any |α| ≥ 0.

5.- Let fk → f as k → ∞ in the sense of distributions. Then, for any |α| ≥ 0,

∂^{|α|} fk / ∂x^α → ∂^{|α|} f / ∂x^α as k → ∞

in the sense of distributions.
Example 1.4.5.

dH(x)/dx = δ(x).   (1.102)

Indeed,

⟨ dH/dx, ϕ ⟩ = − ⟨ H, dϕ/dx ⟩ = − ∫_{0}^{∞} (dϕ(x)/dx) dx = ϕ(0) = ⟨δ, ϕ⟩. ♦
Example 1.4.6.

dx+/dx = H(x),   dx−/dx = −H(−x).   (1.103)

Indeed,

⟨ dx+/dx, ϕ ⟩ = − ⟨ x+, dϕ/dx ⟩ = − ∫_{0}^{∞} x (dϕ(x)/dx) dx = −xϕ |_{0}^{∞} + ∫_{0}^{∞} ϕ(x) dx = ⟨ H(x), ϕ(x) ⟩,

⟨ dx−/dx, ϕ ⟩ = − ⟨ x−, dϕ/dx ⟩ = ∫_{−∞}^{0} x (dϕ(x)/dx) dx = xϕ |_{−∞}^{0} − ∫_{−∞}^{0} ϕ(x) dx = − ⟨ H(−x), ϕ(x) ⟩. ♦
Example 1.4.7. Let f(x) be a piecewise smooth function:

f(x) = f1(x) ∈ C∞ for x < a,   f(x) = f2(x) ∈ C∞ for x > a.   (1.104)

Then,

df/dx = {df/dx} + [f]a δ(x − a),   (1.105)

where {df/dx} is the derivative of f in the classical sense,

{df/dx} = df1/dx for x < a,   {df/dx} = df2/dx for x > a,

[f]a is the jump of f at x = a,

[f]a = f2(a + 0) − f1(a − 0),

and δ(x − a) is the distribution whose action on a test function ϕ is given by

⟨ δ(x − a), ϕ(x) ⟩ = ϕ(a).   (1.106)
To prove formula (1.105) it is sufficient to note that the function (1.104) can be written in the form

f(x) = f1c(x) + ( f2c(x) − f1c(x) ) H(x − a),

where f1c(x) and f2c(x) are smooth continuations of the functions f1(x) and f2(x) to the whole line R1. Then, after using the Leibniz rule (1.101) and formulas (1.102) and (1.106), we readily obtain (1.105).
1.4.2 Applications to differential equations
Consider a linear differential equation of m-th order

L(x, D)u = f(x),   (1.107)

where

L(x, D) = Σ_{|α|=0}^{m} aα(x) D^α,   D^α = ∂^{|α|}/∂x^α = ∂^{|α|}/( ∂x1^{α1} · · · ∂xn^{αn} ).
From the classical viewpoint, the right-hand side must be at least a continuous function (f ∈ C) and the function u must have at least m continuous derivatives (u ∈ Cm). However, from the point of view of the theory of distributions, one can calculate any derivative of any distribution. So, equation (1.107) makes sense for f and u in D′.
Definition 1.4.10. A weak solution of equation (1.107) is a distribution u such that equation (1.107) is satisfied in the sense of distributions:

⟨ L(x, D)u, ϕ ⟩ = ⟨ f, ϕ ⟩ for any ϕ ∈ D.   (1.108)

Recall that, according to Definition 1.4.9,

⟨ L(x, D)u, ϕ ⟩ = ⟨ u, Σ_{|α|=0}^{m} (−1)^{|α|} D^α( aα(x) ϕ ) ⟩.
As for nonlinear systems, the situation is not so simple. We stress again that D′ is not an algebra. Thus, to consider nonlinear terms we have to know in advance that the product makes sense in D′.
Example 1.4.8. Let u = H(x). Then the product u² exists and is a distribution. Actually, u² = u = H(x). Consequently, the derivative of u² exists and is given by the formula

∂u²/∂x = δ(x).

However, notice that

δ(x) = ∂u²/∂x ≠ 2u ∂u/∂x,

since u = H(x), ∂u/∂x = δ(x), and H(x)δ(x) ∉ D′. ♦
In accordance with this example, it is clear that conservation laws constitute a very important type of first-order partial differential equations. In the one-dimensional case, these equations have the form

∂u/∂t + ∂φ(x, t, u)/∂x = f(x, t, u),   (1.109)

where φ and f are smooth. Hence, it is necessary only to check that φ(x, t, u) and f(x, t, u) exist in the D′ sense for some special choice of the distribution u.
The next natural question is: how to describe the set of u's for which φ(x, t, u) ∈ D′ for any smooth function φ? The main point here is a result of V. P. Maslov. He proved that, in the one-dimensional case, only three generators form algebras in D′. They are:
1.- The Heaviside function H(x);
2.- The functions x+ and x− with weak singularities;
3.- The ε-δ singularity.

The last generator is of great importance for physical applications, but it is very strange at first sight, since it is zero in the D′ sense. Actually, the ε-δ singularity is the pointwise limit of an infinitely narrow soliton.
Figure 1.8: The ε-δ singularity
For example, the soliton solution of the Korteweg-de Vries equation with small dispersion ε² has the form

uε = A cosh⁻²( β(x − vt)/ε ),

where A, β = β(A) and v = v(A) are constants. Taking the limit in the D′ sense as ε → 0 we obtain uε → 0 (see Example 1.4.4). However, in the pointwise sense the limit is not zero, but a function which is equal to zero everywhere except at the point x = vt and equal to A at this point (see Figure 1.8). At the same time, (1/ε) uε → (2A/β) δ(x − vt) as ε → 0 in D′. This remark explains the term "ε-δ singularity" and allows one to use this object in the asymptotic theory of distributions.
Now, if u belongs to one of these subalgebras, φ and f will belong to D′(R²). So, according to Definition 1.4.10, the weak solution of equation (1.109) can be defined in the following way: u is a weak solution on R²_{x,t} if for any test function ϕ = ϕ(x, t) ∈ D(R²) the equality

∫_{−∞}^{∞} ∫_{−∞}^{∞} { u ∂ϕ/∂t + φ(x, t, u) ∂ϕ/∂x + f(x, t, u) ϕ } dx dt = 0   (1.110)

holds.
Considering the Cauchy problem, this definition has to be changed a little.
Let

u|t=0 = u0(x),   (1.111)

and consider equation (1.109) on the half plane {t > 0, x ∈ R}. Then, we write H(t)u(x, t) instead of u and require the equality

∫_{0}^{∞} ∫_{−∞}^{∞} { u ∂ϕ/∂t + φ(x, t, u) ∂ϕ/∂x + f(x, t, u) ϕ } dx dt + ∫_{−∞}^{∞} u0(x) ϕ(x, 0) dx = 0   (1.112)
to hold for any ϕ ∈ D(R²). Since for u smooth with respect to t

∫_{−∞}^{∞} H(t) u (∂ϕ/∂t) dt = ∫_{0}^{∞} u (∂ϕ/∂t) dt = − u0 ϕ|_{t=0} − ∫_{0}^{∞} ϕ (∂u/∂t) dt,

the initial condition is included into definition (1.112). A similar correction has to be made in the case of boundary value problems.
1.4.3 Shock waves propagation
Let us return to Example 1.2.2 for the simplest quasilinear PDE of the first order. We rewrite the Hopf equation in divergence form

∂u/∂t + ∂u²/∂x = 0   (1.113)

and recall that the characteristics for the initial value problem (1.17) intersect at the point x = 1, t = 1/2 (see Figure 1.4). The phenomenon that occurs at the instant of time t = 1/2 is called a gradient catastrophe, since the derivative of u (the gradient, in the multidimensional case) increases infinitely as t → 1/2. As was shown above, after the breaking time there exist no classical solutions. However, treating equation (1.113) in the weak sense and taking into account Maslov's result, we see that the gradient catastrophe means simply the passage from a smooth solution to a nonsmooth Hopf-type solution. In the problem under
consideration, it has the form

u = 1 for x < v(t − t∗) + x∗,   u = 0 for x > v(t − t∗) + x∗,   t > t∗,   (1.114)

where x∗ = 1, t∗ = 1/2, and v is the velocity of motion of the point of discontinuity. We stress that the parameter v is unknown beforehand.
For physical reasons, piecewise smooth solutions of the form (1.114) are called shock waves.
To find the shock wave velocity v, physicists often use integral conservation laws. However, a more direct approach is through the theory of distributions.
Let us write the shock wave (1.114) in the form

u = H( v(t − t∗) − x ).   (1.115)

According to Examples 1.4.5 and 1.4.8 we have

∂u/∂t = v δ( v(t − t∗) − x ),   ∂(u²)/∂x = − δ( v(t − t∗) − x ).

Now, by treating the Hopf equation in the weak sense we obtain

∂u/∂t + ∂u²/∂x = (v − 1) δ( v(t − t∗) − x ) = 0.
Thus, for our example (1.114), v = 1.
Consider the general case

∂u/∂t + ∂φ(u, x, t)/∂x = 0,   u|t=0 = u01(x) for x < x∗,   u|t=0 = u00(x) for x > x∗,   (1.116)

where φ is a smooth function, convex in u (φ″ > 0), and the u0i(x) are smooth functions defined for all x ∈ R1. We assume that u01(x∗) > u00(x∗). Rewrite the initial data in the form

u|t=0 = u00(x) + ū0(x) H(x∗ − x),
where ū0(x) = u01(x) − u00(x). Let us look for solutions in the form of traveling shock waves:

u(x, t) = u0(x, t) + ū H( ψ(t) − x ),   (1.117)

where ū = u1(x, t) − u0(x, t), and u0(x, t), u1(x, t) and ψ(t) are unknown functions such that

u0|t=0 = u00 for x > x∗,   u1|t=0 = u01 for x ≤ x∗,   ψ|t=0 = x∗.
It is easy to verify the identity

φ(u, x, t) = φ(u0, x, t) + φ̄ H(ψ − x),   (1.118)

where φ̄ = φ(u1, x, t) − φ(u0, x, t).
Then

∂u/∂t = (dψ/dt) ū δ(ψ − x) + H(ψ − x) ∂ū/∂t + ∂u0/∂t,

∂φ/∂x = − φ̄ δ(ψ − x) + H(ψ − x) ∂φ̄/∂x + ∂φ(u0, x, t)/∂x.

Thus, after substituting these formulas into equation (1.116) and collecting terms we obtain the relation

( ū dψ/dt − φ̄ ) δ(ψ − x) + ( ∂ū/∂t + ∂φ̄/∂x ) H(ψ − x) + ∂u0/∂t + ∂φ(u0, x, t)/∂x = 0.   (1.119)
We furthermore note that

ū δ(ψ − x) = [u] δ(ψ − x),   [u] def= u1|x=ψ − u0|x=ψ,

φ̄ δ(ψ − x) = [φ] δ(ψ − x),   [φ] def= φ(u1, x, t)|x=ψ − φ(u0, x, t)|x=ψ.
Obviously, in order for the relation (1.119) to hold it is necessary that

dψ/dt = [φ]/[u],   (1.120)

∂u0/∂t + ∂φ(u0, x, t)/∂x = 0,   x > ψ(t),   t > 0,   (1.121)

∂u1/∂t + ∂φ(u1, x, t)/∂x = 0,   x < ψ(t),   t > 0.   (1.122)

Equation (1.120) is the Rankine-Hugoniot condition, relating the velocity dψ/dt of the shock wave motion to the jumps of φ and u across the point of discontinuity. Equations (1.121) and (1.122) are the original conservation law (1.116) written ahead of and behind the shock. Since the shock path x = ψ(t) is unknown, these equations constitute a system for the three unknown functions ψ, u0 and u1.
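For constant states the Rankine-Hugoniot condition (1.120) determines the shock speed directly; the sketch below shows that for the Hopf flux φ(u) = u² it reproduces the velocity v = 1 found for (1.114).

```python
def shock_speed(phi, u1, u0):
    """Rankine-Hugoniot speed d(psi)/dt = [phi]/[u] for constant states."""
    return (phi(u1) - phi(u0)) / (u1 - u0)

phi = lambda u: u * u                    # Hopf flux
print(shock_speed(phi, 1.0, 0.0))        # 1.0, as computed for (1.114)
print(shock_speed(phi, 2.0, 1.0))        # u1 + u0 = 3.0
```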
1.4.4 Propagation of weak singularities
The same idea can be used to describe continuous but nonsmooth solutions. We write (x − vt)+ = (x − vt)H(x − vt) and look for a solution with a weak singularity in the form of an expansion with respect to smoothness:

u = u0 + A1(x − vt)+ + A2(x − vt)²+ + more smooth terms,

where, for simplicity, u0, v and the Ai are constants. After substituting this expansion into (1.113) and collecting the most nonsmooth terms (the coefficients of the Heaviside function) we obtain

v = 2u0.

This means that weak singularities propagate along the characteristics. Actually, we used this well-known result in Subsection 1.4.3 for an example with nonsmooth initial data.
1.4.5 The entropy condition and rarefaction waves
Consider the Hopf equation

∂u/∂t + ∂u²/∂x = 0   (1.123)

with piecewise constant initial data of the form

u|t=0 = u01 for x < x0,   u|t=0 = u00 for x > x0.   (1.124)
Now we do not assume the inequality u01 > u00. On the contrary, since the case u01 ≥ u00 has just been discussed above, we assume that

u01 < u00.   (1.125)
Formulas (1.120) do not require any special sign for the jumps [φ] and [u]. So, formally speaking, the shock wave exists in the case under consideration:

u(x, t) = u01 for x < ψ(t),   u(x, t) = u00 for x > ψ(t),   ψ(t) = (u01 + u00) t + x0.   (1.126)
Furthermore, in the case (1.125) it is possible to construct a set of solutions of the form

u(x, t) = u01 for x < ψ1(t),   u(x, t) = A for ψ1(t) < x < ψ2(t),   u(x, t) = u00 for x > ψ2(t),

ψ1(t) = (u01 + A) t + x0,   ψ2(t) = (A + u00) t + x0,   (1.127)

with an arbitrary parameter A ∈ [u01, u00], the speeds of ψ1 and ψ2 being fixed by the Rankine-Hugoniot condition (1.120). Thus the solution is nonunique
in the case (1.125). However, the described shock waves are unstable. Indeed, let us regularize the initial data by replacing the right-hand side of (1.124) by a smooth function u0ε(x), where ε is the regularization parameter and the inequality ∂u0ε(x)/∂x ≥ 0 holds. The method of characteristics is now applicable and, since J > 0, the classical solution exists. Moreover, the limit solution as ε → 0 will be smooth for t > 0 outside the two points of weak discontinuity. This indicates that there must be a solution of problem (1.123), (1.124) with similar properties. Consider the wedge-shaped region bounded by the characteristics x = 2u01 t + x∗ and x = 2u00 t + x∗ and write the solution in the form

u = g( (x − x∗)/t ).
Obviously,

∂u/∂t = −(τ/t) g′τ(τ) |τ=(x−x∗)/t,   ∂u²/∂x = (2/t) g(τ) g′τ(τ) |τ=(x−x∗)/t.

Substituting these derivatives into equation (1.123) we obtain the equation

−(1/t) ( τ − 2g(τ) ) g′τ(τ) = 0.

Since the root g′τ = 0 implies g = constant, we see that we have to choose

g(τ) = τ/2.
Summarizing, we obtain the solution

u(x, t) = u01 for x < 2u01 t + x∗,
u(x, t) = (x − x∗)/(2t) for 2u01 t + x∗ < x < 2u00 t + x∗,
u(x, t) = u00 for x > 2u00 t + x∗.   (1.128)

Substituting x = 2u01 t + x∗ and x = 2u00 t + x∗, we observe that this solution is continuous. The wave inside the wedge-shaped region is called a rarefaction wave. We stress again that the solution is nonunique in the case (1.125). In order to avoid the nonuniqueness it is necessary to specify additional information. For conservation laws the entropy condition is used to select a solution which is physically realistic. For the Hopf equation this condition states the following:
there exists a positive constant E such that

( u(x + h, t) − u(x, t) ) / h ≤ E/t   (1.129)

for all t > 0, h > 0 and x.
If in (1.127) we choose x and x + h on opposite sides of the shock path x = ψ1(t) or x = ψ2(t), we obtain

( u(x + h, t) − u(x, t) ) / h = (A − u01)/h   or   ( u(x + h, t) − u(x, t) ) / h = (u00 − A)/h.

Since these expressions tend to infinity as h → 0, it is impossible to find a constant E such that inequality (1.129) is satisfied. The same result is obtained for the wave (1.126):

( u(x + h, t) − u(x, t) ) / h = (u00 − u01)/h → +∞ as h → 0.

Conversely, for the rarefaction wave (1.128) the left-hand side in (1.129) has the form

( u(x + h, t) − u(x, t) ) / h = 1/(2t).
Thus, (1.128) is the entropy solution, with the constant E = 1/2.
We stress that the entropy condition helps in selecting the stable solution. Furthermore, the shock wave solutions of the form (1.117) with negative jump amplitude are entropy solutions. Indeed, choosing x and x + h on opposite sides of the shock point x = ψ(t) and letting h → 0, we obtain

( u(x + h, t) − u(x, t) ) / h = −[u]/h → −∞ as h → 0.

Thus, the inequality (1.129) holds for any positive constant E.
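The entropy check for the rarefaction wave (1.128) can be sketched numerically; the states u01 = −1, u00 = 1 and x∗ = 0 below are illustrative choices, not data from the text.

```python
def rarefaction(x, t, u01=-1.0, u00=1.0):
    """Solution (1.128) with x* = 0 and assumed states u01 < u00."""
    if x < 2.0 * u01 * t:
        return u01
    if x > 2.0 * u00 * t:
        return u00
    return 0.5 * x / t

t = 2.0
# Inside the wedge the difference quotient equals 1/(2t), so (1.129) holds with E = 1/2:
q = (rarefaction(0.3, t) - rarefaction(0.1, t)) / 0.2
print(q, 1.0 / (2.0 * t))    # both approximately 0.25
```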
1.5 Numerical simulations of the Cauchy problem for first-order PDEs
1.5.1 Applications of the characteristics method
Let it be known beforehand that the solution is sufficiently smooth during the time interval of interest. Then the best way to perform numerical simulations is to use the characteristics method, since we pass to a system of ODEs.
Let us recall the most useful formulas for numerically solving systems of first-order ODEs

du/dt = f(u, t),   u|t=0 = u0,   (1.130)
where u = (u1, . . . , uN) and f = (f1, . . . , fN) are vectors.
Put the net {tn}, tn = nτ, n = 0, 1, . . . , on the line t, choosing a sufficiently small step τ. Let un def= u|t=tn be known. The question of interest is: how to define the solution un+1 at the next time tn+1?
A. The Runge-Kutta methods.
A.1. The method of the first order (the Euler scheme)
un+1 = un + τf (un, tn) . (1.131)
Scheme (1.131) has order of approximation O(τ).
A.2. The method of the second order
un+1 = un + (τ/2)( k1 + k2 ),

where k1 = f(un, tn), k2 = f(un + τ k1, tn+1), has order of approximation O(τ²).
A.3. The method of the fourth order

un+1 = un + (τ/6)( k1 + 2k2 + 2k3 + k4 ),

where

k1 = f(un, tn),
k2 = f(un + (τ/2) k1, tn + τ/2),
k3 = f(un + (τ/2) k2, tn + τ/2),
k4 = f(un + τ k3, tn+1),

has order of approximation O(τ⁴).
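A direct implementation of scheme A.3 is sketched below; the test problem u′ = u, u(0) = 1, with exact solution e^t, is our illustrative choice, and halving the step confirms the O(τ⁴) order.

```python
import math

def rk4(f, u0, T, n):
    """Classical fourth-order Runge-Kutta scheme A.3 with step tau = T/n."""
    tau, u, t = T / n, u0, 0.0
    for _ in range(n):
        k1 = f(u, t)
        k2 = f(u + 0.5 * tau * k1, t + 0.5 * tau)
        k3 = f(u + 0.5 * tau * k2, t + 0.5 * tau)
        k4 = f(u + tau * k3, t + tau)
        u += tau / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += tau
    return u

f = lambda u, t: u                      # u' = u, exact solution e^t
e1 = abs(rk4(f, 1.0, 1.0, 20) - math.e)
e2 = abs(rk4(f, 1.0, 1.0, 40) - math.e)
print(e1 / e2)                          # close to 2^4 = 16
```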
The Euler scheme (1.131) is the simplest of the numerical algorithms. However, this scheme is seldom used, since errors, accumulating step by step, can increase to O(1) after O(1/τ) steps. There exist also methods with higher orders of approximation. These schemes are also seldom used, because the formulas involved become more complicated.
B. The Adams schemes.

The Runge-Kutta methods have one drawback: they require the nonlinear quantities k_1, k_2, \ldots to be recalculated anew at every step. This drawback is overcome in the Adams schemes, where only one nonlinear function, f_n \stackrel{\mathrm{def}}{=} f(u^n, t_n), has to be calculated at each new step.
B.1. The method of the first order is the Euler scheme A.1:

$$u^{n+1} = u^n + \tau f_n.$$
B.2. The method of the second order

$$u^{n+1} = u^n + \tau f_n + \frac{\tau}{2}(f_n - f_{n-1}).$$
B.3. The method of the third order

$$u^{n+1} = u^n + \tau f_n + \frac{\tau}{2}(f_n - f_{n-1}) + \frac{5}{12}\tau (f_n - 2f_{n-1} + f_{n-2}).$$
1.5.2 Direct numerical methods
As illustrated in the previous sections, solutions of nonlinear PDEs, as a rule, remain smooth only up to some critical instant of time. Correspondingly, the use of the method of characteristics after this time becomes more complicated. Indeed, we have to add the Hugoniot conditions and consider the differential equations on both sides of the shock path. This is not easy even in the scalar case.
It is more convenient to simulate nonsmooth solutions numerically for the original partial differential equations directly. However, the problem has to be regularized. First of all, nonsmooth initial data have to be replaced by smooth ones. For example, for some small ε > 0 we write

$$u\big|_{t=0} = \frac{1}{2}\Big(1 - \tanh\Big(\frac{x - x_0}{\varepsilon}\Big)\Big) \qquad (1.132)$$

instead of the shock wave

$$u\big|_{t=0} = \frac{1}{2}\big(1 - H(x - x_0)\big). \qquad (1.133)$$
Furthermore, the equations have to be regularized too. This can be done explicitly or implicitly.

The explicit way consists in adding small viscous terms. For example, adding the second derivative with a small coefficient ε² we pass from the Hopf equation

$$\frac{\partial u}{\partial t} + \frac{\partial u^2}{\partial x} = 0 \qquad (1.134)$$

to the Burgers equation

$$\frac{\partial u}{\partial t} + \frac{\partial u^2}{\partial x} = \varepsilon^2 \frac{\partial^2 u}{\partial x^2}, \qquad \varepsilon \ll 1. \qquad (1.135)$$
The solutions of the Hopf equation (1.134) with both Cauchy data (1.132) and (1.133) are nonsmooth (for t ≥ 0 in the case (1.133) and for t > t* = O(ε) in the case (1.132)). On the contrary, solutions of the parabolic equation (1.135) are smooth for any fixed parameter ε > 0. We will consider parabolic equations and appropriate finite-difference schemes in the next chapters.
The implicit way of regularization means that the finite-difference scheme is written in such a manner that the dissipation is hidden inside the formulas. As an example we consider the Lax scheme.
Let us consider the simplest example of a conservation law,

$$\frac{\partial u}{\partial t} + \frac{\partial}{\partial x}\varphi(u) = 0. \qquad (1.136)$$

Consider a grid (x_i, t_j) on the half-plane with nodes x_i = ih, t_j = jτ and steps h > 0, τ > 0. Write y_i^j instead of u(x_i, t_j). Using the Cauchy data we easily define

$$y_i^0 = u\big|_{t=0}(x_i).$$
Assume that y_i^{j'} are known for j' = 0, 1, \ldots, j. To find y_i^{j+1} we set

$$y_i^{j+1} = \bar{y}_i^{\,j} - \frac{\tau}{2h}\big(\varphi(y_{i+1}^j) - \varphi(y_{i-1}^j)\big), \qquad (1.137)$$

where, according to the Lax method,

$$\bar{y}_i^{\,j} = \frac{1}{2}\big(y_{i+1}^j + y_{i-1}^j\big). \qquad (1.138)$$
Consider the scheme (1.137), (1.138), rewriting it in the form

$$\frac{y_i^{j+1} - y_i^j}{\tau} + \frac{1}{2h}\big(\varphi(y_{i+1}^j) - \varphi(y_{i-1}^j)\big) = \frac{\bar{y}_i^{\,j} - y_i^j}{\tau}. \qquad (1.139)$$

As τ → 0 and h → 0 we obtain

$$\frac{y_i^{j+1} - y_i^j}{\tau} = \frac{\partial u}{\partial t}\Big|_{(x_i, t_j)} + O(\tau), \qquad \frac{1}{2h}\big(\varphi(y_{i+1}^j) - \varphi(y_{i-1}^j)\big) = \frac{\partial \varphi(u)}{\partial x}\Big|_{(x_i, t_j)} + O(h^2).$$
Furthermore, after the Taylor expansion we obtain

$$\frac{\bar{y}_i^{\,j} - y_i^j}{\tau} = \frac{h^2}{2\tau}\frac{\partial^2 u}{\partial x^2}\Big|_{(x_i, t_j)} + O\Big(\frac{h^4}{\tau}\Big).$$
Thus, with precision O(τ + h² + h⁴/τ), the scheme (1.137), (1.138) approximates the equation with small viscosity

$$\frac{\partial u}{\partial t} + \frac{\partial}{\partial x}\varphi(u) = \varepsilon \frac{\partial^2 u}{\partial x^2}, \qquad \varepsilon = \frac{h^2}{2\tau} \ll 1.$$
It is necessary to note that schemes of the form (1.137) are only conditionally stable. This means that calculations over a number of time steps of order O(1/τ) remain stable only under the Courant condition:

$$\frac{\tau}{h}\max_u|\varphi'(u)| \leqslant 1.$$
In the next chapters we will consider finite-difference schemes in moredetail.
1.6 Bibliographical comments
Some books were influential in the writing of these notes and are recommended for readers wishing to supplement or expand the material presented here. For a first reading we refer to the textbooks [1]-[9].
Let us consider the bibliography in more detail. As is clear from the terminology, the most important contributions to the creation of the theory of first-order PDEs were made by Hamilton, Jacobi and Riemann. A modern description of this theory is contained in the well-known book by R. Courant [10]. There are also many excellent books on this theory. We can recommend the textbooks by R. Knobel [1], E. DiBenedetto [2] and G. B. Whitham [3] for a first reading, as well as the books by P. Lax [11], B. Vainberg [12], B. Rozhdestvenskii and N. Yanenko [13] and V. Maslov and M. Fedoryuk [14] for further reading. Applications of this theory are considered in the books
by R. Haberman [15] (traffic flows) and by B. Rozhdestvenskii and N. Yanenko [13] (gas dynamics). The uniqueness problems, as well as the entropy conditions, have been discussed by many authors. We refer to the works by P. Lax [11] and O. Oleinik [16], [17], as well as the books by B. L. Rozhdestvenskii and N. I. Yanenko [13], J. Smoller [18] and G. B. Whitham [3].
Perhaps the best textbooks on the theory of distributions are those written by L. Schwartz [4] and I. Gelfand and G. Shilov [5]. At the same time, this theory is excellently described in many textbooks on PDEs. Among others, we refer to the textbooks by V. Vladimirov [42] and A. Friedman [19].
A further development of the theory of distributions, which allows one to construct algebraic structures, results in the theory of generalized functions; see the books by J. Colombeau [21], H. Biagioni [22] and M. Oberguggenberger [23].
The idea of expanding solutions with respect to smoothness goes back to R. Courant [10]. This idea was further developed for nonlinear PDEs by V. Maslov [24]. For the algebraic background of this approach we refer to the papers [25], [26].
The theory of ordinary differential equations and numerical methods for ODEs are the subject of many books. Among others, we refer to the books by W. Boyce and R. DiPrima [6] and R. Miller and A. Michel [7]. Numerical methods for first-order PDEs are considered in the books by R. Richtmyer and K. Morton [8], P. Roache [9], R. LeVeque [27], and B. L. Rozhdestvenskii and N. I. Yanenko [13].
Chapter 2
Boundary Value Problems for Elliptic Equations
2.1 Classification of linear differential equations of the second order
Consider the linear differential equation in two independent variables

$$a\frac{\partial^2 u}{\partial x^2} + 2b\frac{\partial^2 u}{\partial x \partial y} + c\frac{\partial^2 u}{\partial y^2} + g\frac{\partial u}{\partial x} + e\frac{\partial u}{\partial y} + lu + f = 0, \qquad (2.1)$$

where a, b, c, \ldots, f are twice differentiable functions of x and y and, at each point (x, y) ∈ ℝ², |a(x, y)| + |b(x, y)| + |c(x, y)| ≠ 0. Let us introduce a change of coordinates with the aim of simplifying equation (2.1). Assume that ξ(x, y) and η(x, y) are C²(ℝ²) functions such that

$$J = \det\begin{pmatrix} \xi_x & \eta_x \\ \xi_y & \eta_y \end{pmatrix} \neq 0. \qquad (2.2)$$
Let us write u(ξ, η) = u(x(ξ, η), y(ξ, η)). Then we have

$$\frac{\partial u}{\partial x} = \Big(\xi_x\frac{\partial u}{\partial \xi} + \eta_x\frac{\partial u}{\partial \eta}\Big)\Big|_{x=x(\xi,\eta),\; y=y(\xi,\eta)},$$

$$\frac{\partial^2 u}{\partial x^2} = \Big((\xi_x)^2\frac{\partial^2 u}{\partial \xi^2} + 2\xi_x\eta_x\frac{\partial^2 u}{\partial \xi \partial \eta} + (\eta_x)^2\frac{\partial^2 u}{\partial \eta^2} + \xi_{xx}\frac{\partial u}{\partial \xi} + \eta_{xx}\frac{\partial u}{\partial \eta}\Big)\Big|_{x=x(\xi,\eta),\; y=y(\xi,\eta)}.$$
Similarly,

$$\frac{\partial u}{\partial y} = \Big(\xi_y\frac{\partial u}{\partial \xi} + \eta_y\frac{\partial u}{\partial \eta}\Big)\Big|_{x=x(\xi,\eta),\; y=y(\xi,\eta)},$$

$$\frac{\partial^2 u}{\partial y^2} = \Big((\xi_y)^2\frac{\partial^2 u}{\partial \xi^2} + 2\xi_y\eta_y\frac{\partial^2 u}{\partial \xi \partial \eta} + (\eta_y)^2\frac{\partial^2 u}{\partial \eta^2} + \xi_{yy}\frac{\partial u}{\partial \xi} + \eta_{yy}\frac{\partial u}{\partial \eta}\Big)\Big|_{x=x(\xi,\eta),\; y=y(\xi,\eta)},$$
and
$$\frac{\partial^2 u}{\partial x \partial y} = \Big(\xi_x\xi_y\frac{\partial^2 u}{\partial \xi^2} + (\xi_x\eta_y + \xi_y\eta_x)\frac{\partial^2 u}{\partial \xi \partial \eta} + \eta_x\eta_y\frac{\partial^2 u}{\partial \eta^2} + \xi_{xy}\frac{\partial u}{\partial \xi} + \eta_{xy}\frac{\partial u}{\partial \eta}\Big)\Big|_{x=x(\xi,\eta),\; y=y(\xi,\eta)}.$$
After substituting these formulas into (2.1) and collecting terms we obtain the equation

$$\bar a\frac{\partial^2 u}{\partial \xi^2} + 2\bar b\frac{\partial^2 u}{\partial \xi \partial \eta} + \bar c\frac{\partial^2 u}{\partial \eta^2} + \bar g\frac{\partial u}{\partial \xi} + \bar e\frac{\partial u}{\partial \eta} + \bar l u + \bar f = 0, \qquad (2.3)$$
where

$$\bar a = \big(a\,\xi_x^2 + 2b\,\xi_x\xi_y + c\,\xi_y^2\big)\Big|_{x=x(\xi,\eta),\; y=y(\xi,\eta)},$$

$$\bar b = \big(a\,\xi_x\eta_x + b(\xi_x\eta_y + \xi_y\eta_x) + c\,\xi_y\eta_y\big)\Big|_{x=x(\xi,\eta),\; y=y(\xi,\eta)},$$

$$\bar c = \big(a\,\eta_x^2 + 2b\,\eta_x\eta_y + c\,\eta_y^2\big)\Big|_{x=x(\xi,\eta),\; y=y(\xi,\eta)},$$

$$\bar l = l\big|_{x=x(\xi,\eta),\; y=y(\xi,\eta)}, \qquad \bar f = f\big|_{x=x(\xi,\eta),\; y=y(\xi,\eta)},$$

$$\bar g = \big(g\,\xi_x + e\,\xi_y + a\,\xi_{xx} + 2b\,\xi_{xy} + c\,\xi_{yy}\big)\Big|_{x=x(\xi,\eta),\; y=y(\xi,\eta)}, \qquad (2.4)$$

and the formula for \bar e is similar to the formula for \bar g.
Equation (2.3) looks similar to equation (2.1). However, the coefficients \bar a, \bar b and \bar c depend on the free functions ξ and η. Let us choose these functions in such a manner that the coefficients \bar a and \bar c vanish. Thus, ξ and η must satisfy the equations

$$a(\xi_x)^2 + 2b\,\xi_x\xi_y + c(\xi_y)^2 = 0, \qquad a(\eta_x)^2 + 2b\,\eta_x\eta_y + c(\eta_y)^2 = 0. \qquad (2.5)$$
Let (x_0, y_0) be a point such that a(x_0, y_0) ≠ 0 and consider a neighborhood of this point. Then, solving the equations (2.5) for ξ_x and η_x, we obtain

$$\xi_x + \lambda_1(x, y)\xi_y = 0, \qquad \eta_x + \lambda_2(x, y)\eta_y = 0, \qquad (2.6)$$

where

$$\lambda_1 = \frac{b - \sqrt{d}}{a}, \qquad \lambda_2 = \frac{b + \sqrt{d}}{a}, \qquad \lambda_2 - \lambda_1 = \frac{2\sqrt{d}}{a}, \qquad d = b^2 - ac.$$
There are three possible situations:
I) d > 0 (the so-called hyperbolic case);

II) d = 0 (the so-called parabolic case);

III) d < 0 (the so-called elliptic case).
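The pointwise classification by the sign of the discriminant d = b² − ac can be expressed as a small helper; the function name is our own:

```python
def classify(a, b, c):
    # Type of equation (2.1) at a point, read off from d = b^2 - a*c.
    d = b * b - a * c
    if d > 0:
        return "hyperbolic"
    if d == 0:
        return "parabolic"
    return "elliptic"

# For the Tricomi equation (2.15), y*u_xx + u_yy = 0, we have
# a = y, b = 0, c = 1, so the type changes with the sign of y.
tricomi_type = lambda y: classify(y, 0.0, 1.0)
```

Applied to the Tricomi equation this reproduces the mixed-type behavior discussed later in this section.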
Let us consider the hyperbolic case. Since d|_{x=x_0, y=y_0} > 0, solutions of equations (2.6) exist and assumption (2.2) is satisfied in a neighborhood of the point (x_0, y_0). Thus, in terms of the new variables the original equation (2.1) transforms into the so-called canonical form of hyperbolic equations in two variables:
$$\frac{\partial^2 u}{\partial \xi \partial \eta} + \phi\Big(\xi, \eta, u, \frac{\partial u}{\partial \xi}, \frac{\partial u}{\partial \eta}\Big) = 0, \qquad (2.7)$$

where

$$\phi = \frac{1}{2\bar b}\Big(\bar g\frac{\partial u}{\partial \xi} + \bar e\frac{\partial u}{\partial \eta} + \bar l u + \bar f\Big).$$
By setting ρ = ξ + η, σ = ξ − η, u_1(ρ, σ) = u(½(ρ + σ), ½(ρ − σ)) we obtain another canonical form for such equations:

$$\frac{\partial^2 u_1}{\partial \rho^2} - \frac{\partial^2 u_1}{\partial \sigma^2} + \phi_1 = 0. \qquad (2.8)$$
As an example, we mention the wave equation

$$\frac{\partial^2 u_1}{\partial \rho^2} - \frac{\partial^2 u_1}{\partial \sigma^2} = 0, \qquad (2.9)$$

which is a model for vibrations of a stretched string.
Let us consider now the parabolic case, d|_{x=x_0, y=y_0} = 0. In this case we have λ_1 = λ_2, so we can define only one function from (2.6), for instance,

$$\eta_x + \frac{b}{a}\eta_y = 0.$$
Due to assumption (2.2), we choose ξ = x. Then J = η_y ≠ 0 and \bar a = a, \bar b = 0, \bar c = 0.
As a result, we obtain the canonical form of parabolic equations in two variables:

$$\frac{\partial^2 u}{\partial \xi^2} + \phi = 0. \qquad (2.10)$$
The most important example of such equations is the so-called heat (diffusion) equation:

$$\frac{\partial^2 u}{\partial \xi^2} - \frac{\partial u}{\partial \eta} + f = 0. \qquad (2.11)$$
Let us consider, finally, the elliptic case (d|_{x=x_0, y=y_0} < 0). Now the roots λ_1 and λ_2 are complex conjugate, λ_1 = \bar{λ}_2. Since λ_1 is an analytic function, the complex-valued analytic solution ω(x, y) of the equation

$$\frac{\partial \omega}{\partial x} + \lambda_1 \frac{\partial \omega}{\partial y} = 0 \qquad (2.12)$$

exists at least in a sufficiently small neighborhood of the point (x_0, y_0). Moreover, ∂ω/∂y ≠ 0 in this neighborhood. We define

$$\xi = \frac{1}{2}\big(\omega(x, y) + \bar\omega(x, y)\big), \qquad \eta = \frac{1}{2i}\big(\omega(x, y) - \bar\omega(x, y)\big). \qquad (2.13)$$
Condition (2.2) is satisfied, since

$$J = \frac{D(\xi, \eta)}{D(x, y)} = -\frac{1}{2i}\,\frac{2\sqrt{d}}{a}\,\frac{\partial \omega}{\partial y}\,\frac{\partial \bar\omega}{\partial y} = -\frac{\sqrt{-d}}{a}\left|\frac{\partial \omega}{\partial y}\right|^2 \neq 0.$$
Let us find the canonical form for elliptic equations. Notice that ω satisfies the equation

$$a\Big(\frac{\partial \omega}{\partial x}\Big)^2 + 2b\,\frac{\partial \omega}{\partial x}\frac{\partial \omega}{\partial y} + c\Big(\frac{\partial \omega}{\partial y}\Big)^2 = 0.$$

In terms of the real and imaginary parts ξ and η of ω, we obtain

$$a(\xi_x)^2 + 2b\,\xi_x\xi_y + c(\xi_y)^2 = a(\eta_x)^2 + 2b\,\eta_x\eta_y + c(\eta_y)^2,$$

$$a\,\xi_x\eta_x + b(\xi_x\eta_y + \eta_x\xi_y) + c\,\xi_y\eta_y = 0.$$
A comparison of these formulas with (2.4) shows that \bar a = \bar c and \bar b = 0. Thus, equation (2.3) takes the canonical form for the elliptic case:

$$\frac{\partial^2 u}{\partial \xi^2} + \frac{\partial^2 u}{\partial \eta^2} + \phi = 0. \qquad (2.14)$$
We mention Laplace's equation

$$\frac{\partial^2 u}{\partial \xi^2} + \frac{\partial^2 u}{\partial \eta^2} = 0$$

and Poisson's equation

$$\frac{\partial^2 u}{\partial \xi^2} + \frac{\partial^2 u}{\partial \eta^2} = f(\xi, \eta)$$

as the most important examples of elliptic equations.
The obvious generalizations of equations (2.9), (2.11) and (2.14) to the multidimensional case are

$$\frac{\partial^2 u}{\partial \eta^2} - \Delta u + f = 0, \qquad \Delta u - \frac{\partial u}{\partial \eta} + f = 0, \qquad \Delta u = f,$$

respectively, where Δ is Laplace's operator

$$\Delta = \sum_{i=1}^{n} \frac{\partial^2}{\partial \xi_i^2}.$$
Remark 2.1.1. We stress that the type of an equation with variable coefficients depends on the point (x_0, y_0). In general, a given equation can change its type from point to point.
Consider, as an example, the Tricomi equation

$$y\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0. \qquad (2.15)$$
According to the classification described above, this equation is hyperbolic for y < 0, elliptic for y > 0 and parabolic for y = 0.
For y < 0, the characteristics y = y(x) for equations (2.6) are the solutions of the equations

$$\frac{dy}{dx} = \pm\frac{1}{\sqrt{-y}}.$$

Consequently, the curves

$$\frac{3}{2}x + \sqrt{-y^3} = c_1, \qquad \frac{3}{2}x - \sqrt{-y^3} = c_2,$$

where c_1 and c_2 are arbitrary constants, are the characteristics of Eq. (2.15).
where c1 and c2 are arbitrary constants, are the characteristics for Eq.(2.15). With the change of variables
ξ =32x+
√−y3, η =
32x−
√−y3
we transform the Tricomi equation into the canonical hyperbolic form
∂ 2u
∂ ξ∂ η− 1
6(ξ − η)
(∂ u
∂ ξ− ∂ u
∂ η
)= 0, ξ > η.
In the case y > 0 we have

$$\omega = \frac{3}{2}x - i\sqrt{y^3}.$$

Therefore, the substitution

$$\xi = \frac{3}{2}x, \qquad \eta = -\sqrt{y^3}$$

results in the canonical form for the elliptic case

$$\frac{\partial^2 u}{\partial \xi^2} + \frac{\partial^2 u}{\partial \eta^2} + \frac{1}{3\eta}\frac{\partial u}{\partial \eta} = 0, \qquad \eta < 0.$$
2.2 Differential equations of the elliptic type

Consider the Poisson equation

$$\Delta u = -f, \qquad x \in \Omega \subset \mathbb{R}^n, \qquad (2.16)$$

where Ω is a bounded domain with piecewise smooth boundary and f = f(x) ∈ C¹(Ω) ∩ C(\bar\Omega).
The physical meaning of the Poisson equation, as of any elliptic equation, is the description of stationary states. The most important applications are to problems of heat diffusion in the long-time limit and of the distribution of a substance over the domain Ω. In the first case u = u(x) denotes the temperature at the point x and f(x) is an external source of heat. In the second case, u = u(x) represents the substance's concentration at the point x and f(x) is a source of the substance.
2.2.1 Boundary value conditions
To define a unique solution of the Poisson equation (2.16) it is necessary to prescribe a regime on the boundary ∂Ω. These additional conditions are called boundary conditions. There are three types of them.
1. If the temperature (or the concentration) u is known on ∂Ω, we have a Dirichlet condition (or condition of the first type):

$$u\big|_{\partial\Omega} = u_0(x'), \qquad x' \in \partial\Omega. \qquad (2.17)$$
2. If the temperature (or concentration) flux across the boundary ∂Ω is known, then we have a Neumann condition (or condition of the second type):

$$\frac{\partial u}{\partial \nu}\Big|_{\partial\Omega} = u_1(x'), \qquad x' \in \partial\Omega, \qquad (2.18)$$

where ν is the outward normal to ∂Ω.
3. If there is a heat exchange (or substance exchange) through the boundary ∂Ω, then we have a mixed condition (or condition of the third type):

$$\Big(k\frac{\partial u}{\partial \nu} + h(u - u_2)\Big)\Big|_{\partial\Omega} = 0, \qquad (2.19)$$

where k is the conductivity coefficient, h is the exchange coefficient and u_2(x) is the temperature (or concentration) outside Ω. It is assumed that the known functions k, h and u_2 are continuous on ∂Ω and k ≥ 0, h ≥ 0 and k + h > 0.
The combination of equation (2.16) with one of the above boundary conditions is called a boundary value problem for the Poisson equation. We stress that only these boundary conditions are correct for elliptic equations.

Here we mean correctness (well-posedness) in the Hadamard sense. The Hadamard conditions for a well-posed problem are:
1. the solution must exist;
2. the solution must be uniquely determined, and
3. the solution must depend continuously on the initial data and/or boundary data.
If at least one of these conditions is violated, the problem is said to be ill-posed.
Example 2.2.1. An example of an ill-posed problem is given by the Hadamard example. Consider the problem

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0, \qquad x > 0, \quad y \in \mathbb{R}^1,$$

$$u\big|_{x=0} = 0, \qquad \frac{\partial u}{\partial x}\Big|_{x=0} = \frac{1}{k}\sin(ky), \qquad (2.20)$$

where k is an arbitrary number. The solution to problem (2.20) is

$$u(x, y) = \frac{1}{k^2}\sinh(kx)\sin(ky). \qquad (2.21)$$
The initial derivative u_x|_{x=0} tends to zero as k → ∞. Thus, for the limiting (as k → ∞) problem we obtain the Laplace equation with homogeneous initial conditions, which has the trivial solution u ≡ 0. At the same time, the exact solution (2.21) does not tend to zero as k → ∞ anywhere outside the points y = jπ, j = 0, ±1, \ldots. This implies the violation of the third Hadamard condition for problem (2.20). As a result, we see that small (for k ≫ 1) variations of the initial data result in large changes in the solution. ♦
Theorem 2.2.1. Let f ∈ C(\bar\Omega), u_0 ∈ C¹(∂Ω) (or u_1 ∈ C(∂Ω), or u_2 ∈ C(∂Ω)), where ∂Ω is a piecewise smooth surface. Then the solution u ∈ C²(Ω) ∩ C¹(\bar\Omega) of the Dirichlet boundary value problem (2.16), (2.17) for the Poisson equation (or of the third-type problem (2.16), (2.19)) exists and is unique. The solution of the Neumann boundary value problem (2.16), (2.18) exists under the compatibility assumption

$$\int_\Omega f(x)\,dx + \int_{\partial\Omega} u_1(x)\,ds = 0,$$

which follows from integrating (2.16) over Ω and applying the divergence theorem. This solution is unique up to an additive constant.
In closing this subsection we mention that Laplace's and Poisson's equations are closely related, as stated in the next theorem.
Theorem 2.2.2. Let f ∈ C²(Ω) ∩ C¹(\bar\Omega). Then the function

$$V(x) = -\int_\Omega E_n(x - y)f(y)\,dy, \qquad x \in \mathbb{R}^n, \qquad (2.22)$$

belongs to C²(Ω) and satisfies the Poisson equation

$$\Delta V = -f(x). \qquad (2.23)$$
Here, E_n(x) is a fundamental solution of the Laplace equation, satisfying ΔE = 0 for x ≠ 0 (see Subsection 2.2.2):

$$E_2 = \frac{1}{2\pi}\ln|x|, \quad x \in \mathbb{R}^2, \qquad E_3 = -\frac{1}{4\pi|x|}, \quad x \in \mathbb{R}^3.$$
So, any boundary value problem for the Poisson equation can be transformed into a boundary value problem for the Laplace equation.
Indeed, consider the problem

$$\Delta u = -f(x), \quad x \in \Omega, \qquad u\big|_{\partial\Omega} = g(x'), \quad x' \in \partial\Omega. \qquad (2.24)$$
Set u = V(x) + \tilde u(x), where V has the form (2.22). In view of formula (2.23) we have

$$\Delta u = \Delta V + \Delta \tilde u = -f + \Delta \tilde u.$$

Thus, to solve problem (2.24) it is enough to find the solution of the Dirichlet problem for the Laplace equation

$$\Delta \tilde u = 0, \quad x \in \Omega, \qquad \tilde u\big|_{\partial\Omega} = g(x') - V(x'), \quad x' \in \partial\Omega.$$
2.2.2 Main properties of harmonic functions
Definition 2.2.1. A function u(x) ∈ C²(Ω) is called harmonic if u is a solution of Laplace's equation

$$\Delta u = 0 \qquad (2.25)$$

in Ω.
Example 2.2.2. In ℝ² the polynomials u = x² − y², u = ½xy² − ⅙x³ and u = ⅙(xy³ − x³y) are harmonic functions. ♦
Example 2.2.3. Let Ω = ℝⁿ \ {0} and let σ_n be the area of the unit sphere S_1 ⊂ ℝⁿ. Then the functions

$$E_2 = \frac{1}{2\pi}\ln|x|, \quad x \in \mathbb{R}^2, \qquad E_n = -\frac{1}{(n-2)\sigma_n}|x|^{-n+2}, \quad x \in \mathbb{R}^n, \; n \geq 3, \qquad (2.26)$$

are harmonic in Ω. These functions are called fundamental solutions of the Laplace operator Δ, and they satisfy the equation

$$\Delta E_n = \delta(x), \qquad x \in \mathbb{R}^n,$$

where δ(x) is the Dirac δ-function. ♦
We now present a list of the most important facts about harmonic functions.
Theorem 2.2.3. (The Mean Value Property) Let u be a harmonic function in the ball U_R and u ∈ C(\bar U_R). Then

$$u(0) = \frac{1}{\sigma_n R^{n-1}}\int_{S_R} u(x)\,ds = \frac{1}{\sigma_n}\int_{S_1} u(Rx)\,ds.$$
Theorem 2.2.4. (The Maximum Principle) Let u(x) ≢ constant be a harmonic function in the domain Ω and let u ∈ C(\bar\Omega). Then u(x) attains its maximum and minimum values on ∂Ω.
Remark 2.2.1. There exist many generalizations of the Maximum Principle for elliptic equations. For example, if u ∈ C²(Ω) ∩ C(\bar\Omega) is a solution of the Dirichlet problem

$$-\Delta u + q(x)u = f(x), \quad x \in \Omega, \qquad u\big|_{\partial\Omega} = v(x'),$$

then the following estimate holds:

$$\|u\|_{C(\bar\Omega)} \leqslant \frac{1}{q_0}\|f\|_{C(\bar\Omega)} + \|v\|_{C(\partial\Omega)}.$$

Here q_0 = \min_{x\in\bar\Omega} q(x) > 0 and \|f\|_{C(\bar\Omega)} \stackrel{\mathrm{def}}{=} \max_{x\in\bar\Omega} |f(x)|.
Corollary 2.2.1. If the Dirichlet problem for the Poisson equation has a solution, then this solution is unique.
Theorem 2.2.5. If

$$\Delta u = 0, \quad x \in \mathbb{R}^n,$$

and u grows at most polynomially as |x| → ∞, then u is a polynomial.
Theorem 2.2.6. (The Green Formula) Let u be a harmonic function in a domain Ω ⊂ ℝⁿ. Then, for any x ∈ Ω,

$$u(x) = -\int_{\partial\Omega}\Big(E_n(x - y)\frac{\partial u(y)}{\partial \nu_y} - u(y)\frac{\partial}{\partial \nu_y}E_n(x - y)\Big)ds_y, \qquad (2.27)$$

where E_n is the fundamental solution (2.26) and ν_y is the outward normal to ∂Ω.
Corollary 2.2.2. If u is a harmonic function in Ω, then u ∈ C∞(Ω).
2.2.3 The Green formula for the Dirichlet problem
Consider the Dirichlet boundary value problem

$$\Delta u = 0, \quad x \in \Omega, \qquad u\big|_{\partial\Omega} = u_0(x'), \quad x' \in \partial\Omega, \qquad (2.28)$$
where u_0 ∈ C(∂Ω) and Ω is a bounded domain in ℝ³ with sufficiently smooth boundary.
Definition 2.2.2. Let the following conditions hold for a function G(x, y), x ∈ \bar\Omega, y ∈ Ω:

1. For any y ∈ Ω, G(x, y) has the representation

$$G(x, y) = \frac{1}{4\pi|x - y|} + g(x, y), \qquad (2.29)$$

where g(x, y) is harmonic in Ω and, as a function of x, g ∈ C(\bar\Omega);
2. For any y ∈ Ω, G(x, y) satisfies the boundary condition

$$G(x, y)\big|_{x \in \partial\Omega} = 0. \qquad (2.30)$$

Then G(x, y) is called the Green function for the Dirichlet problem in Ω.
This definition and the maximum principle imply that the following inequalities hold:

$$0 < G(x, y) < \frac{1}{4\pi|x - y|}, \qquad x \in \Omega, \; y \in \Omega.$$
Theorem 2.2.7. Let ∂Ω be a C² surface. Then the Green function G(x, y) exists, has for any y ∈ Ω the normal derivative ∂G(x, y)/∂ν, and is a symmetric function:

$$G(x, y) = G(y, x), \qquad x \in \Omega, \; y \in \Omega.$$
Knowledge of the Green function allows one to present the solution of the boundary value problem (2.28) in the explicit form

$$u(x) = -\int_{\partial\Omega} u_0(y)\frac{\partial G(x, y)}{\partial \nu_y}\,ds_y, \qquad x \in \Omega. \qquad (2.31)$$
Moreover, applying the formula (2.22) we can write the solution

$$u(x) = -\int_{\partial\Omega} u_0(y)\frac{\partial G(x, y)}{\partial \nu_y}\,ds_y + \int_\Omega G(x, y)f(y)\,dy, \qquad x \in \Omega, \qquad (2.32)$$

of the Dirichlet boundary value problem for the Poisson equation

$$\Delta u = -f(x), \qquad x \in \Omega.$$
Remark 2.2.2. Formulas similar to (2.31) and (2.32) hold for any n ≥ 2. It is enough to write the representation (2.29) in the form

$$G(x, y) = -E_n(x - y) + g(x, y), \qquad n \geq 2.$$
To construct the Green function one uses the method of images(see the bibliographical comments).
Example 2.2.4. Let Ω be the ball U_A of radius A. Then

$$G(x, y) = \frac{1}{4\pi|x - y|} - \frac{A|y|}{4\pi\,\big|x|y|^2 - yA^2\big|}, \qquad y \in U_A. ♦$$
Example 2.2.5. Let Ω be the half-space {(x_1, x_2, x_3) : x_3 > 0}. Then

$$G(x, y) = \frac{1}{4\pi|x - y|} - \frac{1}{4\pi|x - y^*|}, \qquad y_3 > 0,$$

where y* = (y_1, y_2, −y_3) is the point symmetric to y = (y_1, y_2, y_3). ♦
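The half-space formula of Example 2.2.5 can be checked directly; this is a minimal sketch of the image construction, with our own function name and sample points:

```python
import math

def green_half_space(x, y):
    # Green function of Example 2.2.5 for the half-space x3 > 0, built by the
    # method of images: subtract the source reflected through the plane x3 = 0.
    y_star = (y[0], y[1], -y[2])
    r = math.dist(x, y)
    r_star = math.dist(x, y_star)
    return 1.0 / (4.0 * math.pi * r) - 1.0 / (4.0 * math.pi * r_star)

boundary_value = green_half_space((0.3, -0.2, 0.0), (1.0, 2.0, 3.0))  # x on x3 = 0
interior_value = green_half_space((0.0, 0.0, 1.0), (0.0, 0.0, 2.0))
```

On the plane x_3 = 0 the two image distances coincide, so the boundary condition (2.30) holds exactly, while inside the domain G is positive and bounded by 1/(4π|x − y|), in agreement with the inequalities above.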
Example 2.2.6. Let Ω be the dihedral angle Ω = {(x_1, x_2, x_3) : x_2 > 0, x_3 > 0}. Then

$$G(x, y) = \frac{1}{4\pi|x - y|} - \frac{1}{4\pi|x - y^*|} - \frac{1}{4\pi|x - y'|} + \frac{1}{4\pi|x - y''|},$$

where y* = (y_1, y_2, −y_3), y' = (y_1, −y_2, y_3) and y'' = (y_1, −y_2, −y_3) are the points symmetric to y = (y_1, y_2, y_3). ♦
2.2.4 The eigenvalue problem and the Fourier method

The eigenvalue problem

Let L be an elliptic operator of the second order,

$$Lu \stackrel{\mathrm{def}}{=} -\langle\nabla, p\nabla u\rangle + qu, \qquad (2.33)$$

where p = p(x) ∈ C¹(\bar\Omega), q = q(x) ∈ C(\bar\Omega), p(x) > 0 and q(x) ≥ 0 for x ∈ \bar\Omega.

Recall that we are using vector notation, so

$$\langle\nabla, p\nabla u\rangle \stackrel{\mathrm{def}}{=} \sum_{i=1}^{n}\frac{\partial}{\partial x_i}\Big(p\frac{\partial u}{\partial x_i}\Big).$$
Assume that Ω is a bounded domain with piecewise smooth boundary and impose the boundary condition

$$\Big(\alpha u + \beta\frac{\partial u}{\partial \nu}\Big)\Big|_{\partial\Omega} = 0, \qquad (2.34)$$

where ν is the outward normal to ∂Ω, α = α(x') ∈ C(∂Ω), β = β(x') ∈ C(∂Ω), α(x') ≥ 0, β(x') ≥ 0 and α(x') + β(x') > 0 for every x' ∈ ∂Ω.
Denote by M_L the set of functions u ∈ C²(Ω) ∩ C¹(\bar\Omega) that satisfy the boundary condition (2.34).
Theorem 2.2.8. (General Properties of L) Let u and v be functions in C²(Ω) ∩ C¹(\bar\Omega). Then:

1. The first Green formula

$$\int_\Omega vLu\,dx = \int_\Omega \big(p\langle\nabla u, \nabla v\rangle + quv\big)\,dx - \int_{\partial\Omega} pv\frac{\partial u}{\partial \nu}\,ds$$

holds.

2. The second Green formula

$$\int_\Omega (vLu - uLv)\,dx = \int_{\partial\Omega} p\Big(u\frac{\partial v}{\partial \nu} - v\frac{\partial u}{\partial \nu}\Big)\,ds$$

holds.

3. Let u, v ∈ M_L. Then L is a positive self-adjoint (Hermitian) operator:

$$\int_\Omega uLv\,dx = \int_\Omega vLu\,dx, \qquad \int_\Omega uLu\,dx \geq \min_{x\in\bar\Omega} p(x)\int_\Omega |\nabla u|^2\,dx.$$
Consider the eigenvalue problem

$$L\varphi = \lambda\varphi, \quad x \in \Omega, \qquad \Big(\alpha\varphi + \beta\frac{\partial \varphi}{\partial \nu}\Big)\Big|_{\partial\Omega} = 0. \qquad (2.35)$$

The main properties of eigenvalues and eigenfunctions of problem (2.35) are:
1. Eigenvalues are real and nonnegative.
2. Eigenfunctions ϕ_i and ϕ_j corresponding to eigenvalues λ_i and λ_j, respectively, are orthogonal if i ≠ j.
3. Eigenfunctions are real-valued.
4. λ = 0 is an eigenvalue of L if and only if q = 0 and α = 0. The corresponding eigenfunction is ϕ = constant.
Theorem 2.2.9. The set of eigenvalues λ of problem (2.35) is denumerable and has no finite limit points. Every eigenvalue has finite multiplicity. Any f ∈ M_L can be expanded into a Fourier series in terms of the eigenfunctions of L. Moreover, the system of eigenfunctions of L is complete in L²(Ω).
Example 2.2.7. (The Sturm-Liouville Problem)

Consider the one-dimensional case n = 1, Ω = (0, ℓ), and let

$$L = -\frac{d^2}{dx^2}, \qquad \alpha = 1, \quad \beta = 0.$$

Then we have the particular case of problem (2.35):

$$-\frac{d^2\varphi}{dx^2} = \lambda\varphi, \quad x \in (0, \ell), \qquad \varphi\big|_{x=0} = \varphi\big|_{x=\ell} = 0. \qquad (2.36)$$
The general solution of equation (2.36) has the form

$$\varphi = A\sin\sqrt{\lambda}\,x + B\cos\sqrt{\lambda}\,x. \qquad (2.37)$$

The arbitrary coefficients A and B have to be found from the boundary conditions and the assumption that ϕ is nontrivial (ϕ ≢ 0). The left boundary condition implies that B = 0. The right boundary condition,

$$A\sin\sqrt{\lambda}\,\ell = 0,$$

and the fact that ϕ is nontrivial imply that

$$\sin\sqrt{\lambda}\,\ell = 0. \qquad (2.38)$$
Then the eigenvalues are

$$\lambda_j = \Big(\frac{j\pi}{\ell}\Big)^2, \qquad j = 1, 2, \ldots, \qquad (2.39)$$

and the eigenfunctions are

$$\varphi_j = A\sin\Big(\frac{j\pi}{\ell}x\Big), \qquad j = 1, 2, \ldots \qquad (2.40)$$

In (2.40) we choose only positive j's because φ_{−j} = A sin(−jπx/ℓ) = −A sin(jπx/ℓ) = −φ_j. The constant A is determined by the normalization condition

$$\|\varphi_j\|_{L^2(\Omega)}^2 = \int_\Omega \varphi_j^2(x)\,dx = 1. \qquad (2.41)$$

In the case under consideration we readily obtain

$$A = \sqrt{\frac{2}{\ell}}. \qquad (2.42) \quad ♦$$
Example 2.2.8. Consider the case of Neumann conditions:

$$-\frac{d^2\varphi}{dx^2} = \lambda\varphi, \quad x \in (0, \ell), \qquad \frac{d\varphi}{dx}\Big|_{x=0} = \frac{d\varphi}{dx}\Big|_{x=\ell} = 0. \qquad (2.43)$$
Now A = 0; however, the equation for the eigenvalues has again the form (2.38). At the same time, the sequence of eigenvalues for problem (2.43),

$$\lambda_j = \Big(\frac{j\pi}{\ell}\Big)^2, \qquad j = 0, 1, 2, \ldots, \qquad (2.44)$$

differs from sequence (2.39) by the term λ_0 = 0, since the first eigenfunction ϕ_0 of the sequence

$$\varphi_j = B\cos\Big(\frac{j\pi}{\ell}x\Big), \qquad j = 0, 1, 2, \ldots,$$

is nontrivial. To determine B we use again the normalization condition (2.41). ♦
Example 2.2.9. Consider the problem

$$-\frac{d^2\varphi}{dx^2} = \lambda\varphi, \quad x \in (0, \ell), \qquad \varphi\big|_{x=0} = 0, \qquad \Big(\varphi + \beta\frac{d\varphi}{dx}\Big)\Big|_{x=\ell} = 0. \qquad (2.45)$$

The coefficient B in the solution (2.37) is equal to zero. However, in contrast to Example 2.2.7, the boundary condition at x = ℓ now has the form

$$A\Big[\sin\sqrt{\lambda}\,\ell + \beta\sqrt{\lambda}\cos\sqrt{\lambda}\,\ell\Big] = 0. \qquad (2.46)$$
Thus, λ is a solution of the trigonometric equation

$$\tan\sqrt{\lambda}\,\ell = -\beta\sqrt{\lambda}. \qquad (2.47)$$

The relation (2.47) defines a sequence of eigenvalues {λ_j}_{j=1}^∞ such that 0 < λ_1 < λ_2 < ⋯ and √λ_j → (j + ½)π/ℓ as j → ∞. The sequence of the corresponding normalized eigenfunctions is

$$\varphi_j = \sqrt{\frac{2}{\ell}}\sin\big(\sqrt{\lambda_j}\,x\big), \qquad j = 1, 2, \ldots. ♦$$
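The transcendental equation (2.47) has no closed-form roots, but they are easily found by bisection; the bracketing intervals and sample parameters below are our own choices:

```python
import math

def mixed_eigenvalues(ell, beta, count):
    # Solve tan(sqrt(lam)*ell) = -beta*sqrt(lam), eq. (2.47), by bisection in
    # theta = sqrt(lam)*ell. The j-th root lies in (j*pi + pi/2, (j+1)*pi),
    # where tan runs from -infinity up to 0.
    g = lambda th: math.tan(th) + beta * th / ell   # g(theta) = 0  <=>  (2.47)
    roots = []
    for j in range(count):
        lo = j * math.pi + 0.5 * math.pi + 1e-9
        hi = (j + 1) * math.pi - 1e-9
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if g(lo) * g(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append((0.5 * (lo + hi) / ell) ** 2)
    return roots

# First three eigenvalues for ell = 1, beta = 0.5, and the residuals of (2.47).
lams = mixed_eigenvalues(1.0, 0.5, 3)
residuals = [abs(math.tan(math.sqrt(lam)) + 0.5 * math.sqrt(lam)) for lam in lams]
```

The roots increase monotonically, consistent with the ordering 0 < λ_1 < λ_2 < ⋯ stated above.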
Example 2.2.10. Consider the two-dimensional problem in the rectangle Ω = (0, ℓ_1) × (0, ℓ_2):

$$-\Delta\varphi = \lambda\varphi, \quad x \in \Omega, \qquad \varphi\big|_{\partial\Omega} = 0. \qquad (2.48)$$

If we look for a solution with separated variables, φ = X(x_1)Y(x_2), for equation (2.48) we obtain

$$-\frac{X''}{X} = \frac{Y''}{Y} + \lambda, \qquad x \in \Omega.$$

Since the left-hand side depends only on x_1 whereas the right-hand side depends only on x_2, and the variables x_1 and x_2 are independent, the last equality can hold only in the case

$$-\frac{X''}{X} = \frac{Y''}{Y} + \lambda = \text{constant}, \qquad x \in \Omega. \qquad (2.49)$$
So we obtain two ordinary differential equations,

$$-\frac{X''}{X} = \mu, \qquad -\frac{Y''}{Y} = \nu,$$

where μ > 0 and ν > 0 are constants and λ = μ + ν. The boundary conditions in (2.48) imply

$$X\big|_{x_1=0} = X\big|_{x_1=\ell_1} = 0, \qquad Y\big|_{x_2=0} = Y\big|_{x_2=\ell_2} = 0.$$
So, by analogy with Example 2.2.7, we obtain the sequences of eigenvalues and eigenfunctions

$$\mu_j = \Big(\frac{j\pi}{\ell_1}\Big)^2, \qquad \nu_i = \Big(\frac{i\pi}{\ell_2}\Big)^2, \qquad X_j = A\sin\Big(\frac{j\pi}{\ell_1}x_1\Big), \qquad Y_i = A\sin\Big(\frac{i\pi}{\ell_2}x_2\Big).$$
Thus, the normalized solutions of the eigenvalue problem (2.48) are

$$\lambda_{ji} = \Big(\frac{j\pi}{\ell_1}\Big)^2 + \Big(\frac{i\pi}{\ell_2}\Big)^2, \qquad \varphi_{ji} = \frac{2}{\sqrt{\ell_1\ell_2}}\sin\Big(\frac{j\pi}{\ell_1}x_1\Big)\sin\Big(\frac{i\pi}{\ell_2}x_2\Big), \qquad i, j = 1, 2, \ldots. ♦$$
The Fourier method

Consider the boundary value problem for the elliptic equation

$$-\frac{\partial^2 u}{\partial y^2} + Lu = f(x, y), \qquad y \in (0, \ell), \quad x \in \Omega,$$

$$u\big|_{y=0} = u_0(x), \qquad u\big|_{y=\ell} = u_\ell(x), \qquad \Big(\alpha u + \beta\frac{\partial u}{\partial \nu}\Big)\Big|_{\Sigma} = 0, \qquad (2.50)$$

where the operator L has the form (2.33) and Σ = (0, ℓ) × ∂Ω is the lateral surface of the cylinder (0, ℓ) × Ω. Let f, u_0 and u_ℓ belong to M_L, and look for the solution of this problem as the series

$$u(x, y) = \sum_{k=1}^{\infty} Y_k(y)\varphi_k(x), \qquad (2.51)$$

where φ_k are the eigenfunctions of the operator L,

$$L\varphi_k = \lambda_k\varphi_k, \qquad \Big(\alpha\varphi_k + \beta\frac{\partial \varphi_k}{\partial \nu}\Big)\Big|_{\partial\Omega} = 0,$$

and Y_k are arbitrary functions.
Expanding the functions f, u_0 and u_ℓ in Fourier series,

$$f = \sum_{k=1}^{\infty} f_k(y)\varphi_k(x), \qquad u_0 = \sum_{k=1}^{\infty} a_k\varphi_k(x), \qquad u_\ell = \sum_{k=1}^{\infty} b_k\varphi_k(x),$$

we obtain the equations

$$-\frac{d^2Y_k}{dy^2} + \lambda_k Y_k = f_k(y), \qquad k = 1, 2, \ldots, \qquad (2.52)$$
and the boundary conditions

$$Y_k\big|_{y=0} = a_k, \qquad Y_k\big|_{y=\ell} = b_k. \qquad (2.53)$$
The solution of problem (2.52), (2.53) is

$$Y_k(y) = a_k\frac{\sinh\sqrt{\lambda_k}(\ell - y)}{\sinh\sqrt{\lambda_k}\,\ell} + b_k\frac{\sinh\sqrt{\lambda_k}\,y}{\sinh\sqrt{\lambda_k}\,\ell} + \int_0^\ell G_k(y, \xi)f_k(\xi)\,d\xi,$$

where

$$G_k(y, \xi) = \frac{1}{\sqrt{\lambda_k}\,\sinh\sqrt{\lambda_k}\,\ell}\begin{cases} \sinh\sqrt{\lambda_k}\,y\,\sinh\sqrt{\lambda_k}(\ell - \xi) & \text{for } 0 \leqslant y \leqslant \xi, \\[4pt] \sinh\sqrt{\lambda_k}(\ell - y)\,\sinh\sqrt{\lambda_k}\,\xi & \text{for } \xi \leqslant y \leqslant \ell, \end{cases}$$
is the so-called Green function of the problem

$$-v'' + \lambda_k v = f_k, \qquad v(0) = v(\ell) = 0.$$
Thus, the formal solution of problem (2.50) is

$$u(x, y) = \sum_{k=1}^{\infty}\Big(a_k\frac{\sinh\sqrt{\lambda_k}(\ell - y)}{\sinh\sqrt{\lambda_k}\,\ell} + b_k\frac{\sinh\sqrt{\lambda_k}\,y}{\sinh\sqrt{\lambda_k}\,\ell} + \int_0^\ell G_k(y, \xi)f_k(\xi)\,d\xi\Big)\varphi_k(x). \qquad (2.54)$$
Example 2.2.11. Consider the Dirichlet problem for Laplace's equation in the rectangle (0, ℓ_1) × (0, ℓ):

$$\Delta u = 0, \qquad u\big|_{x=0} = u\big|_{x=\ell_1} = 0, \qquad u\big|_{y=0} = u_0(x), \qquad u\big|_{y=\ell} = u_\ell(x). \qquad (2.55)$$

We rewrite this equation in a form similar to (2.50):

$$-\frac{\partial^2 u}{\partial y^2} + Lu = 0, \qquad L = -\frac{\partial^2}{\partial x^2}.$$
By Example 2.2.7,

$$\varphi_k(x) = \sqrt{\frac{2}{\ell_1}}\sin\Big(\frac{k\pi}{\ell_1}x\Big), \qquad \lambda_k = \Big(\frac{k\pi}{\ell_1}\Big)^2.$$
Thus, for problem (2.55), formula (2.54) implies

$$u(x, y) = \sqrt{\frac{2}{\ell_1}}\sum_{k=1}^{\infty}\Big(a_k\sinh\Big(k\pi\frac{\ell - y}{\ell_1}\Big) + b_k\sinh\Big(k\pi\frac{y}{\ell_1}\Big)\Big)\frac{\sin\big(k\pi\frac{x}{\ell_1}\big)}{\sinh\big(k\pi\frac{\ell}{\ell_1}\big)}, \qquad (2.56)$$
where

$$a_k = \sqrt{\frac{2}{\ell_1}}\int_0^{\ell_1} u_0(x)\sin\Big(k\pi\frac{x}{\ell_1}\Big)dx, \qquad b_k = \sqrt{\frac{2}{\ell_1}}\int_0^{\ell_1} u_\ell(x)\sin\Big(k\pi\frac{x}{\ell_1}\Big)dx. \quad ♦$$
Example 2.2.12. Consider the Dirichlet boundary value problem

$$\Delta u = 0, \qquad u\big|_{x=0} = v_0(y), \quad u\big|_{x=\ell_1} = v_\ell(y), \qquad u\big|_{y=0} = u_0(x), \quad u\big|_{y=\ell} = u_\ell(x).$$

We write u = V + W and define the functions V and W as the solutions of the problems

$$\Delta V = 0, \qquad V\big|_{x=0} = v_0(y), \quad V\big|_{x=\ell_1} = v_\ell(y), \quad V\big|_{y=0} = V\big|_{y=\ell} = 0,$$

$$\Delta W = 0, \qquad W\big|_{x=0} = W\big|_{x=\ell_1} = 0, \quad W\big|_{y=0} = u_0(x), \quad W\big|_{y=\ell} = u_\ell(x).$$

Obviously, W has the form (2.56). Moreover, V has the same form after the change of variables x → y, y → x. ♦
2.2.5 Asymptotics for elliptic equations with a small parameter

Regular perturbations

Consider the Dirichlet boundary value problem

$$-\langle\nabla, p\nabla u\rangle + q(x, \varepsilon)u = f(x), \qquad x \in \Omega, \qquad (2.57)$$

$$u\big|_{\partial\Omega} = \mu(x'), \qquad x' \in \partial\Omega, \qquad (2.58)$$

where p ∈ C^∞(\bar\Omega), p > 0, μ ∈ C^∞(∂Ω), f ∈ C^∞(\bar\Omega) and Ω is a bounded domain with smooth boundary ∂Ω. We will also assume that

$$q(x, \varepsilon) \in C^\infty(\bar\Omega \times [0, 1]), \qquad q > 0. \qquad (2.59)$$
Let us treat ε as a small parameter (which we represent by writing ε → 0), assuming that ε is smaller than any fixed positive number.
The assumption (2.59) states the regular dependence of q on ε: the function q(x, 0) exists and we can expand q(x, ε) in a Taylor series with respect to ε at the point ε = 0. This is why the problem (2.57), (2.58) is called regularly perturbed.
It is natural to assume that the solution of such a problem is regular too. So we suppose

$$u(x, \varepsilon) \in C^\infty(\bar\Omega \times [0, 1]), \qquad (2.60)$$

and, for ε → 0, expand u into the Taylor series

$$u(x, \varepsilon) = u_0(x) + \varepsilon u_1(x) + \varepsilon^2 u_2(x) + \varepsilon^3 u_3(x) + \cdots. \qquad (2.61)$$

A similar expansion holds for q:

$$q(x, \varepsilon) = q_0(x) + \varepsilon q_1(x) + \varepsilon^2 q_2(x) + \varepsilon^3 q_3(x) + \cdots. \qquad (2.62)$$
A substitution of (2.61) and (2.62) into the equation (2.57) implies the following:

$$\big(-\langle\nabla, p\nabla u_0\rangle + q_0(x)u_0 - f(x)\big) + \varepsilon\big(-\langle\nabla, p\nabla u_1\rangle + q_0(x)u_1 + q_1(x)u_0\big) + \varepsilon^2\big(-\langle\nabla, p\nabla u_2\rangle + q_0(x)u_2 + q_1(x)u_1 + q_2(x)u_0\big) + \cdots = 0. \qquad (2.63)$$

Since ε tends to 0, the relation (2.63) is satisfied only in the case when

$$-\langle\nabla, p\nabla u_0\rangle + q_0(x)u_0 = f(x),$$
$$-\langle\nabla, p\nabla u_1\rangle + q_0(x)u_1 = -q_1(x)u_0,$$
$$-\langle\nabla, p\nabla u_2\rangle + q_0(x)u_2 = -q_1(x)u_1 - q_2(x)u_0,$$
$$\vdots \qquad (2.64)$$
Furthermore, for ε → 0 the assumption (2.59) implies that

$$q_0(x) > 0. \qquad (2.65)$$
Next, note that the boundary condition (2.58) and the expansion (2.61) imply the boundary conditions for (2.64):

$$u_0\big|_{\partial\Omega} = \mu(x'), \qquad u_k\big|_{\partial\Omega} = 0, \quad k = 1, 2, \ldots. \qquad (2.66)$$

The solutions of the boundary value problems (2.64), (2.66) exist and are unique.
Furthermore, the series (2.61) converges in the asymptotic sense, that is, for sufficiently small ε. However, it is usually enough to construct only the first few terms of this asymptotic expansion. In this case one writes

$$u(x, \varepsilon) = u^N(x, \varepsilon) + O(\varepsilon^{N+1}), \qquad (2.67)$$

where

$$u^N(x, \varepsilon) = \sum_{j=0}^{N} \varepsilon^j u_j(x), \qquad N \geq 0 \text{ fixed}, \qquad (2.68)$$

is called an asymptotic solution with accuracy O(ε^{N+1}), or the asymptotic mod O(ε^{N+1}) solution, or simply the asymptotics.
Remark 2.2.3. Usually the notation (2.67) means that u^N is a formal asymptotic mod O(ε^{N+1}) solution: substitution of (2.67) into the equation results in the appearance of a remainder which is mod O(ε^{N+1}) in the C sense. For the problem (2.57), (2.58) the asymptotics u^N is not formal, since the difference between the exact solution u and the asymptotics u^N is mod O(ε^{N+1}) in the C(\bar\Omega) sense.
Singular perturbations
Consider now the singularly perturbed problem
−ε²〈∇, p∇u〉 + q(x)u = f(x),  x ∈ Ω,   (2.69)

u|∂Ω = µ(x′),  x′ ∈ ∂Ω,   (2.70)
under the same assumptions about p, q, f, µ and Ω as in the problem(2.57), (2.58).
Note that the formal change of variables x → x′ = x/ε transforms (2.69) into an equation of the form (2.57) (with coefficients p′ = p(εx′), q′ = q(εx′), right-hand side f′ = f(εx′), and over the domain Ω′, with |Ω′| ∼ 1/ε → ∞ as ε → 0).

However, the solution will then depend on the variable x/ε, and we cannot assume its smoothness at ε = 0. Such dependence of the solution on a small parameter is called singular.
Nevertheless, let us try to repeat the previous construction for the problem (2.69), (2.70). Substituting an asymptotics of the form (2.67) into equation (2.69) results in the algebraic equations

qu0 = f,  u_{2i+1} ≡ 0,  i = 0, 1, 2, . . . ,
qu_{2j} = 〈∇, p∇u_{2(j−1)}〉,  j = 1, 2, . . . .   (2.71)

So, in contrast to the regular case, functions of the form (2.67) do not satisfy the boundary condition: u0 = f/q need not coincide with µ on ∂Ω. To improve this, rewrite the asymptotic mod O(ε^{N+1}) solution in the form
u(x, ε) = uN_reg(x, ε) + uN_sing(ν(x)/ε, x′, ε).   (2.72)

Here uN_reg ∈ C∞(Ω × [0, 1]) is a function of the form (2.68), ν(x) is the distance to ∂Ω along the inner normal, and x′ ∈ ∂Ω. We assume that

uN_sing(τ, x′, ε) ∈ C∞(R¹₊ × ∂Ω × [0, 1]),  uN_sing(τ, x′, ε) → 0 as τ → ∞.   (2.73)

So, the dependence of uN_sing on ε is singular in the first argument and regular in the third one.
Write uN_reg(x, ε) in the form (2.68) and similarly write

uN_sing(τ, x′, ε) = Σ_{j=0}^{N} ε^j Uj(τ, x′).   (2.74)
Next, note the equalities

ε∇U(ν/ε, x′) = [∇ν ∂U(τ, x′)/∂τ + ε∇U(τ, x′)]|_{τ=ν/ε},

ε²∆U(ν/ε, x′) = [|∇ν|² ∂²U(τ, x′)/∂τ² + ε∆ν ∂U(τ, x′)/∂τ + ε²∆U(τ, x′)]|_{τ=ν/ε}.
Thus, substitution of (2.72) results in the relation

[−|∇ν|²p ∂²U0/∂τ² + qU0 + qu0 − f
+ ε(−|∇ν|²p ∂²U1/∂τ² + qU1 − (p∆ν + 〈∇ν,∇p〉) ∂U0/∂τ) + · · ·]|_{τ=ν/ε} = O(ε^{N+1}).   (2.75)
Next, letting τ → ∞ and taking into account the second assumption in (2.73), we readily obtain the same equations (2.71) for the functions u0, u1, . . . .
Then recall the inequality

τ^k e^{−λτ} ≤ c(k),   (2.76)

which holds uniformly in τ ≥ 0 for any fixed λ > 0. Thus, assuming that the Uj, j ≥ 0, decrease exponentially, we have the estimate

ν^k Uj(ν/ε, x′) = ε^k τ^k Uj(τ, x′)|_{τ=ν/ε} = O(ε^k).
Consequently, for any smooth function a(x) we can write the asymptotic expansion

a(x)U(ν/ε, x′) = Σ_{j=0}^{N} ε^j aj(x′) τ^j U(τ, x′)|_{τ=ν/ε} + O(ε^{N+1}),   (2.77)
where a0(x′) = a(x)|∂Ω, a1(x′) = 〈∇ν,∇〉a(x)|∂Ω =: ∂a/∂ν|_{ν=0}, and, for j > 1, aj(x′) is the j-th order derivative of a along the inner normal divided by j!.
Application of formula (2.77) to the expansion (2.75) and recollection of terms result in the relation

[−α0 ∂²U0/∂τ² + q0U0 + Σ_{j=1}^{N} ε^j(−α0 ∂²Uj/∂τ² + q0Uj − Fj)]|_{τ=ν/ε} = O(ε^{N+1}),   (2.78)

where α0 = |∇ν|²p|_{ν=0}, q0 = q|_{ν=0}, and Fj = Fj(U0, . . . , U_{j−1}, τ, x′) are polynomials with respect to τ and U0, . . . , U_{j−1} and their derivatives. In particular,
F1 = (p∆ν + ∂p/∂ν)|_{ν=0} ∂U0/∂τ + τ ∂(|∇ν|²p)/∂ν|_{ν=0} ∂²U0/∂τ² − τ ∂q/∂ν|_{ν=0} U0.
Thus, we obtain the ordinary differential equations

−α0 ∂²U0/∂τ² + q0U0 = 0,  τ > 0,   (2.79)

−α0 ∂²Uj/∂τ² + q0Uj = Fj,  τ > 0,  j = 1, 2, . . . , N.   (2.80)
It remains to set the boundary conditions,

U0|_{τ=0} = µ(x′) − u0(x)|_{ν=0},   (2.81)

Uj|_{τ=0} = −uj(x)|_{ν=0},  j ≥ 1.   (2.82)

Obviously, the solutions of (2.79), (2.81) and (2.80), (2.82) vanish exponentially as τ → ∞. This justifies all of our assumptions.
Finally, we see that the asymptotics for the problem (2.69), (2.70) consists of two parts: a "slowly varying" regular part, u_reg, and a "rapidly decreasing" singular part, u_sing, which is concentrated in an O(ε^γ), γ ∈ (0, 1), neighborhood of the boundary. Such asymptotics are called boundary layers.
Example 2.2.13. Consider the problem

−ε²∆u + qu = 0,  x ∈ Ω,
u|∂Ω = µ(x′),  x′ ∈ ∂Ω,   (2.83)

where q = constant > 0 and Ω is the disk of unit radius.

Obviously, in this case ν = 1 − r and x′ = ϕ, where r and ϕ are the polar coordinates. Since equation (2.83) is homogeneous, u_reg = 0 and we write

u = uN_sing(ν/ε, ϕ, ε) + O(ε^{N+1}).   (2.84)
Furthermore, in our case the problem (2.79)-(2.82) has the following form:

−∂²U0/∂τ² + qU0 = 0,  U0|_{τ=0} = µ(ϕ),

−∂²Uj/∂τ² + qUj = Fj,  Uj|_{τ=0} = 0,  j ≥ 1,

where

F1 = −∂U0/∂τ,  F2 = −∂U1/∂τ − τ ∂U0/∂τ + ∂²U0/∂ϕ²,

F3 = −∂U2/∂τ − τ ∂U1/∂τ + ∂²U1/∂ϕ² − τ² ∂U0/∂τ + 2τ ∂²U0/∂ϕ².
Thus,

U0 = µe^{−√q τ},  U1 = (1/2) τµe^{−√q τ},

and all the next terms of the asymptotics can be calculated recursively. ♦
2.3 Numerical simulations for elliptic equations
Let Ω be the rectangle Ω = (0, ℓ1) × (0, ℓ2) and consider the boundary-value problem for the elliptic equation

Lu := −〈∇, p(x)∇u〉 + q(x)u = f(x),  x ∈ Ω,

αu + β ∂u/∂ν|∂Ω = u0(x′),  x′ ∈ ∂Ω,   (2.85)

where p ∈ C1(Ω), q, f ∈ C(Ω), p > 0, q ≥ 0, α, β ≥ 0, α + β > 0.

To write a finite-difference scheme for the differential problem it is necessary: a) to replace the domain Ω of the continuous variables x by a set of discrete points xij; b) to replace the differential operators in the original boundary value problem by finite-difference operators. Correspondingly, instead of the original function u = u(x) with continuous arguments, we shall consider the so-called net function u(xij) =: yij. Actually, with fixed steps of discretization h1 and h2, the net function yij is a vector with components y00, y01, . . . , y_{N1 N2}. However, in order to analyze finite-difference schemes, we will treat the steps h1, h2 as small parameters. Since ℓ1, ℓ2 = constant, the numbers N1 = ℓ1/h1, N2 = ℓ2/h2 tend to infinity as h1, h2 → 0.
2.3.1 Discretization of domains
In the case of rectangular domains this procedure is very simple. We choose the steps of discretization h1 > 0, h2 > 0 such that N1h1 = ℓ1 and N2h2 = ℓ2 for integers N1, N2, and define the points x1,i = ih1, x2,j = jh2, i = 0, 1, 2, . . . , N1, j = 0, 1, 2, . . . , N2. The set of points (x1,i, x2,j), i = 0, 1, 2, . . . , N1, j = 0, 1, 2, . . . , N2, is called the net over Ω and is denoted by Ωh. Each point (x1,i, x2,j) is called a knot and is enumerated by the indices i, j. Knots xij with indices i = 1, 2, . . . , N1 − 1, j = 1, 2, . . . , N2 − 1 are called inner knots. The other knots form the boundary of Ωh and are denoted by ∂Ωh.
2.3.2 Discretization of differential operators
Consider the simplest differential operator in the one-dimensional case. Recall (Chapter 1) that the finite-difference analogs for d/dx are the so-called forward derivative

du/dx|_{x=xi} → (y_{i+1} − yi)/h =: y_{x,i} =: ∂x yi,   (2.86)

and the so-called backward derivative

du/dx|_{x=xi} → (yi − y_{i−1})/h =: y_{x̄,i} =: ∂x̄ yi.   (2.87)

For smooth functions, any linear combination of the forward, ∂x, and backward, ∂x̄, operators again approximates the derivative d/dx:

du/dx|_{x=xi} → α y_{x,i} + (1 − α) y_{x̄,i},  0 ≤ α ≤ 1.   (2.88)

The most usual combination is for α = 1/2:

du/dx|_{x=xi} → (y_{i+1} − y_{i−1})/(2h) =: y°_{x,i}.   (2.89)
Recall also that the order of approximation of formulas (2.86)-(2.88) with α ≠ 1/2 is O(h), whereas for α = 1/2 it is O(h²).
The operators ∂x and ∂x̄ commute: ∂x∂x̄ = ∂x̄∂x. So, up to order O(h²),

d²u/dx²|_{x=xi} → y_{x̄x,i} := (y_{i+1} − 2yi + y_{i−1})/h².   (2.90)
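The stated orders of approximation are easy to observe numerically. A small sketch (with the test function u = sin x at an arbitrary point) halves the step and watches the error ratios:

```python
# Sketch: measure the approximation orders of the forward difference (2.86),
# the central difference (2.89) and the second difference (2.90) on u = sin x.
import math

def errors(h, x=0.7):
    fwd = (math.sin(x + h) - math.sin(x))/h
    cen = (math.sin(x + h) - math.sin(x - h))/(2*h)
    sec = (math.sin(x + h) - 2*math.sin(x) + math.sin(x - h))/h**2
    return (abs(fwd - math.cos(x)),       # O(h)
            abs(cen - math.cos(x)),       # O(h^2)
            abs(sec + math.sin(x)))       # O(h^2), since u'' = -sin x

e1, e2 = errors(1e-2), errors(5e-3)       # halving the step
print([round(a/b, 1) for a, b in zip(e1, e2)])   # -> [2.0, 4.0, 4.0]
```

Halving h doubles the first error and quarters the other two, in accordance with the orders O(h) and O(h²).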
However, we stress that Leibniz's rule does not hold for finite-difference operators. Instead, we have the rules

(yi gi)_x = (1/h)(y_{i+1}g_{i+1} − yi gi) = g_{i+1} y_{x,i} + yi g_{x,i},

(yi gi)_{x̄} = (1/h)(yi gi − y_{i−1} g_{i−1}) = gi y_{x̄,i} + y_{i−1} g_{x̄,i}.   (2.91)
Consider the one-dimensional version of the differential equation in (2.85):

−d/dx (p(x) du(x)/dx) + q(x)u(x) = f(x),  x ∈ (0, ℓ).   (2.92)
Let us first assume that p, q and f are constants. Then, in accordance with (2.90), we write the scheme for the net function yi = u(xi),

−p y_{x̄x,i} + q yi = f,  i = 1, 2, . . . , N − 1,   (2.93)

or, in more detail,

−(p/h²)(y_{i+1} − 2yi + y_{i−1}) + q yi = f,  i = 1, 2, . . . , N − 1.
So, the scheme (2.93) is the linear system of algebraic equations

Ay = f,   (2.94)

where A is the three-diagonal matrix with entries

A_{i,i} = 2p/h² + q,  A_{i+1,i} = A_{i−1,i} = −p/h².   (2.95)
It is straightforward to verify that the order of approximation of scheme (2.93) is O(h²). Note that, in order to obtain N + 1 equations for the N + 1 unknowns y0, y1, . . . , yN, we have to complete the system (2.93) with two additional conditions. We will do this when considering discretizations of boundary conditions.
Consider now the case in which p, q and f are functions of x. The heat balance law holds for equation (2.92):

W|_α^β + ∫_α^β q(x)u(x)dx = ∫_α^β f(x)dx,   (2.96)

where W is the heat flow

W(x) = −p(x) du/dx   (2.97)

and α < β are arbitrary numbers.
Let us derive a finite-difference analog of the heat balance law (2.96). Let α = x_{i−1/2} := (i − 1/2)h and β = x_{i+1/2} := (i + 1/2)h. Using the simplest interpolation formula for integrals we obtain

∫_{x_{i−1/2}}^{x_{i+1/2}} q(x)u(x)dx ≃ h di yi,  di := (1/h) ∫_{x_{i−1/2}}^{x_{i+1/2}} q(x)dx.   (2.98)

Integration of (2.97) implies that

u(x_{i−1}) − u(xi) = ∫_{x_{i−1}}^{x_i} (W(x)/p(x)) dx.

Consequently,

y_{i−1} − yi ≈ h W_{i−1/2} a_i^{−1},  ai := ((1/h) ∫_{x_{i−1}}^{x_i} dx/p(x))^{−1}.   (2.99)

So, the flux is

W_{i−1/2} = −ai (yi − y_{i−1})/h.   (2.100)
From formulas (2.98)-(2.100) we obtain the finite-difference analog of (2.96):

−(1/h)(a_{i+1}(y_{i+1} − yi)/h − ai(yi − y_{i−1})/h) + di yi = ϕi,   (2.101)

where

ϕi := (1/h) ∫_{x_{i−1/2}}^{x_{i+1/2}} f(x)dx.   (2.102)

A more compact form of (2.101) is

−(ai y_{x̄,i})_x + di yi = ϕi,  i = 1, 2, . . . , N − 1.   (2.103)

As a rule, the subscript i is omitted in such formulas. So, instead of (2.103), one writes the formula

−(a y_x̄)_x + dy = ϕ   (2.104)
in analogy with the differential equation (2.92):

−d/dx (p du/dx) + qu = f,

where arguments are omitted if they are the same in every term.
Thus, we have obtained the finite-difference scheme for the heat balance law (2.96). From the algebraic point of view, (2.104) is a slightly more complicated variant of the system (2.93). However, formulas (2.98), (2.99) and (2.102) for d, a and ϕ are not convenient in practice. Usually, simpler formulas obtained by interpolation of the integrals are used. For example,

ai = p_{i−1/2} := p(xi − h/2),  di = qi,  ϕi = fi,   (2.105)

or

ai = 2pi p_{i−1}/(pi + p_{i−1}),  di = (1/2)(q_{i+1/2} + q_{i−1/2}),  ϕi = (1/2)(f_{i+1/2} + f_{i−1/2}).   (2.106)

In closing this subsection, we rewrite scheme (2.104) in the form of the system of algebraic equations

Ai y_{i−1} − Ci yi + Bi y_{i+1} = −Fi,  i = 1, 2, . . . , N − 1,   (2.107)

where the coefficients Ai, Bi, Ci and the right-hand side Fi have the form

Ai = ai,  Bi = a_{i+1},  Ci = ai + a_{i+1} + h²di,  Fi = h²ϕi.
2.3.3 Discretization of boundary conditions
The discretization of Dirichlet-type boundary conditions is obvious. Indeed, instead of

u|_{x=0} = u0,  u|_{x=ℓ} = uℓ   (2.108)

we write

y0 = u0,  yN = uℓ   (2.109)
and complete the system of algebraic equations (2.107).
Consider the Neumann-type boundary conditions

du/dx|_{x=0} = v0,  du/dx|_{x=ℓ} = vℓ.   (2.110)

The simplest variant of the discretization is

(y1 − y0)/h = v0,  (yN − y_{N−1})/h = vℓ.   (2.111)
However, the remainder of the approximation (2.111) is O(h). Thus, the total order of approximation of the scheme (2.104), (2.111) is O(h), whereas for the problem (2.104), (2.109) it is O(h²). To improve this, note that, due to equation (2.92),

d²u/dx² = (1/p)(qu − f − p′ du/dx).   (2.112)
Using the Taylor expansion we write

(y1 − y0)/h = (1/h)(u(h) − u(0)) = (1/h)(u(0) + h du/dx|₀ + (h²/2) d²u/dx²|₀ − u(0) + O(h³)).

Thus

(y1 − y0)/h − (h/2) d²u/dx²|₀ = du/dx|₀ + O(h²).   (2.113)
Since the left boundary condition (2.110) prescribes the value of du/dx|₀, we rewrite the last relation in the form

(y1 − y0)/h − (h/(2p0))(q0y0 − f0 − p′0v0) = v0 + O(h²),

where p′0 = (dp/dx)|_{x=0}.
Performing similar considerations for the right-hand boundary condition, we obtain that the scheme containing equation (2.104) and the boundary conditions

y1 − (1 + h² q0/(2p0)) y0 = hv0 − (h²/(2p0))(f0 + p′0v0),

(1 + h² qN/(2pN)) yN − y_{N−1} = hvℓ + (h²/(2pN))(fN + p′N vℓ),   (2.114)

will have second order of approximation.
Next, consider the boundary conditions of the third type:

α0u − β0 ∂u/∂x|_{x=0} = u0,  αℓu + βℓ ∂u/∂x|_{x=ℓ} = uℓ,   (2.115)
where the coefficients α0, β0 and α`, β` are nonzero.
Using relations (2.112) and (2.113) we have, with precision O(h²),

α0y0 − β0 [(y1 − y0)/h − (h/(2p))(qy0 − f − p′ du/dx)]|_{x=0} = u0.

The value of du/dx|_{x=0} is unknown. However, the change

h du/dx|_{x=0} → y1 − y0

has second order of approximation. So, combining formulas (2.112) and (2.113) we readily obtain the following discretization of (2.115):
has second order of approximation. So, combining formulas (2.109) and(2.113) we readily obtain the following discretization for (2.115)
(β0 + α0h+
β0h
2p0(p′0 + q0h)
)y0 − β0
(1 +
p′02p0
h
)y1 =
= hu0 + h2 β0
2p0f0,
(β` + α`h− β`h
2pN(p′N − qN )
)yN − β`
(1 − p′N
2pNh
)yN−1 =
= hu` + h2 β`2pN
fN ,
(2.116)
where p0, q0, f0 and p′0 are the values of p, q, f and p′ at the point x = 0,whereas pN , qN , fN and p′N are the values at x = `.
A more compact form of these formulas is

y0 = κ1y1 + ν1,  yN = κ2y_{N−1} + ν2,   (2.117)

where

κ1 = β0(1 + hp′0/(2p0)) / (β0 + α0h + hβ0p′0/(2p0) + β0q0h²/(2p0)),

ν1 = (hu0 + h²β0f0/(2p0)) / (β0 + α0h + hβ0p′0/(2p0) + β0q0h²/(2p0)),

κ2 = βℓ(1 − hp′N/(2pN)) / (βℓ + αℓh − βℓp′Nh/(2pN) + βℓqNh²/(2pN)),

ν2 = (huℓ + h²βℓfN/(2pN)) / (βℓ + αℓh − βℓp′Nh/(2pN) + βℓqNh²/(2pN)).
Note that (2.117) is the general form of boundary conditions for finite-difference schemes. Indeed, in the case (2.109) we put κ1 = 0, ν1 = u0, κ2 = 0, ν2 = uℓ, and in the case of Neumann conditions we put κ1 = 1, ν1 = −hv0, κ2 = 1, ν2 = hvℓ for the rough approximation (2.111) and

κ1 = (1 + h²q0/(2p0))^{−1},  ν1 = κ1(hv0 − (h²/(2p0))(f0 + p′0v0)),

κ2 = (1 + h²qN/(2pN))^{−1},  ν2 = κ2(hvℓ + (h²/(2pN))(fN + p′Nvℓ)),

for the second order approximation (2.114).
2.3.4 Gauss method for systems with three-diagonal matrices

In the previous subsection we obtained that the discretization of the third type boundary value problem for the second order linear equation (2.92) results in the system (2.107), (2.117):

Ai y_{i−1} − Ci yi + Bi y_{i+1} = −Fi,  i = 1, 2, . . . , N − 1,
y0 = κ1y1 + ν1,  yN = κ2y_{N−1} + ν2.   (2.118)
The standard Gauss method uses O((N+1)3) arithmetic operations forsolving a system of N+1 equations. However, system (2.118) is specific,since only three coefficients, at most, are nonzero in every equation. Thisallows a simplification of the standard method.
Write the solution of (2.118) in the form

yi = α_{i+1} y_{i+1} + β_{i+1},  i = 0, 1, . . . , N − 1,   (2.119)

with coefficients αi and βi to be determined. Recursion of (2.119) implies

y_{i−1} = αi α_{i+1} y_{i+1} + αi β_{i+1} + βi.

Substituting these formulas into equations (2.118) we obtain the relation

(α_{i+1}(αi Ai − Ci) + Bi) y_{i+1} + (αi Ai − Ci) β_{i+1} + βi Ai + Fi = 0.

These relations are satisfied if we choose

α_{i+1} = Bi/(Ci − αi Ai),  β_{i+1} = (Ai βi + Fi)/(Ci − αi Ai),  i = 1, 2, . . . , N − 1.   (2.120)
Furthermore, from the left boundary condition we derive

α1 = κ1,  β1 = ν1.   (2.121)

Thus, (2.121) and the recursion relation (2.120) allow us to find all the coefficients α1, . . . , αN, β1, . . . , βN. Now, consider the knot xN = ℓ. For this knot we have the equalities

yN = κ2 y_{N−1} + ν2,  y_{N−1} = αN yN + βN.   (2.122)

Solving this system we obtain the value of yN:

yN = (ν2 + κ2 βN)/(1 − κ2 αN).   (2.123)

This finishes the first step of the Gauss method. Now we can start the calculations of the second step, deriving

yi = α_{i+1} y_{i+1} + β_{i+1},  i = N − 1, N − 2, . . . , 0.   (2.124)
Obviously, the realization of formulas (2.120), (2.123), (2.124) requires O(N) arithmetic operations. Since N ∼ 1/h ≫ 1 (theoretically N → ∞ as h → 0), this advantage is very important.
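The two steps are short enough to state in code. The following sketch implements (2.119)-(2.124) directly (an illustration, not an optimized routine) and tries it on the Dirichlet problem of Example 2.3.1 below with f = 2 and µ1 = µ2 = 0, whose exact solution is u = x(1 − x).

```python
# Sketch: the elimination (2.119)-(2.124) for the three-diagonal system (2.118).
import numpy as np

def sweep(A, B, C, F, k1, n1, k2, n2):
    """Solve A_i y_{i-1} - C_i y_i + B_i y_{i+1} = -F_i, i = 1..N-1,
    with y_0 = k1*y_1 + n1 and y_N = k2*y_{N-1} + n2."""
    N = len(F) - 1
    alpha = np.zeros(N + 1); beta = np.zeros(N + 1)
    alpha[1], beta[1] = k1, n1                       # (2.121)
    for i in range(1, N):                            # first step: (2.120)
        den = C[i] - alpha[i]*A[i]
        alpha[i+1] = B[i]/den
        beta[i+1] = (A[i]*beta[i] + F[i])/den
    y = np.zeros(N + 1)
    y[N] = (n2 + k2*beta[N])/(1 - k2*alpha[N])       # (2.123)
    for i in range(N - 1, -1, -1):                   # second step: (2.124)
        y[i] = alpha[i+1]*y[i+1] + beta[i+1]
    return y

# Example 2.3.1 with f = 2, mu1 = mu2 = 0: exact solution u = x(1 - x).
N = 50; h = 1.0/N; x = np.linspace(0, 1, N + 1)
A = np.ones(N + 1); B = np.ones(N + 1); C = 2*np.ones(N + 1)
F = h**2*2*np.ones(N + 1)
y = sweep(A, B, C, F, k1=0.0, n1=0.0, k2=0.0, n2=0.0)
print(np.max(np.abs(y - x*(1 - x))))   # ~1e-14: the scheme is exact for quadratics
```

Only O(N) operations are performed, in contrast with the O(N³) of the general Gauss method.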
At the same time it is necessary to check the stability of calcula-tions for the described algorithm.
Theorem 2.3.1. (Sufficient conditions for stability) Let

Ai > 0,  Bi > 0,  Ci ≥ Ai + Bi,  Ci ≢ Ai + Bi,  i = 1, 2, . . . , N − 1,
0 ≤ κj ≤ 1,  j = 1, 2,   (2.125)

or

Ai > 0,  Bi > 0,  Ci ≥ Ai + Bi,  i = 1, 2, . . . , N − 1,
0 ≤ κj ≤ 1,  j = 1, 2,  κ1 + κ2 < 2.   (2.126)
Then the described algorithm is stable.
Example 2.3.1. Consider the Dirichlet problem

d²u/dx² = −f,  x ∈ (0, 1),
u(0) = µ1,  u(1) = µ2.   (2.127)

We have

Ai = Bi = 1,  Ci = 2,  κ1 = κ2 = 0.
Thus, conditions (2.126) are satisfied and the algorithm for problem(2.127) is stable. ♦
Example 2.3.2. For the third type boundary value problem

d²u/dx² + au = −f,  x ∈ (0, 1),
u′(0) = σu(0) + µ1,  u(1) = µ2,   (2.128)
we have

Ai = Bi = 1,  Ci = 2 − ah²,  κ1 = (1 + σh)^{−1},  κ2 = 0.

Thus, the conditions for stability are

a ≤ 0,  σ ≥ 0. ♦
Example 2.3.3. Consider the first order differential equation

∂u/∂t + a ∂u/∂x = 0.

As a discretization of this equation we write the three-point scheme

(y_i^j − y_i^{j−1})/τ + a (y_{i+1}^j − y_{i−1}^j)/(2h) = 0,   (2.129)

where y_i^j = u(xi, tj), xi = ih, tj = jτ, and the y_i^{j−1} are treated as known quantities. After collecting terms, (2.129) can be written in the form (2.118) with coefficients

Ai = aτ/(2h),  Bi = −aτ/(2h),  Ci = −1.

Since Ai and Bi have opposite signs, neither (2.125) nor (2.126) can be satisfied; so, scheme (2.129) is unstable for any sign of a. ♦
2.3.5 The eigenvalue problem for a finite-difference scheme

Consider the finite-difference analog of the Sturm-Liouville problem:

y_{x̄x} + λy = 0,  y0 = yN = 0,  y ≢ 0.   (2.130)
Let yi = sin αxi with arbitrary α. Since

y_{i+1} + y_{i−1} = 2 sin αxi cos αh,

equation (2.130) implies that

2 sin αxi cos αh = 2(1 − (1/2)λh²) sin αxi.

Thus, for a nontrivial solution,

cos αh = 1 − (1/2)λh²

and, hence,

λ = (2/h²)(1 − cos αh) = (4/h²) sin²(αh/2).   (2.131)
In order to find α we use the boundary conditions. As in the Sturm-Liouville problem, we obtain

sin αℓ = 0,  ℓ = Nh.   (2.132)

However, in contrast with the case of differential equations, equation (2.132) has N − 1 solutions (N − 1 is the number of inner knots in the net). So,

α = αk = kπ/ℓ,  k = 1, 2, . . . , N − 1.   (2.133)
Hence, the finite-difference Sturm-Liouville problem (2.130) has the N − 1 eigenvalues and eigenfunctions

y_i^{(k)} = sin(kπxi/ℓ),  λk = (4/h²) sin²(kπh/(2ℓ)),  k = 1, 2, . . . , N − 1.
Moreover,

0 < λ1 = (4/h²) sin²(πh/(2ℓ)) < λ2 < · · · < λ_{N−1} = (4/h²) sin²(πh(N − 1)/(2ℓ)) = (4/h²) cos²(πh/(2ℓ)) < 4/h²,
and the eigenfunctions are orthogonal:

(y^{(k)}, y^{(m)}) := h Σ_{i=1}^{N−1} y_i^{(k)} y_i^{(m)} = 0  for k ≠ m.
So, for finite-difference schemes a theory similar to that for differentialequations can be constructed. The main difference is that, for the formercase, the number of eigenvalues and eigenvectors is finite.
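The closed-form eigenvalues can be checked against a direct numerical computation. A sketch (the matrix of −y_{x̄x} with Dirichlet conditions at the inner knots is assembled explicitly; ℓ = 1 and N = 16 are illustrative choices):

```python
# Sketch: compare the eigenvalues of the matrix of -y_xx with Dirichlet
# conditions against the closed form lambda_k = (4/h^2) sin^2(k pi h / (2 l)).
import numpy as np

N = 16; l = 1.0; h = l/N
# (N-1) x (N-1) matrix of -y_xx acting on the inner knots
T = (2*np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1))/h**2
computed = np.sort(np.linalg.eigvalsh(T))
k = np.arange(1, N)
exact = (4/h**2)*np.sin(k*np.pi*h/(2*l))**2
print(np.max(np.abs(computed - exact)))   # ~1e-12 (round-off only)
```

The N − 1 computed eigenvalues match the formula to round-off, and all of them lie below 4/h², as stated above.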
2.3.6 Discretization of the Laplace operator. Iterative method for solving algebraic systems

Consider the Dirichlet boundary value problem for the Poisson equation in the rectangle Ω = (0, ℓ1) × (0, ℓ2):

∆u = −f(x),  x ∈ Ω,
u|∂Ω = u0(x′),  x′ ∈ ∂Ω.   (2.134)
Let x_{i1,i2} be a net over Ω, i1 = 0, 1, . . . , N1, i2 = 0, 1, . . . , N2, with steps of discretization h1 and h2, and let y := y_{i1,i2} be the net function such that y_{i1,i2} = u(x_{i1,i2}). Then, using formula (2.90), we write

∂²u/∂x1² → y_{x̄1x1} := (y_{i1+1,i2} − 2y_{i1,i2} + y_{i1−1,i2})/h1²,

∂²u/∂x2² → y_{x̄2x2} := (y_{i1,i2+1} − 2y_{i1,i2} + y_{i1,i2−1})/h2².
So, the finite-difference analog of the Poisson equation is

y_{x̄1x1} + y_{x̄2x2} = −ϕ,  i1 = 1, 2, . . . , N1 − 1,  i2 = 1, 2, . . . , N2 − 1,   (2.135)

where the right-hand side ϕ := ϕ_{i1,i2} is defined by the method described in Subsection 2.3.2, or simply ϕ_{i1,i2} = f_{i1,i2}.
Theoretically, by completing (2.135) with the boundary conditions, the resulting algebraic system can be solved. At the same time, the use of the general Gauss method requires O((N1N2)³) operations. On the other hand, it is possible to construct an algorithm, similar to the one described above, for equations with five unknowns per row. However, this algorithm is not necessarily stable. To overcome these obstacles, the so-called iterative method can be used, as we describe next.
Let y^j, j = 0, 1, . . . , denote a sequence of net functions. Then, starting from some "initial" y⁰, we define y^{j+1/2} as the solution of the equation

y^{j+1/2} − τ^{(1)}_{j+1} y^{j+1/2}_{x̄1x1} = y^j + τ^{(1)}_{j+1}(y^j_{x̄2x2} + ϕ),

y^{j+1/2}_{0,i2} = u0(x_{0,i2}),  y^{j+1/2}_{N1,i2} = u0(x_{N1,i2}),  i2 = 1, 2, . . . , N2 − 1,   (2.136)

and after that we derive y^{j+1}:

y^{j+1} − τ^{(2)}_{j+1} y^{j+1}_{x̄2x2} = y^{j+1/2} + τ^{(2)}_{j+1}(y^{j+1/2}_{x̄1x1} + ϕ),

y^{j+1}_{i1,0} = u0(x_{i1,0}),  y^{j+1}_{i1,N2} = u0(x_{i1,N2}),  i1 = 1, 2, . . . , N1 − 1,   (2.137)

where τ^{(1)}_{j+1} and τ^{(2)}_{j+1} are parameters.

It is clear that system (2.136) can be treated as N2 − 1 systems of N1 + 1 equations with respect to the unknowns y^{j+1/2}_{i1,i2} with i1 = 0, 1, . . . , N1 and fixed i2. Thus, for any fixed i2, this system can be solved as described in Subsection 2.3.4. Therefore, the total number of arithmetic operations for scheme (2.136) is O(N1N2). Obviously, the same is true for scheme (2.137) and, as a consequence, for the passage from y^j to y^{j+1}.
However, the questions remain of how many iterations must be performed and how to choose the parameters τ^{(1)}_j and τ^{(2)}_j. This problem is solved by fixing the desired accuracy.
Denote by ‖f‖ the net L2-norm

‖f‖ := (h1h2 Σ_{i1=1}^{N1} Σ_{i2=1}^{N2} f²_{i1,i2})^{1/2}.
Consider the initial error z⁰ = y⁰ − u and the error after J steps, z^J = y^J − u, where u is the exact solution of the discrete problem (2.135). We require that, for some ε,

‖z^J‖ ≤ ε‖z⁰‖.   (2.138)
Define the number η:

η = (1 − ξ)/(1 + ξ),  ξ = √((∆1 − δ1)(∆2 − δ2)/((∆1 + δ1)(∆2 + δ2))),

where

δi = (4/hi²) sin²(πhi/(2ℓi)),  ∆i = (4/hi²) cos²(πhi/(2ℓi)),  i = 1, 2,

are the minimal and maximal eigenvalues, respectively, of the operators −∂_{x̄i xi} coupled with the Dirichlet conditions.
Then the number of iterations needed for the estimate (2.138) to hold is

J = (1/π²) ln(4/ε) ln(4/η).
After that, we calculate the auxiliary numbers

κ = (∆1 − δ1)∆2 / ((∆2 + δ1)∆1),  p = (κ − ξ)/(κ + ξ),  r = (∆1 − ∆2 + (∆1 + ∆2)p)/(2∆1∆2),

q = r + (1 − p)/∆1,  Θ = (1/(16η²))(1 + 1/(2η²)),  σj = 1 − 1/2^j,

ωj = (1 + 2Θ)(1 + Θ^{σj}) / (2Θ^{σj/2}(1 + Θ^{1−σj} + Θ^{1+σj})),  j = 1, 2, . . . , J.
Finally, we calculate the parameters

τ^{(1)}_j = (qωj + r)/(1 + ωj p),  τ^{(2)}_j = (qωj − r)/(1 − ωj p).

This way of choosing the parameters J, τ^{(1)}_j and τ^{(2)}_j was proposed by Jordan. There exist other approaches to this problem; for more details see the bibliographical comments.
2.4 Discretization of elliptic operators with variable coefficients. The maximum principle

Consider the elliptic equation with variable coefficients in two variables

Lu := Σ_{α=1}^{2} ∂/∂xα (pα ∂u/∂xα) − q(x)u = −f(x),   (2.139)
where pα = pα(x) ∈ C1(Ω), q = q(x) ∈ C(Ω), pα(x) > 0 and q(x) ≥ 0 forx ∈ Ω. Let the domain Ω be a rectangle (0, `1) × (0, `2).
Applying the method of discretization described in Subsections 2.3.2 and 2.3.6 we pass to the following finite-difference scheme:

Σ_{α=1}^{2} (aα(x) y_{x̄α})_{xα} − d(x)y = −ϕ(x),  x ∈ Ωh,   (2.140)

where Ωh = {xi = (i1h1, i2h2)} is a net over Ω,

aα(x) = pα(xα − 0.5hα, xβ),  α, β = 1, 2,  α ≠ β,
d(x) = q(x),  ϕ(x) = f(x),  x ∈ Ωh.

It is easy to check that the precision of this scheme is O(h1² + h2²).
Fix a point xi, i = (i1, i2), and rewrite the scheme (2.140) as follows:

A(xi)y(xi) − Σ_{ξ∈ωh,i} B(xi, ξ)y(ξ) = ϕ(xi).   (2.141)
Here the coefficients A and B are of the form

A(xi) = (1/h1²)(a1(x_{i1+1,i2}) + a1(x_{i1,i2})) + (1/h2²)(a2(x_{i1,i2+1}) + a2(x_{i1,i2})) + d(x_{i1,i2}),

Σ_{ξ∈ωh,i} B(xi, ξ)y(ξ) = (1/h1²)(a1(x_{i1+1,i2})y(x_{i1+1,i2}) + a1(x_{i1,i2})y(x_{i1−1,i2}))
+ (1/h2²)(a2(x_{i1,i2+1})y(x_{i1,i2+1}) + a2(x_{i1,i2})y(x_{i1,i2−1})).

These formulas are the result of collecting the terms with y at the knot xi and at the neighboring knots ωh,i = {x_{i1+1,i2}, x_{i1−1,i2}, x_{i1,i2+1}, x_{i1,i2−1}}.
We stress that these coefficients have the following properties:

A(xi) > 0,  B(xi, ξ) > 0,  A(xi) − Σ_{ξ∈ωh,i} B(xi, ξ) ≥ 0.   (2.142)
Note that schemes of the form (2.141) also appear in the discretization of equations more general than (2.139), which include first order derivatives. However, under assumptions (2.142) the maximum principle holds for them.
Theorem 2.4.1. (The maximum principle) Let the assumptions (2.142) be satisfied for the scheme (2.141) and let y be nonconstant. Let also ϕ(x) ≤ 0 for all x ∈ Ωh (respectively, ϕ(x) ≥ 0 for all x ∈ Ωh). Then y attains its positive maximum value (negative minimum value) on the boundary γh = ∂Ωh.
Corollary 2.4.1. Let the assumptions of Theorem 2.4.1 be satisfied, y|γh ≥ 0, and ϕ(x) ≥ 0 for x ∈ Ωh. Then y(x) ≥ 0 for all x ∈ Ωh.

Corollary 2.4.2. Let the assumptions of Theorem 2.4.1 be satisfied, y|γh ≤ 0, and ϕ(x) ≤ 0 for x ∈ Ωh. Then y(x) ≤ 0 for all x ∈ Ωh.

Corollary 2.4.3. Let y|γh = 0 and ϕ(x) ≡ 0 under the assumptions mentioned above. Then the solution of (2.141) is identically zero.
Consider now the scheme (2.141) coupled with the Dirichlet boundary condition

y|γh = µ(x),  x ∈ γh.   (2.143)

Denote by ȳ the solution of the similar problem

A(xi)ȳ(xi) − Σ_{ξ∈ωh,i} B(xi, ξ)ȳ(ξ) = ϕ̄(xi),   (2.144)

ȳ|γh = µ̄(x),  x ∈ γh.   (2.145)

Theorem 2.4.2. (The comparison theorem) Let the assumptions of Theorem 2.4.1 be satisfied and

|ϕ(x)| ≤ ϕ̄(x),  x ∈ Ωh,  |µ(x)| ≤ µ̄(x),  x ∈ γh.

Then

|y(x)| ≤ ȳ(x),  x ∈ Ωh.
Corollary 2.4.4. Let ϕ ≡ 0. Then the inequality

‖y‖_{Ωh} ≤ ‖µ‖_{γh}

holds for the solution of problem (2.141), (2.143). Here

‖y‖_{Ωh} = max_{xi∈Ωh} |y(xi)|,  ‖µ‖_{γh} = max_{xi∈γh} |µ(xi)|.
2.5 Bibliographical comments
There are many textbooks about linear differential equations of second order. Some of them have been written specially for a preliminary or first reading; among them are the textbooks by S. Farlow [34], W. Boyce and R. DiPrima [6], E. Zauderer [28], and P. Berg and J. McGregor [31]. Others contain the material in more detail, for example the textbooks by H. Bateman [29], S. Sobolev [30], A. Tikhonov and A. Samarskii [43], and V. Vladimirov [42]; nevertheless, they are suitable for a first reading too. We refer to the textbook by Vladimirov as a very clearly written book with a modern point of view on the material.
Regarding numerical simulations for elliptic equations, the textbooks by A. Samarskii [40], G. E. Forsythe and W. Wasow [32], and W. Ames [33] can be recommended.
The classification of linear PDEs of the second order is described in any of the cited textbooks, where the n-dimensional case, n ≥ 2, is also considered.
The physical meaning of elliptic equations and boundary conditions is described in more detail in the books [34], [6], [29], [30], [43], [42].
The proof of the properties of harmonic functions can be found, for example, in [30], [42], where some additional properties of harmonic functions are considered as well.
A detailed consideration of the Green functions for boundary valueproblems can be found in [28], [30], [42].
We did not touch here on such methods for solving boundary value problems for elliptic equations as the transformation of the differential problem into an integral equation. For this approach we refer the readers to the books [30], [42].
Historically, the Fourier method was the first to be used for numerical simulations of boundary value problems, so nowadays there are many examples of such calculations for domains of different geometries. Some of them can be found in [34], [29], [31], [30], [43], [42]. The convergence of Fourier methods is discussed there, too.
The Sturm-Liouville problem is described in the textbooks [28], [6], [43], [42]. It is also considered in any textbook on ordinary differential equations.
Regular perturbations for elliptic boundary value problems, including the case of boundary perturbations, have been discussed in [28].
The theory of boundary layer asymptotics for singularly perturbed elliptic equations was first developed by M. I. Vishik and L. A. Lyusternik [38]. The modern theory of such asymptotics can be found in the excellent book by A. Il'in [39]. In less detail, this theory is described in [28]; the case of degeneration of an elliptic equation into equations of the first order is also considered there.
We do not discuss in the present book the solution of boundary value problems for the Laplace equation by conformal mappings; for this theory we refer the readers to the textbook [29]. The modern theory of equations of elliptic type is the content of the books [41], [35], [36], [20] and [2]. We refer the readers to these books for a second reading.
Another remark concerns boundary value problems in external domains and the Helmholtz equation. This material is not presented here, and we refer the readers to the book [42].
As for numerical simulations, we restricted ourselves to the case of rectangular domains. More general cases are discussed in the cited books [40], [32] and [33]. A consideration of Jordan's method for finding the iterative parameters τ^{(k)}_j can be found in the paper [37]. Other iterative schemes for elliptic problems are described in detail in the book by A. Samarskii [40].
Chapter 3
Parabolic Equations
3.1 Linear differential equations of the parabolic type

3.1.1 Cauchy problems and boundary value problems for parabolic equations
Consider the parabolic equation

ρ ∂u/∂t + Lu = f,  t > 0,  x ∈ Ω ⊆ Rⁿ,   (3.1)

where Lu = −〈∇, p∇u〉 + qu is an elliptic operator, p = p(x), q = q(x). Recall that p ≥ constant > 0 and q ≥ 0 are sufficiently smooth functions, p ∈ C1(Ω), q ∈ C(Ω). We assume that ρ = ρ(x) ≥ constant > 0 and f = f(x) are continuous functions.
From the physical point of view this equation describes both the propagation of heat and the diffusion of particles in a substance. In this interpretation, ρ is the density, p is the diffusivity of heat (a measure of the heat conductivity of the medium), the term qu describes the loss of heat in the medium, and the right-hand side, f, is the heat source. This equation is derived from physical principles, namely the conservation of energy and the facts that usually the rate of heat energy flow is proportional to the temperature gradient and that the loss of heat is described by a linear function (like Hooke's law in mechanics). For simplicity, we assume that ρ = 1.
In order to obtain a unique solution of (3.1) it is necessary to specify the temperature at an initial instant of time. So, we prescribe the initial condition

u|_{t=0} = u0(x),  x ∈ Ω.   (3.2)
Consider first the case Ω = Rⁿ. Then it is sufficient to supply (3.1) with the condition (3.2) to guarantee the uniqueness of solutions. However, for many reasons, one assumes that u vanishes at infinity. Problem (3.1)-(3.2) is called the Cauchy problem for the diffusion (heat) equation.
Consider now the case of a bounded domain Ω with piecewise smooth boundary ∂Ω. Now we have to specify boundary conditions on the surface Σ = (0,∞) × ∂Ω of the cylinder Q = (0,∞) × Ω. In the general situation, we consider the boundary condition of the third type

αu + β ∂u/∂ν|Σ = v(x′, t),  x′ ∈ ∂Ω.   (3.3)

Here, as in the case of elliptic equations, α ≥ 0 and β ≥ 0 are continuous functions such that α + β > 0, and ν is the outward normal to Σ.
Problem (3.1), (3.2), (3.3) is called the mixed problem for thediffusion equation or, what is the same, the boundary value problem.
Specifying the boundary condition as the Dirichlet type (β = 0),

u|Σ = v,

or as the Neumann type (α = 0),

∂u/∂ν|Σ = v,

we obtain the Dirichlet boundary value problem or the Neumann boundary value problem for the diffusion equation.
3.1.2 The Cauchy problem for the diffusion equation
Consider the problem

∂u/∂t = a²∆u + f,  t > 0,  x ∈ Rⁿ,
u|_{t=0} = u0(x),   (3.4)

where f = f(x, t) and u0 are continuous functions and a is a positive constant.
Definition 3.1.1. A function u ∈ C²((0,∞) × Rⁿ) ∩ C([0,∞) × Rⁿ) is called a classical solution of the Cauchy problem (3.4) if u satisfies equation (3.4) and attains the initial condition as t → 0.
Theorem 3.1.1. Let f and u0 be bounded continuous functions. Then the classical solution of problem (3.4) exists and is unique. Moreover, the solution is a bounded function in C²((0,∞) × Rⁿ) ∩ C([0,∞) × Rⁿ) which depends continuously on f and u0; that is to say, if

|f − f̄| < ε and |u0 − ū0| < ε0,

then the corresponding solutions u and ū satisfy the estimate

|u(x, t) − ū(x, t)| < Tε + ε0

on (0, T) × Rⁿ for any T.
Remark 3.1.1. The Cauchy problem (3.4) is also well-posed for functions such that, for some constant α_T ≥ 0, the estimate

|u(x,t)| ≤ C_T e^{α_T |x|²}

holds in (0, T) × Rⁿ for any T.
The solution for the Cauchy problem (3.4) can be written in explicit form. To do this, consider the fundamental solution (or Gaussian kernel, or Gauss-Weierstrass kernel, or heat kernel) for the heat equation, i.e., the solution of the equation

∂E_n/∂t − a²ΔE_n = δ(x, t),

where δ is the Dirac δ-function. The distribution E_n is of the form

E_n(x,t) = H(t)/(2a√(πt))ⁿ · e^{−|x|²/(4a²t)},  (3.5)
where H(t) is the Heaviside function. The fundamental solution E_n has the following properties:

1.- E_n = 0 for t < 0;

2.- E_n → δ(x) as t → +0 in the D′ sense;

3.- ∫_{Rⁿ} E_n dx = 1 for t > 0.
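These properties are easy to check numerically. The following sketch (not from the book; all numerical parameters are illustrative) verifies the unit-mass property 3 and the δ-limit of property 2 for the one-dimensional kernel by testing it against a smooth function φ:

```python
import numpy as np

# Illustrative check of the 1-D fundamental solution
# E_1(x,t) = H(t)/(2a*sqrt(pi*t)) * exp(-x^2/(4 a^2 t)); a is arbitrary.
a = 1.3
x = np.linspace(-40.0, 40.0, 20001)

def E1(x, t):
    return np.exp(-x**2 / (4 * a**2 * t)) / (2 * a * np.sqrt(np.pi * t))

# property 3: unit mass for every t > 0
masses = [np.trapz(E1(x, t), x) for t in (0.01, 0.1, 1.0, 5.0)]

# property 2: against a test function phi, the action of E_1(., t)
# tends to phi(0) as t -> +0
phi = np.cos(x) * np.exp(-x**2)
action = np.trapz(E1(x, 1e-4) * phi, x)   # should approach phi(0) = 1
```

The quadrature grid is wide and fine enough that the trapezoid rule resolves the Gaussian for all sampled times.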
Using the fundamental solution, the solution to the Cauchy problem (3.4) is represented as the sum of the heat potential V(x,t) and the surface heat potential V⁰(x,t):

u(x,t) = V(x,t) + V⁰(x,t),  (3.6)

V(x,t) = ∫₀ᵗ ∫_{Rⁿ} f(ξ,τ)/(2a√(π(t−τ)))ⁿ · exp(−|x−ξ|²/(4a²(t−τ))) dξ dτ,  (3.7)

V⁰(x,t) = 1/(2a√(πt))ⁿ ∫_{Rⁿ} u0(ξ) exp(−|x−ξ|²/(4a²t)) dξ,  t > 0.  (3.8)
The potentials satisfy the estimates

|V(x,t)| ≤ t sup_ξ sup_{0≤τ≤t} |f(ξ,τ)|,  t > 0,

|V⁰(x,t)| ≤ sup_ξ |u0(ξ)|,  t > 0,

and

V(x,t)|_{t=+0} = 0,  V⁰(x,t) → u0(x) as t → +0.
Representation (3.6) allows us to explain the nature of heat diffusion. Consider the problem (3.4) in the weak sense and let f ≡ 0 and u0(x) = δ(x). Then V ≡ 0 and

V⁰(x,t) = E_n(x,t),  t > 0.  (3.9)

This means that the influence of the initial source of heat at the point x = 0 diffuses immediately over the whole space Rⁿ. The solution (3.9) vanishes at infinity exponentially fast; however, the support of u is no longer compact for t > 0.
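As a hedged illustration of formula (3.8) in one dimension, the surface heat potential can be evaluated by quadrature and compared with a classical exact solution: for u0(x) = sin x the Cauchy problem has the solution e^{−a²t} sin x. The grid sizes below are arbitrary choices:

```python
import numpy as np

# Illustrative evaluation of the surface heat potential (3.8), n = 1.
a, t = 0.8, 0.5
xi = np.linspace(-30.0, 30.0, 60001)     # quadrature nodes for the integral
xs = np.array([-1.0, 0.3, 2.0])          # sample evaluation points

def V0(x):
    kern = np.exp(-(x - xi)**2 / (4 * a**2 * t)) / (2 * a * np.sqrt(np.pi * t))
    return np.trapz(kern * np.sin(xi), xi)

num = np.array([V0(x) for x in xs])
exact = np.exp(-a**2 * t) * np.sin(xs)   # known exact solution for u0 = sin
err = np.max(np.abs(num - exact))
```

The truncation of Rⁿ to [−30, 30] is harmless here because the Gaussian kernel decays far faster than the quadrature window.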
Next, we show that the homogeneous Cauchy problem

∂u/∂t = a²Δu,  t > 0,  x ∈ Rⁿ,
u|_{t=0} = u0(x)  (3.10)

represents the general case of Cauchy problems for the diffusion equation.
Indeed, consider the problems

∂v/∂t = a²Δv + f(x,t),  t > 0,  x ∈ Rⁿ,
v|_{t=0} = 0,  (3.11)

and

∂w/∂t = a²Δw,  t > τ,  x ∈ Rⁿ,
w|_{t=τ} = f(x,τ).  (3.12)

The solution of the last problem can be expressed in the form w(x,t,τ) = V⁰(x, t−τ), where the surface heat potential V⁰ is calculated by formula (3.8) with u0(ξ) replaced by f(ξ,τ). Write

v(x,t) = ∫₀ᵗ w(x,t,τ) dτ.  (3.13)
Then

∂v/∂t = w(x,t,τ)|_{τ=t} + ∫₀ᵗ ∂w/∂t (x,t,τ) dτ

= f(x,t) + a² ∫₀ᵗ Δw(x,t,τ) dτ

= f(x,t) + a²Δ ∫₀ᵗ w(x,t,τ) dτ

= f(x,t) + a²Δv.
This constitutes the proof of the following theorem.

Theorem 3.1.2. (The Duhamel Principle) Let w(x,t,τ) be the solution of the Cauchy problem (3.12). Then the solution of (3.11) is given by the formula (3.13).
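The Duhamel principle can be checked on a simple illustrative case (our choice, not from the book): for a = 1 and f = sin x, problem (3.12) has the explicit solution w(x,t,τ) = e^{−(t−τ)} sin x, and (3.13) gives v = (1 − e^{−t}) sin x.

```python
import numpy as np

# Illustrative check of the Duhamel formula (3.13) with a = 1, f = sin(x).
x0, t0 = 0.7, 1.3
taus = np.linspace(0.0, t0, 4001)
# quadrature of (3.13) using w(x,t,tau) = exp(-(t - tau)) sin(x)
v_quad = np.trapz(np.exp(-(t0 - taus)) * np.sin(x0), taus)
v_exact = (1 - np.exp(-t0)) * np.sin(x0)

# residual of v_t - v_xx - f computed from the closed form:
# v_t = exp(-t) sin x, v_xx = -(1 - exp(-t)) sin x
residual = np.exp(-t0)*np.sin(x0) + (1 - np.exp(-t0))*np.sin(x0) - np.sin(x0)
```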
3.1.3 The mixed problem for the diffusion equation
Consider the initial boundary value problem

∂u/∂t = ⟨∇, p∇u⟩ − qu + f,  (x,t) ∈ Q,
u|_{t=0} = u0(x),  x ∈ Ω,
u|_Σ = v(x,t),  (x,t) ∈ Σ,  (3.14)

where Q = (0,∞) × Ω is the cylinder and Σ = (0,∞) × ∂Ω is the surface of Q. The remaining notation is the same as for equation (3.1).

Let u0 and v satisfy the matching condition

u0|_{∂Ω} = v|_{t=0}.  (3.15)
Definition 3.1.2. A function u ∈ C²(Q) ∩ C(Q̄) is called a classical solution of problem (3.14) if it satisfies the equation, the initial condition, and the boundary condition in (3.14).
Theorem 3.1.3. Let f ∈ C(Q), u0 ∈ C(Ω̄) and v ∈ C(Σ). Assume that the matching condition (3.15) holds. Then the classical solution of the mixed problem (3.14) exists, is unique, and depends continuously on u0, f and v.

Let us assume that

‖f − f̃‖_{C(Q_T)} ≤ ε,  ‖u0 − ũ0‖_{C(Ω̄)} ≤ ε0,  ‖v − ṽ‖_{C(Σ_T)} ≤ ε1,

where Q_T = [0,T] × Ω̄, Σ_T = [0,T] × ∂Ω and T > 0 is an arbitrary number. Then the difference between the two classical solutions u, ũ satisfies the inequality

‖u − ũ‖_{C(Q_T)} ≤ max(ε0, ε1) + Tε.
One of the main properties of solutions of the mixed problem is the maximum principle.
Theorem 3.1.4. Assume that u ∈ C²(Q) ∩ C(Q̄) satisfies the equation in (3.14). Assume also that f ≤ 0 in Q_T. Then either u ≤ 0 in Q_T or u attains its positive maximum value on Σ_T or on {0} × Ω̄:

u(x,t) ≤ max{ 0,  max_{x∈Ω̄, t=0} u(x,t),  max_{x∈∂Ω, t∈[0,T]} u(x,t) }.
If we replace u by −u we obtain the minimum principle.

Theorem 3.1.5. Assume that u satisfies the hypotheses of Theorem 3.1.4, but with f ≥ 0 in Q_T. Then

u(x,t) ≥ min{ 0,  min_{x∈Ω̄, t=0} u(x,t),  min_{x∈∂Ω, t∈[0,T]} u(x,t) }.
Example 3.1.1. Consider the mixed problem

∂u/∂t = (1/2) ∂²u/∂x²,  t > 0,  x ∈ (0, 2π),
u|_{t=0} = e^{−x} cos x,
u|_{x=0} = cos t,  u|_{x=2π} = e^{−2π} cos t.

The exact solution is

u(x,t) = e^{−x} cos(x − t).

It can be checked directly that this function attains its maximum and minimum values only on the boundary {x = 0} ∪ {x = 2π} ∪ {t = 0}. ♦
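A quick numerical confirmation of the maximum and minimum principles on Example 3.1.1 (sampling sizes are arbitrary):

```python
import numpy as np

# Sample the exact solution u = exp(-x) cos(x - t) of Example 3.1.1 on
# [0, T] x [0, 2*pi] and check that the extrema over the whole cylinder
# are attained on the parabolic boundary {t = 0} or {x = 0} or {x = 2*pi}.
T = 3.0
x = np.linspace(0.0, 2*np.pi, 401)
t = np.linspace(0.0, T, 401)
X, Tm = np.meshgrid(x, t)            # rows: fixed t, columns: fixed x
U = np.exp(-X) * np.cos(X - Tm)

# values on the parabolic boundary: t = 0, x = 0, x = 2*pi
boundary = np.concatenate([U[0, :], U[:, 0], U[:, -1]])
```

Since the boundary samples are a subset of the full grid, the assertions below say that no interior value exceeds the boundary extrema.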
3.1.4 The Fourier method for the diffusion equation
Consider the initial boundary value problem

∂u/∂t + Lu = f(x,t),  (x,t) ∈ Q,
u|_{t=0} = u0(x),  x ∈ Ω,
(αu + β ∂u/∂ν)|_Σ = 0,  (3.16)

where L = −⟨∇, p∇⟩ + q is an elliptic operator. Recall that L, supplied with boundary conditions, has a countable set of eigenvalues λ_j and eigenfunctions φ_j = φ_j(x). Furthermore, any function

g ∈ M_L := { g ∈ C²(Ω) ∩ C¹(Ω̄) : (αg + β ∂g/∂ν)|_{∂Ω} = 0 }

can be expanded into a Fourier series

g(x) = Σ_{j=1}^∞ g_j φ_j(x),  g_j = ∫_Ω g(x) φ_j(x) dx.  (3.17)
Moreover, the system of eigenfunctions φ_j, j = 1, 2, ..., is complete in L2(Ω). Thus, it is possible to approximate functions g ∈ L2(Ω) by the Fourier expansion (3.17).
Let us write the solution of problem (3.16) in the form

u(x,t) = Σ_{j=1}^∞ T_j(t) φ_j(x),  (3.18)
where φ_j are the eigenfunctions of L and T_j(t) are unknown functions of t. We write similar representations for f and u0:

f(x,t) = Σ_{j=1}^∞ f_j(t) φ_j(x),  u0(x) = Σ_{j=1}^∞ u0_j φ_j(x),  (3.19)

where

f_j(t) = ∫_Ω f(x,t) φ_j(x) dx,  u0_j = ∫_Ω u0(x) φ_j(x) dx.
Substitute (3.18), (3.19) into (3.16) to obtain the relations

∂u/∂t + Lu = Σ_{j=1}^∞ ( T_j′ φ_j + T_j Lφ_j ) = Σ_{j=1}^∞ ( T_j′ + λ_j T_j ) φ_j = Σ_{j=1}^∞ f_j φ_j.

From the last equality we conclude that, for each j = 1, 2, ..., the function T_j is the solution of the first-order linear differential equation

T_j′ + λ_j T_j = f_j,  T_j|_{t=0} = u0_j.  (3.20)
The solution of (3.20) is

T_j(t) = u0_j e^{−λ_j t} + ∫₀ᵗ e^{−λ_j(t−τ)} f_j(τ) dτ.

Consequently, the solution of the original problem (3.16) is

u(x,t) = Σ_{j=1}^∞ { u0_j e^{−λ_j t} + ∫₀ᵗ e^{−λ_j(t−τ)} f_j(τ) dτ } φ_j(x).  (3.21)
Example 3.1.2. Let Ω = (0, ℓ), p = 1, q = 0, α = 1, β = 0. Let also f = 0. Then λ_j = (jπ/ℓ)², φ_j(x) = √(2/ℓ) sin(jπx/ℓ) and

u(x,t) = Σ_{j=1}^∞ u0_j e^{−λ_j t} φ_j(x),  u0_j = ∫₀ˡ u0(x) φ_j(x) dx. ♦
Example 3.1.3. Let f = f(x) be a function independent of t and let α > 0. Then formula (3.21) can be rewritten in the form

u(x,t) = Σ_{j=1}^∞ { u0_j e^{−λ_j t} + (f_j/λ_j)(1 − e^{−λ_j t}) } φ_j(x),  (3.22)

where 0 < λ₁ < λ₂ < ···. Formula (3.22) shows that the solution stabilizes as t → ∞. The limiting function

u_∞(x) = Σ_{j=1}^∞ (f_j/λ_j) φ_j(x)  (3.23)

is the solution of the elliptic problem

Lu_∞ = f(x),  (αu_∞ + β ∂u_∞/∂ν)|_{∂Ω} = 0.  (3.24)

This fact is the background for the stabilization method for numerical simulations of elliptic equations. ♦
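The stabilization method can be sketched as follows (illustrative data f = sin(πx/ℓ); the explicit time-stepping scheme used here is introduced in Section 3.2): integrating the parabolic problem to a large time yields the solution of the elliptic problem −u″ = f, u(0) = u(ℓ) = 0, whose exact solution for this f is (ℓ/π)² sin(πx/ℓ).

```python
import numpy as np

# Stabilization sketch: u_t = u_xx + f with zero Dirichlet data, run to
# a time much larger than 1/lambda_1; the steady state approximates the
# elliptic problem -u'' = f.
l, N = 1.0, 50
h = l / N
x = np.linspace(0.0, l, N + 1)
f = np.sin(np.pi * x / l)
tau = 0.4 * h**2                       # stable: tau/h^2 <= 1/2 for a = 1
u = np.zeros(N + 1)
for _ in range(int(3.0 / tau)):        # integrate up to t = 3 >> 1/lambda_1
    u[1:-1] += tau * ((u[2:] - 2*u[1:-1] + u[:-2]) / h**2 + f[1:-1])

u_inf = (l / np.pi)**2 * np.sin(np.pi * x / l)
err = np.max(np.abs(u - u_inf))
```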
Example 3.1.4. Consider the mixed problem with Neumann condition

∂u/∂t = ⟨∇, p∇u⟩ + f(x),
u|_{t=0} = u0(x),  ∂u/∂ν|_Σ = 0.  (3.25)

The first eigenvalue of the corresponding eigenvalue problem

−⟨∇, p∇φ_j⟩ = λ_j φ_j,  x ∈ Ω,  ∂φ_j/∂ν|_{∂Ω} = 0,

is zero, λ₀ = 0, and φ₀ = 1. Thus, we cannot use expansion (3.22) in such a situation. Moreover, the limiting problem (3.24) with α = 0 and q = 0 is solvable only under the condition

∫_Ω f(x) dx = 0.  (3.26)
Consider now the average ū with respect to x of the solution of equation (3.25); that is,

ū = ∫_Ω u(x,t) dx.

Under the condition (3.26) the average ū satisfies the equation

dū/dt = ∫_{∂Ω} p ∂u/∂ν dσ + ∫_Ω f(x) dx = 0,

whereas the mean value ū grows in time if (3.26) does not hold. Obviously, under the Neumann condition and (3.26) we have

ū = ū0 := ∫_Ω u0(x) dx
and, hence, we obtain

u = ū0 + Σ_{j=1}^∞ { u0_j e^{−λ_j t} + (f_j/λ_j)(1 − e^{−λ_j t}) } φ_j(x),

where

u0_j = ∫_Ω (u0(x) − ū0) φ_j(x) dx,  f_j = ∫_Ω f(x) φ_j(x) dx.
Respectively, passing to the limit as t → ∞, we find the solution

u_∞ = ū0 + Σ_{j=1}^∞ (f_j/λ_j) φ_j(x)  (3.27)

of the limiting Neumann boundary value problem

−⟨∇, p∇u_∞⟩ = f(x),  x ∈ Ω,  ∂u_∞/∂ν|_{∂Ω} = 0.  (3.28)

We stress that problem (3.28) does not depend on the function u0(x). Thus, in the solution (3.27), ū0 is an arbitrary constant.

Obviously, the appearance of the term ū0 in the limiting solution u_∞ is the result of the non-uniqueness of Neumann boundary value problems. ♦
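A hedged numerical illustration of the role of condition (3.26): with a discrete 1-D Neumann condition imposed through reflective ghost points (an implementation choice, not from the book), the trapezoidal mean of the solution stays constant in time when the discrete mean of f vanishes.

```python
import numpy as np

# 1-D Neumann problem u_t = u_xx + f with f = cos(2 pi x/l), whose
# trapezoidal sum over a full period is exactly zero (condition (3.26)).
l, N = 1.0, 80
h = l / N
x = np.linspace(0.0, l, N + 1)
f = np.cos(2 * np.pi * x / l)
u = 1.0 + 0.3 * np.cos(np.pi * x / l)   # initial mean equals 1
tau = 0.4 * h**2                        # stable explicit step

def lap(u):
    ue = np.concatenate(([u[1]], u, [u[-2]]))   # ghosts: du/dnu = 0
    return (ue[2:] - 2*ue[1:-1] + ue[:-2]) / h**2

mean0 = np.trapz(u, x) / l
for _ in range(2000):
    u = u + tau * (lap(u) + f)
mean_T = np.trapz(u, x) / l
```

With reflective ghosts, the trapezoidal sum of the discrete Laplacian telescopes to zero, so the conservation holds to rounding error.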
3.1.5 A-priori estimates for mixed problems
Consider a mixed problem for the diffusion equation

∂u/∂t − ⟨∇, p(x,t)∇u⟩ + q(x,t)u = f(x,t),  (x,t) ∈ Q,
u|_{t=0} = u0(x),  x ∈ Ω,
u|_Σ = 0.  (3.29)

In contrast with problem (3.16), the coefficients p(x,t) ≥ p₀ = constant > 0 and q(x,t) ≥ 0 depend on t. It is clear that this does not prevent the maximum principle from holding for this problem. However, the dependence of p and/or q on t is an obstacle to the application of the Fourier method. The simplest way to describe the energy dynamics for problem (3.29) is the use of a-priori estimates. A-priori means that both the existence of the solution and its smoothness are assumed from the outset.
This construction for problem (3.29) is extremely simple. Indeed, let us multiply the heat equation (3.29) by u and integrate over Ω. Obviously,

∫_Ω (∂u/∂t) u dx = (1/2) d/dt ∫_Ω u² dx,

∫_Ω u ⟨∇, p∇u⟩ dx = ∫_{∂Ω} u p ⟨ν, ∇u⟩ dσ − ∫_Ω p|∇u|² dx,
where ν is the outer normal vector to the boundary ∂Ω. Thus we come to the equality

d/dt ∫_Ω u² dx + 2∫_Ω p|∇u|² dx + 2∫_Ω q u² dx = 2∫_Ω f u dx.  (3.30)
Let us recall the definitions of some useful spaces.

Definition 3.1.3. L2(Ω) denotes the Banach space of functions with finite norm

‖f; L2(Ω)‖ = ( ∫_Ω |f(x)|² dx )^{1/2},

where the integral is understood in the Lebesgue sense.
Definition 3.1.4. W₂ᵏ(Ω), k = 0, 1, 2, ..., denotes the Sobolev space of functions such that any derivative (in the sense of distributions) ∂^α f/∂x^α, |α| ≤ k, belongs to L2(Ω). The norm in W₂ᵏ(Ω) is defined as

‖f; W₂ᵏ(Ω)‖ = [ ‖f; L2(Ω)‖² + Σ_{i=1}ⁿ ‖∂ᵏf/∂x_iᵏ; L2(Ω)‖² ]^{1/2}.  (3.31)

Obviously W₂ᵏ(Ω) is also a Banach space. Notice that W₂⁰(Ω) = L2(Ω).

In order to simplify the notation we will write

‖f‖ := ‖f; L2(Ω)‖,  ‖f‖_{(k)} := ‖f; W₂ᵏ(Ω)‖.
Definition 3.1.5. W̊₂ᵏ(Ω) denotes the closure of C₀^∞(Ω) with respect to the norm (3.31).

The main difference between W₂ᵏ(Ω) and W̊₂ᵏ(Ω) is that functions in W̊₂ᵏ(Ω) are "equal" to zero on the boundary ∂Ω. This means the following:

Lemma 3.1.1. Let f ∈ W̊₂ᵏ(Ω) and k ≥ 1. Then

‖f; L2(∂Ω)‖ := ( ∫_{∂Ω} f² dσ )^{1/2} = 0.

Let k > n/2, where n is the spatial dimension. Then f|_{∂Ω} = 0.
Lemma 3.1.2. (Poincaré, Friedrichs) Let f ∈ W̊₂¹(Ω). Then there exists a constant c > 0, depending only on Ω, such that

‖f‖ ≤ c(Ω) [ Σ_{i=1}ⁿ ‖∂f/∂x_i‖² ]^{1/2}.  (3.32)
This assertion implies the following

Corollary 3.1.1. The norm (3.31) and the norm

‖f; W₂ᵏ(Ω)‖ = [ Σ_{i=1}ⁿ ‖∂ᵏf/∂x_iᵏ‖² ]^{1/2}

are equivalent in W̊₂ᵏ(Ω).
Let us go back to equality (3.30). Integrating it with respect to t and using the properties p ≥ p₀ > 0, q ≥ 0 we obtain the inequality

‖u‖²(t) + 2p₀ ∫₀ᵗ ‖u‖²_{(1)}(t′) dt′ ≤ ‖u0‖² + 2∫₀ᵗ ∫_Ω |f u| dx dt′.  (3.33)
To transform the right-hand side in (3.33), we use the Cauchy-Bunyakovsky-Schwarz inequality with arbitrary α > 0:

2∫₀ᵗ ∫_Ω |f u| dx dt′ ≤ (1/α) ∫₀ᵗ ‖f‖²(t′) dt′ + α ∫₀ᵗ ‖u‖²(t′) dt′.  (3.34)
Next, note that according to (3.32) and to Corollary 3.1.1 we have

α ∫₀ᵗ ‖u‖²(t′) dt′ ≤ α c²(Ω) ∫₀ᵗ ‖u‖²_{(1)}(t′) dt′.  (3.35)

Choose α = p₀/c²(Ω). Then inequalities (3.33)-(3.35) result in the estimate

‖u‖²(t) + p₀ ∫₀ᵗ ‖u‖²_{(1)}(t′) dt′ ≤ ‖u0‖² + (1/α) ∫₀ᵗ ‖f‖²(t′) dt′,  (3.36)
which means that the left-hand side is uniformly bounded in t if the L2(Ω) norm of f has a "good" behavior in t. Moreover, using estimate (3.32) again, we can rewrite (3.36) as follows:

‖u‖²(t) + α ∫₀ᵗ ‖u‖²(t′) dt′ ≤ ‖u0‖² + (1/α) ∫₀ᵗ ‖f‖²(t′) dt′.  (3.37)
Denoting

v(t) = ‖u‖²(t),  φ(t) = (1/α) ‖f‖²(t),

rewrite (3.37) as follows:

v(t) + α ∫₀ᵗ v(t′) dt′ ≤ v₀ + ∫₀ᵗ φ(t′) dt′.  (3.38)
The assumption

v(t) > v₀ e^{−αt} + ∫₀ᵗ e^{−α(t−τ)} φ(τ) dτ

and the identity

α ∫₀ᵗ ∫₀^{t′} e^{−α(t′−τ)} φ(τ) dτ dt′ = ∫₀ᵗ φ(τ) dτ − ∫₀ᵗ e^{−α(t−τ)} φ(τ) dτ

immediately lead to a contradiction with (3.38).
Thus, we obtain the final estimate

‖u‖²(t) ≤ ‖u0‖² e^{−αt} + (1/α) ∫₀ᵗ e^{−α(t−τ)} ‖f‖²(τ) dτ,  (3.39)

which describes, similarly to the Fourier expansion (3.21), the dissipation of the solution.
Consider problem (3.29) with negative coefficient q, |q(x,t)| ≤ q₀. After performing the same transformations as above we obtain an inequality of the form (3.33) with the additional term

2q₀ ∫₀ᵗ ‖u‖²(t′) dt′  (3.40)

in the right-hand side. Further estimates depend on the relation between the constants p₀, q₀ and c(Ω) (see inequality (3.32)). Let

p₀/c²(Ω) − q₀ = μ > 0.  (3.41)
Then, instead of (3.37), we obtain the inequality

‖u‖²(t) + (2μ − α) ∫₀ᵗ ‖u‖²(t′) dt′ ≤ ‖u0‖² + (1/α) ∫₀ᵗ ‖f‖²(t′) dt′.  (3.42)

Hence, choosing α = μ we again obtain an inequality of the form (3.39), but with a smaller coefficient α. Conversely, let assumption (3.41) be broken; that is, assume that

p₀/c²(Ω) < q₀.  (3.43)
Then the dissipation term in the left-hand side of (3.33) cannot compensate for the growth implied by the term (3.40) in the right-hand side. Hence, instead of (3.42) we obtain only the following inequality with some coefficients β > 0 and α > 0:

‖u‖²(t) ≤ ‖u0‖² + (1/α) ∫₀ᵗ ‖f‖²(t′) dt′ + β ∫₀ᵗ ‖u‖²(t′) dt′.  (3.44)
Now, let us use the following well-known fact:

Lemma 3.1.3. (Gronwall) Let g(t) ≥ 0 and let C_i, i = 1, 2, be non-negative constants. Let us also assume that

g(t) ≤ C₁ + C₂ ∫₀ᵗ g(t′) dt′.

Then

g(t) ≤ C₁ e^{C₂ t}.
Fix a time instant T > 0 and let t ≤ T. Denote

C₁ = ‖u0‖² + (1/α) ∫₀ᵀ ‖f‖²(t′) dt′,  g(t) = ‖u‖²(t).

Then, according to Gronwall's lemma, inequality (3.44) implies the estimate

‖u‖²(t) ≤ C₁ e^{βt}.  (3.45)
Therefore, in the case (3.43) the solution remains bounded only on arbitrary, but finite, time intervals.

Finally, note that a-priori estimates are the basis for proving both existence and uniqueness theorems, not only for parabolic equations and not only in the linear case. For more details we refer the reader to the excellent book [46].
3.2 Finite difference schemes for parabolic equations
3.2.1 The one-dimensional spatial case
Finite-difference schemes and the order of approximation
Consider the simplest version of the mixed problem for the diffusion equation

∂u/∂t = a ∂²u/∂x²,  t > 0,  x ∈ (0, ℓ),
u|_{t=0} = u⁰(x),  u|_{x=0} = u_0(t),  u|_{x=ℓ} = u_ℓ(t).  (3.46)

In order to create a finite-difference scheme, we set a net on the strip [0,∞) × [0, ℓ]. Denote by (x_i, t_j) the knots, where t_j = jτ, j = 0, 1, ..., and x_i = ih, i = 0, 1, ..., N, with Nh = ℓ. The parameters τ and h are the time and space steps of the discretization, respectively. The approximate value of the function u(x,t) at the point (x_i, t_j) is denoted by y_i^j.
Consider a knot (x_i, t_j) such that j ≥ 1 and 0 < i < N. We can use at least two finite-difference versions of the first derivative ∂u/∂t:

∂u/∂t → y_{it}^j := (y_i^{j+1} − y_i^j)/τ,  (3.47)

∂u/∂t → y_{it̄}^j := (y_i^j − y_i^{j−1})/τ.  (3.48)
For the second derivative (with respect to x) we use the discretization

∂²u/∂x² → y_{ix̄x}^j := (y_{i+1}^j − 2y_i^j + y_{i−1}^j)/h².  (3.49)

If we combine formulas (3.47) and (3.49) we obtain a finite-difference version of the heat equation (3.46):

y_{it}^j = a y_{ix̄x}^j,  i = 1, 2, ..., N − 1,  j ≥ 0.  (3.50)

Similarly, we can combine (3.48) and (3.49) to obtain another version of (3.46):

y_{it̄}^j = a y_{ix̄x}^j,  i = 1, 2, ..., N − 1,  j ≥ 1.  (3.51)
Consider the version (3.50). The grid points for this scheme are (t_j, x_{i−1}), (t_j, x_i), (t_j, x_{i+1}) and (t_{j+1}, x_i) (see Figure 3.1).

Figure 3.1: Grid points for the explicit scheme (3.50)

Since it is natural to assume that the function y is known on layer j, the equations (3.50) can be rewritten in explicit form for the unknowns y_i^{j+1}:

y_i^{j+1} = a (τ/h²) (y_{i+1}^j − 2y_i^j + y_{i−1}^j) + y_i^j,  i = 1, 2, ..., N − 1,  j ≥ 0.  (3.52)
That is why the scheme (3.50) is called an explicit scheme.
Now it is clear how to design the algorithm of calculations. Indeed, we derive from the initial condition in (3.46) the value of y on the 0-th layer:

y_i^0 = u⁰(x_i),  i = 0, 1, ..., N.  (3.53)

Then, using formula (3.52), we calculate y_i^1 for i = 1, 2, ..., N − 1. To determine y_0^1 and y_N^1 we use the boundary conditions in (3.46):

y_0^j = u_0(t_j),  y_N^j = u_ℓ(t_j),  j = 1, 2, ....  (3.54)

Note that, due to the matching conditions, u_0(0) = u⁰(0) and u_ℓ(0) = u⁰(ℓ).

After that, we repeat the same procedure for j = 1, 2, ...
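The algorithm (3.52)-(3.54) can be sketched in a few lines. As an illustrative test we reuse the data of Example 3.1.1 with exact solution e^{−x} cos(x − t); the step sizes below are arbitrary but satisfy the stability restriction τ/h² ≤ 1/(2a) established later in this section.

```python
import numpy as np

# Explicit scheme (3.52)-(3.54) for problem (3.46) with a = 1/2, l = 2*pi.
a, N = 0.5, 64
h = 2*np.pi / N
x = np.linspace(0.0, 2*np.pi, N + 1)
tau = 0.25 * h**2 / a                 # a*tau/h^2 = 1/4 < 1/2: stable
r = a * tau / h**2

y = np.exp(-x) * np.cos(x)            # (3.53): y_i^0 = u0(x_i)
t = 0.0
while t < 1.0 - 1e-12:
    y[1:-1] = r * (y[2:] - 2*y[1:-1] + y[:-2]) + y[1:-1]   # (3.52)
    t += tau
    y[0] = np.cos(t)                                       # (3.54)
    y[-1] = np.exp(-2*np.pi) * np.cos(t)

err = np.max(np.abs(y - np.exp(-x) * np.cos(x - t)))
```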
Consider the version (3.51). The grid points for this scheme are (t_{j+1}, x_{i−1}), (t_{j+1}, x_i), (t_{j+1}, x_{i+1}) and (t_j, x_i) (see Figure 3.2).

Figure 3.2: Grid points for the implicit scheme (3.51)
So we cannot write the solution of equations (3.51) in an explicit form similar to (3.52). That is why (3.51) is called an implicit scheme. However, this scheme can be rewritten in the following form:

(aτ/h²) y_{i−1}^{j+1} − (1 + 2aτ/h²) y_i^{j+1} + (aτ/h²) y_{i+1}^{j+1} = −y_i^j,  i = 1, 2, ..., N − 1,  j ≥ 0.  (3.55)
Thus, we obtain a three-diagonal linear system. Recall that the system of equations

A_i y_{i−1} − C_i y_i + B_i y_{i+1} = −F_i,  i = 1, 2, ..., N − 1,
y_0 = κ₁ y₁ + ν₁,  y_N = κ₂ y_{N−1} + ν₂,

can be solved using O(N) arithmetic operations. The stability conditions for this method are

A_i > 0,  B_i > 0,  C_i ≥ A_i + B_i,  C_i ≢ A_i + B_i,  0 ≤ κ_i ≤ 1.

The initial and boundary conditions for (3.55) have the same form as in (3.53) and (3.54). This implies that

κ₁ = κ₂ = 0,  A_i = B_i = aτ/h²,  C_i = 1 + 2aτ/h² > A_i + B_i,

and hence the stability conditions hold for the economic method described in Section 5.4.
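The O(N) elimination for such three-diagonal systems can be sketched as follows (a standard "sweep" under the assumption κ₁ = κ₂ = 0, as for scheme (3.55); function and variable names are ours). The ansatz y_i = α_{i+1} y_{i+1} + β_{i+1} yields forward recurrences for α, β and a backward substitution for y.

```python
import numpy as np

def sweep(A, C, B, F, nu1, nu2):
    # Solve A_i y_{i-1} - C_i y_i + B_i y_{i+1} = -F_i, i = 1..N-1,
    # with y_0 = nu1, y_N = nu2. Arrays A, C, B, F hold indices 1..N-1.
    N = len(F) + 1
    alpha = np.zeros(N + 1); beta = np.zeros(N + 1)
    alpha[1], beta[1] = 0.0, nu1            # from y_0 = nu1 (kappa1 = 0)
    for i in range(1, N):
        denom = C[i-1] - alpha[i]*A[i-1]
        alpha[i+1] = B[i-1] / denom
        beta[i+1] = (A[i-1]*beta[i] + F[i-1]) / denom
    y = np.zeros(N + 1); y[N] = nu2; y[0] = nu1
    for i in range(N - 1, 0, -1):           # backward substitution
        y[i] = alpha[i+1]*y[i+1] + beta[i+1]
    return y

# check against a dense solve on a small system satisfying the
# stability conditions (A = B = 1, C = 3 > A + B)
A = np.full(9, 1.0); B = np.full(9, 1.0); C = np.full(9, 3.0)
F = np.arange(1.0, 10.0)
y = sweep(A, C, B, F, 0.5, -0.2)
M = np.zeros((11, 11)); rhs = np.zeros(11)
M[0, 0] = M[10, 10] = 1.0; rhs[0], rhs[10] = 0.5, -0.2
for i in range(1, 10):
    M[i, i-1], M[i, i], M[i, i+1], rhs[i] = A[i-1], -C[i-1], B[i-1], -F[i-1]
err = np.max(np.abs(y - np.linalg.solve(M, rhs)))
```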
Rewrite the schemes (3.50) and (3.51) in the more general form

y_{it}^j = a(σ y_i^{j+1} + (1 − σ) y_i^j)_{x̄x},  i = 1, 2, ..., N − 1,  j ≥ 0,  (3.56)

which uses the six grid points shown in Figure 3.3. Obviously, σ = 0 gives the explicit scheme (3.50) and σ = 1 gives the implicit scheme (3.51).
Let us consider the order of approximation. To this end, we denote by z_i^j the difference between the numerical and the exact solutions:

z_i^j := y_i^j − u(x_i, t_j).

If we omit the indexes we get

z_t = a(σẑ + (1 − σ)z)_{x̄x} + ψ,
z_0 = z_N = 0,  z⁰ = 0,  (3.57)

where we use the notation g = g_i^j, ĝ = g_i^{j+1} for any function g. The expression

ψ = a(σû + (1 − σ)u)_{x̄x} − u_t,  (3.58)
Figure 3.3: Grid points for the scheme (3.56)
where u_{x̄x} and u_t are finite-difference derivatives, is the remainder. We are taking into account that the initial and boundary conditions are exactly satisfied. Now we write

û = (1/2)(û + u) + (1/2)(û − u) = (1/2)(û + u) + (τ/2) u_t,
u = (1/2)(û + u) − (1/2)(û − u) = (1/2)(û + u) − (τ/2) u_t.

Then

ψ = (1/2) a(û + u)_{x̄x} + a(σ − 1/2) τ u_{tx̄x} − u_t.
Let us denote by t̄ the intermediate time, t̄ = (j + 1/2)τ, and apply the Taylor expansion at the point (x_i, t̄). Then, for smooth functions u, we obtain

û = u + (τ/2) ∂u/∂t |_{(x,t)=(x_i,t̄)} + O(τ²),

u = u − (τ/2) ∂u/∂t |_{(x,t)=(x_i,t̄)} + O(τ²),

u_t = ∂u/∂t |_{(x,t)=(x_i,t̄)} + O(τ²).
Since u is the exact solution of the heat equation (3.46), we have the equality

∂⁴u/∂x⁴ = ∂²/∂x² (∂²u/∂x²) = ∂²/∂x² ((1/a) ∂u/∂t) = (1/a) ∂³u/∂t∂x².  (3.59)
This implies the following relations:

û_{x̄x} = ∂²u/∂x² + (h²/12) ∂⁴u/∂x⁴ |_{x=x_i} + O(h⁴)
= { ∂²u/∂x² + (τ/2) (∂/∂t)(∂²u/∂x²) + (h²/(12a)) ∂³u/∂x²∂t } |_{x=x_i, t=t̄} + O(τ² + h⁴),

u_{x̄x} = ∂²u/∂x² + (h²/12) ∂⁴u/∂x⁴ |_{x=x_i} + O(h⁴)
= { ∂²u/∂x² − (τ/2) (∂/∂t)(∂²u/∂x²) + (h²/(12a)) ∂³u/∂x²∂t } |_{x=x_i, t=t̄} + O(τ² + h⁴).
Substituting these expressions into the formula (3.58) for the remainder ψ we obtain the relation

ψ = { −∂u/∂t + a ∂²u/∂x² } |_{x=x_i, t=t̄} + { a(σ − 1/2)τ + h²/12 } ∂³u/∂t∂x² |_{x=x_i, t=t̄} + O(τ² + h⁴).  (3.60)
The expression in the first brackets is zero, due to the heat equation (3.46). Now it is clear that the following relations hold for the remainder ψ:

ψ = O(τ² + h²) for σ = 1/2,

ψ = O(τ² + h⁴) for σ = σ* := 1/2 − h²/(12aτ),

ψ = O(τ + h²) for σ ≠ 1/2 and σ ≠ σ*.  (3.61)

Thus, the order of approximation for the scheme (3.56) varies from O(τ + h²) to O(τ² + h⁴), depending on the choice of σ.
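The orders in (3.61) can be observed by evaluating the remainder (3.58) on a known exact solution (our illustrative test function is u = e^{−4t} sin 2x, which solves u_t = u_xx, so a = 1; we refine with τ = h):

```python
import numpy as np

# Measure psi = a(sigma*u^ + (1-sigma)*u)_xx - u_t on an exact solution.
def psi_max(N, sigma, t=0.3):
    h = np.pi / N
    tau = h                                   # refine with tau = h
    x = np.linspace(0.0, np.pi, N + 1)
    u = np.exp(-4*t) * np.sin(2*x)            # layer j
    uh = np.exp(-4*(t + tau)) * np.sin(2*x)   # layer j+1
    w = sigma*uh + (1 - sigma)*u
    wxx = (w[2:] - 2*w[1:-1] + w[:-2]) / h**2
    ut = (uh[1:-1] - u[1:-1]) / tau
    return np.max(np.abs(wxx - ut))

ratio_explicit = psi_max(80, 0.0) / psi_max(160, 0.0)   # first order: ~2
ratio_cn = psi_max(80, 0.5) / psi_max(160, 0.5)         # second order: ~4
```

Halving h (and τ) reduces the remainder by about a factor 2 for σ = 0 and about a factor 4 for σ = 1/2, as (3.61) predicts.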
Consider the nonhomogeneous problem

∂u/∂t = a ∂²u/∂x² + f(x,t),  t > 0,  x ∈ (0, ℓ),
u|_{t=0} = u⁰(x),  u|_{x=0} = u_0(t),  u|_{x=ℓ} = u_ℓ(t).  (3.62)
Considerations similar to those discussed before lead to the scheme

y_{it}^j = a(σ y_i^{j+1} + (1 − σ) y_i^j)_{x̄x} + φ_i^j,  i = 1, 2, ..., N − 1,  (3.63)

where φ_i^j is an approximation of f at the knot (x_i, t_j). Consider the difference z = y − u. Clearly, we again have the problem (3.57) with remainder

ψ = a(σû + (1 − σ)u)_{x̄x} − u_t + φ.
Then, instead of formula (3.60), we obtain

ψ = (−f + φ)|_{(x,t)=(x_i,t̄)} + { a(σ − 1/2)τ + h²/12 } ∂²/∂x² (∂u/∂t) |_{(x,t)=(x_i,t̄)} − (h²/12) ∂²f/∂x² |_{(x,t)=(x_i,t̄)} + O(τ² + h⁴).
Thus, to obtain the order of approximation O(τ + h²) or O(τ² + h²) we have to set

φ_i^j = f(x_i, t_j + τ/2)  (3.64)

and choose σ as described in (3.61). However, to obtain the remainder O(τ² + h⁴) we have to set σ = σ* and define φ as follows:

φ_i^j = { f + (h²/12) ∂²f/∂x² } |_{(x,t)=(x_i,t̄)}.  (3.65)
Note that, to within the precision O(h²), we can replace the derivative ∂²f/∂x² by the finite-difference derivative f_{x̄x}. Therefore, instead of (3.65) we can use the formula

φ_i^j = (5/6) f_i^{j+1/2} + (1/12) ( f_{i−1}^{j+1/2} + f_{i+1}^{j+1/2} ),  (3.66)

preserving the accuracy O(τ² + h⁴).
Let us now consider the heat equation with variable coefficients

∂u/∂t = ∂/∂x ( p(x,t) ∂u/∂x ) + f(x,t),  t > 0,  x ∈ (0, ℓ),
u|_{t=0} = u⁰(x),
u|_{x=0} = u_0(t),  u|_{x=ℓ} = u_ℓ(t),  (3.67)

where p ≥ constant > 0 is a function in C²(Q_T), Q_T = (0,T) × (0, ℓ).
To obtain the finite-difference scheme, let us write the heat balance law on the interval (x_{i−1/2}, x_{i+1/2}) (see Subsection 5.4):

∫_{x_{i−1/2}}^{x_{i+1/2}} ∂u/∂t (x, t̄) dx = W(x_{i+1/2}, t̄) − W(x_{i−1/2}, t̄) + ∫_{x_{i−1/2}}^{x_{i+1/2}} f(x, t̄) dx,

where t̄ = t_{j+1/2} and W(x,t) = p(x,t) ∂u/∂x (x,t).
Using the simplest of the interpolation techniques, we have

∫_{x_{i−1/2}}^{x_{i+1/2}} ∂u/∂t (x, t̄) dx ≈ h u_{it}^j,  ∫_{x_{i−1/2}}^{x_{i+1/2}} f(x, t̄) dx ≈ h φ_i^j,

W(x_{i−1/2}, t̄) ≈ a_i ( σ u_{ix̄}^{j+1} + (1 − σ) u_{ix̄}^j ),

where, similarly to the steady-state equations (see Subsection 5.4),

a_i = p(x_{i−1/2}, t_{j+1/2}),  φ_i^j = f(x_i, t_{j+1/2}).
Therefore, we obtain the scheme

y_{it}^j = ( a_i^j (σ y_i^{j+1} + (1 − σ) y_i^j)_{x̄} )_x + φ_i^j,  i = 1, 2, ..., N − 1,  j ≥ 0.  (3.68)
Applying almost the same considerations as above, we conclude that scheme (3.68) has order of accuracy

O(τ + h²) for σ ≠ 1/2,
O(τ² + h²) for σ = 1/2.

At the same time, we cannot reach the accuracy O(τ² + h⁴). Indeed, in the case of variable p, the last equality in formula (3.59) is not true. Thus, instead of (3.60), we have the additional term

(a h²/12) { ∂u/∂t · ∂²/∂x² (1/p) + 2 ∂²u/∂x∂t · ∂/∂x (1/p) } |_{(x,t)=(x_i,t̄)}.

It is clear that it is impossible to remove this term by choosing particular values of a_i or σ.
Stability of the schemes
Let us first consider scheme (3.63) with constant coefficient a. For simplicity, we assume that the boundary data u_0 and u_ℓ are zero in the original problem (3.62).

Since

y_i^{j+1} = y_i^j + τ y_{it}^j,

by omitting the indexes we can rewrite scheme (3.63) as follows:

y_t − aστ y_{tx̄x} = a y_{x̄x} + φ.  (3.69)

The omitted subscript i in (3.69) runs from 1 to N − 1. The initial and Dirichlet-type boundary conditions at j = 0 and i = 0, N are

y_i^0 = u⁰(x_i),  y_0^j = 0,  y_N^j = 0,  (3.70)

respectively.
Let us define the net functions ȳ and ỹ as the solutions of the problems

ȳ_t − aστ ȳ_{tx̄x} = a ȳ_{x̄x},  ȳ_0^j = ȳ_N^j = 0,  ȳ_i^0 = u⁰(x_i),  (3.71)

and

ỹ_t − aστ ỹ_{tx̄x} = a ỹ_{x̄x} + φ,  ỹ_0^j = ỹ_N^j = 0,  ỹ⁰ = 0.  (3.72)

Obviously, y = ȳ + ỹ.
In order to consider the stability with respect to the initial data, we have to prove that the estimate

‖ȳ^j‖ ≤ c ‖u⁰‖  (3.73)

holds for the solution of problem (3.71). Here and in what follows, ‖·‖ denotes the L²_h net norm

‖y‖ := √(y, y),  (y, g) := h Σ_{i=1}^{N−1} y_i g_i.  (3.74)
Let us look for the solution by the Fourier method. If we write a partial solution in the form

ȳ_i^j = T^j X_i,  (3.75)

we find that

ȳ_{x̄x} = T X_{x̄x}  and  ȳ_t = T_t X.

Thus, equation (3.71) can be rewritten in the form

(T^{j+1} − T^j) / ( aτ (σT^{j+1} + (1 − σ)T^j) ) = X_{ix̄x}/X_i.

Since the indexes i and j are independent, we obtain two equations:

X_{ix̄x} = −λ X_i,  i = 1, 2, ..., N − 1,  (3.76)
T^{j+1} − T^j = −λaτ ( σT^{j+1} + (1 − σ)T^j ),  j = 0, 1, 2, ...  (3.77)

Taking into consideration the boundary conditions

X_0 = X_N = 0

for equation (3.76), we obtain the eigenvalue problem for the net functions. This problem has N − 1 eigenvalues λ_k and eigenfunctions X^{(k)} (see Subsection 5.5):

λ_k = (4/h²) sin²(πkh/(2ℓ)),  k = 1, 2, ..., N − 1,

X_i^{(k)} = √(2/ℓ) sin(πk x_i/ℓ).  (3.78)
It is clear that 0 < λ₁ < ··· < λ_{N−1} and

λ₁ = (4/h²) sin²(πh/(2ℓ)),  λ_{N−1} = (4/h²) cos²(πh/(2ℓ)) < 4/h².
The eigenfunctions X^{(k)} form an orthonormal system:

(X^{(k₁)}, X^{(k₂)}) = δ_{k₁k₂},

and any net function f with f_0 = f_N = 0 can be written as a linear combination of the X^{(k)}:

f = Σ_{k=1}^{N−1} f_{(k)} X^{(k)},  f_{(k)} = (f, X^{(k)}).  (3.79)

Moreover, the Parseval identity

‖f‖² = Σ_{k=1}^{N−1} f²_{(k)}  (3.80)

holds.
Next, put λ = λ_k and consider equation (3.77). Simple transformations allow us to write it in the form

T^{j+1}_{(k)} = q_k T^j_{(k)},  q_k = (1 − aλ_kτ(1 − σ)) / (1 + aλ_kτσ).  (3.81)

Thus,

T^{j+1}_{(k)} = q_k^{j+1} T^0_{(k)}.  (3.82)

From (3.82) we obtain the stability condition

|q_k| ≤ 1.  (3.83)

Let us restrict ourselves to the case σ ≥ 0. Then, since q_k = 1 − aλ_kτ/(1 + aλ_kτσ), the inequality q_k < 1 holds automatically. Consider the inequality q_k ≥ −1 or, what is the same, the inequality

q_k + 1 = (2 + aλ_kτ(2σ − 1)) / (1 + aλ_kτσ) ≥ 0.

The last formula obviously implies the condition

σ ≥ 1/2 − 1/(aτλ_k).  (3.84)

Since, for any k,

λ_k ≤ λ_{N−1} < 4/h²,

condition (3.84) can be rewritten in the final form

σ ≥ 1/2 − h²/(4aτ) =: σ₀.  (3.85)

Thus, every harmonic ȳ_{(k)} = T_{(k)} X^{(k)} is stable under the assumption (3.85).
We now show that condition (3.85) is sufficient for the stability of problem (3.71) in the L²_h sense. To do this, we write the general solution of (3.71) in the form

ȳ^{j+1} = Σ_{k=1}^{N−1} T^{j+1}_{(k)} X^{(k)} = Σ_{k=1}^{N−1} q_k T^j_{(k)} X^{(k)}.

Then

‖ȳ^{j+1}‖² = Σ_{k=1}^{N−1} q_k² (T^j_{(k)})² ≤ max_k q_k² Σ_{k=1}^{N−1} (T^j_{(k)})² = max_k q_k² ‖ȳ^j‖².

Therefore, under assumption (3.85), we have the inequalities

‖ȳ^{j+1}‖ ≤ ‖ȳ^j‖ ≤ ··· ≤ ‖ȳ⁰‖ ≤ ‖u⁰‖.

Thus, the estimate (3.73) with c = 1 has been proved.
Let us now consider some special cases.

1. The explicit scheme (σ = 0). The stability condition (3.85) implies the inequality

τ/h² ≤ 1/(2a).  (3.86)

This means that the explicit scheme (3.50) is stable only under assumption (3.86). This is why such schemes are called conditionally stable.

2. The completely implicit scheme (σ = 1). Condition (3.85) obviously holds in this case, just as for any scheme with σ ≥ 1/2. This implies that the scheme (3.51) is stable for any choice of τ and h, which is the reason such schemes are called absolutely stable.

3. Schemes with increased order of approximation (σ = σ*). Consider the difference σ* − σ₀ (see formulas (3.61) and (3.85)):

σ* − σ₀ = −h²/(12aτ) + h²/(4aτ) = h²/(6aτ) > 0.

We conclude that this scheme is absolutely stable.
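The conditional stability of the explicit scheme is easy to observe experimentally (illustrative sketch; the ratio aτ/h² is varied around the critical value 1/2 from (3.86)):

```python
import numpy as np

# Explicit scheme with zero boundary data and random initial data:
# bounded for a*tau/h^2 <= 1/2, blowing up above the threshold.
a, N = 1.0, 50
h = 1.0 / N

def run(ratio, steps=200):
    # ratio = a*tau/h^2
    rng = np.random.default_rng(0)
    y = rng.standard_normal(N + 1)
    y[0] = y[-1] = 0.0
    for _ in range(steps):
        y[1:-1] = y[1:-1] + ratio * (y[2:] - 2*y[1:-1] + y[:-2])
    return np.sqrt(h * np.sum(y[1:-1]**2))   # L2_h net norm (3.74)

norm_stable = run(0.4)      # 0.4 <= 1/2: norm does not grow
norm_unstable = run(0.7)    # 0.7 >  1/2: exponential growth
```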
Let us go back to problem (3.72) and consider the stability with respect to the right-hand sides. We write

ỹ^{j+1} = Σ_{k=1}^{N−1} T^{j+1}_{(k)} X^{(k)},  φ^j = Σ_{k=1}^{N−1} φ^j_{(k)} X^{(k)}.  (3.87)

Thus, equation (3.72) implies the equality

Σ_{k=1}^{N−1} { T^j_{(k)t} + aστλ_k T^j_{(k)t} + aλ_k T^j_{(k)} − φ^j_{(k)} } X^{(k)} = 0.

Since the system {X^{(k)}} is orthogonal, we obtain the equation

(1 + aστλ_k) T^j_{(k)t} + aλ_k T^j_{(k)} = φ^j_{(k)},

or, what is the same,

T^{j+1}_{(k)} = q_k T^j_{(k)} + τφ^j_{(k)} / (1 + aτσλ_k),  q_k = (1 − aτλ_k(1 − σ)) / (1 + aστλ_k).  (3.88)

This equation allows us to connect ỹ^{j+1} and ỹ^j:

ỹ^{j+1} = Σ_{k=1}^{N−1} q_k T^j_{(k)} X^{(k)} + τ Σ_{k=1}^{N−1} φ^j_{(k)} / (1 + aτσλ_k) X^{(k)}.  (3.89)
Furthermore,

‖ỹ^{j+1}‖ ≤ max_k |q_k| [ Σ_{k=1}^{N−1} (T^j_{(k)})² ]^{1/2} + max_k { τ/(1 + aτσλ_k) } [ Σ_{k=1}^{N−1} (φ^j_{(k)})² ]^{1/2}.

This implies the inequality

‖ỹ^{j+1}‖ ≤ max_k |q_k| ‖ỹ^j‖ + τ ‖φ^j‖.  (3.90)

Since |q_k| ≤ 1 under the assumption (3.85), iterating (3.90) yields the estimate

‖ỹ^{j+1}‖ ≤ τ Σ_{j′=0}^{j} ‖φ^{j′}‖,  (3.91)
which guarantees the stability with respect to the right-hand sides.

Consider briefly equation (3.67) with variable coefficients. To prove the stability of the corresponding scheme we will use a method of a-priori estimates (see Subsection 3.3). However, the result is almost the same as in the case of constant coefficients: we only have to write an upper bound of the coefficient, instead of the constant a, in formula (3.85). So, assume that a_i ≤ p₁ and

σ ≥ 1/2 − h²/(4p₁τ).  (3.92)

Then scheme (3.68) is stable both with respect to the initial data and to the right-hand sides. As above, assumption (3.92) results, for the explicit scheme (σ = 0), in the requirement

τ/h² < 1/(2p₁).  (3.93)
3.2.2 The multidimensional case
Equations with constant coefficients
Consider the mixed problem

∂u/∂t = aΔu,  (x,t) ∈ Q_T,
u|_Σ = μ|_Σ,  u|_{t=0} = u0(x),  (3.94)

where Q_T = (0,T) × Ω, Σ = (0,T) × ∂Ω. For simplicity we restrict ourselves to the case in which Ω is a rectangle in R².

Associate with problem (3.94) the two-layer scheme

y_t = Λ(σŷ + (1 − σ)y),
ŷ|_{Σ_h} = μ|_{Σ_h},  y_i^0 = u0(x_i).  (3.95)

Here y := y_i^j, i = (i₁, i₂), ŷ := y_i^{j+1} and

Λ = Λ₁ + Λ₂,  Λ_k y := a y_{x̄_k x_k},  k = 1, 2.
We consider equation (3.95) on the interior net

ω_h × {t₁, ..., t_J},  Jτ = T,
ω_h = { (x_{i₁}, x_{i₂}) : i_k = 1, 2, ..., N_k − 1,  x_{i_k} = i_k h },

whereas the boundary conditions have been set on the boundary

Σ_h = ∂ω_h × {t₁, ..., t_J}.
Similarly to the proof in Subsection 3.2.1, it can be shown that the stability condition for scheme (3.95) is

σ ≥ 1/2 − h²/(8aτ).  (3.96)

(Here the number "8" comes from the factor 4n for n = 2.)

For the explicit scheme (σ = 0) this condition implies the restriction

τ ≤ h²/(4a).

Therefore, the time step τ has to vanish as h vanishes or a grows.
At the same time, the implicit schemes (σ ∈ [1/2, 1]) are stable. However, to solve the corresponding algebraic system (using standard Gaussian elimination) it is necessary to perform O(h⁻⁶) arithmetic operations. For small h this implies a large computation time. To avoid this, one considers the so-called economic schemes, which require O(h⁻²) arithmetic operations to go from layer j to layer j + 1.
The simplest of such schemes is called the Peaceman-Rachford scheme and consists in the following. Divide the passage from layer j to layer j + 1 into two sub-steps:

(y^{j+1/2} − y^j) / (τ/2) = Λ₁ y^{j+1/2} + Λ₂ y^j,  (3.97)

(y^{j+1} − y^{j+1/2}) / (τ/2) = Λ₁ y^{j+1/2} + Λ₂ y^{j+1},  (3.98)

where y^{j+1/2} is treated as the solution on the intermediate layer t_{j+1/2} = (j + 1/2)τ. The initial and boundary conditions have the form

y⁰ = u0,  (3.99)

y^{j+1} = μ^{j+1} for i₂ = 0 and i₂ = N₂,  (3.100)

y^{j+1/2} = μ̄ for i₁ = 0 and i₁ = N₁,  (3.101)

where

μ̄ = (1/2)(μ^{j+1} + μ^j) − (τ/4) Λ₂ (μ^{j+1} − μ^j).  (3.102)
It is clear that both schemes, (3.97) and (3.98), reduce to a sequence of algebraic systems with three-diagonal matrices. Indeed, scheme (3.97) requires N₂ − 1 systems of N₁ − 1 algebraic equations, where i₂ is fixed and i₁ goes from 1 to N₁ − 1; see Figure 3.4, where the grid of unknown values y^{j+1/2}_{i₁,i₂} is shown on layer t_{j+1/2}, whereas the grid of known values y^j_{i₁,i₂} is shown on layer t_j. Conversely, scheme (3.98) requires N₁ − 1 systems of N₂ − 1 algebraic equations; see Figure 3.5.
The total number of arithmetic operations is O(1/h²). The sense of the initial condition (3.99) and of the boundary condition (3.100) is obvious. To clarify the reason for the special boundary value μ̄, subtract equation (3.98) from equation (3.97). We find

2y^{j+1/2} = y^{j+1} + y^j − (τ/2) Λ₂ (y^{j+1} − y^j).  (3.103)

This relation has to be satisfied both at any inner mesh point and on the boundaries i₁ = 0 and i₁ = N₁, because of equation (3.98). Thus we obtain the expression (3.102) for μ̄.
Let us now consider the stability of the scheme (3.97), (3.98).Formula (3.103) allows to eliminate yj+1/2 from equation (3.97):
$$\frac{y^{j+1}-y^{j}}{\tau}-\frac12\Lambda_2\bigl(y^{j+1}-y^{j}\bigr)=\frac12\Lambda_1\bigl(y^{j+1}+y^{j}\bigr)+\Lambda_2y^{j}-\frac{\tau}{4}\Lambda_1\Lambda_2\bigl(y^{j+1}-y^{j}\bigr).\qquad(3.104)$$
152 CHAPTER 3. PARABOLIC EQUATIONS
Figure 3.4: The grid of unknowns (a), on layer $t_{j+1/2}$, and knowns (b), on layer $t_j$, for the first sub-step (3.97)
Figure 3.5: The grid of unknowns (a), on layer $t_{j+1}$, and knowns (b), on layer $t_{j+1/2}$, for the second sub-step (3.98)
Now we use the identity
$$y^{j+1}=y^{j}+\tau y_t^{j}.\qquad(3.105)$$
Substitution of (3.105) into (3.104) allows us to transform this relation as follows:
$$\Bigl(1-\frac{\tau}{2}\Lambda_1-\frac{\tau}{2}\Lambda_2+\frac{\tau^2}{4}\Lambda_1\Lambda_2\Bigr)y_t^{j}=\Lambda y^{j}.$$
Since $\Lambda_1$ and $\Lambda_2$ commute, we can write the last relation in the final form
$$\Bigl(1-\frac{\tau}{2}\Lambda_1\Bigr)\Bigl(1-\frac{\tau}{2}\Lambda_2\Bigr)y_t^{j}=\Lambda y^{j}.\qquad(3.106)$$
Since the operators $A_i\stackrel{\mathrm{def}}{=}-\Lambda_i$, $i=1,2$, and $A\stackrel{\mathrm{def}}{=}A_1+A_2$ are positive, it follows that $A_1A_2>0$ and
$$B\stackrel{\mathrm{def}}{=}\Bigl(1+\frac{\tau}{2}A_1\Bigr)\Bigl(1+\frac{\tau}{2}A_2\Bigr)>1+\frac{\tau}{2}A.\qquad(3.107)$$
Multiplying scheme (3.106) by $2\tau y_t^j$ and taking the sum over $i$ we obtain
$$2\tau(By_t^j,y_t^j)+2\tau(Ay^j,y_t^j)=0,\qquad(3.108)$$
where $(f,g)$ denotes the scalar product of net functions $f$ and $g$: $(f,g)\stackrel{\mathrm{def}}{=}h\sum_{i=1}^{N-1}f_ig_i$.
Using the identity
$$y^{j}=\frac12\bigl(y^{j+1}+y^{j}\bigr)-\frac{\tau}{2}y_t^{j},$$
(3.108) can be written in the form
$$2\tau\Bigl(\bigl(B-\tfrac12\tau A\bigr)y_t^j,y_t^j\Bigr)+\bigl(A(y^{j+1}+y^j),y^{j+1}-y^j\bigr)=0.\qquad(3.109)$$
However,
$$\bigl(A(y^{j+1}+y^j),y^{j+1}-y^j\bigr)=(Ay^{j+1},y^{j+1})-(Ay^j,y^j).$$
Thus, formula (3.109) can be written as follows:
$$2\tau\Bigl(\bigl(B-\tfrac12\tau A\bigr)y_t^j,y_t^j\Bigr)+(Ay^{j+1},y^{j+1})=(Ay^j,y^j).\qquad(3.110)$$
Inequality (3.107) implies the estimate
$$(Ay^{j+1},y^{j+1})\le(Ay^j,y^j)\le\cdots\le(Ay^0,y^0).\qquad(3.111)$$
Thus, scheme (3.97)–(3.101) is stable with respect to the initial data.
Remark 3.2.1. The operators $A_i$ are self-adjoint in the case of homogeneous boundary conditions, $\mu=0$. In that case, the estimate (3.111) can be written as follows:
$$\|y_x^{j+1}\|\le\|y_x^0\|,$$
where $\|f\|$ denotes the $L_{2h}$-norm: $\|f\|=\sqrt{(f,f)}$.
Remark 3.2.2. Similar considerations show that the estimate
$$(Ay^{j+1},y^{j+1})\le(Ay^0,y^0)+c\tau\sum_{j'=0}^{j}\|\varphi^{j'}\|^2\qquad(3.112)$$
holds for the nonhomogeneous version of scheme (3.97)–(3.101) (see also below). Therefore, this scheme is stable with respect to the right-hand sides.
Consider the order of approximation for the Peaceman–Rachford scheme. Denote by $z=y-u$ the difference between the numerical solution and the exact solution of the original problem (3.94). Formulas (3.106) imply (we omit the indexes):
$$Bz_t=\Lambda z+\psi,\qquad z\big|_{\Sigma_h}=0,\quad z^0=0,\qquad(3.113)$$
where
$$\psi=\Lambda u^j-Bu_t^j.$$
Similarly to the one-dimensional case, we write
$$u^{j}=\frac12\bigl(u^{j+1}+u^{j}\bigr)-\frac{\tau}{2}u_t^{j}.$$
Thus, $\psi=\frac12\Lambda\bigl(u^{j+1}+u^{j}\bigr)-u_t^j+O(\tau^2)$. Next, expanding $u^{j+1}$, $u^j$ and $u_t^j$ in Taylor series at the intermediate time $t^*=t_{j+1/2}$, we have
$$\frac12\Lambda\bigl(u^{j+1}+u^{j}\bigr)=\Lambda u\big|_{t=t^*}+O(\tau^2)=a\Delta u\big|_{t=t^*}+O(\tau^2+|h|^2),$$
$$u_t^j=\frac{\partial u}{\partial t}\Big|_{t=t^*}+O(\tau^2).$$
This implies the estimate
$$\psi=O(\tau^2+|h|^2),\qquad|h|^2=h_1^2+h_2^2.\qquad(3.114)$$
It remains to note that the estimate (3.112) holds for problem (3.113). Thus, the estimate (3.114) implies the final relation
$$z=O(\tau^2+|h|^2).\qquad(3.115)$$
Equations with variable coefficients
Consider the problem
$$\frac{\partial u}{\partial t}=\sum_{k=1}^{2}\frac{\partial}{\partial x_k}\Bigl(p_k(x,t)\frac{\partial u}{\partial x_k}\Bigr)+f(x,t),\quad(x,t)\in Q_T,$$
$$u\big|_{\Sigma}=\mu(x',t),\ x'\in\partial\Omega,\qquad u\big|_{t=0}=u_0(x),\qquad(3.116)$$
where $p_k>0$, $f$, $\mu$ and $u_0$ are sufficiently smooth functions.
The Peaceman–Rachford scheme for this mixed problem has a form similar to that of (3.97)–(3.102):
$$\frac{y^{j+1/2}-y^{j}}{\tfrac12\tau}=\Lambda_1(t_{j+1/2})y^{j+1/2}+\Lambda_2(t_j)y^{j}+\varphi^j,$$
$$\frac{y^{j+1}-y^{j+1/2}}{\tfrac12\tau}=\Lambda_1(t_{j+1/2})y^{j+1/2}+\Lambda_2(t_{j+1})y^{j+1}+\varphi^j,$$
$$y_i^0=u_0(x_i),$$
$$y^{j+1}=\mu^{j+1}\quad\text{for }i_2=0\text{ and }i_2=N_2,$$
$$y^{j+1/2}=\mu\quad\text{for }i_1=0\text{ and }i_1=N_1,\qquad(3.117)$$
where $\mu=\frac12\bigl(\mu^{j+1}+\mu^{j}\bigr)-\frac{\tau}{4}\Lambda_2(t_{j+1/2})\bigl(\mu^{j+1}-\mu^{j}\bigr)$.
The finite-difference operators $\Lambda_k(t)$ have the standard form
$$\Lambda_k(t)y=\bigl(a_k(x_i,t)y_{\bar x_k}\bigr)_{x_k},\qquad(3.118)$$
and the coefficients $a_k$ are defined by the formula
$$a_k(x_i,t)=p_k(x_{i_k^*},t),\qquad(3.119)$$
where
$$x_{i_k^*}=x_{i_1-1/2,\,i_2}\ \text{for }k=1\quad\text{and}\quad x_{i_k^*}=x_{i_1,\,i_2-1/2}\ \text{for }k=2,$$
or
$$a_k(x_i,t)=\frac12\bigl(p_k(x_i,t)+p_k(x_{i_k'},t)\bigr),\qquad(3.120)$$
where
$$x_{i_k'}=x_{i_1-1,\,i_2}\ \text{for }k=1\quad\text{and}\quad x_{i_k'}=x_{i_1,\,i_2-1}\ \text{for }k=2.$$
Formulas (3.119) or (3.120) guarantee second-order approximation for the operators (3.118):
$$\Lambda_k(t)u-\frac{\partial}{\partial x_k}\Bigl(p_k(x,t)\frac{\partial u}{\partial x_k}\Bigr)=O(h_k^2).$$
These formulas, as well as the standard approximation for the right-hand side
$$\varphi_i^j=f\Bigl(x_i,\,t_j+\frac{\tau}{2}\Bigr),$$
imply second-order approximation, $O(\tau^2+|h|^2)$, for scheme (3.117). However, a high degree of smoothness is assumed for $u$: for any $(x,t)\in Q_T$ there has to exist a constant $c$ such that
$$\Bigl|\frac{\partial^3u}{\partial t^3}\Bigr|\le c,\qquad\Bigl|\frac{\partial^5u}{\partial x_1^2\,\partial x_2^2\,\partial t}\Bigr|\le c,\qquad\Bigl|\frac{\partial^4u}{\partial x_k^4}\Bigr|\le c,\quad k=1,2.$$
Consider now the stability of scheme (3.117). First of all we note that the inequality
$$\|(1-(1-\sigma)\tau A)y\|\le\|(1+\sigma\tau A)y\|\qquad(3.121)$$
holds for any nonnegative operator $A$ and any number $\sigma\ge1/2$. Indeed, for $\sigma\ge\tfrac12$,
$$\|(1+\sigma\tau A)y\|^2-\|(1-(1-\sigma)\tau A)y\|^2=2\tau(Ay,y)+2\Bigl(\sigma-\frac12\Bigr)\tau^2\|Ay\|^2\ge0.$$
Let us define
$$A_1\stackrel{\mathrm{def}}{=}-\Lambda_1\bigl(t_{j+1/2}\bigr),\qquad A_2\stackrel{\mathrm{def}}{=}-\Lambda_2\bigl(t_j\bigr)\ \text{in the first sub-step},\qquad A_2\stackrel{\mathrm{def}}{=}-\Lambda_2\bigl(t_{j+1}\bigr)\ \text{in the second sub-step},$$
and rewrite scheme (3.117) as follows:
$$\Bigl(1+\frac12\tau A_1\Bigr)y^{j+1/2}=\Bigl(1-\frac12\tau A_2\Bigr)y^{j}+\frac{\tau}{2}\varphi^j,$$
$$\Bigl(1+\frac12\tau A_2\Bigr)y^{j+1}=\Bigl(1-\frac12\tau A_1\Bigr)y^{j+1/2}+\frac{\tau}{2}\varphi^j.$$
Applying estimate (3.121) we obtain
$$\Bigl\|\Bigl(1+\frac12\tau A_1\Bigr)y^{j+1/2}\Bigr\|\le\Bigl\|\Bigl(1+\frac12\tau A_2\Bigr)y^{j}\Bigr\|+\frac{\tau}{2}\|\varphi^j\|,$$
$$\Bigl\|\Bigl(1+\frac12\tau A_2\Bigr)y^{j+1}\Bigr\|\le\Bigl\|\Bigl(1+\frac12\tau A_1\Bigr)y^{j+1/2}\Bigr\|+\frac{\tau}{2}\|\varphi^j\|.$$
Thus,
$$\Bigl\|\Bigl(1+\frac12\tau A_2\Bigr)y^{j+1}\Bigr\|\le\Bigl\|\Bigl(1+\frac12\tau A_2\Bigr)y^{j}\Bigr\|+\tau\|\varphi^j\|.$$
This inequality implies the estimate
$$\|y^{j+1}\|_{(2)}\le\|y^0\|_{(2)}+\tau\sum_{j'=0}^{j}\|\varphi^{j'}\|,\qquad(3.122)$$
where
$$\|y\|_{(2)}\stackrel{\mathrm{def}}{=}\Bigl\|\Bigl(1+\frac12\tau A_2\Bigr)y\Bigr\|\ge\|y\|.$$
Moreover, the norm $\|\cdot\|_{(2)}$ is equivalent to the norm $\bigl(\|y\|^2+\frac{\tau^2}{4}\|A_2y\|^2\bigr)^{1/2}$.
The estimate (3.122) implies the absolute stability (that is, stability for any $\tau>0$, $h_1>0$ and $h_2>0$) of scheme (3.117).
3.3 A-priori estimates and stability
The investigations of finite-difference schemes for the heat equation with a constant coefficient carried out in Subsection 3.2.1 used exact solutions of eigenvalue problems. A-priori estimates for schemes allow us to do the same in more general situations. However, we restrict ourselves to explaining this approach for the heat equation in one spatial dimension.
3.3.1 Auxiliary formulas
Let $y$ and $g$ be net functions on a net $\{x_i\}_{i=0}^{N}$. Consider finite-difference versions of Leibniz's formula
$$\frac{d}{dx}(uv)=\frac{du}{dx}v+u\frac{dv}{dx}.\qquad(3.123)$$
By the definition of the finite-difference versions of derivatives we have
$$(y_ig_i)_x=\frac1h(y_{i+1}g_{i+1}-y_ig_i)=\frac1h(y_{i+1}g_{i+1}-y_ig_{i+1}+y_ig_{i+1}-y_ig_i)=y_{ix}g_{i+1}+y_ig_{ix}.\qquad(3.124)$$
Thus, instead of (3.123) we obtain (3.124) with a shifted index. The same is true for the backward difference derivative:
$$(y_ig_i)_{\bar x}=\frac1h(y_ig_i-y_{i-1}g_{i-1})=\frac1h(y_ig_i-y_ig_{i-1}+y_ig_{i-1}-y_{i-1}g_{i-1})=y_{i\bar x}g_{i-1}+y_ig_{i\bar x}.\qquad(3.125)$$
Moreover, no finite-difference version of the first derivative satisfies an explicit analogue of Leibniz's formula (3.123) without shifted indexes. For our aims it is more convenient to rewrite formulas (3.124) and (3.125) as follows:
$$(y_ig_i)_x=y_{ix}g_{i+1}\pm y_{ix}g_i+y_ig_{ix}=y_{ix}g_i+y_ig_{ix}+hy_{ix}g_{ix},\qquad(3.126)$$
$$(y_ig_i)_{\bar x}=y_{i\bar x}g_{i-1}+y_ig_{i\bar x}\pm y_{i\bar x}g_i=y_{i\bar x}g_i+y_ig_{i\bar x}-hy_{i\bar x}g_{i\bar x}.\qquad(3.127)$$
Since the indexes in (3.126) and (3.127) are the same in all terms, they can be omitted. Note also that a direct consequence of formulas (3.126) and (3.127) is the following:
$$(yg)_{\mathring x}=y_{\mathring x}g+yg_{\mathring x}+\frac{h^2}{2}\bigl(y_{\bar x}g_{\bar x}\bigr)_x.\qquad(3.128)$$
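These identities are easy to check numerically. The sketch below (ours) verifies (3.126), (3.127) and (3.128) on arbitrary data; `fwd`, `bwd` and `central` are our names for the forward, backward and central difference quotients:

```python
def fwd(v, i, h):
    # forward difference derivative v_x at node i
    return (v[i + 1] - v[i]) / h

def bwd(v, i, h):
    # backward difference derivative v_xbar at node i
    return (v[i] - v[i - 1]) / h

def central(v, i, h):
    # central difference derivative at node i
    return (v[i + 1] - v[i - 1]) / (2.0 * h)
```

The correction terms $\pm hy_xg_x$ are exactly what distinguishes the discrete product rules from the continuous one.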
An immediate consequence of the formulas above is the finite-difference versions of the equality
$$\frac{d}{dx}(u^2)=2u\frac{du}{dx}.$$
Indeed, letting $g=y$, formulas (3.126)–(3.128) can be written as
$$(y^2)_x=2yy_x+h(y_x)^2,\qquad(3.129)$$
$$(y^2)_{\bar x}=2yy_{\bar x}-h(y_{\bar x})^2,\qquad(3.130)$$
$$(y^2)_{\mathring x}=2yy_{\mathring x}+\frac{h^2}{2}\bigl(y_{\bar x}^2\bigr)_x.\qquad(3.131)$$
Next, consider the discretization of the integration by parts formula
$$\int_0^1u\frac{dv}{dx}\,dx=uv\Big|_0^1-\int_0^1v\frac{du}{dx}\,dx.\qquad(3.132)$$
For the forward difference derivative, according to formula (3.124), we have
$$h\sum_{i=1}^{N-1}y_ig_{ix}=h\sum_{i=1}^{N-1}(y_ig_i)_x-h\sum_{i=1}^{N-1}y_{ix}g_{i+1}=y_Ng_N-y_1g_1-h\sum_{i=2}^{N}y_{i\bar x}g_i\pm hy_{1\bar x}g_1$$
$$=y_Ng_N-y_0g_1-h\sum_{i=1}^{N}y_{i\bar x}g_i.\qquad(3.133)$$
Similarly, for the backward difference derivative,
$$h\sum_{i=1}^{N-1}y_ig_{i\bar x}=h\sum_{i=1}^{N-1}(y_ig_i)_{\bar x}-h\sum_{i=1}^{N-1}y_{i\bar x}g_{i-1}=y_{N-1}g_{N-1}-y_0g_0-h\sum_{i=0}^{N-2}y_{ix}g_i\pm hy_{(N-1)x}g_{N-1}$$
$$=y_Ng_{N-1}-y_0g_0-h\sum_{i=0}^{N-1}y_{ix}g_i.\qquad(3.134)$$
We can avoid the appearance of the terms $y_0g_1$ or $y_Ng_{N-1}$ by changing the summation limits:
$$h\sum_{i=0}^{N-1}y_ig_{ix}=y_Ng_N-y_0g_0-h\sum_{i=1}^{N}y_{i\bar x}g_i.\qquad(3.135)$$
We see that the indexes shift in all of these formulas. To overcome the technical difficulties due to this fact, we follow an idea of O. Ladyzhenskaya [46]. We restrict ourselves to the case of functions that vanish on the boundary:
$$y_0=y_N=0,\qquad g_0=g_N=0.\qquad(3.136)$$
Extend the net functions outside the boundary nodes $x_0$, $x_N$, setting $y_i=0$ for $i\le0$ and $i\ge N$. Then the differences in indexes in formulas (3.133) and (3.135) disappear, and we can rewrite them in the simple symmetric form
$$h\sum y_ig_{ix}+h\sum y_{ix}g_i=0.\qquad(3.137)$$
Here and in what follows the symbol $\sum$ denotes summation over all indexes (actually, over $i=1,\dots,N-1$). Moreover, the index $i$ can be omitted in formula (3.137).
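A quick numerical check of the summation-by-parts identity (our sketch; in the concrete form (3.135) with zero boundary values, the forward difference acts on $g$ and the backward difference on $y$):

```python
def sum_by_parts(y, g, h):
    """Return both sides of h*sum y*g_x = -h*sum y_xbar*g for net functions
    vanishing at the endpoints (cf. (3.135) with y_0 = y_N = g_0 = g_N = 0)."""
    n = len(y) - 1
    lhs = h * sum(y[i] * (g[i + 1] - g[i]) / h for i in range(n))
    rhs = -h * sum((y[i] - y[i - 1]) / h * g[i] for i in range(1, n + 1))
    return lhs, rhs
```

All boundary terms of (3.135) vanish under (3.136), which is exactly why the two sums agree.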
Recall also some algebraic inequalities. Let $a\ge0$ and $b\ge0$. Then
$$ab\le\frac1pa^p+\frac1{p'}b^{p'},\qquad(3.138)$$
where $p>1$ and $p'>1$ are the so-called adjoint exponents, satisfying the equality
$$\frac1p+\frac1{p'}=1.\qquad(3.139)$$
Inequality (3.138) can be rewritten in the so-called $\varepsilon$-form:
$$ab\le\frac{\varepsilon}{p}a^p+c(\varepsilon)b^{p'},\qquad(3.140)$$
where $\varepsilon>0$ is arbitrary and $c(\varepsilon)=\frac1{p'}\varepsilon^{-p'/p}$.
Now, recall Holder’s inequality for sums
∣∣∣∣∣N∑
i=0
figi
∣∣∣∣∣ 6(
N∑
i=0
|fi|p)1/p( N∑
i=0
|gi|p′
)1/p ′
, (3.141)
where $p>1$ and $p'>1$ are adjoint exponents. For $p=p'=2$, inequality (3.141) becomes the Cauchy–Schwarz–Bunyakovsky inequality for sums:
$$\Bigl|\sum_{i=0}^{N}f_ig_i\Bigr|\le\Bigl(\sum_{i=0}^{N}|f_i|^2\Bigr)^{1/2}\Bigl(\sum_{i=0}^{N}|g_i|^2\Bigr)^{1/2}.\qquad(3.142)$$
Recall also the more general form of Hölder's inequality (3.141) for sums:
$$\Bigl|\sum_{i=0}^{N}\prod_{j=1}^{k}f_i^{(j)}\Bigr|\le\prod_{j=1}^{k}\Bigl(\sum_{i=0}^{N}|f_i^{(j)}|^{p_j}\Bigr)^{1/p_j},\qquad(3.143)$$
where $p_j>1$, $j=1,2,\dots,k$, and $\sum_{j=1}^{k}1/p_j=1$.
We will also use a finite-difference version of Gronwall's lemma.

Lemma 3.3.1. Let $y^j\ge0$, $j\ge0$, and $c_i\ge0$, $i=1,2$. Let also
$$y^j\le c_1+c_2\tau\sum_{j'=0}^{j}y^{j'},\qquad j\ge0.$$
Then
$$y^j\le c_1'e^{c_2t_j},$$
where $t_j=j\tau$ and $c_1'=c_1(1+O(\tau))$.
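The lemma can be probed on the extremal sequence, for which the hypothesis holds with equality; a direct computation gives $y^j=c_1(1-c_2\tau)^{-(j+1)}$, which stays below $c_1e^{c_2t_{j+1}/(1-c_2\tau)}=c_1'e^{c_2t_j}$ with $c_1'=c_1(1+O(\tau))$. A sketch (ours):

```python
import math

def extremal_sequence(c1, c2, tau, steps):
    # y^j defined by equality in the hypothesis of Lemma 3.3.1:
    # y^j = c1 + c2*tau*sum_{j'=0}^{j} y^{j'}  =>  y^j = (c1 + c2*tau*S_{j-1}) / (1 - c2*tau).
    y, s = [], 0.0
    for _ in range(steps):
        yj = (c1 + c2 * tau * s) / (1.0 - c2 * tau)
        y.append(yj)
        s += yj
    return y
```

Note that the plain bound $c_1e^{c_2t_j}$ (without the correction $c_1'>c_1$) fails for this sequence, which is why the factor $1+O(\tau)$ in the lemma cannot be dropped.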
3.3.2 Some spaces of net functions and embedding theorems
Under the same assumption (3.136) it is easy to introduce net versions of the spaces $L_2$ and $\mathring W{}_2^1$.

Definition 3.3.1. We denote by $L_{2h}$ the space of net functions $y$ such that, independently of $h$,
$$\|y;L_{2h}\|\stackrel{\mathrm{def}}{=}\Bigl(h\sum y^2\Bigr)^{1/2}<\infty,\qquad y_0=y_N=0.\qquad(3.144)$$
3.3. A-PRIORI ESTIMATES AND STABILITY 163
Definition 3.3.2. We denote by $\mathring W{}_{2h}^1$ the space of net functions $y$ such that, independently of $h$,
$$\|y;\mathring W{}_{2h}^1\|\stackrel{\mathrm{def}}{=}\|y_x;L_{2h}\|<\infty,\qquad y_0=y_N=0.\qquad(3.145)$$
To simplify notation, in what follows we will write
$$\|y\|\stackrel{\mathrm{def}}{=}\|y;L_{2h}\|\quad\text{and}\quad\|y\|_1\stackrel{\mathrm{def}}{=}\|y;\mathring W{}_{2h}^1\|.\qquad(3.146)$$
Remark 3.3.1. We took two conventions into account in Definition 3.3.2. The first is the equality $\|y_x\|=\|y_{\bar x}\|$: extending the net function by zero outside the boundary, we have
$$h\sum_{i=0}^{N}(y_{ix})^2=h\sum_{i=1}^{N+1}(y_{i\bar x})^2=h\sum_{i=0}^{N}(y_{i\bar x})^2+h\Bigl(\frac{y_{N+1}-y_N}{h}\Bigr)^2-h\Bigl(\frac{y_0-y_{-1}}{h}\Bigr)^2=h\sum_{i=0}^{N}(y_{i\bar x})^2,$$
since the two boundary terms vanish under assumption (3.136). Next, we will prove the inequality
$$\|y\|\le c\|y_x\|,\qquad y_0=y_N=0,\qquad(3.147)$$
where $c=\ell/2$ and $\ell$ is the length of the interval $[x_0,x_N]$. Hence the norm (3.145) is equivalent to the standard one:
$$\|y;\mathring W{}_{2h}^1\|^2=\|y;L_{2h}\|^2+\|y_x;L_{2h}\|^2.$$
Definition 3.3.3. We denote by $C_h$ the space of net functions $y$ such that, independently of $h$,
$$\|y\|_C\stackrel{\mathrm{def}}{=}\max_i|y_i|<\infty.\qquad(3.148)$$
It is very important that the net spaces so defined enjoy the same properties as the corresponding spaces of functions of a continuous argument. We will use the following facts.
Lemma 3.3.2. (Poincare’s Lemma) Let y0 = yN = 0. Then
‖y‖C 6 c‖yx‖, (3.149)
where c is some constant, independent of h and y.
Lemma 3.3.3. Let y0 = yN = 0. Then
‖y‖ 6 c‖yx‖,
where c is some constant, independent of h and y.
Lemma 3.3.4. Let y0 = yN = 0. Then
‖y‖C 6 c‖y‖1/2‖yx‖1/2, (3.150)
where c is some constant, independent of h and y.
Proof. Under assumption (3.136) we have the following two identities:
$$y_{j+1}=h\sum_{i=0}^{j}y_{ix},\qquad y_{j+1}=-h\sum_{i=j+1}^{N-1}y_{ix}.\qquad(3.151)$$
Let $j$ be an index for which $x_{j+1}\le\ell/2$. Then, using the first identity in (3.151) and inequality (3.142), we obtain
$$|y_{j+1}|^2=\Bigl(h\sum_{i=0}^{j}y_{ix}\Bigr)^2\le j\,h^2\sum_{i=0}^{j}|y_{ix}|^2\le\frac{\ell}{2}\|y_x\|^2.\qquad(3.152)$$
For $x_{j+1}\ge\ell/2$ we use the second identity in (3.151), with the same result (3.152). This implies inequality (3.149) with constant $c=\sqrt{\ell/2}$. A more complicated construction shows that this constant can be improved to $c=\sqrt{\ell}/2$.
Now, summing the inequality
$$|y_{j+1}|^2\le jh\,\|y_x\|^2$$
results in the following:
$$h\sum_{j=0}^{N-1}|y_{j+1}|^2\le\|y_x\|^2h^2\sum_{j=0}^{N-1}j\le\frac{(Nh)^2}{2}\|y_x\|^2=\frac{\ell^2}{2}\|y_x\|^2.$$
Thus, we obtain inequality (3.147) with constant $c=\ell/\sqrt2$. Again, this constant can be improved to $c=\ell/2$. Next, sum identity (3.130) over $i'=1,2,\dots,i$. Since $y_0=0$, this implies the identity
$$y_i^2=2h\sum_{i'=1}^{i}y_{i'}y_{i'\bar x}-h^2\sum_{i'=1}^{i}(y_{i'\bar x})^2.\qquad(3.153)$$
Therefore, using inequality (3.142) we obtain
$$y_i^2\le2\sqrt{h\sum_{i'=1}^{i}y_{i'}^2}\sqrt{h\sum_{i'=1}^{i}y_{i'\bar x}^2}\le2\|y\|\,\|y_{\bar x}\|.$$
This implies estimate (3.150) with $c=\sqrt2$. $\square$
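The discrete norms and the constants in Lemmas 3.3.3 and 3.3.4 are easy to probe numerically (our sketch; `net_norms` is our name):

```python
import math

def net_norms(y, h):
    """Return (||y||_C, ||y||, ||y_xbar||) for a net function with y[0] = y[N] = 0."""
    n = len(y) - 1
    c = max(abs(v) for v in y)
    l2 = math.sqrt(h * sum(v * v for v in y))
    dx = math.sqrt(h * sum(((y[i] - y[i - 1]) / h) ** 2 for i in range(1, n + 1)))
    return c, l2, dx
```

For a sampled function such as $\sin\pi x$ on $[0,1]$, the bounds (3.147) with $c=\ell/2$ and (3.150) with $c=\sqrt2$ hold with a comfortable margin.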
Inequality (3.149) implies the embedding $\mathring W{}_{2h}^1(\Omega_h)\subset C_h(\Omega_h)$ for bounded nets $\Omega_h\subset\mathbb R^1$. This statement can easily be verified through simple algebraic calculations. In the general situation of bounded nets $\Omega_h\subset\mathbb R^n$, such inequalities are a consequence of the existence of continuous prolongations $u^h(x)$ of net functions $y$. In particular, let $y\in\mathring W{}_{2h}^1(\Omega_h)$. Then there exists a prolongation $u^h(x)\in C(\Omega)$ such that
$$c_1\|y;\mathring W{}_{2h}^1(\Omega_h)\|\le\|u^h;\mathring W{}_2^1(\Omega)\|\le c_2\|y;\mathring W{}_{2h}^1(\Omega_h)\|,$$
where $c_1$ and $c_2$ are constants independent of $y$ and $h$.

This fact makes it possible to use all theorems for the usual spaces $W_2^k(\Omega)$, $C^k(\Omega)$ for the corresponding spaces of net functions. In particular, we conclude that the embedding $W_{2h}^k(\Omega_h)\subset C_h(\Omega_h)$ for $k>n/2$ and bounded $\Omega_h\subset\mathbb R^n$ is compact. For more details, see [46].
3.3.3 A-priori estimates
In order to simplify the algebraic manipulations, let us consider separately implicit and explicit schemes for the mixed problem
$$\frac{\partial u}{\partial t}=\frac{\partial}{\partial x}\Bigl(p(x,t)\frac{\partial u}{\partial x}\Bigr)-q(x,t)u+f(x,t),\quad x\in(0,\ell),\ t>0,$$
$$u\big|_{t=0}=u_0(x),\qquad u\big|_{x=0}=u\big|_{x=\ell}=0,\qquad(3.154)$$
where $0<p(x,t)\le p_1$.
Similarly to (3.68), the completely implicit scheme is as follows:
$$y_t^j=\bigl(a^jy_{\bar x}^{j+1}\bigr)_x-b^jy^{j+1}+\varphi^j,\qquad y^0=u_0,\quad y_0^j=y_N^j=0,\qquad(3.155)$$
where
$$a_i^j=p\bigl(x_{i-1/2},t_{j+1/2}\bigr),\qquad\varphi_i^j=f\bigl(x_i,t_{j+1/2}\bigr),\qquad b_i^j=q\bigl(x_i,t_{j+1/2}\bigr).$$
Let us multiply equation (3.155) by $y^{j+1}$ and take into account the identity (see formula (3.130)):
$$y_t^jy^{j+1}=\frac{(y^{j+1})^2-(y^j)^2}{2\tau}+\frac{\tau}{2}\bigl(y_t^j\bigr)^2.\qquad(3.156)$$
Summing over the index $i$ in (3.156) implies the equality
$$\frac{\|y^{j+1}\|^2-\|y^j\|^2}{\tau}+\tau\|y_t^j\|^2+2h\sum a^j\bigl(y_{\bar x}^{j+1}\bigr)^2+2h\sum b^j\bigl(y^{j+1}\bigr)^2=2h\sum\varphi^jy^{j+1}.\qquad(3.157)$$
Thus, we obtain the analog of equality (3.30) with only one additional term, $\tau\|y_t^j\|^2$, on the left-hand side. Therefore, repeating the construction from Subsection 3.1.5 yields the finite-difference versions of the estimates (3.39) and (3.45). Indeed, let $q\ge0$, or let $q<0$ with $q_0\ge|q|$, and assume in addition that $q$ and $q_0$ satisfy assumption (3.41). Then we pass from inequality (3.157) to the inequality
$$\|y^{j+1}\|^2+(2\mu-\alpha)\tau\sum_{j'=1}^{j+1}\|y^{j'}\|^2\le\|y^0\|^2+\frac{\tau}{\alpha}\sum_{j'=0}^{j}\|\varphi^{j'}\|^2,\qquad(3.158)$$
where the same notation as in (3.42) has been used. Set $\alpha=\mu$ and, to simplify the algebra, assume that
$$\tau\sum_{j'=0}^{j}\|\varphi^{j'}\|^2\le\phi=\text{constant},$$
uniformly in $j\ge0$. Therefore, it remains to analyze the inequality
$$g^j+\mu\tau\sum_{j'=0}^{j}g^{j'}\le c.\qquad(3.159)$$
Denote
$$q=(1+\mu\tau)^{-1}$$
and suppose that
$$g^j>c\,q^j.\qquad(3.160)$$
Assumption (3.160) and the identity
$$q^j+\mu\tau\sum_{j'=1}^{j}q^{j'}=1$$
imply a contradiction with inequality (3.159). Thus
$$g^j\le c\,q^j=c\,e^{j\ln q}.$$
At the same time,
$$j\ln q=-j\ln(1+\mu\tau)=-\mu t_j(1+O(\tau)).$$
This implies the estimate
$$\|y^j\|^2\le\bigl(\|y^0\|^2+\phi\bigr)e^{-c_\mu t_j},\qquad(3.161)$$
where $c_\mu=\mu(1+O(\tau))$ and $t_j=j\tau$.
Let $-q_0\le q<0$ and assume that condition (3.43) holds. Then we obtain from (3.157) the following:
$$\|y^{j+1}\|^2\le\|y^0\|^2+\phi+\beta\tau\sum_{j'=1}^{j+1}\|y^{j'}\|^2.\qquad(3.162)$$
An application of Gronwall’s lemma results in the estimate
‖yj‖2 6(‖y0‖2 + φ
)ecβ tj , (3.163)
Thus, independently of the sign of the coefficient q, the completely im-plicit scheme is absolutely stable, at least for any finite T ≥ tj .
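The absolute stability of the completely implicit scheme is easy to observe experimentally. The following sketch (ours, not from the book) runs (3.155) for $p\equiv1$, $q=0$, $f=0$ with a time step far beyond the explicit stability limit $\tau\le h^2/2$ and the $L_{2h}$-norm still decays monotonically:

```python
import math

def implicit_heat_step(y, tau, h):
    """One step of the completely implicit scheme (3.155) with p=1, q=0, f=0:
    -r*y_{i-1} + (1+2r)*y_i - r*y_{i+1} = yold_i,  r = tau/h^2,  y_0 = y_N = 0."""
    N = len(y) - 1
    r = tau / (h * h)
    n = N - 1                     # number of unknowns
    b0 = 1.0 + 2.0 * r
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -r / b0, y[1] / b0
    for i in range(1, n):
        m = b0 + r * cp[i - 1]
        cp[i] = -r / m
        dp[i] = (y[i + 1] + r * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return [0.0] + x + [0.0]
```

The elimination is the same Thomas sweep as for any tridiagonal system; no restriction on $r=\tau/h^2$ is needed.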
Consider the explicit scheme
$$y_t^j=\bigl(a^jy_{\bar x}^j\bigr)_x-b^jy^j+\varphi^j,\quad j\ge0,\qquad y^0=u_0,\quad y_0^j=y_N^j=0,\qquad(3.164)$$
where $a^j$, $b^j$ and $\varphi^j$ are the same as in (3.155).
For methodological reasons, set at the outset $b^j=0$. Note that the product $y_{\bar x}^jy_{\bar x}^{j+1}$ can be rewritten as follows:
$$y_{\bar x}^jy_{\bar x}^{j+1}=\frac12\bigl(y_{\bar x}^j\bigr)^2+\frac12\bigl(y_{\bar x}^{j+1}\bigr)^2-\frac12\bigl(y_{\bar x}^{j+1}-y_{\bar x}^j\bigr)^2.\qquad(3.165)$$
Moreover, the last summand on the right-hand side of (3.165) is $\frac{\tau^2}{2}\bigl(y_{\bar xt}^j\bigr)^2$.
So, multiplying equation (3.164) by $y^{j+1}$, summing over $i$ and using formulas (3.156) and (3.165) results in the identity
$$\frac{\|y^{j+1}\|^2-\|y^j\|^2}{\tau}+\tau\|y_t^j\|^2+\Bigl\|\sqrt{a^j}\,y_{\bar x}^{j+1}\Bigr\|^2+\Bigl\|\sqrt{a^j}\,y_{\bar x}^j\Bigr\|^2=\tau^2\Bigl\|\sqrt{a^j}\,y_{\bar xt}^j\Bigr\|^2+2h\sum\varphi^jy^{j+1}.\qquad(3.166)$$
Next, use both possibilities to estimate $y_{\bar xt}$:
$$\tau^2\Bigl\|\sqrt{a^j}\,y_{\bar xt}^j\Bigr\|^2\le2\Bigl(\Bigl\|\sqrt{a^j}\,y_{\bar x}^{j+1}\Bigr\|^2+\Bigl\|\sqrt{a^j}\,y_{\bar x}^j\Bigr\|^2\Bigr),\qquad(3.167)$$
$$\tau^2\Bigl\|\sqrt{a^j}\,y_{\bar xt}^j\Bigr\|^2\le\frac{2\tau^2}{h^2}\Bigl(\Bigl\|\sqrt{a_i^j}\,y_{it}^j\Bigr\|^2+\Bigl\|\sqrt{a_i^j}\,y_{i-1,t}^j\Bigr\|^2\Bigr)\le\frac{4p_1\tau^2}{h^2}\Bigl\|y_t^j\Bigr\|^2,\qquad(3.168)$$
where the assumption $p(x,t)\le p_1$ has been used. So, after summing equation (3.166) over $j$, we obtain the estimate
$$\|y^{j+1}\|^2+\tau\sum_{j'=0}^{j}\Bigl(\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'+1}\Bigr\|^2+\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'}\Bigr\|^2+\tau\Bigl\|y_t^{j'}\Bigr\|^2\Bigr)\le\|y^0\|^2$$
$$+\tau\sum_{j'=0}^{j}\Bigl\{2\varepsilon\Bigl(\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'+1}\Bigr\|^2+\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'}\Bigr\|^2\Bigr)+(1-\varepsilon)\frac{4p_1\tau}{h^2}\,\tau\|y_t^{j'}\|^2+\delta\|y^{j'+1}\|^2+\frac1\delta\|\varphi^{j'}\|^2\Bigr\},\qquad(3.169)$$
where $\varepsilon\in[0,1]$ and $\delta>0$ are arbitrary numbers.
Let us choose
$$\varepsilon=\frac12(1-\delta_0)\qquad(3.170)$$
and assume that
$$\frac{\tau}{h^2}=\frac{1}{2p_1}\,\frac{1}{1+\delta_0},\qquad(3.171)$$
where $\delta_0\in(0,1)$ is arbitrary. Then estimate (3.169) can be rewritten as follows:
$$\|y^{j+1}\|^2+\delta_0\tau\sum_{j'=0}^{j}\Bigl(\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'+1}\Bigr\|^2+\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'}\Bigr\|^2\Bigr)\le\|y^0\|^2+\tau\sum_{j'=0}^{j}\Bigl(\delta\|y^{j'+1}\|^2+\frac1\delta\|\varphi^{j'}\|^2\Bigr).\qquad(3.172)$$
Now it is clear that we have to take into account the assumption
$$p(x,t)\ge p_0>0\qquad(3.173)$$
in order to use again the embedding theorem
$$\|y\|\le c(\Omega)\|y_{\bar x}\|,\qquad(3.174)$$
and to set
$$\delta=\frac12\,\frac{p_0}{c^2(\Omega)}\,\delta_0.\qquad(3.175)$$
This results in the passage from (3.172) to the estimate
$$\|y^{j+1}\|^2+\mu\tau\sum_{j'=0}^{j}\bigl[\|y_{\bar x}^{j'+1}\|^2+\|y_{\bar x}^{j'}\|^2\bigr]\le\|y^0\|^2+c\tau\sum_{j'=0}^{j}\|\varphi^{j'}\|^2,\qquad(3.176)$$
which is similar to (3.158). Here $\mu=\frac12\delta_0p_0$ and $c=2c^2(\Omega)/(p_0\delta_0)$. Obviously, this result implies an estimate of the form (3.161). Thus, the explicit scheme (3.164) with $b\equiv0$ is stable and has dissipative properties. However, restriction (3.171) on $\tau$ and $h$ is required. Since $\delta_0$ is arbitrary, this assumption coincides with requirement (3.93).
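The sharpness of a restriction of the type (3.171) can be seen experimentally: for the model case $a=1$ the explicit scheme decays for $r=\tau/h^2$ below $1/2$ and blows up above it. A sketch (ours, not from the book):

```python
def explicit_heat_step(y, r):
    """One step of the explicit scheme (3.164) with a=1, b=0, f=0 and r = tau/h^2."""
    return [0.0] + [y[i] + r * (y[i + 1] - 2.0 * y[i] + y[i - 1])
                    for i in range(1, len(y) - 1)] + [0.0]

def run(r, steps, N=20):
    # start from the most oscillatory mesh function, the worst case for stability
    y = [0.0] + [(-1.0) ** i for i in range(1, N)] + [0.0]
    for _ in range(steps):
        y = explicit_heat_step(y, r)
    return max(abs(v) for v in y)
```

The highest discrete mode is amplified by roughly $1-4r$ per step, so it decays for $r<1/2$ and grows geometrically for $r>1/2$.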
Now consider scheme (3.164) with nonnegative $b$,
$$0\le b_i^j\le q_1.\qquad(3.177)$$
Since
$$y^{j+1}y^j=\frac12\bigl(y^{j+1}\bigr)^2+\frac12\bigl(y^j\bigr)^2-\frac12\tau^2\bigl(y_t^j\bigr)^2,$$
the repetition of the above construction results in the appearance of the terms
$$\Bigl\|\sqrt{b^j}\,y^{j+1}\Bigr\|^2+\Bigl\|\sqrt{b^j}\,y^j\Bigr\|^2$$
on the left-hand side of identity (3.166) and of the term
$$\tau^2\Bigl\|\sqrt{b^j}\,y_t^j\Bigr\|^2$$
on the right-hand side of (3.166). Similarly to (3.167), (3.168) we write
$$\tau^2\Bigl\|\sqrt{b^j}\,y_t^j\Bigr\|^2\le\Bigl\|\sqrt{b^j}\,y^{j+1}\Bigr\|^2+\Bigl\|\sqrt{b^j}\,y^j\Bigr\|^2+\frac12q_1\tau^2\Bigl\|y_t^j\Bigr\|^2.\qquad(3.178)$$
Thus, instead of (3.169) we come to the following estimate:
$$\|y^{j+1}\|^2+\tau\sum_{j'=0}^{j}\Bigl(\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'+1}\Bigr\|^2+\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'}\Bigr\|^2+\tau\Bigl\|y_t^{j'}\Bigr\|^2\Bigr)\le\|y^0\|^2$$
$$+\tau\sum_{j'=0}^{j}\Bigl\{2\varepsilon\Bigl(\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'+1}\Bigr\|^2+\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'}\Bigr\|^2\Bigr)+\Bigl((1-\varepsilon)\frac{4p_1\tau}{h^2}+\frac12q_1\tau\Bigr)\tau\|y_t^{j'}\|^2+\delta\|y^{j'+1}\|^2+\frac1\delta\|\varphi^{j'}\|^2\Bigr\}.\qquad(3.179)$$
Choose $\varepsilon$ of the form (3.170) and let
$$\frac{\tau}{h^2}=\frac{1}{2p_1}\Bigl(1+\delta_0+\frac12q_1h^2\Bigr)^{-1}.\qquad(3.180)$$
Then we obtain inequality (3.172) again. Obviously, this results in the final estimate of the form (3.161). So, the case (3.177) differs only slightly from $b=0$: only $O(h^2)$ terms appear in the requirement on $\tau$ and $h$.
It remains to consider the case of a negative coefficient $b$. Let
$$b_i^j<0,\qquad|b_i^j|\le q_1.\qquad(3.181)$$
Then the influence of this term in scheme (3.164) results in the appearance of the term
$$-2h\sum b^jy^jy^{j+1}\qquad(3.182)$$
on the right-hand side of equality (3.166). Furthermore, this leads to the appearance of the term
$$\tau\sum_{j'=0}^{j}2q_1\|y^{j'}\|\,\|y^{j'+1}\|\qquad(3.183)$$
on the right-hand side of the inequality (3.169). Use again estimate (3.174). Then, instead of (3.172), we obtain the following:
$$\|y^{j+1}\|^2+\delta_0\tau\sum_{j'=0}^{j}\Bigl(\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'+1}\Bigr\|^2+\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'}\Bigr\|^2\Bigr)$$
$$\le\|y^0\|^2+\tau\sum_{j'=0}^{j}\Bigl\{q_1c^2(\Omega)\alpha\Bigl(\bigl\|y_{\bar x}^{j'+1}\bigr\|^2+\bigl\|y_{\bar x}^{j'}\bigr\|^2\Bigr)+\frac1\delta\|\varphi^{j'}\|^2\Bigr\},\qquad(3.184)$$
where
$$\alpha=\sqrt{1+\Bigl(\frac{\delta}{2q_1}\Bigr)^2}-\frac{\delta}{2q_1},$$
and $\delta,\delta_0\in(0,1)$ are the same as in (3.172). Thus we have to consider the inequality
$$p_0\delta_0>q_1\alpha c^2(\Omega).\qquad(3.185)$$
p0δ0 >q1αc2(Ω). (3.185)
Recall that the coefficient δ can be chosen arbitrarily small. Hence, theassumption
q1 <p0
c2(Ω)(3.186)
allows to choose a δ0 ∈ (c2(Ω)q1/p0, 1) such that inequality (3.185)holds. Therefore, we come to the following dilemma. The first pos-sibility is to choose δ0 ∈ (c2(Ω)q1/p0, 1). As a result, we reinforce thestability requirement (3.171) increasing δ0. However, for such τ and h
scheme (3.164) preserves dissipative properties. The second possibilityis to choose an arbitrarily small δ0 ∈ (0, 1) preserving the same stabilitycondition (3.171). Obviously, such a choice of δ0 does not allow to ob-tain any reasonable estimate from inequality (3.184). Thus, we have toreturn one step back and simply estimate the term (3.182) as follows
$$\tau\sum_{j'=0}^{j}2q_1\|y^{j'}\|\,\|y^{j'+1}\|\le\tau q_1\Bigl\{\|y^0\|^2+\|y^{j+1}\|^2+2\sum_{j'=0}^{j}\|y^{j'}\|^2\Bigr\}.$$
Then, setting $\delta=q_1$, instead of (3.184) we obtain the following inequality:
$$\|y^{j+1}\|^2+\delta_0\tau\sum_{j'=0}^{j}\Bigl(\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'+1}\Bigr\|^2+\Bigl\|\sqrt{a^{j'}}\,y_{\bar x}^{j'}\Bigr\|^2\Bigr)\le\|y^0\|^2+\frac1{q_1}\tau\sum_{j'=0}^{j}\|\varphi^{j'}\|^2+2\tau q_1\sum_{j'=0}^{j+1}\|y^{j'}\|^2.\qquad(3.187)$$
It remains to apply Gronwall’s lemma to obtain the final estimate
‖yj+1‖2 6(‖y0‖2 +
1q1φ
)ec2q1 tj , (3.188)
where the same notation as in (3.163) has been used. Hence, the choiceof τ and h via condition (3.171) for δ0 ∈ (0, c2(Ω)q1/p0) guarantees thestability of the explicit scheme (3.164) during any finite time interval.However the scheme looses dissipative properties.
Obviously, if assumption (3.186) is broken, only the second possi-bility can be realized. This completes the analysis of the explicit scheme(3.164).
We have restricted ourselves to the consideration of only one spatial dimension. However, the generalization of the discussed method to the multidimensional case is straightforward.
3.4 Bibliographical comments
A preliminary overview of parabolic equations is presented, among others, in the textbooks by S. J. Farlow [34], W. Boyce and R. DiPrima [6], and D. Bleecker and G. Csordas [44]. They consider Cauchy problems [34] and boundary value problems [34, 44] for the one-dimensional diffusion equation with a constant coefficient. This, however, is enough to explain briefly the main properties of solutions of parabolic equations.
The physical meaning of both the diffusion equation and the boundary conditions is described in the cited textbooks and in the textbook [31]. A detailed derivation of the diffusion equation from basic physical laws can be found in the book by S. J. Farlow [34].
For a first reading we refer the readers to the textbooks [45, 31, 43, 42, 46]. The textbook by G. Hellwig [45] is written in such a clear manner that it can be recommended as preliminary reading as well. Applications of the Fourier method to boundary value problems are described in detail in all of the cited textbooks, and the convergence of Fourier series is stated there rigorously. Considerations on the Cauchy problem are presented in the textbooks [45], [43] and [42]. We refer to the textbooks by V. S. Vladimirov [42] and by O. A. Ladyzhenskaya [46] as two texts written very clearly from the modern point of view.
We do not consider in the present text the method of integraltransformations (Fourier and Laplace) and refer readers to the textbooks[45, 43, 42].
A very clearly written treatment of the method of a-priori estimates is part of the textbook [46]. We refer readers to this book for more details about this method and the functional spaces involved.
The modern mathematical theory of general parabolic equations is the content of the books [47, 20]. In particular, an approach based on the maximum principle and the theory of Hölder spaces is used systematically in the monograph by O. A. Ladyzhenskaya, V. A. Solonnikov and N. N. Uraltseva [47].
A brief description of finite-difference schemes for the diffusion equation is contained in the textbook [34]. This approach is considered in detail in [46] and [40]-[32]. We can recommend the textbooks [46] and [40] as very clearly written texts.
Different methods for solving algebraic systems are considered in the textbook [48]. Among others, Gaussian elimination for 3-, 5- and 7-diagonal matrices is considered there.
Chapter 4
Hyperbolic Equations
4.1 The Cauchy problem for hyperbolic equations
Recall that the most important example of a hyperbolic equation is the wave equation
$$\frac{\partial^2u}{\partial t^2}-a^2\Delta u=f(x,t),\qquad(4.1)$$
where the coefficient $a$ is strictly positive.
The Cauchy problem for the wave equation means that we consider equation (4.1) for $t>0$ and $x\in\mathbb R^n$, whereas at the initial instant of time we set the initial conditions
$$u\big|_{t=0}=u_0(x),\qquad\frac{\partial u}{\partial t}\Big|_{t=0}=u_1(x).\qquad(4.2)$$
A more general form of the wave equation is (4.1) with some elliptic operator $L=-\langle\nabla,p(x)\nabla\rangle+q(x)$ instead of the operator $-a^2\Delta$:
$$\frac{\partial^2u}{\partial t^2}+Lu=f(x,t),\quad x\in\mathbb R^n,\ t>0.\qquad(4.3)$$
The initial conditions for equation (4.3) again have the form (4.2).
4.1.1 The Duhamel formulas
Consider the Cauchy problem for equation (4.3) with the homogeneous initial conditions
$$u\big|_{t=0}=0,\qquad\frac{\partial u}{\partial t}\Big|_{t=0}=0.\qquad(4.4)$$
Let us discuss the question: is it possible to pass from the nonhomogeneous equation (4.3) to a homogeneous one? The answer is given by the First Duhamel Formula:
$$u(x,t)=\int_0^tv(x,t,\tau)\,d\tau,\qquad(4.5)$$
where $v$ is the solution to the problem
$$\frac{\partial^2v}{\partial t^2}+Lv=0,\quad x\in\mathbb R^n,\ t>\tau,\qquad v\big|_{t=\tau}=0,\quad\frac{\partial v}{\partial t}\Big|_{t=\tau}=f(x,\tau).\qquad(4.6)$$
Indeed,
$$\frac{\partial u}{\partial t}=v(x,t,t)+\int_0^t\frac{\partial v}{\partial t}(x,t,\tau)\,d\tau.$$
However, $v(x,t,t)=0$, due to the first initial condition in (4.6). Then
$$\frac{\partial^2u}{\partial t^2}=\frac{\partial v}{\partial t}(x,t,\tau)\Big|_{\tau=t}+\int_0^t\frac{\partial^2v}{\partial t^2}(x,t,\tau)\,d\tau=f(x,t)-\int_0^tLv(x,t,\tau)\,d\tau=f(x,t)-L\int_0^tv(x,t,\tau)\,d\tau=f(x,t)-Lu(x,t).$$
Consider now a second question: how to pass from problem (4.6) to the problem
$$\frac{\partial^2w}{\partial t^2}+Lw=0,\quad x\in\mathbb R^n,\ t>\tau,\qquad w\big|_{t=\tau}=f(x,\tau),\quad\frac{\partial w}{\partial t}\Big|_{t=\tau}=0.\qquad(4.7)$$
The Second Duhamel Formula shows that this passage is very simple:
$$v(x,t,\tau)=\int_\tau^tw(x,t',\tau)\,dt'.\qquad(4.8)$$
Indeed,
$$\frac{\partial v}{\partial t}(x,t,\tau)=w(x,t,\tau),\qquad\frac{\partial^2v}{\partial t^2}(x,t,\tau)=\frac{\partial w}{\partial t}(x,t,\tau).\qquad(4.9)$$
At the same time,
$$Lv(x,t,\tau)=\int_\tau^tLw(x,t',\tau)\,dt'=-\int_\tau^t\frac{\partial^2w}{\partial(t')^2}(x,t',\tau)\,dt'=-\frac{\partial w}{\partial t'}(x,t',\tau)\Big|_{t'=\tau}^{t'=t}=-\frac{\partial w}{\partial t}(x,t,\tau).$$
A comparison with the second equality in (4.9) justifies formula (4.8).
Thus, the solution of problem (4.3), (4.4) can be written in the form
$$u(x,t)=\int_0^t\int_\tau^tw(x,t',\tau)\,dt'\,d\tau.\qquad(4.10)$$
Below it will be shown that problem (4.7) can be solved exactly, at least in the case $L=-a^2\Delta$, $a=\text{constant}>0$. In the one-dimensional case the formula for $w$ is very simple:
$$w(x,t,\tau)=\frac12\bigl\{f(x-a(t-\tau),\tau)+f(x+a(t-\tau),\tau)\bigr\}.\qquad(4.11)$$
If we substitute (4.11) into (4.10) and change the variables, we obtain the solution of (4.1), (4.4) for $x\in\mathbb R^1$:
$$u(x,t)=\frac1{2a}\int_0^t\int_{x-a(t-\tau)}^{x+a(t-\tau)}f(\xi,\tau)\,d\xi\,d\tau.\qquad(4.12)$$
4.1.2 The Cauchy problem for the wave equation
The one spatial dimension case
Let us first consider the one-dimensional case:
$$\frac{\partial^2u}{\partial t^2}=a^2\frac{\partial^2u}{\partial x^2},\quad x\in\mathbb R^1,\ t>0,\qquad u\big|_{t=0}=u_0(x),\quad\frac{\partial u}{\partial t}\Big|_{t=0}=u_1(x).\qquad(4.13)$$
Note that any $u$ of the form
$$u(x,t)=F(x-at)+G(x+at)\qquad(4.14)$$
satisfies equation (4.13) for any $F(\xi),G(\xi)\in C^2(\mathbb R^1)$. Indeed,
$$\frac{\partial^2u}{\partial t^2}=a^2F''(\xi)\Big|_{\xi=x-at}+a^2G''(\xi)\Big|_{\xi=x+at}=a^2\frac{\partial^2u}{\partial x^2}.$$
The meaning of this fact is that the lines $x-at=\text{const}$ and $x+at=\text{const}$ are just the characteristics of the wave equation.
Taking into account the initial conditions, we obtain the system
$$F(x)+G(x)=u_0(x),\qquad-aF'(x)+aG'(x)=u_1(x).\qquad(4.15)$$
Integration of the second equality in (4.15) leads to the relation
$$-F(x)+G(x)=\frac1a\int_0^xu_1(\xi)\,d\xi+c,\qquad(4.16)$$
where $c$ is a constant of integration. Combining (4.16) and the first equation in (4.15) we obtain
$$2G(x)=u_0(x)+\frac1a\int_0^xu_1(\xi)\,d\xi+c,\qquad2F(x)=u_0(x)-\frac1a\int_0^xu_1(\xi)\,d\xi-c.$$
Therefore, from this and (4.14) we find the final form of the solution:
$$u(x,t)=\frac12\bigl(u_0(x-at)+u_0(x+at)\bigr)+\frac1{2a}\int_{x-at}^{x+at}u_1(\xi)\,d\xi.\qquad(4.17)$$
Consider now the special case (4.7) for $L=-a^2\partial^2/\partial x^2$, $x\in\mathbb R^1$. Instead of (4.14) we write
$$w(x,t,\tau)=F(x-a(t-\tau))+G(x+a(t-\tau)).$$
Then the same calculations as before result in formula (4.11).
Thus, we obtain the D'Alembert Formula
$$u(x,t)=\frac12\bigl\{u_0(x-at)+u_0(x+at)\bigr\}+\frac1{2a}\int_{x-at}^{x+at}u_1(\xi)\,d\xi+\frac1{2a}\int_0^t\int_{x-a(t-\tau)}^{x+a(t-\tau)}f(\xi,\tau)\,d\xi\,d\tau\qquad(4.18)$$
for the solution of the problem
$$\frac{\partial^2u}{\partial t^2}=a^2\frac{\partial^2u}{\partial x^2}+f(x,t),\quad x\in\mathbb R^1,\ t>0,\qquad u\big|_{t=0}=u_0(x),\quad\frac{\partial u}{\partial t}\Big|_{t=0}=u_1(x).\qquad(4.19)$$
One can prove that, for $f\in C^1(\mathbb R_+^1\times\mathbb R^1)$, $u_0\in C^2(\mathbb R^1)$ and $u_1\in C^1(\mathbb R^1)$, the solution (4.18) is unique and $u\in C^2(\mathbb R_+^1\times\mathbb R^1)$.
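The homogeneous part of (4.18) is easy to implement. To keep the sketch below (ours, not from the book) exact, we pass an antiderivative $U_1$ of the initial velocity instead of computing the integral by quadrature:

```python
import math

def dalembert(u0, U1, a, x, t):
    """D'Alembert formula (4.17): u0 is the initial displacement and U1 an
    antiderivative of the initial velocity u1, so that the integral term
    equals (U1(x + a*t) - U1(x - a*t)) / (2*a)."""
    return (0.5 * (u0(x - a * t) + u0(x + a * t))
            + (U1(x + a * t) - U1(x - a * t)) / (2.0 * a))
```

For $u_0=\sin$, $u_1=0$ this reproduces the standing wave $\sin x\cos at$; for $u_0=0$, $u_1=\cos$ it gives $\cos x\,\sin(at)/a$.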
Consider the solution (4.18) for the homogeneous equation. Let $u_1=0$ and let $u_0$ be concentrated at the point $x=0$. According to the D'Alembert formula, the initial perturbation will propagate along the characteristics $x=at$ and $x=-at$ (see Figure 4.1). So, at any time instant $t'$ the solution $u$ is zero everywhere except at the points $x=\pm at'$. This situation, in which perturbations pass from one point to another, is called the Huygens principle. Now let $u_1$ be nonzero, but concentrated at the same point $x=0$ as $u_0$, and let $t'$ be a fixed instant
Figure 4.1: Propagation of a perturbation concentrated at a point
of time. It is clear that for any $x\ne\pm at'$ the first summand in (4.17) is zero. At the same time, if $x\in(-at',at')$, the second summand is nonzero. Thus, the solution will be concentrated inside the interval $[-at',at']$, $t'>0$. Such a situation is called diffusion of waves. However, the support of the solution will be inside the "cone" $\Gamma^+=\{(x,t),\ t\ge0,\ x\in[-at,at]\}$. This is why $\Gamma^+$ is called the cone of the future or cone of influence.
Now let $u_1$ be identically equal to zero and $u_0$ a function with compact support $K_0$. The D'Alembert formula implies the dynamics of the support $K_t$ shown in Figure 4.2. There exists a time instant $t^*$ such that for $0\le t\le t^*$ the support
$$K_t=\bigl\{x\in[\alpha_t,\beta_t]\bigr\},\qquad\alpha_t=-at+\alpha_0,\quad\beta_t=at+\beta_0,$$
increases. The moving points $\alpha_t$ and $\beta_t$ are called the leading front of the wave. However, for $t>t^*$ the support consists of two parts:
$$K_t^{(1)}=\bigl\{(x,t),\ t>t^*,\ x\in[\alpha_t,\beta_t^{(1)}]\bigr\}\quad\text{and}\quad K_t^{(2)}=\bigl\{(x,t),\ t>t^*,\ x\in[\alpha_t^{(1)},\beta_t]\bigr\},$$
where $\alpha_t^{(1)}=at+\alpha_0$ and $\beta_t^{(1)}=-at+\beta_0$. Since $u$ is zero between $\beta_t^{(1)}$ and $\alpha_t^{(1)}$, these points are called the back front of the wave.
Figure 4.2: The dynamics of the support for initial data with compact support in the cases $u_1\equiv0$ (×) and $u_1\not\equiv0$ (× and ◦)
Therefore, if the initial velocity $u_1$ is zero, then in the one-dimensional case the Huygens principle holds: the solution has compact support and the leading and back fronts exist. But if $u_1$ is nonzero and has the same support as $u_0$, then the support $K_t$ of the solution fills out the whole interval $[\alpha_t,\beta_t]$ for any $t\ge0$. Thus, in the case of the diffusion of waves, the solution has compact support but the back front disappears.
Let us next consider again the solution (4.17) and fix a point $(x',t')$. Obviously, we can change $u_0$ and $u_1$ outside the interval $[x'-at',x'+at']$ without changing the value of $u(x',t')$. That is why the "cone"
$$\Gamma^-_{(x',t')}=\bigl\{(x,t),\ 0\le t\le t',\ x'-a(t'-t)\le x\le x'+a(t'-t)\bigr\}$$
is called the cone of dependence (see Figure 4.3). Conversely, setting the vertex $p$ of the cone of the future at the point $(x',t')$ we obtain the domain of influence of $u(x',t')$ for $t>t'$.
Figure 4.3: The cones of dependence, $\Gamma^-_{(x',t')}$, and of influence, $\Gamma^+_{(x',t')}$
Example 4.1.1. Consider the Cauchy problem
$$\frac{\partial^2u}{\partial t^2}=4\frac{\partial^2u}{\partial x^2},\quad x\in\mathbb R^1,\ t>0,\qquad u\big|_{t=0}=H(x)H(1-x),\quad\frac{\partial u}{\partial t}\Big|_{t=0}=0,\qquad(4.20)$$
where $H(x)$ is the Heaviside function; the solution will be considered in the weak ($\mathcal{D}'$) sense. The initial data $u_0(x) = H(x)H(1-x)$ has compact support $K_0 = [0,1]$. Consider the characteristics $x = x_0 \pm 2t$ which start from the end points $x_0 = 0$ and $x_0 = 1$ of $K_0$. It is clear that these characteristics divide the $x$-$t$ plane into six regions (see Figure 4.4). Let the point $(x,t)$ belong to Region I. Drawing the cone of dependence $\Gamma^-_{(x,t)}$ (dotted lines in Figure 4.4), we see that $u(x,t) = 1$, since $u_0 = 1$ at both points $x_1'$ and $x_1''$. As for Regions II and III, the same considerations show that $u = 0$, since $u_0(x_k') = u_0(x_k'') = 0$ for $k = 2, 3$. Consider Regions IV and V. Now, for any point from these regions, one of the lines $\partial\Gamma^-_{(x,t)}$ starts from the point $x_4'$ or $x_5''$, where $u_0 = 0$, whereas the other line starts from the point $x_4''$ or $x_5'$
4.1. 183
q q q q q q qqq q q q q q q q q q q q q q
qq q q q q q q q q q q q q qqq q q q q q q q q q q q q q
qq q q q q q qx2
′ x2′′ x4
′ x4′′ x1
′ x1′′ x3
′ x3′′x
t
III III
IV V
V I
0 1
Figure 4.4: Six regions created by four characteristics
where $u_0 = 1$. Thus, $u = 1/2$ in these regions. Finally, for Region VI it is clear that $u = 0$, since both lines $\partial\Gamma^-_{(x,t)}$, extended back from $(x,t) \in VI$, reach points where $u_0 = 0$. So we can construct the solution $u$, which is constant within each of the six regions, as shown in Figure 4.5. This result can be interpreted by saying that the unit step function $H(x)H(1-x)$ splits into two step functions of amplitude $1/2$. Obviously, the same result is obtained from the exact solution (4.18) for problem (4.20),
\[ u(x,t) = \frac12 \big( H(x-2t)H(1+2t-x) + H(x+2t)H(1-2t-x) \big). \qquad \diamond \]
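As a quick sanity check of this construction, the weak solution above can be evaluated numerically. The following sketch is not from the book; the speed $a = 2$ and the sample points (one in each region) are the only inputs:

```python
import numpy as np

def H(z):
    # Heaviside function; the value at 0 is irrelevant for the weak solution
    return np.where(z >= 0, 1.0, 0.0)

def u(x, t, a=2.0):
    # d'Alembert solution of problem (4.20): the unit step u0 = H(x)H(1-x)
    # splits into two steps of amplitude 1/2 moving with speeds -a and +a
    return 0.5 * (H(x - a*t)*H(1 - (x - a*t)) + H(x + a*t)*H(1 - (x + a*t)))

# Region I (both characteristics start inside [0,1]): u = 1
print(float(u(0.5, 0.1)))                             # -> 1.0
# Regions IV and V (one characteristic starts inside [0,1]): u = 1/2
print(float(u(-1.5, 1.0)), float(u(2.5, 1.0)))        # -> 0.5 0.5
# Regions II, III and VI: u = 0
print(float(u(-4.0, 1.0)), float(u(4.0, 1.0)), float(u(0.0, 1.0)))
```

The printed values reproduce the six constant region values found above by drawing cones of dependence.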
The multidimensional case
Now, let us consider the multidimensional Cauchy problem:
\[ \frac{\partial^2 u}{\partial t^2} = a^2 \Delta u + f(x,t), \quad x \in \mathbb{R}^n, \ n = 2, 3, \]
\[ u\big|_{t=0} = u_0(x), \quad \frac{\partial u}{\partial t}\Big|_{t=0} = u_1(x). \tag{4.21} \]
Figure 4.5: An $x$-$t$ diagram of the solution $u$ in Example 4.1.1
The fundamental solutions $E_n$ for the wave operator have the form
\[ E_2 = \frac{H(at - |x|)}{2\pi a \sqrt{a^2 t^2 - |x|^2}}, \quad x \in \mathbb{R}^2, \tag{4.22} \]
\[ E_3 = \frac{H(t)}{4\pi a^2 t}\, \delta_{S_{at}}(x), \quad x \in \mathbb{R}^3. \tag{4.23} \]
Here, as usual, $H(z)$ is the Heaviside function, and $\delta_{S_R}(x)$ is the Dirac $\delta$-function concentrated on the sphere $S_R$. By definition, $\delta_{S_R}(x)$ acts as follows:
\[ \big( \delta_S(x), \varphi(x) \big) = \int_S \varphi(x)\, dS, \quad \varphi \in \mathcal{D}(\mathbb{R}^3), \]
where $dS$ is the measure on the surface $S$.
Thus, the fundamental solution $E_3$ acts on test functions $\varphi(x,t) \in \mathcal{D}(\mathbb{R}^4)$ as follows:
\[ \big( E_3(x,t), \varphi(x,t) \big) = \frac{1}{4\pi a^2} \int_0^\infty \frac{1}{t} \int_{S_{at}} \varphi(x,t)\, dS\, dt, \]
or, what is the same,
\[ \big( E_3(x,t), \varphi(x,t) \big) = \frac{1}{4\pi a^2} \int_{\mathbb{R}^3} \frac{\varphi\big(x, |x|/a\big)}{|x|}\, dx. \]
The fundamental solutions have the following properties:
\[ E_n \to 0, \quad \frac{\partial E_n}{\partial t} \to \delta(x), \quad \frac{\partial^2 E_n}{\partial t^2} \to 0 \quad \text{as } t \to +0 \ \text{in } \mathcal{D}'(\mathbb{R}^n). \]
Using these fundamental solutions and calculating the convolutions, one can prove the following statement:
Theorem 4.1.1. Let $f \in C^2(\mathbb{R}^1 \times \mathbb{R}^n)$, $u_0 \in C^3(\mathbb{R}^n)$ and $u_1 \in C^2(\mathbb{R}^n)$ for $n = 2, 3$. Then the classical solution of problem (4.21) exists, is unique, and has the following form:

for $n = 3$ (Kirchhoff formula)
\[ u(x,t) = \frac{1}{4\pi a^2} \int_{U(x;at)} \frac{f\big(\xi, t - \tfrac{|x-\xi|}{a}\big)}{|x-\xi|}\, d\xi + \frac{1}{4\pi a^2 t} \int_{S(x;at)} u_1(\xi)\, dS + \frac{1}{4\pi a^2} \frac{\partial}{\partial t} \frac{1}{t} \int_{S(x;at)} u_0(\xi)\, dS; \tag{4.24} \]
for $n = 2$ (Poisson formula)
\[ u(x,t) = \frac{1}{2\pi a} \int_0^t \int_{U(x;a(t-\tau))} \frac{f(\xi,\tau)}{\sqrt{a^2(t-\tau)^2 - |x-\xi|^2}}\, d\xi\, d\tau + \frac{1}{2\pi a} \int_{U(x;at)} \frac{u_1(\xi)}{\sqrt{a^2 t^2 - |x-\xi|^2}}\, d\xi + \frac{1}{2\pi a} \frac{\partial}{\partial t} \int_{U(x;at)} \frac{u_0(\xi)}{\sqrt{a^2 t^2 - |x-\xi|^2}}\, d\xi. \tag{4.25} \]
Here, $U(x;R)$ is the ball with center at $x$ and radius $R$, and $S(x;R)$ is the sphere with center at $x$ and radius $R$.
Figure 4.6: A spherical wave for $n = 3$
Remark 4.1.1. We found the solution of the one-dimensional wave equation using very simple and direct considerations. However, the d'Alembert formula (4.18) can also easily be derived with the help of the fundamental solution
\[ E_1(x,t) = \frac{1}{2a} H(at - |x|), \quad x \in \mathbb{R}^1. \tag{4.26} \]
Let us now consider the behavior of the solution's support for the homogeneous equation ($f = 0$). Let $n = 3$ and let $u_0$ and $u_1$ be concentrated at the point $\xi = 0$. Since the initial conditions can be considered as $\delta$-type perturbations of the initial data, the solution is the superposition of the elementary perturbations
\[ u_1(\xi) E_n(x-\xi, t) \quad \text{and} \quad u_0(\xi) \frac{\partial E_n(x-\xi, t)}{\partial t}, \quad n = 3. \tag{4.27} \]
Since $E_3(x,t)$ is the $\delta$-function on the sphere $S_{at}$ (see formula (4.23)), the initial perturbation evolves as a spherical wave $|x| = at$ (see Figure 4.6). Since, at any instant of time $t$, the solution is equal to zero everywhere outside the sphere $S_{at}$, the Huygens principle holds. Obviously, the support of the solution belongs to the cone of the future
\[ \Gamma^+ = \big\{ (x,t),\ |x| \le at,\ t \ge 0 \big\}. \]
Next, let $u_0$ and $u_1$ have a compact support $K_0$. Then each point $\xi \in K_0$ is the source of a spherical wave, and the whole solution is the superposition with respect to $\xi$ of the elementary perturbations (4.27). Obviously, the envelope of the spheres $S(\xi, at)$, $\xi \in K_0$, forms the leading front of the wave. However, for sufficiently large time these spheres will have traveled so far that a "hole" appears inside $K_0$. This means that a back front appears, and it expands with speed $a$. The domain between the leading and back fronts is called the domain of influence of $K_0$.

Figure 4.7: Wave diffusion for $n = 2$
Let us now consider the two-dimensional case. We again treat $u_0$ and $u_1$ as local perturbations and consider the elementary solutions (4.27) for $n = 2$. Since $E_2$ is a Heaviside-type function (see formula (4.22)), the support of the solution fills out the disk $U_{at}$ (see Figure 4.7). Thus, we have the situation of wave diffusion, with the leading front $S_{at}$ but without any back front. Obviously, a similar situation appears for any initial data $u_0$ and $u_1$ with compact supports.
So, for the three-dimensional case we have the Huygens principle, for the two-dimensional case we have diffusion, and for the one-dimensional case we have the Huygens principle for $u_1 \equiv 0$ or diffusion for $u_1 \not\equiv 0$. In order to explain such different behavior of the wave equation solution, we have to understand what the two- and one-dimensional cases actually mean. Since our world is three-dimensional, a process can be treated as two-dimensional only in the sense that it does not depend on the third variable. In that sense the initial data $u_0 = u_0(0)\delta(x)$ or $u_1 = u_1(0)\delta(x)$ for $n = 2$ mean that $u_0$ and $u_1$ are concentrated on
the line $(0, 0, x_3) \subset \mathbb{R}^3$ (see Figure 4.8). Any point $x^0 = (0, 0, x_3^0)$ is a source of a spherical wave with support $S(x^0, at) \subset \mathbb{R}^3$. It is clear that the cross section of $S(x^0, at)$ and the plane $x_3 = 0$ is the circle $S_r = \{x \in \mathbb{R}^2,\ |x|^2 = a^2 t^2 - (x_3^0)^2\}$. The union of these circles for $|x_3^0| \in [0, at]$ fills out the disk $U_{at} \subset \mathbb{R}^2$. Obviously, the back front does not appear for any $t$.

Figure 4.8: Concentration of initial data on the line $(0,0,x_3)$ implies the appearance of the diffusion phenomenon
The same considerations can be made for the one-dimensional case: the initial perturbation at the point $0 \in \mathbb{R}^1$ means, from a three-dimensional viewpoint, a union of sources filling out the plane $(0, x_2, x_3)$, $x_2, x_3 \in \mathbb{R}^1$. However, the fundamental solution for $n = 1$ has the form (4.26). Thus, for $u_1 \equiv 0$ the elementary perturbations (4.27) are
\[ u_0(\xi) \frac{\partial E_1(x-\xi, t)}{\partial t} = \frac12 u_0(\xi)\, \delta(at - |x-\xi|). \]
Now it is obvious that any point $\xi$ with $u_0(\xi) \neq 0$ initializes a nonzero perturbation only at the points $x = \xi \pm at$. So the back front appears and we have again the Huygens principle.
The main conclusion of this subsection, which is true for any hyperbolic equation, is the following: the solution of a homogeneous equation with compactly supported initial data has compact support for any finite $t$.
4.2 Mixed problems for hyperbolic equations
4.2.1 The reflection method
A half-infinite string
Consider the wave equation on the half line,
\[ \frac{\partial^2 u}{\partial t^2} - a^2 \frac{\partial^2 u}{\partial x^2} = 0, \quad x > 0, \ t > 0, \tag{4.28} \]
which describes the motion of a string of half-infinite length. Let us assume that the string's left end point is fixed:
\[ u\big|_{x=0} = 0, \quad t \ge 0, \tag{4.29} \]
and that the initial displacement and velocity are known:
\[ u\big|_{t=0} = u_0(x), \quad \frac{\partial u}{\partial t}\Big|_{t=0} = u_1(x), \quad x \ge 0. \tag{4.30} \]
We will also assume that $u_0 \in C^2(x \ge 0)$, $u_1 \in C^1(x \ge 0)$, and that the matching conditions
\[ u_0(0) = \frac{d^2 u_0}{dx^2}\Big|_{x=0} = 0, \quad u_1(0) = 0 \tag{4.31} \]
hold.

Recall that the general form of the solution of the one-dimensional wave equation (see Subsection 4.1.2) is
\[ u(x,t) = F(x - at) + G(x + at), \tag{4.32} \]
where $F$ and $G$ are twice differentiable functions. The boundary condition (4.29) implies the equality
\[ F(-at) + G(at) = 0. \tag{4.33} \]
Since $t \ge 0$ is arbitrary here, (4.33) implies the equality $F(-\xi) = -G(\xi)$ for any $\xi \ge 0$. Thus
\[ u(x,t) = G(x + at) - G(-x + at), \tag{4.34} \]
which implies that the solution is an odd function with respect to $x$.
Let us rewrite the initial conditions (4.30) as conditions on the whole line:
\[ \bar u\big|_{t=0} = \bar u_0(x), \quad \frac{\partial \bar u}{\partial t}\Big|_{t=0} = \bar u_1(x), \quad x \in \mathbb{R}^1, \tag{4.35} \]
where $\bar u_0$ and $\bar u_1$ are the odd continuations of $u_0$ and $u_1$ to negative $x$, and $\bar u$ is the odd continuation of the solution. Note that $\bar u_0$ and $\bar u_1$ exist under condition (4.31).

Now, using the d'Alembert formula and considering the solution for positive $x$, we have
\[ u(x,t) = \frac12 \big[ \bar u_0(x+at) + \bar u_0(x-at) \big] + \frac{1}{2a} \int_{x-at}^{x+at} \bar u_1(\xi)\, d\xi, \quad x \ge 0. \tag{4.36} \]
To rewrite this formula in a more explicit form, consider two regions, $\Omega_1$ and $\Omega_2$, in the quadrant $\{x > 0,\ t > 0\}$, defined as follows:
\[ \Omega_1 = \big\{ (x,t),\ x > at,\ t > 0 \big\}, \quad \Omega_2 = \big\{ (x,t),\ 0 \le x \le at,\ t > 0 \big\}. \]
Since, for any point $(x_1, t_1) \in \Omega_1$, the starting points $x_0 = x_1 - at_1$ and $x_0' = x_1 + at_1$ of the characteristics $x = \pm at + x_0$ are nonnegative, the cone of dependence $\Gamma^-_{(x_1,t_1)}$ contains the initial data only for positive $x$ (see Figure 4.9). Thus
\[ \bar u_0(x) = u_0(x), \quad \bar u_1(\xi) = u_1(\xi), \quad \xi \ge x - at \ge 0. \]
This and (4.36) imply the same formula for the solution in region $\Omega_1$ as if we considered the Cauchy problem on the whole line:
\[ u(x,t) = \frac12 \big[ u_0(x+at) + u_0(x-at) \big] + \frac{1}{2a} \int_{x-at}^{x+at} u_1(\xi)\, d\xi, \quad x \ge at. \tag{4.37} \]
Figure 4.9: Cones of dependence $\Gamma^-_{(x,t)}$ for the domains $\Omega_1$ and $\Omega_2$
Let us consider the region $\Omega_2$ and fix a point $(x_2, t_2) \in \Omega_2$. The starting points of the characteristics that intersect at the point $(x_2, t_2)$, $t_2 > 0$, are $x_0 = x_2 - at_2 < 0$ and $x_0' = x_2 + at_2 > 0$. Thus, due to the odd nature of $\bar u_0$ and $\bar u_1$, we have
\[ \bar u_0(x_2 + at_2) = u_0(x_2 + at_2), \quad \bar u_1(\xi) = u_1(\xi), \quad \xi \ge 0, \]
\[ \bar u_0(x_2 - at_2) = -u_0(at_2 - x_2), \quad \bar u_1(\xi) = -u_1(-\xi), \quad 0 \ge \xi \ge x_2 - at_2. \]
Therefore, for $(x,t) \in \Omega_2$,
\[ \int_{x-at}^{x+at} \bar u_1(\xi)\, d\xi = -\int_{x-at}^{0} u_1(-\xi)\, d\xi + \int_0^{x+at} u_1(\xi)\, d\xi = \int_{at-x}^{x+at} u_1(\xi)\, d\xi. \]
Thus, for the solution in region $\Omega_2$, we have the formula
\[ u(x,t) = \frac12 \big[ u_0(x+at) - u_0(at-x) \big] + \frac{1}{2a} \int_{at-x}^{x+at} u_1(\xi)\, d\xi, \quad 0 \le x \le at. \tag{4.38} \]
This formula shows (see also Figure 4.9) that two waves arrive at the point $(x,t) \in \Omega_2$: the first originated at the point $(x+at, 0)$, whereas the second originated at the point $(at-x, 0)$ and reaches the point $(x,t)$ after being reflected from the wall $x = 0$.
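The reflection construction is easy to verify numerically. In the sketch below (not the book's code) the initial displacement $u_0$ is a hypothetical smooth function chosen to satisfy the matching conditions (4.31), $u_1 \equiv 0$, and formula (4.38) is compared with the d'Alembert formula (4.36) applied to the odd continuation:

```python
import numpy as np

a = 1.0

def u0(x):
    # hypothetical initial displacement with u0(0) = u0''(0) = 0
    return np.sin(2*np.pi*x) * np.exp(-x**2)

def u0_odd(x):
    # odd continuation of u0 to the whole line
    return np.sign(x) * u0(np.abs(x))

def u_dalembert(x, t):
    # formula (4.36) with u1 = 0: half-sum of the odd continuation
    return 0.5 * (u0_odd(x + a*t) + u0_odd(x - a*t))

def u_reflected(x, t):
    # formula (4.38) with u1 = 0, valid in the region 0 <= x <= a*t
    return 0.5 * (u0(x + a*t) - u0(a*t - x))

x = np.linspace(0.0, 0.9, 10)      # points of Omega_2 for t = 1
print(float(np.max(np.abs(u_dalembert(x, 1.0) - u_reflected(x, 1.0)))))
print(float(u_dalembert(0.0, 2.3)))   # the fixed end stays at zero
```

Both printed numbers vanish: inside $\Omega_2$ the two formulas coincide, and the odd continuation automatically enforces $u(0,t) = 0$.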
A similar construction can be applied to a half-infinite string with a free left end point. This means the mixed problem (4.28), (4.30), where the boundary condition (4.29) is replaced by
\[ \frac{\partial u}{\partial x}\Big|_{x=0} = 0. \tag{4.39} \]
In this case, the general form of the wave equation solution implies the evenness of the solution:
\[ u(x,t) = G(x + at) + G(-x + at). \]
Thus, we consider the auxiliary initial conditions (4.35) for even continuations $\bar u_0$ and $\bar u_1$:
\[ \bar u_0(-x) = u_0(x), \quad \bar u_1(-x) = u_1(x), \quad x \ge 0, \]
and assume that $u_1(0) = 0$.

Therefore, the solution in region $\Omega_1$ has the same form (4.37), whereas in $\Omega_2$ it has the representation
\[ u(x,t) = \frac12 \big[ u_0(x+at) + u_0(at-x) \big] + \frac{1}{2a} \int_0^{at-x} u_1(\xi)\, d\xi + \frac{1}{2a} \int_0^{x+at} u_1(\xi)\, d\xi, \quad 0 \le x \le at. \]
A bounded string
Consider the motion of a string of length $\ell$ with fixed end points:
\[ \frac{\partial^2 u}{\partial t^2} - a^2 \frac{\partial^2 u}{\partial x^2} = 0, \quad x \in (0, \ell), \ t > 0, \]
\[ u\big|_{x=0} = 0, \quad u\big|_{x=\ell} = 0, \quad t \ge 0, \]
\[ u\big|_{t=0} = u_0(x), \quad \frac{\partial u}{\partial t}\Big|_{t=0} = u_1(x), \quad x \in [0, \ell]. \tag{4.40} \]
Applying the same considerations as above, we find that the general solution of the wave equation has again the form (4.34). However, the presence of the right fixed end point requires an additional assumption on $G$:
\[ G(\xi + 2\ell) = G(\xi), \quad G \in C^2(\mathbb{R}^1). \tag{4.41} \]
Indeed, the boundary conditions in (4.40) require, instead of (4.33), two equalities:
\[ F(-at) = -G(at), \quad F(\ell - at) = -G(\ell + at). \tag{4.42} \]
Thus
\[ -G(\ell + \xi) = F(\ell - \xi) = -G(\xi - \ell), \]
which implies the periodicity condition (4.41).
Formula (4.34) with $2\ell$-periodic $G$ shows that both ends, $x = 0$ and $x = \ell$, reflect waves, changing their sign. Therefore, the motion of the string is $2\ell/a$-periodic in time (see Figure 4.10).
Figure 4.10: Characteristics reflections on the two walls, $(0,t)$ and $(\ell,t)$, $t > 0$
Let us now construct the solution of the mixed problem (4.40) precisely. Note that the existence of a classical solution $u(x,t)$ of this problem requires the existence of a $C^2$-continuation $\bar u(x,t)$, odd with respect to the points $x = 0$ and $x = \ell$.
Let
\[ u_0 \in C^2([0,\ell]), \quad u_1 \in C^1([0,\ell]), \]
\[ u_0(0) = u_0(\ell) = \frac{d^2 u_0}{dx^2}\Big|_{x=0} = \frac{d^2 u_0}{dx^2}\Big|_{x=\ell} = 0, \quad u_1(0) = u_1(\ell) = 0. \tag{4.43} \]
Then there exist continuations $\bar u_0$ and $\bar u_1$, odd with respect to $x = 0$ and $x = \ell$. Therefore, the d'Alembert formula gives the solution as follows:
\[ u(x,t) = \frac12 \big[ \bar u_0(x+at) + \bar u_0(x-at) \big] + \frac{1}{2a} \int_{x-at}^{x+at} \bar u_1(\xi)\, d\xi, \quad x \in [0,\ell]. \tag{4.44} \]
Next, note that the characteristics $x = at$ and $x = \ell - at$, after reflections at the points $(\ell, \ell/a)$ and $(0, \ell/a)$, divide the rectangle $[0,\ell] \times [0, 2\ell/a]$ into seven regions: $\Omega_1$, $\Omega_2$ and $\Omega_2'$, $\Omega_3$, $\Omega_4$ and $\Omega_4'$, and $\Omega_5$ (see Figure 4.11).

Figure 4.11: Partition of the rectangle $[0,\ell] \times [0, \frac{2\ell}{a}]$ by the characteristics

Obviously, for the domains $\Omega_1$, $\Omega_2$ and $\Omega_2'$ the solution is constructed in the same way as for half-infinite strings (see Figure 4.11).
4.2. MIXED PROBLEMS FOR HYPERBOLIC EQUATIONS 195
For the domain $\Omega_3$ we have to take into account the reflections of both characteristics. Fix a point $(x_3, t_3) \in \Omega_3$. Then
\[ u(x_3, t_3) = \frac12 \big[ \bar u_0(x_3 + at_3) + \bar u_0(x_3 - at_3) \big] + \frac{1}{2a} \int_{x_3 - at_3}^{x_3 + at_3} \bar u_1(\xi)\, d\xi. \tag{4.45} \]
Since $\bar u_0$ and $\bar u_1$ are odd functions with respect to $x = 0$, we have $\bar u_0(x_3 - at_3) = -u_0(at_3 - x_3)$ and $\bar u_1(\xi) = -u_1(-\xi)$ for $\xi \in (x_3 - at_3, 0)$. Denote $\delta_0 = x_3 + at_3 - \ell$ and let $\alpha = \ell - \delta_0$ (see Figure 4.12).
Figure 4.12: Construction of the solution for the region $\Omega_3$
Then the same argument as before implies the equalities
\[ \bar u_0(x_3 + at_3) = -u_0(\ell - \delta_0) = -u_0(\alpha), \]
\[ \bar u_1(\xi) = -u_1(\ell - (\xi - \ell)) = -u_1(2\ell - \xi) \quad \text{for } \xi \in (\ell, x_3 + at_3). \]
Thus, we can transform the integral in the d'Alembert formula (4.45) as follows:
\[ \int_{x_3 - at_3}^{x_3 + at_3} \bar u_1(\xi)\, d\xi = \int_{x_3 - at_3}^{0} \bar u_1(\xi)\, d\xi + \int_0^\ell u_1(\xi)\, d\xi + \int_\ell^{x_3 + at_3} \bar u_1(\xi)\, d\xi \]
\[ = \int_{at_3 - x_3}^{0} u_1(\xi)\, d\xi + \int_0^\ell u_1(\xi)\, d\xi + \int_\ell^{\alpha} u_1(\xi)\, d\xi = \int_{at_3 - x_3}^{\alpha} u_1(\xi)\, d\xi. \]
Therefore, we find that the solution in region $\Omega_3$ is
\[ u(x,t) = -\frac12 \big[ u_0(at - x) + u_0(\alpha) \big] + \frac{1}{2a} \int_{at - x}^{\alpha} u_1(\xi)\, d\xi, \]
where $(x,t) \in \Omega_3$ and $\alpha = 2\ell - at - x \in (0, \ell)$.
Consider now the region $\Omega_4$. Fix a point $(x_4, t_4) \in \Omega_4$ and denote $b = x_4 - at_4$, $c = x_4 + at_4$ (see Figure 4.13).

Figure 4.13: Construction of the solution for the region $\Omega_4$
According to the general formula (4.44) we have
\[ u(x_4, t_4) = \frac12 \big[ \bar u_0(b) + \bar u_0(c) \big] + \frac{1}{2a} \int_b^c \bar u_1(\xi)\, d\xi. \tag{4.46} \]
Denote $\delta = -\ell - b$ and let $\gamma = \ell - \delta \in (0, \ell)$. Then
\[ \bar u_0(b) = \bar u_0(-\ell - \delta + 2\ell) = \bar u_0(\ell - \delta) = u_0(\gamma). \]
Similarly, denote $\delta_1 = c - \ell$ and let $\beta = \ell - \delta_1 \in (0, \ell)$. Then
\[ \bar u_0(c) = \bar u_0(\ell + \delta_1) = -u_0(\ell - \delta_1) = -u_0(\beta). \]
Furthermore, since $\bar u_1$ is a $2\ell$-periodic function with zero mean and $b + 2\ell = \gamma$, we have
\[ \int_b^c \bar u_1(\xi)\, d\xi = \int_b^{-\ell} \bar u_1(\xi)\, d\xi + \int_\ell^c \bar u_1(\xi)\, d\xi = \int_{b+2\ell}^{\ell} \bar u_1(\xi)\, d\xi - \int_\beta^\ell u_1(\xi)\, d\xi = \int_\gamma^\beta u_1(\xi)\, d\xi. \]
Thus, formula (4.46) becomes
\[ u(x,t) = \frac12 \big[ u_0(\gamma) - u_0(\beta) \big] - \frac{1}{2a} \int_\beta^\gamma u_1(\xi)\, d\xi, \]
where $(x,t) \in \Omega_4$ and $\beta = 2\ell - at - x \in (0,\ell)$, $\gamma = 2\ell - at + x \in (0,\ell)$.

The meaning of this formula is illustrated in Figure 4.13: two waves arrive at the point $(x,t)$. One of them, starting at the point $x = \beta$, had one reflection at the border $x = \ell$; the other, starting at the point $x = \gamma$, had two reflections, at the borders $x = \ell$ and $x = 0$.

The regions $\Omega_4'$ and $\Omega_5$ are treated in the same way.
4.2.2 The Fourier method for the wave equation
Consider the mixed problem for the wave equation in a bounded domain $\Omega$ with piecewise smooth boundary $\partial\Omega$:
\[ \frac{\partial^2 u}{\partial t^2} + Lu = F(x,t), \quad (x,t) \in Q_T, \]
\[ u\big|_{t=0} = u_0(x), \quad \frac{\partial u}{\partial t}\Big|_{t=0} = u_1(x), \quad x \in \Omega, \]
\[ \Big( \alpha u + \beta \frac{\partial u}{\partial \nu} \Big)\Big|_\Sigma = 0, \tag{4.47} \]
where
\[ Lu = -\big\langle \nabla, p(x) \nabla u \big\rangle + qu \]
is an elliptic operator, $p \ge p_0 > 0$ and $q \ge 0$ are sufficiently smooth functions, $\alpha = \alpha(x') \ge 0$, $\beta = \beta(x') \ge 0$, $\alpha(x') + \beta(x') > 0$ for $x' \in \partial\Omega$, $Q_T = (0,T) \times \Omega$, $T > 0$, $\Sigma = (0,T) \times \partial\Omega$, and $\nu$ is the outward normal to $\partial\Omega$.
Recall that the Sturm-Liouville problem
\[ L\varphi_j(x) = \lambda_j \varphi_j(x), \quad x \in \Omega, \]
\[ \Big( \alpha \varphi_j(x) + \beta \frac{\partial \varphi_j(x)}{\partial \nu} \Big)\Big|_{\partial\Omega} = 0, \tag{4.48} \]
has a denumerable set of eigenvalues $\lambda_j > 0$ and eigenfunctions $\varphi_j(x)$, $j = 1, 2, \ldots$. Recall also that any function
\[ f(x) \in M_L \stackrel{\mathrm{def}}{=} \big\{ f \in C^2(\Omega) \cap C^1(\bar\Omega),\ \big( \alpha f + \beta \, \partial f/\partial\nu \big)\big|_{\partial\Omega} = 0 \big\} \]
can be expanded in a Fourier series
\[ f(x) = \sum_{j=1}^\infty f_j \varphi_j(x), \tag{4.49} \]
where
\[ f_j = \int_\Omega f(x) \varphi_j(x)\, dx \]
and the eigenfunctions $\varphi_j(x)$ are assumed to be normalized:
\[ \int_\Omega \varphi_j^2(x)\, dx = 1. \]
Consider the homogeneous equation (4.47) and set
\[ u = T(t) \varphi_j(x). \tag{4.50} \]
Since $\varphi_j(x)$ is an eigenfunction, we obtain
\[ \varphi_j \frac{d^2 T}{dt^2} + \lambda_j \varphi_j T = 0. \tag{4.51} \]
Obviously, the result is
\[ T(t) = A_j \sin\sqrt{\lambda_j}\, t + B_j \cos\sqrt{\lambda_j}\, t, \tag{4.52} \]
with arbitrary coefficients $A_j$ and $B_j$. Thus, any function of the form (4.50), where $\varphi_j$ is an eigenfunction of $L$ and $T$ is a linear combination of $\sin\sqrt{\lambda_j}\, t$ and $\cos\sqrt{\lambda_j}\, t$, satisfies the homogeneous wave equation (4.47). A particular solution of the form (4.50) is called a standing wave, since it can be interpreted as a stationary wave $\varphi_j(x)$ with an amplitude $T(t)$ oscillating in time. Emphasizing that the solution (4.50), (4.52) for a fixed $j$ is only one of many possible, it is also called the $j$-th mode of vibration. The number $\sqrt{\lambda_j}$ is called the frequency, while the numbers $\sqrt{\lambda_1}, \sqrt{\lambda_2}, \ldots$ are called natural frequencies or fundamental frequencies.
To find a solution with arbitrary initial data $u_0, u_1 \in M_L$, we have to consider a superposition of standing waves:
\[ u = \sum_{j=1}^\infty T_j(t) \varphi_j(x). \tag{4.53} \]
Expanding $u_0$ and $u_1$ in Fourier series,
\[ u_0(x) = \sum_{j=1}^\infty u_j^0 \varphi_j(x), \quad u_1(x) = \sum_{j=1}^\infty u_j^1 \varphi_j(x), \tag{4.54} \]
results in the initial conditions
\[ T_j\big|_{t=0} = u_j^0, \quad \frac{dT_j}{dt}\Big|_{t=0} = u_j^1, \quad j = 1, 2, \ldots. \tag{4.55} \]
Obviously, formulas (4.52) and (4.55) imply
\[ A_j = \frac{1}{\sqrt{\lambda_j}} u_j^1, \quad B_j = u_j^0, \quad j = 1, 2, \ldots. \tag{4.56} \]
So the formal solution of the homogeneous problem (4.47) is the following:
\[ u(x,t) = \sum_{j=1}^\infty \Big( u_j^0 \cos\sqrt{\lambda_j}\, t + \frac{u_j^1}{\sqrt{\lambda_j}} \sin\sqrt{\lambda_j}\, t \Big) \varphi_j(x). \tag{4.57} \]
To consider the nonhomogeneous problem with right-hand side $F \in M_L$, we use again the Fourier expansion (4.49):
\[ F(x,t) = \sum_{j=1}^\infty F_j(t) \varphi_j(x). \tag{4.58} \]
Writing the solution in the form (4.53) and taking into account (4.54), we obtain the following problem:
\[ \frac{d^2 T_j}{dt^2} + \lambda_j T_j = F_j(t), \]
\[ T_j\big|_{t=0} = u_j^0, \quad \frac{dT_j}{dt}\Big|_{t=0} = u_j^1, \quad j = 1, 2, \ldots. \tag{4.59} \]
The solution has the form
\[ T_j(t) = u_j^0 \cos\sqrt{\lambda_j}\, t + \frac{u_j^1}{\sqrt{\lambda_j}} \sin\sqrt{\lambda_j}\, t + \frac{1}{\sqrt{\lambda_j}} \int_0^t \sin\sqrt{\lambda_j}(t-\tau)\, F_j(\tau)\, d\tau. \tag{4.60} \]
Thus, the formal solution is
\[ u(x,t) = \sum_{j=1}^\infty \Big( u_j^0 \cos\sqrt{\lambda_j}\, t + \frac{u_j^1}{\sqrt{\lambda_j}} \sin\sqrt{\lambda_j}\, t + \frac{1}{\sqrt{\lambda_j}} \int_0^t \sin\sqrt{\lambda_j}(t-\tau)\, F_j(\tau)\, d\tau \Big) \varphi_j(x). \tag{4.61} \]
Theorem 4.2.1. Let the series (4.61), and the series obtained by differentiating (4.61) once with respect to $t$ and $x$, converge uniformly on any cylinder $\bar Q_T$. Assume also that the series obtained by differentiating (4.61) twice with respect to $t$ and $x$ converge uniformly on any compact $K \subset Q_T$. Then the series (4.61) represents the classical solution of the mixed problem (4.47).
Now consider a specific case:
\[ u_0 = u_1 = 0, \quad F(x,t) = C \varphi_i(x) \sin\omega t, \]
for some fixed $i$. Obviously, for any $j = 1, 2, \ldots$,
\[ A_j = B_j = 0, \quad F_j = C \delta_{ij} \sin\omega t, \]
where $\delta_{ij}$ is the Kronecker symbol: $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \neq j$. Formula (4.61) implies, for $\omega \neq \sqrt{\lambda_i}$:
\[ T_j(t) = \frac{C \delta_{ij}}{\sqrt{\lambda_j}} \int_0^t \sin\omega\tau\, \sin\sqrt{\lambda_j}(t-\tau)\, d\tau = \frac{C \delta_{ij}}{\omega^2 - \lambda_i} \Big( \frac{\omega}{\sqrt{\lambda_i}} \sin\sqrt{\lambda_i}\, t - \sin\omega t \Big). \]
Thus, the formal series (4.61) becomes the exact solution
\[ u(x,t) = \frac{C}{\omega^2 - \lambda_i} \Big( \frac{\omega}{\sqrt{\lambda_i}} \sin\sqrt{\lambda_i}\, t - \sin\omega t \Big) \varphi_i(x). \tag{4.62} \]
However, if $\omega \to \sqrt{\lambda_i}$, the right-hand side of (4.62) becomes indeterminate. Resolving this indeterminacy either by L'Hospital's rule or, simply, by using formula (4.60) for $\omega = \sqrt{\lambda_i}$, results in the following:
\[ u(x,t) = \frac{C}{2\sqrt{\lambda_i}} \Big( \frac{\sin\sqrt{\lambda_i}\, t}{\sqrt{\lambda_i}} - t \cos\sqrt{\lambda_i}\, t \Big) \varphi_i(x). \tag{4.63} \]
Therefore, a periodic external force with one of the fundamental frequencies $\sqrt{\lambda_i}$ generates a growth of the amplitude of the solution that is linear in $t$. This phenomenon is called resonance.
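The passage from (4.62) to (4.63) is easy to check numerically. In the sketch below, $\lambda_i = 4$ and $C = 1$ are hypothetical values; the off-resonance time factor converges to the resonant one as $\omega \to \sqrt{\lambda_i}$, and at resonance the amplitude grows linearly in $t$:

```python
import math

lam, C = 4.0, 1.0          # hypothetical eigenvalue lambda_i and amplitude C
k = math.sqrt(lam)

def T_off(t, w):
    # time factor of (4.62), defined for w != sqrt(lam)
    return C / (w**2 - lam) * ((w / k) * math.sin(k*t) - math.sin(w*t))

def T_res(t):
    # resonant limit (4.63): the t*cos term produces linear growth
    return C / (2*k) * (math.sin(k*t)/k - t*math.cos(k*t))

print(abs(T_off(1.7, k + 1e-6) - T_res(1.7)))    # small: (4.62) -> (4.63)
t1, t2 = 2*math.pi/k, 20*math.pi/k               # peaks of t*cos(k t)
print(abs(T_res(t2)) / abs(T_res(t1)))           # ~10: amplitude grows like t
```

The second printed ratio is close to $t_2/t_1 = 10$, illustrating the linear growth of the resonant amplitude.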
Example 4.2.1. A finite string.

Consider a string of finite length $\ell$ with both end points, $x = 0$ and $x = \ell$, fixed; $u(x,t)$ is the displacement of the string from its equilibrium position ($u = 0$). The mathematical model is the following:
\[ \frac{\partial^2 u}{\partial t^2} = p \frac{\partial^2 u}{\partial x^2}, \quad x \in (0,\ell), \ t > 0, \]
\[ u\big|_{x=0} = 0, \quad u\big|_{x=\ell} = 0, \quad t \ge 0, \]
\[ u\big|_{t=0} = u_0(x), \quad \frac{\partial u}{\partial t}\Big|_{t=0} = u_1(x), \quad x \in (0,\ell). \tag{4.64} \]
Here $p = T/\rho$, where $T$ is the tension of the string and $\rho$ is the density (assumed here to be constant), $u_0$ is the initial displacement, and $u_1$ is the initial velocity of the string. At both end points, the position and the velocity are zero.

The Sturm-Liouville problem (4.48) corresponding to problem (4.64) is
\[ -\frac{d^2 \varphi_j}{dx^2} = \lambda_j \varphi_j, \quad \varphi_j\big|_{x=0} = \varphi_j\big|_{x=\ell} = 0. \]
So
\[ \lambda_j = \Big( \frac{j\pi}{\ell} \Big)^2, \quad \varphi_j(x) = \sqrt{\frac{2}{\ell}} \sin\frac{j\pi x}{\ell}, \quad j = 1, 2, \ldots. \]
As a result, formula (4.61) becomes
\[ u(x,t) = \sum_{j=1}^\infty \Big( u_j^0 \cos\frac{j\pi\sqrt{p}}{\ell}\, t + \frac{\ell}{j\pi\sqrt{p}} u_j^1 \sin\frac{j\pi\sqrt{p}}{\ell}\, t \Big) \sqrt{\frac{2}{\ell}} \sin\frac{j\pi x}{\ell}, \tag{4.65} \]
where
\[ u_j^k = \sqrt{\frac{2}{\ell}} \int_0^\ell u_k(x) \sin\frac{j\pi x}{\ell}\, dx, \quad k = 0, 1. \tag{4.66} \]
The harmonic oscillation $T_1 \varphi_1$ with the minimal frequency $\sqrt{\lambda_1} = \pi/\ell$ is often called the fundamental tone, while the other modes $T_2\varphi_2, T_3\varphi_3, \ldots$ are called overtones. $\diamond$
Let us consider in more detail the special case of problem (4.64) (see Figure 4.14)
\[ u_0 = \frac{\alpha}{\ell} \big( \ell - |2x - \ell| \big), \quad u_1 = 0. \tag{4.67} \]
According to formulas (4.66), (4.67) we have $u_j^1 = 0$ and

Figure 4.14: A finite string

\[ u_j^0 = \sqrt{\frac{2}{\ell}}\, \frac{\alpha}{\ell} \int_0^\ell \big( \ell - |2x - \ell| \big) \sin\frac{j\pi x}{\ell}\, dx = \frac{4\alpha\sqrt{2\ell}}{(j\pi)^2} \sin\frac{j\pi}{2}. \]
Hence, taking into account that $\sin j\pi/2$ is nonzero only for odd $j$ and setting $j = 2n+1$, we rewrite formula (4.65) in the final form
\[ u(x,t) = \frac{8\alpha}{\pi^2} \sum_{n=0}^\infty (-1)^n \frac{1}{(2n+1)^2} \cos\frac{(2n+1)\pi\sqrt{p}}{\ell}\, t\, \sin\frac{(2n+1)\pi}{\ell}\, x. \]
4.2.3 Energy relations for the mixed problem
Let $u(x,t)$ be the classical solution of the mixed problem (4.47). Define the so-called integral of energy
\[ E^2(t) = \frac12 \int_\Omega \Big[ \Big( \frac{\partial u}{\partial t} \Big)^2 + p|\nabla u|^2 + qu^2 \Big] dx + \frac12 \int_{\partial\Omega} p \frac{\alpha}{\beta} u^2\, ds, \tag{4.68} \]
which is the sum of the kinetic and potential energies of the vibrating medium at time $t$.
Lemma 4.2.1. Let $u_0 \in C^1(\Omega)$, $u_1 \in C^1(\Omega)$ and $F \in C(Q_T)$. Then
\[ E^2(t) = E^2(0) + \int_0^t \int_\Omega F(x,t') \frac{\partial u(x,t')}{\partial t'}\, dx\, dt', \quad t \ge 0, \tag{4.69} \]
where
\[ E^2(0) = \frac12 \int_\Omega \Big[ (u_1)^2 + p|\nabla u_0|^2 + q(u_0)^2 \Big] dx + \frac12 \int_{\partial\Omega} p \frac{\alpha}{\beta} (u_0)^2\, ds. \]
Moreover, the following inequality holds:
\[ E(t) \le E(0) + \frac{1}{\sqrt2} \int_0^t \|F\|(t')\, dt', \tag{4.70} \]
where $\|\cdot\|$ denotes the $L^2(\Omega)$ norm.
Corollary 4.2.1. Let $F = 0$. Then equality (4.69) implies the energy conservation law
\[ E(t) = E(0) \quad \text{for } t \ge 0. \]
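For a concrete illustration (a sketch with $p \equiv 1$, $q \equiv 0$ and Dirichlet conditions, so the boundary term of (4.68) is absent), take the standing wave $u = \cos(\pi t)\sin(\pi x)$ on $\Omega = (0,1)$ and evaluate $E^2(t)$ by the midpoint rule; it stays equal to $\pi^2/4$ for all $t$:

```python
import math

def energy_sq(t, m=20000):
    # E^2(t) = (1/2) * integral over (0,1) of (u_t^2 + u_x^2) dx
    # for the standing wave u(x,t) = cos(pi t) sin(pi x)
    h = 1.0 / m
    s = 0.0
    for i in range(m):
        x = (i + 0.5) * h                       # midpoint rule
        ut = -math.pi * math.sin(math.pi*t) * math.sin(math.pi*x)
        ux = math.pi * math.cos(math.pi*t) * math.cos(math.pi*x)
        s += 0.5 * (ut*ut + ux*ux) * h
    return s

print(energy_sq(0.0), energy_sq(0.37), energy_sq(1.1))   # all ~ pi^2/4
```

Kinetic energy is continuously traded against potential energy, but their sum is the conserved quantity of the corollary.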
Proof of Lemma 4.2.1. Fix a number $\varepsilon > 0$ and a domain $\Omega' \subset \Omega$ with piecewise smooth boundary $\partial\Omega'$. Consider the wave equation (4.47) in the cylinder $Q'_{T,\varepsilon} = (\varepsilon, T) \times \Omega'$. Multiplication by $\partial u/\partial t$ and integration over $Q'_{T,\varepsilon}$ results in the following:
\[ \int_{Q'_{T,\varepsilon}} F \frac{\partial u}{\partial t}\, dx\, dt = \int_{\Omega'} \int_\varepsilon^T \frac{\partial u}{\partial t} \frac{\partial^2 u}{\partial t^2}\, dt\, dx + \int_\varepsilon^T \int_{\Omega'} \frac{\partial u}{\partial t} Lu\, dx\, dt \]
\[ = \int_{\Omega'} \frac12 \Big( \frac{\partial u}{\partial t} \Big)^2 \Big|_\varepsilon^T dx + \int_\varepsilon^T \Big\{ \int_{\Omega'} \Big[ p \Big\langle \nabla u, \nabla \frac{\partial u}{\partial t} \Big\rangle + qu \frac{\partial u}{\partial t} \Big] dx - \int_{\partial\Omega'} p \frac{\partial u}{\partial t} \frac{\partial u}{\partial \nu'}\, ds \Big\} dt \]
\[ = \frac12 \int_{\Omega'} \Big[ \Big( \frac{\partial u}{\partial t} \Big)^2 + p|\nabla u|^2 + qu^2 \Big] \Big|_\varepsilon^T dx - \int_\varepsilon^T \int_{\partial\Omega'} p \frac{\partial u}{\partial t} \frac{\partial u}{\partial \nu'}\, ds\, dt, \tag{4.71} \]
where $\nu'$ is the outer normal vector to $\partial\Omega'$. Next, we take the limit as $\varepsilon \to 0$ and $\Omega' \to \Omega$ and use the boundary condition in (4.47), which yields
\[ \frac{\partial u}{\partial \nu}\Big|_{\partial\Omega} = -\frac{\alpha}{\beta} u\Big|_{\partial\Omega} \ \text{for } \beta > 0 \quad \text{and} \quad u\big|_{\partial\Omega} = 0 \ \text{for } \beta = 0. \]
Thus
\[ -\int_0^T \int_{\partial\Omega} p \frac{\partial u}{\partial t} \frac{\partial u}{\partial \nu}\, ds\, dt = \int_0^T \int_{\partial\Omega} p \frac{\alpha}{\beta} u \frac{\partial u}{\partial t}\, ds\, dt = \frac12 \int_{\partial\Omega} p \frac{\alpha}{\beta} u^2\, ds \Big|_0^T. \tag{4.72} \]
Formulas (4.71), (4.72) and the replacement of $T$ by $t$ lead to the equality (4.69).
Furthermore, after differentiating the equality (4.69) with respect to $t$ we obtain
\[ 2E(t)E'(t) = \int_\Omega F(x,t) \frac{\partial u}{\partial t}(x,t)\, dx, \quad t \ge 0. \tag{4.73} \]
At the same time, the following inequalities are obvious:
\[ \Big| \int_\Omega F(x,t) \frac{\partial u}{\partial t}\, dx \Big| \le \|F\| \Big\| \frac{\partial u}{\partial t} \Big\|(t), \tag{4.74} \]
\[ E(t) \ge \frac{1}{\sqrt2} \Big\| \frac{\partial u}{\partial t} \Big\|(t). \tag{4.75} \]
Combining relations (4.73)-(4.75) results in the inequality
\[ |E'(t)| \le \frac{1}{\sqrt2} \|F\|(t), \quad t \ge 0. \tag{4.76} \]
Integration of (4.76) over $t$ leads to the estimate (4.70).
Corollary 4.2.2. Under the assumptions of Lemma 4.2.1, the following estimates hold for $t \ge 0$:
\[ \Big\| \frac{\partial u}{\partial t} \Big\|(t) \le \sqrt2\, E(0) + \int_0^t \|F\|(t')\, dt', \tag{4.77} \]
\[ \|\nabla u\|(t) \le \sqrt{\frac{2}{p_0}}\, E(0) + \frac{1}{\sqrt{p_0}} \int_0^t \|F\|(t')\, dt', \tag{4.78} \]
\[ \|u\|(t) \le \|u_0\| + \sqrt2\, E(0)\, t + \int_0^t (t-t') \|F\|(t')\, dt'. \tag{4.79} \]

Proof. Estimate (4.77) obviously follows from (4.70) and (4.75). To obtain (4.78) it is enough to use (4.70) and the trivial estimate
\[ E(t) \ge \sqrt{\frac{p_0}{2}}\, \|\nabla u\|(t). \]
Next, differentiating the equality
\[ \|u\|^2(t) = \int_\Omega u^2(x,t)\, dx \]
results in
\[ 2\|u\| \frac{d}{dt}\|u\| = 2\int_\Omega u \frac{\partial u}{\partial t}\, dx \le 2\|u\| \Big\| \frac{\partial u}{\partial t} \Big\| \le 2\|u\| \Big( \sqrt2\, E(0) + \int_0^t \|F\|(t')\, dt' \Big). \]
Thus
\[ \frac{d\|u\|}{dt} \le \sqrt2\, E(0) + \int_0^t \|F\|(t')\, dt'. \]
After integrating over $t$ we have
\[ \|u\|(t) \le \|u\|(0) + \sqrt2\, E(0)\, t + \int_0^t \int_0^{t'} \|F\|(t'')\, dt''\, dt'. \]
Changing the order of integration in the last integral results in the estimate (4.79).
Corollary 4.2.3. The classical solution of the mixed problem (4.47) is unique and depends continuously on the initial data and the right-hand side.

Proof. Consider data $u_0 \in C^1(\Omega)$, $u_1 \in C(\Omega)$, $F \in C(Q_T)$ and $\bar u_0 \in C^1(\Omega)$, $\bar u_1 \in C(\Omega)$, $\bar F \in C(Q_T)$ such that
\[ \|F - \bar F; L^2(\Omega)\|(t) \le \varepsilon, \quad t \in [0,T], \]
\[ \|u_0 - \bar u_0; C(\Omega)\| + \|\nabla u_0 - \nabla \bar u_0; L^2(\Omega)\| \le \varepsilon_0, \]
\[ \|u_1 - \bar u_1; L^2(\Omega)\| \le \varepsilon_1. \tag{4.80} \]
Denote by $u(x,t)$ and $\bar u(x,t)$ the classical solutions of problem (4.47) with data $u_0, u_1, F$ and $\bar u_0, \bar u_1, \bar F$, respectively. Linearity of problem (4.47) implies estimates similar to (4.77)-(4.79) for the difference $\omega = u - \bar u$:
\[ \Big\| \frac{\partial\omega}{\partial t} \Big\|(t) \le \sqrt2\, E_\omega(0) + \int_0^t \|F - \bar F\|(t')\, dt', \]
\[ \|\nabla\omega\|(t) \le \sqrt{\frac{2}{p_0}}\, E_\omega(0) + \frac{1}{\sqrt{p_0}} \int_0^t \|F - \bar F\|(t')\, dt', \]
\[ \|\omega\|(t) \le \|\omega\|(0) + \sqrt2\, E_\omega(0)\, t + \int_0^t (t-t') \|F - \bar F\|(t')\, dt', \tag{4.81} \]
where
\[ E_\omega(0) = \Big\{ \frac12 \int_\Omega \big[ (u_1 - \bar u_1)^2 + p|\nabla(u_0 - \bar u_0)|^2 + q(u_0 - \bar u_0)^2 \big] dx + \frac12 \int_{\partial\Omega} p \frac{\alpha}{\beta} (u_0 - \bar u_0)^2\, ds \Big\}^{1/2}. \]
Inequalities (4.80) imply
\[ 2E_\omega^2(0) \le \varepsilon_1^2 + \Big( \max_{x\in\bar\Omega} p + |\Omega| \max_{x\in\bar\Omega} q + |\partial\Omega| \max_{x\in\partial\Omega} p\frac{\alpha}{\beta} \Big) \varepsilon_0^2 \le C^2 (\varepsilon_0 + \varepsilon_1)^2. \]
Obviously,
\[ \int_0^t \|F - \bar F\|(t')\, dt' \le \varepsilon t, \quad \int_0^t (t-t') \|F - \bar F\|(t')\, dt' \le \frac12 \varepsilon t^2. \]
Thus, we obtain from (4.81):
\[ \Big\| \frac{\partial\omega}{\partial t} \Big\|(t) \le c(\varepsilon_0 + \varepsilon_1) + \varepsilon t, \]
\[ \|\nabla\omega\|(t) \le \frac{c}{\sqrt{p_0}} (\varepsilon_0 + \varepsilon_1) + \frac{1}{\sqrt{p_0}} \varepsilon t, \]
\[ \|\omega\|(t) \le \varepsilon_0 \sqrt{|\Omega|} + c(\varepsilon_0 + \varepsilon_1)\, t + \frac12 \varepsilon t^2. \tag{4.82} \]
These estimates imply the continuous dependence of the solution on the data $u_0, u_1, F$. Moreover, letting $\varepsilon$, $\varepsilon_0$ and $\varepsilon_1$ tend to zero, we obtain the uniqueness of the solution.
4.3 Finite difference schemes
4.3.1 One dimensional case
Consider the mixed problem
\[ \frac{\partial^2 u}{\partial t^2} = a^2 \frac{\partial^2 u}{\partial x^2} + f(x,t), \quad x \in (0,\ell), \ t > 0, \]
\[ u\big|_{t=0} = u_0(x), \quad \frac{\partial u}{\partial t}\Big|_{t=0} = u_1(x), \quad x \in (0,\ell), \]
\[ u\big|_{x=0} = \mu_1(t), \quad u\big|_{x=\ell} = \mu_2(t), \quad t \ge 0. \tag{4.83} \]
Associate with problem (4.83) the following family of three-layer schemes:
\[ y^j_{\bar t t} = \Lambda \big( \sigma y^{j+1} + (1 - 2\sigma) y^j + \sigma y^{j-1} \big) + \varphi^j, \]
\[ y^0 = u_0, \quad y^0_t = \bar u_1, \quad y_0 = \mu_1, \quad y_N = \mu_2, \tag{4.84} \]
where $\Lambda$ is the finite-difference operator that acts as follows:
\[ \Lambda y = a^2 y_{\bar x x}. \]
The parameter $\sigma$ and the functions $\varphi$ and $\bar u_1$ will be defined later.
Let us write the finite-difference scheme (4.84) in explicit form:
\[ \frac{y_i^{j+1} - 2y_i^j + y_i^{j-1}}{\tau^2} = \sigma \frac{a^2}{h^2} \big( y_{i+1}^{j+1} - 2y_i^{j+1} + y_{i-1}^{j+1} + y_{i+1}^{j-1} - 2y_i^{j-1} + y_{i-1}^{j-1} \big) + (1 - 2\sigma) \frac{a^2}{h^2} \big( y_{i+1}^j - 2y_i^j + y_{i-1}^j \big) + \varphi_i^j. \tag{4.85} \]
After collecting the terms containing $y^{j+1}$, we can write (4.85) as follows:
\[ A y_{i-1}^{j+1} - C y_i^{j+1} + B y_{i+1}^{j+1} = -F_i^j, \quad i = 1, 2, \ldots, N-1, \tag{4.86} \]
where
\[ A = \sigma a^2 \frac{\tau^2}{h^2}, \quad B = A, \quad C = 1 + 2\sigma a^2 \frac{\tau^2}{h^2}, \]
\[ F_i^j = 2y_i^j - y_i^{j-1} + (1 - 2\sigma) a^2 \frac{\tau^2}{h^2} \big( y_{i+1}^j - 2y_i^j + y_{i-1}^j \big) + \sigma a^2 \frac{\tau^2}{h^2} \big( y_{i+1}^{j-1} - 2y_i^{j-1} + y_{i-1}^{j-1} \big) + \tau^2 \varphi_i^j. \]
Since $C > A + B$ and we are considering Dirichlet boundary conditions, in the case $\sigma > 0$ we can solve the tridiagonal system (4.86) by the economical sweep (elimination) method.
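For $\sigma = 0$ the scheme is explicit and no linear system has to be solved at all. The sketch below is a minimal implementation (not the book's code) for the homogeneous problem with hypothetical test data $a = 1$, $\ell = 1$, $u_0 = \sin\pi x$, $u_1 = 0$; the first layer is taken from the second-order starting value (4.87), and the result is compared with the exact standing wave $\cos(a\pi t)\sin(\pi x)$:

```python
import math

# Explicit case sigma = 0 of scheme (4.84) for u_tt = a^2 u_xx on (0, 1)
# with f = 0 and homogeneous Dirichlet boundary data.
a, N = 1.0, 100
h = 1.0 / N
tau = 0.5 * h / a                  # Courant number a*tau/h = 0.5
kap = (a * tau / h) ** 2

u0 = [math.sin(math.pi * i * h) for i in range(N + 1)]

y_prev = u0[:]                     # layer j = 0
y = [0.0] * (N + 1)                # layer j = 1 from (4.87) with u1 = 0, f = 0
for i in range(1, N):
    y[i] = u0[i] + 0.5 * kap * (u0[i+1] - 2*u0[i] + u0[i-1])

t = tau
while t < 1.0 - 1e-12:             # march to t = 1
    y_next = [0.0] * (N + 1)
    for i in range(1, N):
        y_next[i] = 2*y[i] - y_prev[i] + kap * (y[i+1] - 2*y[i] + y[i-1])
    y_prev, y = y, y_next
    t += tau

exact = [math.cos(a*math.pi*t) * math.sin(math.pi*i*h) for i in range(N+1)]
err = max(abs(y[i] - exact[i]) for i in range(N + 1))
print(err)                         # small: the scheme is O(tau^2 + h^2)
```

The error decreases roughly by a factor of four when both $\tau$ and $h$ are halved, consistent with the $O(\tau^2 + h^2)$ order derived below.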
The order of approximation

Let us consider the order of approximation of scheme (4.84). The first initial condition is satisfied exactly. For the second we have
\[ y^0_t = \frac{\partial u}{\partial t}\Big|_{t=0} + \frac{\tau}{2} \frac{\partial^2 u}{\partial t^2}\Big|_{t=0} + O(\tau^2) = u_1 + \frac{\tau}{2} \Big( a^2 \frac{\partial^2 u}{\partial x^2} + f \Big)\Big|_{t=0} + O(\tau^2) = u_1 + \frac{a^2\tau}{2} \frac{\partial^2 u_0}{\partial x^2} + \frac{\tau}{2} f\Big|_{t=0} + O(\tau^2), \]
where we have taken into account the original wave equation (4.83). Thus, we obtain an approximation of second order by choosing
\[ \bar u_1 = u_1 + \frac{a^2\tau}{2} \frac{\partial^2 u_0}{\partial x^2} + \frac{\tau}{2} f\Big|_{t=0}. \tag{4.87} \]
Consider equation (4.84). Setting $z = y - u$, the difference between the numerical and the exact solutions, we write
\[ z^j_{\bar t t} = \Lambda \big( \sigma z^{j+1} + (1 - 2\sigma) z^j + \sigma z^{j-1} \big) + \psi^j, \]
\[ z_0 = z_N = 0, \quad z^0 = 0, \quad z^0_t = \nu, \tag{4.88} \]
where $\nu = O(\tau^2)$ and
\[ \psi^j = \Lambda \big( \sigma u^{j+1} + (1 - 2\sigma) u^j + \sigma u^{j-1} \big) + \varphi^j - u^j_{\bar t t}. \]
Taking into account the identities
\[ u^{j+1} = u^j + \tau u^j_t, \quad u^{j-1} = u^j - \tau u^j_{\bar t}, \]
and omitting the superindex $j$, we have
\[ \psi = \Lambda u + \sigma\tau^2 \Lambda u_{\bar t t} + \varphi - u_{\bar t t}. \tag{4.89} \]
Now use the relations
\[ \Lambda u = a^2 \frac{\partial^2 u}{\partial x^2} + a^2 \frac{h^2}{12} \frac{\partial^4 u}{\partial x^4} + O(h^4), \quad u_{\bar t t} = \frac{\partial^2 u}{\partial t^2} + O(\tau^2), \]
which hold for sufficiently smooth functions. Thus
\[ \psi = a^2 \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial t^2} + a^2 \frac{h^2}{12} \frac{\partial^4 u}{\partial x^4} + \sigma\tau^2 a^2 \frac{\partial^4 u}{\partial x^2 \partial t^2} + \varphi + O(\tau^2 + h^4) \]
\[ = \varphi - f + \Big( \frac{h^2}{12} + \sigma a^2 \tau^2 \Big) \frac{\partial^4 u}{\partial x^2 \partial t^2} - \frac{h^2}{12} \frac{\partial^2 f}{\partial x^2} + O(\tau^2 + h^4). \]
Now it is clear that for $\varphi = f$ and any $\sigma \ge 0$ the order of approximation is $O(\tau^2 + h^2)$. However, setting
\[ \varphi = f + \frac{h^2}{12} \frac{\partial^2 f}{\partial x^2}, \quad \sigma = -\frac{1}{12 a^2} \frac{h^2}{\tau^2} + \bar\sigma \]
for any $\bar\sigma \ge 0$, we obtain $\psi = O(\tau^2 + h^4)$.
Let us consider the mixed problem (4.83) with boundary conditions of the third type,
\[ \Big( \frac{\partial u}{\partial x} - \beta_1 u \Big)\Big|_{x=0} = -\mu_1, \quad \Big( -\frac{\partial u}{\partial x} - \beta_2 u \Big)\Big|_{x=\ell} = -\mu_2, \tag{4.90} \]
instead of the Dirichlet conditions. Obviously, we only have to write finite-difference approximations for (4.90). Let us write them in the form
\[ \rho_1 y^j_{\bar t t} = \Lambda^+ \big( \sigma y^{j+1} + (1 - 2\sigma) y^j + \sigma y^{j-1} \big) + \varphi^+ \quad \text{for } i = 0, \]
\[ \rho_2 y^j_{\bar t t} = \Lambda^- \big( \sigma y^{j+1} + (1 - 2\sigma) y^j + \sigma y^{j-1} \big) + \varphi^- \quad \text{for } i = N, \tag{4.91} \]
where
\[ \Lambda^+ y = a^2 \frac{2}{h} (y_x - \beta_1 y), \quad \Lambda^- y = -a^2 \frac{2}{h} (y_{\bar x} + \beta_2 y), \]
\[ \varphi^+ = \varphi + a^2 \frac{2}{h} \nu_1, \quad \varphi^- = \varphi + a^2 \frac{2}{h} \nu_2, \]
and the numbers $\rho_k, \nu_k$, $k = 1, 2$, are, for the time being, arbitrary.
Let us consider the order of approximation. Repeating the same transformations as above, we have
\[ \psi_0 = -\rho_1 \frac{\partial^2 u}{\partial t^2} + a^2 \frac{2}{h} \Big( \frac{\partial u}{\partial x} + \frac{h}{2} \frac{\partial^2 u}{\partial x^2} + \frac{h^2}{6} \frac{\partial^3 u}{\partial x^3} - \beta_1 u \Big) + \varphi + a^2 \frac{2}{h} \nu_1 \Big|_{x=0} + O(\tau^2 + h^2) \]
\[ = (1 - \rho_1) \frac{\partial^2 u}{\partial t^2} + \varphi - f + a^2 \frac{2}{h} (\nu_1 - \mu_1) + a^2 \frac{h}{3} \frac{\partial^3 u}{\partial x^3} \Big|_{x=0} + O(\tau^2 + h^2). \]
At the same time,
\[ a^2 \frac{\partial^3 u}{\partial x^3}\Big|_{x=0} = \frac{\partial}{\partial x} \Big( \frac{\partial^2 u}{\partial t^2} - f \Big)\Big|_{x=0} = \frac{\partial^2}{\partial t^2} \Big( \frac{\partial u}{\partial x} \Big)\Big|_{x=0} - \frac{\partial f}{\partial x}\Big|_{x=0} = \frac{\partial^2}{\partial t^2} (\beta_1 u - \mu_1)\Big|_{x=0} - \frac{\partial f}{\partial x}\Big|_{x=0}. \]
Thus,
\[ \psi_0 = \Big( 1 - \rho_1 + \beta_1 \frac{h}{3} \Big) \frac{\partial^2 u}{\partial t^2} + \varphi - f + a^2 \frac{2}{h} (\nu_1 - \mu_1) - \frac{h}{3} \Big( \frac{\partial f}{\partial x} + \frac{d^2 \mu_1}{dt^2} \Big)\Big|_{x=0} + O(\tau^2 + h^2). \]
Therefore, to obtain the order $O(\tau^2 + h^2)$ we have to choose
\[ \varphi_0 = f\big|_{x=0}, \quad \rho_1 = 1 + \beta_1 \frac{h}{3}, \quad \nu_1 = \mu_1 + \frac{h^2}{6a^2} \Big( \frac{\partial f}{\partial x}\Big|_{x=0} + \frac{d^2 \mu_1}{dt^2} \Big). \]
The same considerations for the second boundary condition imply the formulas
\[ \varphi_N = f\big|_{x=\ell}, \quad \rho_2 = 1 + \beta_2 \frac{h}{3}, \quad \nu_2 = \mu_2 + \frac{h^2}{6a^2} \Big( -\frac{\partial f}{\partial x}\Big|_{x=\ell} + \frac{d^2 \mu_2}{dt^2} \Big). \]
Stability for equations with constant coefficients
Let us restrict ourselves to the consideration of stability with respect to the initial data. Setting $\varphi = 0$, we consider the problem
\[ y^j_{\bar t t} = \Lambda y^{(\sigma)}, \quad y^{(\sigma)} \stackrel{\mathrm{def}}{=} \sigma y^{j+1} + (1 - 2\sigma) y^j + \sigma y^{j-1}, \]
\[ y_0 = y_N = 0, \quad y^0 = u_0, \quad y^0_t = \bar u_1. \tag{4.92} \]
Let us suppose that we have a particular solution of equation (4.92) in the form
\[ y_i^j = T^j X_i. \tag{4.93} \]
Substituting (4.93) into equation (4.92), we have
\[ \frac{X_{\bar x x}}{X} = \frac{1}{a^2} \frac{T_{\bar t t}}{T^{(\sigma)}} = -\lambda, \tag{4.94} \]
where $T^{(\sigma)} \stackrel{\mathrm{def}}{=} \sigma T^{j+1} + (1 - 2\sigma) T^j + \sigma T^{j-1}$.
Recall again that the eigenvalue problem
\[ X_{\bar x x} + \lambda X = 0, \quad X_0 = X_N = 0 \]
has the solutions
\[ X_i^{(k)} = \sqrt{\frac{2}{\ell}} \sin\Big( \frac{\pi k}{\ell} x_i \Big), \quad \lambda_k = \frac{4}{h^2} \sin^2\Big( \frac{\pi k h}{2\ell} \Big), \quad k = 1, 2, \ldots, N-1. \]
Thus, to find $T_k$ we obtain the equation
\[ (T_k)_{\bar t t} + a^2 \lambda_k T_k^{(\sigma)} = 0, \tag{4.95} \]
or, in more detail,
\[ (1 + \sigma\tau^2 a^2 \lambda_k) T_k^{j+1} - 2\big( 1 + (\sigma - 1/2)\tau^2 a^2 \lambda_k \big) T_k^j + (1 + \sigma\tau^2 a^2 \lambda_k) T_k^{j-1} = 0. \]
Let us define
\[ \alpha_k = \frac12 \frac{a^2\tau^2 \lambda_k}{1 + a^2\tau^2 \lambda_k \sigma}, \tag{4.96} \]
and rewrite equation (4.95) in the form
\[ T_k^{j+1} - 2(1 - \alpha_k) T_k^j + T_k^{j-1} = 0. \tag{4.97} \]
Next, setting $T_k^j = (q_k)^j$, we obtain the algebraic equation
\[ q_k^2 - 2(1 - \alpha_k) q_k + 1 = 0. \tag{4.98} \]
Obviously,
\[ q_k = 1 - \alpha_k \pm \sqrt{\alpha_k^2 - 2\alpha_k} \quad \text{for } \alpha_k \ge 2 \tag{4.99} \]
and
\[ q_k = 1 - \alpha_k \pm i \sqrt{2\alpha_k - \alpha_k^2} \quad \text{for } \alpha_k \le 2, \tag{4.100} \]
where $i = \sqrt{-1}$ is the imaginary unit.
The roots (4.99) imply the instability of the scheme, since $1 - \alpha_k - \sqrt{\alpha_k^2 - 2\alpha_k} < -1$ for $\alpha_k > 2$. Thus, we have to assume that $\alpha_k < 2$ and consider the case (4.100).
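This dichotomy can be checked directly on the characteristic equation (4.98); the sketch below (the sample values of $\alpha_k$ are arbitrary) evaluates both roots for a stable and an unstable case:

```python
import cmath

def roots(alpha):
    # roots of the characteristic equation (4.98):
    # q^2 - 2(1 - alpha) q + 1 = 0; note alpha^2 - 2*alpha = (1-alpha)^2 - 1
    b = 1.0 - alpha
    d = cmath.sqrt(b*b - 1.0)
    return b + d, b - d

q1, q2 = roots(0.7)                # alpha < 2: complex conjugate pair
print(abs(q1), abs(q2))            # both 1: the modes do not grow

p1, p2 = roots(2.5)                # alpha > 2: two real roots
print(max(abs(p1), abs(p2)))       # > 1: the corresponding mode grows
```

Since the product of the roots of (4.98) equals 1, either both lie on the unit circle (neutral stability) or one of them has modulus greater than 1 (instability).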
Let us introduce a new variable ϕk defined by the relations:
cosϕk = 1 − αk , sinϕk =√αk(2 − αk). (4.101)
Then, since \(|q_k| = 1\), we obtain the representation for the roots (4.100):
\[
q_k^{(1)} = e^{i\varphi_k}, \qquad q_k^{(2)} = e^{-i\varphi_k}.
\]
Thus, the general solution of equation (4.97) has the form
\[
T_k(t_j) = C_k\bigl(q_k^{(1)}\bigr)^j + D_k\bigl(q_k^{(2)}\bigr)^j = A_k\cos(j\varphi_k) + B_k\sin(j\varphi_k),
\]
where \(A_k\) and \(B_k\) are arbitrary constants.

Let us next write the solution of problem (4.92) as a linear combination of the particular solutions (4.93):
\[
y^j = \sum_{k=1}^{N-1}\bigl(A_k\cos j\varphi_k + B_k\sin j\varphi_k\bigr)X^{(k)}. \qquad (4.102)
\]
Let \(u^0_k\) and \(u^1_k\) be the coefficients of "the Fourier" expansions of \(u^0\) and \(u^1\):
\[
u^0 = \sum_{k=1}^{N-1}u^0_k X^{(k)}, \qquad u^1 = \sum_{k=1}^{N-1}u^1_k X^{(k)}. \qquad (4.103)
\]
Expansions (4.102), (4.103) and the initial conditions in (4.92) imply the relations
\[
A_k = u^0_k, \qquad A_k\frac{\cos\varphi_k - 1}{\tau} + B_k\frac{\sin\varphi_k}{\tau} = u^1_k.
\]
Thus,
\[
A_k = u^0_k, \qquad B_k = \frac{1-\cos\varphi_k}{\sin\varphi_k}\,u^0_k + \frac{\tau}{\sin\varphi_k}\,u^1_k.
\]
After simple transformations we obtain the following representation for the solution (4.102):
\[
y^j = \sum_{k=1}^{N-1}\Bigl\{\frac{\cos(j-\frac12)\varphi_k}{\cos\frac12\varphi_k}\,u^0_k
+ \tau\,\frac{\sin j\varphi_k}{\sin\varphi_k}\,u^1_k\Bigr\}X^{(k)}. \qquad (4.104)
\]
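The representation (4.104) can be checked directly against time stepping of scheme (4.92). The sketch below (Python; it assumes \(\Lambda y = a^2 y_{\bar x x}\) and the discrete initialization \(y^1 = y^0 + \tau u^1\), exactly as in the derivation above) advances the scheme and compares the result with the modal formula:

```python
import numpy as np

l, N, a, sigma, tau, steps = 1.0, 16, 1.0, 0.5, 0.02, 40
h = l / N
x = np.linspace(0.0, l, N + 1)[1:-1]
I = np.eye(N - 1)
D = (np.eye(N - 1, k=1) - 2.0 * I + np.eye(N - 1, k=-1)) / h**2   # y_{bar x x}

u0 = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)   # sample initial data
u1 = np.sin(2 * np.pi * x)

# Direct stepping: (I - sigma tau^2 a^2 D) y^{j+1}
#   = (2I + (1 - 2 sigma) tau^2 a^2 D) y^j - (I - sigma tau^2 a^2 D) y^{j-1}.
B = I - sigma * tau**2 * a**2 * D
C = 2.0 * I + (1.0 - 2.0 * sigma) * tau**2 * a**2 * D
y_prev, y = u0.copy(), u0 + tau * u1                   # y^0 and y^1
for _ in range(1, steps):
    y_prev, y = y, np.linalg.solve(B, C @ y - B @ y_prev)

# Modal representation (4.104).
k = np.arange(1, N)
lam = 4.0 / h**2 * np.sin(np.pi * k * h / (2 * l))**2
alpha = 0.5 * a**2 * tau**2 * lam / (1.0 + a**2 * tau**2 * lam * sigma)
phi = np.arccos(1.0 - alpha)
X = np.sqrt(2.0 / l) * np.sin(np.pi * np.outer(k, x) / l)   # rows: X^(k)
u0k, u1k = h * X @ u0, h * X @ u1
j = steps
y_modal = (np.cos((j - 0.5) * phi) / np.cos(0.5 * phi) * u0k
           + tau * np.sin(j * phi) / np.sin(phi) * u1k) @ X

err = np.max(np.abs(y - y_modal))
```

The two answers agree to rounding error: (4.104) is an exact closed form for the scheme, not merely an approximation.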
Let us consider the \(L_{2h}\)-norm of the solution. Recall that for any net function f represented as a "Fourier" sum
\[
f = \sum_{k=1}^{N-1}f_k X^{(k)},
\]
the \(L_{2h}\)-norm is given by
\[
\|f\| = \Bigl(\sum_{k=1}^{N-1}f_k^2\Bigr)^{1/2}.
\]
Recall also the Hölder inequality:
\[
\Bigl|\sum_{i=1}^{N-1}f_i g_i\Bigr| \le \Bigl(\sum_{i=1}^{N-1}f_i^2\Bigr)^{1/2}\Bigl(\sum_{i=1}^{N-1}g_i^2\Bigr)^{1/2} = \|f\|\,\|g\|.
\]
Since \(\|X^{(k)}\| = 1\) we obtain, from formula (4.104),
\[
\|y^j\| \le \Biggl[\sum_{k=1}^{N-1}\Bigl(\frac{\cos(j-\frac12)\varphi_k}{\cos\frac12\varphi_k}\,u^0_k\Bigr)^2\Biggr]^{1/2}
+ \Biggl[\sum_{k=1}^{N-1}\Bigl(\tau\,\frac{\sin j\varphi_k}{\sin\varphi_k}\,u^1_k\Bigr)^2\Biggr]^{1/2}. \qquad (4.105)
\]
Note that the definition (4.101) implies the equalities
\[
\cos\tfrac12\varphi_k = \Bigl(1 - \frac{\alpha_k}{2}\Bigr)^{1/2}, \qquad
\sin\varphi_k = \Bigl(2\alpha_k\Bigl(1 - \frac{\alpha_k}{2}\Bigr)\Bigr)^{1/2}.
\]
Let us rewrite the assumption \(\alpha_k < 2\) in the form
\[
\alpha_k \le \frac{2}{1+\varepsilon}, \qquad (4.106)
\]
where \(\varepsilon > 0\) is an arbitrary number. Then
\[
\Bigl|\frac{\cos(j-\frac12)\varphi_k}{\cos\frac12\varphi_k}\Bigr| \le \sqrt{\frac{1+\varepsilon}{\varepsilon}},
\]
\[
\Bigl|\tau\,\frac{\sin j\varphi_k}{\sin\varphi_k}\Bigr|
\le \sqrt{\frac{1+\varepsilon}{\varepsilon}}\,\sqrt{\frac{1+\sigma a^2\tau^2\lambda_k}{a^2\lambda_k}}
\le \sqrt{\frac{1+\varepsilon}{\varepsilon}}\,\sqrt{\frac{1+4a^2\sigma q}{a^2\lambda_k}},
\]
where \(q = \tau^2/h^2\) and we used the inequality \(\lambda_k \le 4/h^2\). These estimates and inequality (4.105) imply
\[
\|y^j\| \le \sqrt{\frac{1+\varepsilon}{\varepsilon}}\Biggl\{\|u^0\| + \frac{\sqrt{1+4a^2\sigma q}}{a}\Bigl(\sum_{k=1}^{N-1}\frac{(u^1_k)^2}{\lambda_k}\Bigr)^{1/2}\Biggr\}. \qquad (4.107)
\]
Let us assume that there exists a constant c such that, uniformly in \(\tau\) and h,
\[
q \stackrel{\mathrm{def}}{=} \frac{\tau^2}{h^2} \le c. \qquad (4.108)
\]
Note that the expression
\[
\Bigl(\sum_{k=1}^{N-1}\frac{(u^1_k)^2}{\lambda_k}\Bigr)^{1/2}
\]
is just the norm
\[
\|u^1\|_{A^{-1}} \stackrel{\mathrm{def}}{=} \bigl(A^{-1}u^1, u^1\bigr)^{1/2}
\]
for the positive operator \(A\colon Ay = -y_{\bar x x}\). Indeed,
\[
A^{-1}u^1 = \sum_{k=1}^{N-1}u^1_k A^{-1}X^{(k)} = \sum_{k=1}^{N-1}\frac{u^1_k}{\lambda_k}X^{(k)}.
\]
Therefore,
\[
\bigl(A^{-1}u^1, u^1\bigr) = \sum_{k=1}^{N-1}\frac{(u^1_k)^2}{\lambda_k}.
\]
Thus, we obtain the final estimate
\[
\|y^j\| \le c_1\Bigl(\|u^0\| + \|u^1\|_{A^{-1}}\Bigr), \qquad (4.109)
\]
where \(c_1 > 0\) is a constant independent of j, \(\tau\) and h.
From the uniformity of estimate (4.109) we conclude that scheme (4.92) is stable.
Let us now consider the stability assumptions (4.106) and (4.108). First of all, we note that the constant c > 0 in (4.108) can be arbitrary. Hence, this restriction is needed only for the limiting passage τ → 0 and h → 0, whereas for any finite values of τ and h it is satisfied. So, assumption (4.108) is not important from the point of view of actual numerical calculations.
On the other hand, assumption (4.106) is a very important one. Indeed, let us write it in the explicit form
\[
\frac12\,\frac{a^2\tau^2\lambda_k}{1 + a^2\tau^2\lambda_k\sigma} \le \frac{2}{1+\varepsilon},
\]
or, what is the same,
\[
\Bigl(\sigma - \frac{1+\varepsilon}{4}\Bigr)a^2\tau^2\lambda_k + 1 \ge 0. \qquad (4.110)
\]
Let \(\bar\sigma = \frac{1+\varepsilon}{4} - \sigma\). Then (4.110) implies the inequality
\[
\bar\sigma a^2\tau^2\lambda_k \le 1. \qquad (4.111)
\]
Using the boundedness of the eigenvalues \(\lambda_k\), we see that assumption (4.110) will be satisfied if the inequality
\[
4\bar\sigma a^2\frac{\tau^2}{h^2} \le 1
\]
holds. Returning to the original parameter \(\sigma\), we obtain the stability condition
\[
\sigma \ge \frac{1+\varepsilon}{4} - \frac{h^2}{4a^2\tau^2}, \qquad (4.112)
\]
where \(\varepsilon > 0\) is arbitrary.
Obviously, the choice \(\sigma > 1/4\) implies stability of scheme (4.92) for any \(\tau\) and h. Thus, this scheme is absolutely stable.

Conversely, for the explicit scheme (\(\sigma = 0\)), condition (4.112) requires the inequality
\[
\tau^2 \le \frac{h^2}{(1+\varepsilon)a^2}. \qquad (4.113)
\]
This assumption looks similar to (4.108). However, the analog of the constant c is now fixed in (4.113). Thus, we obtain a genuine restriction on the parameters \(\tau\) and h. So, the explicit scheme (\(\sigma = 0\)) is conditionally stable.
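The conditional stability of the explicit scheme is easy to observe numerically. In the following minimal sketch (Python; zero right-hand side and the highest Fourier mode as initial data, which is the worst case for stability), the same computation is run with τ slightly below and slightly above the threshold h/a from (4.113):

```python
import numpy as np

def run_explicit(a, l, N, tau, steps):
    """sigma = 0: y^{j+1} = 2 y^j - y^{j-1} + tau^2 a^2 y_{bar x x}^j."""
    h = l / N
    x = np.linspace(0.0, l, N + 1)[1:-1]
    D = (np.eye(N - 1, k=1) - 2 * np.eye(N - 1) + np.eye(N - 1, k=-1)) / h**2
    y_prev = np.sin(np.pi * (N - 1) * x / l)     # highest mode on the net
    y = y_prev.copy()                            # corresponds to u^1 = 0
    for _ in range(steps):
        y_prev, y = y, 2 * y - y_prev + tau**2 * a**2 * (D @ y)
    return np.max(np.abs(y))

a, l, N = 1.0, 1.0, 50
h = l / N
norm_stable = run_explicit(a, l, N, 0.95 * h / a, 100)   # tau < h/a: bounded
norm_unstable = run_explicit(a, l, N, 1.05 * h / a, 100) # tau > h/a: blows up
```

`norm_stable` stays of order one, while `norm_unstable` grows by many orders of magnitude, in full agreement with (4.113).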
Finally, it remains to note that condition (4.112) also guarantees the stability of scheme (4.84) with respect to the right-hand sides.
4.3.2 A priori estimates and stability for the case of variable coefficients

Obtaining a priori estimates for hyperbolic equations differs little from the parabolic case. As an illustration, consider finite-difference schemes for the following mixed problem:
\[
\frac{\partial^2 u}{\partial t^2} = \frac{\partial}{\partial x}\Bigl(p(x,t)\frac{\partial u}{\partial x}\Bigr) - q(x,t)u + f(x,t), \qquad (x,t)\in Q_T,
\]
\[
u\big|_{t=0} = u^0(x), \qquad \frac{\partial u}{\partial t}\Big|_{t=0} = u^1(x), \qquad x\in\Omega,
\]
\[
u\big|_{x=0} = 0, \qquad u\big|_{x=\ell} = 0, \qquad (4.114)
\]
where \(\Omega = (0,\ell)\), \(Q_T = \Omega\times(0,T)\), and p and q are sufficiently smooth functions. As usual, we assume that
\[
0 < p_0 \le p(x,t) \le p_1, \qquad (x,t)\in Q_T; \qquad (4.115)
\]
however, we do not prescribe any sign for q, letting
\[
|q(x,t)| \le q_1, \qquad (x,t)\in Q_T. \qquad (4.116)
\]
For simplicity, we consider the explicit and implicit schemes separately.

In the same way as for elliptic and parabolic equations, associate
\[
\frac{\partial}{\partial x}\Bigl(p(x,t)\frac{\partial u}{\partial x}\Bigr) - q(x,t)u \;\longrightarrow\; \Lambda y \stackrel{\mathrm{def}}{=} (a y_{\bar x})_x - by,
\]
where
\[
a^j_i = p\bigl(x_{i-\frac12}, t_j\bigr), \qquad b^j_i = q(x_i, t_j).
\]
Write the completely implicit scheme
\[
y_{\bar t t} = \Lambda\hat y + \varphi, \qquad y^0 = u^0, \quad y^0_t = \tilde u^1, \quad y_0 = y_N = 0, \qquad (4.117)
\]
where \(y \stackrel{\mathrm{def}}{=} y^j\), \(\hat y \stackrel{\mathrm{def}}{=} y^{j+1}\), \(\varphi \stackrel{\mathrm{def}}{=} \varphi^j_i = f(x_i, t_j)\) and, similarly to formula (4.87),
\[
\tilde u^1 = u^1 + \frac{\tau}{2}\bigl((p u^0_x)_x + f\bigr)\Big|_{t=0}. \qquad (4.118)
\]
Recall the identity
\[
2g g_{\bar t} = (g^2)_{\bar t} + \tau(g_{\bar t})^2. \qquad (4.119)
\]
Thus,
\[
2y_t\,y_{\bar t t} = \bigl((y_{\bar t})^2\bigr)_t + \tau(y_{\bar t t})^2, \qquad
2\hat y_x\,y_{x t} = \bigl((y_x)^2\bigr)_t + \tau(y_{x t})^2.
\]
Therefore, multiplying equation (4.117) by \(2h y_t\) and summing over i implies the equality
\[
\bigl(\|y_{\bar t}\|^2\bigr)_t + \tau\|y_{\bar t t}\|^2 + h\sum a\bigl((y_x)^2\bigr)_t + \tau\|\sqrt a\,y_{x t}\|^2
= -2h\sum b\hat y y_t + 2h\sum\varphi y_t. \qquad (4.120)
\]
Next, take into account the identity
\[
a^j g^j_{\bar t} = (a^j g^j)_{\bar t} - a^j_{\bar t} g^{j-1}. \qquad (4.121)
\]
Hence,
\[
h\sum a\bigl((y_x)^2\bigr)_t = \bigl(\|\sqrt a\,y_x\|^2\bigr)_t - h\sum a_t(\hat y_x)^2.
\]
Let, in addition to assumption (4.115),
\[
|a^j_{i\,t}| \le p'_1 = \mathrm{const}. \qquad (4.122)
\]
Then, summing equation (4.120) over the index j results in the inequality
\[
\|y^j_t\|^2 + \|\sqrt{a^j}\,y^{j+1}_x\|^2 + \tau^2\sum_{j'=1}^{j}\Bigl(\|y^{j'}_{\bar t t}\|^2 + \|\sqrt{a^{j'}}\,y^{j'}_{x t}\|^2\Bigr) \le
\]
\[
\le \|y^0_t\|^2 + \|\sqrt{a^0}\,y^1_x\|^2 + \tau\sum_{j'=1}^{j}\Bigl(p'_1\|y^{j'}_x\|^2 + 2q_1\|y^{j'}\|\,\|y^{j'}_t\| + 2\|\varphi^{j'}\|\,\|y^{j'}_t\|\Bigr). \qquad (4.123)
\]
It remains to use the embedding inequality
\[
\|y^j\| \le c(\Omega)\|y^j_x\| \qquad (4.124)
\]
and to apply Gronwall's lemma to (4.123). This results in the estimate
\[
\|y^j_t\|^2 + p_0\|y^{j+1}_x\|^2 \le \Bigl(\|y^0_t\|^2 + p_1\|y^1_x\|^2 + \tau\sum_{j'=1}^{J}\|\varphi^{j'}\|^2\Bigr)e^{C t_j}, \qquad (4.125)
\]
where \(J = T/\tau\) and C is some constant independent of j and y.
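The passage from (4.123) to (4.125) relies on the discrete Gronwall lemma: if \(z_j \le A + c\tau\sum_{j'<j}z_{j'}\), then \(z_j \le A\,e^{c\tau j}\). A minimal numerical illustration (Python; the constants A, c, τ below are arbitrary test values, and the sequence realizes the hypothesis with equality, i.e. the worst case):

```python
import math

A, c, tau, J = 2.0, 3.0, 0.01, 500
z, ok = [], True
for j in range(1, J + 1):
    # Equality case of the hypothesis: z_j = A + c*tau*(z_1 + ... + z_{j-1}),
    # which gives z_j = A (1 + c tau)^{j-1} <= A exp(c tau j).
    z.append(A + c * tau * sum(z))
    ok = ok and z[-1] <= A * math.exp(c * tau * j)
```

Even the worst-case sequence stays below the exponential bound, which is exactly what makes estimates of the type (4.125) uniform in j.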
Consider the norm \(\|y^1_x\|\). The second initial condition in (4.117) and formula (4.118) imply
\[
\|y^1_x\| = \|y^0_x + \tau\tilde u^1_x\| \le \|y^0_x\| + \frac{2\tau}{h}\|u^1\| + \frac{2\tau^2}{h^2}\bigl\|\sqrt p\big|_{t=0}\,u^0_x\bigr\| + \tau\|f\|\Big|_{t=0}.
\]
Thus, the condition
\[
\frac{\tau}{h} \le c_0 \qquad (4.126)
\]
with an arbitrary constant \(c_0 > 0\) allows us to avoid strengthening the smoothness assumptions on \(u^0\), \(u^1\) and f.
Theorem 4.3.1. Let \(u^0 \in W^1_2(\Omega)\), \(u^1 \in L_2(\Omega)\), \(f \in L_2(Q_T)\), and let assumptions (4.115), (4.116), (4.122), and (4.126) hold. Then the completely implicit scheme (4.117) is absolutely stable for any \(j \le O(1/\tau)\).
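A small numerical sketch of the theorem's content (Python; the coefficients p and q below are hypothetical smooth samples satisfying (4.115), (4.116), with zero right-hand side): the completely implicit scheme stays bounded with a time step far above the explicit CFL limit.

```python
import numpy as np

l, N = 1.0, 50
h = l / N
tau = 5.0 * h                                   # far above the explicit CFL limit

p = lambda s: 1.0 + 0.5 * np.sin(2 * np.pi * s)     # 0 < p0 <= p <= p1
q = lambda s: np.cos(2 * np.pi * s)                 # q changes sign, |q| <= q1

x = np.linspace(0.0, l, N + 1)[1:-1]                # interior nodes
a = p((np.arange(1, N + 1) - 0.5) * h)              # a_i = p(x_{i-1/2}), i = 1..N

# Lambda y = (a y_{bar x})_x - b y on the interior nodes.
main = -(a[:-1] + a[1:]) / h**2 - q(x)
Lam = np.diag(main) + np.diag(a[1:-1] / h**2, 1) + np.diag(a[1:-1] / h**2, -1)

# (4.117): y_{bar t t} = Lambda y^{j+1}  =>  (I - tau^2 Lambda) y^{j+1} = 2y^j - y^{j-1}.
B = np.eye(N - 1) - tau**2 * Lam
y_prev = np.sin(np.pi * x)
y = y_prev.copy()                                   # u^1 = 0 for simplicity
for _ in range(200):
    y_prev, y = y, np.linalg.solve(B, 2 * y - y_prev)

final_bound = np.max(np.abs(y))
```

`final_bound` remains of order one even though \(\tau/h = 5\); the explicit scheme would be violently unstable at this step size.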
Consider the explicit scheme
\[
y_{\bar t t} = \Lambda y + \varphi, \qquad y^0 = u^0, \quad y^0_t = \tilde u^1, \quad y_0 = y_N = 0. \qquad (4.127)
\]
The identity (4.119) together with
\[
2g g_t = (g^2)_t - \tau(g_t)^2
\]
imply the equalities
\[
2y_{\bar t t}(y_t + y_{\bar t}) = \bigl((y_{\bar t})^2\bigr)_t + \bigl((y_t)^2\bigr)_{\bar t} = 2\bigl((y_{\bar t})^2\bigr)_t,
\]
\[
2y_x(y_{x t} + y_{x\bar t}) = \bigl((y_x)^2\bigr)_t + \bigl((y_x)^2\bigr)_{\bar t} + \tau(y_{x\bar t})^2 - \tau(y_{x t})^2
= \bigl((y_x)^2\bigr)_t + \bigl((y_x)^2\bigr)_{\bar t} - \tau^2\bigl((y_{x t})^2\bigr)_{\bar t}.
\]
Thus, multiplying equation (4.127) by \(h(y_t + y_{\bar t}) = 2h y_{\mathring t}\), where \(y_{\mathring t} = (y^{j+1}-y^{j-1})/(2\tau)\) denotes the central difference, and summing over the index i results in the equality
\[
\bigl(\|y_{\bar t}\|^2\bigr)_t + h\sum a\Bigl\{\frac12\Bigl[\bigl((y_x)^2\bigr)_t + \bigl((y_x)^2\bigr)_{\bar t}\Bigr] - \frac12\tau^2\bigl((y_{x t})^2\bigr)_{\bar t}\Bigr\}
= -2h\sum b y y_{\mathring t} + 2h\sum\varphi y_{\mathring t}. \qquad (4.128)
\]
Furthermore, recall the identity
\[
a^j g^j_t = (a^j g^j)_t - a^j_t g^{j+1}. \qquad (4.129)
\]
Using (4.121) and (4.129) results in the equalities
\[
h\sum a^j\Bigl[\bigl((y^j_x)^2\bigr)_t + \bigl((y^j_x)^2\bigr)_{\bar t}\Bigr]
= \bigl(\|\sqrt{a^j}\,y^j_x\|^2\bigr)_t + \bigl(\|\sqrt{a^j}\,y^j_x\|^2\bigr)_{\bar t}
- h\sum\Bigl\{a^j_t(y^{j+1}_x)^2 + a^j_{\bar t}(y^{j-1}_x)^2\Bigr\},
\]
\[
h\sum a^j\bigl((y^j_{x t})^2\bigr)_{\bar t} = \bigl(\|\sqrt{a^j}\,y^j_{x t}\|^2\bigr)_{\bar t} - h\sum a^j_{\bar t}(y^{j-1}_{x t})^2.
\]
Now, applying assumption (4.122) and summing over the index j, we pass from equality (4.128) to the following inequality:
\[
\|y^j_t\|^2 + \frac12\Bigl\{\|\sqrt{a^{j+1}}\,y^{j+1}_x\|^2 + \|\sqrt{a^j}\,y^j_x\|^2\Bigr\} + \frac12\tau^2\|\sqrt{a^0}\,y^0_{x t}\|^2 \le
\]
\[
\le Y_0 + \frac12\tau^2\|\sqrt{a^j}\,y^j_{x t}\|^2 + \frac12 p'_1\tau^3\sum_{j'=1}^{j-1}\|y^{j'}_{x t}\|^2
+ 2\tau\sum_{j'=1}^{j}\Bigl\{p'_1\|y^{j'+1}_x\|^2 + q_1\|y^{j'}\|\,\|y^{j'}_{\mathring t}\| + \|\varphi^{j'}\|\,\|y^{j'}_{\mathring t}\|\Bigr\}, \qquad (4.130)
\]
where
\[
Y_0 = \|y^0_t\|^2 + \frac12\Bigl\{\|\sqrt{a^1}\,y^1_x\|^2 + \|\sqrt{a^0}\,y^0_x\|^2\Bigr\} + p'_1\tau\Bigl(\|y^0_x\|^2 + \frac12\tau^2\|y^0_{x t}\|^2\Bigr).
\]
In contrast with inequality (4.123), relation (4.130) contains the term \(\|\sqrt{a^j}\,y^j_{x t}\|^2\) on the right-hand side, which prevents us from obtaining a reasonable estimate directly. However, it is multiplied by \(\tau^2\), and this allows us to estimate this norm via norms of the first derivatives. Indeed, we have
\[
\tau\bigl\|\sqrt{a^j}\,y^j_{x t}\bigr\| = \bigl\|\sqrt{a^j}\bigl(y^{j+1}_x - y^j_x\bigr)\bigr\|
= \bigl\|\sqrt{a^{j+1}}\,y^{j+1}_x - \sqrt{a^j}\,y^j_x - \bigl(\sqrt{a^{j+1}} - \sqrt{a^j}\bigr)y^{j+1}_x\bigr\|
\]
\[
\le \bigl\|\sqrt{a^{j+1}}\,y^{j+1}_x\bigr\| + \bigl\|\sqrt{a^j}\,y^j_x\bigr\| + \sqrt{\frac{\tau p'_1}{p_0}}\,\bigl\|\sqrt{a^{j+1}}\,y^{j+1}_x\bigr\|. \qquad (4.131)
\]
On the other hand,
\[
\tau\|\sqrt{a^j}\,y^j_{x t}\| \le 2p_1\frac{\tau}{h}\|y^j_t\|. \qquad (4.132)
\]
Combining (4.131) and (4.132) results in the estimate
\[
\frac12\tau^2\bigl\|\sqrt{a^j}\,y^j_{x t}\bigr\|^2
\le \varepsilon\bigl(1+\sqrt{c\tau}\bigr)^2\Bigl\{\|\sqrt{a^{j+1}}\,y^{j+1}_x\|^2 + \|\sqrt{a^j}\,y^j_x\|^2\Bigr\}
+ 2(1-\varepsilon)p_1^2\frac{\tau^2}{h^2}\|y^j_t\|^2, \qquad (4.133)
\]
where \(c = p'_1/p_0\) and \(\varepsilon\in(0,1)\) is an arbitrary constant. Set \(\varepsilon = \frac12 - \delta\). Then, comparing the left-hand side of relation (4.130) with the right-hand side of (4.133), we come to the following conditions:
\[
1 - (1+2\delta)p_1^2\frac{\tau^2}{h^2} > 0, \qquad
\frac12 - \Bigl(\frac12 - \delta\Bigr)\bigl(1+\sqrt{c\tau}\bigr)^2 > 0.
\]
Thus, we have to choose \(\delta\), \(\tau\) and h such that the following conditions hold:
\[
\frac{\tau}{h} < \frac{1}{p_1}\,\frac{1}{\sqrt{1+2\delta}}, \qquad (4.134)
\]
\[
\delta > \sqrt{c\tau}\,\frac{1+\sqrt{c\tau}/2}{\bigl(1+\sqrt{c\tau}\bigr)^2}. \qquad (4.135)
\]
Choose an arbitrary number \(\delta_1\in(0,1)\) and set
\[
\frac{\tau}{h} = \frac{\sqrt{1-\delta_1}}{p_1\sqrt{1+2\delta}}, \qquad
\delta = \frac12\,\frac{1}{\bigl(1+\sqrt{c\tau}\bigr)^2}\Bigl[\bigl(1+\sqrt{c\tau}\bigr)^2 - 1 + 2\delta_1\Bigr]. \qquad (4.136)
\]
Then,
\[
1 - (1+2\delta)p_1^2\frac{\tau^2}{h^2} = \delta_1, \qquad
\frac12 - \Bigl(\frac12 - \delta\Bigr)\bigl(1+\sqrt{c\tau}\bigr)^2 = \delta_1.
\]
Hence, inequality (4.130) results in the following:
\[
\delta_1\|y^j_t\|^2 + \delta_1\Bigl\{\|\sqrt{a^{j+1}}\,y^{j+1}_x\|^2 + \|\sqrt{a^j}\,y^j_x\|^2\Bigr\}
\le Y_0 + \frac12 p'_1\tau^3\sum_{j'=1}^{j-1}\|y^{j'}_{x t}\|^2
+ 2\tau\sum_{j'=1}^{j}\Bigl\{p'_1\|y^{j'+1}_x\|^2 + q_1\|y^{j'}\|\,\|y^{j'}_{\mathring t}\| + \|\varphi^{j'}\|\,\|y^{j'}_{\mathring t}\|\Bigr\}. \qquad (4.137)
\]
After using inequalities (4.124) and (4.133) once again and performing simple transformations, we obtain the estimate
\[
\delta_1\Bigl\{\|y^j_t\|^2 + \|\sqrt{a^{j+1}}\,y^{j+1}_x\|^2 + \|\sqrt{a^j}\,y^j_x\|^2\Bigr\}
\le Y_0 + \tau\sum_{j'=1}^{J}\|\varphi^{j'}\|^2
+ c_1\tau\sum_{j'=1}^{j}\Bigl\{\|y^{j'}_t\|^2 + \|\sqrt{a^{j'+1}}\,y^{j'+1}_x\|^2 + \|\sqrt{a^{j'}}\,y^{j'}_x\|^2\Bigr\}, \qquad (4.138)
\]
where \(c_1 > 0\) is a constant independent of j and y. It remains to use Gronwall's lemma to obtain the final estimate
\[
\|y^j_t\|^2 + \|\sqrt{a^{j+1}}\,y^{j+1}_x\|^2 + \|\sqrt{a^j}\,y^j_x\|^2
\le \frac{1}{\delta_1}\Bigl(Y_0 + \tau\sum_{j'=1}^{J}\|\varphi^{j'}\|^2\Bigr)e^{\frac{c'_1}{\delta_1}t_j}, \qquad (4.139)
\]
where \(c'_1 = c_1(1 + O(\tau))\).
This estimate is similar to (4.125) for the implicit scheme, but with a worse constant in the exponent. At the same time, in contrast to (4.125), estimate (4.139) holds only under assumption (4.134). Furthermore, for sufficiently small τ, the parameter δ can be chosen as an arbitrarily small constant. Therefore, we come again to condition (4.113), with the upper bound p₁ instead of the constant coefficient a².

It remains to note that assumption (4.126) is now automatically satisfied. Thus, the norm Y₀ of the initial data is bounded for the same u⁰, u¹ and f as in Theorem 4.3.1:
Theorem 4.3.2. Let \(u^0\in W^1_2(\Omega)\), \(u^1\in L_2(\Omega)\), \(f\in L_2(Q_T)\), and let assumptions (4.115), (4.116) and (4.122) hold. Then the explicit scheme (4.127) is stable under assumption (4.134) for any \(j \le T/\tau\), \(T = \mathrm{const}\).
4.3.3 Two-dimensional case
The alternating direction method
Consider the mixed problem
\[
\frac{\partial^2 u}{\partial t^2} = \sum_{\alpha=1}^{2}\frac{\partial^2 u}{\partial x_\alpha^2} + f(x,t), \qquad x\in\Omega,\ t>0,
\]
\[
u\big|_{t=0} = u^0(x), \qquad \frac{\partial u}{\partial t}\Big|_{t=0} = u^1(x), \qquad x\in\Omega,
\]
\[
u\big|_\Sigma = \mu\big|_\Sigma, \qquad (4.140)
\]
where \(\Omega\subset\mathbb R^2\) is a rectangle.
Let us first write a preliminary finite-difference scheme
\[
y^j_{\bar t t} = \Lambda\bigl(\sigma y^{j+1} + (1-2\sigma)y^j + \sigma y^{j-1}\bigr) + \varphi^j, \qquad (4.141)
\]
where
\[
\Lambda = \Lambda_1 + \Lambda_2, \qquad \Lambda_\alpha y = y_{\bar x_\alpha x_\alpha}, \qquad \sigma \ge \frac{1+\varepsilon}{4}.
\]
Since
\[
y^{j+1} = y^j + \tau y^j_t, \qquad y^{j-1} = y^j - \tau y^j_{\bar t},
\]
we can rewrite the scheme (4.141) in the form (superscripts are omitted)
\[
(1 - \sigma\tau^2\Lambda)y_{\bar t t} = \Lambda y + \varphi. \qquad (4.142)
\]
Now, similarly to the parabolic case, we replace (with accuracy of order \(O(\tau^2)\)) the operator
\[
1 - \sigma\tau^2\Lambda \;\longrightarrow\; (1 - \sigma\tau^2\Lambda_1)(1 - \sigma\tau^2\Lambda_2).
\]
Thus, we obtain the economic scheme
\[
(1 - \sigma\tau^2\Lambda_1)(1 - \sigma\tau^2\Lambda_2)y_{\bar t t} = \Lambda y + \varphi. \qquad (4.143)
\]
The method of solving this algebraic system is almost the same as above. Indeed, rewrite (4.143) in the form
\[
B y^{j+1} = F^{j+1}, \qquad j = 1, 2, \dots, \qquad (4.144)
\]
where
\[
B = B_1 B_2, \qquad B_\alpha = 1 - \sigma\tau^2\Lambda_\alpha, \quad \alpha = 1,2,
\]
\[
F^{j+1} = B(2y^j - y^{j-1}) + \tau^2(\Lambda y^j + \varphi^j).
\]
Then, denoting by \(y^{j+\frac12}\) an intermediate solution, we obtain from system (4.144) the two systems
\[
B_1 y^{j+\frac12} = F^{j+1}, \qquad B_2 y^{j+1} = y^{j+\frac12}. \qquad (4.145)
\]
Since each of these systems can be written as a sequence of systems with tridiagonal matrices, this scheme is economic.
It remains to set initial and boundary conditions. In order to obtain the precision \(O(\tau^2)\), we write the initial conditions as follows:
\[
y^0 = u^0, \qquad y^0_t = u^1 + \frac12\tau\bigl(\Delta u^0 + f\big|_{t=0}\bigr). \qquad (4.146)
\]
It is clear that the first of the relations in (4.146) defines \(y^0_i\) for \(i = (i_1,i_2)\), \(i_\alpha = 0,1,\dots,N_\alpha\), whereas the second relation defines \(y^1_i\) for \(i = (i_1,i_2)\), \(i_\alpha = 1,2,\dots,N_\alpha-1\).
As for the boundary conditions, we define
\[
y^{j+1}\big|_\Sigma = \mu^{j+1}. \qquad (4.147)
\]
To solve the second system in (4.145) we have to use the boundary condition (4.147) for \(i_2 = 0\) and \(i_2 = N_2\) and any \(i_1 = 1,2,\dots,N_1-1\). For \(i_1 = 0\) and \(i_1 = N_1\) we define \(y^{j+\frac12}\) using the second of the systems in (4.145). This implies the boundary conditions
\[
y^{j+\frac12}\Big|_\Sigma = B_2 y^{j+1}\Big|_\Sigma = \mu^{j+1} - \sigma\tau^2\Lambda_2\mu^{j+1}\Big|_\Sigma
\quad\text{for } i_1 = 0 \text{ and } i_1 = N_1,
\]
which completes the first of the systems in (4.145).
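The accuracy claim behind (4.143) — replacing \(1 - \sigma\tau^2\Lambda\) by the product \((1-\sigma\tau^2\Lambda_1)(1-\sigma\tau^2\Lambda_2)\) — amounts to the identity \(B_1B_2 = 1 - \sigma\tau^2\Lambda + \sigma^2\tau^4\Lambda_1\Lambda_2\), so the two operators differ by an \(O(\tau^4)\) term. A small matrix check (Python sketch on an assumed uniform tensor grid):

```python
import numpy as np

n, h, sigma, tau = 10, 0.1, 0.5, 0.01

def lap1d(n, h):
    """Second difference y_{bar x x} with Dirichlet conditions."""
    return (np.eye(n, k=1) - 2 * np.eye(n) + np.eye(n, k=-1)) / h**2

I = np.eye(n)
L1 = np.kron(lap1d(n, h), I)       # Lambda_1 acts in the x1 direction
L2 = np.kron(I, lap1d(n, h))       # Lambda_2 acts in the x2 direction
E = np.eye(n * n)

B = E - sigma * tau**2 * (L1 + L2)                            # exact operator
B12 = (E - sigma * tau**2 * L1) @ (E - sigma * tau**2 * L2)   # factorized one

# The difference is exactly sigma^2 tau^4 Lambda_1 Lambda_2, hence O(tau^4).
split_err = np.max(np.abs(B12 - B - sigma**2 * tau**4 * L1 @ L2))
split_size = np.max(np.abs(B12 - B))
```

`split_err` is zero up to rounding (the identity is exact), while `split_size` shows the actual \(O(\tau^4)\) size of the splitting perturbation.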
The summarized approximation method

The alternating direction method described above works very well for rectangular domains. However, difficulties appear (in defining boundary conditions and proving stability) for domains with curvilinear boundaries. To overcome these difficulties A. Samarskii ([40]) developed another approach, called the summarized approximation method. The main idea is to split a multidimensional equation with n ≥ 2 spatial variables into n equations, each with only one spatial variable, such that the summarized discrepancy ψ = ψ₁ + ψ₂ + ··· + ψₙ tends to zero as τ, h → 0. However, any intermediate ψᵢ may be of order O(1).
We describe a realization of this idea for the case of the two-dimensional wave equation
\[
\frac{\partial^2 u}{\partial t^2} = \sum_{\alpha=1}^{2}L_\alpha u + f(x,t), \qquad (x,t)\in Q_T, \qquad (4.148)
\]
\[
u\big|_{t=0} = u^0(x), \qquad \frac{\partial u}{\partial t}\Big|_{t=0} = u^1(x), \qquad x\in\Omega, \qquad (4.149)
\]
\[
u\big|_\Sigma = \mu(x',t), \qquad x'\in\Sigma, \qquad (4.150)
\]
where \(Q_T = (0,T)\times\Omega\), \(\Omega\subset\mathbb R^2\) is a bounded domain, \(\Sigma = \partial\Omega\times(0,T)\), and
\[
L_\alpha u = \frac{\partial}{\partial x_\alpha}\Bigl(k_\alpha(x,t)\frac{\partial u}{\partial x_\alpha}\Bigr), \qquad
k_\alpha(x,t) \ge p_1 > 0, \quad p_1 = \mathrm{const}.
\]
Set a net \(\omega_h\) with steps \(h_1, h_2\) over \(\Omega\) and let \(\tau\) be the step of the time discretization. Next, consider the operators
\[
P_\alpha u \stackrel{\mathrm{def}}{=} \frac12\frac{\partial^2 u}{\partial t^2} - \bigl(L_\alpha u + f_\alpha\bigr), \qquad \alpha = 1,2, \qquad (4.151)
\]
where \(f_\alpha\) satisfies the condition
\[
\sum_{\alpha=1}^{2}f_\alpha = f.
\]
To write the finite-difference version of the operators \(P_\alpha\), define the intermediate layers at the times \(t_{j+1/2} = (j+\frac12)\tau\), \(j = 0,1,2,\dots\), and consider a net function \(y^{j+\frac{\alpha}{2}}_i\) corresponding to \(u\bigl(t_{j+\frac{\alpha}{2}}, x_i\bigr)\), where \(x_i = (i_1h_1, i_2h_2)\in\omega_h\), \(\alpha = 1,2\).
Then, write
\[
\frac{\partial^2 u}{\partial t^2} \to 4y_{\bar t_\alpha t_\alpha},
\]
where
\[
y_{\bar t_\alpha t_\alpha} = \frac{y^{j+\frac12} - 2y^j + y^{j-\frac12}}{\tau^2} \quad\text{for } \alpha = 1, \qquad (4.152)
\]
\[
y_{\bar t_\alpha t_\alpha} = \frac{y^{j+1} - 2y^{j+\frac12} + y^j}{\tau^2} \quad\text{for } \alpha = 2. \qquad (4.153)
\]
The coefficient 4 appears because we are calculating the second derivative using the step \(\tau/2\).
To approximate the operators \(L_\alpha\) and the right-hand sides \(f_\alpha\) we use the standard formulas
\[
L_\alpha u \to \Lambda_\alpha y, \qquad f_\alpha \to \varphi_\alpha,
\]
where the time is fixed at the instant
\[
t'_\alpha = \frac12\Bigl(t_{j+\frac{\alpha}{2}} + t_{j-1+\frac{\alpha}{2}}\Bigr) = t_j + \frac12(\alpha-1)\tau.
\]
Consider the equations
\[
P_\alpha u = 0, \qquad \alpha = 1,2,
\]
and associate with them the following two-layer schemes:
\[
y_{\bar t_\alpha t_\alpha} = \frac14\Lambda_\alpha\bigl(y^{j+\frac{\alpha}{2}} + y^{j-1+\frac{\alpha}{2}}\bigr) + \frac12\varphi_\alpha, \qquad \alpha = 1,2. \qquad (4.154)
\]
Each of the equations (4.154) can be rewritten in the form
\[
\Bigl(1 - \frac14\tau^2\Lambda_\alpha\Bigr)\bigl(y^{j+\frac{\alpha}{2}} + y^{j-1+\frac{\alpha}{2}}\bigr)
= 2y^{j+\frac{\alpha-1}{2}} + \frac12\tau^2\varphi_\alpha, \qquad \alpha = 1,2. \qquad (4.155)
\]
It is clear that to find \(y^{j+\alpha/2}\) it is enough to solve the tridiagonal system
\[
\Bigl(1 - \frac14\tau^2\Lambda_\alpha\Bigr)y^{j+\frac{\alpha}{2}} = F_\alpha
\]
along intervals parallel to the axis \(0x_\alpha\). The boundary conditions follow from (4.150):
\[
y^{j+\frac{\alpha}{2}} = \mu\bigl(x', t_{j+\frac{\alpha}{2}}\bigr) \quad\text{for } x'\in\gamma_{\alpha,h}, \quad \alpha = 1,2, \qquad (4.156)
\]
where \(\gamma_{\alpha,h}\) is the set of boundary nodes with respect to the \(0x_\alpha\) direction.

It remains to define initial conditions for the equations (4.155). The first initial condition in (4.149) allows us to define \(y^0\):
\[
y^0 = u^0(x), \qquad x\in\omega_h. \qquad (4.157)
\]
However, to define \(y^{1/2}\) it is necessary to solve the equation
\[
\Bigl(1 - \frac{\tau^2}{4}\Lambda_1\Bigr)y^{1/2} = F_1, \qquad (4.158)
\]
where
\[
F_1 = u^0 + \frac{\tau}{2}u^1 + \frac{\tau^2}{4}\Lambda_1 u^0 + \tau^2\Bigl(f_1 - \frac18(\Lambda u + f)\Bigr)\Big|_{t=0}.
\]
It is clear that the knowledge of \(y^0\) and \(y^{1/2}\) allows us to start the calculation of \(y^j\) and \(y^{j+1/2}\) for \(j = 1,2,3,\dots\).
Theorem 4.3.3. Let
\[
\frac{\partial^2 f}{\partial t^2}\in L_2(Q_T) \qquad\text{and}\qquad \frac{\partial^4 u}{\partial x^4_\alpha}\in C(Q_T), \quad \alpha = 1,2,
\]
and satisfy the Lipschitz condition. Then the scheme (4.155)-(4.158) is absolutely stable and has the summarized discrepancy \(O(\tau + |h|^2)\).

The proof of this theorem, which is somewhat complicated, is carried out in the book [40].
4.4 The WKB method

We consider here the WKB (Wentzel-Kramers-Brillouin) method, also known as the method of geometric optics or the ray method. This method allows one to construct rapidly oscillating asymptotic solutions for linear hyperbolic equations with a small parameter.
4.4.1 The Schrödinger equation

Consider the Cauchy problem for the Schrödinger equation, which plays a very important role in quantum mechanics:
\[
-ih\frac{\partial u}{\partial t} = -h^2\Delta u + V(x)u, \qquad x\in\mathbb R^n,\ t>0,
\]
\[
u\big|_{t=0} = \varphi^0(x)\,e^{\frac{iS_0(x)}{h}}. \qquad (4.159)
\]
Here, i is the imaginary unit and h > 0 is Planck's constant. Since h ≪ 1, we will treat h as a small parameter, h → 0. It is assumed that the function \(S_0\), called the phase, and the potential V are sufficiently smooth. The amplitude \(\varphi^0\) is smooth and, generally, complex-valued.
In the special case \(\varphi^0 = \mathrm{const}\), \(S_0 = \langle k,x\rangle\), \(k\in\mathbb R^n\), the initial datum is a Fourier harmonic. So, because of the linearity of the equation, such initial data represent the general case of rapidly oscillating waves.
The general idea of the WKB method is the representation of the solution in the self-similar form
\[
u = \Phi(x,t,-ih)\,e^{\frac{iS(x,t)}{h}}
= \varphi_0(x,t)\,e^{\frac{iS(x,t)}{h}} - ih\,\varphi_1(x,t)\,e^{\frac{iS(x,t)}{h}} + O(h^2), \qquad (4.160)
\]
where the functions \(\Phi\), \(\varphi_0\), \(\varphi_1\) and S, arbitrary at the beginning, are smooth. Here and in what follows we say that a function f(x,t,h) is \(O(h^k)\) or \(O(1) = O(h^0)\) if, for \(h\in(0,h_0]\) with some \(h_0 > 0\),
\[
\sup_{x,t}|f(x,t,h)| \le c\,h^k,
\]
where the constant c is independent of h.
Now, calculate the first derivative:
\[
-ih\frac{\partial}{\partial t}\Bigl(\varphi(x,t)\,e^{\frac{iS(x,t)}{h}}\Bigr)
= e^{\frac{iS(x,t)}{h}}\Bigl\{\frac{\partial S}{\partial t}\varphi - ih\frac{\partial\varphi}{\partial t}\Bigr\}. \qquad (4.161)
\]
We see that the first term in the curly brackets is of order O(1), whereas the second is O(h). Similarly,
\[
-h^2\Delta\Bigl(\varphi(x,t)\,e^{\frac{iS(x,t)}{h}}\Bigr)
= \Bigl\langle -ih\nabla,\; e^{\frac{iS(x,t)}{h}}\bigl(\varphi\nabla S - ih\nabla\varphi\bigr)\Bigr\rangle
\]
\[
= e^{\frac{iS(x,t)}{h}}\Bigl\{|\nabla S|^2\varphi - ih\bigl(2\langle\nabla S,\nabla\varphi\rangle + \varphi\Delta S\bigr) - h^2\Delta\varphi\Bigr\}.
\]
Denote by \(\Pi\) the so-called transport operator
\[
\Pi\varphi = 2\langle\nabla S,\nabla\varphi\rangle + \varphi\,\Delta S.
\]
Then the last formula can be rewritten as follows:
\[
-h^2\Delta\bigl(\varphi\,e^{\frac{iS}{h}}\bigr) = e^{\frac{iS}{h}}\bigl\{|\nabla S|^2 - ih\,\Pi - h^2\Delta\bigr\}\varphi. \qquad (4.162)
\]
So, three terms (of orders O(1), O(h) and O(h²)) appear after the action of the operator \(-h^2\Delta\) on the rapidly oscillating exponential.
Next, substitute the expansion (4.160) into equation (4.159). Using formulas (4.161) and (4.162) and collecting the terms of order \(O(h^k)\), \(k = 0,1,2\), we obtain the relation
\[
e^{\frac{iS}{h}}\Bigl\{\Bigl(-\frac{\partial S}{\partial t} + |\nabla S|^2 + V\Bigr)\varphi_0
- ih\Bigl[\Bigl(-\frac{\partial S}{\partial t} + |\nabla S|^2 + V\Bigr)\varphi_1 - \frac{\partial\varphi_0}{\partial t} + \Pi\varphi_0\Bigr]\Bigr\} + O(h^2) = 0. \qquad (4.163)
\]
Thus, equating the O(1) term to zero we obtain the equality
\[
\Bigl(-\frac{\partial S}{\partial t} + |\nabla S|^2 + V\Bigr)\varphi_0 = 0, \qquad x\in\mathbb R^n,\ t>0.
\]
Since \(\varphi_0\) is not identically zero, we obtain the Hamilton-Jacobi equation
\[
\frac{\partial S}{\partial t} - |\nabla S|^2 - V(x) = 0. \qquad (4.164)
\]
Substitution of the expansion (4.160) into the initial condition in (4.159) produces the initial condition for the phase S:
\[
S\big|_{t=0} = S_0(x). \qquad (4.165)
\]
Recall that the Cauchy problem (4.164), (4.165) can be solved by the method of characteristics (see Section I):
\[
S(x,t) = \Bigl\{S_0(x^0) + \int_0^t\bigl(V(X(t',x^0)) - P^2(t',x^0)\bigr)\,dt'\Bigr\}\Big|_{x^0 = x^0(x,t)},
\]
where \(X(t,x^0)\) and \(P(t,x^0) = \nabla S\big|_{x=X(t,x^0)}\) are the solutions of the Hamiltonian system
\[
\frac{dX}{dt} = -2P, \qquad \frac{dP}{dt} = \nabla_x V\big|_{x=X},
\]
\[
X\big|_{t=0} = x^0, \qquad P\big|_{t=0} = \nabla_x S_0\big|_{x=x^0}, \qquad (4.166)
\]
and \(x^0 = x^0(x,t)\) is the solution of the equation
\[
x = X(t,x^0). \qquad (4.167)
\]
Recall also that the central point of the applicability of the characteristics method is the assumption of solvability of equation (4.167): we consider only those \(t\in[0,t^*)\) for which
\[
\mathcal J \stackrel{\mathrm{def}}{=} \det\Bigl(\frac{\partial X}{\partial x^0}\Bigr) > 0 \quad\text{for } t\in[0,t^*). \qquad (4.168)
\]
Therefore, we obtain the same restriction (4.168) for the application of the WKB method.
Next, return to the relation (4.163). Equating the terms of order O(h) to zero and taking into account equation (4.164) produces the so-called transport equation
\[
\frac{\partial\varphi_0}{\partial t} - \Pi\varphi_0 = 0. \qquad (4.169)
\]
Obviously, the initial condition is
\[
\varphi_0\big|_{t=0} = \varphi^0(x). \qquad (4.170)
\]
We now show that the problem (4.169), (4.170) can be solved explicitly if the solution to problem (4.164), (4.165) is known. Indeed, let us write \(\varphi_0\) along the characteristics \(x = X(t,x^0)\) of the Hamiltonian system (4.166). Then
\[
\frac{d}{dt}\varphi(t,X(t,x^0)) = \frac{\partial\varphi(t,x)}{\partial t} + \Bigl\langle\nabla_x\varphi(t,x),\frac{dX}{dt}\Bigr\rangle\Big|_{x=X}
= \frac{\partial\varphi(t,x)}{\partial t} - 2\bigl\langle\nabla_x S(t,x),\nabla_x\varphi(t,x)\bigr\rangle\Big|_{x=X}.
\]
Thus, the transport equation (4.169) can be transformed into
\[
\frac{d\tilde\varphi_0}{dt} - \Delta S\,\tilde\varphi_0 = 0, \qquad (4.171)
\]
where \(\tilde\varphi_0 = \tilde\varphi_0(t,x^0) \stackrel{\mathrm{def}}{=} \varphi_0(t,X(t,x^0))\).
Recall now the Liouville equation (see Section I) for the Jacobian \(\mathcal J = \mathcal J(t,x^0)\) of the system (4.166):
\[
\frac{1}{\mathcal J}\frac{d\mathcal J}{dt} = -2\langle\nabla,\nabla S\rangle = -2\Delta S.
\]
Thus,
\[
-\Delta S = \frac{1}{\sqrt{\mathcal J}}\,\frac{d}{dt}\sqrt{\mathcal J},
\]
and we rewrite (4.171) as follows:
\[
\sqrt{\mathcal J}\,\frac{d\tilde\varphi_0}{dt} + \tilde\varphi_0\,\frac{d\sqrt{\mathcal J}}{dt} = 0.
\]
Therefore \(\tilde\varphi_0(t,x^0)\sqrt{\mathcal J(t,x^0)}\) is constant along the characteristics. Taking into account the initial condition (4.170) and the fact that \(\mathcal J\big|_{t=0} = 1\), this implies the formula
\[
\varphi_0(t,x) = \frac{\varphi^0(x^0)}{\sqrt{\mathcal J(t,x^0)}}\Big|_{x^0 = x^0(x,t)}. \qquad (4.172)
\]
Obviously, this formula is meaningful only under assumption (4.168).
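The objects in (4.166)-(4.172) are easy to compute numerically. The following Python sketch uses the hypothetical model data \(V(x) = x^2\), \(S_0 \equiv 0\) in one dimension, for which system (4.166) gives \(\ddot X = -4X\), hence \(X(t,x^0) = x^0\cos 2t\) and \(\mathcal J = \partial X/\partial x^0 = \cos 2t\); the Jacobian, computed by differencing the numerically integrated flow, vanishes at the caustic \(t = \pi/4\), where formula (4.172) loses its meaning:

```python
import math

def flow(x0, t, n_steps=2000):
    """Integrate dX/dt = -2P, dP/dt = V'(X) = 2X (system (4.166)) by RK4."""
    dt = t / n_steps
    X, P = x0, 0.0                          # P(0) = S0'(x0) = 0 since S0 = 0
    f = lambda X, P: (-2.0 * P, 2.0 * X)
    for _ in range(n_steps):
        k1 = f(X, P)
        k2 = f(X + 0.5 * dt * k1[0], P + 0.5 * dt * k1[1])
        k3 = f(X + 0.5 * dt * k2[0], P + 0.5 * dt * k2[1])
        k4 = f(X + dt * k3[0], P + dt * k3[1])
        X += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        P += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return X

def jacobian(x0, t, dx0=1e-6):
    """J = dX/dx0 by a central difference of the flow."""
    return (flow(x0 + dx0, t) - flow(x0 - dx0, t)) / (2.0 * dx0)

J_half = jacobian(1.0, 0.5)               # should equal cos(2 * 0.5) = cos(1)
J_caustic = jacobian(1.0, math.pi / 4.0)  # cos(pi/2) = 0: the caustic
```

Up to the caustic, the amplitude (4.172) is simply \(\varphi^0(x^0)/\sqrt{\mathcal J}\); at the caustic the WKB formula breaks down, which is the practical meaning of restriction (4.168).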
The described procedure can be repeated to obtain the asymptotic solution with arbitrary accuracy. Indeed, writing, instead of (4.160),
\[
u = e^{\frac{iS}{h}}\bigl\{\varphi_0 - ih\varphi_1 + (-ih)^2\varphi_2 + \cdots\bigr\} \qquad (4.173)
\]
and repeating the construction, we obtain the relation (4.163) with the terms of order \(O(h^2)\) included:
\[
e^{\frac{iS}{h}}\Bigl\{\Bigl(-\frac{\partial S}{\partial t} + |\nabla S|^2 + V\Bigr)\varphi_0
- ih\Bigl[\Bigl(-\frac{\partial S}{\partial t} + |\nabla S|^2 + V\Bigr)\varphi_1 - \frac{\partial\varphi_0}{\partial t} + \Pi\varphi_0\Bigr]
\]
\[
- h^2\Bigl[\Bigl(-\frac{\partial S}{\partial t} + |\nabla S|^2 + V\Bigr)\varphi_2 - \frac{\partial\varphi_1}{\partial t} + \Pi\varphi_1 + \Delta\varphi_0\Bigr]\Bigr\} + O(h^3) = 0.
\]
Obviously, this implies the same equations (4.164) and (4.169) for S and \(\varphi_0\) and, in addition, the nonhomogeneous transport equation
\[
\frac{\partial\varphi_1}{\partial t} - \Pi\varphi_1 = f_1, \qquad (4.174)
\]
with \(f_1 = \Delta\varphi_0\). Since \(\varphi_1\big|_{t=0} = 0\), we easily find
\[
\varphi_1(t,x) = \frac{1}{\sqrt{\mathcal J(t,x^0)}}\int_0^t f_1(t',X(t',x^0))\sqrt{\mathcal J(t',x^0)}\,dt'\Big|_{x^0 = x^0(x,t)}.
\]
Finally, note that an equation of the same form (4.174) appears for each of the next terms \(\varphi_k\), \(k \ge 1\), of the asymptotic expansion (4.173). Moreover, it is easy to calculate that the corresponding right-hand side \(f_k\) is equal to \(\Delta\varphi_{k-1}\).
4.4.2 The wave equation
Consider the wave equation with a variable smooth coefficient a > 0:
\[
\frac{\partial^2 u}{\partial t^2} - \operatorname{div}\bigl(a^2(x)\nabla u\bigr) = 0, \qquad t>0,\ x\in\mathbb R^n. \qquad (4.175)
\]
This equation does not depend on any small parameter. However, the solution will depend on a small parameter \(\varepsilon\) if this parameter appears in the initial data:
\[
u\big|_{t=0} = \varphi^0(x,-i\varepsilon)\,e^{\frac{iS_0(x)}{\varepsilon}}, \qquad
-i\varepsilon\frac{\partial u}{\partial t}\Big|_{t=0} = \psi^0(x,-i\varepsilon)\,e^{\frac{iS_0(x)}{\varepsilon}}. \qquad (4.176)
\]
The initial phase \(S_0\) is assumed to be smooth and real-valued, whereas the initial amplitudes \(\varphi^0(x,z)\), \(\psi^0(x,z)\) are smooth for \(x\in\mathbb R^n\) but complex-valued. The appearance of the small parameter on the left-hand side of the second condition in (4.176) will be commented on later.
To construct the asymptotic solution we will use the same approach as for the Schrödinger equation. However, since
\[
\frac{\partial^2}{\partial t^2}\Bigl(\varphi(x,t)\,e^{\frac{iS(x,t)}{\varepsilon}}\Bigr)
= e^{\frac{iS(x,t)}{\varepsilon}}\Bigl\{-\frac{1}{\varepsilon^2}\Bigl(\frac{\partial S}{\partial t}\Bigr)^2 + \frac{i}{\varepsilon}\pi + \frac{\partial^2}{\partial t^2}\Bigr\}\varphi(x,t),
\]
\[
\bigl\langle\nabla, a^2(x)\nabla\bigl(\varphi(x,t)\,e^{\frac{iS(x,t)}{\varepsilon}}\bigr)\bigr\rangle
= e^{\frac{iS(x,t)}{\varepsilon}}\Bigl\{-\frac{a^2}{\varepsilon^2}|\nabla S|^2\varphi + \frac{i}{\varepsilon}\Pi\varphi + \langle\nabla, a^2(x)\nabla\varphi(x,t)\rangle\Bigr\}, \qquad (4.177)
\]
where
\[
\pi\varphi \stackrel{\mathrm{def}}{=} 2\frac{\partial S}{\partial t}\frac{\partial\varphi}{\partial t} + \frac{\partial^2 S}{\partial t^2}\varphi, \qquad
\Pi\varphi \stackrel{\mathrm{def}}{=} 2a^2\langle\nabla S,\nabla\varphi\rangle + \operatorname{div}(a^2\nabla S)\varphi, \qquad (4.178)
\]
the corresponding Hamilton-Jacobi equation
\[
\Bigl(\frac{\partial S}{\partial t}\Bigr)^2 - a^2(x)|\nabla S|^2 = 0 \qquad (4.179)
\]
has two solutions.
Therefore, we have to take both of them into account and write the general solution in the form
\[
u(x,t,\varepsilon) = \sum_{\pm}\bigl\{\varphi^\pm_0(x,t) + (-i\varepsilon)\varphi^\pm_1(x,t) + \cdots\bigr\}\,e^{\frac{i}{\varepsilon}S^\pm(x,t)}, \qquad (4.180)
\]
where the indices \(\pm\) correspond to the solutions \(S^\pm\) of the Hamilton-Jacobi equations
\[
\frac{\partial S^\pm}{\partial t} \pm a(x)|\nabla S^\pm| = 0 \qquad (4.181)
\]
and \(\varphi^\pm_0, \varphi^\pm_1, \dots\) are arbitrary smooth functions.
Substitution of the expansion (4.180) into the wave equation (4.175) implies the following relation:
\[
-\frac{1}{\varepsilon^2}\sum_{\pm}\Bigl\{\Bigl(\frac{\partial S^\pm}{\partial t}\Bigr)^2 - a^2|\nabla S^\pm|^2\Bigr\}\varphi^\pm_0
\]
\[
- \frac{i}{\varepsilon}\sum_{\pm}\Bigl\{\Bigl[\Bigl(\frac{\partial S^\pm}{\partial t}\Bigr)^2 - a^2|\nabla S^\pm|^2\Bigr]\varphi^\pm_1 - (\pi^\pm - \Pi^\pm)\varphi^\pm_0\Bigr\} + O(1) = 0, \qquad (4.182)
\]
where the transport operators \(\pi^\pm\) and \(\Pi^\pm\) have the same form as in (4.178) with \(S^\pm\) instead of S.
Thus, equating the terms of order \(O(\varepsilon^{-2})\) to zero, we pass to equation (4.179) and, after that, to equations (4.181). Note now that the initial data depend on the single phase \(S_0(x)\). Hence, the phases \(S^\pm\) have to coincide at the initial instant of time:
\[
S^\pm\big|_{t=0} = S_0(x). \qquad (4.183)
\]
Recall the Hamiltonian systems
\[
\frac{dX^\pm}{dt} = \pm a(X^\pm)\frac{P^\pm}{|P^\pm|}, \qquad
\frac{dP^\pm}{dt} = \mp|P^\pm|\,\nabla_x a(X^\pm),
\]
\[
X^\pm\big|_{t=0} = x^0, \qquad P^\pm\big|_{t=0} = \nabla S_0\big|_{x=x^0}, \qquad (4.184)
\]
which correspond to the Cauchy problems (4.181), (4.183). Recall also that smooth solutions to these problems exist only under the assumption
\[
\mathcal J^\pm \stackrel{\mathrm{def}}{=} \det\frac{\partial X^\pm}{\partial x^0} \ne 0. \qquad (4.185)
\]
Assuming the fulfillment of condition (4.185), let us consider again the relation (4.182). Equating the terms of order \(O(\varepsilon^{-1})\) to zero we obtain the transport equations
\[
2\Bigl\{S^\pm_t\frac{\partial\varphi^\pm_0}{\partial t} - a^2\langle\nabla S^\pm,\nabla\varphi^\pm_0\rangle\Bigr\}
+ \bigl\{S^\pm_{tt} - \operatorname{div}(a^2\nabla S^\pm)\bigr\}\varphi^\pm_0 = 0. \qquad (4.186)
\]
Let us transform these equations. First of all, note that the total derivatives along the characteristics \(x = X^\pm(t,x^0)\) of the Hamiltonian system (4.184) are as follows:
\[
\frac{d}{dt}\varphi(X^\pm,t) = \frac{\partial\varphi(x,t)}{\partial t} + \bigl\langle\nabla_x\varphi(x,t),\dot X^\pm\bigr\rangle\Big|_{x=X^\pm}
= \frac{\partial\varphi(x,t)}{\partial t} \pm a\Bigl\langle\nabla_x\varphi(x,t),\frac{\nabla S^\pm}{|\nabla S^\pm|}\Bigr\rangle\Big|_{x=X^\pm}.
\]
Thus, the first summand on the left-hand side of equation (4.186) can be rewritten as the total derivative
\[
2\Bigl\{S^\pm_t\frac{\partial\varphi^\pm_0}{\partial t} - a^2\langle\nabla S^\pm,\nabla\varphi^\pm_0\rangle\Bigr\}\Big|_{x=X^\pm}
= 2S^\pm_t\Bigl\{\frac{\partial\varphi^\pm_0}{\partial t} \pm \frac{a}{|\nabla S^\pm|}\langle\nabla S^\pm,\nabla\varphi^\pm_0\rangle\Bigr\}\Big|_{x=X^\pm}
= 2\tilde S^\pm_t\frac{d\tilde\varphi^\pm_0}{dt}, \qquad (4.187)
\]
where the tilde denotes evaluation at \(x = X^\pm\).
Next, to transform the second term in (4.186), we use equations (4.181) twice:
\[
\frac{\partial^2 S^\pm}{\partial t^2} = \frac{\partial}{\partial t}\bigl(\mp a(x)|\nabla S^\pm|\bigr)
= \mp a(x)\frac{\langle\nabla S^\pm,\nabla S^\pm_t\rangle}{|\nabla S^\pm|}
= \frac{a(x)}{|\nabla S^\pm|}\bigl\langle\nabla S^\pm,\nabla\bigl(a(x)|\nabla S^\pm|\bigr)\bigr\rangle.
\]
Furthermore, use the identity
\[
\bigl\langle\nabla S^\pm,\nabla\bigl(a(x)|\nabla S^\pm|\bigr)\bigr\rangle
= \frac{|\nabla S^\pm|}{a(x)}\bigl\langle\nabla, a^2\nabla S^\pm\bigr\rangle - |\nabla S^\pm|^2\Bigl\langle\nabla, a\frac{\nabla S^\pm}{|\nabla S^\pm|}\Bigr\rangle.
\]
Hence,
\[
\frac{\partial^2 S^\pm}{\partial t^2} = \operatorname{div}(a^2\nabla S^\pm) - a|\nabla S^\pm|\operatorname{div}\Bigl(a\frac{\nabla S^\pm}{|\nabla S^\pm|}\Bigr). \qquad (4.188)
\]
On the other hand, according to the Liouville theorem, the following equality holds for the Jacobians \(\mathcal J^\pm\) (since \(\mathcal J^\pm > 0\)):
\[
\frac{1}{\mathcal J^\pm}\frac{d\mathcal J^\pm}{dt} = \pm\operatorname{div}\Bigl(a\frac{\nabla S^\pm}{|\nabla S^\pm|}\Bigr)\Big|_{x=X^\pm}. \qquad (4.189)
\]
Combining the equalities (4.188), (4.189) and using equations (4.181) again, we obtain the formula
\[
\bigl\{S^\pm_{tt} - \operatorname{div}(a^2\nabla S^\pm)\bigr\}\Big|_{x=X^\pm} = \tilde S^\pm_t\,\frac{1}{\mathcal J^\pm}\frac{d\mathcal J^\pm}{dt}.
\]
Thus, the transport equation (4.186) can be rewritten in the form
\[
\frac{d\tilde\varphi^\pm_0}{dt} + \frac12\frac{\tilde\varphi^\pm_0}{\mathcal J^\pm}\frac{d\mathcal J^\pm}{dt} = 0. \qquad (4.190)
\]
Obviously, the solutions to these equations are
\[
\tilde\varphi^\pm_0 = \frac{\phi^\pm_0(x^0)}{\sqrt{\mathcal J^\pm(t,x^0)}}, \qquad (4.191)
\]
where \(\phi^\pm_0(x^0)\) are "constants" of integration and the equality \(\mathcal J^\pm\big|_{t=0} = 1\) was used.
It remains to find the "constants" of integration. Substitute the asymptotic expansion (4.180) into the initial conditions (4.176) and collect the terms of order O(1). Taking into account the formulas (4.183), (4.191), we obtain the algebraic system
\[
\phi^+_0(x^0) + \phi^-_0(x^0) = \varphi^0_0(x^0),
\]
\[
\Bigl\{\frac{\partial S^+}{\partial t}\Big|_{t=0}\phi^+_0 + \frac{\partial S^-}{\partial t}\Big|_{t=0}\phi^-_0\Bigr\}\Big|_{x=x^0} = \psi^0_0(x^0), \qquad (4.192)
\]
where \(\varphi^0_0\) and \(\psi^0_0\) are the leading terms of the asymptotic expansions of the initial amplitudes \(\varphi^0(x,-i\varepsilon)\), \(\psi^0(x,-i\varepsilon)\) with respect to \(\varepsilon\).
Next,
\[
\frac{\partial S^\pm}{\partial t}\Big|_{t=0} = \mp a|\nabla S^\pm|\Big|_{t=0} = \mp a|\nabla S_0|.
\]
Thus, we find the following formulas for the solution of system (4.192):
\[
\phi^+_0(x^0) = \frac12\Bigl\{\varphi^0_0 - \frac{\psi^0_0}{a|\nabla S_0|}\Bigr\}\Big|_{x=x^0}, \qquad
\phi^-_0(x^0) = \frac12\Bigl\{\varphi^0_0 + \frac{\psi^0_0}{a|\nabla S_0|}\Bigr\}\Big|_{x=x^0}. \qquad (4.193)
\]
Since the construction of the next terms of the asymptotic expansion (4.180) is carried out in the same way, the formulas (4.191), (4.193) complete the proof of the main assertion of this subsection.

Theorem 4.4.1. Let \(\varphi^0\), \(\psi^0\) and \(S_0\) be sufficiently smooth and let assumption (4.185) hold for \(t \le T\). Then the leading term of the WKB solution of the Cauchy problem (4.175), (4.176) has the form
\[
u(x,t,\varepsilon) = \sum_{\pm}\frac{\phi^\pm_0(x^0)}{\sqrt{\mathcal J^\pm(t,x^0)}}\Big|_{x^0 = x^0_\pm(x,t)}e^{\frac{iS^\pm(x,t)}{\varepsilon}} + O(\varepsilon),
\]
where \(x^0_\pm(x,t)\) are the solutions of the equations \(x = X^\pm(t,x^0)\) and \(\phi^\pm_0\) have the form (4.193). The phases \(S^\pm\), characteristics \(X^\pm\) and Jacobians \(\mathcal J^\pm\) are defined by the Hamiltonian systems (4.184).
Remark 4.4.1. We write \(-i\varepsilon\,\partial u/\partial t\) in the initial conditions (4.176) in order to involve the initial value of this derivative in the leading term of the asymptotic solution. Indeed, prescribing \(\partial u/\partial t\big|_{t=0}\) itself is equivalent to the choice \(\psi^0(x,\varepsilon) = \varepsilon\psi^0_1(x) + O(\varepsilon^2)\), that is, to choosing \(\psi^0_0 \equiv 0\). Thus the initial derivative is zero in the leading term, and instead of (4.193) we would obtain only the particular case \(\phi^+_0 = \phi^-_0 = \varphi^0_0/2\).
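For constant a and a linear phase \(S_0(x) = kx\) with constant amplitudes, the leading WKB term is not merely asymptotic but exact, which gives a quick sanity check of the weights (4.193). A Python sketch (one dimension; the constants a, k, ε, φ⁰, ψ⁰ below are arbitrary hypothetical test data):

```python
import numpy as np

a, k, eps = 2.0, 1.0, 0.05
phi0, psi0 = 1.0 + 0.2j, 0.5 - 0.1j          # constant complex amplitudes

# Weights (4.193) with |grad S0| = k:
A = 0.5 * (phi0 - psi0 / (a * k))            # branch S^+ = k (x - a t)
B = 0.5 * (phi0 + psi0 / (a * k))            # branch S^- = k (x + a t)

u  = lambda x, t: (A * np.exp(1j * k * (x - a * t) / eps)
                   + B * np.exp(1j * k * (x + a * t) / eps))
ut = lambda x, t: (-1j * k * a / eps * A * np.exp(1j * k * (x - a * t) / eps)
                   + 1j * k * a / eps * B * np.exp(1j * k * (x + a * t) / eps))

x = np.linspace(-1.0, 1.0, 7)
# Each branch is an exact traveling-wave solution of u_tt = a^2 u_xx, and the
# sum must reproduce both initial conditions (4.176):
ic1_err = np.max(np.abs(u(x, 0.0) - phi0 * np.exp(1j * k * x / eps)))
ic2_err = np.max(np.abs(-1j * eps * ut(x, 0.0) - psi0 * np.exp(1j * k * x / eps)))
```

Both residuals vanish to rounding, confirming that the weights (4.193) split the oscillating initial data between the two characteristic families exactly as the theorem asserts.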
4.5 Bibliographical comments
For a first reading, the very clearly written textbook by R. Knobel [1] can be recommended. There are also many other textbooks at the undergraduate level, among others the texts by S. J. Farlow [34] and P. W. Berg and J. L. McGregor [31]. All of them consider the physical aspects of the occurrence of the wave equation and the main results for the case of one spatial dimension. A detailed description of the physical background, as well as considerations of many specific problems, can be found in the well-known book by G. B. Whitham [3].
In the present text we follow the textbook by V. S. Vladimirov [42], since we consider it one of the best for graduate-level students. The proofs of all the statements cited in this text are contained there. Of course, classical results about the wave equation can be found in other textbooks, among others those written by E. DiBenedetto [2] and A. N. Tikhonov and A. A. Samarskii [43].
We do not consider here the more general viewpoint on linear hyperbolic equations. The textbook by J. Wloka [49] may be treated as an introduction to the modern theory of hyperbolic equations. This theory is described in more detail in the textbook by O. A. Ladyzhenskaya [46].
In the consideration of methods of numerical simulation for the wave equation we have followed the textbook by A. A. Samarskii [40]. This topic is also discussed in the textbooks [46], [32] and [8].
The WKB method is discussed in the book by G. B. Whitham [3] from two viewpoints: for the construction of rapidly oscillating asymptotic solutions and for the construction of discontinuous asymptotic solutions. A detailed description of this method appears in the book by V. P. Maslov and M. V. Fedoryuk [14]. It is necessary to note that the main topic of [14] is the canonical operator method, which allows one to construct global asymptotics for singularly perturbed hyperbolic equations with variable coefficients.
Bibliography
[1] R. Knobel, An Introduction to the Mathematical Theory of Waves,AMS, 2000
[2] E. DiBenedetto, Partial Differential Equations, Birkhauser, 1995.
[3] G. B. Whitham, Linear and Nonlinear Waves, Wiley, New York,1974.
[4] L. Schwartz, Théorie des Distributions, Hermann, Paris, 1966.
[5] I. M. Gelfand and G. E. Shilov, Generalized Functions, Vol. 1, Prop-erties and Operations, New York, Academic Press, 1964.
[6] W. Boyce and R. DiPrima, Elementary Differential Equations andBoundary Value Problems, Wiley, New York, 1969.
[7] R. K. Miller and A. N. Michel, Ordinary Differential Equations,Academic Press, 1982.
[8] R. D. Richtmyer and K. W. Morton, Difference Methods for Initial-Value Problems, Interscience Publ., 1967.
[9] P. J. Roache, Computational Fluid Dynamics, Hermosa Publishers, Albuquerque, 1976.
[10] R. Courant, Partial Differential Equations, Interscience, New York,1962.
[11] P. Lax, Hyperbolic Systems of Conservations Laws, SIAM, 1973.
[12] B. R. Vainberg, Asymptotic Methods in Equations of MathematicalPhysics, Gordon and Breach, New York, 1989.
[13] B. L. Rozhdestvenskii and N. N. Yanenko, Systems of Quasilinear Equations and their Applications to Gas Dynamics, Amer. Math. Soc., Providence, RI, 1983.
[14] V. P. Maslov and M.V. Fedoryuk, Semi-classical Approximation inQuantum Mechanics, Math. Phys. and Appl. Math., Vol. 7, Reidel,Dordrecht, 1981.
[15] R. Haberman, Mathematical Models: Mechanical Vibrations, Pop-ulation Dynamics, and Traffic Flow, SIAM, 1998.
[16] O. A. Oleinik, Discontinuous solutions of nonlinear differentialequations, Amer. Math. Soc. Transl. (2) 26, 1963, 95-172.
[17] O. A. Oleinik, On the construction of a generalized solution to theCauchy problem for a quasilinear equation by introducing the “van-ishing viscosity”, Amer. Math. Soc. Transl. (2) 23, 1963, 277-283.
[18] J. Smoller, Shock Waves and Reaction-Diffusion Equations,Springer-Verlag, 1982.
[19] A. Friedman, Generalized Functions and Partial Differential Equa-tions, Prentice-Hall Inc., 1963
[20] A. Friedman, Partial Differential Equations, Holt, Rinehart and Winston, Inc., 1969.
[21] J. F. Colombeau, Elementary Introduction to New GeneralizedFunctions, North-Holland, Amsterdam, 1985.
[22] H. A. Biagioni, A Nonlinear Theory of Generalized Functions,Springer-Verlag, Berlin, 1992.
[23] M. Oberguggenberger, Multiplication of Distributions and Applications to Partial Differential Equations, Pitman Res. Notes Math., 259, Longman Sci. Techn., Harlow, Essex, 1992.

[24] V. P. Maslov, Propagation of shock waves in an isentropic, non-viscous gas, J. Soviet Math. 13, 1980, 119-163.

[25] V. P. Maslov, Three algebras corresponding to non-smooth solutions of systems of quasilinear hyperbolic equations, Russian Math. Surveys 35, 1980, N2, 252-253.

[26] V. G. Danilov, V. P. Maslov, and V. M. Shelkovich, The algebra of singularities of singular solutions to quasi-linear strictly hyperbolic systems of first order, Theoret. and Math. Phys., 114, 1998, 1-42.

[27] R. J. LeVeque, Numerical Methods for Conservation Laws, Birkhäuser, 1992.

[28] E. Zauderer, Partial Differential Equations of Applied Mathematics, Wiley-Interscience, N. Y., 1983.

[29] H. Bateman, Partial Differential Equations of Mathematical Physics, Cambridge Univ. Press, 1964.

[30] S. L. Sobolev, Partial Differential Equations of Mathematical Physics, Pergamon Press, Oxford, 1964.

[31] P. W. Berg and J. L. McGregor, Elementary Partial Differential Equations, Holden-Day, San Francisco, 1966.

[32] G. E. Forsythe and W. R. Wasow, Finite-Difference Methods for Partial Differential Equations, Wiley, N. Y., 1960.

[33] W. F. Ames, Numerical Methods for Partial Differential Equations, Academic Press, N. Y., 1992.

[34] S. J. Farlow, Partial Differential Equations for Scientists and Engineers, Wiley, N. Y., 1982.
[35] C. Miranda, Partial Differential Equations of Elliptic Type, Springer-Verlag, N. Y., 1970.

[36] J. L. Lions and E. Magenes, Non-Homogeneous Boundary Value Problems and Applications, Springer-Verlag, New York, 1972.

[37] E. L. Wachspress, Extended Application of Alternating-Direction-Implicit Iteration Model Problem Theory, J. Soc. Industr. Appl. Math., 11, N3, 1963, 994-1016.

[38] M. I. Vishik and L. A. Lyusternik, Regular Degeneration and Boundary Layers for Linear Differential Equations with Small Parameter, Amer. Math. Soc. Transl. (2) 20, 1962, 239-364.

[39] A. M. Il’in, Matching of Asymptotic Expansions of Solutions of Boundary Value Problems, Amer. Math. Soc. Transl., 1992.

[40] A. A. Samarskii, The Theory of Difference Schemes, Marcel Dekker, N. Y., 2001.

[41] O. A. Ladyzhenskaya and N. N. Uraltseva, Linear and Quasilinear Elliptic Equations, Academic Press, N. Y., 1968.

[42] V. S. Vladimirov, Equations of Mathematical Physics, Marcel Dekker, Inc., N. Y., 1971.

[43] A. N. Tikhonov and A. A. Samarskii, Equations of Mathematical Physics, Dover Publ., N. Y., 1990.

[44] D. Bleecker and G. Csordas, Basic Partial Differential Equations, International Press, Cambridge, 1996.

[45] G. Hellwig, Partial Differential Equations: An Introduction, Blaisdell Publ., N. Y., 1964.

[46] O. A. Ladyzhenskaya, Boundary Value Problems of Mathematical Physics, Springer-Verlag, N. Y., 1985.
[47] O. A. Ladyzhenskaya, V. A. Solonnikov, and N. N. Uraltseva, Linear and Quasilinear Equations of Parabolic Type, Translations of Mathematical Monographs, 23, AMS, Providence, 1968.

[48] A. A. Samarskii and E. S. Nikolaev, Numerical Methods for Grid Equations, Birkhäuser Verlag, Basel-Boston, 1989.

[49] J. Wloka, Partial Differential Equations, Cambridge Univ. Press, Cambridge, 1987.