Module 1: Mathematical Preliminaries
This introductory module comprises four lectures. In these four lectures, we introduce
to the readers some basic concepts from multivariable calculus and some essential results
from ordinary differential equations (ODEs). Some geometrical concepts necessary for the
subsequent modules are also discussed.
Module 1 is organised as follows. In Lecture 1, we review some basic definitions and
results from multivariable calculus. In Lecture 2, we discuss some essential formulas for
solving linear first-order and second-order (with constant coefficients only) ODEs. In
addition, we review the basic existence and uniqueness theorems for initial value problems
(IVPs) for ODEs and systems of ODEs. In Lecture 3, we discuss some geometrical concepts
like surfaces, normals and integral curves and surfaces of vector fields. Finally, Lecture 4
is devoted to methods for finding the integral curves of a vector field by solving systems
of ODEs.
Lecture 1 A Review of Multivariable Calculus
In this lecture, we recall some basic concepts from multivariable calculus. The concepts
of limits, continuity, partial derivatives, directional derivatives, chain rules, tangent plane
and normals are discussed.
For any (x, y), (x0, y0) ∈ R^2, let us denote by
d((x, y), (x0, y0)) = √((x − x0)^2 + (y − y0)^2)
the distance between the two points (x, y) and (x0, y0). The disk Dr(x0, y0) of radius r
centered at (x0, y0) is defined as
Dr(x0, y0) = {(x, y) | d((x, y), (x0, y0)) < r}.
The concept of limit now can be defined by the same ϵ, δ technique as in one variable
calculus.
DEFINITION 1. (The ϵ, δ definition of limit) Let f(x, y) be a real-valued function of
two variables defined on a disk Dr(x0, y0), except possibly at (x0, y0). Then
lim_{(x,y)→(x0,y0)} f(x, y) = l if for every ϵ > 0 there is a δ > 0 such that
|f(x, y) − l| < ϵ whenever 0 < d((x, y), (x0, y0)) < δ.
Definition 1 means that the distance between f(x, y) and l can be made arbitrarily
small by making the distance from (x, y) to (x0, y0) sufficiently small (but not 0). That
is, if any small interval (l − ϵ, l + ϵ) is given around l, then we can find a disk Dδ(x0, y0)
with center (x0, y0) and radius δ > 0 such that f maps all the points in Dδ(x0, y0) [except
possibly (x0, y0)] into the interval (l − ϵ, l + ϵ).
The definition of a limit can be extended to functions of three or more variables. Using
vector notation the definition can be written in a compact form as follows:
Let f : Dr(x0) ⊂ Rn → R. Then
lim_{x→x0} f(x) = l if for every ϵ > 0 there is a δ > 0 such that
|f(x) − l| < ϵ whenever 0 < d(x, x0) < δ.
DEFINITION 2. (Continuity) Let f(x, y) be a real-valued function of two variables
defined in a disk Dr(x0, y0) with center (x0, y0). Then f is continuous at (x0, y0) if
lim_{(x,y)→(x0,y0)} f(x, y) = f(x0, y0).
We say f is continuous in Dr(x0, y0) if f is continuous at every point (x, y) in Dr(x0, y0).
The intuitive meaning of continuity is that if the point (x, y) changes by a small amount,
then the value of f(x, y) changes by a small amount. Geometrically, this means that a
surface that is the graph of a continuous function has no holes or breaks.
DEFINITION 3. (Partial derivatives) Let f : Dr(x0, y0) → R. The partial derivatives
of f are the functions fx and fy defined by
fx(x, y) := lim_{h→0} [f(x + h, y) − f(x, y)]/h,
fy(x, y) := lim_{h→0} [f(x, y + h) − f(x, y)]/h.
To find fx, treat y as a constant and differentiate f(x, y) with respect to x. Similarly,
to find fy, treat x as a constant and differentiate f(x, y) with respect to y. If z = f(x, y)
we write
fx = ∂f/∂x = ∂z/∂x = zx,   fy = ∂f/∂y = ∂z/∂y = zy.
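The limit definitions above can be checked numerically. The following sketch (plain Python; the function f and the evaluation point are illustrative choices, not from the text) approximates fx and fy by the difference quotients of Definition 3:

```python
import math

def f(x, y):
    # an illustrative function f(x, y) = x^3*y + sin(x*y)
    return x**3 * y + math.sin(x * y)

def fx(x, y, h=1e-6):
    # difference quotient from the definition of f_x (y held constant)
    return (f(x + h, y) - f(x, y)) / h

def fy(x, y, h=1e-6):
    # difference quotient from the definition of f_y (x held constant)
    return (f(x, y + h) - f(x, y)) / h

# exact values at (1, 2): f_x = 3x^2*y + y*cos(xy) = 6 + 2cos(2),
#                         f_y = x^3 + x*cos(xy)   = 1 + cos(2)
print(fx(1.0, 2.0), fy(1.0, 2.0))
```

For small h the quotients agree with the hand-computed partial derivatives to several decimal places.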
Partial derivatives can also be defined for functions of three or more variables. In general,
if z is a function of n variables, z = f(x1, x2, . . . , xn), its partial derivative with respect
to the ith variable xi is
∂z/∂xi := lim_{h→0} [f(x1, . . . , xi−1, xi + h, xi+1, . . . , xn) − f(x1, . . . , xi, . . . , xn)]/h.
We also write
zxi = ∂z/∂xi = ∂f/∂xi = fxi.
Since the partial derivatives are themselves functions, we can take their partial derivatives
to obtain higher derivatives. If z = f(x, y), we may compute
fxx(x, y) = ∂/∂x(∂z/∂x) = ∂^2z/∂x^2,   fyy(x, y) = ∂/∂y(∂z/∂y) = ∂^2z/∂y^2,
fxy(x, y) = ∂/∂y(∂z/∂x) = ∂^2z/∂y∂x,   fyx(x, y) = ∂/∂x(∂z/∂y) = ∂^2z/∂x∂y.
In general, fxy ≠ fyx. However, the following theorem gives a condition under which we
can assert that fxy = fyx.
THEOREM 4. Let f : Dr(x0, y0) → R. If fxy and fyx are both continuous at (x0, y0),
then
fxy(x0, y0) = fyx(x0, y0).
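Theorem 4 can be illustrated numerically: for a smooth function, central-difference estimates of fxy and fyx agree. A minimal sketch (the function and the point are our own illustrative choices):

```python
import math

def f(x, y):
    # smooth everywhere, so Theorem 4 guarantees f_xy = f_yx
    return math.exp(x * y) + x * y**2

def mixed(f, x, y, h=1e-4, order="xy"):
    # central-difference estimate of a mixed second partial derivative
    if order == "xy":   # d/dy of (d/dx f)
        g = lambda yy: (f(x + h, yy) - f(x - h, yy)) / (2 * h)
        return (g(y + h) - g(y - h)) / (2 * h)
    else:               # d/dx of (d/dy f)
        g = lambda xx: (f(xx, y + h) - f(xx, y - h)) / (2 * h)
        return (g(x + h) - g(x - h)) / (2 * h)

# both estimates approximate (1 + xy)e^{xy} + 2y at (0.5, 0.3)
print(mixed(f, 0.5, 0.3, order="xy"), mixed(f, 0.5, 0.3, order="yx"))
```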
DEFINITION 5. (Chain rule) Let z1 = f1(x1, . . . , xn), . . . , zm = fm(x1, . . . , xn) be m
functions of n variables, and let x1 = g1(t1, . . . , tk), . . . , xn = gn(t1, . . . , tk) be n functions
of k variables, all with continuous partial derivatives.
Consider the zi's as functions of the tj's via
zi = fi(g1(t1, . . . , tk), . . . , gn(t1, . . . , tk)).
Then
∂zi/∂tj = (∂zi/∂x1)(∂x1/∂tj) + (∂zi/∂x2)(∂x2/∂tj) + · · · + (∂zi/∂xn)(∂xn/∂tj).
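For a single parameter t (n = 2, k = 1), the chain rule can be verified numerically. A small sketch (the functions are illustrative choices, not from the text):

```python
import math

def f(x1, x2):
    return x1 * x1 * x2          # z = f(x1, x2) = x1^2 * x2

g1, g2 = math.cos, math.sin      # x1 = cos(t), x2 = sin(t)

def z(t):
    return f(g1(t), g2(t))

def deriv(fun, t, h=1e-6):
    # central-difference approximation of d(fun)/dt
    return (fun(t + h) - fun(t - h)) / (2 * h)

t0 = 0.8
# chain rule: dz/dt = (dz/dx1)(dx1/dt) + (dz/dx2)(dx2/dt)
chain = 2 * g1(t0) * g2(t0) * (-math.sin(t0)) + g1(t0) ** 2 * math.cos(t0)
print(deriv(z, t0), chain)   # the two values agree
```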
DEFINITION 6. If z = f(x, y) is a function of two variables, its gradient vector field ∇f
is defined by
∇f(x, y) := (fx(x, y), fy(x, y)) = (∂z/∂x, ∂z/∂y).
If u = f(x, y, z) is a function of three variables, its gradient vector field ∇f is defined by
∇f(x, y, z) = (fx(x, y, z), fy(x, y, z), fz(x, y, z)) = (∂u/∂x, ∂u/∂y, ∂u/∂z).
DEFINITION 7. (Implicit differentiation) If y = f(x) is a function satisfying the
relation z = F (x, y) = 0, then
dy/dx = −Fx(x, f(x))/Fy(x, f(x)). (1)
Differentiating F (x, y) = 0 with respect to x using the chain rule gives
(∂F/∂x)(dx/dx) + (∂F/∂y)(dy/dx) = 0
=⇒ ∂F/∂x + (∂F/∂y)(dy/dx) = 0,
which yields (1).
DEFINITION 8. (Directional derivatives) The directional derivative of f at (x0, y0) in
the direction of a unit vector u = (u1, u2) is
Duf(x0, y0) := lim_{h→0} [f(x0 + h u1, y0 + h u2) − f(x0, y0)]/h
if this limit exists.
Note that if u = (1, 0) then Duf = fx and if u = (0, 1), then Duf = fy. In other
words, the partial derivatives of f with respect to x and y are just special cases of the
directional derivatives.
THEOREM 9. If f(x, y) is a differentiable function of x and y, then f has a directional
derivative in the direction of any unit vector u = (u1, u2) and
Duf(x, y) = fx(x, y)u1 + fy(x, y)u2.
The directional derivative can be written as
Duf(x, y) = fx(x, y)u1 + fy(x, y)u2
= (fx(x, y), fy(x, y)) · (u1, u2)
= ∇f(x, y) · u. (2)
This expresses the directional derivative in the direction of u as the scalar projection
of the gradient vector onto u. From (2), we have
Duf(x, y) = ∇f(x, y) · u
= |∇f ||u| cos θ
= |∇f | cos θ,
where θ is the angle between ∇f and u. The maximum value of cos θ is 1 and this occurs
when θ = 0. Therefore, the maximum value of Duf(x, y) is |∇f | and it occurs when θ = 0
i.e., when u has the same direction as ∇f .
Similarly, the directional derivative of functions of three variables with unit vector
u = (u1, u2, u3) can be written as
Duf(x, y, z) = ∇f(x, y, z) · u.
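Formula (2) is easy to confirm numerically: the limit definition of Duf and the dot product ∇f · u give the same value. A sketch with an illustrative f of our own choosing:

```python
import math

def f(x, y):
    return x**2 + 3 * x * y

def grad(x, y, h=1e-6):
    # gradient via central differences
    gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return gx, gy

def dir_deriv(x, y, u, h=1e-6):
    # the limit definition of D_u f (Definition 8)
    return (f(x + h * u[0], y + h * u[1]) - f(x, y)) / h

u = (1 / math.sqrt(2), 1 / math.sqrt(2))   # a unit vector
gx, gy = grad(1.0, 2.0)                    # exact gradient here is (8, 3)
print(dir_deriv(1.0, 2.0, u), gx * u[0] + gy * u[1])   # both ≈ 11/√2
```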
We now introduce the concept of differentiability for functions of several variables. Let
us first recall the definition of differentiability in the one-variable case.
Let D be an open subset of R. The function f : D → R is said to be differentiable at
x0 ∈ D if
lim_{x→x0} [f(x) − f(x0)]/(x − x0)
exists. The value of this limit is called the derivative of f at x0 and is denoted by f ′(x0).
The above definition may be restated as follows: the function f : D → R is differentiable
at x0 ∈ D if there is a number f′(x0) such that
lim_{x→x0} |f(x) − f(x0) − f′(x0)(x − x0)| / |x − x0| = 0. (3)
Any real number a0 determines a linear transformation L : R → R defined by
Lx = a0x.
In particular, f ′(x0) determines a linear transformation L : R → R given by Lx = f ′(x0)x.
Therefore, with this linear transformation, we may rewrite (3) as
lim_{x→x0} |f(x) − f(x0) − L(x − x0)| / |x − x0| = 0. (4)
We now use (4) to define differentiability of a function f : R^n → R^m.
DEFINITION 10. (Differentiability) Let D ⊂ Rn be an open subset and let f : D → Rm.
We say that f is differentiable at x0 ∈ D if there is a linear transformation L : Rn → Rm
such that
lim_{x→x0} ∥f(x) − f(x0) − L(x − x0)∥ / ∥x − x0∥ = 0. (5)
The linear transformation L of (5) is called the derivative of f at x0. We denote it by
f ′(x0).
We say that f is differentiable in D if it is differentiable at every point of D.
DEFINITION 11. (Jacobian matrix) Let f : D ⊂ R^n → R^m be defined by
f(x) = (f1(x), . . . , fm(x)),
where fi : D → R, 1 ≤ i ≤ m. For each x ∈ D, we define the Jacobian matrix of f at x
by
Jf (x) := (aij),
where aij = (∂fi/∂xj)(x), 1 ≤ i ≤ m, 1 ≤ j ≤ n.
EXAMPLE 12.
Let f : R^2 → R^3 be given by
f(x1, x2) = (x1^2, x1x2, x2^2).
Here, f1(x1, x2) = x1^2, f2(x1, x2) = x1x2, f3(x1, x2) = x2^2. Then
∂f1/∂x1 = 2x1,   ∂f2/∂x1 = x2,   ∂f3/∂x1 = 0,
∂f1/∂x2 = 0,   ∂f2/∂x2 = x1,   ∂f3/∂x2 = 2x2.
Therefore,
Jf(x1, x2) =
[ 2x1    0  ]
[  x2    x1 ]
[  0    2x2 ].
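The Jacobian of Example 12 can also be approximated column by column with difference quotients, one column per input variable. A minimal sketch:

```python
def f(x1, x2):
    # the map of Example 12: f(x1, x2) = (x1^2, x1*x2, x2^2)
    return (x1**2, x1 * x2, x2**2)

def jacobian(x1, x2, h=1e-6):
    # forward-difference approximation of J_f: column j comes from
    # perturbing the j-th input variable
    f0 = f(x1, x2)
    col1 = [(a - b) / h for a, b in zip(f(x1 + h, x2), f0)]
    col2 = [(a - b) / h for a, b in zip(f(x1, x2 + h), f0)]
    return [list(pair) for pair in zip(col1, col2)]

# at (1, 2) the exact Jacobian is [[2, 0], [2, 1], [0, 4]]
print(jacobian(1.0, 2.0))
```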
The following theorem gives a formula for computing the derivative.
THEOREM 13. (Computing derivative) Let D be an open subset of R^n and f : D → R^m
be defined by
f(x) = (f1(x), . . . , fm(x)),
where fi : D → R, 1 ≤ i ≤ m. If f is differentiable at a point x0 in D, then each of the
partial derivatives (∂fi/∂xj)(x0) exists, 1 ≤ i ≤ m, 1 ≤ j ≤ n. Furthermore,
f ′(x0) = Jf (x0).
Note that the linear transformation L is defined by the Jacobian matrix of f at x0.
In particular, for m = 1, we have
L = f ′(x0) = ∇f(x0).
The following theorem gives a sufficient condition for differentiability of f.
THEOREM 14. (Sufficient condition for differentiability) Let D ⊂ Rn be an open
set and f : D → Rm be defined by
f(x) = (f1(x), . . . , fm(x)),
where fi : D → R, 1 ≤ i ≤ m. Suppose that each partial derivative ∂fi/∂xj exists and is
continuous on D, 1 ≤ i ≤ m, 1 ≤ j ≤ n. Then f is differentiable on D.
We shall conclude this lecture by stating some results on maxima and minima in the
case of a function of several variables. We restrict our discussion to functions of two
variables only.
DEFINITION 15. (Maxima and Minima) Let f(x, y) be a function of two variables. A
point (x0, y0) is a local minimum point for f if there is a disk Dδ(x0, y0) about (x0, y0)
such that
f(x, y) ≥ f(x0, y0) for all (x, y) ∈ Dδ(x0, y0).
The number f(x0, y0) is called a local minimum value.
Similarly, if there is a disk Dδ(x0, y0) about (x0, y0) such that
f(x, y) ≤ f(x0, y0) for all (x, y) ∈ Dδ(x0, y0),
then the point (x0, y0) is a local maximum point for f.
A point which is either a local maximum or minimum point is called a local extremum.
The following is the analog in two variables of the first derivative test for one variable.
First Derivative Test:
If (x0, y0) is a local extremum of f and the partial derivatives of f exist at (x0, y0), then
fx(x0, y0) = fy(x0, y0) = 0.
Such a point (x0, y0) is also called a critical point of f.
Second Derivative Test:
Let f(x, y) have continuous second-order partial derivatives, and suppose that (x0, y0) is
a critical point for f . Then
fx(x0, y0) = 0 and fy(x0, y0) = 0.
Let A = fxx(x0, y0), B = fxy(x0, y0), and C = fyy(x0, y0). Then the following statements
are true.
(a) If A > 0, AC −B2 > 0 then (x0, y0) is a local minimum.
(b) If A < 0, AC −B2 > 0 then (x0, y0) is a local maximum.
(c) If AC −B2 < 0 then (x0, y0) is a saddle point.
(d) If AC −B2 = 0 then the test is inconclusive.
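The second derivative test is mechanical once A, B, C are known; the case analysis above can be encoded directly. A sketch, using the standard examples x^2 + y^2 and x^2 − y^2 at the critical point (0, 0):

```python
def classify(A, B, C):
    # A = f_xx, B = f_xy, C = f_yy evaluated at a critical point (x0, y0)
    D = A * C - B**2
    if D > 0:
        return "local minimum" if A > 0 else "local maximum"
    if D < 0:
        return "saddle point"
    return "inconclusive"

# f(x, y) = x^2 + y^2 at (0, 0): A = 2, B = 0, C = 2
print(classify(2, 0, 2))
# f(x, y) = x^2 - y^2 at (0, 0): A = 2, B = 0, C = -2
print(classify(2, 0, -2))
```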
Practice Problems
1. Show that lim_{(x,y)→(0,0)} (∂/∂x)√(x^2 + y^2) does not exist.
2. Using the ϵ-δ definition, prove that f(x, y) = |x| is continuous at (0, 0).
3. Let
f(x, y) = { xy(x^2 − y^2)/(x^2 + y^2), (x, y) ≠ (0, 0),
          { 0,                          (x, y) = (0, 0).
(a) If (x, y) ≠ (0, 0), compute fx and fy.
(b) What is the value of f(x, 0) and f(0, y)?
(c) Show that fx(0, 0) = 0 = fy(0, 0).
(d) Show that fyx(0, 0) = 1 and fxy(0, 0) = −1.
(e) What went wrong? Why are the mixed partials not equal?
4. Find the derivative of the function f : R2 → R2 defined by
f(x, y) = (x2 + xy, x− y2).
5. Find the maxima, minima and saddle points of f(x, y) = (x^2 − y^2) e^{(−x^2−y^2)/2}.
Lecture 2 Essential Ordinary Differential Equations
In this lecture, we recall some methods of solving first-order IVPs for ODEs (separable
and linear) and homogeneous second-order linear ODEs with constant coefficients. These
results will be useful while solving linear homogeneous PDEs using the variables separable
method in the subsequent modules (cf. Module 5, Module 6 and Module 7). Some
fundamental results on existence, uniqueness and continuous dependence of solutions on
given data will also be discussed.
First-Order ODEs: A first-order ODE is separable if it can be written in the form
f(y) dy/dx = g(x), (1)
where y is an unknown function of the independent variable x.
Integrate both sides (if possible) with respect to x to obtain
∫ f(y) (dy/dx) dx = ∫ g(x) dx
=⇒ ∫ f(y) dy = ∫ g(x) dx,
from which we find the solutions y(x).
Consider a first-order linear nonhomogeneous ODE in the standard form
y′(x) + p(x)y(x) = q(x). (2)
When q(x) = 0, the resulting equation
y′(x) + p(x)y(x) = 0
is called the homogeneous equation, which can be put in the separable form
dy/y = −p(x) dx, (y ≠ 0).
Its solution is thus given by
yh(x) = C exp(−∫ p(x) dx),
where C is an arbitrary constant. To solve (2), an integrating factor is given by
µ(x) = exp(∫ p(x) dx).
The general solution of (2) is given by
y(x) = (1/µ(x)) [∫ µ(x)q(x) dx + C1]. (3)
EXAMPLE 1. Solve (1 + x^2)y′ + 2xy = 5x^4.
Solution. Putting the equation into the standard form (2), we have
y′ + [2x/(1 + x^2)] y = 5x^4/(1 + x^2). (4)
Here, p(x) = 2x/(1 + x^2) and q(x) = 5x^4/(1 + x^2). An integrating factor is
µ(x) = exp[∫ 2x/(1 + x^2) dx] = exp[log(1 + x^2)] = 1 + x^2.
Multiplying both sides of (4) by µ(x), we obtain
d/dx [µ(x)y(x)] = µ(x)q(x) = 5x^4.
Integrating both sides of the above equation, we have
µ(x)y(x) = x^5 + C
=⇒ y(x) = (x^5 + C)/(1 + x^2).
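The solution of Example 1 can be checked by substituting it back into the original equation; the residual (1 + x^2)y′ + 2xy − 5x^4 should vanish for every C. A quick numerical check (the evaluation points are arbitrary choices):

```python
def y_exact(x, C=0.0):
    # general solution y = (x^5 + C)/(1 + x^2) from Example 1
    return (x**5 + C) / (1 + x**2)

def residual(x, C=0.0, h=1e-6):
    # substitute y back into (1 + x^2) y' + 2 x y - 5 x^4
    dy = (y_exact(x + h, C) - y_exact(x - h, C)) / (2 * h)
    return (1 + x**2) * dy + 2 * x * y_exact(x, C) - 5 * x**4

print(residual(1.3), residual(2.0, C=7.0))   # both ≈ 0
```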
Second-Order Linear ODEs with Constant Coefficients: We recall some basic
results for the homogeneous second-order linear ODE of the form
a y′′(x) + b y′(x) + c y(x) = 0, (5)
where the coefficients a, b, and c are real constants with a ≠ 0. Let m1 and m2 be the
roots of the associated auxiliary equation
a m^2 + b m + c = 0.
• If m1 and m2 are real and distinct (b^2 − 4ac > 0), then the general solution of (5) is
y(x) = c1 e^{m1 x} + c2 e^{m2 x}.
• If m1 = m2 = m (b^2 − 4ac = 0), then the general solution of (5) is
y(x) = c3 e^{mx} + c4 x e^{mx}.
• If m1 = α + iβ and m2 = α − iβ (b^2 − 4ac < 0), then the general solution of (5) is
y(x) = e^{αx}[c5 cos(βx) + c6 sin(βx)].
Here, ci, i = 1, 2, 3, 4, 5, 6, are arbitrary constants.
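The three cases above amount to a sign check on the discriminant of the auxiliary equation. A sketch that builds the general solution as a string (the string formatting is our own choice):

```python
import cmath

def general_solution(a, b, c):
    # roots of the auxiliary equation a*m^2 + b*m + c = 0
    disc = b * b - 4 * a * c
    m1 = (-b + cmath.sqrt(disc)) / (2 * a)
    m2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:                         # real, distinct roots
        return f"c1*exp({m1.real}*x) + c2*exp({m2.real}*x)"
    if disc == 0:                        # repeated real root
        return f"(c1 + c2*x)*exp({m1.real}*x)"
    alpha, beta = m1.real, abs(m1.imag)  # complex pair alpha ± i*beta
    return f"exp({alpha}*x)*(c1*cos({beta}*x) + c2*sin({beta}*x))"

print(general_solution(1, -3, 2))   # roots m = 2, 1
print(general_solution(1, 2, 1))    # repeated root m = -1
print(general_solution(1, 0, 4))    # roots m = ±2i
```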
On Existence and Uniqueness of IVP: Consider the following IVPs:
|y′|+ 2|y| = 0, y(0) = 1. (6)
y′(x) = x, y(0) = 1. (7)
xy′ = y − 1, y(0) = 1. (8)
Note that the IVP (6) has no solution, the problem (7) has precisely one solution, namely
y = x^2/2 + 1, and the problem (8) has infinitely many solutions, namely y = 1 + cx, where
c is an arbitrary constant. From the above three IVPs, we observe that an IVP
y′(x) = f(x, y), y(x0) = y0
may have none, precisely one, or more than one solution. This leads to the following
fundamental results.
THEOREM 2. (Existence) Let R : |x − x0| < a, |y − y0| < b be a rectangle. If f(x, y)
is continuous and bounded in R, i.e., there is a number K such that
|f(x, y)| ≤ K ∀ (x, y) ∈ R,
then the IVP
y′(x) = f(x, y), y(x0) = y0 (9)
has at least one solution y(x). This solution is defined for all x in the interval
|x − x0| < α, where α = min{a, b/K}.
THEOREM 3. (Uniqueness) Let R : |x − x0| < a, |y − y0| < b be a rectangle. If f(x, y)
and ∂f/∂y are continuous and bounded in R, i.e., there exist two numbers K and M such that
|f(x, y)| ≤ K ∀ (x, y) ∈ R, (10)
|∂f/∂y| ≤ M ∀ (x, y) ∈ R, (11)
then the IVP (9) has a unique solution y(x). This solution is defined for all x in the
interval
|x − x0| < α, where α = min{a, b/K}.
EXAMPLE 4. Let R : |x| < 5, |y| < 3 be the rectangle. Consider the IVP
y′ = 1 + y^2, y(0) = 0
over R. Here, a = 5, b = 3. Then
max_{(x,y)∈R} |f(x, y)| = max_{(x,y)∈R} |1 + y^2| ≤ 10 (= K),
max_{(x,y)∈R} |∂f/∂y| = max_{(x,y)∈R} 2|y| ≤ 6 (= M),
α = min{a, b/K} = min{5, 3/10} = 0.3 < 5.
Note that the solution of the IVP is y = tan x. This solution is valid in the interval
|x| < 0.3 instead of in the entire interval |x| < 5. It is easy to check that the solution
y = tan x is discontinuous at x = ±π/2, and hence there is no continuous solution valid
in the entire interval |x| < 5.
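The guaranteed interval |x| < 0.3 can be probed numerically: an Euler integration of y′ = 1 + y^2 from y(0) = 0 tracks tan x closely on that interval. A sketch (the step size is an arbitrary choice):

```python
import math

# Euler integration of y' = 1 + y^2, y(0) = 0; the exact solution is tan(x)
x, y, dt = 0.0, 0.0, 1e-4
while x < 0.3:
    y += dt * (1 + y * y)
    x += dt

print(y, math.tan(0.3))   # the two values are close on |x| < alpha = 0.3
```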
The conditions in Theorem 3 are sufficient rather than necessary, and can be weakened.
By the mean value theorem of differential calculus, we have
f(x, y2) − f(x, y1) = (y2 − y1) (∂f/∂y)(x, η),
where (x, y1), (x, y2) ∈ R and η lies between y1 and y2. In view of the condition
|∂f/∂y| ≤ M ∀ (x, y) ∈ R,
it follows that
|f(x, y2) − f(x, y1)| ≤ M |y2 − y1|, (12)
which is known as a Lipschitz condition. Thus, the condition (11) can be weakened to
obtain the following existence and uniqueness result.
THEOREM 5. (Picard’s Theorem) Let R : |x − x0| < a, |y − y0| < b be a rectangle.
Let f(x, y) be continuous and bounded in R, i.e., there exists a number K such that
|f(x, y)| ≤ K ∀ (x, y) ∈ R.
Further, let f satisfy the Lipschitz condition with respect to y in R, i.e., there exists a
number M such that
|f(x, y2) − f(x, y1)| ≤ M |y2 − y1| ∀ (x, y1), (x, y2) ∈ R. (13)
Then the IVP (9) has a unique solution y(x). This solution is defined for all x in the
interval
|x − x0| < α, where α = min{a, b/K}.
Note that the continuity of f alone is not enough to guarantee the uniqueness of the
solution, as can be seen from the following example.
EXAMPLE 6. (Nonuniqueness) Consider the IVP
y′ = √|y|, y(0) = 0.
Note that f(x, y) = √|y| is continuous for all y. However,
y ≡ 0 and y = { x^2/4, x ≥ 0; −x^2/4, x < 0 }
are two solutions of the given IVP. Uniqueness fails because the Lipschitz condition
(13) is violated in any region which includes the line y = 0. With y1 = 0 and y2 > 0, we
note that
|f(x, y2) − f(x, y1)| / |y2 − y1| = |f(x, y2) − f(x, 0)| / |y2| = √y2 / y2 = 1/√y2,
which can be made arbitrarily large by letting y2 → 0. Thus, it is not possible to find a
fixed constant M such that the condition (13) holds, and hence the IVP has a solution
but it is not unique.
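Both claimed solutions can be verified pointwise: each satisfies y′ = √|y| on a grid. A small sketch (the grid is an arbitrary choice):

```python
def rhs(x, y):
    return abs(y) ** 0.5   # f(x, y) = sqrt(|y|)

def solves(yfun, dyfun, xs, tol=1e-9):
    # check y'(x) = sqrt(|y(x)|) at each grid point
    return all(abs(dyfun(x) - rhs(x, yfun(x))) < tol for x in xs)

xs = [0.1 * k for k in range(1, 20)]                 # grid in x > 0
zero = (lambda x: 0.0, lambda x: 0.0)                # y ≡ 0 and its derivative
parab = (lambda x: x * x / 4.0, lambda x: x / 2.0)   # y = x^2/4 (x ≥ 0) and y'

print(solves(*zero, xs), solves(*parab, xs))   # True True: two distinct solutions
```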
Next, we generalize the above result to a system of n first-order ordinary differential
equations in n unknowns of the form
dyi(x)/dx = fi(x, y1, . . . , yn), i = 1, . . . , n, (14)
satisfying the initial conditions
y1(x0) = y1^0, . . . , yn(x0) = yn^0, (15)
where y1^0, . . . , yn^0 are the given initial values.
The fundamental result concerning the existence and uniqueness of solution of the
system (14)-(15) is essentially the same as Theorem 5.
THEOREM 7. Let Q be a box in R^{n+1} defined by
Q : |x − x0| < a, |y1 − y1^0| < b1, . . . , |yn − yn^0| < bn.
Let each of the functions f1, . . . , fn be continuous and bounded in Q, and satisfy the
following Lipschitz condition with respect to the variables y1, y2, . . . , yn: there exist
constants L1, . . . , Ln such that
|fi(x, y1^1, . . . , yn^1) − fi(x, y1^2, . . . , yn^2)| ≤ L1 |y1^1 − y1^2| + · · · + Ln |yn^1 − yn^2|, i = 1, . . . , n,
for all pairs of points (x, y1^1, . . . , yn^1), (x, y1^2, . . . , yn^2) ∈ Q. Then there exists a unique set of
functions y1(x), . . . , yn(x), defined for x in some interval |x − x0| < h, 0 < h < a, such
that y1(x), . . . , yn(x) solve (14)-(15).
Practice Problems
1. Determine whether the given differential equation is separable.
(a) dy/dx = y e^{x+y}/(x^2 + y); (b) x dy/dx = 1 + y^2; (c) dy/dx = sin(x + y).
2. Solve the following first-order linear equations subject to the given conditions:
(a) dy/dx + y/x = 1, y(2) = 1; (b) 4 dy/dx + 3xy = 5, y(0) = 1;
(c) sin x dy/dx + y cos x = x sin x, y(π/2) = 2.
3. Find the general solution of the following second-order homogeneous linear ODEs.
(a) d^2y/dx^2 + 4 dy/dx + 5y = 0; (b) d^2y/dx^2 + dy/dx = 0; (c) d^2y/dx^2 − 2 dy/dx + 4y = 0.
4. Does f(x, y) = |x| + |y| satisfy a Lipschitz condition in the xy-plane? Does ∂f/∂y
exist?
5. Find all solutions of the IVP dy/dx = 2√y, y(1) = 0.
Lecture 3 Surfaces and Integral Curves
In Lecture 3, we recall some geometrical concepts that are essential for understanding
the nature of solutions of partial differential equations to be discussed in the subsequent
lectures.
Surface: A surface is the locus of a point moving in space with two degrees of freedom.
Generally, we use implicit and explicit representations for describing such a locus by
mathematical formulas.
In the implicit representation we describe a surface as a set
S = {(x, y, z) | F(x, y, z) = 0},
i.e., a set of points (x, y, z) satisfying an equation of the form F(x, y, z) = 0.
Sometimes we can solve such an equation for one of the coordinates in terms of the
other two, say for z in terms of x and y. When this is possible we obtain an explicit
representation of the form z = f(x, y).
EXAMPLE 1. A sphere of radius 1 and center at the origin has the implicit representation
x2 + y2 + z2 − 1 = 0.
When this equation is solved for z, it leads to two solutions:
z = √(1 − x^2 − y^2) and z = −√(1 − x^2 − y^2).
The first equation gives an explicit representation of the upper hemisphere and the
second of the lower hemisphere.
We now describe here a class of surfaces more general than surfaces obtained as graphs
of functions. For simplicity, we restrict the discussion to the case of three dimensions.
Let Ω ⊂ R^3 and let F(x, y, z) ∈ C^1(Ω), where C^1(Ω) := {F(x, y, z) ∈ C(Ω) :
Fx, Fy, Fz ∈ C(Ω)}. We know that the gradient of F, denoted by ∇F, is a vector-valued
function defined by
∇F = (∂F/∂x, ∂F/∂y, ∂F/∂z).
One can visualize ∇F as a field of vectors (a vector field), with one vector, ∇F, emanating
from each point (x, y, z) ∈ Ω. Assume that
∇F(x, y, z) ≠ (0, 0, 0) ∀ (x, y, z) ∈ Ω. (1)
This means that the partial derivatives of F do not vanish simultaneously at any point of
Ω.
DEFINITION 2. (Level surface) The set
Sc = {(x, y, z) ∈ Ω | F(x, y, z) = c},
for some appropriate value of the constant c, is a surface in Ω. This surface is called a
level surface of F.
Note: When Ω ⊂ R^2, the set Sc = {(x, y) ∈ Ω | F(x, y) = c} is called a level curve in Ω.
Let (x0, y0, z0) ∈ Ω and set c = F (x0, y0, z0). The equation
F (x, y, z) = c (2)
represents a surface in Ω passing through the point (x0, y0, z0). For different values of c,
(2) represents different surfaces in Ω. Each point of Ω lies on exactly one level surface of
F . Any two points (x0, y0, z0) and (x1, y1, z1) of Ω lie on the same level surface if and only
if
F (x0, y0, z0) = F (x1, y1, z1).
Thus, one may visualize Ω as being laminated by the level surfaces of F. The equation
(2) represents a one-parameter family of surfaces in Ω.
EXAMPLE 3. Take Ω = R^3 \ {(0, 0, 0)} and let F(x, y, z) = x^2 + y^2 + z^2. Then
∇F(x, y, z) = (2x, 2y, 2z).
Note that the condition (1) is satisfied ∀(x, y, z) ∈ Ω. The level surfaces of F are spheres
with center at the origin.
EXAMPLE 4. Take Ω = R^3 and let F(x, y, z) = z. Then ∇F(x, y, z) = (0, 0, 1). The
condition (1) is satisfied at every point of Ω. The level surfaces are planes parallel to the
(x, y)-plane.
Consider the surface given by the equation (2) and let the point (x0, y0, z0) lie on this
surface. We now ask the following question: Is it possible to describe Sc by an equation
of the form
z = f(x, y), (3)
so that Sc is the graph of f? This is equivalent to asking whether it is possible to solve
(2) for z in terms of x and y. An answer to this question is contained in the following
theorem.
THEOREM 5. (Implicit Function Theorem)
If F is defined within a sphere containing the point (x0, y0, z0), where F(x0, y0, z0) = 0,
Fz(x0, y0, z0) ≠ 0, and Fx, Fy and Fz are continuous inside the sphere, then the equation
F(x, y, z) = 0 defines z = f(x, y) near the point (x0, y0, z0).
EXAMPLE 6. Consider the unit sphere
x2 + y2 + z2 = 1. (4)
Note that the point (0, 0, 1) lies on this surface and Fz(0, 0, 1) = 2. By the implicit
function theorem, we can solve (4) for z near the point (0, 0, 1). In fact, we have
z = +√(1 − x^2 − y^2), x^2 + y^2 < 1. (5)
In the upper half space z > 0, (4) and (5) describe the same surface.
The point (0, 0, −1) also lies on the surface (4), and Fz(0, 0, −1) = −2. Near (0, 0, −1),
we have
z = −√(1 − x^2 − y^2), x^2 + y^2 < 1. (6)
In the lower half space z < 0, (4) and (6) represent the same surface.
On the other hand, at the point (1, 0, 0), we have Fz(1, 0, 0) = 0. Clearly, it is not
possible to solve (4) for z in terms of x and y near this point.
Note that the set of points satisfying the equations
F (x, y, z) = c1, G(x, y, z) = c2 (7)
must lie on the intersection of these two surfaces. If ∇F and ∇G are not collinear at any
point of the domain Ω where both F and G are defined, i.e.,
∇F(x, y, z) × ∇G(x, y, z) ≠ (0, 0, 0), (x, y, z) ∈ Ω, (8)
then the intersection of the two surfaces given by (7) is always a curve.
Recall that
∇F × ∇G = (∂(F, G)/∂(y, z), ∂(F, G)/∂(z, x), ∂(F, G)/∂(x, y)), (9)
where the Jacobian is defined by
∂(F, G)/∂(y, z) = (∂F/∂y)(∂G/∂z) − (∂F/∂z)(∂G/∂y).
The condition (8) means that at every point of Ω at least one of the Jacobians on the
right side of (9) is different from zero.
EXAMPLE 7. Let
F (x, y, z) = x2 + y2 − z, G(x, y, z) = z.
Note that ∇F = (2x, 2y,−1) and ∇G = (0, 0, 1). It is easy to see that if Ω = R3 with
the z-axis removed, then the condition (8) is satisfied in Ω. The pair of equations
x2 + y2 − z = 0, z = 1
represents a circle which is the intersection of the paraboloidal surface represented by the
first equation and the plane represented by the second equation.
Systems of Surfaces: A one-parameter system of surfaces is represented by an equation
of the form
f(x, y, z, c) = 0. (10)
Consider the system of surfaces described by the equation
f(x, y, z, c+ δc) = 0, (11)
corresponding to the slightly different value c+ δc.
Note that these two surfaces intersect in a curve whose equations are (10) and (11).
This curve may be considered to be the intersection of the surfaces
f(x, y, z, c) = 0, lim_{δc→0} [f(x, y, z, c + δc) − f(x, y, z, c)]/δc = 0.
The limiting curve, described by the set of equations
f(x, y, z, c) = 0, (∂/∂c) f(x, y, z, c) = 0, (12)
is called the characteristic curve (cf. [10]) of (10).
REMARK 8. Geometrically, the characteristic curve is the curve on the surface (10)
approached by the intersection of (10) and (11) as δc → 0. Note that as c varies, the
characteristic curves (12) trace out a surface whose equation is of the form
g(x, y, z) = 0.
DEFINITION 9. (Envelope of one-parameter system)
The surface determined by eliminating the parameter c between the equations
f(x, y, z, c) = 0, (∂/∂c) f(x, y, z, c) = 0
is called the envelope of the one-parameter system f(x, y, z, c) = 0.
EXAMPLE 10. Consider the equation
x^2 + y^2 + (z − c)^2 = 1.
This equation represents the family of spheres of unit radius with centers on the z-axis.
Set
f(x, y, z, c) = x^2 + y^2 + (z − c)^2 − 1.
Then ∂f/∂c = −2(z − c), so ∂f/∂c = 0 gives z = c. The set of equations
x^2 + y^2 + (z − c)^2 = 1, z = c
describes the characteristic curve of the surface. Eliminating the parameter c, the envelope
of this family is the cylinder
x^2 + y^2 = 1.
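The elimination can be checked numerically: every point of every characteristic circle lies on the cylinder x^2 + y^2 = 1, whatever the value of c. A sketch:

```python
import math

def characteristic_point(c, theta):
    # a point on the characteristic curve of the sphere centered at (0, 0, c):
    # x^2 + y^2 + (z - c)^2 = 1 together with z = c
    return math.cos(theta), math.sin(theta), c

# each characteristic point lies on the envelope x^2 + y^2 = 1
for c in (-2.0, 0.0, 3.5):
    x, y, z = characteristic_point(c, 0.7)
    print(round(x * x + y * y, 12), z == c)
```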
Now consider the two-parameter system of surfaces defined by the equation
f(x, y, z, c, d) = 0, (13)
where c and d are parameters.
In a similar way, for the two-parameter system the set of equations
f(x, y, z, c, d) = 0, (∂/∂c) f(x, y, z, c, d) = 0, (∂/∂d) f(x, y, z, c, d) = 0
defines a point. This point is called the characteristic point of the two-parameter system
(13). As the parameters c and d vary, this point generates a surface which is called the
envelope of the surfaces (13).
DEFINITION 11. (Envelope of two-parameter system)
The surface obtained by eliminating c and d from the equations
f(x, y, z, c, d) = 0, (∂/∂c) f(x, y, z, c, d) = 0, (∂/∂d) f(x, y, z, c, d) = 0
is called the envelope of the two-parameter system f(x, y, z, c, d) = 0.
EXAMPLE 12. Consider the equation
(x − c)^2 + (y − d)^2 + z^2 = 1,
where c and d are parameters. Setting ∂f/∂c = 0 and ∂f/∂d = 0 gives x − c = 0 and
y − d = 0, so the characteristic points satisfy
(x − c)^2 + (y − d)^2 + z^2 = 1, x − c = 0, y − d = 0.
The characteristic points of the two-parameter system are therefore (c, d, ±1). Eliminating
c and d, the envelope is the pair of parallel planes z = ±1.
Integral Curves of Vector Fields: Let V(x, y, z) = (P(x, y, z), Q(x, y, z), R(x, y, z))
be a vector field defined in some domain Ω ⊂ R^3 satisfying the following two conditions:
• V ≠ 0 in Ω, i.e., the component functions P, Q and R of V do not vanish simultaneously
at any point of Ω;
• P, Q, R ∈ C^1(Ω).
DEFINITION 13. A curve C in Ω is an integral curve of the vector field V if V is tangent
to C at each of its points.
EXAMPLE 14. 1. The integral curves of the constant vector field V = (1, 0, 0) are lines
parallel to the x-axis (see Fig. 1.1).
2. The integral curves of V = (y,−x, 0) are circles parallel to the (x, y)-plane and
centered on the z-axis (see Fig. 1.1).
Figure 1.1: Integral curves of V = (1, 0, 0) and V = (y,−x, 0)
REMARK 15. In physics, if V is a force field, the integral curves of V are called lines
of force. If V is the velocity field of a fluid flow, the integral curves of V are called lines
of flow; these are the paths of motion of the fluid particles.
With V = (P, Q, R), associate the system of ODEs:
dx/dt = P(x, y, z), dy/dt = Q(x, y, z), dz/dt = R(x, y, z). (14)
A solution (x(t), y(t), z(t)) of the system (14), defined for t in some interval I, may be
regarded as a curve in Ω. We call this curve a solution curve of the system (14). Every
solution curve of the system (14) is an integral curve of the vector field V. Conversely, if
C is an integral curve of V , then there is a parametric representation
x = x(t), y = y(t), z = z(t); t ∈ I,
of C such that (x(t), y(t), z(t)) is a solution of the system of equations (14). Thus, every
integral curve of V , if parametrized appropriately, is a solution curve of the associated
system of equations (14).
It is customary to write the system (14) in the form
dx/P = dy/Q = dz/R. (15)
EXAMPLE 16. The systems associated with the vector fields V = (x, y, z) and V =
(y, −x, 0), respectively, are
dx/x = dy/y = dz/z, (16)
dx/y = dy/(−x) = dz/0. (17)
Note that the zero which appears in the denominator of (17) should not be disturbing. It
simply means that dz/dx = dz/dy = dz/dt = 0.
Before we discuss the method of solutions of (15), let us introduce some basic defini-
tions and facts (cf. [11]).
DEFINITION 17. Two functions ϕ(x, y, z), ψ(x, y, z) ∈ C^1(Ω) are functionally independent
in Ω ⊂ R^3 if
∇ϕ(x, y, z) × ∇ψ(x, y, z) ≠ (0, 0, 0), (x, y, z) ∈ Ω. (18)
Geometrically, condition (18) means that ∇ϕ and ∇ψ are not parallel at any point of
Ω.
DEFINITION 18. A function ϕ ∈ C^1(Ω) is called a first integral of the vector field V =
(P, Q, R) (or of its associated system (15)) in Ω if at each point of Ω, V is orthogonal to
∇ϕ, i.e.,
V · ∇ϕ = 0
=⇒ P ∂ϕ/∂x + Q ∂ϕ/∂y + R ∂ϕ/∂z = 0 in Ω.
THEOREM 19. Let ϕ1 and ϕ2 be any two functionally independent first integrals of V in
Ω. Then the equations
ϕ1(x, y, z) = c1, ϕ2(x, y, z) = c2 (19)
describe the collection of all integral curves of V in Ω.
If ϕ(x, y, z) is a first integral of V and f(ϕ) is a C^1 function of the single variable ϕ,
then w(x, y, z) = f(ϕ(x, y, z)) is also a first integral of V. This follows from the fact that
P ∂w/∂x + Q ∂w/∂y + R ∂w/∂z = P f′ ∂ϕ/∂x + Q f′ ∂ϕ/∂y + R f′ ∂ϕ/∂z
= f′ (P ∂ϕ/∂x + Q ∂ϕ/∂y + R ∂ϕ/∂z) = 0.
Similarly, if f(u, v) is a C^1 function of two variables and ϕ1(x, y, z), ϕ2(x, y, z) are
any two first integrals of V, then w(x, y, z) = f(ϕ1(x, y, z), ϕ2(x, y, z)) is also a first
integral of V.
EXAMPLE 20. Let V = (1, 0, 0) be a vector field and let Ω = R3. A first integral of V is
a solution of the equation
ϕx = 0.
Any function of y and z only is a solution of this equation. For example,
ϕ1 = y, ϕ2 = z
are two solutions which are functionally independent. The integral curves of V are de-
scribed by the equations
y = c1, z = c2,
and are straight lines parallel to the x-axis.
EXAMPLE 21. Let V = (y,−x, 0) be a vector field and let Ω = R3\ z-axis. A first integral
of V is a solution of the equation
yϕx − xϕy = 0.
It is easy to verify that
ϕ1(x, y, z) = x2 + y2, ϕ2(x, y, z) = z (20)
are two functionally independent first integrals of V. Therefore, the integral curves of V in Ω are given by
x2 + y2 = c1, z = c2. (21)
The above equations describe circles parallel to the (x, y)-plane and centered on the z-axis
(see the second figure of Fig 1.1).
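The verification in this example can also be carried out symbolically. The following Python snippet (a quick check of ours using the sympy library, not part of the original computation) confirms that ϕ1 and ϕ2 are first integrals of V = (y, −x, 0) and that their gradients are nowhere parallel away from the z-axis:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
V = sp.Matrix([y, -x, 0])            # the vector field of Example 21
phi1 = x**2 + y**2                   # candidate first integrals
phi2 = z

g1 = sp.Matrix([sp.diff(phi1, v) for v in (x, y, z)])
g2 = sp.Matrix([sp.diff(phi2, v) for v in (x, y, z)])

# first-integral condition: V . grad(phi) = 0
assert sp.simplify(V.dot(g1)) == 0
assert sp.simplify(V.dot(g2)) == 0

# functional independence: grad(phi1) x grad(phi2) is nonzero off the z-axis
cross = g1.cross(g2)
print(list(cross))                   # [2*y, -2*x, 0]
```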
Practice Problems 3
1. Find a vector V(x, y, z) normal to the surface z = √(x^2 + y^2) + (x^2 + y^2)^{3/2}.
2. If ∇f(x, y, z) is always parallel to the vector (x, y, z), show that f must assume equal
values at the points (0, 0, a) and (0, 0,−a).
3. Find ∇F, where F(x, y, z) = z^2 − x^2 − y^2. What is the largest set in which ∇F does not vanish?
4. Find a vector normal to the surface z2 − x2 − y2 = 0 at the point (1, 0, 1).
5. If possible, solve the equation z^2 − x^2 − y^2 = 0 for z in terms of x, y near the following points: (a) (1, 1, √2); (b) (1, 1, −√2); (c) (0, 0, 0).
6. Find the integral curves of the following vector fields: (a) V = (x, 0,−z), (b) V =
(x2,−y3, 0), (c) V = (2, 3y2, 0).
7. Let u(x, y, z) be a first integral of V and let C be an integral curve of V given by
x = x(t), y = y(t), z = z(t); t ∈ I.
Show that C must lie on some level surface of u. [Hint: Compute d/dt u(x(t), y(t), z(t)).]
8. Let V be the vector field given by V = (x, y, z) and let Ω be the octant x > 0, y > 0, z > 0. Show that u1(x, y, z) = y/x and u2(x, y, z) = z/x are functionally independent first integrals of V in Ω.
Lecture 4 Solving Equations dx/P = dy/Q = dz/R
In the previous lecture, we have seen that the integral curves of the set of differential equations

    dx/P = dy/Q = dz/R    (1)
form a two-parameter family of curves in three-dimensional space. If we can derive two
relations of the form
u1(x, y, z) = c1, u2(x, y, z) = c2, (2)
then varying c1 and c2 we get a two-parameter family of curves satisfying (1). In this
lecture, we shall describe methods for finding integral curves of the set of differential
equations (1).
Method I: Along any tangential direction through a point (x, y, z) on the surface u1(x, y, z) = c1, we have

    ∂u1/∂x dx + ∂u1/∂y dy + ∂u1/∂z dz = 0.    (3)
If u1(x, y, z) = c1 is a suitable one-parameter system of surfaces, then the tangential direction to the integral curve through the point (x, y, z) is also a tangential direction to this surface. Hence

    (P, Q, R) · ∇u1 = P ∂u1/∂x + Q ∂u1/∂y + R ∂u1/∂z = 0.
To find u1, choose functions P1, Q1 and R1 such that

    (P, Q, R) · (P1, Q1, R1) = PP1 + QQ1 + RR1 = 0,    (4)

and such that there exists a function u1 with

    P1 = ∂u1/∂x,  Q1 = ∂u1/∂y,  R1 = ∂u1/∂z.

This leads to the equation

    du1 = P1 dx + Q1 dy + R1 dz,    (5)

which is an exact differential.
REMARK 1. The method described above for finding solutions of (1) is by inspection. A
good deal of intuition is required to determine the forms of the functions P1, Q1 and R1
(cf. [10]).
EXAMPLE 2. Find the integral curves of the equations

    dx/(y(x + y)) = dy/(x(x + y)) = dz/(z(x + y)).    (6)
Solution. Comparing with (1), we find that
P = y(x+ y), Q = x(x+ y), R = z(x+ y).
If we choose

    P1 = 1/z,  Q1 = 1/z,  R1 = −(x + y)/z^2,

then the condition PP1 + QQ1 + RR1 = 0 is satisfied. The function u1(x, y, z) is then determined from the exact differential

    du1 = (1/z) dx + (1/z) dy − ((x + y)/z^2) dz = d((x + y)/z),

which gives

    (x + y)/z = c.
Similarly, choosing P1 = x, Q1 = −y and R1 = 0, one verifies that condition (4) is satisfied, and the function u2 is then determined as

    u2(x, y, z) = (1/2)(x^2 − y^2).

Thus, the integral curves of the differential equations (6) are the members of the two-parameter family of curves

    x + y = c1 z,  x^2 − y^2 = c2.
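As a sanity check (ours, not part of the original text), one can confirm with sympy that u1 = (x + y)/z and u2 = (x^2 − y^2)/2 are indeed first integrals of (6), i.e., that P u_x + Q u_y + R u_z vanishes identically for both:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P, Q, R = y*(x + y), x*(x + y), z*(x + y)    # coefficients of system (6)

for u in ((x + y)/z, (x**2 - y**2)/2):
    expr = P*sp.diff(u, x) + Q*sp.diff(u, y) + R*sp.diff(u, z)
    assert sp.simplify(expr) == 0            # u is constant along the integral curves
```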
Method II: Suppose that we can find three functions P1, Q1 and R1 such that the ratio

    (P1 dx + Q1 dy + R1 dz)/(PP1 + QQ1 + RR1) = dW1    (7)

is an exact differential. Similarly, suppose we can find another three functions P2, Q2 and R2 such that

    (P2 dx + Q2 dy + R2 dz)/(PP2 + QQ2 + RR2) = dW2    (8)
is also an exact differential. Since each of the ratios

    (P1 dx + Q1 dy + R1 dz)/(PP1 + QQ1 + RR1) = (P2 dx + Q2 dy + R2 dz)/(PP2 + QQ2 + RR2)

equals the common ratio dx/P = dy/Q = dz/R,
it now follows that
dW1 = dW2,
which yields the relation
W1(x, y, z) =W2(x, y, z) + c1,
where c1 denotes an arbitrary constant.
EXAMPLE 3. Find the integral curves of the equations

    dx/(y − x) = dy/(x + y) = z dz/(x^2 + y^2).    (9)
Solution. Here P = y − x, Q = x + y and R = (x^2 + y^2)/z. Observe that P + Q = 2y. Now choose P1 = 1, Q1 = 1 and R1 = 0 to obtain

    (dx + dy)/(2y) = dy/(x + y)
    ⟹ (x + y)(dx + dy) = 2y dy
    ⟹ (1/2) d(x + y)^2 = 2y dy.

Integrating, we obtain

    u1(x, y, z) = (x + y)^2/2 − y^2 = c1.
Similarly, with P2 = x, Q2 = −y and R2 = z, we find that

    x dx − y dy + z dz = 0,

which has solution

    u2(x, y, z) = x^2 − y^2 + z^2 = c2.

The equations

    (x + y)^2/2 − y^2 = c1,  x^2 − y^2 + z^2 = c2

constitute the integral curves of (9).
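These two integrals can be checked the same way (a verification sketch of ours using sympy): each must be annihilated by P ∂/∂x + Q ∂/∂y + R ∂/∂z with the coefficients read off from (9).

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P, Q, R = y - x, x + y, (x**2 + y**2)/z      # coefficients read off from (9)

for u in ((x + y)**2/2 - y**2, x**2 - y**2 + z**2):
    expr = P*sp.diff(u, x) + Q*sp.diff(u, y) + R*sp.diff(u, z)
    assert sp.simplify(expr) == 0            # both are first integrals of (9)
```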
Method III: When one of the variables is absent from (1), we can derive the integral
curves in a simple way.
For the sake of definiteness, let P and Q be functions of x and y only. Then the equation

    dx/P = dy/Q

may be written as

    dy/dx = f(x, y),  where f(x, y) = Q/P.
Suppose this equation has a solution of the form

    ϕ(x, y, c1) = 0.    (10)

Solving (10) for x and substituting the value of x into the equation

    dy/Q = dz/R

leads to an equation of the form

    dy/dz = g(y, z, c1).    (11)

Let the solution of (11) be expressed as

    ψ(y, z, c1, c2) = 0.    (12)
EXAMPLE 4. Find the integral curves of the equations

    dx/x = dy/(y + x^2) = dz/(y + z).    (13)

Solution. The first two equations may be expressed as

    dy/dx − y/x = x ⟹ d/dx (y/x) = 1,

which has solution

    y = c1 x + x^2.
Using the first and third equations of (13), we note that

    dz/dx = y/x + z/x = c1 + x + z/x
    ⟹ d/dx (z/x) = c1/x + 1,

which has solution

    z = c1 x log x + c2 x + x^2.

Hence, the integral curves of the differential equations (13) are given by the equations

    y = c1 x + x^2,  z = c1 x log x + c2 x + x^2.
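Both integrations in this example can be double-checked symbolically; the snippet below (our check, using sympy, with x taken positive so that log x is defined) substitutes the two solution families back into the ODEs obtained from (13):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2', positive=True)
y = c1*x + x**2                      # solution of dy/dx = (y + x^2)/x
z = c1*x*sp.log(x) + c2*x + x**2     # solution of dz/dx = (y + z)/x

assert sp.simplify(sp.diff(y, x) - (y + x**2)/x) == 0
assert sp.simplify(sp.diff(z, x) - (y + z)/x) == 0
```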
Practice Problems 4
Find the integral curves of the following systems of ODEs:

1. dx/(y − z) = dy/(z − x) = dz/(x − y)

2. dx/z = dy/(xz) = dz/y

3. dx/(xz − y) = dy/(yz − x) = dz/(xy − z)

4. dx/(y + 3z) = dy/(z + 5x) = dz/(x + 7y)
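Candidate answers to these problems can be screened with a small helper that tests the defining condition P u_x + Q u_y + R u_z = 0. The sketch below is ours (using sympy); the sample integrals shown for problem 1 are easily verified by hand as well.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def is_first_integral(u, P, Q, R):
    """True if P*u_x + Q*u_y + R*u_z simplifies to zero."""
    return sp.simplify(P*sp.diff(u, x) + Q*sp.diff(u, y) + R*sp.diff(u, z)) == 0

# problem 1: dx/(y - z) = dy/(z - x) = dz/(x - y)
assert is_first_integral(x + y + z, y - z, z - x, x - y)
assert is_first_integral(x**2 + y**2 + z**2, y - z, z - x, x - y)
```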
Module 2: First-Order Partial Differential Equations
The mathematical formulations of many problems in science and engineering reduce to the study of first-order PDEs. For instance, first-order PDEs arise in gas flow problems, traffic flow problems, the phenomenon of shock waves, the motion of wave fronts, Hamilton-Jacobi theory, nonlinear continuum mechanics and quantum mechanics. It is therefore essential to study the theory of first-order PDEs and the nature of their solutions to analyze the related physical problems.
In Module 2, we shall study first-order linear, quasi-linear and nonlinear PDEs and methods of solving these equations. The important method of characteristics is explained for these equations, in which solving a PDE reduces to solving a system of ODEs along characteristic curves. Further, Charpit's method and Jacobi's method for nonlinear first-order PDEs are discussed.
This module consists of seven lectures. Lecture 1 introduces some basic concepts
of first-order PDEs such as formulation of PDEs, classification of first-order PDEs and
Cauchy’s problem for first-order PDEs. In Lecture 2, we study first-order linear PDEs
and the parametric form of solution of first-order PDEs. In Lecture 3, we study a first-
order quasi-linear PDE and discuss the method of characteristics for a first-order quasi-
linear PDE. Lecture 4 is devoted to nonlinear first-order PDEs and Cauchy’s method
of characteristics for finding solutions of these equations. Lecture 5 is focused on the
compatible system of equations and Charpit’s method for solving nonlinear equations. In
Lecture 6, we consider some special types of PDEs and the method of obtaining their general integrals. Finally, Jacobi's method for solving nonlinear PDEs is discussed in Lecture 7.
Lecture 1 First-Order Partial Differential Equations
A first-order PDE in two independent variables x, y and the dependent variable z can be written in the form

    f(x, y, z, ∂z/∂x, ∂z/∂y) = 0.    (1)

For convenience, we set

    p = ∂z/∂x,  q = ∂z/∂y.
Equation (1) then takes the form
f(x, y, z, p, q) = 0. (2)
The equations of the type (2) arise in many applications in geometry and physics. For
instance, consider the following geometrical problem.
EXAMPLE 1. Find all functions z(x, y) such that the tangent plane to the graph z = z(x, y) at any arbitrary point (x0, y0, z(x0, y0)) passes through the origin. Such functions are characterized by the PDE x zx + y zy − z = 0.
The equation of the tangent plane to the graph at (x0, y0, z(x0, y0)) is
zx(x0, y0)(x− x0) + zy(x0, y0)(y − y0)− (z − z(x0, y0)) = 0.
This plane passes through the origin (0, 0, 0) and hence, we must have
−zx(x0, y0)x0 − zy(x0, y0)y0 + z(x0, y0) = 0. (3)
For the equation (3) to hold for all (x0, y0) in the domain of z, z must satisfy
xzx + yzy − z = 0,
which is a first-order PDE.
EXAMPLE 2. The set of all spheres with centers on the z-axis is characterized by the
first-order PDE yp− xq = 0.
The equation
x2 + y2 + (z − c)2 = r2, (4)
where r and c are arbitrary constants, represents the set of all spheres whose centers lie
on the z-axis. Differentiating (4) with respect to x, we obtain
    2(x + (z − c) ∂z/∂x) = 2(x + (z − c)p) = 0.    (5)
Differentiate (4) with respect to y to have
y + (z − c)q = 0. (6)
Eliminating the arbitrary constant c from (5) and (6), we obtain the first-order PDE
yp− xq = 0. (7)
Equation (4), in some sense, characterizes the first-order PDE (7).
EXAMPLE 3. Consider all surfaces described by an equation of the form

    z = f(x^2 + y^2),    (8)

where f is an arbitrary differentiable function; these surfaces, too, are characterized by a first-order PDE.

Writing u = x^2 + y^2 and differentiating (8) with respect to x and y, it follows that

    p = 2x f′(u),  q = 2y f′(u),

where f′(u) = df/du. Eliminating f′(u) from the above two equations, we obtain the same first-order PDE as in (7).
REMARK 4. The function z described by each of the equations (4) and (8) is, in some sense, a solution of the PDE (7). Observe that, in Example 2, PDE (7) is formulated by eliminating arbitrary constants from (4), whereas in Example 3, PDE (7) is formed by eliminating an arbitrary function.
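The elimination in Example 2 can be reproduced mechanically. In the sketch below (our sympy computation, with p and q treated as plain symbols), c is solved for from the x-derivative relation and substituted into the y-derivative relation, leaving a multiple of yp − xq:

```python
import sympy as sp

x, y, z, c, r, p, q = sp.symbols('x y z c r p q')
F = x**2 + y**2 + (z - c)**2 - r**2       # the two-parameter family (4)

# total derivatives with respect to x and y, with z = z(x, y)
Fx = sp.diff(F, x) + p*sp.diff(F, z)      # 2x + 2(z - c)p
Fy = sp.diff(F, y) + q*sp.diff(F, z)      # 2y + 2(z - c)q

csol = sp.solve(Fx, c)[0]                 # c = z + x/p
pde = sp.simplify(Fy.subs(c, csol))       # proportional to y*p - x*q
assert sp.simplify(pde*p/2 - (y*p - x*q)) == 0
```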
1 Formation of first-order PDEs
The application of conservation principles often yields first-order PDEs. We have seen in the previous two examples that a first-order PDE can be formed either by eliminating the arbitrary constants or the arbitrary function involved. Below, we generalize the arguments of Example 2 and Example 3 to show how a first-order PDE can be formed.
Method I (Eliminating arbitrary constants): Consider a two-parameter family of surfaces described by the equation

    F(x, y, z, a, b) = 0,    (9)

where a and b are arbitrary constants. Equation (9) may be thought of as a generalization of the relation (4).
Differentiating (9) with respect to x and y, we obtain

    ∂F/∂x + p ∂F/∂z = 0,    (10)
    ∂F/∂y + q ∂F/∂z = 0.    (11)
Eliminate the constants a, b from equations (9), (10) and (11) to obtain a first-order PDE
of the form
f(x, y, z, p, q) = 0. (12)
This shows that a family of surfaces described by the relation (9) gives rise to a first-order
PDE (12).
Method II (Eliminating an arbitrary function): Now consider the generalization of Example 3. Let u(x, y, z) = c1 and v(x, y, z) = c2 be two known functions of x, y and z satisfying a relation of the form

    F(u, v) = 0,    (13)

where F is an arbitrary function of u and v. Differentiating (13) with respect to x and y leads to the equations

    Fu(ux + uz p) + Fv(vx + vz p) = 0,
    Fu(uy + uz q) + Fv(vy + vz q) = 0.

Eliminating Fu and Fv from the above two equations, we obtain

    p ∂(u, v)/∂(y, z) + q ∂(u, v)/∂(z, x) = ∂(u, v)/∂(x, y),    (14)

which is a first-order PDE of the form f(x, y, z, p, q) = 0. Here, ∂(u, v)/∂(x, y) = ux vy − uy vx.
2 Classification of first-order PDEs
We classify the equation (1) depending on the special form of the function f. If (1) is of the form

    a(x, y) ∂z/∂x + b(x, y) ∂z/∂y + c(x, y) z = d(x, y),

then it is called a linear first-order PDE. Note that the function f is linear in ∂z/∂x, ∂z/∂y and z, with all coefficients depending on the independent variables x and y only.
If (1) has the form

    a(x, y) ∂z/∂x + b(x, y) ∂z/∂y = c(x, y, z),

then it is called semilinear, because it is linear in the leading (highest-order) terms ∂z/∂x and ∂z/∂y. However, it need not be linear in z. Note that the coefficients of ∂z/∂x and ∂z/∂y are functions of the independent variables only.
If (1) has the form

    a(x, y, z) ∂z/∂x + b(x, y, z) ∂z/∂y = c(x, y, z),

then it is called a quasi-linear PDE. Here the function f is linear in the derivatives ∂z/∂x and ∂z/∂y, with the coefficients a, b and c depending on the independent variables x and y as well as on the unknown z. Note that linear and semilinear equations are special cases of quasi-linear equations.
Any equation that does not fit into one of these forms is called nonlinear.
EXAMPLE 5.

1. x zx + y zy = z (linear)

2. x zx + y zy = z^2 (semilinear)

3. zx + (x + y) zy = xy (linear)

4. z zx + zy = 0 (quasilinear)

5. x (zx)^2 + y (zy)^2 = 2 (nonlinear)
3 Cauchy’s problem or IVP for first-order PDEs
Recall the initial value problem for a first-order ODE, which asks for a solution of the equation that takes a given value at a given point of R. The IVP for a first-order PDE asks for a solution of (2) which has given values on a curve in R2. The conditions to be satisfied in the case of the IVP for a first-order PDE are formulated in the classic problem of Cauchy, which may be stated as follows:
Let C be a given curve in R2 described parametrically by the equations
x = x0(s), y = y0(s); s ∈ I, (15)
where x0(s), y0(s) are in C1(I). Let z0(s) be a given function in C1(I). The IVP or
Cauchy’s problem for first-order PDE
f(x, y, z, p, q) = 0 (16)
is to find a function u = u(x, y) with the following properties:
• u(x, y) and its partial derivatives with respect to x and y are continuous in a region
Ω of R2 containing the curve C.
• u = u(x, y) is a solution of (16) in Ω, i.e.,
f(x, y, u(x, y), ux(x, y), uy(x, y)) = 0 in Ω.
• On the curve C
u(x0(s), y0(s)) = z0(s), s ∈ I. (17)
The curve C is called the initial curve of the problem and the function z0(s) is called the
initial data. Equation (17) is called the initial condition of the problem.
NOTE: Geometrically, Cauchy's problem may be interpreted as follows: find a solution surface u = u(x, y) of (16) which passes through the curve C whose parametric equations are

    x = x0(s),  y = y0(s),  z = z0(s),    (18)

and at every point of which the direction (p, q, −1) of the normal is such that f(x, y, z, p, q) = 0.
The proof of the existence of a solution of (16) passing through a curve with equations (18) requires some additional assumptions on the function f and the nature of the curve C. We now state the classic theorem due to Kowalewski (cf. [10]).
THEOREM 6. (Kowalewski) If g(y) and all its derivatives are continuous for |y− y0| < δ,
if x0 is a given number and z0 = g(y0), q0 = g′(y0), and if f(x, y, z, q) and all its partial
derivatives are continuous in a region S defined by
|x− x0| < δ, |y − y0| < δ, |q − q0| < δ,
then there exists a unique function ϕ(x, y) such that:
(a) ϕ(x, y) and all its partial derivatives are continuous in a region
Ω : |x− x0| < δ1, |y − y0| < δ2;
(b) For all (x, y) in Ω, z = ϕ(x, y) is a solution of the equation

    ∂z/∂x = f(x, y, z, ∂z/∂y);

(c) For all values of y in the interval |y − y0| < δ1, ϕ(x0, y) = g(y).
We conclude this lecture by introducing different kinds of solutions of first-order PDE.
DEFINITION 7. (A complete solution or a complete integral) Any relation of the
form
F (x, y, z, a, b) = 0 (19)
which contains two arbitrary constants a and b and is a solution of a first-order PDE is
called a complete solution or a complete integral of that first-order PDE.
DEFINITION 8. (A general solution or a general integral) Any relation of the form
F (u, v) = 0
involving an arbitrary function F connecting two known functions u(x, y, z) and v(x, y, z)
and providing a solution of a first-order PDE is called a general solution or a general
integral of that first-order PDE.
It is possible to derive a general integral of the PDE once a complete integral is known. With b = ϕ(a), if we take any one-parameter subsystem

    F(x, y, z, a, ϕ(a)) = 0

of the system (19) and form its envelope, we obtain a solution of equation (16). When ϕ(a) is arbitrary, the solution obtained is called the general integral of (16) corresponding to the complete integral (19). When a definite ϕ(a) is used, we obtain a particular solution.
DEFINITION 9. (A singular integral) The envelope of the two-parameter system (19)
is also a solution of the equation (16). It is called the singular integral or singular solution
of the equation.
NOTE: The general solution of an equation of type (1) can be obtained by solving
systems of ODEs. This is not true for higher-order equations or for systems of first-order
equations.
Practice Problems
1. Classify whether the following PDE is linear, quasi-linear or nonlinear:
(a) z zx − 2xy zy = 0; (b) (zx)^2 + z zy = 2; (c) zx + 2 zy = 5z; (d) x zx + y zy = z^2.
2. Eliminate the arbitrary constants a and b from the following equations to form the
PDE:
(a) ax2 + by2 + z2 = 1; (b) z = (x2 + a)(y2 + b).
3. Show that z = f(xy), where f is an arbitrary differentiable function satisfies
xzx − yzy = 0,
and hence, verify that the functions sin(xy), cos(xy), log(xy) and exy are solutions.
4. Eliminate the arbitrary function f from the following and form the PDE:
(a) z = x + y + f(xy); (b) z = f(xy/z).
Lecture 2 Linear First-Order PDEs
The most general first-order linear PDE has the form
a(x, y)zx + b(x, y)zy + c(x, y)z = d(x, y), (1)
where a, b, c, and d are given functions of x and y. These functions are assumed to be
continuously differentiable. Rewriting (1) as
a(x, y)zx + b(x, y)zy = −c(x, y)z + d(x, y), (2)
we observe that the left hand side of (2), i.e.,
a(x, y)zx + b(x, y)zy = ∇z · (a, b)
is (essentially) a directional derivative of z(x, y) in the direction of the vector (a, b), wherever (a, b) is defined and nonzero. When a and b are constants, the vector (a, b) has a fixed direction and magnitude, but now the vector can change as its base point (x, y) varies.
Thus, (a, b) is a vector field on the plane.
The equations

    dx/dt = a(x, y),  dy/dt = b(x, y),    (3)

determine a family of curves x = x(t), y = y(t) whose tangent vector (dx/dt, dy/dt) coincides with the direction of the vector (a, b). Therefore, the derivative of z(x, y) along these curves becomes
    dz/dt = d/dt z(x(t), y(t)) = ∂z/∂x dx/dt + ∂z/∂y dy/dt
          = zx(x(t), y(t)) a(x(t), y(t)) + zy(x(t), y(t)) b(x(t), y(t))
          = −c(x(t), y(t)) z(x(t), y(t)) + d(x(t), y(t))
          = −c(t) z(t) + d(t),

where we have used the chain rule and (1). Thus, along these curves, z(t) = z(x(t), y(t)) satisfies the ODE

    z′(t) + c(t) z(t) = d(t).    (4)
Let µ(t) = exp[∫_0^t c(τ) dτ] be an integrating factor for (4). Then the solution is given by

    z(t) = (1/µ(t)) [∫_0^t µ(τ) d(τ) dτ + z(0)].    (5)
The approach described above to solve (1) by using the solutions of (3)-(4) is called the
method of characteristics. It is based on the geometric interpretation of the partial
differential equation (1).
NOTE: (i) The system of ODEs (3) is known as the characteristic equations for the PDE (1). The solution curves of the characteristic equations are the characteristic curves for (1).

(ii) Observe that µ(t) and d(t) depend only on the values of c(x, y) and d(x, y) along the characteristic curve x = x(t), y = y(t). Thus, equation (5) shows that the values z(t) of the solution z along the entire characteristic curve are completely determined once the value z(0) = z(x(0), y(0)) is prescribed.
(iii) Assuming certain smoothness conditions on the functions a, b, c, and d, the exis-
tence and uniqueness theory for ODEs guarantees a unique solution curve (x(t), y(t), z(t))
of (3)-(4) (i.e., a characteristic curve) passes through a given point (x0, y0, z0) in (x, y, z)-
space.
1 The method of characteristics for solving linear first-order IVP
In practice we are not interested in determining a general solution of the partial differential
equation (1) but rather a specific solution z = z(x, y) that passes through or contains a
given curve C. This problem is known as the initial value problem for (1). The method
of characteristics for solving the initial value problem for (1) proceeds as follows.
Let the initial curve C be given parametrically as:
x = x(s), y = y(s), z = z(s). (6)
for a given range of values of the parameter s. The curve may be of finite or infinite extent
and is required to have a continuous tangent vector at each point.
Every value of s fixes a point on C through which a unique characteristic curve passes
(see, Fig. 2.1). The family of characteristic curves determined by the points of C may be
parameterized as
x = x(s, t), y = y(s, t), z = z(s, t)
with t = 0 corresponding to the initial curve C. That is, we have
x(s, 0) = x(s), y(s, 0) = y(s), z(s, 0) = z(s).
In other words, we have the following:
Figure 2.1: Characteristic curves and construction of the integral surface
The functions x(s, t) and y(s, t) are the solutions of the characteristic system (for each fixed s)

    d/dt x(s, t) = a(x(s, t), y(s, t)),  d/dt y(s, t) = b(x(s, t), y(s, t))    (7)

with given initial values x(s, 0) and y(s, 0).
Suppose that
z(x(s, 0), y(s, 0)) = g(s), (8)
where g(s) is a given function. We obtain z(x(s, t), y(s, t)) as follows. Let

    z(s, t) = z(x(s, t), y(s, t)),  c(s, t) = c(x(s, t), y(s, t)),  d(s, t) = d(x(s, t), y(s, t))    (9)

and

    µ(s, t) = exp[∫_0^t c(s, τ) dτ].    (10)

Analogous to formula (5), for each fixed s, we obtain

    z(s, t) = (1/µ(s, t)) [∫_0^t µ(s, τ) d(s, τ) dτ + g(s)].    (11)
z(s, t) is the value of z at the point (x(s, t), y(s, t)). Thus, as s and t vary, the point
(x, y, z), in xyz-space, given by
x = x(s, t), y = y(s, t), z = z(s, t), (12)
traces out the surface of the graph of the solution z of the PDE (1) which meets the
initial curve (8). The equations (12) constitute the parametric form of the solution of (1)
satisfying the initial condition (8) [i.e., a surface in (x, y, z)-space that contains the initial
curve ]
NOTE: If the Jacobian J(s, t) = x_s y_t − x_t y_s ≠ 0, then the equations x = x(s, t) and y = y(s, t) can be inverted to give s and t as (smooth) functions of x and y, i.e., s = s(x, y) and t = t(x, y). The resulting function z = z(x, y) = z(s(x, y), t(x, y)) satisfies the PDE (1) in a neighborhood of the curve C (in view of (4) and the initial condition (6)) and is the unique solution of the IVP.
EXAMPLE 1. Determine the solution of the following IVP:

    ∂z/∂y + c ∂z/∂x = 0,  z(x, 0) = f(x),

where f(x) is a given function and c is a constant.
Solution. A step-by-step procedure for finding the solution is given below.

Step 1. (Finding characteristic curves)
To apply the method of characteristics, parameterize the initial curve C as follows:

    x = s,  y = 0,  z = f(s).    (13)

The family of characteristic curves (x(s, t), y(s, t)) is determined by solving the ODEs

    d/dt x(s, t) = c,  d/dt y(s, t) = 1.
The solution of the system is
x(s, t) = ct+ c1(s) and y(s, t) = t+ c2(s).
Step 2. (Applying IC)
Using the initial conditions
x(s, 0) = s, y(s, 0) = 0.
we find that
c1(s) = s, c2(s) = 0,
and hence
x(s, t) = ct+ s and y(s, t) = t.
Step 3. (Writing the parametric form of the solution)
Comparing with (1), we have c(x, y) = 0 and d(x, y) = 0. Therefore, using (10) and (11),
we find that
d(s, t) = 0, µ(s, t) = 1.
Since z(x(s, 0), y(s, 0)) = z(s, 0) = g(s) = f(s), we obtain z(s, t) = f(s). Thus, the
parametric form of the solution of the problem is given by
x(s, t) = ct+ s, y(s, t) = t, z(s, t) = f(s).
Step 4. (Expressing z(s, t) in terms of z(x, y)) Expressing s and t as s = s(x, y) and
t = t(x, y), we have
s = x− cy, t = y.
We now write the solution in the explicit form as
z(x, y) = z(s(x, y), y(x, y)) = f(x− cy).
Clearly, if f(x) is differentiable, the solution z(x, y) = f(x − cy) satisfies given PDE as
well as the initial condition.
NOTE: Example 1 characterizes unidirectional wave motion with velocity c. If we con-
sider the initial function z(x, 0) = f(x) to represent a waveform, the solution z(x, y) =
f(x− cy) shows that a point x for which x− cy = constant, will always occupy the same
position on the wave form. If c > 0, the entire initial wave form f(x) moves to the right
without changing its shape with speed c (if c < 0, the direction of motion is reversed).
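The travelling-wave formula can be verified directly. A short sympy check of ours (using an unspecified differentiable f, as in the example) confirms both the PDE and the initial condition:

```python
import sympy as sp

x, y, c = sp.symbols('x y c')
f = sp.Function('f')                 # arbitrary differentiable waveform
z = f(x - c*y)                       # the solution found in Example 1

assert sp.simplify(sp.diff(z, y) + c*sp.diff(z, x)) == 0   # z_y + c z_x = 0
assert z.subs(y, 0) == f(x)                                # z(x, 0) = f(x)
```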
EXAMPLE 2. Find the parametric form of the solution of the problem

    −y zx + x zy = 0

with the condition given by

    z(s, s^2) = s^3, (s > 0).

Solution. To find the solution, let's proceed as follows.

Step 1. (Finding characteristic curves)
The family of characteristic curves (x(s, t), y(s, t)) is determined by solving

    d/dt x(s, t) = −y(s, t),  d/dt y(s, t) = x(s, t)

with initial conditions

    x(s, 0) = s,  y(s, 0) = s^2.
The general solution of the system is
x(s, t) = c1(s) cos(t) + c2(s) sin(t) and y(s, t) = c1(s) sin(t)− c2(s) cos(t).
Step 2. (Applying IC)
Using ICs, we find that
c1(s) = s, c2(s) = −s2,
and hence
x(s, t) = s cos(t)− s2 sin(t) and y(s, t) = s sin(t) + s2 cos(t).
Step 3. (Writing the parametric form of the solution)
Comparing with (1), we note that c(x, y) = 0 and d(x, y) = 0. Therefore, using (10)
and (11), it follows that
d(s, t) = 0, µ(s, t) = 1.
In view of the given condition curve and z = z(s, t), we obtain
z(x(s, 0), y(s, 0)) = z(s, s2) = g(s) = s3, z(s, t) = s3.
Thus, the parametric form of the solution of the problem is given by
x(s, t) = s cos(t)− s2 sin(t), y(s, t) = s sin(t) + s2 cos(t), z(s, t) = s3.
Step 4. (Expressing z(s, t) in terms of z(x, y))
Writing s and t as functions of x and y, it is an easy exercise to show that

    z(x, y) = (1/√8) [−1 + √(1 + 4(x^2 + y^2))]^{3/2}.
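It is worth checking this closed form. The sympy computation below (our verification) confirms that it satisfies the PDE and, for s > 0, the side condition z(s, s^2) = s^3:

```python
import sympy as sp

x, y = sp.symbols('x y')
s = sp.symbols('s', positive=True)
z = (-1 + sp.sqrt(1 + 4*(x**2 + y**2)))**sp.Rational(3, 2)/sp.sqrt(8)

# the PDE -y z_x + x z_y = 0 holds...
assert sp.simplify(-y*sp.diff(z, x) + x*sp.diff(z, y)) == 0

# ...and so does the side condition z(s, s^2) = s^3
inner = sp.factor(1 + 4*(s**2 + s**4))               # (2*s**2 + 1)**2
val = (-1 + sp.sqrt(inner))**sp.Rational(3, 2)/sp.sqrt(8)
assert sp.simplify(val - s**3) == 0
```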
Practice Problems
1. Find the general solution of the following PDE in the indicated domain.
(A) xzx + 2yzy = 0, for x > 0, y > 0
(B) yzx − 4xzy = 2xy, for all (x, y)
(C) xzx − xyzy = z, for all (x, y)
2. Find a particular solution of the following PDEs satisfying the given side conditions.
(A) xzx + 2yzy = 0, z(x, 1/x) = x, for x > 0, y > 0
(B) xzx − xyzy = z, z(x, x) = x2ex, for all (x, y)
3. Find the parametric form of the solutions of the PDEs.
(A) xzx − xyzy = z, for all (x, y), z(s2, s) = s3
(B) (y + x)zx + (y − x)zy = z, z(cos(s), sin(s)) = 1, for 0 ≤ s ≤ 2π
4. Show that the problem −yzx + xzy = 0, z(x, 0) = 3x has no solution.
Lecture 3 Quasilinear First-Order PDEs
A first-order quasilinear PDE is of the form

    a(x, y, z) ∂z/∂x + b(x, y, z) ∂z/∂y = c(x, y, z).    (1)
Such equations occur in a variety of nonlinear wave propagation problems. Let us assume that an integral surface z = z(x, y) of (1) can be found. Write this integral surface in implicit form as

    F(x, y, z) = z(x, y) − z = 0.

Note that the gradient vector ∇F = (zx, zy, −1) is normal to the integral surface F(x, y, z) = 0. The equation (1) may be written as

    a zx + b zy − c = (a, b, c) · (zx, zy, −1) = 0.    (2)

This shows that the vector (a, b, c) and the gradient vector ∇F are orthogonal. In other words, the vector (a, b, c) lies in the tangent plane of the integral surface z = z(x, y) at each point in the (x, y, z)-space where ∇F ≠ 0.
At each point (x, y, z), the vector (a, b, c) determines a direction in (x, y, z)-space, which is called the characteristic direction. We can construct a family of curves that have the characteristic direction at each point. If the parametric form of these curves is
x = x(t), y = y(t), and z = z(t), (3)
then we must have
    dx/dt = a(x(t), y(t), z(t)),  dy/dt = b(x(t), y(t), z(t)),  dz/dt = c(x(t), y(t), z(t)),    (4)
because (dx/dt, dy/dt, dz/dt) is the tangent vector along the curves. The solutions of (4)
are called the characteristic curves of the quasilinear equation (1).
We assume that a(x, y, z), b(x, y, z), and c(x, y, z) are sufficiently smooth and do not
all vanish at the same point. Then, the theory of ordinary differential equations ensures
that a unique characteristic curve passes through each point (x0, y0, z0). The IVP for
(1) requires that z(x, y) be specified on a given curve in (x, y)-space which determines a
curve C in (x, y, z)-space referred to as the initial curve. To solve this IVP, we pass a characteristic curve through each point of the initial curve C. These curves generate a surface, known as an integral surface, which is the solution of the IVP.
REMARK 1. (i) The characteristic equations (4) for x and y are not, in general, uncoupled from the equation for z, and hence differ from those in the linear case (cf. Eq. (3) of Lecture 2).

(ii) The characteristic equations (4) can be expressed in the nonparametric form as

    dx/a = dy/b = dz/c.    (5)
Below, we shall describe a method for finding the general solution of (1). This method is due to Lagrange, and hence it is usually referred to as the method of characteristics or the method of Lagrange.
1 The method of characteristics
It is a method of solution of quasi-linear PDEs, which is stated in the following result.

THEOREM 2. The general solution of the quasi-linear PDE (1) is

    F(u, v) = 0,    (6)

where F is an arbitrary function and u(x, y, z) = c1 and v(x, y, z) = c2 form a solution of the equations

    dx/a = dy/b = dz/c.    (7)
Proof. If u(x, y, z) = c1 and v(x, y, z) = c2 are two integrals of the equations (7), then the equations

    ux dx + uy dy + uz dz = 0,
    vx dx + vy dy + vz dz = 0

are compatible with (7). Thus, we must have

    a ux + b uy + c uz = 0,
    a vx + b vy + c vz = 0.

Solving these equations for a, b and c, we obtain

    a / (∂(u, v)/∂(y, z)) = b / (∂(u, v)/∂(z, x)) = c / (∂(u, v)/∂(x, y)).    (8)
Differentiating F(u, v) = 0 with respect to x and y, respectively, we have

    ∂F/∂u (∂u/∂x + ∂u/∂z ∂z/∂x) + ∂F/∂v (∂v/∂x + ∂v/∂z ∂z/∂x) = 0,
    ∂F/∂u (∂u/∂y + ∂u/∂z ∂z/∂y) + ∂F/∂v (∂v/∂y + ∂v/∂z ∂z/∂y) = 0.
Eliminating ∂F/∂u and ∂F/∂v from these equations, we obtain

    ∂z/∂x ∂(u, v)/∂(y, z) + ∂z/∂y ∂(u, v)/∂(z, x) = ∂(u, v)/∂(x, y).    (9)

In view of (8), the equation (9) yields

    a ∂z/∂x + b ∂z/∂y = c.

Thus, we find that F(u, v) = 0 is a solution of the equation (1). This completes the proof.
REMARK 3. • All integral surfaces of the equation (1) are generated by the integral
curves of the equations (4).
• All surfaces generated by integral curves of the equations (4) are integral surfaces of
the equation (1).
EXAMPLE 4. Find the general integral of x zx + y zy = z.

Solution. The associated system of equations is

    dx/x = dy/y = dz/z.

From the first two ratios, we have

    dx/x = dy/y ⟹ ln x = ln y + ln c1 ⟹ x/y = c1.

Similarly,

    dz/z = dy/y ⟹ z/y = c2.

Take u1 = x/y and u2 = z/y. The general integral is given by

    F(x/y, z/y) = 0.
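If F(u1, u2) = 0 is solved for the second argument, say z/y = g(x/y) with g an arbitrary C^1 function, the resulting surface z = y g(x/y) can be checked against the PDE. The sympy sketch below is our verification (with x, y taken positive to avoid branch issues):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
g = sp.Function('g')          # arbitrary C^1 function
z = y*g(x/y)                  # from F(x/y, z/y) = 0, solving z/y = g(x/y)

assert sp.simplify(x*sp.diff(z, x) + y*sp.diff(z, y) - z) == 0
```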
EXAMPLE 5. Find the general integral of the equation
z(x+ y)zx + z(x− y)zy = x2 + y2.
Solution. The characteristic equations are
dx
z(x+ y)=
dy
z(x− y)=
dz
x2 + y2.
Each of these ratio is equivalent to
ydx+ xdy − zdz
0=xdx− ydy − zdz
0.
MODULE 2: FIRST-ORDER PARTIAL DIFFERENTIAL EQUATIONS 19
Consequently, we have
d(xy − z²/2) = 0 and d[(x² − y² − z²)/2] = 0.
Integrating, we obtain the two integrals
2xy − z² = c1 and x² − y² − z² = c2,
where c1 and c2 are arbitrary constants. Thus, the general solution is
F(2xy − z², x² − y² − z²) = 0,
where F is an arbitrary function.
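As a check, a particular surface from this family, 2xy − z² = 0, i.e. z = √(2xy) for x, y > 0, should satisfy the PDE exactly; the sketch below (function name illustrative) verifies this with central differences:

```python
import math

def surface_residual(x, y, eps=1e-6):
    """Check z = sqrt(2xy) (the surface 2xy - z^2 = 0) against
    z(x+y) z_x + z(x-y) z_y = x^2 + y^2."""
    z = lambda x, y: math.sqrt(2 * x * y)
    zx = (z(x + eps, y) - z(x - eps, y)) / (2 * eps)
    zy = (z(x, y + eps) - z(x, y - eps)) / (2 * eps)
    return z(x, y) * (x + y) * zx + z(x, y) * (x - y) * zy - (x * x + y * y)
```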
Next, we shall discuss a method for solving a Cauchy problem for the first-order quasi-
linear PDE (1). The following theorem gives conditions under which a unique solution of
the initial value problem for (1) can be obtained.
THEOREM 6. Let a(x, y, z), b(x, y, z) and c(x, y, z) in (1) have continuous partial derivatives with respect to the x, y and z variables. Suppose the initial curve C is described parametrically as
x = x(s), y = y(s), z = z(s),
that C has a continuous tangent vector, and that
J(s) = (dy/ds) a[x(s), y(s), z(s)] − (dx/ds) b[x(s), y(s), z(s)] ≠ 0 (10)
on C. Then there exists a unique solution z = z(x, y), defined in some neighborhood of the initial curve C, that satisfies (1) and the initial condition z(x(s), y(s)) = z(s).
Proof. The characteristic system (4) with initial conditions at t = 0 given as x =
x(s), y = y(s), and z = z(s) has a unique solution of the form
x = x(s, t), y = y(s, t), z = z(s, t),
with continuous derivatives in s and t, and
x(s, 0) = x(s), y(s, 0) = y(s), z(s, 0) = z(s).
This follows from the existence and uniqueness theory for ODEs. The Jacobian of the transformation x = x(s, t), y = y(s, t) at t = 0 is
J(s) = J(s, t)|t=0 = | ∂x/∂t  ∂x/∂s ; ∂y/∂t  ∂y/∂s ||t=0 = [a (∂y/∂s) − b (∂x/∂s)]t=0 ≠ 0 (11)
in view of (10). By the continuity assumption, the Jacobian J ≠ 0 in a neighborhood
of the initial curve. Thus, by the implicit function theorem, we can solve for s and t as
functions of x and y near the initial curve. Then
Z(x, y) = z(s(x, y), t(x, y))
is a solution of (1), as can be seen from
c = dz/dt = (∂z/∂x)(dx/dt) + (∂z/∂y)(dy/dt) = a ∂z/∂x + b ∂z/∂y,
where we have used (4). The uniqueness of the solution follows from the fact that any two
integral surfaces that contain the same initial curve must coincide along all the charac-
teristic curves passing through the initial curve. This is a consequence of the uniqueness
theorem for the IVP for (4). This completes our proof.
EXAMPLE 7. Consider the IVP:
∂z/∂y + z ∂z/∂x = 0,
z(x, 0) = f(x),
where f(x) is a given smooth function.
Solution. We solve this problem using the following steps.
Step 1. (Finding characteristic curves)
To solve the IVP, we parameterize the initial curve as
x = s, y = 0, z = f(s).
The characteristic equations are
dx/dt = z, dy/dt = 1, dz/dt = 0.
Let the solutions be denoted by x(s, t), y(s, t), and z(s, t). We immediately find that
x(s, t) = zt + c1(s), y(s, t) = t + c2(s), z(s, t) = c3(s),
where the ci, i = 1, 2, 3, are constants to be determined using the IC.
Step 2. (Applying the IC) The initial conditions at t = 0 are given by
x(s, 0) = s, y(s, 0) = 0, z(s, 0) = f(s).
Using these conditions, we obtain
x(s, t) = zt + s, y(s, t) = t, z(s, t) = f(s).
Step 3. (Writing the parametric form of the solution)
The solutions are thus given by
x(s, t) = zt + s = f(s)t + s, y(s, t) = t, z(s, t) = f(s).
Step 4. (Expressing z(s, t) in terms of z(x, y)) Applying the condition (10), we find that J(s) = −1 ≠ 0 along the entire initial curve. We can immediately solve for s(x, y) and t(x, y) to obtain
s(x, y) = x − tf(s), t(x, y) = y.
Since t = y and s = x − tf(s) = x − yz, the solution can also be given in implicit form as
z = f(x − yz).
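For the illustrative choice f(s) = s, the implicit relation z = f(x − yz) can be solved to give z = x/(1 + y), which makes the solution easy to check numerically (names are hypothetical):

```python
def residuals(x, y, eps=1e-6):
    """With the illustrative choice f(s) = s, z = f(x - yz) gives z = x/(1 + y).
    Returns the PDE residual z_y + z*z_x and the implicit residual z - (x - y*z)."""
    z = lambda x, y: x / (1.0 + y)
    zx = (z(x + eps, y) - z(x - eps, y)) / (2 * eps)
    zy = (z(x, y + eps) - z(x, y - eps)) / (2 * eps)
    pde = zy + z(x, y) * zx
    implicit = z(x, y) - (x - y * z(x, y))
    return pde, implicit
```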
EXAMPLE 8. Solve the following quasi-linear PDE:
zzx + yzy = x, (x, y) ∈ R2
subject to the initial condition
z(x, 1) = 2x, x ∈ R.
Solution. Here a(x, y, z) = z, b(x, y, z) = y, c(x, y, z) = x. The characteristic equations are
dx/dt = z, x(s, 0) = s,
dy/dt = y, y(s, 0) = 1,
dz/dt = x, z(s, 0) = 2s.
On solving the above ODEs, we obtain
x(s, t) = (s/2)(3e^t − e^{−t}), y(s, t) = e^t, z(s, t) = (s/2)(3e^t + e^{−t}).
Solving for (s, t) in terms of (x, y), we obtain
s(x, y) = 2xy/(3y² − 1), t(x, y) = ln(y),
z(x, y) = z(s(x, y), t(x, y)) = (3y² + 1)x/(3y² − 1).
Note that the characteristic variables imply that y must be positive (y = e^t). In fact, the solution z is valid only for 3y² − 1 > 0, i.e., for y > 1/√3 > 0. Observe that the change of variables is valid only where
| xs(s, t)  xt(s, t) ; ys(s, t)  yt(s, t) | ≠ 0.
It is easy to verify that this condition leads to y ≠ 1/√3.
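The solution can be sanity-checked against both the PDE and the initial condition; the following sketch (names illustrative) uses central differences for zx and zy:

```python
def example8_residuals(x, y, eps=1e-6):
    """Check z = x(3y^2 + 1)/(3y^2 - 1) against z*z_x + y*z_y = x
    and against the initial condition z(x, 1) = 2x."""
    z = lambda x, y: x * (3 * y * y + 1) / (3 * y * y - 1)
    zx = (z(x + eps, y) - z(x - eps, y)) / (2 * eps)
    zy = (z(x, y + eps) - z(x, y - eps)) / (2 * eps)
    return z(x, y) * zx + y * zy - x, z(x, 1.0) - 2 * x
```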
Practice Problems
1. Find a solution of the PDE zx + zzy = 6x satisfying the condition z(0, y) = 3y.
2. Find the general integral of the PDE
(2xy − 1)zx + (z − 2x2)zy = 2(x− yz)
and also the particular integral which passes through the line x = 1, y = 0.
3. Solve zx + zzy = 2x, z(0, y) = f(y).
4. Find the solution of the equation zx + zzy = 1 with the data
x(s, 0) = 2s, y(s, 0) = s², z(0, s²) = s.
5. Find the characteristics of the equation zx zy = z, and determine the integral surface which passes through the parabola x = 0, y² = z.
Lecture 4 Nonlinear First-Order PDEs
The general nonlinear first-order PDE is written in the form
F (x, y, z, zx, zy) = 0, (1)
where F is not linear in zx and zy. Setting zx = p and zy = q, rewrite (1) as
F (x, y, z, p, q) = 0. (2)
1 The method of characteristics for nonlinear PDEs
Recall the method of characteristics for solving the first-order linear PDE:
F(x, y, z, p, q) = a(x, y)p + b(x, y)q + c(x, y)z − d(x, y) = 0.
In this method, the PDE becomes an ODE along the characteristic curves, which may be regarded as the solutions of the system
x′(t) = a(x(t), y(t)) and y′(t) = b(x(t), y(t)). (3)
Note that Fp = a(x, y) and Fq = b(x, y). Hence, (3) may be written as
x′(t) = Fp and y′(t) = Fq. (4)
For solving the first-order nonlinear PDE (1), the relation (4) motivates us to define characteristic curves as solutions of the system
x′(t) = Fp(x(t), y(t), z(t), p(t), q(t)) and y′(t) = Fq(x(t), y(t), z(t), p(t), q(t)), (5)
where z(t) = z(x(t), y(t)), p(t) = zx(x(t), y(t)), q(t) = zy(x(t), y(t)). However, unlike the linear case, the right sides of (5) depend not only on x(t) and y(t), but also on z(t), p(t) and q(t). Thus, we need a system of five ODEs for the five unknowns x(t), y(t), z(t), p(t) and q(t). For the remaining three equations, notice that
z′(t) = (d/dt) z(x(t), y(t))
= zx x′(t) + zy y′(t)
= p(t)x′(t) + q(t)y′(t)
= p(t)Fp(x(t), y(t), z(t), p(t), q(t)) + q(t)Fq(x(t), y(t), z(t), p(t), q(t)). (6)
Along a characteristic, p is a function of t. The equation for p′(t) is obtained as follows:
p′(t) = (d/dt) zx(x(t), y(t))
= zxx x′(t) + zxy y′(t)
= zxx Fp(x(t), y(t), z(t), p(t), q(t)) + zxy Fq(x(t), y(t), z(t), p(t), q(t)). (7)
Using the fact that z(x, y) solves the PDE (1), we obtain
0 = (d/dx) F(x, y, z(x, y), zx(x, y), zy(x, y))
= Fx + Fz zx + Fp zxx + Fq zyx.
Therefore,
p′(t) = zxx Fp + zxy Fq = −(Fx + pFz). (8)
Similarly,
q′(t) = −(Fy + qFz). (9)
Thus, we have the following system of five ODEs:
x′(t) = Fp(x(t), y(t), z(t), p(t), q(t))
y′(t) = Fq(x(t), y(t), z(t), p(t), q(t))
z′(t) = p(t)Fp(x(t), y(t), z(t), p(t), q(t)) + q(t)Fq(x(t), y(t), z(t), p(t), q(t))
p′(t) = −[Fx(x(t), y(t), z(t), p(t), q(t)) + p(t)Fz(x(t), y(t), z(t), p(t), q(t))]
q′(t) = −[Fy(x(t), y(t), z(t), p(t), q(t)) + q(t)Fz(x(t), y(t), z(t), p(t), q(t))] (10)
These equations constitute the characteristic system of the PDE (1) and are known as the characteristic equations associated with the PDE (1).
NOTE: If the functions that appear in the equations (10) satisfy a Lipschitz condition, there is a unique solution of the equations for each prescribed set of initial values of the variables. Therefore, the characteristic strip is uniquely determined by any initial element (x(t0), y(t0), z(t0), p(t0), q(t0)) at any initial point t0.
An important result about characteristic strips is given below.
THEOREM 1. The function F(x, y, z, p, q) is constant along every characteristic strip of the equation F(x, y, z, p, q) = 0.
Proof. Along a characteristic strip, we have
(d/dt) F(x(t), y(t), z(t), p(t), q(t)) = Fx x′(t) + Fy y′(t) + Fz z′(t) + Fp p′(t) + Fq q′(t)
= Fx Fp + Fy Fq + Fz(pFp + qFq) − Fp(Fx + pFz) − Fq(Fy + qFz)
= 0.
This implies F(x, y, z, p, q) = k, a constant, along the strip.
2 Solving Cauchy’s problem for nonlinear PDEs
The objective of this section is to solve the PDE
F(x, y, z, zx, zy) = 0
subject to an appropriate initial condition (i.e., z assumes prescribed values on some curve).
Let (f(s), g(s)) trace out a regular curve in the xy-plane as s varies. We regard this curve as the initial curve. We seek a solution z(x, y) of the following problem (known as Cauchy's problem):
F(x, y, z, zx, zy) = 0, z(f(s), g(s)) = G(s), (11)
where G(s) is a continuously differentiable function. Such a problem may have no solution (e.g., the PDE zx² + zy² + 1 = 0). However, if a solution exists in some neighborhood of the initial curve, then such a solution can often be determined using the following steps (cf. [1]).
Step 1: Find functions h(s) and k(s) (if possible) such that
F(f(s), g(s), G(s), h(s), k(s)) = 0, G′(s) = h(s)f′(s) + k(s)g′(s), and
Fp(f(s), g(s), G(s), h(s), k(s)) g′(s) − Fq(f(s), g(s), G(s), h(s), k(s)) f′(s) ≠ 0. (12)
Note that if h(s) and k(s) do not exist, then (11) has no solution. If there are several
choices for (h(s), k(s)), then a solution of (11) exists for each such choice.
Step 2: For each fixed s, solve the following characteristic system for x(s, t), y(s, t), z(s, t), p(s, t), q(s, t) with the initial conditions x(s, 0) = f(s), y(s, 0) = g(s), z(s, 0) = G(s), p(s, 0) = h(s), q(s, 0) = k(s), where h(s) and k(s) are the functions found in Step 1.
(d/dt) x(s, t) = Fp(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))
(d/dt) y(s, t) = Fq(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))
(d/dt) z(s, t) = p(s, t)Fp(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t)) + q(s, t)Fq(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t)) (13)
(d/dt) p(s, t) = −[Fx(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t)) + p(s, t)Fz(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))]
(d/dt) q(s, t) = −[Fy(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t)) + q(s, t)Fz(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))]
Step 3: As s and t vary, the point (x, y, z), defined by
x = x(s, t), y = y(s, t), z = z(s, t) (14)
traces out the graph of a solution z of (11) in the xyz-space, in a neighborhood of the
curve traced out by (f(s), g(s), G(s)). In some cases, one can use the first two equations
in (14) to solve for s and t in terms of x and y (say, s = s(x, y) and t = t(x, y)) to obtain a
solution z(x, y) = z(s(x, y), t(x, y)), for (x, y) in a neighborhood of the curve (f(s), g(s)).
To illustrate the above steps, let us consider the following example.
EXAMPLE 2. Solve the PDE zxzy − z = 0 subject to the condition z(s,−s) = 1.
Solution. Here, we have
F(x, y, z, p, q) = pq − z.
The characteristic system (13) takes the form
dx/dt = Fp = q(t), dy/dt = Fq = p(t), dz/dt = pFp + qFq = 2p(t)q(t),
dp/dt = −[Fx + p(t)Fz] = p(t), dq/dt = −[Fy + q(t)Fz] = q(t).
Note that
dp/dt = p(t) ⇒ p(t) = ce^t and dq/dt = q(t) ⇒ q(t) = de^t,
where c and d are arbitrary constants. Since we are looking for a characteristic strip (i.e., F(x, y, z, p, q) = 0), we set z(t) = p(t)q(t) = cde^{2t}. The equations of the characteristic strip are:
x(t) = de^t + d1, y(t) = ce^t + c1, z(t) = cde^{2t}, p(t) = ce^t, q(t) = de^t,
where c1 and d1 are constants.
The initial condition z(s, −s) = 1 is given on the line y = −x traced out by (s, −s); in (11), we have f(s) = s and g(s) = −s. We must find h(s) and k(s) such that
1 = G(s) = h(s)k(s), 0 = G′(s) = h(s) − k(s),
Fp(· · ·)(−1) − Fq(· · ·)(1) = −k(s) − h(s) ≠ 0.
Thus, we have two choices: h(s) = 1 and k(s) = 1, or h(s) = −1 and k(s) = −1. For the choice h(s) = 1 and k(s) = 1, we obtain
x(s, t) = e^t − 1 + s, y(s, t) = e^t − 1 − s, z(s, t) = e^{2t}, p(s, t) = e^t, q(s, t) = e^t.
From the first two equations, we obtain
e^t = (x + y + 2)/2.
Then the solution is
z(x, y) = e^{2t} = (x + y + 2)²/4.
If we choose h(s) = −1 and k(s) = −1, the solution is given by
z(x, y) = (x + y − 2)²/4.
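Both branches can be checked directly; the sketch below (names illustrative) verifies the first one, z = (x + y + 2)²/4, against pq = z and the data z(s, −s) = 1. The second branch is checked the same way with the sign of 2 flipped.

```python
def residuals(x, y, s=0.7, eps=1e-6):
    """Check z = (x + y + 2)^2/4 against z_x*z_y = z and the data z(s, -s) = 1."""
    z = lambda x, y: (x + y + 2.0) ** 2 / 4.0
    p = (z(x + eps, y) - z(x - eps, y)) / (2 * eps)
    q = (z(x, y + eps) - z(x, y - eps)) / (2 * eps)
    return p * q - z(x, y), z(s, -s) - 1.0
```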
Practice Problems
Solve the following Cauchy’s problem:
1. pq − z = 0, z(x, −x) = x
2. p + zq = 2x, z(0, y) = f(y)
3. Find the solution of the equation p + zq = 1 with the data
x(s, 0) = 2s, y(s, 0) = s², z(0, s²) = s.
4. Find the characteristics of the equation pq = z, and determine the integral surface which passes through the parabola x = 0, y² = z.
Lecture 5 Compatible Systems and Charpit's Method
In this lecture, we shall study compatible systems of first-order PDEs and Charpit's method for solving nonlinear PDEs. Let us begin with the following definition.
DEFINITION 1. (Compatible systems of first-order PDEs) A system of two first-order PDEs
f(x, y, z, p, q) = 0 (1)
and
g(x, y, z, p, q) = 0 (2)
is said to be compatible if the two equations have a common solution.
THEOREM 2. The equations f(x, y, z, p, q) = 0 and g(x, y, z, p, q) = 0 are compatible on a domain D if
(i) J = ∂(f, g)/∂(p, q) = | fp  fq ; gp  gq | ≠ 0 on D;
(ii) p and q can be explicitly solved from (1) and (2) as p = ϕ(x, y, z) and q = ψ(x, y, z).
Further, the equation
dz = ϕ(x, y, z)dx + ψ(x, y, z)dy
is integrable.
THEOREM 3. A necessary and sufficient condition for the integrability of the equation dz = ϕ(x, y, z)dx + ψ(x, y, z)dy is
[f, g] ≡ ∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) + p ∂(f, g)/∂(z, p) + q ∂(f, g)/∂(z, q) = 0. (3)
In other words, the equations (1) and (2) are compatible iff (3) holds.
EXAMPLE 4. Show that the equations
xp− yq = 0, z(xp+ yq) = 2xy
are compatible and solve them.
Solution. Take f ≡ xp − yq = 0 and g ≡ z(xp + yq) − 2xy = 0. Note that
fx = p, fy = −q, fz = 0, fp = x, fq = −y,
and
gx = zp − 2y, gy = zq − 2x, gz = xp + yq, gp = zx, gq = zy.
Compute
J ≡ ∂(f, g)/∂(p, q) = | fp  fq ; gp  gq | = | x  −y ; zx  zy | = zxy + zxy = 2zxy ≠ 0
for x ≠ 0, y ≠ 0, z ≠ 0. Further,
∂(f, g)/∂(x, p) = | fx  fp ; gx  gp | = | p  x ; zp − 2y  zx | = zxp − x(zp − 2y) = 2xy,
∂(f, g)/∂(z, p) = | fz  fp ; gz  gp | = | 0  x ; xp + yq  zx | = 0 − x(xp + yq) = −x²p − xyq,
∂(f, g)/∂(y, q) = | fy  fq ; gy  gq | = | −q  −y ; zq − 2x  zy | = −qzy + y(zq − 2x) = −2xy,
∂(f, g)/∂(z, q) = | fz  fq ; gz  gq | = | 0  −y ; xp + yq  zy | = y(xp + yq) = y²q + xyp.
It is an easy exercise to verify that
[f, g] ≡ ∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) + p ∂(f, g)/∂(z, p) + q ∂(f, g)/∂(z, q)
= 2xy − x²p² − xypq − 2xy + y²q² + xypq
= y²q² − x²p²
= 0, using xp = yq from the first equation.
So the equations are compatible.
The next step is to determine p and q from the two equations xp − yq = 0 and z(xp + yq) = 2xy. Using these two equations, we have
zxp + zyq − 2xy = 0 ⇒ xp + yq = 2xy/z
⇒ 2xp = 2xy/z ⇒ p = y/z = ϕ(x, y, z),
and
xp − yq = 0 ⇒ q = xp/y = xy/(yz) = x/z
⇒ q = x/z = ψ(x, y, z).
Substituting p and q in dz = p dx + q dy, we get
z dz = y dx + x dy = d(xy),
and hence, integrating, we obtain
z² = 2xy + k,
where k is a constant.
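A quick check that z² = 2xy + k indeed solves both equations simultaneously (the function name and the choice k = 1 are illustrative):

```python
import math

def compatible_residuals(x, y, k=1.0, eps=1e-6):
    """Check z = sqrt(2xy + k) against both xp - yq = 0 and z(xp + yq) = 2xy,
    with p = z_x, q = z_y approximated by central differences."""
    z = lambda x, y: math.sqrt(2 * x * y + k)
    p = (z(x + eps, y) - z(x - eps, y)) / (2 * eps)
    q = (z(x, y + eps) - z(x, y - eps)) / (2 * eps)
    return x * p - y * q, z(x, y) * (x * p + y * q) - 2 * x * y
```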
NOTE: For the compatibility of f(x, y, z, p, q) = 0 and g(x, y, z, p, q) = 0, it is not necessary that every solution of f(x, y, z, p, q) = 0 be a solution of g(x, y, z, p, q) = 0 or vice versa, as is generally believed. For instance, the equations
f ≡ xp − yq − x = 0 (4)
g ≡ x²p + q − xz = 0 (5)
are compatible. They have the common solutions z = x + c(1 + xy), where c is an arbitrary constant. Note that z = x(y + 1) is a solution of (4) but not of (5).
Charpit's Method: This is a general method for finding the complete integral of a first-order nonlinear PDE of the form
f(x, y, z, p, q) = 0. (6)
Basic Idea: The basic idea of this method is to introduce another first-order partial differential equation
g(x, y, z, p, q, a) = 0, (7)
which contains an arbitrary constant a and is such that
(i) Equations (6) and (7) can be solved for p and q to obtain
p = p(x, y, z, a), q = q(x, y, z, a);
(ii) The equation
dz = p(x, y, z, a)dx + q(x, y, z, a)dy (8)
is integrable.
When such a function g is found, the solution
F(x, y, z, a, b) = 0
of (8), containing two arbitrary constants a and b, will be a complete integral of (6).
Note: In Charpit's method, another PDE g is introduced so that the equations f and g are compatible, and the common solutions of f and g are then determined.
The equations (6) and (7) are compatible if
[f, g] ≡ ∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) + p ∂(f, g)/∂(z, p) + q ∂(f, g)/∂(z, q) = 0.
Expanding it, we are led to the linear PDE
fp ∂g/∂x + fq ∂g/∂y + (pfp + qfq) ∂g/∂z − (fx + pfz) ∂g/∂p − (fy + qfz) ∂g/∂q = 0. (9)
Now solve (9) to determine g by finding the integrals of the following auxiliary equations:
dx/fp = dy/fq = dz/(pfp + qfq) = dp/(−(fx + pfz)) = dq/(−(fy + qfz)). (10)
These equations are known as Charpit's equations and are equivalent to the characteristic equations (10) of the previous Lecture 4.
Once an integral g(x, y, z, p, q, a) of this kind has been found, the problem reduces to
solving for p and q, and then integrating equation (8).
REMARK 5. 1. For finding integrals, not all of Charpit's equations (10) need be used.
2. p or q must occur in the solution obtained from (10).
EXAMPLE 6. Find a complete integral of
p²x + q²y = z. (11)
Solution. To find a complete integral, we proceed as follows.
Step 1: (Computing fx, fy, fz, fp, fq.)
Set f ≡ p²x + q²y − z = 0. Then
fx = p², fy = q², fz = −1, fp = 2px, fq = 2qy
⇒ pfp + qfq = 2p²x + 2q²y, −(fx + pfz) = −p² + p, −(fy + qfz) = −q² + q.
Step 2: (Writing Charpit's equations and finding a solution g(x, y, z, p, q, a).)
The Charpit (or auxiliary) equations are:
dx/fp = dy/fq = dz/(pfp + qfq) = dp/(−(fx + pfz)) = dq/(−(fy + qfz))
⇒ dx/(2px) = dy/(2qy) = dz/(2(p²x + q²y)) = dp/(−p² + p) = dq/(−q² + q),
from which it follows that
(p²dx + 2px dp)/(2p³x + 2p²x − 2p³x) = (q²dy + 2qy dq)/(2q³y + 2q²y − 2q³y)
⇒ (p²dx + 2px dp)/(p²x) = (q²dy + 2qy dq)/(q²y).
On integrating, we obtain
log(p²x) = log(q²y) + log a
⇒ p²x = aq²y, (12)
where a is an arbitrary constant.
Step 3: (Solving for p and q.)
Using (11) and (12), we find that
p²x + q²y = z, p²x = aq²y
⇒ (aq²y) + q²y = z ⇒ q²y(1 + a) = z
⇒ q² = z/((1 + a)y) ⇒ q = [z/((1 + a)y)]^{1/2},
and
p² = aq²y/x = a [z/((1 + a)y)] (y/x) = az/((1 + a)x)
⇒ p = [az/((1 + a)x)]^{1/2}.
Step 4: (Writing dz = p(x, y, z, a)dx + q(x, y, z, a)dy and finding its solution.)
Writing
dz = [az/((1 + a)x)]^{1/2} dx + [z/((1 + a)y)]^{1/2} dy
⇒ ((1 + a)/z)^{1/2} dz = (a/x)^{1/2} dx + (1/y)^{1/2} dy.
Integrating, we have
[(1 + a)z]^{1/2} = (ax)^{1/2} + y^{1/2} + b,
the complete integral of the equation (11).
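The complete integral can be checked by solving it for z, i.e. z = (√(ax) + √y + b)²/(1 + a), and substituting back into p²x + q²y = z; the sketch below does this numerically for one choice of (a, b) (values and names illustrative):

```python
def charpit_residual(x, y, a=2.0, b=0.5, eps=1e-6):
    """Solve [(1+a)z]^(1/2) = (ax)^(1/2) + y^(1/2) + b for z and substitute
    back into p^2*x + q^2*y - z, with p, q from central differences."""
    z = lambda x, y: ((a * x) ** 0.5 + y ** 0.5 + b) ** 2 / (1 + a)
    p = (z(x + eps, y) - z(x - eps, y)) / (2 * eps)
    q = (z(x, y + eps) - z(x, y - eps)) / (2 * eps)
    return p * p * x + q * q * y - z(x, y)
```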
Practice Problems
1. Show that the equations xp − yq = x and x²p + q = xz are compatible and solve them.
2. Show that the equations f(x, y, p, q) = 0 and g(x, y, p, q) = 0 are compatible if
∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) = 0.
3. Find complete integrals of the equations:
(i) p = (z + qy)²; (ii) (p² + q²)y = qz.
Lecture 6 Some Special Types of First-Order PDEs
We shall consider some special types of first-order partial differential equations whose
solutions may be obtained easily by Charpit’s method.
Type (a): (Equations involving only p and q)
If the equation is of the form
f(p, q) = 0, (1)
then Charpit's equations take the form
dx/fp = dy/fq = dz/(pfp + qfq) = dp/0 = dq/0.
An immediate solution is given by p = a, where a is an arbitrary constant. Substituting p = a in (1), we obtain a relation
q = Q(a).
Then, integrating the expression
dz = a dx + Q(a) dy,
we obtain
z = ax + Q(a)y + b, (2)
where b is a constant. Thus, (2) is a complete integral of (1).
Note: Instead of taking dp = 0, we can take dq = 0 ⇒ q = a. In some problems, taking dq = 0 may reduce the amount of computation considerably.
EXAMPLE 1. Find a complete integral of the equation pq = 1.
Solution. If p = a, then pq = 1 ⇒ q = 1/a. In this case, Q(a) = 1/a. From (2), we obtain a complete integral as
z = ax + y/a + b
⇒ a²x + y − az = c,
where a and c are arbitrary constants.
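A direct check of this complete integral (parameter values illustrative):

```python
def residual(a=3.0, b=1.0, x=0.4, y=2.5, eps=1e-6):
    """Check z = a*x + y/a + b against z_x * z_y = 1."""
    z = lambda x, y: a * x + y / a + b
    p = (z(x + eps, y) - z(x - eps, y)) / (2 * eps)
    q = (z(x, y + eps) - z(x, y - eps)) / (2 * eps)
    return p * q - 1.0
```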
Type (b): (Equations not involving the independent variables)
For an equation of the type
f(z, p, q) = 0, (3)
Charpit's equations become
dx/fp = dy/fq = dz/(pfp + qfq) = dp/(−pfz) = dq/(−qfz).
From the last two relations, we have
dp/(−pfz) = dq/(−qfz) ⇒ dp/p = dq/q
⇒ p = aq, (4)
where a is an arbitrary constant. Solving (3) and (4) for p and q, we obtain
q = Q(a, z) ⇒ p = aQ(a, z).
Now,
dz = p dx + q dy
⇒ dz = aQ(a, z)dx + Q(a, z)dy
⇒ dz = Q(a, z)[a dx + dy].
This gives the complete integral
∫ dz/Q(a, z) = ax + y + b, (5)
where b is an arbitrary constant.
EXAMPLE 2. Find a complete integral of the PDE p²z² + q² = 1.
Solution. Putting p = aq in the given PDE, we obtain
a²q²z² + q² = 1
⇒ q²(1 + a²z²) = 1
⇒ q = (1 + a²z²)^{−1/2}.
Now,
p² = (1 − q²)/z² = (1 − 1/(1 + a²z²))(1/z²)
⇒ p² = a²/(1 + a²z²)
⇒ p = a(1 + a²z²)^{−1/2}.
Substituting p and q in dz = p dx + q dy, we obtain
dz = a(1 + a²z²)^{−1/2}dx + (1 + a²z²)^{−1/2}dy
⇒ (1 + a²z²)^{1/2}dz = a dx + dy
⇒ (1/2a){az(1 + a²z²)^{1/2} + log[az + (1 + a²z²)^{1/2}]} = ax + y + b,
which is the complete integral of the given PDE.
Type (c): (Separable equations)
A first-order PDE is separable if it can be written in the form
f(x, p) = g(y, q), (6)
that is, a PDE in which z is absent and the terms containing x and p can be separated from those containing y and q. For this type of equation, Charpit's equations become
dx/fp = dy/(−gq) = dz/(pfp − qgq) = dp/(−fx) = dq/(−gy).
From the first and fourth relations, we obtain an ODE
dp/(−fx) = dx/fp ⇒ dp/dx + fx/fp = 0, (7)
which may be solved to yield p as a function of x and an arbitrary constant a. Writing (7) in the form fp dp + fx dx = 0, we see that its solution is f(x, p) = a. Similarly, we get g(y, q) = a. Determine p and q from the equations
f(x, p) = a, g(y, q) = a,
and then use the relation dz = p dx + q dy to determine a complete integral.
EXAMPLE 3. Find a complete integral of p²y(1 + x²) = qx².
Solution. First we write the given PDE in the form
p²(1 + x²)/x² = q/y (a separable equation).
It follows that
p²(1 + x²)/x² = a² ⇒ p = ax/√(1 + x²),
where a is an arbitrary constant. Similarly,
q/y = a² ⇒ q = a²y.
Now, the relation dz = p dx + q dy yields
dz = (ax/√(1 + x²))dx + a²y dy ⇒ z = a√(1 + x²) + a²y²/2 + b,
where a and b are arbitrary constants; this is a complete integral of the given PDE.
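A numerical check of this complete integral against the original PDE (parameter values and names illustrative):

```python
import math

def separable_residual(x, y, a=1.5, b=0.2, eps=1e-6):
    """Check z = a*sqrt(1 + x^2) + a^2*y^2/2 + b against p^2*y*(1 + x^2) = q*x^2."""
    z = lambda x, y: a * math.sqrt(1 + x * x) + a * a * y * y / 2 + b
    p = (z(x + eps, y) - z(x - eps, y)) / (2 * eps)
    q = (z(x, y + eps) - z(x, y - eps)) / (2 * eps)
    return p * p * y * (1 + x * x) - q * x * x
```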
Type (d): (Clairaut's equation)
A first-order PDE is said to be of Clairaut form if it can be written as
z = px + qy + f(p, q). (8)
Charpit's equations take the form
dx/(x + fp) = dy/(y + fq) = dz/(px + qy + pfp + qfq) = dp/0 = dq/0.
Now, dp = 0 ⇒ p = a, where a is an arbitrary constant, and dq = 0 ⇒ q = b, where b is an arbitrary constant. Substituting these values of p and q in (8), we obtain the required complete integral
z = ax + by + f(a, b).
EXAMPLE 4. Find a complete integral of (p + q)(z − xp − yq) = 1.
Solution. The given PDE can be put in the form
z = xp + yq + 1/(p + q), (9)
which is of Clairaut type. Putting p = a and q = b in (9), a complete integral is given by
z = ax + by + 1/(a + b),
where a and b are arbitrary constants.
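As with all Clairaut-type integrals, substitution back into the PDE is immediate; a numerical version of the check (parameter values illustrative):

```python
def clairaut_residual(x, y, a=1.0, b=2.0, eps=1e-6):
    """Check z = a*x + b*y + 1/(a + b) against (p + q)(z - x*p - y*q) = 1."""
    z = lambda x, y: a * x + b * y + 1.0 / (a + b)
    p = (z(x + eps, y) - z(x - eps, y)) / (2 * eps)
    q = (z(x, y + eps) - z(x, y - eps)) / (2 * eps)
    return (p + q) * (z(x, y) - x * p - y * q) - 1.0
```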
Practice Problems
Find complete integrals of the following PDEs.
1. p + q = pq
2. √p + √q = 1
3. z = p² − q²
4. p(1 + q) = qz
5. p² + q² = x + y
6. z = px + qy + √(1 + p² + q²)
7. zpq = p²(xq + p²) + q²(yp + q²)
Lecture 7 Jacobi Method for Nonlinear First-Order PDEs
Consider the following first-order PDE of the form
f(x, y, z, ux, uy, uz) = 0, (1)
where x, y, z are independent variables and u is the dependent variable. Note that the dependent variable u does not appear in the PDE (1).
Idea of Jacobi's Method: The fundamental idea of Jacobi's method is to introduce two first-order PDEs involving two arbitrary constants a and b of the following form:
h1(x, y, z, ux, uy, uz, a) = 0, (2)
h2(x, y, z, ux, uy, uz, b) = 0, (3)
such that
∂(f, h1, h2)/∂(ux, uy, uz) ≠ 0 (4)
and
• Equations (1), (2) and (3) can be solved for ux, uy, uz;
• The equation
du = ux dx + uy dy + uz dz (5)
is integrable.
If h1 = 0 and h2 = 0 are compatible with f = 0, then h1 and h2 satisfy
∂(f, h)/∂(x, ux) + ∂(f, h)/∂(y, uy) + ∂(f, h)/∂(z, uz) = 0 (6)
for h = hi, i = 1, 2. Equation (6) leads to a semi-linear PDE of the form
fux ∂h/∂x + fuy ∂h/∂y + fuz ∂h/∂z − fx ∂h/∂ux − fy ∂h/∂uy − fz ∂h/∂uz = 0 (7)
for h = hi, i = 1, 2. The associated auxiliary equations are given by
dx/fux = dy/fuy = dz/fuz = dux/(−fx) = duy/(−fy) = duz/(−fz). (8)
The rest of the procedure is the same as in Charpit's method.
The method just described can be applied to solve a first-order equation of the form
f(x, y, z, p, q) = 0. (9)
Next, we shall show how to transform the equation f(x, y, z, p, q) = 0 into an equation g(x, y, z, ux, uy, uz) = 0 so that the above procedure can be applied.
If u(x, y, z) = 0 is a relation between x, y and z that defines a solution z = z(x, y) of (9), then we have
ux + uz zx = 0 ⇒ ux + uz p = 0 ⇒ p = −ux/uz,
uy + uz zy = 0 ⇒ uy + uz q = 0 ⇒ q = −uy/uz.
Substituting
p = −ux/uz and q = −uy/uz
in (9), we obtain an equation
g(x, y, z, ux, uy, uz) = 0, (10)
which can be solved by Jacobi's method.
EXAMPLE 1. Find a complete integral of p²x + q²y = z by Jacobi's method.
Step 1: (Converting the given PDE into the form f(x, y, z, ux, uy, uz) = 0.)
Set p = −ux/uz and q = −uy/uz in the given PDE to obtain
(ux²/uz²)x + (uy²/uz²)y = z
⇒ xux² + yuy² − zuz² = 0.
Thus,
f(x, y, z, ux, uy, uz) = xux² + yuy² − zuz² = 0. (11)
Step 2: (Solving PDE (11) by Jacobi's method.)
Step 2(a): Compute fux, fuy, fuz, fx, fy, fz:
fux = 2xux, fuy = 2yuy, fuz = −2zuz, fx = ux², fy = uy², fz = −uz².
Step 2(b): (Writing the auxiliary equations and solving for ux, uy and uz.)
The auxiliary equations are given by
dx/fux = dy/fuy = dz/fuz = dux/(−fx) = duy/(−fy) = duz/(−fz)
⇒ dx/(2xux) = dy/(2yuy) = dz/(−2zuz) = dux/(−ux²) = duy/(−uy²) = duz/uz².
Now,
dx/(2xux) = dux/(−ux²) ⇒ ux dx = −2x dux
⇒ dx/x = −2 dux/ux
⇒ log x = −2 log(ux) + log(a)
⇒ log x + log(ux²) = log(a)
⇒ xux² = a ⇒ ux = (a/x)^{1/2}.
Similarly, we get
yuy² = b ⇒ uy = (b/y)^{1/2},
and, from (11), zuz² = xux² + yuy² = a + b, so
uz = [(a + b)/z]^{1/2}.
Step 2(c): (Solving the equation du = ux dx + uy dy + uz dz.)
du = (a/x)^{1/2}dx + (b/y)^{1/2}dy + ((a + b)/z)^{1/2}dz
⇒ u = 2(ax)^{1/2} + 2(by)^{1/2} + 2((a + b)z)^{1/2} + c. (12)
Step 3: (Finding the solution of the given PDE from the solution of PDE (11).)
Writing u = c in (12), we get the complete integral of the given PDE as
z = [(ax/(a + b))^{1/2} + (by/(a + b))^{1/2}]².
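This complete integral can be checked against the original PDE p²x + q²y = z (parameter values and names illustrative):

```python
import math

def jacobi_residual(x, y, a=2.0, b=3.0, eps=1e-6):
    """Check z = (sqrt(ax/(a+b)) + sqrt(by/(a+b)))^2 against p^2*x + q^2*y = z."""
    z = lambda x, y: (math.sqrt(a * x / (a + b)) + math.sqrt(b * y / (a + b))) ** 2
    p = (z(x + eps, y) - z(x - eps, y)) / (2 * eps)
    q = (z(x, y + eps) - z(x, y - eps)) / (2 * eps)
    return p * p * x + q * q * y - z(x, y)
```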
Practice Problems
Find the complete integrals of the following PDEs:
1. (p² + q²)y = qz
2. z² = pqxy
3. 2(y + zq) = q(xp + yq)
Module 3: Second-Order Partial Differential Equations
In Module 3, we shall discuss some general concepts associated with second-order linear PDEs. These types of PDEs arise in connection with various physical problems such as the motion of a vibrating string, heat flow, electricity, magnetism and fluid dynamics. Second-order partial differential equations are classified into three different types, and simplified canonical (or normal) forms are obtained for second-order linear equations in two independent variables.
The lectures are organized as follows. The first lecture is devoted to the classification of second-order linear PDEs in two or more independent variables. These equations are classified into hyperbolic, parabolic and elliptic types. The reduction to the canonical form (or normal form) of second-order equations in two independent variables is discussed in the second lecture. Finally, the third lecture is devoted to well-posedness, the superposition principle and the method of factorization for these types of equations.
MODULE 3: SECOND-ORDER PARTIAL DIFFERENTIAL EQUATIONS 2
Lecture 1 Classification of Second-Order PDEs
Classification of PDEs is an important concept because the general theory and methods
of solution usually apply only to a given class of equations. Let us first discuss the
classification of PDEs involving two independent variables.
1 Classification with two independent variables
Consider the following general second-order linear PDE in two independent variables:
A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + Fu + G = 0, (1)
where A, B, C, D, E, F and G are functions of the independent variables x and y. The equation (1) may be written in the form
A uxx + B uxy + C uyy + f(x, y, ux, uy, u) = 0, (2)
where
ux = ∂u/∂x, uy = ∂u/∂y, uxx = ∂²u/∂x², uxy = ∂²u/∂x∂y, uyy = ∂²u/∂y².
Assume that A, B and C are continuous functions of x and y possessing continuous partial
derivatives of as high order as necessary.
The classification of PDEs is motivated by the classification of second-order algebraic equations in two variables:
ax² + bxy + cy² + dx + ey + f = 0. (3)
We know that the nature of the curve is decided by the principal part ax² + bxy + cy², i.e., the terms of highest degree. Depending on the sign of the discriminant b² − 4ac, we classify the curve as follows:
If b² − 4ac > 0, then the curve traces a hyperbola.
If b² − 4ac = 0, then the curve traces a parabola.
If b² − 4ac < 0, then the curve traces an ellipse.
With a suitable transformation, we can transform (3) into one of the following normal forms:
x²/a² − y²/b² = 1 (hyperbola),
x² = y (parabola),
x²/a² + y²/b² = 1 (ellipse).
Linear PDE with constant coefficients. Let us first consider the following general linear second-order PDE in two independent variables x and y with constant coefficients:
A uxx + B uxy + C uyy + D ux + E uy + Fu + G = 0, (4)
where the coefficients A, B, C, D, E, F and G are constants. The nature of the equation (4) is determined by the principal part containing the highest partial derivatives, i.e.,
Lu ≡ A uxx + B uxy + C uyy. (5)
For classification, we attach a symbol to (5): P(x, y) = Ax² + Bxy + Cy² (as if we have replaced ∂/∂x by x and ∂/∂y by y). Now, depending on the sign of the discriminant B² − 4AC, the classification of (4) is done as follows:
B² − 4AC > 0 ⇒ Eq. (4) is hyperbolic, (6)
B² − 4AC = 0 ⇒ Eq. (4) is parabolic, (7)
B² − 4AC < 0 ⇒ Eq. (4) is elliptic. (8)
Linear PDE with variable coefficients. The above classification of (4) is still valid if the coefficients A, B, C, D, E and F depend on x and y. In this case, the conditions (6), (7) and (8) should be satisfied at each point (x, y) in the region where we want to describe the equation's nature; e.g., for ellipticity we need to verify
B²(x, y) − 4A(x, y)C(x, y) < 0
for each (x, y) in the region of interest. Thus, we classify linear PDEs with variable coefficients as follows:
B²(x, y) − 4A(x, y)C(x, y) > 0 at (x, y) ⇒ Eq. (4) is hyperbolic at (x, y), (9)
B²(x, y) − 4A(x, y)C(x, y) = 0 at (x, y) ⇒ Eq. (4) is parabolic at (x, y), (10)
B²(x, y) − 4A(x, y)C(x, y) < 0 at (x, y) ⇒ Eq. (4) is elliptic at (x, y). (11)
Note: Whether Eq. (4) is hyperbolic, parabolic, or elliptic depends only on the coefficients of the second derivatives. It has nothing to do with the first-derivative terms, the term in u, or the nonhomogeneous term.
EXAMPLE 1.
1. uxx + uyy = 0 (Laplace equation). Here A = 1, B = 0, C = 1 and B² − 4AC = −4 < 0. Therefore, it is of elliptic type.
2. ut = uxx (Heat equation). Here A = −1, B = 0, C = 0, so B² − 4AC = 0. Thus, it is of parabolic type.
3. utt − uxx = 0 (Wave equation). In this case, A = −1, B = 0, C = 1 and B² − 4AC = 4 > 0. Hence, it is of hyperbolic type.
4. uxx + xuyy = 0, x ≠ 0 (Tricomi equation). B² − 4AC = −4x. The given PDE is hyperbolic for x < 0 and elliptic for x > 0. This example shows that equations with variable coefficients can change type in different regions of the domain.
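The discriminant test above is mechanical and easy to encode; a minimal sketch (function name illustrative):

```python
def classify(A, B, C):
    """Classify A u_xx + B u_xy + C u_yy + (lower-order terms) = 0
    by the sign of the discriminant B^2 - 4AC."""
    d = B * B - 4 * A * C
    if d > 0:
        return "hyperbolic"
    if d == 0:
        return "parabolic"
    return "elliptic"

# The four examples above: Laplace, heat, wave, and Tricomi at x = -1 and x = 1
kinds = [classify(1, 0, 1), classify(-1, 0, 0), classify(-1, 0, 1),
         classify(1, 0, -1), classify(1, 0, 1)]
```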
2 Classification with more than two variables
Consider the second-order PDE in general form:
n∑i=1
n∑j=1
aij∂2u
∂xi∂xj+
n∑i=1
bi∂u
∂xi+ cu+ d = 0, (12)
where the coefficients aij , bi, c and d are functions of x = (x1, x2, · · · , xn) alone and u =
u(x1, x2, · · · , xn).
Its principal part is

L ≡ ∑_{i=1}^{n} ∑_{j=1}^{n} a_ij ∂²/∂x_i∂x_j.   (13)

It is enough to assume that A = [a_ij] is symmetric; if not, let ā_ij = (1/2)(a_ij + a_ji) and rewrite

L ≡ ∑_{i=1}^{n} ∑_{j=1}^{n} ā_ij ∂²/∂x_i∂x_j.   (14)

Note that ∂²u/∂x_i∂x_j = ∂²u/∂x_j∂x_i. As in two space dimensions, let us attach a quadratic form P to (14) (i.e., replace ∂/∂x_i by x_i):

P(x_1, x_2, . . . , x_n) = ∑_{i=1}^{n} ∑_{j=1}^{n} a_ij x_i x_j.   (15)
Since A is a real symmetric matrix (a_ij = a_ji), it is diagonalizable with real eigenvalues λ_1, λ_2, . . . , λ_n (counted with their multiplicities). In other words, there exists a corresponding orthonormal set of n eigenvectors, say σ_1, σ_2, . . . , σ_n, such that the matrix R = [σ_1, σ_2, . . . , σ_n] having these eigenvectors as its columns satisfies

R^T A R = diag(λ_1, λ_2, . . . , λ_n) = D.   (16)
We now classify (12) depending on the signs of the eigenvalues of A:

(a) If λ_i > 0 for all i, or λ_i < 0 for all i, then (12) is of elliptic type.

(b) If one or more of the λ_i = 0, then (12) is of parabolic type.

(c) If exactly one of the λ_i is negative (or positive) and all the remaining ones have the opposite sign, then (12) is said to be of hyperbolic type.
EXAMPLE 2.
1. ∇2u = uxx+uyy +uzz = 0. In this case, λi = 1 > 0 for all i = 1, 2, 3. It is an elliptic
PDE since all eigenvalues are of one sign.
2. It is an easy exercise to check that ut −∇2u = 0 is of parabolic type.
3. The equation utt −∇2u = 0 is of hyperbolic type.
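In practice the eigenvalue test is easy to run numerically. The sketch below (an illustration, assuming NumPy is available) symmetrizes the coefficient matrix and applies rules (a)-(c); the final branch covers sign patterns with at least two eigenvalues of each sign, usually called ultrahyperbolic, which the lecture's rules do not name:

```python
import numpy as np

def classify_nd(A, tol=1e-12):
    """Classify sum a_ij u_{x_i x_j} + (lower order) = 0 from the
    eigenvalue signs of the symmetrized coefficient matrix."""
    A = np.asarray(A, dtype=float)
    S = (A + A.T) / 2                  # symmetrize: a_ij -> (a_ij + a_ji)/2
    lam = np.linalg.eigvalsh(S)        # real eigenvalues of a symmetric matrix
    pos = int(np.sum(lam > tol))
    neg = int(np.sum(lam < -tol))
    zero = len(lam) - pos - neg
    if zero > 0:
        return "parabolic"             # rule (b): some eigenvalue vanishes
    if pos == len(lam) or neg == len(lam):
        return "elliptic"              # rule (a): all of one sign
    if pos == 1 or neg == 1:
        return "hyperbolic"            # rule (c): exactly one of opposite sign
    return "ultrahyperbolic"           # >= 2 eigenvalues of each sign (not in (a)-(c))

# The three operators of Example 2, in variables (x, y, z) or (x, y, t):
print(classify_nd(np.eye(3)))                  # u_xx + u_yy + u_zz: elliptic
print(classify_nd(np.diag([1.0, 1.0, 0.0])))   # u_t - u_xx - u_yy: parabolic
print(classify_nd(np.diag([1.0, -1.0, -1.0]))) # u_tt - u_xx - u_yy: hyperbolic
```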
EXAMPLE 3. Classify u_{x1x1} + 2(1 + cx_2) u_{x2x3} = 0.

To symmetrize, write it as

u_{x1x1} + (1 + cx_2) u_{x2x3} + (1 + cx_2) u_{x3x2} = 0,

i.e., ∂_x^T A ∂_x u − c u_{x3} = 0 (the first-order term arises from differentiating the variable coefficient), where

A = [ 1, 0, 0;  0, 0, 1 + cx_2;  0, 1 + cx_2, 0 ],   ∂_x = (∂_{x1}, ∂_{x2}, ∂_{x3})^T.

The eigenvalues are λ_1 = 1, λ_2 = 1 + cx_2, λ_3 = −(1 + cx_2), with normalized eigenvectors

σ_1 = (1, 0, 0)^T,  σ_2 = (0, 1/√2, 1/√2)^T,  σ_3 = (0, 1/√2, −1/√2)^T.

So

R = [ 1, 0, 0;  0, 1/√2, 1/√2;  0, 1/√2, −1/√2 ].

Note that R = R^T = R^{−1}, and

R^T A R = diag(1, 1 + cx_2, −(1 + cx_2)) = D.

Hence the equation is parabolic on the plane x_2 = −1/c (when c ≠ 0), and hyperbolic for x_2 ≠ −1/c. For c = 0, λ_1 = λ_2 = 1 and λ_3 = −1, so the equation is of hyperbolic type everywhere.
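The diagonalization above can be confirmed numerically (a sketch assuming NumPy; the choice c = 1, x_2 = 2, so that 1 + cx_2 = 3, is a hypothetical sample point, not from the lecture):

```python
import numpy as np

k = 3.0   # k stands for 1 + c*x2 at the sample point c = 1, x2 = 2
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, k],
              [0.0, k,   0.0]])
s = 1 / np.sqrt(2)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, s,   s],
              [0.0, s,  -s]])          # columns are the normalized eigenvectors

D = R.T @ A @ R
print(np.round(D, 12))                  # diagonal with entries 1, k, -k
assert np.allclose(D, np.diag([1.0, k, -k]))
assert np.allclose(R @ R.T, np.eye(3))  # R is orthogonal; here R = R^T = R^-1
```

One positive pair and one negative eigenvalue, so the point is in the hyperbolic region, as the classification predicts.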
Practice Problems
1. Classify the following equations into hyperbolic, elliptic or parabolic type.
(A) 5u_xx − 3u_yy + (cos x)u_x + e^y u_y + u = 0.

(B) e^x u_xx + e^y u_yy − u = 0.

(C) x u_xx + u_yy = 0.

(D) 8u_xx + u_yy − u_x + [log(2 + x²)]u = 0.

(E) sin²x u_xx + (sin 2x) u_xy + cos²x u_yy = x.
2. Classify the following equations into elliptic, parabolic, or hyperbolic type.
(A) u_xx + 2u_yz + (cos x)u_z − e^{y²}u = cosh z.

(B) u_xx + 2u_xy + u_yy + 2u_zz − (1 + xy)u = 0.

(C) e^z u_xy − u_xx = log[x² + y² + z² + 1].

3. Determine the regions where u_xx − 2x²u_xz + u_yy + u_zz = 0 is of hyperbolic, elliptic, and parabolic type.
Lecture 2 Canonical Forms or Normal Forms
By a suitable change of the independent variables we shall show that any equation of the
form
Auxx +Buxy + Cuyy +Dux + Euy + Fu+G = 0, (1)
where A, B, C, D, E, F and G are functions of the variables x and y, can be reduced to a
canonical form or normal form. The transformed equation assumes a simple form so that
the subsequent analysis of solving the equation becomes easy.
Consider the transformation of the independent variables from (x, y) to (ξ, η) given by

ξ = ξ(x, y), η = η(x, y).   (2)

Here, the functions ξ and η are continuously differentiable and the Jacobian

J = ∂(ξ, η)/∂(x, y) = | ξ_x ξ_y ; η_x η_y | = ξ_x η_y − ξ_y η_x ≠ 0   (3)

in the domain where (1) holds.
Using the chain rule, we obtain

u_x = u_ξ ξ_x + u_η η_x,
u_y = u_ξ ξ_y + u_η η_y,
u_xx = u_ξξ ξ_x² + 2u_ξη ξ_x η_x + u_ηη η_x² + u_ξ ξ_xx + u_η η_xx,
u_xy = u_ξξ ξ_x ξ_y + u_ξη(ξ_x η_y + ξ_y η_x) + u_ηη η_x η_y + u_ξ ξ_xy + u_η η_xy,
u_yy = u_ξξ ξ_y² + 2u_ξη ξ_y η_y + u_ηη η_y² + u_ξ ξ_yy + u_η η_yy.

Substituting these expressions into (1), we obtain

Ā(ξ_x, ξ_y) u_ξξ + B̄(ξ_x, ξ_y; η_x, η_y) u_ξη + C̄(η_x, η_y) u_ηη = F(ξ, η, u(ξ, η), u_ξ(ξ, η), u_η(ξ, η)),   (4)

where

Ā(ξ_x, ξ_y) = A ξ_x² + B ξ_x ξ_y + C ξ_y²,
B̄(ξ_x, ξ_y; η_x, η_y) = 2A ξ_x η_x + B(ξ_x η_y + ξ_y η_x) + 2C ξ_y η_y,
C̄(η_x, η_y) = A η_x² + B η_x η_y + C η_y².

An easy calculation shows that

B̄² − 4ĀC̄ = (ξ_x η_y − ξ_y η_x)²(B² − 4AC).   (5)
The equation (5) shows that the transformation of the independent variables does not
modify the type of PDE.
We shall determine ξ and η so that (4) takes the simplest possible form. We now
consider the following cases:
Case I: B2 − 4AC > 0 (Hyperbolic type)
Case II: B2 − 4AC = 0 (Parabolic type)
Case III: B2 − 4AC < 0 (Elliptic type)
Case I: Note that B² − 4AC > 0 implies that the equation Aα² + Bα + C = 0 has two real
and distinct roots, say λ_1 and λ_2. Now, choose ξ and η such that

∂ξ/∂x = λ_1 ∂ξ/∂y and ∂η/∂x = λ_2 ∂η/∂y.   (6)

Then the coefficients of u_ξξ and u_ηη will be zero because

Ā = A ξ_x² + B ξ_x ξ_y + C ξ_y² = (A λ_1² + B λ_1 + C) ξ_y² = 0,
C̄ = A η_x² + B η_x η_y + C η_y² = (A λ_2² + B λ_2 + C) η_y² = 0.

Thus, (5) reduces to

B̄² = (B² − 4AC)(ξ_x η_y − ξ_y η_x)² > 0,

as B² − 4AC > 0. Note that (6) is a pair of first-order linear PDEs for ξ and η, whose characteristic curves satisfy the first-order ODEs

dy/dx + λ_i(x, y) = 0, i = 1, 2.   (7)

Let the families of curves determined by the solutions of (7) for i = 1 and i = 2 be

f_1(x, y) = c_1 and f_2(x, y) = c_2,   (8)

respectively. These families of curves are called the characteristic curves of the PDE (1). With this choice, divide (4) throughout by B̄ (note B̄ ≠ 0, since B̄² > 0) and use (7)-(8) to obtain

∂²u/∂ξ∂η = ϕ(ξ, η, u, u_ξ, u_η),   (9)

which is the canonical form of the hyperbolic equation.
which is the canonical form of hyperbolic equation.
EXAMPLE 1. Reduce the equation u_xx = x²u_yy to its canonical form.

Solution. Comparing with (1), we find that A = 1, B = 0, C = −x².

The roots of the equation Aα² + Bα + C = 0, i.e., α² − x² = 0, are λ = ±x. The differential equations of the two families of characteristic curves are

dy/dx ± x = 0,

whose solutions are y + x²/2 = c_1 and y − x²/2 = c_2. Choose

ξ = y + x²/2, η = y − x²/2.

An easy computation shows that

u_x = u_ξ ξ_x + u_η η_x,
u_xx = u_ξξ ξ_x² + 2u_ξη ξ_x η_x + u_ηη η_x² + u_ξ ξ_xx + u_η η_xx
     = u_ξξ x² − 2u_ξη x² + u_ηη x² + u_ξ − u_η,
u_yy = u_ξξ ξ_y² + 2u_ξη ξ_y η_y + u_ηη η_y² + u_ξ ξ_yy + u_η η_yy
     = u_ξξ + 2u_ξη + u_ηη.

Substituting these expressions into the equation u_xx = x²u_yy yields

4x² u_ξη = u_ξ − u_η,

or, since ξ − η = x²,

u_ξη = (u_ξ − u_η)/(4(ξ − η)),

which is the required canonical form.
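The chain-rule computation in Example 1 can be checked symbolically. The sketch below (an illustration, assuming the SymPy library is available; w is an arbitrary smooth test function standing in for u(ξ, η)) verifies the identity u_xx − x²u_yy = −4x²u_ξη + u_ξ − u_η, which is equivalent to the canonical form above:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')      # a, b play the roles of xi, eta
xi, eta = y + x**2/2, y - x**2/2

w = sp.sin(a) * sp.exp(b)               # arbitrary smooth test function w(xi, eta)
u = w.subs({a: xi, b: eta})             # u(x, y) = w(xi(x, y), eta(x, y))

lhs = sp.diff(u, x, 2) - x**2 * sp.diff(u, y, 2)
w_xi  = sp.diff(w, a).subs({a: xi, b: eta})
w_eta = sp.diff(w, b).subs({a: xi, b: eta})
w_mix = sp.diff(w, a, b).subs({a: xi, b: eta})
rhs = -4 * x**2 * w_mix + (w_xi - w_eta)

assert sp.simplify(lhs - rhs) == 0
print("chain-rule identity verified")
```

Any other smooth choice of w gives the same cancellation, since the identity holds for all twice-differentiable functions.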
CASE II: B² − 4AC = 0 implies the equation Aα² + Bα + C = 0 has two equal roots, say
λ_1 = λ_2 = λ. Let f_1(x, y) = c_1 be the solution of dy/dx + λ(x, y) = 0. Take ξ = f_1(x, y) and
η to be any function of x and y which is independent of ξ.

As before, Ā(ξ_x, ξ_y) = 0, and hence from equation (5) we obtain B̄ = 0. Note that C̄(η_x, η_y) ≠ 0, otherwise η would be a function of ξ. Dividing (4) by C̄, the canonical form of (1) is

u_ηη = ϕ(ξ, η, u, u_ξ, u_η),   (10)

which is the canonical form of the parabolic equation.
EXAMPLE 2. Reduce the equation u_xx + 2u_xy + u_yy = 0 to canonical form.

Solution. In this case, A = 1, B = 2, C = 1. The equation α² + 2α + 1 = 0 has the repeated root λ = −1. The solution of dy/dx − 1 = 0 is x − y = c_1. Take ξ = x − y and choose η = x + y. Proceed as in Example 1 to obtain u_ηη = 0, which is the canonical form of the given PDE.
CASE III: When B² − 4AC < 0, the roots of Aα² + Bα + C = 0 are complex. Following the procedure of CASE I, we find that

u_ξη = ϕ_1(ξ, η, u, u_ξ, u_η).   (11)

The variables ξ, η are in fact complex conjugates. To get a real canonical form, use the transformation

α = (ξ + η)/2, β = (ξ − η)/(2i)

to obtain

u_ξη = (1/4)(u_αα + u_ββ),   (12)

which follows from the following calculation:

u_ξ = u_α α_ξ + u_β β_ξ = (1/2)u_α + (1/(2i))u_β,
u_ξη = (1/2)(u_αα α_η + u_αβ β_η) + (1/(2i))(u_βα α_η + u_ββ β_η)
     = (1/4)(u_αα + u_ββ).

The desired canonical form is

u_αα + u_ββ = ψ(α, β, u(α, β), u_α(α, β), u_β(α, β)).   (13)
EXAMPLE 3. Reduce the equation u_xx + x²u_yy = 0 to canonical form.

Solution. In this case, A = 1, B = 0, C = x². The roots of α² + x² = 0 are λ_1 = ix, λ_2 = −ix. Take ξ = iy + x²/2, η = −iy + x²/2. Then α = x²/2, β = y, and the canonical form is

u_αα + u_ββ = −(1/(2α)) u_α.
Practice Problems
1. Reduce the following equations to canonical/normal form:
(A) 2uxx − 4uxy + 2uyy + 3u = 0.
(B) uxx + yuyy = 0.
(C) uxy + ux + uy = 2x.
2. Show that the equation
uxx − 6uxy + 12uyy + 4ux − u = sin(xy)
is of elliptic type and obtain its canonical form.
3. Determine the regions where Tricomi’s equation uxx + xuyy = 0 is of elliptic,
parabolic, and hyperbolic types. Obtain its characteristics and its canonical form in
the hyperbolic region.
Lecture 3 Superposition Principle and Well-posedness
A very important fact concerning linear PDEs is the superposition principle, which is
stated below.
A linear PDE can be written in the form
L[u] = f, (1)
where L[u] denotes a linear combination of u and some of its partial derivatives, with
coefficients which are given functions of the independent variables.
DEFINITION 1. (Superposition principle) Let u_1 be a solution of the linear PDE

L[u] = f_1

and let u_2 be a solution of the linear PDE

L[u] = f_2.

Then, for any constants c_1 and c_2, c_1u_1 + c_2u_2 is a solution of

L[u] = c_1f_1 + c_2f_2.

That is,

L[c_1u_1 + c_2u_2] = c_1f_1 + c_2f_2.   (2)
In particular, when f1 = 0 and f2 = 0, (2) implies that if u1 and u2 are solutions of the
homogeneous linear PDE L[u] = 0, then c1u1 + c2u2 will also be a solution of L[u] = 0.
EXAMPLE 2. Observe that u_1(x, y) = x³ is a solution of the linear PDE u_xx − u_y = 6x, and u_2(x, y) = y² is a solution of u_xx − u_y = −2y. Then, by the superposition principle, it is easy to verify that 3u_1(x, y) − 4u_2(x, y) is a solution of u_xx − u_y = 18x + 8y.
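This verification takes a few lines of symbolic computation (a sketch assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')
L = lambda u: sp.diff(u, x, 2) - sp.diff(u, y)   # the operator L[u] = u_xx - u_y

u1, u2 = x**3, y**2
assert sp.simplify(L(u1) - 6*x) == 0                       # L[u1] = 6x
assert sp.simplify(L(u2) + 2*y) == 0                       # L[u2] = -2y
assert sp.simplify(L(3*u1 - 4*u2) - (18*x + 8*y)) == 0     # superposition
print("superposition verified")
```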
REMARK 3. Note that the principle of superposition is not valid for nonlinear partial
differential equations. This failure makes it difficult to form families of new solutions
from an original pair of solutions.
EXAMPLE 4. Consider the nonlinear first-order PDE u_x u_y − u(u_x + u_y) + u² = 0. Note that e^x and e^y are two solutions of this equation. However, c_1e^x + c_2e^y will not be a solution, unless c_1 = 0 or c_2 = 0.

Solution. Define D[u] := (u_x − u)(u_y − u), so that the PDE reads D[u] = 0. For any u, v ∈ C¹, we have

D[u + v] = (u_x + v_x − u − v)(u_y + v_y − u − v)
         = D[u] + D[v] + (u_y − u)(v_x − v) + (u_x − u)(v_y − v).

The computation shows that D[u + v] ≠ D[u] + D[v] in general. Taking u = c_1e^x and v = c_2e^y, an easy computation shows that

D[c_1e^x + c_2e^y] = D[c_1e^x] + D[c_2e^y] + (−c_1e^x)(−c_2e^y) = c_1c_2 e^{x+y}.

Thus, D[c_1e^x + c_2e^y] = 0 only if c_1 = 0 or c_2 = 0.
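The failure of superposition here can also be checked symbolically (a sketch assuming SymPy is available):

```python
import sympy as sp

x, y, c1, c2 = sp.symbols('x y c1 c2')
D = lambda u: (sp.diff(u, x) - u) * (sp.diff(u, y) - u)   # D[u] = (u_x - u)(u_y - u)

assert sp.simplify(D(sp.exp(x))) == 0    # e^x solves D[u] = 0
assert sp.simplify(D(sp.exp(y))) == 0    # e^y solves D[u] = 0

combo = c1*sp.exp(x) + c2*sp.exp(y)
# The combination produces a nonzero residual c1*c2*e^(x+y):
assert sp.simplify(D(combo) - c1*c2*sp.exp(x + y)) == 0
print("residual D[c1 e^x + c2 e^y] = c1*c2*e^(x+y)")
```

The residual vanishes only when c_1 = 0 or c_2 = 0, exactly as computed above.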
1 Well-posed problems

A set of conditions was proposed by Hadamard (cf. [12]), who listed three requirements that must be met when formulating an initial and/or boundary value problem. A problem for which the PDE and the data lead to a solution is said to be well posed or correctly posed if the following three conditions are satisfied:

1. The solution must exist.

2. The solution should be unique.

3. The solution should depend continuously on the initial and/or boundary data.

A problem that fails to meet these requirements is said to be incorrectly posed (or ill posed).
The conditions (1)-(2) require that the equation plus the data for the problem must
be such that one and only one solution exists. The third condition states that a small
variation of the data for the problem should cause small variation in the solution. As data
are generally obtained experimentally and may be subject to numerical approximations,
we require that the solution be stable under small variations in initial and/or boundary
values. That is, we cannot allow large variations to occur in the solution if the data are
altered slightly.
A simple example of an ill-posed problem is given below.
EXAMPLE 5. The Cauchy problem for Laplace's equation in y ≥ 0:

u_xx + u_yy = 0,   (3)
u(x, 0) = 0,   (4)
u_y(x, 0) = (1/n) sin nx,   (5)

where n is a positive integer, is not well posed.
The solution is given by u(x, y) = (1/n²) sin(nx) sinh(ny). Now, as n → ∞, u_y(x, 0) → 0, so that for large n the Cauchy data u(x, 0) and u_y(x, 0) can be made arbitrarily small in magnitude. However, the solution u(x, y) oscillates with an amplitude that grows exponentially like e^{ny} as n → ∞. Thus, arbitrarily small data can lead to arbitrarily large variations in the solution, and hence the solution is unstable. This violates condition (3), i.e., the continuous dependence of the solution on the data.
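The instability is easy to quantify: the data have size sup|u_y(x, 0)| = 1/n, while the solution at height y = 1 has amplitude sinh(n)/n². A few lines of arithmetic (illustrative, not part of the lecture) make the blow-up visible:

```python
import math

for n in (1, 5, 10, 20):
    data_size = 1 / n                       # sup norm of the Cauchy data u_y(x, 0)
    amplitude = math.sinh(n) / n**2         # amplitude of u(x, 1) = sin(nx) sinh(n)/n^2
    print(f"n = {n:2d}   data size = {data_size:.3f}   amplitude at y = 1: {amplitude:.3e}")

# By n = 20 the data are tiny while the solution is enormous:
assert math.sinh(20) / 20**2 > 1e4
```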
Boundary value problems are not well posed for hyperbolic and parabolic equations. This is because these are, in general, equations whose solutions evolve in time, so that their behavior at later times is determined by their earlier states.
EXAMPLE 6. Consider the hyperbolic equation

u_xy = 0 in 0 < x < 1, 0 < y < 1

with the boundary conditions

u(x, 0) = f_1(x), u(x, 1) = f_2(x) for 0 ≤ x ≤ 1,
u(0, y) = g_1(y), u(1, y) = g_2(y) for 0 ≤ y ≤ 1.

We shall show that this problem has no solution if the data are prescribed arbitrarily. Since u_xy = 0 implies that u_x(x, y) is independent of y, we have

u_x(x, 0) = u_x(x, 1).

In view of the given boundary conditions, we have

u_x(x, 0) = f_1′(x) and u_x(x, 1) = f_2′(x).

Thus, unless f_1(x) and f_2(x) are prescribed such that f_1′(x) = f_2′(x), the BVP cannot be solved. Therefore, it is incorrectly posed.
2 Method of factorization

No general method is available for obtaining general solutions of second-order PDEs. However, a second-order PDE can sometimes be factorized into two first-order equations. The equations

u_ξη = 0,
y u_xx + (x + y) u_xy + x u_yy = 0

are examples of such equations. It is often much easier to factorize an equation when it is in its canonical form, but equations with constant coefficients can often be factorized directly. The method of factorization can be a useful method of solution for hyperbolic and parabolic equations.
EXAMPLE 7. The equation

u_xx − u_yy + 4(u_x + u) = 0

can be written as

(∂/∂x + ∂/∂y + 2)(∂/∂x − ∂/∂y + 2)u = 0.

It is equivalent to the pair of first-order equations

u_x − u_y + 2u = v,
v_x + v_y + 2v = 0.
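The factorization in Example 7 can be expanded and checked symbolically (a sketch assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)

inner = sp.diff(u, x) - sp.diff(u, y) + 2*u              # (d/dx - d/dy + 2) u
outer = sp.diff(inner, x) + sp.diff(inner, y) + 2*inner  # then (d/dx + d/dy + 2)

original = sp.diff(u, x, 2) - sp.diff(u, y, 2) + 4*(sp.diff(u, x) + u)
assert sp.simplify(sp.expand(outer - original)) == 0     # the products agree
print("factorization verified")
```

The mixed derivatives u_xy from the two factors cancel, leaving exactly u_xx − u_yy + 4u_x + 4u.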
EXAMPLE 8. The hyperbolic equation

ac u_xy + a u_x + c u_y + u = 0

can be written as

(a ∂/∂x + 1)(c ∂/∂y + 1)u = 0.

It is equivalent to

c u_y + u = v,
a v_x + v = 0.

Note: Unlike the case of constant coefficients, when the coefficients are variable the differential operators need not commute.
Practice Problems
1. If u_1(x, y) = x² solves u_xx + u_yy = 2 and u_2(x, y) = cx³ + dy³ solves u_xx + u_yy = 6cx + 6dy for real constants c and d, then find a solution of u_xx + u_yy = ax + by + c for given real constants a, b and c.
2. Let u_1(x, y) be the solution of the Cauchy problem

u_xx + u_yy = 0,
u(x, 0) = f(x),
u_y(x, 0) = g(x),

and let u_2(x, y) be the solution of the Cauchy problem

u_xx + u_yy = 0,
u(x, 0) = f(x),
u_y(x, 0) = g(x) + (1/n) sin(nx).

Show that u_2 − u_1 = (1/n²) sinh(ny) sin(nx), and conclude that the solution of the Cauchy problem for Laplace's equation does not depend continuously on the initial data.
3. Show that the Dirichlet problem for the wave equation

u_xx − u_yy = 0, 0 < x < l, 0 < y < T,
u(0, y) = u(l, y) = 0, 0 ≤ y ≤ T,
u(x, 0) = u(x, T) = 0, 0 ≤ x ≤ l,

is not well posed.
Module 4: Fourier Series

Periodic functions occur frequently in engineering problems. Their representation in terms of simple periodic functions, such as sine and cosine, leads to Fourier series (FS). Fourier series is a very powerful tool in connection with various problems involving partial differential equations. Applications of Fourier series to solving PDEs are discussed in the subsequent module. In this module, we shall learn basic concepts, facts and techniques in connection with Fourier series.

Module 4 is organized as follows. The first lecture introduces the FS; the convergence of FS and the properties of termwise differentiation and integration of FS are discussed in the second lecture. The third lecture is devoted to the Fourier sine series (FSS) and Fourier cosine series (FCS) of functions.
MODULE 4: FOURIER SERIES 2
Lecture 1 Introduction to Fourier Series
In this lecture, we shall discuss a class of expansions that is particularly useful in the study of solutions of PDEs. To begin with, we review some properties of functions that are particularly relevant to this study.
DEFINITION 1. (Periodic function) A function f is periodic with period L > 0 if f(x + L) = f(x) for all x in the domain of f.

The smallest positive value of L is called the fundamental period. The trigonometric functions sin x and cos x are examples of periodic functions with fundamental period 2π, and tan x is periodic with fundamental period π. A constant function is periodic with arbitrary period L.
It is easy to verify that if the functions f_1, . . . , f_n are periodic with period L, then any linear combination

c_1f_1(x) + · · · + c_nf_n(x)

is also periodic with period L. Furthermore, if the infinite series

(1/2)a_0 + ∑_{n=1}^∞ [a_n cos(nπx/L) + b_n sin(nπx/L)],

consisting of 2L-periodic functions, converges for all x, then the function to which it converges will be periodic with period 2L.
There are two symmetry properties of functions that will be useful in the study of
Fourier series.
DEFINITION 2. (Even and odd functions) Let f : [−L, L] → R. Then f(x) is called even if f(−x) = f(x) for all x ∈ [−L, L], and f(x) is called odd if f(−x) = −f(x) for all x ∈ [−L, L].
Note: The graph of an even function is symmetric with respect to the y-axis. Note that
if (x, f(x)) is on the graph of an even function f(x), then (−x, f(x)) will also be on the
graph (i.e., the graph is invariant under reflection in the y-axis), see Figure 4.1.
The graph of an odd function is symmetric with respect to the origin. If f(x) is odd,
then (x, f(x)) is on the graph if and only if (−x,−f(x)) is on the graph. That is, the
graph is invariant under reflection through the origin, see Figure 4.2.
EXAMPLE 3. The functions f(x) = x^{2n}, n = 0, 1, 2, . . . , are even, whereas the functions f(x) = x^{2n+1}, n = 0, 1, 2, . . . , are odd. The functions sin x and tan x are odd, and cos x is even.
We collect some facts concerning even and odd functions.
• The product of two even functions is even.
• The product of two odd functions is even.
• The product of an odd function and an even function is odd.
• If f(x) is odd on −L ≤ x ≤ L, then ∫_{−L}^{L} f(x) dx = 0, if the integral exists.

• If f(x) is even on −L ≤ x ≤ L, then ∫_{−L}^{L} f(x) dx = 2 ∫_{0}^{L} f(x) dx, if the integral exists.
It is easy to verify that

∫_{−L}^{L} sin(nπx/L) dx = 0,  ∫_{−L}^{L} cos(nπx/L) dx = 0,  n = 1, 2, . . . .

For m, n = 1, 2, . . . , we have

∫_{−L}^{L} sin(mπx/L) cos(nπx/L) dx = 0,   (1)

∫_{−L}^{L} sin(mπx/L) sin(nπx/L) dx = 0 if m ≠ n, and = L if m = n,   (2)

∫_{−L}^{L} cos(mπx/L) cos(nπx/L) dx = 0 if m ≠ n, and = L if m = n.   (3)

Equations (1)-(3) express an orthogonality condition satisfied by the set of trigonometric functions cos x, sin x, cos 2x, sin 2x, . . . on [−π, π] (the case L = π).
DEFINITION 4. (Orthogonal functions) A set of functions {f_n(x)}_{n=1}^∞ is said to be orthogonal with respect to a nonnegative weight function w(x) on the interval [a, b] if

∫_a^b f_m(x) f_n(x) w(x) dx = 0 whenever m ≠ n.   (4)
We have already seen that the set of trigonometric functions

1, cos x, sin x, cos 2x, sin 2x, . . .

is orthogonal on [−π, π] with respect to the weight function w(x) = 1.
Define the norm of f as

‖f‖ := [∫_a^b f²(x) w(x) dx]^{1/2}.

We say that a set of functions {f_n(x)}_{n=1}^∞ is an orthonormal system with respect to w(x) if (4) holds and ‖f_n‖ = 1 for each n. That is,

∫_a^b f_m(x) f_n(x) w(x) dx = 0 if m ≠ n, and = 1 if m = n.   (5)
An infinite series of the form

(1/2)a_0 + ∑_{n=1}^∞ [a_n cos(nπx/L) + b_n sin(nπx/L)],   (6)

where

a_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx, n = 0, 1, 2, 3, . . . ,

and

b_n = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx, n = 1, 2, 3, . . . ,

is called the Fourier series of f(x). This series is named after the outstanding French mathematical physicist Joseph Fourier (1768-1830).
Suppose that f(x) is of the form

f(x) = (1/2)a_0 + ∑_{n=1}^N [a_n cos(nπx/L) + b_n sin(nπx/L)].   (7)

Then the coefficients a_n and b_n are uniquely determined by the formulas

a_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx, n = 0, 1, . . . ,   (8)

b_n = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx, n = 1, 2, . . . .   (9)
REMARK 5.

• Not every function f(x) has a representation of the form (7). The right side of (7) is smooth, i.e., C^∞ (infinitely differentiable), but many functions have graphs with jumps or corners. We will encounter functions f(x) for which the integrals (8) and (9) are nonzero for infinitely many values of n; such an f(x) cannot be represented as a finite sum as in (7). Also, even if N → ∞, the sum (7) might not converge to f(x), unless some additional assumptions are made (cf. [1]).
DEFINITION 6. (Fourier series) Let f : [−L, L] → R be such that the integrals

a_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx, n = 0, 1, 2, 3, . . . ,   (10)

and

b_n = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx, n = 1, 2, 3, . . . ,   (11)

exist and are finite. Then the Fourier series (FS) of f on [−L, L] is the expression

f(x) ∼ (1/2)a_0 + ∑_{n=1}^∞ [a_n cos(nπx/L) + b_n sin(nπx/L)].   (12)

The coefficients a_0, a_n, b_n (n = 1, 2, 3, . . .) are known as the Fourier coefficients of f. The symbol “∼” means “has the Fourier series”.
EXAMPLE 7. Find the FS of

f(x) = { −1, −π < x < 0;  1, 0 < x < π }.

Solution. Here L = π. Note that f(x) is an odd function. Since the product of an odd function and an even function is odd, f(x) cos nx is an odd function. Hence

a_n = (1/π) ∫_{−π}^{π} f(x) cos nx dx = 0, n = 0, 1, 2, . . . .

Since f(x) sin nx is an even function (being the product of two odd functions), we have

b_n = (1/π) ∫_{−π}^{π} f(x) sin nx dx = (2/π) ∫_0^π sin nx dx
    = (2/π) [−(cos nx)/n]_0^π
    = (2/π) [1/n − (−1)^n/n], n = 1, 2, 3, . . . ,
    = { 0, n even;  4/(nπ), n odd }.

Thus

f(x) ∼ (2/π) ∑_{n=1}^∞ ([1 − (−1)^n]/n) sin nx = (4/π) [sin x + (1/3) sin 3x + (1/5) sin 5x + · · · ].
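The closed-form coefficients can be recovered numerically by quadrature. The sketch below (an illustration assuming NumPy; a hand-rolled trapezoid rule is used to avoid NumPy version differences) computes b_n for the square wave of Example 7 and compares with 4/(nπ) for odd n and 0 for even n:

```python
import numpy as np

def trapz(y, t):
    """Composite trapezoid rule for samples y on the grid t."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2)

x = np.linspace(-np.pi, np.pi, 200001)
f = np.sign(x)                 # the square wave: -1 on (-pi, 0), +1 on (0, pi)

for n in range(1, 7):
    bn = trapz(f * np.sin(n * x), x) / np.pi
    exact = 4 / (n * np.pi) if n % 2 == 1 else 0.0
    print(f"b_{n} = {bn:+.6f}   exact: {exact:+.6f}")
    assert abs(bn - exact) < 1e-6
```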
REMARK 8.

• If f is an odd function, then its FS consists only of sine terms (see Example 7).

• If f is an even function, then its FS consists only of cosine terms (including the constant term a_0/2).
EXAMPLE 9. Find the FS of the function f(x) = x for −L ≤ x ≤ L.
Solution. We first compute the Fourier coefficients a_n for n ≥ 1. Integrating by parts,

a_n = (1/L) ∫_{−L}^{L} x cos(nπx/L) dx
    = (1/L) [ (xL/(nπ)) sin(nπx/L) + (L/(nπ))² cos(nπx/L) ]_{−L}^{L}
    = 0, n = 1, 2, 3, . . . ,

since sin(±nπ) = 0 and the cosine term takes the same value at both endpoints. For n = 0, we get

a_0 = (1/L) ∫_{−L}^{L} x dx = (1/L) [x²/2]_{−L}^{L} = 0.

Thus a_n = 0 for n = 0, 1, 2, 3, . . . (as expected, since x cos(nπx/L) is odd). Next, to compute b_n, we have

b_n = (1/L) ∫_{−L}^{L} x sin(nπx/L) dx
    = (1/L) [ −(xL/(nπ)) cos(nπx/L) + (L/(nπ))² sin(nπx/L) ]_{−L}^{L}
    = −(2L/(nπ)) cos(nπ)
    = (2L/(nπ)) (−1)^{n+1}, n = 1, 2, 3, . . . ,

where we have used the fact cos(nπ) = (−1)^n. Thus, the FS of f(x) is given by

f(x) ∼ ∑_{n=1}^∞ (−1)^{n+1} (2L/(nπ)) sin(nπx/L) = (2L/π) ∑_{n=1}^∞ ((−1)^{n+1}/n) sin(nπx/L).
Practice Problems
1. Find the FS of the following functions:

(a) f(x) = { 0, −2 ≤ x ≤ −1;  1, −1 < x ≤ 2 };   (b) f(x) = { x², −1 ≤ x ≤ 0;  1 + x, 0 < x ≤ 1 };

(c) f(x) = |sin x|, −π < x < π.

2. If the 2π-periodic even function is given by f(x) = |x| for −π < x < π, show that

f(x) = π/2 − (4/π) ∑_{n=1}^∞ cos((2n − 1)x)/(2n − 1)².

3. Show that

1 − 1/3 + 1/5 − 1/7 + · · · = π/4.
Lecture 2 Convergence of Fourier Series
In this lecture, we shall discuss the convergence of FS (without proofs) and the properties
of termwise differentiation and integration.
1 Convergence of FS for continuous functions
In Example 9 (cf. Lecture 1 of Module 4), notice that at x = 0 the FS converges to 0 = f(0). However, at x = ±L every term of the series vanishes, so the FS converges to 0, whereas f(±L) = ±L. Thus, the FS of f(x) does not converge to f(x) at x = ±L.
Under various assumptions on a function f(x), one can prove that the Fourier series of f(x) does converge to f(x) for all x in [−L, L]. We now have the following results:

THEOREM 1. (The convergence of FS) Let f ∈ C²([−L, L]) (a twice continuously differentiable function on the interval [−L, L]) be such that f(−L) = f(L) and f′(−L) = f′(L). Let a_n and b_n be the Fourier coefficients of f(x), and let

M = max_{x∈[−L,L]} |f′′(x)|.

Then for any N ≥ 1,

| f(x) − ( (1/2)a_0 + ∑_{n=1}^N [a_n cos(nπx/L) + b_n sin(nπx/L)] ) | ≤ 4L²M/(π²N),   (1)

for all x ∈ [−L, L].
for all x ∈ [−L,L].REMARK 2.
• As N → ∞, the right hand side of (1) → 0, which implies that the sum in brackets
on the left side of (1) converges to f(x) as N → ∞, i.e.,
f(x) ∼ 1
2a0 +
∞∑n=1
[an cos(
nπx
L) + bn sin(
nπx
L)].
• The inequality (1) tells us how many terms of Fourier Series f(x) will suffice, in
order to approximate f(x) to within a certain error.
• The conditions f(−L) = f(L) and f ′(−L) = f ′(L) ensure that if the interval [−L,L]is bent into a circle by joining x = −L to x = L, then the graph of f(x) above this
circle is continuous and has a well defined tangent line (or derivative) at the juncture
where −L and L are identified.
Periodic extensions of functions: We begin with some facts concerning periodic functions and periodic extensions of functions.

DEFINITION 3. Let f : [−L, L] → R be such that f(−L) = f(L). Then the periodic extension of f(x) is the unique periodic function f̄(x) of period 2L such that f̄(x) = f(x) for −L ≤ x ≤ L.

For the periodic extension to exist, we must have f(−L) = f(L). Thus, for example, the function f(x) = x, defined for −L ≤ x ≤ L, does not have a periodic extension. To remedy this situation, we redefine

f(±L) = (1/2)[f(L−) + f(−L+)],

where

f(L−) := lim_{x→L−} f(x) = lim_{h→0+} f(L − h)

and

f(−L+) := lim_{x→(−L)+} f(x) = lim_{h→0+} f(−L + h).
We now state a sequence of convergence results without proofs. For a proof, readers
may refer to [1, 9].
THEOREM 4. (Pointwise convergence of FS) Let f ∈ C¹([−L, L]) and assume that f(−L) = f(L) and f′(−L) = f′(L). Then the FS of f converges to f(x) for every x ∈ [−L, L].

That is, if the N-th partial sum of the Fourier series of f(x) is denoted by

S_N(x) = (1/2)a_0 + ∑_{n=1}^N [a_n cos(nπx/L) + b_n sin(nπx/L)],

where

a_n = (1/L) ∫_{−L}^{L} f(y) cos(nπy/L) dy and b_n = (1/L) ∫_{−L}^{L} f(y) sin(nπy/L) dy,

then, for any fixed x ∈ [−L, L], we have

(1/2)a_0 + ∑_{n=1}^∞ [a_n cos(nπx/L) + b_n sin(nπx/L)] = lim_{N→∞} S_N(x) = f(x).
REMARK 5. When f and f′ are merely piecewise continuous on [−L, L], the Fourier series still converges to f(x) at every point x where f is continuous, and it converges to the average of the left- and right-hand limits of f at points where f is discontinuous.
THEOREM 6. (Uniform convergence of FS) Let f ∈ C²([−L, L]) be such that f(−L) = f(L) and f′(−L) = f′(L). Then the FS of f(x) converges uniformly to f(x).

That is, the sequence of partial sums S_1(x), S_2(x), . . . of the Fourier series of f(x) converges uniformly to f(x). Indeed,

|f(x) − S_N(x)| ≤ 4L²M/(π²N) for all x ∈ [−L, L],

where M = max_{−L≤x≤L} |f′′(x)|.
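The error bound of Theorem 6 can be illustrated numerically (a sketch assuming NumPy; the test function is my own choice, not from the lecture). Take f(x) = (1 − x²)² on [−1, 1], which is C² with f(−1) = f(1) and f′(−1) = f′(1), so the theorem applies with L = 1 and M = max|f′′| = max|12x² − 4| = 8:

```python
import numpy as np

L, M, N = 1.0, 8.0, 10
x = np.linspace(-L, L, 20001)
f = (1 - x**2) ** 2

def coeff(g):
    """(1/L) * integral of g over [-L, L], by the trapezoid rule."""
    return float(np.sum((g[1:] + g[:-1]) * np.diff(x)) / 2) / L

SN = coeff(f) / 2 * np.ones_like(x)          # the a_0/2 term
for n in range(1, N + 1):
    an = coeff(f * np.cos(n * np.pi * x / L))
    SN += an * np.cos(n * np.pi * x / L)     # b_n = 0 since f is even

err = float(np.max(np.abs(f - SN)))
bound = 4 * L**2 * M / (np.pi**2 * N)
print(f"max error = {err:.2e}, bound = {bound:.2e}")
assert err <= bound
```

The observed error is far below the bound, which is typical: (1) is a worst-case estimate.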
2 Convergence of FS for piecewise continuous functions
DEFINITION 7. A function f : [a, b] → R is piecewise continuous on [a, b] if

(i) f(x) is continuous at all but a finite number of points in [a, b],

(ii) for every x_0 ∈ (a, b), the limits f(x_0⁺) and f(x_0⁻) exist, and

(iii) the limits f(a⁺) and f(b⁻) exist.

EXAMPLE 8. The function

f(x) = { x, 0 < x < 1;  2, 1 < x < 2;  (x − 2)², 2 ≤ x ≤ 3 }

is piecewise continuous on [0, 3].
DEFINITION 9. f(x) is piecewise C1 on [a, b] if f(x) and f ′(x) are piecewise continuous
on [a, b].
Note: If f ′(x) is piecewise continuous on [a, b], then f(x) is automatically piecewise
continuous on [a, b].
The following definition is convenient in the formulation of the convergence theorem
for FS of piecewise C1 functions.
DEFINITION 10. Let f : [−L, L] → R be a piecewise C¹ function. Define the adjusted function f̄(x) as follows:

f̄(x) = { (1/2)[f(x⁺) + f(x⁻)], −L < x < L;  (1/2)[f(−L⁺) + f(L⁻)], x = ±L }.   (2)

Note: The above definition says that f̄(x) coincides with f(x) at all points in (−L, L) where f(x) is continuous, while f̄(x) is the average of the left-hand and right-hand limits of f(x) at points of discontinuity in (−L, L). The value of f̄(x) at x = ±L can be thought of as such an average, if we bend the interval [−L, L] into a circle.
THEOREM 11. Let f : [−L, L] → R be a piecewise C¹ function and let f̄(x) be the adjusted function defined in (2). Then the FS of f converges to f̄(x) for all x ∈ [−L, L]. In fact, we have

FS of f(x) = periodic extension of f̄(x), −∞ < x < ∞.

If f(x) is piecewise C¹, has no discontinuities in [−L, L], and f(−L) = f(L), then there is still hope for uniform convergence of the FS of f(x) to f(x) on [−L, L]. The following theorem provides it for such functions.

THEOREM 12. Let f : [−L, L] → R be a continuous, piecewise C¹ function such that f(−L) = f(L). Then the FS of f(x) converges uniformly to f(x) on [−L, L]. That is,

max_{−L≤x≤L} |f(x) − S_N(x)| → 0 as N → ∞,   (3)

where S_N(x) is the N-th partial sum of the FS of f(x).
REMARK 13.
• Note that in the above theorem we have not assumed that f′(x) is continuous everywhere; the graph of f(x) may have corners. However, the continuity assumption ensures that the graph of f(x) has no gaps. Moreover, the assumption f(−L) = f(L) ensures that the periodic extension of f(x) has no gaps (i.e., f(x) yields a continuous function on the circle of circumference 2L).
3 Differentiation and integration of FS
The term-by-term differentiation of a Fourier series is not always permissible. For example, the FS of f(x) = x, −π < x < π (see Example 9 with L = π), is

f(x) ∼ 2 ∑_{n=1}^∞ (−1)^{n+1} (sin nx)/n,

which converges for all x, whereas its term-by-term derived series

2 ∑_{n=1}^∞ (−1)^{n+1} cos nx

diverges for every x.
The following theorem gives sufficient conditions for using termwise differentiation.
THEOREM 14. (Differentiation of FS) Let f : R → R be continuous with f(x + 2L) = f(x), and let f′(x) and f′′(x) be piecewise continuous on [−L, L]. Then the FS of f′(x) can be obtained from the FS of f(x) by termwise differentiation. In particular, if

f(x) = a_0/2 + ∑_{n=1}^∞ [a_n cos(nπx/L) + b_n sin(nπx/L)],

then

f′(x) ∼ ∑_{n=1}^∞ (nπ/L) [−a_n sin(nπx/L) + b_n cos(nπx/L)].
Notice that the above theorem does not apply to the function f(x) = x considered above, as its 2π-periodic extension fails to be continuous on (−∞, ∞).
Termwise integration of a FS is permissible under much weaker conditions.
THEOREM 15. (Integration of FS) Let f : [−L, L] → R be a piecewise continuous function with FS

f(x) ∼ a_0/2 + ∑_{n=1}^∞ [a_n cos(nπx/L) + b_n sin(nπx/L)].

Then, for any x ∈ [−L, L], we have

∫_{−L}^{x} f(t) dt = ∫_{−L}^{x} (a_0/2) dt + ∑_{n=1}^∞ ∫_{−L}^{x} [a_n cos(nπt/L) + b_n sin(nπt/L)] dt.
Practice Problems
1. Discuss the convergence of the FS of the following functions f:

(a) f(x) = { x, 0 < x < π;  π − x, π < x < 2π };   (b) f(x) = { x, 0 < x < π;  0, π < x ≤ 2π }.

2. Determine the FS of the following functions by differentiating the appropriate FS:

(i) sin² x, 0 < x < π;   (ii) cos x + cos 2x, 0 < x < π.

3. Differentiate the FS of f(x) = |sin x| term by term to prove that

cos x = −(8/π) ∑_{n=1}^∞ n sin(2nx)/(1 − 4n²).

4. Find the function represented by the new series obtained by termwise integration of the following series from 0 to x:

∑_{k=1}^∞ (−1)^{k+1} cos(kx)/k = ln(2 cos(x/2)), −π < x < π.
Lecture 3 Fourier Cosine and Sine Series
Recall that the FS of an odd/even function defined on [−L, L] consists entirely of sine/cosine terms. A function f defined on 0 ≤ x ≤ L can often be extended to the interval [−L, L] in such a way that the extended function is odd/even. This leads to the following definitions.

DEFINITION 1. (Even and odd extensions) Let f : [0, L] → R. The even extension of f(x) is the unique even function f_e(x) defined for x ∈ [−L, L] with f_e(x) = f(x) for x ∈ [0, L], i.e.,

f_e(x) = { f(x) if 0 ≤ x ≤ L;  f(−x) if −L ≤ x ≤ 0 }.

If f(0) = 0, then the odd extension f_o(x) is the unique odd function defined for x ∈ [−L, L] such that f_o(x) = f(x) for x ∈ [0, L], i.e.,

f_o(x) = { f(x) if 0 ≤ x ≤ L;  −f(−x) if −L ≤ x ≤ 0 }.

An example of the even and odd extensions of f(x) = √x, 0 ≤ x ≤ L, is given in Fig. 4.1.

Figure 4.1: f(x) = √x, 0 ≤ x ≤ L; even extension f_e(x); odd extension f_o(x)
THEOREM 2. Let f : [−L, L] → R be a function with Fourier coefficients

a_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx and b_n = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx.

If f(x) is even, then b_n = 0 (n = 1, 2, 3, . . .) and

a_n = (2/L) ∫_0^L f(x) cos(nπx/L) dx (n = 0, 1, 2, . . .).   (1)

If f(x) is odd, then a_n = 0 (n = 0, 1, 2, . . .) and

b_n = (2/L) ∫_0^L f(x) sin(nπx/L) dx (n = 1, 2, 3, . . .).   (2)
DEFINITION 3. Let f : [0, L] → R be such that the integrals in (1) and (2) exist. Then the Fourier sine series (FSS) of f(x) is the expression

FSS of f(x) = ∑_{n=1}^∞ b_n sin(nπx/L), where b_n = (2/L) ∫_0^L f(x) sin(nπx/L) dx.   (3)

The Fourier cosine series (FCS) of f(x) is the expression

FCS of f(x) = (1/2)a_0 + ∑_{n=1}^∞ a_n cos(nπx/L), where a_n = (2/L) ∫_0^L f(x) cos(nπx/L) dx.   (4)
THEOREM 4. Let f : [0, L] → R. Suppose that the integrals in (1) and (2) exist. Then (redefining f(0) = 0 if necessary) the FSS of f(x) is the Fourier series of the odd extension f_o(x) defined on [−L, L], i.e.,

FSS of f(x) = FS of f_o(x).

The FCS of f(x) is the Fourier series of the even extension f_e(x) defined on [−L, L], i.e.,

FCS of f(x) = FS of f_e(x).

Since FSS of f(x) = FS of f_o(x) and FCS of f(x) = FS of f_e(x), we can obtain convergence results for the FSS and FCS of f(x) by applying Theorem 11 (see the previous lecture) to the extensions f_o(x) and f_e(x).
THEOREM 5. Let f : [0, L] → R be a piecewise C1 function. Then
FSS of f(x) = (1/2)[f(x−) + f(x+)] for 0 < x < L, and 0 for x = 0 or x = L. (5)
If f(x) is also continuous on [0, L] with f(0) = 0 and f(L) = 0, then the partial sums SN(x) of the FSS of f(x) converge uniformly to f(x) on [0, L], i.e.,
max_{0≤x≤L} |f(x) − SN(x)| → 0 as N → ∞. (6)
THEOREM 6. Let f : [0, L] → R be a piecewise C1 function. Then
FCS of f(x) = (1/2)[f(x−) + f(x+)] for 0 < x < L, f(0+) for x = 0, and f(L−) for x = L. (7)
If f(x) is also continuous on [0, L], then the partial sums SN (x) of FCS of f(x) converge
uniformly to f(x).
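The endpoint behaviour in Theorem 5 can be seen concretely. For f(x) = x on [0, 1], formula (2) gives bn = 2(−1)^{n+1}/(nπ). The sketch below is our own illustration (the choices f(x) = x, L = 1, and the truncation level N are assumptions, not taken from the text); it evaluates the partial sums SN and checks that they approach f at an interior point while summing to 0 at x = L, where f(L) = 1 — exactly the jump behaviour Theorem 5 predicts:

```python
import math

# Partial sum S_N of the FSS of f(x) = x on [0, L] with L = 1;
# b_n = 2(-1)^{n+1}/(n*pi), computed from formula (2).
def S(N, x):
    return sum(2 * (-1) ** (n + 1) / (n * math.pi) * math.sin(n * math.pi * x)
               for n in range(1, N + 1))

# Interior point: the series converges to f(x) itself.
assert abs(S(4000, 0.5) - 0.5) < 1e-3
# Endpoint x = L = 1: every term sin(n*pi) vanishes, so the FSS sums to 0,
# not to f(1) = 1, as Theorem 5 asserts at x = L.
assert abs(S(4000, 1.0)) < 1e-9
print("Theorem 5 behaviour confirmed at x = 0.5 and x = 1")
```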
THEOREM 7. The Fourier sine series of fe(x) on [0, 2L] is given by
FSS of fe(x) = ∑_{n=0}^∞ cn sin[(n + 1/2)πx/L], (8)
where
cn = (2/L) ∫_0^L f(x) sin[(n + 1/2)πx/L] dx, n = 0, 1, 2, . . . . (9)
If f(x) is piecewise C1, then
FSS of fe(x) = (1/2)[fe(x−) + fe(x+)] for 0 < x < 2L, and 0 for x = 0. (10)
In particular, for 0 ≤ x ≤ L, we have
FSS of fe(x) = (1/2)[fe(x−) + fe(x+)] for 0 < x < L, 0 for x = 0, and f(L−) for x = L. (11)
Moreover, if f(x) is piecewise C1 and continuous with f(0) = 0, then the partial sums
SN (x) of FSS of fe(x) converge uniformly to f(x) on [0, L].
Practice Problems
1. Construct the FCS for the following functions:
(a) f(x) = 1 for 0 ≤ x ≤ 1, and f(x) = −1 for 1 < x ≤ 2;
(b) f(x) = 2 + x for 0 ≤ x ≤ 1, and f(x) = 1 − x for 1 < x ≤ 2.
2. Construct the FSS for the following functions:
(a) f(x) = 1 for 0 < x < π/2, and f(x) = 2 for π/2 < x ≤ π;
(b) f(x) = x², 0 < x < L.
Module 5: Heat Equation
In this module, we shall study the one-dimensional heat equation. It is a parabolic partial
differential equation that describes diffusion processes such as heat conduction and the
spread of chemical concentrations; for this reason, the heat equation is often called the diffusion equation.
This module consists of five lectures and is organized as follows. The first lecture is
devoted to the derivation of the heat equation from the principle of conservation of energy.
Uniqueness results and the maximum principle for the heat equation are discussed in the
second lecture. The third lecture presents the method of solution by separation of variables.
The fourth and fifth lectures treat time-independent and time-dependent boundary conditions, respectively.
MODULE 5: HEAT EQUATION 2
Lecture 1 Modeling the Heat Equation
We shall derive the heat equation from the principle of conservation of energy and the fact
that heat flows from hot regions to cold regions.
Consider a wire or rod of length L which is made of some heat-conducting material
and is insulated on the outside, except possibly over the ends at x = 0 and x = L. Let
u(x, t) denote the temperature at x at time t. u(x, t) is assumed to be constant on each
cross section at each time.

Figure 5.1: A thin rod of length L

By the principle of conservation of energy (heat energy), the net change of heat inside the segment PQ (between x and x + ∆x) is equal to the net heat flux across the boundaries plus the total heat generated inside PQ. If c is the thermal capacity of the rod, ρ is the density of the rod, A is the cross-sectional area of the rod, k is the thermal conductivity of the rod, and f(x, t) is the external heat source, then we calculate these terms as follows:
Total amount of heat inside PQ at time t = ∫_x^{x+∆x} cρA u(τ, t) dτ.
Net change of heat inside PQ = d/dt ∫_x^{x+∆x} cρA u(τ, t) dτ = cρA ∫_x^{x+∆x} ut(τ, t) dτ.
Net flux of heat across the boundaries = kA[ux(x + ∆x, t) − ux(x, t)].
Heat generated by the external heat source inside PQ = A ∫_x^{x+∆x} f(τ, t) dτ.
By the principle of conservation of energy, we write
d/dt ∫_x^{x+∆x} cρA u(τ, t) dτ = cρA ∫_x^{x+∆x} ut(τ, t) dτ
= kA[ux(x + ∆x, t) − ux(x, t)] + A ∫_x^{x+∆x} f(τ, t) dτ. (1)
Applying the Mean Value Theorem for integrals¹, we obtain
cρA ut(ξ1, t) ∆x = kA[ux(x + ∆x, t) − ux(x, t)] + A f(ξ2, t) ∆x,
where ξ1, ξ2 ∈ (x, x + ∆x), and hence,
ut(ξ1, t) = (k/(cρ)) [ux(x + ∆x, t) − ux(x, t)]/∆x + (1/(cρ)) f(ξ2, t).
Now, letting ∆x→ 0, we arrive at
ut(x, t) = α2uxx(x, t) + F (x, t), (2)
where α² = k/(cρ) is called the thermal diffusivity of the rod and F(x, t) = f(x, t)/(cρ) is
called the heat source density.
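Equation (2) is straightforward to explore numerically. The sketch below is our own illustration, not part of the notes; the explicit forward-Euler/centered-difference scheme, the grid sizes, and the stability ratio r ≤ 1/2 are all assumptions of this sketch. It integrates ut = uxx (α² = 1, F = 0) on [0, 1] with zero end temperatures and compares the result with the separated solution e^{−π²t} sin(πx) derived later in Lecture 3:

```python
import math

# Minimal explicit finite-difference sketch of u_t = alpha^2 u_xx on [0, 1]
# with u(0, t) = u(1, t) = 0 and initial data sin(pi x).
alpha2 = 1.0
nx = 50
dx = 1.0 / nx
dt = 0.4 * dx * dx / alpha2      # keeps r = alpha^2 dt/dx^2 = 0.4 <= 1/2 (stability)
r = alpha2 * dt / (dx * dx)

u = [math.sin(math.pi * i * dx) for i in range(nx + 1)]  # initial temperature
t = 0.0
while t < 0.1:
    # Forward Euler in time, centered second difference in space; fixed ends at 0.
    u = ([0.0] +
         [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1]) for i in range(1, nx)] +
         [0.0])
    t += dt

# Compare with the exact separated solution e^{-pi^2 t} sin(pi x) at the midpoint.
exact = math.exp(-math.pi ** 2 * t) * math.sin(math.pi * 0.5)
assert abs(u[nx // 2] - exact) < 1e-3
print("midpoint temperature agrees with e^{-pi^2 t} sin(pi/2)")
```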
REMARK 1.
• When the rod is not laterally insulated and we allow the heat to flow in and out
across the lateral boundary at a rate proportional to the difference between the
temperature u(x, t) and the surrounding medium, the conservation of heat principle
yields
ut = α2uxx − β(u− u0), β > 0.
The heat loss (u > u0) or gain (u < u0) is proportional to the difference between
the temperature u(x, t) of the rod and the surrounding medium u0. Here, β is the
constant of proportionality.
• If the material of the rod is uniform, then k is independent of x. For some materials,
the value of k depends on the temperature u and hence the resulting heat equation
ut = (1/(cρ)) ∂/∂x [k(u) ∂u/∂x]
is nonlinear.
• If the material is nonhomogeneous, the diffusion within the rod depends on x. For
example, suppose half of the rod is made of copper and the other half is made of
steel, then the PDE that describes the heat flow is given by
ut = α2(x)uxx, 0 < x < L,
¹If f(x) is continuous on [a, b], then there exists at least one number ξ in (a, b) such that ∫_a^b f(x) dx = f(ξ)(b − a).
with
α(x) =
α1, 0 < x < L/2,
α2, L/2 < x < L,
where α1 and α2 are the thermal diffusivity coefficients of copper and steel, respec-
tively.
Types of BCs: There are three types of boundary conditions that can occur for heat
flow problems. They are
• Dirichlet boundary conditions (temperature is specified on the boundary):
Consider the heat flow problem in a rod (0 ≤ x ≤ L). The specification of the temperatures u(0, t) and u(L, t) at the ends is classified as a Dirichlet-type BC.
• Neumann boundary conditions (heat flow across the boundary is specified):
The specification of the normal derivative ∂u/∂n (where n is the outward normal to the boundary) on the boundary is classified as a Neumann-type BC. For instance, if the end points of a rod are insulated (i.e., we do not allow any flow of heat across the boundary), the BCs are
ux(0, t) = 0, ux(L, t) = 0, 0 < t < ∞.
• Robin’s or mixed boundary conditions:
If the condition on the boundary is a mixture of both Dirichlet and Neumann types, i.e.,
∂u/∂n = −h(u − g(t)),
then it is called a Robin’s BC or mixed BC. Here, h is a constant and g(t) is a given function that can vary over the boundary. The mixed BC may be interpreted as saying that the inward flux across the boundary is proportional to the difference between the temperature u and some specified temperature g. If the temperature u on the boundary is greater than the specified boundary temperature g, then the heat flows outward; if u is less than g, then heat flows inward.
Lecture 2 The Maximum and Minimum Principle
In this lecture, we shall prove the maximum and minimum properties of the heat equation.
These properties can be used to prove uniqueness and continuous dependence on data of
the solutions of these equations.
To begin with, we shall first prove the maximum principle for the inhomogeneous heat
equation (F ≠ 0).
THEOREM 1. (The maximum principle) Let R : 0 ≤ x ≤ L, 0 ≤ t ≤ T be a closed
region and let u(x, t) be a solution of
ut − α2uxx = F (x, t) (x, t) ∈ R, (1)
which is continuous in the closed region R. If F < 0 in R, then u(x, t) attains its maximum
values on t = 0, x = 0 or x = L and not in the interior of the region or at t = T . If
F > 0 in R, then u(x, t) attains its minimum values on t = 0, x = 0 or x = L and not in
the interior of the region or at t = T .
Proof. We shall show that if a maximum or minimum occurs at an interior point (x0, t0) with 0 < x0 < L and 0 < t0 ≤ T, then we arrive at a contradiction. Let us consider the following cases.
Case I: First, consider the case F < 0. Since u(x, t) is continuous on the closed and
bounded region R, u(x, t) must attain its maximum in R. Let (x0, t0) be an interior
maximum point. Then we must have
uxx(x0, t0) ≤ 0, ut(x0, t0) ≥ 0. (2)
Indeed, if t0 < T, then (x0, t0) is an interior critical point and ut(x0, t0) = 0. If t0 = T, the point (x0, t0) = (x0, T) lies on the upper edge of R, and we can only conclude that ut(x0, t0) ≥ 0, as u may still be increasing in t at (x0, t0). Substituting (2) into (1), we find that the left side of equation (1) is non-negative while the right side is strictly negative. This leads to a contradiction, and hence the maximum must be attained on the initial line or on the boundary.
Case II: Now consider the case F > 0. If there is an interior minimum point (x0, t0) in R, then
uxx(x0, t0) ≥ 0, ut(x0, t0) ≤ 0. (3)
Note that the inequalities (3) are the same as (2) with the signs reversed. Arguing as
before, this leads to a contradiction; hence the minimum must be attained on the initial
line or on the boundary.
Note: When F = 0, i.e., for the homogeneous equation, the inequalities (2) at a maximum or (3) at a minimum do not lead to a contradiction when they are inserted into (1), since uxx and ut may both vanish at (x0, t0).
Below, we present a proof of the maximum principle for the homogeneous heat equa-
tion.
THEOREM 2. (The maximum principle) Let u(x, t) be a solution of
ut = α2uxx 0 ≤ x ≤ L, 0 < t ≤ T, (4)
which is continuous in the closed region R : 0 ≤ x ≤ L and 0 ≤ t ≤ T . The maximum
and minimum values of u(x, t) are assumed on the initial line t = 0 or at the points on
the boundary x = 0 or x = L.
Proof. Let us introduce the auxiliary function
v(x, t) = u(x, t) + ϵx2, (5)
where ϵ > 0 is a constant and u satisfies (4). Note that v(x, t) is continuous in R and
hence it has a maximum at some point (x1, t1) in the region R.
Assume that (x1, t1) is an interior point with 0 < x1 < L and 0 < t1 ≤ T . Then we
find that
vt(x1, t1) ≥ 0, vxx(x1, t1) ≤ 0. (6)
Since u satisfies (4), we have
vt − α2vxx = ut − α2uxx − 2α2ϵ = −2α2ϵ < 0. (7)
Combining (6) with (7) now leads to
0 ≤ vt − α²vxx < 0,
which is a contradiction, since by (6) the left side is non-negative while by (7) it is strictly
negative. Therefore, v(x, t) assumes its maximum on the initial line or on the boundary,
since v satisfies (1) with F < 0.
Let
M = maxu(x, t) on t = 0, x = 0, and x = L,
i.e., M is the maximum value of u on the initial line and boundary lines. Then
v(x, t) = u(x, t) + ϵx2 ≤M + ϵL2, for 0 ≤ x ≤ L, 0 ≤ t ≤ T. (8)
Since v has its maximum on t = 0, x = 0, or x = L, we obtain
u(x, t) = v(x, t)− ϵx2 ≤ v(x, t) ≤M + ϵL2. (9)
Since ϵ is arbitrary, letting ϵ→ 0, we conclude that
u(x, t) ≤M for all (x, t) ∈ R, (10)
and this completes the proof.
REMARK 3.
• The minimum principle for the heat equation can be obtained by replacing the
function u(x, t) by −u(x, t), where u(x, t) is a solution of (4). Clearly, −u is also
a solution of (4), and the maximum values of −u correspond to the minimum values
of u. Since −u satisfies the maximum principle, we conclude that u assumes its
minimum values on the initial line or on the boundary lines. In particular, this
implies that if the initial and boundary data for the problem are non-negative, then
the solution must be non-negative.
• In geometrical terms, the maximum principle states that if a solution of the problem
(4) is graphed in the xtu-space, then the surface u = u(x, t) achieves its maximum
height above one of the three sides x = 0, x = L, t = 0 of the rectangle 0 ≤ x ≤ L,
0 ≤ t ≤ T .
• From a physical perspective, the maximum principle states that the temperature, at
any point x inside the rod at any time t (0 ≤ t ≤ T ), is less than the maximum of
the initial temperature distribution or the maximum of the temperatures prescribed
at the ends during the time interval [0, T ].
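The maximum principle can also be observed on a numerical solution. The sketch below is our own illustration (the explicit finite-difference scheme and all grid parameters are our choices; with r ≤ 1/2 each updated value is a convex combination of its neighbours, so the discrete solution obeys a discrete analogue of the principle). The largest value over the whole space-time grid is attained on the initial line:

```python
import math

# Explicit scheme for u_t = u_xx on [0, 1] with zero boundary temperatures.
nx, r = 40, 0.4
dx = 1.0 / nx
dt = r * dx * dx                     # alpha^2 = 1, so r = dt/dx^2
u = [math.sin(math.pi * i * dx) for i in range(nx + 1)]  # initial data, max = 1
rows = [u[:]]                        # store the whole space-time grid
for _ in range(300):
    u = ([0.0] +
         [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1]) for i in range(1, nx)] +
         [0.0])
    rows.append(u[:])

overall_max = max(max(row) for row in rows)
boundary_max = max(max(rows[0]),                 # initial line t = 0
                   max(row[0] for row in rows),  # boundary x = 0
                   max(row[-1] for row in rows)) # boundary x = L
# The maximum over the whole rectangle equals the maximum over the initial
# line and lateral boundaries, as the maximum principle asserts.
assert abs(overall_max - boundary_max) < 1e-12
print("overall max equals boundary max:", overall_max)
```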
1 Uniqueness and continuous dependence
As a consequence of the maximum principle, we can show that the heat flow problem has
a unique solution which depends continuously on the given initial and boundary data.
THEOREM 4. (Uniqueness result) Let u1(x, t) and u2(x, t) be solutions of the following
problem
PDE: ut = α2uxx, 0 < x < L, t > 0,
BC: u(0, t) = g(t), u(L, t) = h(t), (11)
IC: u(x, 0) = f(x),
where f(x), g(t) and h(t) are given functions. Then u1(x, t) = u2(x, t), for all 0 ≤ x ≤ L
and t ≥ 0.
Proof. Let u1(x, t) and u2(x, t) be two solutions of (11). Set w(x, t) = u1(x, t) −u2(x, t). Then w satisfies
wt = α2wxx 0 < x < L, t > 0,
w(0, t) = 0, w(L, t) = 0,
w(x, 0) = 0.
By the maximum principle (cf. Theorem 2), we must have
w(x, t) ≤ 0 =⇒ u1(x, t) ≤ u2(x, t), for all 0 ≤ x ≤ L, t ≥ 0.
A similar argument with w = u2 − u1 yields
u2(x, t) ≤ u1(x, t) for all 0 ≤ x ≤ L, t ≥ 0.
Therefore, we have
u1(x, t) = u2(x, t) for all 0 ≤ x ≤ L, t ≥ 0,
and this completes the proof.
THEOREM 5. (Continuous Dependence on the IC and BC) Let u1(x, t) and u2(x, t),
respectively, be solutions of the problems
ut = α2uxx; ut = α2uxx
u(0, t) = g1(t) u(L, t) = h1(t); u(0, t) = g2(t) u(L, t) = h2(t) (12)
u(x, 0) = f1(x); u(x, 0) = f2(x),
in the region 0 ≤ x ≤ L, t ≥ 0. If
|f1(x)− f2(x)| ≤ ϵ for all x, 0 ≤ x ≤ L,
and
|g1(t)− g2(t)| ≤ ϵ and |h1(t)− h2(t)| ≤ ϵ for all t, 0 ≤ t ≤ T,
for some ϵ ≥ 0, then we have
|u1(x, t) − u2(x, t)| ≤ ϵ for all x and t, where 0 ≤ x ≤ L, 0 ≤ t ≤ T.
Proof. Let v(x, t) = u1(x, t)− u2(x, t). Then vt = α2vxx and we obtain
|v(x, 0)| = |f1(x)− f2(x)| ≤ ϵ, 0 ≤ x ≤ L,
|v(0, t)| = |g1(t)− g2(t)| ≤ ϵ, 0 ≤ t ≤ T,
|v(L, t)| = |h1(t)− h2(t)| ≤ ϵ, 0 ≤ t ≤ T.
Note that the maximum of v on t = 0 (0 ≤ x ≤ L) and x = 0 and x = L (0 ≤ t ≤ T ) is
not greater than ϵ. The minimum of v on these boundary lines is not less than −ϵ. Hence,
the maximum/minimum principle yields
−ϵ ≤ v(x, t) ≤ ϵ =⇒ |u1(x, t)− u2(x, t)| = |v(x, t)| ≤ ϵ.
Note: (i) We observe that when ϵ = 0, the two problems in (12) are identical, and we
conclude that |u1(x, t) − u2(x, t)| ≤ 0, i.e., u1 = u2. This proves the uniqueness result.
(ii) Suppose a certain initial/boundary value problem has a unique solution. Then
a small change in the initial and/or boundary conditions yields only a small change in the
solution.
For the inhomogeneous equation (1), we have seen that the maximum or minimum
values must be attained either on the initial line or the boundary lines and that they
cannot be assumed in the interior. This result is known as a strong maximum or minimum
principle.
THEOREM 6. (Strong maximum principle) Let u(x, t) be a solution of the heat equa-
tion in the rectangle R : 0 ≤ x ≤ L, 0 ≤ t ≤ T . If u(x, t) achieves its maximum at
(x∗, T ), where 0 < x∗ < L, then u must be constant in R.
Practice Problems
1. Use the maximum/minimum principle to show that the solution u of the problem
ut = uxx, 0 < x < π, t > 0,
ux(0, t) = 0, ux(π, t) = 0, t > 0,
u(x, 0) = sin(x) + (1/2) sin(2x), 0 ≤ x ≤ π
satisfies 0 ≤ u(x, t) ≤ 3√3/4, t ≥ 0.
2. Let Q = {(x, t) | 0 < x < π, 0 < t ≤ T}. Let u solve
ut = uxx in Q,
u(0, t) = 0, u(π, t) = 0, 0 ≤ t ≤ T,
u(x, 0) = sin2(x), 0 ≤ x ≤ π.
Use the maximum principle to show that 0 ≤ u(x, t) ≤ e^{−t} sin x in Q.
Lecture 3 Method of Separation of Variables
Separation of variables is one of the oldest techniques for solving initial-boundary value
problems (IBVPs). It applies to problems where
• PDE is linear and homogeneous (not necessarily constant coefficients) and
• BC are linear and homogeneous.
Basic Idea: To seek a solution of the form
u(x, t) = X(x)T (t),
where X(x) is some function of x and T(t) is some function of t. Such solutions are simple
because any temperature u(x, t) of this form retains its basic “shape” for different
values of time t. Separation of variables reduces the problem of solving the PDE
to solving two ODEs: a second-order ODE involving the independent variable x
and a first-order ODE involving t. These ODEs are then solved using the given initial and
boundary conditions.
To illustrate this method, let us apply it to a specific problem. Consider the following
IBVP:
PDE: ut = α2uxx, 0 ≤ x ≤ L, 0 < t <∞, (1)
BC: u(0, t) = 0 u(L, t) = 0, 0 < t <∞, (2)
IC: u(x, 0) = f(x), 0 ≤ x ≤ L. (3)
Step 1:(Reducing to the ODEs) Assume that equation (1) has solutions of the form
u(x, t) = X(x)T (t),
where X is a function of x alone and T is a function of t alone. Note that
ut = X(x)T ′(t) and uxx = X ′′(x)T (t).
Now, substituting these expressions into ut = α²uxx and separating variables, we obtain
X(x)T′(t) = α²X″(x)T(t)
⇒ T′(t)/(α²T(t)) = X″(x)/X(x).
A function of t can equal a function of x only when both functions are constant. Thus,
T′(t)/(α²T(t)) = X″(x)/X(x) = c
for some constant c. This leads to the following two ODEs:
T ′(t)− α2cT (t) = 0, (4)
X ′′(x)− cX(x) = 0. (5)
Thus, the problem of solving the PDE (1) is now reduced to solving the two ODEs.
Step 2:(Applying BCs)
Since the product solutions u(x, t) = X(x)T (t) are to satisfy the BC (2), we have
u(0, t) = X(0)T (t) = 0 and X(L)T (t) = 0, t > 0.
Thus, either T (t) = 0 for all t > 0, which implies that u(x, t) = 0, or X(0) = X(L) = 0.
Ignoring the trivial solution u(x, t) = 0, we combine the boundary conditions X(0) =
X(L) = 0 with the differential equation for X in (5) to obtain the BVP:
X ′′(x)− cX(x) = 0, X(0) = X(L) = 0. (6)
There are three cases: c < 0, c > 0, c = 0 which will be discussed below. It is convenient
to set c = −λ2 when c < 0 and c = λ2 when c > 0, for some constant λ > 0.
Case 1. (c = λ2 > 0 for some λ > 0). In this case, a general solution to the differential
equation (5) is
X(x) = C1 e^{λx} + C2 e^{−λx},
where C1 and C2 are arbitrary constants. To determine C1 and C2, we use the BC
X(0) = 0, X(L) = 0 to have
X(0) = C1 + C2 = 0, (7)
X(L) = C1 e^{λL} + C2 e^{−λL} = 0. (8)
From the first equation, it follows that C2 = −C1. The second equation then leads to
C1(e^{λL} − e^{−λL}) = 0 ⇒ C1(e^{2λL} − 1) = 0 ⇒ C1 = 0,
since e^{2λL} − 1 > 0 for λ > 0. Therefore, C1 = 0 and hence C2 = 0. Consequently,
X(x) = 0, and this implies u(x, t) = 0; i.e., there is no nontrivial solution of (6) in the
case c > 0.
Case 2. (When c = 0)
The general solution to (5) is given by
X(x) = C3 + C4x.
Applying the BCs yields C3 = C4 = 0, and hence X(x) = 0. Again, u(x, t) = X(x)T(t) = 0.
Thus, there is no nontrivial solution of (6) for c = 0.
Case 3. (When c = −λ2 < 0 for some λ > 0)
The general solution to (5) is
X(x) = C5 cos(λx) + C6 sin(λx).
This time the BC X(0) = 0, X(L) = 0 gives the system
C5 = 0,
C5 cos(λL) + C6 sin(λL) = 0.
As C5 = 0, the system reduces to solving C6 sin(λL) = 0. Hence, either sin(λL) = 0 or
C6 = 0. Now
sin(λL) = 0 =⇒ λL = nπ, n = 0,±1,±2, . . . .
Therefore, (6) has a nontrivial solution (C6 ≠ 0) when
λL = nπ, i.e., λ = nπ/L, n = 1, 2, 3, . . . .
Here, we exclude n = 0, since it makes c = 0. Therefore, the nontrivial solutions (eigenfunctions) Xn corresponding to the eigenvalue c = −λ² are given by
Xn(x) = an sin(nπx/L), (9)
where an’s are arbitrary constants.
Step 3:(Applying IC)
Let us now solve equation (4). The general solution to (4) with c = −λ² = −(nπ/L)² is
Tn(t) = bn e^{−α²(nπ/L)² t}.
Combining this with (9), the product solution u(x, t) = X(x)T(t) becomes
un(x, t) := Xn(x)Tn(t) = an sin(nπx/L) bn e^{−α²(nπ/L)² t}
= cn e^{−α²(nπ/L)² t} sin(nπx/L), n = 1, 2, 3, . . . ,
where cn is an arbitrary constant.
Since the problem (1)-(2) is linear and homogeneous, an application of the superposition
principle gives
u(x, t) = ∑_{n=1}^∞ un(x, t) = ∑_{n=1}^∞ cn e^{−α²(nπ/L)² t} sin(nπx/L), (10)
which will be a solution to (1)-(3), provided the infinite series has the proper convergence
behavior.
Since the solution (10) is to satisfy the IC (3), we must have
u(x, 0) = ∑_{n=1}^∞ cn sin(nπx/L) = f(x), 0 < x < L.
Thus, if f(x) has an expansion of the form
f(x) = ∑_{n=1}^∞ cn sin(nπx/L), (11)
which is called a Fourier sine series (FSS), with the cn’s given by the formula
cn = (2/L) ∫_0^L f(x) sin(nπx/L) dx, (12)
then the infinite series (10) with the coefficients cn given by (12) is a solution to the
problem (1)-(3).
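Formula (12) rests on the orthogonality relation ∫_0^L sin(nπx/L) sin(mπx/L) dx = (L/2)δnm: multiplying (11) by sin(mπx/L) and integrating term by term isolates cm. A quick numerical confirmation of that relation (our own sketch; the trapezoid rule and the sample values L = 2, n = 3, m = 5 are arbitrary choices):

```python
import math

def trapezoid(g, a, b, n=10000):
    # Composite trapezoid rule for the integral of g over [a, b].
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

L = 2.0  # a sample interval length of our own choosing

def inner(n, m):
    # Inner product of the eigenfunctions sin(n*pi*x/L) and sin(m*pi*x/L) on [0, L].
    return trapezoid(lambda x: math.sin(n * math.pi * x / L)
                               * math.sin(m * math.pi * x / L), 0.0, L)

assert abs(inner(3, 3) - L / 2) < 1e-6   # equal indices: L/2
assert abs(inner(3, 5)) < 1e-8           # distinct indices: 0
print("orthogonality of sin(n*pi*x/L) confirmed")
```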
EXAMPLE 1. Find the solution to the following IBVP:
ut = 3uxx 0 ≤ x ≤ π, 0 < t <∞, (13)
u(0, t) = u(π, t) = 0, 0 < t <∞, (14)
u(x, 0) = 3 sin 2x− 6 sin 5x, 0 ≤ x ≤ π. (15)
Solution. Comparing (13) with (1), we notice that α² = 3 and L = π. Using formula
(10), we write a solution u(x, t) as
u(x, t) = ∑_{n=1}^∞ cn e^{−3n²t} sin(nx).
To determine the cn’s, we use the IC (15) to get
u(x, 0) = 3 sin 2x − 6 sin 5x = ∑_{n=1}^∞ cn sin(nx).
Comparing the coefficients of like terms, we obtain
c2 = 3 and c5 = −6,
and the remaining cn’s are zero. Hence, the solution to the problem (13)-(15) is
u(x, t) = c2 e^{−3(2)²t} sin(2x) + c5 e^{−3(5)²t} sin(5x)
= 3 e^{−12t} sin(2x) − 6 e^{−75t} sin(5x).
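The closed-form answer of Example 1 can be checked directly against the PDE, BC, and IC by finite-difference differentiation (our own sketch; the sample points and the step size h are arbitrary choices):

```python
import math

def u(x, t):
    # The solution of Example 1: u(x,t) = 3 e^{-12t} sin(2x) - 6 e^{-75t} sin(5x).
    return (3 * math.exp(-12 * t) * math.sin(2 * x)
            - 6 * math.exp(-75 * t) * math.sin(5 * x))

h = 1e-4
x0, t0 = 1.1, 0.3
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)                    # centered du/dt
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / (h * h)   # centered d2u/dx2
assert abs(u_t - 3 * u_xx) < 1e-5                                  # PDE (13): u_t = 3 u_xx
assert abs(u(0.7, 0.0) - (3 * math.sin(1.4) - 6 * math.sin(3.5))) < 1e-12  # IC (15)
assert abs(u(0.0, 0.5)) < 1e-12 and abs(u(math.pi, 0.5)) < 1e-12           # BC (14)
print("Example 1 solution verified")
```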
Practice Problems
1. Solve the following IBVP:
ut = 16uxx, 0 < x < 1, t > 0,
u(0, t) = 0, u(1, t) = 0, t > 0,
u(x, 0) = (1− x)x, 0 < x < 1.
2. Solve the following IBVP:
ut = uxx, 0 < x < π, t > 0,
ux(0, t) = ux(π, t) = 0, t > 0,
u(x, 0) = 1− sinx, 0 < x < π.
Lecture 4 Time-Independent Homogeneous BC
The boundary conditions in the previous lecture were assumed to be homogeneous, which
allowed us to use the superposition principle in forming general solutions of the PDE. We
now turn to the situation where the BCs are not both homogeneous but are independent
of the time variable t. The method of solution consists of the following steps:
• Step 1: Find a particular solution of the PDE and BC.
• Step 2: Find the solution of a related problem with homogeneous BC. Then, add
this solution to that particular solution obtained in Step 1.
The procedure is illustrated in the following example.
PDE: ut = α2uxx 0 ≤ x ≤ L, 0 < t <∞, (1)
BC: u(0, t) = a u(L, t) = b, 0 < t <∞, (2)
IC: u(x, 0) = f(x), 0 ≤ x ≤ L, (3)
where a and b are arbitrary constants and f(x) is a given function.
Solution. Seek a particular solution up(x, t) of the form up(x, t) = cx + d, where c
and d are chosen so that the BC are satisfied:
a = up(0, t) = c · 0 + d = d,
b = up(L, t) = cL+ d = cL+ a.
=⇒ d = a and c = (b− a)/L.
Thus,
up(x, t) = (b− a)x/L+ a
solves both the PDE with the BC’s being satisfied.
Consider the related homogeneous problem (i.e., with homogeneous PDE and BC)
PDE: vt = α2vxx 0 ≤ x ≤ L, 0 < t <∞,
BC: v(0, t) = 0, v(L, t) = 0, 0 < t <∞, (4)
IC: v(x, 0) = f(x)− up(x, 0), 0 ≤ x ≤ L.
If f(x) − up(x, 0) is of the form ∑_{n=1}^∞ cn sin(nπx/L), then the solution of this problem is
v(x, t) = ∑_{n=1}^∞ cn e^{−(nπ/L)²α²t} sin(nπx/L).
Now, set u(x, t) = up(x, t)+v(x, t). Then it is easy to verify that u(x, t) solves (1). Indeed,
u(x, t) solves (1) by the superposition principle. Further, we have
BC: u(0, t) = up(0, t) + v(0, t) = a+ 0 = a
u(L, t) = up(L, t) + v(L, t) = b+ 0 = b
IC: u(x, 0) = up(x, 0) + v(x, 0) = up(x, 0) + f(x)− up(x, 0) = f(x).
REMARK 1. (i) It is necessary to subtract up(x, 0) from f(x) to form the initial condition
for the related problem (4) so that the initial condition (3) is satisfied.
(ii) Since any particular solution will do, for simplicity one considers a particular
solution of the form cx + d and determines the constants using the BCs. Note that the
formula up(x, t) = (b − a)x/L + a applies only to the BCs of (2); other BCs lead to other
particular solutions. For example, if ux(0, t) = a and u(L, t) = b, then up(x, t) = a(x − L) + b.
EXAMPLE 2.
PDE: ut = 2uxx 0 ≤ x ≤ 1, 0 < t <∞, (5)
BC: ux(0, t) = 1 u(1, t) = −2, 0 < t <∞, (6)
IC: u(x, 0) = x+ cos2(3πx/4)− 5/2. (7)
Solution. Take up(x, t) = cx + d. The first BC ux(0, t) = 1 yields c = 1, while the
second BC up(1, t) = 1 + d = −2 yields d = −3. Thus, up(x, t) = x − 3. The related
homogeneous problem is
vt = 2vxx 0 ≤ x ≤ 1, 0 < t <∞,
vx(0, t) = 0 v(1, t) = 0, 0 < t <∞
v(x, 0) = [x + cos²(3πx/4) − 5/2] − (x − 3)
= 1/2 + (1/2) cos(3πx/2) − 5/2 + 3 = 1 + (1/2) cos(3πx/2).
It is easy to obtain the solution of the related homogeneous problem as
v(x, t) = e^{−9π²t/2}[1 + (1/2) cos(3πx/2)].
Then
u(x, t) = x − 3 + e^{−9π²t/2}[1 + (1/2) cos(3πx/2)].
From the above examples, we notice that the particular solution is time independent, or
in steady-state.
Note: Any steady-state solution of the heat equation ut = α2uxx is of the form cx+ d.
The solution u(x, t) is the sum of a steady-state particular solution of the PDE and
BC and the solution v(x, t) of the related homogeneous problem, which is transient in the
sense that v(x, t) → 0 as t → ∞. Thus
u(x, t) = up(x, t) + v(x, t) → up(x, t) as t → ∞,
where up is the steady-state solution and v is the transient solution.
That is, the solution u approaches the steady-state solution as t→ ∞. However, for some
types of BC there are no steady-state particular solutions, as illustrated in the following
example.
EXAMPLE 3. Consider the problem
PDE: ut = α2uxx 0 ≤ x ≤ L, 0 < t <∞, (8)
BC: ux(0, t) = a ux(L, t) = b, (9)
IC: u(x, 0) = f(x), (10)
where a and b are constants, and f(x) is a given function.
Solution. Let up(x, t) = cx + d. Then, using BC, we obtain c = a and c = b, which
is impossible unless a = b.
NOTE: Observe that the boundary conditions state that heat is being drained out
of the end x = 0 at a rate ux(0, t) = a and heat is flowing into the end x = L at a rate
ux(L, t) = b. If b > a, then the heat energy is being added to the rod at a constant rate.
If b < a, the rod loses heat at a constant rate. Thus, we cannot expect a steady-state
solution of the PDE and BC, unless a = b.
The simplest form for a particular solution, that reflects the fact that the heat energy
is changing at a constant rate, is
up(x, t) = ct+ h(x)
where c is a constant and h(x) is a function of x. The constant c and the function h(x)
can be determined from the PDE and BC. Thus,
c = (up)t = α²(up)xx = α²h″(x)
⇒ h″(x) = c/α²
⇒ h(x) = (c/(2α²)) x² + dx + e,
for constants d and e. Using BC, we note that
a = (up)x(0, t) = h′(0) = d =⇒ d = a.
b = (up)x(L, t) = h′(L) = cL/α² + d ⇒ c = (b − a)α²/L.
Thus, a particular solution (taking e = 0, for simplicity) is obtained as:
up(x, t) = ((b − a)/L) α²t + ((b − a)/(2L)) x² + ax = ((b − a)/L)[α²t + x²/2] + ax. (11)
The related homogeneous problem is
vt = α2vxx 0 ≤ x ≤ L, t ≥ 0,
vx(0, t) = 0 vx(L, t) = 0, 0 < t <∞,
v(x, 0) = f(x) − up(x, 0) = f(x) − [((b − a)/(2L)) x² + ax].
If f(x) − up(x, 0) is of the form ∑_{n=0}^∞ cn cos(nπx/L), we have the solution
u(x, t) = up(x, t) + v(x, t)
= up(x, t) + ∑_{n=0}^∞ cn e^{−(nπ/L)²α²t} cos(nπx/L),
where up(x, t) is given by (11).
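The particular solution (11), up(x, t) = ((b − a)/L)[α²t + x²/2] + ax, can be checked directly against the PDE and the flux BCs (9) by finite differences (our own sketch; the sample values of a, b, L, α² and the step size h are arbitrary choices):

```python
# Check that u_p(x,t) = ((b-a)/L)[alpha^2 t + x^2/2] + a x satisfies
# (u_p)_t = alpha^2 (u_p)_xx and the flux BCs (u_p)_x(0,t) = a, (u_p)_x(L,t) = b.
a, b, L, alpha2 = 2.0, 5.0, 3.0, 1.5   # sample values of our own choosing

def up(x, t):
    return (b - a) / L * (alpha2 * t + 0.5 * x * x) + a * x

h = 1e-3
x0, t0 = 1.0, 0.7
up_t = (up(x0, t0 + h) - up(x0, t0 - h)) / (2 * h)
up_xx = (up(x0 + h, t0) - 2 * up(x0, t0) + up(x0 - h, t0)) / (h * h)
assert abs(up_t - alpha2 * up_xx) < 1e-6                 # PDE holds
up_x0 = (up(h, t0) - up(-h, t0)) / (2 * h)               # u_x at x = 0
up_xL = (up(L + h, t0) - up(L - h, t0)) / (2 * h)        # u_x at x = L
assert abs(up_x0 - a) < 1e-9 and abs(up_xL - b) < 1e-9   # flux BCs (9)
print("particular solution (11) verified")
```

Since up is quadratic in x and linear in t, the centered differences above are exact up to rounding, which is why such tight tolerances pass.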
Practice Problems
1. Solve the following IBVP:
ut = uxx, 0 < x < L, t > 0,
u(0, t) = a, u(L, t) = b, t > 0,
u(x, 0) = a+ bx, 0 ≤ x ≤ L.
2. Solve the following IBVP:
ut = 4uxx, 0 < x < π, t > 0,
u(0, t) = 5, u(π, t) = 10, t > 0,
u(x, 0) = sinx− sin 3x, 0 < x < π.
Lecture 5 Time-Dependent BC
In this lecture we shall learn how to solve the inhomogeneous heat equation
ut − α2uxx = h(x, t)
with time-dependent BCs. To begin with, let us consider the following IBVP with
time-dependent BC:
PDE: ut = α2uxx 0 ≤ x ≤ L, 0 < t <∞, (1)
BC: u(0, t) = a(t) u(L, t) = b(t), 0 < t <∞, (2)
IC: u(x, 0) = f(x). (3)
In the previous lecture, we had discussed the solution of this problem in the case where a(t)
and b(t) are constant functions (independent of t) and f(x) is a suitable given function.
Notice that the function w(x, t) defined by
w(x, t) = [(b(t) − a(t))/L] x + a(t)
satisfies the BC (2). However, w(x, t) will not satisfy the PDE (1) unless a(t) and b(t)
are constant. In fact,
wt − α²wxx = [(b′(t) − a′(t))/L] x + a′(t).
We now attempt to find a solution for the problem (1)-(3) of the form
u(x, t) = w(x, t) + v(x, t),
where v(x, t) satisfies the following problem
vt − α2vxx = ut − α2uxx − (wt − α2wxx)
= −(wt − α2wxx)
= −[b′(t)− a′(t)]x/L− a′(t).
Further,
v(0, t) = u(0, t)− w(0, t) = a(t)− a(t) = 0,
v(L, t) = b(t)− b(t) = 0.
Thus, the function v(x, t) must satisfy the following related problem with homogeneous
BC, but inhomogeneous PDE:
PDE: vt − α2vxx = −[b′(t)− a′(t)]x/L− a′(t), 0 ≤ x ≤ L, 0 < t <∞, (4)
BC: v(0, t) = 0, v(L, t) = 0, 0 < t <∞, (5)
IC: v(x, 0) = u(x, 0)− w(x, 0) = f(x)− [a(0)− b(0)]x/L− a(0). (6)
Note: When a(t) and b(t) are constants, the PDE is homogeneous. But, in this case,
v(x, t) satisfies nonhomogeneous PDE.
The problem (4)-(6) is a special case of the following general problem:
PDE: vt − α2vxx = h(x, t) 0 ≤ x ≤ L, 0 < t <∞, (7)
BC: v(0, t) = 0, v(L, t) = 0, 0 < t <∞ (8)
IC: v(x, 0) = g(x). (9)
The solution procedure to the above problem was given by the French mathematician and
physicist Jean-Marie-Constant Duhamel (1797-1872). The method is known as Duhamel’s
principle.
Suppose u1 and u2 are solutions of the following problems:
(P1 :)
PDE: (u1)t − α2(u1)xx = 0
BC: u1(0, t) = 0, u1(L, t) = 0
IC: u1(x, 0) = g(x)
(P2 :)
PDE: (u2)t − α2(u2)xx = h(x, t)
BC: u2(0, t) = 0, u2(L, t) = 0
IC: u2(x, 0) = 0
(10)
It is easy to check that v(x, t) = u1(x, t) + u2(x, t) solves (7)-(9). The solution u1 to the
problem (P1) is known (cf. Lecture 4 in Module 5). It remains only to solve the problem
(P2) for u2.
The above observation has led to the following (cf. [1]).
THEOREM 1. A solution to problem (1)-(3) is given by
u(x, t) = w(x, t) + u1(x, t) + u2(x, t),
where
w(x, t) = [(b(t) − a(t))/L] x + a(t)
is the particular solution of the BC, u1(x, t) solves (P1) with g(x) = f(x) − w(x, 0),
and u2(x, t) solves (P2) with h(x, t) = −(wt − α²wxx) = −[b′(t) − a′(t)]x/L − a′(t).
1 Duhamel’s principle
The basic idea of Duhamel’s principle is to transfer the source term h(x, t) to initial
condition of related problems. This is done in the following manner. The function defined
by
u(x, t) = ∫_0^t v(x, t; s) ds
is a solution of (7)-(9) provided v(x, t; s) is a solution of the problem
PDE: vt = α2vxx, 0 ≤ x ≤ L, 0 < t <∞, (11)
BC: v(0, t; s) = 0, v(L, t; s) = 0, 0 < t <∞, (12)
IC: v(x, s; s) = h(x, s). (13)
Note that both the PDE and the BC are homogeneous. We use the translation in time
u(x, t) = ∫_0^t v(x, t − s; s) ds
to obtain an IC at t = 0, instead of t = s. Rewriting (11)-(13) in terms of v, we now
reduce the problem to the following associated problem with IC at t = 0:
PDE: vt = α2vxx 0 ≤ x ≤ L, 0 < t <∞, (14)
BC: v(0, t; s) = 0, v(L, t; s) = 0, 0 < t <∞ (15)
IC: v(x, 0; s) = h(x, s). (16)
To illustrate the procedure let us consider the following example:
EXAMPLE 2. Solve
PDE: ut − α2uxx = t sin(x) 0 ≤ x ≤ π, 0 < t <∞, (17)
BC: u(0, t) = 0, u(π, t) = 0, 0 < t <∞, (18)
IC: u(x, 0) = 0. (19)
Solution. Here h(x, t) = t sin(x). We solve the related problem:
PDE: vt = α2vxx, 0 ≤ x ≤ π, 0 < t <∞, (20)
BC: v(0, t; s) = 0, v(π, t; s) = 0, 0 < t <∞, (21)
IC: v(x, 0; s) = h(x, s) = s sin(x). (22)
Treating s as a constant, we easily obtain v(x, t; s) = s e^{−α²t} sin(x). Note that
u(x, t) = ∫_0^t v(x, t − s; s) ds = ∫_0^t s e^{−α²(t−s)} sin(x) ds
= e^{−α²t} sin(x) ∫_0^t s e^{α²s} ds = [(α²)^{−1} t + (α²)^{−2}(e^{−α²t} − 1)] sin(x),
which satisfies (17)-(19).
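That this closed form really satisfies (17)-(19) can be confirmed by finite-difference differentiation (our own sketch; the sample value α² = 2 and the test point are arbitrary choices):

```python
import math

# Duhamel solution of Example 2:
# u(x,t) = [t/alpha^2 + (e^{-alpha^2 t} - 1)/alpha^4] sin(x),
# which should satisfy u_t - alpha^2 u_xx = t sin(x) with u(x, 0) = 0.
alpha2 = 2.0   # alpha^2; a sample value of our own choosing

def u(x, t):
    return (t / alpha2 + (math.exp(-alpha2 * t) - 1) / alpha2 ** 2) * math.sin(x)

x0, t0, h = 0.8, 0.6, 1e-4
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / (h * h)
assert abs((u_t - alpha2 * u_xx) - t0 * math.sin(x0)) < 1e-6   # PDE (17)
assert u(x0, 0.0) == 0.0                                       # IC (19)
print("Duhamel solution of Example 2 verified")
```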
THEOREM 3. (Duhamel’s principle, [1]) Let h(x, t) be a twice continuously differen-
tiable function in 0 ≤ x ≤ L, t ≥ 0. Assume that, for each s ≥ 0, the IBVP
PDE: vt = α2vxx 0 ≤ x ≤ L, 0 < t <∞, (23)
BC: v(0, t; s) = 0, v(L, t; s) = 0, 0 < t <∞, (24)
IC: v(x, s; s) = h(x, s). (25)
has a solution v(x, t; s), where v(x, t; s), vt(x, t; s) and vxx(x, t; s) are continuous (in all
three variables). Then the unique solution of the problem
PDE: ut − α2uxx = h(x, t) 0 ≤ x ≤ L, 0 < t <∞, (26)
BC: u(0, t) = 0, u(L, t) = 0, 0 < t <∞, (27)
IC: u(x, 0) = 0. (28)
is given by
u(x, t) =
∫ t
0v(x, t; s)ds. (29)
Proof. Note that the function u(x, t) defined by
u(x, t) =
∫ t
0v(x, t; s)ds
satisfies the IC u(x, 0) = 0 and, since v(x, t; s) satisfies the BC (24), also the BC
u(0, t) = u(L, t) = 0. Holding x fixed and applying Leibniz’s rule for differentiation
under the integral sign, we obtain
ut(x, t) = v(x, t; t) + ∫_0^t vt(x, t; s) ds
= h(x, t) + ∫_0^t α² vxx(x, t; s) ds = h(x, t) + α² uxx(x, t).
By the hypotheses on v(x, t; s), it follows that u(x, t) is in C². For the uniqueness, see
Theorem 4 (of Lecture 2 of Module 5).
REMARK 4. The solution u in (29) may be written as
u(x, t) = \int_0^t v(x, t - s; s)\, ds,
where v solves (14)-(16).
EXAMPLE 5. Solve the IBVP:
ut − α2uxx = t[sin(2πx) + 2x] 0 ≤ x ≤ 1, 0 < t <∞,
u(0, t) = 1, u(1, t) = t2, 0 < t <∞,
u(x, 0) = 1 + sin(πx)− x.
Solution. A function that satisfies the BC is
w(x, t) = (t² − 1)x + 1.
Then u(x, t) = w(x, t)+v(x, t), where v(x, t) solves the related problem with homogeneous
BC:
vt − α²vxx = (ut − α²uxx) − (wt − α²wxx) = t sin(2πx)
v(0, t) = u(0, t)− w(0, t) = 0
v(1, t) = u(1, t)− w(1, t) = 0
v(x, 0) = u(x, 0)− w(x, 0) = sin(πx).
Now, v = u1 + u2, where u1 and u2, respectively, solve
(a)
(u1)t − α2(u1)xx = 0
u1(0, t) = 0 u1(1, t) = 0
u1(x, 0) = sin(πx)
(b)
(u2)t − α2(u2)xx = t sin(2πx)
u2(0, t) = 0 u2(1, t) = 0
u2(x, 0) = 0.
We know that u1(x, t) = e−π2α2t sin(πx). The function u2 is found via Duhamel’s principle.
The solution u2 is given by
u_2(x, t) = \int_0^t v(x, t - s; s)\, ds,
where v solves the problem
vt = α2vxx
v(0, t; s) = 0, v(1, t; s) = 0,
v(x, 0; s) = s sin(2πx).
We know that v(x, t; s) = s e^{-4\pi^2\alpha^2 t} \sin(2\pi x). Thus,
u_2(x, t) = \int_0^t s\, e^{-4\pi^2\alpha^2 (t - s)} \sin(2\pi x)\, ds
          = e^{-4\pi^2\alpha^2 t} \sin(2\pi x) \int_0^t s\, e^{4\pi^2\alpha^2 s}\, ds
          = (4\pi^2\alpha^2)^{-2} \left[ 4\pi^2\alpha^2 t + e^{-4\pi^2\alpha^2 t} - 1 \right] \sin(2\pi x).
The solution is then given by
u(x, t) = w(x, t) + u1(x, t) + u2(x, t).
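The assembled solution can be checked symbolically. The sketch below assumes SymPy, again with `a` standing in for α²:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
a = sp.symbols('a', positive=True)       # 'a' stands for alpha^2
k = 4 * sp.pi**2 * a                     # decay rate of the n = 2 mode

w  = (t**2 - 1) * x + 1                                  # lifts the BC
u1 = sp.exp(-sp.pi**2 * a * t) * sp.sin(sp.pi * x)       # homogeneous part
u2 = k**-2 * (k * t + sp.exp(-k * t) - 1) * sp.sin(2 * sp.pi * x)  # Duhamel part
u  = w + u1 + u2

# PDE residual, BC and IC of the original problem
pde = sp.simplify(sp.diff(u, t) - a * sp.diff(u, x, 2)
                  - t * (sp.sin(2 * sp.pi * x) + 2 * x))
print(pde)                                                       # 0
print(sp.simplify(u.subs(x, 0)))                                 # 1
print(sp.simplify(u.subs(x, 1) - t**2))                          # 0
print(sp.simplify(u.subs(t, 0) - (1 + sp.sin(sp.pi * x) - x)))   # 0
```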
REMARK 6. Duhamel's principle is also applicable to problems with PDE ut − α²uxx = h(x, t) and homogeneous BC of the forms:
ux(0, t) = 0, u(L, t) = 0;  or  u(0, t) = 0, ux(L, t) = 0;  or  ux(0, t) = 0, ux(L, t) = 0.
Practice Problems
1. Solve the following IBVP:
ut = α2uxx + cos(3t), 0 < x < 1, t > 0,
ux(0, t) = 0, ux(1, t) = 1, t > 0,
u(x, 0) = \frac{1}{2}x^2 \cos(\pi x) - x, \quad 0 < x < 1.
2. Solve the following IBVP:
ut = 4uxx + et sin(x/2)− sin(t), 0 < x < π, t > 0,
u(0, t) = cos(t), u(π, t) = 0, t > 0,
u(x, 0) = 1, 0 < x < π.
Module 6: The Wave Equation
In this module we shall study the one-dimensional wave equation, which describes transverse vibrations of an elastic string. This module is organized as follows. In the first lecture, we shall discuss the mathematical formulation of this model using Newton's second law of motion. Further, we shall also establish the uniqueness of solutions by proving that the energy is conserved. In the second lecture, we shall derive D'Alembert's formula for the solution of initial value problems for the infinite string. The third lecture deals with some special cases of D'Alembert's formula and the semi-infinite string problems. The fourth lecture is devoted to solving the initial and boundary value problem for a string with fixed ends. Finally, in the last lecture, we shall discuss Duhamel's principle for inhomogeneous wave equations.
MODULE 6: THE WAVE EQUATION 2
Lecture 1 Mathematical Formulation and Uniqueness Result
We begin by studying the one-dimensional wave equation, which describes the transverse vibrations of a string. Consider the small vibrations of a string that is fastened at each end (see Fig. 6.1). We now make the following assumptions:
• The string is made of a homogeneous material (i.e., the mass/unit length of the
string is constant).
• There is no effect of gravity and external forces.
• The vibration takes place in a plane.
The mathematical model under these assumptions describes small vibrations of the string. Consider the forces acting on a small portion PQ of the string. Since the string
Figure 6.1: Vibrations of a string problem
does not offer resistance to bending, the tension is tangential to the curve of the string at
each point. Let T1 and T2, respectively, be the tensions at the endpoints P and Q. Since
there is no motion in horizontal direction, the horizontal components of the tension must
be constant. From the Fig. 6.1, we obtain
T1 cos θ1 = T2 cos θ2 = T = constant. (1)
Let −T1 sin θ1 and T2 sin θ2 be two components of T1 and T2, respectively in the vertical
direction. The minus sign indicates that component at P is directed downward. By
Newton’s second law, the resultant of these two forces is equal to the mass ρ∆x of the
portion times the acceleration utt, evaluated at some point between x and x+∆x. If ρ is
the mass of the undeflected string per unit length and ∆x is length of the portion of the
undeflected string then we have
T2 sin θ2 − T1 sin θ1 = ρ∆xutt.
In view of (1), we obtain
\frac{T_2 \sin\theta_2}{T_2 \cos\theta_2} - \frac{T_1 \sin\theta_1}{T_1 \cos\theta_1} = \tan\theta_2 - \tan\theta_1 = \frac{\rho\,\Delta x}{T}\, u_{tt}. (2)
Note that \tan\theta_1 and \tan\theta_2 are the slopes of the curve of the string at x and x + \Delta x, i.e.,
\tan\theta_1 = (u_x)_P, \qquad \tan\theta_2 = (u_x)_Q.
Here, partial derivatives are used because u also depends on t. Dividing (2) by \Delta x, we have
\frac{1}{\Delta x}\left[ u_x(x + \Delta x, t) - u_x(x, t) \right] = \frac{\rho}{T}\, u_{tt}.
Letting ∆x→ 0, we obtain
utt = c2uxx, (3)
where c^2 = T/\rho.
NOTE: The notation c2 (instead of c) for the physical constant T/ρ has been chosen to
indicate that this constant is positive. The constant c2 depends on the density and tension
of the string.
As the problem is linear, it is enough to prove the uniqueness of the solution. The uniqueness result is proved in the following theorem.
THEOREM 1. Let u1(x, t) and u2(x, t) be two solutions of
PDE: utt = c2uxx, 0 ≤ x ≤ L, −∞ < t <∞,
BC: u(0, t) = a(t), u(L, t) = b(t),
IC: u(x, 0) = f(x), ut(x, 0) = g(x).
Then u1(x, t) = u2(x, t) for all 0 ≤ x ≤ L, −∞ < t <∞.
Proof. Let v(x, t) = u1(x, t)− u2(x, t). Note that v satisfies
vtt = c2vxx, 0 ≤ x ≤ L, −∞ < t <∞,
v(0, t) = 0, v(L, t) = 0,
v(x, 0) = 0, vt(x, 0) = 0.
with homogeneous BC and IC. Observe that v(x, 0) = 0 and vt(x, 0) = 0. We need to
show that v(x, t) = 0 for all t. We write
v(x, t) = v(x, t) - v(x, 0) = \int_0^t v_t(x, \tau)\, d\tau. (4)
We now claim that vt(x, t) = 0 for all x in [0, L] and for all t. Construct the function
H(t) = \int_0^L \left[ c^2 v_x^2(x, t) + v_t^2(x, t) \right] dx. (5)
Differentiating with respect to t and using vtt = c2vxx, we obtain
H'(t) = \int_0^L \left( 2c^2 v_x v_{xt} + 2 v_t v_{tt} \right) dx
      = 2c^2 \int_0^L \left( v_x v_{xt} + v_t v_{xx} \right) dx
      = 2c^2 \int_0^L \frac{\partial}{\partial x}\left( v_x v_t \right) dx
      = 2c^2 \left. v_x(x, t)\, v_t(x, t) \right|_0^L = 0,
where in the last step we have used v_t(0, t) = \frac{d}{dt} v(0, t) = 0 and, similarly, v_t(L, t) = 0.
Thus,
H ′(t) = 0 =⇒ H(t) = C,
where C is an arbitrary constant. Since H(0) = 0, we have C = 0 and, hence H(t) = 0.
Thus, (5) becomes
\int_0^L \left[ c^2 v_x^2(x, t) + v_t^2(x, t) \right] dx = 0 \implies v_t(x, t) = 0 \quad \forall x \in [0, L],\ \forall t \in \mathbb{R}.
In view of (4), we obtain
v(x, t) = \int_0^t v_t(x, \tau)\, d\tau = 0 \implies u_1(x, t) = u_2(x, t).
This completes the proof.
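The conserved quantity H(t) in (5) is an energy, and the same integral is constant in time for any solution of the wave equation with fixed ends. A small numerical sketch (not from the notes; it assumes NumPy) evaluates it for the standing wave u = sin(πx) cos(πct) with c = L = 1:

```python
import numpy as np

c, L = 1.0, 1.0
xs = np.linspace(0.0, L, 2001)

def trapezoid(y, x):
    # simple trapezoid rule, to avoid depending on a specific NumPy version
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def u_x(x, t):   # d/dx of u = sin(pi x) cos(pi c t)
    return np.pi * np.cos(np.pi * x) * np.cos(np.pi * c * t)

def u_t(x, t):   # d/dt of u
    return -np.pi * c * np.sin(np.pi * x) * np.sin(np.pi * c * t)

def H(t):        # the energy integral of (5), evaluated for this solution
    return trapezoid(c**2 * u_x(xs, t)**2 + u_t(xs, t)**2, xs)

energies = [H(t) for t in (0.0, 0.3, 0.7, 1.2)]
print(energies)   # all approximately pi^2 / 2
```

The four values agree to several digits, illustrating conservation of energy.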
Lecture 2 The Infinite String Problem
In this lecture, we shall show that the solution of the wave equation
utt = c2uxx
can be immediately obtained with a suitable transformation of the independent variables.
We shall derive D'Alembert's formula for the solution of the wave equation for an infinite
string (−∞ < x <∞) with IC u(x, 0) = f(x) and ut(x, 0) = g(x).
Consider the following IVP:
PDE: utt = c2uxx, −∞ < x <∞, t ≥ 0, (1)
IC: u(x, 0) = f(x) (initial displacement), (2)
ut(x, 0) = g(x) (initial velocity).
Step 1.(Transforming to its canonical form): Introducing the transformation
\xi = x + ct, \qquad \eta = x - ct,
we note that
ux = uξξx + uηηx = uξ + uη.
uxx = (uξ + uη)x
= (uξ + uη)ξξx + (uξ + uη)ηηx
= uξξ + 2uξη + uηη.
Similarly,
utt = c2(uξξ − 2uξη + uηη).
Substituting the expression for uxx and utt in utt = c2uxx yields
uξη = 0, (3)
which is known as canonical form of (1).
Step 2. (Solving the transformed equation (3)): Integrating (3) with respect to ξ gives
u_\eta(\xi, \eta) = \theta(\eta),
where θ is an arbitrary function of η alone. Integrating once more, now with respect to η, the general solution of u_{\xi\eta} = 0 is
u(\xi, \eta) = \phi(\eta) + \psi(\xi), (4)
where \phi(\eta) (an antiderivative of \theta(\eta)) and \psi(\xi) are arbitrary functions of η and ξ, respectively.
Step 3. (Transforming back to the original variables x and t): Substituting ξ = x + ct
and η = x− ct in (4) we get
u(x, t) = ϕ(x− ct) + ψ(x+ ct). (5)
This is the general solution of the wave equation. We may interpret (5) as the sum of two waves moving in opposite directions, each with speed c.
Step 4. (Applying IC to the general solution): In order to solve IVP (1)-(2), the general
solution u(x, t) is required to satisfy the two initial conditions
u(x, 0) = f(x), ut(x, 0) = g(x).
These conditions lead to the following equations:
ϕ(x) + ψ(x) = f(x) (6)
−cϕ′(x) + cψ′(x) = g(x). (7)
Integrating (7) from x0 to x, we obtain
-c\,\phi(x) + c\,\psi(x) = \int_{x_0}^{x} g(\tau)\, d\tau + K. (8)
Solving for \phi(x) and \psi(x) from (6) and (8) (the arbitrary constant K cancels in the final solution, so we set K = 0), we obtain
\phi(x) = \frac{1}{2} f(x) - \frac{1}{2c} \int_{x_0}^{x} g(\tau)\, d\tau, (9)
\psi(x) = \frac{1}{2} f(x) + \frac{1}{2c} \int_{x_0}^{x} g(\tau)\, d\tau. (10)
Thus, the solution to IVP (1)-(2) is given by
u(x, t) = \frac{1}{2}\left[ f(x - ct) + f(x + ct) \right] + \frac{1}{2c} \int_{x - ct}^{x + ct} g(\tau)\, d\tau. (11)
The equation (11) is known as D'Alembert's solution to the IVP (1)-(2). This formula is
of great interest in itself, and it avoids the problem of convergence of infinite series in the
Fourier series approach.
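D'Alembert's formula can be verified symbolically for arbitrary data. The sketch below (assuming SymPy) checks that (11) satisfies the wave equation and the initial-velocity condition; the initial-displacement condition follows in the same way.

```python
import sympy as sp

x, t, r = sp.symbols('x t r', real=True)
c = sp.symbols('c', positive=True)
f, g = sp.Function('f'), sp.Function('g')

# D'Alembert's formula (11) with arbitrary data f, g
u = (f(x - c*t) + f(x + c*t)) / 2 \
    + sp.Integral(g(r), (r, x - c*t, x + c*t)) / (2 * c)

# (11) satisfies the wave equation ...
wave = sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2))
# ... and the initial-velocity condition u_t(x, 0) = g(x)
ic_ut = sp.simplify(sp.diff(u, t).subs(t, 0))
print(wave, ic_ut)   # 0 g(x)
```

Differentiating the unevaluated `Integral` uses exactly the Leibniz rule invoked in the derivation above.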
REMARK 1. D’Alembert’s formula yields a number of properties of solutions of the wave
problem for the infinite string.
• Disturbances propagate with speed c.
The value u(x0, t0) depends only on the values of g in the interval [x0 − ct0, x0 + ct0] and on the values of f at the endpoints of this interval. Geometrically, this is the
interval cut out by the characteristic lines that pass through the point (x0, t0). The
interval [x0 − ct0, x0 + ct0] is called the interval of dependence for the point (x0, t0)
(since u(x0, t0) depends only on the values u(x, 0) and ut(x, 0) for x in this interval).
• Odd initial data yields odd solution and even initial data yields even solution.
If f(x) and g(x) are odd, then u(x, t) is odd in the x-variable, since
u(-x, t) = \frac{1}{2}\left[ f(-x + ct) + f(-x - ct) \right] + \frac{1}{2c} \int_{-x - ct}^{-x + ct} g(r)\, dr
= \frac{1}{2}\left[ -f(x - ct) - f(x + ct) \right] - \frac{1}{2c} \int_{x + ct}^{x - ct} g(-s)\, ds
= -\frac{1}{2}\left[ f(x - ct) + f(x + ct) \right] + \frac{1}{2c} \int_{x + ct}^{x - ct} g(s)\, ds
= -\frac{1}{2}\left[ f(x + ct) + f(x - ct) \right] - \frac{1}{2c} \int_{x - ct}^{x + ct} g(s)\, ds
= -u(x, t).
Similarly, we can show that if f(x) and g(x) are even then u(x, t) is even i.e.,
u(−x, t) = u(x, t).
• Periodic initial data yield periodic solutions.
If f(x+ 2L) = f(x) and g(x+ 2L) = g(x), then u(x+ 2L, t) = u(x, t). That is, if f
and g are periodic of period 2L then u(x, t) is also periodic of period 2L in x. This
follows easily from D’Alembert’s formula. This fact is useful in dealing with finite
strings.
It can be shown that if f(x) and g(x) are periodic of period 2L and \int_{-L}^{L} g(x)\, dx = 0,
then u(x, t) is not only periodic in x of period 2L, but also periodic in t of period
2L/c.
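The periodicity claims are easy to confirm on a concrete example. A sketch (assuming SymPy) takes 2-periodic data f(x) = sin(πx), g = 0, so L = 1 and the condition ∫_{-L}^{L} g dx = 0 holds trivially:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
c = sp.symbols('c', positive=True)

# D'Alembert solution for f(x) = sin(pi x) (period 2L with L = 1), g = 0
u = (sp.sin(sp.pi * (x - c*t)) + sp.sin(sp.pi * (x + c*t))) / 2

period_x = sp.simplify(sp.expand(u.subs(x, x + 2) - u))      # period 2L in x
period_t = sp.simplify(sp.expand(u.subs(t, t + 2 / c) - u))  # period 2L/c in t
print(period_x, period_t)   # 0 0
```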
Special cases of D’Alembert’s formula:
CASE I. (Initial velocity zero). Suppose the string has IC
u(x, 0) = f(x)
ut(x, 0) = 0.
The D’Alembert solution is
u(x, t) = \frac{1}{2}\left[ f(x - ct) + f(x + ct) \right].
Thus, the solution u at a point (x0, t0) can be interpreted as the average of the initial displacement f(x) at the points (x0 − ct0, 0) and (x0 + ct0, 0), found by backtracking along the characteristic curves x − ct = x0 − ct0 and x + ct = x0 + ct0.
CASE 2. (Initial displacement zero) Suppose the string has the following IC:
u(x, 0) = 0
ut(x, 0) = g(x).
In this case, the solution is
u(x, t) = \frac{1}{2c} \int_{x - ct}^{x + ct} g(\tau)\, d\tau.
The solution u at (x, t) may be interpreted as integrating the initial velocity between x − ct and x + ct on the initial line t = 0.
Let us consider the following examples.
EXAMPLE 2. (Zero initial velocity) Solve the IVP:
PDE: utt = c2uxx, −∞ < x, t <∞,
IC: u(x, 0) = sin(x),
ut(x, 0) = 0.
Solution: Applying D’Alembert’s formula (11) with f(x) = sin(x) and g(x) = 0, we
obtain
u(x, t) = \frac{1}{2}\left[ \sin(x - ct) + \sin(x + ct) \right].
EXAMPLE 3. (Zero initial displacement) Consider the IVP:
PDE: utt = c2uxx, −∞ < x, t <∞
I.C. u(x, 0) = 0,
ut(x, 0) = sin(x).
Solution: Here the string is initially straight (u(x, 0) = 0), but has a variable velocity
at t = 0 (ut(x, 0) = sin(x)). Thus, applying D’Alembert’s formula (11) with f(x) = 0 and
g(x) = sin(x), we obtain
u(x, t) = \frac{1}{2c} \int_{x - ct}^{x + ct} \sin(\tau)\, d\tau = -\frac{1}{2c}\left[ \cos(x + ct) - \cos(x - ct) \right].
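Both examples can be confirmed symbolically; a sketch assuming SymPy:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
c = sp.symbols('c', positive=True)

u2 = (sp.sin(x - c*t) + sp.sin(x + c*t)) / 2           # Example 2
u3 = -(sp.cos(x + c*t) - sp.cos(x - c*t)) / (2 * c)    # Example 3

checks = []
for u, f0, g0 in ((u2, sp.sin(x), 0), (u3, 0, sp.sin(x))):
    checks.append(sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)))  # PDE
    checks.append(sp.simplify(u.subs(t, 0) - f0))                           # u(x, 0)
    checks.append(sp.simplify(sp.diff(u, t).subs(t, 0) - g0))               # u_t(x, 0)
print(checks)   # [0, 0, 0, 0, 0, 0]
```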
Practice Problems
1. Solve the following IVP:
utt = 9uxx, −∞ < x <∞, t > 0,
u(x, 0) = sinx, ut(x, 0) = cosx, −∞ < x <∞.
2. Solve the following IVP:
utt = c2uxx, −∞ < x <∞, t > 0,
u(x, 0) = 0, ut(x, 0) = sin2(x), −∞ < x <∞.
3. Let u(x, t) be the solution of
utt = c²uxx, −∞ < x < ∞, t > 0,
u(x, 0) = f(x), ut(x, 0) = g(x), −∞ < x < ∞,
where f and g are even functions. Use D'Alembert's formula to show that u is even in x.
Lecture 3 The Semi-Infinite String Problem
Before we introduce the semi-infinite string problem, let us look at some special cases of
D’Alembert’s formula derived in the previous lecture.
EXAMPLE 1. Consider the problem for the semi-infinite string (0 ≤ x < ∞) with fixed
end at x = 0:
PDE: utt = c2uxx, 0 ≤ x <∞, −∞ < t <∞
BC: u(0, t) = 0
IC: u(x, 0) = f(x), ut(x, 0) = 0.
Solution. Note that f(x) is defined for x ≥ 0. Consider the odd extension f0(x),
−∞ < x <∞ as follows:
f_0(x) = \begin{cases} f(x) & \text{for } x \geq 0, \\ -f(-x) & \text{for } x < 0. \end{cases}
The related extended problem is
PDE: utt = c²uxx, −∞ < x, t < ∞,
I.C. u(x, 0) = f0(x), ut(x, 0) = 0.
By D’Alembert’s formula, the solution of this problem is
u(x, t) = \frac{1}{2}\left[ f_0(x + ct) + f_0(x - ct) \right].
Note that u(x, t) is odd in x, since f0(x) is odd. Thus, u(0, t) = 0 and so u(x, t) satisfies
the BC.
Moreover,
u(x, 0) = \frac{1}{2}\left[ f_0(x + c \cdot 0) + f_0(x - c \cdot 0) \right] = f_0(x),
which is the same as f(x) when x ≥ 0.
Semi-infinite string problem: We shall find the solution of the following wave equation, whose left end is fixed at zero and which has given initial conditions:
PDE: utt = c²uxx, 0 < x < ∞, 0 < t < ∞,
BC: u(0, t) = 0, 0 < t < ∞,
IC: u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < ∞.
Recall that the solution of the wave equation is given by (see (5), Lecture 2 of this module)
u(x, t) = \phi(x - ct) + \psi(x + ct). (1)
Substituting the general solution into the initial conditions, we arrive at (cf. (9)-(10), Lecture 2 of this module)
\phi(x - ct) = \frac{1}{2} f(x - ct) - \frac{1}{2c} \int_{x_0}^{x - ct} g(\xi)\, d\xi, (2)
\psi(x + ct) = \frac{1}{2} f(x + ct) + \frac{1}{2c} \int_{x_0}^{x + ct} g(\xi)\, d\xi. (3)
Since we are looking for the solution u(x, t) everywhere in the first quadrant (x > 0, t > 0)
of the xt-plane, we must find ϕ(x−ct) ∀ −∞ < x−ct <∞ and ψ(x+ct) ∀ 0 < x+ct <∞.
Using (1), (2) and (3), for x− ct ≥ 0, it follows that
u(x, t) = \phi(x - ct) + \psi(x + ct) = \frac{1}{2}\left[ f(x - ct) + f(x + ct) \right] + \frac{1}{2c} \int_{x - ct}^{x + ct} g(\xi)\, d\xi.
When x < ct, use of the BC u(0, t) = 0 leads to
\phi(-ct) = -\psi(ct),
and hence,
\phi(x - ct) = -\frac{1}{2} f(ct - x) - \frac{1}{2c} \int_{x_0}^{ct - x} g(\xi)\, d\xi.
Substituting this value of ϕ into the general solution
u(x, t) = ϕ(x− ct) + ψ(x+ ct).
yields
u(x, t) = \frac{1}{2}\left[ f(x + ct) - f(ct - x) \right] + \frac{1}{2c} \int_{ct - x}^{x + ct} g(\xi)\, d\xi, \quad 0 < x < ct.
Thus, for x ≥ ct and x < ct, we have
u(x, t) = \begin{cases} \frac{1}{2}\left[ f(x - ct) + f(x + ct) \right] + \frac{1}{2c} \int_{x - ct}^{x + ct} g(\xi)\, d\xi, & x \geq ct, \\ \frac{1}{2}\left[ f(x + ct) - f(ct - x) \right] + \frac{1}{2c} \int_{ct - x}^{x + ct} g(\xi)\, d\xi, & x < ct. \end{cases}
EXAMPLE 2. Find the solution of the following IBVP:
utt = uxx, 0 < x <∞, t > 0,
u(0, t) = 0, t > 0,
u(x, 0) = | sinx|, ut(x, 0) = 0, 0 < x <∞.
Solution. For x > t,
u(x, t) = \frac{1}{2}\left( f(x + t) + f(x - t) \right) = \frac{1}{2}\left( |\sin(x + t)| + |\sin(x - t)| \right).
For x < t,
u(x, t) = \frac{1}{2}\left( f(x + t) - f(t - x) \right) = \frac{1}{2}\left( |\sin(x + t)| - |\sin(t - x)| \right).
Observe that u(0, t) = 0 is satisfied by u(x, t) for x < t. Thus,
u(x, t) = \begin{cases} \frac{1}{2}\left( |\sin(x + t)| + |\sin(x - t)| \right), & x > t, \\ \frac{1}{2}\left( |\sin(x + t)| - |\sin(t - x)| \right), & x < t. \end{cases}
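A numerical cross-check (not in the original text; it assumes NumPy) compares the piecewise reflection formula above with D'Alembert's formula applied to the odd extension of f(x) = |sin x|, and confirms the boundary condition u(0, t) = 0:

```python
import numpy as np

def f(x):
    return np.abs(np.sin(x))

def f_odd(x):
    # odd extension of f to the whole real line
    return np.where(x >= 0, np.abs(np.sin(x)), -np.abs(np.sin(-x)))

def u_reflect(x, t):
    # piecewise formula from the text (here c = 1, g = 0)
    return np.where(x > t,
                    0.5 * (f(x + t) + f(x - t)),
                    0.5 * (f(x + t) - f(t - x)))

def u_extension(x, t):
    # D'Alembert's formula applied to the odd extension
    return 0.5 * (f_odd(x + t) + f_odd(x - t))

xs = np.linspace(0.0, 10.0, 401)
for tt in (0.5, 2.0, 7.3):
    assert np.allclose(u_reflect(xs, tt), u_extension(xs, tt))
print(u_reflect(np.array([0.0]), 2.0))   # BC at x = 0: [0.]
```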
Practice Problems
1. Solve the following IBVP:
utt = uxx, 0 < x <∞, t > 0,
ux(0, t) = 0, t ≥ 0,
u(x, 0) = cosx, ut(x, 0) = 0, 0 ≤ x <∞.
2. Solve the following IBVP:
utt = c2uxx, 0 < x <∞, t > 0,
u(0, t) = 0, t ≥ 0,
u(x, 0) = x2, ut(x, 0) = 0, 0 ≤ x <∞.
Lecture 4 The Finite Vibrating String Problem
In this lecture, we shall study the transverse vibrations of a finite string. If u(x, t) represents the displacement (deflection) of the string and the ends of the string are held fixed,
then the motion of the string is described by the following initial-boundary value problem
(IBVP):
PDE: utt = c2uxx, 0 < x < L, 0 < t <∞, (1)
BC: u(0, t) = 0; u(L, t) = 0, 0 < t <∞. (2)
IC: u(x, 0) = f(x); ut(x, 0) = g(x), 0 ≤ x ≤ L. (3)
While studying the wave equation in a bounded region of space 0 < x < L, it is to be noted that the waves no longer appear to be moving, due to their repeated interaction with the boundaries. Such waves are known as standing waves (e.g., those of a guitar string fixed at both ends). The boundary conditions in (2) reflect the fact that the string is held fixed at the two end points x = 0 and x = L.
We shall apply the method of separation of variables to solve this problem.
Step 1. (Reducing to a system of ODEs): We seek solutions of the form
u(x, t) = X(x)T (t). (4)
Substituting (4) into utt = c2uxx and separating variables, we get
X(x)T ′′(t) = c2X ′′(x)T (t).
or
\frac{T''(t)}{c^2 T(t)} = \frac{X''(x)}{X(x)} = k,
where the constant k can now be any number −∞ < k <∞. This leads to two ODEs:
T''(t) - c^2 k\, T(t) = 0, (5)
X''(x) - k\, X(x) = 0. (6)
The ODE X′′ − kX = 0 is solved for X(x) in a manner similar to that of the heat equation (see Lecture 3 of Module 5), but the solutions of the ODE T′′ − c²kT = 0 for T(t) are different, because of the second-order time derivative.
Step 2. (Solving the ODEs): Investigating the solutions of these two ODEs for all possible values of k leads to the following cases.
Case I: Let k > 0. Set k = λ². The solutions are given by
T(t) = A e^{c\lambda t} + B e^{-c\lambda t},
X(x) = C e^{\lambda x} + D e^{-\lambda x}.
Application of BC yields u ≡ 0.
Case II : Let k = 0. In this case, the solutions are linear and given by
T (t) = At+B, X(x) = Cx+D.
This case is of no interest because use of the BC yields the trivial solution u ≡ 0. Hence, for a nontrivial solution, we are left with the possibility of choosing k < 0.
Case III: Let k < 0. Set k = −λ² for some λ ∈ ℝ, λ ≠ 0.
The solution of T''(t) + c^2 \lambda^2 T(t) = 0 is given by
T(t) = A \sin(c\lambda t) + B \cos(c\lambda t).
The solution of X''(x) + \lambda^2 X(x) = 0 is
X(x) = C \sin(\lambda x) + D \cos(\lambda x),
where A,B,C and D are constants. Then
u(x, t) = [A sin(cλt) +B cos(cλt)][C sin(λx) +D cos(λx)].
Our goal is to find the constants A, B, C and D and the constant λ (with k = −λ²) so that the expression
u(x, t) = [C sin(λx) +D cos(λx)][A sin(cλt) +B cos(cλt)] (7)
satisfies the BC. As u(x, t) has to satisfy the BC (2), substituting (7) into u(0, t) =
u(L, t) = 0 gives
u(0, t) = X(0)T(t) = D\left[ A \sin(c\lambda t) + B \cos(c\lambda t) \right] = 0 \implies D = 0,
u(L, t) = 0 \implies X(L)T(t) = C \sin(\lambda L)\left[ A \sin(c\lambda t) + B \cos(c\lambda t) \right] = 0 \implies \sin(\lambda L) = 0 \implies \lambda L = n\pi, \text{ i.e., } \lambda_n = \frac{n\pi}{L}, \quad n = 1, 2, 3, \ldots.
Note that the choice C = 0 in (7) would lead to X(x)T(t) \equiv 0. Thus, we obtain the sequence of solutions
u_n(x, t) = X_n(x) T_n(t) = \sin\!\left( \frac{n\pi x}{L} \right) \left[ a_n \sin\!\left( \frac{n\pi c t}{L} \right) + b_n \cos\!\left( \frac{n\pi c t}{L} \right) \right], \quad n = 1, 2, 3, \ldots
As the PDE is linear, by the superposition principle we write
u(x, t) = \sum_{n=1}^{\infty} \sin\!\left( \frac{n\pi x}{L} \right) \left[ a_n \sin\!\left( \frac{n\pi c t}{L} \right) + b_n \cos\!\left( \frac{n\pi c t}{L} \right) \right]. (8)
These solutions are called eigenfunctions, and the values λ_n = nπ/L are called the eigenvalues of the vibrating string.
Step 3. (Applying IC): Substituting (8) into IC u(x, 0) = f(x), ut(x, 0) = g(x) yields the
two equations:
\sum_{n=1}^{\infty} b_n \sin\!\left( \frac{n\pi x}{L} \right) = f(x), \qquad \sum_{n=1}^{\infty} a_n \left( \frac{n\pi c}{L} \right) \sin\!\left( \frac{n\pi x}{L} \right) = g(x),
which represent the Fourier sine expansions of f(x) and g(x), respectively. The coefficients
an and bn are given by
a_n = \frac{2}{n\pi c} \int_0^L g(x) \sin\!\left( \frac{n\pi x}{L} \right) dx, (9)
b_n = \frac{2}{L} \int_0^L f(x) \sin\!\left( \frac{n\pi x}{L} \right) dx. (10)
Thus, the solution is
u(x, t) = \sum_{n=1}^{\infty} \sin\!\left( \frac{n\pi x}{L} \right) \left[ a_n \sin\!\left( \frac{n\pi c t}{L} \right) + b_n \cos\!\left( \frac{n\pi c t}{L} \right) \right], (11)
where an and bn are given by (9) and (10), respectively.
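Formulas (9)-(10) are ordinary Fourier sine coefficients and can be sanity-checked numerically. The sketch below (assuming NumPy) takes f(x) = sin(πx) with L = 1, for which (10) must give b₁ = 1 and bₙ = 0 for n ≥ 2:

```python
import numpy as np

L = 1.0
xs = np.linspace(0.0, L, 4001)

def trapezoid(y, x):
    # simple trapezoid rule, to avoid depending on a specific NumPy version
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def b_coeff(f_vals, n):
    # b_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx, formula (10)
    return 2.0 / L * trapezoid(f_vals * np.sin(n * np.pi * xs / L), xs)

f_vals = np.sin(np.pi * xs)          # initial displacement f(x) = sin(pi x)
b = [b_coeff(f_vals, n) for n in range(1, 6)]
print(np.round(b, 6))                # approximately [1. 0. 0. 0. 0.]
```

The orthogonality of the sine modes is what makes all coefficients beyond b₁ vanish.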
REMARK 1. • The function u(x, t) given by (11) with coefficients (9) and (10), is a
solution of (1) that satisfies the conditions (2) and (3), provided that the series (11)
converges and also that the series obtained by differentiating (11) twice (term-wise)
with respect to x and t, converge and have the sums uxx and utt, respectively, which
are continuous.
• Note that each un in (8) represents a harmonic motion having the frequency cλn/2π = cn/2L cycles per unit time. This motion is called the nth normal mode of the string.
The first normal mode is known as the fundamental mode (n = 1), and the others
are known as overtones.
Practice Problems
1. Solve the following IBVP:
utt = uxx, 0 < x < 1, t > 0,
u(0, t) = u(1, t) = 0, t > 0,
u(x, 0) = x(1− x), ut(x, 0) = 0, 0 ≤ x ≤ 1.
2. Solve the following IBVP:
utt = 4uxx, 0 < x < π, t > 0,
u(0, t) = u(π, t) = 0, t > 0,
u(x, 0) = 0, ut(x, 0) = sinx, 0 ≤ x ≤ π.
Lecture 5 The Inhomogeneous Wave Equation
Recall Duhamel's principle for inhomogeneous heat equations, which arise due to internal heat sources. We solved the inhomogeneous heat equation by solving a family of related problems in which the source appears in the initial conditions instead of the differential equation. The same idea works for inhomogeneous wave equations. To illustrate the procedure, let us consider the following infinite string problem:
the procedure, let us consider the following infinite string problem:
PDE: utt = c2uxx + h(x, t), −∞ < x, t <∞, (1)
IC: u(x, 0) = 0, ut(x, 0) = 0. (2)
To motivate the method of Duhamel for the string problem, let the acceleration h(x, s) be
applied to the string at t = s −∆s and let the acceleration be turned off at t = s. The
string will then acquire a velocity of h(x, s)∆s, and its position change is h(x, s)(∆s)2/2.
Assuming ∆s to be small enough, the change in position can be neglected. The effect of
the imposed acceleration is v(x, t; s)∆s, where v(x, t; s) is the solution of
PDE: vtt = c2vxx, −∞ < x <∞, t ≥ s, (3)
IC: v(x, s; s) = 0, vt(x, s; s) = h(x, s). (4)
This problem has initial conditions given at the arbitrary time t = s, instead of t = 0. We can write v(x, t; s) = v(x, t − s; s), where the v on the right-hand side solves
PDE: v_{tt} = c^2 v_{xx}, \quad −∞ < x < ∞, \ t ≥ 0, (5)
IC: v(x, 0; s) = 0, \quad v_t(x, 0; s) = h(x, s). (6)
By D'Alembert's formula, the solution of (5)-(6) is given by
v(x, t; s) = \frac{1}{2c} \int_{x - ct}^{x + ct} h(r, s)\, dr, (7)
and hence, the solution of (3)-(4) is
v(x, t - s; s) = \frac{1}{2c} \int_{x - c(t - s)}^{x + c(t - s)} h(r, s)\, dr.
THEOREM 1. (Duhamel's principle for the wave equation [1]) Let h(x, t) be a C¹ function, −∞ < x, t < ∞. Then the unique solution of the problem (1) satisfying the conditions (2) is given by
u(x, t) = \int_0^t v(x, t - s; s)\, ds = \frac{1}{2c} \int_0^t \int_{x - c(t - s)}^{x + c(t - s)} h(r, s)\, dr\, ds, (8)
where v(x, t; s) solves (5)-(6).
Proof. By D'Alembert's formula, we know
v(x, t; s) = \frac{1}{2c} \int_{x - ct}^{x + ct} h(r, s)\, dr.
Note that v(x, t; s) is C² since h(x, t) is assumed to be C¹. Differentiating u with respect to t (Leibniz's rule), we obtain
u_t(x, t) = v(x, 0; t) + \int_0^t v_t(x, t - s; s)\, ds = \int_0^t v_t(x, t - s; s)\, ds, (9)
since v(x, 0; t) = 0 by (6),
and
u_{tt}(x, t) = v_t(x, 0; t) + \int_0^t v_{tt}(x, t - s; s)\, ds
             = h(x, t) + \int_0^t c^2 v_{xx}(x, t - s; s)\, ds
             = h(x, t) + c^2 u_{xx}(x, t),
where we have used (5). This shows that u(x, t) is a C2 solution of (1). By (8), we have
u(x, 0) = 0. The equation (9) yields ut(x, 0) = 0.
To prove the uniqueness, let u1 and u2 be two solutions of (1)-(2). Now, the function
v = u1 − u2 satisfies vtt = c²vxx with IC v(x, 0) = 0 and vt(x, 0) = 0. Hence, v ≡ 0, i.e., u1 = u2. This completes the proof.
EXAMPLE 2. Solve
PDE: utt − uxx = x− t, −∞ < x, t <∞, (10)
IC: u(x, 0) = x4, ut(x, 0) = sin(x).
Solution. Split the problem (10) into two problems, where u1(x, t) and u2(x, t) solve, respectively,
(u_1)_{tt} - (u_1)_{xx} = 0, \quad u_1(x, 0) = x^4, \quad (u_1)_t(x, 0) = \sin(x),
and
(u_2)_{tt} - (u_2)_{xx} = x - t, \quad u_2(x, 0) = 0, \quad (u_2)_t(x, 0) = 0.
The solution of (10) is then u(x, t) = u_1(x, t) + u_2(x, t). By D'Alembert's formula,
u_1(x, t) = \frac{1}{2}\left[ (x + t)^4 + (x - t)^4 \right] - \frac{1}{2}\left[ \cos(x + t) - \cos(x - t) \right].
Applying Theorem 1, we compute u2(x, t) as follows:
u_2(x, t) = \frac{1}{2} \int_0^t \int_{x - (t - s)}^{x + (t - s)} (r - s)\, dr\, ds = \frac{1}{2} \int_0^t \left[ \frac{r^2}{2} - sr \right]_{x - t + s}^{x + t - s} ds
= \frac{1}{2} \int_0^t \left[ \frac{(x + t - s)^2}{2} - \frac{(x + s - t)^2}{2} - s(x + t - s) + s(x + s - t) \right] ds
= \frac{1}{2} \int_0^t \left[ 2s^2 - 2s(x + t) + \frac{(x + t)^2}{2} - \frac{(x - t)^2}{2} \right] ds
= \frac{t^3}{3} - \frac{t^2 (x + t)}{2} + t^2 x = -\frac{t^3}{6} + \frac{t^2 x}{2}.
The solution u(x, t) = u1(x, t) + u2(x, t) can easily be verified.
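The verification mentioned above can be carried out symbolically; a sketch assuming SymPy:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

# Pieces of the solution of Example 2 (c = 1)
u1 = ((x + t)**4 + (x - t)**4) / 2 - (sp.cos(x + t) - sp.cos(x - t)) / 2
u2 = -t**3 / 6 + t**2 * x / 2
u = u1 + u2

print(sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - (x - t)))   # 0 (PDE)
print(sp.simplify(u.subs(t, 0) - x**4))                             # 0 (u(x,0))
print(sp.simplify(sp.diff(u, t).subs(t, 0) - sp.sin(x)))            # 0 (u_t(x,0))
```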
REMARK 3. Duhamel’s principle also applies in the case of a finite string. As in Example
2, one can handle the case where both the differential equation and BC are inhomogeneous.
This is done by splitting the problem into two parts and then adding the solutions of the
two parts to obtain the desired solution.
Practice Problems
1. Solve the following nonhomogeneous IBVP:
utt = uxx + x sin t, 0 < x < 1, t > 0,
u(x, 0) = x(1− x), ut(x, 0) = 0, 0 ≤ x ≤ 1,
u(0, t) = u(1, t) = 0, t > 0.
2. Solve the following nonhomogeneous IBVP:
utt = uxx + 2, 0 < x < 1, t > 0,
u(x, 0) = x, ut(x, 0) = 0, 0 ≤ x ≤ 1,
u(0, t) = 0, ux(1, t) = t, t ≥ 0.
Module 7: The Laplace Equation
In this module, we shall study one of the most important partial differential equations in
physics known as the Laplace equation
∇²u = 0 in Ω ⊂ ℝⁿ, (1)
where \nabla^2 u := \sum_{i=1}^{n} \frac{\partial^2 u}{\partial x_i^2} is the Laplacian of the function u. The theory of the solutions of Laplace's equation is called potential theory. The equation (1) is often referred to as the potential equation, as the function u is frequently a potential function. Solutions of (1) that have continuous second-order partial derivatives are called harmonic functions. For ease of exposition, we shall study Laplace's equation in two dimensions.
This module consists of six lectures. The first lecture introduces some basic concepts and the maximum and minimum principle for boundary value problems (BVP). In the second lecture, we discuss Green's identities, the fundamental solution of the Laplace equation, and the Poisson integral formula. The solution of the Laplace equation for a rectangular region is discussed in the third lecture. The mixed BVP for a rectangle is discussed in the fourth lecture. In the fifth lecture, we solve the Laplace equation for the annular region between concentric circles. Finally, the sixth lecture is devoted to the interior and exterior Dirichlet problems for the Laplace equation.
MODULE 7: THE LAPLACE EQUATION 2
Lecture 1 Basic Concepts and The Maximum/Minimum
Principle
Let Ω be an open region in ℝ². The Laplace equation in two dimensions is of the form
∇²u(x, y) = 0, (x, y) ∈ Ω, (1)
where \nabla^2 := \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} is the Laplace operator or the Laplacian. Equations of the type (1) play an important role in a variety of physical contexts, such as gravitation theory, electrostatics, steady-state heat conduction problems, and fluid flow problems.
Some examples of physical problems (cf. [10]):
EXAMPLE 1. (Gravitation theory) The force of attraction F , both inside and outside the
attracting matter, can be expressed in terms of a gravitational potential u by the equation
F = ∇u.
In empty space u satisfies Laplace’s equation
∇2u = 0.
EXAMPLE 2. (Steady-state heat flow problem) In the theory of heat conduction if the
temperature u does not vary with the time, then u satisfies the equation
∇ · (κ∇u) = 0,
where κ is the thermal conductivity. If κ is a constant throughout the medium then
∇2u = 0.
EXAMPLE 3. (Fluid flow problem) The velocity q of a perfect fluid in irrotational motion
can be expressed in terms of a velocity potential u by the equation
q = −∇u.
If there are no sources or sinks at all points of the fluid the function u satisfies Laplace’s
equation
∇2u = 0.
The inhomogeneous Laplace equation
∇2u(x, y) = f(x, y) in Ω,
where f is a given function is known as the Poisson equation.
1 Types of BVP
Because these solutions do not depend on time, initial conditions are irrelevant and only boundary conditions are specified. There are three basic types of boundary conditions that are usually associated with Laplace's equation. They are
• Dirichlet BVP: The BC are of Dirichlet type if the solution u(x, y) to the Laplace equation in a domain Ω is specified on the boundary ∂Ω, i.e.,
u(x, y) = f(x, y) on ∂Ω,
where f(x, y) is a given function. The Laplace equation together with Dirichlet BC
are called the Dirichlet problem / Dirichlet BVP. The Dirichlet problem for
Laplace equation is of the form
∇2u(x, y) = 0 in Ω; u(x, y) = f(x, y) on ∂Ω.
• Neumann BVP: The BC are of Neumann type if the directional derivative ∂u/∂n along the outward normal to the boundary is specified on ∂Ω, i.e.,
\frac{\partial u}{\partial n}(x, y) = g(x, y) \quad \text{for } (x, y) \in \partial\Omega.
In physical terms, the normal component of the solution gradient is known on the
boundary. In steady-state heat flow problem, Neumann BC means the rate of heat
loss or gain through the boundary points is prescribed.
The Laplace equation together with Neumann BC are called the Neumann BVP/
Neumann problem which is written as
∇²u = 0 in Ω; \quad \frac{\partial u}{\partial n}(x, y) = g(x, y) \text{ for } (x, y) \in \partial\Omega.
The Neumann problem will have no solution unless we assume that the average
value of the function g on ∂Ω is zero. This assumption is known as the compatibility
condition
\int_{\partial\Omega} \frac{\partial u}{\partial n} = \int_{\partial\Omega} g = 0,
which will be discussed in the next lecture.
• Robin BVP: The boundary conditions are called Robin (or mixed) type if Dirichlet BC are specified on part of the boundary ∂Ω and Neumann BC are specified on the remaining part, or if a linear combination of u and its normal derivative is prescribed, for example,
\frac{\partial u}{\partial n} + c(u - g) = 0,
where c is a constant and g is a given function that can vary over the boundary. The Laplace equation together with Robin/mixed BC is known as the Robin BVP / mixed BVP.
2 The maximum/minimum principle
The maximum/minimum principle for Laplace's equation is stated in the following theorem.
THEOREM 4. (The maximum/minimum principle for Laplace’s equation)
Let u(x, y) ∈ C²(Ω) ∩ C(\bar\Omega) be a solution of Laplace's equation
∇2u(x, y) := uxx + uyy = 0 (2)
in a bounded region Ω with boundary ∂Ω. Then the maximum and minimum values of u are attained on ∂Ω. That is,
\max_{\bar\Omega} u(x, y) = \max_{\partial\Omega} u(x, y) \quad \text{and} \quad \min_{\bar\Omega} u(x, y) = \min_{\partial\Omega} u(x, y).
Proof. Since u is continuous on \bar\Omega, it attains its maximum either in Ω or on ∂Ω. Suppose, contrary to the claim, that u achieves its maximum at some interior point (x_0, y_0) ∈ Ω with
u(x_0, y_0) = \max_{\bar\Omega} u(x, y) = M_0 > M_b,
where M_b = \max_{\partial\Omega} u(x, y). Consider the function
v(x, y) = u(x, y) + \epsilon \left[ (x - x_0)^2 + (y - y_0)^2 \right], (3)
for some ϵ > 0. Note that v(x_0, y_0) = u(x_0, y_0) = M_0 and
\max_{\partial\Omega} v(x, y) \leq M_b + \epsilon d^2,
where d is the diameter of Ω. Choose ϵ with 0 < ϵ < (M_0 - M_b)/d^2. Then the maximum of v cannot occur on ∂Ω, because
M_0 = v(x_0, y_0) > \max_{\partial\Omega} v(x, y).
Hence v attains its maximum at some interior point (x_1, y_1) ∈ Ω; let
v(x_1, y_1) = \max_{\bar\Omega} v(x, y).
At (x1, y1), we must have
vxx ≤ 0 and vyy ≤ 0 =⇒ vxx + vyy ≤ 0. (4)
From (3), we observe that
v_{xx} + v_{yy} = u_{xx} + u_{yy} + 2\epsilon + 2\epsilon = 4\epsilon > 0,
where we have used the fact that u_{xx} + u_{yy} = 0. This contradicts (4). Thus,
\max_{\bar\Omega} u(x, y) = \max_{\partial\Omega} u(x, y).
So, the maximum of u is attained on ∂Ω.
To prove that the minimum of u is also achieved on the boundary ∂Ω, replace u by −u in the above argument to obtain
\min_{\bar\Omega} u = -\max_{\bar\Omega}(-u) = -\max_{\partial\Omega}(-u) = \min_{\partial\Omega} u.
This completes the proof.
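A small numerical illustration of the principle (not in the original notes; it assumes NumPy): for the harmonic function u = x² − y² on the closed unit disk, the extreme values over interior sample points never exceed those over boundary sample points.

```python
import numpy as np

def u(x, y):
    # u = x^2 - y^2 is harmonic: u_xx + u_yy = 2 - 2 = 0
    return x**2 - y**2

th = np.linspace(0.0, 2 * np.pi, 400)

# interior sample points in polar coordinates (r < 1)
r = np.linspace(0.0, 0.99, 200)
R, TH = np.meshgrid(r, th)
interior = u(R * np.cos(TH), R * np.sin(TH))

# boundary sample points (r = 1)
boundary = u(np.cos(th), np.sin(th))

print(interior.max() <= boundary.max())   # True
print(interior.min() >= boundary.min())   # True
```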
We now discuss the maximum and minimum principle for Poisson’s equation
∇2u(x, y) = f(x, y) in Ω. (5)
THEOREM 5. (The maximum/minimum principle for Poisson’s equation)
Let Ω be a bounded domain in ℝ² with boundary ∂Ω. Then the maximum value of a solution u of (5) is attained on ∂Ω if f(x, y) > 0 in Ω, and the minimum value of u occurs on ∂Ω if f(x, y) < 0 in Ω.
Proof. Since u is continuous in a closed and bounded domain, it must assume its maximum in Ω or on ∂Ω. Suppose that the maximum is assumed at a point (x0, y0) in Ω,
i.e.,
u(x_0, y_0) = \max_{\bar\Omega} u(x, y).
Suppose that f(x, y) > 0 in Ω. Then at (x0, y0) ∈ Ω, we must have
uxx(x0, y0) ≤ 0, uyy(x0, y0) ≤ 0.
As f > 0, it follows from (5) that
uxx + uyy > 0,
which is a contradiction. Hence, the maximum of u(x, y) must occur on ∂Ω.
To show that the minimum of u(x, y) is attained on ∂Ω if f(x, y) < 0 in Ω, replace u by −u in the preceding argument. This is equivalent to replacing f by −f in (5). Since f < 0, we obtain −f > 0 and conclude that −u assumes its maximum on ∂Ω. Therefore, u assumes its minimum on ∂Ω, and this completes the proof.
The maximum/minimum principle can be used to prove uniqueness and continuous
dependence of the solution for the Dirichlet’s problems.
THEOREM 6. Let Ω be a bounded domain in ℝ² with boundary ∂Ω. The solution of the Dirichlet problem
∇²u(x, y) = −f(x, y) in Ω, u(x, y) = g(x, y) on ∂Ω, (6)
if it exists, is unique.
Proof. Let u1(x, y) and u2(x, y) be two solutions of (6). Set v(x, y) = u1(x, y) − u2(x, y). Then v satisfies
∇2v = 0 in Ω, v = 0 on ∂Ω.
The maximum/minimum principle yields (cf. Theorem 4)
v = 0 in Ω =⇒ u1 − u2 = 0 in Ω.
Thus, we have
u1 = u2,
which proves the uniqueness.
Next, we shall prove the continuous dependence of the solution on the boundary data.
THEOREM 7. The solution of the Dirichlet problem depends continuously on the boundary
data.
Proof. Let ui, i = 1, 2 be the solutions of
∇2ui = F in Ω ⊂ R2, ui = fi on ∂Ω.
Then the function v = u1 − u2 solves
∇2v = 0 in Ω with v = f1 − f2 on ∂Ω.
By the maximum/minimum principle v attains its maximum/minimum on ∂Ω. Thus, for
all (x, y) ∈ Ω, we have
−max_{∂Ω} |f1 − f2| ≤ min_{∂Ω} (f1 − f2) ≤ v(x, y) ≤ max_{∂Ω} (f1 − f2) ≤ max_{∂Ω} |f1 − f2|.
If |f1 − f2| < ϵ on ∂Ω, then

−ϵ < min_{Ω̄} v(x, y) ≤ v(x, y) ≤ max_{Ω̄} v(x, y) < ϵ.
Therefore,
|f1 − f2| < ϵ =⇒ |v(x, y)| < ϵ
for all (x, y) ∈ Ω. This completes the proof.
Practice Problems
1. Let u satisfy the Laplace equation in the disk Ω = {(x, y) | x^2 + y^2 < 1} and be
continuous on Ω̄. If u(cos θ, sin θ) ≤ sin θ + cos(2θ), then show that

u(x, y) ≤ y + x^2 − y^2, ∀ (x, y) ∈ Ω.
2. Consider the elliptic equation
∇ · (α∇u) = −F, α > 0,
in a bounded region Ω ⊂ R2 with the boundary ∂Ω. Show that if F < 0 in Ω, the
solution u assumes its maximum on ∂Ω and if F > 0 in Ω, the solution u assumes
its minimum on ∂Ω.
3. Let Ω be a bounded region in R2. Use the maximum principle to prove continuous
dependence on the data for the Dirichlet problem for the elliptic equation
∇ · (α∇u) = −F in Ω
with α > 0.
Lecture 2 Green’s Identity and Fundamental Solutions
In this lecture, we shall learn about some important identities known as Green's identities
and their special forms. As a consequence of these identities, we can prove the uniqueness of
the solution to the Dirichlet problem and the compatibility conditions for the Neumann
problem. The fundamental solutions for the Laplace equation will also be discussed.
Let Ω be a bounded domain in R2 with smooth boundary ∂Ω. Recall the following
consequence of the Gauss divergence theorem (integration by parts): for u, v ∈ C1(Ω̄),

∫_Ω v ∂u/∂x_k dx = ∫_{∂Ω} v u n_k ds − ∫_Ω u ∂v/∂x_k dx,    (1)

where n = (n1, n2) is the outward unit normal to the boundary ∂Ω and ds is the element of arc length.
As a consequence of the Gauss divergence theorem, the following identity, known as Green's
identity, holds true:

∫_Ω v ∇2u dx = ∫_{∂Ω} v ∂u/∂n ds − ∫_Ω ∇u · ∇v dx.    (2)
Integrating the second term on the right-hand side once more by parts, we obtain

∫_Ω v ∇2u dx = ∫_Ω u ∇2v dx + ∫_{∂Ω} (v ∂u/∂n − u ∂v/∂n) ds.    (3)
Here, ∂/∂n indicates differentiation in the direction of the exterior normal to ∂Ω.
From the identity (2), the special case v = 1 yields

∫_Ω ∇2u dx = ∫_{∂Ω} ∂u/∂n ds.    (4)
Another special case of interest is obtained by choosing v = u. In this case, the identity (2)
yields the energy identity

∫_Ω |∇u|^2 dx + ∫_Ω u ∇2u dx = ∫_{∂Ω} u ∂u/∂n ds.    (5)
If u ∈ C2(Ω̄) satisfies ∇2u = 0 in Ω and either u = 0 or ∂u/∂n = 0 on ∂Ω, then the
boundary term in (5) vanishes and it follows that

∫_Ω |∇u|^2 dx = 0 =⇒ ∇u = 0 =⇒ u = constant.
This observation leads to uniqueness theorems for the Dirichlet problem and the Neumann
problem.
REMARK 1. Using Green’s identity (2), one can easily prove that:
(i) A solution u ∈ C2(Ω) of the Dirichlet problem is determined uniquely.
(ii) A solution u ∈ C2(Ω) of the Neumann problem is determined uniquely within an
additive constant.
Observe that the solution of the Neumann problem can only exist if the data satisfy
a condition known as the compatibility condition. For example, the compatibility condition
for the Neumann problem

∇2u = 0 in Ω,    ∂u/∂n = g on ∂Ω

is

∫_{∂Ω} g ds = 0,

which immediately follows from the identity (4).
Fundamental Solutions: One of the principal features of the Laplace equation
∇2u = 0 (6)
is its spherical symmetry. The Laplace equation is preserved under rotations about a point
ξ. Therefore, it is reasonable to assume that there exist special solutions v(x) of (6) that
are invariant under rotations about ξ. Such solutions would be of the form
v = ψ(r), (7)
where
r = |x − ξ| = √( Σ_{i=1}^n (x_i − ξ_i)^2 )
represents the Euclidean distance between x and ξ. By the chain rule of differentiation
we find that

∂r/∂x_i = (1/2) ( Σ_{j=1}^n (x_j − ξ_j)^2 )^{−1/2} · 2(x_i − ξ_i) = (x_i − ξ_i)/r.
Further, we note that

v_{x_i} = ψ′(r) ∂r/∂x_i = ψ′(r) (x_i − ξ_i)/r,
v_{x_i x_i} = ψ′′(r) (x_i − ξ_i)^2/r^2 + ψ′(r) ( 1/r − (x_i − ξ_i)^2/r^3 ).
Hence,

∇2v = Σ_{i=1}^n v_{x_i x_i} = ψ′′(r) + ((n − 1)/r) ψ′(r) = 0.
If ψ′(r) ≠ 0, we have

ψ′′(r)/ψ′(r) = (1 − n)/r.
On solving, we arrive at ψ′(r) = C r^{1−n} and hence

ψ(r) = C log r + C1 for n = 2,    ψ(r) = C r^{2−n}/(2 − n) + C1 for n > 2,
where C and C1 are constants.
The function v(x) = ψ(r) satisfies (6) for r > 0, that is, for x ≠ ξ, but becomes infinite
as x → ξ. For a suitable choice of the constant C, the function v is a fundamental solution
for the operator ∇2, satisfying the equation,
∇2v = δ(x− ξ),
where δ is the Dirac delta function. The function

ψ(r) = (1/(2π)) log r,    r > 0,

is a fundamental solution of the two-dimensional Laplace equation (6). For a proof, see [5].
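As a quick sanity check (my own, not from the text), one can verify numerically that ψ(r) = (1/(2π)) log r is harmonic away from the singularity, by applying a centred finite-difference Laplacian at a point x ≠ ξ; the evaluation point and step size are arbitrary choices.

```python
# Away from the singularity, psi(r) = (1/2pi) log r satisfies the 2-D Laplace
# equation; test this with a centred finite-difference Laplacian at x != xi.
import math

def psi(x, y):
    return math.log(math.hypot(x, y)) / (2 * math.pi)   # here xi = (0, 0)

h = 1e-4
x0, y0 = 0.7, 0.4                                       # a point away from xi
lap = (psi(x0 + h, y0) + psi(x0 - h, y0) + psi(x0, y0 + h)
       + psi(x0, y0 - h) - 4 * psi(x0, y0)) / h**2
print(abs(lap) < 1e-5)   # True: approximately zero
```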
The Poisson Integral Formula. We know that a function u ∈ C2(Ω) satisfying the Laplace
equation ∇2u = 0 is harmonic. The following result expresses the solution of the Dirichlet
problem in a disk in terms of an integral known as the Poisson integral formula.
THEOREM 2. (The Poisson integral formula) Let f(θ) be a continuous function with
f(θ + 2π) = f(θ). Define

u(r, θ) = (1/(2π)) ∫_{−π}^{π} (r0^2 − r^2) f(s) / (r0^2 − 2 r r0 cos(θ − s) + r^2) ds,   r < r0,
u(r0, θ) = f(θ),   r = r0.
Then u(r, θ) solves the following Dirichlet problem:
∇2u(x, y) = 0, (x2 + y2)1/2 < r0,
u(r0, θ) = f(θ), f(θ + 2π) = f(θ),
where u(r, θ) = u(r cos θ, r sin θ). That is, u(r, θ) is harmonic on the open disk
D = {(x, y) | (x^2 + y^2)^{1/2} < r0}.
Some consequences of the Poisson integral formula are given below.
THEOREM 3. Let u be a harmonic function on some region Ω. The value of u at the
center of any disk D with D ⊂ Ω is the average (or mean) of the values of u on the
circular boundary ∂D of D.
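This mean value property is easy to test numerically. The following sketch is my own check (the harmonic function, centre and radius are arbitrary choices): average u = x^2 − y^2 over equally spaced points on a circle and compare with the value at the centre.

```python
# Mean value property: for a harmonic u, the average of u over a circle
# equals its value at the centre of the circle.
import math

def u(x, y):
    return x * x - y * y          # harmonic: u_xx + u_yy = 2 - 2 = 0

cx, cy, rho = 0.3, -0.2, 0.5      # centre and radius (arbitrary choices)
m = 10000
avg = sum(u(cx + rho * math.cos(2 * math.pi * k / m),
            cy + rho * math.sin(2 * math.pi * k / m)) for k in range(m)) / m

print(abs(avg - u(cx, cy)) < 1e-10)   # True
```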
Note: The mean value property can be used to prove the maximum and minimum prin-
ciples for solutions of Laplace's equation. It can also be used to show that whenever the
maximum or minimum is attained in the interior of the region, the solution u must be
identically constant. This is the strong maximum and minimum principle for Laplace's
equation.
THEOREM 4. (The strong maximum/minimum principle) Let u be a harmonic
function on an open connected set Ω. Suppose that the maximum or minimum of u is
attained at some point in Ω. Then u must be constant throughout Ω.
By definition, a harmonic function u on an open region Ω is only required to
be in C2(Ω). But u is actually in C∞(Ω) (infinitely differentiable). Thus, we have the
following result.
THEOREM 5. (Regularity result) If u is harmonic on an open region Ω, then u ∈ C∞(Ω).
Practice Problems
1. Prove that a solution of the Neumann problem

∇2u = f in Ω,    ∂u/∂n = g on ∂Ω

differs from any other solution by a constant.
2. Prove that u1(x, y) = 1 + log(x^2 + y^2) and u2(x, y) = 1 − log(x^2 + y^2) are harmonic
where defined. Note that u1 = u2 on the circle x^2 + y^2 = 1, but they are unequal inside
the circle. Why does this not contradict the uniqueness theorem for the Dirichlet
problem?
3. Let u be harmonic in the disk x2 + y2 < r20. If u achieves its maximum at the point
(0, 0), then show that u must be constant throughout this disk.
Lecture 3 The Dirichlet BVP for a Rectangle
In this lecture we shall discuss the solution of the Laplace equation with Dirichlet type
BC in cartesian coordinates.
Consider the following Dirichlet problem in a rectangle:
PDE: uxx + uyy = 0, 0 < x < a, 0 < y < b, (1)
BC: u(x, 0) = f1(x), u(x, b) = f2(x), 0 ≤ x ≤ a, (2)
u(0, y) = g1(y), u(a, y) = g2(y) 0 ≤ y ≤ b.
We shall study how the method of separation of variables is still applicable for the BVP.
Since the BC are nonhomogeneous, we are required to do some preliminary work.
By the principle of superposition, we seek the solution of the above BVP (1)-(2) as
u(x, y) = u1(x, y) + u2(x, y) + u3(x, y) + u4(x, y),
where each of u1, u2, u3 and u4 satisfies the PDE with one of the original nonhomogeneous
BC, and the homogeneous versions of the remaining three BC. These problems are then
solved by the method of separation of variables.
Let us consider solving the following example problem:
EXAMPLE 1. Solve the Dirichlet BVP:
PDE: uxx + uyy = 0, 0 < x < a, 0 < y < b, (3)
BC: u(x, 0) = f(x), u(x, b) = 0, 0 ≤ x ≤ a, (4)
u(0, y) = 0, u(a, y) = 0, 0 ≤ y ≤ b.
Apply the method of separation of variables to solve this problem. The step-wise
solution procedure is given below.
Step 1: (Reducing to ODEs)
Separating variables, we seek for a solution of the form
u(x, y) = X(x)Y(y).
Substituting this into (3), we obtain
X′′(x)Y(y) + X(x)Y′′(y) = 0

and hence

X′′(x)/X(x) = −Y′′(y)/Y(y) = k,
for some constant k, which is called the separation constant. This leads to the two ODEs
X ′′(x)− kX(x) = 0, (5)
Y ′′(y) + kY (y) = 0. (6)
Step 2: (Solving the resulting ODEs)
Case 1: When k > 0, set k = λ^2, where λ ≠ 0. In this case, the solutions of the ODEs are
X(x) = A e^{λx} + B e^{−λx},
Y(y) = C cos(λy) + D sin(λy).
Therefore, the product solutions of the PDE are given by

u(x, y) = (A e^{λx} + B e^{−λx})(C cos(λy) + D sin(λy)).
Case 2: When k = 0, the solutions of the ODEs are linear and are given by
X(x) = (A+Bx), Y (y) = (C +Dy).
Therefore,
u(x, y) = (A+Bx)(C +Dy).
Case 3: When k < 0, set k = −λ^2, where λ > 0. The solutions of the ODEs are given by

X(x) = A cos(λx) + B sin(λx),
Y(y) = C e^{λy} + D e^{−λy}.

Thus, the solution of the PDE is

u(x, y) = (A cos(λx) + B sin(λx))(C e^{λy} + D e^{−λy}).
Step 3: (Applying the BC)
Using the boundary conditions u(0, y) = 0 and u(a, y) = 0 for the product solution
obtained for the case k > 0 leads to the equations
A + B = 0,    A e^{λa} + B e^{−λa} = 0,
which has only the trivial solution A = 0 and B = 0. Thus, only the trivial solution u(x, y) = 0
is possible. Similarly, applying the boundary conditions u(0, y) = 0 and u(a, y) = 0 in
Case 2 (k = 0) also leads to the trivial solution u(x, y) = 0. Let us examine the product
solution obtained in Case 3 (for k < 0), i.e.,
u(x, y) = (A cos(λx) + B sin(λx))(C e^{λy} + D e^{−λy}).
Using the boundary condition u(0, y) = 0 yields A = 0. The condition u(a, y) = 0 gives
B sin(λa)(C e^{λy} + D e^{−λy}) = 0.
For a non-trivial solution, B ≠ 0, and hence

sin(λa) = 0 =⇒ λa = nπ, i.e., λ = nπ/a,   n = 1, 2, 3, . . . .
Therefore, the sequence of non-trivial solutions is given by

un(x, y) = sin(nπx/a) (Cn e^{nπy/a} + Dn e^{−nπy/a}).
Applying the BC u(x, b) = 0, we obtain

sin(nπx/a) (Cn e^{nπb/a} + Dn e^{−nπb/a}) = 0
=⇒ Cn e^{nπb/a} + Dn e^{−nπb/a} = 0
=⇒ Dn = −Cn e^{nπb/a} / e^{−nπb/a},   n = 1, 2, . . . .
Therefore, the solution now takes the form

un(x, y) = sin(nπx/a) · (2Cn / e^{−nπb/a}) · [ e^{nπ(y−b)/a} − e^{−nπ(y−b)/a} ] / 2
         = (2Cn / e^{−nπb/a}) sin(nπx/a) sinh(nπ(y − b)/a).
Setting cn = 2Cn / e^{−nπb/a} and using the superposition principle, we obtain

u(x, y) = Σ_{n=1}^∞ cn sin(nπx/a) sinh(nπ(y − b)/a).
To satisfy the remaining nonhomogeneous BC, we must have

u(x, 0) = f(x) = Σ_{n=1}^∞ cn sinh(−nπb/a) sin(nπx/a),
which is a half-range Fourier sine series. Therefore,

cn sinh(−nπb/a) = (2/a) ∫_0^a f(x) sin(nπx/a) dx,

and this implies

cn = ( 2 / (a sinh(−nπb/a)) ) ∫_0^a f(x) sin(nπx/a) dx.    (7)
Therefore, the required solution to the problem (3)-(4) is

u(x, y) = Σ_{n=1}^∞ cn sin(nπx/a) sinh(nπ(y − b)/a)
with the coefficients cn computed from (7).
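The solution procedure above can be sketched in code. The data f(x) = x(a − x) and the truncation at 50 terms are my own choices for illustration; cn is obtained from (7) by numerical integration, and the partial sum is checked against the boundary data at y = 0.

```python
# Series solution of u_xx + u_yy = 0 with u(x,0) = f(x) and zero on the
# other three sides; c_n computed from (7) by the trapezoidal rule.
import math

a, b = 1.0, 1.0
f = lambda x: x * (a - x)               # sample boundary data (my choice)

def cn(n, m=2000):
    # trapezoidal rule for the integral in (7); endpoint terms vanish here
    s = sum(f(k * a / m) * math.sin(n * math.pi * k / m) for k in range(1, m))
    integral = (a / m) * s
    return 2.0 * integral / (a * math.sinh(-n * math.pi * b / a))

def u(x, y, terms=50):
    return sum(cn(n) * math.sin(n * math.pi * x / a)
               * math.sinh(n * math.pi * (y - b) / a) for n in range(1, terms + 1))

x = 0.3
print(abs(u(x, 0.0) - f(x)) < 1e-3)     # True: series reproduces f at y = 0
print(abs(u(x, b)) < 1e-9)              # True: u vanishes at y = b
```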
As a consequence of the superposition principle we obtain the following result.
THEOREM 2. Let an, bn, cn and dn be the Fourier sine coefficients of f(x), g(x), h(y) and
k(y), respectively. Then the solution of the Dirichlet problem
PDE: uxx + uyy = 0, 0 < x < a, 0 < y < b,
BC: u(x, 0) = f(x), u(x, b) = g(x) 0 ≤ x ≤ a,
u(0, y) = h(y), u(a, y) = k(y), 0 ≤ y ≤ b,
is

u(x, y) = Σ_{n=1}^∞ [ An sin(nπx/a) sinh(nπ(b − y)/a)
                    + Bn sin(nπx/a) sinh(nπy/a)
                    + Cn sin(nπy/b) sinh(nπ(a − x)/b)
                    + Dn sin(nπy/b) sinh(nπx/b) ],
where

An = an / sinh(nπb/a),   Bn = bn / sinh(nπb/a),
Cn = cn / sinh(nπa/b),   Dn = dn / sinh(nπa/b).
Practice Problems
1. Solve the following BVP:
uxx + uyy = 0, 0 < x < 1, 0 < y < 1,
u(x, 0) = x(x− 1), u(x, 1) = 0, 0 ≤ x ≤ 1,
u(0, y) = 0, u(1, y) = 0, 0 ≤ y ≤ 1,
2. Solve the following BVP:

uxx + uyy = 0, 0 < x < π, 0 < y < π,
u(x, 0) = sin x, u(x, π) = sin x, 0 ≤ x ≤ π,
u(0, y) = sin y, u(π, y) = sin y, 0 ≤ y ≤ π.
Lecture 4 The Mixed BVP for a Rectangle
In this lecture we shall consider solving mixed BVPs for the Laplace equation. To
begin with, let us consider the following Neumann problem for a rectangle:
PDE: uxx + uyy = 0, 0 < x < a, 0 < y < b (1)
BC: uy(x, 0) = f(x), uy(x, b) = g(x), 0 ≤ x ≤ a (2)
ux(0, y) = h(y), ux(a, y) = k(y), 0 ≤ y ≤ b.
This problem has no solution unless the following compatibility condition holds:

∫_0^a g(x) dx − ∫_0^a f(x) dx + ∫_0^b k(y) dy − ∫_0^b h(y) dy = 0.
Solution. If u(x, y) is a solution of (1), then

0 = ∫_0^b ∫_0^a (uxx + uyy) dx dy = ∫_0^b ∫_0^a uxx dx dy + ∫_0^a ∫_0^b uyy dy dx
  = ∫_0^b [ux(a, y) − ux(0, y)] dy + ∫_0^a [uy(x, b) − uy(x, 0)] dx
  = ∫_0^b k(y) dy − ∫_0^b h(y) dy + ∫_0^a g(x) dx − ∫_0^a f(x) dx,
where we have used the fundamental theorem of calculus and Fubini's theorem.
REMARK 1. • The compatibility condition is an immediate consequence of the following
special case of Green's theorem:

∫_C ∇u · n ds = ∫_C ux dy − uy dx = ∬_R (uxx + uyy) dx dy,

i.e., the flux of the gradient of u through the boundary is the integral of ∇2u over the
interior.
• Note that we only require that ux and uy be continuous on the closed rectangle.
We do not demand that the second partials of u extend continuously to the
closed rectangle.
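The compatibility condition is easy to confirm numerically for a concrete harmonic function. In the sketch below (my own example), u = x^2 − y^2 on [0, a] × [0, b] produces the Neumann data f, g, h, k, and the four boundary integrals cancel.

```python
# For the harmonic u = x^2 - y^2, the Neumann data on the rectangle
# [0, a] x [0, b] satisfy the compatibility condition.
a, b = 2.0, 1.0

f = lambda x: 0.0        # u_y(x, 0) = -2*0
g = lambda x: -2.0 * b   # u_y(x, b)
h = lambda y: 0.0        # u_x(0, y) = 2*0
k = lambda y: 2.0 * a    # u_x(a, y)

def integrate(fun, lo, hi, m=10000):          # midpoint rule
    w = (hi - lo) / m
    return w * sum(fun(lo + (i + 0.5) * w) for i in range(m))

total = (integrate(g, 0, a) - integrate(f, 0, a)
         + integrate(k, 0, b) - integrate(h, 0, b))
print(abs(total) < 1e-9)   # True: the condition holds
```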
We now consider solving the Laplace equation with mixed boundary conditions.
EXAMPLE 2. Solve the following BVP:
PDE: uxx + uyy = 0, 0 < x < a, 0 < y < b, (3)
BC: u(x, 0) = 0, u(x, b) = 0, 0 ≤ x ≤ a, (4)
u(0, y) = g(y), ux(a, y) = h(y), 0 ≤ y ≤ b.
Solution. The solution of this problem has the form
u(x, y) = u1(x, y) + u2(x, y),
where u1 and u2 satisfy (3) with the BC
(BC)1 :
u1(x, 0) = u1(x, b) = 0, 0 ≤ x ≤ a,
u1(0, y) = g(y), u1x(a, y) = 0, 0 ≤ y ≤ b,
and
(BC)2 :
u2(x, 0) = u2(x, b) = 0, 0 ≤ x ≤ a,
u2(0, y) = 0, u2x(a, y) = h(y), 0 ≤ y ≤ b.
We shall determine each one of u1 and u2 by the method of separation of variables.
Step 1 (Solving for u1): Separating variables for u1(x, y) = X(x)Y(y) and substituting
in (3), we obtain

X′′(x)/X(x) + Y′′(y)/Y(y) = 0.
This leads to the following ODEs:
X ′′(x) + λX(x) = 0, 0 < x < a, (5)
Y ′′(y)− λY (y) = 0, 0 < y < b, (6)
for a constant λ. Since u1 satisfies (BC)1, we must have
Y (0) = Y (b) = 0, (7)
X ′(a) = 0. (8)
Nontrivial solutions of (6) with BC (7) are

Yn(y) = sin(nπy/b)

corresponding to

λ = λn = −(nπ/b)^2,   n ∈ N.
The differential equation (5) for X(x),

X′′(x) − (nπ/b)^2 X(x) = 0,

has solutions of the form

X(x) = C1 cosh(nπx/b) + C2 sinh(nπx/b).
The condition (8) yields C2 = −C1 tanh(nπa/b). Thus, a sequence of solutions X(x) is given
by

Xn(x) = an ( cosh(nπx/b) − tanh(nπa/b) sinh(nπx/b) ).
By the superposition principle, the solution u1 is expressed as

u1(x, y) = Σ_{n=1}^∞ an ( cosh(nπx/b) − tanh(nπa/b) sinh(nπx/b) ) sin(nπy/b).    (9)
The boundary condition u1(0, y) = g(y), 0 ≤ y ≤ b, yields

u1(0, y) = Σ_{n=1}^∞ an sin(nπy/b) = g(y),   0 ≤ y ≤ b,

with the an's given by

an = (2/b) ∫_0^b g(y) sin(nπy/b) dy.    (10)
Step 2.(Solving for u2): Suppose u2(x, y) = X(x)Y (y) satisfies (3) and (BC)2. Arguing
as before, we have the ODEs (5) and (6) for X(x) and Y (y) with the boundary conditions
Y (0) = Y (b) = 0; X(0) = 0.
The non-trivial solutions, corresponding to

λ = λn = −(nπ/b)^2,   n ∈ N,

are

Yn(y) = sin(nπy/b).
For X(x), we have the ODE

X′′(x) − (nπ/b)^2 X(x) = 0,   X(0) = 0,

which has solutions of the form

Xn(x) = bn sinh(nπx/b),   n ∈ N.
Thus, u2(x, y) is given by

u2(x, y) = Σ_{n=1}^∞ bn sinh(nπx/b) sin(nπy/b),    (11)
which must satisfy the boundary condition u2x(a, y) = h(y). This leads to

bn = ( 2/(nπ) ) · ( 1/cosh(nπa/b) ) ∫_0^b h(y) sin(nπy/b) dy.    (12)
Step 3.(Writing the solution): The solution of (3)-(4) is obtained as
u(x, y) = u1(x, y) + u2(x, y),
where an and bn are determined by (10) and (12), respectively.
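As a check of Step 1 (my own example, not from the text), take the single-mode data g(y) = sin(πy/b), for which (10) gives a1 = 1 and all other an = 0, so the series (9) collapses to one term. The sketch verifies the boundary conditions u1(0, y) = g(y) and u1x(a, y) = 0.

```python
# u1 for the single-mode data g(y) = sin(pi*y/b): one term of the series (9).
import math

aa, bb = 1.5, 1.0        # rectangle dimensions a, b (arbitrary choices)

def u1(x, y):
    t = math.pi / bb
    return (math.cosh(t * x) - math.tanh(t * aa) * math.sinh(t * x)) * math.sin(t * y)

def u1x(x, y, h=1e-6):   # numerical x-derivative
    return (u1(x + h, y) - u1(x - h, y)) / (2 * h)

y = 0.37
print(abs(u1(0.0, y) - math.sin(math.pi * y / bb)) < 1e-12)  # True: Dirichlet BC at x = 0
print(abs(u1x(aa, y)) < 1e-6)                                # True: Neumann BC at x = a
```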
Practice Problems
1. Solve the following Neumann BVP:
uxx + uyy = 0, 0 < x < a, 0 < y < b,
uy(x, 0) = 0, uy(x, b) = h(x), 0 ≤ x ≤ a,
ux(0, y) = 0, ux(a, y) = 0, 0 ≤ y ≤ b.
given that h(x) is continuous and ∫_0^a h(x) dx = 0. Why is the assumption
∫_0^a h(x) dx = 0 needed?
2. Find a solution of the Neumann BVP:
uxx + uyy = 0, 0 < x < π, 0 < y < π,
uy(x, 0) = cos x, uy(x, π) = 0, 0 ≤ x ≤ π,
ux(0, y) = 0, ux(π, y) = 0, 0 ≤ y ≤ π.
By adding a constant, find a solution such that u(0, 0) = 0.
3. Solve the following mixed BVP:
uxx + uyy = 0, 0 < x < a, 0 < y < b,
u(x, 0) = 2x, u(x, b) = x2, 0 ≤ x ≤ a,
ux(0, y) = 0, ux(a, y) = 0, 0 ≤ y ≤ b.
Lecture 5 The Dirichlet Problems for Annuli
Let us consider an annular region or a disk D in R2. To solve the Dirichlet problem in D
it is most natural to use polar coordinates. Polar coordinates (r, θ) of a point in the plane
are related to its Cartesian coordinates (x, y) by
x = r cos θ and y = r sin θ, (1)
where r = (x2 + y2)1/2. Set
U(r, θ) ≡ u(x, y) = u(r cos θ, r sin θ).
Observe that

Ur = ux xr + uy yr = ux cos θ + uy sin θ

and

Uθ = ux xθ + uy yθ = −ux r sin θ + uy r cos θ.
Hence,

Urr = uxx cos^2 θ + 2 uxy sin θ cos θ + uyy sin^2 θ

and

Uθθ = −(uxx xθ + uxy yθ) r sin θ − ux r cos θ + (uyx xθ + uyy yθ) r cos θ − uy r sin θ
    = r^2 (uxx sin^2 θ − 2 uxy cos θ sin θ + uyy cos^2 θ) − r (ux cos θ + uy sin θ).
Thus,

Urr + (1/r) Ur + (1/r^2) Uθθ = uxx + uyy.
The Dirichlet problem for the annulus: We formulate the Dirichlet problem for the
annulus in polar coordinates as follows:
PDE: Urr + Ur/r + Uθθ/r^2 = 0,   r1 < r < r2,    (2)
BC: U(r2, θ) = f(θ), f(θ + 2π) = f(θ),
U(r1, θ) = g(θ), g(θ + 2π) = g(θ),
PC: U(r, θ + 2π) = U(r, θ), r1 < r < r2,
where −∞ < θ < ∞ and PC stands for "periodicity condition". Here, f and g are
continuous periodic functions with period 2π.
Using separation of variables, we seek solutions of the form
U(r, θ) = R(r)T (θ).
Substituting this into the PDE in (2) and separating variables, we obtain

R′′(r)T(θ) + r^{−1} R′(r)T(θ) + r^{−2} R(r)T′′(θ) = 0
=⇒ ( r^2 R′′(r) + r R′(r) ) / R(r) = −T′′(θ)/T(θ) = c = ±b^2   (b > 0).
This leads to the two ODEs
T ′′(θ) + cT (θ) = 0, (3)
r2R′′(r) + rR′(r)− cR(r) = 0. (4)
Note that we get periodic solutions of period 2π, when b = n and c = +b2 = n2, for
n = 0, 1, 2, . . .. In this case, solving (3) we obtain
Tn(θ) = an cos(nθ) + bn sin(nθ), n = 0, 1, 2, . . . , (5)
where an and bn are arbitrary constants. With c = n2, equation (4) for R(r) is the
Cauchy-Euler equation
r2R′′(r) + rR′(r)− n2R(r) = 0. (6)
This equation can be solved by taking R(r) = r^m. Substituting this into (6), we get

r^2 m(m − 1) r^{m−2} + r m r^{m−1} − n^2 r^m = 0,   or   (m^2 − n^2) r^m = 0,

so r^m is a solution precisely when m = ±n.
The general solution is

Rn(r) = cn r^n + dn r^{−n}  for n = 1, 2, 3, . . . ,    R0(r) = c0 + d0 log(r)  for n = 0.
Putting together the expressions for Tn(θ) and Rn(r) in U(r, θ), we obtain

U0(r, θ) = a0 + α0 log(r),    (7)
Un(r, θ) = (an r^n + αn r^{−n}) cos(nθ) + (bn r^n + βn r^{−n}) sin(nθ),   n ≥ 1.    (8)

By the superposition principle, we obtain a more general solution of (2) as

U(r, θ) = U0(r, θ) + Σ_{n=1}^∞ Un(r, θ).    (9)
Suppose that f(θ) and g(θ) have Fourier series of the form

f(θ) = A0/2 + Σ_{n=1}^∞ [An cos(nθ) + Bn sin(nθ)],    (10)
g(θ) = C0/2 + Σ_{n=1}^∞ [Cn cos(nθ) + Dn sin(nθ)].    (11)
Comparing Fourier coefficients in the equations U(r2, θ) = f(θ) and U(r1, θ) = g(θ), we
obtain

a0 + α0 log(r2) = A0/2,   a0 + α0 log(r1) = C0/2,    (12)
an r2^n + αn r2^{−n} = An,   an r1^n + αn r1^{−n} = Cn,    (13)
bn r2^n + βn r2^{−n} = Bn,   bn r1^n + βn r1^{−n} = Dn,   n = 1, 2, . . . .    (14)
Solving for a0, α0 from (12), an, αn from (13) and bn, βn from (14), we obtain

a0 = ( (C0/2) log r2 − (A0/2) log r1 ) / log Q,   α0 = ( A0/2 − C0/2 ) / log Q,    (15)
an = ( An r1^{−n} − Cn r2^{−n} ) / ( Q^n − Q^{−n} ),   αn = ( Cn r2^n − An r1^n ) / ( Q^n − Q^{−n} ),    (16)
bn = ( Bn r1^{−n} − Dn r2^{−n} ) / ( Q^n − Q^{−n} ),   βn = ( Dn r2^n − Bn r1^n ) / ( Q^n − Q^{−n} ),    (17)

where Q = r2/r1. This determines the constants a0, α0, an, αn, bn, βn in terms of the given
Fourier coefficients An, Bn, Cn, Dn of f(θ) and g(θ).
Thus, the solution of (2), where f(θ) and g(θ) are given by (10)-(11), is

U(r, θ) = a0 + α0 log r + Σ_{n=1}^∞ [ (an r^n + αn r^{−n}) cos(nθ) + (bn r^n + βn r^{−n}) sin(nθ) ],    (18)

where a0, α0, an, αn, bn, βn are defined by (15)-(17).
EXAMPLE 1. Solve the following Dirichlet problem:

Urr + Ur/r + Uθθ/r^2 = 0,   1 < r < 2,
U(1, θ) = 1 + 4 cos(2θ),
U(2, θ) = 2 + 5 sin(θ),
U(r, θ + 2π) = U(r, θ),   1 < r < 2.
Solution. Here r1 = 1 and r2 = 2, so Q = r2/r1 = 2. Comparing the boundary data with
(10)-(11) gives A0 = 4, B1 = 5, C0 = 2, C2 = 4, and all other An, Bn, Cn and Dn equal to 0.

Equating the Fourier coefficients in the BC with those of U(r, θ) in (18), using r1 = 1
and r2 = 2, we obtain

a0 + α0 log(1) = 1,   a0 + α0 log(2) = 2,
b1 + β1 = 0,   2b1 + (1/2)β1 = 5,
a2 + α2 = 4,   2^2 a2 + 2^{−2} α2 = 0.
Solving these systems (equivalently, using (15)-(17)), we obtain

a0 = 1,   α0 = 1/log(2),   b1 = 10/3,   β1 = −10/3,   a2 = −4/15,   α2 = 64/15,

and all other coefficients are zero. The solution (18) is then

U(r, θ) = 1 + log(r)/log(2) + ( (10/3)r − (10/3)r^{−1} ) sin(θ) + ( −(4/15)r^2 + (64/15)r^{−2} ) cos(2θ).
EXAMPLE 2. Solve the following problem:

Urr + Ur/r + Uθθ/r^2 = 0,   1 < r < 2,    (19)
U(1, θ) = 0,   U(2, θ) = sin(θ),   0 ≤ θ ≤ 2π.
Solution. Here the only nonzero Fourier coefficient of the boundary data is B1 = 1.
Solving (12)-(14) with r1 = 1 and r2 = 2, we find

a0 = α0 = 0,   an = αn = 0 for all n,   b1 = 2/3,   β1 = −2/3,   bn = βn = 0 for n ≥ 2.

Substituting these values in (18) yields the solution

U(r, θ) = (2/3) ( r − 1/r ) sin θ.

It is easy to verify that U(r, θ) satisfies Laplace's equation and the given BC.
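That closing verification can be carried out numerically. The sketch below (my own check) confirms that U = (2/3)(r − 1/r) sin θ matches the boundary data exactly and annihilates the polar Laplacian up to finite-difference error.

```python
# Verify Example 2: U = (2/3)(r - 1/r) sin(theta).
import math

U = lambda r, th: (2.0 / 3.0) * (r - 1.0 / r) * math.sin(th)

th = 1.1                                             # arbitrary test angle
print(abs(U(1.0, th)) < 1e-12)                       # True: U(1, theta) = 0
print(abs(U(2.0, th) - math.sin(th)) < 1e-12)        # True: U(2, theta) = sin(theta)

# Polar Laplacian U_rr + U_r/r + U_tt/r^2 by central differences:
h, r0 = 1e-4, 1.5
Urr = (U(r0 + h, th) - 2 * U(r0, th) + U(r0 - h, th)) / h**2
Ur = (U(r0 + h, th) - U(r0 - h, th)) / (2 * h)
Utt = (U(r0, th + h) - 2 * U(r0, th) + U(r0, th - h)) / h**2
print(abs(Urr + Ur / r0 + Utt / r0**2) < 1e-5)       # True: approximately zero
```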
Practice Problems
1. Solve the BVP

Urr + (1/r)Ur + (1/r^2)Uθθ = 0,   1 < r < 2,
U(2, θ) = 1 + 4 cos θ + cos(2θ),   U(1, θ) = sin(2θ),
U(r, θ + 2π) = U(r, θ).
2. Solve the BVP

Urr + (1/r)Ur + (1/r^2)Uθθ = 0,   1 < r < 2,   0 ≤ θ < 2π,
Ur(1, θ) = sin θ,   Ur(2, θ) = cos θ,   0 ≤ θ ≤ 2π.
3. Solve the BVP

Urr + (1/r)Ur + (1/r^2)Uθθ = 0,   1 < r < 2,   −π < θ < π,
U(1, θ) = cos^2 θ,   Ur(2, θ) = 0,   −π < θ < π.
Lecture 6 The Dirichlet Problem for the Disk
The Dirichlet problem in a disk of radius r0 centered at (0, 0) can be expressed as

PDE: Urr + Ur/r + Uθθ/r^2 = 0,   0 < r < r0,   −π ≤ θ ≤ π,    (1)
BC: U(r0, θ) = f(θ),   −π ≤ θ ≤ π,
where f(θ) is a given periodic, continuous function of period 2π (f(θ + 2π) = f(θ)). To
solve the above problem, we use the method of separation of variables.
Step 1.(Writing the ODEs): Seek solutions of the form
U(r, θ) = R(r)T (θ),
where 0 ≤ r ≤ r0 and −π ≤ θ ≤ π. Substituting into (1) and separating variables yield

R′′(r)T(θ) + r^{−1} R′(r)T(θ) + r^{−2} R(r)T′′(θ) = 0
=⇒ ( r^2 R′′(r) + r R′(r) ) / R(r) = −T′′(θ)/T(θ) = k,
which leads to the following two ODEs:
T ′′(θ) + kT (θ) = 0, (2)
r2R′′(r) + rR′(r)− kR(r) = 0. (3)
Step 2.(Solving the ODEs):
Case (a): When k < 0, the general solution to (2) is the sum of two exponentials.
Hence we have only trivial 2π-periodic solutions (see, Lecture 5).
Case (b): When k = 0, we find that T (θ) = Aθ+B is the solution to (2). This linear
function is periodic only when A = 0, that is, T0(θ) = B is the only 2π-periodic solution
corresponding to k = 0.
Case (c): When k > 0, the general solution to (2) is

T(θ) = A cos(√k θ) + B sin(√k θ).

In this case we get a nontrivial 2π-periodic solution only when √k = n, n = 1, 2, . . . .
Hence, we obtain the nontrivial 2π-periodic solutions

Tn(θ) = An cos(nθ) + Bn sin(nθ)    (4)

corresponding to √k = n, n = 1, 2, . . . .
Now for k = n2, n = 0, 1, 2, . . ., equation (3) is the Cauchy-Euler equation
r2R′′(r) + rR′(r)− n2R(r) = 0. (5)
When n = 0, the general solution is

R0(r) = C + D ln r.

Since |ln r| → ∞ as r → 0+, this solution is unbounded near r = 0 when D ≠ 0. Therefore,
we must choose D = 0 if U(r, θ) is to be continuous at r = 0. We now have R0(r) = C
and so U0(r, θ) = R0(r)T0(θ) = CB. For convenience, we write U0(r, θ) in the form

U0(r, θ) = A0/2,    (6)
where A0 is an arbitrary constant.
When k = n^2, n = 1, 2, . . . , the general solution of (3) is given by

Rn(r) = Cn r^n + Dn r^{−n}.

Since r^{−n} → ∞ as r → 0+, we must set Dn = 0 in order for U(r, θ) to be bounded at
r = 0. Thus

Rn(r) = Cn r^n.

Now for each n = 1, 2, . . . , we have the solutions

Un(r, θ) = Rn(r)Tn(θ) = Cn r^n [An cos(nθ) + Bn sin(nθ)].
By the superposition principle, we write

U(r, θ) = A0/2 + Σ_{n=1}^∞ Cn r^n [An cos(nθ) + Bn sin(nθ)].

This series may be written in the equivalent form

U(r, θ) = A0/2 + Σ_{n=1}^∞ (r/r0)^n [An cos(nθ) + Bn sin(nθ)],    (7)
where the An's and Bn's are constants. These constants can be determined from the
boundary condition. With r = r0 in (7), we have

f(θ) = A0/2 + Σ_{n=1}^∞ [An cos(nθ) + Bn sin(nθ)].
Since f(θ) is 2π-periodic, we recognize that An, Bn are Fourier coefficients. Thus

An = (1/π) ∫_{−π}^{π} f(θ) cos(nθ) dθ,   n = 0, 1, . . . ,    (8)
Bn = (1/π) ∫_{−π}^{π} f(θ) sin(nθ) dθ,   n = 1, 2, . . . .    (9)
We now summarize the solution of the Dirichlet problem for a disk as follows. In the
Dirichlet problem (1), if

f(θ) = A0/2 + Σ_{n=1}^∞ [An cos(nθ) + Bn sin(nθ)],

then the solution is given by

U(r, θ) = A0/2 + Σ_{n=1}^∞ (r/r0)^n [An cos(nθ) + Bn sin(nθ)],

where An and Bn are given by (8) and (9), respectively.
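The recipe just summarized is easy to run numerically. In the sketch below (my own example), the data f(θ) = sin^2 θ is chosen because sin^2 θ = 1/2 − (1/2) cos(2θ), so on the unit disk the exact solution is U = 1/2 − (r^2/2) cos(2θ); the coefficients (8)-(9) are computed by quadrature and the series (7) is compared against this.

```python
# Dirichlet problem on the unit disk with f(theta) = sin^2(theta):
# compute A_n, B_n from (8)-(9) by quadrature, then evaluate the series (7).
import math

r0, m = 1.0, 4000
f = lambda th: math.sin(th) ** 2
thetas = [-math.pi + 2 * math.pi * (k + 0.5) / m for k in range(m)]
w = 2 * math.pi / m

A = [w / math.pi * sum(f(t) * math.cos(n * t) for t in thetas) for n in range(6)]
B = [w / math.pi * sum(f(t) * math.sin(n * t) for t in thetas) for n in range(6)]

def U(r, th):
    return A[0] / 2 + sum((r / r0) ** n * (A[n] * math.cos(n * th)
                          + B[n] * math.sin(n * th)) for n in range(1, 6))

r, th = 0.5, 0.8
exact = 0.5 - 0.5 * r ** 2 * math.cos(2 * th)
print(abs(U(r, th) - exact) < 1e-9)   # True
```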
EXAMPLE 1. Solve the following BVP:

PDE: Urr + Ur/r + Uθθ/r^2 = 0,   0 ≤ r < 1,
BC: U(1, θ) = f(θ),

where f(θ) = 1 + sin θ + (1/2) sin(3θ) + cos(4θ).
Solution. Here r0 = 1. Note that f(θ) is already in the form of a Fourier series, with

An = 2 for n = 0, 1 for n = 4, 0 for other n;   Bn = 1 for n = 1, 1/2 for n = 3, 0 for other n.
The solution of the BVP is

U(r, θ) = A0/2 + Σ_{n=1}^∞ (r/r0)^n [An cos(nθ) + Bn sin(nθ)]
        = 1 + r sin θ + (r^3/2) sin(3θ) + r^4 cos(4θ).
Exterior Dirichlet Problem: We shall now discuss the exterior Dirichlet problem, i.e., the
Dirichlet problem outside the circle:

PDE: Urr + Ur/r + Uθθ/r^2 = 0,   1 ≤ r < ∞,
BC: U(1, θ) = f(θ),   0 ≤ θ ≤ 2π.
This problem is solved exactly in a manner similar to the interior Dirichlet problem. We
assume that the solutions are bounded as r → ∞. Basically, we throw out the solutions
r^n cos(nθ),   r^n sin(nθ),   ln r
that are unbounded as r → ∞.
The solution is given by

U(r, θ) = Σ_{n=0}^∞ r^{−n} [An cos(nθ) + Bn sin(nθ)],    (10)
where An and Bn are given by

A0 = (1/(2π)) ∫_0^{2π} f(θ) dθ,
An = (1/π) ∫_0^{2π} f(θ) cos(nθ) dθ,   n = 1, 2, . . . ,
Bn = (1/π) ∫_0^{2π} f(θ) sin(nθ) dθ,   n = 1, 2, . . . .
The detailed procedure is left as an exercise.
Practice Problems
1. Solve the Dirichlet problem

Uxx + Uyy = 0,   x^2 + y^2 < 1,    (11)
U(1, θ) = sin^2 θ,   −π ≤ θ ≤ π,

for the disk r ≤ 1.
2. Solve the BVP

Urr + Ur/r + Uθθ/r^2 = 0,   0 ≤ r < 2,   −π < θ < π,
U(2, θ) = 1 + 8 sin θ − 32 cos(4θ),   −π < θ < π.
3. Show that the exterior Dirichlet problem

Urr + Ur/r + Uθθ/r^2 = 0,   1 ≤ r < ∞,
U(1, θ) = 1 + sin θ + cos(3θ),   0 < θ < 2π,

has the solution

U(r, θ) = 1 + (1/r) sin θ + (1/r^3) cos(3θ).
Module 8: The Fourier Transform Methods for PDEs
In the previous modules (Modules 5-7), the method of separation of variables was used
to obtain solutions of initial and boundary value problems for partial differential equations
given over bounded spatial regions. The present module deals with partial differential
equations defined over unbounded spatial regions. The mathematical tools used for
solving initial and boundary value problems over unbounded spatial regions are integral
transforms: The Fourier transform (FT), the Fourier sine transform (FST) and the Fourier
cosine transform (FCT).
The outline of this module is as follows. The first lecture introduces the necessary
background for the definition of the FT. The FST and FCT are introduced in the second
lecture. Applications of the FT, FST and FCT to the three types of PDEs, viz., the heat
equation, the wave equation and the Laplace equation, are described in the third, fourth
and fifth lectures, respectively.
Lecture 1 Fourier Transform
Recall that if f is a periodic function with period 2L on R, then f has a Fourier series (FS)
representation of the form

f(x) = (1/2)a0 + Σ_{n=1}^∞ ( an cos(nπx/L) + bn sin(nπx/L) ),    (1)

where

an = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx,   n = 0, 1, 2, . . .

and

bn = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx,   n = 1, 2, . . . .
Fourier series are powerful tools in treating various problems involving periodic functions.
Many practical problems do not involve periodic functions. Therefore, it is desirable to
generalize the method of Fourier series to include non-periodic functions. If f is not
periodic then we may regard it as periodic with an infinite period, i.e., we would like to
see what happens if we let L → ∞. We shall do this for reasons of motivation as well
as for making it plausible that for a non-periodic function, one should expect an integral
representation (Fourier integral) instead of Fourier series.
Set ωn = nπ/L. Then (1) can be rewritten as

f(x) = (1/2)a0 + Σ_{n=1}^∞ [an cos(ωn x) + bn sin(ωn x)]
     = (1/(2L)) ∫_{−L}^{L} f(t) dt + (1/L) Σ_{n=1}^∞ [ cos(ωn x) ∫_{−L}^{L} f(t) cos(ωn t) dt
       + sin(ωn x) ∫_{−L}^{L} f(t) sin(ωn t) dt ].
Note that

ωn+1 − ωn = (n + 1)π/L − nπ/L = π/L.
Setting ∆ω = ωn+1 − ωn = π/L, we write the Fourier series in the form

f(x) = (1/(2L)) ∫_{−L}^{L} f(t) dt + (1/π) Σ_{n=1}^∞ [ cos(ωn x) ∆ω ∫_{−L}^{L} f(t) cos(ωn t) dt
       + sin(ωn x) ∆ω ∫_{−L}^{L} f(t) sin(ωn t) dt ].
This representation is valid for any fixed L, arbitrarily large, but finite. Letting L → ∞
and assuming that the resulting nonperiodic function f(x) is absolutely integrable over
(−∞, ∞), i.e.,

∫_{−∞}^{∞} |f(x)| dx < ∞,
it seems plausible that the infinite series (1) becomes an integral from 0 to ∞, i.e.,
f(x) = (1/π) ∫_0^∞ [ cos(ωx) ∫_{−∞}^{∞} f(t) cos(ωt) dt + sin(ωx) ∫_{−∞}^{∞} f(t) sin(ωt) dt ] dω.
The above equation can be put in the form

f(x) = (1/π) ∫_0^∞ [A(ω) cos(ωx) + B(ω) sin(ωx)] dω,    (2)

which is called the Fourier integral of f, where

A(ω) = ∫_{−∞}^{∞} f(t) cos(ωt) dt,   B(ω) = ∫_{−∞}^{∞} f(t) sin(ωt) dt.
This motivates the following result.
THEOREM 1. If f is piecewise continuous in every bounded interval of R and

∫_{−∞}^{∞} |f(x)| dx < ∞,

then f can be represented by a Fourier integral

f(x) = (1/π) ∫_0^∞ [A(ω) cos(ωx) + B(ω) sin(ωx)] dω,    (3)

where

A(ω) = ∫_{−∞}^{∞} f(t) cos(ωt) dt,   B(ω) = ∫_{−∞}^{∞} f(t) sin(ωt) dt.    (4)
The Fourier integral of f converges to f(x) wherever f is continuous. If f is discontinuous at a
point x, the integral converges to

( f(x+) + f(x−) ) / 2,

i.e., the average of the left- and right-hand limits of f at that point.
In order to motivate the definition of the Fourier transform, we first express the complex form
of the Fourier integral as follows. Using the identity cos(a − b) = cos a cos b + sin a sin b,
we write the integral (3) as

f(x) = (1/π) ∫_0^∞ [ ∫_{−∞}^{∞} f(t) cos(ωx − ωt) dt ] dω.    (5)
Since cos θ = (e^{iθ} + e^{−iθ})/2, we obtain

f(x) = (1/(2π)) ∫_0^∞ ∫_{−∞}^{∞} f(t) [ e^{i(ωx−ωt)} + e^{−i(ωx−ωt)} ] dt dω
     = (1/(2π)) ∫_0^∞ ∫_{−∞}^{∞} f(t) e^{i(ωx−ωt)} dt dω + (1/(2π)) ∫_0^∞ ∫_{−∞}^{∞} f(t) e^{−i(ωx−ωt)} dt dω.
Replacing ω by −ω in the second term on the right-hand side and adjusting the limits from
−∞ to 0, we obtain

f(x) = (1/(2π)) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(t) e^{i(ωx−ωt)} dt dω
     = (1/√(2π)) ∫_{−∞}^{∞} e^{iωx} [ (1/√(2π)) ∫_{−∞}^{∞} f(t) e^{−iωt} dt ] dω,
which is the complex form of the Fourier integral of f . This leads to following pair of
transforms.
DEFINITION 2. (The Fourier Transform)
Let f : (−∞, ∞) → R or C. The Fourier transform (FT) of f(x) is defined by

F(f)(ω) = f̂(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−iωx} dx,   −∞ < ω < ∞,    (6)

provided this integral exists.
REMARK 3. Note that not all functions have a Fourier transform. For example, the
nonzero constant function C, sin x, e^x, and x^2 do not have FTs. Only functions that tend to
zero sufficiently fast as |x| → ∞ (rapidly decreasing functions) have FTs.
DEFINITION 4. (The Inverse Fourier Transform)
The inverse Fourier transform of a function f̂(ω), −∞ < ω < ∞, is defined as

F^{−1}(f̂)(x) = f(x) = (1/√(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωx} dω,   −∞ < x < ∞,    (7)

provided this integral exists.
EXAMPLE 5. Find the FT of the function

f(x) = 1 for |x| ≤ L,   f(x) = 0 for |x| > L.
Solution. Using the definition of the FT, we have

f̂(ξ) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−iξx} dx = (1/√(2π)) ∫_{−L}^{L} e^{−iξx} dx = (1/√(2π)) [ e^{−iξx}/(−iξ) ]_{−L}^{L}
     = (1/√(2π)) (e^{−iξL} − e^{iξL})/(−iξ) = (1/√(2π)) · 2 sin(ξL)/ξ.

Note that even though f(x) vanishes for x outside the interval [−L, L], the same is not
true of f̂(ξ). In general, it can be shown that if both f and f̂ vanish outside [−L, L], then
f ≡ 0.
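The result of Example 5 can be cross-checked numerically (my own verification, with arbitrary L and ξ): approximate the defining integral (6) for the indicator of [−L, L] by a midpoint rule and compare with (1/√(2π)) · 2 sin(ξL)/ξ.

```python
# Numerical check of Example 5: FT of the indicator of [-L, L].
import cmath, math

L, xi, m = 1.0, 2.3, 20000
w = 2 * L / m
# midpoint rule for (1/sqrt(2 pi)) * integral_{-L}^{L} e^{-i xi x} dx
fhat = sum(cmath.exp(-1j * xi * (-L + (k + 0.5) * w)) for k in range(m)) * w
fhat /= math.sqrt(2 * math.pi)

exact = 2 * math.sin(xi * L) / (xi * math.sqrt(2 * math.pi))
print(abs(fhat - exact) < 1e-6)   # True
```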
Some Basic Properties of Fourier Transforms:

• Linearity: \mathcal{F} is a linear transformation. For any two functions f₁ and f₂ with FTs \mathcal{F}[f_1] and \mathcal{F}[f_2], respectively, and any constants c₁ and c₂, we have

\mathcal{F}[c_1f_1+c_2f_2] = c_1\mathcal{F}[f_1] + c_2\mathcal{F}[f_2].

• Conjugation: Let f(x) be a function with FT \mathcal{F}[f]. Then the FT of the complex conjugate \overline{f(x)} is given by

\mathcal{F}[\overline{f(x)}](\omega) = \overline{\mathcal{F}[f](-\omega)}.

• Continuity: Let f(x) be an absolutely integrable function with FT \hat f(\omega). Then \hat f(\omega) is a continuous function.

• Convolution: The convolution of the functions f and g, denoted by f ∗ g, is defined by

(f*g)(x) = \int_{-\infty}^{\infty} f(x-t)g(t)\,dt,

provided the integral exists for each x (e.g., if f is bounded and g is absolutely integrable). Let \hat f(\omega) and \hat g(\omega) be the FTs of f and g, respectively. Then

\mathcal{F}(f*g) = \mathcal{F}(f)\mathcal{F}(g) = \hat f(\omega)\,\hat g(\omega).

Note: In general, \mathcal{F}[f(x)g(x)] \neq \mathcal{F}[f]\,\mathcal{F}[g].

• Parseval's identity: Let f(x) and g(x) be functions with FTs \hat f(\omega) and \hat g(\omega), respectively. Then

\int_{-\infty}^{\infty} f(x)\overline{g(x)}\,dx = \int_{-\infty}^{\infty} \hat f(\omega)\overline{\hat g(\omega)}\,d\omega.

In particular,

\int_{-\infty}^{\infty} |f(x)|^2\,dx = \int_{-\infty}^{\infty} |\hat f(\omega)|^2\,d\omega.
• Transformation of partial derivatives:

(i) Let u = u(x, t) be a function defined for −∞ < x < ∞ and t ≥ 0. If u(x, t) → 0 as x → ±∞, and \mathcal{F}[u](\omega,t) = U(\omega,t), then

\mathcal{F}[u_x](\omega,t) = i\omega\,\mathcal{F}[u] = i\omega\,U(\omega,t).

If, in addition, u_x(x, t) → 0 as x → ±∞, then

\mathcal{F}[u_{xx}](\omega,t) = -\omega^2\,\mathcal{F}[u] = -\omega^2\,U(\omega,t).

(ii) If we transform the partial derivative u_t(x, t) (with x as the variable of integration in the transform), then

\mathcal{F}[u_t](\omega,t) = \frac{d}{dt}\mathcal{F}[u](\omega,t) = \frac{d}{dt}U(\omega,t).

The Fourier transform of a time derivative equals the time derivative of the Fourier transform; that is, time differentiation and the FT with respect to x commute.
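The derivative rule can be checked numerically. The sketch below (plain Python; the helper names and the choice u(x) = e^{−x²} are our own) compares \mathcal{F}[u_x] with iω\mathcal{F}[u]:

```python
import math
import cmath

def ft_num(g, omega, a=-8.0, b=8.0, n=4000):
    # Midpoint-rule approximation of (1/sqrt(2*pi)) * integral_a^b g(x) e^{-i*omega*x} dx;
    # the tails beyond |x| = 8 are negligible for a Gaussian.
    h = (b - a) / n
    s = 0j
    for k in range(n):
        x = a + (k + 0.5) * h
        s += g(x) * cmath.exp(-1j * omega * x) * h
    return s / math.sqrt(2 * math.pi)

u = lambda x: math.exp(-x * x)            # u and u_x tend to 0 as |x| -> infinity
ux = lambda x: -2 * x * math.exp(-x * x)  # derivative of u

omega = 2.0
lhs = ft_num(ux, omega)              # F[u_x]
rhs = 1j * omega * ft_num(u, omega)  # i*omega*F[u]
```

Both sides agree to quadrature accuracy, illustrating why the transform turns x-derivatives into multiplication by iω.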
Practice Problems

1. Compute the complex Fourier series of the function f(x) = e^{ax}\cos(bx).

2. Show that if f(x) is absolutely integrable on (−∞, ∞), then

\mathcal{F}[e^{ibx}f(ax)](\xi) = \frac{1}{a}\,\mathcal{F}[f]\!\left(\frac{\xi-b}{a}\right), \quad a,b\in\mathbb{R},\ a\neq 0.

3. Find the FT of

(A) f(x) = e^{-cx^2}, where c > 0 is a constant.
(B) f(x) = e^{-|x|}
(C) f(x) = \sin(x^2)

4. Verify the following properties of the FT:

(A) \mathcal{F}[u_x] = i\xi\,\mathcal{F}[u]
(B) \mathcal{F}[u_{xx}] = -\xi^2\,\mathcal{F}[u]
(C) \mathcal{F}[u(x+c)] = e^{ic\xi}\,\mathcal{F}[u], for any c ∈ ℝ
Lecture 2 Fourier Sine and Cosine Transformations

In this lecture we discuss the Fourier sine and cosine transforms and their properties. These transforms are appropriate for problems posed on semi-infinite intervals in a spatial variable, in which the function or its derivative is prescribed on the boundary.

If f is an even or an odd function, then f can be represented by a Fourier integral that takes a simpler form than in the case of an arbitrary function.
If f(x) is an even function, then B(ω) = 0 in (3), and

A(\omega) = 2\int_0^{\infty} f(t)\cos(\omega t)\,dt.

Hence, the Fourier integral reduces to the simpler form

f(x) = \frac{1}{\pi}\int_0^{\infty} A(\omega)\cos(\omega x)\,d\omega.

Similarly, if f(x) is odd, then A(ω) = 0 in (3), and

B(\omega) = 2\int_0^{\infty} f(t)\sin(\omega t)\,dt.

Thus, (3) becomes

f(x) = \frac{1}{\pi}\int_0^{\infty} B(\omega)\sin(\omega x)\,d\omega.

These Fourier integrals motivate the definitions of the Fourier cosine transform (FCT) and the Fourier sine transform (FST). The FT of an even function f is called the FCT of f; the FT of an odd function f is called the FST of f.
DEFINITION 1. (Fourier Cosine Transform) The FCT of a function f : [0, ∞) → ℝ is defined as

\mathcal{F}_c(f) = \hat f_c(\omega) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} f(x)\cos(\omega x)\,dx \qquad (0\le\omega<\infty). \qquad (1)

DEFINITION 2. (Inverse Fourier Cosine Transform) The inverse FCT (IFCT) of a function \hat f_c(\omega) (0 ≤ ω < ∞) is defined as

\mathcal{F}_c^{-1}[\hat f_c] = f(x) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} \hat f_c(\omega)\cos(\omega x)\,d\omega \qquad (0\le x<\infty). \qquad (2)

DEFINITION 3. (Fourier Sine Transform) The FST of a function f : [0, ∞) → ℝ is defined as

\mathcal{F}_s(f) = \hat f_s(\omega) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} f(x)\sin(\omega x)\,dx \qquad (0\le\omega<\infty). \qquad (3)

DEFINITION 4. (Inverse Fourier Sine Transform) The inverse FST (IFST) of a function \hat f_s(\omega) (0 ≤ ω < ∞) is defined as

\mathcal{F}_s^{-1}[\hat f_s] = f(x) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} \hat f_s(\omega)\sin(\omega x)\,d\omega \qquad (0\le x<\infty). \qquad (4)
Basic Properties of Fourier Cosine and Sine Transforms:

• Linearity:

\mathcal{F}_c[af+bg] = a\,\mathcal{F}_c[f] + b\,\mathcal{F}_c[g],
\mathcal{F}_s[af+bg] = a\,\mathcal{F}_s[f] + b\,\mathcal{F}_s[g].

• Transforms of derivatives: Let f be a function defined for x ≥ 0 with f(x) → 0 as x → ∞. Then

\mathcal{F}_s[f'(x)] = \sqrt{\frac{2}{\pi}}\int_0^{\infty}\sin(\omega x)f'(x)\,dx
= \sqrt{\frac{2}{\pi}}\,\sin(\omega x)f(x)\Big|_{x=0}^{x=\infty} - \omega\sqrt{\frac{2}{\pi}}\int_0^{\infty}\cos(\omega x)f(x)\,dx
= -\omega\,\mathcal{F}_c[f].

If we assume in addition that f'(x) → 0 as x → ∞, then

\sqrt{\frac{2}{\pi}}\int_0^{\infty}\sin(\omega x)f''(x)\,dx
= \sqrt{\frac{2}{\pi}}\,\sin(\omega x)f'(x)\Big|_{x=0}^{x=\infty} - \omega\sqrt{\frac{2}{\pi}}\int_0^{\infty}\cos(\omega x)f'(x)\,dx
= \sqrt{\frac{2}{\pi}}\,\sin(\omega x)f'(x)\Big|_{x=0}^{x=\infty} - \omega\sqrt{\frac{2}{\pi}}\,\cos(\omega x)f(x)\Big|_{x=0}^{x=\infty} - \omega^2\sqrt{\frac{2}{\pi}}\int_0^{\infty}\sin(\omega x)f(x)\,dx
= \omega\sqrt{\frac{2}{\pi}}\,f(0) - \omega^2\,\mathcal{F}_s[f].

Thus, we have

\mathcal{F}_s[f'(x)] = -\omega\,\mathcal{F}_c[f],
\mathcal{F}_s[f''(x)] = -\omega^2\,\mathcal{F}_s[f] + \omega\sqrt{\frac{2}{\pi}}\,f(0).

Similar results hold for the Fourier cosine transform:

\mathcal{F}_c[f'(x)] = \omega\,\mathcal{F}_s[f] - \sqrt{\frac{2}{\pi}}\,f(0),
\mathcal{F}_c[f''(x)] = -\omega^2\,\mathcal{F}_c[f] - \sqrt{\frac{2}{\pi}}\,f'(0).

Note: Observe that the FST of the first derivative of a function is given in terms of the FCT of the function itself. The FST of the second derivative, however, is given in terms of the sine transform of the function, together with an additional boundary term ω√(2/π) f(0).
• Transformation of partial derivatives:

(i) Let u = u(x, t) be a function defined for x ≥ 0 and t ≥ 0. If u(x, t) → 0 as x → ∞, and \mathcal{F}_s[u](\omega,t) = \hat u_s(\omega,t), then

\mathcal{F}_s[u_x](\omega,t) = -\omega\,\mathcal{F}_c[u](\omega,t),
\mathcal{F}_c[u_x](\omega,t) = \omega\,\mathcal{F}_s[u](\omega,t) - \sqrt{\frac{2}{\pi}}\,u(0,t).

If, in addition, u_x(x, t) → 0 as x → ∞, then

\mathcal{F}_s[u_{xx}](\omega,t) = -\omega^2\,\mathcal{F}_s[u](\omega,t) + \sqrt{\frac{2}{\pi}}\,\omega\,u(0,t),
\mathcal{F}_c[u_{xx}](\omega,t) = -\omega^2\,\mathcal{F}_c[u](\omega,t) - \sqrt{\frac{2}{\pi}}\,u_x(0,t).

(ii) If we transform the partial derivative u_t(x, t) (with x as the variable of integration in the transform), then

\mathcal{F}_s[u_t](\omega,t) = \frac{d}{dt}\mathcal{F}_s[u](\omega,t),
\mathcal{F}_c[u_t](\omega,t) = \frac{d}{dt}\mathcal{F}_c[u](\omega,t).

Thus, time differentiation commutes with both the Fourier cosine and sine transformations.
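As a numerical illustration (our own sketch; the test function f(x) = e^{−x} and the helper names are assumptions, not from the lecture), the rule \mathcal{F}_s[f''] = −ω²\mathcal{F}_s[f] + ω√(2/π) f(0) can be checked against the known transform \mathcal{F}_s[e^{-x}](ω) = √(2/π) ω/(1+ω²):

```python
import math

SQ = math.sqrt(2 / math.pi)

def fst_num(g, omega, b=40.0, n=8000):
    # Midpoint rule for sqrt(2/pi) * integral_0^b g(x) sin(omega*x) dx;
    # e^{-40} makes the truncated tail negligible.
    h = b / n
    s = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        s += g(x) * math.sin(omega * x) * h
    return SQ * s

f = lambda x: math.exp(-x)  # note f''(x) = f(x) for this choice
omega = 1.7

Fs_f = fst_num(f, omega)                    # F_s[f], computed numerically
Fs_fpp = Fs_f                               # F_s[f''] = F_s[f] since f'' = f
rule = -omega**2 * Fs_f + omega * SQ * 1.0  # right-hand side of the rule, f(0) = 1
exact = SQ * omega / (1 + omega**2)         # known closed form of F_s[e^{-x}]
```

The quadrature matches the closed form, and the second-derivative rule is consistent with it, boundary term included.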
Practice Problems

1. Find the FST and FCT of the function

f(x) = \begin{cases} 1, & 0\le x\le 2,\\ 0, & x>2.\end{cases}

2. Show that if u = u(x, t) and u(x, t) → 0 as x → ∞, then

(A) \mathcal{F}_s[u_x](\omega,t) = -\omega\,\mathcal{F}_c[u](\omega,t)
(B) \mathcal{F}_c[u_x](\omega,t) = -\sqrt{\frac{2}{\pi}}\,u(0,t) + \omega\,\mathcal{F}_s[u](\omega,t)

3. Show that if u(x, t) and u_x(x, t) → 0 as x → ∞, then

(A) \mathcal{F}_s[u_{xx}](\omega,t) = -\omega^2\,\mathcal{F}_s[u](\omega,t) + \sqrt{\frac{2}{\pi}}\,\omega\,u(0,t)
(B) \mathcal{F}_c[u_{xx}](\omega,t) = -\omega^2\,\mathcal{F}_c[u](\omega,t) - \sqrt{\frac{2}{\pi}}\,u_x(0,t)
Lecture 3 Heat Flow Problems
In this lecture we study some applications of the Fourier transform to heat flow problems in which the spatial domain is infinite or semi-infinite.
1 Heat flow problem in an infinite rod

Consider the heat flow in an infinite rod with initial temperature u(x, 0) = f(x). We shall show that if the function f(x) is continuous and either absolutely integrable, i.e.,

\int_{-\infty}^{\infty} |f(x)|\,dx < \infty,

or bounded (i.e., |f(x)| ≤ M for all x), then the following IVP has a solution u(x, t) which is continuous throughout the half-plane t ≥ 0, −∞ < x < ∞:

PDE: u_t(x,t) = \alpha^2 u_{xx}(x,t), \quad -\infty<x<\infty,\ t>0, \qquad (1)
IC:  u(x,0) = f(x), \quad -\infty<x<\infty, \qquad (2)

with u(x, t), u_x(x, t) → 0 as x → ±∞, t > 0.
The stepwise solution procedure is given below.

Step 1. (Transforming the problem to an IVP in ODE)
We apply the FT \mathcal{F} to the PDE (1) and the IC (2) and use the properties of the FT to reduce the given Cauchy problem to an IVP for an ODE. Let

\mathcal{F}[u] = \hat u(\omega,t), \quad \mathcal{F}[f(x)] = \hat f(\omega).

Taking the FT of both sides of the PDE (1) and the IC (2) with respect to the x variable, we obtain

\mathcal{F}[u_t] = \alpha^2\,\mathcal{F}[u_{xx}], \quad \mathcal{F}[u(x,0)] = \mathcal{F}[f(x)].

Using the properties of the FT

\mathcal{F}[u_t] = \frac{d}{dt}\hat u(\omega,t), \quad \mathcal{F}[u_{xx}] = -\omega^2\,\hat u(\omega,t),

we have

\frac{d}{dt}\hat u(\omega,t) = -\alpha^2\omega^2\,\hat u(\omega,t), \qquad (3)
\hat u(\omega,0) = \hat f(\omega). \qquad (4)
Step 2. (Solving the transformed problem)
Note that (3)-(4) is a first-order IVP for an ODE in t, for each fixed ω. Its solution is given by

\hat u(\omega,t) = \hat f(\omega)\,e^{-\alpha^2\omega^2 t}. \qquad (5)

Step 3. (Finding the inverse transform)
To find the solution u(x, t), we take the inverse transform, with t fixed, to obtain

u(x,t) = \mathcal{F}^{-1}[\hat u(\omega,t)] = \mathcal{F}^{-1}\!\left[\hat f(\omega)\,e^{-\alpha^2\omega^2 t}\right].

Step 4. (Using the convolution property of the inverse FT)
Using the convolution property of \mathcal{F}^{-1}, we write

u(x,t) = \mathcal{F}^{-1}[\hat f(\omega)] * \mathcal{F}^{-1}\!\left[e^{-\alpha^2\omega^2 t}\right]
= f(x) * \left[\frac{1}{\sqrt{2\alpha^2 t}}\,e^{-x^2/(4\alpha^2 t)}\right]
= \frac{1}{2\sqrt{\alpha^2\pi t}}\int_{-\infty}^{\infty} f(\omega)\,e^{-\frac{(x-\omega)^2}{4\alpha^2 t}}\,d\omega.

REMARK 1.

• Note that the integrand is made up of two factors: the initial temperature f and the function

G(x-\omega,t) = \frac{1}{2\sqrt{\alpha^2\pi t}}\,e^{-\frac{(x-\omega)^2}{4\alpha^2 t}}.

The function G is called the Green's function or impulse-response function; it is the temperature response to an initial temperature impulse at x = ω.

• The major drawback of the FT method is that not all functions can be transformed. Only functions that damp to zero sufficiently fast as |x| → ∞ have FTs.
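The Green's function can be probed numerically. A small sketch (plain Python; the helper names and the Gaussian test data are our own, and the closed-form evolution of a Gaussian initial temperature is quoted as a standard result, not derived in the lecture):

```python
import math

def heat_green(x, t, alpha=1.0):
    # G(x, t) = exp(-x^2 / (4 alpha^2 t)) / (2 sqrt(pi alpha^2 t))
    return math.exp(-x * x / (4 * alpha**2 * t)) / (2 * math.sqrt(math.pi * alpha**2 * t))

def heat_solution(x, t, f, alpha=1.0, a=-20.0, b=20.0, n=4000):
    # u(x, t) = integral f(w) G(x - w, t) dw, approximated by the midpoint rule.
    h = (b - a) / n
    s = 0.0
    for k in range(n):
        w = a + (k + 0.5) * h
        s += f(w) * heat_green(x - w, t, alpha) * h
    return s

mass = heat_solution(0.0, 0.5, lambda w: 1.0)  # integral of G: total heat is conserved
val = heat_solution(0.3, 0.25, lambda w: math.exp(-w * w))
# standard closed form for a Gaussian initial condition with alpha = 1:
exact = math.exp(-0.3**2 / (1 + 4 * 0.25)) / math.sqrt(1 + 4 * 0.25)
```

The kernel integrates to 1 in x, and the convolution reproduces the known spreading Gaussian.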
2 Heat flow problem in a semi-infinite rod

Consider the heat flow in a semi-infinite region with the temperature prescribed as a function of time at x = 0.

EXAMPLE 2. Solve the problem

PDE: u_t(x,t) = \alpha^2 u_{xx}(x,t), \quad 0<x<\infty,\ t>0, \qquad (6)
BC:  u(0,t) = b_0, \quad t>0, \qquad (7)
IC:  u(x,0) = 0, \quad 0<x<\infty, \qquad (8)

with u(x, t), u_x(x, t) → 0 as x → ∞.

Since the spatial domain is 0 < x < ∞, we use a half-range transform. Since u itself is specified at x = 0, we use the Fourier sine transform (and not the Fourier cosine transform). We solve this problem with the following steps.
Step 1. (Transforming the problem)
Let \mathcal{F}_s[u] = \hat u_s(\omega,t). Taking the FST of both sides of (6) and noting the following properties of the FST:

\mathcal{F}_s[u_t] = \sqrt{\frac{2}{\pi}}\int_0^{\infty} u_t(x,t)\sin(\omega x)\,dx = \frac{d}{dt}\left[\sqrt{\frac{2}{\pi}}\int_0^{\infty} u(x,t)\sin(\omega x)\,dx\right] = \frac{d}{dt}\mathcal{F}_s[u] = \frac{d}{dt}\hat u_s(\omega,t)

and

\mathcal{F}_s[u_{xx}] = -\omega^2\,\mathcal{F}_s[u] + \sqrt{\frac{2}{\pi}}\,\omega\,u(0,t) = -\omega^2\,\hat u_s(\omega,t) + \sqrt{\frac{2}{\pi}}\,b_0\,\omega,

where in the last step we have used the BC u(0, t) = b₀, we arrive at the ODE

\frac{d}{dt}\hat u_s(\omega,t) = \alpha^2\left(-\omega^2\,\hat u_s(\omega,t) + \sqrt{\frac{2}{\pi}}\,b_0\,\omega\right).

Next, taking the FST of the IC (8), we obtain

\mathcal{F}_s[u(x,0)] = \mathcal{F}_s[0] \implies \hat u_s(\omega,0) = 0.

Thus, we transform the original problem (6)-(8) into an IVP for an ODE:

\frac{d}{dt}\hat u_s(\omega,t) + \alpha^2\omega^2\,\hat u_s(\omega,t) = \sqrt{\frac{2}{\pi}}\,\alpha^2 b_0\,\omega,
\hat u_s(\omega,0) = 0.
Step 2. (Solving the transformed problem)
Using the standard method for linear first-order ODEs, the solution is given by

\hat u_s(\omega,t) = \sqrt{\frac{2}{\pi}}\,\frac{b_0}{\omega}\left(1-e^{-\omega^2\alpha^2 t}\right). \qquad (9)

Step 3. (Finding the inverse transform)
Applying the inverse FST to both sides of (9), we find that

u(x,t) = \mathcal{F}_s^{-1}[\hat u_s(\omega,t)] = \mathcal{F}_s^{-1}\left[\sqrt{\frac{2}{\pi}}\,\frac{b_0}{\omega}\left(1-e^{-\omega^2\alpha^2 t}\right)\right]
= \frac{2}{\pi}\,b_0\int_0^{\infty} \frac{\sin(\omega x)}{\omega}\left(1-e^{-\alpha^2\omega^2 t}\right)d\omega
= b_0\,\mathrm{erfc}\!\left(\frac{x}{2\sqrt{\alpha^2 t}}\right),

where erfc(y) is the complementary error function given by

\mathrm{erfc}(y) = \frac{2}{\sqrt{\pi}}\int_y^{\infty} e^{-\tau^2}\,d\tau.

Hence, the solution of the heat conduction problem is

u(x,t) = b_0\,\mathrm{erfc}\!\left(\frac{x}{2\sqrt{\alpha^2 t}}\right).
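The erfc form can be confirmed numerically. The sketch below (plain Python; parameter values and helper names are our own) splits the inverse-transform integral using the Dirichlet integral (2/π)∫₀^∞ sin(ωx)/ω dω = 1 for x > 0, so that only a rapidly damped integrand needs to be truncated:

```python
import math

b0, alpha, x, t = 3.0, 1.0, 0.8, 0.5

def erf_from_integral(x, t, n=4000, wmax=10.0):
    # (2/pi) * integral_0^wmax sin(w*x)/w * exp(-alpha^2 w^2 t) dw,
    # which approximates erf(x / (2 sqrt(alpha^2 t))); the tail beyond
    # wmax is negligible thanks to the Gaussian damping.
    h = wmax / n
    s = 0.0
    for k in range(n):
        w = (k + 0.5) * h
        s += math.sin(w * x) / w * math.exp(-alpha**2 * w**2 * t) * h
    return 2 / math.pi * s

# u = b0 * (1 - erf(...)) = b0 * erfc(...)
u_num = b0 * (1 - erf_from_integral(x, t))
u_closed = b0 * math.erfc(x / (2 * math.sqrt(alpha**2 * t)))
```

The quadrature agrees with Python's built-in `math.erfc` to high accuracy.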
Practice Problems

1. Solve the following IVP:

u_t = u_{xx}, \quad -\infty<x<\infty,\ t>0,
u(x,0) = 2x, \quad -\infty<x<\infty,
u(x,t),\ u_x(x,t)\to 0 \text{ as } x\to\pm\infty,\ t>0.

2. Let f(x) ∈ C(ℝ) be an odd function. If f(x) is absolutely integrable on ℝ and f(x) → 0 as x → ±∞, show that the unique continuous solution of the problem

u_t = \alpha^2 u_{xx}, \quad -\infty<x<\infty,\ t>0,
u(x,0) = f(x), \quad -\infty<x<\infty,
u(x,t),\ u_x(x,t)\to 0 \text{ as } x\to\pm\infty,\ t>0,

is also odd in the variable x. Show that the conclusion is false if the BC is dropped.
3. Apply an appropriate FT to solve the IBVP:

u_t = u_{xx}, \quad x>0,\ t>0,
u(x,0) = x, \quad x>0,
u_x(0,t) = e^{-t}, \quad t>0,
u(x,t),\ u_x(x,t)\to 0 \text{ as } x\to\infty,\ t>0.
Lecture 4 Vibration of an Infinite String

In this lecture we learn how Fourier transforms can be used to solve the one-dimensional wave equation on an infinite (or semi-infinite) interval. More precisely, we derive D'Alembert's formula using the FT method.

Consider the following one-dimensional wave equation:

PDE: u_{tt}(x,t) = c^2 u_{xx}(x,t), \quad -\infty<x<\infty,\ t>0, \qquad (1)
IC:  u(x,0) = f(x), \quad u_t(x,0) = g(x), \quad -\infty<x<\infty, \qquad (2)

with u(x, t), u_x(x, t) → 0 as x → ±∞, t > 0.
Step 1. (Transforming the problem to a second-order IVP in ODE)
Let

\mathcal{F}[u] = \hat u(\omega,t), \quad \mathcal{F}[f(x)] = \hat f(\omega), \quad \mathcal{F}[g(x)] = \hat g(\omega).

Taking the FT of both sides of the PDE (1) and the ICs with respect to the x variable, we obtain

\mathcal{F}[u_{tt}] = c^2\,\mathcal{F}[u_{xx}], \quad \mathcal{F}[u(x,0)] = \mathcal{F}[f(x)], \quad \mathcal{F}[u_t(x,0)] = \mathcal{F}[g(x)].

Using the properties of the FT

\mathcal{F}[u_{tt}] = \frac{d^2}{dt^2}\hat u(\omega,t), \quad \mathcal{F}[u_{xx}] = -\omega^2\,\hat u(\omega,t),

we have

\frac{d^2}{dt^2}\hat u(\omega,t) + c^2\omega^2\,\hat u(\omega,t) = 0, \qquad (3)
\hat u(\omega,0) = \hat f(\omega), \quad \hat u_t(\omega,0) = \hat g(\omega). \qquad (4)
Step 2. (Solving the transformed problem) The general solution of (3) is given by

\hat u(\omega,t) = C(\omega)\cos(c\omega t) + D(\omega)\sin(c\omega t). \qquad (5)

The condition \hat u(\omega,0) = \hat f(\omega) yields

C(\omega) = \hat u(\omega,0) = \hat f(\omega).

Differentiating (5), we have

\hat u_t(\omega,t) = -c\omega\,C(\omega)\sin(c\omega t) + c\omega\,D(\omega)\cos(c\omega t). \qquad (6)

From the condition \hat u_t(\omega,0) = \hat g(\omega), we obtain

\hat g(\omega) = \hat u_t(\omega,0) = c\omega\,D(\omega).

Thus, we write the solution of the IVP as

\hat u(\omega,t) = \hat f(\omega)\cos(c\omega t) + \hat g(\omega)\,\frac{\sin(c\omega t)}{c\omega}. \qquad (7)
Step 3. (Taking the inverse transform)
Taking the inverse transform of both sides of (7) and using linearity, we have

u(x,t) = \mathcal{F}^{-1}[\hat u(\omega,t)] = \mathcal{F}^{-1}[\hat f(\omega)\cos(c\omega t)] + \mathcal{F}^{-1}\left[\hat g(\omega)\,\frac{\sin(c\omega t)}{c\omega}\right] := I_1 + I_2.

For I₁, we note that

I_1 = \mathcal{F}^{-1}[\hat f(\omega)\cos(c\omega t)](x)
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \hat f(\omega)\cos(c\omega t)\,e^{i\omega x}\,d\omega
= \frac{1}{2}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \hat f(\omega)\left(e^{ic\omega t}+e^{-ic\omega t}\right)e^{i\omega x}\,d\omega \qquad (8)
= \frac{1}{2}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \hat f(\omega)\,e^{i(x+ct)\omega}\,d\omega + \frac{1}{2}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \hat f(\omega)\,e^{i(x-ct)\omega}\,d\omega
= \frac{1}{2}\left[f(x+ct)+f(x-ct)\right]. \qquad (9)

For the second term I₂, we use the convolution theorem as follows. Note that if

h(x) = \begin{cases} \dfrac{1}{2c}, & |x|\le ct,\\[2pt] 0, & |x|>ct,\end{cases}

then its FT is given by

\hat h(\omega) = \frac{1}{\sqrt{2\pi}}\,\frac{\sin(c\omega t)}{c\omega}.

An application of the convolution theorem now yields

I_2 = \mathcal{F}^{-1}\left[\hat g(\omega)\,\frac{\sin(c\omega t)}{c\omega}\right](x) = \int_{-\infty}^{\infty} g(y)\,h(x-y)\,dy = \frac{1}{2c}\int_{x-ct}^{x+ct} g(y)\,dy. \qquad (10)

Adding (9) and (10) yields D'Alembert's formula:

u(x,t) = \frac{1}{2}\left[f(x+ct)+f(x-ct)\right] + \frac{1}{2c}\int_{x-ct}^{x+ct} g(y)\,dy. \qquad (11)
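D'Alembert's formula can be tested directly. The sketch below (plain Python; the sample data f, g, the step size, and the helper names are our own choices) checks the wave equation by centered finite differences and verifies the initial displacement:

```python
import math

c = 2.0
f = lambda x: math.exp(-x * x)        # initial displacement
g = lambda x: x * math.exp(-x * x)    # initial velocity

def G(y):
    # antiderivative of g, so the integral in (11) is available in closed form here
    return -math.exp(-y * y) / 2

def u(x, t):
    # d'Alembert's formula (11)
    return 0.5 * (f(x + c * t) + f(x - c * t)) + (G(x + c * t) - G(x - c * t)) / (2 * c)

x0, t0, h = 0.4, 0.7, 1e-3
utt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
uxx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
```

Up to the O(h²) truncation error of the difference quotients, u_tt agrees with c²u_xx, and u(x, 0) reproduces f(x) exactly.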
Practice Problems

1. Solve the following IVP:

u_{tt} = 4u_{xx}, \quad -\infty<x<\infty,\ t>0,
u(x,0) = x^2, \quad u_t(x,0) = 0, \quad -\infty<x<\infty,
u(x,t),\ u_x(x,t)\to 0 \text{ as } x\to\pm\infty,\ t>0.

2. Solve the following IBVP:

u_{tt} = c^2 u_{xx}, \quad 0<x<\infty,\ t>0,
u(x,0) = u_t(x,0) = 0, \quad 0<x<\infty,
u(0,t) = 1, \quad t>0,
u(x,t),\ u_x(x,t)\to 0 \text{ as } x\to\infty,\ t>0.
Lecture 5 Laplace's Equation in a Half-Plane

The steady-state temperature distribution for y > 0, with the temperature u(x, 0) = f(x) prescribed on an infinite wall y = 0, is described by the problem:

PDE: u_{xx} + u_{yy} = 0, \quad -\infty<x<\infty,\ y>0, \qquad (1)
BC:  u(x,0) = f(x), \quad -\infty<x<\infty, \qquad (2)

where u is bounded as y → ∞, and both u and u_x → 0 as |x| → ∞.
Solution. To solve this problem, we proceed as follows. Let

\mathcal{F}[u](\omega,y) = \hat u(\omega,y), \quad \mathcal{F}[f(x)] = \hat f(\omega).

Step 1. (Transforming the problem using FT)
Taking the FT of the PDE (1) in the variable x and using linearity, we have

\mathcal{F}[u_{xx}] + \mathcal{F}[u_{yy}] = 0. \qquad (3)

Since u and u_x → 0 as |x| → ∞, it follows that

-\omega^2\,\mathcal{F}[u](\omega,y) + \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} u_{yy}\,e^{-i\omega x}\,dx = 0
\implies -\omega^2\,\hat u(\omega,y) + \frac{\partial^2}{\partial y^2}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} u(x,y)\,e^{-i\omega x}\,dx\right] = 0
\implies \frac{d^2}{dy^2}\hat u(\omega,y) - \omega^2\,\hat u(\omega,y) = 0, \qquad (4)

which is a second-order linear ODE in y. Taking the FT of the BC yields

\hat u(\omega,0) = \mathcal{F}[f(x)] = \hat f(\omega). \qquad (5)
Step 2. (Solving the transformed problem)
The general solution of (4) is given by

\hat u(\omega,y) = A(\omega)\,e^{\omega y} + B(\omega)\,e^{-\omega y}, \qquad (6)

where A(ω) and B(ω) are to be determined. Since u is bounded as y → ∞, its FT \hat u(\omega,y) must be bounded as y → ∞. This forces A(ω) = 0 for ω > 0, and B(ω) = 0 for ω < 0. Thus,

\hat u(\omega,y) = K(\omega)\,e^{-|\omega|y}, \qquad (7)

where K(ω) is independent of y. Using (7) and (5), we obtain

\hat f(\omega) = K(\omega). \qquad (8)

Therefore,

\hat u(\omega,y) = \hat f(\omega)\,e^{-|\omega|y} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)\,e^{-|\omega|y}\,e^{-i\omega x}\,dx. \qquad (9)
Step 3. (Applying the inverse FT)
Taking the inverse FT of both sides of (9), we obtain

u(x,y) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(\tau)\,e^{-|\omega|y}\,e^{-i\omega\tau}\,d\tau\right]e^{i\omega x}\,d\omega
= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(\tau)\,d\tau\int_{-\infty}^{\infty} e^{i\omega(x-\tau)-|\omega|y}\,d\omega. \qquad (10)

An easy computation shows that

\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega(x-\tau)-|\omega|y}\,d\omega
= \frac{1}{2\pi}\int_{-\infty}^{0} e^{\omega[y+i(x-\tau)]}\,d\omega + \frac{1}{2\pi}\int_{0}^{\infty} e^{-\omega[y-i(x-\tau)]}\,d\omega
= \frac{1}{2\pi}\left[\frac{1}{y+i(x-\tau)}+\frac{1}{y-i(x-\tau)}\right] = \frac{1}{\pi}\,\frac{y}{(\tau-x)^2+y^2}. \qquad (11)

Substituting (11) into (10), we conclude that

u(x,y) = \frac{y}{\pi}\int_{-\infty}^{\infty} \frac{f(\tau)}{(\tau-x)^2+y^2}\,d\tau. \qquad (12)
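The kernel in (12) is the Poisson kernel for the half-plane. Two quick numerical checks (a sketch in plain Python; the names and sample points are our own): it integrates to 1 in τ, and it is harmonic in (x, y):

```python
import math

def poisson_kernel(x, y, tau):
    # (1/pi) * y / ((tau - x)^2 + y^2), the kernel appearing in (12)
    return y / (math.pi * ((tau - x)**2 + y**2))

# Check that the kernel integrates to 1 over tau, using the closed-form
# antiderivative arctan((tau - x)/y)/pi and a large cutoff T.
x0, y0 = 0.3, 1.5
T = 1e7
mass = (math.atan((T - x0) / y0) - math.atan((-T - x0) / y0)) / math.pi

# Check harmonicity in (x, y) at a sample point with a 5-point stencil.
xs, ys, tau0, h = 0.2, 1.0, 0.5, 1e-3
lap = (poisson_kernel(xs + h, ys, tau0) + poisson_kernel(xs - h, ys, tau0)
       + poisson_kernel(xs, ys + h, tau0) + poisson_kernel(xs, ys - h, tau0)
       - 4 * poisson_kernel(xs, ys, tau0)) / h**2
```

Total mass 1 explains why u(x, y) → f(x) as y → 0⁺ for continuous bounded f, and harmonicity of the kernel carries over to u under the integral sign.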
Let us consider the following example.

EXAMPLE 1. Let u(x, y) solve

u_{xx} + u_{yy} = 0, \quad -\infty<x<\infty,\ y>0, \qquad (13)
u(x,0) = e^{-x^2}. \qquad (14)

If u(x, y) is continuous and bounded, show that

\int_{-\infty}^{\infty} u(x,y)\,dx = \sqrt{\pi}, \quad \text{for each } y\ge 0.
Solution. Since e^{-x^2} is bounded and continuous, we apply formula (12) to obtain (for y > 0)

u(x,y) = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{y}{y^2+(x-\tau)^2}\,e^{-\tau^2}\,d\tau. \qquad (15)

Integrating both sides of (15) with respect to x gives

\int_{-\infty}^{\infty} u(x,y)\,dx = \int_{-\infty}^{\infty}\frac{1}{\pi}\int_{-\infty}^{\infty} \frac{y}{y^2+(x-\tau)^2}\,e^{-\tau^2}\,d\tau\,dx. \qquad (16)

Interchanging the order of integration (this is possible because the integrand is absolutely integrable for y > 0) and using \int_{-\infty}^{\infty} e^{-\tau^2}\,d\tau = \sqrt{\pi}, we obtain

\int_{-\infty}^{\infty} u(x,y)\,dx = \int_{-\infty}^{\infty} \frac{y}{\pi}\left[\int_{-\infty}^{\infty} \frac{dx}{y^2+(x-\tau)^2}\right]e^{-\tau^2}\,d\tau = \frac{y}{\pi}\cdot\frac{\pi}{y}\int_{-\infty}^{\infty} e^{-\tau^2}\,d\tau = \sqrt{\pi}.

Since \int_{-\infty}^{\infty} u(x,0)\,dx = \int_{-\infty}^{\infty} e^{-\tau^2}\,d\tau = \sqrt{\pi}, the identity also holds for y = 0. Hence the result.
Practice Problems

1. Solve

u_{xx} + u_{yy} = 0, \quad 0<x,\ y<\infty,
u(0,y) = 0, \quad u(x,0) = f(x),

where u(x, y) is assumed to be bounded and u(∞, y) = u_x(∞, y) = 0, 0 ≤ y < ∞.

2. Solve

u_{xx} + u_{yy} = 0, \quad -\infty<x<\infty,\ 0<y<2,
u(x,0) = f(x), \quad u(x,2) = 0, \quad -\infty<x<\infty,
u(x,y)\to 0 \text{ uniformly in } y \text{ as } |x|\to\infty.
Module 9: The Method of Green's Functions

The method of Green's functions is an important technique for solving boundary value problems and initial-boundary value problems for partial differential equations.

In this module, we learn the Green's function method for finding solutions of partial differential equations. This is accomplished by constructing integral identities (Green's theorems) appropriate for second-order differential equations. These integral theorems are then used to show how BVPs and IBVPs can be solved in terms of appropriately defined Green's functions. More precisely, we study the construction and use of Green's functions for the Laplace, heat, and wave equations.
MODULE 9: THE METHOD OF GREEN'S FUNCTIONS
Lecture 1 The Laplace Equation

Let Ω be a bounded domain in ℝ². Consider the Laplace equation

\nabla^2 u = 0 \quad \text{in } \Omega \qquad (1)

satisfying the BC

\left.\left(\alpha u + \beta\frac{\partial u}{\partial n}\right)\right|_{\partial\Omega} = B. \qquad (2)

Here, α(x), β(x), and B are given functions evaluated on the boundary ∂Ω. The term ∂u/∂n denotes the exterior normal derivative on ∂Ω. The boundary condition (2) relates the values of u on ∂Ω and the flux of u through ∂Ω. We assume that α(x) ≥ 0 and β(x) ≥ 0 on ∂Ω.

If α ≠ 0, β = 0, then (2) is referred to as a Dirichlet BC. If α = 0, β ≠ 0, then (2) is referred to as a Neumann BC. If α ≠ 0, β ≠ 0, then (2) is known as a Robin-type (or mixed) BC. We assume that ∂Ω is subdivided into three disjoint subsets ∂Ω₁, ∂Ω₂ and ∂Ω₃, on which u satisfies boundary conditions of the first kind (Dirichlet type), second kind (Neumann type), and third kind (mixed type), respectively.
Introducing a function w(x) (whose properties we specify later) and applying Green's theorem, we note that

\int_{\Omega}\left(w\nabla^2 u - u\nabla^2 w\right)dx = \int_{\partial\Omega}\left(u\frac{\partial w}{\partial n} - w\frac{\partial u}{\partial n}\right)ds, \qquad (3)

where n is the exterior unit normal to ∂Ω. Equation (3) is the basic integral theorem from which the Green's function method proceeds in the elliptic case.

Now the function w(x) is to be determined so that (3) expresses u at an arbitrary point ξ in the region Ω in terms of w and the known functions in (1) and (2).

Let w(x) be a solution of

\nabla^2 w = \delta(x-\xi), \qquad (4)

where δ(x − ξ) is the two-dimensional Dirac delta function. Using the property of the Dirac delta function, we have

\int_{\Omega} u\,\nabla^2 w\,dx = \int_{\Omega} u\,\delta(x-\xi)\,dx = u(\xi). \qquad (5)

In view of (1), we have

\int_{\Omega} w\,\nabla^2 u\,dx = 0. \qquad (6)
It now remains to choose boundary conditions for w(x) on ∂Ω so that the boundary integral in (3) involves only w(x) and known functions. This can be accomplished by requiring w(x) to satisfy the homogeneous form of the given boundary condition (2), i.e.,

\left.\left(\alpha w + \beta\frac{\partial w}{\partial n}\right)\right|_{\partial\Omega} = 0. \qquad (7)

For x ∈ ∂Ω₁, we have

u\frac{\partial w}{\partial n} - w\frac{\partial u}{\partial n} = \frac{1}{\alpha}\left(\alpha u\frac{\partial w}{\partial n} - \alpha w\frac{\partial u}{\partial n}\right) = \frac{1}{\alpha}\frac{\partial w}{\partial n}\left(\alpha u + \beta\frac{\partial u}{\partial n}\right) = \frac{1}{\alpha}B\frac{\partial w}{\partial n}, \qquad (8)

where we have used (2) and (7). For x ∈ ∂Ω₂ ∪ ∂Ω₃, we have

u\frac{\partial w}{\partial n} - w\frac{\partial u}{\partial n} = -\frac{1}{\beta}\,w\left(\alpha u + \beta\frac{\partial u}{\partial n}\right) = -\frac{1}{\beta}Bw. \qquad (9)

The function w(x) is called the Green's function for the boundary value problem (1)-(2). To indicate its dependence on the point ξ, we denote the Green's function by

w = G(x;\xi). \qquad (10)

In terms of G(x; ξ), the solution of (1)-(2) takes the form

u(\xi) = -\int_{\partial\Omega_1}\frac{1}{\alpha}B\frac{\partial G}{\partial n}\,ds + \int_{\partial\Omega_2\cup\partial\Omega_3}\frac{1}{\beta}BG\,ds. \qquad (11)

The Green's function G(x; ξ) thus satisfies the equation

\nabla^2 G = \delta(x-\xi), \quad x,\xi\in\Omega, \qquad (12)

and the BC

\left.\left(\alpha G + \beta\frac{\partial G}{\partial n}\right)\right|_{\partial\Omega} = 0. \qquad (13)
The Poisson equation in a rectangle: Let Ω : 0 < x < a, 0 < y < b be a rectangular domain in ℝ² with boundary ∂Ω. Consider the following BVP:

\nabla^2 u = f(x,y) \quad \text{in } \Omega, \qquad (14)

with the BC

u(x,0) = u(x,b) = 0, \quad 0<x<a, \qquad (15)
u(0,y) = u(a,y) = 0, \quad 0<y<b.

Let G(x, y; ξ, η) be the solution of the BVP

\nabla^2 G = \delta(x-\xi, y-\eta) \quad \text{in } \Omega \qquad (16)

with the BC

G(x,0;\xi,\eta) = G(x,b;\xi,\eta) = 0, \quad 0<x<a, \qquad (17)
G(0,y;\xi,\eta) = G(a,y;\xi,\eta) = 0, \quad 0<y<b,

where δ(x − ξ, y − η) = δ(x − ξ)δ(y − η). Since ∂Ω is piecewise smooth, we use the divergence theorem to find that, for a pair of smooth functions u and w,

\int_{\Omega}\left(w\nabla^2 u - u\nabla^2 w\right)dx = \int_{\partial\Omega}\left(u\frac{\partial w}{\partial n} - w\frac{\partial u}{\partial n}\right)ds. \qquad (18)

Equation (18) is Green's formula for functions of two space variables. If u is the solution of the given BVP and w is replaced by G, then the homogeneous BCs satisfied by both u and G make the right-hand side of (18) vanish, and the formula reduces to

\int_{\Omega}\left[u(x,y)\delta(x-\xi)\delta(y-\eta) - f(x,y)G(x,y;\xi,\eta)\right]dx\,dy = 0. \qquad (19)

This yields

u(\xi,\eta) = \int_{\Omega} G(x,y;\xi,\eta)f(x,y)\,dx\,dy. \qquad (20)

Applying (18) with u(x, y) replaced by G(x, y; ξ, η) and w(x, y) replaced by G(x, y; ρ, σ), we obtain the symmetry relation

G(\xi,\eta;\rho,\sigma) = G(\rho,\sigma;\xi,\eta). \qquad (21)

After a simple interchange of variables, (20) becomes

u(x,y) = \int_{\Omega} G(x,y;\xi,\eta)f(\xi,\eta)\,d\xi\,d\eta. \qquad (22)

G(x, y; ξ, η) is called the Green's function of the given BVP. Formula (22) shows the effect of all the sources in Ω on the temperature at the point (x, y).

To construct the Green's function G, recall the two-dimensional eigenvalue problem associated with the BVP (14)-(15):

U_{xx} + U_{yy} + \lambda U = 0, \quad 0<x<a,\ 0<y<b,
U(0,y) = U(a,y) = 0, \quad 0<y<b,
U(x,0) = U(x,b) = 0, \quad 0<x<a.
The eigenvalues and the corresponding eigenfunctions are given by

\lambda_{nm} = \left(\frac{n\pi}{a}\right)^2 + \left(\frac{m\pi}{b}\right)^2, \quad U_{nm} = \sin\left(\frac{n\pi x}{a}\right)\sin\left(\frac{m\pi y}{b}\right), \quad n,m = 1,2,\ldots

We now seek an expansion of the form

G(x,y;\xi,\eta) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} c_{nm}(\xi,\eta)\,U_{nm}(x,y) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} c_{nm}(\xi,\eta)\sin\left(\frac{n\pi x}{a}\right)\sin\left(\frac{m\pi y}{b}\right). \qquad (23)

Substituting (23) into (16), we obtain

\nabla^2 G(x,y;\xi,\eta) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} c_{nm}(\xi,\eta)\left(\nabla^2 U_{nm}\right)(x,y) = -\sum_{n=1}^{\infty}\sum_{m=1}^{\infty} \lambda_{nm}\,c_{nm}(\xi,\eta)\,U_{nm}(x,y) = \delta(x-\xi)\delta(y-\eta). \qquad (24)

Multiplying both sides of (24) by U_{pq}, integrating over Ω, and using the property of the Dirac delta function, we obtain

c_{pq}(\xi,\eta) = -\frac{4}{ab\,\lambda_{pq}}\,U_{pq}(\xi,\eta). \qquad (25)

In view of (23), we conclude that

G(x,y;\xi,\eta) = -\frac{4}{ab}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty} \frac{\sin\left(\frac{n\pi\xi}{a}\right)\sin\left(\frac{m\pi\eta}{b}\right)}{\left(\frac{n\pi}{a}\right)^2+\left(\frac{m\pi}{b}\right)^2}\,\sin\left(\frac{n\pi x}{a}\right)\sin\left(\frac{m\pi y}{b}\right). \qquad (26)
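Truncating the double series (26) gives a computable approximation to G. The sketch below (plain Python; the cutoff N and the sample points are our own choices) checks the symmetry relation (21) and the homogeneous boundary condition:

```python
import math

a, b, N = 1.0, 2.0, 25  # N: truncation level of the double series

def G(x, y, xi, eta):
    # Truncated double sine series (26) on the rectangle 0 < x < a, 0 < y < b.
    s = 0.0
    for n in range(1, N + 1):
        for m in range(1, N + 1):
            lam = (n * math.pi / a)**2 + (m * math.pi / b)**2
            s += (math.sin(n * math.pi * xi / a) * math.sin(m * math.pi * eta / b)
                  * math.sin(n * math.pi * x / a) * math.sin(m * math.pi * y / b)) / lam
    return -4 / (a * b) * s

g1 = G(0.3, 0.7, 0.6, 1.2)
g2 = G(0.6, 1.2, 0.3, 0.7)    # symmetry (21): G(x, y; xi, eta) = G(xi, eta; x, y)
edge = G(0.0, 0.7, 0.6, 1.2)  # G vanishes on the boundary x = 0
```

The symmetry holds term by term, since each summand is unchanged when (x, y) and (ξ, η) are swapped.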
EXAMPLE 1. Use the Green's function method to solve

u_{xx} + u_{yy} = -\pi^2\sin(\pi x)\sin(2\pi y), \quad 0<x<1,\ 0<y<2,
u(x,0) = u(x,2) = 0, \quad 0<x<1,
u(0,y) = u(1,y) = 0, \quad 0<y<2.

Here, a = 1, b = 2 and f(x, y) = −π² sin(πx) sin(2πy). By (26), we have

G(x,y;\xi,\eta) = -2\sum_{n=1}^{\infty}\sum_{m=1}^{\infty} \frac{\sin(n\pi\xi)\sin(m\pi\eta/2)}{(n\pi)^2+(m\pi)^2/4}\,\sin(n\pi x)\sin\left(\frac{m\pi y}{2}\right).

It now follows from (22) that

u(x,y) = \int_0^2\!\!\int_0^1 (-2)\sum_{n=1}^{\infty}\sum_{m=1}^{\infty} \frac{4\sin(n\pi\xi)\sin(m\pi\eta/2)}{\pi^2(4n^2+m^2)}\,\sin(n\pi x)\sin\left(\frac{m\pi y}{2}\right)\times(-\pi^2)\sin(\pi\xi)\sin(2\pi\eta)\,d\xi\,d\eta
= 8\sum_{n=1}^{\infty}\sum_{m=1}^{\infty} \frac{1}{4n^2+m^2}\left(\int_0^1 \sin(\pi\xi)\sin(n\pi\xi)\,d\xi\right)\left(\int_0^2 \sin(2\pi\eta)\sin\left(\frac{m\pi\eta}{2}\right)d\eta\right)\sin(n\pi x)\sin\left(\frac{m\pi y}{2}\right)
= \frac{1}{2}\left(\frac{8}{4\cdot 1^2+4^2}\right)\sin(\pi x)\sin(2\pi y) = \frac{1}{5}\sin(\pi x)\sin(2\pi y),

since, by orthogonality, the ξ-integral vanishes unless n = 1 (where it equals 1/2) and the η-integral vanishes unless m = 4 (where it equals 1).
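The answer can be verified independently of the series. A short sketch (plain Python; the step size and sample point are our own choices) checks that u = (1/5) sin(πx) sin(2πy) satisfies the Poisson equation and the boundary conditions:

```python
import math

def u(x, y):
    # candidate solution obtained above
    return math.sin(math.pi * x) * math.sin(2 * math.pi * y) / 5

def f(x, y):
    # right-hand side of the Poisson equation in the example
    return -math.pi**2 * math.sin(math.pi * x) * math.sin(2 * math.pi * y)

x0, y0, h = 0.3, 0.7, 1e-3
# 5-point finite-difference Laplacian of u at (x0, y0)
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4 * u(x0, y0)) / h**2
```

Up to O(h²) truncation error, the discrete Laplacian matches f, and u vanishes identically on the sides of the rectangle.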
REMARK 2. Recall that separation of variables cannot be performed if the PDE and/or the BCs are nonhomogeneous. The eigenfunction expansion technique is used to deal with problems in which the PDE is nonhomogeneous and the BCs are homogeneous.
Practice Problems

1. Use the Green's function method to find the solution of the Dirichlet BVP:

u_{xx} + u_{yy} = x^2 + y^2, \quad 0<x<1,\ 0<y<1,
u(x,0) = u(x,1) = 0, \quad 0<x<1,
u(0,y) = u(1,y) = 0, \quad 0<y<1.

2. Use the Green's function method to find the solution of the Neumann BVP:

u_{xx} + u_{yy} = 0, \quad 0<x<1,\ 0<y<1,
u_x(x,0) = u_x(x,1) = 0, \quad 0<x<1,
u(0,y) = u(1,y) = 0, \quad 0<y<1.
Lecture 2 The Wave Equation

Let Ω ⊂ ℝ² be a bounded domain. Consider the wave equation

u_{tt} - \nabla^2 u = 0, \quad (x,t)\in Q = \Omega\times(0,T], \qquad (1)

subject to the IC

u(x,0) = f(x), \quad u_t(x,0) = g(x), \quad x\in\Omega, \qquad (2)

and the BC

\left.\left(\alpha u + \beta\frac{\partial u}{\partial n}\right)\right|_{\partial\Omega} = B(x,t), \quad 0<t\le T. \qquad (3)

Notice that the problem is defined over the bounded (cylindrical) region Q in (x, t)-space (see Fig. 9.1). The lateral boundary of Q is denoted by ∂Q_x, and the two caps of the cylinder, which are portions of the planes t = 0 and t = T, are denoted by ∂Q₀ and ∂Q_T, respectively. The boundary conditions for u(x, t) are assigned on ∂Q_x. The exterior unit normal n to ∂Q has the form n = (n_x, 0) on ∂Q_x, where n_x is the exterior unit normal to ∂Ω. On ∂Q₀, n has the form n = (0, −1), and on ∂Q_T, it has the form n = (0, 1).

Figure 9.1: The region Q
Using the divergence theorem, it follows that

\int_Q \left[w\left(u_{tt}-\nabla^2 u\right) - u\left(w_{tt}-\nabla^2 w\right)\right]d\bar x
= \int_Q \bar\nabla\cdot\left(-w\nabla u + u\nabla w,\ wu_t - uw_t\right)d\bar x
= \int_{\partial Q}\left(-w\nabla u + u\nabla w,\ wu_t - uw_t\right)\cdot n\,ds
= \int_{\partial Q_x}\left(-w\nabla u + u\nabla w\right)\cdot n_x\,ds + \int_{\partial Q_T}\left(wu_t - uw_t\right)dx - \int_{\partial Q_0}\left(wu_t - uw_t\right)dx, \qquad (4)

where \bar\nabla = (\nabla, \partial/\partial t) is the gradient operator in space-time. The integral relation (4) forms the basis for the Green's function method for solving the initial and boundary value problem (1)-(3).
REMARK 1. When Ω = (x₀, x₁) (i.e., in one space dimension) and Q = (x₀, x₁) × (t₀, t₁), equation (4) takes the form

\int_Q \left[w\left(u_{tt}-u_{xx}\right) - u\left(w_{tt}-w_{xx}\right)\right]dx\,dt
= -\int_{t_0}^{t_1}\left[w(x_1,t)u_x(x_1,t) - w(x_0,t)u_x(x_0,t)\right]dt
+ \int_{t_0}^{t_1}\left[w_x(x_1,t)u(x_1,t) - w_x(x_0,t)u(x_0,t)\right]dt
+ \int_{x_0}^{x_1}\left[w(x,t_1)u_t(x,t_1) - u(x,t_1)w_t(x,t_1)\right]dx
- \int_{x_0}^{x_1}\left[w(x,t_0)u_t(x,t_0) - u(x,t_0)w_t(x,t_0)\right]dx.
Next, we show how w(x, t) is determined so that the solution u(x, t) of (1)-(3) can be recovered at an arbitrary point (ξ, τ) in the region Q from (4). For this, we first require that w(x, t) be a solution of

w_{tt} - \nabla^2 w = \delta(x-\xi)\delta(t-\tau), \quad \xi\in\Omega,\ 0<\tau<T. \qquad (5)

Using the property of the Dirac delta function, we have

\int_Q u\left(w_{tt}-\nabla^2 w\right)d\bar x = \int_Q u\,\delta(x-\xi)\delta(t-\tau)\,d\bar x = u(\xi,\tau). \qquad (6)

Further, from (1), we have

\int_Q w\left(u_{tt}-\nabla^2 u\right)d\bar x = 0. \qquad (7)
Since

\int_{\partial Q_x}\left(-w\nabla u + u\nabla w\right)\cdot n_x\,ds = \int_{\partial Q_x}\left(-w\frac{\partial u}{\partial n} + u\frac{\partial w}{\partial n}\right)ds, \qquad (8)

we require that

\left.\left(\alpha w + \beta\frac{\partial w}{\partial n}\right)\right|_{\partial Q_x} = 0, \qquad (9)

and obtain

\int_{\partial Q_x}\left(-w\frac{\partial u}{\partial n} + u\frac{\partial w}{\partial n}\right)ds = \int_{S_1}\frac{1}{\alpha}B\frac{\partial w}{\partial n}\,ds - \int_{S_2\cup S_3}\frac{1}{\beta}Bw\,ds, \qquad (10)

where S₁, S₂ and S₃ are the portions of ∂Q_x that correspond to ∂Ω₁, ∂Ω₂ and ∂Ω₃ on ∂Ω, respectively. Once w(x, 0) and w_t(x, 0) are known, the integral over ∂Q₀ in (4) is completely determined, since u(x, 0) and u_t(x, 0) are given by the ICs. However, u and u_t at t = T (i.e., on ∂Q_T) are not known, so we must specify w and w_t at t = T in such a way that the unknown values of u and u_t play no role in the integral over ∂Q_T. The only possible choice is

w(x,T) = 0, \quad w_t(x,T) = 0, \qquad (11)

so that the integral over ∂Q_T vanishes. The function w(x, t) determined from (5), (9), and (11) is called the Green's function for the initial and boundary value problem (1)-(3) for u(x, t). It is denoted by w(x, t) = G(x, t; ξ, τ).
Once the initial and boundary value problem for G is solved, the values of G and G_t at t = 0 are known. The solution u at an arbitrary point (ξ, τ) is then given by

u(\xi,\tau) = \int_{\partial Q_0}\left(Gg - G_t f\right)dx - \int_{S_1}\frac{1}{\alpha}B\frac{\partial G}{\partial n}\,ds + \int_{S_2\cup S_3}\frac{1}{\beta}BG\,ds. \qquad (12)

Thus, the Green's function G(x, t; ξ, τ) satisfies the equation

G_{tt} - \nabla^2 G = \delta(x-\xi)\delta(t-\tau), \quad x,\xi\in\Omega,\ 0<\tau,\ t<T, \qquad (13)

with the end conditions

G(x,T;\xi,\tau) = 0, \quad G_t(x,T;\xi,\tau) = 0, \qquad (14)

and the BC

\left.\left(\alpha G + \beta\frac{\partial G}{\partial n}\right)\right|_{\partial Q_x} = 0, \quad t<T. \qquad (15)

Since G(x, t; ξ, τ) = G(ξ, −τ; x, −t), G satisfies the same differential equation with time running forwards instead of backwards.
D'Alembert's formula via Green's function. Consider the following IBVP:

u_{tt} = c^2 u_{xx}, \quad -\infty<x<\infty,\ t>0, \qquad (16)
u(x,t),\ u_x(x,t)\to 0 \text{ as } x\to\pm\infty,\ t>0, \qquad (17)
u(x,0) = f(x), \quad u_t(x,0) = g(x), \quad -\infty<x<\infty. \qquad (18)

The Green's function G(x, t; ξ, τ) associated with (16)-(18) is a solution of

G_{tt} = c^2 G_{xx} + \delta(x-\xi, t-\tau), \quad -\infty<x<\infty,\ t>0, \qquad (19)
G(x,t;\xi,\tau),\ G_x(x,t;\xi,\tau)\to 0 \text{ as } x\to\pm\infty, \qquad (20)
G(x,t;\xi,\tau) = 0, \quad -\infty<x<\infty,\ t<\tau, \qquad (21)

where δ(x − ξ, t − τ) = δ(x − ξ)δ(t − τ).

Note that an application of the Fourier transform yields

\mathcal{F}[\delta(x-\xi)](\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\delta(x-\xi)\,e^{-i\omega x}\,dx = \frac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}.

Writing \mathcal{F}[G](\omega,t;\xi,\tau) = \hat G(\omega,t;\xi,\tau) and applying the FT to the PDE (19) and the conditions satisfied by G, we obtain

\hat G_{tt} + \omega^2 c^2\,\hat G = \frac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}\,\delta(t-\tau), \quad t>0, \qquad (22)
\hat G(\omega,t;\xi,\tau) = 0, \quad t<\tau. \qquad (23)

Since δ(t − τ) = 0 for t ≠ τ, the solution of (22) is

\hat G(\omega,t;\xi,\tau) = \begin{cases} 0, & t<\tau,\\ C_1\cos(c\omega(t-\tau)) + C_2\sin(c\omega(t-\tau)), & t>\tau,\end{cases} \qquad (24)

where C₁ and C₂ are arbitrary functions of ω, ξ and τ. Requiring \hat G to be continuous at t = τ yields C₁ = 0. To find C₂, we consider an interval [τ₁, τ₂] with 0 < τ₁ < τ < τ₂ and integrate (22) with respect to t over this interval:

\hat G_t(\omega,\tau_2;\xi,\tau) - \hat G_t(\omega,\tau_1;\xi,\tau) + \omega^2 c^2\int_{\tau_1}^{\tau_2}\hat G(\omega,t;\xi,\tau)\,dt
= \frac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}\int_{\tau_1}^{\tau_2}\delta(t-\tau)\,dt = \frac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}\int_{-\infty}^{\infty}\delta(t-\tau)\,dt = \frac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}.

By (24),

\hat G_t(\omega,\tau_1;\xi,\tau) = 0, \quad \hat G_t(\omega,\tau_2;\xi,\tau) = c\omega\,C_2\cos(c\omega(\tau_2-\tau)).
Letting τ₁, τ₂ → τ and using the continuity of \hat G at t = τ, it follows that C₂ = e^{-i\omega\xi}/(\sqrt{2\pi}\,c\omega). Hence

\hat G(\omega,t;\xi,\tau) = \begin{cases} 0, & t<\tau,\\[2pt] \dfrac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}\,\dfrac{\sin(c\omega(t-\tau))}{c\omega}, & t>\tau.\end{cases}

Note that

\mathcal{F}^{-1}\left[\sqrt{\frac{2}{\pi}}\,\frac{\sin(a\omega)}{\omega}\right] = H(a-|x|), \qquad \mathcal{F}^{-1}\left[e^{-i\omega a}\,\mathcal{F}[f](\omega)\right] = f(x-a).

Setting a = c(t − τ) and a = ξ, respectively, we obtain

G(x,t;\xi,\tau) = \frac{1}{2c}\,H\!\left(c(t-\tau) - |x-\xi|\right). \qquad (25)

For values in the upper half (t > 0) of the (x, t)-plane, (25) can be written in the form

G(x,t;\xi,\tau) = \frac{1}{2c}\left[H\!\left((x-\xi)+c(t-\tau)\right) - H\!\left((x-\xi)-c(t-\tau)\right)\right]. \qquad (26)

Writing u in terms of G over an interval (a, b), we obtain

u(x,t) = \int_a^b\left[G(x,t;\xi,0)\,u_\tau(\xi,0) - G_\tau(x,t;\xi,0)\,u(\xi,0)\right]d\xi
- c^2\int_0^t\Big[G_\xi(x,t;\xi,\tau)\,u(\xi,\tau) - G(x,t;\xi,\tau)\,u_\xi(\xi,\tau)\Big]_{\xi=a}^{\xi=b}\,d\tau, \qquad (27)

where G(x, t; ξ, τ) is called the Green's function for the wave equation.

When −∞ < x < ∞, the corresponding formula is obtained from (27) by letting a → −∞ and b → ∞ and taking into account that G(x, t; ξ, τ) = 0 for |x| sufficiently large. In this case, formula (27) reduces to

u(x,t) = \int_{-\infty}^{\infty}\left[G(x,t;\xi,0)\,u_\tau(\xi,0) - G_\tau(x,t;\xi,0)\,u(\xi,0)\right]d\xi. \qquad (28)
This formula can be simplified by using the explicit form of G. Using (26) and the fact that

H'(\tau-a) = \delta(\tau-a),

we obtain

G_\tau(x,t;\xi,\tau) = -\frac{1}{2}\left[\delta\!\left((x-\xi)+c(t-\tau)\right) + \delta\!\left((x-\xi)-c(t-\tau)\right)\right].

Using the definitions of the Dirac delta function δ and of H, (28) becomes

u(x,t) = \frac{1}{2}\int_{-\infty}^{\infty}\left[\delta(x-\xi-ct)+\delta(x-\xi+ct)\right]f(\xi)\,d\xi + \frac{1}{2c}\int_{-\infty}^{\infty}\left[H(x-\xi+ct)-H(x-\xi-ct)\right]g(\xi)\,d\xi
= \frac{1}{2}\left[f(x+ct)+f(x-ct)\right] + \frac{1}{2c}\int_{x-ct}^{x+ct} g(\xi)\,d\xi,

which is D'Alembert's formula (see Module 6, Eq. (11)).
Practice Problems

1. Use the Green's function method to solve the IBVP:

u_{tt} = u_{xx}, \quad -\infty<x<\infty,\ t>0,
u(x,t),\ u_x(x,t)\to 0 \text{ as } x\to\pm\infty,\ t>0,
u(x,0) = 0, \quad u_t(x,0) = x, \quad -\infty<x<\infty.

2. Use the Green's function method to solve the IBVP:

u_{tt} = u_{xx} + f(x,t), \quad -\infty<x<\infty,\ t>0,
u(x,t),\ u_x(x,t)\to 0 \text{ as } x\to\pm\infty,\ t>0,
u(x,0) = u_t(x,0) = 0, \quad -\infty<x<\infty,

where

f(x,t) = \begin{cases} t, & -1<x<1,\\ 0, & \text{otherwise}.\end{cases}
Lecture 3 The Heat Equation

Consider the heat equation

u_t - \nabla^2 u = 0, \quad (x,t)\in Q = \Omega\times(0,T], \qquad (1)

with the initial condition

u(x,0) = f(x), \quad x\in\Omega, \qquad (2)

and the BC

\left.\left(\alpha u + \beta\frac{\partial u}{\partial n}\right)\right|_{\partial\Omega} = B(x,t), \quad 0<t\le T. \qquad (3)

The above problem can be treated in the same way as the wave equation discussed in the previous lecture. Let Q be the cylindrical region in (x, t)-space obtained by extending the region Ω parallel to itself from t = 0 to t = T (cf. Fig. 9.1). The lateral boundary of Q is denoted by ∂Q_x. The two caps of the cylinder, which are portions of the planes t = 0 and t = T, are denoted by ∂Q₀ and ∂Q_T, respectively.

The boundary conditions for u(x, t) are assigned on ∂Q_x. The exterior unit normal n to ∂Q has the form n = (n_x, 0) on ∂Q_x, where n_x is the exterior unit normal to ∂Ω. On ∂Q₀, n has the form n = (0, −1), and on ∂Q_T, it has the form n = (0, 1).
As a consequence of the divergence theorem, we have the following integral identity:

\int_Q\left[w\left(u_t-\nabla^2 u\right) - u\left(-w_t-\nabla^2 w\right)\right]d\bar x = \int_Q \bar\nabla\cdot\left(-w\nabla u + u\nabla w,\ uw\right)d\bar x \qquad (4)
= \int_{\partial Q_x}\left(-w\frac{\partial u}{\partial n} + u\frac{\partial w}{\partial n}\right)ds + \int_{\partial Q_T} uw\,dx - \int_{\partial Q_0} uw\,dx, \qquad (5)

where \bar\nabla = (\nabla, \partial/\partial t) is the gradient operator in space-time.

Observe that the operator \partial/\partial t - \nabla^2 that occurs in the heat equation (1) is not self-adjoint, as was the case for the Laplace and wave equations. The adjoint of \partial/\partial t - \nabla^2 is -\partial/\partial t - \nabla^2. With this choice of adjoint operator, w(u_t - \nabla^2 u) - u(-w_t - \nabla^2 w) is a divergence expression.

Let w(x, t) be a solution of

-w_t - \nabla^2 w = \delta(x-\xi)\delta(t-\tau), \quad \xi\in\Omega,\ 0<\tau<T, \qquad (6)
with the end condition
w(x, T ) = 0 (7)
and the BC
αw + β ∂w/∂n = 0 on ∂Ω. (8)
As before, we obtain from (4)

u(ξ, τ) = ∫_{∂Q0} f G dx − ∫_{S1} (1/α) B ∂G/∂n ds + ∫_{S2∪S3} (1/β) B G ds, (9)

where we have set w(x, t) = G(x, t; ξ, τ). G(x, t; ξ, τ) is the Green's function for
the initial and boundary value problem (1)-(3). Thus the Green's function G(x, t; ξ, τ)
satisfies the equation
−Gt − ∇²G = δ(x − ξ)δ(t − τ), x, ξ ∈ Ω, t < T, 0 < τ < T, (10)
with the end condition
G(x, T ; ξ, τ) = 0 (11)
and the BC
αG + β ∂G/∂n = 0 on ∂Ω, t < T. (12)
The equation (10) satisfied by the Green's function G is a backward heat equation. Since this problem is solved backwards in time, starting from the end condition (11) at t = T, the initial and boundary value problem (10)-(12) for G is well posed. Once G is determined, all the terms on the right side of (9) are known, and the solution u(x, t) of the initial and boundary value problem (1)-(3) is completely determined.
Consider the following IBVP:
ut = α2uxx, 0 < x < L, t > 0, (13)
u(0, t) = 0, u(L, t) = 0, t > 0 (14)
u(x, 0) = f(x), 0 < x < L. (15)
The method of separation of variables yields

u(x, t) = Σ_{n=1}^∞ cn e^{−α²(nπ/L)²t} sin(nπx/L)
        = Σ_{n=1}^∞ [ (2/L) ∫_0^L f(ξ) sin(nπξ/L) dξ ] e^{−α²(nπ/L)²t} sin(nπx/L)
        = ∫_0^L f(ξ) [ Σ_{n=1}^∞ (2/L) sin(nπx/L) sin(nπξ/L) e^{−α²(nπ/L)²t} ] dξ.
Define the Green’s function of this problem by
G(x, t; ξ, 0) =
∞∑n=1
2
Lsin
nπx
Lsin
nπξ
Le−α2(nπ/L)2t. (16)
Then the solution of the IBVP can be expressed in the form
u(x, t) =
∫ L
0G(x, t; ξ, 0)f(ξ)dξ. (17)
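Formula (17) lends itself to direct numerical evaluation once the series (16) is truncated. The sketch below is an illustration of ours, not part of the text; the function names and the test data f(x) = sin(πx) are our own choices. It sums N terms of the kernel and applies the trapezoidal rule; for this f with α = L = 1 the exact solution is e^{−π²t} sin(πx).

```python
import numpy as np

def green(x, t, xi, alpha=1.0, L=1.0, N=50):
    """Partial sum of the Green's function series (16)."""
    n = np.arange(1, N + 1)[:, None]   # modes n = 1, ..., N
    return np.sum((2.0 / L) * np.sin(n * np.pi * x / L)
                  * np.sin(n * np.pi * xi / L)
                  * np.exp(-(alpha * n * np.pi / L) ** 2 * t), axis=0)

def solve_ibvp(f, x, t, alpha=1.0, L=1.0, N=50, M=401):
    """Approximate u(x, t) = ∫_0^L G(x, t; ξ, 0) f(ξ) dξ by the trapezoidal rule."""
    xi = np.linspace(0.0, L, M)
    vals = green(x, t, xi, alpha, L, N) * f(xi)
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(xi))

# Compare with the exact solution for f(x) = sin(pi x)
u_num = solve_ibvp(lambda s: np.sin(np.pi * s), x=0.3, t=0.1)
u_exact = np.exp(-np.pi ** 2 * 0.1) * np.sin(np.pi * 0.3)
```

Because the exponential factor decays rapidly in n, a modest truncation N suffices for any fixed t > 0; only small times require many terms.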
EXAMPLE 1. Consider the following IBVP:
ut = uxx, 0 < x < 1, t > 0,
u(0, t) = 0, u(1, t) = 0, t > 0
u(x, 0) = x, 0 < x < 1.
Solution. Here α² = 1, L = 1 and f(x) = x. So

G(x, t; ξ, 0) = Σ_{n=1}^∞ 2 sin(nπx) sin(nπξ) e^{−n²π²t}. (18)

Thus,

u(x, t) = ∫_0^1 G(x, t; ξ, 0) ξ dξ (19)
        = Σ_{n=1}^∞ 2 sin(nπx) [ ∫_0^1 ξ sin(nπξ) dξ ] e^{−n²π²t}. (20)

Integrating by parts, we notice that

∫_0^1 ξ sin(nπξ) dξ = [ −(ξ/(nπ)) cos(nπξ) ]_0^1 + ∫_0^1 (1/(nπ)) cos(nπξ) dξ = (−1)^{n+1}/(nπ).

Thus, the solution is given by

u(x, t) = Σ_{n=1}^∞ (2/(nπ)) (−1)^{n+1} e^{−n²π²t} sin(nπx). (21)
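The closed form (21) can be cross-checked against direct quadrature of (19) with the truncated kernel (18). This numerical sketch (ours, not the text's) evaluates both and confirms that they agree:

```python
import numpy as np

def u_series(x, t, N=200):
    """Partial sum of the series solution (21)."""
    n = np.arange(1, N + 1)
    return np.sum((2.0 / (n * np.pi)) * (-1.0) ** (n + 1)
                  * np.exp(-(n * np.pi) ** 2 * t) * np.sin(n * np.pi * x))

def u_quadrature(x, t, N=200, M=2001):
    """Trapezoidal approximation of (19), with the N-term kernel (18)."""
    xi = np.linspace(0.0, 1.0, M)
    n = np.arange(1, N + 1)[:, None]
    G = np.sum(2.0 * np.sin(n * np.pi * x) * np.sin(n * np.pi * xi)
               * np.exp(-(n * np.pi) ** 2 * t), axis=0)
    vals = G * xi
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(xi))

# Both evaluations of u(1/2, 1/10) agree to quadrature accuracy
```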
Practice Problems
1. Use the Green’s function method to solve IBVP:
ut = uxx, 0 < x < 1, t > 0,
ux(0, t) = 0, ux(1, t) = 0, t > 0,
u(x, 0) = x, 0 < x < 1.
2. Use the Green’s function method to solve IBVP:
ut = 4uxx, 0 < x < 1, t > 0,
u(0, t) = 0, u(1, t) = 0, t > 0,
u(x, 0) = 1, 0 < x < 1.