
PRINCIPLES OF MATHEMATICAL ANALYSIS.

WALTER RUDIN

10. Integration of Differential Forms

1. Let $H$ be a compact convex set in $R^k$ with nonempty interior. Let $f \in C(H)$, put $f(\mathbf{x}) = 0$ in the complement of $H$, and define $\int_H f$ as in Definition 10.3. Prove that $\int_H f$ is independent of the order in which the $k$ integrations are carried out. Hint: Approximate $f$ by functions that are continuous on $R^k$ and whose supports are in $H$, as was done in Example 10.4.

2. For $i = 1, 2, 3, \dots$, let $\varphi_i \in C(R^1)$ have support in $(2^{-i}, 2^{1-i})$, such that $\int \varphi_i = 1$. Put
\[ f(x,y) = \sum_{i=1}^{\infty} \left[ \varphi_i(x) - \varphi_{i+1}(x) \right] \varphi_i(y). \]
Then $f$ has compact support in $R^2$, $f$ is continuous except at $(0,0)$, and
\[ \int f(x,y)\,dx\,dy = 0 \quad\text{but}\quad \int f(x,y)\,dy\,dx = 1. \]
Observe that $f$ is unbounded in every neighbourhood of $(0,0)$.

3. (a) If $F$ is as in Theorem 10.7, put $A = F'(0)$, $F_1(\mathbf{x}) = A^{-1}F(\mathbf{x})$. Then $F_1'(0) = I$. Show that
\[ F_1(\mathbf{x}) = G_n \circ G_{n-1} \circ \cdots \circ G_1(\mathbf{x}) \]
in some neighbourhood of $0$, for certain primitive mappings $G_1, \dots, G_n$. This gives another version of Theorem 10.7:
\[ F(\mathbf{x}) = F'(0)\, G_n \circ G_{n-1} \circ \cdots \circ G_1(\mathbf{x}). \]

(b) Prove that the mapping $(x,y) \mapsto (y,x)$ of $R^2$ onto $R^2$ is not the composition of any two primitive mappings, in any neighbourhood of the origin. (This shows that the flips $B_i$ cannot be omitted from the statement of Theorem 10.7.)

4. For $(x,y) \in R^2$, define
\[ F(x,y) = (e^x \cos y - 1,\ e^x \sin y). \]

May 20, 2005. Solutions by Erin P. J. Pearse.


• Prove that $F = G_2 \circ G_1$, where
\[ G_1(x,y) = (e^x \cos y - 1,\ y), \qquad G_2(u,v) = (u,\ (1+u)\tan v), \]
are primitive in some neighbourhood of $(0,0)$.

\[ G_2 \circ G_1(x,y) = G_2(e^x \cos y - 1,\ y) = \left( e^x \cos y - 1,\ (e^x \cos y)\,\frac{\sin y}{\cos y} \right) = (e^x \cos y - 1,\ e^x \sin y) = F(x,y). \]

$G_1(x,y) = (g_1(x,y),\ y)$ and $G_2(u,v) = (u,\ g_2(u,v))$ are clearly primitive.

• Compute the Jacobians of $G_1, G_2, F$ at $(0,0)$.

\[ J_{G_1}(0) = \det G_1'(\mathbf{x})\Big|_{\mathbf{x}=0} = \begin{vmatrix} e^x \cos y & -e^x \sin y \\ 0 & 1 \end{vmatrix}_{(x,y)=(0,0)} = e^x \cos y\,\Big|_{(x,y)=(0,0)} = 1 \cdot 1 = 1. \]

\[ J_{G_2}(0) = \begin{vmatrix} 1 & 0 \\ \tan v & (1+u)\sec^2 v \end{vmatrix}_{(u,v)=(0,0)} = (1+u)\sec^2 v\,\Big|_{(u,v)=(0,0)} = 1 \cdot 1^2 = 1. \]

\[ J_F(0) = \begin{vmatrix} e^x \cos y & -e^x \sin y \\ e^x \sin y & e^x \cos y \end{vmatrix}_{(x,y)=(0,0)} = e^{2x}\left( \cos^2 y + \sin^2 y \right)\Big|_{(x,y)=(0,0)} = e^{2x}\,\Big|_{(x,y)=(0,0)} = 1. \]

• Define $H_2(x,y) = (x,\ e^x \sin y)$ and find $H_1(u,v) = (h(u,v),\ v)$ so that $F = H_1 \circ H_2$ in some neighbourhood of $(0,0)$.

We have
\[ H_1 \circ H_2(x,y) = H_1(x,\ e^x \sin y) = (h(x, e^x \sin y),\ e^x \sin y) = (e^x \cos y - 1,\ e^x \sin y) \]
for $h(u,v) = e^u \cos\left( \arcsin(e^{-u}v) \right) - 1$, valid in the neighbourhood $R \times \left( -\frac{\pi}{2}, \frac{\pi}{2} \right)$ of $(0,0)$. Indeed, for $(u,v) = (x,\ e^x \sin y)$,
\[ e^u \cos\left( \arcsin(e^{-u}v) \right) - 1 = e^x \cos\left( \arcsin(e^{-x} e^x \sin y) \right) - 1 = e^x \cos(\arcsin(\sin y)) - 1 = e^x \cos y - 1. \]
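The factorizations above are easy to sanity-check numerically. The following sketch (not part of the original solution; all function names are ad hoc) verifies $F = G_2 \circ G_1 = H_1 \circ H_2$ near the origin and estimates the Jacobian determinants by central differences.

```python
import math

def F(x, y):
    return (math.exp(x) * math.cos(y) - 1.0, math.exp(x) * math.sin(y))

def G1(x, y):
    return (math.exp(x) * math.cos(y) - 1.0, y)

def G2(u, v):
    return (u, (1.0 + u) * math.tan(v))

def H2(x, y):
    return (x, math.exp(x) * math.sin(y))

def H1(u, v):
    return (math.exp(u) * math.cos(math.asin(math.exp(-u) * v)) - 1.0, v)

# F = G2∘G1 = H1∘H2 at a few points near the origin.
for (x, y) in [(0.1, 0.2), (-0.3, 0.05), (0.0, 0.0)]:
    fx, fy = F(x, y)
    g = G2(*G1(x, y))
    h = H1(*H2(x, y))
    assert abs(g[0] - fx) < 1e-12 and abs(g[1] - fy) < 1e-12
    assert abs(h[0] - fx) < 1e-12 and abs(h[1] - fy) < 1e-12

# Central-difference Jacobian determinant; should be 1 at the origin.
def jac_det(f, x, y, eps=1e-6):
    fx1, fx0 = f(x + eps, y), f(x - eps, y)
    fy1, fy0 = f(x, y + eps), f(x, y - eps)
    a = (fx1[0] - fx0[0]) / (2 * eps); b = (fy1[0] - fy0[0]) / (2 * eps)
    c = (fx1[1] - fx0[1]) / (2 * eps); d = (fy1[1] - fy0[1]) / (2 * eps)
    return a * d - b * c

for f in (F, G1, lambda x, y: G2(*G1(x, y))):
    assert abs(jac_det(f, 0.0, 0.0) - 1.0) < 1e-6
```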

5. Formulate and prove an analogue of Theorem 10.8, in which $K$ is a compact subset of an arbitrary metric space. (Replace the functions $\varphi_i$ that occur in the proof of Theorem 10.8 by functions of the type constructed in Exercise 22 of Chap. 4.)


6. Strengthen the conclusion of Theorem 10.8 by showing that the functions $\psi_i$ can be made differentiable, even smooth (infinitely differentiable). (Use Exercise 1 of Chap. 8 in the construction of the auxiliary functions $\varphi_i$.)

7. (a) Show that the simplex $Q^k$ is the smallest convex subset of $R^k$ that contains the points $0 = e_0, e_1, \dots, e_k$.

We first show that $Q^k$ is the convex hull of the points $0, e_1, \dots, e_k$, i.e.,
\[ \mathbf{x} \in Q^k \iff \mathbf{x} = \sum_{i=0}^{k} c_i e_i, \quad\text{with } 0 \le c_i \le 1,\ \sum_{i=0}^{k} c_i = 1. \]

($\Rightarrow$) Consider $\mathbf{x} = (x_1, \dots, x_k) \in Q^k$. Let $c_i = x_i$ for $i = 1, \dots, k$ and define $c_0 = 1 - \sum_{i=1}^{k} x_i$. Then $c_i \in [0,1]$ for $i = 0, 1, \dots, k$ and $\sum_{i=0}^{k} c_i = 1$. Moreover,
\[ \sum_{i=0}^{k} c_i e_i = \Big( 1 - \sum_{i=1}^{k} x_i \Big) e_0 + \sum_{i=1}^{k} c_i e_i = \Big( 1 - \sum_{i=1}^{k} x_i \Big)\, 0 + \sum_{i=1}^{k} x_i e_i = (x_1, \dots, x_k) = \mathbf{x}. \]

($\Leftarrow$) Now given $\mathbf{x}$ as a convex combination $\mathbf{x} = \sum_{i=0}^{k} c_i e_i$, write $\mathbf{x} = (c_1, \dots, c_k)$. Then $0 \le c_i \le 1$ and
\[ \sum_{i=0}^{k} c_i = 1 \implies \sum_{i=1}^{k} c_i \le 1, \]
so $\mathbf{x} \in Q^k$.

Now if $K$ is a convex set containing the points $0, e_1, \dots, e_k$, convexity implies it must also contain all convex combinations $\mathbf{x} = \sum_{i=0}^{k} c_i e_i$, i.e., it must contain all of $Q^k$.

(b) Show that affine mappings take convex sets to convex sets.

An affine mapping $T$ may always be represented as the composition of a linear mapping $S$ followed by a translation $B$. Translations (as congruences) obviously preserve convexity, so it suffices to show that linear mappings do. Let $S$ be a linear mapping. If we have a convex set $K$ with $\mathbf{u}, \mathbf{v} \in K$, then any point between $\mathbf{u}$ and $\mathbf{v}$ is $(1-\lambda)\mathbf{u} + \lambda\mathbf{v}$ for some $\lambda \in [0,1]$, and the linearity of $S$ gives
\[ S\left( (1-\lambda)\mathbf{u} + \lambda\mathbf{v} \right) = (1-\lambda)S(\mathbf{u}) + \lambda S(\mathbf{v}), \]
which shows that any point between $\mathbf{u}$ and $\mathbf{v}$ gets mapped to a point between $S(\mathbf{u})$ and $S(\mathbf{v})$, i.e., convexity is preserved.


8. Let $H$ be the parallelogram in $R^2$ whose vertices are $(1,1), (3,2), (4,5), (2,4)$.

• Find the affine map $T$ which sends $(0,0)$ to $(1,1)$, $(1,0)$ to $(3,2)$, and $(0,1)$ to $(2,4)$.

Since $T = B \circ S$, where $S$ is linear and $B$ is a translation, let $B(\mathbf{x}) = \mathbf{x} + (1,1)$ and find $S$ which takes $I^2$ to the parallelogram with vertices $(0,0), (2,1), (3,4), (1,3)$. Once we fix a basis (let's use the standard basis in $R^k$), then $S$ corresponds to a unique matrix $A$, and we can define $S$ in terms of its action on the basis of $R^2$:
\[ A = \begin{bmatrix} | & | \\ S(e_1) & S(e_2) \\ | & | \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix}. \]
Then $S(e_1) = S(1,0) = (2,1)^T$ and $S(e_2) = S(0,1) = (1,3)^T$, as you can check. Now for $\mathbf{b} = (1,1)$, we have
\[ T(\mathbf{x}) = B \circ S(\mathbf{x}) = A\mathbf{x} + \mathbf{b} = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2x_1 + x_2 + 1 \\ x_1 + 3x_2 + 1 \end{bmatrix}. \]

• Show that $J_T = 5$.

Let $T_1(x_1,x_2) = 2x_1 + x_2 + 1$ and $T_2(x_1,x_2) = x_1 + 3x_2 + 1$. Then
\[ J_T = \begin{vmatrix} \frac{\partial T_1}{\partial x_1} & \frac{\partial T_1}{\partial x_2} \\[4pt] \frac{\partial T_2}{\partial x_1} & \frac{\partial T_2}{\partial x_2} \end{vmatrix} = \begin{vmatrix} 2 & 1 \\ 1 & 3 \end{vmatrix} = 6 - 1 = 5. \]

• Use $T$ to convert the integral
\[ \alpha = \int_H e^{x-y}\,dx\,dy \]
to an integral over $I^2$ and thus compute $\alpha$.

$T$ satisfies all requirements of Thm 10.9, so we have
\[ \begin{aligned}
\alpha = \int_H e^{x-y}\,dx\,dy
&= \int_{I^2} e^{(2x_1+x_2+1)-(x_1+3x_2+1)}\,|5|\,d\mathbf{x} \\
&= 5 \int_0^1 \int_0^1 e^{x_1 - 2x_2}\,dx_1\,dx_2 \\
&= 5 \left( \int_0^1 e^{x_1}\,dx_1 \right) \left( \int_0^1 e^{-2x_2}\,dx_2 \right) \\
&= 5\,\big[ e^{x_1} \big]_0^1 \left[ -\tfrac{1}{2} e^{-2x_2} \right]_0^1 \\
&= 5\,(e - 1)\left( -\tfrac{1}{2}e^{-2} + \tfrac{1}{2} \right) \\
&= \tfrac{5}{2}(e - 1)(1 - e^{-2}) = \tfrac{5}{2}\left( e - 1 - e^{-1} + e^{-2} \right).
\end{aligned} \]
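The closed form above can be checked against a direct quadrature of the transformed integral. This is a quick numerical sanity check, not part of the original solution:

```python
import math

# Closed form from the solution: α = (5/2)(e − 1)(1 − e⁻²).
alpha_exact = 2.5 * (math.e - 1.0) * (1.0 - math.exp(-2.0))

# Midpoint Riemann sum of 5 ∫₀¹∫₀¹ e^{x₁ − 2x₂} dx₁ dx₂.
n = 400
h = 1.0 / n
alpha_num = 5.0 * sum(
    math.exp((i + 0.5) * h - 2.0 * (j + 0.5) * h)
    for i in range(n) for j in range(n)
) * h * h

assert abs(alpha_num - alpha_exact) < 1e-4
```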


9. Define $(x,y) = T(r,\theta)$ on the rectangle
\[ 0 \le r \le a, \qquad 0 \le \theta \le 2\pi \]
by the equations $x = r\cos\theta$, $y = r\sin\theta$. Show that $T$ maps this rectangle onto the closed disc $D$ with center at $(0,0)$ and radius $a$, that $T$ is one-to-one in the interior of the rectangle, and that $J_T(r,\theta) = r$. If $f \in C(D)$, prove the formula for integration in polar coordinates:
\[ \int_D f(x,y)\,dx\,dy = \int_0^a \int_0^{2\pi} f(T(r,\theta))\,r\,d\theta\,dr. \]
Hint: Let $D_0$ be the interior of $D$, minus the segment from $(0,0)$ to $(0,a)$. As it stands, Theorem 10.9 applies to continuous functions $f$ whose support lies in $D_0$. To remove this restriction, proceed as in Example 10.4.


12. Let $I^k$ be the unit cube and $Q^k$ the standard simplex in $R^k$; i.e.,
\[ I^k = \{\mathbf{u} = (u_1, \dots, u_k) : 0 \le u_i \le 1\ \forall i\}, \qquad Q^k = \{\mathbf{x} = (x_1, \dots, x_k) : x_i \ge 0,\ \textstyle\sum x_i \le 1\}. \]
Define $\mathbf{x} = T(\mathbf{u})$ by
\[ \begin{aligned}
x_1 &= u_1 \\
x_2 &= (1-u_1)u_2 \\
&\ \ \vdots \\
x_k &= (1-u_1)\cdots(1-u_{k-1})u_k.
\end{aligned} \]

Figure 1. The transform $T$, "in slow motion": the unit cube $I^k$ and the standard simplex $Q^k$.

• Show that
\[ \sum_{i=1}^{k} x_i = 1 - \prod_{i=1}^{k} (1 - u_i). \]

This computation is similar to (28)–(30) in Partitions of Unity, p. 251. We argue by induction: the basis step $x_1 = u_1$ is given; assume
\[ \sum_{i=1}^{j} x_i = 1 - \prod_{i=1}^{j} (1 - u_i), \quad\text{for some } j < k. \]
Using this assumption immediately, we can write
\[ \begin{aligned}
\sum_{i=1}^{j+1} x_i &= \sum_{i=1}^{j} x_i + x_{j+1} \\
&= \Big( 1 - \prod_{i=1}^{j}(1-u_i) \Big) + (1-u_1)\cdots(1-u_j)\,u_{j+1} \\
&= 1 - \Big( (1-u_1)\cdots(1-u_j) - (1-u_1)\cdots(1-u_j)\,u_{j+1} \Big) \\
&= 1 - (1-u_1)\cdots(1-u_j)(1-u_{j+1}) \\
&= 1 - \prod_{i=1}^{j+1}(1-u_i).
\end{aligned} \]


• Show that $T$ maps $I^k$ onto $Q^k$, that $T$ is 1-1 in the interior of $I^k$, and that its inverse $S$ is defined in the interior of $Q^k$ by $u_1 = x_1$ and
\[ u_i = \frac{x_i}{1 - x_1 - \cdots - x_{i-1}} \]
for $i = 2, \dots, k$.

Note that $T$ is a continuous map (actually, smooth) because each of its component functions is a polynomial.

First, we show that $T$ maps the boundary of $I^k$ into the boundary of $Q^k$, and conversely that every boundary point of $Q^k$ is hit. Injectivity and surjectivity for the interior will be given by the existence of the inverse function.

Let $\mathbf{u} \in \partial I^k$. Then for at least one $u_j$, either $u_j = 0$ or $u_j = 1$. If $u_j = 0$, then $x_j = 0$, so that $\mathbf{x} \in E_j \subseteq \partial Q^k$. Meanwhile, $u_j = 1$ implies $\sum x_i = 1 - \prod(1-u_i) = 1 - 0 = 1$, which puts $\mathbf{x} \in E_0 \subseteq \partial Q^k$. Therefore, $T(\partial I^k) \subseteq \partial Q^k$.

Let $\mathbf{x} \in \partial Q^k$. Then either $x_j = 0$ for some $1 \le j \le k$ (because $\mathbf{x} \in E_j$), or else $\sum x_j = 1$ (because $\mathbf{x} \in E_0$), or both.

case (i) Let $x_j = 0$, so $\mathbf{x} = (x_1, \dots, 0, \dots, x_k)$, and take $\mathbf{x}_\varepsilon = (x_1, \dots, \varepsilon, \dots, x_k)$, so that $\mathbf{x}_\varepsilon \in (Q^k)^\circ$. We have
\[ T^{-1}(\mathbf{x}_\varepsilon) = \left( x_1,\ \frac{x_2}{1-x_1},\ \frac{x_3}{1-x_1-x_2},\ \dots,\ \frac{\varepsilon}{1-x_1-\cdots-x_{j-1}},\ \frac{x_{j+1}}{1-x_1-\cdots-x_{j-1}-\varepsilon},\ \dots,\ \frac{x_k}{1-x_1-\cdots-x_{k-1}} \right) \]
\[ \xrightarrow{\ \varepsilon \to 0\ } \mathbf{u} = \left( x_1,\ \frac{x_2}{1-x_1},\ \frac{x_3}{1-x_1-x_2},\ \dots,\ 0,\ \frac{x_{j+1}}{1-x_1-\cdots-x_{j-1}},\ \dots,\ \frac{x_k}{1-x_1-\cdots-x_{k-1}} \right), \]
where the last step uses the continuity guaranteed by the Inverse Function Theorem. Now check that for this $\mathbf{u}$ we do in fact have $T(\mathbf{u}) = \mathbf{x}$: the products in the definition of $T$ telescope,
\[ (1-u_1)\left( 1-\frac{x_2}{1-x_1} \right)\cdots\left( 1-\frac{x_{i-1}}{1-x_1-\cdots-x_{i-2}} \right) = 1 - x_1 - \cdots - x_{i-1}, \]
so
\[ T(\mathbf{u}) = \left( x_1,\ (1-x_1)\frac{x_2}{1-x_1},\ \dots,\ 0,\ (1-x_1-\cdots-x_{j-1})\frac{x_{j+1}}{1-x_1-\cdots-x_{j-1}},\ \dots \right) = (x_1, \dots, x_{j-1}, 0, x_{j+1}, \dots, x_k) = \mathbf{x}. \]
Since $u_j = 0$, we have $\mathbf{u} \in \partial I^k$, so $\mathbf{x} \in T(\partial I^k)$.


case (ii) $\sum_{i=1}^{k} x_i = 1$. Then by the previous part, $1 - \prod_{i=1}^{k}(1-u_i) = 1$, so $\prod_{i=1}^{k}(1-u_i) = 0$. This can only happen if one of the factors is $0$, i.e., if one of the $u_i$ is $1$. This puts $\mathbf{u} \in \partial I^k$.

For the interior, we have an inverse:
\[ \begin{aligned}
S \circ T(\mathbf{u}) &= S\big( u_1,\ (1-u_1)u_2,\ \dots,\ (1-u_1)(1-u_2)\cdots u_k \big) \\
&= \left( u_1,\ \frac{(1-u_1)u_2}{1-u_1},\ \dots,\ \frac{(1-u_1)(1-u_2)\cdots(1-u_{k-1})u_k}{1 - u_1 - (1-u_1)u_2 - \cdots - (1-u_1)(1-u_2)\cdots u_{k-1}} \right) \\
&= (u_1, u_2, \dots, u_k) = \mathbf{u},
\end{aligned} \]
where the last equality follows by successive distributions against the last factor:
\[ \big[ (1-u_1)(1-u_2)\cdots(1-u_{k-2}) \big](1-u_{k-1}) = (1-u_1)\cdots(1-u_{k-2}) - (1-u_1)\cdots(1-u_{k-2})u_{k-1}. \]

Injectivity may also be shown directly as follows: let $\mathbf{u}$ and $\mathbf{v}$ be distinct points of $(I^k)^\circ$, with $T(\mathbf{u}) = \mathbf{x}$ and $T(\mathbf{v}) = \mathbf{y}$. We want to show $\mathbf{x} \neq \mathbf{y}$. From the previous part, we have $T^{-1}(\partial Q^k) \subseteq \partial I^k$, so $\mathbf{x}, \mathbf{y} \in (Q^k)^\circ$. Let the $j$th coordinate be the first one in which $\mathbf{u}$ differs from $\mathbf{v}$, so that $u_i = v_i$ for $i < j$, but $u_j \neq v_j$. Then
\[ x_j = (1-u_1)\cdots(1-u_{j-1})u_j = (1-v_1)\cdots(1-v_{j-1})u_j \neq (1-v_1)\cdots(1-v_{j-1})v_j = y_j, \]
so $\mathbf{x} \neq \mathbf{y}$.

• Show that
\[ J_T(\mathbf{u}) = (1-u_1)^{k-1}(1-u_2)^{k-2}\cdots(1-u_{k-1}), \]
and
\[ J_S(\mathbf{x}) = \big[ (1-x_1)(1-x_1-x_2)\cdots(1-x_1-\cdots-x_{k-1}) \big]^{-1}. \]

Since $x_i$ depends only on $u_1, \dots, u_i$, the matrix $T'(\mathbf{u})$ is lower triangular:
\[ J_T(\mathbf{u}) = \begin{vmatrix}
1 & 0 & 0 & \cdots & 0 \\
-u_2 & 1-u_1 & 0 & \cdots & 0 \\
-(1-u_2)u_3 & -(1-u_1)u_3 & (1-u_1)(1-u_2) & \cdots & 0 \\
\vdots & \vdots & \vdots & & \vdots \\
-(1-u_2)\cdots u_k & \cdots & \cdots & \cdots & (1-u_1)\cdots(1-u_{k-1})
\end{vmatrix} \]
\[ = 1 \cdot (1-u_1) \cdot (1-u_1)(1-u_2) \cdots (1-u_1)(1-u_2)\cdots(1-u_{k-1}) = (1-u_1)^{k-1}(1-u_2)^{k-2}\cdots(1-u_{k-1}). \]


Likewise, $S'(\mathbf{x})$ is lower triangular, so
\[ J_S(\mathbf{x}) = \begin{vmatrix}
1 & 0 & 0 & \cdots & 0 \\
\frac{x_2}{(1-x_1)^2} & \frac{1}{1-x_1} & 0 & \cdots & 0 \\
\frac{x_3}{(1-x_1-x_2)^2} & \frac{x_3}{(1-x_1-x_2)^2} & \frac{1}{1-x_1-x_2} & \cdots & 0 \\
\vdots & \vdots & \vdots & & \vdots \\
\frac{x_k}{(1-x_1-\cdots-x_{k-1})^2} & \frac{x_k}{(1-x_1-\cdots-x_{k-1})^2} & \frac{x_k}{(1-x_1-\cdots-x_{k-1})^2} & \cdots & \frac{1}{1-x_1-\cdots-x_{k-1}}
\end{vmatrix} \]
\[ = 1 \cdot \frac{1}{1-x_1} \cdot \frac{1}{1-x_1-x_2} \cdots \frac{1}{1-x_1-\cdots-x_{k-1}} = \big[ (1-x_1)(1-x_1-x_2)\cdots(1-x_1-\cdots-x_{k-1}) \big]^{-1}. \]
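The cube-to-simplex map $T$ and its inverse $S$ are simple enough to check mechanically. The sketch below (not part of the original solution) implements both for $k = 3$ and verifies the sum identity and the round trip $S \circ T = \mathrm{id}$:

```python
import random

def T(u):
    # x_i = (1-u_1)...(1-u_{i-1}) u_i, computed with a running product.
    x, prod = [], 1.0
    for ui in u:
        x.append(prod * ui)
        prod *= (1.0 - ui)
    return x

def S(x):
    # u_i = x_i / (1 - x_1 - ... - x_{i-1}), computed with a running remainder.
    u, rem = [], 1.0
    for xi in x:
        u.append(xi / rem)
        rem -= xi
    return u

random.seed(0)
for _ in range(100):
    u = [random.uniform(0.05, 0.95) for _ in range(3)]
    x = T(u)
    # T maps into the open simplex, and Σx_i = 1 − Π(1 − u_i).
    assert all(xi >= 0 for xi in x) and sum(x) < 1.0
    prod = (1 - u[0]) * (1 - u[1]) * (1 - u[2])
    assert abs(sum(x) - (1.0 - prod)) < 1e-12
    # S inverts T in the interior.
    assert all(abs(a - b) < 1e-12 for a, b in zip(S(x), u))
```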


13. Let $r_1, \dots, r_k$ be nonnegative integers, and prove that
\[ \int_{Q^k} x_1^{r_1} \cdots x_k^{r_k}\,d\mathbf{x} = \frac{r_1! \cdots r_k!}{(k + r_1 + \cdots + r_k)!}. \]
Hint: Use Exercise 12, Theorems 10.9 and 8.20. Note that the special case $r_1 = \cdots = r_k = 0$ shows that the volume of $Q^k$ is $1/k!$.

Theorem 8.20 gives that for $\alpha, \beta > 0$ we have
\[ \int_0^1 t^{\alpha-1}(1-t)^{\beta-1}\,dt = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}. \]
Theorem 10.9 says that when $T$ is a 1-1 mapping of an open set $E \subseteq R^k$ into $R^k$ with invertible Jacobian, then
\[ \int_{R^k} f(\mathbf{y})\,d\mathbf{y} = \int_{R^k} f(T(\mathbf{x}))\,|J_T(\mathbf{x})|\,d\mathbf{x} \]
for any $f \in C(E)$. Since we always have $\int_{\mathrm{cl}(E)} f = \int_{\mathrm{int}(E)} f$, we can apply this theorem to the $T$ given in the previous exercise:
\[ \begin{aligned}
\int_{Q^k} x_1^{r_1} \cdots x_k^{r_k}\,d\mathbf{x}
&= \int_{I^k} T_1(\mathbf{u})^{r_1} T_2(\mathbf{u})^{r_2} \cdots T_k(\mathbf{u})^{r_k}\,|J_T(\mathbf{u})|\,d\mathbf{u} \\
&= \int_{I^k} u_1^{r_1} \big[ (1-u_1)u_2 \big]^{r_2} \cdots \big[ (1-u_1)\cdots(1-u_{k-1})u_k \big]^{r_k} \cdot (1-u_1)^{k-1}(1-u_2)^{k-2}\cdots(1-u_{k-1})\,d\mathbf{u} \\
&= \int_{I^k} u_1^{r_1} u_2^{r_2} \cdots u_k^{r_k}\,(1-u_1)^{r_2+\cdots+r_k+k-1}(1-u_2)^{r_3+\cdots+r_k+k-2} \cdots (1-u_{k-1})^{r_k+1}\,d\mathbf{u} \\
&= \left( \int_I u_1^{r_1}(1-u_1)^{r_2+\cdots+r_k+k-1}\,du_1 \right) \left( \int_I u_2^{r_2}(1-u_2)^{r_3+\cdots+r_k+k-2}\,du_2 \right) \cdots \\
&\qquad \cdots \left( \int_I u_{k-1}^{r_{k-1}}(1-u_{k-1})^{r_k+1}\,du_{k-1} \right) \left( \int_I u_k^{r_k}\,du_k \right).
\end{aligned} \]
Now we can use Theorem 8.20 to evaluate each of these, using the respective values
\[ \alpha_j = 1 + r_j, \qquad \beta_j = r_{j+1} + \cdots + r_k + k - j + 1, \]
so that, except for the final factor
\[ \int_I u_k^{r_k}\,du_k = \left[ \frac{u_k^{r_k+1}}{r_k+1} \right]_0^1 = \frac{1}{r_k+1}, \]
the above product of integrals becomes a product of gamma functions:
\[ = \frac{\Gamma(1+r_1)\Gamma(r_2+\cdots+r_k+k)}{\Gamma(1+r_1+r_2+\cdots+r_k+k)} \cdot \frac{\Gamma(1+r_2)\Gamma(r_3+\cdots+r_k+k-1)}{\Gamma(r_2+r_3+\cdots+r_k+k)} \cdots \frac{\Gamma(1+r_{k-1})\Gamma(r_k+2)}{\Gamma(3+r_{k-1}+r_k)} \cdot \frac{1}{r_k+1}; \]
then, making liberal use of $\Gamma(1+r_j) = r_j!$ from Theorem 8.18(b), the adjacent gamma factors cancel and we have
\[ = \frac{r_1! \cdots r_{k-1}!}{(r_1+\cdots+r_k+k)!} \cdot \frac{\Gamma(r_2+\cdots+r_k+k)}{\Gamma(r_2+\cdots+r_k+k)} \cdot \frac{\Gamma(r_3+\cdots+r_k+k-1)}{\Gamma(r_3+\cdots+r_k+k-1)} \cdots \frac{\Gamma(r_k+2)}{r_k+1}. \]


Now, making use of $x\Gamma(x) = \Gamma(x+1)$ from Theorem 8.19(a) to simplify the final factor, we have
\[ \frac{\Gamma(r_k+2)}{r_k+1} = \frac{\Gamma((r_k+1)+1)}{r_k+1} = \frac{(r_k+1)\Gamma(r_k+1)}{r_k+1} = \Gamma(r_k+1) = r_k!, \]
so that the final formula becomes
\[ \int_{Q^k} x_1^{r_1} \cdots x_k^{r_k}\,d\mathbf{x} = \frac{r_1! \cdots r_k!}{(r_1+\cdots+r_k+k)!}. \]
Note: if we take $r_j = 0$ for all $j$, then this formula becomes
\[ \int_{Q^k} 1\,d\mathbf{x} = \frac{0! \cdots 0!}{(0+\cdots+0+k)!} = \frac{1}{k!}, \]
showing that the volume of $Q^k$ is $\frac{1}{k!}$.
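The formula is easy to test in a low-dimensional case. The following check (not part of the original solution) compares the closed form for $k = 2$, $(r_1, r_2) = (1, 2)$, namely $1!\,2!/5! = 1/60$, against a midpoint Riemann sum over the triangle $Q^2$:

```python
import math

r1, r2 = 1, 2
exact = (math.factorial(r1) * math.factorial(r2)
         / math.factorial(2 + r1 + r2))  # = 2/120 = 1/60

# Midpoint Riemann sum of ∫_{Q²} x^{r1} y^{r2} dx dy over
# the triangle {x ≥ 0, y ≥ 0, x + y ≤ 1}.
n = 800
h = 1.0 / n
num = sum(
    ((i + 0.5) * h) ** r1 * ((j + 0.5) * h) ** r2
    for i in range(n) for j in range(n)
    if (i + 0.5) * h + (j + 0.5) * h <= 1.0
) * h * h

assert abs(num - exact) < 1e-3
```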

15. If $\omega$ and $\lambda$ are $k$- and $m$-forms, respectively, prove that $\omega \wedge \lambda = (-1)^{km}\,\lambda \wedge \omega$.

Write the given forms as
\[ \omega = \sum a_{i_1,\dots,i_k}(\mathbf{x})\,dx_{i_1} \wedge \cdots \wedge dx_{i_k} = \sum_I a_I\,dx_I \]
and
\[ \lambda = \sum b_{j_1,\dots,j_m}(\mathbf{x})\,dx_{j_1} \wedge \cdots \wedge dx_{j_m} = \sum_J b_J\,dx_J. \]
Then we have
\[ \omega \wedge \lambda = \sum_{I,J} a_I(\mathbf{x})\,b_J(\mathbf{x})\,dx_I \wedge dx_J \quad\text{and}\quad \lambda \wedge \omega = \sum_{I,J} a_I(\mathbf{x})\,b_J(\mathbf{x})\,dx_J \wedge dx_I, \]
so it suffices to show that for every summand,
\[ dx_I \wedge dx_J = (-1)^{km}\,dx_J \wedge dx_I. \]
Making repeated use of anticommutativity (Eq. (46) on p. 256), each $dx_{j_s}$ moves past the $k$ factors of $dx_I$ at the cost of a factor $(-1)^k$:
\[ \begin{aligned}
dx_I \wedge dx_J &= dx_{i_1} \wedge \cdots \wedge dx_{i_k} \wedge dx_{j_1} \wedge \cdots \wedge dx_{j_m} \\
&= (-1)^k\,dx_{j_1} \wedge dx_{i_1} \wedge \cdots \wedge dx_{i_k} \wedge dx_{j_2} \wedge \cdots \wedge dx_{j_m} \\
&= (-1)^{2k}\,dx_{j_1} \wedge dx_{j_2} \wedge dx_{i_1} \wedge \cdots \wedge dx_{i_k} \wedge dx_{j_3} \wedge \cdots \wedge dx_{j_m} \\
&\ \ \vdots \\
&= (-1)^{mk}\,dx_{j_1} \wedge \cdots \wedge dx_{j_m} \wedge dx_{i_1} \wedge \cdots \wedge dx_{i_k} \\
&= (-1)^{km}\,dx_J \wedge dx_I.
\end{aligned} \]
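The sign rule can be checked combinatorially: represent $dx_I$ by its index tuple and compute the sign of a wedge of basis 1-forms by sorting the concatenated indices with adjacent swaps (each swap contributing $-1$). This small sanity check is not part of the original solution:

```python
def wedge_sign(I, J):
    """Sign of dx_I ∧ dx_J relative to the sorted basis form (0 if it vanishes)."""
    idx = list(I) + list(J)
    if len(set(idx)) != len(idx):
        return 0  # repeated differential: the form is zero
    sign = 1
    for a in range(len(idx)):               # bubble sort, counting swaps
        for b in range(len(idx) - 1 - a):
            if idx[b] > idx[b + 1]:
                idx[b], idx[b + 1] = idx[b + 1], idx[b]
                sign = -sign
    return sign

# dx_I ∧ dx_J = (−1)^{km} dx_J ∧ dx_I for several index sets.
for I, J in [((1,), (2,)), ((1, 3), (2,)), ((2, 5), (1, 4)), ((1, 2, 4), (3, 5))]:
    k, m = len(I), len(J)
    assert wedge_sign(I, J) == (-1) ** (k * m) * wedge_sign(J, I)
```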


16. If $k \ge 2$ and $\sigma = [\mathbf{p}_0, \mathbf{p}_1, \dots, \mathbf{p}_k]$ is an oriented affine $k$-simplex, prove that $\partial^2\sigma = 0$, directly from the definition of the boundary operator $\partial$. Deduce from this that $\partial^2\Psi = 0$ for every chain $\Psi$.

Denote $\sigma_i = [\mathbf{p}_0, \dots, \mathbf{p}_{i-1}, \mathbf{p}_{i+1}, \dots, \mathbf{p}_k]$, and for $i < j$, let $\sigma_{ij}$ be the $(k-2)$-simplex obtained by deleting $\mathbf{p}_i$ and $\mathbf{p}_j$ from $\sigma$. Now we use Eq. (85) to compute
\[ \partial\sigma = \sum_{j=0}^{k} (-1)^j \sigma_j. \]
Applying (85) again to this result, we get
\[ \partial^2\sigma = \partial(\partial\sigma) = \partial\Big( \sum_{j=0}^{k} (-1)^j \sigma_j \Big) = \sum_{j=0}^{k} (-1)^j\,\partial\sigma_j = \sum_{j=0}^{k} (-1)^j \sum_{i=0}^{k-1} (-1)^i \sigma_{ij} = \sum_{j=0}^{k} \sum_{i=0}^{k-1} (-1)^{i+j}\sigma_{ij}. \]

Lemma. For $i < j$, $\sigma_{ij} = \sigma_{j-1,i}$.

Proof. Both correspond to removing the same points $\mathbf{p}_i, \mathbf{p}_j$, just in different orders (because $\mathbf{p}_j$ is the $(j-1)$th vertex of $\sigma_i$); writing $\widehat{\phantom{p}}$ for a deleted vertex,
\[ \begin{aligned}
\sigma = [\mathbf{p}_0, \dots, \mathbf{p}_i, \dots, \mathbf{p}_j, \dots, \mathbf{p}_k] &\mapsto \sigma_j = [\mathbf{p}_0, \dots, \mathbf{p}_i, \dots, \widehat{\mathbf{p}_j}, \dots, \mathbf{p}_k] \mapsto \sigma_{ij} = [\mathbf{p}_0, \dots, \widehat{\mathbf{p}_i}, \dots, \widehat{\mathbf{p}_j}, \dots, \mathbf{p}_k], \\
\sigma = [\mathbf{p}_0, \dots, \mathbf{p}_i, \dots, \mathbf{p}_j, \dots, \mathbf{p}_k] &\mapsto \sigma_i = [\mathbf{p}_0, \dots, \widehat{\mathbf{p}_i}, \dots, \mathbf{p}_j, \dots, \mathbf{p}_k] \mapsto \sigma_{j-1,i} = [\mathbf{p}_0, \dots, \widehat{\mathbf{p}_i}, \dots, \widehat{\mathbf{p}_j}, \dots, \mathbf{p}_k].
\end{aligned} \]

Thus we split the previous sum along the "diagonal":
\[ \partial^2\sigma = \underbrace{\sum_{0 \le j \le i \le k-1} (-1)^{i+j}\sigma_{ij}}_{A} + \underbrace{\sum_{0 \le i < j \le k} (-1)^{i+j}\sigma_{ij}}_{B}. \]

Continuing on with $B$ for a bit, we have
\[ \begin{aligned}
B &= \sum_{0 \le i < j \le k} (-1)^{i+j}\sigma_{j-1,i} &&\text{by the lemma} \\
&= \sum_{j=1}^{k} \sum_{i=0}^{j-1} (-1)^{i+j}\sigma_{j-1,i} &&\text{rewriting the sum} \\
&= \sum_{i=1}^{k} \sum_{j=0}^{i-1} (-1)^{i+j}\sigma_{i-1,j} &&\text{swap dummies } i \leftrightarrow j
\end{aligned} \]


Figure 2. Reindexing the double sum over $i, j$.

\[ \begin{aligned}
&= \sum_{i=0}^{k-1} \sum_{j=0}^{i} (-1)^{i+j+1}\sigma_{i,j} &&\text{reindex } i \mapsto i+1 \\
&= (-1) \sum_{j=0}^{k-1} \sum_{i=j}^{k-1} (-1)^{i+j}\sigma_{i,j} &&\text{see Figure 2} \\
&= -A.
\end{aligned} \]
Whence
\[ \partial^2\sigma = A + B = A - A = 0. \]

In order to see how this works with the actual faces, define $F_k^i : Q^{k-1} \to Q^k$ for $i = 0, \dots, k$ to be the affine map $F_k^i = (e_0, \dots, \widehat{e_i}, \dots, e_k)$, where $\widehat{e_i}$ means omit $e_i$, i.e.,
\[ F_k^i(e_j) = \begin{cases} e_j, & j < i, \\ e_{j+1}, & j \ge i. \end{cases} \]
Then define the $i$th face of $\sigma$ to be $\sigma_i = \sigma \circ F_k^i$.

Figure 3. The mappings $F_k^i$ and $\sigma$, for $k = 2$.

Then, for $k = 2$,
\[ \begin{aligned}
\partial^2\sigma = \partial(\partial\sigma) &= \partial(\sigma_0 - \sigma_1 + \sigma_2) = \partial(\sigma_0) - \partial(\sigma_1) + \partial(\sigma_2) \\
&= (\mathbf{p}_2 - \mathbf{p}_1) - (\mathbf{p}_2 - \mathbf{p}_0) + (\mathbf{p}_1 - \mathbf{p}_0) = 0.
\end{aligned} \]
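The cancellation $\partial^2 = 0$ can also be exercised mechanically by treating a chain as a formal integer combination of ordered vertex tuples. This sketch (not part of the original solution) implements the alternating-sum boundary operator and checks $\partial^2\sigma = 0$:

```python
from collections import defaultdict

def boundary(chain):
    """∂ of a chain, where a chain maps ordered vertex tuples to integer coefficients."""
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]   # delete the ith vertex
            out[face] += (-1) ** i * coeff
    return {s: c for s, c in out.items() if c != 0}

# σ = [p0, p1, p2, p3], a 3-simplex with symbolic vertices: ∂²σ = 0.
sigma = {("p0", "p1", "p2", "p3"): 1}
assert boundary(boundary(sigma)) == {}

# By linearity, ∂² also vanishes on chains.
psi = {("p0", "p1", "p2"): 2, ("p1", "p2", "p3"): -1}
assert boundary(boundary(psi)) == {}
```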


17. Put $J^2 = \tau_1 + \tau_2$, where
\[ \tau_1 = [0, e_1, e_1+e_2], \qquad \tau_2 = -[0, e_2, e_2+e_1]. \]

• Explain why it is reasonable to call $J^2$ the positively oriented unit square in $R^2$. Show that $\partial J^2$ is the sum of 4 oriented affine 1-simplices. Find these.

$\tau_1$ has $\mathbf{p}_0 = 0$, $\mathbf{p}_1 = e_1$, $\mathbf{p}_2 = e_1 + e_2$, with arrows pointing toward higher index. $\tau_2$ has $\mathbf{p}_0 = 0$, $\mathbf{p}_1 = e_2$, $\mathbf{p}_2 = e_2 + e_1$, with arrows pointing toward lower index.

Figure 4. $\tau_1$ and $-\tau_2$.

Figure 5. $\tau_2$ and $\tau_1 + \tau_2$.

So
\[ \begin{aligned}
\partial(\tau_1 + \tau_2) &= \partial[0, e_1, e_1+e_2] - \partial[0, e_2, e_2+e_1] \\
&= [e_1, e_1+e_2] - [0, e_1+e_2] + [0, e_1] - [e_2, e_2+e_1] + [0, e_2+e_1] - [0, e_2] \\
&= [0, e_1] + [e_1, e_1+e_2] + [e_2+e_1, e_2] + [e_2, 0].
\end{aligned} \]
So the image of $\tau_1 + \tau_2$ is the unit square, and its boundary is oriented counterclockwise, i.e., positively.

• What is $\partial(\tau_1 - \tau_2)$?

\[ \begin{aligned}
\partial(\tau_1 - \tau_2) &= \partial[0, e_1, e_1+e_2] + \partial[0, e_2, e_2+e_1] \\
&= [e_1, e_1+e_2] - [0, e_1+e_2] + [0, e_1] + [e_2, e_2+e_1] - [0, e_2+e_1] + [0, e_2] \\
&= [0, e_1] + [e_1, e_1+e_2] - [e_2+e_1, e_2] - [e_2, 0] - 2[0, e_2+e_1].
\end{aligned} \]


20. State conditions under which the formula
\[ \int_\Phi f\,d\omega = \int_{\partial\Phi} f\omega - \int_\Phi (df) \wedge \omega \]
is valid, and show that it generalizes the formula for integration by parts. Hint: $d(f\omega) = (df)\wedge\omega + f\,d\omega$.

By the definition of $d$, we have
\[ d(f\omega) = d(f \wedge \omega) = (df \wedge \omega) + (-1)^0 (f \wedge d\omega) = (df)\wedge\omega + f\,d\omega. \]
Integrating both sides yields
\[ \int_\Phi d(f\omega) = \int_\Phi (df)\wedge\omega + \int_\Phi f\,d\omega. \]
If the conditions of Stokes' Theorem are met, i.e., if $\Phi$ is a $k$-chain of class $C''$ in an open set $V \subseteq R^n$ and $f\omega$ is a $(k-1)$-form of class $C'$ in $V$, then we may apply it to the left side and get
\[ \int_{\partial\Phi} f\omega = \int_\Phi (df)\wedge\omega + \int_\Phi f\,d\omega, \]
which is equivalent to the given formula.

21. Consider the 1-form
\[ \eta = \frac{x\,dy - y\,dx}{x^2+y^2} \quad\text{in } R^2\setminus\{0\}. \]

(a) Show $\int_\gamma \eta = 2\pi \neq 0$ but $d\eta = 0$.

Put $\gamma(t) = (r\cos t,\ r\sin t)$ as in the example. Then
\[ x^2+y^2 \mapsto r^2, \qquad dx \mapsto -r\sin t\,dt, \qquad dy \mapsto r\cos t\,dt, \]
and the integral becomes
\[ \begin{aligned}
\int_\gamma \eta &= \int_0^{2\pi} \frac{-r\sin t}{r^2}(-r\sin t)\,dt + \int_0^{2\pi} \frac{r\cos t}{r^2}(r\cos t)\,dt \\
&= \int_0^{2\pi} \left( \sin^2 t + \cos^2 t \right) dt = \int_0^{2\pi} dt = 2\pi \neq 0.
\end{aligned} \]

However, using $df = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\,dx_i$, we compute
\[ d\eta = d\left( \frac{x\,dy}{x^2+y^2} - \frac{y\,dx}{x^2+y^2} \right) = d\left( \frac{x\,dy}{x^2+y^2} \right) - d\left( \frac{y\,dx}{x^2+y^2} \right) \]


Figure 6. The homotopy from $\Gamma$ to $\gamma$.

\[ \begin{aligned}
&= d\left( \frac{x}{x^2+y^2} \right) \wedge dy + \frac{x}{x^2+y^2}\,d(dy) - d\left( \frac{y}{x^2+y^2} \right) \wedge dx - \frac{y}{x^2+y^2}\,d(dx) \\
&= \left[ \frac{(x^2+y^2)\cdot 1 - x(2x)}{(x^2+y^2)^2}\,dx + \frac{(x^2+y^2)\cdot 0 - x(2y)}{(x^2+y^2)^2}\,dy \right] \wedge dy \\
&\qquad - \left[ \frac{(x^2+y^2)\cdot 0 - y(2x)}{(x^2+y^2)^2}\,dx + \frac{(x^2+y^2)\cdot 1 - y(2y)}{(x^2+y^2)^2}\,dy \right] \wedge dx \\
&= \frac{1}{(x^2+y^2)^2}\Big( (y^2-x^2)\,dx\wedge dy - 2xy\,dy\wedge dy + 2xy\,dx\wedge dx - (x^2-y^2)\,dy\wedge dx \Big) \\
&= \frac{1}{(x^2+y^2)^2}\Big( (y^2-x^2)\,dx\wedge dy + (x^2-y^2)\,dx\wedge dy \Big) \\
&= \frac{1}{(x^2+y^2)^2}\,(y^2 - x^2 + x^2 - y^2)\,dx\wedge dy = 0. 
\end{aligned} \]
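The value $\int_\gamma \eta = 2\pi$ is also easy to confirm by direct numerical integration of the pulled-back 1-form. This check is not part of the original solution:

```python
import math

def winding_integral(x, y, dx, dy, n=100_000):
    """Midpoint-rule integral of (x dy − y dx)/(x² + y²) over t ∈ [0, 2π]."""
    total, h = 0.0, 2 * math.pi / n
    for i in range(n):
        t = (i + 0.5) * h
        total += (x(t) * dy(t) - y(t) * dx(t)) / (x(t) ** 2 + y(t) ** 2) * h
    return total

# γ(t) = (r cos t, r sin t): the integrand reduces to the constant 1.
r = 2.5
val = winding_integral(lambda t: r * math.cos(t), lambda t: r * math.sin(t),
                       lambda t: -r * math.sin(t), lambda t: r * math.cos(t))
assert abs(val - 2 * math.pi) < 1e-9
```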

(b) With $\gamma$ as in (a), let $\Gamma$ be a $C''$ curve in $R^2\setminus\{0\}$, with $\Gamma(0) = \Gamma(2\pi)$, such that the intervals $[\gamma(t), \Gamma(t)]$ do not contain $0$ for any $t \in [0,2\pi]$. Prove that $\int_\Gamma \eta = 2\pi$.

For $0 \le t \le 2\pi$ and $0 \le u \le 1$, define the straight-line homotopy between $\gamma$ and $\Gamma$:
\[ \Phi(t,u) = (1-u)\Gamma(t) + u\gamma(t), \qquad \Phi(t,0) = \Gamma(t), \quad \Phi(t,1) = \gamma(t). \]
Then $\Phi$ is a 2-surface in $R^2\setminus\{0\}$ whose parameter domain is the rectangle $R = [0,2\pi]\times[0,1]$ and whose image does not contain $0$. Now define $s : I \to R^2$ by $s(x) = (1-x)\gamma(0) + x\Gamma(0)$, so $s(I)$ is the segment connecting the base point of $\gamma$ to the base point of $\Gamma$. Also, denote the boundary of $R$ by
\[ \partial R = R_1 + R_2 + R_3 + R_4, \]
where $R_1 = [0, 2\pi e_1]$, $R_2 = [2\pi e_1, 2\pi e_1 + e_2]$, $R_3 = [2\pi e_1 + e_2, e_2]$, $R_4 = [e_2, 0]$. Then
\[ \begin{aligned}
\partial\Phi = \Phi(\partial R) &= \Phi(R_1 + R_2 + R_3 + R_4) \\
&= \Phi(R_1) + \Phi(R_2) + \Phi(R_3) + \Phi(R_4) \\
&= \Gamma + \bar{s} + \bar{\gamma} + s \\
&= \Gamma + \bar{s} - \gamma - \bar{s} = \Gamma - \gamma,
\end{aligned} \]
where $\bar{\gamma}$ denotes the reverse of the path $\gamma$ (i.e., traversed backwards). Then

\[ \begin{aligned}
\int_\Gamma \eta - \int_\gamma \eta = \int_{\Gamma-\gamma} \eta &= \int_{\partial\Phi} \eta &&\partial\Phi = \Gamma - \gamma \\
&= \int_\Phi d\eta &&\text{by Stokes} \\
&= \int_\Phi 0 = 0 &&\text{by (a)}
\end{aligned} \]
shows that
\[ \int_\Gamma \eta = \int_\gamma \eta = 2\pi. \]

(c) Take $\Gamma(t) = (a\cos t,\ b\sin t)$ where $a, b > 0$ are fixed. Use (b) to show
\[ \int_0^{2\pi} \frac{ab}{a^2\cos^2 t + b^2\sin^2 t}\,dt = 2\pi. \]

We have
\[ x(t) = \Gamma_1(t) = a\cos t, \qquad y(t) = \Gamma_2(t) = b\sin t, \]
and
\[ dx = x'(t)\,dt = -a\sin t\,dt, \qquad dy = y'(t)\,dt = b\cos t\,dt, \]
so
\[ \frac{x\,dy - y\,dx}{x^2+y^2} = \frac{ab\cos^2 t + ab\sin^2 t}{a^2\cos^2 t + b^2\sin^2 t}\,dt = \frac{ab}{a^2\cos^2 t + b^2\sin^2 t}\,dt, \]
and the integral becomes
\[ 2\pi = \int_\Gamma \eta = \int_0^{2\pi} \frac{ab\cos^2 t\,dt + ab\sin^2 t\,dt}{a^2\cos^2 t + b^2\sin^2 t} = \int_0^{2\pi} \frac{ab}{a^2\cos^2 t + b^2\sin^2 t}\,dt. \]
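The resulting identity is independent of $a$ and $b$, which makes it a convenient numerical test. The following check (not part of the original solution) evaluates the integral by a midpoint rule for several axis ratios:

```python
import math

def ellipse_integral(a, b, n=50_000):
    """Midpoint-rule value of ∫₀^{2π} ab/(a²cos²t + b²sin²t) dt."""
    h = 2 * math.pi / n
    return sum(
        a * b / (a ** 2 * math.cos(t) ** 2 + b ** 2 * math.sin(t) ** 2)
        for t in ((i + 0.5) * h for i in range(n))
    ) * h

for a, b in [(1.0, 1.0), (2.0, 1.0), (0.5, 3.0)]:
    assert abs(ellipse_integral(a, b) - 2 * math.pi) < 1e-8
```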


(d) Show that
\[ \eta = d\left( \arctan\frac{y}{x} \right) \]
in any convex open set in which $x \neq 0$, and that
\[ \eta = d\left( -\arctan\frac{x}{y} \right) \]
in any convex open set in which $y \neq 0$. Why does this justify the notation $\eta = d\theta$, even though $\eta$ is not exact in $R^2\setminus\{0\}$?

$\theta(x,y) = \arctan\left( \frac{y}{x} \right)$ is well-defined on $D_+ = \{(x,y) : x > 0\}$. On $D_+$ we have
\[ \begin{aligned}
d\theta = d\left( \arctan\frac{y}{x} \right) &= \frac{\partial\theta}{\partial x}\,dx + \frac{\partial\theta}{\partial y}\,dy \\
&= \frac{\frac{\partial}{\partial x}\left( \frac{y}{x} \right)}{1 + \left( \frac{y}{x} \right)^2}\,dx + \frac{\frac{\partial}{\partial y}\left( \frac{y}{x} \right)}{1 + \left( \frac{y}{x} \right)^2}\,dy \\
&= \frac{-yx^{-2}\,dx}{1 + \left( \frac{y}{x} \right)^2} + \frac{\frac{1}{x}\,dy}{1 + \left( \frac{y}{x} \right)^2} \\
&= \frac{-y\,dx}{x^2+y^2} + \frac{x\,dy}{x^2+y^2}.
\end{aligned} \]
So $d\theta = \eta$ on $D_+$ (and similarly on $D_- = \{(x,y) : x < 0\}$), which is everywhere $\theta$ is defined. If we take $B_+ = \{(x,y) : y > 0\}$, then
\[ d\left( -\arctan\frac{x}{y} \right) = -\frac{(1/y)\,dx}{1 + (x/y)^2} + \frac{(x/y^2)\,dy}{1 + (x/y)^2} = -\frac{y\,dx}{x^2+y^2} + \frac{x\,dy}{x^2+y^2}, \]
and similarly on $B_- = \{(x,y) : y < 0\}$.

We say $\eta = d\theta$ because $\theta$ gives $\arg(x,y)$; if we let $\gamma(t) = (\cos t, \sin t)$ as above, then this becomes
\[ \arctan\left( \frac{y}{x} \right) = \arctan\left( \frac{\sin t}{\cos t} \right) = t, \]
where the inverse is defined.

Suppose $\eta$ were exact: then there exists $\lambda$, a 0-form on $R^2\setminus\{0\}$, such that $d\lambda = \eta$. Then
\[ d\theta - d\lambda = d(\theta - \lambda) = 0 \implies \theta - \lambda = c \implies \theta = \lambda + c \text{ for some constant } c,
\]


but then $\theta$ would be continuous on $R^2\setminus\{0\}$ (since $\lambda$ is), and it is not. Alternatively, if $\eta$ were exact, then since $\gamma$ is a closed curve ($\partial\gamma = 0$),
\[ \int_\gamma \eta = \int_\gamma d\lambda = \int_{\partial\gamma} \lambda = 0, \]
contradicting (a).

(e) Show (b) can be derived from (d).

Taking $\eta = d\theta$,
\[ \int_\Gamma \eta = \int_\Gamma d\theta = \Big[ \theta \Big]_0^{2\pi} = 2\pi - 0 = 2\pi. \]

(f) Show that if $\Gamma$ is a closed $C'$ curve in $R^2\setminus\{0\}$, then
\[ \frac{1}{2\pi} \int_\Gamma \eta = \operatorname{Ind}(\Gamma) := \frac{1}{2\pi i} \int_a^b \frac{\Gamma'(t)}{\Gamma(t)}\,dt. \]

We use the following basic fact from complex analysis:
\[ \log z = \log|z| + i\arg z, \quad\text{so}\quad \arg z = -i\left( \log z - \log|z| \right). \]
First, fix $|\Gamma| = c$, so $\Gamma$ is one or more circles around the origin, in whatever direction, but of constant radius. Then
\[ \begin{aligned}
\int_\Gamma \eta = \int_\Gamma d\theta &= \int_a^b d\big( \theta(\Gamma(t)) \big) \\
&= -i \int_a^b d\big( \log\Gamma(t) - \log|\Gamma(t)| \big) \\
&= -i \int_a^b d\big( \log\Gamma(t) \big) \\
&= -i \int_a^b \frac{\Gamma'(t)}{\Gamma(t)}\,dt.
\end{aligned} \]
Now use (b) for arbitrary $\Gamma$.


24. Let $\omega = \sum a_i(\mathbf{x})\,dx_i$ be a 1-form of class $C''$ in a convex open set $E \subseteq R^n$. Assume $d\omega = 0$ and prove that $\omega$ is exact in $E$.

• Fix $\mathbf{p} \in E$. Define
\[ f(\mathbf{x}) = \int_{[\mathbf{p},\mathbf{x}]} \omega, \qquad \mathbf{x} \in E. \]
Apply Stokes' theorem to the 2-form $d\omega = 0$ on the oriented affine 2-simplex $[\mathbf{p},\mathbf{x},\mathbf{y}]$ in $E$.

By Stokes' Thm,
\[ \begin{aligned}
0 = \int_{[\mathbf{p},\mathbf{x},\mathbf{y}]} d\omega &= \int_{\partial[\mathbf{p},\mathbf{x},\mathbf{y}]} \omega &&\text{Stokes' Thm} \\
&= \int_{[\mathbf{x},\mathbf{y}]-[\mathbf{p},\mathbf{y}]+[\mathbf{p},\mathbf{x}]} \omega &&\text{def. of } \partial \\
&= \int_{[\mathbf{x},\mathbf{y}]} \omega - \int_{[\mathbf{p},\mathbf{y}]} \omega + \int_{[\mathbf{p},\mathbf{x}]} \omega &&\text{Rem. 10.26, p. 267} \\
&= \int_{[\mathbf{x},\mathbf{y}]} \omega - f(\mathbf{y}) + f(\mathbf{x}) &&\text{def. of } f,
\end{aligned} \]
whence
\[ f(\mathbf{y}) - f(\mathbf{x}) = \int_{[\mathbf{x},\mathbf{y}]} \omega. \]

• Deduce that
\[ f(\mathbf{y}) - f(\mathbf{x}) = \sum_{i=1}^{n} (y_i - x_i) \int_0^1 a_i\big( (1-t)\mathbf{x} + t\mathbf{y} \big)\,dt \]
for $\mathbf{x}, \mathbf{y} \in E$. Hence $(D_i f)(\mathbf{x}) = a_i(\mathbf{x})$.

Note that $\gamma = [\mathbf{x},\mathbf{y}]$ is the straight-line path from $\mathbf{x}$ to $\mathbf{y}$, i.e., $\gamma(t) = (1-t)\mathbf{x} + t\mathbf{y}$. Then the right-hand integral from the last line of the previous derivation becomes
\[ \begin{aligned}
f(\mathbf{y}) - f(\mathbf{x}) = \int_{[\mathbf{x},\mathbf{y}]} \omega
&= \int_{[\mathbf{x},\mathbf{y}]} \sum_{i=1}^{n} a_i(\mathbf{u})\,du_i \\
&= \int_0^1 \sum_{i=1}^{n} a_i\big( (1-t)\mathbf{x} + t\mathbf{y} \big)\,\frac{\partial}{\partial t}\big( (1-t)x_i + t y_i \big)\,dt \\
&= \sum_{i=1}^{n} (y_i - x_i) \int_0^1 a_i\big( (1-t)\mathbf{x} + t\mathbf{y} \big)\,dt.
\end{aligned} \]
Putting $\mathbf{y} = \mathbf{x} + s e_i$, we plug this into the formula for the $i$th partial derivative. Note that the terms with $j \neq i$ have $y_j - x_j = 0$, and hence


vanish. This leaves
\[ \begin{aligned}
(D_i f)(\mathbf{x}) = \lim_{s\to 0} \frac{f(\mathbf{x}+se_i) - f(\mathbf{x})}{s}
&= \lim_{s\to 0} \frac{(x_i+s) - x_i}{s} \int_0^1 a_i\big( (1-t)\mathbf{x} + t(\mathbf{x}+se_i) \big)\,dt \\
&= \lim_{s\to 0} \frac{1}{s}\cdot s \int_0^1 a_i(\mathbf{x} + tse_i)\,dt \\
&= \lim_{s\to 0} \int_0^1 a_i(\mathbf{x} + tse_i)\,dt \\
&= \int_0^1 \lim_{s\to 0} a_i(\mathbf{x} + tse_i)\,dt \\
&= \int_0^1 a_i(\mathbf{x})\,dt = a_i(\mathbf{x}) \int_0^1 dt = a_i(\mathbf{x}).
\end{aligned} \]
We can pass the limit through the integral using uniform convergence: $a_i$ is continuous, so $a_i(\mathbf{x}+tse_i) \to a_i(\mathbf{x})$ uniformly in $t$ as $s \to 0$.

27. Let $E$ be an open 3-cell in $R^3$, with edges parallel to the coordinate axes. Suppose $(a,b,c) \in E$, $f_i \in C'(E)$ for $i = 1, 2, 3$,
\[ \omega = f_1\,dy\wedge dz + f_2\,dz\wedge dx + f_3\,dx\wedge dy, \]
and assume that $d\omega = 0$ in $E$. Define
\[ \lambda = g_1\,dx + g_2\,dy, \]
where
\[ \begin{aligned}
g_1(x,y,z) &= \int_c^z f_2(x,y,s)\,ds - \int_b^y f_3(x,t,c)\,dt, \\
g_2(x,y,z) &= -\int_c^z f_1(x,y,s)\,ds,
\end{aligned} \]
for $(x,y,z) \in E$.

• Prove that $d\lambda = \omega$ in $E$.

We compute the exterior derivative:
\[ \begin{aligned}
d\lambda &= d(g_1\,dx + g_2\,dy) \\
&= dg_1 \wedge dx + g_1\,d(dx) + dg_2 \wedge dy + g_2\,d(dy) \\
&= d\left[ \int_c^z f_2(x,y,s)\,ds - \int_b^y f_3(x,t,c)\,dt \right] \wedge dx + d\left[ -\int_c^z f_1(x,y,s)\,ds \right] \wedge dy
\end{aligned} \]


\[ \begin{aligned}
&= \frac{\partial g_1}{\partial y}\,dy\wedge dx + \frac{\partial g_1}{\partial z}\,dz\wedge dx + \frac{\partial g_2}{\partial x}\,dx\wedge dy + \frac{\partial g_2}{\partial z}\,dz\wedge dy \\
&= \left( \int_c^z \frac{\partial f_2}{\partial y}(x,y,s)\,ds - f_3(x,y,c) \right) dy\wedge dx + f_2(x,y,z)\,dz\wedge dx \\
&\qquad - \left( \int_c^z \frac{\partial f_1}{\partial x}(x,y,s)\,ds \right) dx\wedge dy - f_1(x,y,z)\,dz\wedge dy,
\end{aligned} \]
where we differentiated under the integral sign for the $x$- and $y$-derivatives and used the fundamental theorem of calculus for the $z$-derivatives; note that $\int_b^y f_3(x,t,c)\,dt$ is constant with respect to $z$, and hence contributes only the $-f_3(x,y,c)\,dy\wedge dx$ term. Collecting,
\[ d\lambda = f_1(x,y,z)\,dy\wedge dz + f_2(x,y,z)\,dz\wedge dx + \left[ f_3(x,y,c) - \int_c^z \left( \frac{\partial f_1}{\partial x}(x,y,s) + \frac{\partial f_2}{\partial y}(x,y,s) \right) ds \right] dx\wedge dy. \quad (10.1) \]
Since $\omega$ is closed,
\[ \begin{aligned}
0 = d\omega &= d(f_1\,dy\wedge dz) + d(f_2\,dz\wedge dx) + d(f_3\,dx\wedge dy) \\
&= \frac{\partial f_1}{\partial x}\,dx\wedge dy\wedge dz + \frac{\partial f_2}{\partial y}\,dy\wedge dz\wedge dx + \frac{\partial f_3}{\partial z}\,dz\wedge dx\wedge dy \\
&= \left( \frac{\partial f_1}{\partial x} + \frac{\partial f_2}{\partial y} + \frac{\partial f_3}{\partial z} \right) dx\wedge dy\wedge dz
\quad\implies\quad \frac{\partial f_3}{\partial z} = -\left( \frac{\partial f_1}{\partial x} + \frac{\partial f_2}{\partial y} \right), \end{aligned} \]
so the $dx\wedge dy$ coefficient in (10.1) becomes
\[ f_3(x,y,c) + \int_c^z \frac{\partial f_3}{\partial z}(x,y,s)\,ds = f_3(x,y,c) + \big( f_3(x,y,z) - f_3(x,y,c) \big) = f_3(x,y,z), \]
and we finally obtain
\[ d\lambda = f_1(x,y,z)\,dy\wedge dz + f_2(x,y,z)\,dz\wedge dx + f_3(x,y,z)\,dx\wedge dy = \omega. \]


• Evaluate these integrals when $\omega = \zeta$ and thus find the form $\lambda = -(z/r)\eta$, where
\[ \eta = \frac{x\,dy - y\,dx}{x^2+y^2}. \]

Now the form is
\[ \omega = \zeta = \frac{1}{r^3}\left( x\,dy\wedge dz + y\,dz\wedge dx + z\,dx\wedge dy \right), \]
so
\[ f_1 = \frac{x}{r^3}, \qquad f_2 = \frac{y}{r^3}, \qquad f_3 = \frac{z}{r^3}. \]
The integrals are
\[ \begin{aligned}
\int_c^z f_2(x,y,s)\,ds &= \int_c^z y\left( x^2+y^2+s^2 \right)^{-3/2} ds = \frac{y}{x^2+y^2}\left( \frac{z}{r} - \frac{c}{\sqrt{x^2+y^2+c^2}} \right), \\
\int_b^y f_3(x,t,c)\,dt &= \int_b^y c\left( x^2+t^2+c^2 \right)^{-3/2} dt = \frac{c}{x^2+c^2}\left( \frac{y}{\sqrt{x^2+y^2+c^2}} - \frac{b}{\sqrt{x^2+b^2+c^2}} \right), \\
\int_c^z f_1(x,y,s)\,ds &= \int_c^z x\left( x^2+y^2+s^2 \right)^{-3/2} ds = \frac{x}{x^2+y^2}\left( \frac{z}{r} - \frac{c}{\sqrt{x^2+y^2+c^2}} \right).
\end{aligned} \]

Thus $\lambda = g_1\,dx + g_2\,dy$ is
\[ \begin{aligned}
\lambda &= \frac{y}{x^2+y^2}\left( \frac{z}{r} - \frac{c}{\sqrt{x^2+y^2+c^2}} \right) dx - \frac{c}{x^2+c^2}\left( \frac{y}{\sqrt{x^2+y^2+c^2}} - \frac{b}{\sqrt{x^2+b^2+c^2}} \right) dx \\
&\qquad - \frac{x}{x^2+y^2}\left( \frac{z}{r} - \frac{c}{\sqrt{x^2+y^2+c^2}} \right) dy \\
&= \left( -\frac{z}{r} \right)\left( -\frac{y\,dx}{x^2+y^2} + \frac{x\,dy}{x^2+y^2} \right) \\
&\qquad + \left[ \frac{c}{x^2+c^2}\left( \frac{b}{\sqrt{x^2+b^2+c^2}} - \frac{y}{\sqrt{x^2+y^2+c^2}} \right) - \frac{yc}{(x^2+y^2)\sqrt{x^2+y^2+c^2}} \right] dx + \frac{xc}{(x^2+y^2)\sqrt{x^2+y^2+c^2}}\,dy \\
&= \left( -\frac{z}{r} \right)\eta,
\end{aligned} \]
due to some magic cancellations at the end.


28. Fix $b > a > 0$ and define
\[ \Phi(r,\theta) = (r\cos\theta,\ r\sin\theta) \]
for $a \le r \le b$, $0 \le \theta \le 2\pi$. The range of $\Phi$ is thus an annulus in $R^2$. Put $\omega = x^3\,dy$, and show
\[ \int_\Phi d\omega = \int_{\partial\Phi} \omega \]
by computing each integral separately.

Starting with the left-hand side,
\[ d\omega = \frac{\partial}{\partial x}x^3\,dx\wedge dy + \frac{\partial}{\partial y}x^3\,dy\wedge dy = 3x^2\,dx\wedge dy + 0 = 3x^2\,dx\wedge dy. \]
Since
\[ \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = r\cos^2\theta + r\sin^2\theta = r, \]

we get
\[ 3x^2\,dx\wedge dy = 3r^2\cos^2\theta\ r\,dr\,d\theta, \]
and thus
\[ \begin{aligned}
\int_\Phi d\omega &= \int_0^{2\pi} \int_a^b 3r^3\cos^2\theta\,dr\,d\theta = \left( \int_0^{2\pi} \cos^2\theta\,d\theta \right)\left( \int_a^b 3r^3\,dr \right) \\
&= \left[ \tfrac{1}{2}\theta + \tfrac{1}{4}\sin 2\theta \right]_0^{2\pi} \left[ \tfrac{3}{4}r^4 \right]_a^b = \left( \pi + \tfrac{1}{4}(\sin 4\pi - \sin 0) \right) \cdot \tfrac{3}{4}\left( b^4 - a^4 \right) = \tfrac{3\pi}{4}\left( b^4 - a^4 \right).
\end{aligned} \]

Meanwhile, on the right-hand side, note that
\[ \partial\Phi = \Gamma - \gamma, \]
as in Example 10.32 or Exercise 21(b). Thus,
\[ \begin{aligned}
\int_{\partial\Phi} \omega = \int_{\Gamma-\gamma} \omega &= \int_\Gamma x^3\,dy - \int_\gamma x^3\,dy \\
&= \int_0^{2\pi} b^3\cos^3\theta\,(b\cos\theta)\,d\theta - \int_0^{2\pi} a^3\cos^3\theta\,(a\cos\theta)\,d\theta \\
&= b^4 \int_0^{2\pi} \cos^4\theta\,d\theta - a^4 \int_0^{2\pi} \cos^4\theta\,d\theta \\
&= \left( b^4 - a^4 \right) \int_0^{2\pi} \cos^4\theta\,d\theta = \left( b^4 - a^4 \right) \cdot \frac{3\pi}{4}.
\end{aligned} \]
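Both sides can also be checked numerically for concrete radii. The sketch below (not part of the original solution, with $a = 1$, $b = 2$ chosen arbitrarily) compares a midpoint-rule value of each side against $\frac{3\pi}{4}(b^4 - a^4)$:

```python
import math

a, b = 1.0, 2.0
exact = 0.75 * math.pi * (b ** 4 - a ** 4)

# Left side: ∫₀^{2π} ∫_a^b 3r³ cos²θ dr dθ, midpoint rule in r and θ.
n = 500
hr, ht = (b - a) / n, 2 * math.pi / n
left = sum(
    3 * (a + (i + 0.5) * hr) ** 3 * math.cos((j + 0.5) * ht) ** 2
    for i in range(n) for j in range(n)
) * hr * ht

# Right side: ∮ x³ dy over the outer circle minus the inner circle,
# with x = R cos t and dy = R cos t dt.
def circle_term(R, m=4000):
    h = 2 * math.pi / m
    return sum(
        (R * math.cos((j + 0.5) * h)) ** 3 * R * math.cos((j + 0.5) * h) * h
        for j in range(m)
    )

right = circle_term(b) - circle_term(a)
assert abs(left - exact) < 1e-3
assert abs(right - exact) < 1e-6
```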


11. Lebesgue Theory

1. If $f \ge 0$ and $\int_E f\,d\mu = 0$, prove that $f(x) = 0$ almost everywhere on $E$.

First define
\[ E_n := \left\{ f > \tfrac{1}{n} \right\} \quad\text{and}\quad A := \bigcup_n E_n = \{ f \neq 0 \}. \]
We need to show that $\mu(A) = 0$, so suppose not. Then by countable subadditivity,
\[ \mu(A) > 0 \implies \sum_n \mu(E_n) \ge \mu(A) > 0 \implies \mu(E_n) > 0 \text{ for some } n. \]
But then we'd have
\[ \int_E f\,d\mu \ge \int_{E_n} f\,d\mu \ge \tfrac{1}{n}\,\mu(E_n) > 0, \]
a contradiction.

2. If $\int_A f\,d\mu = 0$ for every measurable subset $A$ of a measurable set $E$, then $f(x) = 0$ a.e. on $E$.

First, suppose $f \ge 0$. Then $f = 0$ a.e. on every measurable $A \subseteq E$ by the previous exercise. In particular, $E \subseteq E$, so $f = 0$ a.e. on $E$.

Now drop the assumption $f \ge 0$. Then $f = f^+ - f^-$, and each of $f^+, f^-$ is $0$ a.e. on $E$ by the above remark, so $f = 0$ a.e. on $E$, too.

3. If $\{f_n\}$ is a sequence of measurable functions, prove that the set of points $x$ at which $\{f_n(x)\}$ converges is measurable.

Define $f(x) = \limsup_{n\to\infty} f_n(x)$, so $f$ is measurable by (11.17), and $|f_n(x) - f(x)|$ is measurable by (11.16), (11.18); note that $\{f_n(x)\}$ converges exactly when it converges to $f(x)$. Then we can write the given set as
\[ \begin{aligned}
\{ x : \forall \varepsilon > 0,\ \exists N \text{ s.t. } n \ge N \implies |f_n(x) - f(x)| < \varepsilon \}
&= \{ x : \forall k \in \mathbb{N},\ \exists N \text{ s.t. } n \ge N \implies |f_n(x) - f(x)| < 1/k \} \\
&= \bigcap_{k=1}^{\infty} \{ x : \exists N \text{ s.t. } n \ge N \implies |f_n(x) - f(x)| < 1/k \} \\
&= \bigcap_{k=1}^{\infty} \bigcup_{N=1}^{\infty} \bigcap_{n \ge N} \{ x : |f_n(x) - f(x)| < 1/k \},
\end{aligned} \]
a countable combination of measurable sets.

4. If $f \in L(\mu)$ on $E$ and $g$ is bounded and measurable on $E$, then $fg \in L(\mu)$ on $E$.

Since $g$ is bounded, there exists $M$ such that $|g| \le M$. Then
\[ \left| \int_E fg\,d\mu \right| \le \int_E |fg|\,d\mu \le M \int_E |f|\,d\mu < \infty \]
by (11.23(d)) and (11.26).