
Solutions for exercises/Notes for Linear

Algebra Done Right by Sheldon Axler

Toan Quang Pham

[email protected]

Monday 10th September, 2018

Contents

1. Some notes before reading the book
2. Terminology
3. Chapter 1 - Vector Spaces
   3.1. Exercises 1.B
   3.2. 1.C Subspaces
   3.3. Exercises 1.C
4. Chapter 2 - Finite-dimensional vector spaces
   4.1. 2.A Span and linear independence
      4.1.1. Main theories
      4.1.2. Important/Interesting results from Exercise 2.A
   4.2. Exercises 2.A
   4.3. 2.B Bases
   4.4. Addition, scalar multiplication of specific lists of vectors
   4.5. Exercises 2B
   4.6. 2.C Dimension
   4.7. Exercises 2C
5. Chapter 3: Linear Maps
   5.1. 3.A The vector space of linear maps
   5.2. Exercises 3A
   5.3. 3.B Null Spaces and Ranges
   5.4. Exercises 3B
      5.4.1. A way to construct (not) injective, surjective linear maps
      5.4.2. Exercises
   5.5. Exercises 3C
   5.6. 3D: Invertibility and Isomorphic Vector Spaces
   5.7. Exercises 3D
   5.8. 3E: Products and Quotients of Vector Spaces
   5.9. Exercises 3E
   5.10. 3F: Duality
   5.11. Exercises 3F
6. Chapter 4: Polynomials
   6.1. Exercise 4
7. Chapter 5: Eigenvalues, Eigenvectors, and Invariant Subspaces
   7.1. 5A: Invariant subspaces
   7.2. Exercises 5A
   7.3. 5B: Eigenvectors and Upper-Triangular Matrices
   7.4. Exercises 5B
   7.5. 5C: Eigenspaces and Diagonal Matrices
   7.6. Exercises 5C
8. Chapter 6: Inner Product Spaces
   8.1. 6A: Inner Products and Norms
   8.2. Exercises 6A
   8.3. 6B: Orthonormal Bases
   8.4. Exercises 6B
   8.5. 6C: Orthogonal Complements and Minimization Problems
   8.6. Exercises 6C
9. Chapter 7: Operators on Inner Product Spaces
   9.1. 7A: Self-adjoint and Normal Operators
   9.2. Exercises 7A
   9.3. 7B: The Spectral Theorem
   9.4. Exercises 7B
   9.5. 7C: Positive Operators and Isometries
   9.6. Exercises 7C
   9.7. 7D: Polar Decomposition and Singular Value Decomposition
   9.8. Exercises 7D
10. Chapter 8: Operators on Complex Vector Spaces
   10.1. 8A: Generalized Eigenvectors and Nilpotent Operators
   10.2. Exercises 8A
   10.3. 8B: Decomposition of an Operator
   10.4. Exercises 8B
   10.5. 8C: Characteristic and Minimal Polynomials
   10.6. Exercises 8C
   10.7. 8D: Jordan Form
   10.8. Exercises 8D
11. Chapter 9: Operators on Real Vector Spaces
   11.1. 9A: Complexification
   11.2. Exercises 9A
   11.3. 9B: Operators on Real Inner Product Spaces
   11.4. Exercises 9B
12. Chapter 10: Trace and Determinant
   12.1. 10A: Trace
   12.2. Exercises 10A
13. Summary
14. Interesting problems
15. New knowledge


1. Some notes before reading the book

1. If the problem does not specify that the vector space $V$ is finite-dimensional, then $V$ can be either finite-dimensional or infinite-dimensional, and the latter case is sometimes harder to solve. For example, Exercise 24 in 3B.

2. Carefully check whether the theorem/definition holds for a real vector space or a complex vector space.

2. Terminology

1. "There exists a basis of $V$ with respect to which the matrix of $T$ . . ." = "the matrix of $T$ with respect to a basis of $V$ . . .".

2. "Eigenvectors $v_1, \dots, v_m$ corresponding to eigenvalues $\lambda_1, \dots, \lambda_m$" = "eigenvalues $\lambda_1, \dots, \lambda_m$ and corresponding eigenvectors $v_1, \dots, v_m$".

3. Chapter 1 - Vector Spaces

3.1. Exercises 1.B

1. $-(-v) = (-1)(-v) = (-1)((-1)v) = 1v = v$.

2. If $a = 0$ then it's obviously true. If $a \neq 0$ then $v = \frac{1}{a}(av) = \frac{1}{a} \cdot 0 = 0$.

3. Let $x = \frac{1}{3}(w - v)$; then $3x = w - v$, so $3x + v = (w - v) + v = w$. We can see that this $x$ is unique: if there were two such $x, x'$ then $x = \frac{1}{3}(w - v) = x'$.

4. The empty set is not a vector space because it doesn't contain an additive identity $0$ (it doesn't have any vector).

5. Because the collection of objects satisfying the original condition is all vectors in $V$, the same as the collection of objects satisfying the new condition.

6. $\mathbb{R} \cup \{\infty\} \cup \{-\infty\}$ is not a vector space over $\mathbb{R}$: with the given operations, addition is not associative, since $(\infty + \infty) + (-\infty) = \infty + (-\infty) = 0$ while $\infty + (\infty + (-\infty)) = \infty + 0 = \infty$.

3.2. 1.C Subspaces

Problem 3.2.1 (Example 1.35). (a) Obvious.

(b) Let $S$ be the set of all continuous real-valued functions on the interval $[0, 1]$. $S$ has an additive identity (Example 1.24). If $f, g \in S$ then $f + g$ and $cf$ are continuous real-valued functions on the interval $[0, 1]$ for any $c \in \mathbf{F}$.

(c) The idea is similar: if $f, g$ are two differentiable real-valued functions then so are $f + g$ and $cf$.


(d) Let $f, g \in S$; then $f'(2) = g'(2) = b$. According to Notation 1.23, $f + g \in S$, so $b = (f+g)'(2)$; but $(f+g)(x) = f(x) + g(x)$, so $(f+g)'(x) = f'(x) + g'(x)$, implying $b = (f+g)'(2) = f'(2) + g'(2) = 2b$, i.e. $b = 0$.

(e) Let $(a_n), (b_n)$ be two sequences in $S$; then $\lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = 0$, so $\lim_{n\to\infty}(a_n + b_n) = \lim_{n\to\infty} ca_n = 0$, which means $(a_n + b_n), (ca_n) \in S$.

Problem 3.2.2 (Page 19). What are the subspaces of $\mathbb{R}^2$?

Answer. $\{(0,0)\}$ and $\mathbb{R}^2$ are obviously two of them. Are there any other subspaces? Let $S$ be a subspace of $\mathbb{R}^2$ containing an element $(x_1, y_1) \neq (0,0)$. Note that $\lambda(x_1, y_1) \in S$ for each $\lambda \in \mathbf{F}$, so all points on the line $y = \frac{y_1}{x_1}x$ belong to $S$. If there does not exist a point $(x_2, y_2) \in S$ lying off the line $y = \frac{y_1}{x_1}x$, then $S$, the set of points on $y = \frac{y_1}{x_1}x$, is a subspace of $\mathbb{R}^2$. Hence, since the point $(x_1, y_1)$ is arbitrary, we deduce that all lines in $\mathbb{R}^2$ through the origin are subspaces of $\mathbb{R}^2$.

Now, what if there does exist such a point $(x_2, y_2) \in S$? Then the line $y = \frac{y_2}{x_2}x$ also belongs to $S$. We claim that if two distinct lines through the origin belong to $S$ then $S = \mathbb{R}^2$. Indeed, since $\mathbb{R}^2$ is the union of all lines through the origin, it suffices to prove that every line $y = ax$ with $a \in \mathbb{R}$ belongs to $S$, given that the two lines $y = \frac{y_1}{x_1}x$ and $y = \frac{y_2}{x_2}x$ already belong to $S$. Let $m = \frac{y_1}{x_1}$, $n = \frac{y_2}{x_2}$. We know that if $(x_1, y_1), (x_2, y_2) \in S$ then $(x_1 + x_2, y_1 + y_2) \in S$. Pick $x_1, x_2$ so that $(m - a)x_1 = (a - n)x_2$; then $mx_1 + nx_2 = a(x_1 + x_2)$. Taking $y_1 = mx_1$, $y_2 = nx_2$, it follows that $(x_1 + x_2, y_1 + y_2) = (x_1 + x_2, a(x_1 + x_2)) \in S$, which yields that the line $y = ax$ belongs to $S$. Since this holds for arbitrary $a$, we deduce $S = \mathbb{R}^2$.

Thus, the subspaces of $\mathbb{R}^2$ are exactly $\{(0,0)\}$, $\mathbb{R}^2$, and all lines in $\mathbb{R}^2$ through the origin.

Problem 3.2.3 (Page 19). What are the subspaces of $\mathbb{R}^3$?

Answer. Similar idea to the previous problem. It's obvious that $\{(0,0,0)\}$ and $\mathbb{R}^3$ are two subspaces of $\mathbb{R}^3$. Let $S$ be a different subspace.

1. If $(x_1, y_1, z_1) \in S$ then $\lambda(x_1, y_1, z_1) \in S$, implying that the line $\ell$ in $\mathbb{R}^3$ through the origin and $(x_1, y_1, z_1)$ belongs to $S$.

2. If there exists $(x_2, y_2, z_2) \in S$ not lying on the line $\ell$, then the plane $P$ through the origin, $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ belongs to $S$.

3. If there exists $(x_3, y_3, z_3) \in S$ not in the plane $P$, then $S = \mathbb{R}^3$.

Thus, $\{0\}$, $\mathbb{R}^3$, all lines in $\mathbb{R}^3$ through the origin, and all planes in $\mathbb{R}^3$ through the origin are all the subspaces of $\mathbb{R}^3$.

Theorem 3.2.4 (Sum of subspaces, page 20). Suppose $U_1, \dots, U_m$ are subspaces of $V$. Then $U_1 + U_2 + \dots + U_m$ is the smallest subspace of $V$ containing $U_1, \dots, U_m$.


Proof. It's not hard to prove that $U_1 + U_2 + \dots + U_m$ is a subspace of $V$. Let $U$ be a subspace of $V$ containing $U_1, \dots, U_m$; we will prove that if $v \in U_1 + \dots + U_m$ then $v \in U$. Indeed, since $v \in U_1 + \dots + U_m$, we have $v = v_1 + \dots + v_m$ with $v_i \in U_i$ for all $1 \le i \le m$. On the other hand, $v_i \in U$ since $U$ contains $U_1, \dots, U_m$, so $v = \sum_{i=1}^m v_i \in U$. Thus our claim is proven, which yields that $U$ contains $U_1 + \dots + U_m$.

Next, we will prove that $U_1 + \dots + U_m$ contains $U_1, \dots, U_m$. This is not hard: since $0 \in U_i$ for all $1 \le i \le m$, for any $v \in U_i$ we have
\[ v = \underbrace{0 + \dots + 0}_{i-1} + v + \underbrace{0 + \dots + 0}_{m-i} \in U_1 + \dots + U_m. \]

These two arguments show that $U_1 + \dots + U_m$ is the smallest subspace of $V$ containing $U_1, \dots, U_m$.

Theorem 3.2.5 (1.44, Condition for a direct sum, page 23). Suppose $U_1, U_2, \dots, U_m$ are subspaces of $V$. Then $U_1 + \dots + U_m$ is a direct sum if and only if the only way to write $0$ as a sum $u_1 + \dots + u_m$, with each $u_i \in U_i$, is by taking each $u_i = 0$.

Proof. If $U_1 + \dots + U_m$ is a direct sum, then obviously $0$ is written uniquely as the sum $0 + \dots + 0$.

Conversely, suppose $0$ is written uniquely as $0 + 0 + \dots + 0$. Assume on the contrary that the sum $U = U_1 + \dots + U_m$ is not a direct sum; this means there exists $u \in U$ with $u = v_1 + \dots + v_m = u_1 + \dots + u_m$, where $u_i, v_i \in U_i$ and $u_i \neq v_i$ for some $i$. It follows that
\[ \sum_{i=1}^m (v_i - u_i) = \sum_{i=1}^m v_i - \sum_{i=1}^m u_i = u - u = 0. \]
In other words, there is another way to write $0$, namely $(v_1 - u_1) + \dots + (v_m - u_m)$, where $v_i - u_i \in U_i$ and $v_j - u_j \neq 0$ for some $j$, a contradiction. Thus $U_1 + \dots + U_m$ is a direct sum.

Theorem 3.2.6 (1.45, Direct sum of two subspaces). Suppose $U, W$ are two subspaces of $V$. Then $U + W$ is a direct sum if and only if $U \cap W = \{0\}$.

Proof. Suppose $U + W$ is a direct sum. If there exists $v \in U \cap W$ with $v \neq 0$, then $-v = (-1)v \in W$, so $0 = v + (-v)$ with $v \in U$ and $-v \in W$, which contradicts the unique representation of $0$ as a sum of vectors in $U$ and $W$.

Conversely, suppose $U \cap W = \{0\}$. Assume on the contrary that $U + W$ is not a direct sum; then $0 = u + v$ with $u \in U$, $v \in W$ and $u, v \neq 0$. This gives $u = -v \in W$, so $u \in U \cap W$ with $u \neq 0$, a contradiction. We are done.
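A quick sanity check of the criterion (an illustration added here, not from the book): in $\mathbb{R}^2$ the two coordinate axes intersect only in $0$, so their sum is direct, while a line together with a plane containing it fails the criterion.

```latex
% U = x-axis, W = y-axis in R^2: U \cap W = {0}, so the sum is direct and
% every (x, y) decomposes uniquely as (x, 0) + (0, y):
U \cap W = \{0\} \implies U + W = U \oplus W = \mathbb{R}^2.
% In contrast, for U' = \{(t, t) : t \in \mathbb{R}\} and W' = \mathbb{R}^2 we have
% U' \cap W' = U' \neq \{0\}, and indeed
(1,1) = (1,1) + (0,0) = (0,0) + (1,1)
% gives two different decompositions, so U' + W' is not a direct sum.
```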

3.3. Exercises 1.C

1. (a), (d) are subspaces of $\mathbf{F}^3$. (b) is not a subspace of $\mathbf{F}^3$ because it doesn't contain $0$. (c) is not a subspace of $\mathbf{F}^3$ because $(0,1,1)$ and $(1,1,0)$ belong to the set but $(0,1,1) + (1,1,0) = (1,2,1)$ does not.


2. Already did.

3. Let $S$ be the set of such functions. Let $f, g \in S$; then $f + g$ is a differentiable real-valued function and $(f+g)'(-1) = f'(-1) + g'(-1) = 3f(2) + 3g(2) = 3(f+g)(2)$. Similarly for $\lambda f$.

4. Similar to example 1.35.

5. Yes.

6. (a) Since $a^3 = b^3$ if and only if $a = b$ over $\mathbb{R}$, we have $\{(a,b,c) \in \mathbb{R}^3 : a^3 = b^3\} = \{(a,b,c) \in \mathbb{R}^3 : a = b\}$. It's not hard to prove that this is a subspace of $\mathbb{R}^3$.
(b) No. We see that $(1, 1)$ and $\left(1+i, \frac{-1-\sqrt{3}}{2} + \frac{-1+\sqrt{3}}{2}i\right)$ are in $S$, but their sum $\left(2+i, \frac{1-\sqrt{3}}{2} + \frac{-1+\sqrt{3}}{2}i\right) \notin S$.

7. The set $\{(a, b) : a, b \in \mathbb{Z}\}$ is not a subspace of $\mathbb{R}^2$, since it is not closed under scalar multiplication (e.g. by $\frac{1}{2}$).

8. The set $U = \{(a,b) \in \mathbb{R}^2 : a = b\} \cup \{(a,b) \in \mathbb{R}^2 : a = 2b\}$ is not a subspace of $\mathbb{R}^2$, because $(1,1), (2,1) \in U$ but $(1,1) + (2,1) = (3,2) \notin U$.

9. Consider the function $f$ with $f(x) = x$ for $x \in [0, \sqrt{2})$ and $f(x) = f(x + \sqrt{2})$, and the function $g$ with $g(x) = x$ for $x \in [0, \sqrt{3})$ and $g(x) = g(x + \sqrt{3})$; then $f + g$ is not periodic.

Indeed, assume the function $f(x) + g(x)$ is periodic with period $T$. Note that if $T/\sqrt{2}$ is not an integer then $f(0) < f(T)$, and similarly for $T/\sqrt{3}$. Since $T/\sqrt{2}$ and $T/\sqrt{3}$ cannot both be integers (their ratio $\sqrt{3}/\sqrt{2}$ is irrational), we get $f(0) + g(0) < f(T) + g(T)$, a contradiction. Thus $f + g$ is not periodic. This yields that the set of periodic functions from $\mathbb{R}$ to $\mathbb{R}$ is not a subspace of $\mathbb{R}^{\mathbb{R}}$.

10. For any $u, v \in U_1 \cap U_2$ we have $u + v \in U_1$ and $u + v \in U_2$, so $u + v \in U_1 \cap U_2$. Similarly $\lambda u \in U_1 \cap U_2$.

11. Similar to 10.

12. Assume the contrary: there exist two subspaces $U_1, U_2$ of $V$ such that $U_1 \cup U_2$ is a subspace, and there exist $u_1 \in U_1$, $u_2 \in U_2$ with $u_1 \notin U_2$ and $u_2 \notin U_1$. We have $u_1, u_2 \in U_1 \cup U_2$, so $u_1 + u_2 \in U_1 \cup U_2$. WLOG assume $u_1 + u_2 \in U_1$; then $(u_1 + u_2) - u_1 \in U_1$, i.e. $u_2 \in U_1$, a contradiction. Thus either $U_1 \subseteq U_2$ or $U_2 \subseteq U_1$.

13. Let $U_1, U_2, U_3$ be three subspaces of $V$ such that $U_1 \cup U_2 \cup U_3$ is a subspace. If one of the three subspaces contains another, WLOG $U_1 \subseteq U_2$, then $U_1 \cup U_2 \cup U_3 = U_2 \cup U_3$ is a subspace if and only if $U_3 \subseteq U_2$ or $U_2 \subseteq U_3$; in other words, either $U_2$ contains $U_1, U_3$ or $U_3$ contains $U_1, U_2$.

If none of the three subspaces $U_1, U_2, U_3$ contains another, then there exists a vector $v \notin U_2$ with $v \in U_1 \cup U_2 \cup U_3$, which yields $v \in U_1 \cup U_3$; WLOG $v \in U_1$. Since $U_1 \not\subseteq U_2$ and $U_2 \not\subseteq U_1$, there exists $u \in U_2$ with $u \notin U_1$. By an argument similar to 12, we find $u + v \notin U_2$ and $u + v \notin U_1$, so $u + v \in U_3$. Similarly $u + 2v \in U_3$, whence $(u + 2v) - (u + v) = v \in U_3$. Thus every $v \notin U_2$ satisfies $v \in U_1 \cap U_3$. This forces $U_1 \cup U_2 = U_1 \cap U_2$, i.e. $U_1 = U_2$, a contradiction to the assumption.


14. Done

15. $U + U$ is exactly $U$. From Theorem 1.39, $U + U$ contains $U$. We also notice that $U$ contains $U + U$, because every $u \in U + U$ satisfies $u \in U$.

16. Yes. As long as $U, W$ are sets of vectors of $V$.

17. Yes. As long as $U_1, U_2, U_3$ are sets of vectors of $V$.

18. The operation of addition on the subspaces of $V$ has an additive identity, namely $U = \{0\}$. Only $\{0\}$ has an additive inverse.

19. Let $V = \{(x,y,z) \in \mathbb{R}^3 : x, y, z \in \mathbb{R}\}$, $U_1 = \{(x,0,0) \in \mathbb{R}^3 : x \in \mathbb{R}\}$, $U_2 = \{(0,x,0) \in \mathbb{R}^3 : x \in \mathbb{R}\}$ and $W = \{(x,y,0) \in \mathbb{R}^3 : x, y \in \mathbb{R}\}$; then $U_1 + W = U_2 + W = W$ but $U_1 \neq U_2$.

20. $W = \{(x, 2x, y, 2y) \in \mathbf{F}^4 : x, y \in \mathbf{F}\}$. It is obvious that $W + U$ is a direct sum. It suffices to prove $\mathbf{F}^4 = W \oplus U$, i.e. that for any $(m,n,p,q) \in \mathbf{F}^4$ there exist $(x, 2x, y, 2y) \in W$ and $(x_1, x_1, y_1, y_1) \in U$ with $(m,n,p,q) = (x, 2x, y, 2y) + (x_1, x_1, y_1, y_1)$. We can choose $x = n - m$, $x_1 = 2m - n$, $y = q - p$, $y_1 = 2p - q$.

21. $W = \{(x, y, z, z, x) \in \mathbf{F}^5 : x, y, z \in \mathbf{F}\}$. The vector $0$ can be represented as $w_1 + u_1$ with $w_1 \in W$, $u_1 \in U$ only by $w_1 = u_1 = 0$, so $W + U$ is a direct sum. It suffices to prove that for each $(m,n,p,q,r) \in \mathbf{F}^5$ there are $w = (x, y, z, z, x) \in W$ and $u = (x_1, y_1, x_1 + y_1, x_1 - y_1, 2x_1) \in U$ with $(m,n,p,q,r) = w + u$. We can choose $x_1 = r - m$, $x = 2m - r$, $z = m - r + \frac{1}{2}p + \frac{1}{2}q$, $y_1 = \frac{1}{2}p - \frac{1}{2}q$, $y = n + \frac{1}{2}q - \frac{1}{2}p$.

22. $W_1 = \{(0,0,x,0,0) \in \mathbf{F}^5 : x \in \mathbf{F}\}$, $W_2 = \{(0,0,0,x,0) \in \mathbf{F}^5 : x \in \mathbf{F}\}$, $W_3 = \{(0,0,0,0,x) \in \mathbf{F}^5 : x \in \mathbf{F}\}$.

23. Let's take Exercise 20 as a counterexample: $V = \mathbf{F}^4$, $W = \{(x,x,y,y) \in \mathbf{F}^4 : x,y \in \mathbf{F}\}$, $U_1 = \{(x,2x,y,2y) \in \mathbf{F}^4 : x,y \in \mathbf{F}\}$ and $U_2 = \{(x,3x,y,3y) \in \mathbf{F}^4 : x,y \in \mathbf{F}\}$.

24. Since $U_e \cap U_o = \{f \equiv 0\}$, the sum $U_e + U_o$ is a direct sum. Consider $f \in \mathbb{R}^{\mathbb{R}}$; then $f(x) = f_e(x) + f_o(x)$ where $f_e(x) = \frac{1}{2}[f(x) + f(-x)] \in U_e$ and $f_o(x) = \frac{1}{2}[f(x) - f(-x)] \in U_o$. Thus $\mathbb{R}^{\mathbb{R}} = U_e \oplus U_o$.


4. Chapter 2 - Finite-dimensional vector spaces

4.1. 2.A Span and linear independence

4.1.1. Main theories

Theorem 4.1.1 (Linear Dependence Lemma, 2.21). Suppose $v_1, \dots, v_m$ is a linearly dependent list in $V$. Then there exists $j \in \{1, 2, \dots, m\}$ such that the following hold:

(a) $v_j \in \operatorname{span}(v_1, \dots, v_{j-1})$;

(b) if the $j$th term is removed from $v_1, \dots, v_m$, the span of the remaining list equals $\operatorname{span}(v_1, \dots, v_m)$.

Theorem 4.1.2 (2.23). In a finite-dimensional vector space, the length of every linearly independent list of vectors is less than or equal to the length of every spanning list of vectors.

This theorem is also known as the Steinitz exchange lemma or the replacement theorem.

Proof. Let $v_1, v_2, \dots, v_m$ be a linearly independent list and $u_1, u_2, \dots, u_n$ a spanning list of vectors. It suffices to treat the case where $u_1, u_2, \dots, u_n$ is also linearly independent: otherwise, by the Linear Dependence Lemma (2.21), we can remove vectors from the spanning list without changing its span until it becomes linearly independent. Assume on the contrary that $n < m$; from this assumption we will prove that the list $v_1, \dots, v_m$ is linearly dependent.

Since $u_1, u_2, \dots, u_n$ spans $V$ and is also linearly independent, for each $v_i$ there exist unique $c_{i1}, c_{i2}, \dots, c_{in} \in \mathbf{F}$ with $v_i = c_{i1}u_1 + c_{i2}u_2 + \dots + c_{in}u_n$. Hence, for $a_1, \dots, a_m \in \mathbf{F}$, we have
\[ a_1v_1 + a_2v_2 + \dots + a_mv_m = u_1\sum_{i=1}^m a_ic_{i1} + u_2\sum_{i=1}^m a_ic_{i2} + \dots + u_n\sum_{i=1}^m a_ic_{in}. \]

We will show that there exist $a_1, \dots, a_m$, not all $0$, with $a_1v_1 + \dots + a_mv_m = 0$. Since $u_1, \dots, u_n$ is linearly independent, from the above we get
\[ a_1v_1 + \dots + a_mv_m = 0 \iff \begin{cases} a_1c_{11} + a_2c_{21} + \dots + a_mc_{m1} = 0, \\ a_1c_{12} + a_2c_{22} + \dots + a_mc_{m2} = 0, \\ \dots \\ a_1c_{1n} + a_2c_{2n} + \dots + a_mc_{mn} = 0. \end{cases} \tag{1} \]

This system of equations has $m$ variables and $n$ equations with $n < m$. Here is an algorithm for finding a non-zero solution $a_1, \dots, a_m$. Number the equations in (1) from top to bottom as $(1_1), (1_2), \dots, (1_n)$.

1. For $1 \le i \le n-1$, eliminate $a_m$ from $(1_i)$ by subtracting a suitable constant multiple of $(1_{i+1})$. By doing this, we obtain a system of equations where the first $n-1$ equations contain only $m-1$ variables, while the last equation contains $m$ variables.

2. For $1 \le i \le n-2$, eliminate $a_{m-1}$ from $(1_i)$ in the same way. We obtain an equivalent system where the first $n-2$ equations contain $m-2$ variables, equation $(1_{n-1})$ contains $m-1$ variables, and equation $(1_n)$ contains $m$ variables.

3. Keep doing this until we get a system in which equation $(1_i)$ contains $m - n + i$ variables.

4. Starting from equation $(1_1)$ with its $m-n+1$ variables $a_1, \dots, a_{m-n+1}$, we can pick $a_1, \dots, a_{m-n+1}$, not all $0$, satisfying $(1_1)$, which is possible since $m \ge n+1$.

5. Move on to $(1_2)$ and pick $a_{m-n+2}$, then to $(1_3)$ and pick $a_{m-n+3}$, and so on until $(1_n)$, where we pick $a_m$.

Thus there exist $a_1, \dots, a_m$, not all $0$, with $a_1v_1 + \dots + a_mv_m = 0$, a contradiction since $v_1, \dots, v_m$ is linearly independent. Hence we must have $n \ge m$. We are done.

See Exercises 12, 13, 14, 17 for applications of this theorem.

Theorem 4.1.3 (2.26, Finite-dimensional subspaces). Every subspace of a finite-dimensional vector space is finite-dimensional.

Proof. Let $V = \operatorname{span}(v_1, v_2, \dots, v_m)$ be a finite-dimensional vector space and $U$ a subspace of it. Among the vectors of the spanning list $v_1, v_2, \dots, v_m$, let $u_1, u_2, \dots, u_n$ be those that lie in the subspace $U$, and $k_1, k_2, \dots, k_{m-n}$ those that do not. If $U = \operatorname{span}(u_1, u_2, \dots, u_n)$ then we are done. If not, there exists $\ell \in U$ with $\ell \notin \operatorname{span}(u_1, u_2, \dots, u_n)$. Since $\ell \in \operatorname{span}(v_1, \dots, v_m)$,
\[ \ell = \sum_{i=1}^n a_iu_i + \sum_{j=1}^{m-n} b_jk_j \]
for $a_i, b_j \in \mathbf{F}$, and there exists $b_j$ ($1 \le j \le m-n$) with $b_j \neq 0$. Since $u_i, \ell \in U$, it follows that $h_1 = \sum_{j=1}^{m-n} b_jk_j \in U$; moreover $h_1 \in \operatorname{span}(k_1, \dots, k_{m-n})$ and $h_1 \neq 0$. We add $h_1$ to the list, obtaining $h_1, u_1, u_2, \dots, u_n$ in $U$. Similarly, we compare $U$ and $\operatorname{span}(h_1, u_1, \dots, u_n)$ and may add another $h_2 \in \operatorname{span}(k_1, k_2, \dots, k_{m-n})$ to the list. This process keeps going until we obtain a spanning list of $U$. We will prove that after adding at most $m-n$ such $h_i$, we get $U = \operatorname{span}(h_1, h_2, \dots, h_j, u_1, \dots, u_n)$ with $j \le m-n$.

Indeed, assume the contrary that after adding $m-n$ such $h_i \in U$ with $h_i \neq 0$, we still cannot obtain a list of vectors that spans $U$. That means there exists $\ell \in U$ with $\ell \notin \operatorname{span}(h_1, \dots, h_{m-n}, u_1, \dots, u_n)$. As before, from $\ell$ we can construct another $h_{m-n+1} \neq 0$ with $h_{m-n+1} \in U$ and $h_{m-n+1} \in \operatorname{span}(k_1, \dots, k_{m-n})$. Writing $h_i = c_{i1}k_1 + c_{i2}k_2 + \dots + c_{i,m-n}k_{m-n}$, we have


\[ a_1h_1 + \dots + a_{m-n+1}h_{m-n+1} = k_1 \iff \begin{cases} a_1c_{1,1} + a_2c_{2,1} + \dots + a_{m-n+1}c_{m-n+1,1} = 1, \\ a_1c_{1,2} + a_2c_{2,2} + \dots + a_{m-n+1}c_{m-n+1,2} = 0, \\ \dots \\ a_1c_{1,m-n} + \dots + a_{m-n+1}c_{m-n+1,m-n} = 0. \end{cases} \]

This is a system of $m-n$ equations in the $m-n+1$ variables $a_1, \dots, a_{m-n+1} \in \mathbf{F}$, so as in the proof of Theorem 4.1.2 there are infinitely many choices of $a_i$ with $a_1h_1 + \dots + a_{m-n+1}h_{m-n+1} = k_1$. Since each $h_i \in U$, this gives $k_1 \in U$, a contradiction. Thus there are at most $m-n$ such $h_i \in U$ ($1 \le i \le j$) before there is no longer any $\ell \in U$ with $\ell \notin \operatorname{span}(h_1, \dots, h_j, u_1, \dots, u_n)$. Note that $\operatorname{span}(h_1, \dots, h_j, u_1, \dots, u_n) \subseteq U$, so this can only mean $U = \operatorname{span}(h_1, \dots, h_j, u_1, \dots, u_n)$. Thus $U$ is a finite-dimensional vector space.

Remark 4.1.4. Both my proofs for the two theorems use the result that a system of $m$ equations in $n$ variables over $\mathbb{R}$ with $m < n$ has infinitely many solutions in $\mathbb{R}$.

4.1.2. Important/Interesting results from Exercise 2.A

Corollary 4.1.5 (Example 2.20). If some vector in a list of vectors in $V$ is a linear combination of the other vectors, then the list is linearly dependent.

Proof. Say the list $v_1, \dots, v_m$ has $v_1 = \sum_{i \ge 2} a_iv_i$; then $v_1 - \sum_{i \ge 2} a_iv_i = 0$, so there exist $x_1, \dots, x_m$, not all $0$, with $\sum_{i=1}^m x_iv_i = 0$. Hence $v_1, \dots, v_m$ is linearly dependent.

This gives another way to check whether a list is linearly independent. See Exercise 10 (2A) for an application. The converse is also true.

Corollary 4.1.6 (Exercise 11, 2A). Suppose $v_1, \dots, v_m$ is linearly independent in $V$ and $w \in V$. Then $v_1, v_2, \dots, v_m, w$ is linearly independent if and only if $w \notin \operatorname{span}(v_1, v_2, \dots, v_m)$.

This is a corollary of the Linear Dependence Lemma (2.21). With it, we can construct a spanning list for any infinite-dimensional vector space. Also with it, we can prove Exercise 14, which shows the existence of a sequence of vectors in an infinite-dimensional vector space such that, for any positive integer $m$, the first $m$ vectors of the sequence are linearly independent. See Exercises 15, 16 for applications of Exercise 14.

4.2. Exercises 2.A

1. For any $v \in V$ we have $v = a_1v_1 + a_2v_2 + a_3v_3 + a_4v_4$ for some $a_i \in \mathbf{F}$. Hence,
\begin{align*}
v &= a_1(v_1 - v_2) + (a_1 + a_2)(v_2 - v_3) + (a_1 + a_2 + a_3)(v_3 - v_4) + (a_1 + a_2 + a_3 + a_4)v_4 \\
&= b_1(v_1 - v_2) + b_2(v_2 - v_3) + b_3(v_3 - v_4) + b_4v_4.
\end{align*}


Thus $v \in \operatorname{span}(v_1 - v_2, v_2 - v_3, v_3 - v_4, v_4)$, so $V \subseteq \operatorname{span}(v_1 - v_2, v_2 - v_3, v_3 - v_4, v_4)$. On the other hand, since $v_1 - v_2, v_2 - v_3, v_3 - v_4, v_4 \in V$, any $u \in \operatorname{span}(v_1 - v_2, v_2 - v_3, v_3 - v_4, v_4)$ satisfies $u \in V$, so $\operatorname{span}(v_1 - v_2, v_2 - v_3, v_3 - v_4, v_4) \subseteq V$. Thus $v_1 - v_2, v_2 - v_3, v_3 - v_4, v_4$ spans $V$.

2. Example 2.18: (a) True for $v \neq 0$. If $v = 0$, then $av = 0$ for any $a \in \mathbf{F}$, so the list consisting of the single vector $0$ is not linearly independent. (b) Consider $u, v \in V$. We have $au + bv = 0 \iff au = -bv$, so in order that the only choice of $(a,b)$ be $(0,0)$, $u$ must not be a scalar multiple of $v$. (c), (d) True.

3. It suffices to find $t$ for which there exist $x, y, z \in \mathbb{R}$, not all $0$, with $x(3,1,4) + y(2,-3,5) + z(5,9,t) = 0$, i.e.
\[ \begin{cases} 3x + 2y + 5z = 0, \\ x - 3y + 9z = 0, \\ 4x + 5y + tz = 0. \end{cases} \]
Pick $t = 2$; then $(x, y, z) = (-3a, 2a, a)$ works for any $a \in \mathbb{R}$. This shows that $(3,1,4), (2,-3,5), (5,9,2)$ is not linearly independent in $\mathbb{R}^3$.

4. Similarly to 3: if $x(2,3,1) + y(1,-1,2) + z(7,3,c) = 0$ for $x, y, z \in \mathbb{R}$, then
\[ \begin{cases} 2x + y + 7z = 0, \\ 3x - y + 3z = 0, \\ x + 2y + cz = 0. \end{cases} \]
From the first two equations we find $x = -2z$, $y = -3z$; substituting into the third gives $(c - 8)z = 0$. This has a solution with $z \neq 0$ if and only if $c = 8$. Done.

5. (a) The only solution $x, y \in \mathbb{R}$ of $x(1+i) + y(1-i) = 0$ is $x = y = 0$. Hence the list $1+i, 1-i$ is linearly independent in the vector space $\mathbb{C}$ over $\mathbb{R}$. (b) Besides $x = y = 0$, the choice $x = i$, $y = 1$ also satisfies $x(1+i) + y(1-i) = 0$. Hence $1+i, 1-i$ is linearly dependent in the vector space $\mathbb{C}$ over $\mathbb{C}$.

6. Consider $a, b, c, d \in \mathbf{F}$ with $a(v_1 - v_2) + b(v_2 - v_3) + c(v_3 - v_4) + dv_4 = 0$, which is equivalent to $av_1 + (b-a)v_2 + (c-b)v_3 + (d-c)v_4 = 0$. Since $v_1, v_2, v_3, v_4$ is linearly independent in $V$, this forces $a = b - a = c - b = d - c = 0$, i.e. $a = b = c = d = 0$, so $a, b, c, d$ are uniquely determined. Hence $v_1 - v_2, v_2 - v_3, v_3 - v_4, v_4$ is linearly independent in $V$.

7. True. Let $a_1(5v_1 - 4v_2) + a_2v_2 + a_3v_3 + \dots + a_mv_m = 0$, i.e. $5a_1v_1 + (a_2 - 4a_1)v_2 + a_3v_3 + \dots + a_mv_m = 0$, which implies $5a_1 = a_2 - 4a_1 = a_3 = \dots = a_m = 0$, i.e. $a_1 = a_2 = \dots = a_m = 0$.

8. Consider $a_1\lambda v_1 + a_2\lambda v_2 + \dots + a_m\lambda v_m = 0$, which implies $a_1\lambda = a_2\lambda = \dots = a_m\lambda = 0$, since $v_1, v_2, \dots, v_m$ is linearly independent. As $\lambda \neq 0$, we get $a_1 = a_2 = \dots = a_m = 0$, so $\lambda v_1, \lambda v_2, \dots, \lambda v_m$ is linearly independent.

9. Not true. Let $v_1, v_2, \dots, v_m$ be $(1,0,0,\dots,0), (0,1,0,\dots,0), \dots, (0,\dots,0,1)$ respectively, let $w_3, \dots, w_m$ be $(0,0,2,0,\dots,0), \dots, (0,\dots,0,2)$, and let $w_1 = (0,-1,0,\dots,0)$ and


$w_2 = (-1,0,\dots,0)$. Then for $i \ge 3$, $w_i + v_i = (0,\dots,0,3,0,\dots,0)$ with the $3$ in the $i$th position, while $w_1 + v_1 = (1,-1,0,\dots,0)$ and $w_2 + v_2 = (-1,1,0,\dots,0)$. It's not hard to see that $(w_1 + v_1) + (w_2 + v_2) = 0$, so $w_1 + v_1, w_2 + v_2, \dots, w_m + v_m$ is linearly dependent.

10. If $v_1 + w, v_2 + w, \dots, v_m + w$ is linearly dependent, then by the Linear Dependence Lemma (2.21) there exists $j \in \{1, 2, \dots, m\}$ so that $v_j + w \in \operatorname{span}(v_1 + w, \dots, v_{j-1} + w)$. Hence $v_j + w = a_1(v_1 + w) + \dots + a_{j-1}(v_{j-1} + w)$, or
\[ (a_1 + \dots + a_{j-1} - 1)w = v_j - a_1v_1 - a_2v_2 - \dots - a_{j-1}v_{j-1}. \]
If $a_1 + \dots + a_{j-1} - 1 = 0$ then $v_j - a_1v_1 - a_2v_2 - \dots - a_{j-1}v_{j-1} = 0$. If $j = 1$ this means $v_j = 0$, so $v_1, v_2, \dots, v_m$ is linearly dependent, a contradiction. If $j \ge 2$ then $v_j = a_1v_1 + \dots + a_{j-1}v_{j-1}$, which also implies that $v_1, \dots, v_m$ is linearly dependent, a contradiction.

Thus we must have $a_1 + \dots + a_{j-1} - 1 \neq 0$. Since the right-hand side lies in $\operatorname{span}(v_1, \dots, v_m)$, we get $w \in \operatorname{span}(v_1, \dots, v_m)$.

11. If v1, …, vm, w is linearly independent then w ∉ span(v1, …, vm) by Corollary 4.1.5. Conversely, if w ∉ span(v1, …, vm), assume for contradiction that v1, …, vm, w is linearly dependent; then the Linear Dependence Lemma (2.21) gives w ∈ span(v1, …, vm), a contradiction. Done.

12. Because P4(F) = span(1, x, x², x³, x⁴), by Theorem 4.1.2 (2.23) the length of any linearly independent list is at most the length of the spanning list 1, x, x², x³, x⁴, which is 5. That explains why we can't have 6 linearly independent polynomials in P4(F).

13. Because 1, x, x², x³, x⁴ is a linearly independent list of length 5 in P4(F), by Theorem 4.1.2 (2.23) the length of any spanning list is at least the length of this linearly independent list, which is 5. Thus no list of four polynomials spans P4(F).

14. Prove that V is infinite-dimensional if and only if there is a sequence v1, v2, … of vectors in V such that v1, …, vm is linearly independent for every positive integer m.

Proof. If V is infinite-dimensional: pick v1 ∈ V with v1 ≠ 0, and for each i pick vi ∈ V with vi ∉ span(v1, …, vi−1). Such vi exists; otherwise every v ∈ V would lie in span(v1, …, vi−1), so V ⊆ span(v1, …, vi−1), and since also span(v1, …, vi−1) ⊆ V, a finite list of vectors would span V, a contradiction. Note that since v1, …, vi−1 is linearly independent and vi ∉ span(v1, …, vi−1), Corollary 4.1.6 gives that v1, …, vi is linearly independent. Continuing this process constructs the desired sequence.

If such a sequence of vectors exists in V: assume for contradiction that V is finite-dimensional; then there is a finite spanning list in V of some length ℓ. However, the sequence yields linearly independent lists in V of arbitrarily large length, contradicting Theorem 4.1.2, which says that in a finite-dimensional vector space the length of any linearly independent list is at most the length of a spanning list. We are done.

15. For F∞, consider the sequence (1, 0, …), (0, 1, 0, …), (0, 0, 1, 0, …), …; then by Exercise 14, F∞ is infinite-dimensional.

16. Consider the sequence of continuous real-valued functions on [0, 1]: 1, x, x², x³, … and we are done.

17. Nice problem. Since pj(2) = 0 for each j, x − 2 divides each pj. Let qj = pj/(x − 2); then q0, q1, …, qm are polynomials in Pm−1(F). If q0, q1, …, qm were linearly independent, it would be a linearly independent list of length m + 1, larger than the length m of the spanning list 1, x, …, x^{m−1} of Pm−1(F), contradicting Theorem 4.1.2. Thus q0, q1, …, qm is linearly dependent, which means there exist ai ∈ F, not all 0, with Σ_{i=0}^m ai qi = 0. Then (x − 2) Σ_{i=0}^m ai qi = 0, i.e. Σ_{i=0}^m ai pi = 0. Hence p0, …, pm is linearly dependent.

4.3. 2.B Bases

Theorem 4.3.1 (2.31) Every spanning list in a vector space can be reduced to a basis of the vector space.

Theorem 4.3.2 (2.32) Every finite-dimensional vector space has a basis.

Theorem 4.3.3 (2.33) Every linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis of the vector space.

Theorem 4.3.4 (2.34) Suppose V is finite-dimensional and U is a subspace of V. Then there is a subspace W of V such that V = U ⊕ W.

Proof. By Theorem 4.1.3, U is also finite-dimensional. Hence, by Theorem 4.3.2, U has a basis v1, …, vm. Since v1, …, vm is linearly independent, by Theorem 4.3.3 there exist vectors w1, w2, …, wn such that v1, …, vm, w1, …, wn is a basis of V. Let W = span(w1, …, wn); then V = U ⊕ W.

Exercise 8 of chapter 2B gives another rephrasing of Theorem 4.3.4: if there exist subspaces U, W of V with V = U ⊕ W, with bases {u1, …, um} and {w1, …, wn} respectively, then u1, …, um, w1, …, wn is a basis of V.

4.4. Addition, scalar multiplication of specific lists of vectors

The book doesn't seem to mention this. I realised it while doing the exercises. The writing becomes much easier if I state a generalisation here, so I can refer to it later in the exercises. It basically says that spanning lists, bases, and linearly independent lists have addition and scalar multiplication properties.

Proposition 4.4.1 Given (v1, v2, …, vm) that is a basis/spanning list/linearly independent list, then (v1, v2, …, vi + w, vi+1, …, vm) is also a basis/spanning list/linearly independent list, where w = Σ_{j=1}^m bj vj with bi ≠ −1 and bj ∈ F for all 1 ≤ j ≤ m.

Proof. It’s not hard to see that if (v1

, . . . , vm

) is a spanning list then so is (v1

, . . . , vi

+w, vi+1

, . . . , vm

).If (v

1

, . . . , vm

) is linearly independent, we consider ai

2 F so a1

v1

+ a2

v2

+ . . .+ ai

(vi

+w) +. . .+ a

m

vm

= 0 or

(a1

+ ai

b1

)v1

+ (a2

+ ai

b2

)v2

+ . . .+ (ai

+ ai

bi

)vi

+ . . .+ (am

+ ai

bm

)vm

= 0.

This follows aj

+ ai

bj

= 0 for all 1 j m. Consider ai

+ ai

bi

= 0 or ai

(1 + bi

) = 0, sincebi

6= �1 so ai

= 0. We also have aj

+ai

bj

= 0 for all 1j m so we find aj

= 0 for all 1 j m.Hence, (v

1

, v2

, . . . , vi

+ w, . . . , vm

) is linearly independent.
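As a quick numerical sanity check of this proposition (my own illustration, not from the book; the random vectors, the chosen coefficients, and the numpy usage are all assumptions), replacing v_i by v_i + w with b_i ≠ −1 should not change the rank of the list:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
V = rng.normal(size=(m, m))          # rows v_1..v_m: almost surely a basis of R^4
b = np.array([2.0, -0.5, 3.0, 1.0])  # coefficient on v_2 is b[1] = -0.5 != -1
w = b @ V                            # w = sum_j b_j v_j
i = 1                                # replace v_2 (0-indexed position 1)
W = V.copy()
W[i] = V[i] + w                      # new list (v_1, v_2 + w, v_3, v_4)

rank_before = int(np.linalg.matrix_rank(V))
rank_after = int(np.linalg.matrix_rank(W))
print(rank_before, rank_after)
```

Since v_i + w = (1 + b_i)v_i + Σ_{j≠i} b_j v_j and 1 + b_i ≠ 0, the rank is preserved.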

Using this proposition we can solve exercises 6 and 7 in chapter 2A, and we can construct a new basis from a known basis of a vector space, as in exercises 3, 4, 5, 6 in chapter 2B.

4.5. Exercises 2B

1. If (v1, …, vm) is a basis of V then so is (v1, …, vm−1, vm + v1) by Proposition 4.4.1. Hence, for V to have only one basis, we must have vm = vm + v1, i.e. v1 = 0. But a list containing the vector 0 cannot be linearly independent, so the basis must be the empty list. Thus {0} is the only vector space that has exactly one basis.

2. Not hard to do.

3. (a) U = {(x1, x2, x3, x4, x5) ∈ R⁵ : x1 = 3x2, x3 = 7x4}. A basis of U is (3, 1, 0, 0, 0), (0, 0, 7, 1, 0), (0, 0, 0, 0, 1).
(b) Adding the two vectors (0, 1, 0, 0, 0) and (0, 0, 0, 1, 0) to the basis of U gives a basis of R⁵.
(c) Following the proof of Theorem 4.3.4, pick W = span((0, 1, 0, 0, 0), (0, 0, 0, 1, 0)); then R⁵ = U ⊕ W.

4. (a) A basis of U = {(z1, z2, z3, z4, z5) ∈ C⁵ : 6z1 = z2, z3 + 2z4 + 3z5 = 0} is (1, 6, 0, 0, 0), (0, 0, 4, 1, −2), (0, 0, 7, 1, −3). We need to prove that this list is a basis, i.e. that there exist unique m, n ∈ C with m(4, 1, −2) + n(7, 1, −3) = (z3, z4, z5) for any z3, z4, z5 ∈ C satisfying z3 + 2z4 + 3z5 = 0. Note that 4 + 2·1 + 3·(−2) = 0 and 7 + 2·1 + 3·(−3) = 0, so once m, n are determined by z3, z4, they automatically work for z5. Hence it reduces to m(4, 1) + n(7, 1) = (z3, z4), i.e. 4m + 7n = z3 and m + n = z4. Separating real and imaginary parts gives the systems

4 Re(m) + 7 Re(n) = Re(z3), Re(m) + Re(n) = Re(z4)

and

4 Im(m) + 7 Im(n) = Im(z3), Im(m) + Im(n) = Im(z4),

each of which has a unique solution. Hence m, n are uniquely determined, and the list is a basis.

(b) Proposition 4.4.1 lets us construct a basis of C⁵ that contains the three vectors in (a). Let vi = (0, …, 0, 1, 0, …, 0) with the 1 in the i-th position; then v1, v2, v3, v4, v5 is a basis of C⁵, so by Proposition 4.4.1 so is {v1 + 6v2, v2, 4v3 + v4 − 2v5, v4, v5} = {u1, v2, u3, v4, v5}, which is

(1, 6, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 4, 1, −2), (0, 0, 0, 1, 0), (0, 0, 0, 0, 1).

From here we find that u1, v2, u3, (7/4)u3 − (3/4)v4 + (1/2)v5, v5 is also a basis, which is

(1, 6, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 4, 1, −2), (0, 0, 7, 1, −3), (0, 0, 0, 0, 1).

(c) Following the proof of Theorem 4.3.4, pick W = span((0, 1, 0, 0, 0), (0, 0, 0, 0, 1)) and we are done.
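A small numerical check of part (b) (my own addition; the numpy rank computation is an assumption): the five vectors should have rank 5, and exactly the first, third, and fourth should satisfy the defining conditions of U.

```python
import numpy as np

B = np.array([
    [1, 6, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 4, 1, -2],
    [0, 0, 7, 1, -3],
    [0, 0, 0, 0, 1],
], dtype=float)

# rank 5 means the list is a basis of the 5-dimensional space
rank = int(np.linalg.matrix_rank(B))

def in_U(z):
    # membership test for U = {z : 6 z1 = z2, z3 + 2 z4 + 3 z5 = 0}
    return bool(np.isclose(6 * z[0], z[1]) and np.isclose(z[2] + 2 * z[3] + 3 * z[4], 0))

members = [in_U(row) for row in B]
print(rank, members)
```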

5. True. Note that 1, x, x², x³ is a basis of P3(F), so by Proposition 4.4.1 the list 1, x, x² + x³, x³ is also a basis of P3(F), and none of its vectors has degree 2.

6. True, by Proposition 4.4.1.

7. It’s not true. A counterexample: Pick V = F4 and U = {(x1

, x2

, x3

, x4

) 2 F4

: x2

= 2x1

}.Note that v

1

= (0, 0, 0, 1), v2

= (0, 0, 1, 0) 2 U and v4

= (1, 0, 0, 0), v3

= (0, 1, 0, 0) 62 Uand (0, 0, 0, 1), (0, 0, 1, 0) is not a basis of U (it does not span U).

8. Since V = U ⊕ W, every v ∈ V can be represented uniquely as v = u + w with u ∈ U, w ∈ W. Since u1, …, um is a basis of U, u can be represented uniquely as Σ_{i=1}^m ai ui. Similarly, w can be represented uniquely as w = Σ_{i=1}^n bi wi. Hence v can be represented uniquely as a linear combination of u1, …, um, w1, …, wn, so u1, …, um, w1, …, wn is a basis of V.


4.6. 2.C Dimension

Theorem 4.6.1 (2.35) Any two bases of a finite-dimensional vector space have the same length.

Proof. This follows from Theorem 4.1.2. Consider a basis v1, …, vm of the finite-dimensional vector space V; it is a list of length m that is both linearly independent and spanning. Any other basis is also a linearly independent list, so its length is at most the length of the spanning list v1, …, vm, which is m. It is also a spanning list, so its length is at least the length of the linearly independent list v1, …, vm, which is m. Hence every basis has length m.

Definition 4.6.2. The dimension of a finite-dimensional vector space is the length of any basisof the vector space. The dimension of V (if V is finite-dimensional) is denoted by dimV .

Theorem 4.6.3 (2.38) If V is finite-dimensional and U is a subspace of V, then dim U ≤ dim V.

Proof. This follows from Theorem 4.3.3: a basis u1, …, um of U is linearly independent, so there exist w1, …, wn ∈ V such that u1, …, um, w1, …, wn is a basis of V. Therefore dim U ≤ dim V.

Theorem 4.6.4 (2.39) Suppose V is finite-dimensional. Then every linearly independent list of vectors in V of length dim V is a basis of V.

Proof. By Theorem 4.3.3, every linearly independent list can be extended to a basis. Since the list already has length dim V, no vector needs to be added, i.e. the list is a basis.

Theorem 4.6.5 (2.42) Suppose V is finite-dimensional. Then every spanning list of vectors in V of length dim V is a basis of V.

Theorem 4.6.6 (Dimension of a sum, 2.43) If U1 and U2 are subspaces of a finite-dimensional vector space, then

dim(U1 + U2) = dim U1 + dim U2 − dim(U1 ∩ U2).

Proof. Note that U1 ∩ U2 is a subspace of both U1 and U2. Let v1, …, vm be a basis of the vector space U1 ∩ U2; then by Theorem 4.3.3 there exist w1, …, wn ∈ U1 such that v1, …, vm, w1, …, wn is a basis of U1, and there exist u1, …, ul ∈ U2 such that v1, …, vm, u1, …, ul is a basis of U2. The right-hand side of the identity is then m + n + l, so it suffices to prove dim(U1 + U2) = m + n + l. Indeed, we will prove that

v1, …, vm, u1, …, ul, w1, …, wn (2)

is a basis of U1 + U2. This list obviously spans U1 + U2. Before proving it is linearly independent, notice that by Exercise 8 of 2B we have U1 = (U1 ∩ U2) ⊕ S where S = span(w1, …, wn), so S ∩ (U1 ∩ U2) = {0} by the theorem on direct sums of two subspaces (3.2.4).

Coming back to (2), assume the list is linearly dependent. Since v1, …, vm, u1, …, ul is linearly independent, the Linear Dependence Lemma 4.1.1 gives some 1 ≤ j ≤ n with wj ∈ span(v1, …, vm, u1, …, ul, w1, …, wj−1); rearranging this relation yields a vector v ∈ S with v ≠ 0 and v ∈ U2. Since S ⊂ U1, this gives v ∈ U1 ∩ U2, so v ∈ S ∩ (U1 ∩ U2) with v ≠ 0, contradicting the above. Thus the list is linearly independent, so (2) is a basis of U1 + U2 and dim(U1 + U2) = m + n + l = dim U1 + dim U2 − dim(U1 ∩ U2).
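The dimension-of-a-sum formula can be checked on a concrete pair of coordinate subspaces (my own illustration; the choice of subspaces and the rank computations are assumptions):

```python
import numpy as np

e = np.eye(4)
U1 = e[[0, 1]]   # U1 = span(e1, e2)
U2 = e[[1, 2]]   # U2 = span(e2, e3); here U1 ∩ U2 = span(e2), dimension 1

dim_U1 = int(np.linalg.matrix_rank(U1))
dim_U2 = int(np.linalg.matrix_rank(U2))
dim_sum = int(np.linalg.matrix_rank(np.vstack([U1, U2])))   # dim(U1 + U2)
dim_cap = dim_U1 + dim_U2 - dim_sum   # by 2.43; should recover the known value 1
print(dim_sum, dim_cap)
```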

4.7. Exercises 2C

1. Let v1, …, vm be a basis of U; this list is linearly independent with length equal to dim V, so by Theorem 4.6.4 it is a basis of V. Therefore U = V.

2. We have dim R² = 2, and by Theorem 4.6.3, if U is a subspace of R² then dim U ≤ 2. If dim U = 0 then U = {0}; if dim U = 1 then U = span((x, y)) with (x, y) ≠ 0, which is a line in R² through the origin; if dim U = 2 then U = R².

3. Similarly, dim U ≤ 3 for U a subspace of R³. If dim U = 0 then U = {0}; if dim U = 1 then U is the set of points on a line in R³ through the origin. If dim U = 2 then U = span((x1, y1, z1), (x2, y2, z2)) with (x1, y1, z1) ≠ k(x2, y2, z2) for every k ∈ R, i.e. the two vectors are not on the same line through the origin; U is then the plane through the origin, (x1, y1, z1) and (x2, y2, z2).

4. a) With U = {p ∈ P4(F) : p(6) = 0}, the list x − 6, (x − 6)x, (x − 6)x², (x − 6)x³ is a basis of the subspace U of P4(F). Indeed, it obviously spans U, and it is linearly independent since there are no a, b, c, d, not all zero, with a(x − 6) + b(x − 6)x + c(x − 6)x² + d(x − 6)x³ = 0 for all x ∈ R.
b) 1, x − 6, (x − 6)x, (x − 6)x², (x − 6)x³ is a basis of P4(F) since it spans P4(F) and has length 5 = dim P4(F).
c) By Theorem 4.3.4, W = span(1).

5. a) If U = {p ∈ P4(F) : p''(6) = 0} then 1, x, (x − 6)³, (x − 6)⁴ is a basis of U. This list is obviously linearly independent. Since dim P4(F) = 5, Theorem 4.6.3 gives dim U ≤ 5, and if dim U = 5 then U = P4(F) by Theorem 4.6.4, a contradiction since x² ∈ P4(F) but x² ∉ U. Thus dim U = 4 (the list gives four linearly independent vectors in U), so again by Theorem 4.6.4, 1, x, (x − 6)³, (x − 6)⁴ is a basis of U.

b) Adding x² to 1, x, (x − 6)³, (x − 6)⁴ gives a basis of P4(F), because the resulting list is linearly independent and has length 5.
c) By Theorem 4.3.4, W = span(x²).

6. a) With U = {p ∈ P4(F) : p(2) = p(5)}, the list 1, (x − 2)(x − 5), (x − 2)(x − 5)x, (x − 2)(x − 5)x² is a basis of U. Indeed, it is linearly independent, and by an argument similar to the above, dim U ≤ 4, so we are done.
b) Adding x to the list gives a linearly independent list of length 5, hence a basis of P4(F).
c) By Theorem 4.3.4, W = span(x).

7. a) With U = {p ∈ P4(F) : p(2) = p(5) = p(6)}, the list 1, (x − 2)(x − 5)(x − 6), (x − 2)(x − 5)(x − 6)x is a basis of U. Indeed, it is linearly independent, and U is a subspace of U2 = {p ∈ P4(F) : p(2) = p(5)}. From Exercise 6 of 2C we know dim U2 = 4, so dim U ≤ 4. If dim U = 4 then U = U2 by Theorem 4.6.4, but (x − 2)(x − 5) ∈ U2 while (x − 2)(x − 5) ∉ U, a contradiction. Thus dim U ≤ 3, so dim U = 3, which means 1, (x − 2)(x − 5)(x − 6), (x − 2)(x − 5)(x − 6)x is a basis of U.
b) Adding x, x² to the list gives a linearly independent list of length 5, hence a basis of P4(F).
c) By Theorem 4.3.4, W = span(x, x²).

8. a) With U = {p ∈ P4(F) : ∫_{−1}^{1} p = 0}, the list 2x, 3x² − 1, 4x³, 5x⁴ − 3x² is a basis of U. Indeed, this list is linearly independent, and dim U ≤ 4 because x² ∈ P4(F) but x² ∉ U; hence dim U = 4, which means the list is a basis of U by Theorem 4.6.4.
An interesting remark: if V = {p ∈ P5(F) : p(−1) = p(1), p(0) = 0}, then differentiating the polynomials in a basis of V yields a basis of U (the list above is obtained from x², x³ − x, x⁴, x⁵ − x³).
b) Adding 1 to the list gives a basis of P4(F).
c) Similarly, W = span(1).
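The claimed basis of U can be verified numerically (my own sketch; the use of `numpy.polynomial.Polynomial` for the integrals and the coefficient-matrix rank test are assumptions): each polynomial should integrate to 0 over [−1, 1], and the four coefficient vectors should be linearly independent.

```python
import numpy as np
from numpy.polynomial import Polynomial

# coefficients listed lowest degree first: 2x, 3x^2 - 1, 4x^3, 5x^4 - 3x^2
basis = [
    Polynomial([0, 2]),
    Polynomial([-1, 0, 3]),
    Polynomial([0, 0, 0, 4]),
    Polynomial([0, 0, -3, 0, 5]),
]

# definite integral over [-1, 1] via an antiderivative
integrals = [float(p.integ()(1) - p.integ()(-1)) for p in basis]

# linear independence: pad coefficient vectors to length 5 and check the rank
M = np.array([np.pad(p.coef, (0, 5 - len(p.coef))) for p in basis])
rank = int(np.linalg.matrix_rank(M))
print(integrals, rank)
```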

9. If w ∉ span(v1, …, vm) then w + v1, …, w + vm is linearly independent in V, so dim span(w + v1, …, w + vm) = m. If w ∈ span(v1, …, vm) then w can be written uniquely as w = Σ_{i=1}^m ai vi. Let j (1 ≤ j ≤ m) be the largest index with aj ≠ 0, and consider U = span(w + v1, …, w + vj−1, w + vj+1, …, w + vm). The list w + v1, …, w + vj−1, w + vj+1, …, w + vm is linearly independent, because otherwise

b1(w + v1) + … + bj−1(w + vj−1) + bj+1(w + vj+1) + … + bm(w + vm) = 0

would hold for some bi ∈ F not all 0. But this yields a representation w = c1v1 + … + cj−1vj−1 + cj+1vj+1 + … + cmvm, a contradiction: this representation has zero coefficient on vj, whereas aj ≠ 0 in the unique representation of w. Thus w + v1, …, w + vj−1, w + vj+1, …, w + vm is a basis of U, so dim U = m − 1. On the other hand, U is a subspace of span(w + v1, …, w + vm), so

dim span(w + v1, …, w + vm) ≥ dim U = m − 1.


10. The list p0, p1, …, pm is linearly independent. Indeed, consider P(x) = Σ_{i=0}^m ai pi with ai ∈ F. We have [x^m]P = am[x^m]pm, so if P(x) = 0 for all x ∈ R then am[x^m]pm = 0, which gives am = 0. This in turn gives [x^{m−1}]P = am−1[x^{m−1}]pm−1, and continuing we obtain ai = 0 for all 0 ≤ i ≤ m. Thus the list is linearly independent with length m + 1 = dim Pm(F), so by Theorem 4.6.4, p0, …, pm is a basis of Pm(F).
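A quick numerical illustration of this triangularity argument (my own sketch; the random coefficients are assumptions): polynomials with deg p_j = j for j = 0, …, m have a triangular coefficient matrix with nonzero diagonal, hence full rank.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
# row j holds the coefficients of p_j (lowest degree first), with deg p_j = j
coeffs = np.zeros((m + 1, m + 1))
for j in range(m + 1):
    coeffs[j, : j + 1] = rng.normal(size=j + 1)
    coeffs[j, j] = 1.0 + abs(coeffs[j, j])   # force a nonzero leading coefficient

# lower-triangular with nonzero diagonal => invertible => the p_j form a basis
rank = int(np.linalg.matrix_rank(coeffs))
print(rank)
```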

11. From Theorem 4.6.6 we get dim(U ∩ W) = 0, i.e. U ∩ W = {0}. It follows that R⁸ = U ⊕ W.

12. We have dim U = dim W = 5, and if U ∩ W = {0} then dim(U ∩ W) = 0. Hence by Theorem 4.6.6, dim(U + W) = 10. But U + W is a subspace of R⁹, which has dimension 9, a contradiction with Theorem 4.6.3. Thus U ∩ W ≠ {0}.

13. Since dim U = dim W = 4 and dim(U + W) ≤ dim R⁶ = 6, we get dim(U ∩ W) ≥ 2. It follows that there exist two vectors u, w ∈ U ∩ W such that neither is a scalar multiple of the other.

14. dim(U1 + … + Um) ≤ dim U1 + … + dim Um follows from Theorem 4.6.6 by induction on m.

15. Since dim V = n, there exists a basis v1, …, vn of V. Pick Ui = span(vi); then of course dim Ui = 1 and V = U1 ⊕ U2 ⊕ … ⊕ Un.

16. It’s not hard to prove U1

+ . . .+Um

is finite-dimensional and note that let vi,1

, vi,2

, . . . , vi,bi

be the basis of Ui

then dim Ui

= bi

and

v1,1

, . . . , v1,b1 , . . . , vm,1

, . . . , vm,bm

spans U1

+ . . .+Um

and is linearly independent since U1

+ . . .+Um

is a direct sum. Thus,dim U

1

� U2

� . . .� Um

= b1

+ . . .+ bm

= dim U1

+ . . .+ dim Um

.

17. This is false: take three distinct lines through the origin in R² as U1, U2, U3; then the left-hand side is 2 and the right-hand side is 3.


5. Chapter 3: Linear Maps

5.1. 3.A The vector space of linear maps

The set of all linear maps from V to W is denoted by L(V, W).

Theorem 5.1.1 (3.5 Linear maps and basis of domain) Suppose v1, …, vn is a basis of V and w1, …, wn ∈ W. There exists a unique linear map T : V → W such that Tvj = wj for each j = 1, …, n.

Proposition 5.1.2 (Linear maps take 0 to 0) Suppose T is a linear map from V to W. Then T(0) = 0.

5.2. Exercises 3A

1. We have T(0, 0, 0) = (0 + b, 0), but by Proposition 5.1.2, T(0, 0, 0) = (0, 0), so b = 0. On the other hand, we have T(1, 1, 1) = (1, 6 + c), so T(2, 2, 2) = 2T(1, 1, 1) = (2, 12 + 2c); but also T(2, 2, 2) = (2, 12 + 8c). Thus c = 0. Conversely, if b = c = 0 then it is obvious that T is linear.

2. Consider p = 1 ∈ P(R); then T(1) = (3 + b, 15/4 + c sin 1), so T(2) = 2T(1) = (6 + 2b, 15/2 + 2c sin 1). However, T(2) = (6 + 4b, 15/2 + c sin 2). It follows that b = 0, c = 0. The reverse direction is not hard.

3. Let T(0, …, 0, 1, 0, …, 0) = (A1,i, A2,i, …, Am,i), where the 1 is in the i-th position. Then for any (x1, …, xn) ∈ Fⁿ we have

T(x1, …, xn) = Σ_{i=1}^n xi · T(0, …, 0, 1, 0, …, 0)
= Σ_{i=1}^n xi · (A1,i, A2,i, …, Am,i)
= (A1,1 x1 + … + A1,n xn, …, Am,1 x1 + … + Am,n xn).
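The formula above says a linear map Fⁿ → Fᵐ is exactly a matrix–vector product; a minimal sketch (my own illustration — the particular matrix A is made up):

```python
import numpy as np

# column i of A holds T(e_i), as in the displayed formula
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])   # a made-up T : R^3 -> R^2

def T(x):
    # apply T coordinate-by-coordinate, exactly as written above
    return tuple(sum(A[r, i] * x[i] for i in range(3)) for r in range(2))

x = (1.0, 1.0, 1.0)
assert np.allclose(T(x), A @ np.array(x))   # agrees with the matrix-vector product
print(T(x))
```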

4. Assume the contrary, that v1, …, vm is linearly dependent. Then there exist ai ∈ F, not all 0, such that Σ_{i=1}^m ai vi = 0 in V. Hence

T(Σ_{i=1}^m ai vi) = Σ_{i=1}^m ai Tvi = 0,

so Tv1, …, Tvm is linearly dependent, a contradiction. Thus v1, …, vm is linearly independent in V.

5. Yes, L(V,W ) is a vector space.


6. Prove 3.9, page 56 (associativity: (T1T2)T3 = T1(T2T3)). Let T3 ∈ L(U, V), T2 ∈ L(V, W), T1 ∈ L(W, Z). Then for every u ∈ U,

((T1T2)T3)(u) = (T1T2)(T3(u)) = T1(T2(T3(u))) = T1((T2T3)(u)) = (T1(T2T3))(u).

7. Fix a nonzero v ∈ V; then Tv = w ∈ V, and since dim V = 1 we have w = λv for some λ ∈ F. Furthermore, by Theorem 5.1.1 there exists a unique linear map in L(V, V) sending v to λv, namely multiplication by λ; hence Tu = λu for all u ∈ V.

8. Let φ(x, y) = ∛(xy²). Then φ(ax, ay) = a∛(xy²) = aφ(x, y), but φ((x, y) + (z, t)) = φ(x + z, y + t) = ∛((x + z)(y + t)²) ≠ φ(x, y) + φ(z, t) in general. Thus φ is homogeneous but not a linear map.

9. Let φ(x + iy) = y + ix. Then φ((x + iy) + (z + it)) = (y + t) + i(x + z) = φ(x + iy) + φ(z + it). But φ(i(x + iy)) = φ(−y + ix) = x − iy ≠ iφ(x + iy) = −x + iy. Thus φ is additive but not linear.

10. Pick v1 ∈ U and v2 ∈ V \ U with v1, v2 and Sv1 all nonzero. Then T(v1 + v2) = 0 since v1 + v2 ∉ U, but Tv1 + Tv2 = Sv1 + 0 = Sv1 ≠ 0. Thus T(v1 + v2) ≠ Tv1 + Tv2, so T is not a linear map on V.

11. Consider S ∈ L(U, W) and let v1, …, vm be a basis of U, with Svi = wi. By Theorem 4.3.3 there exist u1, …, ul such that v1, …, vm, u1, …, ul is a basis of V. Pick arbitrary vectors ℓ1, …, ℓl in W. By Theorem 5.1.1 there exists a linear map T : V → W with Tvi = wi = Svi and Tui = ℓi; this T extends S.

12. Since V is finite-dimensional, there exists a basis v1, …, vm of V. Since W is infinite-dimensional, by Exercise 14 (2A) there exists a sequence of vectors w1, w2, … such that w1, w2, …, wk is linearly independent for every k ∈ N. To show that L(V, W) is infinite-dimensional, we prove the existence of a sequence T1, T2, … with Ti ∈ L(V, W) such that T1, …, Tk is linearly independent for every k ∈ N.

Indeed, by Theorem 5.1.1 there exists Ti : V → W with Ti(vj) = w_{(i−1)m+j} for all 1 ≤ j ≤ m. Thus for v = Σ_{j=1}^m αj vj we have Ti(v) = Σ_{j=1}^m αj w_{(i−1)m+j}. Hence, for any k ∈ N and any v ∈ V with v ≠ 0, v = Σ_{j=1}^m αj vj,

Σ_{i=1}^k λi Ti(v) = 0 ⟺ Σ_{i=1}^k Σ_{j=1}^m λi αj w_{(i−1)m+j} = 0.

Since w1, …, wkm is linearly independent, this forces λi αj = 0 for all 1 ≤ i ≤ k, 1 ≤ j ≤ m. However, since v ≠ 0 there must be some αl ≠ 0 (1 ≤ l ≤ m), which gives λi = 0 for all 1 ≤ i ≤ k, i.e. T1, …, Tk is linearly independent. As this holds for every k ∈ N, L(V, W) is infinite-dimensional.

13. Since v1, …, vm is linearly dependent, there exist an index i and αj ∈ F with vi = Σ_{j≠i} αj vj. Pick arbitrary wj (1 ≤ j ≤ m, j ≠ i), and then pick wi with wi ≠ Σ_{j≠i} αj wj; any linear T with Tvj = wj for j ≠ i must satisfy Tvi = Σ_{j≠i} αj wj ≠ wi. Done.

14. Let v1, …, vm (m ≥ 2) be a basis of V. For i ≥ 3 let Svi = Tvi = vi, and let Sv1 = Tv2 = −v1, Sv2 = Tv1 = v2. Then ST(v1) = S(v2) = v2, but TS(v1) = T(−v1) = −v2. Thus ST ≠ TS.


5.3. 3.B Null Spaces and Ranges

Proposition 5.3.1 (3.14) Suppose T ∈ L(V, W); then null T is a subspace of V.

Proof. By the definition of the null space, if v ∈ null T then v ∈ V. If v1, v2 ∈ null T then Tv1 = Tv2 = 0, so T(v1 + v2) = Tv1 + Tv2 = 0; hence v1 + v2 ∈ null T. We also have T(λv1) = λTv1 = 0, so λv1 ∈ null T for any λ ∈ F.

Proposition 5.3.2 (3.16) Let T ∈ L(V, W); then T is injective if and only if null T = {0}.

Proof. Suppose T is injective, and assume for contradiction that there exists v ≠ 0 with v ∈ null T. Then for any u ∈ V we have T(u + v) = Tu + Tv = Tu with u + v ≠ u, contradicting injectivity.

Conversely, suppose null T = {0}. For any u, v ∈ V with Tu = Tv we have T(u − v) = 0, so u − v = 0, i.e. u = v. Hence T is injective.

Proposition 5.3.3 (3.19) If T ∈ L(V, W) then range T is a subspace of W.

Proof. We have T(0) = 0, so 0 ∈ range T. For any v1, v2 ∈ range T there exist u1, u2 ∈ V with Tu1 = v1, Tu2 = v2, so T(u1 + u2) = v1 + v2; hence v1 + v2 ∈ range T. We also have λv1 = λTu1 = T(λu1), so λv1 ∈ range T for any λ ∈ F. Thus range T is a subspace of W.

Definition 5.3.4. A function T : V → W is called surjective if its range equals W.

Theorem 5.3.5 (3.22, Fundamental Theorem of Linear Maps) Suppose V is finite-dimensional and T ∈ L(V, W). Then range T is finite-dimensional and

dim V = dim null T + dim range T.

Proof. Let v1, …, vm be a basis of V. For any v ∈ V, v = Σ_{i=1}^m αi vi, we have Tv ∈ range T and Tv = Σ_{i=1}^m αi Tvi. It follows that range T = span(Tv1, …, Tvm), so range T is finite-dimensional.

Let u1, …, um−k be a basis of null T. By Theorem 4.3.3, the linearly independent list u1, …, um−k can be extended to a basis u1, …, um−k, w1, …, wk of V. It suffices to prove that dim range T = k.

Indeed, let W = span(w1, …, wk). Note that for any u, v ∈ W with v ≠ u we have Tv ≠ Tu: otherwise T(u − v) = 0 with u − v ≠ 0, so u − v ∈ null T while also u − v ∈ W, contradicting the linear independence of u1, …, um−k, w1, …, wk.

We now prove that Tw1, Tw2, …, Twk is a basis of range T. From the argument above, Tw1, …, Twk is linearly independent. For any v ∈ range T there exists u ∈ V with Tu = v. By Theorem 4.3.4, V = null T ⊕ W, so there uniquely exist u1 ∈ null T and w ∈ W with u = u1 + w. Hence v = Tu = Tu1 + Tw = Tw = Σ_{i=1}^k αi Twi. Thus Tw1, …, Twk spans range T, so dim range T = k, as desired.
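The rank–nullity identity can be observed numerically (my own illustration; the random matrix and the use of the SVD to extract a null-space basis are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 5))            # a linear map T : R^5 -> R^3

rank = int(np.linalg.matrix_rank(A))   # dim range T
# right singular vectors beyond the rank span null T
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]                 # rows: orthonormal basis of null T
nullity = null_basis.shape[0]

assert np.allclose(A @ null_basis.T, 0)   # these vectors really lie in null T
print(A.shape[1], rank + nullity)         # dim V = dim null T + dim range T
```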

Theorem 5.3.6 (3.23) Suppose V and W are finite-dimensional vector spaces such that dim V > dim W. Then no linear map from V to W is injective.

Proof. Assume the contrary, that there is an injective linear map T ∈ L(V, W). Then null T = {0}, i.e. dim null T = 0. By Theorem 5.3.5, dim V = dim range T ≤ dim W since range T is a subspace of W, a contradiction. Thus there is no injective linear map from V to W.

Theorem 5.3.7 (3.24) Suppose V and W are finite-dimensional vector spaces such that dim V < dim W. Then no linear map from V to W is surjective.

Proof. For any T ∈ L(V, W) we have dim range T = dim V − dim null T ≤ dim V < dim W, which means range T ≠ W, so T is not surjective.

Theorem 5.3.8 (3.26) A homogeneous system of linear equations with more variables than equations has nonzero solutions.

Proof. Using the notation from the textbook, this means n > m, i.e. dim Fⁿ > dim Fᵐ. Hence no linear map from Fⁿ to Fᵐ is injective; in particular, for the map T ∈ L(Fⁿ, Fᵐ) defined by the system, dim null T > 0. This means the system of linear equations has nonzero solutions.

Theorem 5.3.9 (3.29) An inhomogeneous system of linear equations with more equations than variables has no solution for some choice of the constant terms.

Proof. We have dim Fⁿ < dim Fᵐ, so no linear map from Fⁿ to Fᵐ is surjective. Hence, for the map T ∈ L(Fⁿ, Fᵐ) defined by the system, range T ≠ Fᵐ, so there exists (c1, …, cm) ∈ Fᵐ with (c1, …, cm) ∉ range T. The system of linear equations with constant terms c1, …, cm then has no solution.

5.4. Exercises 3B

5.4.1. A way to construct (not) injective, surjective linear maps

This is based on Theorem 5.1.1. T is injective exactly when a basis of V is mapped to a linearly independent list in W (so the existence of an injective linear map from V to W requires dim V ≤ dim W). In other words, if v1, …, vm is a basis of V, then T ∈ L(V, W) is injective if and only if Tv1, …, Tvm is linearly independent in W; T is not injective otherwise. Similarly, T is surjective if and only if Tv1, …, Tvm spans W. Some examples are exercises 7, 8, 9, 10. Many exercises below use Theorem 5.1.1!
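This criterion reduces to a rank computation when coordinates are chosen (my own sketch; the particular image vectors are made up): stack T(v_1), …, T(v_m) as columns and compare the rank with dim V and dim W.

```python
import numpy as np

# columns are T(v_1), T(v_2) written in coordinates of a basis of W (dim W = 3)
images = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])

r = int(np.linalg.matrix_rank(images))
injective = (r == images.shape[1])    # images of the basis are linearly independent
surjective = (r == images.shape[0])   # images of the basis span W
print(injective, surjective)
```

Here dim V = 2 < 3 = dim W, so the map can be injective but, as Theorem 5.3.7 predicts, never surjective.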

5.4.2. Exercises

1. If T ∈ L(V, W) then from Theorem 5.3.5, dim V = 5, so we can let V = F⁵. Since range T is a subspace of W, 2 = dim range T ≤ dim W, so we can let W = F². Then we can write T(x, y, z, t, w) = (x + y, t + w). With this, null T = {(x, y, z, t, w) ∈ F⁵ : x + y = t + w = 0}, a basis of null T is (1, −1, 0, 0, 0), (0, 0, 1, 0, 0), (0, 0, 0, −1, 1), and range T = F².
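The claimed null-space basis can be checked numerically (my own sketch; the matrix form of T and the rank tests are assumptions):

```python
import numpy as np

A = np.array([[1, 1, 0, 0, 0],
              [0, 0, 0, 1, 1]], dtype=float)   # T(x, y, z, t, w) = (x + y, t + w)

null_candidates = np.array([
    [1, -1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, -1, 1],
], dtype=float)

in_null = bool(np.allclose(A @ null_candidates.T, 0))     # all lie in null T
dims_ok = (int(np.linalg.matrix_rank(null_candidates)) == 3
           and int(np.linalg.matrix_rank(A)) == 2)        # 3 + 2 = 5 = dim V
print(in_null, dims_ok)
```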

2. Since range S ⊂ null T, we have (TS)(v) = T(Sv) = 0 for every v, because Sv ∈ range S ⊂ null T. Thus, writing u = Tv,

(ST)²(v) = (ST)((ST)v) = (ST)(Su) = S((TS)u) = S(0) = 0.

3. (a) If v1, …, vm spans V then range T = V, i.e. T is surjective.
(b) If v1, …, vm is linearly independent then T(z1, …, zm) = 0 if and only if (z1, …, zm) = 0, i.e. null T = {0} and T is injective.

4. Let T1(x, y, z, t, w) = (x + y, 0, 0, t + z + w); then range T1 = {(x, 0, 0, y) ∈ R⁴}, so dim range T1 = 2 and dim null T1 = 3. Let T2(x, y, z, t, w) = (x + y, 0, t, 0); then (T1 + T2)(x, y, z, t, w) = (2x + 2y, 0, t, t + z + w), so dim range(T1 + T2) = 3. It follows that dim null(T1 + T2) = 2, i.e. T1 + T2 ∉ S = {T ∈ L(R⁵, R⁴) : dim null T > 2}. Hence S is not a subspace of L(R⁵, R⁴).

5. We construct such a linear map following the proof of Theorem 5.3.5. A basis of R⁴ is (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1); if we choose (1, 0, 0, 0), (0, 1, 0, 0) to be a basis of null T, we need T(0, 0, 1, 0), T(0, 0, 0, 1) to be a basis of range T. We want null T = {(x, y, 0, 0) ∈ R⁴} = range T, so we can set T(0, 0, 1, 0) = (1, 0, 0, 0) and T(0, 0, 0, 1) = (0, 1, 0, 0). Hence T can be T(x, y, z, t) = (z, t, 0, 0).

6. Because that will mean dim null T = dim range T so 5 = dim R5

= 2 · dim null T , acontradiction.

7. Let v_1, …, v_m be a basis of V and w_1, …, w_n a basis of W with n ≥ m ≥ 2. Construct T so that Tv_i = w_i for 1 ≤ i ≤ m−1 but Tv_m = 2w_{m−1}. Next construct S so that Sv_i = w_i for 1 ≤ i ≤ m−2 but Sv_{m−1} = 2w_m and Sv_m = w_m. It is clear that T, S ∈ R = {T ∈ L(V,W) : T is not injective}.

However, T + S is injective. Indeed, we have (T + S)v_i = 2w_i for 1 ≤ i ≤ m−2, while (T + S)v_{m−1} = w_{m−1} + 2w_m and (T + S)v_m = 2w_{m−1} + w_m. Note that 2w_1, …, 2w_{m−2}, w_{m−1} + 2w_m, 2w_{m−1} + w_m are linearly independent, so null (T + S) = {0}, i.e. T + S is injective. Thus R is not a subspace of L(V,W).


8. Notation as in exercise 7, but this time m ≥ n ≥ 2. Construct T so that Tv_i = w_i for 1 ≤ i ≤ n−2 and Tv_j = jw_{n−1} for n−1 ≤ j ≤ m. Construct S so that Sv_i = w_i for 1 ≤ i ≤ n−2 and Sv_j = (m + 1 − j)w_n for n−1 ≤ j ≤ m. It is clear that T, S ∈ R = {T ∈ L(V,W) : T is not surjective}.

However, T + S is surjective, since (T + S)v_1, …, (T + S)v_n is the list 2w_1, …, 2w_{n−2}, (n−1)w_{n−1} + (m−n+2)w_n, nw_{n−1} + (m−n+1)w_n, which spans W (the last two vectors are independent in span(w_{n−1}, w_n) because (n−1)(m−n+1) − n(m−n+2) = −(m+1) ≠ 0). Thus S + T is surjective, and R is not a subspace of L(V,W).

9. T is injective, so for any v ∈ span(v_1, …, v_m), Tv = 0 if and only if v = 0; i.e. Σ_{i=1}^m α_i Tv_i = 0 if and only if α_i = 0 for all 1 ≤ i ≤ m, i.e. Tv_1, …, Tv_m is linearly independent in W.

10. For any u ∈ range T there exists v ∈ V, say v = Σ_{i=1}^m α_i v_i, with Tv = u, so u = Σ_{i=1}^m α_i Tv_i. Thus any u ∈ range T can be expressed as a linear combination of Tv_1, …, Tv_m, which shows Tv_1, …, Tv_m spans range T.

11. If S_1(S_2 ⋯ S_n u) = S_1(S_2 ⋯ S_n v) then S_2 ⋯ S_n u = S_2 ⋯ S_n v, since S_1 is an injective linear map. Repeating this argument with S_2, …, S_n, we deduce u = v. This shows S_1 S_2 ⋯ S_n is injective.

12. Since null T is a subspace of the finite-dimensional vector space V, by theorem 4.3.3 a basis v_1, …, v_m of null T can be extended to a basis v_1, …, v_m, u_1, …, u_n of V. Let U = span(u_1, …, u_n); then U ∩ null T = {0}. On the other hand, for any v ∈ V there exist u ∈ U and z ∈ null T with v = u + z, so Tv = T(u + z) = Tu. This shows range T = {Tv : v ∈ V} = {Tu : u ∈ U}.

13. A basis for null T is (5, 1, 0, 0), (0, 0, 7, 1), so dim null T = 2. Combining with theorem 5.3.5 we find dim range T = 2 = dim W, and since range T is a subspace of W, range T = W by theorem 4.6.4. Thus T is surjective.

14. Similarly, use theorem 5.3.5 to find dim range T = 5 = dim R^5, so T is surjective.

15. Assume the contrary, that such a linear map T exists; then dim null T = 2. On the other hand, dim range T ≤ dim R^2 = 2, so dim null T + dim range T ≤ 4 < 5, contradicting theorem 5.3.5.

16. Assume the contrary, that V is infinite-dimensional. Then by exercise 14 (2A) there exists a sequence v_1, v_2, … such that v_1, …, v_m is linearly independent for every positive integer m. WLOG, say v_1, …, v_n is a basis of null T.

Since range T of a linear map on V is finite-dimensional, there exist u_1, …, u_k ∈ V such that Tu_1, …, Tu_k is a basis of range T. Note that there exists M > n such that all u_j (1 ≤ j ≤ k) can be represented as linear combinations of the v_i with i < M. Hence, since Tv_M ∈ range T,

Tv_M = Σ_{i=1}^{k} α_i Tu_i = Σ_{i=1}^{M−1} β_i Tv_i.

This shows v_M − Σ_{i=1}^{M−1} β_i v_i ∈ null T, so v_M = Σ_{i=1}^{M−1} β_i v_i + Σ_{j=1}^{n} γ_j v_j, a contradiction since v_1, …, v_M is linearly independent. Thus V is finite-dimensional.

17. By theorem 5.3.6, if there exists an injective linear map from V to W then dim V ≤ dim W. Conversely, if dim V ≤ dim W, let v_1, …, v_m be a basis of V and w_1, …, w_n (n ≥ m) a basis of W; then by theorem 5.1.1 there exists a linear map T : V → W with Tv_j = w_j for 1 ≤ j ≤ m. With this, we find null T = {0}, so T is injective.

18. By theorem 5.3.7, if there exists a surjective map from V onto W then dim V ≥ dim W. Conversely, if dim V ≥ dim W, let v_1, …, v_m be a basis of V and w_1, …, w_n (n ≤ m) a basis of W; then by theorem 5.1.1 there exists a linear map T : V → W with Tv_j = w_j for 1 ≤ j ≤ n. With this, we find range T = W, so T is surjective.

19. If there exists such a linear map T then by theorem 5.3.5, dim null T = dim U = dim V − dim range T ≥ dim V − dim W. Conversely, if dim U ≥ dim V − dim W, let u_1, u_2, …, u_m, v_1, …, v_n be a basis of V where u_1, …, u_m is a basis of U, and let w_1, …, w_k be a basis of W; then k ≥ n. By theorem 5.1.1 there exists a linear map T ∈ L(V,W) with Tu_i = 0 and Tv_j = w_j for 1 ≤ i ≤ m, 1 ≤ j ≤ n. With k ≥ n, we can show that null T = U.

20. If there exists such a linear map S, then Tu = Tv implies STu = STv, i.e. u = v. Thus T is injective. Conversely, if T is injective, let Tv_1, …, Tv_m be a basis of range T and extend it to a basis Tv_1, …, Tv_m, w_1, …, w_n of W. By theorem 5.1.1 there exists a linear map S ∈ L(W,V) with S(Tv_i) = v_i for 1 ≤ i ≤ m and S(w_j) = 0 for 1 ≤ j ≤ n.

For any v ∈ V we have Tv = Σ_{i=1}^m α_i Tv_i. Since T is injective, null T = {0}, so v = Σ_{i=1}^m α_i v_i. Hence

STv = S(Σ_{i=1}^m α_i Tv_i) = Σ_{i=1}^m α_i STv_i = Σ_{i=1}^m α_i v_i = v.

21. If there exists such a linear map S, then for any w ∈ W we have T(Sw) = w, so w ∈ range T. Since range T is a subspace of W, this shows range T = W, i.e. T is surjective.

Conversely, if T is surjective then range T = W. There exist v_1, …, v_m ∈ V such that Tv_1, …, Tv_m is a basis of W. By theorem 5.1.1 there exists a linear map S ∈ L(W,V) with S(Tv_i) = v_i for all 1 ≤ i ≤ m. It follows that for any w ∈ W, writing w = Σ_{i=1}^m α_i Tv_i,

TSw = T(Σ_{i=1}^m α_i S(Tv_i)) = T(Σ_{i=1}^m α_i v_i) = Σ_{i=1}^m α_i Tv_i = w.

22. For any v ∈ null ST we have Tv ∈ null S. Hence we may define a linear map T′ ∈ L(null ST, null S) by T′v = Tv for v ∈ null ST. Since null ST is a subspace of V, null T′ is a subspace of null T, so dim null T′ ≤ dim null T; and range T′ is a subspace of null S, so dim range T′ ≤ dim null S. By theorem 5.3.5,

dim null ST = dim null T′ + dim range T′ ≤ dim null T + dim null S.


23. Note that range ST is a subspace of range S, so dim range ST ≤ dim range S. We can write range ST = {S(Tv) : v ∈ U} = {Sv : v ∈ range T}. Let u_1, …, u_m be a basis of range T; then Su_1, …, Su_m spans range ST, so dim range ST ≤ dim range T. Thus dim range ST ≤ min{dim range S, dim range T}.

24. (Note that V is not necessarily finite-dimensional, so we cannot simply fix a basis for V.) If there exists such an S, then for any v ∈ null T_1 we have T_2v = ST_1v = S(0) = 0, so v ∈ null T_2. Thus null T_1 ⊂ null T_2.

Conversely, if null T_1 ⊂ null T_2, let T_1v_1, …, T_1v_m, w_1, …, w_n be a basis of W where T_1v_1, …, T_1v_m is a basis of range T_1. Let S be the linear map with S(T_1v_i) = T_2v_i and Sw_j = 0 for 1 ≤ i ≤ m, 1 ≤ j ≤ n.

We prove that T_2 = ST_1. Indeed, for any v ∈ V we have T_1v ∈ range T_1, so T_1v = Σ_{i=1}^m α_i T_1v_i, hence v − Σ_{i=1}^m α_i v_i ∈ null T_1 ⊂ null T_2. Hence

T_2(v) = T_2(Σ_{i=1}^m α_i v_i) = Σ_{i=1}^m α_i T_2v_i = Σ_{i=1}^m α_i ST_1v_i = S(Σ_{i=1}^m α_i T_1v_i) = S(T_1v).

25. If there exists such an S, then for all v ∈ V, T_1v = T_2Sv ∈ range T_2, which shows range T_1 ⊂ range T_2. Conversely, for v_1, …, v_m a basis of V, since range T_1 ⊂ range T_2 there exist u_i with T_1v_i = T_2u_i for all 1 ≤ i ≤ m. By theorem 5.1.1 there exists a linear map S ∈ L(V,V) with Sv_i = u_i for all 1 ≤ i ≤ m. This gives T_2Sv_i = T_2u_i = T_1v_i for all 1 ≤ i ≤ m, which leads to T_2Sv = T_1v for all v ∈ V.

26. Let Dx^i = p_i for all i ≥ 1; then deg p_i = i − 1, so every polynomial can be represented as a linear combination of the p_i. For any p′ ∈ P(R), write p′ = Σ_{i=1}^m α_i p_i; then D(Σ_{i=1}^m α_i x^i) = p′. Hence p′ ∈ range D. This shows range D = P(R), i.e. D is surjective.

27. Consider the linear map D ∈ L(P(R), P(R)) with Dp = 5p″ + 3p′; note that deg Dp = deg p − 1, since 3p′ has degree deg p − 1 and 5p″ has lower degree. By exercise 26, D is surjective, so for any p ∈ P(R) there exists q ∈ P(R) with Dq = p, i.e. 5q″ + 3q′ = p.
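For a concrete instance (an illustrative sketch with hypothetical coefficient-list helpers, not from the text): with p(x) = x, the polynomial q(x) = x²/6 − 5x/9 satisfies 5q″ + 3q′ = p.

```python
from fractions import Fraction as F

def deriv(coeffs):
    # coeffs[i] is the coefficient of x^i
    return [F(i) * c for i, c in enumerate(coeffs)][1:] or [F(0)]

def add(p, q):
    n = max(len(p), len(q))
    p = p + [F(0)] * (n - len(p))
    q = q + [F(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(k, p):
    return [F(k) * c for c in p]

# q(x) = -5x/9 + x^2/6, target p(x) = x
q = [F(0), F(-5, 9), F(1, 6)]
q1, q2 = deriv(q), deriv(deriv(q))
lhs = add(scale(5, q2), scale(3, q1))   # 5 q'' + 3 q'
```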

28. Let φ_i(v) be the coefficient of w_i in the representation of Tv. We prove that φ_i is linear. Indeed, since T is linear, T(u + v) = Tu + Tv, i.e. Σ_{i=1}^m φ_i(u + v)w_i = Σ_{i=1}^m (φ_i(u) + φ_i(v))w_i. Since w_1, …, w_m is linearly independent, φ_i(u + v) = φ_i(u) + φ_i(v). Similarly, φ_i(λv) = λφ_i(v) for any v ∈ V and λ ∈ F.

29. The sum null φ + {au : a ∈ F} is direct: if au ∈ null φ then aφ(u) = 0, and since φ(u) ≠ 0 we get a = 0. Now take any v ∈ V, let a = φ(v) and set x = v − (a/φ(u))u ∈ V. Then

a = φ(v) = φ(x + (a/φ(u))u) = φ(x) + a.

Thus φ(x) = 0, i.e. x ∈ null φ. Hence any v ∈ V can be represented as v = x + (a/φ(u))u with x ∈ null φ and a ∈ F. This shows null φ ⊕ {au : a ∈ F} = V.
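A concrete check of the decomposition (an illustrative sketch; the choices V = R², φ = first coordinate, u = (1, 0) are my own assumptions, not from the text):

```python
def phi(v):           # a linear functional on R^2
    return v[0]

u = (1.0, 0.0)        # phi(u) = 1 != 0

def decompose(v):
    # v = x + (phi(v)/phi(u)) u with x in null phi
    a = phi(v) / phi(u)
    x = (v[0] - a * u[0], v[1] - a * u[1])
    return x, a

x, a = decompose((3.0, 4.0))
```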


30. By exercise 29, if there is u ∈ V not in null φ_1 = null φ_2, let c = φ_1(u)/φ_2(u). Any v ∈ V can be represented as v = z + au (a ∈ F) with z ∈ null φ_1. Thus φ_1(v) = φ_1(au) = aφ_1(u) = c·aφ_2(u) = cφ_2(au) = cφ_2(v). (If no such u exists then φ_1 = φ_2 = 0 and any c works.)

31. The null space of both T_1 and T_2 is {(x_1, x_2, x_3, 0, 0) : x_1, x_2, x_3 ∈ R}. We have T_1(0,0,0,1,0) = T_2(0,0,0,0,1) = (1, 0) and T_1(0,0,0,0,1) = T_2(0,0,0,1,0) = (0, 1).

5.5. Exercises 3C

1. Let v_1, …, v_n be a basis of V and w_1, …, w_m a basis of W in which w_1, …, w_k (k = dim range T ≤ m) is a basis of range T; this is possible since range T is a subspace of W. We know Tv_1, …, Tv_n spans range T, and for any 1 ≤ i ≤ k, w_i ∈ range T, hence w_i can be written as a linear combination of the Tv_j (1 ≤ j ≤ n):

w_i = Σ_{j=1}^n α_j Tv_j = Σ_{j=1}^n α_j Σ_{h=1}^m A_{h,j} w_h.

For this to hold, some A_{h,j} must be nonzero. More precisely, the Tv_j span the k-dimensional space range T, so at least k of the Tv_j are nonzero, and each nonzero Tv_j gives a column of M(T) containing at least one nonzero entry. Thus M(T) has at least k = dim range T nonzero entries.

2. Let v_1, …, v_4 be a basis of P_3(R) and w_1, …, w_3 a basis of P_2(R). From the matrix, since A_{i,j} = 0 for i ≠ j and A_{i,i} = 1, we need Dv_k = w_k for 1 ≤ k ≤ 3 and Dv_4 = 0. So we can choose v_i = x^i for 1 ≤ i ≤ 3 and v_4 = 1, which gives w_i = ix^{i−1} for 1 ≤ i ≤ 3.

3. Let Tv_1, …, Tv_m be a basis of range T, where v_i ∈ V; then v_1, …, v_m is linearly independent in V. A basis u_1, …, u_n of null T can be adjoined to this list to create a basis of V: for any v ∈ V we have Tv = Σ_j c_j Tv_j, so v − Σ_j c_j v_j ∈ null T, which shows the combined list spans V, and it is linearly independent. Thus, with v_1, …, v_m, u_1, …, u_n as a basis of V and a basis of W beginning with Tv_1, …, Tv_m, we have A_{j,j} = 1 for 1 ≤ j ≤ dim range T and all other entries 0.

4. We want the first column of M(T) to consist of zeros except possibly A_{1,1} = 1. If Tv_1 = 0, any basis of W works; otherwise let w_1 = Tv_1, so that the first column of M(T) has A_{1,1} = 1 and 0 elsewhere. It does not matter what w_2, …, w_n specifically are, as long as w_1, …, w_n is a basis of W.

5. If there does not exist v = w_1 + Σ_{i=2}^n α_i w_i in range T, then for any basis v_1, …, v_m of V, since each Tv_i ∈ range T we must have A_{1,i} = 0 for all 1 ≤ i ≤ m, i.e. the entries of the first row of M(T) are all 0.

If there does exist such w = w_1 + Σ_{i=2}^n α_i w_i in range T, then there exists v_1 ∈ V with Tv_1 = w. Extend v_1 to a basis v_1, …, v_m of V and write Tv_i = u_i for i ≥ 2. Note that if v_1, …, v_m is a basis of V then so is v_1, v_2 − λ_2v_1, v_3 − λ_3v_1, …, v_m − λ_mv_1, and T(v_i − λ_iv_1) = u_i − λ_iw. By choosing each λ_i to cancel the w_1-component of u_i, we find that v_1, v_2 − λ_2v_1, …, v_m − λ_mv_1 is the desired basis of V: it gives A_{1,1} = 1 and A_{1,k} = 0 for all 2 ≤ k ≤ m.

6. If there exist such bases of V and W, then Tv_1 = Tv_2 = … = Tv_m ≠ 0. Since v_1, v_2, …, v_m is a basis of V, so is v_1, v_2 − v_1, v_3 − v_1, …, v_m − v_1. Note that T(v_i − v_1) = 0 for all 2 ≤ i ≤ m, so dim null T = m − 1, i.e. dim range T = 1 by theorem 5.3.5.

Conversely, if dim range T = 1 then dim null T = m − 1, so there exists a basis v_2, …, v_m of null T. This list can be extended to a basis v_1, …, v_m of V. Hence Tv_1 = T(v_1 + v_2) = … = T(v_1 + v_m), and v_1, v_1 + v_2, …, v_1 + v_m is a basis of V. From this we can choose a basis of W to obtain a matrix M(T) all of whose entries are 1.

exer:3C:7 7. Let M(T ) = A,M(S) = C then Tvk

= A1,k

w1

+ . . .+Am,k

wm

and Svk

= C1,k

w1

+ . . .+Cm,k

wm

. This follows (T +S)vk

=

Pm

i=1

(Ai,k

+Ci,k

)wi

. Thus, (i, k)th entry of M(T +S)is A

i,k

+ Ci,k

, which also equals to (i, k)th entry of M(T ) +M(S).

8. We have (λT)v_k = λ(Tv_k) = Σ_{i=1}^m λA_{i,k}w_i, so M(λT) = λM(T).

9. Done.

10. It suffices to prove (AC)_{j,k} = (A_{j,·}C)_{1,k} for any 1 ≤ k ≤ p. Indeed,

(A_{j,·}C)_{1,k} = Σ_{i=1}^n (A_{j,·})_{1,i} C_{i,k} = Σ_{i=1}^n A_{j,i}C_{i,k} = (AC)_{j,k}.

11. We have

(a_1C_{1,·} + ⋯ + a_nC_{n,·})_{1,k} = (a_1C_{1,·})_{1,k} + … + (a_nC_{n,·})_{1,k} = Σ_{i=1}^n a_iC_{i,k} = (aC)_{1,k}.

12. Pick A = ( 1 0 ; 0 0 ) and C = ( 1 1 ; 0 0 ) (rows separated by semicolons). Then AC = ( 1 1 ; 0 0 ) but CA = ( 1 0 ; 0 0 ), so AC ≠ CA.

13. We prove A(B + C) = AB + AC, where A is an m-by-n matrix and B, C are n-by-p matrices. Indeed,

(AB + AC)_{i,j} = (AB)_{i,j} + (AC)_{i,j} = Σ_{h=1}^n (A_{i,h}B_{h,j} + A_{i,h}C_{h,j}) = Σ_{h=1}^n A_{i,h}(B + C)_{h,j} = (A(B + C))_{i,j}.

14. If A, B, C are m-by-n, n-by-p, p-by-q matrices respectively, then

((AB)C)_{i,j} = Σ_{h=1}^p (AB)_{i,h}C_{h,j} = Σ_{h=1}^p C_{h,j} Σ_{l=1}^n A_{i,l}B_{l,h} = Σ_{l=1}^n A_{i,l} Σ_{h=1}^p B_{l,h}C_{h,j} = Σ_{l=1}^n A_{i,l}(BC)_{l,j} = (A(BC))_{i,j}.


15. Just did this in exercise 14.

5.6. 3D: Invertibility and Isomorphic Vector Spaces

Theorem 5.6.1 (3.56). A linear map is invertible (non-singular) if and only if it is injective and surjective.

This is deduced from exercises 20 and 21 (3B).

Theorem 5.6.2 (3.59). Two finite-dimensional vector spaces over F are isomorphic if and only if they have the same dimension.

Theorem 5.6.3 (3.65). Suppose T ∈ L(V,W) and v ∈ V. Suppose v_1, …, v_n is a basis of V and w_1, …, w_m is a basis of W. Then

M(Tv) = M(T)M(v).

Theorem 5.6.4 (3.69). Suppose V is finite-dimensional and T ∈ L(V). Then the following are equivalent:

(a) T is invertible.

(b) T is surjective.

(c) T is injective.

5.7. Exercises 3D

exer:3D:1 1. We have (ST )(T�1S�1

) = I and (T�1S�1

)(ST ) = I.

2. Let v_1, …, v_n be a basis of V. Consider the noninvertible operators T_i ∈ L(V) defined by T_i(v_j) = 0 if j ≠ i and T_i(v_i) = v_i. The linear map T_1 + … + T_n is the identity map, which is invertible. Thus the set of noninvertible operators on V is not a subspace of L(V).

3. If there exists such an invertible operator T, then Su = Sv implies Tu = Tv, which gives u = v. Thus S is injective.

If S is injective: since U is a subspace of V, a basis v_1, …, v_m of U can be extended to a basis v_1, …, v_n (m ≤ n) of V. Since S is injective, Sv_1, …, Sv_m is linearly independent, so this list can be extended to a basis Sv_1, …, Sv_m, u_{m+1}, …, u_n of V. From this, let T be the linear map with Tv_i = Sv_i for 1 ≤ i ≤ m and Tv_i = u_i for m < i ≤ n. We find that T is surjective, so T is invertible, and Tu = Su for any u ∈ U.

4. If there exists such an invertible operator S, then u ∈ null T_1 gives 0 = T_1u = ST_2u, so T_2u = 0 since S is injective. Similarly, if u ∈ null T_2 then u ∈ null T_1. Thus null T_1 = null T_2.

If null T_1 = null T_2: let T_2v_1, T_2v_2, …, T_2v_m, w_1, …, w_n be a basis of W. Note that T_1v_1, …, T_1v_m is linearly independent; otherwise there exists T_1v = 0 where v is a nontrivial linear combination of v_1, …, v_m. This means v ∈ null T_1 = null T_2, so T_2v = 0, which makes T_2v_1, …, T_2v_m linearly dependent, a contradiction. Thus T_1v_1, …, T_1v_m is linearly independent, so it can be extended to a basis T_1v_1, …, T_1v_m, u_1, …, u_n of W.

Let S ∈ L(W) be the linear map with S(T_2v_i) = T_1v_i for 1 ≤ i ≤ m and Sw_j = u_j for 1 ≤ j ≤ n. It follows that S is surjective, so S is invertible.

Now we prove ST_2 = T_1. Indeed, for any v ∈ V we have T_2v = Σ_{i=1}^m α_i T_2v_i, so v − Σ_{i=1}^m α_i v_i ∈ null T_2 = null T_1, and

T_1v = T_1(Σ_{i=1}^m α_i v_i) = Σ_{i=1}^m α_i T_1v_i = Σ_{i=1}^m α_i ST_2v_i = S(Σ_{i=1}^m α_i T_2v_i) = S(T_2v).

5. If there exists such an invertible operator S, then for any u ∈ range T_1, u = T_1v = T_2(Sv), so u ∈ range T_2. Similarly, if u ∈ range T_2 then u ∈ range T_1. Thus range T_1 = range T_2.

Conversely, for v_1, …, v_m a basis of V, since range T_1 = range T_2 there exist u_i with T_1v_i = T_2u_i for all 1 ≤ i ≤ m. By theorem 5.1.1 there exists a linear map S ∈ L(V,V) with Sv_i = u_i for all 1 ≤ i ≤ m. This gives T_2Sv_i = T_2u_i = T_1v_i for all 1 ≤ i ≤ m, which leads to T_2Sv = T_1v for all v ∈ V. On the other hand, since v_1, …, v_m is linearly independent, u_1, …, u_m is linearly independent, so it is a basis of V. Hence S is surjective, so S is invertible.

6. Suppose such R, S exist. Let v_1, …, v_m be a basis of null T_1; then ST_2Rv_i = 0. Since S is invertible, S is injective, which means T_2Rv_i = 0. Thus Rv_i ∈ null T_2, and Rv_1, …, Rv_m is linearly independent, so dim null T_2 ≥ dim null T_1. Since R is invertible, R is surjective, so we may let Ru_1, …, Ru_n be a basis of null T_2. Hence S(T_2Ru_i) = S(0) = 0, so T_1u_i = 0, i.e. u_i ∈ null T_1. Since u_1, …, u_n is linearly independent, dim null T_2 ≤ dim null T_1. Thus dim null T_1 = dim null T_2.

Conversely, if dim null T_1 = dim null T_2: let v_1, …, v_m be a basis of V where v_{n+1}, …, v_m (n ≤ m) is a basis of null T_1. Let u_1, …, u_m be another basis of V where u_{n+1}, …, u_m is a basis of null T_2. It follows that the two lists T_1v_1, …, T_1v_n and T_2u_1, …, T_2u_n are linearly independent, so they can be extended to two bases of W, namely T_1v_1, …, T_1v_n, w_1, …, w_k and T_2u_1, …, T_2u_n, z_1, …, z_k.

By theorem 5.1.1 there exists a linear map R ∈ L(V) with Rv_i = u_i for 1 ≤ i ≤ m. Since u_1, …, u_m is a basis of V, R is surjective, so R is invertible. There also exists a linear map S ∈ L(W) with S(T_2u_i) = T_1v_i for 1 ≤ i ≤ n and Sz_j = w_j for 1 ≤ j ≤ k. We can see that S is also surjective, so S is invertible.

Now we prove T_1 = ST_2R. Indeed, for any v = Σ_{i=1}^m α_i v_i in V, we have

ST_2Rv = ST_2(Σ_{i=1}^m α_i u_i) = ST_2(Σ_{i=1}^n α_i u_i)  (since T_2u_j = 0 for m ≥ j > n)
= S(Σ_{i=1}^n α_i T_2u_i) = Σ_{i=1}^n α_i T_1v_i
= Σ_{i=1}^m α_i T_1v_i = T_1v  (since T_1v_j = 0 for m ≥ j > n).

7. (a) For any T_1, T_2 ∈ E we have (T_1 + T_2)(v) = T_1v + T_2v = 0, so T_1 + T_2 ∈ E, and (λT_1)(v) = λ(T_1v) = 0, so λT_1 ∈ E. We also have 0 ∈ E since 0(v) = 0. Thus E is a subspace of L(V,W).

(b) Let v, v_1, …, v_n be a basis of V and w_1, …, w_m a basis of W. We prove that M, with respect to these bases, is an isomorphism between the vector spaces E and D = {A ∈ F^{m,n+1} : A_{·,1} = 0}, where the first column corresponds to the basis vector v (one needs to check that D is a vector space first, which is not hard).

Indeed, M is linear. If M(T) = 0 then T = 0, so M is injective. For any A ∈ D, let T be the linear map with Tv_k = Σ_{i=1}^m A_{i,k+1}w_i for 1 ≤ k ≤ n and Tv = 0; then T ∈ E and M(T) = A. Thus M is surjective, hence invertible.

Hence, by theorem 5.6.2, dim E = dim D. One can easily verify that dim D = mn = (dim V − 1)·dim W, so dim E = (dim V − 1)·dim W.

8. Since T is surjective, range T = W, so W is finite-dimensional. Let v_1, …, v_m be a basis of V where v_{n+1}, …, v_m (n ≤ m) is a basis of null T. Let U = span(v_1, …, v_n). For any w ∈ W there exists v = Σ_{i=1}^m α_i v_i with Tv = w, and u = v − Σ_{i=n+1}^m α_i v_i lies in U with Tu = Tv = w. Thus T|_U is surjective. If Tu = Tv for u, v ∈ U then u − v ∈ null T = span(v_{n+1}, …, v_m), so u − v = 0 since v_1, …, v_m is linearly independent. Thus T|_U is injective. Hence T|_U is an isomorphism of U onto W.

9. If both S and T are invertible then T⁻¹S⁻¹ is an inverse of ST, so ST is invertible. Conversely, if ST is invertible then it has an inverse X. For any v ∈ V we have S(TXv) = v, so S is surjective and hence invertible by theorem 5.6.4. On the other hand, if Tu = Tv for u, v ∈ V then XS(Tu) = XS(Tv), and since u = XSTu and v = XSTv, we get u = v. Thus T is injective and hence invertible by theorem 5.6.4.

10. If ST = I then for any v ∈ V we have S(Tv) = v, so S is surjective and hence invertible by theorem 5.6.4. Let S⁻¹ be the inverse of S; then T = IT = (S⁻¹S)T = S⁻¹(ST) = S⁻¹. Thus T is the inverse of S, so TS = I. Similarly, if TS = I then ST = I.

11. By exercise 10 (3D), if S(TU) = I then (TU)S = I, i.e. T(US) = I, so (US)T = I; hence T⁻¹ = US.

12. Define T, U, S ∈ L(P(R)) by Uv = vx³ for any v ∈ P(R); Tx^i = x^{i−2} for i ≥ 2 with Tx = x and T1 = 1; and Sx^i = x^{i−1} for i ≥ 1 with S1 = 1. With this, STU = I. However, T is not invertible, since T is not injective: T(2x² + x³) = T(2 + x³) = 2 + x.
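Modeling polynomials as coefficient lists, the three operators can be sketched in Python (an illustration of this solution; the helper names are my own):

```python
def U(p):
    # multiply by x^3: shift coefficients up by 3
    return [0, 0, 0] + p

def T(p):
    # T x^i = x^(i-2) for i >= 2, T x = x, T 1 = 1
    out = [0] * max(1, len(p))
    for i, c in enumerate(p):
        out[i - 2 if i >= 2 else i] += c
    return out

def S(p):
    # S x^i = x^(i-1) for i >= 1, S 1 = 1
    out = [0] * max(1, len(p))
    for i, c in enumerate(p):
        out[i - 1 if i >= 1 else i] += c
    return out

def trim(p):
    # drop trailing zero coefficients
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

p = [5, 0, -1, 2]          # 5 - x^2 + 2x^3
stu = trim(S(T(U(p))))     # S(T(U(p))) should give back p
a = trim(T([0, 0, 2, 1]))  # T(2x^2 + x^3)
b = trim(T([2, 0, 0, 1]))  # T(2 + x^3)
```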


13. Note that RST ∈ L(V) and RST is surjective, so by theorem 5.6.4, RST is an invertible operator. By exercise 9 (3D), since RST is invertible, both R and ST are invertible, which in turn gives that S, T, R are all invertible. Thus S is injective.

14. The map M is linear. If M(u) = M(v) then u and v have the same coordinates in the given basis, so u = v; thus M is injective. For any A ∈ F^{n,1} there exists v ∈ V with M(v) = A. Thus M is an isomorphism of V onto F^{n,1}.

exer:3D:15 15. Let A = M(T ) with respect to standard bases of both Fn,1,Fm,1. Then for any v 2 Fn,1,we have M(v) = v and Tv = M(Tv). Hence, according to theorem

theo:3.65:3Dtheo:3.65:3D

5.6.3, we haveTv = M(Tv) = M(T )M(v) = Av.

16. Let v_1, …, v_n be a basis of V. First we prove that for each 1 ≤ i ≤ n there exists C_i ∈ F with Tv_i = C_iv_i. Indeed, choose the linear map S with Sv_i = v_i and Sv_j = 2v_j for 1 ≤ j ≤ n, j ≠ i. It follows that STv_i = T(Sv_i) = Tv_i. Hence, if Tv_i = Σ_{k=1}^n α_k v_k then

Σ_{k=1}^n α_k v_k = Tv_i = STv_i = S(Σ_{k=1}^n α_k v_k) = α_i v_i + 2Σ_{k≠i} α_k v_k.

This gives Σ_{k≠i} α_k v_k = 0, which happens only when α_k = 0 for all k ≠ i. Thus Tv_i = α_i v_i = C_iv_i.

Next we prove that for any 1 ≤ i, j ≤ n with j ≠ i we have C_i = C_j = C. Indeed, pick an invertible operator S with Sv_i = v_j; then TSv_i = Tv_j = C_jv_j, but STv_i = S(C_iv_i) = C_iv_j. Hence C_iv_j = C_jv_j, so C_i = C_j = C. Thus Tv_i = Cv_i for all 1 ≤ i ≤ n, i.e. T is a scalar multiple of I.

17. (For this it is better to think of linear maps as matrices.) Let v_1, …, v_n be a basis of V. Denote by T_{i,j} the linear map with T_{i,j}v_j = v_i and T_{i,j}v_k = 0 for all k ≠ j, 1 ≤ k ≤ n (imagine this as the n-by-n matrix with 0 in all entries except a 1 in the (i,j)-th entry).

If every T ∈ E is 0 then E = {0}. If there exists T ≠ 0, WLOG Tv_1 = Σ_{i=1}^n α_i v_i with α_1 ≠ 0. We find T_{i,1}(TT_{1,1}) ∈ E. Note that T_{i,1}TT_{1,1} = α_1T_{i,1}, so T_{i,1} ∈ E for all 1 ≤ i ≤ n. Since T_{i,1} ∈ E, also T_{i,j} = T_{i,1}T_{1,j} ∈ E for all 1 ≤ j ≤ n. Thus T_{i,j} ∈ E for all 1 ≤ i, j ≤ n. Since the T_{i,j} are linearly independent and dim L(V) = n², we get E = L(V).

18. Define a map T from V to L(F, V) as follows: for any v ∈ V, Tv = R_v ∈ L(F, V) where R_v(1) = v. It is not hard to verify that T is linear. If Tu = Tv then R_v = R_u, so R_v(1) = R_u(1), i.e. u = v. Thus T is injective. For any R ∈ L(F, V) we have R(1) ∈ V and T(R(1)) = R; thus T is surjective. In conclusion, T is an isomorphism, so V and L(F, V) are isomorphic vector spaces.

19. (a) Take any q ∈ P(R); then q ∈ P_m(R) for some m. Define T′ : P_m(R) → P_m(R) by T′p = Tp for p ∈ P_m(R). Since deg Tp ≤ deg p ≤ m, T′ is indeed an operator on the finite-dimensional space P_m(R). Since T is injective, T′ is also injective, which makes T′ invertible. Hence for q ∈ P_m(R) there exists p ∈ P_m(R) with T′p = Tp = q. This shows T is surjective.

(b) Assume the contrary, that there is a nonzero p with deg Tp < deg p. Let Tp = q. From (a) applied to P_{deg q}(R), there exists r ∈ P(R) with deg r ≤ deg q and Tr = q. This gives Tp = Tr, so p = r. However deg p > deg Tp = deg q ≥ deg r, a contradiction. Thus we must have deg Tp = deg p for every nonzero p ∈ P(R).

20. Denote by A the n-by-n matrix with (i,j)-th entry A_{i,j}. Define a linear map T ∈ L(F^{n,1}) by Tv = Av. Then (a) is equivalent to T being injective and (b) is equivalent to T being surjective. Thus (a) is equivalent to (b) by theorem 5.6.4.

5.8. 3E: Products and Quotients of Vector Spaces

Theorem 5.8.1 (3.85). Suppose U is a subspace of V and v, w ∈ V. Then the following are equivalent:

(a) v − w ∈ U.

(b) v + U = w + U.

(c) (v + U) ∩ (w + U) ≠ ∅.

Theorem 5.8.2 (3.89). Suppose V is finite-dimensional and U is a subspace of V. Then dim V/U = dim V − dim U.

Theorem 5.8.3 (3.91). Suppose T ∈ L(V,W). Define T̃ : V/(null T) → W by T̃(v + null T) = Tv. Then:

(a) T̃ is a linear map from V/(null T) to W.

(b) T̃ is injective.

(c) range T̃ = range T.

(d) V/(null T) is isomorphic to range T.

Remark 5.8.4. Be careful with the vector space V/U, because more than one v ∈ V can give the same affine subset v + U of V. So if we define a map T from V/U to some W, be sure that T makes sense, i.e. if v + U = u + U then T(v + U) = T(u + U).

5.9. Exercises 3E

1. If T is a linear map then for any u, v ∈ V we have (u, Tu) + (v, Tv) = (u + v, T(u + v)) ∈ graph of T and λ(u, Tu) = (λu, T(λu)) ∈ graph of T. We also have (0, T0) ∈ graph of T, so the graph of T is a subspace of V × W.

Conversely, suppose the graph of T is a subspace of V × W. We have (u, Tu) + (v, Tv) = (u + v, Tu + Tv) ∈ graph of T, so Tu + Tv = T(u + v) (otherwise the function T would map u + v to two different values, a contradiction). Similarly, λ(u, Tu) = (λu, λTu) ∈ graph of T, so λTu = T(λu). Thus T is a linear map.

2. Let (v_{1,1}, v_{1,2}, …, v_{1,m}), (v_{2,1}, …, v_{2,m}), …, (v_{k,1}, …, v_{k,m}) be a basis of V_1 × V_2 × ⋯ × V_m. Assume the contrary, that V_1 is infinite-dimensional. Then there exists v ∈ V_1 with v ∉ span(v_{1,1}, v_{2,1}, …, v_{k,1}). It follows that (v, u_2, …, u_m), for any u_i ∈ V_i, is not a linear combination of the (v_{i,1}, …, v_{i,m}), 1 ≤ i ≤ k, a contradiction. Thus each V_i is finite-dimensional.

3. Let V = P(R), U_1 = span(1, x, x²) and U_2 = {p ∈ P(R) : deg p ≥ 2}. We find that U_1 + U_2 is not a direct sum, since 1 + x + 2x² + x³ = (1 + x + x²) + (x² + x³) = (1 + x) + (2x² + x³).

Define a linear map T from U_1 × U_2 onto U_1 + U_2 by T(u_1, u_2) = u_1 + xu_2. We can check that T is surjective. If T(u_1, u_2) = T(v_1, v_2) then u_1 + xu_2 = v_1 + xv_2. Note that deg u_1 < deg xu_2 for any u_1 ∈ U_1 and nonzero u_2 ∈ U_2, so this happens only when u_1 = v_1 and u_2 = v_2. Thus T is injective, which makes T an isomorphism from U_1 × U_2 onto U_1 + U_2.

4. Define a map S from L(V_1 × ⋯ × V_m, W) to L(V_1, W) × ⋯ × L(V_m, W) as follows: for any T ∈ L(V_1 × ⋯ × V_m, W), S(T) = (R_1, …, R_m), where R_i is the map from V_i to W given by R_iv_i = Tv for v = (0, …, 0, v_i, 0, …, 0) ∈ V_1 × ⋯ × V_m. It is not hard to check that each R_i is a linear map.

Now we prove that S is a linear map. Say S(T_1) = (R_1, …, R_m) and S(T_2) = (U_1, …, U_m); then S(T_1) + S(T_2) = (R_1 + U_1, …, R_m + U_m). On the other hand, for any 1 ≤ i ≤ m and any v_i ∈ V_i, with v = (0, …, 0, v_i, 0, …, 0) we have (R_i + U_i)v_i = R_iv_i + U_iv_i = T_1v + T_2v = (T_1 + T_2)v. This shows S(T_1 + T_2) = (R_1 + U_1, …, R_m + U_m) = S(T_1) + S(T_2); homogeneity is similar.

Similarly, for any (R_1, …, R_m) ∈ L(V_1, W) × ⋯ × L(V_m, W) we can define the corresponding T ∈ L(V_1 × ⋯ × V_m, W); thus S is surjective. If S(T_1) = S(T_2) then (R_1, …, R_m) = (U_1, …, U_m), so R_i = U_i, i.e. T_1v = T_2v for every v of the form (0, …, 0, v_i, 0, …, 0), for all 1 ≤ i ≤ m. This leads to T_1 = T_2, so S is injective. Hence S is an isomorphism from L(V_1 × ⋯ × V_m, W) onto L(V_1, W) × ⋯ × L(V_m, W), so these two vector spaces are isomorphic.

5. Similarly to exercise 4, define a linear map S from L(V, W_1 × ⋯ × W_m) onto L(V, W_1) × ⋯ × L(V, W_m) as follows: for T ∈ L(V, W_1 × ⋯ × W_m), S(T) = (R_1, …, R_m), where R_i ∈ L(V, W_i) is given by: if Tv = (w_1, …, w_m) then R_iv = w_i.

6. Define a linear map T from V^n onto L(F^n, V) by T(v_1, …, v_n) = S ∈ L(F^n, V), where S(0, …, 0, 1_i, 0, …, 0) = v_i (with the 1 in the i-th slot). T is obviously surjective. If T(v_1, …, v_n) = T(u_1, …, u_n), i.e. S_v = S_u, then S_v(0, …, 0, 1_i, 0, …, 0) = S_u(0, …, 0, 1_i, 0, …, 0), which means v_i = u_i. It follows that (v_1, …, v_n) = (u_1, …, u_n). Thus T is injective, so the two vector spaces V^n and L(F^n, V) are isomorphic.

7. For any u ∈ U we have v + u ∈ v + U = x + W, so v + u − x ∈ W, which by theorem 5.8.1 gives v + u + W = x + W = v + U, so u + W = U. Since u ∈ U, this gives W ⊆ U. Similarly, for any w ∈ W we have w + x ∈ x + W = v + U, so w + x − v ∈ U, so w + x + U = v + U = x + W, so w + U = W; hence U ⊆ W. In conclusion, U = W.


8. If A is an affine subset of V then A = r + U for some r ∈ V and some subspace U of V. For v, w ∈ A there exist u_1, u_2 ∈ U with r + u_1 = v and r + u_2 = w. We find λv + (1 − λ)w = r + λu_1 + (1 − λ)u_2 ∈ A for any λ ∈ F.

Conversely, suppose λv + (1 − λ)w ∈ A for all v, w ∈ A and all λ ∈ F. Pick an arbitrary w ∈ A; we prove that U = {u − w : u ∈ A} is a subspace of V. Indeed, for any x_1, x_2 ∈ U we have x_1 = u_1 − w and x_2 = u_2 − w with u_1, u_2 ∈ A. Picking λ = 2 gives 2u_1 − w ∈ A and 2u_2 − w ∈ A; picking λ = 1/2 then gives (1/2)(2u_1 − w) + (1/2)(2u_2 − w) ∈ A, i.e. u_1 + u_2 − w ∈ A. It follows that u_1 + u_2 − 2w ∈ U, i.e. x_1 + x_2 ∈ U for any x_1, x_2 ∈ U. On the other hand, λ(2u_1 − w) + (1 − λ)w = 2λu_1 + (1 − 2λ)w ∈ A for any λ, which gives 2λu_1 + (1 − 2λ)w − w = 2λ(u_1 − w) = 2λx_1 ∈ U; as λ ranges over F so does 2λ, so U is closed under scalar multiplication. Thus U is a subspace of V. This shows A = w + U, so A is an affine subset of V.

9. If A_1 ∩ A_2 is nonempty then for any x, y ∈ A_1 ∩ A_2, according to exercise 8, λx + (1 − λ)y ∈ A_1 and λx + (1 − λ)y ∈ A_2, so λx + (1 − λ)y ∈ A_1 ∩ A_2 for any λ ∈ F and x, y ∈ A_1 ∩ A_2. Thus, according to exercise 8, A_1 ∩ A_2 is an affine subset of V.

10. This is deduced from exercise 9.

11. (a) For any x, y ∈ A with x = Σ_{i=1}^m λ_i v_i and y = Σ_{i=1}^m δ_i v_i (where Σ λ_i = Σ δ_i = 1), and for any α ∈ F, we have (1 − α)x + αy = Σ_{i=1}^m v_i [λ_i(1 − α) + αδ_i]. Since
Σ_{i=1}^m [λ_i(1 − α) + αδ_i] = (1 − α) Σ_{i=1}^m λ_i + α Σ_{i=1}^m δ_i = 1,
according to exercise 8, A is an affine subset of V.
(b) Let U be an affine subset of V that contains v_1, ..., v_m. Hence, λ_1 v_1 + λ_2 v_2 ∈ U for any λ_1, λ_2 ∈ F with λ_1 + λ_2 = 1. This follows λ_3 v_3 + (1 − λ_3)(λ_1 v_1 + λ_2 v_2) ∈ U for any λ_3 ∈ F. Since λ_3 + (1 − λ_3)λ_1 + (1 − λ_3)λ_2 = 1, we can say that λ_3 v_3 + λ_2 v_2 + λ_1 v_1 ∈ U for any λ_1, λ_2, λ_3 ∈ F with λ_1 + λ_2 + λ_3 = 1. Similarly, λ_4 v_4 + (1 − λ_4)(λ_1 v_1 + λ_2 v_2 + λ_3 v_3) ∈ U. Continuing like this, we eventually obtain Σ_{i=1}^m λ_i v_i ∈ U whenever Σ_{i=1}^m λ_i = 1, i.e. U contains A.
(c) We find v + 0 ∈ A, so v = Σ_{i=1}^m α_i v_i with Σ_{i=1}^m α_i = 1. This follows for any u ∈ U that u = Σ_{i=1}^m β_i v_i with Σ_{i=1}^m β_i = 0, i.e. u = Σ_{i=1}^{m−1} β_i (v_i − v_m) since β_m = −β_1 − ... − β_{m−1}. Thus, U is a subspace of span(v_1 − v_m, v_2 − v_m, ..., v_{m−1} − v_m), so dim U ≤ m − 1.

12. Let v_1 + U, v_2 + U, ..., v_m + U be a basis of V/U; then v_1, ..., v_m is linearly independent and Σ_{i=1}^m α_i v_i ∉ U for any not-all-zero α_i ∈ F.
Define a linear map T from V onto U × (V/U) by Tv = (v − Σ_{i=1}^m α_i v_i, v + U), where the α_i are determined by v + U = (Σ_{i=1}^m α_i v_i) + U. According to theorem 5.8.1, v − Σ_{i=1}^m α_i v_i ∈ U, so Tv ∈ U × (V/U).
For any (w, v + U) ∈ U × (V/U) there exist α_1, ..., α_m ∈ F with v + U = (Σ_{i=1}^m α_i v_i) + U. Letting u = w + Σ_{i=1}^m α_i v_i ∈ V, we have Tu = (w, v + U). Thus, T is surjective.
If Tv_1 = Tv_2, i.e. (w_1, x_1 + U) = (w_2, x_2 + U), then w_1 = w_2 and x_1 + U = x_2 + U = (Σ_{i=1}^m α_i v_i) + U. Hence,
v_1 = w_1 + Σ_{i=1}^m α_i v_i = w_2 + Σ_{i=1}^m α_i v_i = v_2.
Thus, T is injective. In conclusion, the vector space V is isomorphic to U × (V/U).

13. Since v_1 + U, v_2 + U, ..., v_m + U is a basis of V/U, if Σ_{i=1}^m α_i (v_i + U) = 0 + U then α_i = 0 for all 1 ≤ i ≤ m; i.e. if Σ_{i=1}^m α_i v_i ≠ 0 then Σ_{i=1}^m α_i v_i ∉ U. We also find that v_1, ..., v_m is linearly independent. From all the above, we deduce that v_1, ..., v_m, u_1, ..., u_n is a linearly independent list. According to theorem 5.8.2, m + n = dim V/U + dim U = dim V, so the linearly independent list v_1, ..., v_m, u_1, ..., u_n is a basis of V.

14. (a) If (x_1, x_2, ...) ∈ U then there exists M_1 ∈ N so that x_i = 0 for all i ≥ M_1. Similarly, if (y_1, y_2, ...) ∈ U then there exists such an M_2. Thus, for (x_1 + y_1, x_2 + y_2, ...), we have x_i + y_i = 0 for all i ≥ max{M_1, M_2}, which means (x_1 + y_1, x_2 + y_2, ...) ∈ U. Similarly, λ(x_1, x_2, ...) ∈ U. And (0, 0, ...) ∈ U, so U is a subspace of F^∞.
(b) Denote by p_i the i-th prime number, and let v_i ∈ F^∞ be the vector whose kp_i-th coordinate is 2 for every k ∈ N, k ≥ 1, and whose remaining coordinates are all 1. We will prove that v_1 + U, ..., v_m + U is linearly independent for any m ≥ 1, from which it follows that F^∞/U is infinite-dimensional according to exercise 14 (2A).
Indeed, (Σ_{i=1}^m α_i v_i) + U is equal to U exactly when v = Σ_{i=1}^m α_i v_i ∈ U according to theorem 5.8.1, which means there exists M ∈ N so that the i-th coordinate of v is 0 for all i ≥ M. From the definition of the v_i, the x-th coordinate of v where
• x ≡ 1 (mod p_1 ··· p_m) is Σ_{i=1}^m α_i;
• x ≡ 0 (mod p_j) and x ≡ 1 (mod Π_{i≠j} p_i) is 2α_j + Σ_{i≠j} α_i.
Picking such x ≥ M, we find that Σ_{i=1}^m α_i = 2α_j + Σ_{i≠j} α_i = 0 for every 1 ≤ j ≤ m. Hence, α_i = 0 for all 1 ≤ i ≤ m. In other words, v_1 + U, v_2 + U, ..., v_m + U is linearly independent in F^∞/U, as desired.

15. According to theorem 5.8.3, V/(null φ) is isomorphic to range φ. We find that range φ = F: since φ ≠ 0 there exists v with φ(v) ≠ 0, and for any c ∈ F there exists λ ∈ F with λφ(v) = c. Hence, dim range φ = dim F = 1, which follows dim V/(null φ) = 1.

16. Since dim V/U = 1, there exists u ∈ V with u ∉ U, and u + U is a basis of V/U. Hence, for any v ∈ V we have v + U = λu + U for some λ ∈ F, so according to theorem 5.8.1, v = λu + x for some x ∈ U.
Define φ ∈ L(V, F) by φ(v) = λ when v = λu + x with x ∈ U; in particular φ(u) = 1 and φ(x) = 0 for every x ∈ U. If φ(v) = 0 then λ = 0, so v = x ∈ U; conversely every v ∈ U gives φ(v) = 0. Thus, null φ = U.

17. Let v_1 + U, v_2 + U, ..., v_m + U be a basis of V/U. For any v ∈ V there exist unique α_1, ..., α_m ∈ F with v + U = (Σ_{i=1}^m α_i v_i) + U. According to theorem 5.8.1, v − Σ_{i=1}^m α_i v_i = u ∈ U. Hence, every v ∈ V can be represented uniquely as u + x where u ∈ U and x ∈ span(v_1, ..., v_m). Thus, V = U ⊕ span(v_1, ..., v_m).

18. If there exists such a linear map S then 0 = T(0) = S(π0) = S(U). For any u ∈ U, T(u) = S(πu) = S(U) = 0, so u ∈ null T. Thus, U ⊂ null T.
Conversely, suppose U ⊂ null T. Define S : V/U → W by S(v + U) = T(v). We show that the definition of S makes sense: suppose u, v ∈ V with u + U = v + U; then according to theorem 5.8.1, v − u ∈ U, so v − u ∈ null T, so Tv = Tu.

19. Disjoint union is analogous to direct sum: if the number of elements of A_1 ∪ A_2 ∪ ... ∪ A_m is equal to the sum of the numbers of elements of the A_i, then A_1 ∪ ... ∪ A_m is a disjoint union.

20. (a) We have Γ(S_1 + S_2) = (S_1 + S_2) ∘ π = S_1 ∘ π + S_2 ∘ π = Γ(S_1) + Γ(S_2). Similarly, Γ(λS_1) = (λS_1) ∘ π = λΓ(S_1). Thus, Γ is a linear map.
(b) If Γ(S_1) = Γ(S_2) then S_1 ∘ π = S_2 ∘ π, i.e. S_1(v + U) = S_2(v + U) for any v + U ∈ V/U. Hence, S_1 = S_2, so Γ is injective.
(c) For any T ∈ {T ∈ L(V, W) : Tu = 0 for every u ∈ U}, according to exercise 18 there exists S ∈ L(V/U, W) with Γ(S) = S ∘ π = T. Thus, range Γ = {T ∈ L(V, W) : Tu = 0 for every u ∈ U}.

5.10. 3F: Duality

Theorem 5.10.1 (3.95) Suppose V is finite-dimensional. Then V' = L(V, F) is also finite-dimensional and dim V' = dim V.

Theorem 5.10.2 (3.105) Suppose U ⊂ V. Then U^0 is a subspace of V'.

Theorem 5.10.3 (3.106) Suppose V is finite-dimensional and U is a subspace of V. Then dim U + dim U^0 = dim V.

Theorem 5.10.4 (3.107) Suppose V and W are finite-dimensional and T ∈ L(V, W). Then:
(a) null T' = (range T)^0 (V, W need not be finite-dimensional here);
(b) dim null T' = dim null T + dim W − dim V.

Theorem 5.10.5 (3.108, 3.110) Suppose V, W are finite-dimensional and T ∈ L(V, W). Then T is injective if and only if T' is surjective, and T is surjective if and only if T' is injective.

Theorem 5.10.6 (3.109) Suppose V and W are finite-dimensional and T ∈ L(V, W). Then:
(a) dim range T' = dim range T;
(b) range T' = (null T)^0.

Theorem 5.10.7 (3.117) Suppose V and W are finite-dimensional and T ∈ L(V, W). Then dim range T equals the column rank of M(T).

5.11. Exercises 3F

1. Let φ ∈ L(V, F). If φ is not the zero map then there exists v ∈ V with φv = c ≠ 0. Hence, for any λ ∈ F, φ((λ/c) · v) = λ, so φ is surjective.

2. Three distinct linear functionals on R^[0,1]: φ_1(f) = f(0.5), φ_2(f) = f(0.1), φ_3(f) = f(0.2).

3. Pick φ ∈ V' with φ(v) = 1; extending v to a basis of V, the values of φ on the remaining basis vectors can be picked arbitrarily.

4. Extend a basis v_1, ..., v_m of U to a basis v_1, ..., v_n (m < n) of V. Define φ from V onto F by φ(v_i) = 0 for 1 ≤ i ≤ m and φ(v_j) = 1 for m < j ≤ n. Hence, φ ≠ 0 while φ(u) = 0 for every u ∈ U.

5. It is a consequence of exercise 4 (3E) when W = F.

6. (a) If v_1, ..., v_m spans V: consider φ ∈ V' with Γ(φ) = 0; then φ(v_i) = 0 for all 1 ≤ i ≤ m, which leads to φ(v) = 0 for all v ∈ V since v_1, ..., v_m spans V. Hence, φ = 0, i.e. Γ is injective.
Conversely, suppose Γ is injective. If v_1, ..., v_m does not span V then there exists v_{m+1} ∈ V with v_{m+1} ∉ span(v_1, ..., v_m). Hence, there exists φ ∈ V' with φ(v_{m+1}) = 1 and φ(v) = 0 for all v ∈ span(v_1, ..., v_m). We find Γ(φ) = 0 but φ ≠ 0, a contradiction to the injectivity of Γ. Thus, v_1, ..., v_m spans V.
(b) If v_1, ..., v_m is linearly independent then for any (x_1, ..., x_m) ∈ F^m we can construct φ ∈ V' with φ(v_i) = x_i. Hence, Γ is surjective.
If Γ is surjective but v_1, ..., v_m is linearly dependent, say WLOG v_m = Σ_{i=1}^{m−1} α_i v_i, then Γ(φ) = (φ(v_1), ..., φ(v_{m−1}), Σ_{i=1}^{m−1} α_i φ(v_i)), whose last coordinate is determined by the first m − 1, so Γ does not cover all of F^m, a contradiction. Thus, v_1, ..., v_m is linearly independent.

7. Because φ_j(x^j) = (x^j)^{(j)}(0)/j! = 1 and φ_j(x^i) = 0 for i ≠ j.

8. (a) True.
(b) The dual basis of the basis in (a) is φ_0, φ_1, ..., φ_m where φ_j(p) = p^{(j)}(5)/j!.

9. Since φ_1, ..., φ_n is the dual basis of v_1, ..., v_n, for any ψ ∈ V' there exist α_1, ..., α_n ∈ F with ψ = Σ_{i=1}^n α_i φ_i. Note that (Σ_{i=1}^n α_i φ_i)(v_j) = α_j, so ψ(v_j) = α_j. Thus, ψ = Σ_{i=1}^n ψ(v_i) φ_i.

10. We have (S + T)'(φ) = φ ∘ (S + T) = φ ∘ S + φ ∘ T = S'(φ) + T'(φ) = (S' + T')(φ). Thus, (S + T)' = S' + T'. Similarly, (λT)' = λT'.

11. If there exist such (c_1, ..., c_m) ∈ F^m and (d_1, ..., d_n) ∈ F^n then A_{·,k} = d_k · C, where C is the m-by-1 matrix with C_{i,1} = c_i. Hence, the rank of A, which is the dimension of span(A_{·,1}, A_{·,2}, ..., A_{·,n}), is 1.
Conversely, if the rank of A is 1 then the dimension of span(A_{·,1}, A_{·,2}, ..., A_{·,n}) is 1, which means A_{·,k} = d_k · C for some C ∈ F^{m,1} and scalars d_k; taking c_j = C_{j,1} gives A_{j,k} = c_j d_k.

12. Let I be the identity map on V; then the dual map satisfies I'(φ) = φ ∘ I = φ, so I' is the identity map on V'.

13. (a) (T'(φ_1))(x, y, z) = (φ_1 ∘ T)(x, y, z) = φ_1(4x + 5y + 6z, 7x + 8y + 9z) = 4x + 5y + 6z, and (T'(φ_2))(x, y, z) = 7x + 8y + 9z.
(b) According to exercise 9 (3F), since T'(φ_1) ∈ (R^3)' and ψ_1, ψ_2, ψ_3 is the dual basis of (R^3)',
T'(φ_1) = (T'(φ_1))(1, 0, 0) ψ_1 + (T'(φ_1))(0, 1, 0) ψ_2 + (T'(φ_1))(0, 0, 1) ψ_3
= 4ψ_1 + 5ψ_2 + 6ψ_3.
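The computation above can be checked with matrices (a numerical illustration not in the original notes): with respect to dual bases M(T') = M(T)^t, so the coordinates of T'(φ_1) are the first row of M(T).

```python
import numpy as np

# M(T) for T(x, y, z) = (4x+5y+6z, 7x+8y+9z) w.r.t. the standard bases.
A = np.array([[4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
phi1 = np.array([1.0, 0.0])   # phi_1 picks out the first coordinate of R^2
coords = A.T @ phi1           # coordinates of T'(phi_1) in psi_1, psi_2, psi_3
assert np.allclose(coords, [4.0, 5.0, 6.0])
```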

14. (a) We have (T'(φ))(p) = (φ ∘ T)(p) = φ(x²p(x) + p''(x)) = (x²p(x) + p''(x))'(4) = (2xp(x) + x²p'(x) + p'''(x))(4) = 8p(4) + 16p'(4) + p'''(4).
(b) (T'(φ))(x³) = φ(T(x³)) = φ(x⁵ + 6x) = ∫_0^1 (x⁵ + 6x) dx = 1/6 + 3.

15. If T = 0 then T'(φ) = φ ∘ T = 0 for any φ ∈ W'. Hence, T' = 0.
Conversely, suppose T' = 0. Let w_1, ..., w_m be a basis of W with dual basis ψ_1, ..., ψ_m. For any v ∈ V write Tv = Σ_{i=1}^m α_i w_i. For each j,
0 = (T'(ψ_j))(v) = ψ_j(Tv) = α_j.
Thus, α_j = 0 for all 1 ≤ j ≤ m. This follows Tv = 0 for all v ∈ V, i.e. T = 0.

16. Let S be the map that takes T ∈ L(V, W) to T' ∈ L(W', V'). S is a linear map since S(T_1 + T_2) = (T_1 + T_2)' = (T_1)' + (T_2)' = S(T_1) + S(T_2) and S(λT_1) = (λT_1)' = λ(T_1)' = λS(T_1).
We prove S is injective. Indeed, if S(T) = T' = 0 then according to exercise 15 (3F), T = 0. Thus, S is injective.
We prove S is surjective. Let v_1, ..., v_n be a basis of V with dual basis φ_1, ..., φ_n of V', and let w_1, ..., w_m be a basis of W with dual basis ψ_1, ..., ψ_m of W'. Given Λ ∈ L(W', V'), set A_{i,k} = (Λ(ψ_i))(v_k) for every 1 ≤ i ≤ m, 1 ≤ k ≤ n, and define T ∈ L(V, W) by Tv_k = Σ_{i=1}^m A_{i,k} w_i. We will prove that this linear map T satisfies T' = Λ. Indeed,
ψ_j(Tv_k) = ψ_j(Σ_{i=1}^m A_{i,k} w_i) = A_{j,k} = (Λ(ψ_j))(v_k) (because ψ_j(w_i) = 0 for i ≠ j).
This is true for all 1 ≤ k ≤ n, so for any v ∈ V, ψ_j(Tv) = (Λ(ψ_j))(v), i.e. T'(ψ_j) = ψ_j ∘ T = Λ(ψ_j). This again is true for every 1 ≤ j ≤ m, so T'(ψ) = Λ(ψ) for any ψ ∈ W', which means S(T) = T' = Λ. Thus, S is surjective.
In conclusion, S is an isomorphism of L(V, W) onto L(W', V').

17. For any φ ∈ U^0, φ(u) = 0 for any u ∈ U, i.e. U ⊂ null φ. Thus, U^0 ⊂ {φ ∈ V' : U ⊂ null φ}. Conversely, if U ⊂ null φ then φ(u) = 0 for every u ∈ U, so φ ∈ U^0. In conclusion, U^0 = {φ ∈ V' : U ⊂ null φ}.

18. U = {0} iff dim U = 0 iff dim U^0 = dim V − dim U = dim V = dim V' iff U^0 = V', since U^0 is a subspace of V'.

19. U = V iff dim U = dim V iff dim U^0 = 0 iff U^0 = {0}.

20. If U ⊂ W then for any φ ∈ W^0 we have φ(w) = 0 for any w ∈ W, hence φ(u) = 0 for any u ∈ U ⊂ W. Thus, φ ∈ U^0. Hence, W^0 ⊂ U^0.

21. Let w_1, ..., w_n be a basis of W. If U is not a subspace of W then there exists v ∈ U with v ∉ W. Hence, the basis w_1, ..., w_n of W can be extended to a basis w_1, ..., w_n, v, v_1, ..., v_m of V. Let φ ∈ V' with φ(w_i) = 0 and φ(v) = 1. This follows φ ∈ W^0 but φ ∉ U^0, a contradiction since W^0 ⊂ U^0. Thus, U ⊂ W.

22. Since U ⊂ U + W, according to exercise 20, (U + W)^0 ⊂ U^0. Similarly, (U + W)^0 ⊂ W^0. Thus, (U + W)^0 ⊂ U^0 ∩ W^0.
On the other hand, if φ ∈ U^0 ∩ W^0 then φ(u) = 0 for any u ∈ U or u ∈ W. Thus, φ(u + w) = 0 for any u ∈ U, w ∈ W, so φ ∈ (U + W)^0. We follow U^0 ∩ W^0 ⊂ (U + W)^0.
In conclusion, U^0 ∩ W^0 = (U + W)^0.

23. For any φ ∈ U^0, ψ ∈ W^0, we have φ + ψ ∈ U^0 + W^0 and (φ + ψ)(u) = 0 for any u ∈ U ∩ W. Thus, U^0 + W^0 ⊂ (U ∩ W)^0.
Since U, W are subspaces of V, U ∩ W is also a subspace of V. Let v_1, ..., v_n be a basis of U ∩ W, which can be extended to a basis v_1, ..., v_n, u_1, ..., u_m of U. The basis v_1, ..., v_n of U ∩ W can also be extended to a basis v_1, ..., v_n, w_1, ..., w_k of W. Note that v_1, ..., v_n, u_1, ..., u_m, w_1, ..., w_k is linearly independent, so this list can be extended further to a basis v_1, ..., v_n, u_1, ..., u_m, w_1, ..., w_k, z_1, ..., z_h of V. Hence, for any T ∈ (U ∩ W)^0 there exist φ ∈ U^0 and ψ ∈ W^0 with φ(w_i) = Tw_i, φ(z_i) = 1, and ψ(u_i) = Tu_i, ψ(z_i) = Tz_i − 1. Hence, for any v ∈ V with v = Σ_{i=1}^n α_i v_i + Σ_{i=1}^m β_i u_i + Σ_{i=1}^k γ_i w_i + Σ_{i=1}^h a_i z_i,
(φ + ψ)(v) = Σ_{i=1}^m β_i ψ(u_i) + Σ_{i=1}^k γ_i φ(w_i) + Σ_{i=1}^h a_i (φ + ψ)(z_i)
= Σ_{i=1}^m β_i Tu_i + Σ_{i=1}^k γ_i Tw_i + Σ_{i=1}^h a_i Tz_i
= Tv,
where the terms with the v_i vanish because v_i ∈ U ∩ W, so φ(v_i) = ψ(v_i) = 0 = Tv_i. Thus, T = φ + ψ. This follows (U ∩ W)^0 ⊂ U^0 + W^0. Thus, (U ∩ W)^0 = U^0 + W^0.


24. Done.

25. Let S = {v ∈ V : φ(v) = 0 for every φ ∈ U^0}. If v ∈ U then obviously v ∈ S; thus, U ⊂ S. If v ∈ S but v ∉ U, then there exists a basis u_1, ..., u_n, v, v_1, ..., v_m of V where u_1, ..., u_n is a basis of U. Hence, there exists φ ∈ U^0 with φ(u_i) = 0 and φ(v) = 1. Thus, v ∉ S, a contradiction. We find that if v ∈ S then v ∈ U. In conclusion, U = S = {v ∈ V : φ(v) = 0 for every φ ∈ U^0}.

26. Let S = {v ∈ V : φ(v) = 0 for every φ ∈ Γ}. If φ ∈ Γ then φ(v) = 0 for every v ∈ S, so φ ∈ S^0. Thus, Γ ⊂ S^0.
Let v_1, ..., v_n be a basis of S, which can be extended to a basis v_1, ..., v_n, u_1, ..., u_m of V. Let φ_1, ..., φ_n, ψ_1, ..., ψ_m be the dual basis of V'. Hence, for any φ ∈ Γ ⊂ V' we have φ = Σ_{i=1}^n α_i φ_i + Σ_{j=1}^m β_j ψ_j. Note that ψ_1, ..., ψ_m is a basis of S^0, and since v_k ∈ S we have ψ_j(v_k) = 0 for any 1 ≤ j ≤ m. Thus,
φ(v_k) = Σ_{i=1}^n α_i φ_i(v_k) + Σ_{j=1}^m β_j ψ_j(v_k) = α_k.
On the other hand, since φ ∈ Γ and v_k ∈ S, we have φ(v_k) = 0. Thus, α_k = 0 for every 1 ≤ k ≤ n. This follows ψ_1, ..., ψ_m spans Γ. Note that ψ_1, ..., ψ_m is linearly independent. Thus, ψ_1, ..., ψ_m is a basis of Γ, which leads to Γ = span(ψ_1, ..., ψ_m) = S^0.

27. According to theorem 5.10.4, (range T)^0 = null T' = {λφ : λ ∈ R}. According to exercise 25 (3F),
range T = {p ∈ P(R) : (λφ)(p) = 0 for any λ ∈ R}
= {p ∈ P(R) : λp(8) = 0 for any λ ∈ R}
= {p ∈ P(R) : p(8) = 0}.

28. According to theorem 5.10.4, (range T)^0 = null T' = {λφ : λ ∈ F}. According to exercise 25 (3F),
range T = {v ∈ V : (λφ)(v) = 0 for any λ ∈ F} = {v ∈ V : φ(v) = 0} = null φ.

29. According to theorem 5.10.6, (null T)^0 = range T' = {λφ : λ ∈ F}. According to exercise 25 (3F),
null T = {v ∈ V : (λφ)(v) = 0 for any λ ∈ F} = null φ.


30. Let U = (null φ_1) ∩ (null φ_2) ∩ ··· ∩ (null φ_m); then U is a subspace of V and U^0 = {φ ∈ V' : φ(u) = 0 for any u ∈ U}. Thus, we find that φ_i ∈ U^0 for every 1 ≤ i ≤ m. Since φ_1, ..., φ_m is linearly independent, dim U^0 ≥ m. Thus, dim U = dim V − dim U^0 ≤ dim V − m.
On the other hand, since V is finite-dimensional, let dim V = n ≥ m. Let A_i be a set containing a basis of null φ_i. Then dim U ≥ |A_1 ∩ A_2 ∩ ··· ∩ A_m|, so it suffices to prove |A_1 ∩ A_2 ∩ ··· ∩ A_m| ≥ n − m.
Observe that since φ_i is in the linearly independent list, φ_i ≠ 0, so dim range φ_i = 1. Thus, |A_i| = dim null φ_i = dim V − dim range φ_i = n − 1. We prove by induction on m that |A_1 ∩ A_2 ∩ ··· ∩ A_m| ≥ n − m. Indeed, it is true for m = 1. If it is true for m − 1, consider m; then
|A_1 ∩ ··· ∩ A_m| = |A_1 ∩ ··· ∩ A_{m−1}| + |A_m| − |(A_1 ∩ ··· ∩ A_{m−1}) ∪ A_m|
≥ n − (m − 1) + (n − 1) − n = n − m.
Thus, we obtain dim U ≥ |A_1 ∩ ··· ∩ A_m| ≥ n − m. Hence, we find dim U = dim V − m.
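The conclusion of exercise 30 can be checked numerically (an illustration not in the original notes): stacking m independent functionals on F^n as the rows of a matrix, the intersection of their null spaces is the null space of the matrix, which has dimension n − m.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 7, 3
Phi = rng.standard_normal((m, n))        # rows play the role of phi_1, ..., phi_m
assert np.linalg.matrix_rank(Phi) == m   # independent (true with probability 1)
null_dim = n - np.linalg.matrix_rank(Phi)  # dim of the joint null space
assert null_dim == n - m
```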

31. According to exercise 30 (3F), dim((null φ_1) ∩ (null φ_2) ∩ ··· ∩ (null φ_n)) = 0 and dim((null φ_2) ∩ ··· ∩ (null φ_n)) = 1, so there exists v_1 ∈ V with φ_1(v_1) = 1 but φ_i(v_1) = 0 for 2 ≤ i ≤ n. Similarly, there exist v_2, v_3, ..., v_n.
Note that v_k ∉ span(v_1, ..., v_{k−1}), because otherwise v_k = Σ_{i=1}^{k−1} α_i v_i would give 1 = φ_k(v_k) = Σ_{i=1}^{k−1} α_i φ_k(v_i) = 0, a contradiction. This follows v_1, ..., v_n is linearly independent, so v_1, ..., v_n is a basis of V whose dual basis is φ_1, ..., φ_n.

32. (a) T is surjective since range T = span(Tu_1, ..., Tu_n) = span(v_1, ..., v_n) = V. Thus, the operator T is invertible.
(b) Since T is invertible, Tu_1, ..., Tu_n is linearly independent, which follows M(Tu_1), ..., M(Tu_n) is linearly independent, i.e. the columns of M(T) are linearly independent.
(c) Since dim span(M(Tu_1), ..., M(Tu_n)) = n = dim F^{n,1}, the columns of M(T) span F^{n,1}.
(d) Let φ_1, ..., φ_n be the dual basis of u_1, ..., u_n and ψ_1, ..., ψ_n the dual basis of v_1, ..., v_n; T' ∈ L(V') is the dual map of T with respect to these two dual bases. Since M(T') = (M(T))^t, the i-th row of M(T) is the i-th column of M(T'). Since T is invertible, T' is also invertible. Hence, T'(ψ_1), ..., T'(ψ_n) is linearly independent. This follows M(T'ψ_1), M(T'ψ_2), ..., M(T'ψ_n) is linearly independent, i.e. the columns of M(T') are linearly independent, i.e. the rows of M(T) are linearly independent in F^{1,n}.
(e) The row rank of M(T) is n, so the rows of M(T) span F^{1,n}.
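The chain of equivalences in exercise 32 can be illustrated numerically (a check not in the original notes): for an invertible M(T), the columns and the rows both have full rank.

```python
import numpy as np

# A concrete invertible 3x3 matrix playing the role of M(T).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
n = M.shape[0]
assert abs(np.linalg.det(M)) > 1e-12     # T is invertible
assert np.linalg.matrix_rank(M) == n     # columns independent, span F^{n,1}
assert np.linalg.matrix_rank(M.T) == n   # rows independent, span F^{1,n}
```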

33. Let S be the function that takes A ∈ F^{m,n} to A^t ∈ F^{n,m}. Then S(A + B) = (A + B)^t = A^t + B^t = S(A) + S(B) and S(λA) = (λA)^t = λA^t = λS(A). Thus, S is a linear map from F^{m,n} onto F^{n,m}.
S is easily seen to be surjective: for any A ∈ F^{n,m} there exists B = A^t ∈ F^{m,n} with S(B) = S(A^t) = (A^t)^t = A.
If S(A) = 0 then A = 0. Thus, S is injective, so S is invertible.


34. (a) For any φ ∈ V', (Λ(v_1 + v_2))(φ) = φ(v_1 + v_2) = φ(v_1) + φ(v_2) = (Λv_1)(φ) + (Λv_2)(φ). Thus, Λ(v_1 + v_2) = Λv_1 + Λv_2. Similarly, Λ(λv) = λ(Λv). Thus, Λ is a linear map from V to V''.
(b) For any v ∈ V, we will prove that (Λ ∘ T)(v) = (T'' ∘ Λ)(v), i.e. Λ(Tv) = T''(Λv). In order to prove this, we show that (Λ(Tv))(φ) = (T''(Λv))(φ) for any φ ∈ V'. Indeed, we have (Λ(Tv))(φ) = φ(Tv), and since T'' = (T')' ∈ L(V''),
(T''(Λv))(φ) = ((Λv) ∘ T')(φ) = (Λv)(T'(φ)) = (T'(φ))(v) = (φ ∘ T)(v) = φ(Tv).
Thus, (T''(Λv))(φ) = (Λ(Tv))(φ) for any φ ∈ V', which follows Λ(Tv) = T''(Λv) for any v ∈ V, which leads to Λ ∘ T = T'' ∘ Λ.
(c) If Λv = 0 then (Λv)(φ) = φ(v) = 0 for any φ ∈ V', which leads to v = 0. Thus, Λ is injective.
Let v_1, ..., v_n be a basis of V and φ_1, ..., φ_n the dual basis of V'. For any S ∈ V'', we show that Λv = S where v = Σ_{i=1}^n S(φ_i)v_i. Indeed, we have φ_i(v) = S(φ_i) for every 1 ≤ i ≤ n. This follows φ(v) = S(φ) for any φ ∈ V'. Thus, Λv = S, so Λ is surjective.
In conclusion, Λ is an isomorphism from V onto V''.

35. Denote by φ_i ∈ (P(R))' the functional with φ_i(x^i) = 1 and φ_i(x^j) = 0 for j ≠ i. This follows for any φ ∈ (P(R))' that φ = Σ_{i≥0} φ(x^i) φ_i.
Let e_i ∈ R^∞ be e_i = (0, ..., 0, 1, 0, ...) with the 1 in the i-th position; then any e ∈ R^∞ can be written e = Σ_{i≥0} α_i e_i.
Define a linear map T from (P(R))' onto R^∞ by T(φ_i) = e_i. T is surjective. If T(φ) = 0 for some φ = Σ_{i≥0} φ(x^i) φ_i, then T(φ) = Σ_{i≥0} φ(x^i) e_i = 0, thus φ(x^i) = 0 for any i ≥ 0. This follows φ = 0. Thus, T is injective. We find T is an isomorphism from (P(R))' onto R^∞.

36. (a) Since range i = U, according to theorem 5.10.4, null i' = (range i)^0 = U^0.
(b) Since null i = {0}, we have (null i)^0 = U'. According to theorem 5.10.6, range i' = (null i)^0 = U'.
(c) Since null i' = U^0, according to theorem 5.8.3, ĩ' is injective, and range ĩ' = range i' = U', so ĩ' is surjective. Thus, ĩ' is an isomorphism from V'/U^0 = V'/(null i') onto U'.

37. (a) According to theorem 5.10.5, π' is injective iff π is surjective, which is true.
(b) According to theorem 5.10.6, range π' = (null π)^0 = U^0.
(c) From (a) and (b), π' is an isomorphism from (V/U)' onto U^0.


6. Chapter 4: Polynomials

Theorem 6.0.1 (4.17) Suppose p ∈ P(R) is a nonconstant polynomial. Then p has a unique factorization (except for the order of the factors) of the form
p(x) = c(x − λ_1) ··· (x − λ_m)(x² + b_1 x + c_1) ··· (x² + b_M x + c_M),
where c, λ_1, ..., λ_m, b_1, ..., b_M, c_1, ..., c_M ∈ R with b_j² < 4c_j for each j.

6.1. Exercise 4

1. Done.

2. No, since (−x^m + x) + x^m = x does not have degree m.

3. Yes.

4. Pick p = (x − λ_1)^{a_1}(x − λ_2)^{a_2} ··· (x − λ_m)^{a_m} with a_i ∈ Z, a_i ≥ 1 and Σ_{i=1}^m a_i = n.

5. Let A be the (m+1)-by-(m+1) matrix with A_{i,j} = z_i^{j−1}. Define a linear map T ∈ L(R^{m+1,1}) by Tx = Ax. First, we prove that T is invertible. Indeed, if there were x ≠ 0, x = (a_0, a_1, ..., a_m)^t ∈ R^{m+1,1}, with Tx = Ax = 0, then the polynomial a_0 + a_1 x + a_2 x² + ... + a_m x^m would have m + 1 distinct zeros z_1, ..., z_m, z_{m+1}, a contradiction. Thus, we must have x = 0, which follows the operator T is injective, hence invertible. Therefore, for any W ∈ R^{m+1,1} with W_{i,1} = w_i there exists a unique x ∈ R^{m+1,1}, x_{i,1} = α_{i−1}, with Tx = Ax = W. Note that A_{i,·} x = (Ax)_{i,1} = w_i, so the polynomial p = α_0 + α_1 x + α_2 x² + ... + α_m x^m satisfies the condition. Since this x ∈ R^{m+1,1} is uniquely determined, p is uniquely determined.

6. If p(x) has a zero λ then there exists q ∈ P(C) with deg q = m − 1 and p(x) = (x − λ)q(x). Hence, p'(x) = q(x) + (x − λ)q'(x). If p(x) and p'(x) share a zero, say λ, then p'(λ) = 0, which follows q(λ) = 0, so q(x) = (x − λ)r(x) and p(x) = (x − λ)²r(x) with r ∈ P(C), deg r = m − 2. This follows p has at most m − 1 distinct zeros.
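A small numerical illustration of exercise 6 (not in the original notes), with an assumed example polynomial p(x) = (x − 1)²(x − 2): p and p' share the zero 1, and the degree-3 polynomial has only 2 distinct zeros.

```python
import numpy as np

# p(x) = (x-1)^2 (x-2) = x^3 - 4x^2 + 5x - 2, coefficients in increasing degree.
p = np.polynomial.Polynomial([-2.0, 5.0, -4.0, 1.0])
dp = p.deriv()
assert abs(p(1.0)) < 1e-12 and abs(dp(1.0)) < 1e-12  # p and p' share the zero 1
roots = np.sort_complex(p.roots())                   # approx [1, 1, 2]
assert np.allclose(roots, [1.0, 1.0, 2.0], atol=1e-6)
```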

7. According to theorem 6.0.1, p(x) has a unique factorization
p(x) = c(x − λ_1) ··· (x − λ_m)(x² + b_1 x + c_1) ··· (x² + b_M x + c_M).
If p(x) had no real zero, this factorization would contain no linear factors:
p(x) = c(x² + b_1 x + c_1) ··· (x² + b_M x + c_M).
This follows the degree of p is even, a contradiction. Thus, a polynomial with odd degree must have a real zero.


8. There exists q(x) ∈ P(R) with p(x) = (x − 3)q(x) + r for some r ∈ R. We can see right away that r = p(3). We prove that Tp = q ∈ P(R). Indeed, q ∈ R^R and for x ≠ 3,
q(x) = (p(x) − r)/(x − 3) = (p(x) − p(3))/(x − 3).
For x = 3, observe that p'(x) = q(x) + (x − 3)q'(x), so p'(3) = q(3). Thus, Tp = q is indeed a polynomial in P(R).
We have (T(p + q))(3) = (p + q)'(3) = p'(3) + q'(3) = (Tp)(3) + (Tq)(3), and for x ≠ 3,
(T(p + q))(x) = ((p + q)(x) − (p + q)(3))/(x − 3) = (p(x) − p(3))/(x − 3) + (q(x) − q(3))/(x − 3) = (Tp)(x) + (Tq)(x).
Thus, T(p + q) = Tp + Tq. Similarly, T(λp) = λTp. We find T is a linear map.

9. If p ∈ P(C) then p has a unique factorization of the form p(z) = c(z − λ_1) ··· (z − λ_m) where c, λ_1, ..., λ_m ∈ C. Hence,
q(z) = p(z) · conj(p(conj z))
= c Π_{i=1}^m (z − λ_i) · conj(c) Π_{i=1}^m (z − conj(λ_i))
= |c|² Π_{i=1}^m [(z − λ_i)(z − conj(λ_i))]
= |c|² Π_{i=1}^m (z² − 2(Re λ_i)z + |λ_i|²).
Thus, q(z) is a polynomial with real coefficients.
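This can be checked numerically (an illustration not in the original notes): as a polynomial, conj(p(conj z)) has coefficients conj(c_i), so the coefficients of q are the convolution of the coefficients of p with their conjugates, and they come out real.

```python
import numpy as np

# Coefficients of p in increasing degree (an arbitrary complex example).
c = np.array([1 + 2j, -3j, 2 - 1j])
q = np.polynomial.polynomial.polymul(c, np.conj(c))  # coefficients of q(z)
assert np.allclose(q.imag, 0.0)                      # q has real coefficients
```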

10. Let q ∈ P(R) be the polynomial with q(x_i) = p(x_i) ∈ R; according to exercise 5, q is uniquely determined in P(R).
Let x_q = M(q) with respect to the standard basis of P(C). Repeat the construction of the invertible operator T of exercise 5 (4), but this time with T ∈ L(C^{m+1,1}). Then Tx_p = Tx_q = (p(x_0), p(x_1), ..., p(x_m))^t. Since T is injective, x_p = x_q, which follows p = q. Thus, p is a polynomial with real coefficients.

11. If deg q ≥ deg p then there exist r, s ∈ P(F) with q = pr + s and deg s < deg p. We find q + U = pr + s + U = s + U. Each such s ∈ P(F) with deg s < deg p can be represented as a linear combination of 1, x, ..., x^{deg(p)−1}. Thus, it follows that 1 + U, x + U, ..., x^{deg(p)−1} + U spans P(F)/U. This list is also linearly independent, so it is a basis of P(F)/U. Hence, dim P(F)/U = deg p.


7. Chapter 5: Eigenvalues, Eigenvectors, and Invariant Subspaces

7.1. 5A: Invariant subspaces

Example 7.1.1 Suppose T ∈ L(F²) is defined by T(w, z) = (−z, w). Then T does not have an eigenvalue if F = R. However, T has eigenvalues ±i if F = C.

Theorem 7.1.2 (5.6) Suppose V is finite-dimensional, T ∈ L(V) and λ ∈ F. Then the following are equivalent:
(a) λ is an eigenvalue of T;
(b) T − λI is not injective;
(c) T − λI is not surjective;
(d) T − λI is not invertible.
This is deduced from theorem 5.6.4.

Theorem 7.1.3 (5.10) Let T ∈ L(V). Suppose λ_1, ..., λ_m are distinct eigenvalues of T and v_1, ..., v_m are corresponding eigenvectors. Then v_1, ..., v_m is linearly independent.

Theorem 7.1.4 (5.13) Suppose V is finite-dimensional. Then each operator on V has at most dim V distinct eigenvalues.

7.2. Exercises 5A

1. (a) For any u ∈ U, Tu = 0 ∈ U, so U is invariant under T.
(b) For any u ∈ U, Tu ∈ range T ⊂ U, so U is invariant under T.

2. For every u ∈ null S, S(Tu) = T(Su) = T(0) = 0, so Tu ∈ null S. Thus, null S is invariant under T.

3. For every u ∈ range S there exists v ∈ V with Sv = u. Hence, Tu = T(Sv) = S(Tv) ∈ range S, so range S is invariant under T.

4. For any u_1 ∈ U_1, u_2 ∈ U_2, ..., u_m ∈ U_m, we have T(u_1 + ... + u_m) = Tu_1 + ... + Tu_m ∈ U_1 + ... + U_m. Thus, U_1 + ... + U_m is invariant under T.

5. Let U_1, ..., U_m be invariant subspaces of V under T. For any v ∈ S = U_1 ∩ ... ∩ U_m we have Tv ∈ U_i for each i, so Tv ∈ S and S is invariant under T.

6. If U ≠ {0} then there exists a basis u_1, ..., u_m of U. If there exists v ∈ V with v ∉ U, then we can choose an operator T ∈ L(V) with Tu_1 = v, which follows U is not invariant under T, a contradiction. Thus, every v ∈ V lies in U, i.e. U = V.


7. If T(x, y) = (−3y, x) = λ(x, y) for (x, y) ≠ (0, 0), then −3y = λx and x = λy, which follows −3y = λ²y. If y = 0 then x = 0, a contradiction. Hence, y ≠ 0, which means λ² = −3, impossible over R. Thus, T does not have any eigenvalue.

8. If T(w, z) = (z, w) = λ(w, z) for (w, z) ≠ (0, 0), then z = λw and w = λz, so z = λ²z. Since z ≠ 0 (otherwise w = 0 as well), λ² = 1, which follows λ = ±1. If λ = 1 the eigenvectors are (z, z); if λ = −1 the eigenvectors are (z, −z).
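Both operators can be checked numerically via their matrices (an illustration not in the original notes): T(x, y) = (−3y, x) has matrix [[0, −3], [1, 0]] with eigenvalues ±i√3 (none real), while T(w, z) = (z, w) has matrix [[0, 1], [1, 0]] with eigenvalues ±1.

```python
import numpy as np

ev1 = np.linalg.eigvals(np.array([[0.0, -3.0], [1.0, 0.0]]))  # T(x,y) = (-3y, x)
assert np.allclose(ev1.real, 0.0)                             # purely imaginary
assert np.allclose(sorted(ev1.imag), [-np.sqrt(3), np.sqrt(3)])

ev2 = np.linalg.eigvals(np.array([[0.0, 1.0], [1.0, 0.0]]))   # T(w,z) = (z, w)
assert np.allclose(sorted(ev2), [-1.0, 1.0])
```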

9. If T(z_1, z_2, z_3) = (2z_2, 0, 5z_3) = λ(z_1, z_2, z_3) for (z_1, z_2, z_3) ≠ 0, then λz_1 = 2z_2, λz_2 = 0, λz_3 = 5z_3. If λ = 0 then z_2 = z_3 = 0, so the eigenvectors are (z, 0, 0). If λ ≠ 0 then z_1 = z_2 = 0 and λz_3 = 5z_3, so λ = 5, with respective eigenvectors (0, 0, z).

10. (a) If T(x_1, ..., x_n) = (x_1, 2x_2, 3x_3, ..., nx_n) = λ(x_1, ..., x_n) for (x_1, ..., x_n) ≠ (0, ..., 0), then (λ − i)x_i = 0 for i = 1, ..., n. This follows λ ∈ {1, ..., n}. If λ = i the eigenvectors are α · e_i, where e_i ∈ F^n has all coordinates 0 except the i-th coordinate, which is 1.
(b) Let U be an invariant subspace of V under T. For any u = (x_1, ..., x_n) ∈ U we have v = (x_1, 2x_2, ..., nx_n) ∈ U. With only these two vectors in U, we can choose certain α, β ∈ F so that the 1st coordinate of w = αu + βv is 0. We also have Tw ∈ U, and from w, Tw we can obtain a new vector in U whose 1st and 2nd coordinates are 0. Continuing like this, we can eventually obtain that e_i = (0, ..., 0, 1, 0, ..., 0) ∈ U (the 1 in position i) whenever x_i ≠ 0 for some (x_1, ..., x_n) ∈ U. This follows the subspace U is spanned by the list of those e_i with x_i ≠ 0 for some (x_1, ..., x_n) ∈ U. Thus, every subset of the standard basis of F^n spans an invariant subspace of V.

11. Suppose Tp = p' = λp for p ≠ 0. If λ = 0 then p' = 0, so the respective eigenvectors p are the nonzero constants. If λ ≠ 0 then, since deg p' < deg p, we have λp ≠ p' for all p ≠ 0, so there is no eigenvector in this case. Thus, the only eigenvalue is λ = 0, with the nonzero constants p = c ∈ R as eigenvectors.

12. Suppose (Tp)(x) = xp'(x) = λp(x) for some p ≠ 0. Let p = a_0 + a_1 x + ... + a_4 x⁴; then xp'(x) = a_1 x + 2a_2 x² + 3a_3 x³ + 4a_4 x⁴. This follows λa_i = i·a_i for i = 0, ..., 4. Hence, λ ∈ {0, 1, 2, 3, 4} so that (a_0, ..., a_4) ≠ (0, ..., 0). If λ = i then p = ax^i is an eigenvector for any a ∈ R, a ≠ 0.

13. Since $V$ is finite-dimensional, by Theorem 7.1.4 there are at most $\dim V$ distinct eigenvalues of $T$; in other words, there are at most $\dim V$ values $\alpha \in \mathbf{F}$ for which $T - \alpha I$ is not invertible, by Theorem 7.1.2. Thus we can clearly choose $\alpha$ with $|\alpha - \lambda| < \frac{1}{1000}$ such that $T - \alpha I$ is invertible.

14. Suppose $P(u + w) = \lambda(u + w) = u$ for $u + w \ne 0$, $u \in U$, $w \in W$; then $(1 - \lambda)u = \lambda w$. If $\lambda \ne 0, 1$ then $w \in U$, a contradiction since $V = U \oplus W$. If $\lambda = 0$ then $u = 0$, so the eigenvectors are the nonzero $w \in W$. If $\lambda = 1$ then $w = 0$, so the eigenvectors are the nonzero $u \in U$.

15. (a) If $\lambda$ is an eigenvalue of $T$ then there exists $v \in V$, $v \ne 0$, with $Tv = \lambda v$. Since $S$ is invertible, there exists $u \in V$ with $Su = v$, i.e. $u = S^{-1}v$. We have $S^{-1}TSu = S^{-1}T(Su) = S^{-1}Tv = S^{-1}(\lambda v) = \lambda u$. Thus $\lambda$ is also an eigenvalue of $S^{-1}TS$.


Conversely, if $\lambda$ is an eigenvalue of $S^{-1}TS$ then there exists $v \ne 0$ with $S^{-1}TSv = \lambda v$, i.e. $S^{-1}(TSv) = \lambda S^{-1}(Sv)$. Since $S^{-1}$ is invertible, $T(Sv) = \lambda\, Sv$. Thus $\lambda$ is also an eigenvalue of $T$.

(b) If $v$ is an eigenvector of $T$ corresponding to $\lambda$, then $S^{-1}v$ is an eigenvector of $S^{-1}TS$ corresponding to $\lambda$.

16. Let $v_1,\dots,v_n$ be a basis of $V$ and for $1 \le k \le n$ write $Tv_k = \sum_{i=1}^n \alpha_{i,k} v_i$, where $\alpha_{i,k} \in \mathbf{R}$ by the problem's hypothesis. Suppose $\lambda$ is an eigenvalue of $T$ with eigenvector $v = \sum_{i=1}^n \beta_i v_i$, $\beta_i \in \mathbf{C}$. We show that $\bar v = \sum_{i=1}^n \bar\beta_i v_i$ is an eigenvector corresponding to the eigenvalue $\bar\lambda$ of $T$, i.e. $T\bar v = \bar\lambda \bar v$.

Indeed, from $Tv = \lambda v$, where $Tv = \sum_{k=1}^n \beta_k T v_k = \sum_{i=1}^n \left(\sum_{k=1}^n \beta_k \alpha_{i,k}\right) v_i$, we get $\sum_{k=1}^n \beta_k \alpha_{i,k} = \lambda \beta_i$ for each $i = 1,\dots,n$. Conjugating (the $\alpha_{i,k}$ are real) gives $\sum_{k=1}^n \bar\beta_k \alpha_{i,k} = \bar\lambda \bar\beta_i$. Thus

$$T\bar v = \sum_{i=1}^n \left(\sum_{k=1}^n \bar\beta_k \alpha_{i,k}\right) v_i = \sum_{i=1}^n \bar\lambda \bar\beta_i v_i = \bar\lambda \cdot \bar v.$$

Thus $\bar\lambda$ is also an eigenvalue of $T$.

17. $T(x_1, x_2, x_3, x_4) = (x_2, x_3, x_4, -x_1)$.

18. If $T(z_1, z_2, \dots) = (0, z_1, \dots) = \lambda(z_1, z_2, \dots)$ for some $(z_1, z_2, \dots) \ne (0, 0, \dots)$, then $\lambda z_1 = 0$ and $\lambda z_{i+1} = z_i$ for all $i \ge 1$. In both cases $\lambda = 0$ and $\lambda \ne 0$ we obtain $z_i = 0$ for all $i \ge 1$, a contradiction. Thus $T$ does not have an eigenvalue.

19. If $T(x_1,\dots,x_n) = \lambda(x_1,\dots,x_n) = (x_1 + \dots + x_n, \dots, x_1 + \dots + x_n)$ for some $(x_1,\dots,x_n) \ne (0,\dots,0)$, then $\lambda x_i = x_1 + \dots + x_n$ for all $1 \le i \le n$. Adding the $n$ equations gives $\lambda(x_1 + \dots + x_n) = n(x_1 + \dots + x_n)$. If $\lambda = 0$ the equations reduce to $x_1 + \dots + x_n = 0$, so (for $n \ge 2$) $0$ is an eigenvalue with eigenvectors the nonzero vectors whose coordinates sum to $0$. If $\lambda \ne 0$ then $\lambda x_i = x_1 + \dots + x_n$ forces $x_1 = \dots = x_n \ne 0$, and the displayed identity gives $\lambda = n$; the corresponding eigenvectors are $\alpha \cdot (1, 1, \dots, 1)$ for nonzero $\alpha \in \mathbf{F}$.

20. If $T(z_1, z_2, \dots) = \lambda(z_1, z_2, \dots) = (z_2, z_3, \dots)$ for some $(z_1, z_2, \dots) \ne (0, 0, \dots)$, then $\lambda z_i = z_{i+1}$ for all $i \ge 1$. Every $\lambda \in \mathbf{F}$ is therefore an eigenvalue, with corresponding eigenvectors $\alpha(1, \lambda, \lambda^2, \dots)$ for nonzero $\alpha \in \mathbf{F}$.

21. If $\lambda$ is an eigenvalue of $T$ then there exists $v \in V$, $v \ne 0$, with $Tv = \lambda v$ (note $\lambda \ne 0$ since $T$ is invertible). Hence $T^{-1}v = T^{-1}\!\left(\lambda^{-1} \cdot Tv\right) = \frac{1}{\lambda}v$, so $\frac{1}{\lambda}$ is an eigenvalue of $T^{-1}$. Similarly, if $\frac{1}{\lambda}$ is an eigenvalue of $T^{-1}$ then $\lambda$ is an eigenvalue of $T$. We also find that $T$ and $T^{-1}$ have the same eigenvectors.

22. If $w = -v$ then $Tv = 3w = -3v$, so $-3$ is an eigenvalue of $T$. If $v + w \ne 0$ then $T(v + w) = 3w + 3v = 3(v + w)$, so $3$ is an eigenvalue of $T$.

23. Suppose $\lambda \ne 0$ is an eigenvalue of $ST$, with $STv = \lambda v$, $v \ne 0$. Then $TS(Tv) = T(STv) = T(\lambda v) = \lambda\, Tv$. Since $Tv \ne 0$ (because $STv = \lambda v \ne 0$), $\lambda$ is also an eigenvalue of $TS$. If $\lambda = 0$ then $ST$ is not invertible, so $S$ or $T$ is not invertible, hence $TS$ is not invertible and $0$ is an eigenvalue of $TS$ as well. Similarly, every eigenvalue of $TS$ is an eigenvalue of $ST$.


24. (a) Let $x = \begin{pmatrix}1\\ \vdots\\ 1\end{pmatrix}$; since each row of $A$ sums to $1$, $Tx = Ax = x$, so $1$ is an eigenvalue of $T$.

(b) Note that the matrix $A$ is precisely $\mathcal{M}(T)$, and by Theorem 5.10.7, $\dim \operatorname{range}(T - I)$ equals the column rank of $A - I$. Since the sum of the entries in each column of $A$ is $1$, the sum of the entries in each column of $A - I$ is $0$. Let $e_i$ be the $i$-th row of $A - I$; then $-e_n = e_1 + \dots + e_{n-1}$, so the row rank of $A - I$ is at most $n - 1$, hence the column rank of $A - I$ is at most $n - 1$, i.e. $\dim \operatorname{range}(T - I) \le n - 1$. Thus $T - I$ is not surjective, hence not injective, and by Theorem 7.1.2, $1$ is an eigenvalue of $T$.

25. Let $Tu = \lambda_1 u$, $Tv = \lambda_2 v$, and $T(u + v) = \lambda_3(u + v)$; then $\lambda_1 u + \lambda_2 v = \lambda_3(u + v)$, i.e. $(\lambda_1 - \lambda_3)u = (\lambda_3 - \lambda_2)v$. If $\lambda_1 = \lambda_3$ then $\lambda_2 = \lambda_3 = \lambda_1$, as desired. If $\lambda_1 \ne \lambda_3$ then $\lambda_3 \ne \lambda_2$, which gives $u = \alpha v$ for some $\alpha \in \mathbf{F}$, $\alpha \ne 0$. Hence $Tu = \alpha Tv$, i.e. $\lambda_1 u = \alpha\lambda_2 v$, i.e. $\alpha(\lambda_1 - \lambda_2)v = 0$. Thus $\lambda_1 = \lambda_2$ in all cases.

26. By Exercise 25 (5A), all nonzero vectors of $V$ are eigenvectors corresponding to the same eigenvalue $\lambda$, i.e. $Tv = \lambda v$ for all $v \in V$. Thus $T = \lambda I$.

27. Let $v_1,\dots,v_n$ be a basis of $V$ and write $Tv_1 = \sum_{i=1}^n \alpha_i v_i$. Consider the $(n-1)$-dimensional invariant subspace $U_i = \operatorname{span}(v_1,\dots,v_{i-1},v_{i+1},\dots,v_n)$ for $i \ne 1$; since $v_1 \in U_i$, we have $Tv_1 \in U_i$, which forces $\alpha_i = 0$. This holds for every $i \ne 1$, so $Tv_1 = \alpha_1 v_1$. Similarly, $Tv_k = \alpha_k v_k$ for $k = 1,\dots,n$.

Now consider the $(n-1)$-dimensional invariant subspace $S_1 = \operatorname{span}(v_1 - v_2, v_1 - v_3, \dots, v_1 - v_n)$. Since $v_1 - v_2 \in S_1$, we have $\alpha_1 v_1 - \alpha_2 v_2 = T(v_1 - v_2) = \sum_{i=2}^n \beta_i(v_1 - v_i)$. Since $v_1,\dots,v_n$ is linearly independent, $\beta_i = 0$ for all $i \ge 3$. We obtain $\alpha_1 v_1 - \alpha_2 v_2 = \beta_2(v_1 - v_2)$, which brings us back to Exercise 25 (5A) and gives $\alpha_1 = \alpha_2$. Similarly, $\alpha_i = \alpha_j = \lambda$ for all $1 \le i < j \le n$, which leads to $Tv = \lambda v$ for all $v \in V$. Thus $T = \lambda I$.

28. As in Exercise 27 (5A), let $v_1,\dots,v_n$ ($n \ge 3$) be a basis of $V$. From $Tv_1 \in \operatorname{span}(v_1, v_2)$ and $Tv_1 \in \operatorname{span}(v_1, v_3)$, we deduce $Tv_1 = \alpha_1 v_1$. Similarly, $Tv_i = \alpha_i v_i$ for all $i = 1,\dots,n$.

Consider $v_1 - v_2$ in the $2$-dimensional invariant subspace $\operatorname{span}(v_1 - v_2, v_1 - v_3)$, so $\alpha_1 v_1 - \alpha_2 v_2 = T(v_1 - v_2) = \beta_1(v_1 - v_2) + \beta_2(v_1 - v_3)$. Since $v_1, v_2, v_3$ is linearly independent, $\beta_2 = 0$, and $\alpha_1 v_1 - \alpha_2 v_2 = \beta_1(v_1 - v_2)$, which by Exercise 25 (5A) gives $\alpha_1 = \alpha_2$. Similarly, $\alpha_i = \alpha_j = \alpha$ for all $1 \le i < j \le n$. It follows that $Tv = \alpha v$ for every $v \in V$, i.e. $T = \alpha I$.

29. Let $\lambda_1, \lambda_2, \dots, \lambda_m$ be all the distinct eigenvalues of $T$ and $v_1,\dots,v_m$ corresponding eigenvectors; by Theorem 7.1.3, $v_1,\dots,v_m$ is linearly independent. If $\lambda_i \ne 0$ for all $i = 1,\dots,m$ then $Tv_i = \lambda_i v_i \in \operatorname{range} T$, so $v_i \in \operatorname{range} T$ for all $i$, which gives $m \le \dim \operatorname{range} T = k$. If some eigenvalue is $0$, say $\lambda_1 = 0$, then $\lambda_i \ne 0$ for all $i \ge 2$, so $v_i \in \operatorname{range} T$ for $i \ge 2$ and we obtain $m - 1 \le k$ in this case. In conclusion, $m \le k + 1$.


30. Let $x_1, x_2, x_3$ be eigenvectors corresponding to $-4, 5, \sqrt7$; then $x_1, x_2, x_3$ is linearly independent, hence a basis of $\mathbf{R}^3$. Write $x = \alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3$; then $Tx - 9x = -13\alpha_1 x_1 - 4\alpha_2 x_2 + (\sqrt7 - 9)\alpha_3 x_3$. Writing $(-4, 5, \sqrt7)$ as a linear combination of $x_1, x_2, x_3$, we can then solve for the $\alpha_i$ so that $Tx - 9x = (-4, 5, \sqrt7)$. Thus such an $x \in \mathbf{R}^3$ exists.

31. If $v_1,\dots,v_m$ is linearly independent then, since $V$ is finite-dimensional, we can construct $T \in \mathcal{L}(V)$ with $Tv_i = \lambda_i v_i$ for $i = 1,\dots,m$. Hence $T$ has eigenvectors $v_1,\dots,v_m$ corresponding to $\lambda_1,\dots,\lambda_m$.

32. The hint says it all.

33. Because $(T/\operatorname{range} T)(v + \operatorname{range} T) = Tv + \operatorname{range} T = 0 + \operatorname{range} T$, since $Tv \in \operatorname{range} T$ for every $v \in V$.

34. Suppose $T/(\operatorname{null} T)$ is injective but $(\operatorname{null} T) \cap (\operatorname{range} T) \ne \{0\}$, i.e. there exists $v \notin \operatorname{null} T$ with $Tv \in \operatorname{null} T$, i.e. $0 \ne Tv \in (\operatorname{null} T) \cap (\operatorname{range} T)$. Hence $(T/\operatorname{null} T)(v + \operatorname{null} T) = \operatorname{null} T$. Since $v \notin \operatorname{null} T$, we find $T/\operatorname{null} T$ is not injective, a contradiction. Thus we must have $(\operatorname{null} T) \cap (\operatorname{range} T) = \{0\}$.

Conversely, suppose $(\operatorname{null} T) \cap (\operatorname{range} T) = \{0\}$. If $T/(\operatorname{null} T)$ were not injective, there would exist $v \notin \operatorname{null} T$ with $Tv \in \operatorname{null} T$. Then $Tv \ne 0$ and $Tv \in (\operatorname{null} T) \cap (\operatorname{range} T)$, a contradiction. Thus $T/\operatorname{null} T$ is injective.

35. If $\lambda$ is an eigenvalue of $T/U$ then there exists $v \in V$, $v \notin U$, with $(T/U)(v + U) = Tv + U = \lambda v + U$. This gives $Tv - \lambda v \in U$.

Consider $(T - \lambda I)|_U \in \mathcal{L}(U)$. If $(T - \lambda I)|_U$ is not injective, there exists $u \in U$, $u \ne 0$, with $(T - \lambda I)u = 0$, i.e. $Tu = \lambda u$, so $\lambda$ is an eigenvalue of $T$. If $(T - \lambda I)|_U$ is injective then it is invertible, by Theorem 5.6.4 (3D), as $V$ is finite-dimensional. Hence for $Tv - \lambda v = u \in U$ there exists $w \in U$ with $(T - \lambda I)w = u = Tv - \lambda v$, which gives $T(v - w) = \lambda(v - w)$. Since $v - w \ne 0$ (as $v \notin U$), we again find $\lambda$ is an eigenvalue of $T$.

In conclusion, each eigenvalue of $T/U$ is an eigenvalue of $T$.

36. Pick $V = \mathcal{P}(\mathbf{R})$ and $\lambda \in \mathbf{R}$, $\lambda \ne 0$. Let $T \in \mathcal{L}(V)$ with $T(1) = 0$, $T(x) = x^3 + \lambda x^2$, $T(x^2) = x^4 + \lambda x$, and $T(x^k) = x^{7+k}$ for all $k \ge 3$. Pick the subspace $U = \operatorname{span}(x^3, x^4, \dots)$ of $V$.

With $v = x + x^2$ we have $Tv - \lambda v = x^3 + x^4 \in U$, so $Tv + U = \lambda v + U$, i.e. $\lambda$ is an eigenvalue of $T/U$.

However, $Tv \ne \lambda v$ for all $v \ne 0$. Indeed, if $\deg v \ge 3$ then $\deg Tv \ge 7 + \deg v$, which gives $Tv \ne \lambda v$. If $\deg v \le 2$, write $v = a_0 + a_1 x + a_2 x^2$; then $Tv = a_1(x^3 + \lambda x^2) + a_2(x^4 + \lambda x)$ while $\lambda v = \lambda(a_0 + a_1 x + a_2 x^2)$. Comparing coefficients, $Tv = \lambda v$ forces $a_1 = a_2 = 0$, and then $0 = Tv = \lambda a_0$ with $v = a_0 \ne 0$, a contradiction since $\lambda \ne 0$.

Thus, in this example, $\lambda$ is an eigenvalue of $T/U$ but not of $T$.

7.3. 5B: Eigenvectors and Upper-Triangular Matrices


Theorem 7.3.1 (5.21) Every operator on a finite-dimensional, nonzero, complex vector space has an eigenvalue.

Theorem 7.3.2 (5.27) Suppose $V$ is a finite-dimensional complex vector space and $T \in \mathcal{L}(V)$. Then $T$ has an upper-triangular matrix with respect to some basis of $V$.

Theorem 7.3.3 (5.30) Suppose $T \in \mathcal{L}(V)$ has an upper-triangular matrix with respect to some basis of $V$. Then $T$ is invertible if and only if all the entries on the diagonal of that upper-triangular matrix are nonzero.

In general, $T - \lambda I$ is invertible iff all entries on the diagonal of that upper-triangular matrix are different from $\lambda$.

Theorem 7.3.4 (5.32) Suppose $T \in \mathcal{L}(V)$ has an upper-triangular matrix with respect to some basis of $V$. Then the eigenvalues of $T$ are precisely the entries on the diagonal of that upper-triangular matrix.
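A quick numeric illustration of Theorem 7.3.4 on a concrete upper-triangular matrix (my own example): for a triangular matrix, $\det(A - \lambda I)$ is the product of the diagonal entries of $A - \lambda I$, so it vanishes exactly when $\lambda$ is a diagonal entry.

```python
A = [[2.0, 7.0, 1.0],
     [0.0, 5.0, 4.0],
     [0.0, 0.0, 9.0]]

def char_poly_at(A, lam):
    # Determinant of a triangular matrix = product of its diagonal entries.
    p = 1.0
    for i in range(len(A)):
        p *= A[i][i] - lam
    return p

for lam in (2.0, 5.0, 9.0):
    assert char_poly_at(A, lam) == 0.0   # diagonal entries are eigenvalues
assert char_poly_at(A, 3.0) != 0.0       # 3 is not on the diagonal
```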

The following theorems are not from the book; I found them in a blog post by tastymath75025 (AoPS).

Theorem 7.3.5 Every operator $T$ on a finite-dimensional real vector space $V$ has an invariant subspace $U$ of dimension 1 or 2.

Theorem 7.3.6 Every operator $T$ on an odd-dimensional real vector space has an eigenvalue.

From the above it follows:

Proposition 7.3.7 If $V$ is a real vector space of even dimension and $T \in \mathcal{L}(V)$ has no eigenvalue, then every invariant subspace of $V$ under $T$ has even dimension.

7.4. Exercises 5B

1. (a) We have $(I - T)(I + T + T^2 + \dots + T^{n-1}) = I - T^n = I$ and $(I + T + \dots + T^{n-1})(I - T) = I$, so $I - T$ is invertible and $(I - T)^{-1} = I + T + T^2 + \dots + T^{n-1}$.

(b) It looks like the factorization $1 - x^n = (1 - x)(1 + x + \dots + x^{n-1})$.
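Part (a) can be checked numerically with a concrete nilpotent operator (my own example): a strictly upper-triangular $3 \times 3$ matrix $T$ satisfies $T^3 = 0$, so $(I - T)^{-1}$ should equal $I + T + T^2$.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T = [[0, 2, 3], [0, 0, 4], [0, 0, 0]]            # strictly upper triangular: T^3 = 0
ImT = [[i - t for i, t in zip(ri, rt)] for ri, rt in zip(I, T)]
inv = matadd(matadd(I, T), matmul(T, T))         # I + T + T^2
assert matmul(ImT, inv) == I                     # (I - T)(I + T + T^2) = I
```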

2. If $\lambda$ is an eigenvalue of $T$, let $v$ be a corresponding eigenvector. We have $\big((x-2)(x-3)(x-4)\big)(T)\,v = 0$, so $(\lambda - 2)(\lambda - 3)(\lambda - 4) = 0$. Thus $\lambda = 2$, $\lambda = 3$, or $\lambda = 4$.

3. We have $T^2 - I = (T + I)(T - I)$, so for any $v \in V$, $(T + I)\big((T - I)v\big) = 0$. If there existed $v$ with $(T - I)v = w \ne 0$, then $(T + I)w = 0$, so $-1$ would be an eigenvalue of $T$, a contradiction. Thus $(T - I)v = 0$ for all $v \in V$, which gives $T = I$.


4. We have $0 = P^2 - P = P(P - I)$, so for every $v \in V$, $P(P - I)v = 0$, which gives $(P - I)v \in \operatorname{null} P$. Hence any $v \in V$ can be written as $v = Pv - (P - I)v$, where $Pv \in \operatorname{range} P$ and $(P - I)v \in \operatorname{null} P$. We also note that $(\operatorname{null} P) \cap (\operatorname{range} P) = \{0\}$: otherwise there exists $u \notin \operatorname{null} P$ with $Pu \in \operatorname{null} P$, which leads to $P^2 u = 0 \ne Pu$, a contradiction with $P^2 = P$. Thus $V = \operatorname{null} P \oplus \operatorname{range} P$.

5. Note that $(STS^{-1})^k = (STS^{-1})(STS^{-1})\cdots(STS^{-1}) = ST^kS^{-1}$, so if $p = a_0 + a_1 x + a_2 x^2 + \dots + a_m x^m$ then

$$\begin{aligned}
p(STS^{-1}) &= a_0 I + a_1 STS^{-1} + \dots + a_m (STS^{-1})^m \\
&= a_0 I + a_1 STS^{-1} + \dots + a_m ST^m S^{-1} \\
&= S(a_0 I + a_1 T + \dots + a_m T^m)S^{-1} \\
&= S\,p(T)\,S^{-1}.
\end{aligned}$$

6. Since $U$ is invariant under $T$, for any $u \in U$ we have $T^n u \in U$ for all $n \ge 0$. It follows that $p(T)u \in U$ for every $u \in U$, i.e. $U$ is invariant under $p(T)$ for every $p \in \mathcal{P}(\mathbf{F})$.

7. Since $9$ is an eigenvalue of $T^2$, there exists $v \in V$, $v \ne 0$, with $\big((z^2 - 9)(T)\big)v = (T^2 - 9I)v = 0$, i.e. $\big(((z - 3)(z + 3))(T)\big)v = (T - 3I)(T + 3I)v = 0$. If $(T + 3I)v = 0$ then $-3$ is an eigenvalue of $T$. If $(T + 3I)v = u \ne 0$ then $(T - 3I)u = 0$, so $3$ is an eigenvalue of $T$.

8. Take $T$ to be rotation by $45^\circ$ in the plane $\mathbf{R}^2$; four applications rotate by $180^\circ$, so $T^4 = -I$. Explicitly, $T(x, y) = (x\cos 45^\circ - y\sin 45^\circ,\ x\sin 45^\circ + y\cos 45^\circ)$.

9. If $\lambda$ is a zero of $p$ then $p = (z - \lambda)q(z)$. Hence $0 = p(T)v = (T - \lambda I)q(T)v$. Since $\deg q < \deg p$ and $p$ has smallest degree with $p(T)v = 0$, we get $q(T)v = u \ne 0$, and then $(T - \lambda I)u = 0$, i.e. $\lambda$ is an eigenvalue of $T$.

10. Let $p = a_0 + a_1 z + \dots + a_m z^m$. From $Tv = \lambda v$ we have

$$\begin{aligned}
p(T)v &= (a_0 I + a_1 T + \dots + a_m T^m)v \\
&= a_0 v + a_1 Tv + \dots + a_m T^m v \\
&= a_0 v + a_1 \lambda v + \dots + a_m \lambda^m v \\
&= p(\lambda)v.
\end{aligned}$$
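A small numeric instance of $p(T)v = p(\lambda)v$ (my own example): $T = \begin{pmatrix}3 & 1\\ 0 & 2\end{pmatrix}$ has eigenvector $v = (1, 0)$ for $\lambda = 3$, and with $p(z) = z^2 + 4z + 1$ we should get $p(T)v = p(3)v = 22v$.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

T = [[3, 1], [0, 2]]
v = [1, 0]
lam = 3
Tv = matvec(T, v)
T2v = matvec(T, Tv)
pTv = [t2 + 4 * t1 + t0 for t2, t1, t0 in zip(T2v, Tv, v)]   # (T^2 + 4T + I)v
p_lam = lam**2 + 4 * lam + 1                                 # p(3) = 22
assert pTv == [p_lam * x for x in v]                         # p(T)v = p(lam)v
```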

11. If $\alpha = p(\lambda)$ for some eigenvalue $\lambda$ of $T$ then, by Exercise 10 (5B), $\alpha$ is an eigenvalue of $p(T)$. Conversely, if $\alpha$ is an eigenvalue of $p(T)$ then there exists $v \in V$, $v \ne 0$, with $p(T)v = \alpha v$, i.e. $(p(T) - \alpha I)v = 0$. Since $\mathbf{F} = \mathbf{C}$ we can factor $p - \alpha = c(x - \lambda_1)\cdots(x - \lambda_m)$, which gives $0 = (p(T) - \alpha I)v = c(T - \lambda_1 I)\cdots(T - \lambda_m I)v$. Hence there exists $1 \le j \le m$ such that $\lambda_j$ is an eigenvalue of $T$. Since $\lambda_j$ is a root of $p - \alpha$, we get $\alpha = p(\lambda_j)$.

(See MSE: the problem remains true if $T$ is merely triangularizable on a finite-dimensional vector space over any field $\mathbf{F}$.)

12. Back to Exercise 8 (5B): there exists $T \in \mathcal{L}(\mathbf{R}^2)$ with $T^4 = -I$, i.e. $-1$ is an eigenvalue of $(x^4)(T)$. However, $-1 \ne p(\lambda) = \lambda^4$ for every $\lambda \in \mathbf{R}$.

13. Suppose a subspace $U \ne \{0\}$ invariant under $T$ is finite-dimensional. Then $T|_U \in \mathcal{L}(U)$, so by Theorem 7.3.1 there exists an eigenvalue $\lambda$ of $T|_U$, which is then an eigenvalue of $T$, a contradiction. Thus $U$ is either $\{0\}$ or infinite-dimensional.


14. Pick $T$ with $\mathcal{M}(T) = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$.

15. Pick $T$ with $\mathcal{M}(T) = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$.

16. If $\dim V = n$, consider the linear map $S : \mathcal{P}_n(\mathbf{C}) \to V$ given by $Sp = (p(T))v$. Since $\dim \mathcal{P}_n(\mathbf{C}) = n + 1 > n = \dim V$, by Theorem 5.3.6, $S$ is not injective. Hence there exists $p \in \mathcal{P}_n(\mathbf{C})$, $p \ne 0$, with $Sp = (p(T))v = 0$, which leads to the existence of an eigenvalue of $T$.

17. If $\dim V = n$, consider the linear map $S : \mathcal{P}_{n^2}(\mathbf{C}) \to \mathcal{L}(V)$ given by $Sp = p(T)$. Since $\dim \mathcal{P}_{n^2}(\mathbf{C}) = n^2 + 1 > n^2 = \dim \mathcal{L}(V)$, by Theorem 5.3.6, $S$ is not injective, i.e. there exists $p \in \mathcal{P}_{n^2}(\mathbf{C})$, $p \ne 0$, with $Sp = p(T) = 0$. Pick any $v \in V$, $v \ne 0$; then $p(T)v = 0$, which leads to the existence of an eigenvalue of $T$.

18. If $\lambda$ is not an eigenvalue of $T$ then $T - \lambda I$ is invertible, so $\dim \operatorname{range}(T - \lambda I) = \dim V$. If $\lambda$ is an eigenvalue of $T$ then $T - \lambda I$ is not invertible, so $\dim \operatorname{range}(T - \lambda I) \le \dim V - 1$. Since $V$ is a finite-dimensional complex vector space, $T$ has an eigenvalue but not every $\lambda \in \mathbf{C}$ is one, so the integer-valued function $f$ takes both the value $\dim V$ and a smaller value. Thus $f(\lambda)$ is not a continuous function.

19. If $\mathbf{F} = \mathbf{C}$ then there exists an eigenvalue $\lambda$ of $T$ with eigenvector $v \ne 0$. Hence for any $p \in \mathcal{P}(\mathbf{C})$, $p(T)v = p(\lambda)v$ by Exercise 10 (5B). On the other hand, since $\dim V > 1$ we can choose $S \in \mathcal{L}(V)$ with $Sv \notin \operatorname{span}(v)$. Hence $S \notin \{p(T) : p \in \mathcal{P}(\mathbf{C})\}$, so $\{p(T) : p \in \mathcal{P}(\mathbf{C})\} \ne \mathcal{L}(V)$.

If $\mathbf{F} = \mathbf{R}$ and $T$ has an eigenvalue, the argument is the same. If $T$ has no eigenvalue, i.e. no invariant subspace of dimension 1, then by Theorem 7.3.5, $T$ has an invariant subspace $U$ of dimension 2. Hence $p(T)v \in U$ for any $v \in U$ and any $p \in \mathcal{P}(\mathbf{R})$. If $\dim V > 2$ then we can choose $S \in \mathcal{L}(V)$ with $Su \notin U$ for some $u \in U$, which gives $S \notin \{p(T) : p \in \mathcal{P}(\mathbf{R})\}$. Thus $\{p(T) : p \in \mathcal{P}(\mathbf{R})\} \ne \mathcal{L}(V)$.

If $\dim V = 2$ then $U = V$. By the proof of Theorem 7.3.5 and the fact that $T$ has no eigenvalue, we can choose $v \ne 0$ with $V = U = \operatorname{span}(v, Tv)$. Choose $S \in \mathcal{L}(V)$ with $Sv = \gamma v$ and $S(Tv) = \delta v$ for some nonzero $\gamma, \delta \in \mathbf{R}$. If $S \in \{p(T) : p \in \mathcal{P}(\mathbf{R})\}$ then $S = a_1 I + a_2 T$ (higher powers of $T$ reduce, since $v, Tv$ is a basis). Note $a_2 \ne 0$: otherwise $\delta v = S(Tv) = a_1 Tv$, a contradiction since $v, Tv$ is linearly independent. Thus from $\gamma v = Sv = (a_1 I + a_2 T)v$ we obtain $Tv = \frac{\gamma - a_1}{a_2}v$, a contradiction since $T$ has no eigenvalue. Thus $S \notin \{p(T) : p \in \mathcal{P}(\mathbf{R})\}$.

Thus, in all cases, $\{p(T) : p \in \mathcal{P}(\mathbf{F})\} \ne \mathcal{L}(V)$.

20. This is a consequence of Theorem 7.3.2. Since $V$ is a finite-dimensional complex vector space, there exists a basis $v_1,\dots,v_n$ of $V$ with respect to which $\mathcal{M}(T)$ is an upper-triangular matrix. Hence for each $1 \le i \le n$, the $i$-dimensional subspace $U_i = \operatorname{span}(v_1,\dots,v_i)$ of $V$ is invariant under $T$.


7.5. 5C: Eigenspaces and Diagonal Matrices

Theorem 7.5.1 (5.38) Suppose $V$ is finite-dimensional and $T \in \mathcal{L}(V)$. Suppose also that $\lambda_1,\dots,\lambda_m$ are distinct eigenvalues of $T$. Then $E(\lambda_1, T) + \dots + E(\lambda_m, T)$ is a direct sum. Furthermore,

$$\dim E(\lambda_1, T) + \dots + \dim E(\lambda_m, T) \le \dim V.$$

Theorem 7.5.2 (5.41) Suppose $V$ is finite-dimensional and $T \in \mathcal{L}(V)$. Let $\lambda_1,\dots,\lambda_m$ denote the distinct eigenvalues of $T$. Then the following are equivalent:

(a) $T$ is diagonalizable;

(b) $V$ has a basis consisting of eigenvectors of $T$;

(c) there exist 1-dimensional subspaces $U_1,\dots,U_n$ of $V$, each invariant under $T$, such that $V = U_1 \oplus \dots \oplus U_n$;

(d) $V = E(\lambda_1, T) \oplus \dots \oplus E(\lambda_m, T)$;

(e) $\dim V = \dim E(\lambda_1, T) + \dots + \dim E(\lambda_m, T)$.

Theorem 7.5.3 (5.44) If $T \in \mathcal{L}(V)$ has $\dim V$ distinct eigenvalues then $T$ is diagonalizable.

7.6. Exercises 5C

1. Since $T$ is diagonalizable, $V$ has a basis $v_1,\dots,v_n$ consisting of eigenvectors of $T$; let $Tv_i = \lambda_i v_i$. The list of those $v_j$ with $\lambda_j = 0$ is a basis of the null space of $T$. The vectors $Tv_j = \lambda_j v_j$ with $\lambda_j \ne 0$ span the range of $T$, so the $v_j$ with $\lambda_j \ne 0$ form a basis of the range of $T$. Thus $V = \operatorname{null} T \oplus \operatorname{range} T$.

2. The converse is not true. For example, let $v_1, v_2, v_3$ be a basis of $V$ and let $T \in \mathcal{L}(V)$ with $Tv_1 = 0$, $Tv_2 = v_3$, $Tv_3 = -v_2$; then $\operatorname{span}(v_1)$ is the null space of $T$ and $\operatorname{span}(v_2, v_3)$ is the range of $T$. Hence $V = \operatorname{null} T \oplus \operatorname{range} T$, but $T$ is not diagonalizable, since $0$ is the only eigenvalue of $T$ and $E(0, T) = \operatorname{span}(v_1)$.

3. If (a) is true then (b) is also true. Let $u_1,\dots,u_n$ be a basis of $\operatorname{null} T$, extended to a basis $u_1,\dots,u_n, v_1,\dots,v_m$ of $V$.

If (b) is true: since $v_i \in V = \operatorname{null} T + \operatorname{range} T$, we have $v_i - w_i \in \operatorname{range} T$ for some $w_i \in \operatorname{null} T = \operatorname{span}(u_1,\dots,u_n)$. Note that $v_1 - w_1, \dots, v_m - w_m$ is linearly independent (because $v_1,\dots,v_m, u_1,\dots,u_n$ is), and since $\dim \operatorname{range} T = m$, the list $v_1 - w_1, \dots, v_m - w_m$ is a basis of $\operatorname{range} T$. Hence if $v \in (\operatorname{null} T) \cap (\operatorname{range} T)$ then $v = \sum_{i=1}^n \alpha_i u_i = \sum_{j=1}^m \beta_j(v_j - w_j)$, which forces $\beta_j = 0$ for all $j$, since $v_1,\dots,v_m, u_1,\dots,u_n$ is linearly independent. Thus $v = 0$, i.e. $(\operatorname{null} T) \cap (\operatorname{range} T) = \{0\}$, which is (c).


If (c) is true then by Theorem 3.2.4 (1C), $\operatorname{null} T + \operatorname{range} T$ is a direct sum. Hence if $v_1,\dots,v_m$ is a basis of $\operatorname{null} T$ and $u_1,\dots,u_n$ is a basis of $\operatorname{range} T$, then $v_1,\dots,v_m, u_1,\dots,u_n$ is linearly independent. On the other hand, since $V$ is finite-dimensional, $\dim V = \dim \operatorname{range} T + \dim \operatorname{null} T = m + n$. Thus $v_1,\dots,v_m, u_1,\dots,u_n$ is a basis of $V$, i.e. $V = \operatorname{null} T \oplus \operatorname{range} T$, which is (a).

4. Let $V = \mathcal{P}(\mathbf{R})$ and $T \in \mathcal{L}(V)$ with $T(1) = 0$, $Tx = 0$, and $Tx^i = x^2$ for all $i \ge 2$. Then $\operatorname{null} T = \operatorname{span}(1, x)$ and $\operatorname{range} T = \operatorname{span}(x^2)$, so $\operatorname{null} T \cap \operatorname{range} T = \{0\}$ but $V \ne \operatorname{null} T + \operatorname{range} T$.

5. If $\lambda$ is not an eigenvalue of $T$ then $T - \lambda I$ is invertible, i.e. $\operatorname{range}(T - \lambda I) = V$, for any operator $T \in \mathcal{L}(V)$. Thus the problem is equivalent to proving that $T$ is diagonalizable iff $V = \operatorname{null}(T - \lambda I) \oplus \operatorname{range}(T - \lambda I)$ for every eigenvalue $\lambda$ of $T$.

Claim 1. If $T$ is diagonalizable then $V = \operatorname{null}(T - \lambda I) \oplus \operatorname{range}(T - \lambda I)$ for every eigenvalue $\lambda$ of $T$.

Proof 1. Suppose $T$ is diagonalizable and let $\lambda_1,\dots,\lambda_m$ be the distinct eigenvalues of $T$. For $1 \le i \le m$, let $U = \bigoplus_{j \ne i} E(\lambda_j, T)$ and consider $T - \lambda_i I$ restricted to $U$. For any $u \in U$, $u = \sum_{j \ne i} u_j$ with $u_j \in E(\lambda_j, T)$,

$$(T - \lambda_i I)u = \sum_{j \ne i} \lambda_j u_j - \lambda_i \sum_{j \ne i} u_j = \sum_{j \ne i} (\lambda_j - \lambda_i)u_j.$$

From this, $(T - \lambda_i I)|_U \in \mathcal{L}(U)$ and $(T - \lambda_i I)|_U$ is injective, hence invertible by Theorem 7.1.2 (5A), so $\operatorname{range}(T - \lambda_i I)|_U = U$. By Exercise 3 (5C),

$$V = \bigoplus_{i=1}^m E(\lambda_i, T) = \operatorname{null}(T - \lambda_i I) \oplus U = \operatorname{null}(T - \lambda_i I) \oplus \operatorname{range}(T - \lambda_i I)|_U.$$

It follows that $V = \operatorname{null}(T - \lambda_i I) + \operatorname{range}(T - \lambda_i I)$, so again by Exercise 3, $V = \operatorname{null}(T - \lambda_i I) \oplus \operatorname{range}(T - \lambda_i I)$. This holds for all $1 \le i \le m$.

Proof 2. Note that if $T$ is diagonalizable then $T - \lambda I$ is also diagonalizable for any $\lambda \in \mathbf{C}$. Hence, by Exercise 1 (5C), $V = \operatorname{null}(T - \lambda I) \oplus \operatorname{range}(T - \lambda I)$ for all $\lambda \in \mathbf{C}$, as desired.

Claim 2. If $V = \operatorname{null}(T - \lambda I) \oplus \operatorname{range}(T - \lambda I)$ for all $\lambda \in \mathbf{C}$ then $T$ is diagonalizable.

Proof. Since $V$ is a complex vector space, there exists an eigenvalue $\lambda_1$ of $T$. Let $U_1 = \operatorname{range}(T - \lambda_1 I)$, a subspace of $V$; then $\dim U_1 < \dim V$ since $\dim \operatorname{null}(T - \lambda_1 I) \ge 1$. If $\dim U_1 \ge 1$ then there exists an eigenvalue $\lambda_2$ of $T|_{U_1}$. Since $V = \operatorname{null}(T - \lambda_1 I) \oplus \operatorname{range}(T - \lambda_1 I)$, Exercise 3 (5C) gives $\operatorname{null}(T - \lambda_1 I) \cap \operatorname{range}(T - \lambda_1 I) = \{0\}$. Let $v_2 \in U_1$ be an eigenvector of $T|_{U_1}$ corresponding to $\lambda_2$. If $\lambda_2 = \lambda_1$ then $v_2 \in \operatorname{null}(T - \lambda_1 I)$, contradicting $\operatorname{null}(T - \lambda_1 I) \cap \operatorname{range}(T - \lambda_1 I) = \{0\}$. Thus $\lambda_1 \ne \lambda_2$. It follows that $E(\lambda_2, T) = E(\lambda_2, T|_{U_1})$.


Let $U_2 = \operatorname{range}(T - \lambda_2 I)|_{U_1}$, a subspace of $U_1$. By the previous argument, $\operatorname{null}(T - \lambda_2 I) \cap \operatorname{range}(T - \lambda_2 I) = \{0\}$, so $\operatorname{null}(T - \lambda_2 I)|_{U_1} \cap \operatorname{range}(T - \lambda_2 I)|_{U_1} = \{0\}$, and hence $U_1 = \operatorname{null}(T - \lambda_2 I)|_{U_1} \oplus \operatorname{range}(T - \lambda_2 I)|_{U_1}$ by Exercise 3 (5C). Combining all of these, we obtain

$$\begin{aligned}
V &= \operatorname{null}(T - \lambda_1 I) \oplus \operatorname{range}(T - \lambda_1 I) \\
&= E(\lambda_1, T) \oplus U_1 \\
&= E(\lambda_1, T) \oplus \operatorname{null}(T - \lambda_2 I)|_{U_1} \oplus \operatorname{range}(T - \lambda_2 I)|_{U_1} \\
&= E(\lambda_1, T) \oplus E(\lambda_2, T|_{U_1}) \oplus U_2 \\
&= E(\lambda_1, T) \oplus E(\lambda_2, T) \oplus U_2.
\end{aligned}$$

We do the same for $U_2$, and this continues as long as $\dim U_i \ge 1$ (there always exists an eigenvalue of an operator on a nonzero complex vector space, by Theorem 7.3.1). At the end we obtain $V = \bigoplus_{i=1}^m E(\lambda_i, T)$ for eigenvalues $\lambda_i$ of $T$. By Theorem 7.5.2 (5C), it follows that $T$ is diagonalizable.

6. Since $T$ has $\dim V$ distinct eigenvalues, $T$ is diagonalizable, so by Theorem 7.5.2 (5C), $V$ has a basis $v_1,\dots,v_n$ of eigenvectors of $T$ corresponding to eigenvalues $\lambda_1,\dots,\lambda_n$. By hypothesis, $v_1,\dots,v_n$ are also eigenvectors of $S$, say $Sv_i = \mu_i v_i$. Hence $STv_i = S(\lambda_i v_i) = \lambda_i \mu_i v_i = T(\mu_i v_i) = TSv_i$ for all $1 \le i \le n$. It follows that $ST = TS$, since $v_1,\dots,v_n$ is a basis of $V$.

7. $T$ has a diagonal matrix $A$ with respect to some basis $v_1,\dots,v_n$ of $V$. Let $\lambda_1,\dots,\lambda_m$ be the distinct eigenvalues of $T$, and let $S_k$ be the number of basis vectors $v_i$ with $v_i \in E(\lambda_k, T)$, i.e. the number of times $\lambda_k$ appears on the diagonal of $A$. Since $v_1,\dots,v_n$ is linearly independent, $S_k \le \dim E(\lambda_k, T)$. Hence

$$n = \sum_{k=1}^m S_k \le \sum_{k=1}^m \dim E(\lambda_k, T) = n.$$

Equality forces $S_k = \dim E(\lambda_k, T)$, i.e. $\lambda_k$ appears on the diagonal of $A$ precisely $\dim E(\lambda_k, T)$ times.

8. If both $T - 2I$ and $T - 6I$ were not invertible then, by Theorem 7.1.2 (5A), $2$ and $6$ would be eigenvalues of $T$, i.e. $\dim E(2, T) \ge 1$ and $\dim E(6, T) \ge 1$. However,

$$\dim E(2, T) + \dim E(6, T) + \dim E(8, T) > 5 = \dim \mathbf{F}^5,$$

contradicting Theorem 7.5.1 (5C). Thus $T - 2I$ or $T - 6I$ (or both) is invertible.

9. By Exercise 21 (5A), if $\lambda$ is not an eigenvalue of $T$ then $\frac{1}{\lambda}$ is not an eigenvalue of $T^{-1}$; hence $E(\lambda, T) = E(\frac{1}{\lambda}, T^{-1}) = \{0\}$.

If $\lambda$ is an eigenvalue of $T$ then, by Exercise 21 (5A), $v \in E(\lambda, T)$ implies $v \in E(\frac{1}{\lambda}, T^{-1})$, and similarly in the other direction, so $E(\lambda, T) = E(\frac{1}{\lambda}, T^{-1})$.


10. We have $\operatorname{null} T = \operatorname{null}(T - 0I) = E(0, T)$. Hence, by Theorem 7.5.1,

$$\sum_{i=1}^m \dim E(\lambda_i, T) + \dim E(0, T) \le \dim V = \dim \operatorname{range} T + \dim \operatorname{null} T.$$

It follows that $\sum_{i=1}^m \dim E(\lambda_i, T) \le \dim \operatorname{range} T$.

11. Done.

12. Let $v_1, v_2, v_3$ be eigenvectors of $R$ corresponding to $2, 6, 7$; then $v_1, v_2, v_3$ is a basis of $\mathbf{F}^3$. Similarly, let $w_1, w_2, w_3$ be eigenvectors of $T$ corresponding to $2, 6, 7$, also a basis of $\mathbf{F}^3$. Let $S \in \mathcal{L}(\mathbf{F}^3)$ with $Sv_i = w_i$; then $S$ is injective, hence invertible. We have $S^{-1}TSv_i = S^{-1}Tw_i = S^{-1}(\alpha_i w_i) = \alpha_i v_i = Rv_i$, where $\alpha_1, \alpha_2, \alpha_3 = 2, 6, 7$. Thus $S^{-1}TS = R$.

13. Let $v_1,\dots,v_4$ be a basis of $\mathbf{F}^4$ and define $T, R \in \mathcal{L}(\mathbf{F}^4)$ by $Tv_1 = Rv_1 = 2v_1$, $Tv_2 = Rv_2 = 6v_2$, $Tv_3 = Rv_3 = 7v_3$, and $Tv_4 = v_1 + 7v_4$, $Rv_4 = v_2 + 6v_4$. It is not hard to check that $T, R$ have only $2, 6, 7$ as eigenvalues. We show that any $S \in \mathcal{L}(\mathbf{F}^4)$ with $SR = TS$ is not invertible, from which it follows that there is no invertible operator $S$ with $R = S^{-1}TS$.

Indeed, write $Sv_1 = \sum_{i=1}^4 \alpha_i v_i$. Then

$$TSv_1 = T\left(\sum_{i=1}^4 \alpha_i v_i\right) = 2\alpha_1 v_1 + 6\alpha_2 v_2 + 7\alpha_3 v_3 + \alpha_4(v_1 + 7v_4),$$

and

$$SRv_1 = 2Sv_1 = 2\sum_{i=1}^4 \alpha_i v_i.$$

Comparing coefficients, $\alpha_4 = 0$, which then gives $\alpha_2 = \alpha_3 = 0$. Thus $Sv_1 = \alpha_1 v_1$. Similarly, $Sv_2 = \alpha_2' v_2$ for some scalar $\alpha_2'$ (note we cannot conclude $Sv_3 = \alpha_3' v_3$, because of the way $Tv_4 = v_1 + 7v_4$ was chosen).

Now consider $TSv_4$ and $SRv_4$. If $Sv_4 = \sum_{i=1}^4 \beta_i v_i$ then

$$TSv_4 = T\left(\sum_{i=1}^4 \beta_i v_i\right) = 2\beta_1 v_1 + 6\beta_2 v_2 + 7\beta_3 v_3 + \beta_4(v_1 + 7v_4),$$

and

$$SRv_4 = S(v_2 + 6v_4) = \alpha_2' v_2 + 6\sum_{i=1}^4 \beta_i v_i.$$

If $TSv_4 = SRv_4$ then we must have $\beta_4 = \beta_3 = 0$. It follows that $Sv_4 = \beta_1 v_1 + \beta_2 v_2$, i.e. $Sv_4 \in \operatorname{span}(Sv_1, Sv_2)$, since $Sv_1 = \alpha_1 v_1$ and $Sv_2 = \alpha_2' v_2$. Hence $S$ is not invertible.

In conclusion, for such $T, R$ there does not exist an invertible operator $S$ with $R = S^{-1}TS$.


14. If $T$ does not have a diagonal matrix with respect to any basis of $\mathbf{C}^3$, that means $T$ has only the eigenvalues $6, 7$ and $\dim E(6, T) = \dim E(7, T) = 1$. This makes the construction much easier: for an arbitrary basis $v_1, v_2, v_3$ of $\mathbf{C}^3$, choose $T \in \mathcal{L}(\mathbf{C}^3)$ with $Tv_1 = 6v_1$, $Tv_2 = 7v_2$, $Tv_3 = v_1 + v_2 + 6v_3$.

First, $T$ has only the two eigenvalues $6$ and $7$. If there were $v = a_1 v_1 + a_2 v_2 + a_3 v_3 \ne 0$ with

$$Tv = \lambda(a_1 v_1 + a_2 v_2 + a_3 v_3) = 6a_1 v_1 + 7a_2 v_2 + a_3(v_1 + v_2 + 6v_3),$$

then $\lambda a_3 = 6a_3$, $\lambda a_1 = 6a_1 + a_3$, $\lambda a_2 = 7a_2 + a_3$. Hence if $\lambda \ne 6, 7$ then $a_3 = a_2 = a_1 = 0$, a contradiction.

Next, $\dim E(6, T) = \dim E(7, T) = 1$. Indeed, if $v = a_1 v_1 + a_2 v_2 + a_3 v_3$ satisfies $Tv = 6v$, then comparing coefficients gives $a_3 = 0$ and $a_2 = 0$. Thus $E(6, T) = \operatorname{span}(v_1)$. Similarly, $E(7, T) = \operatorname{span}(v_2)$.

These two claims, combined with Theorem 7.5.2 (5C), show that $T$ is not diagonalizable.

15. As claimed in Exercise 14 (5C), since $T$ is not diagonalizable, $T$ has only $6$ and $7$ as eigenvalues. Hence $8$ is not an eigenvalue of $T$, so $T - 8I$ is invertible by Theorem 7.1.2 (5A), which means there exists $(x, y, z) \in \mathbf{C}^3$ with $(T - 8I)(x, y, z) = (17, \sqrt5, 2\pi)$.

16. (a) Induction on $n$: if $T^{n-1}(0, 1) = (F_{n-1}, F_n)$ then

$$T^n(0, 1) = T(F_{n-1}, F_n) = (F_n, F_{n-1} + F_n) = (F_n, F_{n+1}).$$

(b) If there is $(x, y) \ne (0, 0)$ with $T(x, y) = \lambda(x, y) = (y, x + y)$, then $\lambda x = y$ and $\lambda y = x + y$. Substituting gives $x(\lambda^2 - \lambda - 1) = 0$, and $x \ne 0$ (else $y = 0$), so the eigenvalues of $T$ are $\lambda = \frac{1 \pm \sqrt5}{2}$.

(c) We have $\left(1, \frac{1 - \sqrt5}{2}\right) \in E\!\left(\frac{1 - \sqrt5}{2}, T\right)$ and $\left(1, \frac{1 + \sqrt5}{2}\right) \in E\!\left(\frac{1 + \sqrt5}{2}, T\right)$.

(d) We have $(0, 1) = \frac{1}{\sqrt5}\left(\left(1, \frac{1 + \sqrt5}{2}\right) - \left(1, \frac{1 - \sqrt5}{2}\right)\right)$. Hence

$$\begin{aligned}
T^n(0, 1) &= \frac{1}{\sqrt5}\left(T^n\!\left(1, \tfrac{1 + \sqrt5}{2}\right) - T^n\!\left(1, \tfrac{1 - \sqrt5}{2}\right)\right) \\
&= \frac{1}{\sqrt5}\left[\left(\tfrac{1 + \sqrt5}{2}\right)^n\!\left(1, \tfrac{1 + \sqrt5}{2}\right) - \left(\tfrac{1 - \sqrt5}{2}\right)^n\!\left(1, \tfrac{1 - \sqrt5}{2}\right)\right].
\end{aligned}$$

It follows that $F_n = \frac{1}{\sqrt5}\left[\left(\frac{1 + \sqrt5}{2}\right)^n - \left(\frac{1 - \sqrt5}{2}\right)^n\right]$.

(e) It is equivalent to prove $\left|\frac{1}{\sqrt5}\left(\frac{1 - \sqrt5}{2}\right)^n\right| < \frac12$, which is true since $\left(\frac{\sqrt5 - 1}{2}\right)^n < 1 < \frac{\sqrt5}{2}$.
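Parts (d) and (e) can be checked numerically (a quick sanity script of my own): the closed form reproduces the recurrence, and $F_n$ is the integer nearest $\frac{1}{\sqrt5}\left(\frac{1+\sqrt5}{2}\right)^n$.

```python
import math

sqrt5 = math.sqrt(5.0)
phi, psi = (1 + sqrt5) / 2, (1 - sqrt5) / 2

# Fibonacci numbers from the recurrence, F_0 = 0, F_1 = 1.
fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])

for n, F in enumerate(fib):
    closed = (phi**n - psi**n) / sqrt5     # part (d): closed form
    assert round(closed) == F
    assert abs(phi**n / sqrt5 - F) < 0.5   # part (e): nearest integer
```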


8. Chapter 6: Inner Product Spaces

8.1. 6A: Inner Products and Norms

Theorem 8.1.1 (6.14) Suppose $u, v \in V$, with $v \ne 0$. Set $c = \frac{\langle u, v\rangle}{\|v\|^2}$ and $w = u - \frac{\langle u, v\rangle}{\|v\|^2}v$; then $\langle w, v\rangle = 0$ and $u = cv + w$.

Theorem 8.1.2 (6.15, Cauchy-Schwarz Inequality) Suppose $u, v \in V$. Then $|\langle u, v\rangle| \le \|u\|\,\|v\|$, with equality if and only if one of $u, v$ is a scalar multiple of the other.

8.2. Exercises 6A

1. Homogeneity in the first slot fails: $\langle -(1,0), (1,0)\rangle = |-1 \cdot 1| = 1 \ne -1 = -\langle (1,0), (1,0)\rangle$.

2. For $(x_1, y_1, z_1) \in \mathbf{R}^3$ with $x_1 y_1 < 0$ we have $\langle (x_1, y_1, z_1), (x_1, y_1, z_1)\rangle = 2x_1 y_1 < 0$, contradicting the positivity condition in the definition of an inner product.

3. (INCORRECT - NEED TO CHECK) According to the definition of an inner product, $\langle v, v\rangle > 0$ for $v \ne 0$; hence any inner product on $V$ satisfying the old definition also satisfies the new definition.

Conversely, consider an inner product on $V$ satisfying the new definition: there exists $v \in V$, $v \ne 0$, with $\langle v, v\rangle > 0$. Note that with the new definition we cannot define the norm of a vector; however, we can avoid that notation by working with $\langle u, u\rangle$ rather than $\sqrt{\langle u, u\rangle}$. With this in mind, we can check that the orthogonal decomposition still works in the new definition, but only against this $v$:

For any $u \in V$, with $c = \frac{\langle u, v\rangle}{\langle v, v\rangle}$ and $w = u - \frac{\langle u, v\rangle}{\langle v, v\rangle}v$, we have $\langle w, v\rangle = 0$ and $u = cv + w$.

Hence $\langle u, u\rangle = \langle cv + w, cv + w\rangle = |c|^2\langle v, v\rangle + \langle w, w\rangle$.

4. (a) $\langle u + v, u - v\rangle = \|u\|^2 - \langle u, v\rangle + \langle v, u\rangle - \|v\|^2$. Since this is a real inner product, $\langle u, v\rangle = \langle v, u\rangle$, so the expression equals $\|u\|^2 - \|v\|^2$.

(b) This is deduced from (a): if $\|u\| = \|v\|$ then $\langle u + v, u - v\rangle = 0$.

(c) A rhombus formed by two vectors $u, v \in \mathbf{R}^2$ of the same norm has the two diagonals $u - v$ and $u + v$. By (b), $u + v$ is orthogonal to $u - v$, so the two diagonals of the rhombus are perpendicular to each other.

5. If there exists $v \in V$, $v \neq 0$, with $Tv = \sqrt{2}\,v$, then $\|Tv\|^2 = \langle \sqrt{2}\,v, \sqrt{2}\,v\rangle = 2\|v\|^2 > \|v\|^2$, a contradiction. Thus there does not exist $v \in V$, $v \neq 0$, with $Tv = \sqrt{2}\,v$, i.e. $\sqrt{2}$ is not an eigenvalue of $T$, so $T - \sqrt{2}\,I$ is invertible by Theorem 7.1.2 (5.6), under the assumption that $V$ is finite-dimensional.


6. If $\langle u, v\rangle = 0$ then by the Pythagorean theorem $\|u+av\|^2 = |a|^2\|v\|^2 + \|u\|^2 \ge \|u\|^2$ for all $a \in \mathbb{F}$.
If $\|u+av\|^2 \ge \|u\|^2$ for all $a \in \mathbb{F}$, this is equivalent to $0 \le |a|^2\|v\|^2 + 2\operatorname{Re}\left(a\langle v,u\rangle\right)$. If $\langle v,u\rangle \neq 0$ then we can choose $a$ so that $0 > |a|^2\|v\|^2 + 2\operatorname{Re}\left(a\langle v,u\rangle\right)$, a contradiction. Thus $\langle v,u\rangle = 0$.

7. Since $a, b \in \mathbb{R}$ we have $\|au+bv\|^2 = |a|^2\|u\|^2 + 2ab\langle u,v\rangle + |b|^2\|v\|^2$, and similarly $\|bu+av\|^2 = |b|^2\|u\|^2 + 2ab\langle u,v\rangle + |a|^2\|v\|^2$. Hence $\|au+bv\| = \|bu+av\|$ for all $a, b \in \mathbb{R}$ iff $|a|^2\|u\|^2 + |b|^2\|v\|^2 = |b|^2\|u\|^2 + |a|^2\|v\|^2$ for all $a, b$, iff $\left(|a|^2 - |b|^2\right)\left(\|u\|^2 - \|v\|^2\right) = 0$ for all $a, b \in \mathbb{R}$, iff $\|u\| = \|v\|$.

8. We have $|\langle u,v\rangle| = 1 = \|u\|\,\|v\|$, so by Theorem 8.1.2, $u = av$ for some $a \in \mathbb{F}$. Then $1 = \langle u,v\rangle = a\|v\|^2 = a$, so $u = v$.

9. Let $\|u\| = a$, $\|v\| = b$. By Theorem 8.1.2 (6A) we have $|\langle u,v\rangle| \le ab$, so
$(1 - |\langle u,v\rangle|)^2 \ge (1-ab)^2 \ge (1-a^2)(1-b^2).$

10. Using Theorem 8.1.1 (6A) with $c = \frac{\langle (1,2),(1,3)\rangle}{\|(1,3)\|^2} = \frac{7}{10}$, we get $u = \frac{7}{10}(1,3)$ and
$v = (1,2) - \frac{7}{10}(1,3) = \left(\frac{3}{10}, -\frac{1}{10}\right).$

11. For $x = (a^{1/2}, b^{1/2}, c^{1/2}, d^{1/2})$, $y = (a^{-1/2}, b^{-1/2}, c^{-1/2}, d^{-1/2}) \in \mathbb{R}^4$, Theorem 8.1.2 (6A) gives
$16 = |\langle x, y\rangle|^2 \le \|x\|^2\|y\|^2 = (a+b+c+d)\left(\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d}\right).$

12. Apply Theorem 8.1.2 to $(x_1, \ldots, x_n), (1, \ldots, 1) \in \mathbb{R}^n$ with respect to the Euclidean inner product.

13. Draw the triangle formed by $u$, $v$ and $u-v$. Let $\theta$ be the angle between $u$ and $v$; applying the law of cosines,
$\|u-v\|^2 = \|u\|^2 + \|v\|^2 - 2\cos\theta\,\|u\|\,\|v\|.$
On the other hand, for $u, v \in \mathbb{R}^2$,
$\|u-v\|^2 = \|u\|^2 + \|v\|^2 - 2\langle u,v\rangle.$
Thus $\langle u,v\rangle = \|u\|\,\|v\|\cos\theta$.

14. Since $-1 \le \cos\theta \le 1$, we must have $-1 \le \frac{\langle x,y\rangle}{\|x\|\,\|y\|} \le 1$, which is true by Theorem 8.1.2.


15. Consider $x = (a_1, 2^{1/2}a_2, \ldots, n^{1/2}a_n)$, $y = (b_1, 2^{-1/2}b_2, \ldots, n^{-1/2}b_n) \in \mathbb{R}^n$; Theorem 8.1.2 with respect to the Euclidean inner product gives the desired inequality.

16. $2\|v\|^2 = \|u+v\|^2 + \|u-v\|^2 - 2\|u\|^2 = 34$, so $\|v\| = \sqrt{17}$.

17. We have $\|(x,y)\| = \max\{|x|, |y|\} = \frac{1}{2}(|x+y| + |y-x|)$, so $\|(x,y)\|^2 = \frac{1}{2}\left(x^2 + y^2 + |y^2 - x^2|\right)$. If such an inner product on $\mathbb{R}^2$ existed, then for any $u, v \in \mathbb{R}^2$ we would have $\|u+v\|^2 + \|u-v\|^2 = 2(\|u\|^2 + \|v\|^2)$. Let $u = (x,y)$, $v = (a,b)$; then
$\|u+v\|^2 + \|u-v\|^2 = x^2 + y^2 + a^2 + b^2 + \frac{1}{2}\left|(x+a)^2 - (y+b)^2\right| + \frac{1}{2}\left|(x-a)^2 - (y-b)^2\right|,$
and
$2(\|u\|^2 + \|v\|^2) = x^2 + y^2 + a^2 + b^2 + |x^2 - y^2| + |a^2 - b^2|.$
So we would need
$\left|(x+a)^2 - (y+b)^2\right| + \left|(x-a)^2 - (y-b)^2\right| = 2|x^2 - y^2| + 2|a^2 - b^2|.$
However, if $x = 6$, $y = -5$, $a = 3$, $b = 2$ then LHS $= |9^2 - 3^2| + |3^2 - 7^2| = 112$ but RHS $= 2(|6^2 - 5^2| + |3^2 - 2^2|) = 32$, a contradiction. Thus no such inner product on $\mathbb{R}^2$ exists.
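The counterexample in Exercise 17 is quick to verify in code. A small sketch checking both the raw parallelogram equality and the equivalent absolute-value condition at the stated point (helper names are mine):

```python
def max_norm_sq(x, y):
    """Squared max-norm on R^2."""
    return max(abs(x), abs(y)) ** 2

x, y, a, b = 6, -5, 3, 2  # the counterexample from the text

# Parallelogram equality ||u+v||^2 + ||u-v||^2 = 2(||u||^2 + ||v||^2) fails:
lhs = max_norm_sq(x + a, y + b) + max_norm_sq(x - a, y - b)
rhs = 2 * (max_norm_sq(x, y) + max_norm_sq(a, b))
assert lhs == 130 and rhs == 90

# The equivalent condition on absolute values derived above also fails:
lhs_abs = abs((x + a) ** 2 - (y + b) ** 2) + abs((x - a) ** 2 - (y - b) ** 2)
rhs_abs = 2 * abs(x ** 2 - y ** 2) + 2 * abs(a ** 2 - b ** 2)
assert lhs_abs == 112 and rhs_abs == 32
```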

18. Exercise 19 (the next exercise) makes this easier: it gives
$\langle (1,1), (1,-1)\rangle = \frac{1}{4}\left(\|(2,0)\|^2 - \|(0,2)\|^2\right) = 0.$
So $(1,1)$ is orthogonal to $(1,-1)$, hence by the Pythagorean theorem
$\|(1,1) + (1,-1)\|^2 = \|(2,0)\|^2 = \|(1,1)\|^2 + \|(1,-1)\|^2.$
We have LHS $= 2^2 = 4$ and RHS $= 2\|(1,1)\|^2 = 2 \cdot 2^{2/p}$. This forces $p = 2$.
If $p = 2$ then by Exercise 19 it suffices to show that
$\langle (x,y), (a,b)\rangle = \frac{1}{4}\left[(x+a)^2 + (y+b)^2 - (x-a)^2 - (y-b)^2\right] = xa + yb$
is an inner product on $\mathbb{R}^2$. This is obviously true, as it is the Euclidean inner product.

19. We have
$\|u+v\|^2 - \|u-v\|^2 = \langle u+v, u+v\rangle - \langle u-v, u-v\rangle = \left(\|u\|^2 + \langle u,v\rangle + \langle v,u\rangle + \|v\|^2\right) - \left(\|u\|^2 - \langle u,v\rangle - \langle v,u\rangle + \|v\|^2\right) = 2\langle u,v\rangle + 2\langle v,u\rangle.$
For a real inner product $\langle u,v\rangle = \langle v,u\rangle$, which gives the desired identity.


20. From Exercise 19 we know $\|u+v\|^2 - \|u-v\|^2 = 2\langle u,v\rangle + 2\langle v,u\rangle$. Next, we have
$\|u+iv\|^2 - \|u-iv\|^2 = \langle u+iv, u+iv\rangle - \langle u-iv, u-iv\rangle = \left(\|u\|^2 + \langle u, iv\rangle + \langle iv, u\rangle + \|iv\|^2\right) - \left(\|u\|^2 + \|iv\|^2 - \langle u, iv\rangle - \langle iv, u\rangle\right) = 2\left[\langle u, iv\rangle + \langle iv, u\rangle\right] = 2i\left[\langle v,u\rangle - \langle u,v\rangle\right].$
Thus,
$\langle u,v\rangle = \frac{1}{4}\left(\|u+v\|^2 - \|u-v\|^2 + \|u+iv\|^2\, i - \|u-iv\|^2\, i\right).$
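Both polarization identities (Exercise 19 for the real case, Exercise 20 for the complex case) can be checked numerically on the standard inner product; a minimal sketch (vectors chosen arbitrarily):

```python
def inner(u, v):
    """Standard inner product on C^n (conjugate-linear in the second slot)."""
    return sum(a * b.conjugate() for a, b in zip(u, v))

def norm_sq(u):
    return inner(u, u).real

# Complex polarization identity (Exercise 20).
u = [1 + 2j, -1j, 0.5 + 0j]
v = [2 - 1j, 3 + 0j, -1 + 1j]
p = (norm_sq([a + b for a, b in zip(u, v)])
     - norm_sq([a - b for a, b in zip(u, v)])
     + norm_sq([a + 1j * b for a, b in zip(u, v)]) * 1j
     - norm_sq([a - 1j * b for a, b in zip(u, v)]) * 1j) / 4
assert abs(p - inner(u, v)) < 1e-12

# Real polarization identity (Exercise 19).
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
ur, vr = [1.0, -2.0, 0.5], [0.25, 3.0, -1.0]
s = [x + y for x, y in zip(ur, vr)]
d = [x - y for x, y in zip(ur, vr)]
pr = (dot(s, s) - dot(d, d)) / 4
assert abs(pr - dot(ur, vr)) < 1e-12
```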

21. We will only prove the case $\mathbb{F} = \mathbb{R}$; the case $\mathbb{F} = \mathbb{C}$ is similar (a proof for that case can be seen here). Define a function $\langle\cdot,\cdot\rangle : U \times U \to \mathbb{R}$ by $\langle u,v\rangle = \frac{1}{4}\left(\|u+v\|^2 - \|u-v\|^2\right)$. Then it is obvious that $\|u\| = \langle u,u\rangle^{1/2}$. It suffices to show that $\langle\cdot,\cdot\rangle$ is an inner product.
Indeed, the positivity and definiteness conditions are satisfied since $\langle u,u\rangle = \|u\|^2$, which is $0$ iff $\|u\| = 0$ iff $u = 0$. Now we check the condition of additivity in the first slot. We have
$\langle u,v\rangle + \langle w,v\rangle = \frac{1}{4}\left(\|u+v\|^2 + \|w+v\|^2 - \|u-v\|^2 - \|w-v\|^2\right).$
Since the norm satisfies the parallelogram equality,
$\|u+v\|^2 + \|w+v\|^2 = \frac{1}{2}\left(\|w+u+2v\|^2 + \|u-w\|^2\right) = \frac{1}{2}\left[\|u-w\|^2 + 2\left(\|w+u+v\|^2 + \|v\|^2\right) - \|w+u\|^2\right] = \|w+u+v\|^2 + \frac{1}{2}\|u-w\|^2 + \|v\|^2 - \frac{1}{2}\|w+u\|^2.$
Similarly, we obtain
$\|u-v\|^2 + \|w-v\|^2 = \|w+u-v\|^2 + \|v\|^2 + \frac{1}{2}\|u-w\|^2 - \frac{1}{2}\|w+u\|^2.$
Thus,
$\langle u,v\rangle + \langle w,v\rangle = \frac{1}{4}\left(\|w+u+v\|^2 - \|w+u-v\|^2\right) = \langle w+u, v\rangle.$
Next we check the condition of homogeneity in the first slot. From additivity in the first slot it follows that $\langle nu, v\rangle = n\langle u,v\rangle$ for any $n \in \mathbb{Z}$. For $\lambda = a/b \in \mathbb{Q}$ with $a, b \in \mathbb{Z}$, $b \neq 0$, and any $u, v \in U$, we have $u/b \in U$ and $b\langle u/b, v\rangle = \langle u,v\rangle$. Hence,
$\lambda\langle u,v\rangle = \frac{a}{b}\langle u,v\rangle = \left\langle \frac{a}{b}u, v\right\rangle = \langle \lambda u, v\rangle.$
So far, homogeneity in the first slot holds over $\mathbb{Q}$. Since $\|u\| + \|v\| \ge \|u+v\|$, we get $(\|u\| + \|v\|)^2 \ge \|u+v\|^2$, hence $\|u\|\,\|v\| \ge \langle u,v\rangle$. From $\|u\| + \|v\| \ge \|u+v\|$ we also deduce that the norm is continuous.

Next, consider $\lambda \in \mathbb{R}$ and a sequence $(r_n)_{n\ge 1} \to \lambda$ with $r_i \in \mathbb{Q}$. Then
$\langle r_n u, v\rangle - \langle \lambda u, v\rangle = \langle (r_n - \lambda)u, v\rangle \le \|(r_n - \lambda)u\|\,\|v\| = |r_n - \lambda|\,\|u\|\,\|v\|.$
Since $\lim_{n\to\infty} |r_n - \lambda|\,\|u\|\,\|v\| = 0$, we get $\lim_{n\to\infty}\langle r_n u, v\rangle = \langle \lambda u, v\rangle$. Since $r_n \in \mathbb{Q}$, $\lim_{n\to\infty}\langle r_n u, v\rangle = \langle u,v\rangle \lim_{n\to\infty} r_n = \lambda\langle u,v\rangle$. Thus $\langle \lambda u, v\rangle = \lambda\langle u,v\rangle$ for all $\lambda \in \mathbb{R}$.
Thus the function $\langle\cdot,\cdot\rangle$ is indeed an inner product.

22. We need to show that for any $a_1, \ldots, a_n \in \mathbb{R}$, $(a_1 + \cdots + a_n)^2 \le n\left(a_1^2 + \cdots + a_n^2\right)$. Consider $x = (a_1, \ldots, a_n)$, $y = (1, 1, \ldots, 1) \in \mathbb{R}^n$; by Theorem 8.1.2,
$|a_1 + \cdots + a_n| = |\langle x, y\rangle| \le \|x\|\,\|y\| = \sqrt{n\left(a_1^2 + \cdots + a_n^2\right)}.$

23. We have $\langle (v_1, \ldots, v_m), (v_1, \ldots, v_m)\rangle = \langle v_1, v_1\rangle + \cdots + \langle v_m, v_m\rangle$, which equals $0$ iff $\langle v_i, v_i\rangle = 0$ for all $i$, iff $v_i = 0$ for all $1 \le i \le m$. One can easily check the other conditions in the definition of an inner product.

24. We have $\langle u, u\rangle_1 = 0$ iff $\langle Su, Su\rangle = 0$ iff $Su = 0$ iff $u = 0$, since $S$ is injective. We also have
$\langle u+w, v\rangle_1 = \langle S(u+w), Sv\rangle = \langle Su, Sv\rangle + \langle Sw, Sv\rangle = \langle u,v\rangle_1 + \langle w,v\rangle_1.$
Similarly, $\langle \lambda u, v\rangle_1 = \lambda\langle u,v\rangle_1$ and $\overline{\langle u,v\rangle_1} = \overline{\langle Su, Sv\rangle} = \langle Sv, Su\rangle = \langle v,u\rangle_1$.

25. Since $S$ is not injective, there exists $v \in V$, $v \neq 0$, with $Sv = 0$. Hence $\langle v,v\rangle_1 = 0$ but $v \neq 0$, which fails the definiteness condition of an inner product.

26. (a) This uses the Euclidean inner product, so
$\langle f(t), g(t)\rangle' = \left[\langle (f_1(t), \ldots, f_n(t)), (g_1(t), \ldots, g_n(t))\rangle\right]' = \left[f_1(t)g_1(t) + \cdots + f_n(t)g_n(t)\right]' = \sum_{i=1}^n \left[f_i'(t)g_i(t) + f_i(t)g_i'(t)\right] = \sum_{i=1}^n f_i'(t)g_i(t) + \sum_{i=1}^n f_i(t)g_i'(t) = \langle f'(t), g(t)\rangle + \langle f(t), g'(t)\rangle.$
(b) We have $\langle f(t), f(t)\rangle = \|f(t)\|^2 = c^2$, so $\langle f(t), f(t)\rangle' = 0$. On the other hand, by (a), $\langle f(t), f(t)\rangle' = 2\langle f'(t), f(t)\rangle$, so we are done.
(c) (DON'T KNOW WITH CURRENT KNOWLEDGE)

27. Since $\|w-u\|^2 + \|w-v\|^2 = \frac{1}{2}\left(\|2w-u-v\|^2 + \|u-v\|^2\right)$ by the parallelogram equality, we are done.

28. If there were two distinct points $u, v \in C$ closest to $w \in V$, then $\|w-u\| = \|w-v\| \le \|w-r\|$ for all $r \in C$. Since $\frac{1}{2}(u+v) \in C$, we have
$\|w-u\|^2 \le \left\|w - \tfrac{1}{2}(u+v)\right\|^2 = \|w-u\|^2 - \tfrac{1}{4}\|u-v\|^2,$
which forces $\|u-v\|^2 = 0$, i.e. $u = v$, a contradiction. Thus there is at most one point in $C$ closest to $w$.


29. (a) $d(u,v) \ge 0$ and $d(u,v) = 0$ iff $\|u-v\| = 0$ iff $u = v$; $d(u,v) = d(v,u)$; and $d(u,v) + d(v,w) = \|u-v\| + \|v-w\| \ge \|u-w\| = d(u,w)$ by the triangle inequality for the norm defined in Exercise 21. Thus $d$ is a metric on $V$.
(b) (I couldn't solve this, so I did some research and found the proof; note that the norm is as defined in Exercise 21 (6A).) Let $v_1, \ldots, v_n$ be a basis of $V$. For all $x \in V$ with $x = \sum_{i=1}^n \alpha_i v_i$, define a norm $\|x\|_1 = \sum_{i=1}^n |\alpha_i|$ (one can verify that it is indeed a norm). Consider a Cauchy sequence $\{x_k\}_{k\ge 1}$ in the metric space $(V, d)$ with $x_k = \sum_{i=1}^n \alpha_{i,k}v_i$, i.e. for any $\varepsilon > 0$ there exists $N > 0$ such that for all $m, n > N$,
$d(x_m, x_n) = \|x_m - x_n\| = \left\|\sum_{i=1}^n (\alpha_{i,m} - \alpha_{i,n})v_i\right\| < \varepsilon.$
Since any two norms on a finite-dimensional vector space $V$ over $\mathbb{F}$ are equivalent, there exist real numbers $0 < C_1 \le C_2$ such that $C_1\|x\|_1 \le \|x\| \le C_2\|x\|_1$ for all $x \in V$. Hence, for any $j = 1, \ldots, n$,
$\left\|\sum_{i=1}^n (\alpha_{i,m} - \alpha_{i,n})v_i\right\| \ge C_1\sum_{i=1}^n |\alpha_{i,m} - \alpha_{i,n}| \ge C_1|\alpha_{j,m} - \alpha_{j,n}|.$
It follows that for any $\varepsilon > 0$ there exists $N > 0$ such that $|\alpha_{j,m} - \alpha_{j,n}| < \varepsilon$ for all $m, n > N$, i.e. $\{\alpha_{j,i}\}_{i\ge 1}$ is a Cauchy sequence in $\mathbb{F}$. Since $\mathbb{F}$ is complete (see page 2 here for a proof of the case $\mathbb{F} = \mathbb{R}$), for each $j = 1, \ldots, n$ the sequence $\{\alpha_{j,i}\}_{i\ge 1}$ converges to some $\beta_j$. Consider $a = \sum_{i=1}^n \beta_i v_i$. We show that the Cauchy sequence $\{x_k\}_{k\ge 1}$ converges to $a$, i.e. for any $\varepsilon > 0$ there exists $N > 0$ such that $d(x_n, a) = \|x_n - a\| < \varepsilon$ for all $n > N$. This is true because
$\|x_n - a\| = \left\|\sum_{i=1}^n (\alpha_{i,n} - \beta_i)v_i\right\| \le C_2\sum_{i=1}^n |\alpha_{i,n} - \beta_i|,$
and each $|\alpha_{i,n} - \beta_i|$ can be made arbitrarily small for $n$ large.
(c) It suffices to prove that any Cauchy sequence $\{x_k\}_{k\ge 1}$ in $U$ converges to a point of $U$. This holds if we choose $u_1, \ldots, u_n$ as a basis of $U$ and argue as in (b), which gives a limit $u \in \operatorname{span}(u_1, \ldots, u_n) = U$.

30. (Following the hint in the book.) Let $p = q + \left(1 - \|x\|^2\right)r$ for some polynomial $r$ on $\mathbb{R}^n$. It suffices to choose $r$ so that $p$ is a harmonic polynomial on $\mathbb{R}^n$, i.e. $\Delta\left[\left(1 - \|x\|^2\right)r\right] = \Delta(-q)$.
Let $V$ be the set of polynomials $r$ on $\mathbb{R}^n$ with $\deg r \le \deg q$. Then $V$ is a finite-dimensional vector space with basis the monomials $x_1^{m_1}x_2^{m_2}\cdots x_n^{m_n}$ where $\sum_{i=1}^n m_i \le \deg q$. Define an operator $T$ on $V$ by $Tr = \Delta\left[\left(1 - \|x\|^2\right)r\right] = \Delta\left[\left(1 - \sum_{i=1}^n x_i^2\right)r\right]$. Consider a polynomial $r$ with $Tr = 0$, i.e. $h = \left(1 - \sum_{i=1}^n x_i^2\right)r$ is a harmonic polynomial. Since $h(x) = 0$ for every $x \in \mathbb{R}^n$ with $\|x\| = 1$, and $h$ is harmonic, it follows that $h = 0$. Thus $T$ is injective, which leads to $T$ being surjective. Hence, for the polynomial $\Delta(-q)$, there exists a polynomial $r$ with $Tr = \Delta\left[\left(1 - \|x\|^2\right)r\right] = \Delta(-q)$. Letting $p = q + \left(1 - \|x\|^2\right)r$, we find $\Delta p = 0$, i.e. $p$ is a harmonic polynomial on $\mathbb{R}^n$.

31. Use Exercise 27 (6A), with $a = w-u$, $b = w-v$, $c = u-v$ and $d = w - \frac{1}{2}(u+v)$.


8.3. 6B: Orthonormal Bases

Theorem 8.3.1 (6.30) Suppose $e_1, \ldots, e_n$ is an orthonormal basis of $V$ and $v \in V$. Then $v = \langle v, e_1\rangle e_1 + \cdots + \langle v, e_n\rangle e_n$ and $\|v\|^2 = |\langle v, e_1\rangle|^2 + \cdots + |\langle v, e_n\rangle|^2$.

Theorem 8.3.2 (6.31, Gram-Schmidt Procedure) Suppose $v_1, \ldots, v_m$ is a linearly independent list of vectors in $V$. Let $e_1 = v_1/\|v_1\|$. For $j = 2, \ldots, m$, define $e_j$ inductively by
$e_j = \frac{v_j - \langle v_j, e_1\rangle e_1 - \cdots - \langle v_j, e_{j-1}\rangle e_{j-1}}{\left\|v_j - \langle v_j, e_1\rangle e_1 - \cdots - \langle v_j, e_{j-1}\rangle e_{j-1}\right\|}.$
Then $\operatorname{span}(v_1, \ldots, v_j) = \operatorname{span}(e_1, \ldots, e_j)$ for every $1 \le j \le m$.
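The procedure translates directly into code. A minimal sketch for $\mathbb{R}^n$ with the Euclidean inner product (the function names are mine, not the book's):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vs):
    """Orthonormalize a linearly independent list of vectors in R^n."""
    es = []
    for v in vs:
        # Subtract the components of v along the previously built e_1, ..., e_{j-1}.
        w = list(v)
        for e in es:
            c = dot(v, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        n = math.sqrt(dot(w, w))
        if n == 0:
            raise ValueError("list is linearly dependent")  # cf. Exercise 9 below
        es.append([wi / n for wi in w])
    return es

es = gram_schmidt([(1, 0, 0), (1, 1, 1), (1, 1, 2)])  # the list from Exercise 3 below
# The output is orthonormal; e_3 is (0, -1/sqrt(2), 1/sqrt(2)).
assert all(abs(dot(es[i], es[j]) - (i == j)) < 1e-12
           for i in range(3) for j in range(3))
```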

The Gram-Schmidt Procedure helps us find an orthonormal basis of a vector space. From it, we deduce:

Theorem 8.3.3 (6.34) Every finite-dimensional inner product space has an orthonormal basis.

Theorem 8.3.4 (6.37) Suppose $T \in \mathcal{L}(V)$. If $T$ has an upper-triangular matrix with respect to some basis of $V$, then $T$ has an upper-triangular matrix with respect to some orthonormal basis of $V$.

Theorem 8.3.5 (6.42, Riesz Representation Theorem) Suppose $V$ is finite-dimensional and $\varphi$ is a linear functional on $V$. Then there is a unique vector $u \in V$ such that $\varphi(v) = \langle v, u\rangle$ for every $v \in V$. In particular, $u = \overline{\varphi(e_1)}\,e_1 + \cdots + \overline{\varphi(e_n)}\,e_n$ for ANY orthonormal basis $e_1, \ldots, e_n$ of $V$.

8.4. Exercises 6B

1. (a) We have $\langle (\cos\theta, \sin\theta), (-\sin\theta, \cos\theta)\rangle = 0$ and $\|(\cos\theta, \sin\theta)\| = \|(-\sin\theta, \cos\theta)\| = \sqrt{\cos^2\theta + \sin^2\theta} = 1$. Similarly for the other orthonormal basis.
(b) If $(x_1, y_1), (x_2, y_2)$ is an orthonormal basis of $\mathbb{R}^2$ then $x_1x_2 + y_1y_2 = 0$ and $x_1^2 + y_1^2 = x_2^2 + y_2^2 = 1$. Since $|x_1| \le 1$, there exists $\theta \in \mathbb{R}$ with $x_1 = \cos\theta$, which gives $y_1^2 = \sin^2\theta$. WLOG, say $y_1 = \sin\theta$ and $x_1 \neq 0$; then $x_2/y_2 = -y_1/x_1 = -\tan\theta$. Hence $x_2^2 + y_2^2 = y_2^2\left(\tan^2\theta + 1\right) = 1$, so $y_2^2 = \cos^2\theta$. WLOG $y_2 = \cos\theta$; then $x_2 = -\sin\theta$. Thus $(x_1, y_1), (x_2, y_2)$ is of the form $(\cos\theta, \sin\theta), (-\sin\theta, \cos\theta)$.

2. If $v \in \operatorname{span}(e_1, \ldots, e_n)$ then the claim holds by Theorem 8.3.1.
Conversely, suppose $\|v\|^2 = |\langle v, e_1\rangle|^2 + \cdots + |\langle v, e_n\rangle|^2$. Let $u = \langle v, e_1\rangle e_1 + \cdots + \langle v, e_n\rangle e_n$; then
$\langle u, v\rangle = \langle v, u\rangle = |\langle v, e_1\rangle|^2 + \cdots + |\langle v, e_n\rangle|^2 = \|v\|^2.$
On the other hand, we also have $\|u\|^2 = \|v\|^2$. It follows that $\|u-v\|^2 = \|u\|^2 + \|v\|^2 - \langle u,v\rangle - \langle v,u\rangle = 0$, hence $v = u \in \operatorname{span}(e_1, \ldots, e_n)$.


3. By the proof of Theorem 8.3.4 (6B), it suffices to apply the Gram-Schmidt Procedure 8.3.2 to $(1,0,0), (1,1,1), (1,1,2)$. We have $e_1 = (1,0,0)$ and
$(1,1,1) - \langle (1,1,1), (1,0,0)\rangle\,(1,0,0) = (0,1,1)$
with $\|(0,1,1)\| = \sqrt{2}$, so $e_2 = \left(0, \sqrt{\tfrac{1}{2}}, \sqrt{\tfrac{1}{2}}\right)$. We have
$(1,1,2) - \langle (1,1,2), e_1\rangle e_1 - \langle (1,1,2), e_2\rangle e_2 = (1,1,2) - (1,0,0) - \frac{3}{\sqrt{2}}\left(0, \sqrt{\tfrac{1}{2}}, \sqrt{\tfrac{1}{2}}\right) = (0,1,2) - (0, 3/2, 3/2) = (0, -1/2, 1/2).$
Since $\|(0,-1/2,1/2)\| = \sqrt{\tfrac{1}{2}}$, we get $e_3 = \left(0, -\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)$.

4. For positive integers $i, j$ with $i > j$, we have
$\int_{-\pi}^{\pi} \frac{\cos ix \cos jx}{\pi}\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left[\cos(i-j)x + \cos(i+j)x\right]dx = \frac{1}{2\pi}\left[\frac{\sin(i-j)x}{i-j} + \frac{\sin(i+j)x}{i+j}\right]_{-\pi}^{\pi} = 0.$
$\int_{-\pi}^{\pi} \frac{\sin ix \sin jx}{\pi}\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left[\cos(i-j)x - \cos(i+j)x\right]dx = \frac{1}{2\pi}\left[\frac{\sin(i-j)x}{i-j} - \frac{\sin(i+j)x}{i+j}\right]_{-\pi}^{\pi} = 0.$
$\int_{-\pi}^{\pi} \frac{\sin ix \cos jx}{\pi}\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left[\sin(i+j)x + \sin(i-j)x\right]dx = \frac{1}{2\pi}\left[\frac{\cos(i+j)x}{-(i+j)} + \frac{\cos(i-j)x}{-(i-j)}\right]_{-\pi}^{\pi} = 0.$
$\int_{-\pi}^{\pi} \frac{\cos ix}{\sqrt{2\pi}}\,dx = \frac{1}{\sqrt{2\pi}}\left[\frac{\sin ix}{i}\right]_{-\pi}^{\pi} = 0.$
$\int_{-\pi}^{\pi} \frac{\sin ix}{\sqrt{2\pi}}\,dx = \frac{1}{\sqrt{2\pi}}\left[\frac{\cos ix}{-i}\right]_{-\pi}^{\pi} = 0.$
$\int_{-\pi}^{\pi} \frac{\sin ix \cos ix}{\pi}\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi} \sin(2ix)\,dx = 0.$
Similarly, we can find that
$\left\|\frac{1}{\sqrt{2\pi}}\right\| = \left\|\frac{\cos ix}{\sqrt{\pi}}\right\| = \left\|\frac{\sin ix}{\sqrt{\pi}}\right\| = 1.$


5. We have $\|1\|^2 = \int_0^1 1\,dx = 1$, so $e_1 = 1$. We have
$x - \langle x, e_1\rangle e_1 = x - \int_0^1 x\,dx = x - \frac{1}{2},$
and
$\left\|x - \frac{1}{2}\right\|^2 = \int_0^1 \left(x - \frac{1}{2}\right)^2 dx = \int_0^1 \left(x^2 - x + \frac{1}{4}\right)dx = \frac{1}{3} - \frac{1}{2} + \frac{1}{4} = \frac{1}{12},$
so $e_2 = (2x-1)\sqrt{3}$. We have
$x^2 - \langle x^2, e_1\rangle e_1 - \langle x^2, e_2\rangle e_2 = x^2 - \int_0^1 x^2\,dx - 3(2x-1)\int_0^1 x^2(2x-1)\,dx = x^2 - \frac{1}{3} - \frac{1}{2}(2x-1) = x^2 - x + \frac{1}{6}.$
Since
$\left\|x^2 - x + \frac{1}{6}\right\|^2 = \int_0^1 \left(x^2 - x + \frac{1}{6}\right)^2 dx = \int_0^1 \left(x^4 - 2x^3 + \frac{4}{3}x^2 - \frac{1}{3}x + \frac{1}{36}\right)dx = \frac{1}{5} - \frac{1}{2} + \frac{4}{9} - \frac{1}{6} + \frac{1}{36} = \frac{1}{180},$
we get $e_3 = \sqrt{180}\left(x^2 - x + \frac{1}{6}\right) = \sqrt{5}\left(6x^2 - 6x + 1\right)$.
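This orthonormal basis of $\mathcal{P}_2(\mathbb{R})$ under $\langle p,q\rangle = \int_0^1 pq$ can be checked numerically. A small sketch using composite Simpson's rule (the grid size is an arbitrary choice of mine):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

e1 = lambda x: 1.0
e2 = lambda x: (2 * x - 1) * math.sqrt(3)
e3 = lambda x: math.sqrt(5) * (6 * x * x - 6 * x + 1)

# Check <e_i, e_j> = 1 if i == j, else 0, for the inner product on [0, 1].
basis = [e1, e2, e3]
for i, f in enumerate(basis):
    for j, g in enumerate(basis):
        val = simpson(lambda x: f(x) * g(x), 0.0, 1.0)
        assert abs(val - (1.0 if i == j else 0.0)) < 1e-9
```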

6. The differentiation operator on $\mathcal{P}_2(\mathbb{R})$ has an upper-triangular matrix with respect to the basis $1, x, x^2$, so by the proof of Theorem 8.3.4 (6B) the answer is the orthonormal basis found in Exercise 5.

7. Define a linear functional $T$ on $\mathcal{P}_2(\mathbb{R})$ by $Tp = p\left(\frac{1}{2}\right)$. Consider the inner product $\langle p, q\rangle = \int_0^1 p(x)q(x)\,dx$. By Theorem 8.3.5 (6B), if $Tp = \langle p, q\rangle$ then $q(x) = e_1(1/2)e_1 + e_2(1/2)e_2 + e_3(1/2)e_3$, where $e_1, e_2, e_3$ is the orthonormal basis of $\mathcal{P}_2(\mathbb{R})$ found in Exercise 5 (6B).

8. Similarly, we find
$q(x) = \int_0^1 \cos(\pi x)\,dx + 3(2x-1)\int_0^1 (2x-1)\cos(\pi x)\,dx + 180\left(x^2 - x + \frac{1}{6}\right)\int_0^1 \left(x^2 - x + \frac{1}{6}\right)\cos(\pi x)\,dx.$

9. Suppose the Gram-Schmidt Procedure is applied to a list of vectors $v_1, \ldots, v_m$ that is not linearly independent. WLOG say $v_1, \ldots, v_{m-1}$ is linearly independent and $v_m \in \operatorname{span}(v_1, \ldots, v_{m-1})$. Then we cannot define $e_m$, because $\|v_m - \langle v_m, e_1\rangle e_1 - \cdots - \langle v_m, e_{m-1}\rangle e_{m-1}\| = 0$.


10. Since $\operatorname{span}(v_1) = \operatorname{span}(e_1)$, we have $v_1 = \langle v_1, e_1\rangle e_1$, i.e. $e_1 = \frac{v_1}{\langle v_1, e_1\rangle}$. Since $\|e_1\| = 1$, this means $|\langle v_1, e_1\rangle| = \|v_1\|$, which leads to $\langle v_1, e_1\rangle = \pm\|v_1\|$. Thus there are two choices for $e_1$. Similarly, from $v_j = \langle v_j, e_1\rangle e_1 + \cdots + \langle v_j, e_j\rangle e_j$ we find $\langle v_j, e_j\rangle = \pm\left\|v_j - \langle v_j, e_1\rangle e_1 - \cdots - \langle v_j, e_{j-1}\rangle e_{j-1}\right\|$, so there are two ways to choose $e_j$. In total, there are $2^m$ orthonormal lists $e_1, \ldots, e_m$ of vectors in $V$.

11. Choose $u \in V$, $u \neq 0$, and let $c = \|u\|_1^2/\|u\|_2^2$. Any $v \in V$, $v \neq 0$, can be decomposed as $v = ku + w$ with $\langle u, w\rangle_1 = 0 = \langle u, w\rangle_2$, by Theorem 8.1.1 (6A). Hence,
$\langle v, u\rangle_1 = k\|u\|_1^2 = ck\|u\|_2^2 = c\langle v, u\rangle_2.$
On the other hand, we can also write $u = kv + w$ with $\langle v, w\rangle_1 = 0 = \langle v, w\rangle_2$. We find $\langle u, v\rangle_1 = k\|v\|_1^2$ and $\langle u, v\rangle_2 = k\|v\|_2^2$. Combining with the above, we get $\|v\|_1^2 = c\|v\|_2^2$ for any $v \in V$.
Therefore, for any $v, w \in V$: if one of them is $0$ then $\langle v, w\rangle_1 = c\langle v, w\rangle_2 = 0$. If $v, w \neq 0$, write $v = kw + u$ with $\langle w, u\rangle_1 = \langle w, u\rangle_2 = 0$. Hence,
$\langle v, w\rangle_1 = k\|w\|_1^2 = ck\|w\|_2^2 = c\langle v, w\rangle_2.$

12. Since $V$ is finite-dimensional, there exists an orthonormal basis $e_1, \ldots, e_n$ of $V$ with respect to the inner product $\langle\cdot,\cdot\rangle_2$. We apply the Gram-Schmidt Procedure to $e_1, \ldots, e_n$ with respect to the inner product $\langle\cdot,\cdot\rangle_1$ and obtain another orthonormal basis $u_1, \ldots, u_n$ of $V$. With this, we have $\operatorname{span}(e_1, \ldots, e_j) = \operatorname{span}(u_1, \ldots, u_j)$ for all $1 \le j \le n$. By Theorem 8.3.1 (6B) we have $\|v\|_2^2 = \sum_{i=1}^n |\langle v, e_i\rangle|^2$ and
$\|v\|_1^2 = \sum_{k=1}^n |\langle v, u_k\rangle|^2 = \sum_{k=1}^n \left|\left\langle v, \sum_{i=1}^k \langle u_k, e_i\rangle e_i\right\rangle\right|^2 = \sum_{k=1}^n \left|\sum_{i=1}^k \overline{\langle u_k, e_i\rangle}\,\langle v, e_i\rangle\right|^2 \le \sum_{k=1}^n \left(\sum_{i=1}^k |\langle u_k, e_i\rangle| \cdot |\langle v, e_i\rangle|\right)^2 = \sum_{k=1}^n A_k|\langle v, e_k\rangle|^2 + \sum_{1\le i<j\le n} B_{i,j}|\langle v, e_i\rangle| \cdot |\langle v, e_j\rangle|.$
The constants $A_k, B_{i,j}$ above are obtained from the $|\langle u_i, e_j\rangle|$, so they are the same regardless of which $v \in V$ is chosen. Note that $2|\langle v, e_i\rangle\langle v, e_j\rangle| \le |\langle v, e_i\rangle|^2 + |\langle v, e_j\rangle|^2$, so choosing $C = \sum_{k=1}^n A_k + \sum_{1\le i<j\le n} B_{i,j}$ gives $C\|v\|_2^2 \ge \|v\|_1^2$ for all $v \in V$.


13. Applying the Gram-Schmidt Procedure 8.3.2 (6B) to the linearly independent list of vectors $v_1, \ldots, v_m$, we find an orthonormal list $e_1, \ldots, e_m$ with $\operatorname{span}(e_1, \ldots, e_j) = \operatorname{span}(v_1, \ldots, v_j)$ for all $1 \le j \le m$. Hence, by Theorem 8.3.1 (6B), for $w \in \operatorname{span}(v_1, \ldots, v_m)$ we have $w = \langle w, e_1\rangle e_1 + \cdots + \langle w, e_m\rangle e_m$ and $v_j = \langle v_j, e_1\rangle e_1 + \cdots + \langle v_j, e_j\rangle e_j$, so
$\langle w, v_j\rangle = \sum_{i=1}^j \langle w, e_i\rangle\langle v_j, e_i\rangle = C_j + \langle w, e_j\rangle\langle v_j, e_j\rangle,$
where $C_j$ depends only on $\langle w, e_1\rangle, \ldots, \langle w, e_{j-1}\rangle$. It suffices to choose the $\langle w, e_i\rangle$ so that $\langle w, v_j\rangle > 0$ for all $1 \le j \le m$.
For any $1 \le j \le m$ we have $\langle v_j, e_j\rangle \neq 0$; otherwise $v_j \in \operatorname{span}(e_1, \ldots, e_{j-1}) = \operatorname{span}(v_1, \ldots, v_{j-1})$, a contradiction since $v_1, \ldots, v_m$ is linearly independent. Hence we can choose the $\langle w, e_i\rangle$ inductively on $i$ so that $\langle w, v_i\rangle > 0$. Thus there exists $w$ with $\langle w, v_i\rangle > 0$ for all $1 \le i \le m$.

14. Since $\|v_i - e_i\|^2 < \frac{1}{n}$, expanding gives $\|v_i\|^2 + \frac{n-1}{n} < 2|\langle v_i, e_i\rangle|$ for all $1 \le i \le n$.
Assume for contradiction that $v_1, \ldots, v_n$ is linearly dependent, i.e. there exist not-all-zero $a_1, \ldots, a_n \in \mathbb{F}$ with $\sum_{i=1}^n a_i v_i = 0$. From this, for all $1 \le j \le n$ we obtain $\sum_{i=1}^n a_i\langle v_i, e_j\rangle = \langle 0, e_j\rangle = 0$. Hence, consider the linear map $T: \mathbb{F}^{n,1} \to \mathbb{F}^{n,1}$ with $Tx = \mathcal{M}(T)x$, where $\mathcal{M}(T)_{i,j} = \langle v_j, e_i\rangle$ for all $1 \le i, j \le n$. With $x = (a_1, \ldots, a_n)^T$, $x \neq 0$, we have $Tx = 0$, so $T$ is not injective, hence not invertible, and $\dim\operatorname{range} T \le n-1$. By Theorem 5.10.7 (3F), $\dim\operatorname{range} T$ equals the column rank of $\mathcal{M}(T)$, which equals the row rank of $\mathcal{M}(T)$. Hence the $n$ rows of $\mathcal{M}(T)$ are linearly dependent in $\mathbb{F}^{1,n}$, i.e. there exist not-all-zero $b_1, \ldots, b_n \in \mathbb{F}$ with $\sum_{i=1}^n b_i\left(\langle v_1, e_i\rangle, \cdots, \langle v_n, e_i\rangle\right) = 0$. Hence $\sum_{i=1}^n b_i\langle v_j, e_i\rangle = 0$ for all $1 \le j \le n$, so $|b_j\langle v_j, e_j\rangle| = \left|\sum_{k\neq j} b_k\langle v_j, e_k\rangle\right|$ for all $1 \le j \le n$. On the other hand, we have
$\left|\sum_{k\neq j} b_k\langle v_j, e_k\rangle\right|^2 \le \left(\sum_{k\neq j} |b_k| \cdot |\langle v_j, e_k\rangle|\right)^2 \le \sum_{k\neq j} |b_k|^2 \sum_{k\neq j} |\langle v_j, e_k\rangle|^2 = \left(\|v_j\|^2 - |\langle v_j, e_j\rangle|^2\right)\sum_{k\neq j} |b_k|^2 < \left(\frac{1}{n} - \left(1 - |\langle v_j, e_j\rangle|\right)^2\right)\sum_{k\neq j} |b_k|^2.$
Note that $\sum_{k\neq j} |b_k|^2 \neq 0$ for every $1 \le j \le n$; otherwise $b_k = 0$ for all $k \neq j$, which would give $b_j\langle v_j, e_j\rangle = 0$, i.e. $\langle v_j, e_j\rangle = 0$ since $b_j \neq 0$, a contradiction since $0 < \|v_j\|^2 + \frac{n-1}{n} < 2|\langle v_j, e_j\rangle|$. From this and the above inequality, we obtain
$\frac{|b_j|^2}{\sum_{k\neq j} |b_k|^2} < \frac{1/n - (1 - |\langle v_j, e_j\rangle|)^2}{|\langle v_j, e_j\rangle|^2}.$
This leads to
$\frac{|b_j|^2}{\sum_{k=1}^n |b_k|^2} < \frac{1/n - (1 - |\langle v_j, e_j\rangle|)^2}{|\langle v_j, e_j\rangle|^2 + 1/n - (1 - |\langle v_j, e_j\rangle|)^2}.$
Next, we will show that
$\frac{1/n - (1 - |\langle v_j, e_j\rangle|)^2}{|\langle v_j, e_j\rangle|^2 + 1/n - (1 - |\langle v_j, e_j\rangle|)^2} \le \frac{1}{n}.$
Indeed, the above is equivalent to $\left(n|\langle v_j, e_j\rangle| - n + 1\right)^2 \ge 0$, which is true. Therefore, $\frac{|b_j|^2}{\sum_{k=1}^n |b_k|^2} < \frac{1}{n}$ for all $1 \le j \le n$, which leads to a contradiction if we sum up all these inequalities. Thus $v_1, \ldots, v_n$ is linearly independent, so $v_1, \ldots, v_n$ is a basis of $V$.

15. We prove this for the general case $C_{\mathbb{R}}([a,b])$ and $\varphi(f) = f(c)$ for some $c \in [a,b]$. Assume for contradiction that such a function $g$ exists. Pick $f(x) = (x-c)^2 g(x)$; then $f(c) = 0$ and $f \in C_{\mathbb{R}}([a,b])$. Hence $\int_a^b f(x)g(x)\,dx = \int_a^b \left[(x-c)g(x)\right]^2 dx = f(c) = 0$, which forces $(x-c)g(x) = 0$, and hence $g(x) = 0$ for all $x \in [a,b]$ since $g$ is continuous on $[a,b]$. But then $\langle f, g\rangle = 0$ for all $f \in C_{\mathbb{R}}([a,b])$, which is a contradiction if we choose $f$ with $f(c) \neq 0$. Thus no such function $g$ exists.

16. We induct on $\dim V$. If $\dim V = 1$, there exists $v \in V$ with $\|v\| = 1$ and $Tv = \lambda v$ with $|\lambda| < 1$. Since $0 \le |\lambda| < 1$, $\lim_{m\to\infty}|\lambda|^m = 0$, so for any $\epsilon > 0$ there exists $M$ such that $\|T^m v\| = \|\lambda^m v\| = |\lambda|^m < \epsilon$ for all $m > M$.
Suppose the claim holds when $\dim V = n-1$. Let $\dim V = n$. Since $\mathbb{F} = \mathbb{C}$, $T$ has an upper-triangular matrix with respect to some basis of $V$ by Theorem 7.3.2 (5B), hence with respect to some orthonormal basis $e_1, \ldots, e_n$ of $V$ by Theorem 8.3.4 (6B). Then $U = \operatorname{span}(e_1, \ldots, e_{n-1})$ is invariant under $T$, so by the inductive hypothesis, for $0 < \epsilon < 1$ there exists $m \in \mathbb{Z}$, $m \ge 1$, with $\|T^m v\| \le \epsilon\|v\|$ for all $v \in U$.
One can easily show that $T^m e_n = u + \alpha_{n,n}^m e_n$ for some $u \in U$, where $\alpha_{n,n}$ is the $(n,n)$ entry of the upper-triangular matrix. Hence one can prove inductively that for any positive integer $k$,
$T^{mk}e_n = \sum_{i=0}^{k-2} \alpha_{n,n}^{im}\,T^{(k-1-i)m}u + \alpha_{n,n}^{m(k-1)}\,T^m e_n.$
Since $\max\{\epsilon, |\alpha_{n,n}|\} < 1$, we have $\lim_{x\to\infty} x\max\{\epsilon, |\alpha_{n,n}|\}^x = 0$. Hence there exists $X$ such that for all $x > X$, $x\max\{\epsilon, |\alpha_{n,n}|\}^x < \frac{\varepsilon}{\|u\|}$ for sufficiently small $\varepsilon > 0$. If we choose $k$ large enough, then for any $0 \le i \le k-2$,
$\left\|\alpha_{n,n}^{im}T^{(k-1-i)m}u\right\| = |\alpha_{n,n}|^{im}\left\|T^{(k-1-i)m}u\right\| \le |\alpha_{n,n}|^{im}\epsilon\left\|T^{(k-2-i)m}u\right\| \le \cdots \le |\alpha_{n,n}|^{im}\epsilon^{k-1-i}\|u\| \le \frac{\varepsilon}{i(m-1)+k-1}.$
We can simultaneously choose $k$ large enough that $|\alpha_{n,n}|^{m(k-1)}\|T^m e_n\| \le \delta$, which leads to
$\|T^{mk}e_n\| \le \sum_{i=0}^{k-2} \left\|\alpha_{n,n}^{im}T^{(k-1-i)m}u\right\| + \left\|\alpha_{n,n}^{m(k-1)}T^m e_n\right\| \le (k-1)\cdot\frac{\varepsilon}{k-1} + \delta = \varepsilon + \delta.$
It follows that for any $\varepsilon > 0$ there exists a positive integer $k$ with $\|T^{mk}e_n\| \le \varepsilon$. Hence, for any $v \in V$ with $v = \sum_{i=1}^n \lambda_i e_i = w + \lambda_n e_n$, $w \in U$, we have
$\|T^{mk}v\| \le \|T^{mk}w\| + |\lambda_n| \cdot \|T^{mk}e_n\| \le \epsilon^k\|w\| + |\lambda_n|\varepsilon = \epsilon^k\sqrt{|\lambda_1|^2 + \cdots + |\lambda_{n-1}|^2} + |\lambda_n|\varepsilon \le \delta'\sqrt{|\lambda_1|^2 + \cdots + |\lambda_n|^2} = \delta'\|v\|.$
We can make $\delta'$ as small as we like by a proper choice of $\epsilon$, $k$ and $\varepsilon$.

17. (a) We show that for $u, v \in V$, $\Phi u + \Phi v = \Phi(u+v)$. Indeed, for any $w \in V$,
$(\Phi u + \Phi v)(w) = (\Phi u)(w) + (\Phi v)(w) = \langle w, u\rangle + \langle w, v\rangle = \langle w, u+v\rangle = (\Phi(u+v))(w).$
Since $\mathbb{F} = \mathbb{R}$, we also have
$(\Phi(\lambda u))(w) = \langle w, \lambda u\rangle = \lambda\langle w, u\rangle = \lambda(\Phi u)(w).$
Thus $\Phi$ is a linear map from $V$ to $V'$.
(b) If $\mathbb{F} = \mathbb{C}$: for $v \in V$, $v \neq 0$, pick $\lambda = 1-i$; then $(\Phi(\lambda v))(v) = \langle v, \lambda v\rangle = (1+i)\|v\|^2 \neq (1-i)\|v\|^2 = (\lambda\Phi v)(v)$. So $\Phi$ is not a linear map.
(c) If $\Phi u = 0$ for some $u \in V$, then $(\Phi u)(u) = \|u\|^2 = 0$, which gives $u = 0$. Thus $\Phi$ is injective.
(d) Since $V$ is finite-dimensional, by Theorem 5.10.1 (3F) we have $\dim V = \dim V'$, so $\dim V' = \dim V = \dim\operatorname{null}\Phi + \dim\operatorname{range}\Phi = \dim\operatorname{range}\Phi$, hence $\Phi$ is surjective. Thus $\Phi$ is invertible, i.e. $\Phi$ is an isomorphism from $V$ onto $V'$.
How is (d) an alternative proof of the Riesz Representation Theorem (8.3.5, 6B) when $\mathbb{F} = \mathbb{R}$? Since $\Phi$ is surjective, for any linear functional $\varphi \in V'$ there exists $v \in V$ with $\Phi v = \varphi$, which means $\varphi(u) = (\Phi v)(u) = \langle u, v\rangle$ for any $u \in V$.

8.5. 6C: Orthogonal Complements and Minimization Problems

Theorem 8.5.1 (6.47) Suppose $U$ is a finite-dimensional subspace of $V$. Then $V = U \oplus U^\perp$.


Theorem 8.5.2 (6.51) Suppose $U$ is a finite-dimensional subspace of $V$. Then $U = (U^\perp)^\perp$.

Definition 8.5.3 (orthogonal projection). Suppose $U$ is a finite-dimensional subspace of $V$. The orthogonal projection of $V$ onto $U$ is the operator $P_U \in \mathcal{L}(V)$ defined as follows: for $v \in V$, write $v = u + w$ where $u \in U$, $w \in U^\perp$. Then $P_U v = u$.

Theorem 8.5.4 (6.56) Suppose $U$ is a finite-dimensional subspace of $V$, $v \in V$ and $u \in U$. Then $\|v - P_U v\| \le \|v - u\|$. Equality holds if and only if $u = P_U v$.

Theorem 8.5.5 (Properties of the orthogonal projection $P_U$) Suppose $U$ is a finite-dimensional subspace of $V$ and $v \in V$. Then

1. $P_U \in \mathcal{L}(V)$.

2. $P_U u = u$ for every $u \in U$ and $P_U w = 0$ for every $w \in U^\perp$.

3. $\operatorname{range} P_U = U$ and $\operatorname{null} P_U = U^\perp$.

4. $v - P_U v \in U^\perp$.

5. $P_U^2 = P_U$.

6. $\|P_U v\| \le \|v\|$.

7. For every orthonormal basis $e_1, \ldots, e_m$ of $U$, $P_U v = \langle v, e_1\rangle e_1 + \cdots + \langle v, e_m\rangle e_m$.
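Property 7 gives a direct way to compute $P_U$ in coordinates. A small sketch checking properties 4, 5 and 6 numerically for a subspace of $\mathbb{R}^4$ (the subspace, vector and helper names are my own choices):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(v, es):
    """P_U v = sum of <v, e_i> e_i for an orthonormal basis e_1, ..., e_m of U."""
    out = [0.0] * len(v)
    for e in es:
        c = dot(v, e)
        out = [o + c * ei for o, ei in zip(out, e)]
    return out

# An orthonormal basis of a 2-dimensional subspace U of R^4.
es = [(1 / math.sqrt(2), 1 / math.sqrt(2), 0, 0), (0, 0, 0, 1)]
v = [1.0, 2.0, 3.0, 4.0]

p = project(v, es)
p2 = project(p, es)
assert all(abs(a - b) < 1e-12 for a, b in zip(p, p2))        # P_U^2 = P_U
assert math.sqrt(dot(p, p)) <= math.sqrt(dot(v, v)) + 1e-12  # ||P_U v|| <= ||v||
residual = [vi - pi for vi, pi in zip(v, p)]
assert abs(dot(residual, es[0])) < 1e-12                     # v - P_U v in U-perp
assert abs(dot(residual, es[1])) < 1e-12
```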

8.6. Exercises 6C

1. $v \in \{v_1, \ldots, v_m\}^\perp$ iff $\langle v, v_i\rangle = 0$ for all $1 \le i \le m$, iff $\langle v, w\rangle = 0$ for all $w \in \operatorname{span}(v_1, \ldots, v_m)$, iff $v \in \operatorname{span}(v_1, \ldots, v_m)^\perp$. Thus $\{v_1, \ldots, v_m\}^\perp = \operatorname{span}(v_1, \ldots, v_m)^\perp$.

2. If $U = V$ then obviously $U^\perp = \{0\}$. Conversely, suppose $U^\perp = \{0\}$. Let $e_1, \ldots, e_m$ be an orthonormal basis of $U$. If $U \neq V$ then there exists $v \in V$ with $v \notin U$. Apply the Gram-Schmidt Procedure 8.3.2 (6B) to the linearly independent list $e_1, \ldots, e_m, v$ to get an orthonormal list $e_1, \ldots, e_m, e_{m+1}$, which leads to $e_{m+1} \in U^\perp$, a contradiction. Thus $U = V$.

3. By the Gram-Schmidt Procedure, $\operatorname{span}(u_1, \ldots, u_m) = \operatorname{span}(e_1, \ldots, e_m) = U$, so $e_1, \ldots, e_m$ is an orthonormal basis of $U$. Since each $f_i$ is orthogonal to each $e_j$, we have $f_i \in U^\perp$ for each $1 \le i \le n$. Hence $f_1, \ldots, f_n$ is an orthonormal list in $U^\perp$. On the other hand, $\dim U^\perp = \dim V - \dim U = n$, so $f_1, \ldots, f_n$ is an orthonormal basis of $U^\perp$.

4. By Exercise 3 (6C), apply the Gram-Schmidt Procedure to $(1,2,3,-4), (-5,4,3,2), (1,0,0,0), (0,1,0,0)$.

5. Since $V$ is finite-dimensional, $U$ and $U^\perp$ are finite-dimensional. Since $U = (U^\perp)^\perp$ by Theorem 8.5.2 (6C), Theorem 8.5.1 (6C) gives $V = U \oplus U^\perp = (U^\perp)^\perp \oplus U^\perp$, i.e. we can represent $v$ uniquely as $v = u + w$ where $u \in U = (U^\perp)^\perp$, $w \in U^\perp$. Hence $P_{U^\perp}v = w$ and $P_U v = u$, so $\left(P_{U^\perp} + P_U\right)v = u + w = Iv$. Thus $P_{U^\perp} = I - P_U$.

6. For any $w \in W$ we have $P_U P_W w = P_U w = 0$, so $w \in \operatorname{null} P_U = U^\perp$, i.e. $\langle w, u\rangle = 0$ for all $u \in U$. Hence $\langle u, w\rangle = 0$ for all $u \in U$ and all $w \in W$.

7. Let $U = \operatorname{range} P$. Then $\operatorname{null} P \subset U^\perp$, since every vector in $\operatorname{null} P$ is orthogonal to every vector in $\operatorname{range} P$. On the other hand, since $P^2 = P$, by Exercise 4 (5B) we find $V = \operatorname{range} P \oplus \operatorname{null} P$. Hence each $v \in V$ can be written uniquely as $v = Pv + w$ where $Pv \in U$, $w \in \operatorname{null} P \subset U^\perp$. Thus $P_U v = P_U(Pv + w) = Pv$.

8. Let $U = \operatorname{range} P$. We will show that every vector in $\operatorname{null} P$ is orthogonal to every vector in $\operatorname{range} P$. Indeed, from $P^2 = P$ and Exercise 4 (5B), we find $V = \operatorname{range} P \oplus \operatorname{null} P$. Let $e_1, \ldots, e_m$ be an orthonormal basis of $\operatorname{range} P$ and $f_1, \ldots, f_n$ an orthonormal basis of $\operatorname{null} P$; then $e_1, \ldots, e_m, f_1, \ldots, f_n$ is a basis of $V$. It suffices to prove that $\langle e_i, f_j\rangle = 0$ for all $1 \le i \le m$, $1 \le j \le n$.
From $P^2 = P$ it follows that $v - Pv \in \operatorname{null} P$ for all $v \in V$, i.e. each $v$ can be represented uniquely as $v = Pv + w$ where $w \in \operatorname{null} P$. Hence, for any $\alpha_1, \ldots, \alpha_m, \beta_1, \ldots, \beta_n \in \mathbb{F}$, if $v = \sum_{i=1}^m \alpha_i e_i + \sum_{j=1}^n \beta_j f_j$ then $Pv = \sum_{i=1}^m \alpha_i e_i$. Thus $\|Pv\|^2 = \sum_{i=1}^m |\alpha_i|^2$ and
$\|v\|^2 = \|Pv\|^2 + \sum_{j=1}^n |\beta_j|^2 + \sum_{i=1}^m\sum_{j=1}^n 2\operatorname{Re}\left(\alpha_i\overline{\beta_j}\langle e_i, f_j\rangle\right).$
If there exists at least one $\langle e_i, f_j\rangle \neq 0$, then choose $\alpha_x = \beta_y = 0$ for all $x \neq i$, $y \neq j$, and $\alpha_i, \beta_j$ with $\alpha_i\overline{\beta_j} = K\overline{\langle e_i, f_j\rangle}$ for some real number $K < 0$ such that $2K|\langle e_i, f_j\rangle|^2 + |\beta_j|^2 < 0$. Hence,
$\|v\|^2 - \|Pv\|^2 = |\beta_j|^2 + 2K|\langle e_i, f_j\rangle|^2 < 0.$
This contradicts the assumption that $\|v\| \ge \|Pv\|$. Thus we must have $\langle e_i, f_j\rangle = 0$ for every $1 \le i \le m$, $1 \le j \le n$, so every vector in $\operatorname{null} P$ is orthogonal to every vector in $\operatorname{range} P = U$, i.e. $\operatorname{null} P \subset U^\perp$. Combining this with $v = Pv + w$ for any $v \in V$, we find $P_U v = P_U(Pv + w) = Pv$ for all $v \in V$.

9. Since $U$ is a finite-dimensional subspace of $V$, by Theorem 8.5.1 (6C) we have $V = U \oplus U^\perp$, i.e. each $v$ can be written uniquely as $v = u + w$ for $u \in U$, $w \in U^\perp$.
If $U$ is invariant under $T$ then $Tu \in U$, so for any $v \in V$ with $v = u + w$ we have $P_U T P_U v = P_U T u = Tu = T P_U v$, which leads to $P_U T P_U = T P_U$. Conversely, if $P_U T P_U = T P_U$, then for any $u \in U$, $P_U T u = P_U T P_U u = T P_U u = Tu$, so $Tu \in U$ (being in the range of $P_U$). Thus $U$ is invariant under $T$.

10. Since $V$ is finite-dimensional, by Theorem 8.5.1 (6C) we have $V = U \oplus U^\perp$, i.e. each $v$ can be represented uniquely as $v = u + w$ with $u \in U$, $w \in U^\perp$.
If $U$ and $U^\perp$ are both invariant under $T$, then for any $v \in V$, $Tv = Tu + Tw$ with $Tu \in U$, $Tw \in U^\perp$. Hence $P_U Tv = P_U(Tu + Tw) = Tu = T P_U v$, so $P_U T = T P_U$. Conversely, if $P_U T = T P_U$, then for any $u \in U$ we have $P_U T u = T P_U u = Tu$, so $Tu \in U$. It follows that $U$ is invariant under $T$. For any $w \in U^\perp$ we have $P_U T w = T P_U w = T(0) = 0$, so $Tw \in \operatorname{null} P_U = U^\perp$. Thus $U^\perp$ is invariant under $T$.


11. Such a $u$ is exactly $P_U(1,2,3,4)$, by theorem 8.5.4 (6C). We do this step by step:

• Apply the Gram–Schmidt procedure to $(1,1,0,0), (1,1,1,2)$ and obtain an orthonormal basis $e_1, e_2$ of $U$.

• Calculate $\langle (1,2,3,4), e_1\rangle$ and $\langle (1,2,3,4), e_2\rangle$ (note that since $V = \mathbf{R}^4$, we are talking about the Euclidean inner product).

• $P_U(1,2,3,4) = \langle (1,2,3,4), e_1\rangle e_1 + \langle (1,2,3,4), e_2\rangle e_2$ is the answer.
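The steps above can be checked numerically. Below is a minimal Python sketch (the helper names are mine, not from the book) computing $P_U(1,2,3,4)$ with the Euclidean inner product:

```python
from math import sqrt

# Euclidean inner product and basic vector operations on R^4
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def scale(c, v): return [c * a for a in v]
def sub(u, v): return [a - b for a, b in zip(u, v)]

def gram_schmidt(vectors):
    """Return an orthonormal list spanning the same subspace."""
    basis = []
    for v in vectors:
        for e in basis:
            v = sub(v, scale(dot(v, e), e))  # remove the component along e
        basis.append(scale(1 / sqrt(dot(v, v)), v))
    return basis

U_basis = gram_schmidt([[1, 1, 0, 0], [1, 1, 1, 2]])
v = [1, 2, 3, 4]
# P_U v = <v,e1> e1 + <v,e2> e2
Pv = [0.0] * 4
for e in U_basis:
    Pv = [p + dot(v, e) * a for p, a in zip(Pv, e)]
print(Pv)  # each entry is approximately (1.5, 1.5, 2.2, 4.4)
```

So the closest point of $U$ to $(1,2,3,4)$ works out to $(3/2, 3/2, 11/5, 22/5)$.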

12. Define the subspace $U$ of $\mathcal{P}_3(\mathbf{R})$ as $U = \{p \in \mathcal{P}_3(\mathbf{R}) : p(0) = 0,\ p'(0) = 0\}$. It follows that $U = \{ax^3 + bx^2 : a, b \in \mathbf{R}\} = \operatorname{span}(x^3, x^2)$. Hence, similarly to the previous exercise, we find an orthonormal basis $e_1, e_2$ of $U$ from the basis $x^3, x^2$ and the inner product on $\mathcal{P}_3(\mathbf{R})$:
\[ \langle p, q\rangle = \int_{-\pi}^{\pi} p(x)q(x)\,dx. \]
From that we find $P_U(3x+2) = \langle 3x+2, e_1\rangle e_1 + \langle 3x+2, e_2\rangle e_2$.

13. It is exercise 6.58 in the book.

14. (a) Suppose $U^\perp \ne \{0\}$; we show that for any $g \in U^\perp$, $g \ne 0$, we have $xg(x) \in U^\perp$. Indeed, for any $f \in U$ we have $xf(x) \in U$, so
\[ \langle f(x), xg(x)\rangle = \int_{-1}^{1} [xf(x)]\,g(x)\,dx = \langle xf(x), g(x)\rangle = 0. \]
Hence $xg(x) \in U^\perp$. On the other hand, $(xg(x))(0) = 0$, so $xg(x) \in U$. Thus we must have $xg(x) = 0$, i.e. $g(x) = 0$, a contradiction. Thus $U^\perp = \{0\}$. Another proof of (a) can be seen here.

(b) Since $U^\perp = \{0\}$, theorems 8.5.1 and 8.5.2 (6C) do not hold in this case, where $C_{\mathbf{R}}([-1,1])$ is infinite-dimensional.


9. Chapter 7: Operators on Inner Product Spaces

9.1. 7A: Self-adjoint and Normal Operators

V,W are assumed to be finite-dimensional inner product spaces over F.

Definition 9.1.1 (adjoint). Suppose $T \in \mathcal{L}(V, W)$. The adjoint of $T$ is the function $T^* \colon W \to V$ such that $\langle Tv, w\rangle = \langle v, T^*w\rangle$.

Definition 9.1.2 (self-adjoint operator). An operator $T \in \mathcal{L}(V)$ is called self-adjoint or Hermitian if $T = T^*$.

Theorem 9.1.3 (7.7) Suppose $T \in \mathcal{L}(V, W)$. Then:

(a) $\operatorname{null} T^* = (\operatorname{range} T)^\perp$.

(b) $\operatorname{range} T^* = (\operatorname{null} T)^\perp$.

(c) $\operatorname{null} T = (\operatorname{range} T^*)^\perp$.

(d) $\operatorname{range} T = (\operatorname{null} T^*)^\perp$.

From theorem 9.1.3, we find that if $T$ is self-adjoint, then $\operatorname{null} T = (\operatorname{range} T)^\perp$. Adding the condition $T^2 = T$, we obtain that $T$ is an orthogonal projection (definition 8.5.3).

Theorem 9.1.4 (7.10) Let $T \in \mathcal{L}(V, W)$. Suppose $e_1, \ldots, e_n$ is an orthonormal basis of $V$ and $f_1, \ldots, f_m$ is an orthonormal basis of $W$. Then $\mathcal{M}(T^*, (f_1, \ldots, f_m), (e_1, \ldots, e_n))$ is the conjugate transpose of $\mathcal{M}(T, (e_1, \ldots, e_n), (f_1, \ldots, f_m))$.

From theorem 9.1.4 (7A), we find that for a self-adjoint operator $A$, i.e. $A = A^*$, the matrix of $A$ with respect to any orthonormal basis is equal to its conjugate transpose. Such a matrix is called a Hermitian matrix. Over a real inner product space, the matrix of a self-adjoint operator $A$ (with respect to an orthonormal basis) is equal to its own transpose. This kind of matrix is called a real symmetric matrix.

Theorem 9.1.5 (7.13) Every eigenvalue of a self-adjoint operator is real.

Theorem 9.1.6 Suppose $T \in \mathcal{L}(V)$ is normal and $v \in V$ is an eigenvector of $T$ with eigenvalue $\lambda$. Then $v$ is also an eigenvector of $T^*$ with eigenvalue $\overline{\lambda}$.

Theorem 9.1.7 (7.22) Suppose $T \in \mathcal{L}(V)$ is normal. Then eigenvectors of $T$ corresponding to distinct eigenvalues are orthogonal.

How to find the adjoint $T^*$ of $T \in \mathcal{L}(V, W)$? There are two ways:


1. Directly: find $T^*$ from $\langle Tv, w\rangle = \langle v, T^*w\rangle$. See exercise 6 (7D) or exercise 1 (7A) as examples.

2. Indirectly: find orthonormal bases of $V$ and $W$ and write $\mathcal{M}(T)$ with respect to these two bases; then $\mathcal{M}(T^*)$ is the conjugate transpose of $\mathcal{M}(T)$, by theorem 9.1.4 (7A).

9.2. Exercises 7A

1. For any $(a_1, \ldots, a_n) \in \mathbf{F}^n$,
\begin{align*}
\langle (a_1, \ldots, a_n), T^*(z_1, \ldots, z_n)\rangle &= \langle T(a_1, \ldots, a_n), (z_1, \ldots, z_n)\rangle \\
&= \langle (0, a_1, \ldots, a_{n-1}), (z_1, \ldots, z_n)\rangle \\
&= \sum_{i=1}^{n-1} a_i \overline{z_{i+1}} \\
&= \langle (a_1, \ldots, a_n), (z_2, \ldots, z_n, 0)\rangle.
\end{align*}
Thus $T^*(z_1, \ldots, z_n) = (z_2, \ldots, z_n, 0)$.
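The adjoint just computed can be sanity-checked numerically. A small Python sketch (helper names are mine) verifying $\langle Ta, z\rangle = \langle a, T^*z\rangle$ on $\mathbf{C}^4$ with the standard inner product:

```python
# The forward shift T(a_1,...,a_n) = (0, a_1, ..., a_{n-1}) on C^n,
# whose claimed adjoint is the backward shift T*(z) = (z_2, ..., z_n, 0).
def inner(u, v):  # <u,v> = sum u_i * conj(v_i)
    return sum(a * b.conjugate() for a, b in zip(u, v))

def T(v): return [0] + v[:-1]       # forward shift
def T_star(v): return v[1:] + [0]   # claimed adjoint: backward shift

a = [1 + 2j, -1j, 3, 0.5 - 1j]
z = [2 - 1j, 4, 1j, -2]
assert abs(inner(T(a), z) - inner(a, T_star(z))) < 1e-12  # <Ta,z> = <a,T*z>
```

The check passes for any pair of vectors, as the algebra above predicts.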

2. By theorem 9.1.3 (7A), we have $\operatorname{range}(T^* - \overline{\lambda} I) = (\operatorname{null}(T - \lambda I))^\perp$, so
\begin{align*}
\dim \operatorname{null}(T^* - \overline{\lambda} I) &= \dim V - \dim \operatorname{range}(T^* - \overline{\lambda} I) \\
&= \dim V - \dim (\operatorname{null}(T - \lambda I))^\perp \\
&= \dim \operatorname{null}(T - \lambda I).
\end{align*}
Thus $\lambda$ is an eigenvalue of $T$ iff $\dim \operatorname{null}(T - \lambda I) \ge 1$ iff $\dim \operatorname{null}(T^* - \overline{\lambda} I) \ge 1$ iff $\overline{\lambda}$ is an eigenvalue of $T^*$.

3. $U$ is invariant under $T$ iff $Tu \in U$ for every $u \in U$ iff $\langle u, T^*v\rangle = \langle Tu, v\rangle = 0$ for every $u \in U$ and every $v \in U^\perp$ iff $T^*v \in U^\perp$ for every $v \in U^\perp$ iff $U^\perp$ is invariant under $T^*$.

4. $T$ is injective iff $\operatorname{null} T = \{0\}$ iff $(\operatorname{range} T^*)^\perp = \{0\}$ (by theorem 9.1.3) iff $\operatorname{range} T^* = \{0\}^\perp = V$ (by theorem 8.5.2, 6C) iff $T^*$ is surjective. Similarly, $T$ is surjective iff $T^*$ is injective.

5. We have $\dim \operatorname{range} T = \dim V - \dim \operatorname{null} T = \dim (\operatorname{null} T)^\perp$, and by theorem 9.1.3, $\dim (\operatorname{null} T)^\perp = \dim \operatorname{range} T^*$, so $\dim \operatorname{range} T = \dim \operatorname{range} T^*$. Since $\dim \operatorname{range} T = \dim (\operatorname{null} T^*)^\perp = \dim W - \dim \operatorname{null} T^*$ and $\dim \operatorname{range} T^* = \dim (\operatorname{null} T)^\perp = \dim V - \dim \operatorname{null} T$, we obtain the desired equality.

6. (a) Write out an orthonormal basis of $\mathcal{P}_2(\mathbf{R})$ by applying the Gram–Schmidt procedure to $1, x, x^2$, then write out $\mathcal{M}(T)$ with respect to this orthonormal basis.

(b) This is not a contradiction, because the matrix of $T$ there is formed with respect to a non-orthonormal basis of $\mathcal{P}_2(\mathbf{R})$.


7. Since $S, T$ are self-adjoint, for any $v, w \in V$ we have $\langle STv, w\rangle = \langle Tv, Sw\rangle = \langle v, TSw\rangle$. Thus $ST$ is self-adjoint iff $\langle STv, w\rangle = \langle v, STw\rangle$ for all $v, w \in V$ iff $\langle v, TSw\rangle = \langle v, STw\rangle$ for all $v, w \in V$ iff $ST = TS$.

8. The zero operator $0$ is self-adjoint on $V$. For any self-adjoint operators $S, T$ on $V$ we have $S + T = S^* + T^* = (S + T)^*$, so $S + T$ is self-adjoint. For any $\lambda \in \mathbf{R}$ we have $\lambda T = \lambda T^* = (\lambda T)^*$, so $\lambda T$ is self-adjoint. Thus the set of self-adjoint operators on $V$ is a subspace of $\mathcal{L}(V)$.

9. For any self-adjoint operator $T \ne 0$ on $V$, with $\lambda = 1 - i$ we have $\lambda T = (1-i)T = (1-i)T^* \ne (1+i)T^* = (\lambda T)^*$, so $\lambda T$ is not self-adjoint.

10. We show that the set of normal operators on $V$ is not closed under addition. It suffices to prove this when $\dim V = 2$. Note that $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is the matrix of a normal operator iff $b^2 = c^2$ and $ac + bd = ab + cd$. Hence, taking two normal operators $S, T$ with
\[ \mathcal{M}(S) = \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix}, \qquad \mathcal{M}(T) = \begin{pmatrix} 2 & -3 \\ 3 & 2 \end{pmatrix}, \qquad \mathcal{M}(S+T) = \begin{pmatrix} 3 & -1 \\ 5 & 5 \end{pmatrix}, \]
we see that $S + T$ is not a normal operator. Thus, for $\dim V = 2$, the set of normal operators on $V$ is not a subspace of $\mathcal{L}(V)$. For $\dim V > 2$, we can choose $\mathcal{M}(X) = \begin{pmatrix} \mathcal{M}(S) & 0 \\ 0 & 0 \end{pmatrix}$ (i.e. the 2-by-2 block at the top left of $\mathcal{M}(X)$ is exactly $\mathcal{M}(S)$ while the remaining entries of $\mathcal{M}(X)$ are $0$) and $\mathcal{M}(Y) = \begin{pmatrix} \mathcal{M}(T) & 0 \\ 0 & 0 \end{pmatrix}$.

11. If $P$ is self-adjoint then $P = P^*$, so by theorem 9.1.3 (7A), $\operatorname{null} P = \operatorname{null} P^* = (\operatorname{range} P)^\perp$, i.e. every vector in $\operatorname{null} P$ is orthogonal to every vector in $\operatorname{range} P$, which brings us back to exercise 7 (6C); that proves the existence of $U$.

Conversely, if such a subspace $U$ exists, then since $P = P_U$ we have $\operatorname{range} P = \operatorname{range} P_U = U$, and hence $\operatorname{null} P = \operatorname{null} P_U = U^\perp$. Thus $\operatorname{range} P = (\operatorname{null} P)^\perp = \operatorname{range} P^*$ and, similarly, $\operatorname{null} P = \operatorname{null} P^*$. On the other hand, since $P = P^2$ we get $P^* = (P^2)^* = (P^*)^2$, so by exercise 4 (5B) every $v$ can be represented uniquely as $v = P^*v + w$ where $w \in \operatorname{null} P^*$. We also have $v = Pv + w'$ with $w' \in \operatorname{null} P$, since $P = P^2$; because $\operatorname{null} P^* = \operatorname{null} P$ and $\operatorname{range} P = \operatorname{range} P^*$, uniqueness gives $Pv = P^*v$ for every $v \in V$, so $P = P^*$, i.e. $P$ is self-adjoint.

12. Let $v_1, v_2$ be eigenvectors of $T$ corresponding to the eigenvalues $3$ and $4$. Then for $e_1 = v_1/\|v_1\|$ and $e_2 = v_2/\|v_2\|$ we have $Te_1 = 3e_1$ and $Te_2 = 4e_2$. By theorem 9.1.7 (7A), $\langle e_1, e_2\rangle = 0$, so $\langle Te_1, Te_2\rangle = 0$. By the Pythagorean theorem, $5^2 = 3^2 + 4^2 = \|Te_1\|^2 + \|Te_2\|^2 = \|T(e_1 + e_2)\|^2$, so $\|T(e_1 + e_2)\| = 5$; and note that $\|e_1 + e_2\|^2 = \|e_1\|^2 + \|e_2\|^2 = 2$, so $\|e_1 + e_2\| = \sqrt{2}$.

13. $\mathcal{M}(T) = \begin{pmatrix} 2 & -3 & 0 & 0 \\ 3 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$.


14. Since $T$ is normal and $v, w$ are eigenvectors corresponding to distinct eigenvalues, theorem 9.1.7 (7A) gives $\langle v, w\rangle = 0$, and hence $\langle Tv, Tw\rangle = 0$ as well. Therefore $2^2(3^2 + 4^2) = \|Tv\|^2 + \|Tw\|^2 = \|T(v + w)\|^2$, so $\|T(v + w)\| = 10$.

15. First, we find the adjoint of $T$. For any $w \in V$,
\[ \langle v, T^*w\rangle = \langle Tv, w\rangle = \langle v, u\rangle\langle x, w\rangle = \langle v, \langle w, x\rangle u\rangle. \]
Thus $T^*w = \langle w, x\rangle u$.

(a) With $\mathbf{F} = \mathbf{R}$: if $T$ is self-adjoint, then $Tv = T^*v$ for every $v \in V$, so $\langle v, u\rangle x = \langle v, x\rangle u$, hence $u, x$ is linearly dependent. Conversely, if $u, x$ is linearly dependent with $u = \lambda x$, then for any $v \in V$ we have $\langle v, u\rangle x = \langle v, \lambda x\rangle u/\lambda = \langle v, x\rangle u$, so $T = T^*$, i.e. $T$ is self-adjoint.

(b) We have $TT^*v = \langle v, x\rangle Tu = \langle v, x\rangle \|u\|^2 x$ and $T^*Tv = \langle v, u\rangle T^*x = \langle v, u\rangle \|x\|^2 u$. If $T$ is normal, i.e. $TT^* = T^*T$, then $u, x$ is linearly dependent. Conversely, if $u = \lambda x$, then for every $v \in V$,
\begin{align*}
\langle v, x\rangle \|u\|^2 x &= \langle v, \lambda^{-1}u\rangle\, |\lambda|^2 \|x\|^2 x \\
&= \langle v, (\overline{\lambda}/|\lambda|^2)\, u\rangle\, |\lambda|^2 \|x\|^2 x \\
&= (\lambda/|\lambda|^2)\, \langle v, u\rangle\, |\lambda|^2 \|x\|^2 x \\
&= \langle v, u\rangle \|x\|^2 u.
\end{align*}
It follows that $TT^*v = T^*Tv$ for every $v \in V$, so $T$ is normal.

16. First, we show that $\operatorname{null} T = \operatorname{null} T^*$. Indeed, $v \in \operatorname{null} T$ iff $0 = \|Tv\|^2 = \|T^*v\|^2$ iff $T^*v = 0$, i.e. $v \in \operatorname{null} T^*$. Thus $\operatorname{null} T = \operatorname{null} T^*$, so $(\operatorname{null} T)^\perp = (\operatorname{null} T^*)^\perp$, and hence $\operatorname{range} T^* = \operatorname{range} T$ by theorem 9.1.3 (7A).

17. We show that $\operatorname{null} T^k = \operatorname{null} T^{k+1}$ for any positive integer $k$. It is obvious that $v \in \operatorname{null} T^k$ implies $v \in \operatorname{null} T^{k+1}$. If $v \in \operatorname{null} T^{k+1}$, then $T^k v \in \operatorname{null} T$, and since $\operatorname{null} T = \operatorname{null} T^*$ by exercise 16 (7A), $T^k v \in \operatorname{null} T^*$. On the other hand, we also have $T^k v \in \operatorname{range} T$ and $\operatorname{null} T^* = (\operatorname{range} T)^\perp$, so $T^k v$ is orthogonal to itself; thus $\|T^k v\|^2 = 0$, i.e. $T^k v = 0$ and $v \in \operatorname{null} T^k$. Hence $\operatorname{null} T^k = \operatorname{null} T^{k+1}$ for every positive integer $k$.

Note that if $T$ is normal, then $T^k$ is also normal for every positive integer $k$. Hence $\operatorname{null} T^k = \operatorname{null} (T^k)^* = (\operatorname{range} T^k)^\perp$ and, similarly, $\operatorname{null} T^{k+1} = (\operatorname{range} T^{k+1})^\perp$. Thus $\operatorname{range} T^k = \operatorname{range} T^{k+1}$ for every positive integer $k \ge 1$.

18. Consider a 2-dimensional inner product space $V$ over $\mathbf{C}$ with orthonormal basis $e_1, e_2$, and let
\[ \mathcal{M}(T) = \begin{pmatrix} 1 & 1 \\ i & 0 \end{pmatrix}, \qquad \text{so} \qquad \mathcal{M}(T^*) = \begin{pmatrix} 1 & -i \\ 1 & 0 \end{pmatrix}. \]
Then $\|Te_1\| = \sqrt{1^2 + |i|^2} = \sqrt{2} = \|T^*e_1\|$ and $\|Te_2\| = 1 = \|T^*e_2\|$. However, $T$ is not normal: the $(1,2)$ entry of $\mathcal{M}(TT^*)$ is $1 \cdot \overline{i} + 1 \cdot \overline{0} = -i$, while the $(1,2)$ entry of $\mathcal{M}(T^*T)$ is $\overline{1} \cdot 1 + \overline{i} \cdot 0 = 1$.


19. Since $T(1,1,1) = 2(1,1,1)$, $(1,1,1)$ is an eigenvector of $T$ corresponding to the eigenvalue $2$. If $(z_1, z_2, z_3) \in \operatorname{null} T$, then $(z_1, z_2, z_3)$ is an eigenvector of $T$ corresponding to the eigenvalue $0$. Hence, by theorem 9.1.7 (7A), $\langle (z_1, z_2, z_3), (1,1,1)\rangle = z_1 + z_2 + z_3 = 0$.

20. Note that both $\Phi_V \circ T^*$ and $T' \circ \Phi_W$ are linear maps from $W$ to $V'$. Hence it suffices to prove that $(\Phi_V \circ T^*)(w) = (T' \circ \Phi_W)(w)$ for every $w \in W$, i.e. $\Phi_V(T^*w) = T'(\Phi_W w)$. Since these two are linear functionals on $V$, it again suffices to prove that $(\Phi_V(T^*w))v = (T'(\Phi_W w))v$ for every $v \in V$. Indeed, by the definition of $\Phi_V$ we have $(\Phi_V(T^*w))v = \langle v, T^*w\rangle$. On the other hand, $(T'(\Phi_W w))v = ((\Phi_W w) \circ T)v = (\Phi_W w)(Tv) = \langle Tv, w\rangle$. Since $\langle v, T^*w\rangle = \langle Tv, w\rangle$, we obtain the desired equality.

21. From exercise 4 (6B), we find that the list
\[ \frac{1}{\sqrt{2\pi}},\; \frac{\cos x}{\sqrt{\pi}},\; \ldots,\; \frac{\cos nx}{\sqrt{\pi}},\; \frac{\sin x}{\sqrt{\pi}},\; \ldots,\; \frac{\sin nx}{\sqrt{\pi}} \]
is an orthonormal basis of $V$.

(a) Note that we are talking about a vector space over $\mathbf{R}$, so $\mathcal{M}(D^*)$ with respect to the mentioned basis is simply the transpose of $\mathcal{M}(D)$. Hence, to show $D^* = -D$, it suffices to prove $\mathcal{M}(D)_{j,i} + \mathcal{M}(D)_{i,j} = 0$. Note that the entry in row $i$, column $j$ of $\mathcal{M}(D)$ is $\langle De_j, e_i\rangle$, where $e_k$ is the $k$-th vector in the orthonormal basis above. It suffices to show $\langle De_j, e_i\rangle + \langle De_i, e_j\rangle = 0$ for every $1 \le i, j \le 2n+1$. From the integrals in exercise 4 (6B), we know that if $|j - i| \ne n$ then $\langle De_i, e_j\rangle = 0$. Hence we only need to consider $|i - j| = n$, i.e. $e_i = \cos(mx)/\sqrt{\pi}$ and $e_j = \sin(mx)/\sqrt{\pi}$ for some $1 \le m \le n$. We have $\langle De_i, e_j\rangle = -m\|e_j\|^2 = -m$ and $\langle De_j, e_i\rangle = m\|e_i\|^2 = m$, so $\langle De_i, e_j\rangle + \langle De_j, e_i\rangle = 0$. Thus $D^* = -D$, as desired. Note that $\mathcal{M}(D) \ne 0$, so $D^* \ne D$; hence $D$ is not self-adjoint, but $D$ is normal.

(b) It suffices to prove $T = T^*$, i.e. $\langle Te_j, e_i\rangle = \langle Te_i, e_j\rangle$ for every $1 \le i, j \le 2n+1$. From the integrals in exercise 4 (6B), when $i \ne j$ we have $\langle Te_j, e_i\rangle = \langle Te_i, e_j\rangle = 0$. Thus $T = T^*$, i.e. $T$ is self-adjoint.

9.3. 7B: The Spectral Theorem

Theorem 9.3.1 (7.24, Complex Spectral Theorem) Suppose $\mathbf{F} = \mathbf{C}$ and $T \in \mathcal{L}(V)$. Then the following are equivalent:

(a) T is normal.

(b) V has an orthonormal basis consisting of eigenvectors of T .

(c) T has a diagonal matrix with respect to some orthonormal basis of V .


Theorem 9.3.2 (7.29, Real Spectral Theorem) Suppose $\mathbf{F} = \mathbf{R}$ and $T \in \mathcal{L}(V)$. Then the following are equivalent:

(a) T is self-adjoint.

(b) V has an orthonormal basis consisting of eigenvectors of T .

(c) T has a diagonal matrix with respect to some orthonormal basis of V .

How to find an orthonormal basis consisting of eigenvectors of a normal/self-adjoint operator $T$:

• Solve $Tv = \lambda v$ to find all eigenvalues.

• Find a basis of $E(\lambda, T)$ for each eigenvalue $\lambda$ of $T$.

• Using the Gram–Schmidt procedure 8.3.2 (6B), find an orthonormal basis of $E(\lambda, T)$ for each eigenvalue $\lambda$ of $T$.

• Combine all the bases of $E(\lambda, T)$; given that $T$ is self-adjoint/normal, we obtain an orthonormal basis consisting of eigenvectors of $T$. This is true by exercises 4, 5 (7B).
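As a numerical illustration of this recipe (my own example, not from the book), take the self-adjoint operator on $\mathbf{R}^2$ with matrix $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$; the eigenvalues come from $\det(A - tI) = 0$ and the unit eigenvectors turn out to be orthonormal, as the Real Spectral Theorem promises:

```python
from math import sqrt

a, b, d = 2.0, 1.0, 2.0            # A = [[a, b], [b, d]], real symmetric
tr, det = a + d, a * d - b * b
disc = sqrt(tr * tr - 4 * det)     # discriminant of det(A - t I) = 0
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2   # eigenvalues 1 and 3

e1 = (1 / sqrt(2), -1 / sqrt(2))   # unit eigenvector for lam1 = 1
e2 = (1 / sqrt(2), 1 / sqrt(2))    # unit eigenvector for lam2 = 3

def apply_A(v):
    return (a * v[0] + b * v[1], b * v[0] + d * v[1])

for lam, e in ((lam1, e1), (lam2, e2)):
    Ae = apply_A(e)                # check A e = lam e
    assert all(abs(x - lam * y) < 1e-12 for x, y in zip(Ae, e))
# eigenvectors for distinct eigenvalues are orthogonal (and here unit)
assert abs(e1[0] * e2[0] + e1[1] * e2[1]) < 1e-12
```

Here each eigenspace is 1-dimensional, so the Gram–Schmidt step reduces to normalization.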

9.4. Exercises 7B

1. Pick $T \in \mathcal{L}(\mathbf{R}^3)$ with $T(1,1,0) = 2(1,1,0)$, $T(0,1,0) = 3(0,1,0)$ and $T(0,0,1) = 4(0,0,1)$; then with respect to the standard basis (which is an orthonormal basis with respect to the usual inner product) we have
\[ \mathcal{M}(T) = \begin{pmatrix} 2 & 0 & 0 \\ -1 & 3 & 0 \\ 0 & 0 & 4 \end{pmatrix}, \qquad \mathcal{M}(T^*) = \begin{pmatrix} 2 & -1 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 4 \end{pmatrix}. \]
Thus $T$ is not self-adjoint.

2. Since $T$ is self-adjoint, by the Real/Complex Spectral Theorem $T$ has a diagonal matrix, so $V = E(2, T) \oplus E(3, T)$. So every $v$ can be represented as $u + w$ where $u \in E(2, T)$, $w \in E(3, T)$. Hence $(T^2 - 5T + 6I)v = (T - 3I)(T - 2I)(u + w) = (T - 3I)[(T - 2I)u + (T - 2I)w] = (T - 3I)(T - 2I)w = (T - 2I)[(T - 3I)w] = 0$. Thus $T^2 - 5T + 6I = 0$.

3. Pick $T \in \mathcal{L}(\mathbf{C}^3)$ with $T(1,0,0) = 3(1,0,0)$, $T(0,1,0) = 2(0,1,0)$ and $T(0,0,1) = 2(1,1,1)$. Then $(T^2 - 5T + 6I)(0,0,1) = (0, -2, 0) \ne 0$, so $T^2 - 5T + 6I \ne 0$, while $2$ and $3$ are the only eigenvalues of $T$.

4. If $T$ is normal, then by theorem 9.1.7 (7A) every pair of eigenvectors corresponding to distinct eigenvalues of $T$ is orthogonal. By the Complex Spectral Theorem 9.3.1, $T$ has a diagonal matrix, so from theorem 7.5.2 (5C), $V = E(\lambda_1, T) \oplus \cdots \oplus E(\lambda_m, T)$.

Conversely, by theorem 8.3.3 (6B), each finite-dimensional vector space $E(\lambda_i, T)$ has an orthonormal basis. Combining all the orthonormal bases of the spaces $E(\lambda_i, T)$ for $1 \le i \le m$, we obtain a new orthonormal list (since every vector in $E(\lambda_i, T)$ is orthogonal to every vector in $E(\lambda_j, T)$ for $j \ne i$). This orthonormal basis has length $\dim E(\lambda_1, T) +$


$\cdots + \dim E(\lambda_m, T)$, which equals $\dim V$ because $V = E(\lambda_1, T) \oplus \cdots \oplus E(\lambda_m, T)$. Thus we obtain an orthonormal basis of $V$ consisting of eigenvectors of $T$, so $T$ is normal.

5. Completely the same as the previous exercise 4, except this time we use the Real Spectral Theorem instead of the Complex Spectral Theorem; note that any self-adjoint operator is normal.

6. One direction is already proven by theorem 9.1.5 (7A). For the other, if every eigenvalue of a normal operator $T$ is real, then by the Complex Spectral Theorem 9.3.1 (7B), $T$ has a diagonal matrix $\mathcal{M}(T)$ (with respect to some orthonormal basis) consisting of real numbers. With this and theorem 9.1.4 (7A), we find $\mathcal{M}(T) = \mathcal{M}(T^*)$, so $T$ is self-adjoint.

7. Since $T$ is normal, by exercises 16, 17 (7A) we have $\operatorname{range} T^* = \operatorname{range} T = \operatorname{range} T^7$, which together with theorem 9.1.3 (7A) and theorem 8.5.1 (6C) gives $V = \operatorname{null} T^7 \oplus \operatorname{range} T^7$. Thus each $v \in V$ can be written as $v = T^7u + w$ where $w \in \operatorname{null} T^7$; moreover $\operatorname{null} T^7 = \operatorname{null} T$ by exercise 17 (7A), so $Tw = 0$. Thus $T^2v = T^2(T^7u + w) = T^9u = T^8u = T(T^7u + w) = Tv$. We conclude $T^2 = T$. If $v \ne 0$ is an eigenvector of $T$ corresponding to an eigenvalue $\lambda \in \mathbf{C}$, then $T^2v = \lambda^2 v = \lambda v = Tv$, so $\lambda(\lambda - 1)v = 0$ and $\lambda \in \{0, 1\}$. Thus all eigenvalues of the normal operator $T$ are real, so by exercise 6 (7B), $T$ is self-adjoint.

8. Unsolved.

9. Every normal operator $T$ has a diagonal matrix with respect to some orthonormal basis, by the Complex Spectral Theorem 9.3.1 (7B). Pick an operator $S \in \mathcal{L}(V)$ so that $S$ has a diagonal matrix $\mathcal{M}(S)$ (with respect to the same basis) with $\mathcal{M}(S)_{i,i} = \sqrt{\mathcal{M}(T)_{i,i}}$ for every $1 \le i \le n$ (this is possible because we are considering a complex vector space). Hence $\mathcal{M}(S^2) = (\mathcal{M}(S))^2 = \mathcal{M}(T)$, so $S^2 = T$.

10. Theorem 7.3.5 (look at the proof) makes the construction easier. It suffices to choose an operator $T$ that has no eigenvalue. Pick $V = \mathbf{R}^2$ and $T(x, y) = (-y, x)$; if $T(x, y) = \lambda(x, y)$ we find $-y = \lambda x$ and $x = \lambda y$, so $y(\lambda^2 + 1) = 0$, which forces $y = 0$ and then $x = 0$. Thus $T$ has no eigenvalues. Hence, for any $v \in \mathbf{R}^2$, $v \ne 0$, the list $T^2v, Tv, v$ is linearly dependent, so there exist real numbers $b, c$ with $b^2 < 4c$ and $T^2v + bTv + cv = 0$. The condition $b^2 < 4c$ is guaranteed because $T$ has no eigenvalue; otherwise we could factorise $T^2 + bT + cI$ and obtain an eigenvalue of $T$.

11. If $T$ is self-adjoint, then $T$ is also normal, so by the Real/Complex Spectral Theorem, $T$ has a diagonal matrix $\mathcal{M}(T)$ with respect to some orthonormal basis of $V$. Pick $S \in \mathcal{L}(V)$ so that $S$ also has a diagonal matrix (with respect to the same basis) with $\mathcal{M}(S)_{i,i} = \sqrt[3]{\mathcal{M}(T)_{i,i}}$ for every $i$; the cube root is real since the diagonal entries, the eigenvalues of $T$, are real. We conclude $S^3 = T$ from here.

12. Since $T$ is self-adjoint, by the Complex/Real Spectral Theorem $V$ has an orthonormal basis consisting of eigenvectors of $T$, i.e. each $v \in V$ with $\|v\| = 1$ can be represented as $v = \sum_{i=1}^m a_i u_i$, where $u_i \in E(\lambda_i, T)$, $\langle u_i, u_j\rangle = 0$ for $1 \le i < j \le m$, and $a_i \in \mathbf{F}$ with $\sum_{i=1}^m |a_i|^2 = 1$. Hence
\[ \|Tv - \lambda v\|^2 = \Bigl\| \sum_{i=1}^m (\lambda_i - \lambda)a_i u_i \Bigr\|^2 = \sum_{i=1}^m |(\lambda_i - \lambda)a_i|^2. \]


If $|\lambda_i - \lambda| \ge \epsilon$ for all $1 \le i \le m$, then $\|Tv - \lambda v\|^2 \ge \epsilon^2 \sum_{i=1}^m |a_i|^2 = \epsilon^2$, so $\|Tv - \lambda v\| \ge \epsilon$, a contradiction. Thus there must exist $i$ with $|\lambda_i - \lambda| < \epsilon$.

13. By exercise 3 (7A), if $U$ is invariant under $T$ then $U^\perp$ is invariant under $T^*$. Hence, if $T$ is normal then $T^*|_{U^\perp} \in \mathcal{L}(U^\perp)$ is normal.

We prove the Complex Spectral Theorem by induction on $\dim V$. In a complex inner product space $V$ there always exists an eigenvector $u \ne 0$, which we may take with $\|u\| = 1$, corresponding to an eigenvalue $\lambda$ of $T$. Let $U = \operatorname{span}(u)$; since $\dim U^\perp < \dim V$, there exists an orthonormal basis $e_1, \ldots, e_{n-1}$ of $U^\perp$ consisting of eigenvectors of $T^*|_{U^\perp}$. Since $T$ is normal, by exercise 2 (7A), $e_1, \ldots, e_{n-1}$ is also an orthonormal list of eigenvectors of $T$. Thus $V$ has an orthonormal basis $u, e_1, \ldots, e_{n-1}$ consisting of eigenvectors of $T$.

14. One direction is true by the Real Spectral Theorem. Conversely, if $U$ has a basis $u_1, \ldots, u_n$ consisting of eigenvectors of $T$, consider an inner product on $U$ for which $\langle u_i, u_j\rangle = 0$ for all $1 \le i < j \le n$ and $\|u_i\| = 1$ for all $i$; then $u_1, \ldots, u_n$ is an orthonormal basis of $U$ consisting of eigenvectors of $T$, so this inner product makes $T$ into a self-adjoint operator, by the Real Spectral Theorem.

15. It suffices to find $a \in \mathbf{R}$ so that
\[ \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & a \end{pmatrix} \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & a \end{pmatrix} = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & a \end{pmatrix} \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & a \end{pmatrix}. \]

9.5. 7C: Positive Operators and Isometries

Definition 9.5.1. An operator $T \in \mathcal{L}(V)$ is called positive if $T$ is self-adjoint and $\langle Tv, v\rangle \ge 0$ for all $v \in V$.

Another terminology for the above definition is positive semidefinite. A positive semidefinite operator that satisfies $\langle Tv, v\rangle > 0$ for all non-zero $v$ is called positive definite.

Theorem 9.5.2 (7.35, Positive Operators) Let $T \in \mathcal{L}(V)$. Then the following are equivalent:

(a) $T$ is positive;

(b) $T$ is self-adjoint and all the eigenvalues of $T$ are nonnegative;

(c) $T$ has a positive square root;

(d) $T$ has a self-adjoint square root;

(e) there exists an operator $R \in \mathcal{L}(V)$ such that $T = R^*R$.


Theorem 9.5.3 (7.42, Isometries) Suppose $S \in \mathcal{L}(V)$. Then the following are equivalent:

(a) $S$ is an isometry;

(b) $\langle Su, Sv\rangle = \langle u, v\rangle$ for all $u, v \in V$;

(c) $Se_1, \ldots, Se_n$ is orthonormal for every orthonormal list of vectors $e_1, \ldots, e_n$ in $V$;

(d) there exists an orthonormal basis $e_1, \ldots, e_n$ of $V$ such that $Se_1, \ldots, Se_n$ is orthonormal;

(e) $S^*S = I$;

(f) $SS^* = I$;

(g) $S^*$ is an isometry;

(h) $S$ is invertible and $S^{-1} = S^*$.

Theorem 9.5.4 (7.43) Suppose $V$ is a complex inner product space and $S \in \mathcal{L}(V)$. Then the following are equivalent:

(a) $S$ is an isometry.

(b) There is an orthonormal basis of $V$ consisting of eigenvectors of $S$ whose corresponding eigenvalues all have absolute value $1$.
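Condition (e) of theorem 9.5.3 can be illustrated numerically with a rotation of $\mathbf{R}^2$, a standard example of an isometry (the angle below is an arbitrary choice of mine):

```python
from math import cos, sin, pi

t = pi / 5
S = [[cos(t), -sin(t)], [sin(t), cos(t)]]        # rotation by angle t
St = [[S[0][0], S[1][0]], [S[0][1], S[1][1]]]    # S* = transpose over R

# Compute S* S; for an isometry this should be the identity matrix.
StS = [[sum(St[i][k] * S[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
assert all(abs(StS[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```

By (h) of the theorem, the same computation also exhibits $S^{-1} = S^*$.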

9.6. Exercises 7C

1. Let $e_1, e_2$ be an orthonormal basis of $V$. Let $Te_1 = xe_1 + ye_2$ and $Te_2 = ye_1 + ze_2$ with $x, y, z \in \mathbf{R}$ chosen so that $x + 2y + z < 0$ and $x, z \ge 0$. Then $T$ is self-adjoint, and $\langle Te_1, e_1\rangle = x \ge 0$, $\langle Te_2, e_2\rangle = z \ge 0$. On the other hand,
\[ \langle T(e_1 + e_2), e_1 + e_2\rangle = \langle (x+y)e_1 + (y+z)e_2,\ e_1 + e_2\rangle = x + 2y + z < 0. \]
Thus $T$ is not positive.

2. Since $T$ is positive and $T(v - w) = w - v$, we have
\[ 0 \le \langle T(v - w), v - w\rangle = \langle w - v, v - w\rangle = -\|v - w\|^2. \]
Thus we must have $\|v - w\|^2 = 0$, i.e. $v = w$.

3. For every $u \in U$ we have $\langle T|_U u, u\rangle = \langle Tu, u\rangle \ge 0$, so $T|_U$ is a positive operator on $U$.

4. For every $v \in V$ we have $\langle T^*Tv, v\rangle = \langle Tv, Tv\rangle \ge 0$ and $(T^*T)^* = T^*T$, so $T^*T$ is a positive operator on $V$. Similarly, $TT^*$ is a positive operator on $V$.

5. If $T, S$ are positive operators on $V$, then for any $v \in V$ we have $\langle (T+S)v, v\rangle = \langle Tv, v\rangle + \langle Sv, v\rangle \ge 0$ and $(T+S)^* = T^* + S^* = T + S$, so $T + S$ is a positive operator on $V$.


6. If $T$ is a positive operator on $V$, then by theorem 9.5.2 (7C), $T$ has a self-adjoint square root $R \in \mathcal{L}(V)$. Hence $R^k = (R^*)^k = (R^k)^*$, so $R^k$ is self-adjoint for every positive integer $k$. Since $T^k = (R^2)^k = (R^k)^2$, again by theorem 9.5.2 (7C), $T^k$ is a positive operator on $V$.

7. If $\langle Tv, v\rangle > 0$ for every $v \in V$, $v \ne 0$, then $T$ is invertible: otherwise there exists $v \in V$, $v \ne 0$, with $Tv = 0$, which leads to $\langle Tv, v\rangle = 0$, a contradiction.

Conversely, suppose $T$ is invertible. Since $T$ is positive, by theorem 9.5.2 (7C) there exists an operator $R \in \mathcal{L}(V)$ with $R^*R = T$. Hence $\langle Tv, v\rangle = \|Rv\|^2$. Since $T$ is invertible, $Rv \ne 0$ for all $v \ne 0$ (otherwise $Tv = R^*Rv = 0$, a contradiction). Thus $\|Rv\|^2 > 0$, i.e. $\langle Tv, v\rangle > 0$ for every $v \in V$, $v \ne 0$.

8. It suffices to check the definiteness condition of an inner product. We have $\langle v, v\rangle_T = 0$ iff $\langle Tv, v\rangle = 0$. Applying the previous exercise 7 (7C), the condition that $\langle Tv, v\rangle = 0$ only for $v = 0$ is equivalent to $T$ being invertible.

9. Let $S$ be a self-adjoint square root of the identity operator $I$ on $\mathbf{F}^2$. By the Real/Complex Spectral Theorem, $\mathbf{F}^2$ has an orthonormal basis $e_1, e_2$ consisting of eigenvectors of $S$ corresponding to eigenvalues $\lambda_1, \lambda_2$. Then $S^2e_1 = \lambda_1^2e_1 = Ie_1 = e_1$, so $\lambda_1^2 = 1$; similarly $\lambda_2^2 = 1$, hence $\lambda_1, \lambda_2 \in \{1, -1\}$. The orthonormal basis, however, can be arbitrary: for every $\theta \in \mathbf{R}$, the operator with matrix $\begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}$ (with respect to the standard basis) is self-adjoint and squares to $I$. Hence the identity operator on $\mathbf{F}^2$ has infinitely many self-adjoint square roots.

10. If (a) holds, i.e. $S$ is an isometry, then by theorem 9.5.3 (7C), $S^*$ is also an isometry, which leads to (b). If any of (b), (c) or (d) holds, we deduce that $S^*$ is an isometry, which leads back to the other conditions, again by theorem 9.5.3 (7C).

11. Similar to exercise 12 (5C). Since $T_1$ is normal, by the Complex Spectral Theorem 9.3.1 (7B) there exists an orthonormal basis $v_1, v_2, v_3$ consisting of eigenvectors of $T_1$ corresponding to the eigenvalues $2, 5, 7$. Similarly, there exists an orthonormal basis $w_1, w_2, w_3$ consisting of eigenvectors of $T_2$ corresponding to $2, 5, 7$. Let $S \in \mathcal{L}(\mathbf{F}^3)$ with $Sv_1 = w_1$, $Sv_2 = w_2$ and $Sv_3 = w_3$; then $S$ is invertible. Similarly to exercise 15 (5C), we can show that $T_1 = S^{-1}T_2S$. Now it suffices to show that $S$ is an isometry. Indeed, for any $v \in V$ with $v = \sum_{i=1}^3 a_iv_i$,
\[ \|Sv\|^2 = \Bigl\| \sum_{i=1}^3 a_iw_i \Bigr\|^2 = \sum_{i=1}^3 |a_i|^2 = \|v\|^2. \]
Thus $S$ is an isometry, so by theorem 9.5.3 (7C), $S^{-1} = S^*$, hence $T_1 = S^*T_2S$.

12. It is essentially exercise 13 (5C).

13. It is false. Pick $S \in \mathcal{L}(V)$ with $Se_j = e_j$ for all $j \ne 2$ and $Se_2 = x_1e_1 + x_2e_2$, where $x_1, x_2 \in \mathbf{R}$ with $|x_1|^2 + |x_2|^2 = 1$ (to guarantee $\|Se_2\| = 1$) and $x_1 \ne 0$. Then
\[ \|S(e_1 + e_2)\|^2 = \|(x_1 + 1)e_1 + x_2e_2\|^2 = |x_1 + 1|^2 + |x_2|^2 = 2 + 2x_1 \ne 2 = \|e_1 + e_2\|^2. \]
Thus $S$ is not an isometry.

exer:7C:14 14. We’ve already shown that T is self-adjoint so �T is also self-adjoint. We have

h(�T ) sin ix, sin ixi = hsin ix, sin ixi > 0, h(�T ) cos ix, cos ixi = hcos ix, cos ixi > 0.

And h(�T )ei

, ej

i = 0 for i 6= j (see exerciseexer:7A:21exer:7A:21

21 (7A) for more details). With this, we canconclude that h(�T )v, vi > 0 for all v 2 V , i.e. �T is positive.


9.7. 7D: Polar Decomposition and Singular Value Decomposition

Theorem 9.7.1 (7.45, Polar Decomposition) Suppose $T \in \mathcal{L}(V)$. Then there exists an isometry $S \in \mathcal{L}(V)$ such that $T = S\sqrt{T^*T}$.

Theorem 9.7.2 (7.51, Singular Value Decomposition) Suppose $T \in \mathcal{L}(V)$ has singular values $s_1, \ldots, s_n$. Then there exist orthonormal bases $e_1, \ldots, e_n$ and $f_1, \ldots, f_n$ of $V$ such that
\[ Tv = s_1\langle v, e_1\rangle f_1 + \cdots + s_n\langle v, e_n\rangle f_n \]
for every $v \in V$.

Theorem 9.7.3 (7.52) Suppose $T \in \mathcal{L}(V)$. Then the singular values of $T$ are the nonnegative square roots of the eigenvalues of $T^*T$, with each eigenvalue $\lambda$ repeated $\dim E(\lambda, T^*T)$ times.

How to find the positive operator $R$ if $R^2$ is known:

• Since $R^2$ is also positive, first find an orthonormal basis $e_1, \ldots, e_n$ of $V$ consisting of eigenvectors of $R^2$ (see 7B).

• If $R^2e_i = \lambda_ie_i$, then $Re_i = \sqrt{\lambda_i}\,e_i$. Hence
\[ Rv = R\Bigl( \sum_{i=1}^n \langle v, e_i\rangle e_i \Bigr) = \sum_{i=1}^n \langle v, e_i\rangle \sqrt{\lambda_i}\,e_i. \]
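A numerical sketch of this recipe (my own example, not from the book): the positive operator on $\mathbf{R}^2$ with matrix $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ has eigenvalues $1, 3$ with orthonormal eigenvectors $(1,-1)/\sqrt{2}$ and $(1,1)/\sqrt{2}$, so its positive square root is assembled as follows:

```python
from math import sqrt

# Spectral data of R^2 (eigenvalue, unit eigenvector) for [[2,1],[1,2]]
eigs = [(1.0, (1 / sqrt(2), -1 / sqrt(2))),
        (3.0, (1 / sqrt(2), 1 / sqrt(2)))]

# R = sum over eigenpairs of sqrt(lam) * (orthogonal projection onto span e)
R = [[sum(sqrt(lam) * e[i] * e[j] for lam, e in eigs) for j in range(2)]
     for i in range(2)]

# Squaring R should recover the original operator [[2,1],[1,2]]
R2 = [[sum(R[i][k] * R[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
expected = [[2.0, 1.0], [1.0, 2.0]]
assert all(abs(R2[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The sum of scaled projections is exactly the displayed formula $Rv = \sum_i \langle v, e_i\rangle\sqrt{\lambda_i}\,e_i$ written in matrix form.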

How to find the singular values of an operator $T$:

• Find $T^*$ (see 7A), then find $T^*T$, then find all eigenvalues of $T^*T$. The singular values of $T$ are the nonnegative square roots of the eigenvalues of $T^*T$, by theorem 9.7.3 (7D).

• If $T$ is self-adjoint: the singular values of $T$ equal the absolute values of the eigenvalues of $T$, by exercise 10 (7D).
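A numerical sketch of this recipe (my own example, chosen self-adjoint and diagonal so both bullets can be compared directly):

```python
from math import sqrt

# Self-adjoint operator on R^2 with matrix A; eigenvalues are 2 and -3.
A = [[2.0, 0.0], [0.0, -3.0]]
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]   # adjoint = transpose over R
AtA = [[sum(At[i][k] * A[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]

# T*T is diagonal here, so its eigenvalues are the diagonal entries 4 and 9
eigenvalues_AtA = [AtA[0][0], AtA[1][1]]
singular_values = sorted(sqrt(t) for t in eigenvalues_AtA)
print(singular_values)  # [2.0, 3.0]
```

This agrees with the self-adjoint shortcut: the singular values are $|2|$ and $|{-3}|$.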

9.8. Exercises 7D

1. By the solution of exercise 15 (7A), $T^*w = \langle w, x\rangle u$, so $T^*Tv = \langle v, u\rangle T^*x = \langle v, u\rangle\langle x, x\rangle u$. One can verify that $\sqrt{T^*T}\,v = \frac{\|x\|}{\|u\|}\langle v, u\rangle u$ is indeed true.

2. Let $T(a, b) = (5b, 0)$; then $T^*(x, y) = (0, 5x)$ and $T^*T(x, y) = (0, 25y)$. Hence $\sqrt{T^*T}(x, y) = (0, 5y)$, which has the two singular values $0, 5$, while $0$ is the only eigenvalue of $T$.


3. Applying the Polar Decomposition 9.7.1 (7D) to $T^*$, there exists an isometry $S \in \mathcal{L}(V)$ with $T^* = S\sqrt{(T^*)^*T^*} = S\sqrt{TT^*}$. Hence $T = (T^*)^* = \bigl(\sqrt{TT^*}\bigr)^*S^* = \sqrt{TT^*}\,S^*$, because $\sqrt{TT^*}$ is positive, hence self-adjoint. Note that $S$ is an isometry, so $S^*$ is also an isometry.

4. Since $s$ is a singular value of $T$, there exists $v \in V$ with $\|v\| = 1$ and $\sqrt{T^*T}\,v = sv$. Hence $\|Tv\| = \|\sqrt{T^*T}\,v\| = |s|\|v\| = s$.

5. We have
\[ \langle (a, b), T^*(x, y)\rangle = \langle T(a, b), (x, y)\rangle = \langle (-4b, a), (x, y)\rangle = -4bx + ay = \langle (a, b), (y, -4x)\rangle. \]
Hence $T^*(x, y) = (y, -4x)$, so $T^*T(x, y) = T^*(-4y, x) = (x, 16y)$. Thus the singular values of $T$ are $1$ and $4$.

6. For all $a_2, a_1, a_0, b_2, b_1, b_0 \in \mathbf{R}$ we have
\begin{align*}
\langle a_2x^2 + a_1x + a_0, T^*(b_2x^2 + b_1x + b_0)\rangle &= \langle T(a_2x^2 + a_1x + a_0), b_2x^2 + b_1x + b_0\rangle \\
&= \langle 2a_2x + a_1, b_2x^2 + b_1x + b_0\rangle \\
&= \int_{-1}^{1} (2a_2x + a_1)(b_2x^2 + b_1x + b_0)\,dx \\
&= \tfrac{2}{3}(2a_2b_1 + a_1b_2) + 2a_1b_0 \\
&= a_2 \cdot \tfrac{4}{3}b_1 + a_1\bigl(\tfrac{2}{3}b_2 + 2b_0\bigr).
\end{align*}
On the other hand, writing $T^*(b_2x^2 + b_1x + b_0) = c_2x^2 + c_1x + c_0$,
\begin{align*}
\langle a_2x^2 + a_1x + a_0, c_2x^2 + c_1x + c_0\rangle &= 2a_0c_0 + \tfrac{2}{3}(a_1c_1 + a_2c_0 + a_0c_2) + \tfrac{2}{5}a_2c_2 \\
&= a_2\bigl(\tfrac{2}{5}c_2 + \tfrac{2}{3}c_0\bigr) + a_1 \cdot \tfrac{2}{3}c_1 + a_0\bigl(2c_0 + \tfrac{2}{3}c_2\bigr).
\end{align*}
Thus we must have $2c_0 + \tfrac{2}{3}c_2 = 0$, $\tfrac{4}{3}b_1 = \tfrac{2}{5}c_2 + \tfrac{2}{3}c_0$ and $\tfrac{2}{3}b_2 + 2b_0 = \tfrac{2}{3}c_1$. The first equation gives $c_0 = -\tfrac{1}{3}c_2$, so the second becomes $\tfrac{4}{3}b_1 = \bigl(\tfrac{2}{5} - \tfrac{2}{9}\bigr)c_2 = \tfrac{8}{45}c_2$, i.e. $c_2 = \tfrac{15}{2}b_1$, $c_0 = -\tfrac{5}{2}b_1$, and $c_1 = b_2 + 3b_0$. We obtain $T^*(b_2x^2 + b_1x + b_0) = \tfrac{15}{2}b_1x^2 + (b_2 + 3b_0)x - \tfrac{5}{2}b_1$. Thus
\[ T^*T(a_2x^2 + a_1x + a_0) = T^*(2a_2x + a_1) = 15a_2x^2 + 3a_1x - 5a_2. \]
With respect to the basis $x^2, x, 1$ this operator has the matrix
\[ \begin{pmatrix} 15 & 0 & 0 \\ 0 & 3 & 0 \\ -5 & 0 & 0 \end{pmatrix}, \]
whose eigenvalues are $15$, $3$ and $0$. Thus the singular values of $T$ are $\sqrt{15}$, $\sqrt{3}$ and $0$, by theorem 9.7.3 (7D).

7. We have
\begin{align*}
\langle (z_1, z_2, z_3), T^*(x_1, x_2, x_3)\rangle &= \langle T(z_1, z_2, z_3), (x_1, x_2, x_3)\rangle \\
&= \langle (z_3, 2z_1, 3z_2), (x_1, x_2, x_3)\rangle \\
&= \langle (z_1, z_2, z_3), (2x_2, 3x_3, x_1)\rangle.
\end{align*}
Thus $T^*(x_1, x_2, x_3) = (2x_2, 3x_3, x_1)$. Hence $T^*T(z_1, z_2, z_3) = T^*(z_3, 2z_1, 3z_2) = (4z_1, 9z_2, z_3)$, so $\sqrt{T^*T}(z_1, z_2, z_3) = (2z_1, 3z_2, z_3)$. Let $S \in \mathcal{L}(V)$ with $S(2z_1, 3z_2, z_3) = (z_3, 2z_1, 3z_2)$; then $S$ is an isometry and $T = S\sqrt{T^*T}$.


8. Since $T = SR$, we have $T^* = (SR)^* = R^*S^* = RS^*$, so $T^*T = RS^*SR = R^2$, and hence $R = \sqrt{T^*T}$.

9. Since $\|Tv\| = \|\sqrt{T^*T}\,v\|$, $T$ is invertible iff $\sqrt{T^*T}$ is invertible iff $\operatorname{null}\sqrt{T^*T} = \{0\}$ iff $\operatorname{range}\sqrt{T^*T} = V$. Thus it suffices to show that $\operatorname{range}\sqrt{T^*T} = V$ iff there exists a unique isometry $S \in \mathcal{L}(V)$ with $T = S\sqrt{T^*T}$.

Indeed, if $\operatorname{range}\sqrt{T^*T} = V$, then the isometry $S$ is uniquely determined, since $S$ must satisfy $T = S\sqrt{T^*T}$ and is therefore determined on $\operatorname{range}\sqrt{T^*T} = V$. Conversely, suppose there exists a unique isometry $S \in \mathcal{L}(V)$ with $T = S\sqrt{T^*T}$ but $\operatorname{range}\sqrt{T^*T} \ne V$, i.e. $(\operatorname{range}\sqrt{T^*T})^\perp \ne \{0\}$. Looking back at the proof of the Polar Decomposition 9.7.1 (7D) in the book, for $m \ge 1$ let $e_1, \ldots, e_m$ and $f_1, \ldots, f_m$ be orthonormal bases of $(\operatorname{range}\sqrt{T^*T})^\perp$ and $(\operatorname{range} T)^\perp$, respectively. Then we have at least two ways to choose the linear map $S_2 \colon (\operatorname{range}\sqrt{T^*T})^\perp \to (\operatorname{range} T)^\perp$: one is $S_2(a_1e_1 + \cdots + a_me_m) = a_1f_1 + \cdots + a_mf_m$, another is $S_2(a_1e_1 + \cdots + a_me_m) = -a_1f_1 - \cdots - a_mf_m$. It follows that there are at least two possible isometries $S$ with $T = S\sqrt{T^*T}$, a contradiction. Thus we must have $\operatorname{range}\sqrt{T^*T} = V$, i.e. $T$ is invertible.

10. According to theorem 9.7.3 (7D), the singular values of T are the nonnegative square roots of the eigenvalues of T*T. Since T is self-adjoint, T*T = T², so the singular values of T are the nonnegative square roots of the eigenvalues of T². Thus, the singular values of T equal the absolute values of the eigenvalues of T, repeated appropriately.

11. It suffices to show that T*T and TT* have the same eigenvalues, repeated the same number of times. Since T*T is self-adjoint, according to the Spectral Theorem there exists an orthonormal basis e1, ..., en consisting of eigenvectors of T*T. Among these vectors, say f1, ..., fm are the eigenvectors of T*T corresponding to a nonzero eigenvalue λ. Note that m = dim E(λ, T*T).

On the other hand, TT*(Tfi) = T(T*Tfi) = λTfi, so Tf1, ..., Tfm are eigenvectors of TT* corresponding to the eigenvalue λ (they are nonzero since ‖Tfi‖² = ⟨fi, T*Tfi⟩ = λ ≠ 0). For any 1 ≤ i < j ≤ m we have ⟨Tfi, Tfj⟩ = ⟨fi, T*Tfj⟩ = λ⟨fi, fj⟩ = 0, so Tf1, ..., Tfm is an orthogonal list of nonzero vectors, hence linearly independent. It follows that the eigenvalue λ of TT* is repeated at least dim E(λ, T*T) times, i.e. dim E(λ, TT*) ≥ dim E(λ, T*T). The same argument with T and T* swapped gives dim E(λ, TT*) = dim E(λ, T*T) for every nonzero eigenvalue; since both multiplicity sums equal n, the multiplicities of 0 also agree. We conclude that TT* and T*T have the same eigenvalues repeated the same number of times, so T and T* have the same singular values.

12. The statement is false. According to theorem 9.7.3 (7D), the singular values of T² are the nonnegative square roots of the eigenvalues of (T²)*T² = (T*)²T², and the singular values of T are the nonnegative square roots of the eigenvalues of T*T. Hence, the singular values of T² equal the squares of the singular values of T exactly when the eigenvalues of (T*)²T² equal the squares of the eigenvalues of T*T. For that we would need (T*T)² = (T*)²T², which fails in general when T is not normal. Choose a nonnormal operator T on a 2-dimensional vector space V as a counterexample.

13. Since ‖Tv‖ = ‖√(T*T)v‖ for all v ∈ V, T is invertible iff √(T*T) is invertible iff 0 is not an eigenvalue of √(T*T) iff 0 is not a singular value of T.


14. Again, from ‖Tv‖ = ‖√(T*T)v‖ we get dim range T = dim range √(T*T), and dim range √(T*T) = n − dim E(0, √(T*T)) from the fact that √(T*T) is positive, hence diagonalizable. Note that, by the definition of singular values, dim E(0, √(T*T)) is the number of repetitions of the singular value 0 of T. Thus, dim range T = n − dim E(0, √(T*T)) equals the number of nonzero singular values of T.

15. If S has singular values s1, ..., sn then according to theorem 9.7.2 (7D) there exist orthonormal bases e1, ..., en and f1, ..., fn of V such that Sv = s1⟨v, e1⟩f1 + ⋯ + sn⟨v, en⟩fn for every v ∈ V. It follows that for every v ∈ V, ‖Sv‖² = Σ_{i=1}^{n} |si⟨v, ei⟩|². Hence, if all singular values of S equal 1 then ‖Sv‖ = ‖v‖, so S is an isometry. Conversely, if S is an isometry then si = ‖Sei‖ = 1 for all 1 ≤ i ≤ n, so all singular values of S equal 1.
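A quick numerical illustration (my addition; numpy assumed): orthogonal matrices, the isometries of Rⁿ, have all singular values equal to 1, while a non-isometry does not.

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a random isometry (orthogonal matrix) via QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
print(np.linalg.svd(Q, compute_uv=False))   # all singular values equal 1

# A non-isometry has some singular value different from 1.
A = np.diag([2.0, 1.0, 1.0, 0.5])
print(np.linalg.svd(A, compute_uv=False))   # [2., 1., 1., 0.5]
```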

16. Note that with T1 = S1T2S2 we get T1*T1 = S2*(T2*T2)S2. The idea to solve this problem is similar to exercise 11 (7C).

If T1, T2 have the same singular values then the two positive operators T1*T1 and T2*T2 have the same eigenvalues λ1, ..., λn. According to the Spectral Theorem, let e1, ..., en be an orthonormal basis consisting of eigenvectors of T1*T1 and f1, ..., fn an orthonormal basis consisting of eigenvectors of T2*T2, with matching eigenvalues T1*T1ei = λiei and T2*T2fi = λifi. Let S2 ∈ L(V) with S2ei = fi for all 1 ≤ i ≤ n; then, similarly to exercise 11 (7C), we can show that S2 is an isometry and that T1*T1 = S2*(T2*T2)S2. This leads to ‖T1v‖ = ‖T2S2v‖ for every v ∈ V. With this, we can construct an isometry S1, similarly to the proof of the Polar Decomposition 9.7.1 (7D) from the book, so that S1T2S2 = T1.

Conversely, if there exist isometries S1, S2 with S1T2S2 = T1, then T1*T1 = S2*T2*T2S2. Let f1, ..., fn be a basis of V consisting of eigenvectors of T2*T2 corresponding to eigenvalues λ1, ..., λn. Since S2 is invertible, for each fi there exists ei ∈ V with S2ei = fi; then e1, ..., en is a basis of V. Hence,

T1*T1ei = S2*T2*T2S2ei = S2*T2*T2fi = λiS2*fi = λiei

(using S2*fi = S2⁻¹fi = ei, as S2 is an isometry). It follows that T1*T1 and T2*T2 have the same eigenvalues, so T1 and T2 have the same singular values.

17. (a) We have

⟨Tv, u⟩ = ⟨s1⟨v, e1⟩f1 + ⋯ + sn⟨v, en⟩fn, ⟨u, f1⟩f1 + ⋯ + ⟨u, fn⟩fn⟩
= s1⟨v, e1⟩⟨f1, u⟩ + ⋯ + sn⟨v, en⟩⟨fn, u⟩
= ⟨v, s1⟨u, f1⟩e1 + ⋯ + sn⟨u, fn⟩en⟩.

It follows that T*u = s1⟨u, f1⟩e1 + ⋯ + sn⟨u, fn⟩en.

(b) We have

T*Tv = T*(s1⟨v, e1⟩f1 + ⋯ + sn⟨v, en⟩fn) = Σ_{i=1}^{n} si⟨v, ei⟩T*fi = s1²⟨v, e1⟩e1 + ⋯ + sn²⟨v, en⟩en,

using T*fi = siei from part (a).


(c) We have √(T*T)ei = siei, so

√(T*T)v = √(T*T)(⟨v, e1⟩e1 + ⋯ + ⟨v, en⟩en) = s1⟨v, e1⟩e1 + ⋯ + sn⟨v, en⟩en.

(d) We have Tei = sifi, so T⁻¹fi = ei/si. Hence,

T⁻¹v = T⁻¹(⟨v, f1⟩f1 + ⋯ + ⟨v, fn⟩fn) = Σ_{i=1}^{n} ⟨v, fi⟩T⁻¹fi = ⟨v, f1⟩e1/s1 + ⋯ + ⟨v, fn⟩en/sn.
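Part (d) is exactly how an inverse is computed from a singular value decomposition; a small numerical sketch (my addition; numpy assumed, real case, where the formula becomes A⁻¹ = V diag(1/s) Uᵀ):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # almost surely invertible

# numpy returns A = U @ diag(s) @ Vh; columns of U play the role of the f_i,
# rows of Vh the role of the e_i, so A^{-1} = Vh.T @ diag(1/s) @ U.T.
U, s, Vh = np.linalg.svd(A)
A_inv = Vh.T @ np.diag(1 / s) @ U.T
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```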

18. (a) If s1 ≤ s2 ≤ ⋯ ≤ sn are all the singular values of T, then ‖Tv‖² = Σ_{i=1}^{n} si²|⟨v, ei⟩|² and ‖v‖² = Σ_{i=1}^{n} |⟨v, ei⟩|², so it is easily seen that s1‖v‖ ≤ ‖Tv‖ ≤ sn‖v‖.

(b) Applying (a) to an eigenvector v of λ (so that ‖Tv‖ = |λ|‖v‖), we obtain s1 ≤ |λ| ≤ sn.

19. It suffices to show that for every ε > 0 there exists δ > 0 such that ‖u − v‖ < δ implies ‖T(u − v)‖ < ε. Indeed, if s is the largest singular value of T, then from the previous exercise 18 (7D) we find ‖T(u − v)‖ ≤ s‖u − v‖. Hence, choosing δ = ε/s works (any δ works when s = 0), and we are done. Thus, T is uniformly continuous with respect to the metric d on V.

20. According to exercise 18 (7D), for any v ∈ V we have ‖(S + T)v‖ ≤ ‖Sv‖ + ‖Tv‖ ≤ (t + s)‖v‖. On the other hand, from exercise 4 (7D), there exists v ∈ V with ‖v‖ = 1 and ‖(S + T)v‖ = r. It follows that r ≤ s + t.


10. Chapter 8: Operators on Complex Vector Spaces

In this chapter, V denotes a finite-dimensional nonzero vector space over F.

10.1. 8A: Generalized Eigenvectors and Nilpotent Operators

Theorem 10.1.1 (8.2) Suppose T ∈ L(V). Then

{0} = null T⁰ ⊂ null T¹ ⊂ ⋯ ⊂ null T^k ⊂ null T^{k+1} ⊂ ⋯

Theorem 10.1.2 (8.3) Suppose T ∈ L(V) and m is a nonnegative integer such that null T^m = null T^{m+1}. Then

null T^m = null T^{m+1} = null T^{m+2} = null T^{m+3} = ⋯

Theorem 10.1.3 (8.4) Suppose T ∈ L(V). Let n = dim V. Then null T^k = null T^{k+1} for every integer k ≥ n.

Theorem 10.1.4 (8.5) Suppose T ∈ L(V). Let n = dim V. Then V = null T^n ⊕ range T^n.

Theorem 10.1.5 (8.13) Let T ∈ L(V). Suppose λ1, ..., λm are distinct eigenvalues of T and v1, ..., vm are corresponding (nonzero) generalized eigenvectors. Then v1, ..., vm is linearly independent.

Theorem 10.1.6 (8.18) Suppose N ∈ L(V) is nilpotent. Then N^{dim V} = 0.

Theorem 10.1.7 (8.19) Suppose N is a nilpotent operator on V. Then there exists a basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal.

How to determine all the generalized eigenspaces corresponding to the distinct eigenvalues of an operator T ∈ L(V):

1. Find all eigenvalues of T.

2. For each eigenvalue λ of T, find the null space of (T − λI)^{dim V}, which equals the generalized eigenspace G(λ, T) of T corresponding to λ.
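The two steps above can be sketched numerically (my addition; numpy assumed, and `generalized_eigenspace` is a hypothetical helper name): the generalized eigenspace is the null space of (A − λI)^n, which the SVD exposes.

```python
import numpy as np

def generalized_eigenspace(A, lam, tol=1e-8):
    """Orthonormal basis (as columns) of G(lam, A) = null (A - lam*I)^n, via the SVD."""
    n = A.shape[0]
    M = np.linalg.matrix_power(A - lam * np.eye(n), n)
    _, s, Vh = np.linalg.svd(M)
    # Rows of Vh whose singular value is numerically zero span the null space.
    return Vh[s < tol * max(1, s.max())].conj().T

# Example from exercise 1 of 8A below: T(w, z) = (z, 0) on C^2 has G(0, T) = C^2.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = generalized_eigenspace(A, 0.0)
print(B.shape[1])   # 2, i.e. G(0, T) is all of C^2
```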


10.2. Exercises 8A

1. 0 is the only eigenvalue of T. We have T²(w, z) = T(z, 0) = (0, 0) for all (w, z) ∈ C², so G(0, T) = null T² = C².

2. If T(w, z) = λ(w, z), i.e. (−z, w) = (λw, λz), then λw = −z and λz = w. This gives λ² + 1 = 0, so the eigenvalues of T are λ = ±i. We have

(T + iI)²(w, z) = (T + iI)(−z + iw, w + iz) = (−w − iz, −z + iw) + i(−z + iw, w + iz) = (−2w − 2iz, 2iw − 2z).

Thus, (T + iI)²(w, z) = 0 when w + iz = 0, so G(−i, T) = span((−i, 1)). We have

(T − iI)²(w, z) = (T − iI)(−z − iw, w − iz) = (iz − w, −z − iw) − i(−z − iw, w − iz) = (2iz − 2w, −2z − 2iw).

Thus, (T − iI)²(w, z) = 0 when iz − w = 0, so G(i, T) = span((i, 1)).

3. Let dim V = n. If v ∈ G(λ, T) then (T − λI)^n v = 0. Hence (λT)^{−n}(T − λI)^n v = 0, i.e. (λ⁻¹I − T⁻¹)^n v = 0, so v ∈ G(λ⁻¹, T⁻¹). Similarly, if v ∈ G(λ⁻¹, T⁻¹) then v ∈ G(λ, T). Thus, G(λ, T) = G(1/λ, T⁻¹).

4. The proof is completely similar to the proof of theorem 10.1.5 (8A). Indeed, assume the contrary: there is v ∈ G(α, T) ∩ G(β, T) with v ≠ 0. Let k be the largest nonnegative integer such that (T − αI)^k v ≠ 0. Hence, with w = (T − αI)^k v we have (T − αI)w = (T − αI)^{k+1}v = 0, so Tw = αw. It follows that (T − βI)w = (α − β)w, hence (T − βI)^n w = (α − β)^n w. On the other hand, since v ∈ G(β, T), we have (T − βI)^n w = (T − αI)^k(T − βI)^n v = 0. It follows that (α − β)^n w = 0, which (as α ≠ β) leads to w = 0, a contradiction. Thus, G(α, T) ∩ G(β, T) = {0}.

5. Let a0, ..., a_{m−1} ∈ F with a0v + a1Tv + ⋯ + a_{m−1}T^{m−1}v = 0. Applying the operator T^{m−1} to both sides, we obtain a0T^{m−1}v = 0, which (as T^{m−1}v ≠ 0) leads to a0 = 0. Similarly, we apply T^{m−2} to obtain a1 = 0, ..., and T to obtain a_{m−2} = 0; what remains then forces a_{m−1} = 0. Thus, ai = 0 for all i, so v, Tv, ..., T^{m−1}v is linearly independent.

6. Since T(z1, z2, z3) = (z2, z3, 0), we have T²(z1, z2, z3) = (z3, 0, 0), so T³ = 0 but T² ≠ 0. If T had a square root S, then S⁶ = T³ = 0, so S is nilpotent; according to theorem 10.1.6 (8A), S³ = 0 (as dim C³ = 3), hence T² = S⁴ = 0, a contradiction. Thus, T has no square root.

7. If N is nilpotent then from theorem 10.1.6 (8A), N^{dim V} = 0, so G(0, N) = null N^{dim V} = V; hence there exists a basis of V consisting of generalized eigenvectors corresponding to 0. Combining with exercise 4 (8A), we find 0 is the only eigenvalue of N.

8. It is false. When V = C², consider the two nilpotent operators T, S ∈ L(C²) with T(x, y) = (0, x) and S(x, y) = (y, 0). Then (T + S)²(x, y) = (T + S)(y, x) = (x, y). Thus, T + S is not a nilpotent operator, so the set of nilpotent operators on C² is not a subspace of L(C²).
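The counterexample can be checked directly with matrices (my addition; numpy assumed):

```python
import numpy as np

# T(x, y) = (0, x) and S(x, y) = (y, 0) as matrices on C^2.
T = np.array([[0, 0],
              [1, 0]])
S = np.array([[0, 1],
              [0, 0]])

print(np.linalg.matrix_power(T, 2))       # zero matrix: T is nilpotent
print(np.linalg.matrix_power(S, 2))       # zero matrix: S is nilpotent
print(np.linalg.matrix_power(T + S, 2))   # identity: T + S is not nilpotent
```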


9. Since ST is nilpotent, (ST)^{dim V} = 0, which leads to (TS)^{dim V + 1} = T(ST)^{dim V}S = 0, so TS is nilpotent.

10. It suffices to show that null T^{n−1} ∩ range T^{n−1} = {0}. Assume the contrary: there exists v ∈ null T^{n−1} ∩ range T^{n−1} with v ≠ 0. Then T^{n−1}v = 0 and there exists u ∈ V, u ≠ 0, with T^{n−1}u = v. It follows that T^{2n−2}u = 0, so T^n u = 0 according to theorem 10.1.3 (8A). Note that v = T^{n−1}u ≠ 0 and T^n u = 0, so according to exercise 5 (8A), u, Tu, ..., T^{n−1}u is linearly independent, hence a basis of V. On the other hand, note that each T^i u ∈ null T^n, which would mean T^n = 0, a contradiction since T is not nilpotent. Thus, we must have null T^{n−1} ∩ range T^{n−1} = {0}, which leads to V = null T^{n−1} ⊕ range T^{n−1}.

11. It is false. The idea is to pick T such that T² doesn't have enough eigenvectors to form a basis of V. Let T ∈ L(C²) with T(x, y) = (x, x + y); then T²(x, y) = T(x, x + y) = (x, 2x + y). Hence, if λ(x, y) = (x, 2x + y) then (λ − 1)x = 0 and (λ − 1)y = 2x. With this, we obtain that 1 is the only eigenvalue of T² and that E(1, T²) = span((0, 1)). It follows that T² is not diagonalizable.

12. Let v1, ..., vn be such a basis of V. We prove inductively on i ≤ n that there exists a positive integer ki with N^{ki}vi = 0. Indeed, since Nv1 = 0, it's true for i = 1. Suppose it's true for all i ≤ m < n. Consider v_{m+1}: according to the assumption, Nv_{m+1} ∈ span(v1, ..., vm), so with k = max{k1, ..., km} we obtain N^{k+1}v_{m+1} = 0. It follows that N is nilpotent.

13. Since N is nilpotent, there exists a basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal. Therefore, from theorem 8.3.4 (look at its proof; applying the theorem itself is not enough), there exists an orthonormal basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal. With this, and from the proof of the Complex Spectral Theorem 9.3.1, given that N is normal, we can deduce that the matrix of N is the zero matrix; in other words, N = 0.

14. (Copied from part of exercise 13 (8A).) Since N is nilpotent, there exists a basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal. Therefore, from theorem 8.3.4 (look at its proof; applying the theorem itself is not enough), there exists an orthonormal basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal.

15. Since null N^{dim V − 1} ≠ null N^{dim V}, there exists v ∈ V with N^{dim V}v = 0 but N^{dim V − 1}v ≠ 0. Therefore, according to exercise 5 (8A), v, Nv, ..., N^{dim V − 1}v is linearly independent, hence a basis of V. Note that each N^i v ∈ null N^{dim V}, so V ⊂ null N^{dim V}, i.e. N^{dim V} = 0 and N is nilpotent.

Since null N^{dim V − 1} ≠ null N^{dim V}, according to theorem 10.1.2 (8A) we obtain null N^j ≠ null N^{j+1} for all 0 ≤ j ≤ dim V − 1. Hence, from theorem 10.1.1 (8A), we obtain dim null N^j < dim null N^{j+1} for every 0 ≤ j ≤ dim V − 1, which leads to dim null N^i = i for every 1 ≤ i ≤ dim V.


16. For any nonnegative integer k, if v ∈ range T^{k+1} then v ∈ range T^k, so range T^{k+1} ⊂ range T^k.

17. From exercise 16 (8A), we already know range T^{m+k+1} ⊂ range T^{m+k}. Now, if v ∈ range T^{m+k} then v = T^{m+k}u for some u. Since T^m u ∈ range T^m = range T^{m+1}, there exists w ∈ V with T^m u = T^{m+1}w, so v = T^{m+k}u = T^{m+k+1}w, i.e. v ∈ range T^{m+k+1}. Thus, range T^{m+k} ⊂ range T^{m+k+1}. With this, we conclude range T^{m+k} = range T^{m+k+1} for every nonnegative integer k; in other words, range T^k = range T^m for all k > m.

18. According to exercise 17 (8A), it suffices to prove range T^n = range T^{n+1}. Assume this is not true; then from exercises 16 and 17 (8A) we find

V = range T⁰ ⊋ range T¹ ⊋ ⋯ ⊋ range T^{n+1}.

Each strict inclusion drops the dimension by at least 1, so dim range T^{n+1} ≤ n − (n + 1) < 0, a contradiction. Thus, we must have range T^n = range T^{n+1}.

19. null T^m = null T^{m+1} iff dim null T^m = dim null T^{m+1} iff dim range T^m = dim range T^{m+1} iff range T^m = range T^{m+1} (the last step uses the inclusion range T^{m+1} ⊂ range T^m).

20. From exercise 19 (8A), range T⁴ ≠ range T⁵ implies null T⁴ ≠ null T⁵, so from exercise 15 (8A), T is nilpotent.

21. Let W = Cⁿ and T(z1, ..., zn) = (z2, ..., zn, 0); then null T^i = {(z1, ..., zi, 0, ..., 0) : z1, ..., zi ∈ C}.

10.3. 8B: Decomposition of an Operator

Theorem 10.3.1 (8.21, Description of operators on complex vector spaces) Suppose V is a complex vector space and T ∈ L(V). Let λ1, ..., λm be the distinct eigenvalues of T. Then:

(a) V = G(λ1, T) ⊕ ⋯ ⊕ G(λm, T);

(b) each G(λi, T) is invariant under T;

(c) each (T − λiI)|_{G(λi, T)} is nilpotent.

The idea for the proof of (a) is similar to that of exercise 5 (5C).

Proof of (a). We induct on dim V. From theorem 10.1.4 (8A), V = null(T − λiI)^n ⊕ range(T − λiI)^n for all 1 ≤ i ≤ m, where n = dim V. Let U = range(T − λiI)^n; then dim U < dim V.

First, we show that for all j ≠ i, G(λj, T) = G(λj, T|U). It's obvious that G(λj, T|U) ⊂ G(λj, T). If v ∈ G(λj, T) then (T − λjI)^n v = 0. With V = null(T − λiI)^n ⊕ range(T − λiI)^n, we can write v = u + (T − λiI)^n w where u ∈ G(λi, T). It follows that (T − λjI)^n u = 0 − (T − λjI)^n(T − λiI)^n w ∈ U, so (T − λjI)^n u ∈ U. Hence there exists s ∈ V with (T − λjI)^n u = (T − λiI)^n s. Note that u ∈ G(λi, T), so (T − λiI)^{2n}s = (T − λjI)^n(T − λiI)^n u = 0, which leads to (T − λiI)^n s = 0 according to theorem 10.1.3 (8A), i.e. (T − λjI)^n u = 0, i.e. u ∈ G(λj, T). Therefore, u ∈ G(λi, T) ∩ G(λj, T), so from exercise 4 (8A) we find u = 0. Thus, v = (T − λiI)^n w ∈ U. We conclude that any generalized eigenvector v of T corresponding to λj ≠ λi lies in U, which means v ∈ G(λj, T|U). With this, we deduce G(λj, T) ⊂ G(λj, T|U), so G(λj, T) = G(λj, T|U) for all j ≠ i.

Second, note that the λj (j ≠ i) are the only eigenvalues of the operator T|U on U. Hence, since dim U < dim V, from the inductive hypothesis we obtain

range(T − λiI)^n = U = ⊕_{j≠i} G(λj, T|U) = ⊕_{j≠i} G(λj, T).

Thus, V = G(λi, T) ⊕ range(T − λiI)^n = ⊕_{j=1}^{m} G(λj, T). ∎

Theorem 10.3.2 (8.26) Suppose V is a complex vector space and T ∈ L(V). Then the sum of the multiplicities of all eigenvalues of T equals dim V.

Theorem 10.3.3 (8.29) Suppose V is a complex vector space and T ∈ L(V). Let λ1, ..., λm be the distinct eigenvalues of T, with multiplicities d1, ..., dm. Then there is a basis of V with respect to which T has a block diagonal matrix diag(A1, ..., Am), where each Aj is a dj-by-dj upper-triangular matrix with every diagonal entry equal to λj (and arbitrary entries above the diagonal).

Theorem 10.3.4 (8.31) Suppose N ∈ L(V) is nilpotent. Then I + N has a kth root.

Theorem 10.3.5 (8.33) Suppose V is a complex vector space and T ∈ L(V) is invertible. Then T has a kth root.

10.4. Exercises 8B

1. Since 0 is the only eigenvalue of N and V is a complex vector space, from theorems 7.3.2 (5B) and 7.3.4 (5B) it follows that there exists a basis of V with respect to which the matrix of N is an upper-triangular matrix with only 0's on the diagonal. From exercise 12 (8A), it follows that N is nilpotent.


2. The idea is to choose an operator T such that T has 0 as its only eigenvalue and yet there doesn't exist any basis of V with respect to which T has an upper-triangular matrix. Hence:

• First pick null T = span(v1).

• Choose U = range T = span(v2, v3) so that T|U ∈ L(U) does not have any eigenvalue: with dim U = 2, say Tv2 = v3 and Tv3 = −v2; one checks T|U has no (real) eigenvalue.

• Confirm that T is not nilpotent. Now V = span(v1, v2, v3), so it suffices to show T³ ≠ 0. Indeed, we have

T³(av1 + bv2 + cv3) = T²(bv3 − cv2) = T(−bv2 − cv3) = −bv3 + cv2.

3. If λ is an eigenvalue of T and v ∈ E(λ, T), then since S is invertible there exists u ∈ V with Su = v. It follows that S⁻¹TSu = S⁻¹Tv = λS⁻¹v = λu, so λ is also an eigenvalue of S⁻¹TS. Similarly, if λ is an eigenvalue of S⁻¹TS and v ∈ E(λ, S⁻¹TS), then TSv = λSv, so λ is also an eigenvalue of T. Thus, T and S⁻¹TS have the same eigenvalues.

Now it suffices to show that dim G(λ, T) = dim G(λ, S⁻¹TS) for every eigenvalue λ. Note that from exercise 5 (5B) we have (S⁻¹TS − λI)^n = S⁻¹(T − λI)^n S. Let v1, ..., vm be a basis of G(λ, S⁻¹TS); then (S⁻¹TS − λI)^n vi = S⁻¹(T − λI)^n Svi = 0, so (T − λI)^n Svi = 0. It follows that Svi ∈ G(λ, T), and Sv1, ..., Svm is linearly independent, so dim G(λ, S⁻¹TS) ≤ dim G(λ, T). Similarly, if w1, ..., wk is a basis of G(λ, T) then S⁻¹w1, ..., S⁻¹wk ∈ G(λ, S⁻¹TS) and are linearly independent, so dim G(λ, T) ≤ dim G(λ, S⁻¹TS). Hence, dim G(λ, T) = dim G(λ, S⁻¹TS).

4. If null T^{n−2} ≠ null T^{n−1} then from theorem 10.1.2 (8A), we find null T^i ≠ null T^{i+1} for all 0 ≤ i ≤ n − 2. It follows that dim G(0, T) ≥ n − 1, i.e. the multiplicity of the eigenvalue 0 of T is at least n − 1. Hence, from theorem 10.3.2 (8B), T has at most 2 distinct eigenvalues.

5. If every generalized eigenvector of T is an eigenvector of T, then obviously V has a basis consisting of eigenvectors of T. For the other direction, if V has a basis consisting of eigenvectors of T, then Σ_{i=1}^{m} dim E(λi, T) = Σ_{i=1}^{m} dim G(λi, T); since E(λi, T) ⊂ G(λi, T), the equality forces E(λi, T) = G(λi, T) for every 1 ≤ i ≤ m, i.e. every generalized eigenvector of T is an eigenvector of T.

6. We have N⁵ = 0, so N is nilpotent. Following the proof of theorem 10.3.4 (8B), let R = I + a1N + a2N² + a3N³ + a4N⁴ be a square root of I + N. Then

R² = I + 2a1N + (a1² + 2a2)N² + (2a3 + 2a1a2)N³ + (2a4 + 2a3a1 + a2²)N⁴.

Setting R² = I + N and solving coefficient by coefficient, we find a1 = 1/2, a2 = −1/8, a3 = 1/16, a4 = −5/128.
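These are the Taylor coefficients of √(1 + x) truncated at degree 4, which is exact here because N⁵ = 0. A numerical check (my addition; numpy assumed, with the 5×5 shift matrix as a concrete N):

```python
import numpy as np

# A nilpotent N with N^5 = 0: the 5x5 shift matrix (ones on the superdiagonal).
N = np.diag(np.ones(4), 1)
I = np.eye(5)

# R = I + N/2 - N^2/8 + N^3/16 - 5 N^4/128, the truncated sqrt(1+x) series.
R = (I + N / 2
       - np.linalg.matrix_power(N, 2) / 8
       + np.linalg.matrix_power(N, 3) / 16
       - 5 * np.linalg.matrix_power(N, 4) / 128)

print(np.allclose(R @ R, I + N))   # True: R is a square root of I + N
```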

7. First show that I + N, where N is nilpotent, has a cube root; then follow the proof of theorem 10.3.5 (8B) similarly.


8. If 3 and 8 are the only eigenvalues of T, then 0 is not an eigenvalue, so null T^{n−2} = null T⁰ = {0}, and according to exercise 19 (8A), range T^{n−2} = range T⁰ = V; hence V = null T^{n−2} ⊕ range T^{n−2}.

If T has at least 3 eigenvalues, then from exercise 4 (8B) we obtain null T^{n−2} = null T^{n−1} = null T^n, so from exercise 19 (8A) again, range T^{n−2} = range T^{n−1} = range T^n. Thus, from theorem 10.1.4 (8A), we find V = null T^n ⊕ range T^n = null T^{n−2} ⊕ range T^{n−2}.

9. Not hard.

10. Since T is an operator on a complex vector space, from theorem 10.3.3 (8B) there exists a basis of V with respect to which T has a block diagonal matrix diag(A1, ..., Am), where Aj is a dj-by-dj upper-triangular matrix with every diagonal entry equal to λj.

Let D ∈ L(V) be the operator whose matrix with respect to this basis is diag(λ1I1, ..., λmIm), where Ij is the dj-by-dj identity matrix; in particular, D is diagonalizable. Let N ∈ L(V) be the operator whose matrix is diag(N1, ..., Nm), where Nj = Aj − λjIj is strictly upper triangular. From here we obtain that N is nilpotent and T = N + D. Moreover, from exercise 9 (8B), the matrix of ND is

diag(N1, ..., Nm) · diag(λ1I1, ..., λmIm) = diag(λ1N1, ..., λmNm),

which is also equal to the matrix of DN. Thus, ND = DN.

11. First we show that the number of times 0 appears on the diagonal of the matrix of T is at most dim null T^n. Fix a basis v1, ..., vn of V with respect to which T has an upper-triangular matrix, and let v_{a1}, ..., v_{am} be the subset of that basis such that the i-th 0 on the diagonal (from top left to bottom right) lies in the ai-th column of the matrix of T.

We prove inductively on i that there exist ui ∈ span(v1, ..., v_{ai}) with Tu1 = 0 and Tui ∈ span(u1, ..., u_{i−1}) for i ≥ 2. For i = 1, let u1 = Σ_{i=1}^{a1−1} αivi + v_{a1}. Since Tv_{a1} ∈ span(v1, ..., v_{a1−1}), we can write

T(u1) = T(Σ_{i=1}^{a1−1} αivi) + Σ_{i=1}^{a1−1} βivi = Σ_{i=1}^{a1−1} (αiTvi + βivi).

It suffices to choose the αi so that T(u1) = 0. Indeed, observe that the first a1 − 1 entries on the diagonal (from the top left) are nonzero, i.e. for every 1 ≤ i ≤ a1 − 1 we have Tvi = Σ_{j=1}^{i} c_{j,i}vj with c_{i,i} ≠ 0. Hence,

T(u1) = Σ_{i=1}^{a1−1} (Σ_{j=i}^{a1−1} αjc_{i,j} + βi) vi.

Therefore, since c_{i,i} ≠ 0 for all 1 ≤ i ≤ a1 − 1, we can choose α_{a1−1} so that α_{a1−1}c_{a1−1,a1−1} + β_{a1−1} = 0, then choose α_{a1−2} so that Σ_{j=a1−2}^{a1−1} αjc_{a1−2,j} + β_{a1−2} = 0, and so on. Continuing this, we are guaranteed to find u1 with Tu1 = 0.

If the statement is true for all indices less than i, consider ui = v_{ai} + Σ_{j=1}^{ai−1} αjvj. We show that we can choose the αj so that Tui ∈ span(u1, ..., u_{i−1}). Indeed, we have

Tui = T(Σ_{j=1}^{a_{i−1}} αjvj) + T(Σ_{j=a_{i−1}+1}^{ai−1} αjvj) + Tv_{ai}.

Observe that on the diagonal there is no 0 strictly between the a_{i−1}-th entry and the ai-th entry, so with an approach similar to the case i = 1 we can choose α_{a_{i−1}+1}, ..., α_{ai−1} so that the coefficient of each vj (a_{i−1} + 1 ≤ j ≤ ai − 1) in the representation of Tui is 0. With this, and noting that u_{i−1} ∈ span(v1, ..., v_{a_{i−1}}), we obtain

Tui = T(Σ_{j=1}^{a_{i−1}} αjvj) + Σ_{i=1}^{a_{i−1}} βivi = T(Σ_{j=1}^{a_{i−1}} αjvj) + Σ_{i=1}^{a_{i−1}−1} kivi + γ_{i−1}u_{i−1},

where ki ∈ F. Similarly, we can choose α_{a_{i−2}+1}, ..., α_{a_{i−1}} so that the coefficients of the vj (a_{i−2} + 1 ≤ j ≤ a_{i−1} − 1) in Tui are 0. With this, we obtain

Tui = T(Σ_{j=1}^{a_{i−2}} αjvj) + Σ_{i=1}^{a_{i−2}−1} ℓivi + γ_{i−2}u_{i−2} + γ_{i−1}u_{i−1}.

By continuing this method, we can deduce Tui ∈ span(u1, ..., u_{i−1}).

From the above, since Tu1 = 0 and Tui ∈ span(u1, ..., u_{i−1}), we obtain u1, ..., um ∈ null T^n. On the other hand, we also find that u1, ..., um is linearly independent (ui is the only one involving v_{ai}), so m ≤ dim null T^n.

Back to our original question: for λi an eigenvalue of T, if M(T) is an upper-triangular matrix with respect to some basis of V, then M(T − λiI) is also an upper-triangular matrix. Furthermore, the number of times λi appears on the diagonal of M(T) (denote this by Si) equals the number of times 0 appears on the diagonal of M(T − λiI), which is at most dim null(T − λiI)^n by what we've proven above; we write Si ≤ dim null(T − λiI)^n = dim G(λi, T). On the other hand, Σi Si = n = Σi dim G(λi, T), so all these inequalities are equalities, i.e. Si = dim G(λi, T) for all i. We're done.

10.5. 8C: Characteristic and Minimal Polynomials

Definition 10.5.1 (8.34, characteristic polynomial). Suppose V is a complex vector space and T ∈ L(V). Let λ1, ..., λm denote the distinct eigenvalues of T, with multiplicities d1, ..., dm. The polynomial (z − λ1)^{d1} ⋯ (z − λm)^{dm} is called the characteristic polynomial of T.

Theorem 10.5.2 (Cayley-Hamilton Theorem) Suppose V is a complex vector space and T ∈ L(V). Let q denote the characteristic polynomial of T. Then q(T) = 0.

Definition 10.5.3 (8.43, minimal polynomial). Suppose T ∈ L(V). Then the minimal polynomial of T is the unique monic polynomial p of smallest degree such that p(T) = 0.

Theorem 10.5.4 (8.46) Suppose T ∈ L(V) and q ∈ P(F). Then q(T) = 0 if and only if q is a polynomial multiple of the minimal polynomial of T.

With F = C, theorem 10.5.4 and the Cayley-Hamilton Theorem 10.5.2 imply that the characteristic polynomial of T is a polynomial multiple of the minimal polynomial of T.

Theorem 10.5.5 (8.49) Let T ∈ L(V). Then the zeros of the minimal polynomial of T are precisely the eigenvalues of T.

Not from the book:

Theorem 10.5.6 Let p(z) = (z − λ1)^{d1} ⋯ (z − λm)^{dm} be the minimal polynomial of T. Then G(λi, T) = null(T − λiI)^{di} for all 1 ≤ i ≤ m. In particular, di is the smallest number such that null(T − λiI)^{di} = null(T − λiI)^n; equivalently, (z − λi)^{di} is the minimal polynomial of T|_{G(λi, T)}.

The idea for the proof is similar to the proof of exercise 12 (8C).

How to find the characteristic/minimal polynomial of an operator is a good summary question:

Question 10.5.7. What can you tell about an operator from looking at its minimal/characteristic polynomial?

10.6. Exercises 8C

1. Since 3, 5, 8 are the only eigenvalues of T, each of them has multiplicity at most 2. It follows that the characteristic polynomial of T is (z − 3)^{d1}(z − 5)^{d2}(z − 8)^{d3} with di ≤ 2, which divides (z − 3)²(z − 5)²(z − 8)². Hence, (T − 3I)²(T − 5I)²(T − 8I)² = 0.


2. Since 5 and 6 are the only eigenvalues of T, each of them has multiplicity at most n − 1. It follows that (T − 5I)^{n−1}(T − 6I)^{n−1} = 0.

3. Let T(1, 0, 0, 0) = 7(1, 0, 0, 0), T(0, 1, 0, 0) = 7(0, 1, 0, 0), T(0, 0, 1, 0) = 8(0, 0, 1, 0) and T(0, 0, 0, 1) = 8(0, 0, 0, 1).

4. If the minimal polynomial is (z − 1)(z − 5)², this requires G(5, T) = null(T − 5I)² ≠ null(T − 5I). Choose T(a, b, c, d) = (a, 5b, 5c, c + 5d). We find G(1, T) = span((1, 0, 0, 0)) and G(5, T) = span((0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)); in particular (T − 5I)(0, 0, 1, 0) ≠ 0 but (T − 5I)²(0, 0, 1, 0) = 0, and (T − 5I)(0, 1, 0, 0) = (T − 5I)(0, 0, 0, 1) = 0, so indeed G(5, T) = null(T − 5I)². The characteristic polynomial equals (z − 1)(z − 5)³.

Note that (z − 1)(z − 5) is not the minimal polynomial because (T − I)(T − 5I)(0, 0, 1, 0) ≠ 0, while (T − I)(T − 5I)² = 0 because G(5, T) = null(T − 5I)². Hence (z − 1)(z − 5)² is the minimal polynomial.

5. This means G(1, T) = null(T − I)² ≠ null(T − I). Hence, choose T(a, b, c, d) = (0, 3b, c, c + d).

6. This means G(1, T) = null(T − I). Choose T(a, b, c, d) = (0, 3b, c, d).

7. From P² = P we deduce that 0 and 1 are the only possible eigenvalues of P. Since P² = P, we have dim G(0, P) = dim null P, so the characteristic polynomial of P is z^{dim null P}(z − 1)^{dim V − dim null P}. From P² = P we also find that V = range P ⊕ null P according to exercise 4 (5B). Hence dim null P + dim range P = dim V, so the characteristic polynomial of P is z^{dim null P}(z − 1)^{dim range P}.

8. T is invertible iff 0 is not an eigenvalue of T iff 0 is not a zero of the minimal polynomial of T iff the constant term in the minimal polynomial of T is nonzero.

9. If p(z) = 4 + 5z − 6z² − 7z³ + 2z⁴ + z⁵ is the minimal polynomial of T, then q(z) = (1/4)z⁵p(1/z) is the minimal polynomial of T⁻¹. Indeed, because p(T)v = 0 for all v ∈ V, we get q(T⁻¹)v = (1/4)T⁻⁵p(T)v = 0 for all v ∈ V. Hence, from theorem 10.5.4 (8C), q(z) is a polynomial multiple of the minimal polynomial of T⁻¹.

If the minimal polynomial of T⁻¹ had degree less than the degree of q(z), which is 5, then with a similar construction we could produce a monic polynomial of degree less than 5 annihilating T, a contradiction. Thus, q(z) is the minimal polynomial of T⁻¹.

10. Since T is invertible, the constant term in the characteristic polynomial p of T is nonzero, i.e. p(0) ≠ 0. Let λ_1, ..., λ_m be the eigenvalues of T with corresponding multiplicities d_1, ..., d_m, so that p(z) = (z − λ_1)^{d_1} ··· (z − λ_m)^{d_m}. From exercise 3 (8A), we have G(λ_i, T) = G(1/λ_i, T^{-1}), so the characteristic polynomial of T^{-1} is

q(z) = (z − λ_1^{-1})^{d_1} ··· (z − λ_m^{-1})^{d_m}
     = λ_1^{-d_1} ··· λ_m^{-d_m} · z^{dim V} · (λ_1 − z^{-1})^{d_1} ··· (λ_m − z^{-1})^{d_m}
     = ((−1)^{dim V} / p(0)) · z^{dim V} · (λ_1 − z^{-1})^{d_1} ··· (λ_m − z^{-1})^{d_m}
     = (1 / p(0)) · z^{dim V} · p(1/z).
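The identity q(z) = z^{dim V} p(1/z) / p(0) can be illustrated numerically (a sketch, not part of the original solution; numpy and a random sample matrix are assumed). Reversing the coefficient list of p gives exactly the coefficients of z^n p(1/z):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # shifted to be comfortably invertible

p = np.poly(A)                 # coefficients of det(zI - A), highest degree first
q = np.poly(np.linalg.inv(A))  # characteristic polynomial of A^{-1}

# q(z) = z^n p(1/z) / p(0): reversing p's coefficients gives z^n p(1/z),
# and p(0) is the constant term p[-1].
assert np.allclose(q, p[::-1] / p[-1])
```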

11. Since T is invertible, the minimal polynomial p(z) = a_0 + a_1 z + ... + a_m z^m of T has a_0 ≠ 0 according to exercise 8 (8C). Therefore T^{-1} p(T) = 0, or a_0 T^{-1} = −a_1 I − a_2 T − ... − a_m T^{m−1}.
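The formula a_0 T^{-1} = −a_1 I − a_2 T − ... − a_m T^{m−1} computes the inverse from any annihilating polynomial with nonzero constant term. A small numpy sketch (illustrative; the characteristic polynomial is used here in place of the minimal polynomial, which is valid by Cayley–Hamilton):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # sample invertible matrix

# Coefficients a_0, ..., a_n of the characteristic polynomial; it annihilates A,
# and a_0 != 0 since A is invertible.
a = np.poly(A)[::-1]          # now a[k] is the coefficient of z^k
n = len(a) - 1

# a_0 A^{-1} = -(a_1 I + a_2 A + ... + a_n A^{n-1})
A_inv = -sum(a[k] * np.linalg.matrix_power(A, k - 1) for k in range(1, n + 1)) / a[0]
assert np.allclose(A_inv, np.linalg.inv(A))
```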

12. According to exercise 5 (8B), V has a basis consisting of eigenvectors of T if and only if every generalized eigenvector of T is an eigenvector of T.
Let p(z) = (z − λ_1)^{d_1} ··· (z − λ_m)^{d_m} be the minimal polynomial of T. Since p(T)v = 0 for all v ∈ V, we have ∏_{j ≠ i} (T − λ_j I)^{d_j} v ∈ null (T − λ_i I)^n for all v ∈ V.
Next, we show that p(z) has no repeated zero iff every generalized eigenvector of T is an eigenvector of T. Indeed, if p(z) has a repeated zero λ_i, i.e. d_i ≥ 2, then there exists v ∈ V with v ∉ null (T − λ_i I) but v ∈ null (T − λ_i I)^{d_i}; otherwise we would have ∏_{j ≠ i} (T − λ_j I)^{d_j} v ∈ null (T − λ_i I) for all v ∈ V, which leads to q(T) = (T − λ_i I) ∏_{j ≠ i} (T − λ_j I)^{d_j} = 0, where q(z) is a polynomial of degree less than that of p(z), a contradiction. Conversely, suppose there exists a generalized eigenvector v of T corresponding to the eigenvalue λ_i that is not an eigenvector of T for λ_i, i.e. (T − λ_i I)v ≠ 0 but v ∈ G(λ_i, T). Note that for any 1 ≤ j ≤ m, G(λ_i, T) is invariant under T − λ_j I. Combining this with the fact that G(α, T) ∩ G(β, T) = {0} for α ≠ β from exercise 4 (8A), it follows that ∏_{j ≠ i} (T − λ_j I)^{d_j} (T − λ_i I)v ≠ 0 and ∏_{j ≠ i} (T − λ_j I)^{d_j} (T − λ_i I)v ∈ G(λ_i, T). Since p(T)v = 0, this forces d_i ≥ 2, i.e. p(z) has the repeated zero λ_i.
From these two claims, we are done.

13. If F = C then, according to the Complex Spectral Theorem 9.3.1 (7B), T has a diagonal matrix with respect to some orthonormal basis of V, which means V has a basis consisting of eigenvectors of T. Hence, by the previous exercise, the minimal polynomial of T has no repeated zero.

14. Since S is an isometry, according to theorem 9.5.4 (7C) there exists a basis of V consisting of eigenvectors of S whose corresponding eigenvalues all have absolute value 1. On the other hand, the constant term in the characteristic polynomial of S is p(0) = (−1)^n λ_1^{d_1} ··· λ_m^{d_m}, so |p(0)| = 1.

15. (a) We know that there exists a monic polynomial p(z) (such as the minimal polynomial of T) with p(T)v = 0. Hence, it suffices to show the uniqueness of such a polynomial of smallest degree. Assume the contrary: there exist two monic polynomials p, q of the same smallest degree with p(T)v = q(T)v = 0. Then (p − q)(T)v = 0, but p − q ≠ 0 and p − q has degree less than that of p and q, a contradiction. Thus, uniqueness is proven.
(b) Let q be the minimal polynomial of T. It follows that deg p ≤ deg q. Hence, there exist polynomials r, s with deg s < deg p and q = pr + s. Since p(T)v = q(T)v = 0, we get s(T)v = 0, but deg s < deg p, so s = 0. It follows that p divides the minimal polynomial of T.

16. Let p(z) = a_0 + a_1 z + ... + a_{m−1} z^{m−1} + z^m be the minimal polynomial of T and let q(z) = ā_0 + ā_1 z + ... + ā_{m−1} z^{m−1} + z^m, with conjugated coefficients. Then q(T*) = (p(T))*. Note that since p(T)v = 0 for all v ∈ V, we get 0 = ⟨p(T) q(T*)v, v⟩ = ⟨q(T*)v, (p(T))* v⟩ = ‖q(T*)v‖^2 for all v ∈ V, so q(T*)v = 0. Therefore, from theorem 10.5.4 (8C), q is a polynomial multiple of the minimal polynomial h of T*.
Now it suffices to show that deg q = deg h. Assume the contrary, deg h < deg q; then with a similar approach we can construct a polynomial g from h with g(T) = 0 and deg g = deg h < deg q = deg p, a contradiction. Thus we must have deg q = deg h, so q(z) is the minimal polynomial of T*.

17. Let q(z) = (z − λ_1)^{k_1} ··· (z − λ_m)^{k_m} be the minimal polynomial of T with k_1 + ... + k_m = n, and let p(z) = (z − λ_1)^{d_1} ··· (z − λ_m)^{d_m} be the characteristic polynomial of T. According to theorem 10.5.4 (8C), d_i ≥ k_i for all 1 ≤ i ≤ m. On the other hand, ∑_{i=1}^m k_i = ∑_{i=1}^m d_i = n, so d_i = k_i for all 1 ≤ i ≤ m, i.e. the characteristic polynomial of T equals the minimal polynomial of T.

18. Call such an operator T and let e_1, ..., e_n be the standard basis of C^n, where e_i has 1 in the i-th coordinate and 0's in the remaining coordinates. We can easily find that for 1 ≤ i ≤ n − 1 and 1 ≤ k ≤ n, T^k e_i = e_{i+k} if i + k ≤ n.
We have T e_n = −(a_0, a_1, ..., a_{n−1}), so

T^2 e_n = −T(a_0, ..., a_{n−2}, 0) − a_{n−1} T e_n = −(0, a_0, ..., a_{n−2}) − a_{n−1} T e_n.

Hence,

T^3 e_n = −T(0, a_0, ..., a_{n−3}, 0) − a_{n−2} T e_n − a_{n−1} T^2 e_n
        = −(0, 0, a_0, ..., a_{n−3}) − a_{n−2} T e_n − a_{n−1} T^2 e_n.

Continuing in this way, we obtain

T^i e_n = −(0, 0, ..., 0, a_0, ..., a_{n−i}) − a_{n−i+1} T e_n − ... − a_{n−1} T^{i−1} e_n.

For i = n this gives T^n e_n = −a_0 e_n − a_1 T e_n − ... − a_{n−1} T^{n−1} e_n, or (T^n + a_{n−1} T^{n−1} + ... + a_1 T + a_0 I) e_n = p(T) e_n = 0.
For 1 ≤ i < n, notice that T^k e_i = e_{i+k} for 0 ≤ k ≤ n − i. Hence, we have

p(T) e_i = (T^n + a_{n−1} T^{n−1} + ... + a_{n−i+1} T^{n−i+1}) e_i + a_{n−i} e_n + ... + a_1 e_{i+1} + a_0 e_i
         = (T^i + a_{n−1} T^{i−1} + ... + a_{n−i+1} T)(T^{n−i} e_i) + (0, ..., 0, a_0, ..., a_{n−i})
         = (T^i + a_{n−1} T^{i−1} + ... + a_{n−i+1} T) e_n + (0, ..., 0, a_0, ..., a_{n−i})
         = 0.

Thus p(T)v = 0 for all v ∈ V. Since deg p = n, this means p(z) is both the minimal polynomial and the characteristic polynomial of T.
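The operator constructed here is the companion matrix of p. A small numpy check (illustrative, with sample coefficients) that p(T) = 0 and that p is also the characteristic polynomial:

```python
import numpy as np

# p(z) = z^4 + a3 z^3 + a2 z^2 + a1 z + a0, with sample coefficients a0..a3.
a = np.array([2.0, -3.0, 1.0, 5.0])
n = len(a)

# Companion matrix: T e_i = e_{i+1} for i < n, and T e_n = -(a0, ..., a_{n-1}).
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)    # subdiagonal of 1's sends e_i to e_{i+1}
C[:, -1] = -a                 # last column is -(a0, ..., a_{n-1})

# p(C) = C^n + a3 C^3 + a2 C^2 + a1 C + a0 I = 0 (Cayley-Hamilton / exercise 18)
pC = np.linalg.matrix_power(C, n) + sum(a[k] * np.linalg.matrix_power(C, k)
                                        for k in range(n))
assert np.allclose(pC, 0)

# The characteristic polynomial of C is exactly p (highest-degree-first coefficients).
assert np.allclose(np.poly(C), np.r_[1.0, a[::-1]])
```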

19. According to exercise 11 (8B), the number of times that λ appears on the diagonal of the matrix of T equals the multiplicity of λ as an eigenvalue of T. Combining this with the definition of the characteristic polynomial for complex vector spaces, we obtain that the characteristic polynomial of T is (z − λ_1) ··· (z − λ_n).

20. Since we are working with complex vector spaces, according to theorem 7.3.2 (5B), T|_{V_j} has an upper-triangular matrix M(T|_{V_j}) with respect to some basis of V_j. Therefore, if we combine all the bases of the V_j we obtain a basis of V, since V = V_1 ⊕ ··· ⊕ V_m, which gives a block diagonal matrix for T:

M(T) = diag(M(T|_{V_1}), ..., M(T|_{V_m})).

Notice that M(T) is an upper-triangular matrix since the M(T|_{V_j}) are upper-triangular matrices. Thus, by applying exercise 19 (8C) to M(T) and the M(T|_{V_j}), we can conclude that p_1 ··· p_m is the characteristic polynomial of T.

10.7. 8D: Jordan Form

Theorem 10.7.1 (8.55, basis corresponding to a nilpotent operator) Suppose N ∈ L(V) is nilpotent. Then there exist vectors v_1, ..., v_n ∈ V and nonnegative integers m_1, ..., m_n such that

(a) N^{m_1} v_1, ..., N v_1, v_1, ..., N^{m_n} v_n, ..., N v_n, v_n is a basis of V;

(b) N^{m_1 + 1} v_1 = ... = N^{m_n + 1} v_n = 0.

Furthermore, according to exercise 6 (8D), n = dim null N and N^{m_1} v_1, ..., N^{m_n} v_n is a basis of null N.

Theorem 10.7.2 (8.60, Jordan Form) Suppose V is a complex vector space. If T ∈ L(V), then there is a basis of V that is a Jordan basis for T.

In particular, there is a Jordan basis of V for T such that

M(T) = diag(M(T|_{G(λ_1,T)}), ..., M(T|_{G(λ_m,T)})).

Here M(T|_{G(λ_i,T)}) is the Jordan matrix of T|_{G(λ_i,T)} on G(λ_i, T), i.e.

M(T|_{G(λ_i,T)}) = diag(A_1, ..., A_p),

where each A_j is an upper-triangular matrix of the form

[ λ_i  1         0  ]
[      ⋱    ⋱       ]
[           ⋱    1  ]
[ 0             λ_i ]

Indeed, since (T − λ_i I)|_{G(λ_i,T)} is nilpotent, by theorem 10.7.1 there exists such a basis forming the Jordan matrix for T|_{G(λ_i,T)}.

Theorem 10.7.3 The minimal polynomial of T is ∏_{i=1}^m (z − λ_i)^{m_i + 1}, where m_i is the length of the longest consecutive string of 1's that appears on the line directly above the diagonal in the Jordan matrix M(T|_{G(λ_i,T)}).

Proof. From theorem 10.5.6 (8C), the minimal polynomial of T equals the product of the minimal polynomials of the T|_{G(λ_i,T)} over all eigenvalues λ_i of T.
From exercise 3 (8D), the minimal polynomial of the nilpotent operator (T − λ_i I)|_{G(λ_i,T)} is z^{m_i + 1}, where m_i is the length of the longest consecutive string of 1's that appears on the line directly above the diagonal in the Jordan matrix M(T|_{G(λ_i,T)}). It follows that (T − λ_i I)^{m_i + 1} = 0 but (T − λ_i I)^{m_i} ≠ 0 on G(λ_i, T), which leads to (z − λ_i)^{m_i + 1} as the minimal polynomial of T|_{G(λ_i,T)}.
Thus, the minimal polynomial of T is ∏_{i=1}^m (z − λ_i)^{m_i + 1} with m_i as above.

Example 10.7.4 We can find the minimal polynomial of T from its Jordan matrix as shown. With respect to the Jordan basis N^2 v_1, N v_1, v_1, N v_2, v_2, N v_3, v_3 (where N denotes T − λ_i I on each G(λ_i, T)):

M(T) =
[ λ_1  1    0    0    0    0    0   ]
[ 0    λ_1  1    0    0    0    0   ]
[ 0    0    λ_1  0    0    0    0   ]
[ 0    0    0    λ_1  1    0    0   ]
[ 0    0    0    0    λ_1  0    0   ]
[ 0    0    0    0    0    λ_2  1   ]
[ 0    0    0    0    0    0    λ_2 ]

So N^2 v_1, N v_1, v_1, N v_2, v_2 ∈ G(λ_1, T) and N v_3, v_3 ∈ G(λ_2, T). In the top left submatrix of M(T) that only has λ_1 on the diagonal, the longest length of a consecutive string of 1's is 2, so (z − λ_1)^3 is the minimal polynomial of T|_{G(λ_1,T)} according to theorem 10.7.3. Similarly, (z − λ_2)^2 is the minimal polynomial of T|_{G(λ_2,T)}. Thus, the minimal polynomial of T is (z − λ_1)^3 (z − λ_2)^2.
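This example can be checked numerically (an illustration with sample eigenvalues λ_1 = 2, λ_2 = 7; numpy assumed): build the 7×7 Jordan matrix above and verify that (z − λ_1)^3 (z − λ_2)^2 annihilates it while lowering either exponent fails.

```python
import numpy as np

def jordan_block(lam, size):
    """Jordan block: lam on the diagonal, 1's directly above it."""
    return lam * np.eye(size) + np.diag(np.ones(size - 1), k=1)

l1, l2 = 2.0, 7.0                     # sample eigenvalues
M = np.zeros((7, 7))                  # blocks of sizes 3 and 2 for l1, and 2 for l2
M[:3, :3] = jordan_block(l1, 3)
M[3:5, 3:5] = jordan_block(l1, 2)
M[5:, 5:] = jordan_block(l2, 2)

I = np.eye(7)
mp = lambda A, k: np.linalg.matrix_power(A, k)

# (z - l1)^3 (z - l2)^2 annihilates M ...
assert np.allclose(mp(M - l1*I, 3) @ mp(M - l2*I, 2), 0)
# ... but lowering either exponent does not, so it is the minimal polynomial.
assert not np.allclose(mp(M - l1*I, 2) @ mp(M - l2*I, 2), 0)
assert not np.allclose(mp(M - l1*I, 3) @ mp(M - l2*I, 1), 0)
```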


10.8. Exercises 8D

1. Since N is nilpotent, N^4 = 0, and since N^3 ≠ 0, we find that z^4 is both the minimal polynomial and the characteristic polynomial of N.

2. In example 8.54, with the basis N^2 v_1, N v_1, v_1, N v_2, v_2, v_3 and given that N^3 v_1 = N^2 v_2 = N v_3 = 0, it follows that N^3 = 0 and N^2 ≠ 0. Hence z^3 is the minimal polynomial of N and z^6 is the characteristic polynomial of N.

3. We know so far, according to theorems 10.7.1 (8D) and 10.7.2 (8D), that a Jordan basis of V is a combination of linearly independent lists of the form N^m v, N^{m−1} v, ..., N v, v. Each such list produces a consecutive string of 1's of length m on the line directly above the diagonal in the Jordan matrix of N. Therefore, the longest consecutive string of 1's corresponds to the list N^m v, N^{m−1} v, ..., N v, v with maximal m. It follows that N^{m+1} = 0 and N^m ≠ 0 (because N^m v is in the Jordan basis of V). Therefore, z^{m+1} is the minimal polynomial of N. Note that m here is the length of the longest consecutive string of 1's.

4. If the Jordan basis v_1, ..., v_n is reversed, then each column of the matrix is reversed, i.e. a column (a_1, a_2, ..., a_n)^T becomes (a_n, ..., a_1)^T, and the columns themselves are also arranged in reversed order. Thus, if we rotate the Jordan matrix of T by 180°, we obtain the matrix of T with respect to the reversed Jordan basis of V.

5. In the matrix of T^2 with respect to a Jordan basis of V for T, all the eigenvalues on the diagonal are squared and every 1 lying in the same column as a diagonal λ changes to 2λ. The 0's in row N^k v_i and column N^{k−2} v_i change to 1. Below is an example, with respect to the Jordan basis N^2 v_1, N v_1, v_1, N v_2, v_2, N v_3, v_3:

M(T^2) =
[ λ_1^2  2λ_1   1      0      0      0      0     ]
[ 0      λ_1^2  2λ_1   0      0      0      0     ]
[ 0      0      λ_1^2  0      0      0      0     ]
[ 0      0      0      λ_2^2  2λ_2   0      0     ]
[ 0      0      0      0      λ_2^2  0      0     ]
[ 0      0      0      0      0      λ_3^2  2λ_3  ]
[ 0      0      0      0      0      0      λ_3^2 ]
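The pattern above is just J^2 = (λI + S)^2 = λ^2 I + 2λS + S^2 for each Jordan block J, where S is the shift matrix with 1's on the superdiagonal. A quick numpy check (illustrative, with sample values):

```python
import numpy as np

lam = 3.0                                  # sample eigenvalue
size = 4
S = np.diag(np.ones(size - 1), k=1)        # shift: 1's directly above the diagonal
J = lam * np.eye(size) + S                 # one Jordan block

J2 = J @ J
# Diagonal lam^2, superdiagonal 2*lam, second superdiagonal 1 -- exactly the
# pattern described in the solution above.
expected = lam**2 * np.eye(size) + 2*lam * S + S @ S
assert np.allclose(J2, expected)
assert np.allclose(np.diag(J2, k=1), 2*lam)
assert np.allclose(np.diag(J2, k=2), 1.0)
```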

6. Since we already know that N^{m_1} v_1, ..., N^{m_n} v_n is a linearly independent list in null N, it suffices to show that this list spans null N. Indeed, consider v ∈ null N; from the basis given in theorem 10.7.1 (8D), we can represent v as v = ∑_{0 ≤ i ≤ m_j, 1 ≤ j ≤ n} α_{i,j} N^i v_j, so 0 = Nv = ∑_{0 ≤ i ≤ m_j − 1, 1 ≤ j ≤ n} α_{i,j} N^{i+1} v_j. Since the vectors N^{i+1} v_j appearing here are part of a basis, α_{i,j} = 0 for all 1 ≤ j ≤ n and 0 ≤ i ≤ m_j − 1, which means v = ∑_{j=1}^n α_{m_j, j} N^{m_j} v_j. Thus N^{m_1} v_1, ..., N^{m_n} v_n is a basis of null N.

7. Using theorem 10.7.3 (8D), draw the Jordan matrix that makes p the minimal polynomial and q the characteristic polynomial. After that, we can choose the standard basis e_1, ..., e_n as the Jordan basis for this matrix. This defines an operator T that satisfies the condition.


8. If there does not exist a direct sum decomposition of V into two proper subspaces invariant under T, then from theorem 10.3.1 (8B), T must have only one eigenvalue λ and V = G(λ, T) = null (T − λI)^{dim V}. It follows that T − λI is a nilpotent operator on V. If dim null (T − λI) ≥ 2 then, according to theorem 10.7.1 (8D) with n = dim null (T − λI) ≥ 2, V has a Jordan basis (T − λI)^{m_1} v_1, ..., v_1, ..., (T − λI)^{m_n} v_n, ..., v_n for T. Let U = span((T − λI)^{m_1} v_1, ..., v_1) and W = span((T − λI)^{m_2} v_2, ..., v_2, ..., (T − λI)^{m_n} v_n, ..., v_n); then V = U ⊕ W. If u ∈ U then Tu = (T − λI)u + λu ∈ U, so U is invariant under T. Similarly, W is invariant under T. This contradicts the original assumption. Hence, we must have dim null (T − λI) = 1, so again from theorem 10.7.1 (8D), (T − λI)^{dim V − 1} v, ..., v is a basis of V. It follows that the smallest d with (T − λI)^d = 0 is d = dim V. We find that (z − λ)^{dim V} is the minimal polynomial of T.
Conversely, if (z − λ)^{dim V} is the minimal polynomial of T, then there exists v such that (T − λI)^{n−1} v, ..., v is a basis of V, where n = dim V. Assume the contrary, that V has a direct sum decomposition into two proper subspaces U, W invariant under T. If u ∈ U is nonzero, write u = ∑_{i=0}^{n−1} a_i (T − λI)^i v where a_i ∈ C for 0 ≤ i ≤ n − 1, and let a_i be the first nonzero number among a_0, ..., a_{n−1}. Since U is invariant under T, U is also invariant under T − λI, which means (T − λI)^{n−i−1} u = a_i (T − λI)^{n−1} v ∈ U. Therefore, (T − λI)^{n−1} v ∈ U. We can apply the same argument to W and conclude that (T − λI)^{n−1} v ∈ W. Hence U ∩ W ≠ {0}, so we can't have V = U ⊕ W. Thus, there does not exist a direct sum decomposition of V into two proper subspaces invariant under T.

11. Chapter 9: Operators on Real Vector Spaces

11.1. 9A: Complexification

Definition 11.1.1 (Complexification V_C of V). The complexification V_C of V is V × V, where each element is of the form (u, v), written u + iv, for u, v ∈ V. We define addition on V_C as (u_1 + iv_1) + (u_2 + iv_2) = (u_1 + u_2) + i(v_1 + v_2) for u_i, v_i ∈ V, and scalar multiplication as (a + ib)(u + iv) = (au − bv) + i(bu + av) for a, b ∈ R and u, v ∈ V.

The construction of V_C from V can be thought of as generalizing the construction of C^n from R^n.

Theorem 11.1.2 The complexification of V is a complex vector space. Any basis of V is a basis of V_C. Even better, any subspace of V_C has a basis containing vectors in V.

Definition 11.1.3 (Complexification of an operator T). For a real vector space V, the complexification T_C of T ∈ L(V) is the operator T_C ∈ L(V_C) such that T_C(u + iv) = Tu + iTv for all u, v ∈ V.


Theorem 11.1.4 An operator T on a real vector space V has the same minimal and characteristic polynomial as its complexification T_C. This means that:

1. The minimal/characteristic polynomials of T_C have real coefficients.

2. The real zeros of the characteristic polynomial are the eigenvalues of T.

3. The multiplicity of λ ∈ C as an eigenvalue of T_C equals the multiplicity of λ̄ as an eigenvalue of T_C. In other words, dim G(λ, T_C) = dim G(λ̄, T_C).

4. If dim V is odd then T has at least one eigenvalue.
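These properties are visible numerically: the eigenvalues of a real matrix come in conjugate pairs with equal multiplicity, and the characteristic polynomial has real coefficients. An illustrative numpy check (random sample matrix assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))   # a real operator; T_C has the same matrix

# The characteristic polynomial of A has real coefficients ...
p = np.poly(A)
assert np.allclose(np.imag(p), 0)

# ... so nonreal eigenvalues come in conjugate pairs: the sorted spectrum is
# invariant under complex conjugation.
eig = np.linalg.eigvals(A)
assert np.allclose(np.sort_complex(eig), np.sort_complex(np.conj(eig)))
```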

11.2. Exercises 9A

1. Done.

2. Show T_C ∈ L(V_C). For u_1 + iv_1, u_2 + iv_2 ∈ V_C,

T_C(u_1 + iv_1 + u_2 + iv_2) = T(u_1 + u_2) + iT(v_1 + v_2) = T_C(u_1 + iv_1) + T_C(u_2 + iv_2).

Similarly, T_C(λ(u + iv)) = λ T_C(u + iv).

3. Not hard.

4. Not hard.

5. (S + T)_C(u + iv) = (S + T)u + i(S + T)v = (S_C + T_C)(u + iv).

6. Prove injectivity and surjectivity.

7. Since (N_C)^n (u + iv) = N^n u + iN^n v, N is nilpotent iff N_C is nilpotent.

8. It follows that 5 and 7 are eigenvalues of T_C, and since the characteristic polynomial of T_C has degree 3 and nonreal eigenvalues of T_C come in conjugate pairs, T_C has no nonreal eigenvalues.

9. Since T is an operator on an odd-dimensional real vector space, T has a real eigenvalue. On the other hand, if T^2 + T + I is nilpotent then the minimal polynomial p(z) of T divides (z^2 + z + 1)^7. Since the minimal polynomial of T has real coefficients and z^2 + z + 1 is irreducible over R, p(z) = (z^2 + z + 1)^k for some k ≤ 7. However, this implies p(z) has no real roots, which means T has no real eigenvalue according to 10.5.5, a contradiction to the previous claim. Thus, there does not exist T ∈ L(R^7) such that T^2 + T + I is nilpotent.

10. The difference between this exercise and the previous exercise 9 is that the minimal polynomial of T need not have real coefficients. Since T^2 + T + I is to be nilpotent, we aim to construct T whose minimal polynomial has the form

p(z) = (z^2 + z + 1)^k = (z − λ_+)^k (z − λ_−)^k,  where λ_± = (−1 ± i√3)/2,

with 2k ≤ 7 because the minimal polynomial must have degree at most dim C^7 = 7. Say k = 3. Following example 10.7.4, we can construct the Jordan matrix

M(T) = diag(J_3(λ_+), J_1(λ_+), J_3(λ_−)),

where J_s(λ) denotes the s-by-s Jordan block with λ on the diagonal and 1's directly above it. Taking the standard basis of C^7 as the Jordan basis, this matrix defines an operator T whose minimal polynomial is (z − λ_+)^3 (z − λ_−)^3 = (z^2 + z + 1)^3, so T^2 + T + I is nilpotent.

11. The minimal polynomial p of T must divide x^2 + bx + c, and since every eigenvalue λ of T is a root of p, T has a real eigenvalue if and only if x^2 + bx + c has a real root, i.e. b^2 ≥ 4c.

12. Similarly to exercise 11, the minimal polynomial p of T must divide (z^2 + bz + c)^{dim V}, and since (z^2 + bz + c)^{dim V} has no real root, p has no real root, i.e. T has no eigenvalues.

13. Since W = null (T^2 + bT + cI)^j is invariant under T and (T^2 + bT + cI)|_W is nilpotent, the previous exercise 12 implies that T|_W has no eigenvalues. It follows that dim W is even.

14. If T^2 + T + I is nilpotent, then T_C has exactly two eigenvalues, λ and λ̄, with the same multiplicity according to theorem 11.1.4. Therefore, the characteristic polynomial of T_C is p(z) = (z^2 + z + 1)^k. Since dim V = 8, deg p = 8, so k = 4. Thus, the characteristic polynomial of T is (z^2 + z + 1)^4, and hence (T^2 + T + I)^4 = 0.

15. Let U be an invariant subspace of V. Since T has no eigenvalue, T|_U ∈ L(U) has no eigenvalue. According to theorem 11.1.4, U has even dimension.

16. If T^2 = −I then T has no eigenvalue according to exercise 12, which implies V has even dimension according to theorem 11.1.4.
Conversely, suppose V has even dimension 2n. Inspired by example 7.1.1, for any basis a_1, b_1, a_2, b_2, ..., a_n, b_n of V, let T ∈ L(V) be such that T(a_i) = b_i and T(b_i) = −a_i. With this, we can see that T^2 + I = 0.
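The construction T(a_i) = b_i, T(b_i) = −a_i corresponds to a block diagonal matrix of 2×2 rotation-by-90° blocks; a small numpy sketch (illustrative) for dim V = 4:

```python
import numpy as np

# On each pair (a_i, b_i): T a_i = b_i, T b_i = -a_i, i.e. the block [[0, -1], [1, 0]]
# in the ordered basis (a_1, b_1, a_2, b_2).
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])
T = np.zeros((4, 4))
T[:2, :2] = B
T[2:, 2:] = B

assert np.allclose(T @ T, -np.eye(4))   # T^2 + I = 0
```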

17. (a) We already know that V is a real vector space. For any a, b, c, d ∈ R and any u, v ∈ V, we have (a + ib)v = av + bTv ∈ V and

(a + bi)[(c + di)v] = (a + bi)(cv + dTv)
 = (acv + bcTv) + (adTv + bdT^2 v)
 = (ac − bd)v + (bc + ad)Tv
 = ((ac − bd) + i(ad + bc))v
 = [(a + bi)(c + di)]v.

We also have (a + bi)(u + v) = a(u + v) + bT(u + v) = (au + bTu) + (av + bTv) = (a + bi)u + (a + bi)v, and similarly (a + bi + c + di)u = (a + bi)u + (c + di)u. Thus, V is a complex vector space.
(b) Let dim V = 2n. Pick a_i ∈ V inductively such that a_{i+1} ∉ span(a_1, Ta_1, a_2, Ta_2, ..., a_i, Ta_i). We will first show that a_1, Ta_1, ..., a_n, Ta_n is linearly independent.
Assume the contrary, that the list is linearly dependent; then there exists i such that Ta_{i+1} ∈ span(a_1, Ta_1, ..., a_i, Ta_i, a_{i+1}). Let Ta_{i+1} = ∑_{j=1}^{i} (α_j a_j + β_j Ta_j) + c a_{i+1}, where α_j, β_j, c ∈ R. Since T^2 = −I,

−a_{i+1} = T^2 a_{i+1} = ∑_{j=1}^{i} (α_j Ta_j − β_j a_j) + c Ta_{i+1}.

Substituting Ta_{i+1} into the above gives (1 + c^2) a_{i+1} ∈ span(a_1, Ta_1, ..., a_i, Ta_i); since 1 + c^2 ≠ 0 for real c, we obtain a_{i+1} ∈ span(a_1, Ta_1, ..., a_i, Ta_i), a contradiction to our choice. Thus a_1, Ta_1, ..., a_n, Ta_n is linearly independent and is therefore a basis of the real vector space V.
We now prove that, with this definition of complex scalar multiplication, a_1, ..., a_n is a basis of V as a complex vector space. Since a_j ∈ V, we have ia_j = Ta_j. Therefore a_1, Ta_1, ..., a_n, Ta_n all lie in the complex span of a_1, ..., a_n, which implies that a_1, ..., a_n spans V. The list is also linearly independent because ∑_{i=1}^{n} (α_i + iβ_i) a_i = ∑_{i=1}^{n} (α_i a_i + β_i Ta_i).
Thus, the dimension of V as a complex vector space is half the dimension of V as a real vector space.

18. (b) ⇒ (a): If there exists a basis of V with respect to which T has an upper-triangular matrix, then it is also a basis of V_C with respect to which T_C has an upper-triangular matrix. Therefore, from theorem 7.3.4 (5B), the eigenvalues of T_C are the (real) diagonal entries, so all eigenvalues of T_C are real.
(a) ⇒ (c): Suppose all eigenvalues of T_C are real. Consider G(λ, T_C) where λ ∈ R is an eigenvalue of T_C, and consider a basis x_1 + iy_1, ..., x_n + iy_n of G(λ, T_C) where x_j, y_j ∈ V. Since (T_C − λI)^k (x_j + iy_j) = (T − λI)^k x_j + i(T − λI)^k y_j = 0 for k large enough, both x_j and y_j are generalized eigenvectors of T for λ. On the other hand, note that x_j + iy_j ∉ U = span(x_1 + iy_1, ..., x_{j−1} + iy_{j−1}), so at least one of x_j, y_j is not in U. From this, we can inductively choose either x_j or y_j to create a basis of G(λ, T_C) consisting of vectors in V.
From theorem 10.3.1 (8B), since V_C = ⊕_{i=1}^m G(λ_i, T_C), we can construct a basis of V_C consisting of generalized eigenvectors that lie in V. It follows that V has a basis consisting of generalized eigenvectors of T.
(c) ⇒ (a): If there is a basis of V consisting of generalized eigenvectors of T, then V = ⊕_{i=1}^m G(λ_i, T) where the λ_i ∈ R are eigenvalues of T. If v_1, ..., v_n is a basis of G(λ, T) then, since λ ∈ R, we have (T_C − λI)^k (u + iv) = 0 whenever u, v ∈ G(λ, T). Therefore v_1, ..., v_n is a basis of G(λ, T_C). Hence we obtain V_C = ⊕_{i=1}^m G(λ_i, T_C), and it follows that T_C does not have any nonreal eigenvalue.
(a) ⇒ (c) ⇒ (b): If all eigenvalues of T_C are real, then there exists a basis of V consisting of generalized eigenvectors of T. Hence V = ⊕_{i=1}^m G(λ_i, T) where the λ_i ∈ R are eigenvalues of T. With this, and from the proof of theorem 10.7.2 (8D), we find that there is a Jordan basis for T of V. This proves (b).

19. Reduce to working with a complex vector space. Since null T^{n−2} ≠ null T^{n−1}, we have null T_C^{n−2} ≠ null T_C^{n−1}. Hence, from exercise 4 (8B), T_C has at most 2 eigenvalues, one of which must be 0. Since nonreal eigenvalues of T_C come in conjugate pairs, the other eigenvalue (if any) must be real. Therefore, T has at most 2 eigenvalues and T_C has no nonreal eigenvalues.

11.3. 9B: Operators on Real Inner Product Spaces

Theorem 11.3.1 (Characterization of normal operators when F = R) Suppose V is a real inner product space and T ∈ L(V). Then the following are equivalent:

1. T is normal.

2. There is an orthonormal basis of V with respect to which T has a block diagonal matrix such that each block is a 1-by-1 matrix or a 2-by-2 matrix of the form

[ a  −b ]
[ b   a ]

with b > 0.

By looking at the characteristic polynomial of a normal operator T, one can count the number of 2-by-2 blocks on the diagonal.
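A quick numpy illustration (with sample block values, not from the text) of the direction 2 ⇒ 1: a block diagonal matrix built from 1×1 blocks and 2×2 blocks of the form (a, −b; b, a) is normal.

```python
import numpy as np

def rot_block(a, b):
    """2x2 block of the form [[a, -b], [b, a]]."""
    return np.array([[a, -b], [b, a]])

# Block diagonal: one 1x1 block and two 2x2 blocks of the stated form (sample values).
M = np.zeros((5, 5))
M[0, 0] = 4.0
M[1:3, 1:3] = rot_block(1.0, 2.0)
M[3:5, 3:5] = rot_block(-0.5, 3.0)

# Normal means M M^T = M^T M.
assert np.allclose(M @ M.T, M.T @ M)
```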

Using theorem 11.3.1 and theorem 9.1.4, one can easily prove the Real Spectral Theorem 9.3.2.

Theorem 11.3.2 (Description of isometries when F = R) Suppose V is a real inner product space and S ∈ L(V). Then the following are equivalent:

1. S is an isometry.

2. There is an orthonormal basis of V with respect to which S has a block diagonal matrix such that each block on the diagonal is a 1-by-1 matrix containing 1 or −1, or is a 2-by-2 matrix of the form

[ cos θ  −sin θ ]
[ sin θ   cos θ ]

with θ ∈ (0, 2π).

We can also define an inner product on the complexification V_C as indicated in exercise 3, i.e.

⟨u + iv, x + iy⟩ = ⟨u, x⟩ + ⟨v, y⟩ + (⟨v, x⟩ − ⟨u, y⟩) i.

11.4. Exercises 9B

1. From theorem 11.3.2, S has a block diagonal 3-by-3 matrix, so at least one block on the diagonal is a 1-by-1 matrix. If it's the first block, pick x = (1, 0, 0); if it's the last block, pick x = (0, 0, 1).

2. Obviously true from theorem 11.3.2.


3. Go back to the definition of an inner product.
Check positivity and definiteness: ⟨u + iv, u + iv⟩ = ⟨u, u⟩ + ⟨v, v⟩ ≥ 0, and it is 0 only when u = v = 0.
Check additivity in the first slot and homogeneity in the first slot.
Check conjugate symmetry:

conj(⟨u + iv, x + iy⟩) = ⟨u, x⟩ + ⟨v, y⟩ − (⟨v, x⟩ − ⟨u, y⟩) i
 = ⟨x, u⟩ + ⟨y, v⟩ + (⟨y, u⟩ − ⟨x, v⟩) i
 = ⟨x + iy, u + iv⟩.

4. It suffices to show that ⟨T_C(u + iv), x + iy⟩ = ⟨u + iv, (T*)_C(x + iy)⟩.

5. If T is self-adjoint then T_C is self-adjoint according to exercise 4, which means all eigenvalues of T_C are real according to theorem 9.1.5. By the Complex Spectral Theorem 9.3.1, V_C has an orthonormal basis consisting of eigenvectors of T_C. This implies V_C = ⊕_{i=1}^m E(λ_i, T_C).
In each eigenspace E(λ_i, T_C), there exists a basis consisting of vectors in V. By applying the Gram–Schmidt procedure 8.3.2 to this basis, we obtain an orthonormal basis of E(λ_i, T_C) consisting of vectors in V. Combining all these vectors with the fact that eigenvectors corresponding to distinct eigenvalues of a normal operator are orthogonal (9.1.7), we obtain an orthonormal basis of V consisting of eigenvectors of T.

6. Look at the proof and write out a matrix in which B is nonzero. Say M(T) =

[ 1  2 ]
[ 0  1 ]

with respect to the standard basis, where T ∈ L(R^2). Then U = span((1, 0)) is invariant under T, but U⊥ = span((0, 1)) is not.

7. Let such a basis of V be v_1, ..., v_n and show that Tv_i = (T_1 ··· T_m) v_i.

8. According to exercise 21 (7A), the list

1/√(2π), cos x/√π, ..., cos nx/√π, sin x/√π, ..., sin nx/√π

is an orthonormal basis of V, and the matrix of D with respect to (a reordering of) this basis has the desired form in 11.3.1. In particular, U_i = span(sin ix/√π, cos ix/√π) is invariant under D with

M(D|_{U_i}) = [ 0  −i ]
              [ i   0 ],

which has the form (a, −b; b, a) with a = 0 and b = i > 0.


12. Chapter 10: Trace and Determinant

12.1. 10A: Trace

Proposition 12.1.1 (Matrix of the product of linear maps) Suppose u_1, ..., u_n and v_1, ..., v_n and w_1, ..., w_n are bases of V. Suppose S, T ∈ L(V). Then

M(ST, (u_1, ..., u_n), (w_1, ..., w_n)) = M(S, (v_1, ..., v_n), (w_1, ..., w_n)) M(T, (u_1, ..., u_n), (v_1, ..., v_n)).

Corollary 12.1.2 (Change of basis) Suppose T ∈ L(V). Let u_1, ..., u_n and v_1, ..., v_n be bases of V. Let A = M(I, (u_1, ..., u_n), (v_1, ..., v_n)). Then

M(T, (u_1, ..., u_n)) = A^{-1} M(T, (v_1, ..., v_n)) A.

Theorem 12.1.3 (Trace of an operator equals the trace of its matrix) For T ∈ L(V), trace T = trace M(T) (= trace M(T_C) if V is real). In other words, for any basis of V, the sum of the diagonal entries of M(T) is constant and equals the sum of the eigenvalues of T (of T_C if V is a real vector space), with each eigenvalue repeated according to its multiplicity.
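This can be illustrated numerically (a sketch with random sample matrices; numpy assumed): the diagonal sum is unchanged under a change of basis A^{-1} M A and equals the sum of the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))   # matrix of T in one basis
A = rng.standard_normal((4, 4))   # change-of-basis matrix (invertible with high probability)

# The trace is invariant under change of basis ...
M2 = np.linalg.inv(A) @ M @ A
assert np.allclose(np.trace(M2), np.trace(M))

# ... and equals the sum of eigenvalues, with multiplicities counted.
assert np.allclose(np.trace(M), np.sum(np.linalg.eigvals(M)))
```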

12.2. Exercises 10A

1. If A = M(T, (v_1, ..., v_n)) is invertible, there exists an inverse B of A. We define an operator S ∈ L(V) such that M(S, (v_1, ..., v_n)) = B. By using proposition 12.1.1, we can show that S is the inverse of T.
If T is invertible, then there exists an inverse T^{-1} of T. Hence, applying proposition 12.1.1 to the matrices M(TT^{-1}, (v_1, ..., v_n)) and M(T^{-1}T, (v_1, ..., v_n)), we obtain that M(T, (v_1, ..., v_n)) is invertible.

2. Reduce to working with operators. Define S, T ∈ L(V) such that M(S, (v_1, ..., v_n)) = A and M(T, (v_1, ..., v_n)) = B; then ST = I as M(ST) = AB = I. Applying exercise 10 (3D), we get TS = I, so BA = I.

3. Consider an arbitrary basis v_1, ..., v_n of V. Let Tv_1 = ∑_{i=1}^n α_i v_i. Since M(T, (v_1, v_2, ..., v_n)) = M(T, (2v_1, v_2, ..., v_n)), we get α_i = 2α_i for all i ≠ 1, which implies α_i = 0 for all i ≠ 1. Hence Tv_1 = α_1 v_1. Similarly, we obtain Tv_j = α_j v_j for all j.
On the other hand, since M(T, (v_1, v_2, ..., v_n)) = M(T, (v_2, v_1, ..., v_n)), we get α_1 = α_2. As a result, we obtain T = αI.

4. Since Tv_k = u_k, we have M(T, (v_1, ..., v_n), (u_1, ..., u_n)) = I. Therefore, by applying proposition 12.1.1, we obtain

M(T, (v_1, ..., v_n)) = M(I, (u_1, ..., u_n), (v_1, ..., v_n)).


5. Reduce to working with an operator. Let T ∈ L(V) be the operator with M(T, (v_1, ..., v_n)) = B. Since V is a complex vector space, according to theorem 10.3.3 (or 10.7.2) there exists a basis u_1, ..., u_n such that M(T, (u_1, ..., u_n)) is an upper-triangular matrix. Now apply corollary 12.1.2 and we are done.

6. Let T ∈ L(R^2) with

M(T) = [ 0   1 ]
       [ −1  0 ].

Then trace (T^2) = −2 < 0.

7. trace (T^2) = ∑_{i=1}^m λ_i^2 ≥ 0.

8. If w = 0 then trace T = 0. If w ≠ 0, then choose a basis of V containing w; with this, we obtain trace T = ⟨w, v⟩.

9. If P^2 = P then, according to exercise 7 (8C), the characteristic polynomial of P (or of P_C) is z^m (z − 1)^n, where n = dim range P = dim range P_C. It follows that trace P = dim range P according to the definition of the trace of an operator.

10. For an orthonormal basis v_1, ..., v_n of V, according to theorem 9.1.4 (7A), M(T, (v_1, ..., v_n)) is the conjugate transpose of M(T*, (v_1, ..., v_n)). Therefore, trace T* equals the complex conjugate of trace T according to theorem 12.1.3 (10A).

11. From Theorem 9.5.2, if $T$ is positive then all eigenvalues of $T$ are nonnegative. As $\operatorname{trace} T$ is the sum of all eigenvalues and $\operatorname{trace} T = 0$, all eigenvalues are $0$. Since a positive operator is self-adjoint, hence diagonalizable by the spectral theorem, $T = 0$.

12. We have $\operatorname{trace}(PQ) = \operatorname{trace} \mathcal{M}((PQ)_{\mathbb{C}}) = \operatorname{trace}(\mathcal{M}(P_{\mathbb{C}}) \mathcal{M}(Q_{\mathbb{C}}))$. With this, we can choose bases for $P_{\mathbb{C}}$ and $Q_{\mathbb{C}}$ as in Theorem 10.3.3 (8B) so that $\mathcal{M}(P_{\mathbb{C}})$ and $\mathcal{M}(Q_{\mathbb{C}})$ are upper-triangular matrices. Since $P, Q$ are orthogonal projections, so are $P_{\mathbb{C}}, Q_{\mathbb{C}}$. Hence $P_{\mathbb{C}}^2 = P_{\mathbb{C}}$ and $Q_{\mathbb{C}}^2 = Q_{\mathbb{C}}$ according to Theorem 8.5.5 (6C). This implies that the eigenvalues of $P_{\mathbb{C}}$ and $Q_{\mathbb{C}}$ are only $0$ and $1$. This means $\operatorname{trace} \mathcal{M}(P_{\mathbb{C}}) \mathcal{M}(Q_{\mathbb{C}}) \geq 0$, as desired.

13. $51 - 40 + 1 = 12$ and $12 - 24 - (-48) = 36$, so the third eigenvalue is $36$.

14. We have $\operatorname{trace}(cT) = \operatorname{trace} \mathcal{M}(cT) = \operatorname{trace} c\mathcal{M}(T) = c \operatorname{trace} T$.

15. We have
$$\operatorname{trace}(ST) = \operatorname{trace} \mathcal{M}(ST) = \operatorname{trace}(\mathcal{M}(S)\mathcal{M}(T)) = \operatorname{trace}(\mathcal{M}(T)\mathcal{M}(S)) = \operatorname{trace} \mathcal{M}(TS) = \operatorname{trace} TS.$$
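The identity is easy to check numerically; a small sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 5))
T = rng.standard_normal((5, 5))

# The trace is invariant under swapping the factors of a product,
# even though ST != TS in general.
assert np.isclose(np.trace(S @ T), np.trace(T @ S))
assert not np.allclose(S @ T, T @ S)
```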

16. Let $S = T = I$ with $\dim V = 2$; then $\operatorname{trace} S = \operatorname{trace} T = \operatorname{trace} ST = 2$, and therefore $\operatorname{trace} ST \neq (\operatorname{trace} S)(\operatorname{trace} T)$.

17. Work with $\mathcal{M}(T)$ and $\mathcal{M}(S)$. Choose $S_{i,j} \in \mathcal{L}(V)$ so that $\mathcal{M}(S_{i,j})_{i,j} = 1$ while the rest of the entries are $0$. Then $\operatorname{trace}(TS_{i,j}) = \mathcal{M}(T)_{j,i}$, so the hypothesis implies $\mathcal{M}(T)_{j,i} = 0$ for all $i, j$. Thus, this concludes $T = 0$.


18. For an orthonormal basis $e_1, \dots, e_m$ of $V$, from Theorem 9.1.4 (7A), $\mathcal{M}(T, (e_1, \dots, e_m))$ is the conjugate transpose of $\mathcal{M}(T^*, (e_1, \dots, e_m))$. This means the $i$-th row of $\mathcal{M}(T^*)$ is the conjugate of the $i$-th column of $\mathcal{M}(T)$. Hence $\|Te_i\|^2$ equals the inner product of these two vectors, which is the $(i, i)$-th entry of $\mathcal{M}(T^*T, (e_1, \dots, e_m))$. Thus $\operatorname{trace}(T^*T) = \sum_{i=1}^m \|Te_i\|^2$. This implies that the sum is independent of the choice of orthonormal basis.
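Both claims can be checked numerically. The sketch below (assuming NumPy) identifies $\operatorname{trace}(T^*T)$ with the sum of squared column norms, and uses a random unitary change of basis to illustrate basis independence:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # M(T) in an orthonormal basis

# The i-th column of A is the coordinate vector of T e_i, so
# sum_i ||T e_i||^2 is the sum of squared column norms, which
# equals trace(A^H A) (the squared Frobenius norm of A).
col_norms_sq = sum(np.linalg.norm(A[:, i]) ** 2 for i in range(n))
assert np.isclose(np.trace(A.conj().T @ A).real, col_norms_sq)

# Basis independence: conjugating by a random unitary Q (a change of
# orthonormal basis) leaves the Frobenius norm unchanged.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
B = Q.conj().T @ A @ Q
assert np.isclose(np.linalg.norm(A, 'fro'), np.linalg.norm(B, 'fro'))
```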

19. From the previous exercise, we have $\langle T, T \rangle \geq 0$, and it is $0$ only when $\operatorname{trace} TT^* = 0$, which implies $\|T^* e_i\| = 0$ for an orthonormal basis $e_1, \dots, e_m$ of $V$. This implies $T^* = 0$, so $T = 0$. Thus, positivity and definiteness are proven.
Additivity in the first slot is true because of the distributive property for matrix addition and matrix multiplication, Exercise 13 (3C).
Homogeneity in the first slot is true using Exercise 14 (10A).
Conjugate symmetry is true because $ST^*$ is the adjoint of $TS^*$; according to Exercise 10 (10A), $\langle S, T \rangle = \operatorname{trace}(ST^*) = \overline{\operatorname{trace}(TS^*)} = \overline{\langle T, S \rangle}$.

20. Let $e_1, \dots, e_n$ be the orthonormal basis of $V$ which gives the desired matrix. From Exercise 18 (10A), we find
$$\operatorname{trace}(T^*T) = \sum_{i=1}^n \|Te_i\|^2 = \sum_{k=1}^n \sum_{j=1}^n |A_{j,k}|^2.$$
It suffices to show $\sum_{i=1}^n |\lambda_i|^2 \leq \operatorname{trace}(T^*T)$. We pick a different basis of $V$: an orthonormal basis $e_1, \dots, e_n$ as in Theorem 10.3.3 (8B), so that $\mathcal{M}(T)$ is an upper-triangular matrix with all the eigenvalues on its diagonal. Hence $\|Te_i\|^2 \geq |\lambda_i|^2$ for all $1 \leq i \leq n$. Combining with Exercise 18 (10A), we obtain the desired inequality.
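This inequality (a form of Schur's inequality) is easy to test numerically; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

eigvals = np.linalg.eigvals(A)
sum_eig_sq = np.sum(np.abs(eigvals) ** 2)
sum_entries_sq = np.sum(np.abs(A) ** 2)  # = trace(A^H A), squared Frobenius norm

# The sum of squared eigenvalue magnitudes never exceeds the sum of
# squared entry magnitudes (with equality iff A is normal).
assert sum_eig_sq <= sum_entries_sq + 1e-9
```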

21. Applying Exercise 18 (10A), for any orthonormal basis $e_1, \dots, e_n$ of $V$ with $e_1 = v$, we have
$$\operatorname{trace}(TT^*) = \sum_{i=1}^n \|T^* e_i\|^2 \leq \sum_{i=1}^n \|T e_i\|^2 = \operatorname{trace}(T^*T).$$
On the other hand, according to Exercise 15 (10A), $\operatorname{trace}(TT^*) = \operatorname{trace}(T^*T)$. Thus equality holds, so $\|T^* e_i\| = \|T e_i\|$ for each $i$; in particular $\|T^* v\| = \|T v\|$. Since this is true for any $v \in V$, we obtain that $T$ is normal.


13. Summary

Example 13.0.1 (Inner product space of continuous real-valued functions). Let $C[-\pi, \pi]$ be the vector space of continuous real-valued functions on $[-\pi, \pi]$ with inner product
$$\langle f, g \rangle = \int_{-\pi}^{\pi} f(x) g(x) \, dx.$$
For a positive integer $n$:
1. (Exercise 4 (6B)) $\frac{1}{\sqrt{2\pi}}, \frac{\cos x}{\sqrt{\pi}}, \dots, \frac{\cos nx}{\sqrt{\pi}}, \frac{\sin x}{\sqrt{\pi}}, \frac{\sin 2x}{\sqrt{\pi}}, \dots, \frac{\sin nx}{\sqrt{\pi}}$ is an orthonormal list of vectors in $C[-\pi, \pi]$.
2. Let $V = \operatorname{span}(1, \cos x, \cos 2x, \dots, \cos nx, \sin x, \sin 2x, \dots, \sin nx)$ be a subspace of $C[-\pi, \pi]$.
   a) Define $D \in \mathcal{L}(V)$ by $Df = f'$. Then $D^* = -D$, which implies $D$ is normal but not self-adjoint (Exercise 21 (7A)). From the previous remark, we can also find an orthonormal basis of $V$ such that the matrix of $D$ has the form described in Theorem 11.3.1 (see Exercise 8 (9B)).
   b) Define $T \in \mathcal{L}(V)$ by $Tf = f''$. Then $T$ is self-adjoint and $-T$ is a positive operator (Exercise 14 (7C)).
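The orthonormality of the trigonometric list can be verified numerically; a sketch assuming NumPy, with the inner-product integral approximated by the trapezoid rule on a fine grid:

```python
import numpy as np

# Numerical check of orthonormality for n = 3.
n = 3
x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]

basis = [np.full_like(x, 1 / np.sqrt(2 * np.pi))]
basis += [np.cos(k * x) / np.sqrt(np.pi) for k in range(1, n + 1)]
basis += [np.sin(k * x) / np.sqrt(np.pi) for k in range(1, n + 1)]

def inner(f, g):
    # <f, g> = integral of f*g over [-pi, pi], trapezoid rule
    h = f * g
    return dx * (h.sum() - 0.5 * (h[0] + h[-1]))

# The Gram matrix of an orthonormal list is the identity.
gram = np.array([[inner(f, g) for g in basis] for f in basis])
assert np.allclose(gram, np.eye(2 * n + 1), atol=1e-6)
```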

Example 13.0.2 (Projection operator). A linear operator $P$ on a vector space $V$ is a projection on $V$ if $P^2 = P$; that is, it leaves its image unchanged.
1. $V = \operatorname{null} P \oplus \operatorname{range} P$ (Exercise 4 (5B)).
2. The characteristic polynomial of $P$ is $z^m (z-1)^n$ where $m = \dim \operatorname{null} P$ and $n = \dim \operatorname{range} P$ (Exercise 7 (8C)).
3. $\operatorname{trace} P = \dim \operatorname{range} P$ (Exercise 9 (10A)).
4. $P$ is an orthogonal projection (Definition 8.5.3) iff it is self-adjoint (Exercise 11 (7A)).
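These facts can be seen concretely on a random orthogonal projection matrix; a sketch assuming NumPy:

```python
import numpy as np

# Orthogonal projection of R^4 onto a random 2-dimensional subspace:
# with Q having orthonormal columns spanning it, P = Q Q^T.
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((4, 2)))
P = Q @ Q.T

assert np.allclose(P @ P, P)          # P^2 = P
assert np.allclose(P, P.T)            # orthogonal projection <=> self-adjoint
assert np.isclose(np.trace(P), 2.0)   # trace P = dim range P
# the eigenvalues of a projection are only 0 and 1
assert np.allclose(np.sort(np.linalg.eigvalsh(P)), [0, 0, 1, 1], atol=1e-9)
```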

Are all norms equivalent? See https://math.stackexchange.com/questions/112985/every-linear-mapping-on-a-finite-dimensional-space-is-continuous/833995#833995.
Everything we have just learned is about normed vector spaces, i.e., vector spaces over the real or complex numbers on which a norm is defined. See the wiki for more.
Any two norms on a finite-dimensional space are equivalent; see Exercise 29 (6A).
Every inner product space is a normed space, but not every normed space is an inner product space. See here.
The motivation for the equivalence of norms on $V$ comes from the fact that we want the two norms to define the same open subsets of $V$. See here for more, and here for proofs that norms on a finite-dimensional normed space are equivalent.
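Equivalence of norms on $\mathbb{R}^n$ can be illustrated with the standard constants relating the $\ell^\infty$, $\ell^2$, and $\ell^1$ norms; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10
for _ in range(100):
    x = rng.standard_normal(n)
    inf = np.max(np.abs(x))       # l-infinity norm
    two = np.linalg.norm(x)       # l-2 norm
    one = np.sum(np.abs(x))       # l-1 norm
    # standard equivalence chain on R^n: each norm bounds the others
    # up to a constant depending only on n.
    assert inf <= two <= one <= n * inf
```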

14. Interesting problems

1. How many $k$-dimensional subspaces are there in an $n$-dimensional vector space $V$ over $\mathbb{F}_p$, for $p$ a prime? (See here)

Proof. There are $(p^n - 1)(p^n - p) \cdots (p^n - p^{k-1})$ linearly independent lists of length $k$ in $V$, and each $k$-dimensional subspace of $V$ contributes $(p^k - 1)(p^k - p) \cdots (p^k - p^{k-1})$ such lists, so there are
$$\frac{(p^n - 1)(p^n - p) \cdots (p^n - p^{k-1})}{(p^k - 1)(p^k - p) \cdots (p^k - p^{k-1})}$$
$k$-dimensional subspaces of $V$.
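This count (the Gaussian binomial coefficient) is easy to compute exactly with integer arithmetic; a small sketch:

```python
def num_subspaces(n, k, p):
    """Number of k-dimensional subspaces of an n-dimensional space over F_p,
    i.e. the Gaussian binomial coefficient [n choose k]_p."""
    num = den = 1
    for i in range(k):
        num *= p**n - p**i   # independent lists of length k in F_p^n
        den *= p**k - p**i   # such lists per k-dimensional subspace
    return num // den        # always an exact integer

# Sanity checks: the 1-dim subspaces of F_3^2 are the 4 points of the
# projective line over F_3, and k vs n-k dimensional subspaces match.
assert num_subspaces(2, 1, 3) == 4
assert num_subspaces(5, 2, 2) == num_subspaces(5, 3, 2) == 155
```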

2. (AoPS) Let $f \in \mathbb{Q}[X]$ be a polynomial of degree $n > 0$. Let $p_1, p_2, \dots, p_{n+1}$ be distinct prime numbers. Show that there exists a nonzero polynomial $g \in \mathbb{Q}[X]$ such that $fg = \sum_{i=1}^{n+1} c_i X^{p_i}$ with $c_i \in \mathbb{Q}$.

Proof. Let $\mathcal{P}_{n-1}(\mathbb{Q})$ be the $\mathbb{Q}$-vector space of polynomials with degree at most $n - 1$. For each $i$, write $X^{p_i} = f q_i + r_i$ where $r_i$ is the remainder of $X^{p_i}$ modulo $f$. Since $\deg f = n$, we have $r_i \in \mathcal{P}_{n-1}(\mathbb{Q})$ for all $i$. Now $r_1, \dots, r_{n+1}$ are $n + 1$ vectors in the $n$-dimensional space $\mathcal{P}_{n-1}(\mathbb{Q})$, so they are linearly dependent: there exist $c_i \in \mathbb{Q}$, not all zero, such that $\sum_{i=1}^{n+1} c_i r_i = 0$.
Therefore, we have
$$\sum_{i=1}^{n+1} c_i X^{p_i} = \sum_{i=1}^{n+1} c_i (f q_i + r_i) = f \sum_{i=1}^{n+1} c_i q_i = fg.$$
Since the $p_i$ are distinct and some $c_i \neq 0$, the left-hand side is a nonzero polynomial, so $g = \sum_{i=1}^{n+1} c_i q_i \neq 0$.
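The construction can be traced numerically on a small instance. The sketch below assumes NumPy and picks the hypothetical instance $f = X^2 + 1$ (so $n = 2$) with primes $2, 3, 5$; it finds a linear relation among the remainders and checks that the resulting combination of monomials is divisible by $f$:

```python
import numpy as np

# Coefficient arrays are in descending order of degree (np.polydiv convention).
f = np.array([1.0, 0.0, 1.0])            # f = X^2 + 1

# Remainders of X^p modulo f: each has degree < 2, so the three
# remainders live in a 2-dimensional space and must be dependent.
rems = []
for p in (2, 3, 5):
    Xp = np.zeros(p + 1); Xp[0] = 1.0    # the monomial X^p
    _, r = np.polydiv(Xp, f)
    rems.append(np.pad(r, (2 - len(r), 0)))
R = np.column_stack(rems)                # 2 x 3 matrix of remainder coefficients

# A null vector c of R gives sum_i c_i X^{p_i} = f * g.
_, _, Vh = np.linalg.svd(R)
c = Vh[-1]
lhs = np.zeros(6)                        # c_1 X^2 + c_2 X^3 + c_3 X^5
for ci, p in zip(c, (2, 3, 5)):
    lhs[5 - p] += ci
g, rem = np.polydiv(lhs, f)              # the division by f should be exact
assert np.allclose(rem, 0.0, atol=1e-10)
assert np.linalg.norm(g) > 1e-8          # and g is nonzero
```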

3. (MSE) Show that any $n \times n$ matrix $A$ can be written as a sum of two nonsingular matrices.

Proof. Two approaches are presented in the link.
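One standard approach translates directly into code: pick a scalar $t$ that is nonzero and not an eigenvalue of $A$; then $A = (A - tI) + tI$ with both summands invertible. A sketch assuming NumPy (this is one of several possible constructions, not necessarily the ones in the link):

```python
import numpy as np

def split_nonsingular(A):
    """Write A = B + C with B, C invertible: choose t nonzero and not an
    eigenvalue of A, then B = A - t*I and C = t*I are both nonsingular."""
    n = A.shape[0]
    eigs = np.linalg.eigvals(A)
    t = np.max(np.abs(eigs)) + 1.0       # larger than every |eigenvalue|, and nonzero
    return A - t * np.eye(n), t * np.eye(n)

A = np.arange(9.0).reshape(3, 3)         # a singular matrix (rank 2)
B, C = split_nonsingular(A)
assert np.allclose(B + C, A)
assert abs(np.linalg.det(B)) > 1e-9 and abs(np.linalg.det(C)) > 1e-9
```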

15. New knowledge

1. Rational canonical form, AoPS

2. Difference between $\mathbb{R}$ and $\mathbb{C}$? Algebraically closed fields?