7/29/2019 115b hw3
Math 115B, Winter 2007: Homework 3
Exercise 3.1. (FIS, 6.5 Exercise 10) Let $A$ be an $n \times n$ real symmetric [complex normal] matrix. Then
$$\operatorname{tr}(A) = \sum_{i=1}^{n} \lambda_i, \qquad \operatorname{tr}(A^*A) = \sum_{i=1}^{n} |\lambda_i|^2,$$
where the $\lambda_i$'s are the (not necessarily distinct) eigenvalues of $A$.
Proof. By Theorems 6.19 and 6.20 from FIS, $A$ is orthogonally [unitarily] equivalent to a diagonal matrix $D$ with the eigenvalues of $A$ on the diagonal, i.e. there exists an orthogonal [unitary] matrix $P$ such that $A = PDP^*$. Since similar matrices have the same trace, $\operatorname{tr}(A) = \operatorname{tr}(PDP^*)$. Moreover, for any square matrices $M$ and $N$, $\operatorname{tr}(MN) = \operatorname{tr}(NM)$, so in particular,
$$\operatorname{tr}(PDP^*) = \operatorname{tr}((PD)P^*) = \operatorname{tr}(P^*(PD)) = \operatorname{tr}((P^*P)D) = \operatorname{tr}(ID) = \operatorname{tr}(D) = \sum_{i=1}^{n} \lambda_i.$$
For the second claim, note that $A^* = (PDP^*)^* = PD^*P^*$, and that the diagonal entries of $D^*$ are the $\overline{\lambda_i}$'s. This implies that $D^*D$ is a diagonal matrix with entries $|\lambda_i|^2$, so that we have
$$\operatorname{tr}(A^*A) = \operatorname{tr}(PD^*P^*PDP^*) = \operatorname{tr}(PD^*DP^*) = \operatorname{tr}(D^*DP^*P) = \operatorname{tr}(D^*D) = \sum_{i=1}^{n} |\lambda_i|^2.$$
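These two trace identities are easy to sanity-check numerically. The following sketch is not part of FIS; it assumes NumPy, and the size $n = 4$, the seed, and the construction of a normal matrix as $PDP^*$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Prescribed complex eigenvalues, and a random unitary P via QR
eigs = rng.standard_normal(n) + 1j * rng.standard_normal(n)
P, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

# A = P D P* is complex normal by construction
A = P @ np.diag(eigs) @ P.conj().T

# tr(A) equals the sum of the eigenvalues
assert np.isclose(np.trace(A), eigs.sum())
# tr(A*A) equals the sum of |lambda_i|^2
assert np.isclose(np.trace(A.conj().T @ A), np.sum(np.abs(eigs) ** 2))
```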
Exercise 3.2. (FIS, 6.5 Exercise 31) Let $V$ be a finite-dimensional complex inner product space, and let $u$ be a unit vector in $V$. Define the Householder operator $H_u : V \to V$ by $H_u(x) = x - 2\langle x, u\rangle u$ for all $x \in V$.
(a) $H_u$ is linear.
(b) $H_u(x) = x$ if and only if $\langle x, u\rangle = 0$.
(c) $H_u(u) = -u$.
(d) $H_u^* = H_u$ and $H_u^2 = I_V$, and hence $H_u$ is a unitary operator on $V$.
Note that if $V$ is a real inner product space, $H_u$ is a reflection.
Proof. (a) For any $a \in \mathbb{C}$ and $x, y \in V$,
$$H_u(ax + y) = ax + y - 2\langle ax + y, u\rangle u = ax + y - 2a\langle x, u\rangle u - 2\langle y, u\rangle u = a(x - 2\langle x, u\rangle u) + (y - 2\langle y, u\rangle u) = aH_u(x) + H_u(y).$$
(b) For any $x \in V$, since $u \neq 0$,
$$H_u(x) = x \iff x - 2\langle x, u\rangle u = x \iff 2\langle x, u\rangle u = 0 \iff \langle x, u\rangle = 0.$$
(c) $H_u(u) = u - 2\langle u, u\rangle u = u - 2u = -u$.
(d) For any $x$ and $y \in V$, we have
$$\langle x, H_u(y)\rangle = \langle x, y - 2\langle y, u\rangle u\rangle = \langle x, y\rangle - 2\overline{\langle y, u\rangle}\langle x, u\rangle = \langle x, y\rangle - 2\langle u, y\rangle\langle x, u\rangle = \langle x - 2\langle x, u\rangle u, y\rangle = \langle H_u(x), y\rangle,$$
so the uniqueness of the adjoint guarantees that $H_u^* = H_u$. To see that $H_u^2 = I_V$, simply compute
$$H_u^2(x) = H_u(x - 2\langle x, u\rangle u) = H_u(x) - 2\langle x, u\rangle H_u(u) = (x - 2\langle x, u\rangle u) + 2\langle x, u\rangle u = x = I_V(x).$$
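In matrix terms, $H_u$ is $I - 2uu^*$, and properties (b)-(d) can be verified numerically. This sketch is not from FIS; the dimension $n = 5$ and the random vectors are arbitrary, and NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Unit vector u and the matrix of H_u(x) = x - 2<x,u>u, i.e. I - 2 u u*
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u = u / np.linalg.norm(u)
H = np.eye(n) - 2 * np.outer(u, u.conj())

# (c) H_u(u) = -u
assert np.allclose(H @ u, -u)
# (d) H_u is self-adjoint, and H_u^2 = I, hence H_u is unitary
assert np.allclose(H, H.conj().T)
assert np.allclose(H @ H, np.eye(n))
# (b) H_u fixes any x orthogonal to u
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = x - np.vdot(u, x) * u          # remove the component of x along u
assert np.allclose(H @ x, x)
```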
Exercise 3.3. (FIS, 6.6 Exercise 6) Let $T$ be a normal operator on a finite-dimensional inner product space. If $T$ is a projection, then $T$ is also an orthogonal projection.
Proof. An orthogonal projection is simply a self-adjoint projection, so it suffices to prove that $T$ is self-adjoint. As a corollary to the spectral theorem, we know that a normal operator is self-adjoint if and only if its eigenvalues are real. The eigenvalues of a projection are real. Indeed, if $\lambda$ is an eigenvalue and $x$ an eigenvector for $\lambda$, then
$$\lambda x = T(x) = T^2(x) = \lambda^2 x,$$
therefore $\lambda(\lambda - 1) = 0$, i.e. $\lambda = 0$ or $1$. Hence $T$ is an orthogonal projection.
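The normality hypothesis is what rules out oblique projections. A small numerical contrast (not from FIS; the two $2 \times 2$ matrices are hand-picked examples, NumPy assumed): an orthogonal projection is self-adjoint and hence normal, while an oblique projection is idempotent but fails normality.

```python
import numpy as np

# Orthogonal projection onto span{e1}: idempotent and self-adjoint
P_orth = np.array([[1.0, 0.0],
                   [0.0, 0.0]])
assert np.allclose(P_orth @ P_orth, P_orth)
assert np.allclose(P_orth, P_orth.T)

# Oblique projection: idempotent but NOT normal, so the exercise's
# conclusion does not apply to it
P_obl = np.array([[1.0, 1.0],
                  [0.0, 0.0]])
assert np.allclose(P_obl @ P_obl, P_obl)
assert not np.allclose(P_obl @ P_obl.T, P_obl.T @ P_obl)
```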
Exercise 3.4. (FIS, 6.6 Exercise 7) Let $T$ be a normal operator on a finite-dimensional complex inner product space $V$. Using the spectral decomposition
$$T = \lambda_1 T_1 + \cdots + \lambda_k T_k,$$
we prove that
(a) If $g$ is a polynomial, then
$$g(T) = \sum_{i=1}^{k} g(\lambda_i) T_i.$$
(b) If $T^n = 0$ for some $n$, then $T = 0$.
(c) A linear operator $U : V \to V$ commutes with $T$ if and only if $U$ commutes with each $T_i$.
(d) There exists a normal operator $U : V \to V$ such that $U^2 = T$.
(e) $T$ is invertible if and only if $\lambda_i \neq 0$ for $1 \leq i \leq k$.
(f) $T$ is a projection if and only if every eigenvalue of $T$ is $1$ or $0$.
Proof. (a) We proceed by induction on $\deg(g)$. For constant polynomials, the result is trivial. Let $g = a_nX^n + \cdots + a_0$ and suppose that the result holds for any polynomial of degree at most $n - 1$. Recall that the spectral projections satisfy $T_rT_s = \delta_{rs}T_r$. Then
$$g(T) = a_nT^n + (a_{n-1}T^{n-1} + \cdots + a_0I)$$
$$= a_nT \cdot T^{n-1} + \sum_{i=0}^{n-1} a_i \sum_{j=1}^{k} \lambda_j^i T_j$$
(by the induction hypothesis applied to $a_{n-1}X^{n-1} + \cdots + a_0$, and below to $X$ and $X^{n-1}$)
$$= a_n \Big(\sum_{r=1}^{k} \lambda_r T_r\Big)\Big(\sum_{s=1}^{k} \lambda_s^{n-1} T_s\Big) + \sum_{i=0}^{n-1} \sum_{j=1}^{k} a_i \lambda_j^i T_j$$
$$= a_n \sum_{r,s=1}^{k} \lambda_r \lambda_s^{n-1}\, T_rT_s + \sum_{i=0}^{n-1} \sum_{j=1}^{k} a_i \lambda_j^i T_j$$
$$= \sum_{r=1}^{k} a_n\lambda_r^n\, T_r + \sum_{i=0}^{n-1} \sum_{j=1}^{k} a_i \lambda_j^i T_j = \sum_{i=0}^{n} \sum_{j=1}^{k} a_i \lambda_j^i T_j = \sum_{j=1}^{k} g(\lambda_j) T_j.$$
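The identity in (a) can be sanity-checked numerically. This sketch is not from the text; it assumes NumPy, builds an arbitrary Hermitian (hence normal) $T$, forms rank-one spectral projectors from its orthonormal eigenvectors, and compares $g(T)$ against $\sum_j g(\lambda_j)T_j$ for a sample polynomial.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Random Hermitian T; eigh returns real eigenvalues and orthonormal columns
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = B + B.conj().T
lam, V = np.linalg.eigh(T)
projectors = [np.outer(V[:, i], V[:, i].conj()) for i in range(n)]

def g(x):
    # sample polynomial g(X) = X^3 - 2X + 5 (an arbitrary choice)
    return x ** 3 - 2 * x + 5

# Left side: g applied to the operator; right side: sum of g(lambda_j) T_j
g_of_T = T @ T @ T - 2 * T + 5 * np.eye(n)
spectral_sum = sum(g(l) * P for l, P in zip(lam, projectors))
assert np.allclose(g_of_T, spectral_sum)
```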
(b) By (a), $T^n = \lambda_1^n T_1 + \cdots + \lambda_k^n T_k$. The spectral projections $T_1, \ldots, T_k$ are linearly independent in $\hom_F(V, V)$: indeed, $T_i$ acts as the identity on $W_i$ and as zero on every $W_j$ with $j \neq i$, so no nontrivial linear combination of them can vanish. Hence $T^n = 0$ forces $\lambda_i^n = 0$, i.e. $\lambda_i = 0$, for every $i = 1, \ldots, k$. In particular, the eigenvalues of $T$ are all zero, so $T = \lambda_1 T_1 + \cdots + \lambda_k T_k = 0$.
(c) If $U$ commutes with each $T_i$, then clearly
$$UT - TU = \lambda_1(UT_1 - T_1U) + \cdots + \lambda_k(UT_k - T_kU) = 0.$$
Conversely, as a corollary to the spectral theorem, each $T_i = g_i(T)$ for some polynomial $g_i$. If $UT = TU$, then $U$ commutes with every polynomial in $T$, and in particular $UT_i = T_iU$ for $i = 1, 2, \ldots, k$.
(d) Every complex number has a square root, so setting
$$U = \sum_{i=1}^{k} \sqrt{\lambda_i}\, T_i,$$
one sees immediately, using $T_iT_j = \delta_{ij}T_i$, that
$$U^2 = \sum_{i=1}^{k} (\sqrt{\lambda_i})^2 T_i = \sum_{i=1}^{k} \lambda_i T_i = T.$$
Moreover, since each $T_i$ is self-adjoint, $U^* = \sum_{i=1}^{k} \overline{\sqrt{\lambda_i}}\, T_i$, and therefore
$$U^*U = \sum_{i=1}^{k} \big|\sqrt{\lambda_i}\big|^2\, T_i = UU^*,$$
so $U$ is normal.
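A numerical sketch of (d), not from the text (NumPy assumed; the normal matrix $T = QDQ^*$ with complex eigenvalues and the seed are arbitrary choices): taking principal square roots of the eigenvalues yields a normal square root of $T$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# Normal T = Q D Q* with a random unitary Q and complex eigenvalues
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
lam = rng.standard_normal(n) + 1j * rng.standard_normal(n)
T = Q @ np.diag(lam) @ Q.conj().T

# U = sum_i sqrt(lambda_i) T_i, i.e. Q diag(sqrt(lambda)) Q*
# (np.sqrt on a complex array takes the principal branch)
U = Q @ np.diag(np.sqrt(lam)) @ Q.conj().T

assert np.allclose(U @ U, T)                          # U^2 = T
assert np.allclose(U @ U.conj().T, U.conj().T @ U)    # U is normal
```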
(e) This one is almost tautological, since an eigenvector is by definition nonzero:
$$T \text{ is invertible} \iff \ker(T) = \{0\} \iff 0 \text{ is not an eigenvalue of } T \iff \lambda_i \neq 0 \text{ for } 1 \leq i \leq k.$$
(f) As we saw in (b), the $T_i$'s are linearly independent, so that
$$T^2 - T = \sum_{i=1}^{k} (\lambda_i^2 - \lambda_i) T_i = 0 \iff \lambda_i^2 - \lambda_i = 0 \text{ for } i = 1, 2, \ldots, k.$$
In other words, $T^2 = T$ if and only if every $\lambda_i$ is a root of $X(X - 1)$, i.e. $T^2 = T$ iff $\lambda_i = 0$ or $1$.
Exercise 3.5. (FIS, 6.6 Exercise 8) If $T$ is a normal operator on a complex finite-dimensional inner product space and $U$ is a linear operator that commutes with $T$, then $U$ commutes with $T^*$.
Proof. As a corollary to the Spectral Theorem, we know that $T^* = g(T)$ for some polynomial $g$. If $g(X) = a_nX^n + \cdots + a_0$, then
$$UT^* = Ug(T) = U(a_nT^n + \cdots + a_0I) = a_nUT^n + \cdots + a_0U$$
$$= a_n(TU)T^{n-1} + \cdots + a_0U = a_nT^nU + \cdots + a_0U$$
$$= (a_nT^n + \cdots + a_0I)U = g(T)U = T^*U.$$
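A quick numerical check of this result (not from FIS; NumPy assumed, and the choices of $T$ and of $U = T^2 + 3T + I$ as a commuting operator are arbitrary): any polynomial in a normal $T$ commutes with $T$, and indeed also commutes with $T^*$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4

# Normal T = Q D Q* with complex eigenvalues
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
lam = rng.standard_normal(n) + 1j * rng.standard_normal(n)
T = Q @ np.diag(lam) @ Q.conj().T

# U is a polynomial in T, hence commutes with T
U = T @ T + 3 * T + np.eye(n)
assert np.allclose(U @ T, T @ U)

# By the exercise, U must then also commute with T*
Tstar = T.conj().T
assert np.allclose(U @ Tstar, Tstar @ U)
```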
Exercise 3.6. (FIS, 6.6 Exercise 10) Simultaneous Diagonalization. Let $U$ and $T$ be normal operators on a finite-dimensional complex inner product space $V$ such that $TU = UT$. Prove that there exists an orthonormal basis for $V$ consisting of vectors that are eigenvectors of both $T$ and $U$.
Proof. Let $\lambda_1, \ldots, \lambda_k$ be the eigenvalues of $T$, and set $W_i = \ker(T - \lambda_iI)$. Each $W_i$ is trivially $T$- and $T^*$-invariant. That $W_i$ is $U$-invariant follows directly from the fact that $U$ and $T - \lambda_iI_V$ commute. Indeed,
$$(T - \lambda_iI_V)U = TU - \lambda_iU = UT - \lambda_iU = U(T - \lambda_iI_V),$$
so whenever $x \in W_i$,
$$(T - \lambda_iI_V)(U(x)) = U\big((T - \lambda_iI_V)(x)\big) = U(0) = 0,$$
i.e. $U(x) \in W_i$. By Exercise 3.5, $U$ commutes with $T^*$; taking adjoints, $U^*$ commutes with $T$, so replacing $U$ by $U^*$ in the above two lines shows that $W_i$ is also $U^*$-invariant. Since $W_i$ is both $T^*$- and $U^*$-invariant, its orthogonal complement $W_i^\perp$ is both $T$- and $U$-invariant. This allows us to consider $T$ and $U$ as operators on each $W_i$ and $W_i^\perp$.
From this point, the proof proceeds by induction on $\dim(V)$. If $\dim(V) = 1$ there is nothing to prove: every operator is diagonalizable. Suppose that the result holds for any space of dimension strictly less than $\dim(V)$. If $W_1 = V$, then $T = \lambda_1I_V$ and any orthonormal basis of eigenvectors of the normal operator $U$ (which exists by the spectral theorem) consists of simultaneous eigenvectors; otherwise $0 < \dim(W_1) < \dim(V)$, and likewise $\dim(W_1^\perp) < \dim(V)$. The restrictions of $T$ and $U$ to $W_1$ and to $W_1^\perp$ are still commuting normal operators, so by the induction hypothesis, $W_1$ and $W_1^\perp$ have orthonormal bases $\{u_i\}_{i \in I}$ and $\{w_j\}_{j \in J}$ consisting of simultaneous eigenvectors for $T$ and $U$. It suffices to prove that the union of these two bases is an orthonormal basis for $V$. Every vector $v \in V$ has a unique expression $v = u + w$ for some $u \in W_1$ and $w \in W_1^\perp$ (see Theorem 6.6, FIS), so the expression $v = \sum_i a_iu_i + \sum_j b_jw_j$ is unique. In particular, the zero vector is a unique linear combination, so that $\beta = \{u_i\}_{i \in I} \cup \{w_j\}_{j \in J}$ is linearly independent and spans $V$; since $W_1 \perp W_1^\perp$, the set $\beta$ is orthonormal. Hence $\beta$ is an orthonormal basis for $V$ consisting of simultaneous eigenvectors for $T$ and $U$.
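The conclusion can be illustrated numerically from the other direction (a sketch, not from the text; NumPy assumed, with an arbitrary common unitary eigenbasis $Q$): two normal operators sharing an orthonormal eigenbasis necessarily commute, and each column of $Q$ is an eigenvector of both.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4

# Common orthonormal eigenbasis Q (random unitary), two sets of eigenvalues
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
d1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
d2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
T = Q @ np.diag(d1) @ Q.conj().T
U = Q @ np.diag(d2) @ Q.conj().T

# T and U are normal and commute
assert np.allclose(T @ U, U @ T)

# Each column of Q is a simultaneous eigenvector of T and U,
# and the columns form an orthonormal basis
for i in range(n):
    v = Q[:, i]
    assert np.allclose(T @ v, d1[i] * v)
    assert np.allclose(U @ v, d2[i] * v)
assert np.allclose(Q.conj().T @ Q, np.eye(n))
```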