Correction Homework 9



    CORRECTION OF HOMEWORK

In all the following, we let F denote either R, the field of real numbers, or C, the field of complex numbers. We shall identify polynomials and polynomial functions. We let F[X] be the set of polynomials with coefficients in F, i.e. the set of functions P such that

P(X) = a_0 + a_1X + a_2X^2 + ... + a_nX^n = ∑_{i=0}^{n} a_iX^i

for some a_0, a_1, ..., a_n in F. For such a function, if a_n ≠ 0, we call n the degree of P. Finally, we let F_n[X] be the set of elements of F[X] of degree less than or equal to n.

    1. Exercise 9 page 123

Here, we give a possible correction of this exercise. See at the bottom for other possible proofs.

On F[X], we define the following transformations T, D : F[X] → F[X] by

(T(P))(X) = ∫_0^X P(x) dx = ∑_{i=0}^{n} a_i/(i+1) X^{i+1}    (1.1)

(D(P))(X) = P'(X) = ∑_{i=0}^{n} i a_i X^{i-1}    (1.2)

where we have expressed

P = a_0 + a_1X + a_2X^2 + ... + a_nX^n.    (1.3)

Show that T is nonsingular on F[X], but is not invertible.
T is nonsingular if and only if its null space is the zero space: null(T) = {0}. But if P expressed as in (1.3) is such that T(P) = 0, then we deduce that

a_0X + (a_1/2)X^2 + ... + (a_n/(n+1))X^{n+1} = 0,

and since (X, X^2, X^3, ..., X^{n+1}) is linearly independent, we deduce that

a_0 = a_1/2 = ... = a_n/(n+1) = 0,

from which it follows that P expressed as (1.3) is 0. Consequently, null(T) = {0} and T is nonsingular.

To prove that T is not invertible, we must prove that it is not onto (or at least, prove something that would imply it). Here, we remark that every polynomial Q in the range of T satisfies Q(0) = 0, and so it suffices to find a polynomial P such that P(0) ≠ 0. For example, P = X + 1 (or P = 1, or P = X^2 + 1, ...) is not in the range of T.

Find the null space of D

    Date: Wednesday March 25, 2008.



    Suppose that P given by (1.3) satisfies D(P) = 0, then, we get that

a_1 + 2a_2X + ... + na_nX^{n-1} = 0,

and since (1, X, X^2, ..., X^{n-1}) is linearly independent, we get that

a_1 = 2a_2 = ... = na_n = 0,

and consequently, P = a_0 is constant. So null(D) ⊆ Span(1) = F_0[X]. Conversely, using the definition of D, we get that every constant polynomial P = a_0 satisfies D(P) = 0. In conclusion, null(D) = Span(1).

Show that DT = Id_{F[X]} and that TD ≠ Id_{F[X]}.
Let P be any polynomial in F[X] written as in (1.3). Applying the definition of

    T in (1.1), we get that

T(P) = a_0X + (a_1/2)X^2 + ... + (a_n/(n+1))X^{n+1},

and applying the rule in (1.2), we get that

(D(T(P)))(X) = a_0 + 2 (a_1/2) X + ... + (n+1) (a_n/(n+1)) X^n = P.

Since P was an arbitrary polynomial, we get that DT(P) = P for all polynomials P, hence DT = Id_{F[X]}.

To prove that TD ≠ Id_{F[X]}, we need only find a polynomial Q such that TD(Q) ≠ Q. Inspired by the previous question, we can (for example) compute

TD(X + 1) = T(1) = X ≠ X + 1.

Consequently, TD ≠ Id_{F[X]}.
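Both formulas (1.1) and (1.2) act simply on the coefficient list (a_0, ..., a_n) of a polynomial, which makes the two computations above easy to check mechanically. The following Python sketch is purely illustrative (the coefficient-list representation and the names T and D are choices made here, not part of the homework):

```python
from fractions import Fraction

def T(p):
    # (1.1): a_i X^i  ->  a_i/(i+1) X^(i+1); the new constant term is 0
    return [Fraction(0)] + [Fraction(a) / (i + 1) for i, a in enumerate(p)]

def D(p):
    # (1.2): a_i X^i  ->  i*a_i X^(i-1); the derivative of a constant is 0
    return [i * Fraction(a) for i, a in enumerate(p)][1:] or [Fraction(0)]

P = [1, 2, 3]                      # 1 + 2X + 3X^2
assert D(T(P)) == P                # DT = Id on this example
assert T(D([1, 1])) == [0, 1]      # TD(X + 1) = X, so TD != Id
```

Exact rational arithmetic (`Fraction`) is used so that the division by i + 1 in T introduces no rounding.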

Show that

T(T(f)g) = (Tf)(Tg) - T(f(Tg))    (1.4)

for all f, g ∈ F[X].

One way to answer this question uses the following very useful principle in linear analysis: to verify that a linear relation/equality holds, it suffices to test it on a basis. And here, we want to check (1.4), which is linear in both f and g.

To see how this works, we first prove the following claim: suppose that (1.4) holds for every polynomial g and every f element of the basis B = (1, X, X^2, X^3, ...); then (1.4) holds for every polynomial f and g.

Indeed, let us assume the claim, and let P be a polynomial given by (1.3) and g be another polynomial. We get, using linearity, that

T(T(P)g) = T(T(a_0 + a_1X + ... + a_nX^n)g) = T((a_0T(1) + a_1T(X) + ... + a_nT(X^n))g)

= T(a_0T(1)g + a_1T(X)g + ... + a_nT(X^n)g)

= a_0T(T(1)g) + a_1T(T(X)g) + ... + a_nT(T(X^n)g)

and using the claim, for each term, we get that

T(T(P)g) = a_0((T1)(Tg) - T(1(Tg))) + a_1((TX)(Tg) - T(X(Tg))) + ...

+ a_n((TX^n)(Tg) - T(X^n(Tg)))

= a_0(T1)(Tg) + a_1(TX)(Tg) + ... + a_n(TX^n)(Tg)

- (a_0T(1(Tg)) + a_1T(X(Tg)) + ... + a_nT(X^n(Tg)))

= T(a_0 + a_1X + ... + a_nX^n)(Tg) - T((a_0 + a_1X + ... + a_nX^n)(Tg))

= (TP)(Tg) - T(P(Tg))


which proves that the claim is true. Hence, to prove (1.4), it suffices to prove it in the special case when f = X^j for some j ≥ 0 and g is any polynomial. Let us fix j, and prove that (1.4) holds for all polynomials g ∈ F[X]. Here again, this statement is linear in g. Applying a similar reasoning, we see that it suffices to prove it when g is an arbitrary element of the basis B, i.e. g = X^k for some k ≥ 0. To sum up, it suffices to prove that for j, k ≥ 0, we have that

T(T(X^j)X^k) = (TX^j)(TX^k) - T(X^j(TX^k)).    (1.5)

    But this is now a simple matter of computation, since

T(T(X^j)X^k) = T(1/(j+1) X^{j+1} X^k) = 1/(j+1) T(X^{j+k+1}) = 1/((j+1)(j+k+2)) X^{j+k+2}, and

(TX^j)(TX^k) - T(X^j(TX^k)) = (1/(j+1) X^{j+1})(1/(k+1) X^{k+1}) - T(X^j 1/(k+1) X^{k+1})

= 1/((j+1)(k+1)) X^{j+k+2} - 1/(k+1) T(X^{j+k+1})

= (1/((j+1)(k+1)) - 1/((k+1)(j+k+2))) X^{j+k+2}

= 1/((j+1)(j+k+2)) X^{j+k+2},

hence we get that (1.5) holds, and consequently that (1.4) is true whenever g is any polynomial and f = X^j. Since we made no hypothesis on j, (1.4) is true for all polynomials g and all elements of the basis B. Finally, by the claim, (1.4) is true for all polynomials f, g.

State and prove a similar statement for D.
We claim that for all polynomials f and g, there holds that

    D(f g) = D(f)g + f D(g). (1.6)

Here again, it suffices to prove this when f and g are arbitrary elements of the basis B = (1, X, X^2, X^3, ...). Hence, let f = X^j and g = X^k for j, k ≥ 0. We compute

D(X^jX^k) = D(X^{j+k}) = (j+k)X^{j+k-1}, and

D(X^j)X^k + X^jD(X^k) = jX^{j-1+k} + kX^{j+k-1} = (j+k)X^{j+k-1}.

Consequently, (1.6) holds whenever f and g are elements of the basis B, and by linearity this is also true whenever f and g are arbitrary polynomials.
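The Leibniz rule (1.6) can also be sanity-checked on coefficient lists; the sketch below is illustrative only (the helper names D, mul and add are ad-hoc choices, not from the homework):

```python
def D(p):
    # formal derivative on a coefficient list (a_0, ..., a_n)
    return [i * a for i, a in enumerate(p)][1:] or [0]

def mul(p, q):
    # product of two polynomials given as coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def add(p, q):
    # sum of two coefficient lists, padding the shorter one with zeros
    p, q = p + [0] * (len(q) - len(p)), q + [0] * (len(p) - len(q))
    return [a + b for a, b in zip(p, q)]

f = [1, 2, 3]   # 1 + 2X + 3X^2
g = [4, 5]      # 4 + 5X
# D(fg) = D(f)g + f D(g)
assert D(mul(f, g)) == add(mul(D(f), g), mul(f, D(g)))
```

This only confirms (1.6) on one example, of course; the proof above is what establishes it for all f and g.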

Suppose that V is a nonzero subspace of F[X] such that T(V) ⊆ V (i.e. for all f ∈ V, there holds that Tf ∈ V). Show that V is not finite dimensional.
To prove this, we prove the contrapositive, namely that if V is finite dimensional, then either V = {0} or there exists f ∈ V such that Tf does not belong to V. Let us suppose that V is finite dimensional, and let B be a basis of V. Then either B is empty and V = {0}, or there exists a polynomial P in B of maximal degree, with deg P = n ≥ 0. Then for every element Q of the basis B, Q ∈ F_n[X], and consequently,

V = Span(B) ⊆ F_n[X]


(i.e. every polynomial in V has degree lower or equal to n). But then T(P) is of degree n + 1, and hence belongs neither to F_n[X] nor to V. This finishes the proof.

Suppose V is a finite dimensional subspace of F[X]; prove that there exists m ≥ 0 such that D^m(V) = {0}.

Since V is finite dimensional, V admits a basis B. Either B is empty, in which case V = {0} and D(V) = {0}, or, as above, V ⊆ F_n[X] for some n ≥ 0. Now, for every polynomial P in F_n[X] (and in particular in V), we have that D^{n+1}P = 0 (indeed, D^nP is a polynomial of degree lower or equal to 0, hence a constant, hence D(D^nP) = 0).
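The fact that D^{n+1} annihilates F_n[X] is easy to watch happen on a coefficient list; a small illustrative sketch (the representation and name D are choices made here):

```python
def D(p):
    # formal derivative on a coefficient list (a_0, ..., a_n)
    return [i * a for i, a in enumerate(p)][1:] or [0]

p = [5, -1, 2, 7]        # a polynomial of degree 3, i.e. an element of F_3[X]
for _ in range(4):       # apply D four times: m = n + 1 with n = 3
    p = D(p)
assert p == [0]          # D^4 annihilates F_3[X]
```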

Remark 1.1. (1) Another possibility to prove all the claims above was to use the tools from Calculus since (because we have identified polynomials and polynomial functions) any polynomial is in particular a differentiable function. Then formulas (1.1)-(1.2) allow very short proofs of the questions above.

(2) Introducing the notion of the valuation Val(P), which is the smallest index i such that a_i ≠ 0, where a_i is the coefficient of X^i in the expansion of P in (1.3), we see that Val(PQ) = Val(P) + Val(Q), and that Val(P + Q) ≥ min(Val(P), Val(Q)). Thus Val has some properties similar to the degree. Then we see that, if P is not constant,

deg(TP) = deg P + 1,  deg(DP) = deg P - 1,

Val(TP) = Val P + 1,  Val(DP) = Val P - 1.

    Considering the valuation instead of the degree could provide alternate proofs.

    2. p126-127

In this section, we will use the following notations: for S = (x_1, x_2, ..., x_{n+1}) a family of n + 1 distinct points in F, we denote by P_i the i-th Lagrange interpolation polynomial associated to S. This is the only polynomial such that deg(P_i) ≤ n and P_i(x_k) = δ_{ik}. Besides, we have the explicit formula for P_i:

P_i(X) = (X - x_1)/(x_i - x_1) · (X - x_2)/(x_i - x_2) · ... · (X - x_{i-1})/(x_i - x_{i-1}) · (X - x_{i+1})/(x_i - x_{i+1}) · ... · (X - x_{n+1})/(x_i - x_{n+1}).    (2.1)

2.1. Exercise 1. Find a polynomial f of degree lower or equal to 3 such that f(-1) = -6, f(0) = 2, f(1) = -2 and f(2) = 6.

We shall give two proofs of that exercise. First, using the Lagrange theorem and formula (2.1), we get that f is uniquely given by

f = -6P_1 + 2P_2 - 2P_3 + 6P_4

where the P_i are the Lagrange interpolation polynomials associated to S = (-1, 0, 1, 2). Then it suffices to compute the P_i to find that¹

f = 2 - 2X - 6X^2 + 4X^3.    (2.2)

Another possibility, finding f directly without computing the P_i, is as follows.

¹At this point, it is convenient to take some time to check that f satisfies the interpolation problem.


Following what we did in class, we introduce the mapping Eval_S : R_3[X] → R^4 such that

Eval_S(P) = (P(-1), P(0), P(1), P(2)).

We saw in class (and it is not difficult to check, using the matrix of the application) that this is an isomorphism. Introducing B = (1, X, X^2, X^3), a basis of R_3[X], and B_C, the canonical basis of R^4, we can compute the matrix of Eval_S in the bases B and B_C.

[Eval_S]_{B_C,B} =
( 1 -1  1 -1 )
( 1  0  0  0 )
( 1  1  1  1 )
( 1  2  4  8 )

and we want to find the unique polynomial P = a_0 + a_1X + a_2X^2 + a_3X^3 such that Eval_S(P) = (-6, 2, -2, 6), or in coordinates,

[Eval_S]_{B_C,B} (a_0, a_1, a_2, a_3)_B =

( 1 -1  1 -1 ) ( a_0 )     ( -6 )
( 1  0  0  0 ) ( a_1 )  =  (  2 )
( 1  1  1  1 ) ( a_2 )     ( -2 )
( 1  2  4  8 ) ( a_3 )     (  6 )

(coordinates in B on the left, in B_C on the right).

    Then, it is easy to solve this system to get that

(a_0, a_1, a_2, a_3)_B = (2, -2, -6, 4),

    which gives (2.2).
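As the footnote suggests, it is worth checking (2.2) against the four interpolation conditions; a quick illustrative Python check:

```python
def f(x):
    # the solution (2.2): f = 2 - 2X - 6X^2 + 4X^3
    return 2 - 2*x - 6*x**2 + 4*x**3

# the four conditions of Exercise 1: f(-1) = -6, f(0) = 2, f(1) = -2, f(2) = 6
assert [f(x) for x in (-1, 0, 1, 2)] == [-6, 2, -2, 6]
```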

2.2. Exercise 2. Let α, β, γ and δ be real numbers. Prove that it is possible to find a polynomial f ∈ R[X] of degree not more than 2 such that f(-1) = α, f(1) = β, f(3) = γ and f(0) = δ if and only if

3α + 6β - γ - 8δ = 0.    (2.3)

In this case, we have a degenerate interpolation problem since the degree allowed is smaller than Card(S) - 1, where S = (-1, 1, 3, 0). There are at least three different and interesting ways to attack that problem.

(1) One can set P = a_0 + a_1X + a_2X^2, and reduce the question to solving a system: we can rephrase the question as: prove that the system

P(-1) = a_0 - a_1 + a_2 = α
P(1) = a_0 + a_1 + a_2 = β
P(3) = a_0 + 3a_1 + 9a_2 = γ
P(0) = a_0 = δ

has a solution if and only if (2.3) holds.

(2) One can use the Lagrange interpolation theorem. Indeed, there exists a unique polynomial g of degree lower or equal to 2 that solves the (partial) interpolation problem g(-1) = α, g(1) = β and g(3) = γ, and we know g explicitly in terms of the Lagrange polynomials associated to (-1, 1, 3). Hence, if there is a solution to the full interpolation problem, it must be g (which we now know explicitly). And conversely, g is a solution if and only if g(0) = δ. This, again, gives (2.3).


(3) One can use our knowledge of matrices. Indeed, we can rephrase the question as follows: prove that (α, β, γ, δ) is in the range of the application Eval_S : R_2[X] → R^4 defined by

Eval_S(P) = (P(-1), P(1), P(3), P(0))

if and only if (2.3) holds. But if B = (1, X, X^2) and B_C is the canonical basis of R^4, we get that

M = [Eval_S]_{B_C,B} =
( 1 -1 1 )
( 1  1 1 )
( 1  3 9 )
( 1  0 0 )

and the vector (α, β, γ, δ) is in the range of Eval_S if and only if (α, β, γ, δ) ∈ Col(M). Column-reducing M, we get (2.3).
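Approach (2) is easy to test numerically: solve the partial problem through (-1, α), (1, β), (3, γ) via (2.1), and observe that the compatible δ = g(0) satisfies (2.3). The sketch below is illustrative (the helper name `interp_value` and the sample values are choices made here):

```python
from fractions import Fraction

def interp_value(points, x):
    # value at x of the unique polynomial of degree <= len(points) - 1
    # through the given (x_i, y_i), via the Lagrange formula (2.1)
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# pick alpha, beta, gamma freely; the only compatible delta is g(0),
# and the resulting quadruple satisfies (2.3)
alpha, beta, gamma = 5, 7, 11
delta = interp_value([(-1, alpha), (1, beta), (3, gamma)], 0)
assert 3*alpha + 6*beta - gamma - 8*delta == 0
```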

2.3. Exercise 3. Let

A =
( 2 0 0 0 )
( 0 2 0 0 )
( 0 0 3 0 )
( 0 0 0 1 )

and

P = (X - 2)(X - 3)(X - 1) = X^3 - 6X^2 + 11X - 6.

Show that P(A) = 0. Computing directly, we get the claim. Alternatively, remarking that for a diagonal matrix one can directly compute the image of the diagonal coefficients by P, one gets that

P(A) =
( P(2)  0     0     0    )
( 0     P(2)  0     0    )
( 0     0     P(3)  0    )
( 0     0     0     P(1) )
= 0.
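Since A is diagonal, the "alternative" computation reduces to evaluating P at the diagonal entries; a one-line illustrative check in Python:

```python
diag = [2, 2, 3, 1]                      # the diagonal of A

def P(x):
    # P = (X - 2)(X - 3)(X - 1)
    return (x - 2) * (x - 3) * (x - 1)

# P(A) is the diagonal matrix with entries P(2), P(2), P(3), P(1)
assert [P(x) for x in diag] == [0, 0, 0, 0]
```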

Let P_1, P_2 and P_3 be the Lagrange polynomials associated to S = (2, 3, 1); compute E_i = P_i(A). We will give two proofs.

(1) Again, one can compute the P_i explicitly, and we will see how to do it very efficiently. Indeed, let B = (1, X, X^2) be a basis of R_2[X]. We want to find

P_1 = (a, b, c)_B,  P_2 = (d, e, f)_B,  P_3 = (g, h, i)_B

such that Eval_S(P_i) = e_i, where e_i is the i-th vector of B_C, the canonical basis of R^3. But since

[Eval_S]_{B_C,B} = M =
( 1 2 4 )
( 1 3 9 )
( 1 1 1 )


this amounts to

[Eval_S]_{B_C,B} ((P_1)_B (P_2)_B (P_3)_B) =
( 1 0 0 )
( 0 1 0 )
( 0 0 1 )

or, equivalently,

M
( a d g )
( b e h )
( c f i )
= I_3,

hence it suffices to compute M^{-1} to read off the coefficients of the P_i. And this can be done efficiently, row-reducing the matrix M. This gives

M^{-1} =
( a d g )
( b e h )
( c f i )
=
( -3    1     3   )
(  4  -3/2  -5/2  )
( -1   1/2   1/2  )

hence²

P_1 = -3 + 4X - X^2
P_2 = 1 - (3/2)X + (1/2)X^2
P_3 = 3 - (5/2)X + (1/2)X^2.    (2.4)

    Now that we know explicitly the Pi, we can see that

E_1 =
( 1 0 0 0 )
( 0 1 0 0 )
( 0 0 0 0 )
( 0 0 0 0 )
,  E_2 =
( 0 0 0 0 )
( 0 0 0 0 )
( 0 0 1 0 )
( 0 0 0 0 )
and  E_3 =
( 0 0 0 0 )
( 0 0 0 0 )
( 0 0 0 0 )
( 0 0 0 1 )
    (2.5)

(2) Another possibility was to use again the computation of polynomials of diagonal matrices and the fact that the P_i solve a known interpolation problem to get that

E_1 =
( P_1(2)  0       0       0      )
( 0       P_1(2)  0       0      )
( 0       0       P_1(3)  0      )
( 0       0       0       P_1(1) )
=
( 1 0 0 0 )
( 0 1 0 0 )
( 0 0 0 0 )
( 0 0 0 0 )

E_2 =
( P_2(2)  0       0       0      )
( 0       P_2(2)  0       0      )
( 0       0       P_2(3)  0      )
( 0       0       0       P_2(1) )
=
( 0 0 0 0 )
( 0 0 0 0 )
( 0 0 1 0 )
( 0 0 0 0 )

E_3 =
( P_3(2)  0       0       0      )
( 0       P_3(2)  0       0      )
( 0       0       P_3(3)  0      )
( 0       0       0       P_3(1) )
=
( 0 0 0 0 )
( 0 0 0 0 )
( 0 0 0 0 )
( 0 0 0 1 )

and we recover (2.5).

²Once again, at this point it is convenient to pause to check that the P_i satisfy the interpolation problem.


Show that E_1 + E_2 + E_3 = I_4, E_iE_j = δ_{ij}E_i and A = 2E_1 + 3E_2 + E_3. This could be done by direct computations. A more theoretical proof is below.
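Because A is diagonal, the direct computation reduces to arithmetic on the diagonal entries. The following illustrative Python sketch checks the three identities using the explicit P_i from (2.4) (representing each diagonal matrix by its diagonal list):

```python
from fractions import Fraction

F = Fraction
diag = [2, 2, 3, 1]                            # the diagonal of A
P1 = lambda x: -3 + 4*x - x*x                  # P_1 from (2.4)
P2 = lambda x: 1 - F(3, 2)*x + F(1, 2)*x*x     # P_2 from (2.4)
P3 = lambda x: 3 - F(5, 2)*x + F(1, 2)*x*x     # P_3 from (2.4)

# E_i = P_i(A) is diagonal with entries P_i(2), P_i(2), P_i(3), P_i(1)
E1, E2, E3 = ([P(x) for x in diag] for P in (P1, P2, P3))

assert [a + b + c for a, b, c in zip(E1, E2, E3)] == [1, 1, 1, 1]  # E1+E2+E3 = I4
assert all(a * b == 0 for a, b in zip(E1, E2))                     # E1*E2 = 0
assert [a * a for a in E1] == E1                                   # E1^2 = E1
assert [2*a + 3*b + c for a, b, c in zip(E1, E2, E3)] == diag      # A = 2E1+3E2+E3
```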

2.4. Exercise 4. Let P be as above, and let T be any linear transformation of R^4 such that P(T) = 0. Let P_1, P_2, P_3 be the Lagrange polynomials defined in Exercise 3 above, and let E_i = P_i(T). Prove that E_1 + E_2 + E_3 = Id_{R^4}, E_iE_j = δ_{ij}E_i and T = 2E_1 + 3E_2 + E_3.

We introduce the linear transformation (substitution X ↦ T) Sub_T : R[X] → L(R^4) defined by Sub_T(Q) = Q(T), i.e.

Sub_T(a_0 + a_1X + a_2X^2 + ... + a_nX^n) = a_0 Id_{R^4} + a_1T + a_2T^2 + ... + a_nT^n.

We recall that this is a linear transformation satisfying

    SubT(QR) = SubT(Q)SubT(R).

To prove the first formula, we need to show that

Sub_T(P_1 + P_2 + P_3) = Id_{R^4} = Sub_T(1).

If one has computed the P_i explicitly, one can compute P_1 + P_2 + P_3 = 1. Hence, evaluating both sides at T, we get E_1 + E_2 + E_3 = Id_{R^4}. If one has not computed the P_i before, one can still see that Q = P_1 + P_2 + P_3 is the only polynomial that satisfies the following interpolation problem: Q is of degree lower or equal to 2 and Q(2) = Q(3) = Q(1) = 1. But the polynomial 1 satisfies the same interpolation problem. By uniqueness of the solution of the Lagrange interpolation problem, we see that Q = 1, and we can conclude as before.

    To prove the second formula, we first prove that

E_iE_j = 0 if i ≠ j.    (2.6)

Thus, we suppose i ≠ j and we want to prove that Sub_T(P_iP_j) = 0. But using the explicit formula for P_i, P_j given by (2.1), we see that if {i, j, k} = {1, 2, 3} (but not necessarily in that order), then

P_i = P / ((X - x_i)(x_i - x_j)(x_i - x_k)),

and hence that

P_iP_j = [P / ((X - x_i)(x_i - x_j)(x_i - x_k))] [P / ((X - x_j)(x_j - x_i)(x_j - x_k))]

= [P·P / ((X - x_i)(X - x_j))] · 1/((x_i - x_j)(x_i - x_k)(x_j - x_i)(x_j - x_k))

= 1/((x_i - x_j)(x_i - x_k)(x_j - x_i)(x_j - x_k)) (X - x_k)P

= Q_{ij}P

and consequently, using the multiplicative property of Sub_T, we get that

E_iE_j = P_i(T)P_j(T) = Q_{ij}(T)P(T) = 0

where we have used that P(T) = 0 by hypothesis. Hence, we get (2.6). Now, to consider the case i = j, we start from the fact that E_1 + E_2 + E_3 = Id_{R^4}. Multiplying both sides by E_i and using (2.6), all the terms on the left-hand side vanish except E_i^2, and on the right-hand side we get E_i; hence E_i^2 = E_i.



    Finally, to prove the last equality, we just need to prove that

    SubT(2P1 + 3P2 + P3) = SubT(X),

    but we can check directly that 2P1 + 3P2 + P3 = X, either by direct computations,or because these polynomials satisfy the same interpolation problem.
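Both coefficient identities (P_1 + P_2 + P_3 = 1 and 2P_1 + 3P_2 + P_3 = X) can be verified directly from (2.4); an illustrative Python check on the coefficient vectors:

```python
from fractions import Fraction

F = Fraction
# coefficient vectors (a_0, a_1, a_2) of the Lagrange polynomials from (2.4)
P1 = (F(-3), F(4), F(-1))
P2 = (F(1), F(-3, 2), F(1, 2))
P3 = (F(3), F(-5, 2), F(1, 2))

sum_coeffs = tuple(a + b + c for a, b, c in zip(P1, P2, P3))
combo_coeffs = tuple(2*a + 3*b + c for a, b, c in zip(P1, P2, P3))
assert sum_coeffs == (1, 0, 0)     # P1 + P2 + P3 = 1
assert combo_coeffs == (0, 1, 0)   # 2P1 + 3P2 + P3 = X
```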

2.5. Exercise 6. Suppose that T is a linear transformation F[X] → F such that

T(fg) = T(f)T(g).    (2.7)

Show that T = 0 or that there exists t ∈ F such that T(f) = f(t) for all polynomials f.

The goal of the exercise is to explain that, just as for a linear transformation it suffices to check an identity on a basis (or on any family that generates the space under the operations that we are given, addition and scalar multiplication), in the case of an algebra it suffices to check an identity on a generating set (in the sense of algebras: a set such that any element can be expressed in terms of linear combinations of products of elements of the set).

More precisely, we let t = T(X). Then we know exactly the image of all elements of the basis B = (1, X, X^2, ...), except for the image of 1, T(1). Indeed, using (2.7), we get that T(X^2) = T(X)T(X) = t^2, and more generally that T(X^j) = t^j for j ≥ 1. Now, to get the image of the last element of the basis, we remark using (2.7) that T(1) = T(1·1) = T(1)T(1) = T(1)^2, so that T(1) is a root of X^2 - X = X(X - 1) in F. Consequently,

(1) either T(1) = 0; then T(X^j) = T(1·X^j) = T(1)T(X^j) = 0 for all j ≥ 0. So T coincides with the 0 transformation on a basis, hence on the whole space.

(2) or T(1) = 1. Then, using linearity and (2.7), we get, for P given as in (1.3), that

T(P) = T(a_0 + a_1X + a_2X^2 + ... + a_nX^n)

= a_0T(1) + a_1T(X) + a_2T(X^2) + ... + a_nT(X^n)

= a_0 + a_1t + a_2t^2 + ... + a_nt^n

= P(t).

In conclusion, either T = 0, or there exists an element t = T(X) such that T(P) = P(t).

    Brown University

    E-mail address: [email protected]