Appendix E: Technology Bytes

Although most of the examples and exercises in this book can be done without the aid of technology, there are some that are intended to be done with a matrix-capable calculator or a computer algebra system. These are clearly marked with the symbol CAS. If you have access to such technology, it is also useful for checking hand calculations and for doing computations once you have mastered the basic techniques.

In this section, you will see what selected examples from each chapter look like when done using the computer algebra systems Maple, Mathematica, and MATLAB. By imitating these examples (and making use of the extensive built-in help facilities of each of the systems), you should be able to exploit the power of these systems to help you with the exercises throughout this book.

Any sufficiently advanced technology is indistinguishable from magic.

—Arthur C. Clarke, "Technology and the Future," in Report on Planet Three and Other Speculations (Harper & Row, 1972)

Maple  In order to use Maple to do linear algebra, you must first load the linear algebra package. This is done with the command

    > with (LinearAlgebra):

The package must be loaded for each new Maple session and each new worksheet.
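The examples below use commands such as vector, evalm, dotprod, and crossprod, which come from Maple's classic linalg package. If your version of Maple does not recognize them after loading LinearAlgebra, a minimal alternative (a sketch, assuming the classic package is still available in your installation) is to load that package instead:

> with(linalg):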

    Chapter 1: Vectors

A calculation that results in a vector or matrix must be performed using the command evalm. Observe that ending an input statement with a colon suppresses the output, while ending an input statement with a semicolon displays the output.

    Example 1.6

    > u:=vector([1,0,-1]):

    > v:=vector([2,-3,1]):

    > w:=vector([5,-4,0]):

> evalm(3*u+2*v-w);

    [2, -2, -1]

    Example 1.15

    > u:=vector([1,2,-3]): v:=vector([-3,5,2]):

    > dotprod(u,v);

    1



If v is a vector, the Maple command norm(v,2) gives its (Euclidean) length.

    Example 1.17

> norm(vector([2,3]),2);

    sqrt(13)

The cross product of vectors in ℝ³ is computed using the command crossprod.

    > u:=vector([1,2,3]): v:=vector([-1,1,5]):

> crossprod(u,v);

    [7, -8, 3]

    All linear algebra commands can be performed symbolically as well as numerically.

> u:=vector([u1,u2,u3]): v:=vector([v1,v2,v3]):

> evalm(u+3*v);

    [u1 + 3 v1, u2 + 3 v2, u3 + 3 v3]

> dotprod(u,v);

    u1 v̄1 + u2 v̄2 + u3 v̄3

What are the bars? Maple is assuming that the vectors may have entries that are complex numbers, in which case this is the version of the dot product that is needed. (See Exploration: Vectors and Matrices with Complex Entries in Chapter 7.) Use the following command to obtain the real dot product symbolically:

> dotprod(u,v,'orthogonal');

    u1 v1 + u2 v2 + u3 v3

> norm(u,2);

    sqrt(u1^2 + u2^2 + u3^2)

> crossprod(u,v);

    [u2 v3 - u3 v2, u3 v1 - u1 v3, u1 v2 - u2 v1]

    Maple can also perform modular arithmetic.

    Example 1.12

    > 3548 mod 3;

    2

    Example 1.14

    > u:=vector([2,2,0,1,2]): v:=vector([1,2,2,2,1]):

    > dotprod(u,v) mod 3;

    1



    Example 1.40

    > c:=vector([3,1,3,1,3,1,3,1,3,1,3,1]):

    > v:=vector([0,7,4,9,2,7,0,2,0,9,4,6]):

    > dotprod(c,v) mod 10;

    0

To perform modular arithmetic calculations where the result is a vector or matrix, you need to first define a procedure called matmod.

    > matmod:=proc(A,p):

    > map('mod',evalm(A),p):

    > end;

    matmod := proc(A, p) map('mod', evalm(A), p) end proc

    > u:=vector([2,2,0,1,2]): v:=vector([1,2,2,2,1]):

> matmod(u+v,3);

    [0, 1, 2, 0, 0]

    Chapter 2: Systems of Linear Equations

    Example 2.6

> A:=matrix(3,3,[1,-1,-1,3,-3,2,2,-1,1]);

         [ 1  -1  -1 ]
    A := [ 3  -3   2 ]
         [ 2  -1   1 ]

> b:=vector([2,16,9]);

    b := [2, 16, 9]

> linsolve(A,b);

    [3, -1, 2]

Elementary row operations can be performed individually. To construct an augmented matrix, use the augment command.

> Ab:=augment(A,b);

          [ 1  -1  -1   2 ]
    Ab := [ 3  -3   2  16 ]
          [ 2  -1   1   9 ]

> addrow(Ab,1,2,-3);

    [ 1  -1  -1   2 ]
    [ 0   0   5  10 ]
    [ 2  -1   1   9 ]



This command adds -3 times row 1 to row 2 (putting the result in row 2).

> swaprow(%,2,3);

    [ 1  -1  -1   2 ]
    [ 2  -1   1   9 ]
    [ 0   0   5  10 ]

This command swaps rows 2 and 3 of the previous output (denoted by %).

> mulrow(%,3,1/5);

    [ 1  -1  -1   2 ]
    [ 2  -1   1   9 ]
    [ 0   0   1   2 ]

This command multiplies row 3 of the last result by 1/5.

    There are two other commands that can be useful in solving a linear system di-rectly: rref and gaussjord.

    Examples 2.11 and 2.13

    > A:=matrix(3,4,[1,-1,-1,2,2,-2,-1,3,-1,1,-1,0]):

    > b:=vector([1,3,-3]):

> Ab:=augment(A,b);

          [  1  -1  -1   2   1 ]
    Ab := [  2  -2  -1   3   3 ]
          [ -1   1  -1   0  -3 ]

> rref(Ab);

    [ 1  -1   0   1   2 ]
    [ 0   0   1  -1   1 ]
    [ 0   0   0   0   0 ]

> gaussjord(Ab);

    [ 1  -1   0   1   2 ]
    [ 0   0   1  -1   1 ]
    [ 0   0   0   0   0 ]

Applying linsolve to this example produces the following:

> linsolve(A,b);

    [_t2, _t2 - 3 + _t1, _t1, -1 + _t1]



Although this solution looks different from the solution to Example 2.11, the substitutions s = _t2 - 3 + _t1 and t = -1 + _t1 show that they are equivalent.

You can combine the procedure matmod with the above techniques to handle systems over ℤp.

    Example 2.16

    > A:=matrix(3,3,[1,2,1,1,0,1,0,1,2]): b:=vector([0,2,1]):

> Ab:=augment(A,b);

          [ 1  2  1  0 ]
    Ab := [ 1  0  1  2 ]
          [ 0  1  2  1 ]

> matmod(rref(Ab),3);

    [ 1  0  0  1 ]
    [ 0  1  0  2 ]
    [ 0  0  1  1 ]

> matmod(linsolve(A,b),3);

    [1, 2, 1]

    Chapter 3: Matrices

All of the basic matrix operations are built into Maple, as the following examples illustrate.

> A:=matrix(2,2,[2,-1,3,4]); B:=matrix(2,2,[1,2,0,1]);

    A := [ 2  -1 ]        B := [ 1  2 ]
         [ 3   4 ]             [ 0  1 ]

> evalm(A-2*B);

    [ 0  -5 ]
    [ 3   2 ]

To evaluate the matrix A², use the command

> evalm(A^2);

    [  1  -6 ]
    [ 18  13 ]



    Use the command &* for matrix multiplication.

> evalm(A&*B);

    [ 2   3 ]
    [ 3  10 ]

> transpose(A);

    [  2  3 ]
    [ -1  4 ]

> inverse(A);

    [  4/11  1/11 ]
    [ -3/11  2/11 ]

> det(A);

    11

Like vector operations, matrix operations can be performed symbolically in Maple.

> A:=matrix(2,2,[a,b,c,d]);

    A := [ a  b ]
         [ c  d ]

> evalm(A^2);

    [ a^2 + b c    a b + b d ]
    [ c a + d c    b c + d^2 ]

> det(A);

    a d - b c

> inverse(A);

    [  d/(a d - b c)   -b/(a d - b c) ]
    [ -c/(a d - b c)    a/(a d - b c) ]

The matmod procedure, as defined for the examples in Chapter 1, is used for matrix calculations involving modular arithmetic.



> A:=matrix(2,2,[2,-1,3,4]):

> A3:=matmod(A,3);

    A3 := [ 2  2 ]
          [ 0  1 ]

> matmod(inverse(A3),3);

    [ 2  2 ]
    [ 0  1 ]

So the matrix A3 is its own inverse modulo 3. Another built-in function of Maple will determine the rank of a matrix:

    > rank(A);

    2

    Maple can produce bases for the row, column, and null spaces of a matrix.

    Examples 3.45, 3.47, and 3.48

> A:=matrix(4,5,[1,1,3,1,6,2,-1,0,1,-1,-3,2,1,-2,1,4,1,6,1,3]):

> rowspace(A);

    {[0, 0, 0, 1, 4], [0, 1, 2, 0, 3], [1, 0, 1, 0, -1]}

> colspace(A);

    {[1, 0, 0, 1], [0, 1, 0, 6], [0, 0, 1, 3]}

> nullspace(A);

    {[0, -5, 1, -4, 1], [1, -3, 0, -4, 1]}

The bases given above for the column and null spaces of A are not the same as in the text. Can you show that they are equivalent? You can define linear transformations in Maple.

    Examples 3.55 and 3.60

> T:=(x,y)->[x,2*x-y,3*x+4*y]:

> T(1,-1);

    [1, 3, -1]

> S:=(u,v,w)->[2*u+w,3*v-w,u-v,u+v+w]:

> T(x,y);

    [x, 2x - y, 3x + 4y]

To compute S(T(x,y)), you have to pick out the components of the previous output. These are known to Maple as %[1], %[2], and %[3].

> S(%[1],%[2],%[3]);

    [5x + 4y, 3x - 7y, -x + y, 6x + 3y]



Maple has a built-in command for the LU factorization of a matrix.

    Example 3.33

> A:=matrix(3,3,[2,1,3,4,-1,3,-2,5,5]);

         [  2   1   3 ]
    A := [  4  -1   3 ]
         [ -2   5   5 ]

> LUdecomp(A,L='L',U='U'):

> L:=evalm(L); U:=evalm(U);

         [  1   0   0 ]         [ 2   1   3 ]
    L := [  2   1   0 ]    U := [ 0  -3  -3 ]
         [ -1  -2   1 ]         [ 0   0   2 ]

To check that this result is correct, compute

> evalm(L&*U);

    [  2   1   3 ]
    [  4  -1   3 ]
    [ -2   5   5 ]

which is A, as expected.

    Chapter 4: Eigenvalues and Eigenvectors

Eigenvalues, eigenvectors, and the characteristic polynomial are all included in the LinearAlgebra package.

    Example 4.18

    > A:=matrix(3,3,[0,1,0,0,0,1,2,-5,4]):

The characteristic polynomial of A can be produced using the command

> c:=charpoly(A,lambda);

    c := lambda^3 - 4*lambda^2 + 5*lambda - 2

(Replacing lambda by x would give the characteristic polynomial as a function of x.)

You can now factor this polynomial to find its zeros (the eigenvalues of A):

> factor(c);

    (lambda - 2)(lambda - 1)^2

Or, you can solve the characteristic equation directly, if you like:

> solve(c=0,lambda);

    2, 1, 1



If all you require are the eigenvalues of A, you can use

> eigenvals(A);

    2, 1, 1

The command eigenvects gives the eigenvalues of a matrix, their algebraic multiplicities, and a basis for each corresponding eigenspace.

> eigenvects(A);

    [1, 2, {[1, 1, 1]}], [2, 1, {[1, 2, 4]}]

For a diagonalizable matrix A, you can find a matrix P such that P⁻¹AP is diagonal.

    Example 4.26

> A:=matrix(3,3,[-1,0,1,3,0,-3,1,0,-1]):

> eig:=eigenvects(A);

    eig := [0, 2, {[0, 1, 0], [1, 0, 1]}], [-2, 1, {[1, -3, -1]}]

Now you need to extract the three eigenvectors.

> p1:=eig[1,3,1];

    p1 := [0, 1, 0]

This command tells Maple that the first vector you want is inside the first set of brackets, in the third entry (which is {[0, 1, 0], [1, 0, 1]}), and is the first vector in this set. The commands for the second and third eigenvectors are similar:

> p2:=eig[1,3,2]; p3:=eig[2,3,1];

    p2 := [1, 0, 1]
    p3 := [1, -3, -1]

Next, you build the matrix P with these three eigenvectors as its columns:

> P:=augment(p1,p2,p3);

         [ 0   1   1 ]
    P := [ 1   0  -3 ]
         [ 0   1  -1 ]

To check that this matrix is correct, compute

> evalm(inverse(P)&*A&*P);

    [ 0   0   0 ]
    [ 0   0   0 ]
    [ 0   0  -2 ]

which is, as expected, a diagonal matrix displaying the eigenvalues of A. When approximating eigenvalues (or other numerical results) in Maple, use the floating-point evaluation command, evalf.



> A:=matrix(2,2,[1,2,3,4]):

> eigenvals(A);

    5/2 + 1/2*sqrt(33), 5/2 - 1/2*sqrt(33)

> evalf(eigenvals(A));

    5.372281323, -.372281323

Maple can also handle matrix exponentials.

Example 4.47

> A:=matrix(2,2,[1,2,3,2]):

> exponential(A);

    [ 3/5*exp(-1) + 2/5*exp(4)    2/5*exp(4) - 2/5*exp(-1) ]
    [ 3/5*exp(4) - 3/5*exp(-1)    2/5*exp(-1) + 3/5*exp(4) ]

    Chapter 5: Orthogonality

Maple includes the Gram-Schmidt Process as a built-in procedure. It can be used to convert a basis for a subspace of ℝⁿ into an orthogonal or an orthonormal basis for that subspace.

    Example 5.13

> x1:=vector([1,-1,-1,1]):

> x2:=vector([2,1,0,1]):

> x3:=vector([2,2,1,2]):

> GramSchmidt([x1,x2,x3]);

    [[1, -1, -1, 1], [3/2, 3/2, 1/2, 1/2], [-1/2, 0, 1/2, 1]]

To produce an orthonormal basis, add the option normalized.

> GramSchmidt([x1,x2,x3], normalized);

    [[1/2, -1/2, -1/2, 1/2],
     [3/10*sqrt(5), 3/10*sqrt(5), 1/10*sqrt(5), 1/10*sqrt(5)],
     [-1/6*sqrt(6), 0, 1/6*sqrt(6), 1/3*sqrt(6)]]

The QR factorization of a matrix is performed in Maple using the command QRdecomp, which returns the upper triangular matrix R. To obtain the orthogonal matrix Q, you must instruct Maple to produce it. The command



fullspan=false ensures that the dimensions of Q agree with those of A and that R is a square matrix.

    Example 5.15

> A:=matrix(4,3,[1,2,2,-1,1,2,-1,0,1,1,1,2]):

> R:=QRdecomp(A, Q='Q', fullspan=false);

         [ 2     1          1/2        ]
    R := [ 0   sqrt(5)   3/2*sqrt(5)   ]
         [ 0     0       1/2*sqrt(6)   ]

> Q:=evalm(Q);

         [  1/2   3/10*sqrt(5)   -1/6*sqrt(6) ]
    Q := [ -1/2   3/10*sqrt(5)        0       ]
         [ -1/2   1/10*sqrt(5)    1/6*sqrt(6) ]
         [  1/2   1/10*sqrt(5)    1/3*sqrt(6) ]

You can check that this result is correct by verifying that A = QR:

> evalm(Q&*R);

    [  1   2   2 ]
    [ -1   1   2 ]
    [ -1   0   1 ]
    [  1   1   2 ]

    Chapter 6: Vector Spaces

You can adapt Maple's matrix commands for use in finding bases for finite-dimensional vector spaces or testing vectors for linear independence. The key is to use the isomorphism between an n-dimensional vector space and ℝⁿ.

    Example 6.26

    To test the set {1 x, x x2, 1 x2} for linear independence in2, use the stanisomorphism of2 with

    3. This gives the correspondence

    1 x4 C110

    S , x x24 C011

    S , 1 x24 C101

    S



    Now build a matrix with these vectors as its columns:

> A:=matrix(3,3,[1,0,1,1,1,0,0,1,1]);

         [ 1  0  1 ]
    A := [ 1  1  0 ]
         [ 0  1  1 ]

> rank(A);

    3

Since rank(A) = 3, the columns of A are linearly independent. Hence, the set {1 + x, x + x², 1 + x²} is also linearly independent, by Theorem 6.7.

    Example 6.44

Extend the set {1 + x, 1 - x} to S = {1 + x, 1 - x, 1, x, x²} and use the standard isomorphism of 𝒫₂ with ℝ³ to obtain

    1 + x ↔ [1, 1, 0],   1 - x ↔ [1, -1, 0],   1 ↔ [1, 0, 0],   x ↔ [0, 1, 0],   x² ↔ [0, 0, 1]

Form a matrix with these vectors as its columns and find its reduced echelon form:

> A:=matrix(3,5,[1,1,1,0,0,1,-1,0,1,0,0,0,0,0,1]);

         [ 1   1  1  0  0 ]
    A := [ 1  -1  0  1  0 ]
         [ 0   0  0  0  1 ]

> rref(A);

    [ 1  0  1/2   1/2  0 ]
    [ 0  1  1/2  -1/2  0 ]
    [ 0  0   0     0   1 ]

Since the leading 1s are in columns 1, 2, and 5, these are the vectors to use for a basis. Accordingly, {1 + x, 1 - x, x²} is a basis for 𝒫₂ that contains 1 + x and 1 - x.

Many questions about a linear transformation between finite-dimensional vector spaces can be handled by using the matrix of the linear transformation.

    Example 6.68

The matrix of T: W → 𝒫₂ defined by

    T([a b; b c]) = (a - b) + (b - c)x + (c - a)x²

with respect to the basis B = {[1 0; 0 0], [0 1; 1 0], [0 0; 0 1]} of W and the standard basis C = {1, x, x²} of 𝒫₂ is

    A = [  1  -1   0 ]
        [  0   1  -1 ]
        [ -1   0   1 ]



(Check this.) By Exercises 40 and 41 in Section 6.6, the rank and nullity of T are the same as those of A.

> A:=matrix(3,3,[1,-1,0,0,1,-1,-1,0,1]):

    > rank(A);

    2

It now follows from the Rank Theorem that nullity(T) = 3 - rank(T) = 3 - 2 = 1. If you want to construct bases for the kernel and range of T, you can proceed as follows:

> nullspace(A);

    {[1, 1, 1]}

This is the coordinate vector of a matrix with respect to the basis B of W. Hence the matrix is

    1·[1 0; 0 0] + 1·[0 1; 1 0] + 1·[0 0; 0 1] = [1 1; 1 1]

so a basis for ker(T) is {[1 1; 1 1]}.

> colspace(A);

    {[0, 1, -1], [1, 0, -1]}

This time you have coordinate vectors with respect to the basis C of 𝒫₂, so a basis for range(T) is {x - x², 1 - x²}.

    Chapter 7: Distance and Approximation

Computations involving the inner product ⟨f, g⟩ = ∫ₐᵇ f(x)g(x) dx in C[a, b] are easy to do in Maple.

    Example 7.6

> f(x):=x: g(x):=3*x-2:

> int(f(x)^2, x=0..1);

    1/3

> int((f(x)-g(x))^2, x=0..1);

    4/3



    > int(f(x)*g(x),x=0..1);

    0

The various norms discussed in Section 7.2 are built into Maple. The format of the command is norm(v,n), where v is a vector and n is a positive integer or the word infinity. Use n=1 for the sum norm, n=2 for the Euclidean norm, and n=infinity for the max norm.

> v:=vector([1,-2,5]):

> norm(v,1);

    8

> norm(v,2);

    sqrt(30)

    > norm(v,infinity);

    5

    If the command norm is applied to a matrix, you get the induced operator norm.

    Example 7.19

    > A:=matrix(3,3,[1,-3,2,4,-1,-2,-5,1,3]);

    > norm(A,1);

    10

    > norm(A,infinity);

    9

The condition number of a square matrix A can be obtained with the command cond(A,n), where n is one of 1, 2, infinity, or frobenius.

    Example 7.21

    > A:=matrix(2,2,[1,1,1,1.0005]):

> cond(A,1); cond(A,2); cond(A,infinity); cond(A,frobenius);

    8004.000500
    8002.000373
    8004.000500
    8002.000501

    Maple can perform least squares approximation in one step.

    Example 7.26

    > A:=matrix(3,2,[1,1,1,2,1,3]): b:=vector([2,2,4]):

> leastsqrs(A,b);

    [2/3, 1]



If A does not have linearly independent columns, then there is not a unique least squares solution of Ax = b. The additional argument optimize tells Maple to produce the least squares solution of minimal (Euclidean) length.

    Example 7.40

> A:=matrix(2,2,[1,1,1,1]): b:=vector([0,1]):

> leastsqrs(A,b);

    [1/2 - _t1, _t1]

> leastsqrs(A,b,optimize);

    [1/4, 1/4]

    The singular value decomposition is also built into Maple.

    Example 7.34(b)

    > A:=matrix(3,2,[1,1,1,0,0,1]):

> sv:=singularvals(transpose(A));

    sv := [sqrt(3), 1]

This command produces the exact singular values of A. Their numerical approximations are given by

> evalf(sv);

    [1.732050808, 1.]

The same result is obtained with

> sv:=evalf(Svd(A,U,V));

    sv := [1.732050808, 1.000000000]

(Note that evalf must be used.) This time, the orthogonal matrices U and V are computed (numerically) as well.

> U:=evalm(U); V:=evalm(V);

    U := [ .8164965809    .3513817914e-16   -.5773502692 ]
         [ .4082482905    .7071067812        .5773502692 ]
         [ .4082482905   -.7071067812        .5773502692 ]

    V := [ .7071067812    .7071067812 ]
         [ .7071067812   -.7071067812 ]

(Note that the signs of some of the entries differ from those in the solution in the text. This is not surprising, since the SVD is not unique. Also notice that u12 = 0.3513817914 × 10⁻¹⁶ ≈ 0.) You can easily verify that you have an SVD of A by computing UᵀAV.



> evalm(transpose(U)&*A&*V);

    [ 1.732050807        0       ]
    [      0        1.000000000  ]
    [      0             0       ]

Correct! The pseudoinverse is not a built-in function in Maple, but it is not hard to construct.

    Example 7.39(b)

> A:=matrix(3,2,[1,1,1,0,0,1]): sv:=evalf(Svd(A,U,V)):

> d:=diag(sv[1],sv[2]);

    d := [ 1.732050808        0       ]
         [      0        1.000000000  ]

> Splus:=augment(inverse(d),[0,0]);

    Splus := [ .5773502690        0        0 ]
             [      0        1.000000000   0 ]

> Aplus:=evalm(V&*Splus&*transpose(U));

    Aplus := [ .3333333332    .6666666666   -.3333333334 ]
             [ .3333333332   -.3333333334    .6666666666 ]


Mathematica  In order to use Mathematica to do linear algebra, you must first load the appropriate linear algebra package for your version; once it is loaded, basic vector and matrix operations work as in the following examples.

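Example 1.6 and the start of Example 1.15 look like this in Mathematica — a minimal sketch assuming the same vectors as in the Maple examples (the dot product that follows then evaluates to 1):

In[1]:=

u={1,0,-1}; v={2,-3,1}; w={5,-4,0};

In[2]:=

3*u+2*v-w

Out[2]=

{2,-2,-1}

Example 1.15

In[1]:=

u={1,2,-3}; v={-3,5,2};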

    In[2]:=

    u.v

    Out[2]=

    1

    To find the (Euclidean) length of a vector, use the function Norm.

    Example 1.17

    In[1]:=

Norm[{2,3}]

Out[1]=

Sqrt[13]

The cross product of vectors in ℝ³ is computed using the command Cross.

    In[1]:=

    u={1,2,3}; v={-1,1,5};

In[2]:=

Cross[u,v]

Out[2]=

{7,-8,3}

All linear algebra commands can be performed symbolically as well as numerically.

    In[1]:=

    u={u1,u2,u3}; v={v1,v2,v3};

    In[2]:=u+3*v;

    Out[2]=

    {u1+3 v1, u2+3 v2, u3+3 v3}

    In[1]:=u.v

    Out[1]=

    u1 v1+u2 v2+u3 v3

    In[1]:=

Norm[u]

Out[1]=

Sqrt[u1^2 + u2^2 + u3^2]

    In[1]:=

    Cross[u,v];

    Out[1]=

    {-u3 v2+u2 v3, u3 v1-u1 v3, -u2 v1+u1 v2}Mathematica can also perform modular arithmetic.

    Example 1.12

    In[1]:=

    Mod[3548, 3]

    Out[1]=

    2



    Example 1.14

    In[1]:=

    u={2,2,0,1,2}; v={1,2,2,2,1};

    In[2]:=

    Mod[u.v, 3]

    Out[2]=

    1

    Example 1.40

    In[1]:=

    c={3,1,3,1,3,1,3,1,3,1,3,1}; v={0,7,4,9,2,7,0,2,0,9,4,6};

    In[2]:=

    Mod[c.v, 10]

    Out[2]=

    0

    The same syntax is used to perform modular arithmetic calculations in which theresult is a vector or matrix.

    In[1]:=

    u={2,2,0,1,2}; v={1,2,2,2,1};

    In[2]:=

    Mod[u+v, 3]

    Out[2]=

    {0,1,2,0,0}

    Chapter 2: Systems of Linear Equations

    Example 2.6

    In[1]:=

    Solve[{x-y-z==2,3*x-3*y+2*z==16,2*x-y+z==9},{x,y,z}]

    Out[1]=

{{x→3, y→-1, z→2}}

    Alternatively, you can have Mathematica perform the same calculations using matri-ces and vectors:

    In[1]:=

    A={{1,-1,-1},{3,-3,2},{2,-1,1}}; b={{2},{16},{9}};

    In[2]:=LinearSolve[A,b]

    {{3},{-1},{2}}

    To perform elementary row operations individually, begin by entering the aug-mented matrix:

    In[1]:=

    A={{1,-1,-1,2},{3,-3,2,16},{2,-1,1,9}};

    In[2]:=

    A[[2]]=A[[2]]-3*A[[1]];



    In[3]:=

MatrixForm[A]

Out[3]=

[ 1  -1  -1   2 ]
[ 0   0   5  10 ]
[ 2  -1   1   9 ]

This command adds -3 times row 1 to row 2 (putting the result in row 2). The command MatrixForm displays the result as a matrix, rather than as a list of lists.

    In[1]:=

ReplacePart[A,A[[3]],2]; ReplacePart[%,A[[2]],3]//MatrixForm

Out[1]=

[ 1  -1  -1   2 ]
[ 2  -1   1   9 ]
[ 0   0   5  10 ]

This command swaps rows 2 and 3 of the previous output (in two steps).

In[1]:=

A[[3]]=1/5*%[[3]]; MatrixForm[A]

[ 1  -1  -1   2 ]
[ 2  -1   1   9 ]
[ 0   0   1   2 ]

This command multiplies row 3 of the last result by 1/5 and stores the answer back in A.

Another command that can be useful in solving a linear system directly is RowReduce.

    Examples 2.11 and 2.13

    In[1]:=

    Ab={{1,-1,-1,2,1},{2,-2,-1,3,3},{-1,1,-1,0,-3}};

    In[2]:=

RowReduce[Ab]//MatrixForm

Out[2]=

[ 1  -1   0   1   2 ]
[ 0   0   1  -1   1 ]
[ 0   0   0   0   0 ]

    To see the solutions explicitly, use Solve rather than LinearSolve:

    In[1]:=

Solve[{w-x-y+2*z==1,2*w-2*x-y+3*z==3,-w+x-y==-3},{w,x,y,z}]

Out[1]=

{{w→2+x-z, y→1+z}}

You can solve systems over ℤp by specifying the modulus.



    Example 2.16

    In[1]:=

    A={{1,2,1},{1,0,1},{0,1,2}}; b={{0},{2},{1}};

    In[2]:=

    LinearSolve[A,b,Modulus->3]

    Out[2]=

    {{1},{2},{1}}

    Chapter 3: Matrices

Mathematica handles matrices as lists of lists. A matrix is input as a list whose elements are themselves lists containing the entries of the rows of the matrix. To display the result of a matrix calculation as a matrix, use the command MatrixForm.

    In[1]:=

A={{2,-1},{3,4}}; B={{1,2},{0,1}};

In[2]:=

A-2*B//MatrixForm

Out[2]=

[ 0  -5 ]
[ 3   2 ]

To evaluate the matrix A², use the command

In[3]:=

MatrixPower[A,2]//MatrixForm

Out[3]=

[  1  -6 ]
[ 18  13 ]

(Do not use A^2; it squares all the entries of A individually.) Use a period for matrix multiplication. (The command A*B will give an elementwise product instead.)
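For instance (an illustrative contrast using the matrix A above, not part of the original session), the two operations give different results:

A^2                (* entrywise squares: {{4,1},{9,16}} *)
MatrixPower[A,2]   (* the matrix product A.A: {{1,-6},{18,13}} *)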

    In[4]:=

A.B//MatrixForm

Out[4]=

[ 2   3 ]
[ 3  10 ]

In[5]:=

Transpose[A]//MatrixForm

Out[5]=

[  2  3 ]
[ -1  4 ]

In[6]:=

Inverse[A]//MatrixForm



Out[6]=

[  4/11  1/11 ]
[ -3/11  2/11 ]

    In[7]:=

    Det[A]

    Out[7]=

    11

Like vector operations, matrix operations can be performed symbolically in Mathematica.

    In[1]:=

    A={{a,b},{c,d}};

    In[2]:=

MatrixPower[A,2]//MatrixForm

Out[2]=

[ a² + b c    a b + b d ]
[ c a + d c   b c + d²  ]

    In[3]:=

    Det[A]

    Out[3]=

    -bc+ad

    In[4]:=

Inverse[A]//MatrixForm

Out[4]=

[  d/(a d - b c)   -b/(a d - b c) ]
[ -c/(a d - b c)    a/(a d - b c) ]

The Mod and Modulus commands are used for matrix calculations involving modular arithmetic.

    In[1]:=

    A={{2,-1},{3,4}};

    In[2]:=

    A3=Mod[A,3]

    Out[2]=

    {{2,2},{0,1}}

Do not use the MatrixForm command here, since you are going to perform another calculation on the result.



    In[3]:=

    Inverse[A3, Modulus->3]

    Out[3]=

    {{2,2},{0,1}}

    So the matrixA3 is its own inverse modulo 3.

    Mathematica can produce a basis for the null space of a matrix.

    Examples 3.45, 3.47, and 3.48

    In[1]:=

    A={{1,1,3,1,6},{2,-1,0,1,-1},{-3,2,1,-2,1},{4,1,6,1,3}};

    In[2]:=

    NullSpace[A]

    Out[2]=

    {{1,-3,0,-4,1},{-1,-2,1,0,0}}

    To find bases for the row and column spaces of A, use the RowReduce command.

    In[3]:=

    R=RowReduce[A]

    Out[3]=

    {{1,0,1,0,-1},{0,1,2,0,3},{0,0,0,1,4},{0,0,0,0,0}}

    From this output you see that the nonzero rows form a basis for the row space of A,and the columns ofAcorresponding to the positions of the leading 1s (columns 1, 2,and 4) give a basis for the column space of A. You can extract these as follows:

    In[4]:=

    TakeRows[R,3]

    Out[4]=

    {{1,0,1,0,-1},{0,1,2,0,3},{0,0,0,1,4}}

    In[5]:=

    {TakeColumns[A,{1}],TakeColumns[A,{2}], TakeColumns[A,{4}]}

    Out[5]=

    {{{1},{2},{-3},{4}},{{1},{-1},{2},{1}},{{1},{1},{-2},{1}}}

    You can define linear transformations in Mathematica.

    Examples 3.55 and 3.60

    In[1]:=

    T[x_,y_]:={x,2*x-y,3*x+4*y}

    In[2]:=

    T[1,-1]

    Out[2]=

    {1,3,-1}

    In[3]:=

    S[u_,v_,w_]:={2*u+w,3*v-w,u-v,u+v+w}

    In[4]:=

    T[x,y]



    Out[4]=

    {x,2 x-y,3 x+4 y}

To compute S(T(x,y)), you have to pick out the components of the previous output. These are known to Mathematica as %[[1]], %[[2]], and %[[3]].

    In[5]:=

    S[%[[1]],%[[2]],%[[3]]]

    Out[5]=

    {5 x+4 y,-3 x+3 (2 x-y)-4 y,-x+y,6 x+3 y}

    In[6]:=

    Simplify[%]

    Out[6]=

    {5 x+4 y,3 x-7 y,-x+y,3 (2 x+y)}

Mathematica has a built-in command for the LU factorization of a matrix.

    Example 3.33

    In[1]:=

    A={{2,1,3},{4,-1,3},{-2,5,5}};

    In[2]:=

    {LU,P,cond}=LUDecomposition[A]

    Out[2]=

    {{{2,1,3},{2,-3,-3},{-1,-2,2}},{1,2,3},1}

The term {{2,1,3},{2,-3,-3},{-1,-2,2}} contains the matrices L and U combined into a single matrix that leaves out the diagonal 1s of L. Mathematica's help facility gives the procedures Upper and Lower, which will extract L and U:

    In[3]:=

Upper[LU_?MatrixQ]:=LU Table[If[i<=j,1,0],{i,Length[LU]},{j,Length[LU]}]

In[4]:=

Lower[LU_?MatrixQ]:=LU-Upper[LU]+IdentityMatrix[Length[LU]]

    In[5]:=

MatrixForm[L=Lower[LU]]

Out[5]=

[  1   0  0 ]
[  2   1  0 ]
[ -1  -2  1 ]

In[6]:=

MatrixForm[U=Upper[LU]]

Out[6]=

[ 2   1   3 ]
[ 0  -3  -3 ]
[ 0   0   2 ]



    To check that this result is correct, compute

    In[7]:=

MatrixForm[L.U]

Out[7]=

[  2   1  3 ]
[  4  -1  3 ]
[ -2   5  5 ]

    which is A, as expected.

    Chapter 4: Eigenvalues and Eigenvectors

    Eigenvalues and eigenvectors are included in Mathematica.

    Example 4.18

    In[1]:=

    A={{0,1,0},{0,0,1},{2,-5,4}};

    If all you require are the eigenvalues ofA,you can use

    In[2]:=

    Eigenvalues[A]

    Out[2]=

    {1,1,2}

    The command Eigenvectors gives the corresponding eigenvectors.

    In[3]:=

    Eigenvectors[A]

    Out[3]=

    {{1,1,1},{0,0,0},{1,2,4}}

    Entering Eigensystem combines the two previous calculations.

    In[4]:=

    Eigensystem[A]

    Out[4]=

    {{1,1,2},{{1,1,1},{0,0,0},{1,2,4}}}

For a diagonalizable matrix A, you can find a matrix P such that P⁻¹AP is diagonal.

    Example 4.26

    In[1]:=

    A={{-1,0,1},{3,0,-3},{1,0,-1}};

    In[2]:=

    {evalues,evectors}=Eigensystem[A]

    Out[2]=

    {{-2,0,0},{{-1,3,1},{1,0,1},{0,1,0}}}

    Now you need to extract the three eigenvectors and put them into the columns ofP.

    In[3]:=

    MatrixForm[P = Transpose[evectors]]



Out[3]=

[ -1  1  0 ]
[  3  0  1 ]
[  1  1  0 ]

The corresponding diagonal matrix is created with the command

In[4]:=

MatrixForm[Diag=DiagonalMatrix[evalues]]

Out[4]=

[ -2  0  0 ]
[  0  0  0 ]
[  0  0  0 ]

To check that the matrix is correct, compute

In[5]:=

P.Diag.Inverse[P]//MatrixForm

Out[5]=

[ -1  0   1 ]
[  3  0  -3 ]
[  1  0  -1 ]

As expected, this is A! When approximating eigenvalues (or other numerical results) in Mathematica, use the floating-point evaluation command, N.

    In[1]:=

    A=N[{{1,2},{3,4}}]

    Out[1]=

    {{1.,2.},{3.,4.}}

    In[2]:=

    Eigenvalues[A]

    Out[2]=

    {5.37228,-0.372281}

    Mathematica can also handle matrix exponentials.

    Example 4.47

    In[1]:=

    A={{1,2},{3,2}};

In[2]:=

MatrixExp[A]//MatrixForm

Out[2]=

[ (3 + 2 E^5)/(5 E)       (2 (-1 + E^5))/(5 E) ]
[ (3 (-1 + E^5))/(5 E)    (2 + 3 E^5)/(5 E)    ]



    Chapter 5: Orthogonality

Mathematica includes the Gram-Schmidt Process as a built-in procedure. To use it, you must first load the orthogonalization package.

In[1]:=

<<LinearAlgebra`Orthogonalization`

In[2]:=

x1={1,-1,-1,1}; x2={2,1,0,1}; x3={2,2,1,2};

In[3]:=

GramSchmidt[{x1,x2,x3}]

Out[3]=

{{1/2,-1/2,-1/2,1/2}, {3/(2 Sqrt[5]),3/(2 Sqrt[5]),1/(2 Sqrt[5]),1/(2 Sqrt[5])}, {-1/Sqrt[6],0,1/Sqrt[6],Sqrt[2/3]}}

To produce an orthogonal (unnormalized) basis instead, set the option Normalized to False.

In[4]:=

GramSchmidt[{x1,x2,x3}, Normalized->False]

Out[4]=

{{1,-1,-1,1}, {3/2,3/2,1/2,1/2}, {-1/2,0,1/2,1}}

The QR factorization of a matrix is performed in Mathematica using the command QRDecomposition, which returns an orthogonal matrix Q and an upper triangular matrix R such that QᵀR = A.

    Example 5.15

    In[1]:=

    A={{1,2,2},{-1,1,2},{-1,0,1},{1,1,2}};

    In[2]:=

    {Q,R}=QRDecomposition[A];

In[3]:=

Map[MatrixForm,{Q,R}]

Out[3]=

Q:  [  1/2            -1/2            -1/2            1/2           ]
    [  3/(2 Sqrt[5])   3/(2 Sqrt[5])   1/(2 Sqrt[5])  1/(2 Sqrt[5]) ]
    [ -1/Sqrt[6]        0              1/Sqrt[6]      Sqrt[2/3]     ]

R:  [ 2     1          1/2            ]
    [ 0   Sqrt[5]   (3 Sqrt[5])/2     ]
    [ 0     0        Sqrt[3/2]        ]

You can check that this result is correct by verifying that QᵀR = A:



    In[4]:=

Transpose[Q].R//MatrixForm

Out[4]=

[  1  2  2 ]
[ -1  1  2 ]
[ -1  0  1 ]
[  1  1  2 ]

    Chapter 6: Vector Spaces

You can adapt Mathematica's matrix commands for use in finding bases for finite-dimensional vector spaces or testing vectors for linear independence. The key is to use the isomorphism between an n-dimensional vector space and ℝⁿ.

    Example 6.26

To test the set {1 + x, x + x², 1 + x²} for linear independence in 𝒫₂, use the standard isomorphism of 𝒫₂ with ℝ³. This gives the correspondence

    1 + x ↔ {1,1,0},   x + x² ↔ {0,1,1},   1 + x² ↔ {1,0,1}

    Now build a matrix with these vectors as its columns:

    In[1]:=

    A={{1,0,1},{1,1,0},{0,1,1}};

    In[2]:=

    RowReduce[A]

    Out[2]=

    {{1,0,0},{0,1,0},{0,0,1}}

Since the reduced row echelon form of A is I, you know that the columns of A are linearly independent. Hence, the set {1 + x, x + x², 1 + x²} is also linearly independent, by Theorem 6.7.

    Example 6.44

Extend the set {1 + x, 1 - x} to S = {1 + x, 1 - x, 1, x, x²} and use the standard isomorphism of 𝒫₂ with ℝ³ to obtain

    1 + x ↔ {1,1,0},   1 - x ↔ {1,-1,0},   1 ↔ {1,0,0},   x ↔ {0,1,0},   x² ↔ {0,0,1}

Form a matrix with these vectors as its columns and find its reduced echelon form:

    In[1]:=

    MatrixForm[A={{1,1,1,0,0},{1,-1,0,1,0},{0,0,0,0,1}}]



Out[1]=

[ 1   1  1  0  0 ]
[ 1  -1  0  1  0 ]
[ 0   0  0  0  1 ]

In[2]:=

RowReduce[A]//MatrixForm

Out[2]=

[ 1  0  1/2   1/2  0 ]
[ 0  1  1/2  -1/2  0 ]
[ 0  0   0     0   1 ]

Since the leading 1s are in columns 1, 2, and 5, these are the vectors to use for a basis. Accordingly, {1 + x, 1 - x, x²} is a basis for 𝒫₂ that contains 1 + x and 1 - x.

    Many questions about a linear transformation between finite-dimensional vector

    spaces can be handled by using the matrix of the linear transformation.

    Example 6.68

The matrix of T: W → 𝒫₂ defined by

    T([a b; b c]) = (a - b) + (b - c)x + (c - a)x²

with respect to the basis B = {[1 0; 0 0], [0 1; 1 0], [0 0; 0 1]} of W and the standard basis C = {1, x, x²} of 𝒫₂ is

    A = [  1  -1   0 ]
        [  0   1  -1 ]
        [ -1   0   1 ]

(Check this.) By Exercises 40 and 41 in Section 6.6, the rank and nullity of T are the same as those of A.

    In[1]:=

    A={{1,-1,0},{0,1,-1},{-1,0,1}};

    In[2]:=

    RowReduce[A]

    Out[2]:=

    {{1,0,-1},{0,1,-1},{0,0,0}}

You can see that the rank is 2. It now follows from the Rank Theorem that nullity(T) = 3 - rank(T) = 3 - 2 = 1. If you want to construct bases for the kernel and range of T, you can proceed as follows:

    follows:

    In[3]:=

    NullSpace[A]

    Out[3]=

    {{1,1,1}}



This is the coordinate vector of a matrix with respect to the basis B of W. Hence the matrix is

    1·[1 0; 0 0] + 1·[0 1; 1 0] + 1·[0 0; 0 1] = [1 1; 1 1]

so a basis for ker(T) is {[1 1; 1 1]}. The first two columns of A form a basis for the column space of A, since the leading 1s are in those columns. These correspond to a basis for the range of T. This time you have coordinate vectors with respect to the basis C of 𝒫₂, so a basis for range(T) is {1 - x², x - x²}.

    Chapter 7: Distance and Approximation

Computations involving the inner product ⟨f, g⟩ = ∫ₐᵇ f(x)g(x) dx in C[a, b] are easy to do in Mathematica.

    Example 7.6

    In[1]:=

    f=x; g=3*x-2;

    In[2]:=

Integrate[f^2,{x,0,1}]

Out[2]=

1/3

In[3]:=

Integrate[(f-g)^2,{x,0,1}]

Out[3]=

4/3

    In[4]:=

    Integrate[f*g,{x,0,1}]

    Out[4]=

    0

    For vector norms, Mathematica has the following:

    In[1]:=

u={1,-2,5};

In[2]:=

Norm[u]

Out[2]=

Sqrt[30]

    In[3]:=

    SumNorm[x_]:=Apply[Plus,Map[Abs,x]]



    In[4]:=

    SumNorm[u]

    Out[4]=

    8

    In[5]:=

    MaxNorm[x_]:=Apply[Max,Map[Abs,x]]

    In[6]:=

    MaxNorm[u]

    Out[6]=

    5

    You can perform least squares approximation by having Mathematica solve thenormal equations.

    Example 7.26

    In[1]:=

    A={{1,1},{1,2},{1,3}}; b={{2},{2},{4}};

    In[2]:=

LinearSolve[Transpose[A].A,Transpose[A].b]

Out[2]=

{{2/3},{1}}

If A does not have linearly independent columns, then there is not a unique least squares solution of Ax = b. In this case, the version of least squares that uses the pseudoinverse should be used.

    Example 7.40

    In[1]:=

    A={{1,1},{1,1}}; b={{0},{1}};

    In[2]:=

PseudoInverse[A].b

Out[2]=

{{1/4},{1/4}}

    The singular value decomposition is also built into Mathematica. It can be used onlyon numerical matrices. One way to ensure that a matrix is treated numerically is tomake sure that some of its entries include a decimal point.

Example 7.34(b)

In[1]:=

    A={{1.,1.},{1.,0},{0,1.}};

    In[2]:=

    {U,S,V}=SingularValueDecomposition[A]

    Out[2]=

    {{{0.816497,0.408248,0.408248},

    {2.33321 x 10-16,0.707107,-0.707107}}, {1.73205,1.},

    {{0.707107,0.707107}, {0.707107,-0.707107}}}



This command gives a full SVD of A: The first entry in the list of lists is the matrix U, which is followed by a list containing the singular values, and finally V. To see the matrices, use the command

    In[3]:=

Map[MatrixForm,{U,DiagonalMatrix[S],V}]

Out[3]=

U:  [ 0.816497        0.408248   0.408248 ]
    [ 2.33321×10⁻¹⁶   0.707107  -0.707107 ]

S:  [ 1.73205  0  ]
    [ 0        1. ]

V:  [ 0.707107   0.707107 ]
    [ 0.707107  -0.707107 ]

Notice that Mathematica always returns the singular values in a square diagonal matrix and that the matrices U and V are not necessarily square. This is different from the way the text presents these values. However, the factorization has the same property:

In[4]:=

MatrixForm[Transpose[U].DiagonalMatrix[S].V]

Out[4]=

[ 1.               1.             ]
[ 1.              -3.17341×10⁻¹⁶  ]
[ 7.60364×10⁻¹⁶    1.             ]

which, as expected, is A. [The (2, 2) and (3, 1) entries are effectively zeros.] As we have already noted, the pseudoinverse is a built-in function in Mathematica.

    Example 7.39(b)

    In[1]:=

    A={{1,1},{1,0},{0,1}};

    In[2]:=

PseudoInverse[A]

Out[2]=

{{1/3, 2/3, -1/3}, {1/3, -1/3, 2/3}}


MATLAB  MATLAB stands for Matrix Laboratory. As its name suggests, it is designed to handle matrix (and vector) calculations, which it does with ease.

Chapter 1: Vectors

MATLAB distinguishes between row and column vectors. The entries of a row vector are separated by spaces or commas, while the entries of a column vector are separated by semicolons:


    v=[1 2 3], w=[1;2;3]

    v=

    1 2 3

    w=

    1

    2

    3

    Example 1.6

    u=[1 0 -1]; v=[2 -3 1]; w=[5 -4 0];

    3*u+2*v-w

    ans =

    2 -2 -1

    A semicolon after an input statement tells MATLAB to suppress the output. Thus, inthe above example, the vectors u, v, and wwere not displayed.

    Example 1.15u=[1 2 -3]; v=[-3 5 2];

    dot(u,v)

    ans =

    1

    Example 1.17

    norm([2 3])

    ans =

    3.6056

If MATLAB's symbolic math toolbox is installed on your system, you can also obtain exact answers.

    sym(norm([2 3]))

    ans =

    sqrt(13)

The cross product of vectors in ℝ³ is computed using the command cross.

    u=[1 2 3]; v=[-1 1 5];

    cross(u, v)

    ans =

    7 -8 3

    MATLAB can also perform modular arithmetic.

    Example 1.12

    mod(3548, 3)

    ans =

    2



    Example 1.14

    u=[2 2 0 1 2]; v=[1 2 2 2 1];

    mod(dot(u, v), 3)

    ans =

    1

    Example 1.40

    c=[3 1 3 1 3 1 3 1 3 1 3 1]; v=[0 7 4 9 2 7 0 2 0 9 4 6]

    mod(dot(c, v), 10)

    ans =

    0

The same syntax is used to perform modular arithmetic calculations in which the result is a vector or matrix.

    u=[2 2 0 1 2]; v=[1 2 2 2 1];

    mod(u+v, 3)

    ans =

    0 1 2 0 0

    Chapter 2: Systems of Linear Equations

Matrices are entered in MATLAB the way they are written by hand: row by row, not column by column. The entries in each row are separated by spaces or commas, and a semicolon is used to indicate the end of a row.

    Example 2.6

    A=[1 -1 -1; 3 -3 2; 2 -1 1]

    A =

    1 -1 -1

    3 -3 2

    2 -1 1

    b=[2; 16; 9]

    b =

    2

    16

    9

    linsolve(A, b)

    ans=

    [ 3]

    [-1]

    [ 2]



    The following syntax does the same thing as that above:

    A\b

    ans =

    3.0000

    -1.0000

    2.0000

    You can also obtain the solution of a system from the reduced row echelon formof its augmented matrix. (This is necessary if the system has infinitely many or no so-lutions, since MATLAB cannot handle these cases.)

    Ab=[1 -1 -1 2; 3 -3 2 16; 2 -1 1 9];

    rref(Ab)

    ans =

    1 0 0 3

    0 1 0 -1

    0 0 1 2

    If you like, you can ask MATLAB to walk you through the row reduction process.(Keep in mind, though, that MATLAB uses partial pivoting strategies.) This is donewith the command

    rrefmovie(Ab)

Use MATLAB's mod command to handle systems over ℤp.

    Example 2.16

    A=[1 2 1; 1 0 1; 0 1 2]; b=[0; 2; 1];

    mod(A\b, 3)

    ans =

    1

    2

    1

This approach sometimes needs a little help, however. Suppose you need to solve the system Ax = b, where

    A = [ 1  1  2 ]        b = [ 3 ]
        [ 3  0  1 ]            [ 3 ]
        [ 2  2  3 ]            [ 2 ]

over ℤ5. You obtain

    5. You obtainA=[1 1 2;3 0 1;2 2 3]; b=[3;3;2];

    sol=mod(A\b,5)

    sol =

    14/3

    1/3

    4

This is not what you want, since the solution is not a vector with entries in ℤ5. In this case, you need to use the following theorem, due to Euler:



Theorem E.1  If p is prime, then for all k ≠ 0 in ℤp,

    k^(p-1) = 1

In this example, p = 5, so 3^(5-1) = 3^4 = 1 in ℤ5. Therefore, multiplying the solution by 3^4 = 1 will not change it but will get rid of the fractions.
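As a quick check of the theorem in this case (an illustrative computation, not part of the original session), MATLAB confirms that 3^4 reduces to 1 modulo 5:

mod(3^4, 5)

ans =

     1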

    mod(sol*(3^4),5)

    ans =

    3

    2

    4

Chapter 3: Matrices

All of the basic matrix operations are built into MATLAB, as the following examples illustrate.

    A=[2 -1;3 4]; B=[1 2;0 1];

    A-2*B

    ans =

    0 -5

    3 2

    To evaluate the matrixA2, use the command

    A^2

ans =

     1    -6
    18    13

The same calculation is done by mpower(A,2). Use the command * for matrix multiplication.

    A*B

    ans =

    2 3

    3 10

    transpose(A)

    ans =

    2 3

    -1 4

A more compact way to take the transpose of a matrix A is to type A'.

    inv(A)

    ans =

    4/11 1/11

    -3/11 2/11


Entering A^(-1) does the same thing.

    det(A)

    ans =

    11

    Like vector operations,matrix operations can be performed symbolically in MATLAB.

    A=sym('[a b; c d]')

    A =

    [a, b]

    [c, d]

    A^2

    ans =

    [a^2+b*c, a*b+b*d]

[c*a+d*c, b*c+d^2]

    det(A)

    ans =

    a*d-b*c

    inv(A)

    ans =

    [d/(a*d-b*c), -b/(a*d-b*c)]

    [-c/(a*d-b*c), a/(a*d-b*c)]

    For matrix calculations involving modular arithmetic, proceed as with systems oflinear equations.

    A=[2 -1; 3 4];

    A3=mod(A, 3)

    A3 =

    2 2

    0 1

    mod(inv(A3), 3)

    ans =

    1/2 2

    0 1

This is not a matrix over ℤ3, but Euler's Theorem tells us that 2^(3-1) = 2^2 = 1. Therefore, multiplying the last answer by 2^2 mod 3 will give the inverse of A3 over ℤ3:

    mod(ans*(2^2),3)

    ans =

    2 2

    0 1

    So A3 is its own inverse modulo 3.Another built-in function of MATLAB will determine the rank of a matrix.

    rank(A)

    ans =

    2



MATLAB can produce bases for the row, column, and null spaces of a matrix.

    Examples 3.45, 3.47, and 3.48

    A=[1 1 3 1 6; 2 -1 0 1 -1; -3 2 1 -2 1; 4 1 6 1 3];

    rref(A)

    ans =

    1 0 1 0 -1

    0 1 2 0 3

    0 0 0 1 4

    0 0 0 0 0

    From this matrix you can obtain a basis for row(A) by taking the nonzero rows.

    colspace(sym(A))

    ans =

    [1, 0, 0]

    [0, 1, 0]

    [0, 0, 1]

    [1, 6, 3]

The basis given above for the column space of A is not the same as in the text. Can you show that the two are equivalent?

    null(sym(A))

    ans =

    [-1, 1]

    [-2, -3]

    [-1, 0]

    [ 0, -4]

    [ 0, 1]

The same result arises from

null(A, 'r')

    ans =

    -1 1

    -2 -3

    1 0

    0 -4

    0 1

The 'r' tells MATLAB to give the result in rational arithmetic. Without it, you get a numerical answer. [In fact, MATLAB gives an orthonormal basis for null(A).]

null(A)

ans =

    -0.5413 0.1443

    -0.5867 -0.6327

    0.4421 0.0400

    0.3968 -0.7371

    -0.0992 0.1843



MATLAB has a built-in command for the LU factorization of a matrix.

    Example 3.33

    A=[2 1 3; 4 -1 3; -2 5 5];

[L, U]=lu(A)

    L =

    0.5000 0.3333 1.0000

    1.0000 0 0

    -0.5000 1.0000 0

    U =

    4.0000 -1.0000 3.0000

    0 4.5000 6.5000

    0 0 -0.6667

    To check that this result is correct, compute

    L*U

    ans =

    2.0000 1.0000 3.0000

    4.0000 -1.0000 3.0000

    -2.0000 5.0000 5.0000

    which is A, as expected.

    Chapter 4: Eigenvalues and Eigenvectors

    MATLAB can handle eigenvalues, eigenvectors, and the characteristic polynomial.

    Example 4.18

    A=[0 1 0; 0 0 1; 2 -5 4];

    The characteristic polynomial ofAcan be produced using the command

    c=poly(A)

    c =

    1.0000 -4.0000 5.0000 -2.0000

This command gives the coefficients of the characteristic polynomial in order of decreasing powers of x. To see the actual polynomial, convert this result into standard

    form:

    poly2sym(c)

    ans =

    x^3-4*x^2+5*x-2

    To solve the characteristic equation (in order to obtain the eigenvalues of A), ask forthe roots ofc.



    roots(c)

    ans =

    2.0000

    1.0000 + 0.0000i

    1.0000 - 0.0000i

    If all you require are the eigenvalues ofA,you can use

    eig(A)

    ans =

    1.0000 + 0.0000i

    1.0000 - 0.0000i

    2.0000

To see both the eigenvalues and the bases for the corresponding eigenspaces, use the longer form:

[P,D]=eig(A,'nobalance')

P =

0.5774 0.5774 -0.2182

    0.5774 0.5774 -0.4364

    0.5774 0.5774 -0.8729

    D=

    1.0000 0 0

    0 1.0000 0

    0 0 2.0000

Here, the columns of P are eigenvectors of A corresponding to the eigenvalues of A, which appear on the diagonal of D.

For a diagonalizable matrix A, use the same syntax to find a matrix P such that P⁻¹AP is diagonal.

    Example 4.26

    A=[-1 0 1; 3 0 -3; 1 0 -1];

    [P,D]=eig(A,'nobalance')

    P =

    0.3015 0.3015 -0.0134

    0.9045 -0.9045 -0.9998

    0.3015 -0.3015 -0.0134

    D =

    -0.0000 0 0

    0 -2.0000 0

    0 0 0.0000

There are three linearly independent eigenvectors, so A is diagonalizable. Thus, PDP⁻¹ should equal A.



P*D*inv(P)

    ans =

    -1.0000 0.0000 1.0000

    3.0000 0.0000 -3.0000

    1.0000 -0.0000 -1.0000

Bingo! MATLAB can also handle matrix exponentials using the command expm.

    Example 4.47

    A=[1 2; 3 2];

    expm(A)

    ans =

    22.0600 21.6921

    32.5382 32.9060

    If the symbolic math toolbox is installed, you can get this result exactly.

    expm(sym(A))

    ans =

    [ 3/5*exp(-1)+2/5*exp(4), 2/5*exp(4)-2/5*exp(-1)]

    [ 3/5*exp(4)-3/5*exp(-1), 2/5*exp(-1)+3/5*exp(4)]

    Chapter 5: Orthogonality

MATLAB includes the Gram-Schmidt Process as a built-in procedure. It can be used to convert a basis for a subspace of ℝⁿ into an orthonormal basis for that subspace. You accomplish this by putting the given basis vectors into the columns of a matrix and then using the command orth.

    Example 5.13

    x1=[1; -1; -1; 1]; x2=[2; 1; 0; 1]; x3=[2; 2; 1; 2];

    A=[x1 x2 x3]

    A =

    1 2 2

    -1 1 2

    -1 0 1

1 1 2

Q=orth(A)

Q =

    0.6701 -0.3781 -0.5241

    0.4806 0.6444 -0.2321

    0.1609 0.6018 0.2805

    0.5423 -0.2823 0.7700

    You can check that these vectors are orthonormal by verifying that QTQ I.



    Q'*Q

    ans =

    1.0000 0.0000 -0.0000

    0.0000 1.0000 0.0000

    -0.0000 0.0000 1.0000

MATLAB can also perform the QR factorization of a matrix. It returns the upper triangular matrix R.

    Example 5.15

    A=[1 2 2; -1 1 2; -1 0 1; 1 1 2];

    qr(A)

    ans =

    -2.0000 -1.0000 -0.5000

    -0.5000 -2.2361 -3.3541

    -0.0000 0.4472 -1.2247

    0.5000 0 0.9526

To obtain the orthonormal matrix Q, you must use a longer form. The command [Q,R] = qr(A,0) ensures that the dimensions of Q agree with those of A and that R is a square matrix.

    [Q,R]=qr(A,0)

    Q =

    -0.5000 -0.6708 0.4082

    0.5000 -0.6708 -0.0000

    0.5000 -0.2236 -0.4082

    -0.5000 -0.2236 -0.8165

    R =

    -2.0000 -1.0000 -0.5000

    0 -2.2361 -3.3541

0 0 -1.2247

    You can check that this result is correct by verifying that A QR:

    Q*R

    ans =

    1.0000 2.0000 2.0000

    -1.0000 1.0000 2.0000

    -1.0000 0.0000 1.0000

1.0000 1.0000 2.0000

(The command [Q,R] = qr(A) produces the modified QR factorization discussed in the text.)

    Chapter 6: Vector Spaces

You can adapt MATLAB's matrix commands for use in finding bases for finite-dimensional vector spaces or testing vectors for linear independence. The key is to use the isomorphism between an n-dimensional vector space and ℝⁿ.



    Example 6.26

To test the set {1 + x, x + x², 1 + x²} for linear independence in 𝒫₂, use the standard isomorphism of 𝒫₂ with ℝ³. This gives the correspondence

    1 + x ↔ [1; 1; 0],   x + x² ↔ [0; 1; 1],   1 + x² ↔ [1; 0; 1]

    Now build a matrix with these vectors as its columns:

    A=[1 0 1;1 1 0;0 1 1]

    A =

    1 0 1

    1 1 0

    0 1 1

    rank(A)

ans =

     3

Since rank(A) = 3, the columns of A are linearly independent. Hence, the set {1 + x, x + x², 1 + x²} is also linearly independent, by Theorem 6.7.

    Example 6.44

Extend the set {1 + x, 1 - x} to S = {1 + x, 1 - x, 1, x, x²} and use the standard isomorphism of 𝒫₂ with ℝ³ to obtain

    1 + x ↔ [1; 1; 0],   1 - x ↔ [1; -1; 0],   1 ↔ [1; 0; 0],   x ↔ [0; 1; 0],   x² ↔ [0; 0; 1]

    Form a matrix with these vectors as its columns and find its reduced echelon form:

    A=[1 1 1 0 0;1 -1 0 1 0;0 0 0 0 1]

    A =

    1 1 1 0 0

    1 -1 0 1 0

    0 0 0 0 1

    rref (A)

    ans =

    1.0000 0 0.5000 0.5000 0

    0 1.0000 0.5000 -0.5000 0

    0 0 0 0 1.0000

Since the leading 1s are in columns 1, 2, and 5, these are the vectors to use for a basis. Accordingly, {1 + x, 1 - x, x²} is a basis for 𝒫₂ that contains 1 + x and 1 - x.

Many questions about a linear transformation between finite-dimensional vector spaces can be handled by using the matrix of the linear transformation.



    Example 6.68

The matrix of T: W → 𝒫₂ defined by

    T([a b; b c]) = (a - b) + (b - c)x + (c - a)x²

with respect to the basis B = {[1 0; 0 0], [0 1; 1 0], [0 0; 0 1]} of W and the standard basis C = {1, x, x²} of 𝒫₂ is

    A = [  1  -1   0 ]
        [  0   1  -1 ]
        [ -1   0   1 ]

(Check this.) By Exercises 40 and 41 in Section 6.6, the rank and nullity of T are the same as those of A.

    A=[1 -1 0;0 1 -1;-1 0 1];

    rank(A)

    ans =

2

It now follows from the Rank Theorem that nullity(T) = 3 - rank(T) = 3 - 2 = 1.

If you want to construct bases for the kernel and range of T, you can proceed as follows:

    null(A,'r')

    ans =

    1

    1

    1

This is the coordinate vector of a matrix with respect to the basis B of W. Hence the matrix is

    1·[1 0; 0 0] + 1·[0 1; 1 0] + 1·[0 0; 0 1] = [1 1; 1 1]

so a basis for ker(T) is {[1 1; 1 1]}.

    A=[1 -1 0; 0 1 -1; -1 0 1];

    rref(A)

    ans =

    1 0 -1

0 1 -1
0 0 0

Since the leading 1s are in columns 1 and 2, these columns in the original matrix form a basis for col(A). That is,

    col(A) = span( [1; 0; -1], [-1; 1; 0] )



This time you have coordinate vectors with respect to the basis C of 𝒫₂, so a basis for range(T) is {x - x², 1 - x²}.

    Chapter 7: Distance and Approximation

Computations involving the inner product ⟨f, g⟩ = ∫ₐᵇ f(x)g(x) dx in C[a, b] can be done using MATLAB's symbolic math toolbox. Symbolic expressions are defined by placing the expression in single quotes and using the sym command.

    Example 7.6

    f=sym('x'); g=sym('3*x-2');

The command int(F,a,b) computes ∫ₐᵇ F(x) dx.

    int(f^2,0,1)

    ans =

    1/3

    int((f-g)^2,0,1)

    ans =

    4/3

    int(f*g,0,1)

    ans=

    0

    The various norms discussed in Section 7.2 are built into MATLAB.The format of thecommand is norm(v,n), where v is a vector and n is a positive integer or the ex-pression inf. Use n=1 for the sum norm, n=2 for the Euclidean norm, and n=inf

    for the max norm. Typing norm(v) is the same as typing norm(v,2).

v=[1 -2 5];

norm(v,1)

    ans =

    8

    norm(v)

    ans =

    5.4772

    norm(v,inf)

    ans =

    5

    If the command norm is applied to a matrix, you get the induced operator norm.

    Example 7.19

    A=[1 -3 2; 4 -1 -2; -5 1 3]

    norm(A,1)

    ans =

    10

    norm(A,inf)

    ans =

    9



The condition number of a square matrix A can be obtained with the command cond(A,n), where n is equal to 1, 2, or inf.

    Example 7.21

    A=[1 1; 1 1.0005];

c1=cond(A,1), c2=cond(A), cinf=cond(A,inf)

c1 =

   8.0040e+03

    c2 =

    8.0020e+03

    cinf =

    8.0040e+03

MATLAB can perform least squares approximation in one step. If the system Ax = b has a nonsquare coefficient matrix A, then the command A\b returns a least squares solution of the system.

    Example 7.26

    A=[1 1; 1 2; 1 3]; b=[2; 2; 4];

    A\b

    ans =

    0.6667

    1.0000

The singular value decomposition is also built into MATLAB.

    Example 7.34(b)

    A=[1 1; 1 0; 0 1];

svd(A)

ans =

    1.7321
    1.0000

This command produces a vector containing numerical approximations to the singular values of A. If you want to see the matrices U, Σ, and V, then you must enter

    [U,S,V] = svd(A)

    U =

    0.8165 0.0000 -0.5774

    0.4082 -0.7071 0.5774

    0.4082 0.7071 0.5774

    S =

    1.7321 0

    0 1.0000

    0 0

    V =

    0.7071 -0.7071

    0.7071 0.7071



You can easily verify that you have an SVD of A by computing UΣVᵀ.

    U*S*V'

    ans =

    1.0000 1.0000

    1.0000 -0.0000

    0.0000 1.0000

    As expected, the result is A.

    The pseudoinverse is a built-in function in MATLAB.

    Example 7.39(b)

    A=[1 1; 1 0; 0 1];

    pinv(A)

    ans =

    0.3333 0.6667 -0.3333

    0.3333 -0.3333 0.6667

Using the SVD, you can find minimum length least squares solutions of linear systems.

    Example 7.40

    A=[1 1; 1 1]; b=[0; 1];

    pinv(A)*b

    ans =

    0.2500

    0.2500

MATLAB can apply the SVD to achieve digital image compression. The following procedure will convert a grayscale image called file.ext (where file is the name of the file and ext is the file format—JPEG, GIF, TIFF, etc.) into a matrix and then use the SVD to produce rank-k approximations to the original for any value of k up to the rank of the matrix. The following example uses k = 12.

    A = imread('file.ext');

    A = double(A);

    figure(1);

    colormap(gray);

    imagesc(A);

    [U,S,V] = svd(A);

    k = 12;

    Ak = U(:, 1:k)*S(1:k, 1:k)*V(:, 1:k)';

    figure(2);

    colormap(gray);

    imagesc(Ak);
