Linear Algebra Cheatsheet


7/30/2019 Linear Algebra Cheatsheet

    A matrix is a rectangular array of numbers.

    A vector is a matrix with only one column.

    The components of a vector are the entries in it.

    A matrix is in RREF (Reduced Row-Echelon Form) if it satisfies all of the following:

    If a row has nonzero entries, then its first nonzero entry is 1 (the leading 1, or pivot).

    If a column contains a leading 1, then all other entries in that column are zero.

    If a row contains a leading 1, then each row above it contains a leading 1 further to the left.

    Systems of linear equations can be solved by the following algorithm:

    Write the augmented matrix of the system (the coefficient matrix with the right-hand sides
    appended). Place a cursor in the top entry of the first nonzero column of this matrix.

    1. If the cursor entry is zero, swap the cursor row with some row below it to make the cursor
    entry nonzero.

    2. Divide the cursor row by the cursor entry. (After this, the cursor entry will be 1, which is
    exactly what we want.)

    3. Eliminate all other entries in the cursor column, by subtracting suitable multiples of the
    cursor row from the other rows.

    4. Move the cursor one row down and one column to the right. If the new cursor entry and all
    entries below it are zero, move the cursor one column to the right in the same row. Repeat this
    step as necessary. If there are no more rows or columns, you're done.

    5. Return to step 1.

    Now you have the RREF of the original augmented matrix.
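The steps above translate almost line by line into code. A minimal Python sketch of the cursor algorithm (the function name `rref` and the list-of-rows matrix representation are my own choices, not from the cheatsheet):

```python
def rref(M):
    """Reduced row-echelon form of M (a list of rows), via the cursor algorithm."""
    A = [[float(x) for x in row] for row in M]   # work on a float copy
    rows, cols = len(A), len(A[0])
    r = 0                                        # cursor row
    for c in range(cols):                        # cursor column
        # find a row at or below the cursor with a nonzero entry in column c
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue                             # all zero at/below cursor: move right
        A[r], A[pivot] = A[pivot], A[r]          # step 1: swap rows
        A[r] = [x / A[r][c] for x in A[r]]       # step 2: scale so the pivot is 1
        for i in range(rows):                    # step 3: eliminate other entries
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1                                   # step 4: cursor moves down
        if r == rows:
            break
    return A
```

For the augmented matrix [[1, 2, 3], [4, 5, 6]] this reduces to rows [1, 0, -1] and [0, 1, 2]; an inconsistent system produces a row of the form [0 ... 0 | 1], as described below.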

    Solve the resulting equations. If there is one of the form 0 = 1, then the system is inconsistent.

    There are no solutions.

    A system is inconsistent if and only if the RREF of its matrix contains a row of the form [0 0 ... 0 |

    1], representing the equation 0 = 1. In this case the system has no solutions.

    The rank of a matrix is defined as the number of leading 1's in its RREF.

    If the rank of the coefficient matrix of the system is smaller than the number of variables in the

    equations, then the system has either infinitely many solutions, or none at all.

    If the rank of the coefficient matrix of the system is equal to the number of variables in the
    equations, then the system has at most one solution.

    The identity matrix (I, or In) is an n x n matrix with 1's on the diagonal and 0's everywhere else.

    A linear system of equations has exactly one solution if and only if it is consistent and the RREF of
    its coefficient matrix has a leading 1 in every column (for a square system, rref(A) = In, an
    identity matrix).

    The norm of a vector is the square root of the sum of the squares of its components (Pythagoras'
    theorem). The norm of a vector x is written ||x||; equivalently, it is the square root of the dot
    product of x with itself.

    The dot product of two vectors is the sum of the products of corresponding components (the i-th
    component of the first vector times the i-th component of the second).
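Both definitions are one-liners in code. A small Python sketch (the helper names `dot` and `norm` are mine):

```python
import math

def dot(x, y):
    """Sum of products of corresponding components."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """||x|| = sqrt(x . x), Pythagoras in n dimensions."""
    return math.sqrt(dot(x, x))
```

For example, dot([1, 2], [3, 4]) is 1*3 + 2*4 = 11, and norm([3, 4]) is 5.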

    The sum of two matrices of the same size is defined entry by entry (corresponding entries are
    summed to make one new entry in the resulting matrix).

    The product of a matrix and a scalar is defined entry by entry (each entry is multiplied by the
    scalar to give the corresponding entry of the resulting matrix).

    The product of a matrix and a vector is a new vector: the sum of the columns of the matrix, each
    multiplied by the corresponding component of the vector.
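A sketch of this column-wise view of the matrix-vector product in Python (assuming the matrix is stored as a list of rows; the name `mat_vec` is mine):

```python
def mat_vec(A, x):
    """A*x as a linear combination of the columns of A:
    the j-th component of x scales the j-th column."""
    result = [0] * len(A)
    for j in range(len(A[0])):       # for each column of A...
        for i in range(len(A)):      # ...add x[j] times that column to the result
            result[i] += x[j] * A[i][j]
    return result
```

For example, mat_vec([[1, 2], [3, 4]], [5, 6]) is 5*[1, 3] + 6*[2, 4] = [17, 39].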

    A linear transformation is a map of the form T(x) = Ax: multiplying a vector x by a fixed matrix A
    yields a new vector.

    A scaling is defined by a matrix of the form [k 0 ; 0 k] (Matlab notation)

    A projection is defined by a matrix of the form [u1^2 u2*u1 ; u2*u1 u2^2], where u = [u1 ; u2]
    is a unit vector on the line L we're projecting to.

    A reflection is defined by a matrix of the form [a b ; b -a], where a^2 + b^2 = 1.
    Furthermore, ref[L](x) = 2proj[L](x) - x (where x is the vector being reflected, and L the
    line in which it is reflected).

    A rotation done counterclockwise in R2 through angle t is [cos(t) -sin(t) ; sin(t) cos(t)].

    Linear Algebra Cheatsheet http://www.gijsk.com/temp/matrices.html (retrieved 5/11/2013 1:45 AM)


    A horizontal shear is of the form [1 k ; 0 1] - a vertical shear is of the form [1 0 ; k 1].
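The transformation matrices above can be checked numerically. A Python sketch (the function names are mine; `projection` assumes u is a unit vector and `reflection` assumes a^2 + b^2 = 1, as stated above):

```python
import math

def rotation(t):
    """Counterclockwise rotation of R2 through angle t."""
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def projection(u1, u2):
    """Projection onto the line spanned by the UNIT vector u = [u1 ; u2]."""
    return [[u1 * u1, u1 * u2], [u1 * u2, u2 * u2]]

def reflection(a, b):
    """Reflection matrix [a b ; b -a]; requires a^2 + b^2 = 1."""
    return [[a, b], [b, -a]]

def shear_h(k):
    """Horizontal shear [1 k ; 0 1]."""
    return [[1, k], [0, 1]]

def apply(M, x):
    """Multiply the 2x2 matrix M with the vector x."""
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]
```

For example, projection(1, 0) projects onto the x-axis, so apply(projection(1, 0), [3, 4]) is [3, 0], and reflection(1, 0) mirrors across the x-axis.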

    A function T from X to Y is invertible if the equation T(x) = y has a unique solution x in X for each

    y in Y.

    A matrix A is invertible if and only if A is a square matrix (n x n) and rref(A) = In

    If A (an n x n matrix) is invertible, and b is a vector in Rn, the linear system Ax = b has exactly one

    solution - otherwise there are infinitely many or none at all.

    Computing rref([A | In]) for an invertible matrix A yields [In | B], where B is the
    inverse of A.

    The product of a matrix and its inverse is an identity matrix.

    The determinant of a 2x2 matrix [a b; c d] is ad - bc.
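For the 2x2 case, the determinant gives a closed-form inverse, A^-1 = (1 / (ad - bc)) * [d -b ; -c a], a standard formula not spelled out above. A Python sketch (the name `inverse_2x2` is mine):

```python
def inverse_2x2(A):
    """Inverse of [a b ; c d]: (1 / (a*d - b*c)) * [d -b ; -c a]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("determinant is zero, so the matrix is not invertible")
    return [[d / det, -b / det], [-c / det, a / det]]
```

For example, [[1, 2], [3, 4]] has determinant -2, so its inverse is [[-2, 1], [1.5, -0.5]].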

    Given matrices B and A: (BA)^-1 == A^-1 * B^-1

    So order matters!

    Given matrices B and A, AB does not necessarily equal BA. If it does, A and B commute

    Given matrices A, B and C, (AB)C == A(BC)

    Given matrices A, B, C and D, A(C + D) == AC + AD and (A + B)C == AC + BC (distributive
    laws)

    The span of vectors v1 ... v[n] is the set of all linear combinations c1 * v1 + c2 * v2 + ... + c[n] *
    v[n].

    A subset of Rn is called a subspace of Rn iff:

    it contains the zero vector.

    it is closed under addition - if v1 and v2 are both in the subset, then so is v1 + v2.

    it is closed under scalar multiplication - if v1 is in the subset, then for any scalar k, k*v1 is in
    the subset as well.

    The image of a matrix T is the span of the column vectors of T.

    im(A) is a subspace of Rn, where n is the number of rows of A.

    The only images for R2 are R2 itself, the zero vector, and any of the lines through the origin.

    The kernel of a matrix consists of all zeros of the transformation: the solutions of T(x) = Ax = 0
    (the zero vector).

    The kernel of a matrix A is obtained by solving the linear system Ax = 0 (so rref[A | 0], etc.).

    ker(A) is a subspace of Rm, where m is the number of columns of A.

    A set of vectors is linearly independent if none of them can be described as a linear combination

    of the other vectors. If there is a vector that can be described in such a way, that vector is

    redundant.

    If a set of vectors in V is linearly independent and spans V, then it is a basis of the
    subspace V.

    To construct a basis of the image of matrix A, list all the column vectors of A and omit the
    redundant ones.

    A (linear) relation between vectors v1 ... v[n] is an equation c1*v1 + ... + c[n]*v[n] = 0; a
    nontrivial relation (with not all c's zero) exists iff the vectors are linearly dependent.

    The dimension of a subspace V is the number of vectors in a basis of V.

    dim(ker A) + dim(im A) = the number of columns in A (Rank-Nullity theorem).

    Two vectors are orthogonal if their dot product is 0.

    A vector is a unit vector if its length is 1, so the dot product of the vector with itself is 1.

    A set of vectors is called orthonormal if they are all unit vectors and orthogonal to one another.

    Orthonormal vectors u[1] ... u[n] in Rn form an orthonormal basis of Rn

    The orthogonal projection of some vector x on a subspace V with orthonormal basis u1,...,u[m]

    can be described as: projV(x) = (u1 . x)u1 + ... + (u[m] . x)u[m].
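This formula is straightforward to implement when the basis really is orthonormal. A Python sketch (helper names are mine; a small `dot` is re-defined so the snippet is self-contained, and no orthonormality check is performed):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def project(x, basis):
    """Orthogonal projection of x onto span(basis); basis must be a list of
    ORTHONORMAL vectors u1, ..., um."""
    result = [0.0] * len(x)
    for u in basis:
        c = dot(u, x)                                   # coefficient (u . x)
        result = [r + c * ui for r, ui in zip(result, u)]
    return result
```

For example, projecting [3, 4, 5] onto the xy-plane (orthonormal basis [1, 0, 0], [0, 1, 0]) gives [3, 4, 0].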

    The orthogonal complement of some subspace V is the kernel of the orthogonal projection onto V,and consists of the set of vectors x in Rn that are orthogonal to all vectors in V. The intersection of

    V and its complement consists of the zero vector alone. dim(V) + dim(V[orthogonal]) = n.

    (V[orthogonal])[orthogonal] = V.


    The orthogonal projection of a vector x is equal to x itself if and only if x is in V.

    If x and y are vectors in Rn, then the absolute value of their dot product is smaller than or equal to

    the product of their norms. (Cauchy-Schwarz inequality)

    The angle between two vectors x and y is defined as arccos((x . y) / (||x|| * ||y||)).
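A Python sketch of this angle computation, taking the arccosine of the normalized dot product (the name `angle` is mine; it assumes neither vector is zero):

```python
import math

def angle(x, y):
    """Angle between x and y: arccos of (x . y) / (||x|| * ||y||).
    By Cauchy-Schwarz the argument lies in [-1, 1], though float rounding
    can nudge it slightly outside for nearly parallel vectors."""
    d = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return math.acos(d / (nx * ny))
```

For example, perpendicular vectors give an angle of pi/2, and parallel vectors give 0.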

    Given a basis v1,...,v[m] of a subspace V of Rn. For j = 2,...,m, we resolve the vector v[j] into its
    components parallel and perpendicular to the span of the preceding vectors, and keep only the
    perpendicular component. Then we convert our results into unit vectors by dividing each vector
    by its norm. The resulting set of vectors is an orthonormal basis of V (Gram-Schmidt process).
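A Python sketch of the Gram-Schmidt process (names are mine; a small `dot` is re-defined so the snippet is self-contained, and the input vectors are assumed linearly independent so no norm is zero):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(vectors):
    """Turn a basis v1, ..., vm into an orthonormal basis of the same span."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:                # subtract the component parallel to each
            c = dot(u, v)              # preceding unit vector u: (u . v) u
            w = [wi - c * ui for wi, ui in zip(w, u)]
        n = dot(w, w) ** 0.5           # norm of the perpendicular component
        basis.append([wi / n for wi in w])
    return basis
```

For example, gram_schmidt([[3, 0], [1, 1]]) returns the standard basis [1, 0], [0, 1] of R2.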

    Given a vector v and a unit vector u, the component of v perpendicular to u can be obtained as
    follows: v[perpendicular] = v - v[parallel] = v - proj[u](v) = v - (u . v)u

    For any n x n matrix A, the following are either all true or all false:

    A is invertible.

    The linear system Ax = b has a unique solution x for all b in Rn

    rref[A] = In

    rank(A) = n

    im[A] = Rn

    ker[A] = {0} (the zero vector alone)

    The column vectors of A form a basis of Rn

    The column vectors of A span Rn

    The column vectors of A are linearly independent
