
A B - math.utah.edu/~korevaar/2250spring17/feb13.pdf



Fri Feb 10

3.5 Matrix inverses.

We've been talking about matrix algebra: addition, scalar multiplication, multiplication, and how these operations combine. If necessary, finish those notes. Here's part of the last exercise, filled in:

Properties for the algebra of matrix addition and multiplication:

Multiplication is not commutative in general (AB usually does not equal BA, even if you're multiplying square matrices so that at least the product matrices are the same size).

But other properties you're used to do hold:

Addition is commutative: A + B = B + A.
    entry_ij(A + B) = a_ij + b_ij = b_ij + a_ij = entry_ij(B + A).

Addition is associative: (A + B) + C = A + (B + C).
    The ij entry of each side is a_ij + b_ij + c_ij.

Scalar multiplication distributes over addition: c(A + B) = cA + cB.
    The ij entry of the LHS is c(a_ij + b_ij) = c a_ij + c b_ij, the ij entry of the RHS.

Multiplication is associative: (AB)C = A(BC).

Matrix multiplication distributes over addition: A(B + C) = AB + AC.
    The ij entry of the LHS is row_i(A) · col_j(B + C) = row_i(A) · col_j(B) + row_i(A) · col_j(C), which is the ij entry of the RHS.
Similarly, (A + B)C = AC + BC.
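These entry-by-entry identities can be spot-checked numerically. The following is a quick sketch using numpy (my addition; the notes themselves use Maple), with three arbitrarily chosen 2×2 matrices:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [1, 1]])

# The properties listed above, checked entry by entry:
assert np.array_equal(A + B, B + A)                # addition commutes
assert np.array_equal((A + B) + C, A + (B + C))    # addition associates
assert np.array_equal(3 * (A + B), 3 * A + 3 * B)  # scalar mult distributes
assert np.array_equal((A @ B) @ C, A @ (B @ C))    # multiplication associates
assert np.array_equal(A @ (B + C), A @ B + A @ C)  # left distributivity
assert np.array_equal((A + B) @ C, A @ C + B @ C)  # right distributivity

# ...but multiplication does not commute for these particular matrices:
assert not np.array_equal(A @ B, B @ A)
```

Of course, passing checks on one triple of matrices is evidence, not proof; the ij-entry arguments above are what actually establish the identities.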


But I haven't told you what the algebra on the previous page is good for. Today we'll start to find out. By way of comparison, think of a scalar linear equation with known numbers a, b, c, d and an unknown number x,

ax + b = cx + d.

We know how to solve it by collecting terms and doing scalar algebra:

ax − cx = d − b
(a − c)x = d − b    (*)
x = (d − b)/(a − c).

How would you solve such an equation if A, B, C, D were square matrices, and X was a vector (or matrix)? Well, you could use the matrix algebra properties we've been discussing to get to step (*). And then, if X was a vector, you could solve the system (*) with Gaussian elimination. In fact, if X was a matrix, you could solve for each column of X (and do it all at once) with Gaussian elimination.

But you couldn't proceed as with scalars and do the final step after (*), because it is not possible to divide by a matrix. Today we'll talk about a potential shortcut for that last step that is an analog of dividing, in order to solve for X. It involves the concept of inverse matrices.

Step 1: Identity matrices. Recall that the n × n identity matrix I_{n×n} has ones down the diagonal (by which we mean the diagonal from the upper left to the lower right corner) and zeroes elsewhere. For example,

I_{1×1} = [1],   I_{2×2} =
[1 0]
[0 1],   I_{3×3} =
[1 0 0]
[0 1 0]
[0 0 1],   etc.

In other words, entry_ii(I_{n×n}) = 1, and entry_ij(I_{n×n}) = 0 if i ≠ j.

Exercise 1) Check that A_{m×n} I_{n×n} = A and I_{m×m} A_{m×n} = A.

Hint: check that the matrices on each side are the same size, and that each of their ij entries agree.

Remark: That's why these matrices are called identity matrices - they are the matrix version of multiplicative identities, e.g. like multiplying by the number 1 in the real number system.
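Exercise 1 can also be spot-checked numerically; here is a minimal numpy sketch (my addition, not part of the notes) with a 2×3 matrix A:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])            # a 2x3 matrix, m = 2, n = 3
I2 = np.eye(2, dtype=int)            # I_{m x m}
I3 = np.eye(3, dtype=int)            # I_{n x n}

assert np.array_equal(A @ I3, A)     # A_{m x n} I_{n x n} = A
assert np.array_equal(I2 @ A, A)     # I_{m x m} A_{m x n} = A
```

Note that the identity must be sized to match the side of A it multiplies, exactly as in the exercise's hint about sizes.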


Step 2: Matrix inverses. A square matrix A_{n×n} is invertible if there is a matrix B_{n×n} so that

AB = BA = I.

In this case we call B the inverse of A, and write B = A⁻¹.

Remark: A matrix A can have at most one inverse, because if we have two candidates B, C with

AB = BA = I and also AC = CA = I,

then

(BA)C = IC = C
B(AC) = BI = B,

so since the associative property (BA)C = B(AC) is true, it must be that B = C.

Exercise 2a) Verify that for

A = [1 2]
    [3 4]

the inverse matrix is

B = [ −2     1  ]
    [ 3/2  −1/2 ].
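Minus signs and fraction bars are easy to lose in a scan, so here is a quick numpy check (my addition) that this B really satisfies AB = BA = I:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[-2.,   1. ],
              [ 1.5, -0.5]])       # the claimed inverse from Exercise 2a

# Both products must equal the 2x2 identity:
assert np.allclose(A @ B, np.eye(2))
assert np.allclose(B @ A, np.eye(2))
```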

Inverse matrices are very useful in solving algebra problems. For example

Theorem: If A⁻¹ exists, then the only solution to Ax = b is x = A⁻¹b.

Exercise 2b) Use the theorem and the A⁻¹ in 2a to write down the solution to the system

x + 2y = 5
3x + 4y = 6.
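The theorem says the solution is just a matrix-vector product. A numpy sketch of Exercise 2b (my addition; the matrices come from 2a and the system above):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
A_inv = np.array([[-2.,   1. ],
                  [ 1.5, -0.5]])   # A^{-1} from Exercise 2a
b = np.array([5., 6.])

x = A_inv @ b                      # x = A^{-1} b, i.e. x = -4, y = 4.5
assert np.allclose(A @ x, b)       # confirm it solves the system
```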


Exercise 3a) Use matrix algebra to verify why the Theorem on the previous page is true. Notice that the correct formula is x = A⁻¹b and not x = bA⁻¹ (this second product can't even be computed, because the dimensions don't match up!).

3b) Assuming A is a square matrix with an inverse A⁻¹, and that the matrices in the equation below have dimensions which make for a meaningful equation, solve for X in terms of the other matrices:

XA + C = B.
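Reading the garbled equation as XA + C = B (the plus sign appears to have been lost in the scan), the algebra gives XA = B − C and hence X = (B − C)A⁻¹; note A⁻¹ must multiply on the right. A numerical sketch of that solution (my addition, with arbitrarily chosen B and C):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])           # an invertible square matrix
B = np.array([[1., 0.],
              [2., 3.]])
C = np.array([[0., 1.],
              [1., 0.]])

# XA + C = B  =>  XA = B - C  =>  X = (B - C) A^{-1}
X = (B - C) @ np.linalg.inv(A)
assert np.allclose(X @ A + C, B)   # X satisfies the original equation
```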


Step 3: But where did that formula for A⁻¹ come from?

Answer: Consider A⁻¹ as an unknown matrix, A⁻¹ = X. We want

AX = I.

We can break this matrix equation down by the columns of X. In the two-by-two case we get:

A [col_1 X  col_2 X] = [1 0]
                       [0 1].

In other words, the two columns of the inverse matrix X should satisfy

A col_1 X = [1]        A col_2 X = [0]
            [0],                   [1].

We can solve for both of these mystery columns at once, as we've done before when we had different right-hand sides:

Exercise 4: Reduce the doubly augmented matrix

[1 2 | 1 0]
[3 4 | 0 1]

to find the two columns of A⁻¹ for the previous example.
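The same reduction can be scripted. Here is a bare-bones Gauss-Jordan sweep over [A | I] in numpy (my addition; it assumes each pivot is nonzero so no row swaps are needed, which holds for this A):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
n = len(A)
aug = np.hstack([A, np.eye(n)])    # the doubly augmented matrix [A | I]

# Gauss-Jordan: reduce the left block to the identity
for i in range(n):
    aug[i] = aug[i] / aug[i, i]            # scale so the pivot is 1
    for r in range(n):
        if r != i:
            aug[r] = aug[r] - aug[r, i] * aug[i]   # clear column i elsewhere

A_inv = aug[:, n:]                 # the right block is now A^{-1}
assert np.allclose(A @ A_inv, np.eye(n))
```

Running it reproduces the inverse from Exercise 2a, with col_1(A⁻¹) and col_2(A⁻¹) sitting exactly where the columns of I started.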


Exercise 5: Will this always work? Can you find A⁻¹ for

A :=
[1 5 1]
[2 5 0]
[2 7 1] ?



Exercise 6) Will this always work? Try to find B⁻¹ for

B :=
[1 5 5]
[2 5 0]
[2 7 4].

Hint: We'll discover that it's impossible for B to have an inverse.
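The hint can be confirmed numerically. With the matrices as printed above, numpy (my addition) shows A has full rank while B does not, so B cannot reduce to the identity and inversion fails:

```python
import numpy as np

A = np.array([[1., 5., 1.],
              [2., 5., 0.],
              [2., 7., 1.]])
B = np.array([[1., 5., 5.],
              [2., 5., 0.],
              [2., 7., 4.]])

print(np.linalg.matrix_rank(A))    # 3: full rank, so A reduces to I
print(np.linalg.matrix_rank(B))    # 2: B cannot reduce to I

try:
    np.linalg.inv(B)
except np.linalg.LinAlgError as err:
    print("no inverse:", err)      # numpy reports a singular matrix
```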

Exercise 7) What happens when we ask software like Maple for the inverse matrices above?

with(LinearAlgebra):
Matrix([[1, 5, 1], [2, 5, 0], [2, 7, 1]])^(-1);
Matrix([[1, 5, 5], [2, 5, 0], [2, 7, 4]])^(-1);


Theorem: Let A_{n×n} be a square matrix. Then A has an inverse matrix if and only if its reduced row echelon form is the identity. In this case the algorithm illustrated on the previous page will always yield the inverse matrix.

Explanation: By the previous theorem, when A⁻¹ exists, the solutions to linear systems

Ax = b

are unique (x = A⁻¹b). From our discussions on Tuesday and Wednesday, we know that for square matrices, solutions to such linear systems exist and are unique only if the reduced row echelon form of A is the identity. (Do you remember why?) Thus, whenever A⁻¹ exists, A reduces to the identity.

In the case that A does reduce to I, we search for A⁻¹ as the solution matrix X to the matrix equation

AX = I,

i.e.

A [col_1 X  col_2 X  ...  col_n X] = [1 0 ... 0]
                                     [0 1 ... 0]
                                     [      ...]
                                     [0 0 ... 1].

Because A reduces to the identity matrix, we may solve for X column by column as in the examples we just worked, by using a chain of elementary row operations:

[A | I] → ... → [I | B],

and deduce that the columns of X are exactly the columns of B, i.e. X = B. Thus we know that

AB = I.

To realize that BA = I as well, we would try to solve BY = I for Y, and hope Y = A. But we can actually verify this fact by reordering the columns of [I | B] to read [B | I] and then reversing each of the elementary row operations in the first computation, i.e. create the chain

[B | I] → ... → [I | A],

so BA = I also holds. (This is one of those rare times when matrix multiplication actually is commutative.)

To summarize: If A⁻¹ exists, then solutions x to Ax = b always exist and are unique, so the reduced row echelon form of A is the identity. Conversely, if the reduced row echelon form of A is the identity, then A⁻¹ exists. That's exactly what the Theorem claims.


Math 2250-004 Week 6 notes. Midterm exam this Friday Feb 17!

Mon Feb 13
3.5-3.6 Matrix inverses; matrix determinants.

Finish last Friday's notes: how to find matrix inverses when they exist, and how to figure out when they don't exist. This will lead naturally into determinants, section 3.6, in this week's notes.

.....................................................................................................................................................

Determinants are scalars defined for square matrices A_{n×n}, and they always determine whether or not the inverse matrix A⁻¹ exists (i.e. whether the reduced row echelon form of A is the identity matrix). It turns out that the determinant of A is non-zero if and only if A⁻¹ exists. The determinant of a 1×1 matrix [a_11] is defined to be the number a_11; determinants of 2×2 matrices are defined as in Friday's notes; and in general determinants for n×n matrices are defined recursively, in terms of determinants of (n−1)×(n−1) submatrices:

Definition: Let A_{n×n} = [a_ij]. Then the determinant of A, written det A or |A|, is defined by

det A = Σ_{j=1}^{n} a_{1j} (−1)^{1+j} M_{1j} = Σ_{j=1}^{n} a_{1j} C_{1j}.

Here M_{1j} is the determinant of the (n−1)×(n−1) matrix obtained from A by deleting the first row and the jth column, and C_{1j} is simply (−1)^{1+j} M_{1j}.

More generally, the determinant of the (n−1)×(n−1) matrix obtained by deleting row i and column j from A is called the ij minor M_{ij} of A, and C_{ij} := (−1)^{i+j} M_{ij} is called the ij cofactor of A.

Theorem: (proof is in the text appendix) det A can be computed by expanding across any row, say row i:

det A = Σ_{j=1}^{n} a_{ij} (−1)^{i+j} M_{ij} = Σ_{j=1}^{n} a_{ij} C_{ij},

or by expanding down any column, say column j:

det A = Σ_{i=1}^{n} a_{ij} (−1)^{i+j} M_{ij} = Σ_{i=1}^{n} a_{ij} C_{ij}.
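The recursive definition translates directly into code. Here is a plain-Python sketch (my addition), expanding across the first row exactly as in the Definition, tested on the matrices A and B from Exercises 5 and 6:

```python
def det(M):
    """Determinant by cofactor expansion across the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]                                   # det [a11] = a11
    total = 0
    for j in range(n):
        # Minor M_{1j}: delete row 1 and column j (j is 0-based here)
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        # (-1)**j matches the sign (-1)^{1+j} of the 1-based formula
        total += M[0][j] * (-1) ** j * det(minor)
    return total

A = [[1, 5, 1], [2, 5, 0], [2, 7, 1]]
B = [[1, 5, 5], [2, 5, 0], [2, 7, 4]]
print(det(A))   # -1: nonzero, so A is invertible
print(det(B))   # 0: so B has no inverse, matching Exercise 6
```

This recursion is fine for small matrices but does O(n!) work; row reduction is the practical method for large n.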