Subject: Numerical & Statistical Methods for Computer Engineering. Topic: System of Linear Algebraic Equations


Page 1: System of Linear Algebraic Equations (NSM)

Subject: Numerical & Statistical Methods for Computer Engineering.

Topic: System of Linear Algebraic Equations

Page 2: System of Linear Algebraic Equations (NSM)

Serial No   Topic
01          Introduction
02          Solutions to the equations: graphical representation
03          Elementary Transformations
04          Numerical solutions: graphical representation
05          Direct and iterative methods
06          Gauss elimination and methodology
07          Gauss-Jordan and methodology
08          Gauss-Jacobi & Gauss-Seidel
09          Applications

Page 3: System of Linear Algebraic Equations (NSM)

A system of linear algebraic equations is a system of n algebraic equations satisfied by a set of n unknown quantities. The aim is to find the n unknowns that satisfy all n equations.

It is common practice to write the system of n equations in matrix form as Ax = b, where A is an n x n non-singular matrix and x and b are n x 1 column vectors, of which b is known. For small n, elementary methods such as Cramer's rule and matrix inversion are convenient for obtaining the unknown vector x from the system Ax = b. For large n, however, these methods become computationally very expensive because of the matrix determinants they require.
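As a brief illustration (not part of the original slides), the sketch below contrasts Cramer's rule with a library solver on a small 2 x 2 system. It assumes Python with NumPy, and the matrix and right-hand side are made up for the example.

# A minimal sketch (assumes NumPy) contrasting Cramer's rule with a
# library solver on a small 2 x 2 system Ax = b.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

# Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with its
# i-th column replaced by b.
det_A = np.linalg.det(A)
x_cramer = np.empty(2)
for i in range(2):
    A_i = A.copy()
    A_i[:, i] = b
    x_cramer[i] = np.linalg.det(A_i) / det_A

# For larger n, a factorization-based solver is the practical choice.
x_solve = np.linalg.solve(A, b)

print(x_cramer, x_solve)   # both print approximately [0.8, 1.4]

Cramer's rule needs n + 1 determinants of n x n matrices, which is why determinant-based approaches quickly become impractical as n grows.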

Page 5: System of Linear Algebraic Equations (NSM)

Elementary Operations

There are three kinds of elementary matrix operations:
1. Interchange two rows (or columns).
2. Multiply each element in a row (or column) by a non-zero number.
3. Multiply a row (or column) by a non-zero number and add the result to another row (or column).

When these operations are performed on rows, they are called elementary row operations; when they are performed on columns, they are called elementary column operations.
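A small illustrative sketch of these three row operations, assuming Python with NumPy (the example matrix is arbitrary):

# Applying the three elementary row operations to an example matrix.
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# 1) Interchange two rows: swap row 0 and row 1.
M[[0, 1]] = M[[1, 0]]

# 2) Multiply each element of a row by a non-zero number: scale row 2 by 1/7.
M[2] = M[2] / 7.0

# 3) Multiply a row by a non-zero number and add the result to another row:
#    add -4 times row 1 to row 0.
M[0] = M[0] - 4.0 * M[1]

print(M)

The same operations applied to columns (for example M[:, [0, 1]] = M[:, [1, 0]]) are the elementary column operations.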

Page 9: System of Linear Algebraic Equations (NSM)

In linear algebra, Gaussian elimination (also known as row reduction) is an algorithm for solving systems of linear equations. It is usually understood as a sequence of operations performed on the associated matrix of coefficients. The method can also be used to find the rank of a matrix, to calculate the determinant of a matrix, and to compute the inverse of an invertible square matrix. It is named after Carl Friedrich Gauss (1777–1855), although it was known to Chinese mathematicians as early as 179 CE.

Page 10: System of Linear Algebraic Equations (NSM)

To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations: 1) Swapping two rows, 2) Multiplying a row by a non-zero number, 3) Adding a multiple of one row to another row. Using these operations, a matrix can always be transformed into an upper triangular matrix, and in fact one that is in row echelon form. Once all of the leading coefficients (the left-most non-zero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in reduced row echelon form. This final form is unique; in other words, it is independent of the sequence of row operations used.
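The procedure above can be sketched in code. The following is a minimal illustration, assuming Python with NumPy; it uses partial pivoting (row swaps) for numerical stability and finishes with back substitution, and the function name and example system are chosen only for demonstration.

# Gaussian elimination with partial pivoting, then back substitution.
import numpy as np

def gauss_eliminate(A, b):
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)

    # Forward elimination: zero out everything below each pivot.
    for k in range(n - 1):
        # Partial pivoting: bring the largest-magnitude entry into the pivot row.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]

    # Back substitution on the resulting upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_eliminate(A, b))   # approximately [2., 3., -1.]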

Page 11: System of Linear Algebraic Equations (NSM)

The Gauss-Jordan elimination method for solving a system of linear equations proceeds in the following steps:
1. Write the augmented matrix of the system.
2. Use row operations to transform the augmented matrix into the form described below, which is called the reduced row echelon form (RREF).

Page 12: System of Linear Algebraic Equations (NSM)

A matrix is in reduced row echelon form (RREF) when it satisfies the following conditions:
(a) The rows (if any) consisting entirely of zeros are grouped together at the bottom of the matrix.
(b) In each row that does not consist entirely of zeros, the leftmost nonzero element is a 1 (called a leading 1 or a pivot).
(c) Each column that contains a leading 1 has zeros in all other entries.
(d) The leading 1 in any row is to the left of any leading 1's in the rows below it.
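A minimal sketch of Gauss-Jordan elimination on the augmented matrix, assuming Python with NumPy; the function name gauss_jordan and the example system are illustrative only.

# Gauss-Jordan elimination: reduce the augmented matrix [A | b] to RREF,
# so the solution can be read directly from the last column.
import numpy as np

def gauss_jordan(A, b):
    n = len(b)
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    for k in range(n):
        # Partial pivoting, then scale the pivot row so the leading entry is 1.
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        M[k] = M[k] / M[k, k]
        # Eliminate the pivot column in every other row (above and below).
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, -1]          # solution vector x

A = np.array([[1.0, 1.0,  1.0],
              [0.0, 2.0,  5.0],
              [2.0, 5.0, -1.0]])
b = np.array([6.0, -4.0, 27.0])
print(gauss_jordan(A, b))    # approximately [5., 3., -2.]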

Page 14: System of Linear Algebraic Equations (NSM)

Perhaps the simplest iterative method for solving Ax = b is Jacobi's method. The simplicity of this method is both good and bad: good, because it is relatively easy to understand and is therefore a good first taste of iterative methods; bad, because it is not typically used in practice (although its potential usefulness has been reconsidered with the advent of parallel computing). Still, it is a good starting point for learning about more useful, but more complicated, iterative methods.
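A minimal sketch of Jacobi's method, assuming Python with NumPy; the example system is strictly diagonally dominant, one standard condition under which the iteration converges, and the function name and tolerances are illustrative.

# Jacobi's method: each new component x_i is computed from the previous
# iterate only, so all components can be updated independently (in parallel).
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    n = len(b)
    x = np.zeros(n)
    D = np.diag(A)                 # diagonal entries of A
    R = A - np.diagflat(D)         # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# A strictly diagonally dominant example system.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(jacobi(A, b))    # approximately [1., 1., 1.]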

Page 15: System of Linear Algebraic Equations (NSM)

In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero entries on the diagonal, convergence is guaranteed only if the matrix is either strictly diagonally dominant or symmetric and positive definite.
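A minimal sketch of the Gauss–Seidel iteration under the same assumptions as the Jacobi example (Python with NumPy, illustrative system and function name).

# Gauss-Seidel: unlike Jacobi, each updated component is used immediately
# within the same sweep, which typically speeds up convergence.
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated components x[:i] and old components x_old[i+1:].
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Same strictly diagonally dominant system as in the Jacobi example.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(gauss_seidel(A, b))   # approximately [1., 1., 1.]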

Page 16: System of Linear Algebraic Equations (NSM)

The solutions of some linear systems are more sensitive to round-off error than others. For such systems, a small change in one of the values of the coefficient matrix or the right-hand side vector causes a large change in the solution vector; these systems are called ill-conditioned.
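A small demonstration of this sensitivity, assuming Python with NumPy; the nearly singular 2 x 2 matrix below is an illustrative example, not taken from the slides.

# Sensitivity to perturbation: for an ill-conditioned coefficient matrix,
# a tiny change in b produces a large change in the solution.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

x = np.linalg.solve(A, b)                                   # [1., 1.]
x_perturbed = np.linalg.solve(A, b + np.array([0.0, 0.0001]))

print(np.linalg.cond(A))      # condition number around 4e4: ill-conditioned
print(x, x_perturbed)         # the solution jumps from [1, 1] to [0, 2]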

Page 17: System of Linear Algebraic Equations (NSM)

Linear algebra shows up in the theory of many fields of computer science. Statistical learning models frequently rely on matrix algebra and matrix decompositions. Image manipulation relies on vector manipulation and matrix transformations. Anything involving physics uses vector manipulation and differential equations, which require linear algebra to understand properly. To get into the theory of any of these areas, you need to know linear algebra; to read research papers and evaluate cutting-edge algorithms and systems, you need to know a good deal of mathematics.
