# NACP Lab Manual

Numerical analysis and Computer programming lab manual

Department of Computer Science & Engineering

Lab Manual

NUMERICAL ANALYSIS & COMPUTER PROGRAMMING

(UMA-201)

CONTENTS

1. Program to find the roots of a non-linear equation using the Bisection method
2. Program to find the roots of a non-linear equation using Muller's method
3. Program for curve fitting by least-squares approximations
4. Program to solve a system of linear equations using the Gauss-Elimination method
5. Program to solve a system of linear equations using the Gauss-Seidel iteration method
6. Program to solve a system of linear equations using the Gauss-Jordan method
7. Program to solve an integral equation numerically using the Trapezoidal rule
8. Program to solve an integral equation numerically using Simpson's rule
9. Program to find the largest eigenvalue of a matrix by the power method
10. Program to find the numerical solution of ordinary differential equations by Euler's method
11. Program to find the numerical solution of ordinary differential equations by the Runge-Kutta method
12. Program to find the numerical solution of the heat equation
13. Program to find the numerical solution of ordinary differential equations by Milne's method
14. Program to solve a given problem using Newton's forward interpolation formula
15. Program to solve a given problem using Lagrange's interpolation formula

PRACTICAL-1

Aim: Program to find the roots of a non-linear equation using the Bisection method.

Theory: Bisection method

The bisection method is a root-finding algorithm which repeatedly bisects an interval and then selects the subinterval in which a root must lie for further processing. It is a very simple and robust method, but it is also relatively slow.

The method

The bisection method requires two initial points a and b such that f(a) and f(b) have opposite signs. This is called a bracket of a root, for by the intermediate value theorem the continuous function f must have at least one root in the interval (a, b). The method now divides the interval in two by computing the midpoint c = (a+b) / 2 of the interval. Unless c is itself a root--which is very unlikely, but possible--there are now two possibilities: either f(a) and f(c) have opposite signs and bracket a root, or f(c) and f(b) have opposite signs and bracket a root. We select the subinterval that is a bracket, and apply the same bisection step to it. In this way the interval that might contain a zero of f is reduced in width by 50% at each step. We continue until we have a bracket sufficiently small for our purposes.

Explicitly, if f(a) f(c) < 0, then the method sets b equal to c, and if f(b) f(c) < 0, then the method sets a equal to c. In both cases, the new f(a) and f(b) have opposite signs, so the method is applicable to this smaller interval. A practical implementation of this method must guard against the uncommon occurrence that the midpoint is indeed a solution.

Abbreviations :

a, b are the limits in which the root lies.

aerr is the allowed error.

itr is a counter which keeps track of the no. of iterations performed.

maxitr is the maximum no. of iterations to be performed.

x is the value of the root at the nth iteration.

x1 is the value of the root at the (n+1)th iteration.

Func bisect:

Purpose: performs and prints the result of one iteration
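The procedure above can be sketched in a few lines of Python. The equation solved is an assumption: f(x) = x^3 - 4x - 9 is not stated in the manual, but it reproduces the test run below (midpoints 2.50000, 2.75000, ... and root 2.7065 bracketed by the inputs 3 and 2).

```python
def bisect(f, a, b, aerr, maxitr):
    """Bisection: repeatedly halve the bracket [a, b] on which f changes sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for itr in range(1, maxitr + 1):
        x1 = (a + b) / 2.0                  # midpoint of the current bracket
        print("Iteration No. %d x= %.5f" % (itr, x1))
        if f(a) * f(x1) < 0:
            b = x1                          # root lies in [a, x1]
        else:
            a = x1                          # root lies in [x1, b]
        if abs(x1 - x) < aerr:              # successive midpoints agree
            return x1
        x = x1
    return x

# hypothetical example: f(x) = x^3 - 4x - 9 with the test-run inputs
root = bisect(lambda x: x**3 - 4*x - 9, 2.0, 3.0, 0.0001, 20)
```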

Test run :

Input: enter the values of a , b , allowed error , maximum iterations
3 2 .0001 20

Output: Iteration No. 1 x= 2.50000

Iteration No. 2 x= 2.75000

Iteration No. 3 x= 2.62500

Iteration No. 4 x= 2.68750

Iteration No. 5 x= 2.71875

Iteration No. 6 x= 2.70313

Iteration No. 7 x= 2.71094

Iteration No. 8 x= 2.70703

Iteration No. 9 x= 2.70508

Iteration No. 10 x= 2.70605

Iteration No. 11 x= 2.70654

Iteration No. 12 x= 2.70630

Iteration No. 13 x= 2.70642

Iteration No. 14 x= 2.70648

After 14 iterations , root = 2.7065

PRACTICAL-2

Aim: Program to find the roots of a non-linear equation using Muller's method.

Theory: Muller's method

Muller's method is a root-finding algorithm, a numerical method for solving equations of the form f(x) = 0.

Muller's method is based on the secant method, which constructs at every iteration a line through two points on the graph of f. Instead, Muller's method uses three points, constructs the parabola through these three points, and takes the intersection of the x-axis with the parabola to be the next approximation.

The three initial values needed are denoted xk, xk-1 and xk-2. The parabola going through the three points (xk, f(xk)), (xk-1, f(xk-1)) and (xk-2, f(xk-2)), when written in the Newton form, is

y = f(xk) + (x - xk) f[xk, xk-1] + (x - xk)(x - xk-1) f[xk, xk-1, xk-2]

where f[xk, xk-1] and f[xk, xk-1, xk-2] denote divided differences. This can be rewritten as

y = f(xk) + w (x - xk) + f[xk, xk-1, xk-2] (x - xk)²

where

w = f[xk, xk-1] + f[xk, xk-2] - f[xk-1, xk-2]

The next iterate is now given by the root of the quadratic equation y = 0. This yields the recurrence relation

xk+1 = xk - 2 f(xk) / ( w ± √( w² - 4 f(xk) f[xk, xk-1, xk-2] ) )

In this formula, the sign should be chosen such that the denominator is as large as possible in magnitude. We do not use the standard formula for solving quadratic equations because that may lead to loss of significance.

Note that xk+1 can be complex, even if the previous iterates were all real. This is in contrast with other root-finding algorithms like the secant method or Newton's method, whose iterates will remain real if one starts with real numbers. Having complex iterates can be an advantage (if one is looking for complex roots) or a disadvantage (if it is known that all roots are real), depending on the problem.
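A compact Python sketch of this iteration follows. The equation is an assumption: f(x) = cos(x) - x·e^x is not named in the manual, but with the test-run approximations -1, 0, 1 it reproduces the printed iterates (0.44152, ..., 0.5178). cmath is used so the iterates may become complex, as noted above.

```python
import cmath

def muller(f, x0, x1, x2, aerr=1e-4, maxitr=20):
    """Muller's method: step to a root of the parabola through three points."""
    for _ in range(maxitr):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        d01 = (f1 - f0) / (x1 - x0)         # divided difference f[x0, x1]
        d12 = (f2 - f1) / (x2 - x1)         # divided difference f[x1, x2]
        d012 = (d12 - d01) / (x2 - x0)      # divided difference f[x0, x1, x2]
        w = d12 + d012 * (x2 - x1)
        disc = cmath.sqrt(w * w - 4 * f2 * d012)
        # pick the sign that maximizes the denominator (avoids cancellation)
        den = w + disc if abs(w + disc) >= abs(w - disc) else w - disc
        x3 = x2 - 2 * f2 / den
        if abs(x3 - x2) < aerr:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# hypothetical example: f(x) = cos(x) - x*e^x with approximations -1, 0, 1
root = muller(lambda x: cmath.cos(x) - x * cmath.exp(x), -1.0, 0.0, 1.0)
```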

Test run:

Input: enter the initial approximations
-1 0 1

Enter allowed error , maximum iterations
.0001 10

Output: iteration no. 1 , x = 0.44152

iteration no. 2 , x = 0.51225

iteration no. 3 , x = 0.51769

iteration no. 4 , x = 0.51776

after 4th iteration , the solution is 0.5178

PRACTICAL-3

Aim: Program for curve fitting by least-squares approximations.

Theory: Least squares

The method of least squares is a standard approach to the approximate solution of overdetermined systems, i.e. sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in solving every single equation.

The most important application is in data fitting. The best fit in the least-squares sense minimizes the sum of squared residuals, a residual being the difference between an observed value and the value provided by a model.

Least squares problems fall into two categories, linear least squares and nonlinear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed form solution. The non-linear problem has no closed solution and is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, thus the core calculation is similar in both cases.

The minimum of the sum of squares S = Σ ri² is found by setting the gradient to zero. Since the model contains m parameters β1, ..., βm, there are m gradient equations:

∂S/∂βj = 2 Σi ri ∂ri/∂βj = 0,   j = 1, ..., m

and since ri = yi - f(xi, β), the gradient equations become

Σi ri ∂f(xi, β)/∂βj = 0,   j = 1, ..., m

The gradient equations apply to all least squares problems. Each particular problem requires particular expressions for the model and its partial derivatives.
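For the quadratic model y = a + b·x + c·x² used in the test run below, the gradient equations reduce to the 3x3 normal equations in the power sums of x. A minimal Python sketch (the data pairs are the ones from the test run):

```python
def lsq_quadratic(xs, ys):
    """Fit y = a + b*x + c*x^2 by solving the normal equations."""
    n = len(xs)
    sx  = sum(xs)
    sx2 = sum(x**2 for x in xs)
    sx3 = sum(x**3 for x in xs)
    sx4 = sum(x**4 for x in xs)
    sy  = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sx2y = sum(x * x * y for x, y in zip(xs, ys))
    m = [[n,   sx,  sx2, sy],               # augmented matrix of the
         [sx,  sx2, sx3, sxy],              # three normal equations
         [sx2, sx3, sx4, sx2y]]
    for k in range(3):                      # forward elimination
        for i in range(k + 1, 3):
            r = m[i][k] / m[k][k]
            for j in range(k, 4):
                m[i][j] -= r * m[k][j]
    coef = [0.0, 0.0, 0.0]                  # back substitution
    for i in (2, 1, 0):
        coef[i] = (m[i][3] - sum(m[i][j] * coef[j] for j in range(i + 1, 3))) / m[i][i]
    return coef

xs = [1, 1.5, 2, 2.5, 3.0, 3.5, 4]
ys = [1.1, 1.3, 1.6, 2, 2.7, 3.4, 4.1]
a, b, c = lsq_quadratic(xs, ys)
```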

Test run:

Input: enter the no. of pairs of observed values.

7

Pair no. 1

1 1.1

Pair no. 2

1.5 1.3

Pair no. 3

2 1.6

Pair no. 4

2.5 2

Pair no. 5

3.0 2.7

Pair no. 6

3.5 3.4

Pair no. 7

4 4.1

Output: the augmented matrix is:-

 7.0000   17.5000    50.7500  |  16.2000
17.5000   50.7500   161.8750  |  47.6500
50.7500  161.8750   548.1875  | 154.4750

a= 1.0357   b= -0.1929   c= 0.2429

PRACTICAL-4

Aim: Program to solve a system of linear equations using the Gauss-Elimination method.

Theory: Gauss-Elimination method

Gaussian elimination is a method of solving a linear system of n equations in n unknowns by bringing the augmented matrix [A | b] to an upper triangular form [U | c]. This elimination process is also called the forward elimination method; the unknowns are then found from the triangular system by back substitution.
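A minimal Python sketch of forward elimination followed by back substitution; the augmented matrix is the one from the test run below and, like the manual's printed output, no partial pivoting is performed:

```python
def gauss_eliminate(aug):
    """Reduce [A | b] to upper triangular form, then back-substitute."""
    n = len(aug)
    for k in range(n - 1):                  # forward elimination
        for i in range(k + 1, n):
            r = aug[i][k] / aug[k][k]
            for j in range(k, n + 1):
                aug[i][j] -= r * aug[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        x[i] = (aug[i][n] - sum(aug[i][j] * x[j] for j in range(i + 1, n))) / aug[i][i]
    return x

# augmented matrix from the test run below
aug = [[10, -7,  3,  5,  6],
       [-6,  8, -1, -4,  5],
       [ 3,  1,  4, 11,  2],
       [ 5, -9, -2,  4,  7]]
x = gauss_eliminate([row[:] for row in aug])
```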

Test run:

Input: enter the elements of augmented matrix rowwise.

10  -7   3   5   6
-6   8  -1  -4   5
 3   1   4  11   2
 5  -9  -2   4   7

The upper triangular matrix is:-

10.0000  -7.0000   3.0000   5.0000  |  6.0000
 0.0000   3.8000   0.8000  -1.0000  |  8.6000
 0.0000   0.0000   2.4474  10.3158  | -6.8158
 0.0000   0.0000   0.0000   9.9247  |  9.9247

Output: the solution is:-

X [1] =5.0000

X [2] =4.0000

X [3] =-7.0000

X [4] =1.0000

PRACTICAL-5

Aim: Program to solve a system of linear equations using the Gauss-Seidel iteration method.

Theory: Gauss-Seidel method

The Gauss-Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations.

Given a square system of n linear equations, Ax = b, with unknown x, the matrix A can be decomposed into a lower triangular component L* and a strictly upper triangular component U:

A = L* + U

The system of linear equations may then be rewritten as:

L* x = b - U x

The Gauss-Seidel method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as:

x(k+1) = (L*)^-1 ( b - U x(k) )

However, by taking advantage of the triangular form of L*, the elements of x(k+1) can be computed sequentially using forward substitution:

xi(k+1) = ( bi - Σ(j<i) aij xj(k+1) - Σ(j>i) aij xj(k) ) / aii ,   i = 1, 2, ..., n
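The forward-substitution update above translates directly to Python. The manual gives no test run for this practical, so the diagonally dominant system below is a hypothetical example chosen to guarantee convergence:

```python
def gauss_seidel(a, b, aerr=1e-6, maxitr=100):
    """Gauss-Seidel iteration: each new x_i uses the already-updated components."""
    n = len(a)
    x = [0.0] * n
    for _ in range(maxitr):
        diff = 0.0
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / a[i][i]      # uses x_j^(k+1) for j < i
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < aerr:
            break
    return x

# hypothetical diagonally dominant system (not from the manual)
a = [[4, 1, 1],
     [1, 5, 2],
     [1, 2, 6]]
b = [6, 8, 9]
x = gauss_seidel(a, b)
```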

PRACTICAL-6

Aim: Program to solve a system of linear equations using the Gauss-Jordan method.

Theory:

Gauss-Jordan elimination is a version of Gaussian elimination that puts zeros both above and below each pivot element as it goes from the top row of the given matrix to the bottom. If Gauss-Jordan elimination is applied to a square matrix, it can be used to calculate the matrix's inverse. This can be done by augmenting the square matrix with the identity matrix of the same dimensions and performing elementary row operations:

If the original square matrix is A, then after augmenting by the identity the matrix [A | I] is obtained.

By performing elementary row operations on the [A | I] matrix until the left half reaches reduced row echelon form, the final result is [I | A^-1].

The matrix augmentation can now be undone, which gives the inverse A^-1.

A matrix is non-singular (meaning that it has an inverse matrix) if and only if the identity matrix can be obtained from it using only elementary row operations.
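A minimal Python sketch: each pivot row is scaled to make the pivot 1, and the pivot column is cleared both above and below, so [A | b] ends as [I | x]. The matrix is the test-run system (the same one as in Practical 4); no pivoting is performed:

```python
def gauss_jordan(aug):
    """Reduce [A | b] until A becomes the identity; the last column is then x."""
    n = len(aug)
    for k in range(n):
        p = aug[k][k]
        for j in range(k, n + 1):
            aug[k][j] /= p                  # scale the pivot row
        for i in range(n):
            if i != k:
                r = aug[i][k]
                for j in range(k, n + 1):
                    aug[i][j] -= r * aug[k][j]   # clear above and below the pivot
    return [row[n] for row in aug]

aug = [[10.0, -7.0,  3.0,  5.0,  6.0],
       [-6.0,  8.0, -1.0, -4.0,  5.0],
       [ 3.0,  1.0,  4.0, 11.0,  2.0],
       [ 5.0, -9.0, -2.0,  4.0,  7.0]]
x = gauss_jordan(aug)
```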

Test run:

Input: enter the elements of augmented matrix rowwise.

10  -7   3   5   6
-6   8  -1  -4   5
 3   1   4  11   2
 5  -9  -2   4   7

The upper triangular matrix is:-

10.0000  -7.0000   3.0000   5.0000  |  6.0000
 0.0000   3.8000   0.8000  -1.0000  |  8.6000
 0.0000   0.0000   2.4474  10.3158  | -6.8158
 0.0000   0.0000   0.0000   9.9247  |  9.9247

Output: the solution is:-

X [1] =5.0000

X [2] =4.0000

X [3] =-7.0000

X [4] =1.0000

PRACTICAL-7

Aim: Program to solve an integral equation numerically using the Trapezoidal rule.

Theory: Trapezoidal rule


In mathematics, the trapezium rule (also known as the trapezoid rule, or the trapezoidal rule in American English) is an approximate technique for calculating the definite integral

∫[a,b] f(x) dx

The trapezoidal rule works by approximating the region under the graph of the function f(x) as a trapezoid and calculating its area. It follows that

∫[a,b] f(x) dx ≈ (b - a) · ( f(a) + f(b) ) / 2

To calculate this integral more accurately, one first splits the interval of integration [a, b] into N smaller, uniform subintervals, and then applies the trapezoidal rule on each of them. One obtains the composite trapezoidal rule:

∫[a,b] f(x) dx ≈ (h/2) [ f(x0) + 2 f(x1) + 2 f(x2) + ... + 2 f(x(N-1)) + f(xN) ]

where h = (b - a)/N and xi = a + i·h.

This formula is not valid for a non-uniform grid; however, the composite trapezoidal rule can be used with variable trapezium widths.

Abbreviations:

Y(x) is the function to be integrated.
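A minimal Python sketch of the composite rule. The integrand is an assumption: the manual does not state y(x), but y(x) = 1/(1 + x²) reproduces both this test run (1.4108) and the Simpson result of Practical 8 (1.3662):

```python
def trapezoidal(y, x0, xn, n):
    """Composite trapezoidal rule with n uniform subintervals of width h."""
    h = (xn - x0) / n
    s = y(x0) + y(xn)                       # end ordinates, counted once
    for i in range(1, n):
        s += 2 * y(x0 + i * h)              # interior ordinates, counted twice
    return (h / 2) * s

# hypothetical integrand y(x) = 1/(1 + x^2) with the test-run inputs
val = trapezoidal(lambda x: 1.0 / (1 + x * x), 0.0, 6.0, 6)
```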

Test run :

Input: enter xo , xn , no. of subintervals
0 6 6

Output: value of integral is 1.4108

PRACTICAL-8

Aim: Program to solve an integral equation numerically using Simpson's rule.

Theory: Simpson's rule

Simpson's rule finds the area under the parabola which passes through 3 points (the endpoints and the midpoint) on a curve. In essence, the rule approximates the curve by a series of parabolic arcs and the area under the parabolas is approximately the area under the curve. There is a unique curve with the equation

y = ax² + bx + c

passing through the points (-x, y0), (0, y1), and (x, y2). There is a unique solution for a, b, and c generated by the three equations:

y0 = a(-x)² + b(-x) + c

y1 = c

y2 = a(x)² + b(x) + c

The area under the curve from -x to x is

∫[-x,x] (a t² + b t + c) dt = (2 a x³)/3 + 2 c x = (x/3) (2 a x² + 6 c)

but the part in the brackets can be rewritten as y0 + 4 y1 + y2, and so

area = (x/3) (y0 + 4 y1 + y2)

For the adjoining parabola, y2 is a collocation point; it is evaluated twice. The number of collocation points is one less than the number of parabolas. The series of coefficients for the yi's for N points then is

i        0  1  2  3  4  5  6  7  ...  N-3  N-2  N-1
coeff.   1  4  2  4  2  4  2  4  ...   2    4    1

Abbreviations:

Y(x) is the function to be integrated, so that yi = y(xi) = y(x0 + i*h).
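A minimal Python sketch of the composite rule with the 1, 4, 2, 4, ..., 4, 1 coefficients. As in Practical 7, the integrand y(x) = 1/(1 + x²) is an inference from the test-run output, not stated in the manual:

```python
def simpson(y, x0, xn, n):
    """Composite Simpson's 1/3 rule; n (number of subintervals) must be even."""
    if n % 2:
        raise ValueError("the number of subintervals must be even")
    h = (xn - x0) / n
    s = y(x0) + y(xn)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * y(x0 + i * h)   # coefficients 4, 2, 4, 2, ...
    return (h / 3) * s

# hypothetical integrand y(x) = 1/(1 + x^2) with the test-run inputs
val = simpson(lambda x: 1.0 / (1 + x * x), 0.0, 6.0, 6)
```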

Test run :

Input: enter xo , xn , no. of subintervals
0 6 6

Output: value of integral is 1.3662

PRACTICAL-9

Aim: Program to find the largest eigenvalue of a matrix by the power method.

Theory: Power method

The power iteration is an eigenvalue algorithm: given a matrix A, the algorithm will produce a number λ (the eigenvalue) and a nonzero vector v (the eigenvector), such that Av = λv.

The power iteration is a very simple algorithm. It does not compute a matrix decomposition, and hence it can be used when A is a very large sparse matrix. It will find only one eigenvalue (the one with the greatest absolute value), and it may converge only slowly.

The power iteration algorithm starts with a vector b0, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the iteration

bk+1 = A bk / ‖ A bk ‖

So, at every iteration, the vector bk is multiplied by the matrix A and normalized.

Under the assumptions:

A has an eigenvalue that is strictly greater in magnitude than its other eigenvalues

The starting vector b0 has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue.

then:

A subsequence of (bk) converges to an eigenvector associated with the dominant eigenvalue.

The sequence (bk) does not necessarily converge. It can be shown that

bk = e^(i φk) v1 + rk

where v1 is an eigenvector associated with the dominant eigenvalue and ‖rk‖ → 0. The presence of the term e^(i φk) implies that (bk) does not converge unless e^(i φk) = 1. Under the two assumptions listed above, the sequence (μk) defined by

μk = ( bk* A bk ) / ( bk* bk )

converges to the dominant eigenvalue.
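The normalize-and-multiply loop can be sketched as follows; the 3x3 matrix is a hypothetical example (the manual provides no test run here) whose dominant eigenvalue is 4, with eigenvector proportional to (1, 2, 1):

```python
def power_method(a, maxitr=100, tol=1e-8):
    """Power iteration: bk+1 = A bk, normalized by its largest component."""
    n = len(a)
    b = [1.0] * n                           # starting vector b0
    lam = 0.0
    for _ in range(maxitr):
        c = [sum(a[i][j] * b[j] for j in range(n)) for i in range(n)]  # c = A b
        new_lam = max(c, key=abs)           # normalizing factor -> eigenvalue
        b = [ci / new_lam for ci in c]
        if abs(new_lam - lam) < tol:
            lam = new_lam
            break
        lam = new_lam
    return lam, b

# hypothetical symmetric matrix with eigenvalues 1, 2 and 4
a = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 2]]
lam, v = power_method(a)
```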

PRACTICAL-10

Aim: Program to find the numerical solution of ordinary differential equations by Euler's method.

Theory: Euler's method

The Euler method, named after Leonhard Euler, is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic kind of explicit method for numerical integration of ordinary differential equations.

Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation. Here, a differential equation can be thought of as a formula by which the slope of the tangent line to the curve can be computed at any point on the curve, once the position of that point has been calculated.

The idea is that while the curve is initially unknown, its starting point, which we denote by A0, is known. Then, from the differential equation, the slope to the curve at A0 can be computed, and so, the tangent line.

Take a small step along that tangent line up to a point A1. If we pretend that A1 is still on the curve, the same reasoning as for the point A0 above can be used. After several steps, a polygonal curve is computed. In general, this curve does not diverge too far from the original unknown curve, and the error between the two curves can be made small if the step size is small enough and the interval of computation is finite.

Derivation

We want to approximate the solution of the initial value problem

y'(t) = f(t, y(t)),   y(t0) = y0

by using the first two terms of the Taylor expansion of y, which represents the linear approximation around the point (t0, y(t0)). One step of the Euler method from tn to tn+1 = tn + h is

yn+1 = yn + h f(tn, yn)

The Euler method is explicit, i.e. the solution yn+1 is an explicit function of yi for i ≤ n.

While the Euler method integrates a first-order ODE, any ODE of order N can be represented as a first-order ODE in more than one variable by introducing N-1 further variables, y', y'', ..., y^(N-1), and formulating N first-order equations in these new variables. The Euler method can then be applied to the resulting vector to integrate the higher-order system.

Abbreviations:

df(x,y) is dy/dx .
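The method is a one-line update, y <- y + h·df(x, y). The derivative is an assumption: df(x, y) = x + y is not stated in the manual, but it reproduces every line of the test run below:

```python
def euler(df, x0, y0, h, xn):
    """Euler's method: advance along the tangent line in steps of size h."""
    x, y = x0, y0
    while x < xn - 1e-9:                    # 1e-9 guards against float drift
        y += h * df(x, y)
        x += h
        print("when x = %.1f y = %.2f" % (x, y))
    return y

# hypothetical derivative df(x, y) = x + y with the test-run inputs
y_end = euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 1.0)
```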


Test run :

Input: enter the values of xo , y0 , h , x
0 1 .1 1

Output: when x = 0.1 y = 1.10
when x = 0.2 y = 1.22

when x = 0.3 y = 1.36

when x = 0.4 y = 1.53

when x = 0.5 y = 1.72

when x = 0.6 y = 1.94

when x = 0.7 y = 2.20

when x = 0.8 y = 2.49

when x = 0.9 y = 2.82

when x = 1.0 y = 3.19

PRACTICAL-11

Aim: Program to find the numerical solution of ordinary differential equations by the Runge-Kutta method.

Theory: Runge-Kutta methods

In numerical analysis, the Runge-Kutta methods are an important family of implicit and explicit iterative methods for the approximation of solutions of ordinary differential equations.

Let an initial value problem be specified as follows:

y' = f(t, y),   y(t0) = y0

Then, the RK4 method for this problem is given by the following equations:

yn+1 = yn + (h/6) (k1 + 2 k2 + 2 k3 + k4)
tn+1 = tn + h

where yn+1 is the RK4 approximation of y(tn+1), and

k1 = f(tn, yn)
k2 = f(tn + h/2, yn + (h/2) k1)
k3 = f(tn + h/2, yn + (h/2) k2)
k4 = f(tn + h, yn + h k3)

Thus, the next value yn+1 is determined by the present value yn plus the product of the size of the interval (h) and an estimated slope. The slope is a weighted average of slopes:

k1 is the slope at the beginning of the interval;

k2 is the slope at the midpoint of the interval, using slope k1 to determine the value of y at the point tn + h / 2 using Euler's method;

k3 is again the slope at the midpoint, but now using the slope k2 to determine the y-value;

k4 is the slope at the end of the interval, with its y-value determined using k3.

In averaging the four slopes, greater weight is given to the slopes at the midpoint:

slope = (k1 + 2 k2 + 2 k3 + k4) / 6

The RK4 method is a fourth-order method, meaning that the error per step is on the order of h^5, while the total accumulated error has order h^4.

Abbreviations:

x0 is the starting value of x.

xn is the value of x for which y is to be determined.
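The four-slope update can be sketched directly. The derivative is an assumption: f(x, y) = x + y² is not stated in the manual, but it reproduces both test-run values (1.1165 and 1.2736):

```python
def rk4(f, x0, y0, h, xn):
    """Classical fourth-order Runge-Kutta method from x0 to xn with step h."""
    x, y = x0, y0
    while x < xn - 1e-9:
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6   # weighted average of the slopes
        x += h
        print("when x = %.4f y= %.4f" % (x, y))
    return y

# hypothetical derivative f(x, y) = x + y^2 with the test-run inputs
y_end = rk4(lambda x, y: x + y * y, 0.0, 1.0, 0.1, 0.2)
```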


Test run:

Input: enter the values of xo , y0 , h , xn
0.0 1.0 0.1 0.2

Output: when x = 0.1000 y= 1.1165
when x = 0.2000 y= 1.2736

PRACTICAL-12

Aim: Program to find the numerical solution of the heat equation.

Theory:-

Heat equation

The heat equation is an important partial differential equation which describes the distribution of heat (or variation in temperature) in a given region over time. For a function u(x,y,z,t) of three spatial variables (x,y,z) and the time variable t, the heat equation is

∂u/∂t = α ( ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² )

also written

u_t = α Δu

or sometimes

u_t = α ∇²u

where α is a constant and Δ (or ∇²) denotes the Laplacian operator. For the mathematical treatment it is sufficient to consider the case α = 1.

The heat equation is of fundamental importance in diverse scientific fields. In mathematics, it is the prototypical parabolic partial differential equation. In probability theory, the heat equation is connected with the study of Brownian motion via the Fokker-Planck equation. The diffusion equation, a more general version of the heat equation, arises in connection with the study of chemical diffusion and other related processes.

Abbreviations:

XEND is the ending value of x

TEND is the ending value of t

h is the spacing in values of x

k is the spacing in values of t

f(x) is value of u(x,0)

csqr is value of c2

ust is the value in the first column
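With alpha = c²·k/h² = 1/2 (the value printed in the test run), the explicit scheme reduces to the Bender-Schmidt recurrence u_new[i] = (u[i-1] + u[i+1]) / 2. The initial profile is an assumption: u(x, 0) = x(8 - x)/2 is not stated, but it reproduces the first output row 0.0000, 3.5000, 6.0000, 7.5000, 8.0000, ...:

```python
def bender_schmidt(f, xend, h, steps, u0=0.0, uend=0.0):
    """Explicit scheme for u_t = c^2 u_xx with alpha = c^2*k/h^2 = 1/2,
    for which the update is u_new[i] = (u[i-1] + u[i+1]) / 2."""
    n = int(round(xend / h))
    u = [f(i * h) for i in range(n + 1)]    # row j = 0: the initial profile
    u[0], u[n] = u0, uend                   # boundary values u(0,t), u(xend,t)
    rows = [u[:]]
    for _ in range(steps):                  # each step advances t by k
        u = [u0] + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, n)] + [uend]
        rows.append(u[:])
    return rows

# hypothetical initial profile u(x, 0) = x*(8 - x)/2 on [0, 8] with h = 1
rows = bender_schmidt(lambda x: x * (8 - x) / 2, 8.0, 1.0, 5)
```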

Test run :

Input: enter the square of c 4

Enter value of u(0 , t) 0

Enter value of u(8 , t) 0

The value of alpha is 0.50

Output: The values of u(i , j) are

0.0000  3.5000  6.0000  7.5000  8.0000  7.5000
0.0000  3.0000  5.5000  7.0000  7.5000  7.0000
0.0000  2.7500  5.0000  6.5000  7.0000  6.5000
0.0000  2.5000  4.6250  6.0000  6.5000  6.0000
0.0000  2.3125  4.2500  5.5625  6.0000  5.5625
0.0000  2.1250  3.9375  5.1250  5.5625  5.1250

PRACTICAL-13

Aim: Program to find the numerical solution of ordinary differential equations by Milne's method.

Theory: Milne's method

Milne's method is a multistep predictor-corrector method for the initial value problem y' = f(x, y), y(x0) = y0. Four equally spaced starting values y0, y1, y2, y3 are required (obtained, for example, by a Runge-Kutta method). Writing fi = f(xi, yi), the predictor

y(n+1) = y(n-3) + (4h/3) ( 2 f(n-2) - f(n-1) + 2 f(n) )

gives a first estimate of y at x(n+1) = x(n) + h, which is then refined by repeatedly applying the corrector

y(n+1) = y(n-1) + (h/3) ( f(n-1) + 4 f(n) + f(n+1) )

until two successive corrected values agree to within the allowed error.

Abbreviations:

X is an array such that X[i] represents x(n + i).

Y is an array such that Y[i] represents y(n + i).

xr is the last value of x at which value of y is required.

h is the spacing in value of x.

aerr is the allowed error in value of y.

yc is the latest corrected value for y.

f is the function which returns the value of dy/dx.
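A minimal Python sketch of the predictor-corrector loop. The derivative is an assumption: f(x, y) = x - y² is not stated in the manual, but with the starting values of the test run it reproduces the tabulated numbers (predicted 0.3049, corrected 0.3046, ...):

```python
def milne(f, xs, ys, h, xr, aerr=1e-4):
    """Milne's method; xs, ys hold the four starting values x0..x3, y0..y3."""
    xs, ys = list(xs), list(ys)
    fs = [f(xi, yi) for xi, yi in zip(xs, ys)]
    while xs[3] < xr - 1e-9:
        x4 = xs[3] + h
        yp = ys[0] + (4 * h / 3) * (2 * fs[1] - fs[2] + 2 * fs[3])  # predictor
        yc = yp
        for _ in range(50):                 # corrector, repeated until it settles
            yn = ys[2] + (h / 3) * (fs[2] + 4 * fs[3] + f(x4, yc))
            if abs(yn - yc) < aerr:
                yc = yn
                break
            yc = yn
        xs, ys = xs[1:] + [x4], ys[1:] + [yc]   # slide the four-value window
        fs = fs[1:] + [f(x4, yc)]
    return ys[3]

# hypothetical derivative f(x, y) = x - y^2 with the test-run starting values
y_end = milne(lambda x, y: x - y * y,
              [0.0, 0.2, 0.4, 0.6], [0.0, 0.02, 0.0795, 0.1762], 0.2, 1.0)
```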

Test run:

Input: enter the values of xo , xr , h , allowed error
0 1.2 0.2 0.0001

Enter values of y[i] ; i = 0 , 3
0 0.02 0.0795 0.1762

Output:

x     predicted y   f        corrected y   f
0.80  0.3049        0.7070   0.3046        0.7072
                             0.3046        0.7072
1.00  0.4554        0.7926   0.4556        0.7925
                             0.4556        0.7925

PRACTICAL-14

Aim: Program to solve a given problem using Newton's forward interpolation formula.

Theory:

A Newton polynomial is the interpolation polynomial for a given set of data points in the Newton form. The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using divided differences.

For any given set of data points, there is only one polynomial (of least possible degree) that passes through all of them. Thus, it is more appropriate to speak of "the Newton form of the interpolation polynomial" rather than of "the Newton interpolation polynomial".

Given a set of k+1 data points

(x0, y0), (x1, y1), ..., (xk, yk)

where no two xj are the same, the interpolation polynomial in the Newton form is a linear combination of Newton basis polynomials,

N(x) = a0 n0(x) + a1 n1(x) + ... + ak nk(x)

with the Newton basis polynomials defined as

nj(x) = (x - x0)(x - x1) ... (x - x(j-1)),   n0(x) = 1

and the coefficients defined as

aj = [y0, ..., yj]

where [y0, ..., yj] is the notation for divided differences.

Thus the Newton polynomial can be written as

N(x) = [y0] + [y0, y1](x - x0) + ... + [y0, ..., yk](x - x0)(x - x1) ... (x - x(k-1))

The Newton polynomial above takes a simplified form when x0, x1, ..., xk are arranged consecutively with equal spacing. Introducing the notation h = x(i+1) - xi for each i and s = (x - x0)/h, the difference x - xi can be written as (s - i)h, and the polynomial becomes

N(x) = y0 + s Δy0 + s(s-1)/2! Δ²y0 + ... + s(s-1)...(s-k+1)/k! Δᵏy0

This is called the Newton forward difference formula.
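The forward difference table and formula above can be sketched as follows; the data are the pairs from the test run below:

```python
def newton_forward(xs, ys, x):
    """Newton's forward interpolation for equally spaced xs."""
    n = len(xs)
    diff = [list(ys)]                       # build the forward difference table
    for _ in range(1, n):
        prev = diff[-1]
        diff.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    h = xs[1] - xs[0]
    s = (x - xs[0]) / h
    term, y = 1.0, ys[0]
    for k in range(1, n):
        term *= (s - (k - 1)) / k           # s(s-1)...(s-k+1) / k!
        y += term * diff[k][0]              # k-th forward difference of y0
    return y

xs = [100, 150, 200, 250, 300, 350, 400]
ys = [10.63, 13.03, 15.04, 16.81, 18.42, 19.90, 21.27]
y = newton_forward(xs, ys, 218)
```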

Test run :

Input: enter the value of n

6

Enter the set of values

100 10.63
150 13.03
200 15.04
250 16.81
300 18.42
350 19.90
400 21.27

Enter the value of x for which value of y is wanted
218

Output: x= 218 y= 15.70

PRACTICAL-15

Aim: Program to solve a given problem using Lagrange's interpolation formula.

Theory: Lagrange's interpolation

Lagrange's interpolation method is a simple and clever way of finding the unique Lth-order polynomial that exactly passes through L+1 distinct samples of a signal. Once the polynomial is known, its value can easily be interpolated at any point using the polynomial equation. Lagrange interpolation is a well known, classical technique for interpolation. The term polynomial interpolation normally refers to Lagrange interpolation. In the first-order case, it reduces to linear interpolation.

Given a set of N+1 known samples f(xk), k = 0, 1, 2, ..., N, the problem is to find the unique Nth-order polynomial y(x) which interpolates the samples. The solution can be expressed as a linear combination of elementary Nth-order polynomials:

y(x) = Σ(k=0..N) f(xk) lk(x)

where

lk(x) = Π(j=0..N, j≠k) (x - xj) / (xk - xj)

From the numerator of the above definition, we see that lk(x) is an Nth-order polynomial having zeros at all of the samples except the kth. The denominator is simply the constant which normalizes its value to 1 at x = xk. Thus, we have

lk(xk) = 1   and   lk(xj) = 0 for j ≠ k

In other words, the polynomial lk is the kth basis polynomial for constructing a polynomial interpolation of order N over the N+1 sample points xk. It is an Nth-order polynomial having zeros at all of the samples except the kth, where it is 1.

Abbreviations:

MAX is the maximum value of n.

ax is an array containing values of x(x0,x1,..,xn)

ay is an array containing values of y(y0,y1,..,yn)
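The basis-polynomial formula translates directly to Python; the data are the pairs from the test run below:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    n = len(xs)
    for k in range(n):
        lk = 1.0
        for j in range(n):
            if j != k:
                lk *= (x - xs[j]) / (xs[k] - xs[j])   # k-th basis polynomial
        total += ys[k] * lk
    return total

xs = [5, 7, 11, 13, 17]
ys = [150, 392, 1452, 2366, 5202]
y = lagrange(xs, ys, 9)
```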


Test run :

Input: enter the value of n
4

Enter the set of values
5 150
7 392
11 1452
13 2366
17 5202

Enter the value of x for which value of y is wanted

9

Output: x= 9.0 y= 810.0

10.6 Muller's Method of Finding Roots of a Polynomial

Introduction

Muller's method is a technique for finding the root of a scalar-valued function f(x) of a single variable x when no information about the derivative exists. It is a generalization of the secant method, but instead of using two points, it uses three points and finds an interpolating quadratic polynomial.

This method is better suited to finding the roots of polynomials, and therefore we will focus on this particular application of Muller's method.

Background

Useful background for this topic includes:

3. Iteration, 5. Interpolation, 5.1 The Vandermonde Method

References

Mathews, Section 2.5, Muller's Method, p. 92.

Weisstein, http://mathworld.wolfram.com/MullersMethod.html.

Theory

Shifting

Given a function f(x) of a single variable, the modified function f(x + T) shifts the function to the left by T. (This will be used extensively in your course on linear systems and signals.) For example, if you examine function f(x) in Figure 1, you will note that the interesting behaviour around the point x = 3 is shifted to the origin by evaluating f(x + 3).

Figure 1. Shifting a function f(x) to the left.

Muller's Method

Given a function p(x), suppose we have three approximations of a root, x1, x2, and x3. Using the Vandermonde method, we can easily find the interpolating quadratic polynomial ax² + bx + c by solving:

[ x1²  x1  1 ] [ a ]   [ p(x1) ]
[ x2²  x2  1 ] [ b ] = [ p(x2) ]
[ x3²  x3  1 ] [ c ]   [ p(x3) ]

We can then let x4 be a root of this interpolating quadratic polynomial, and this point should be a better approximation of the root than any of x1, x2, or x3. Unfortunately, we run into two problems:

1. Which of the two roots do we choose (the larger or the smaller), and

2. Which formula do we use to find the root.

You will recall from the example of the quadratic equation where the different forms of the quadratic formula may result in numerically inaccurate values.

For example, consider the points function f(x) (in red) and the interpolating polynomial (in blue) shown in Figure 2.

Figure 2. A function p(x) (red), three points, and an interpolating quadratic polynomial (blue).

It appears that the interpolating polynomial is concave up, and therefore we want the larger root. This is made clearer in Figure 3 which plots just the interpolating polynomial.

Figure 3. The three points and the interpolating quadratic polynomial.

However, it is apparent that with a slightly different function p(x), we might instead want the smaller root.

To remedy this situation, consider plotting the function p(x + x2), in this example, p(x + 1.81). Because 1.81 is a good approximation to the root, the root of p(x + x2) is now near the origin. This is shown in Figure 4.

Figure 4. The three points and the interpolating quadratic polynomial.

The interpolating quadratic function will now, similarly have a root near the origin, as is shown in Figure 5.

Figure 5. The polynomial interpolating the shifted function.

To emphasize, a plot of the interpolating polynomial indicates quite clearly that we are now interested in the root with the smaller absolute value, as shown in Figure 6.

Figure 6. The interpolating polynomial.

Thus, if the interpolating polynomial is ax² + bx + c, we must use the formula

x = -2c / ( b ± √(b² - 4ac) )

Note, this gives us the root of the shifted function p(x + 1.81). If we want the root of the quadratic function in Figure 2, we must add 1.81 back onto the value of this root.

When the coefficient b and the discriminant b² - 4ac are real, with b² - 4ac ≥ 0, then to maximize the denominator we need only choose the sign equal to the sign of b, that is, use

x = -2c / ( b + sgn(b) √(b² - 4ac) )

Suppose, however, that these are not real. In this case, we must be more careful. Let b = br + j bj and let the square root of the discriminant be represented by d = dr + j dj. Then the squared norms of b ± d are

|b + d|² = br² + 2 br dr + dr² + bj² + 2 bj dj + dj²
|b - d|² = br² - 2 br dr + dr² + bj² - 2 bj dj + dj²

Thus, to maximize this, we must choose whichever sign makes br dr + bj dj positive. In the case where this sum is zero, then either:

We are at a root, or

One of b and d is real, the other imaginary. In this case, choose whichever makes d negative.

The second case ensures that we are always finding a root with a positive imaginary part. There is no mathematical benefit to this, it is simply a choice.

HOWTO

Problem

Given a polynomial of one variable, p(x), find a value r (called a root) such that p(r)=0.

Assumptions

This method will work for non-polynomial functions, but it is more appropriate for finding the roots of polynomials due to its ability to jump from real to complex iterates.

Tools

We will use sampling, quadratic interpolation, and iteration.

Initial Requirements

We have three initial approximations x0, x1, and x2 of the root. It would be useful if |f(x0)| > |f(x1)| > |f(x2)|, that is, the points are in descending absolute value when evaluated by f.

Iteration Process

Given three approximations xn-2, xn-1, and xn, we can find an interpolating quadratic polynomial which passes through the points:

( xn-2 - xn-1, f(xn-2) ),   ( 0, f(xn-1) ),   ( xn - xn-1, f(xn) )

In this case, it is easiest (and numerically stable) to use the Vandermonde method to find the interpolating quadratic polynomial ax² + bx + c:

Having found the coefficients of the interpolating polynomial, we may now choose to find the root. There are two formulae for finding the roots of a quadratic polynomial: one with the radical in the numerator,

x = ( -b ± √(b² - 4ac) ) / (2a)

and the other with the radical in the denominator:

x = -2c / ( b ± √(b² - 4ac) )

Because we are assuming that the three points xn-2, xn-1, and xn are good approximations to the root, it follows that we want to find the root closest to these points, that is, the smallest root of the polynomial we found. The second formula is more numerically stable for finding the smallest root of a quadratic polynomial, and therefore we set:

xn+1 = xn-1 - 2c / ( b + sgn(b) √(b² - 4ac) )

where the sign in the denominator is equal to the sign of b.

Halting Conditions

There are three conditions which may cause the iteration process to halt (these are the same as the halting conditions for Newton's method):

1. We halt if both of the following conditions are met:

The step between successive iterates is sufficiently small, |xn+1 - xn| < εstep, and the value of the function at the new iterate is sufficiently small, |f(xn+1)| < εabs. In this case we accept xn+1 as our approximation of the root.

2. If the denominator in the update formula is zero, we halt, since no new iterate can be computed.

3. If we exceed the maximum number of iterations, we halt and signal that the method did not converge.

Example in Matlab

The following session applies six iterations of Muller's method to a polynomial p (its definition is not included in this excerpt), starting from the three approximations x = 0, -0.1, -0.2:

>> % 1st iteration ---------------------------------------

>> M = [(x(1) - x(2))^2, x(1) - x(2), 1

0, 0, 1

(x(3) - x(2))^2, x(3) - x(2), 1]

M =

0.01000 0.10000 1.00000

0.00000 0.00000 1.00000

0.01000 -0.10000 1.00000

>> y = polyval( p, x )

y =

5.00000

4.51503

4.03954

>> c = M \ y

c =

0.47367

4.80230

4.51503

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'

x =

-0.10000

-0.20000

-1.14864

>> % 2nd iteration ---------------------------------------

>> M = [(x(1) - x(2))^2, x(1) - x(2), 1

0, 0, 1

(x(3) - x(2))^2, x(3) - x(2), 1]

M =

0.01000 0.10000 1.00000

0.00000 0.00000 1.00000

0.89992 -0.94864 1.00000

>> y = polyval( p, x )

y =

4.5150

4.0395

-13.6858

>> c = M \ y

c =

-13.2838

6.0833

4.0395

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'

x =

-0.20000

-1.14864

-0.56812

>> % 3rd iteration ---------------------------------------

>> M = [(x(1) - x(2))^2, x(1) - x(2), 1

0, 0, 1

(x(3) - x(2))^2, x(3) - x(2), 1]

M =

0.89992 0.94864 1.00000

0.00000 0.00000 1.00000

0.33701 0.58052 1.00000

>> y = polyval( p, x )

y =

4.0395

-13.6858

1.6597

>> c = M \ y

c =

-21.0503

38.6541

-13.6858

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'

x =

-1.14864

-0.56812

-0.66963

>> % 4th iteration ---------------------------------------

>> M = [(x(1) - x(2))^2, x(1) - x(2), 1

0, 0, 1

(x(3) - x(2))^2, x(3) - x(2), 1]

M =

0.33701 -0.58052 1.00000

0.00000 0.00000 1.00000

0.01030 -0.10151 1.00000

>> y = polyval( p, x )

y =

-13.6858

1.6597

0.5160

>> c = M \ y

c =

-31.6627

8.0531

1.6597

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'

x =

-0.56812

-0.66963

-0.70285

>> % 5th iteration ---------------------------------------

>> M = [(x(1) - x(2))^2, x(1) - x(2), 1

0, 0, 1

(x(3) - x(2))^2, x(3) - x(2), 1]

M =

0.01030 0.10151 1.00000

0.00000 0.00000 1.00000

0.00110 -0.03322 1.00000

>> y = polyval( p, x )

y =

1.65973

0.51602

0.05802

>> c = M \ y

c =

-18.6991

13.1653

0.5160

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'

x =

-0.66963

-0.70285

-0.70686

>> % 6th iteration ---------------------------------------

>> M = [(x(1) - x(2))^2, x(1) - x(2), 1

0, 0, 1

(x(3) - x(2))^2, x(3) - x(2), 1]

M =

0.00110 0.03322 1.00000

0.00000 0.00000 1.00000

0.00002 -0.00401 1.00000

>> y = polyval( p, x )

y =

0.51602

0.05802

-0.00046

>> c = M \ y

c =

-21.8018

14.5107

0.0580

>> x = [x(2), x(3), x(2) - 2*c(3)/(c(2) + sqrt(c(2)^2 - 4*c(1)*c(3)))]'

x =

-0.70285

-0.70686

-0.70683

The full list of iterates produced by Matlab is:

0.000000000000000

-0.100000000000000

-0.200000000000000

-1.148643697414111

-0.568122032631211

-0.669630566165950

-0.702851144883234

-0.706857484921269

-0.706825973130949

-0.706825980788168

-0.706825980788170

Note how convergence speeds up once none of the first three initial approximations is still being used to calculate the next iterate.
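The superlinear convergence can be checked numerically from the iterates listed above. The short Python calculation below (a sketch; the final listed iterate serves as the root) estimates the observed order, which for Müller's method is known to approach roughly 1.84:

```python
import math

# Iterates copied from the run above; the final iterate serves as the root.
xs = [-1.148643697414111, -0.568122032631211, -0.669630566165950,
      -0.702851144883234, -0.706857484921269]
root = -0.706825980788170

errs = [abs(x - root) for x in xs]
# Observed order estimate: log(e_{k+1}/e_k) / log(e_k/e_{k-1})
orders = [math.log(errs[k + 1] / errs[k]) / math.log(errs[k] / errs[k - 1])
          for k in range(1, len(errs) - 1)]
print(orders)  # estimates rise above 1, indicating superlinear convergence
```

The early estimates are noisy, but all exceed 1 and climb toward the theoretical order.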

Questions

Question 1

Using Matlab, perform four iterations of finding a root of the polynomial p(x) = x^3 + 3x^2 + 5x - 7 starting with the points x0 = 1, x1 = 2, x2 = 3.

Question 2

Using Matlab, perform four iterations of finding a root of the polynomial p(x) = x^4 + 3x^3 + 5x - 7 starting with the points x0 = 1, x1 = 2, x2 = 3.

Question 3

Given the polynomial p = [2 3 5 2 1], the roots are approximately

-0.55786+1.21699j, -0.55786-1.21699j, -0.19214+0.49199j, -0.19214-0.49199j

First use deconv to divide out the first root, and then use it again on the answer to divide out the second root. Compare this to the answer when you divide out the product of the two roots with deconv( p, [1 1.11572 1.79227] ). (We get the second argument from the formula (x - z)(x - z*) = x^2 - 2 Re(z) x + |z|^2.)

Applications to Engineering

As mentioned in the engineering application of polynomials, finding the roots of a polynomial is necessary when determining the behaviour and stability of a linear system.
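Question 3 above can also be checked outside Matlab. In Python, numpy.polydiv plays the role of deconv; this sketch divides out the quadratic factor built from the first conjugate pair of roots:

```python
import numpy as np

p = [2, 3, 5, 2, 1]
# (x - z)(x - z*) = x^2 - 2*Re(z)*x + |z|^2 for z = -0.55786 + 1.21699j
q, r = np.polydiv(p, [1, 1.11572, 1.79227])
print(q)            # quotient, approximately [2, 0.7686, 0.5580]
print(np.roots(q))  # approximately -0.19214 +/- 0.49199j
print(np.abs(r))    # remainder is nearly zero
```

The roots of the quotient match the remaining conjugate pair, confirming the division.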

Matlab

Implementing Müller's method in Matlab is not that difficult:

eps_step = 1e-5;
eps_abs = 1e-5;
p = [1 2 3 4 2];
x = [0 -0.1 -0.2]';
y = polyval( p, x );

while ( true )
    % fit the quadratic c(1)*h^2 + c(2)*h + c(3) centred at x(2)
    V = vander( x - x(2) );
    c = V \ y;
    disc = sqrt( c(2)^2 - 4*c(1)*c(3) );

    % choose the larger-magnitude denominator (the sign of c(2))
    if abs( c(2) + disc ) > abs( c(2) - disc )
        denom = c(2) + disc;
    else
        denom = c(2) - disc;
    end

    % next iterate: the root of the quadratic closest to x(2)
    x = [x(2), x(3), x(2) - 2*c(3)/denom]';
    y = [y(2), y(3), polyval( p, x(3) )]';

    % halt when both the step and |f| are sufficiently small
    if ( abs( x(2) - x(3) ) < eps_step && abs( y(3) ) < eps_abs )
        break;
    end
end

x(3)
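For readers without Matlab, the loop above translates almost line for line into Python with numpy. This is a sketch, with an iteration cap added as a safeguard; for this polynomial the iterates pass through complex values before settling on the real double root x = -1:

```python
import numpy as np

eps_step = 1e-5
eps_abs = 1e-5
p = [1, 2, 3, 4, 2]   # x^4 + 2x^3 + 3x^2 + 4x + 2 = (x + 1)^2 (x^2 + 2)
x = np.array([0.0, -0.1, -0.2])
y = np.polyval(p, x)

for _ in range(100):  # cap the iterations (halting condition 3)
    V = np.vander(x - x[1], 3)   # like vander( x - x(2) ) in Matlab
    a, b, c = np.linalg.solve(V, y)
    disc = np.sqrt(complex(b * b - 4 * a * c))
    # pick the larger-magnitude denominator, as in the Matlab code
    denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
    x = np.array([x[1], x[2], x[1] - 2 * c / denom])
    y = np.array([y[1], y[2], np.polyval(p, x[2])])
    if abs(x[1] - x[2]) < eps_step and abs(y[2]) < eps_abs:
        break

print(x[2])  # approximately -1
```

Note that the arrays switch to complex dtype as soon as the discriminant goes negative, exactly as Matlab does silently.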

Maple

Implementing Müller's method in Maple is not that difficult:

To be completed...

> eps_step := 1e-5;
> eps_abs := 1e-5;
> p := x -> x^4 + 2*x^3 + 3*x^2 + 4*x - 7;
> x := ;
> y := map( p, x );
> for i from 1 to 100 do
>     V := vander( x - x[2] );
>     c := V \ y;
>     x :=

[Flowchart of the bisection program: Start; define function f(x); define function bisect; initialize itr; call function bisect with x, a, b, itr; call function Bisect with x1, a, b, r, itr; test whether f(a)*f(x) < 0]
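The flowchart steps can be sketched in Python as follows. The example function and interval are hypothetical; any continuous f with a sign change on [a, b] works:

```python
def f(x):
    # hypothetical example function with a root between 1 and 2
    return x ** 3 - x - 2

def bisect(f, a, b, tol=1e-6, max_itr=100):
    """Repeatedly halve [a, b], keeping the half where f changes sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = (a + b) / 2.0
    for itr in range(max_itr):
        x = (a + b) / 2.0
        if f(a) * f(x) < 0:   # the root lies in [a, x]
            b = x
        else:                 # the root lies in [x, b]
            a = x
        if b - a < tol:
            break
    return x
```

Here bisect(f, 1, 2) returns roughly 1.52138, the real root of x^3 - x - 2, with the interval width halving on every pass exactly as described in the theory section.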