
Cal Poly Pomona (tknguyen/egr511/Notes/Ch2-2.doc)


Chapter 2

Iterative Methods for Solving Sets of Equations

2.2 The Successive-Over-Relaxation (SOR) Method

The SOR method is very similar to the Gauss-Seidel method except that it applies a scaling factor to the correction at each iteration to speed up convergence. Consider the following set of equations

Σ_{j=1}^{n} a_ij x_j = b_i ,  i = 1, 2, …, n

For the Gauss-Seidel method, the values at the k-th iteration are given by

x_i^(k) = (1/a_ii) [ b_i - Σ_{j=1}^{i-1} a_ij x_j^(k) - Σ_{j=i+1}^{n} a_ij x_j^(k-1) ]

It should be noted that for the calculation of x_i^(k), the variables with index less than i are at the (k) iteration while the variables with index greater than i are still at the previous (k-1) iteration. The equation for the SOR method is given as

x_i^(k) = x_i^(k-1) + (ω/a_ii) [ b_i - Σ_{j=1}^{i-1} a_ij x_j^(k) - Σ_{j=i}^{n} a_ij x_j^(k-1) ]

The term in the bracket is just the difference between the variables of the present and previous iterations for the Gauss-Seidel method

x_i^(k) - x_i^(k-1) = [ x_i^(k) - x_i^(k-1) ]_Gauss-Seidel

This difference is essentially the error for the iteration, since at convergence this difference must approach zero. The SOR method obtains the new estimated value by multiplying this difference by a scaling factor ω and adding it to the previous value. The SOR equation can also be written in the following form

x_i^(k) = (1 - ω) x_i^(k-1) + (ω/a_ii) [ b_i - Σ_{j=1}^{i-1} a_ij x_j^(k) - Σ_{j=i+1}^{n} a_ij x_j^(k-1) ]

When ω = 1 the above equation is the formula for the Gauss-Seidel method; when ω < 1 it is the under-relaxation method, and when ω > 1 it is the over-relaxation method. We use the SOR method to solve the set of equations presented in Example 2.1-1.
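The SOR update above can be sketched as a short program. The following is a minimal illustration in Python (the chapter's own listings use Matlab); the function name `sor` and the 3x3 test system are hypothetical, not from the text:

```python
import numpy as np

def sor(A, b, omega, x0=None, tol=1e-8, max_iter=500):
    """Solve A x = b by SOR sweeps; omega = 1 reduces to Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for k in range(1, max_iter + 1):
        max_corr = 0.0
        for i in range(n):
            # Gauss-Seidel estimate: updated x[j] for j < i,
            # previous-iteration x[j] for j > i
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            gs = (b[i] - sigma) / A[i, i]
            corr = gs - x[i]          # Gauss-Seidel correction
            x[i] += omega * corr      # scaled by the relaxation factor omega
            max_corr = max(max_corr, abs(corr))
        if max_corr < tol:
            return x, k
    return x, max_iter

# Hypothetical diagonally dominant test system (not from the text)
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x, iters = sor(A, b, omega=1.1)   # exact solution is [1, 2, 3]
```

Setting omega=1.0 in the same call reproduces the plain Gauss-Seidel iterates.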


T1 = (T2 + T3 + 500 + 500)/4

T2 = (T1 + T4 + T1 + 500)/4

T3 = (T1 + T4 + T5 + 500)/4

T4 = (T2 + T3 + T6 + T3)/4

T5 = (T3 + T6 + T7 + 500)/4

T6 = (T4 + T5 + T8 + T5)/4

T7 = (2T5 + T8 + 2000)/9

T8 = (2T6 + 2T7 + 1500)/9
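As a cross-check, the eight update equations can also be iterated in a few lines of Python (a sketch assuming the node-7 and node-8 divisors of 9 used in the Matlab program of Table 2.2-1; ω = 1.1 is an arbitrary choice, and the expected temperatures are those printed in the text's output):

```python
import numpy as np

# SOR sweeps for the eight temperature equations of this example.
omega = 1.1
T = 400.0 * np.ones(8)                     # initial guesses
for it in range(1, 201):
    err = np.zeros(8)
    err[0] = 0.25*(T[1] + T[2] + 1000) - T[0];   T[0] += omega*err[0]
    err[1] = 0.25*(2*T[0] + T[3] + 500) - T[1];  T[1] += omega*err[1]
    err[2] = 0.25*(T[0] + T[3] + T[4] + 500) - T[2]; T[2] += omega*err[2]
    err[3] = 0.25*(T[1] + 2*T[2] + T[5]) - T[3]; T[3] += omega*err[3]
    err[4] = 0.25*(T[2] + T[5] + T[6] + 500) - T[4]; T[4] += omega*err[4]
    err[5] = 0.25*(T[3] + 2*T[4] + T[7]) - T[5]; T[5] += omega*err[5]
    err[6] = (2*T[4] + T[7] + 2000)/9 - T[6];    T[6] += omega*err[6]
    err[7] = (2*T[5] + 2*T[6] + 1500)/9 - T[7];  T[7] += omega*err[7]
    if np.max(np.abs(err)) < 1e-8:
        break
# T now holds the converged temperatures
```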

Table 2.2-1 lists the Matlab program and some results using SOR iteration with various values of the scaling factor ω.

Table 2.2-1
________________________
%
scale=0.2:.1:1.8;
n=length(scale); iter=scale; errorT=ones(8,1);
for k=1:n
  omega=scale(k);
  % Initial guesses for the temperatures
  T=400*ones(8,1);
  for i=1:100
    errorT(1)=.25*(T(2)+T(3)+1000)-T(1); T(1)=T(1)+omega*errorT(1);
    errorT(2)=.25*(2*T(1)+T(4)+500)-T(2); T(2)=T(2)+omega*errorT(2);
    errorT(3)=.25*(T(1)+T(4)+T(5)+500)-T(3); T(3)=T(3)+omega*errorT(3);
    errorT(4)=.25*(T(2)+2*T(3)+T(6))-T(4); T(4)=T(4)+omega*errorT(4);
    errorT(5)=.25*(T(3)+T(6)+T(7)+500)-T(5); T(5)=T(5)+omega*errorT(5);
    errorT(6)=.25*(T(4)+2*T(5)+T(8))-T(6); T(6)=T(6)+omega*errorT(6);
    errorT(7)=(2*T(5)+T(8)+2000)/9-T(7); T(7)=T(7)+omega*errorT(7);
    errorT(8)=(2*T(6)+2*T(7)+1500)/9-T(8); T(8)=T(8)+omega*errorT(8);
    eT=abs(errorT);
    if max(eT)<.01, break, end
  end
  iter(k)=i;
  fprintf('scale factor = %g, # of iteration = %g\n',omega,i)
  fprintf('T = %6.2f %6.2f %6.2f %6.2f %6.2f %6.2f %6.2f %6.2f\n',T)
end
plot(scale,iter)
xlabel('Scale factor'); ylabel('Number of iterations'); grid on

>> sor


scale factor = 0.9, # of iteration = 17
T = 489.30 485.15 472.06 462.00 436.94 418.73 356.99 339.05
scale factor = 1, # of iteration = 13
T = 489.30 485.15 472.06 462.00 436.95 418.73 356.99 339.05
scale factor = 1.1, # of iteration = 7
T = 489.30 485.15 472.06 462.01 436.95 418.74 356.99 339.05
scale factor = 1.2, # of iteration = 9

Figure 2.2-1 shows the number of iterations required for convergence as a function of the scaling factor ω. There is a minimum in the number of iterations near ω = 1.1. Normally the value of the scaling factor for minimum iterations is between 1 and 2, and this value cannot be determined beforehand except for some special cases. The under-relaxation method (ω < 1) always requires more iterations than the Gauss-Seidel method. However, under-relaxation is sometimes used to slow the convergence if a scaling factor ω ≥ 1 leads to divergence.

[Plot: Number of iterations (0 to 100) versus scale factor (0.2 to 1.8)]

Figure 2.2-1 The variation of the number of iterations with the scaling factor ω.


2.3 Newton’s Method for Systems of Nonlinear Algebraic Equations

Consider two equations f1(x1, x2) = 0 and f2(x1, x2) = 0 for which the roots are desired. Let x1^(0), x2^(0) be the guessed values for the roots. f1(x1, x2) and f2(x1, x2) can be expanded about the point (x1^(0), x2^(0)) to obtain

f1(x1, x2) = f1(x1^(0), x2^(0)) + (∂f1/∂x1)(x1 - x1^(0)) + (∂f1/∂x2)(x2 - x2^(0)) = 0

f2(x1, x2) = f2(x1^(0), x2^(0)) + (∂f2/∂x1)(x1 - x1^(0)) + (∂f2/∂x2)(x2 - x2^(0)) = 0

where the partial derivatives are evaluated at (x1^(0), x2^(0)).

Let y1^(0) = (x1 - x1^(0)) and y2^(0) = (x2 - x2^(0)); the above set can be written in the matrix form

[ ∂f1/∂x1  ∂f1/∂x2 ] [ y1^(0) ]     [ f1(x1^(0), x2^(0)) ]
[ ∂f2/∂x1  ∂f2/∂x2 ] [ y2^(0) ] = - [ f2(x1^(0), x2^(0)) ]

or

J(p^(0)) y^(0) = -F(p^(0))

In general, the superscript (0) can be replaced by (k-1)

J(p^(k-1)) y^(k-1) = -F(p^(k-1))

J(p^(k-1)) is the Jacobian matrix of the system. The new guessed values x at iteration k are given by

x = p^(k) = p^(k-1) + y^(k-1)
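One Newton step therefore solves a linear system for y and updates p. A minimal Python sketch of this iteration (the two-equation test system and the name `newton_system` are hypothetical, for illustration only; the chapter's programs are in Matlab):

```python
import numpy as np

def newton_system(F, J, p, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0: solve J(p) y = -F(p), then p <- p + y."""
    p = np.array(p, dtype=float)
    for _ in range(max_iter):
        y = np.linalg.solve(J(p), -F(p))   # J(p^(k-1)) y^(k-1) = -F(p^(k-1))
        p = p + y                          # p^(k) = p^(k-1) + y^(k-1)
        if np.max(np.abs(y)) < tol:
            break
    return p

# Hypothetical test system (not from the text):
#   x1^2 + x2^2 = 4,  x1*x2 = 1
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0]*x[1] - 1.0])
J = lambda x: np.array([[2*x[0], 2*x[1]],
                        [x[1],   x[0]]])
root = newton_system(F, J, [2.0, 0.5])
```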

Example 2.3-1

Use Newton’s method with the initial guess x = [0.1 0.1 -0.1] to obtain the solutions to the following equations²

f1(x1, x2, x3) = 3x1 - cos(x2 x3) - 1/2 = 0

f2(x1, x2, x3) = x1^2 - 81(x2 + 0.1)^2 + sin x3 + 1.06 = 0

f3(x1, x2, x3) = e^(-x1 x2) + 20x3 + (10π - 3)/3 = 0

Solution

The following formulas can be applied to obtain the roots

² Numerical Analysis by Burden and Faires


J(p^(k-1)) y^(k-1) = -F(p^(k-1))

J(p^(k-1)) is the Jacobian matrix of the system.

J(p^(k-1)) =

[ 3                 x3 sin(x2 x3)     x2 sin(x2 x3) ]
[ 2x1               -162(x2 + 0.1)    cos x3        ]
[ -x2 e^(-x1 x2)    -x1 e^(-x1 x2)    20            ]

F(p^(k-1)) is the column vector of the given functions

F(p^(k-1)) =

[ 3x1 - cos(x2 x3) - 1/2                ]
[ x1^2 - 81(x2 + 0.1)^2 + sin x3 + 1.06 ]
[ e^(-x1 x2) + 20x3 + (10π - 3)/3       ]

The new guessed values x at iteration k are given by

x = p^(k) = p^(k-1) + y^(k-1)
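The same iteration can be sketched in Python (assumed equivalent to the Matlab listing in Table 2.3-1; the expected converged values are those shown in the text's output table):

```python
import numpy as np

def F(x):
    x1, x2, x3 = x
    return np.array([
        3*x1 - np.cos(x2*x3) - 0.5,
        x1**2 - 81*(x2 + 0.1)**2 + np.sin(x3) + 1.06,
        np.exp(-x1*x2) + 20*x3 + (10*np.pi - 3)/3,
    ])

def J(x):
    x1, x2, x3 = x
    return np.array([
        [3.0,                  x3*np.sin(x2*x3),     x2*np.sin(x2*x3)],
        [2*x1,                 -162*(x2 + 0.1),      np.cos(x3)],
        [-x2*np.exp(-x1*x2),   -x1*np.exp(-x1*x2),   20.0],
    ])

p = np.array([0.1, 0.1, -0.1])        # initial guess from the example
for k in range(1, 11):
    y = np.linalg.solve(J(p), -F(p))  # J(p^(k-1)) y^(k-1) = -F(p^(k-1))
    p = p + y                         # p^(k) = p^(k-1) + y^(k-1)
    if np.max(np.abs(y)) < 1e-5:
        break
# p approaches [0.5, 0, -pi/6], matching the iteration table in the text
```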

Table 2.3-1 lists the Matlab program to evaluate the roots from the given initial guesses.

Table 2.3-1 Matlab program for Example 2.3-1 (Ex2d3d1)
-------------
% Newton Method
%
f1='3*x(1)-cos(x(2)*x(3))-.5';
f2='x(1)*x(1)-81*(x(2)+.1)^2+sin(x(3))+1.06';
f3='exp(-x(1)*x(2))+20*x(3)+10*pi/3-1';
% Initial guess
%
x=[0.1 0.1 -0.1];
for i=1:5
  f=[eval(f1) eval(f2) eval(f3)];
  Jt=[3 2*x(1) -x(2)*exp(-x(1)*x(2))
      x(3)*sin(x(2)*x(3)) -162*(x(2)+.1) -x(1)*exp(-x(1)*x(2))
      x(2)*sin(x(2)*x(3)) cos(x(3)) 20]';
  %
  dx=Jt\f';
  x=x-dx';
  fprintf('x = '); disp(x)
end

Matlab can also evaluate the Jacobian matrix of the system analytically, as shown in Table 2.3-2.


Table 2.3-2 Matlab program for Example 2.3-1
-------------
% Newton Method with Jacobian matrix evaluated analytically by Matlab
%
syms x1 x2 x3
F=[3*x1-cos(x2*x3)-.5
   x1^2-81*(x2+.1)^2+sin(x3)+1.06
   exp(-x1*x2)+20*x3+(10*pi-3)/3];
Jac=[diff(F,x1) diff(F,x2) diff(F,x3)];
x1=.1; x2=.1; x3=-.1;
k=0;
disp(' k      x1         x2         x3')
fprintf('%3.0f %10.7f %10.7f %10.7f\n',k,x1,x2,x3)
for k=1:10
  Am=eval(Jac);
  Bc=eval(F);
  yk=Am\Bc;
  x1=x1-yk(1); x2=x2-yk(2); x3=x3-yk(3);
  fprintf('%3.0f %10.7f %10.7f %10.7f\n',k,x1,x2,x3)
  if max(abs(yk))<.00001, break, end
end

The printouts from Matlab are

Jac =
[ 3,               sin(x2*x3)*x3,    sin(x2*x3)*x2]
[ 2*x1,            -162*x2-81/5,     cos(x3)]
[ -x2*exp(-x1*x2), -x1*exp(-x1*x2),  20]

  k      x1         x2         x3
  0  0.1000000  0.1000000 -0.1000000
  1  0.4998697  0.0194668 -0.5215205
  2  0.5000142  0.0015886 -0.5235570
  3  0.5000001  0.0000124 -0.5235985
  4  0.5000000  0.0000000 -0.5235988
  5  0.5000000  0.0000000 -0.5235988
