TMS-301 NUMERICAL METHODS Lecture 4: Newton-Raphson, Secant method, etc
3 Newton-Raphson method
3.1 Iterations
The Newton-Raphson method uses the slope (tangent) of the function $f(x)$ at the current iterative solution $x_i$ to find the solution $x_{i+1}$ in the next iteration. The slope at $(x_i, f(x_i))$ is given by

$$f'(x_i) = \frac{f(x_i) - 0}{x_i - x_{i+1}}$$
Then $x_{i+1}$ can be solved as

$$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}$$

which is known as the Newton-Raphson formula.
Relative error: $\varepsilon_a = \left|\frac{x_{i+1} - x_i}{x_{i+1}}\right| \times 100\%$.

Iterations stop if $\varepsilon_a \le \varepsilon_{\text{threshold}}$.
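The formula and stopping rule above can be sketched in Python. This is a minimal illustration, not code from the lecture; the function `newton_raphson` and the demo function $f(x) = x^2 - 2$ are our own choices.

```python
def newton_raphson(f, df, x0, eps_threshold=1e-8, max_iter=50):
    """Newton-Raphson iteration with the percentage relative-error stopping rule."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)                   # Newton-Raphson formula
        eps_a = abs((x_new - x) / x_new) * 100.0   # relative error in percent
        x = x_new
        if eps_a <= eps_threshold:
            break
    return x

# Demo: root of f(x) = x^2 - 2, i.e., sqrt(2)
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Note that the derivative must be supplied explicitly; this is the requirement the Secant method later removes.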
Figure 5: Newton-Raphson method to find the roots of an equation
Example: Find the root of $e^{x} - 3x = 0$.

Solution:

$$f(x) = e^{x} - 3x$$
$$f'(x) = e^{x} - 3$$

With these, the Newton-Raphson solution can be updated as

$$x_{i+1} = x_i - \frac{e^{x_i} - 3x_i}{e^{x_i} - 3}$$

$$-1 \to 0.2795 \to 0.5680 \to 0.6172 \to 0.6191 \to 0.6191$$

This converges much faster than the bisection method.
[Plot: Newton-Raphson Method (from $x_0 = -1$); $f(x)$ vs. $x$]
3.2 Errors and termination condition
The approximate percentage relative error is

$$\varepsilon_a = \left|\frac{x_{i+1} - x_i}{x_{i+1}}\right| \times 100\%$$
The Newton-Raphson iteration can be terminated when $\varepsilon_a$ is less than a certain threshold (e.g., 1%).
3.3 Error analysis of Newton-Raphson method using Taylor series
Let $x_t$ be the true solution to $f(x) = 0$. That is,

$$f(x_t) = 0$$

According to Taylor's theorem, we have

$$f(x_t) = f(x_i) + f'(x_i)(x_t - x_i) + \frac{f''(\alpha)}{2}(x_t - x_i)^2$$

where $\alpha$ is an unknown value between $x_t$ and $x_i$. Since $f(x_t) = 0$, the above equation becomes

$$f(x_i) + f'(x_i)(x_t - x_i) + \frac{f''(\alpha)}{2}(x_t - x_i)^2 = 0 \quad (1)$$
In the Newton-Raphson method, we use the following iteration:

$$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}$$

That is,

$$f(x_i) + f'(x_i)(x_{i+1} - x_i) = 0 \quad (2)$$
Subtracting (2) from (1), we have

$$f'(x_i)(x_t - x_{i+1}) + \frac{f''(\alpha)}{2}(x_t - x_i)^2 = 0$$
Denoting $e_i = x_t - x_i$, which is the error in the $i$-th iteration, we have

$$f'(x_i)\,e_{i+1} + \frac{f''(\alpha)}{2}\,e_i^2 = 0$$
With convergence, $x_i \to x_t$ and $\alpha \to x_t$. Then

$$e_{i+1} = -\frac{f''(x_t)}{2f'(x_t)}\,e_i^2$$

and

$$|e_{i+1}| = \left|\frac{f''(x_t)}{2f'(x_t)}\right| e_i^2$$

$$|e_{i+1}| \propto |e_i|^2$$
The error in the current iteration is proportional to the square of the previous error. That is, we have quadratic convergence with the Newton-Raphson method: the number of correct decimal places in a Newton-Raphson solution roughly doubles after each iteration.
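The quadratic error relation can be observed numerically. The sketch below (our own illustration; $f(x) = x^2 - 2$ is an assumed test function, not from the notes) tracks $e_i = |x_t - x_i|$ and checks the ratio $|e_{i+1}|/|e_i|^2$:

```python
import math

# Track the error e_i = |x_t - x_i| for f(x) = x^2 - 2, whose true root is sqrt(2).
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
xt = math.sqrt(2.0)

x, errors = 1.0, []
for _ in range(4):
    x = x - f(x) / df(x)
    errors.append(abs(xt - x))

# Quadratic convergence: |e_{i+1}| / |e_i|^2 approaches |f''(xt) / (2 f'(xt))|,
# which here is 2 / (2 * 2 * sqrt(2)) = 1 / (2 * sqrt(2)) ~= 0.3536.
ratio = errors[3] / errors[2] ** 2
```

After only four iterations the error is already near machine precision, which is why the loop stops there.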
Drawbacks of the Newton-Raphson method: (see Fig. 6.6 in [Chapra])
• It converges only slowly for repeated roots.
• The solution may diverge near a point of inflection.
• The solution may oscillate near local minima or maxima.
• With near-zero slope, the solution may diverge or reach a different root.
Example: Find the roots of the function $f(x) = x^3 - 2x^2 + 0.25x + 0.75$.

Solution: To find the exact roots of $f(x)$, we first factorize $f(x)$ as

$$f(x) = x^3 - 2x^2 + 0.25x + 0.75 = (x-1)(x^2 - x - 0.75) = (x-1)(x-1.5)(x+0.5)$$

Thus, $x = 1$, $x = 1.5$ and $x = -0.5$ are the exact roots of $f(x)$. To find the roots using the Newton-Raphson method, write the iteration formula as

$$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)} = x_i - \frac{x_i^3 - 2x_i^2 + 0.25x_i + 0.75}{3x_i^2 - 4x_i + 0.25}$$
• x0 = −1, x1 = −0.6552, x2 = −0.5221, x3 = −0.5005, x4 = −0.5000, x5 = −0.5000.
• x0 = 0, x1 = −3, x2 = −1.8535, x3 = −1.1328, x4 = −0.7211, x5 = −0.5410, x6 = −0.5018, x7 = −0.5000, x8 = −0.5000.
Figure 6: Drawbacks of using NR method
• x0 = 0.5, x1 = 1, x2 = 1.
• x0 = 1.3, x1 = 1.5434, x2 = 1.5040, x3 = 1.5000, x4 = 1.5000.
• x0 = 1.25, x1 = −0.5000, x2 = −0.5000.
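The starting-point experiments above can be reproduced with a short script (our own sketch; function and parameter names are illustrative). The final root reached depends strongly on $x_0$:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=100):
    # Plain Newton-Raphson; stop when the update step becomes negligible.
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**3 - 2 * x**2 + 0.25 * x + 0.75
df = lambda x: 3 * x**2 - 4 * x + 0.25

# Which of the three roots (-0.5, 1, 1.5) is reached depends on the start:
roots = {x0: round(newton_raphson(f, df, x0), 4) for x0 in (-1.0, 0.0, 0.5, 1.3, 1.25)}
```

Note in particular $x_0 = 1.25$, where the near-zero slope $f'(1.25) = -0.0625$ throws the iterate far away, so it ends up at the root $-0.5$ rather than the nearby root $1.5$.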
4 Secant method
In order to implement the Newton-Raphson method, $f'(x)$ needs to be found analytically and evaluated numerically. In some cases, the analytical (or numerical) evaluation may not be feasible or desirable. The Secant method instead evaluates the derivative on the fly using two-point differencing.
Figure 7: Secant method to find the roots of an equation
As shown in the figure,
$$f'(x_i) \approx \frac{f(x_{i-1}) - f(x_i)}{x_{i-1} - x_i}$$
Then the Newton-Raphson iteration can be modified as

$$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)} = x_i - \frac{f(x_i)}{\dfrac{f(x_{i-1}) - f(x_i)}{x_{i-1} - x_i}} = x_i - \frac{f(x_i)(x_i - x_{i-1})}{f(x_i) - f(x_{i-1})}$$
which is the Secant iteration. The performance of the Secant iteration is typically inferior to that of the Newton-Raphson method.

Example: $f(x) = e^x - 3x = 0$, find $x$.

Solution:

$x_0 = -1.1$, $x_1 = -1$, $x_2 = x_1 - \frac{f(x_1)(x_1 - x_0)}{f(x_1) - f(x_0)} = 0.2709$, $\varepsilon_a = \left|\frac{x_2 - x_1}{x_2}\right| \times 100\% = 469.09\%$

$x_1 = -1$, $x_2 = 0.2709$, $x_3 = x_2 - \frac{f(x_2)(x_2 - x_1)}{f(x_2) - f(x_1)} = 0.4917$, $\varepsilon_a = \left|\frac{x_3 - x_2}{x_3}\right| \times 100\% = 44.90\%$

$x_2 = 0.2709$, $x_3 = 0.4917$, $x_4 = x_3 - \frac{f(x_3)(x_3 - x_2)}{f(x_3) - f(x_2)} = 0.5961$, $\varepsilon_a = \left|\frac{x_4 - x_3}{x_4}\right| \times 100\% = 17.51\%$

$x_5 = 0.6170$, $\varepsilon_a = 3.4\%$

$x_6 = 0.6190$, $\varepsilon_a = 0.32\%$

$x_7 = 0.6191$, $\varepsilon_a = 5.93 \times 10^{-5}$
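The example can be checked with a short Secant implementation (our own sketch; the function name and tolerances are illustrative, not from the lecture):

```python
import math

def secant(f, x0, x1, eps_threshold=1e-6, max_iter=100):
    """Secant iteration: x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        eps_a = abs((x2 - x1) / x2) * 100.0   # percent relative error
        x0, x1 = x1, x2
        if eps_a <= eps_threshold:
            break
    return x1

f = lambda x: math.exp(x) - 3.0 * x
root = secant(f, -1.1, -1.0)   # same starting pair as the example
```

Unlike Newton-Raphson, no derivative is supplied; the two most recent points provide the slope estimate.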
5 False position method
Figure 8: False position method to find the roots of an equation
Idea: if $f(x_l)$ is closer to zero than $f(x_u)$, then the root is more likely to be closer to $x_l$ than to $x_u$. (NOT always true!)

False position steps:
(1). Find $x_l$, $x_u$ with $x_l < x_u$ and $f(x_l)f(x_u) < 0$.

(2). Estimate $x_r$ using similar triangles:

$$\frac{-f(x_l)}{f(x_u)} = \frac{x_r - x_l}{x_u - x_r}$$
Find $x_r$ as

$$x_r = \frac{[f(x_l) - f(x_u)]\,x_u + f(x_u)(x_u - x_l)}{f(x_l) - f(x_u)}$$

or

$$x_r = x_u - \frac{f(x_u)(x_u - x_l)}{f(x_u) - f(x_l)}$$
(3). Determine the next iteration interval:

– If $f(x_l) \cdot f(x_r) < 0$, then the root lies in $(x_l, x_r)$; set $x_u = x_r$ and return to Step (2).

– If $f(x_u) \cdot f(x_r) < 0$, then the root lies in $(x_r, x_u)$; set $x_l = x_r$ and return to Step (2).

– If $f(x_u) \cdot f(x_r) = 0$, then the root has been found. Set the solution $x = x_r$ and terminate the computation.

(4). If $\varepsilon_a < \varepsilon_{\text{threshold}}$, $x = x_r$; else, go back to (2).
The false position method is one of the incremental search methods. In general, the false position method performs better than the bisection method. However, special cases exist (see Fig. 5.14 in the textbook).
Figure 9: Comparison between false position and Secant methods
6 Handling repeated (multiple) roots
Examples of multiple roots:
• $f(x) = x^2 - 4x + 3 = (x-1)(x-3)$ has unrepeated roots: $x = 1$, $x = 3$

• $f(x) = (x-1)^2(x-3)$ has 2 repeated roots at $x = 1$

• $f(x) = (x-1)^3(x-3)$ has 3 repeated roots at $x = 1$

• $f(x) = (x-1)^4(x-3)$ has 4 repeated roots at $x = 1$
Observations:
• The sign of the function does not change around even multiple roots, so the bisection and false position methods do not work.

• Both $f(x)$ and $f'(x)$ go to zero around multiple roots, so the Newton-Raphson and secant methods may converge slowly or even diverge.
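The slowdown is easy to observe numerically. The sketch below (our own illustration, using the double-root example $f(x) = (x-1)^2(x-3)$ from the list above) shows that plain Newton-Raphson only halves the error per step near the double root, i.e., linear rather than quadratic convergence:

```python
f = lambda x: (x - 1.0)**2 * (x - 3.0)                   # double root at x = 1
df = lambda x: 2.0 * (x - 1.0) * (x - 3.0) + (x - 1.0)**2

x = 0.0
errors = []
for _ in range(6):
    x = x - f(x) / df(x)          # plain Newton-Raphson step
    errors.append(abs(1.0 - x))
# Near the double root, each error is roughly half the previous one
# (linear convergence with rate 1/2), instead of being squared.
```

After six iterations the error is still above $10^{-2}$, whereas at a simple root six Newton steps would already reach machine precision.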
6.1 Modified Newton-Raphson method
Fact: Both $f(x)$ and $u(x) = \frac{f(x)}{f'(x)}$ have the same roots, but $u(x)$ has better convergence properties.

Idea: find the roots of $u(x)$ instead of $f(x)$.
[Four plots of $f(x) = (x-3)(x-1)^n$ for $n = 1, 2, 3, 4$, each over $0 \le x \le 4$]
Figure 10: Repeated roots
The Newton-Raphson method iteration for $u(x)$:

$$x_{i+1} = x_i - \frac{u(x_i)}{u'(x_i)}$$

$$u'(x) = \frac{[f'(x)]^2 - f(x)f''(x)}{[f'(x)]^2}$$

$$x_{i+1} = x_i - \frac{[f'(x_i)]^2}{[f'(x_i)]^2 - f(x_i)f''(x_i)} \cdot \frac{f(x_i)}{f'(x_i)}$$

or

$$x_{i+1} = x_i - \frac{f(x_i)f'(x_i)}{[f'(x_i)]^2 - f(x_i)f''(x_i)}$$
Modified Newton-Raphson method has quadratic convergence even for multiple roots.
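A minimal sketch of the modified iteration, again on the double-root example $f(x) = (x-1)^2(x-3)$ (our own test setup; the guard against landing exactly on the root is an implementation detail we added):

```python
f = lambda x: (x - 1.0)**2 * (x - 3.0)                   # double root at x = 1
df = lambda x: 2.0 * (x - 1.0) * (x - 3.0) + (x - 1.0)**2
d2f = lambda x: 2.0 * (x - 3.0) + 4.0 * (x - 1.0)

def modified_newton(x, tol=1e-12, max_iter=20):
    """Modified Newton-Raphson: x <- x - f f' / (f'^2 - f f'')."""
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        denom = dfx**2 - fx * d2f(x)
        if fx == 0.0 or denom == 0.0:   # landed exactly on a root
            break
        step = fx * dfx / denom
        x -= step
        if abs(step) < tol:
            break
    return x

root = modified_newton(0.0)
```

Starting from the same $x_0 = 0$ that left plain Newton-Raphson with an error above $10^{-2}$ after six steps, the modified iteration reaches the double root to high accuracy within a handful of steps. The price is that $f''(x)$ must also be available.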
Claim: $x_t$ is an unrepeated root of $u(x) = 0$.

Suppose $f(x)$ has $n$ repeated roots at $x_t$, $n \ge 2$:

$$f(x) = g(x) \cdot (x - x_t)^n$$

where $g(x_t) \ne 0$. Then

$$f'(x) = g'(x)(x - x_t)^n + g(x)\,n(x - x_t)^{n-1} = (x - x_t)^{n-1}\left[g'(x)(x - x_t) + ng(x)\right]$$
and

$$u(x) = \frac{f(x)}{f'(x)} = \frac{g(x)(x - x_t)^n}{(x - x_t)^{n-1}\left[g'(x)(x - x_t) + ng(x)\right]} = \frac{g(x)(x - x_t)}{g'(x)(x - x_t) + ng(x)}$$

Therefore, $x_t$ is an unrepeated root of $u(x) = 0$.
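To make the last step explicit (a short check, in our own words): the numerator contains the factor $(x - x_t)$ exactly once and the denominator does not vanish at $x_t$, since

$$u(x_t) = 0, \qquad \lim_{x \to x_t} \frac{u(x)}{x - x_t} = \frac{g(x_t)}{g'(x_t)\cdot 0 + n\,g(x_t)} = \frac{1}{n} \ne 0,$$

so near $x_t$ the function $u$ behaves like $(x - x_t)/n$: a simple (unrepeated) root, which is why the standard Newton-Raphson quadratic-convergence argument applies to $u$.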