
Csc349a Cribsheet Rev3.0


Taylor’s Theorem:

f(x) = f(x0) + f'(x0)(x - x0) + f''(x0)(x - x0)^2/2! + ... + f^(n)(x0)(x - x0)^n/n! + f^(n+1)(ξ(x))(x - x0)^(n+1)/(n+1)!

where ξ(x) lies between x0 and x.

f(x) = Pn(x) + Rn(x); Rn(x) = truncation error (remainder)

Determine the second order (n = 2) Taylor polynomial approximation for f(x) = x^(1/4) expanded about x0 = 1. Include the remainder term.
Solution: f(x0) = 1, f'(x0) = (1/4)x0^(-3/4) = 1/4, f''(x0) = -(3/16)x0^(-7/4) = -3/16, f'''(x0) = (21/64)x0^(-11/4) = 21/64

f(x) = 1 + (1/4)(x - 1) - (3/32)(x - 1)^2 + (7/128)ξ(x)^(-11/4)(x - 1)^3

Determine a good upper bound for the truncation error of the Taylor polynomial approximation above when 0.95 ≤ x ≤ 1.06 by bounding the remainder term. Give 4 significant digits.
Solution: max|f(x) - P2(x)| = max|(7/128)ξ(x)^(-11/4)(x - 1)^3| ≤ (7/128)(0.95)^(-11/4)(1.06 - 1)^3 ≈ 1.360 × 10^-5
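A quick numeric check of this bound in MATLAB (a sketch; the variable names are illustrative):

% Compare the observed truncation error of P2(x) for f(x) = x^(1/4)
% about x0 = 1 with the derived remainder bound on [0.95, 1.06].
f  = @(x) x.^(1/4);
P2 = @(x) 1 + (x-1)/4 - 3*(x-1).^2/32;
x  = linspace(0.95, 1.06, 2001);
fprintf('max error = %.4e\n', max(abs(f(x) - P2(x))));
fprintf('bound     = %.4e\n', (7/128)*0.95^(-11/4)*0.06^3);   % approx 1.360e-5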

f(x0 + h) = f(x0) + h f'(x0) + (h^2/2!) f''(x0) + (h^3/3!) f'''(x0) + ...

Taylor’s Theorem can be expressed in two equivalent forms: the way it was defined above, or (as here) by a change of variable, replacing x by x0 + h so that h = x - x0 is the independent variable.

Using the latter form of Taylor’s Theorem (without the remainder term), determine the quadratic (in h) Taylor polynomial approximation to sin(1 + h). Note: leave your answer in terms of cos(1) and sin(1).
Solution: sin(1 + h) ≈ sin(1) + h cos(1) - (h^2/2) sin(1)

Use the Taylor polynomial approximation from above to obtain a polynomial approximation, say p(h), to g(h) = [sin(1 + h) - sin(1)]/h.
Solution: p(h) = [sin(1) + h cos(1) - (h^2/2) sin(1) - sin(1)]/h = cos(1) - (h/2) sin(1)

Floating point operations: the relative error |1 - p*/p| of a single fl() operation is very small: < b^(1-k) (chopping), < 0.5 b^(1-k) (rounding), where b = base and k = # of digits; but the error increases with more fl() operations. Use the following to eliminate subtractive cancellation:

Rationalization; Taylor approximations:

e^x ≈ 1 + x/1! + x^2/2! + x^3/3! + ... + x^n/n!

sin(x) ≈ x, cos(x) ≈ 1 (for x near 0)

Trig substitution:
sin(2θ) = 2 sin(θ)cos(θ)
sin(a - b) = sin(a)cos(b) - cos(a)sin(b)
cos(a - b) = cos(a)cos(b) + sin(a)sin(b)
cos(2θ) = cos^2(θ) - sin^2(θ) = 2cos^2(θ) - 1 = 1 - 2sin^2(θ)

For what values of x, where x > 1, is f(x) = sqrt(x) - sqrt(x - 1) subject to subtractive cancellation? How should f(x) be evaluated in floating point arithmetic in order to avoid cancellation?
Solution: x large and positive. Use rationalization:
f(x) = [sqrt(x) - sqrt(x - 1)][sqrt(x) + sqrt(x - 1)]/[sqrt(x) + sqrt(x - 1)] = 1/[sqrt(x) + sqrt(x - 1)]
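A short MATLAB demonstration of the cancellation (a sketch; the value of x is illustrative):

% For large x, sqrt(x) - sqrt(x-1) loses most significant digits;
% the rationalized form 1/(sqrt(x) + sqrt(x-1)) does not.
x = 1e15;
naive = sqrt(x) - sqrt(x - 1);
rationalized = 1/(sqrt(x) + sqrt(x - 1));
fprintf('naive        = %.16e\n', naive);
fprintf('rationalized = %.16e\n', rationalized);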

Consider f(x) = 1 - tanh(x), where tanh(x) = (e^x - e^(-x))/(e^x + e^(-x)). If fl(f(x)) is to be evaluated for each of the following ranges of values of x, specify whether the computed floating-point result will be accurate or inaccurate.
x is large and positive (e.g. x > 4). Answer: Inaccurate (tanh(x) ≈ 1, so the subtraction cancels)
x is close to 0 (e.g. x = 0.001). Answer: Accurate
x is large and negative (e.g. x < -4). Answer: Accurate

Consider g(h) = [sin(1 + h) - sin(1)]/h, h ≠ 0, where the arguments for sin are in radians. When h is close to 0, evaluation of g(h) is inaccurate in floating-point arithmetic. In the parts below, use 4 digit, idealized, rounding floating-point arithmetic. If x is a floating-point number, assume that fl(sin(x)) is determined by rounding the exact value of sin(x) to 4 significant digits.

Evaluate fl(g(h)) for h = 0.00351:
fl(1 + h) = fl(1.00351) = 1.004
fl(sin(1 + h)) = fl(sin(1.004)) = fl(0.843625...) = 0.8436
fl(sin(1)) = fl(0.841470...) = 0.8415
fl(sin(1 + h) - sin(1)) = fl(0.8436 - 0.8415) = 0.002100
fl((sin(1 + h) - sin(1))/h) = fl(0.002100/0.00351) = fl(0.598290...) = 0.5983
Note: the exact value is 0.538824...
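The same effect can be seen in double precision by taking h much smaller (a sketch):

% Cancellation in g(h) = (sin(1+h) - sin(1))/h vs. the Taylor-based
% reformulation p(h) = cos(1) - (h/2)*sin(1), for very small h.
h = 1e-12;
g = (sin(1 + h) - sin(1))/h;     % suffers subtractive cancellation
p = cos(1) - (h/2)*sin(1);       % stable
fprintf('g(h)   = %.15f\n', g);
fprintf('p(h)   = %.15f\n', p);
fprintf('cos(1) = %.15f\n', cos(1));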


Let f(x) = [(sin(x) - e^x) + 1]/x^2, x ≠ 0, x in radians. To 4 significant digits, the exact value of f(0.123) is -0.5416, but the floating-point computation is inaccurate. In order to obtain a better formula for approximating f(x) when x is close to 0, use the Taylor polynomial approximations for e^x and sin(x) (both expanded about x0 = 0) in order to obtain a quadratic polynomial approximation for f(x), and use it to show that the computation of fl(f(0.123)) is unstable.
Solution: sin(x) ≈ x - x^3/6, e^x ≈ 1 + x + x^2/2 + x^3/6 + x^4/24
f(x) ≈ [x - x^3/6 - (1 + x + x^2/2 + x^3/6 + x^4/24) + 1]/x^2 = -0.5 - x/3 - x^2/24
Let x = 0.123 + ε, where ε/0.123 is small. The given problem x = 0.123 has computed solution -0.4629; compare to the perturbed problem x = 0.123 + ε:
f(0.123 + ε) = -0.5 - (0.123 + ε)/3 - (0.123 + ε)^2/24 = -0.54163 - 0.34358ε + O(ε^2) ≈ -0.5416 for all small ε, so the computation is unstable.

Problem Conditioning: a problem is ill-conditioned if its exact solution changes greatly with small changes in the data; it is well-conditioned only if, for ALL small εi, r̂ ≈ r. Data d has exact solution r; perturbed data d̂ = d + ε has exact solution r̂, with |ε/d| small.

If a, b, c, d, e, f have known values then
[a b; c d][x; y] = [e; f]
is a system of 2 linear equations in the 2 unknowns x, y. If ad - bc ≠ 0, then the solution is
x = (ed - bf)/(ad - bc) and y = (af - ec)/(ad - bc)

Consider the system
[0.96 1.23; 4.91 6.29][x; y] = [0.27; 1.38]
Show that the problem of computing the solution [x; y] is ill-conditioned.

Solution:

Almost any perturbation of the 6 constants in the data will do. For example,
[0.961 1.23; 4.89 6.29][x; y] = [0.27; 1.38]
has the exact solution [0.03; 0.196], whereas the given system has the exact solution [-1; 1].
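A MATLAB check of this ill-conditioning (a sketch):

% Solve the given system and the perturbed system with backslash;
% a change in the 3rd significant digit of the data flips the solution.
A  = [0.96 1.23; 4.91 6.29];  b = [0.27; 1.38];
Ap = [0.961 1.23; 4.89 6.29];
disp((A\b)')               % approx (-1, 1)
disp((Ap\b)')              % approx (0.03, 0.196)
fprintf('cond(A) = %.3g\n', cond(A))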

Algorithm Stability: an algorithm is stable if it determines a computed solution (using floating-point arithmetic) that is close to the exact solution of some small perturbation of the problem: stable if for SOME perturbed data d̂ = d + ε, with |ε/d| small, the exact solution r̂ of the perturbed problem is close to the computed solution; unstable only if NO such perturbed data exists.

Let f(x) = [(sin(x) - e^x) + 1]/x^2 ≈ -0.5 - x/3 - x^2/24; the approximation is accurate when x is close to 0. Show that the computation of fl(f(0.123)) is unstable.
Solution:
given problem: x = 0.123, computed solution -0.4629
perturbed problem: x = 0.123 + ε
f(x) ≈ -0.5 - x/3 - x^2/24 for x close to 0
f(0.123 + ε) = -0.5 - (0.123 + ε)/3 - (0.123 + ε)^2/24 = -0.54163 - 0.34358ε + O(ε^2)
so f(0.123 + ε) ≈ -0.5416 for all ε such that ε/0.123 is small.
Since this is not close to -0.4629 for all small ε, the computation is unstable.

Evaluate wx - yz for w = 16.00, x = 43.61, y = 12.31, z = 56.68.
Using idealized, rounding, floating point arithmetic (base 10, k = 4), the above expression evaluates to 0.1. The exact value is 0.0292. Use the definition of stability to show (by using only a perturbation in y) that the computation is stable.
Solution: To show that it is stable, find a value ε for which the exact value of (16.00)(43.61) - (12.31 + ε)(56.68) is approximately equal to 0.1. Solving for ε gives
ε = [(16.00)(43.61) - 0.1]/56.68 - 12.31 = -0.0012491
Thus, for example, if y = 12.31 is perturbed to ŷ = y + ε = 12.31 - 0.00125, then the exact value of wx - ŷz = 0.10005, which is close to 0.1, and since |ε/y| = 0.00125/12.31 is small, the computation is stable.

Bisection Method: used to compute a zero of any function f(x) that is continuous on any interval [a,b] for which f(a)f(b) < 0; has linear convergence (α = 1). A minimal MATLAB sketch follows the steps below.
•  a and b are two initial approximations
•  the new approximation is the midpoint of [a,b]: c = (a + b)/2
•  if f(c) = 0, stop
•  otherwise, a new interval that is half the length of the previous interval is determined:
   o  if f(a)f(c) < 0, set b ← c
   o  if f(b)f(c) < 0, set a ← c
•  repeat until [a,b] is sufficiently small; c will be close to a zero of f(x)
•  if a tolerance ε is specified, stop when (b - a)/2 < ε
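A minimal MATLAB sketch of the steps above (the test function, interval and tolerance are illustrative):

% Bisection for a zero of f on [a,b], assuming f(a)*f(b) < 0.
f = @(x) x.^3 - x - 1;  a = 1;  b = 2;  tol = 1e-10;
while (b - a)/2 >= tol
    c = (a + b)/2;
    if f(c) == 0, break; end
    if f(a)*f(c) < 0
        b = c;                 % zero lies in [a, c]
    else
        a = c;                 % zero lies in [c, b]
    end
end
fprintf('computed zero = %.10f\n', (a + b)/2)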

Maximum absolute error of successive approximations: |p - pn| ≤ (b - a)/2^n, where pn = mid(a, b) at the nth step. Number of iterations N to achieve absolute error < ε: N > log2((b - a)/ε).

Newton’s Method: if p is a zero of f(x), and p0 is an approximation to p, then:
pn = pn-1 - f(pn-1)/f'(pn-1)
•  Always converges if the initial approximation p0 is sufficiently close to the root p.
•  converges quadratically (α = 2) if p is a simple zero (multiplicity m = 1)

f ∈ C^2[a,b] means: f(x), f'(x), f''(x) all exist and are continuous on [a,b]; p ∈ (a,b) is a root of f(x) = 0, i.e. f(p) = 0.

Apply Newton’s method to f(x) = x^2 - 1/c in order to determine an iterative formula for computing 1/sqrt(c).
Solution: f(x) = x^2 - 1/c, f'(x) = 2x
pn = pn-1 - (pn-1^2 - 1/c)/(2pn-1) = (pn-1^2 + 1/c)/(2pn-1) = 0.5(pn-1 + 1/(c pn-1))

For arbitrary c > 0, let p0 be the initial approximation to 1/sqrt(c), let {p1, p2, p3, ...} be the sequence of computed approximations to 1/sqrt(c) using the iterative formula above, and let en = pn - 1/sqrt(c) for n = 0, 1, 2, 3, ... Show using algebra that en = en-1^2/(2pn-1). It then follows that lim(n→∞) |en|/|en-1|^2 = lim(n→∞) 1/(2pn-1) = sqrt(c)/2.
Solution: en = pn - 1/sqrt(c) = (pn-1^2 + 1/c)/(2pn-1) - 1/sqrt(c) = [pn-1^2 - 2pn-1/sqrt(c) + 1/c]/(2pn-1) = [pn-1 - 1/sqrt(c)]^2/(2pn-1) = en-1^2/(2pn-1)

Let R denote any positive number. Apply the Newton-Raphson method to f(x) = x^2 - R/x in order to determine an iterative formula for computing R^(1/3). Simplify the formula so that it is in the form xn+1 = xn[g(xn)/h(xn)], where g(xn) and h(xn) are simple polynomials in xn.
Solution: xn+1 = xn - f(xn)/f'(xn) = xn - (xn^2 - R/xn)/(2xn + R/xn^2) = xn - xn(xn^3 - R)/(2xn^3 + R) = xn[1 - (xn^3 - R)/(2xn^3 + R)] = xn(2xn^3 + R - xn^3 + R)/(2xn^3 + R) = xn(xn^3 + 2R)/(2xn^3 + R)

Consider the case R = 2. Given some initial value x0, if the iterative formula above converges to 2^(1/3), what will be the order of convergence?
Solution: f'(x) = 2x + R/x^2 = 2x + 2/x^2, so at the root x = 2^(1/3), f'(2^(1/3)) = 2·2^(1/3) + 2/2^(2/3) ≠ 0 ⇒ 2^(1/3) is a simple zero ⇒ quadratic convergence.
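A MATLAB sketch of the cube-root iteration above (the initial guess is illustrative):

% Iterate xn+1 = xn*(xn^3 + 2R)/(2*xn^3 + R) to approximate R^(1/3).
R = 2;  x = 1.3;
for n = 1:6
    x = x*(x^3 + 2*R)/(2*x^3 + R);
    fprintf('n = %d  x = %.14f\n', n, x);
end
fprintf('exact 2^(1/3) = %.14f\n', 2^(1/3))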

Convergence:
Suppose that the absolute errors in three consecutive approximations with some iterative method are en = 0.07, en+1 = 0.013, en+2 = 0.00085. Use the definition of convergence to estimate the order of convergence of the iterative method.
Solution:
Definition of the order of convergence α: lim(n→∞) en+1/en^α = λ. If n is sufficiently large then en+1/en^α ≈ λ ≈ en+2/en+1^α ⇒ en+1/en+2 = (en/en+1)^α; solve for α: α ≈ 1.62.
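The same estimate in MATLAB (a sketch; en+2 = 0.00085 is the value reconstructed above):

% Estimate the order of convergence from three consecutive absolute errors.
e0 = 0.07;  e1 = 0.013;  e2 = 0.00085;
alpha = log(e1/e2)/log(e0/e1);   % from en+1/en+2 = (en/en+1)^alpha
fprintf('alpha = %.2f\n', alpha)  % approx 1.62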

Multiplicity:
•  If f(x) has a zero at p, keep taking derivatives until f^(m)(p) ≠ 0, i.e. f(p) = f'(p) = ... = f^(m-1)(p) = 0 and f^(m)(p) ≠ 0
•  m = multiplicity
•  If m = 1, then p is a simple zero of f(x)

Secant Method: pn = pn-1 - f(pn-1)(pn-1 - pn-2)/(f(pn-1) - f(pn-2))
Error in kth approximation: ek = p - pk; |ek+1| ≈ λ|ek|^α
Order of convergence: α = (1 + sqrt(5))/2 ≈ 1.618 for a simple zero.

Horner’s Algorithm: Given a polynomial P(x) = Σ(k=0 to n) ak x^k and a value x0, it evaluates P(x0) and P'(x0). Given a0, a1, ..., an and x0, compute:
bn = an                cn = bn
bn-1 = an-1 + bn x0    cn-1 = bn-1 + cn x0
...
b1 = a1 + b2 x0        c1 = b1 + c2 x0 = P'(x0)
b0 = a0 + b1 x0 = P(x0)
P(x) = (x - x0)Q(x) + b0
Original polynomial: P(x) = an x^n + an-1 x^(n-1) + ... + a1 x + a0
Deflated polynomial: Q(x) = b1 + b2 x + b3 x^2 + ... + bn x^(n-1)

An approximation to one of the zeros of P(x) = x^4 + x^3 - 6x^2 - 7x - 7 is x0 = 2.64. If x0 is used as an approximation to a zero of P(x), use synthetic division (Horner’s algorithm) to determine the associated


deflated polynomial. Note: do not do any computations with the Newton-Raphson method.
Solution: a0 = -7, a1 = -7, a2 = -6, a3 = 1, a4 = b4 = 1
b3 = a3 + b4 x0 = 1 + 1(2.64) = 3.64
b2 = a2 + b3 x0 = -6 + 3.64(2.64) = 3.6096
b1 = a1 + b2 x0 = -7 + 3.6096(2.64) = 2.529344
b0 = a0 + b1 x0 = -7 + 2.529344(2.64) = -0.322532
Deflated polynomial: Q(x) = x^3 + 3.64x^2 + 3.6096x + 2.529344
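The same deflation can be checked in MATLAB with deconv (a sketch):

% Divide P(x) = x^4 + x^3 - 6x^2 - 7x - 7 by (x - 2.64).
[q, r] = deconv([1 1 -6 -7 -7], [1 -2.64]);
disp(q)        % [1 3.64 3.6096 2.529344] = coefficients of the deflated polynomial
disp(r(end))   % -0.322532 = b0 = P(2.64)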

Newton’s Method with Horner’s Algorithm and Polynomial Deflation:
•  to approximate a zero of P(x)
•  choose an initial approximation p0
•  for N = 1, 2, ...: use Horner’s algorithm to evaluate {bn ... b0} and {cn ... c1}, so that b0 = P(pN-1) and c1 = P'(pN-1); compute pN ← pN-1 - b0/c1
•  pN is the approximation to a zero of P(x); the deflated polynomial is Q(x) = b1 + b2 x + b3 x^2 + ... + bn x^(n-1)

Muller’s Method: P(x) = a(x - x2)^2 + b(x - x2) + c
P(x) and f(x) are equal at x0, x1, x2:
c = f(x2)
b = [(x0 - x2)^2(f(x1) - f(x2)) - (x1 - x2)^2(f(x0) - f(x2))]/[(x0 - x2)(x1 - x2)(x0 - x1)]
a = [(x1 - x2)(f(x0) - f(x2)) - (x0 - x2)(f(x1) - f(x2))]/[(x0 - x2)(x1 - x2)(x0 - x1)]
To determine the next approximation x3:
x3 = x2 - 2c/(b + sign(b) sqrt(b^2 - 4ac))
then reinitialize x0, x1, x2 to be x1, x2, x3 and repeat.
Order of convergence ≈ 1.84 for a simple zero.

Let P(x) = x^4 + x^2 - 3. If Muller’s method is used to approximate a zero of P(x) using the initial approximations 0, 1, 2, give the Lagrange form of the interpolating polynomial.
Solution:
x0 = 0, x1 = 1, x2 = 2
L0(x) = (x - x1)(x - x2)/[(x0 - x1)(x0 - x2)] = (x - 1)(x - 2)/2
L1(x) = (x - x0)(x - x2)/[(x1 - x0)(x1 - x2)] = x(x - 2)/(-1)
L2(x) = (x - x0)(x - x1)/[(x2 - x0)(x2 - x1)] = x(x - 1)/2
f(x0) = P(0) = -3, f(x1) = P(1) = -1, f(x2) = P(2) = 17
P2(x) = -3L0(x) - L1(x) + 17L2(x)

Polynomial Interpolation:

let yi = f(xi) for given values xi; then P(x) with P(xi) = yi interpolates f(x).
General form (n = 2):
P(x) = [(x - x1)(x - x2)/((x0 - x1)(x0 - x2))] y0 + [(x - x0)(x - x2)/((x1 - x0)(x1 - x2))] y1 + [(x - x0)(x - x1)/((x2 - x0)(x2 - x1))] y2
For linear interpolation (n = 1):
P(x) = [(x - x1)/(x0 - x1)] y0 + [(x - x0)/(x1 - x0)] y1

Let P(x) denote the (linear) polynomial of degree 1 that interpolates f(x) = cos(x) at the points x0 = -0.1 and x1 = 0.1 (where x is in radians). Use the error term of polynomial interpolation to determine an upper bound for |P(x) - f(x)|, where x ∈ [-0.1, 0.1]. Do not construct P(x).
Solution:
|P(x) - f(x)| = |f''(ξ)(x - 0.1)(x + 0.1)/2!| with -0.1 < ξ < 0.1
= |-cos(ξ)(x^2 - 0.01)/2| ≤ [cos(0)/2] · max(-0.1 ≤ x ≤ 0.1)|x^2 - 0.01| = 0.5|0 - 0.01| = 0.005
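A quick numeric check of this bound (a sketch):

% Max deviation of the linear interpolant to cos(x) at x0 = -0.1, x1 = 0.1.
x0 = -0.1;  x1 = 0.1;
P  = @(x) cos(x0)*(x - x1)/(x0 - x1) + cos(x1)*(x - x0)/(x1 - x0);
x  = linspace(-0.1, 0.1, 2001);
fprintf('max |P - cos| = %.6f (bound 0.005)\n', max(abs(P(x) - cos(x))))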

Error Term of Polynomial Interpolation:
f(x) - P(x) = [f^(n+1)(ξ(x))/(n+1)!](x - x0)(x - x1)...(x - xn)
where P(x) is the interpolation polynomial and x0, x1, ..., xn are distinct points; therefore
|f(x) - P(x)| ≤ [1/(n+1)!] max|f^(n+1)(ξ(x))| · max|(x - x0)(x - x1)...(x - xn)|
•  to determine each max, either use inspection or take the derivative and set it equal to 0
•  assume f(x) ∈ C^(n+1)[a,b]


Runge Phenomenon:
as n → ∞, Pn(x) diverges from f(x) for all values of x such that 0.726 ≤ |x| ≤ 1 (except at the points of interpolation xi); the classic example is f(x) = 1/(1 + 25x^2) interpolated at equally spaced points on [-1, 1]. Consider the error term for polynomial interpolation as an explanation of this.

Lagrange Interpolation Polynomials: P(xk) = f(xk) for k = 0, 1, 2, ..., n
P(x) = f(x0)L(n,0)(x) + f(x1)L(n,1)(x) + ... + f(xn)L(n,n)(x)
L(n,k)(x) = [(x - x0)(x - x1)...(x - xk-1)(x - xk+1)...(x - xn)]/[(xk - x0)(xk - x1)...(xk - xk-1)(xk - xk+1)...(xk - xn)]
= Π(i=0, i≠k, to n) (x - xi)/(xk - xi)

Order of Convergence: lim(n→∞) |en+1|/|en|^α = λ; if λ is positive (and finite) then α is the order of convergence.

Cubic Spline Interpolation: S(x) is a cubic spline interpolant for f(x) if:
(a) S(x) is a cubic polynomial, denoted by Sj(x), on each subinterval [xj, xj+1], 0 ≤ j ≤ n-1
(b) Sj(xj) = f(xj) for 0 ≤ j ≤ n-1, and Sn-1(xn) = f(xn)
(c) Sj+1(xj+1) = Sj(xj+1), for 0 ≤ j ≤ n-2
(d) Sj+1'(xj+1) = Sj'(xj+1), for 0 ≤ j ≤ n-2
(e) Sj+1''(xj+1) = Sj''(xj+1), for 0 ≤ j ≤ n-2
(f) and either one of:
(i) S''(x0) = S''(xn) = 0 (free/natural boundary condition)
(ii) S'(x0) = f'(x0) and S'(xn) = f'(xn) (clamped boundary condition)
There are infinitely many solutions for conditions (a)-(e): 4n - 2 conditions to be satisfied in 4n unknowns. If (f) is added, there are 4n conditions in 4n unknowns and a unique S(x).

Determine a0, b0, d0, a1, b1, c1 and d1 so that
S(x) = { a0 + b0x - 3x^2 + d0x^3, -1 ≤ x ≤ 0;  a1 + b1x + c1x^2 + d1x^3, 0 ≤ x ≤ 1 }
is the natural cubic spline function such that S(-1) = 1, S(0) = 2 and S(1) = -1. Clearly identify the 8 conditions that the unknowns must satisfy, and then solve for the 7 unknowns.
Solution: S0'(x) = b0 - 6x + 3d0x^2, S1'(x) = b1 + 2c1x + 3d1x^2
S0''(x) = -6 + 6d0x, S1''(x) = 2c1 + 6d1x
The 8 conditions are:
S0(-1) = 1 ⇒ a0 - b0 - 3 - d0 = 1 ⇒ a0 - b0 - d0 = 4
S1(0) = 2 ⇒ a1 = 2
S1(1) = -1 ⇒ a1 + b1 + c1 + d1 = -1 ⇒ b1 + c1 + d1 = -3
S1(0) = S0(0) ⇒ a1 = a0 ⇒ a0 = 2
S1'(0) = S0'(0) ⇒ b1 = b0
S1''(0) = S0''(0) ⇒ 2c1 = -6 ⇒ c1 = -3
S0''(-1) = 0 ⇒ -6 - 6d0 = 0 ⇒ d0 = -1
S1''(1) = 0 ⇒ 2c1 + 6d1 = 0 ⇒ -6 + 6d1 = 0 ⇒ d1 = 1
From the first condition, b0 = a0 - d0 - 4 = -1
From the fifth condition, b1 = b0 ⇒ b1 = -1
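A MATLAB check of the computed pieces (a sketch):

% Verify the interpolation conditions for the solved spline coefficients.
S0 = @(x) 2 - x - 3*x.^2 - x.^3;   % a0 = 2, b0 = -1, c0 = -3, d0 = -1 on [-1, 0]
S1 = @(x) 2 - x - 3*x.^2 + x.^3;   % a1 = 2, b1 = -1, c1 = -3, d1 = 1 on [0, 1]
fprintf('S(-1) = %g, S(0) = %g, S(1) = %g\n', S0(-1), S1(0), S1(1))   % 1, 2, -1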

Numerical Differentiation Formulas:
Given f(x) at points x0, ..., xn, write a Lagrange interpolating polynomial and differentiate it and its error term:
f'(xj) = Σ(k=0 to n) f(xk)Lk'(xj) + f^(n+1)(ξ(xj)) · d/dx[(x - x0)(x - x1)...(x - xn)/(n+1)!] evaluated at x = xj
Simplified error term:
[f^(n+1)(ξ(xj))/(n+1)!] Π(k=0, k≠j, to n)(xj - xk)

For n = 1, j = 0:
f'(x0) ≈ [f(x1) - f(x0)]/(x1 - x0), with error term [f''(ξ)/2](x0 - x1)

For n = 2 (middle of 3 data points), j = 1:
f'(x1) ≈ f(x0)(x1 - x2)/[(x0 - x1)(x0 - x2)] + f(x1)(2x1 - x0 - x2)/[(x1 - x0)(x1 - x2)] + f(x2)(x1 - x0)/[(x2 - x0)(x2 - x1)]
with error term [f'''(ξ1)/6](x1 - x0)(x1 - x2), where ξ1 lies between x0 and x2. For equally spaced data, x1 - x0 = x2 - x1 = h:
f'(x1) = (1/(2h))[f(x2) - f(x0)] - (h^2/6)f'''(ξ), where x1 - h < ξ < x1 + h

For n = 2, j = 0 or 2 (equally spaced data):
f'(x0) = (1/(2h))[-3f(x0) + 4f(x0 + h) - f(x0 + 2h)] + (h^2/3)f'''(ξ), where x0 < ξ < x0 + 2h
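A small MATLAB demonstration of the two equally spaced formulas (a sketch; the test function is illustrative):

% Three-point difference approximations to f'(x) for f = exp at x = 1.
f = @exp;  x1 = 1;  h = 0.1;
midpoint = (f(x1 + h) - f(x1 - h))/(2*h);                  % n = 2, j = 1
endpoint = (-3*f(x1) + 4*f(x1 + h) - f(x1 + 2*h))/(2*h);   % n = 2, j = 0
fprintf('midpoint %.6f  endpoint %.6f  exact %.6f\n', midpoint, endpoint, exp(1))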

 

Numerical Differentiation Using Taylor’s Theorem:
f(x0 + h) = f(x0) + h f'(x0) + (h^2/2) f''(x0) + ... + (h^n/n!) f^(n)(x0) + (h^(n+1)/(n+1)!) f^(n+1)(ξ)
For -h, replace all h by -h. Write Taylor expansions for all points, then take a linear combination to cancel as many derivatives as possible.

Consider the initial-value problem y'(t) = (1/t)(y^2 + y) and y(1) = -2. Use the Taylor method of order n = 2 with h = 0.1 to approximate y(1.1). Show all of your work and the iterative formula.
Solution: f(t, y(t)) = (1/t)[(y(t))^2 + y(t)], so
y''(t) = f'(t, y) = ∂f/∂t + (∂f/∂y)y' = (-1/t^2)(y^2 + y) + (1/t)(2y + 1)((y^2 + y)/t) = 2y^2(y + 1)/t^2
Taylor method of order 2 is
wi+1 = wi + h f(ti, wi) + (h^2/2) f'(ti, wi)
= wi + h wi(wi + 1)/ti + (h^2 wi^2/ti^2)(wi + 1)
So w1 = w0 + h w0(w0 + 1)/t0 + h^2 w0^2(w0 + 1)/t0^2
= -2 + 0.1(-2)(-2 + 1)/1 + 0.01(4)(-2 + 1)/1^2 = -1.84
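The same computation in MATLAB (a sketch):

% One step of the Taylor method of order 2 for y' = (y^2 + y)/t, y(1) = -2.
h = 0.1;  t = 1;  w = -2;
w = w + h*w*(w + 1)/t + h^2*w^2*(w + 1)/t^2;
fprintf('approximation to y(1.1): %.4f\n', w)   % -1.84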

Richardson’s Extrapolation:
Case 1:
M = N(h) + K1 h + K2 h^2 + K3 h^3 + ...
M = exact value, N1(h) = computed approximation using stepsize h; truncation error is O(h).
Use the same formula to approximate M, but replace h by h/2 (or h/some value). Determine the linear combination of the two formulas for M so that the largest term in the truncation error (O(h)) cancels out. N calculation for the extrapolation table:
Nj(h/2) = Nj-1(h/2) + [Nj-1(h/2) - Nj-1(h)]/(2^(j-1) - 1), for j ≥ 2

Case 2:
M = N(h) + K1 h^2 + K2 h^4 + K3 h^6 + ...
M has truncation error of O(h^2). Replace h by h/2 (or h/some value) and follow the steps in Case 1. N calculation for the extrapolation table:
Nj(h/2) = Nj-1(h/2) + [Nj-1(h/2) - Nj-1(h)]/(4^(j-1) - 1), for j ≥ 2
Nj(h) has truncation error O(h^(2j)).

Newton-Cotes Closed Quadrature Formulas:
Trapezoidal rule (n = 1):
∫(x0 to x1) f(x)dx = (h/2)[f(x0) + f(x1)] - (h^3/12) f''(ξ)
h = x1 - x0, degree of precision = 1
Simpson’s Rule (n = 2):
∫(x0 to x2) f(x)dx = (h/3)[f(x0) + 4f(x1) + f(x2)] - (h^5/90) f^(4)(ξ)

h = (x2 - x0)/2, degree of precision = 3
Simpson’s Rule Precision Table (degree of precision is 3):
compare ∫(a to b) f(x)dx with [(b - a)/6][f(a) + 4f((a + b)/2) + f(b)]:
f(x) = 1:   ∫ = b - a;          rule = [(b - a)/6][1 + 4 + 1] = b - a   (equal)
f(x) = x:   ∫ = (b^2 - a^2)/2;  rule = [(b - a)/6][a + 4(a + b)/2 + b] = (b^2 - a^2)/2   (equal)
f(x) = x^2: ∫ = (b^3 - a^3)/3;  rule = [(b - a)/6][a^2 + 4((a + b)/2)^2 + b^2] = (b^3 - a^3)/3   (equal)
f(x) = x^3: ∫ = (b^4 - a^4)/4;  rule = [(b - a)/6][a^3 + 4((a + b)/2)^3 + b^3] = (b^4 - a^4)/4   (equal)
f(x) = x^4: ∫ = (b^5 - a^5)/5;  rule = [(b - a)/6][a^4 + 4((a + b)/2)^4 + b^4] ≠ (b^5 - a^5)/5   (not equal)

Newton-Cotes Open Quadrature:
The open Newton-Cotes quadrature formula (for n = 1) that approximates ∫(a to b) g(x)dx is (3h/2)[g(x0) + g(x1)], where h = (b - a)/3 and xi = a + (i + 1)h for i = 0, 1. Recall that Euler's method can be derived by integrating the differential equation y'(t) = f(t, y(t)) over [ti, ti+1], using a simple "rectangular" rule to approximate ∫ f(t, y(t))dt. Derive a formula for approximating the solution to y'(t) = f(t, y(t)) by integrating this differential equation over [ti, ti+3] and approximating ∫ f(t, y(t))dt by the above open Newton-Cotes quadrature formula for n = 1.
Solution:
∫ y'(t)dt = ∫ f(t, y(t))dt ∴ y(ti+3) = y(ti) + ∫(ti to ti+3) f(t, y(t))dt ≈ y(ti) + (3h/2)[f(ti+1, y(ti+1)) + f(ti+2, y(ti+2))]
which suggests ωi+3 = ωi + (3h/2)[f(ti+1, ωi+1) + f(ti+2, ωi+2)]

Quadrature Precision:
Determine the degree of precision of the quadrature formula (3h/4)[3f(h) + f(3h)], which is an approximation to ∫(0 to 3h) f(x)dx.
Solution:
f(x) = 1:   ∫(0 to 3h) dx = 3h;           (3h/4)[3 + 1] = 3h   (equal)
f(x) = x:   ∫(0 to 3h) x dx = 9h^2/2;     (3h/4)[3h + 3h] = 9h^2/2   (equal)
f(x) = x^2: ∫(0 to 3h) x^2 dx = 9h^3;     (3h/4)[3h^2 + 9h^2] = 9h^3   (equal)
f(x) = x^3: ∫(0 to 3h) x^3 dx = 81h^4/4;  (3h/4)[3h^3 + 27h^3] = 45h^4/2 ≠ 81h^4/4
Thus the degree of precision is 2.

Composite Simpson’s Rule: m applications of Simpson’s rule on [a,b] requires [a,b] to be subdivided into an even number of subintervals, each of length h = (b - a)/(2m).
Case m = 2: requires 5 quadrature points and 4 subintervals.
∫(a to b) f(x)dx ≈ (h/3)[f(x0) + 4f(x1) + 2f(x2) + 4f(x3) + f(x4)]
Case m = 4: requires 9 quadrature points and 8 subintervals.
∫(a to b) f(x)dx ≈ (h/3)[f(x0) + 4f(x1) + 2f(x2) + 4f(x3) + 2f(x4) + 4f(x5) + 2f(x6) + 4f(x7) + f(x8)]
Truncation error: E(f) = -[(b - a)/180] h^4 f^(4)(μ)

Compute two approximations to ∫(0 to 0.2) x/(1 + x^2) dx, using the non-composite Simpson’s rule (with h = 0.1) and using the composite Simpson’s rule (with h = 0.05).
Solution:
Simpson’s rule: I1 = (h/3)[f(0) + 4f(0.1) + f(0.2)] = 0.01961158
Composite Simpson’s rule: I2 = (h/3)[f(0) + 4f(0.05) + 2f(0.1) + 4f(0.15) + f(0.2)] = 0.01961043

Use the truncation error terms of the two approximations to obtain an O(h^6) approximation to the value of the given definite integral.
Solution:
With h = 0.1:
(1) I = I1 + kh^4 + O(h^6)
(2) I = I2 + k(h/2)^4 + O(h^6)
16×(2) - (1), solve for I ⇒ I = I2 + (I2 - I1)/15 + O(h^6)
∴ I ≈ 0.01961035
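A MATLAB check of these numbers (a sketch):

% Simpson (h = 0.1) and composite Simpson (h = 0.05) for the integral of
% x/(1+x^2) on [0, 0.2], plus the extrapolated value.
f  = @(x) x./(1 + x.^2);
I1 = 0.1/3*(f(0) + 4*f(0.1) + f(0.2));
I2 = 0.05/3*(f(0) + 4*f(0.05) + 2*f(0.1) + 4*f(0.15) + f(0.2));
I  = I2 + (I2 - I1)/15;
fprintf('I1 = %.8f  I2 = %.8f  I = %.10f\n', I1, I2, I)
fprintf('exact = %.10f\n', 0.5*log(1.04))   % antiderivative is log(1+x^2)/2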

Romberg Integration: the application of Richardson’s Extrapolation to composite trapezoidal rule approximations. Rk,1 is the composite trapezoidal rule approximation to ∫(a to b) f(x)dx using 2^(k-1) subintervals.


R1,1
R2,1  R2,2
R3,1  R3,2  R3,3
⋮     ⋮         ⋱
R1,1 = (h/2)[f(a) + f(b)], where h = b - a
R2,1 = (1/2)[R1,1 + h f(a + h/2)], where h = b - a
R3,1 = (1/2)[R2,1 + h(f(a + h/2) + f(a + 3h/2))], where h = (b - a)/2

For columns > 1:
Rk,j = Rk,j-1 + [Rk,j-1 - Rk-1,j-1]/(4^(j-1) - 1)

Truncation error: 1st column = O(h^2), 2nd column = O(h^4), 3rd O(h^6), ...
Error term, first column: -[(b - a)/12] h^2 f''(μ), where a < μ < b

Consider approximating ∫(a to b) f(x)dx using Romberg integration. Denote the Romberg table by
R1,1
R2,1  R2,2
R3,1  R3,2  R3,3
where Rk,1 = the trapezoidal rule approximation to ∫(a to b) f(x)dx using 2^(k-1) subintervals on [a,b]. Use R1,1 and R2,1 and Richardson extrapolation to show that R2,2 is equal to the Simpson’s Rule approximation to ∫(a to b) f(x)dx.
Solution: R2,2 = [4R2,1 - R1,1]/3, or R2,1 + [R2,1 - R1,1]/3
= [4(b - a)(f0 + 2f1 + f2)/4 - (b - a)(f0 + f2)/2]/3
= (b - a)[2(f0 + 2f1 + f2) - (f0 + f2)]/6 = (b - a)[f0 + 4f1 + f2]/6
= (h/3)(f0 + 4f1 + f2) with h = (b - a)/2

Adaptive Quadrature:
S1 = (non-composite) Simpson’s rule approximation to ∫(a to b) f(x)dx
S2 = composite Simpson’s rule approximation using 2 applications of Simpson’s rule
∫(a to b) f(x)dx = S2 - (h^5/1440) f^(4)(μ) ≈ S2 + (1/15)(S2 - S1)
|∫(a to b) f(x)dx - S2| ≈ (1/15)|S2 - S1|; if |S2 - S1| < 15ε, then |∫(a to b) f(x)dx - S2| < ε

Gauss-Legendre: To approximate an integral using this method it must be modified using a change of variable to an integral on [-1,1]. The integral is then approximated as a linear combination of f(xi) using values from the following table:

i Coefficients (ci) Roots(xi)

1 5/9 0.7745967

2 8/9 0.0

3 5/9 -0.7745967

∫(a to b) f(x)dx = ∫(-1 to 1) g(t)dt ≈ Σ(i=1 to n) ci g(xi)

Approximate ∫(-1 to 1) cos(x)/sqrt(1 - x^2) dx using Gauss-Legendre with n = 3.

Solution: f(x) = cos(x)/sqrt(1 - x^2)
∫(-1 to 1) f(x)dx ≈ (5/9)f(x1) + (8/9)f(x2) + (5/9)f(x3)
= 5cos(0.7745967)/[9 sqrt(1 - 0.7745967^2)] + 8cos(0.0)/[9 sqrt(1 - 0.0^2)] + 5cos(-0.7745967)/[9 sqrt(1 - (-0.7745967)^2)] ≈ 2.144494
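The same Gauss-Legendre sum in MATLAB (a sketch, assuming the integrand reconstructed above):

% 3-point Gauss-Legendre for the integral of cos(x)/sqrt(1-x^2) on [-1, 1].
g = @(t) cos(t)./sqrt(1 - t.^2);
c = [5/9, 8/9, 5/9];
t = [0.7745967, 0.0, -0.7745967];
fprintf('approx = %.6f\n', sum(c.*g(t)))   % approx 2.144494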

Euler’s Method: is obtained from the truncated Taylor polynomial approximation by replacing y(ti+1) by its numerical approximation wi+1, and similarly replacing y(ti) by wi:
w0 = α (initial condition)
wi+1 = wi + h f(ti, wi)
Geometric interpretation of Euler’s Method: w0 = α = y(t0), and (wi+1 - wi)/h = f(ti, wi) ≈ y'(ti), so each step follows the tangent line to the solution through (ti, wi).

Global Truncation Error at ti is |yi - wi|. If the global truncation error is O(h^k), the numerical method for computing wi is said to be of order k; the larger the order k, the faster the rate of convergence. A difference method is said to be convergent (with respect to the differential equation it approximates) if lim(h→0) max|yi - wi| = 0. For Euler’s method,
max|yi - wi| ≤ (hM/2L)(e^(L(ti - a)) - 1)
where L is a Lipschitz constant for f and |y''| ≤ M. Euler’s method is of order 1.

Local Truncation Error: the error incurred in one step of a numerical method, assuming that the value at the previous mesh point is exact. For Euler’s Method,
vi+1 = yi + h f(ti, yi)
|yi+1 - vi+1| = (h^2/2)|y''(ξ)| ≤ (h^2/2)M, where |y''(ξ)| ≤ M
Local truncation error is O(h^2). If the local truncation error is O(h^(k+1)) then the global is O(h^k) – the method is order k.

Disadvantages of Euler’s: not sufficiently accurate. Global truncation error is O(h), meaning h must be small for a high-accuracy approximation.
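A minimal MATLAB sketch of Euler’s method (the test problem is the one from the examples above; N is illustrative):

% Euler's method for y' = (y^2 + y)/t, y(1) = -2, with N steps on [1, 2].
f = @(t, y) (y^2 + y)/t;
a = 1;  b = 2;  N = 10;  h = (b - a)/N;
t = a;  w = -2;
for i = 1:N
    w = w + h*f(t, w);    % wi+1 = wi + h f(ti, wi)
    t = t + h;
end
fprintf('approximation to y(%g): %.6f\n', t, w)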

Taylor’s Method of order n:
wi+1 = wi + h f(ti, wi) + (h^2/2) f'(ti, wi) + ... + (h^n/n!) f^(n-1)(ti, wi), for i = 0, 1, ..., N-1
Local truncation error is O(h^(n+1)), from the remainder term (h^(n+1)/(n+1)!) y^(n+1)(ξ).
Global truncation error is O(h^n), so the order is n.
Euler’s method is just the special case where n = 1.

Runge-Kutta Methods: higher order (≥ 1) formulas that only require evaluations of f(t, y(t)), and not any of its derivatives. Methods of order 4 and 5 are most commonly used.

Consider the initial-value problem y'(t) = (1/t)(y^2 + y) and y(1) = -2. Approximate y(1.1) using h = 0.1 and the following second order Runge-Kutta method:
wi+1 = wi + (h/2)[f(ti, wi) + f(ti+1, wi + h f(ti, wi))]
Solution: w1 = w0 + (h/2)[f(t0, w0) + f(t1, w0 + h f(t0, w0))]
= -2 + (0.1/2)[f(1, -2) + f(1.1, -2 + 0.1 f(1, -2))]
= -2 + 0.05[(4 - 2) + f(1.1, -2 + 0.1(2))] = -2 + 0.05[2 + ((-1.8)^2 - 1.8)/1.1]
= -2 + 0.05(2 + 1.309090) = -1.8345

The order of the local truncation error of this Runge-Kutta method is O(h^3).
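The same step in MATLAB (a sketch):

% One step of the second-order Runge-Kutta method quoted above.
f  = @(t, y) (y^2 + y)/t;
h  = 0.1;  t0 = 1;  w0 = -2;
w1 = w0 + (h/2)*(f(t0, w0) + f(t0 + h, w0 + h*f(t0, w0)));
fprintf('w1 = %.4f\n', w1)   % -1.8345, approximates y(1.1)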

Direct Methods for Solving Linear Systems:
Matrix algebra: Ax = b → x = A^-1 b
Ax = b: n equations, n unknowns.
Case 1:
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
an1x1 + an2x2 + ... + annxn = bn
Case 2 (fitting P(x) = a0 + a1x + a2x^2):
[1 x0 x0^2; 1 x1 x1^2; 1 x2 x2^2][a0; a1; a2] = [f(x0); f(x1); f(x2)]
Forward Elimination:
m21 = a21/a11, E2 ← E2 - m21E1
m31 = a31/a11, E3 ← E3 - m31E1
m32 = a32/a22, E3 ← E3 - m32E2
Make triangular and back substitute.

Algorithms:
Forward elimination:
For i = 1 to n-1
  For j = i+1 to n
    mult ← aji/aii
    For k = i+1 to n
      ajk ← ajk - mult·aik
    bj ← bj - mult·bi

Back substitution:
xn ← bn/ann
For i = n-1 down to 1
  xi ← [bi - Σ(j=i+1 to n) aij xj]/aii

Fails with divide by zero (aii = 0). Use Partial Pivoting to reduce error:
Find largest pivot:
p ← i
for k = i+1 to n
  if |aki| > |api| then p ← k
row swap if necessary:
if p ≠ i
  For k = i to n
    temp ← aik
    aik ← apk
    apk ← temp
  temp ← bi
  bi ← bp
  bp ← temp
continue with Gaussian elimination.

Matlab: for Ax = b, type A\b for Gaussian Elimination with Partial Pivoting.
det A = (-1)^m a11 a22 ... ann (the product of the pivots of the triangularized matrix), where m = # of row swaps. To find the inverse, solve [A|I]. Better method: if A^-1 b is needed, solve Ax = b for x. If A^-1 B is needed, it is more efficient to solve AX = B for X.

Stability: stable if there exist small perturbations E, e such that the computed solution x̂ is close to the exact solution y of (A + E)y = b + e.
A small perturbation means ‖E‖/‖A‖ and ‖e‖/‖b‖ are small.
‖A‖∞ = max(1 ≤ i ≤ n) Σ(j=1 to n)|aij|, ‖x‖∞ = max|xi|
Gaussian elimination with Partial Pivoting is almost always stable.
Condition of the problem Ax = b: ill-conditioned if there exists one small perturbation of the data (A + E)y = b + e for which the exact solution y is far from x. Can be measured by the condition number of A: K(A) = ‖A‖·‖A^-1‖. Matlab: cond(A)

Interpolation with a linear system:
For the following data (xi, f(xi)): (-1, 0); (1, 1); (2, 3). Suppose a function g(x) of the form g(x) = c0 + c1e^(-x) + c2e^x is to be determined so that g(x) interpolates the above data at the specified points xi. Write a system of linear equations in matrix/vector form Ac = b whose solution will give the values of the unknowns c0, c1 and c2 that solve this interpolation. Leave the answer in terms of e; do not solve the system.
Solution: [1 e e^-1; 1 e^-1 e; 1 e^-2 e^2] * [c0; c1; c2] = [0; 1; 3]
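Solving the system numerically in MATLAB (a sketch):

% Backslash applies Gaussian elimination with partial pivoting.
A = [1 exp(1) exp(-1); 1 exp(-1) exp(1); 1 exp(-2) exp(2)];
b = [0; 1; 3];
c = A\b
% g(x) = c(1) + c(2)*exp(-x) + c(3)*exp(x) then interpolates the data.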


Matlab Code:
ezplot('0.5*exp(x/3)-sin(x)', [-4, 5])
ezplot('3*x^3-x-1', [-1, 2])
fzero to compute the zeros of f(x):
fzero('0.5*exp(x/3)-sin(x)', [-5, -3])
Change the bounds for each interval in which a zero is located.

Polynomial Operations:
The coefficients of a polynomial are stored in a vector p (starting with the coefficient of the highest power of x).
p(x) = x^3 - 2x - 5:
p = [1 0 -2 -5];
r = roots(p)
will compute and store all of the roots of the polynomial in r. Alternate form: r = roots([1 0 -2 -5]).
If r is any vector, p = poly(r) will result in p being a vector whose entries are the coefficients of a polynomial p(x) that has as its roots the entries of r.
Example:
r = [1 2 3 4];
p = poly(r)
results in p = [1 -10 35 -50 24], i.e.
p(x) = x^4 - 10x^3 + 35x^2 - 50x + 24
The function polyval evaluates a polynomial at a specified value. If p = [1 -10 35 -50 24], then polyval(p, -1) gives the value of p(x) = x^4 - 10x^3 + 35x^2 - 50x + 24 at x = -1, namely 120. MATLAB uses Horner's algorithm for polyval.

polynewton computes one zero of a polynomial. Input variables: n, the polynomial degree; a, a vector of the coefficients of the polynomial P(x); pzero, the initial approximation to a zero; Nzero, the maximum number of iterations allowed; tol, the relative error tolerance used to test for convergence. The output variables are p, the final computed approximation to a zero of P(x), and b, the final vector of values computed by Horner’s algorithm, from which the deflated polynomial can be obtained.
polynewton(4, [-3.3 -1 0 2 5], 1.3, 20, 1e-8);
will output to the screen each computed approximation to the zero.
[x, y] = polynewton(4, [-3.3 -1 0 2 5], 1.3, 20, 1e-8);
will output to the screen each computed approximation to a zero, will store the final computed approximation in the variable x, and will store the final computed vector of values from Horner’s algorithm in the vector y.

The MATLAB function interp1 can be used to do linear and cubic polynomial interpolation. If X denotes a vector of values and Y = F(X) a set of corresponding function values, then z = interp1(X, Y, x) or z = interp1(X, Y, x, 'linear') does linear interpolation; z = interp1(X, Y, x, 'cubic') determines z based on a cubic polynomial approximation.

For the cubic spline with clamped boundary conditions, the data to be interpolated should be stored in vectors X and Y, where Y has 2 more entries than X and the first and last entries of Y are the two boundary conditions. If S(x) denotes the cubic spline interpolant, and z is a given number, then the value of S(z) can be computed by entering spline(X, Y, z). To determine the coefficients of the spline, first determine the pp (piecewise polynomial) form of the spline by entering pp = spline(X, Y). Then enter [breaks, coefs] = unmkpp(pp). breaks: a vector of the knots (or nodes) of the spline; coefs: an array, the i-th row of which contains the coefficients of the i-th spline piece.

quad - numerically evaluate an integral using adaptive Simpson quadrature. Use the array operators .*, ./ and .^
∫(0.1 to 2) sin(1/x) dx using tol = 10^-5:
[Q, fnc] = quad(@f, 0.1, 2, 10^-5)
function y = f(x)
y = sin(1./x);

ode45: a variable stepsize Runge-Kutta method that uses two Runge-Kutta formulas of orders 4 and 5. Solves non-stiff differential equations; medium order method.
y'(t) = -y(t) - 5e^(-t) sin(5t), y(0) = 1, on [0, 3]:
[t, y] = ode45('f', [0 3], 1)
function z = f(t, y)
z = -y-5*exp(-t)*sin(5*t);

Gaussian Elimination Algorithm
(forward elimination)
For i = 1, 2, ..., n-1 do
  For j = i+1, i+2, ..., n do
    mult ← aji/aii
    for k = i+1, i+2, ..., n do
      ajk ← ajk - mult*aik
    bj ← bj - mult*bi
(back substitution)
xn ← bn/ann
for i = n-1, n-2, ..., 1 do
  xi ← [bi - sum(aij xj, from j = i+1 to n)]/aii

Gaussian Elimination with partial pivoting
(forward elimination)
for i = 1, 2, ..., n-1 do
  (find the largest pivot)
  p ← i
  for k = i+1, i+2, ..., n do
    if |ak,i| > |ap,i| then p ← k
  if p != i then
    for k = i, i+1, ..., n do
      temp ← ai,k
      ai,k ← ap,k
      ap,k ← temp
    temp ← bi
    bi ← bp
    bp ← temp
  for j = i+1, i+2, ..., n do
    mult ← aj,i/aii
    for k = i+1, i+2, ..., n do
      aj,k ← aj,k - mult*ai,k
    bj ← bj - mult*bi
(back-substitution)
xn ← bn/an,n
for i = n-1, n-2, ..., 1 do
  xi ← [bi - sum(aij xj, from j = i+1 to n)]/aii

Matlab version of Newton's method for computing 1/R:

function p = reciprocal(R, pzero, Nzero, tol)
i = 1;
while i <= Nzero
    p = 2*pzero - R*pzero*pzero;   % Newton step for f(x) = 1/x - R
    fprintf('i = %g', i), fprintf(' approximation = %18.10f\n', p)
    y(i) = p;
    if abs(1 - pzero/p) < tol
        return
    end
    i = i + 1;
    pzero = p;
end
fprintf('failed to converge in %g', Nzero), fprintf(' iterations\n')

Horner: to evaluate the polynomial. Input – location x0, degree n, coefficients a. Outputs: b(1) = P(x0), c(2) = P'(x0)

function [b, c] = horner(x0, n, a)
b(n+1) = a(n+1);
c(n+1) = a(n+1);
for j = n: -1: 2
    b(j) = a(j) + b(j+1)*x0;
    c(j) = b(j) + c(j+1)*x0;
end
b(1) = a(1) + b(2)*x0;

function [p, b] = polynewton(n, a, pzero, Nzero, tol)
i = 1;
while i <= Nzero
    [b, c] = horner(pzero, n, a);
    p = pzero - b(1)/c(2);   % Newton step: P(pzero)/P'(pzero)
    fprintf('i = %g', i), fprintf(' approximation = %18.10f\n', p)
    if abs(1 - pzero/p) < tol
        return
    end
    i = i + 1;
    pzero = p;
end
fprintf('failed to converge in %g', Nzero), fprintf(' iterations\n')

Secant: to find a zero of the function f. Input – initial approximations p0, p1, tolerance tol, and max # of iterations N.

function p = secant(p0, p1, N, tol)
i = 2;
q0 = f(p0);
q1 = f(p1);
while i <= N
    p = p1 - q1*(p1 - p0)/(q1 - q0);   % secant step
    if abs(p - p1) < tol
        return;
    end
    i = i + 1;
    p0 = p1; q0 = q1;
    p1 = p;  q1 = f(p);
end
fprintf('failed to converge in %g', N), fprintf(' iterations\n')

Assignment #3
Plot a graph of the function:
>> ezplot('0.5*exp(x/3)-sin(x)', [-4, 5])
fzero to compute the 3 zeros of the function:
>> fzero('0.5*exp(x/3)-sin(x)', [-5, -3])
ans = -3.3083
h = fzero('10*(0.5*pi*1^2-1^2*asin(x/1)-x*sqrt(1^2-x^2))-12.4', [0, 1])
h = 0.1662; >> r = 1; >> x = r - h; x = 0.8338

newton.m
To find a zero of the function. Input – initial approximation pzero, maximum iterations Nzero, tolerance tol.

function p = newton(pzero, Nzero, tol)
i = 1;
while i <= Nzero
    p = pzero - f(pzero)/fp(pzero);   % Newton step
    fprintf('i = %g', i), fprintf(' approximation = %18.10f\n', p)
    if abs(1 - pzero/p) < tol
        return
    end
    i = i + 1;
    pzero = p;
end
fprintf('failed to converge in %g', Nzero), fprintf(' iterations\n')

f.m:
function y = f(x)
y = 32*(x - sin(x)) - 35;

fp.m:
function y = fp(x)
y = 32*(1 - cos(x));

Multiplicity of the roots:

function test_mult(x)
root = newton(pi/2, 40, 1e-8);
fprintf('\nroot used = %18.10f\n', root);
for i = 0:x
    fprintf('m = %g', i), fprintf(' p = %18.10f\n', f_diff(root, i))
end

Use interp1 to approximate z = P(1.3), where P(x) is the piecewise linear interpolating polynomial to y = sin(x) at the equally-spaced points x(i) = (i-1)*pi/20, i = 1, ..., 11:

for i = 1:11
    x(i) = (i-1)*pi/20;
    y(i) = sin(x(i));
end
z = interp1(x, y, 1.3, 'linear');
fprintf(' z = %18.10f\n', z);
fprintf(' sin(1.3) = %18.10f\n', sin(1.3));
fprintf(' abs(sin(1.3)-z) = %18.10f\n', abs(sin(1.3)-z));

Use polyval to evaluate q(x) at the first zero of q(x) computed by MATLAB in (a). This should give a result very close to 0.

r = [1 1 1 1 1 1 1 1];
p = poly(r)
roots_of_p = roots(p)
q = p + [0 0 0 0.001 0 0 0 0 0]
qr = roots(q)
polyval(q, qr(1))

Newton's method to find the reciprocal 1/R:
f(x) = 1/x - R, f'(x) = -1/x^2
p1 = p0 - f(p0)/f'(p0) = p0 - (1/p0 - R)/(-1/p0^2) = p0 + p0^2(1/p0 - R) = 2p0 - R·p0^2

Composite Trapezoidal Rule: Input – upper and lower limits of the integral a and b, max # of iterations maxiter, and tolerance tol.

function trap(a, b, maxiter, tol)
m = 1;
x = linspace(a, b, m+1);
y = f(x);
approx = trapz(x, y);
disp('     m   integral approximation');
fprintf(' %5.0f %16.10f \n', m, approx);
for i = 1:maxiter
    m = m*2;
    oldapprox = approx;
    x = linspace(a, b, m+1);
    y = f(x);
    approx = trapz(x, y);
    fprintf(' %5.0f %16.10f \n', m, approx);
    if abs(1 - approx/oldapprox) < tol
        return
    end
end
fprintf('The iteration did not converge in %g', maxiter), fprintf(' iterations.')

% Use back-substitution to solve Ax = y for x.
function x = solve(n, A, y)
x(n) = y(n) / A(n, n);
for i = n-1 : -1 : 1
    sum = 0;
    for j = i+1 : n
        sum = sum + A(i, j) * x(j);
    end
    x(i) = (y(i) - sum)/A(i, i);
end