
Course EE317: Computation and Simulation
Dr Conor Brennan
Room S339, School of Electronic Engineering
Dublin City University, [email protected]

    Section One: Numerical Differentiation

This section examines ways to numerically approximate the first and second derivatives of a function. Firstly, a word about notation. Let f(x) be a function that takes the specific value f(x_0) at the point x_0. In this handout we use the notation f^{(1)}(x_0) to denote the value of the first derivative of f at the point x_0, that is

    f^{(1)}(x_0) = \left.\frac{df}{dx}\right|_{x=x_0}.   (1)

The second and third derivatives evaluated at x_0 are thus denoted by f^{(2)}(x_0) and f^{(3)}(x_0) respectively, and so on. Note that in some textbooks derivatives are denoted using primes, that is, the first derivative is denoted f'(x), the second derivative f''(x) and so on.

    1 Forward, Backward and Central Difference

The Taylor series relates the value of a function at a point x_0 to the value of the function at a nearby point (x_0 + h), that is

    f(x_0 + h) = f(x_0) + h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) + \frac{h^3}{3!} f^{(3)}(x_0) + \ldots   (2)

    We can rearrange the above to get

    \frac{f(x_0 + h) - f(x_0)}{h} = f^{(1)}(x_0) + \frac{h}{2!} f^{(2)}(x_0) + \ldots,   (3)

and this can be used to make the forward difference approximation to f^{(1)}(x_0), thus

    f^{(1)}(x_0) \simeq \frac{f(x_0 + h) - f(x_0)}{h}.   (4)

Obviously this is only an approximation, albeit one that becomes more accurate as h gets smaller^1. How precisely does the accuracy depend on h? Let \epsilon_f denote the error incurred using this forward difference approximation. Then

    \epsilon_f \equiv f^{(1)}(x_0) - \frac{f(x_0 + h) - f(x_0)}{h}   (5)

           = -\frac{h}{2!} f^{(2)}(x_0) - \ldots   (6)

^1 Indeed, in the limit as h \to 0 this is exact.


where we have used equation (3). So this approximation is of order O(h). This means that as h gets smaller the error of the forward difference approximation also gets smaller, in a linear fashion. Can we do better than this? Well, we could examine how the function varies as we step backwards from x_0, that is, create the backward difference approximation. Using the Taylor series we can write

    f(x_0 - h) = f(x_0) - h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) - \frac{h^3}{3!} f^{(3)}(x_0) + \ldots   (7)

    Rearranging yields

    \frac{f(x_0) - f(x_0 - h)}{h} = f^{(1)}(x_0) - \frac{h}{2!} f^{(2)}(x_0) + \frac{h^2}{3!} f^{(3)}(x_0) - \ldots,   (8)

    and so we can construct the following backward difference approximation

    f^{(1)}(x_0) \simeq \frac{f(x_0) - f(x_0 - h)}{h}.   (9)

Again, this is an approximation that is more accurate as h gets smaller, but can we quantify how the error depends on h? Inspection of the equations yields

    \epsilon_b \equiv f^{(1)}(x_0) - \frac{f(x_0) - f(x_0 - h)}{h}   (10)

           = \frac{h}{2!} f^{(2)}(x_0) - \frac{h^2}{3!} f^{(3)}(x_0) + \ldots   (11)

This approximation is again of order O(h). As h gets smaller the error gets smaller in a linear fashion, and so we have not improved on our forward difference approximation. However, we can combine these two approximations to get an improved one. By subtracting equation (7) from equation (2) we see that

    f(x_0 + h) - f(x_0 - h) = \left( f(x_0) + h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) + \frac{h^3}{3!} f^{(3)}(x_0) + \ldots \right)   (12)

                            - \left( f(x_0) - h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) - \frac{h^3}{3!} f^{(3)}(x_0) + \ldots \right)   (13)

                            = 2 \left( h f^{(1)}(x_0) + \frac{h^3}{3!} f^{(3)}(x_0) + \ldots \right)   (14)

Therefore

    \frac{f(x_0 + h) - f(x_0 - h)}{2h} = f^{(1)}(x_0) + \frac{h^2}{3!} f^{(3)}(x_0) + \ldots   (15)

    and we can therefore construct the central difference approximation

    f^{(1)}(x_0) \simeq \frac{f(x_0 + h) - f(x_0 - h)}{2h}.   (16)

    The error for this approximation is given by

    \epsilon_c = -\frac{h^2}{3!} f^{(3)}(x_0) - \ldots   (17)
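These error orders can be checked numerically. The following minimal Python sketch (the test function f(x) = sin x and the point x_0 = 1 are arbitrary choices for illustration) halves h repeatedly and prints the forward and central difference errors:

```python
import math

def forward(f, x0, h):
    # forward difference, equation (4): error O(h)
    return (f(x0 + h) - f(x0)) / h

def central(f, x0, h):
    # central difference, equation (16): error O(h^2)
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

x0 = 1.0
exact = math.cos(x0)          # test function f = sin, so f^(1)(x0) = cos(x0)
for h in [0.1, 0.05, 0.025]:
    ef = abs(forward(math.sin, x0, h) - exact)
    ec = abs(central(math.sin, x0, h) - exact)
    print(f"h = {h:<6} forward error = {ef:.2e}  central error = {ec:.2e}")
```

Halving h roughly halves the forward difference error but quarters the central difference error, consistent with O(h) and O(h^2) behaviour.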


The central difference approximation is thus accurate to order O(h^2) and so, for small values of h, will in general be better than both the forward and backward approximations.

Example: Consider the polynomial

    f(x) = x^3 + x^2 - 1.25x - 0.75.   (18)

Figure (1) shows sampled values of the polynomial in the range [-2, 1.5], where we have sampled in steps of h = 0.2. The derivative of this polynomial is given by

[Figure 1: Sampled values of polynomial]

    f^{(1)}(x) = 3x^2 + 2x - 1.25   (19)

and is shown in figure (2) along with the forward, backward and central difference approximations. We see that the central difference approximation is clearly superior to both the forward and backward difference approximations.
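The comparison behind figure (2) can be reproduced in a few lines. The sketch below evaluates the three approximations for the example polynomial at the (arbitrarily chosen) point x_0 = 0.5 with the handout's step h = 0.2:

```python
def f(x):
    return x**3 + x**2 - 1.25*x - 0.75    # equation (18)

def f1_exact(x):
    return 3*x**2 + 2*x - 1.25            # equation (19)

x0, h = 0.5, 0.2
forward  = (f(x0 + h) - f(x0)) / h            # equation (4)
backward = (f(x0) - f(x0 - h)) / h            # equation (9)
central  = (f(x0 + h) - f(x0 - h)) / (2 * h)  # equation (16)

for name, approx in [("forward", forward), ("backward", backward), ("central", central)]:
    print(f"{name:8s} {approx:7.4f}  error = {abs(approx - f1_exact(x0)):.4f}")
```

At this point the exact derivative is 0.5; the central difference error (0.04) is an order of magnitude smaller than the forward (0.54) and backward (0.46) errors.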

    1.1 Second order Derivatives

A similar procedure can be followed to derive approximations to higher order derivatives. If we add equation (2) and equation (7) we get

    f(x_0 + h) + f(x_0 - h) = \left( f(x_0) + h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) + \frac{h^3}{3!} f^{(3)}(x_0) + \ldots \right)   (20)

                            + \left( f(x_0) - h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) - \frac{h^3}{3!} f^{(3)}(x_0) + \ldots \right)   (21)

                            = 2 \left( f(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) + \frac{h^4}{4!} f^{(4)}(x_0) + \ldots \right)   (22)


[Figure 2: Exact versus approximate derivative, showing the exact, backward, forward and central difference curves against x.]

    Therefore

    \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} = f^{(2)}(x_0) + \frac{h^2}{12} f^{(4)}(x_0) + \ldots   (23)

    So we can construct the central difference approximation to the second derivative

    f^{(2)}(x_0) \simeq \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2}.   (24)

The error is of order O(h^2) and is given by

    \epsilon_c = -\frac{h^2}{12} f^{(4)}(x_0) - \ldots   (25)
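Equation (24) can be checked directly with a short sketch (reusing the example polynomial of equation (18) purely for illustration). Note that for a cubic f^{(4)} = 0, so the error term in equation (25) vanishes and the approximation is exact up to rounding:

```python
def f(x):
    return x**3 + x**2 - 1.25*x - 0.75    # equation (18)

def second_central(f, x0, h):
    # central difference for f^(2)(x0), equation (24)
    return (f(x0 + h) - 2*f(x0) + f(x0 - h)) / h**2

x0 = 0.5
approx = second_central(f, x0, 0.1)
print(approx)   # exact value is f^(2)(0.5) = 6*0.5 + 2 = 5
```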

2 Richardson's Extrapolation

Clearly it is possible to obtain better approximations by using more and more samples of the function, thereby using smaller and smaller step sizes h. However, as h becomes smaller we run an increased risk of encountering numerical round-off error due to the finite precision of the computer. Also, when we are using finite differences to solve partial or ordinary differential equations we would like to get the most accurate answers with the fewest number of samples; this requires that we keep h relatively large. Is there a way that we can reduce the error of the approximation even further without having to reduce h? Well, we have the following Taylor


expansions

    f(x_0 + h) = \sum_{n=0}^{\infty} \frac{h^n}{n!} f^{(n)}(x_0)   (26)

    f(x_0 - h) = \sum_{n=0}^{\infty} (-1)^n \frac{h^n}{n!} f^{(n)}(x_0)   (27)

    which enable us to write the following central difference approximation.

    \frac{f(x_0 + h) - f(x_0 - h)}{2h} = f^{(1)}(x_0) + \frac{h^2}{3!} f^{(3)}(x_0) + \frac{h^4}{5!} f^{(5)}(x_0) + \ldots   (28)

If we have evenly sampled function data we can also create an estimate based on step size 2h. The Taylor series expansions are

    f(x_0 + 2h) = \sum_{n=0}^{\infty} \frac{(2h)^n}{n!} f^{(n)}(x_0)   (29)

    f(x_0 - 2h) = \sum_{n=0}^{\infty} (-1)^n \frac{(2h)^n}{n!} f^{(n)}(x_0)   (30)

    and so the central difference approximation is given by

    \frac{f(x_0 + 2h) - f(x_0 - 2h)}{4h} = f^{(1)}(x_0) + \frac{4h^2}{3!} f^{(3)}(x_0) + \frac{16h^4}{5!} f^{(5)}(x_0) + \ldots   (31)

Why would we do this? A central difference based on equation (31) won't be as accurate as one based on equation (28), as the step size between samples is greater (2h rather than h). However, we can combine the two estimates to create an approximation that is better than either of them. This process is called Richardson extrapolation. If we multiply equation (28) by 4 and subtract equation (31) we get

    \frac{f(x_0 - 2h) - 8f(x_0 - h) + 8f(x_0 + h) - f(x_0 + 2h)}{4h} = 3 f^{(1)}(x_0) - \frac{12h^4}{5!} f^{(5)}(x_0) + \ldots   (32)

and so an improved estimate is given by

    f^{(1)}(x_0) \simeq \frac{f(x_0 - 2h) - 8f(x_0 - h) + 8f(x_0 + h) - f(x_0 + 2h)}{12h}.   (33)

The error is O(h^4) and is given by

    \epsilon_R = \frac{4h^4}{5!} f^{(5)}(x_0) + \ldots   (34)

We can also use Richardson's extrapolation to improve our estimate of the second derivative from O(h^2) to O(h^4). We get

    f^{(2)}(x_0) = \frac{-f(x_0 - 2h) + 16f(x_0 - h) - 30f(x_0) + 16f(x_0 + h) - f(x_0 + 2h)}{12h^2} + O(h^4)   (35)
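The gain from extrapolation is easy to see numerically. A short sketch (the test function f(x) = sin x and the point x_0 = 1 are arbitrary choices for illustration):

```python
import math

def central(f, x0, h):
    # plain central difference, equation (16): error O(h^2)
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

def richardson(f, x0, h):
    # Richardson-extrapolated first derivative, equation (33): error O(h^4)
    return (f(x0 - 2*h) - 8*f(x0 - h) + 8*f(x0 + h) - f(x0 + 2*h)) / (12 * h)

x0, h = 1.0, 0.1
exact = math.cos(x0)                             # f = sin, so f^(1)(x0) = cos(x0)
print(abs(central(math.sin, x0, h) - exact))     # O(h^2) error
print(abs(richardson(math.sin, x0, h) - exact))  # O(h^4) error, much smaller
```

Both estimates use the same sampled data; the extrapolated one simply combines the h and 2h central differences so that the leading h^2 error terms cancel.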


Example: A function is given in tabular form below. Find approximations to the 1st derivative of f(x) at x = 0.8 with error O(h), O(h^2) and O(h^4) respectively. Find approximations to the 2nd derivative with error O(h^2) and O(h^4) respectively.

    x    | 0.6    | 0.7    | 0.8    | 0.9    | 1.0
    f(x) | 5.9072 | 6.0092 | 6.3552 | 6.9992 | 8.0000

Here h = 0.1. The approximation of f^{(1)}(x) accurate to O(h) is

    f^{(1)}(0.8) \simeq \frac{f(0.9) - f(0.8)}{0.1} = \frac{6.9992 - 6.3552}{0.1} = 6.44

The approximation of f^{(1)}(x) accurate to O(h^2) is

    f^{(1)}(0.8) \simeq \frac{f(0.9) - f(0.7)}{2(0.1)} = \frac{6.9992 - 6.0092}{0.2} = 4.95

The approximation of f^{(1)}(x) accurate to O(h^4) is

    f^{(1)}(0.8) \simeq \frac{f(0.6) - 8f(0.7) + 8f(0.9) - f(1.0)}{12(0.1)} = \frac{5.9072 - 8 \times 6.0092 + 8 \times 6.9992 - 8.0000}{1.2} = 4.856

The approximation of f^{(2)}(x) accurate to O(h^2) is

    f^{(2)}(0.8) \simeq \frac{f(0.9) - 2f(0.8) + f(0.7)}{(0.1)^2} = \frac{6.9992 - 2 \times 6.3552 + 6.0092}{0.01} = 29.8

The approximation of f^{(2)}(x) accurate to O(h^4) is

    f^{(2)}(0.8) \simeq \frac{-f(0.6) + 16f(0.7) - 30f(0.8) + 16f(0.9) - f(1.0)}{12(0.1)^2} = \frac{-5.9072 + 16 \times 6.0092 - 30 \times 6.3552 + 16 \times 6.9992 - 8.0000}{0.12} = 29.76
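The arithmetic in this example can be verified with a short sketch (the tabulated values are copied from the table above):

```python
# Tabulated samples from the example, with h = 0.1 and evaluation point x0 = 0.8.
fv = {0.6: 5.9072, 0.7: 6.0092, 0.8: 6.3552, 0.9: 6.9992, 1.0: 8.0000}
h = 0.1

d1_Oh  = (fv[0.9] - fv[0.8]) / h                               # forward, O(h)
d1_Oh2 = (fv[0.9] - fv[0.7]) / (2*h)                           # central, O(h^2)
d1_Oh4 = (fv[0.6] - 8*fv[0.7] + 8*fv[0.9] - fv[1.0]) / (12*h)  # equation (33), O(h^4)
d2_Oh2 = (fv[0.9] - 2*fv[0.8] + fv[0.7]) / h**2                # equation (24), O(h^2)
d2_Oh4 = (-fv[0.6] + 16*fv[0.7] - 30*fv[0.8] + 16*fv[0.9] - fv[1.0]) / (12*h**2)  # equation (35), O(h^4)

print(round(d1_Oh, 3), round(d1_Oh2, 3), round(d1_Oh4, 3))  # 6.44 4.95 4.856
print(round(d2_Oh2, 2), round(d2_Oh4, 2))                   # 29.8 29.76
```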