
UNSW Mathematics Society presents

MATH2089 Revision Seminar

Numerical Methods & Statistics: Numerical Methods

Presented by Janzen Choi
T2, 2020


Table of Contents

1 Part I: Linear Systems
  - LU factorisation
  - Cholesky factorisation
  - Sparsity of matrices
  - Norm and condition number
  - Sensitivity of a linear system

2 Part II: Least Squares & Polynomial Interpolation
  - Least squares
    - Normal equations
    - QR factorisation
  - Polynomial interpolation
    - Lagrange polynomials
    - Chebyshev points

3 Part III: Nonlinear Equations
  - Bisection
  - Fixed point iteration


  - Newton's method
  - Secant method

4 Part IV: Numerical Differentiation and Integration
  - Taylor approximation
  - Finite difference methods
    - Forward difference approximation
    - Central difference approximation
  - Quadrature rules
    - Trapezoidal rule
    - Simpson's rule
    - Gauss-Legendre rule
  - Change of variables

5 Part V: Ordinary Differential Equations
  - First order IVP
  - Existence and uniqueness
  - Euler's method


  - System of first order ODEs
  - 2 and 4-stage Runge-Kutta methods

6 Part VI: Partial Differential Equations
  - Finite difference methods of PDEs


Part I: Linear Systems


Matrix factorisation − LU factorisation

Given an n × n matrix A, if the leading principal sub-matrices A_k are non-singular for all k = 1, …, n, then there exist n × n matrices L and U, where L is unit lower triangular and U is upper triangular, such that

A = LU.

If the factorisation exists, it is also unique. Obtain the LU factorisation by applying row operations of the form

R_i ← R_i − L_ij R_j.

Cost of factorisation
The cost of the LU factorisation is 2n³/3 + O(n²) flops.


Effects of the permutation matrix

If A is non-singular, then there exist n × n matrices L, U and P, with P a permutation matrix, such that

PA = LU.

  - PA reorders the rows of A but does not change the solution to the linear system.
  - AP reorders the columns of A and affects the solution to the linear system.


Solving linear systems using LU factorisation

We note that

Ax = b  =⇒  (PA)x = Pb  =⇒  LUx = Pb.

1 Forward substitution: solve Ly = Pb = z for y. Then

  y_1 = z_1,    y_i = z_i − ∑_{j=1}^{i−1} L_ij y_j,    i = 2, …, n.

2 Back substitution: solve Ux = y for x. Then

  x_n = y_n / U_nn,    x_i = (1/U_ii) ( y_i − ∑_{j=i+1}^{n} U_ij x_j ),    i = n − 1, …, 1.
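In MATLAB the whole procedure takes only a few lines (a minimal sketch; the matrix A and right-hand side b below are made-up example data):

% LU factorisation with partial pivoting, then two triangular solves
A = [2 -1 2; -1 1 -1; 2 -1 3];
b = [1; 2; 3];
[L, U, P] = lu(A);   % P*A = L*U
y = L \ (P*b);       % forward substitution
x = U \ y;           % back substitution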


Solving linear systems using LU factorisation

Cost of solution
  - LU factorisation: 2n³/3 + O(n²) flops.
  - Forward substitution: n² + O(n) flops.
  - Back substitution: n² + O(n) flops.
  - Total cost: 2n³/3 + O(n²) flops.


Matrix factorisation − Cholesky factorisation

Given an n × n matrix A, if the Cholesky factorisation exists, then it is of the form

A = RᵀR,

where R is an n × n upper triangular matrix with R_ii > 0 for all i = 1, …, n.

  - A Cholesky factorisation is unique when it exists.
  - The matrix A is positive definite and symmetric.
  - All eigenvalues of A are positive.

Cost of factorisation
The cost of the Cholesky factorisation is n³/3 + O(n²) flops.


Solving linear systems using Cholesky factorisation

We note that

Ax = b  =⇒  (RᵀR)x = b.

1 Forward substitution: solve Rᵀy = b for y. Then

  y_1 = b_1 / R_11,    y_i = (1/R_ii) ( b_i − ∑_{j=1}^{i−1} R_ji y_j ),    i = 2, …, n.

2 Back substitution: solve Rx = y for x. Then

  x_n = y_n / R_nn,    x_i = (1/R_ii) ( y_i − ∑_{j=i+1}^{n} R_ij x_j ),    i = n − 1, …, 1.
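A minimal MATLAB sketch of the same steps (the symmetric positive definite A and the vector b are made-up example data; MATLAB's chol returns the upper triangular R with A = R'*R):

% Cholesky-based solve for a symmetric positive definite matrix
A = [4 1 1; 1 3 0; 1 0 2];
b = [6; 4; 3];
R = chol(A);     % A = R'*R with R upper triangular
y = R' \ b;      % forward substitution with R'
x = R \ y;       % back substitution with R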


Solving linear systems using Cholesky factorisation

Cost of solution
  - Cholesky factorisation: n³/3 + O(n²) flops.
  - Forward substitution: n² + O(n) flops.
  - Back substitution: n² + O(n) flops.
  - Total cost: n³/3 + O(n²) flops.


Sparsity of matrices

The sparsity of a matrix A is given by

Sparsity = ( non-zero elements of A / total number of elements in A ) %.
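In MATLAB this percentage can be computed with nnz and numel (a small sketch; the random sparse test matrix is only for illustration):

% Sparsity as defined above: percentage of non-zero entries
A = sprand(100, 100, 0.05);          % random sparse matrix, roughly 5% non-zeros
sparsity = nnz(A) / numel(A) * 100;  % non-zeros divided by total entries, as a percentage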


Example: (MATH2089, 2010 Q3h)

You are given that using a row-ordering of the variables c_{i,j}, the coefficient matrix A has the non-zero entries illustrated in a spy plot (not reproduced here; the plot reports nz = 154 non-zero entries for a 36 × 36 matrix).

Calculate the sparsity of A.


Recall that the sparsity of a matrix is given by

Sparsity = ( non-zero elements of A / total number of elements in A ) %.

The number of non-zero entries in a spy plot is given by the variable nz. The dimension of the matrix is given by the grid size, which in this case is 9 × 4 = 36. Hence the sparsity is

Sparsity = ( 154 / (36 × 36) ) % ≈ 11.9%.


Vector norm

Properties of vector norms
A vector norm ‖·‖ is an operation on a vector with the following properties:
  - ‖x‖ ≥ 0, with ‖x‖ = 0 only if x = 0.
  - Triangle inequality: ‖x + y‖ ≤ ‖x‖ + ‖y‖.
  - For any constant α, ‖αx‖ = |α| ‖x‖.

Vector p-norms
Vector p-norms are special types of norms on n × 1 vectors. By definition, for p ≥ 1, the p-norm of an n × 1 vector is given by

‖x‖_p = ( ∑_{i=1}^{n} |x_i|^p )^{1/p}.


Vector p-norms

Examples of p-norms
  - Vector 1-norm:  ‖x‖_1 = ∑_{i=1}^{n} |x_i|.
  - Vector 2-norm (Euclidean norm):  ‖x‖_2 = ( ∑_{i=1}^{n} x_i² )^{1/2} = √(xᵀx).
  - Vector ∞-norm (max norm):  ‖x‖_∞ = max_{1≤i≤n} |x_i|.
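These correspond directly to MATLAB's norm function for vectors (the vector below is made-up example data):

x = [3; -4; 12];
n1 = norm(x, 1);      % 1-norm: 3 + 4 + 12 = 19
n2 = norm(x, 2);      % 2-norm: sqrt(9 + 16 + 144) = 13
ninf = norm(x, inf);  % infinity-norm: largest absolute entry = 12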


Matrix norms

Properties of matrix norms
A matrix norm ‖·‖ is an operation on a matrix with the following properties:
  - ‖A‖ ≥ 0, with ‖A‖ = 0 only if A = 0.
  - Triangle inequality: ‖A + B‖ ≤ ‖A‖ + ‖B‖.
  - For any constant α, ‖αA‖ = |α| ‖A‖.

Consistent matrix norms
A matrix norm is said to be consistent if

‖AB‖ ≤ ‖A‖ ‖B‖.


Matrix p-norms

For p ≥ 1, the p-norm of an m × n matrix is given by

‖A‖_p = max_{x ≠ 0} ‖Ax‖_p / ‖x‖_p.


Examples of matrix p-norms

  - Matrix 1-norm (maximum column sum):

    ‖A‖_1 = max_{1≤j≤n} ∑_{i=1}^{m} |a_ij|.

  - Matrix ∞-norm (maximum row sum):

    ‖A‖_∞ = max_{1≤i≤m} ∑_{j=1}^{n} |a_ij|.

  - Matrix 2-norm (square root of the largest eigenvalue of AᵀA):

    ‖A‖_2 = √( max_{1≤j≤n} λ_j(AᵀA) ).
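MATLAB's norm computes all three (a quick sketch; the 3 × 3 matrix below is example data and reappears in a worked example later in this part):

A = [2 -1 2; -1 1 -1; 2 -1 3];
norm(A, 1)     % maximum absolute column sum = 6
norm(A, inf)   % maximum absolute row sum = 6
norm(A, 2)     % sqrt of the largest eigenvalue of A'*A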


Condition number

A square matrix A is non-singular (invertible) if and only if
  - det(A) ≠ 0,
  - all eigenvalues of A are non-zero.

Condition number
For a non-singular matrix A, the condition number is defined as

κ(A) = ‖A‖ ‖A⁻¹‖.


Condition number

Properties of condition numbers
  - κ(A) ≥ 1 for consistent matrix norms.
  - κ(αI) = 1 for all α ≠ 0.
  - For a real symmetric matrix, the 2-norm condition number is

    κ_2(A) = ‖A‖_2 ‖A⁻¹‖_2 = max_{1≤i≤n} |λ_i(A)| / min_{1≤i≤n} |λ_i(A)|.

  - A is said to be ill-conditioned if κ(A) > 1/ε ≈ 10¹⁶, where ε is the machine epsilon.


Example: (MATH2089, S1 2010, Q1c)
The coefficient matrix A and the right-hand-side vector b are known to 8 significant figures, and

‖A‖ = 1.9 × 10¹,    ‖A⁻¹‖ = 2.2 × 10³.

What is the condition number κ(A)?

By definition, for non-singular matrices,

κ(A) = ‖A‖ ‖A⁻¹‖.

Hence,

κ(A) = ‖A‖ ‖A⁻¹‖ = (1.9 × 10¹) × (2.2 × 10³) = 4.18 × 10⁴.


Example

Let A = [ 2 −1 2 ; −1 1 −1 ; 2 −1 3 ]  and  A⁻¹ = [ 2 1 −1 ; 1 2 0 ; −1 0 1 ]  (rows separated by semicolons).

Compute the condition numbers κ_∞(A) and κ_1(A).

The condition number κ_∞(A) is simply

κ_∞(A) = ‖A‖_∞ ‖A⁻¹‖_∞.

The absolute row sums of A are |2| + |−1| + |2| = 5, |−1| + |1| + |−1| = 3 and |2| + |−1| + |3| = 6, so ‖A‖_∞ = 6. Repeating the same process for A⁻¹ gives ‖A⁻¹‖_∞ = 4. So

κ_∞(A) = 6 × 4 = 24.

Repeat the process with column sums to get κ_1(A).
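You can check these values in MATLAB (a quick sketch using the matrix above; cond(A,p) multiplies norm(A,p) by norm(inv(A),p)):

A = [2 -1 2; -1 1 -1; 2 -1 3];
kinf = cond(A, inf);   % = 6 * 4 = 24
k1   = cond(A, 1);     % column sums give norm(A,1) = 6 and norm(inv(A),1) = 4, so also 24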


Sensitivity of a linear system

Let x̃ be an approximation to x. Then the absolute error of x̃ is ‖x − x̃‖ and the relative error is

ρ_x = ‖x − x̃‖ / ‖x‖.

Let Ã be an approximation to A. Then the absolute error is ‖A − Ã‖ and the relative error is

ρ_A = ‖A − Ã‖ / ‖A‖.

(Theorem) Sensitivity of a linear system
The sensitivity of a linear system Ax = b to errors in the input data A and b is given by

ρ_x ≈ κ(A) × (ρ_A + ρ_b).


Sensitivity of a linear system

Properties of the errors
  - If A or b is known exactly, then the corresponding error ρ_A or ρ_b is 0; that is, that input carries no error.
  - If a quantity x is known to k significant figures, then

    ρ_x ≤ 0.5 × 10⁻ᵏ.


Example: (MATH2089, 2009 Q1c)
The following MATLAB code generates the given output for a pre-defined real square array A.

chk1 = norm(A - A', 1)
chk1 = 1.4052e-015
ev = eig(A);
evlim = [min(ev) max(ev)]
evlim = 4.5107e-002  9.1213e+004

  - Is A symmetric?
  - Is A positive definite?
  - Calculate the 2-norm condition number κ_2(A) of A.
  - When solving the linear system Ax = b, the elements of A and b are known to 6 significant decimal digits. Estimate the relative error in the computed solution x.
  - Given the Cholesky factorisation A = RᵀR, explain how to solve the linear system Ax = b.


Example: (MATH2089, 2009 Q1c)
The following MATLAB code generates the given output for a pre-defined real square array A.

chk1 = norm(A - A', 1)
chk1 = 1.4052e-015

Is A symmetric?

The command chk1 = norm(A - A', 1) shows that ‖A − Aᵀ‖_1 ≈ 1.4 × 10⁻¹⁵ ≈ 7ε, where ε = 2.2 × 10⁻¹⁶ is the machine epsilon. The value is small enough that

‖A − Aᵀ‖_1 ≈ 0  =⇒  A = Aᵀ.

Thus A is symmetric up to rounding error.


Example: (MATH2089, 2009 Q1c)
The following MATLAB code generates the given output for a pre-defined real square array A.

ev = eig(A);
evlim = [min(ev) max(ev)]
evlim = 4.5107e-002  9.1213e+004

Is A positive definite?

Because A is symmetric, the following statements are equivalent:
  - A is positive definite.
  - All of the eigenvalues of A are positive.

From the MATLAB output, the minimum eigenvalue, given by min(ev), is 4.5 × 10⁻² > 0. Hence all of the eigenvalues are positive, and thus A is positive definite.


Example: (MATH2089, 2009 Q1c)
The following MATLAB code generates the given output for a pre-defined real square array A.

ev = eig(A);
evlim = [min(ev) max(ev)]
evlim = 4.5107e-002  9.1213e+004

Calculate the 2-norm condition number κ_2(A) of A.

For a real symmetric matrix, the 2-norm condition number is

κ_2(A) = ‖A‖_2 ‖A⁻¹‖_2 = |λ_max(A)| / |λ_min(A)| = (9.12 × 10⁴) / (4.51 × 10⁻²) ≈ 2.02 × 10⁶.


Example: (MATH2089, 2009 Q1c)

When solving the linear system Ax = b, the elements of A and b are known to 6 significant decimal digits. Estimate the relative error in the computed solution x.

Since A and b are known to 6 significant decimal digits, we have

ρ_A ≤ 0.5 × 10⁻⁶,    ρ_b ≤ 0.5 × 10⁻⁶.

Then

ρ_x ≈ κ_2(A) (ρ_A + ρ_b)
    ≤ (2 × 10⁶) × (0.5 × 10⁻⁶ + 0.5 × 10⁻⁶)
    = 2.

Hence the relative error of x is approximately 2, so the computed solution may have no correct significant figures.


Example: (MATH2089, 2009 Q1c)

Given the Cholesky factorisation A = RᵀR, explain how to solve the linear system Ax = b.

Apply forward substitution and back substitution. With A = RᵀR,

Ax = b  =⇒  RᵀRx = b  =⇒  Rᵀy = b,  where Rx = y.

Solve Rᵀy = b by forward substitution to get y, then solve Rx = y by back substitution to get x.


Part II: Least Squares & Polynomial Interpolation


Least squares

Given a set of data points, determine the line or curve of best fit.

Methods for finding least squares solutions
For a given m × n matrix A with m > n, we can apply two methods to find least squares solutions:
1 Normal equations: AᵀAu = Aᵀy. Cost: O(mn²) flops.
2 QR factorisation and back substitution. Cost: O(mn²) flops.


Method 1: Normal equations

Assumptions: A is an m × n matrix (with m > n) and A has full rank.

Au = y  =⇒  (AᵀA)u = Aᵀy.

Define a new matrix B = AᵀA. B is symmetric and positive definite. Solve Bu = Aᵀy by applying either Cholesky or LU factorisation with forward and backward substitutions.

Cost of method and issue
  - Dominated by computing B: O(mn²) flops.
  - Issue: the condition number is squared!

    κ_2(B) = κ_2(AᵀA) = [κ_2(A)]².
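As a small MATLAB sketch, fitting a straight line y ≈ u_1 + u_2 t by the normal equations (the data below are made-up example values):

% Least squares fit of a line via the normal equations (A'A)u = A'y
t = [0; 1; 2; 3];
y = [12.6; 6.7; 4.3; 2.7];
A = [ones(size(t)) t];     % m-by-2 design matrix, m > n
u = (A'*A) \ (A'*y);       % solve the normal equations
% Note: MATLAB's backslash, u = A\y, solves the same least squares problem
% without forming A'*A, so the condition number is not squared.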


Method 2: QR factorisation

We can write A as a product of two matrices

A = QR,

where Q is an orthogonal matrix and R is an upper triangular matrix.

Q = [Y Z], where Y is an m × n matrix and Z is an m × (m − n) matrix.

Cost of method: O(mn²) flops.
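A minimal MATLAB sketch of the QR approach (same made-up data as before; qr(A,0) returns the economy-size factorisation):

% Least squares via QR: A = Q*R, then back substitution on R*u = Q'*y
t = [0; 1; 2; 3];
y = [12.6; 6.7; 4.3; 2.7];
A = [ones(size(t)) t];
[Q, R] = qr(A, 0);    % economy-size: Q is m-by-n, R is n-by-n upper triangular
u = R \ (Q'*y);       % back substitution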


Polynomial interpolation

Idea: construct a polynomial (the interpolant) that passes through the data points.

Given data points (x_0, y_0) and (x_1, y_1), we solve the simultaneous equations

p(x) = a_0 + a_1 x  =⇒  y_0 = a_0 + a_1 x_0,    y_1 = a_0 + a_1 x_1.

Given n data points, an interpolating polynomial will have degree (n − 1).


Interpolating polynomials in Lagrange form

Given (n + 1) data points (x_j, y_j), construct Lagrange polynomials of degree n of the form

ℓ_i(x) = ∏_{j=0, j≠i}^{n} (x − x_j) / (x_i − x_j),    for i = 0, …, n.

Note that ℓ_i(x_j) = 1 for i = j and ℓ_i(x_j) = 0 if i ≠ j.

The interpolating polynomial is then

p(x) = ∑_{i=0}^{n} y_i ℓ_i(x).


For a function f, the following data are known:

f(0) = 12.6,    f(1) = 6.7,    f(2) = 4.3,    f(3) = 2.7.

  - What is the degree of the interpolating polynomial P for these data?
  - Assume that we want to find P in the form P(x) = a_0 + a_1 x + ⋯. Write down the system of linear equations you need to solve to obtain a_0, a_1, …. Use MATLAB to set up and solve this linear system.
  - Write down the Lagrange polynomials ℓ_j(x) for j = 0, 1, 2, 3.
  - Write down the interpolating polynomial P using the Lagrange polynomials.


For a function f, the following data are known:

f(0) = 12.6,    f(1) = 6.7,    f(2) = 4.3,    f(3) = 2.7.

What is the degree of the interpolating polynomial P for these data?

There are 4 data values, and a polynomial of degree n has n + 1 coefficients, so the degree of the interpolating polynomial is n = 4 − 1 = 3.


Assume that we want to find P in the form

P(x) = a_0 + a_1 x + ⋯.

Write down the system of linear equations you need to solve to obtain a_0, a_1, ….
Use MATLAB to set up and solve this linear system.

As the interpolating polynomial is of degree 3,

P(x) = a_0 + a_1 x + a_2 x² + a_3 x³.

We obtain the following system of linear equations:

f(0) = 12.6  =⇒  P(0) = a_0 = 12.6
f(1) = 6.7   =⇒  P(1) = a_0 + a_1 + a_2 + a_3 = 6.7
f(2) = 4.3   =⇒  P(2) = a_0 + 2a_1 + 4a_2 + 8a_3 = 4.3
f(3) = 2.7   =⇒  P(3) = a_0 + 3a_1 + 9a_2 + 27a_3 = 2.7


Assume that we want to find P in the form

P(x) = a_0 + a_1 x + ⋯.

Write down the system of linear equations you need to solve to obtain a_0, a_1, ….
Use MATLAB to set up and solve this linear system.

Use the backslash command to solve for the coefficients, which gives

P(x) = 12.6 − 8.55x + 3.1x² − 0.45x³.
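A minimal MATLAB sketch of the set-up and solve (the variable names are illustrative):

% Set up the Vandermonde-type system V*a = f and solve it with backslash
x = [0; 1; 2; 3];
f = [12.6; 6.7; 4.3; 2.7];
V = [ones(size(x)) x x.^2 x.^3];   % rows are [1, x, x^2, x^3]
a = V \ f;                         % a = [12.6; -8.55; 3.1; -0.45]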


Write down the Lagrange polynomials ℓ_j(x) for j = 0, 1, 2, 3.

Recall that the Lagrange polynomials are the degree-n polynomials

ℓ_j(x) = ∏_{k=0, k≠j}^{n} (x − x_k) / (x_j − x_k).

For the given data x_0 = 0, x_1 = 1, x_2 = 2, x_3 = 3, this gives

ℓ_0(x) = (x − 1)(x − 2)(x − 3) / ((0 − 1)(0 − 2)(0 − 3)) = −(1/6)(x − 1)(x − 2)(x − 3)
ℓ_1(x) = (x − 0)(x − 2)(x − 3) / ((1 − 0)(1 − 2)(1 − 3)) = (1/2) x(x − 2)(x − 3)
ℓ_2(x) = (x − 0)(x − 1)(x − 3) / ((2 − 0)(2 − 1)(2 − 3)) = −(1/2) x(x − 1)(x − 3)
ℓ_3(x) = (x − 0)(x − 1)(x − 2) / ((3 − 0)(3 − 1)(3 − 2)) = (1/6) x(x − 1)(x − 2)


For a function f, the following data are known:

f(0) = 12.6,    f(1) = 6.7,    f(2) = 4.3,    f(3) = 2.7.

Write down the interpolating polynomial P using the Lagrange polynomials.

The interpolating polynomial is simply

P(x) = ∑_{j=0}^{3} f_j ℓ_j(x)
     = 12.6 ℓ_0(x) + 6.7 ℓ_1(x) + 4.3 ℓ_2(x) + 2.7 ℓ_3(x)
     = −(12.6/6)(x − 1)(x − 2)(x − 3) + (6.7/2) x(x − 2)(x − 3)
       − (4.3/2) x(x − 1)(x − 3) + (2.7/6) x(x − 1)(x − 2).


(Theorem) Interpolating polynomial error
If f is (n + 1) times continuously differentiable on the interval [a, b], then the error in approximating f(x) by p(x) is

f(x) − p(x) = ( f^(n+1)(ξ) / (n + 1)! ) ∏_{j=0}^{n} (x − x_j)

for some unknown ξ ∈ [a, b] depending on x.


Chebyshev points

Choose the x_j to minimise

max_{x ∈ [−1,1]} | ∏_{j=0}^{n} (x − x_j) |.

  - On [−1, 1], set t_j = cos( (2n + 1 − 2j)π / (2n + 2) ) for j = 0, …, n.
  - On [a, b], set x_j = (a + b)/2 + ( (b − a)/2 ) t_j for j = 0, …, n.
  - Chebyshev nodes are the zeros of the Chebyshev polynomial T_{n+1}(x).
  - The interpolation error is minimised by choosing Chebyshev nodes!
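A short MATLAB sketch of these formulas (the values of n, a and b are illustrative):

% Chebyshev nodes on [-1,1], then mapped to [a,b]
n = 5;  a = 0;  b = 2;
j = 0:n;
t = cos((2*n + 1 - 2*j)*pi / (2*n + 2));   % zeros of the Chebyshev polynomial T_{n+1}
x = (a + b)/2 + (b - a)/2 * t;             % mapped interpolation nodes on [a,b]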


Part III: Nonlinear Equations


Nonlinear equation in standard form

f(x) = 0,    x ∈ ℝ.

We aim to solve for x (i.e. find the zeros of f). If necessary, rearrange the equation into standard form.


Notation

Continuity and differentiability
If f ∈ Cⁿ([a, b]), then f is continuous on [a, b] and n times differentiable on the interval (a, b).

Results of continuity
  - (Intermediate Value Theorem) If f ∈ C([a, b]) and f(a)f(b) < 0, then there exists at least one zero of f in the interval (a, b).
  - (Strictly monotone) If f′(x) > 0 OR f′(x) < 0 for all x ∈ (a, b), then f is strictly monotone on the interval [a, b].
  - (Uniqueness of root) If f is continuous AND strictly monotone on the interval [a, b] and f(a)f(b) < 0, then f has a unique root on (a, b).


Iterative methods for solving equations

Iterations
  - Have an initial guess or starting point x_1.
  - Generate a sequence of iterates x_k for k = 2, 3, … based on approximations of the problem.
  - Determine whether the sequence generated converges to the true solution x*.

Order of convergence
The order of convergence is the largest ν such that

lim_{k→∞} e_{k+1} / e_k^ν = β,

where β is the asymptotic constant and e_k = |x_k − x*|.


Iterative method: Bisection

Suppose that f(a)f(b) < 0 and f ∈ C([a, b]).
  - Take the midpoint x_mid = (a + b)/2 as x_1.
  - Choose a new interval depending on the sign of f(x_mid):
    if f(a)f(x_mid) < 0, choose the new interval [a, x_mid];
    if f(x_mid)f(b) < 0, choose [x_mid, b].
  - Iterate using the same process.

Order of convergence
Linear convergence with asymptotic constant β = 1/2.
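A minimal MATLAB sketch of bisection (the function f, interval [a, b] and tolerance are made up for illustration):

% Bisection for f(x) = 0 on [a,b] with f(a)*f(b) < 0
f = @(x) x.^3 - 2;           % example function with a root at 2^(1/3)
a = 1;  b = 2;  tol = 1e-8;
while (b - a)/2 > tol
    xmid = (a + b)/2;
    if f(a)*f(xmid) < 0
        b = xmid;            % root lies in [a, xmid]
    else
        a = xmid;            % root lies in [xmid, b]
    end
end
root = (a + b)/2;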


Iterative method: Fixed point iteration

Given a starting point x_1, compute

x_{k+1} = g(x_k),    for k = 1, 2, …

Order of convergence
If g ∈ C¹([a, b]) and there exists a K ∈ (0, 1) such that |g′(x)| ≤ K for all x ∈ (a, b), then the fixed point iteration converges linearly with asymptotic constant β ≤ K, for any x_1 ∈ [a, b].
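For example, a few iterations in MATLAB (the choice of g and x_1 is illustrative):

% Fixed point iteration x_{k+1} = g(x_k)
g = @(x) cos(x);     % example g with |g'(x)| < 1 near its fixed point
x = 1;               % starting point x_1
for k = 1:50
    x = g(x);        % converges linearly to the fixed point x* = g(x*) ≈ 0.7391
end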


Iterative method: Newton's method

Newton's approximation
Approximate f(x) by its tangent at the point (x_k, f(x_k)) to form

f(x) ≈ f(x_k) + (x − x_k) f′(x_k).

Choose x = x_{k+1} and set f(x) = 0 to form

x_{k+1} = x_k − f(x_k) / f′(x_k),    assuming f′(x_k) ≠ 0.

Order of convergence
If f ∈ C²([a, b]) and x_1 is sufficiently close to a simple root x* ∈ (a, b), then Newton's method converges quadratically to x*.


Example: (MATH2089, S1 2009)
To find the root of a real number, computers typically implement Newton's method. Let a > 1 and consider finding the cube root of a, that is, a^(1/3). Show that Newton's method can be written as

x_{k+1} = (1/3) ( 2x_k + a / x_k² ).


We want x = a^(1/3), i.e. x³ − a = 0, so set f(x) = x³ − a. We then have

f(x_k) = x_k³ − a,    f′(x_k) = 3x_k².

By Newton's method, we obtain the result

x_{k+1} = x_k − f(x_k) / f′(x_k)
        = x_k − (x_k³ − a) / (3x_k²)
        = x_k − x_k/3 + a / (3x_k²)
        = (1/3) [ (3x_k − x_k) + a / x_k² ]
        = (1/3) ( 2x_k + a / x_k² ).
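A minimal MATLAB sketch of this iteration (the value of a and the starting guess are illustrative):

% Newton iteration for the cube root of a: x_{k+1} = (2*x + a/x^2)/3
a = 10;  x = a;              % starting guess x_1 = a
for k = 1:20
    x = (2*x + a/x^2)/3;     % converges quadratically to a^(1/3)
end
% compare with nthroot(a, 3)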


Example: (MATH2089, S1 2013, Q1d)
Consider the function f(x) = eˣ sin(x) − 100.

You are given that f(x) has a simple zero at x* ≈ 6.443. If you use a starting value x_1 near x*, what is the expected order of convergence for Newton's method?

The function is well behaved: eˣ and sin(x) are smooth, so f ∈ C², and x* is a simple zero. The hypotheses of the theorem are satisfied, so the expected order of convergence is 2 (quadratic).


Iterative method: Secant method

Approximate f(x) by the line through the points (x_k, f(x_k)) and (x_{k−1}, f(x_{k−1})) to form

f(x) ≈ f(x_k) + (x − x_k) ( f(x_k) − f(x_{k−1}) ) / ( x_k − x_{k−1} ).

Choose x = x_{k+1} and set f(x) = 0 to form

x_{k+1} = x_k − f(x_k) (x_k − x_{k−1}) / ( f(x_k) − f(x_{k−1}) ).

Order of convergence
If f ∈ C²([a, b]) and x_1, x_2 are sufficiently close to x*, then the secant method converges superlinearly with order ν = (1 + √5)/2.


Part IV: Numerical Differentiation and Integration


Taylor series

Taylor series
A Taylor series is an infinite polynomial series that approximates non-polynomial functions by taking higher order derivatives centred around a point x_0.

Examples of well-known Taylor series

eˣ = ∑_{k=0}^{∞} xᵏ / k! = 1 + x + x²/2 + x³/6 + …

ln(1 + x) = ∑_{k=1}^{∞} (−1)^{k+1} xᵏ / k = x − x²/2 + x³/3 − …    for |x| < 1.


(Theorem)

Let f ∈ C^{n+1}([a, b]). In other words, let f be continuous on [a, b] and n + 1 times differentiable on (a, b). Then

f(x + h) = ∑_{k=0}^{n} ( f^(k)(x) / k! ) hᵏ + ( f^(n+1)(ξ) / (n + 1)! ) h^{n+1}
         = f(x) + f′(x) h + ( f″(x) / 2! ) h² + ⋯ + ( f^(n)(x) / n! ) hⁿ + O(h^{n+1})

for some unknown ξ ∈ (a, b).


Finite difference methods

Forward difference approximation
Let f ∈ C²([a, b]); that is, let f be twice differentiable on the interval [a, b]. Then

f′(x) = ( f(x + h) − f(x) ) / h + O(h).

The roundoff error is

O(ε/h) = ε | ( f(x + h) − f(x) ) / h |.

The truncation error is O(h) and the total error is O(ε/h) + O(h).


Finite difference methods

Central difference approximation
Let f ∈ C⁴([a, b]); that is, let f be four times differentiable on the interval [a, b]. Then

f″(x) = ( f(x + h) − 2f(x) + f(x − h) ) / h² + O(h²).

The roundoff error is O(ε/h²) and the truncation error is O(h²). The total error is O(ε/h²) + O(h²).
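A quick MATLAB illustration of both formulas (the function, point and step size are illustrative):

% Forward difference for f'(x) and central difference for f''(x)
f = @(x) exp(x);
x = 1;  h = 1e-5;
d1 = (f(x + h) - f(x)) / h;                % ≈ f'(1) = e, truncation error O(h)
d2 = (f(x + h) - 2*f(x) + f(x - h)) / h^2; % ≈ f''(1) = e, truncation error O(h^2)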


Quadrature rules

We are approximating integrals using weighted sums of function values. That is,

Quadrature rule:    Q_N(f) = ∑_{j=0 or 1}^{N} w_j f(x_j),    where ∑_j w_j = b − a.

Quadrature error:

E_N = I(f) − Q_N(f) = ∫_a^b f(x) dx − Q_N(f).

We want E_N → 0 as N → ∞ for convergence.


Quadrature rules

We look at three quadrature rules:
  - Trapezoidal rule
  - Simpson's rule
  - Gauss-Legendre rule


Quadrature rule − Trapezoidal rule

Q_N(f) = h ( f_0/2 + f_1 + f_2 + ⋯ + f_{N−1} + f_N/2 ).

  - Approximate the integral by trapeziums and sum up the area under f(x) using the area of each trapezium.
  - The height h is fixed: h = (b − a)/N.
  - Function values are f_j = f(x_j) for all j = 0, …, N.
  - Weights are w_0 = w_N = h/2 and w_j = h for all j = 1, …, N − 1.
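A minimal MATLAB sketch of the composite trapezoidal rule (the integrand, interval and N are illustrative; MATLAB's trapz performs the same weighted sum):

% Composite trapezoidal rule on [a,b] with N subintervals
f = @(x) exp(-x.^2);
a = 0;  b = 1;  N = 100;
h = (b - a)/N;
x = a:h:b;                                        % N+1 equally spaced nodes
Q = h * (f(a)/2 + sum(f(x(2:end-1))) + f(b)/2);   % trapezoidal weighted sum
% equivalently: Q = trapz(x, f(x));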


Quadrature rule − Trapezoidal rule

(Theorem) Error of trapezoidal rule
Let f ∈ C²([a, b]). Then

E_N(f) = − ( (b − a)/12 ) h² f″(ξ),

for some unknown ξ ∈ [a, b].

Rate of convergence: E_N(f) = O(h²) or E_N(f) = O(N⁻²).


Quadrature rule − Simpson's rule

Q_N(f) = (h/3) ( f_0 + 4f_1 + 2f_2 + 4f_3 + 2f_4 + ⋯ + 2f_{N−2} + 4f_{N−1} + f_N ).

  - Approximate the integral using parabolas and sum up the area under f(x) using the area of each parabola, obtained through integration.
  - The height is fixed: h = (b − a)/N, with N being even.
  - Function values are f_j = f(x_j) for all j = 0, …, N.
  - Weights are w_0 = w_N = h/3, and w_j = 4h/3 for odd j, 2h/3 for even interior j.
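A matching MATLAB sketch of the composite Simpson's rule (same illustrative integrand; N must be even):

% Composite Simpson's rule on [a,b] with an even number N of subintervals
f = @(x) exp(-x.^2);
a = 0;  b = 1;  N = 100;
h = (b - a)/N;
x = a:h:b;
w = h/3 * [1, repmat([4 2], 1, N/2 - 1), 4, 1];   % weights 1,4,2,4,...,2,4,1
Q = w * f(x).';                                   % weighted sum of function values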


Quadrature rule − Simpson's rule

(Theorem) Error of Simpson's rule
Let f ∈ C⁴([a, b]). Then

E_N(f) = − ( (b − a)/180 ) h⁴ f⁽⁴⁾(ξ),

for some unknown ξ ∈ [a, b].

Rate of convergence: E_N(f) = O(h⁴) or E_N(f) = O(N⁻⁴).


Quadrature rule − Gauss-Legendre rule

∫_{−1}^{1} f(x) dx ≈ Q_N(f) = ∑_{j=1}^{N} w_j f(x_j).

  - The nodes x_j are the zeros of the Legendre polynomial of degree N on [−1, 1].
  - The weights w_j are given in terms of the Legendre polynomials.


Quadrature rule − Gauss-Legendre rule

(Theorem) Error of Gauss-Legendre rule
Let f ∈ C^{2N}([−1, 1]). Then

E_N(f) = − ( e_N / (2N)! ) f^(2N)(ξ),

where ξ ∈ [−1, 1] and e_N is some number that depends on N.


Quadrature properties

Quadrature rules assume the integrand f is sufficiently smooth on [a, b]:
  - f ∈ C²([a, b]) for the trapezoidal rule,
  - f ∈ C⁴([a, b]) for Simpson's rule,
  - f ∈ C^{2N}([a, b]) for the Gauss-Legendre rule.


Change of variables

Transform the integral

\int_a^b f(x)\, dx = \frac{b - a}{2} \int_{-1}^{1} f\!\left(\frac{a + b}{2} + \frac{b - a}{2}\, y\right) dy

by substituting

x = \frac{a + b}{2} + \frac{b - a}{2}\, y.
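Combining this substitution with the Gauss-Legendre rule gives a quadrature on a general interval [a, b]; a minimal sketch (the helper name `gauss_on_ab` is ours):

```python
import numpy as np

def gauss_on_ab(f, a, b, N):
    """N-point Gauss-Legendre rule on [a, b] via the change of variables
    x = (a + b)/2 + (b - a)/2 * y."""
    y, w = np.polynomial.legendre.leggauss(N)
    x = (a + b) / 2 + (b - a) / 2 * y
    return (b - a) / 2 * np.sum(w * f(x))

# Example: integrate 1/x on [1, 2]; exact value is log(2).
print(gauss_on_ab(lambda x: 1.0 / x, 1.0, 2.0, 5))
```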


Tips for estimating difficult integrals

Unbounded derivatives: apply a change of variables.

Discontinuous derivative: split the integral at the point where the derivative is discontinuous, so that each piece has a smooth integrand.

Highly oscillatory: requires a special analytic method.

Narrow spike: standard rules will either underestimate or overestimate the spike, depending on where the nodes fall.


Part V: Ordinary Differential Equations


First order Initial Value Problems (IVP)

First order ODE
Ordinary differential equations (ODEs) are equations that involve the derivatives of an unknown function. A first order ODE is one in which the highest order of derivative present is 1. A first order initial value problem (IVP) is simply a first order ODE together with an initial condition.

First order ODE: \frac{dy}{dx} = y.

First order IVP: \frac{dy}{dx} = y with y(0) = 1.


Existence and uniqueness of solutions

(Theorem)

If f(t, y) and \frac{\partial f(t, y)}{\partial y} are continuous and bounded for all t ∈ [t_0, t_{\max}] and y ∈ R, then the IVP has a unique solution on the time interval [t_0, t_{\max}].


Euler’s method

Solve a first order IVP y' = f(t, y), t ∈ [t_0, t_{\max}], y(t_0) = y_0, by

y_{n+1} = y_n + h\, f(t_n, y_n), \quad n = 0, 1, \dots, N − 1,

where h = (t_{\max} − t_0)/N is the step size.
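A minimal sketch of Euler's method in Python; the function name `euler` and the demo IVP are our own choices:

```python
import numpy as np

def euler(f, t0, tmax, y0, N):
    """Euler's method: y_{n+1} = y_n + h*f(t_n, y_n) with h = (tmax - t0)/N."""
    h = (tmax - t0) / N
    t = np.linspace(t0, tmax, N + 1)
    y = np.empty(N + 1)
    y[0] = y0
    for n in range(N):
        y[n + 1] = y[n] + h * f(t[n], y[n])
    return t, y

# Demo: y' = y, y(0) = 1 on [0, 1]; exact solution is e^t.
t, y = euler(lambda t, y: y, 0.0, 1.0, 1.0, 100)
print(y[-1], np.exp(1.0))
```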


System of first order ODEs

Oftentimes, we may have a system of several equations involving derivatives.

We can write it in the form

\frac{d\mathbf{x}}{dt} = \mathbf{f}(t, \mathbf{x}).


Example: (MATH2089, 2010 Q2b)
Consider the initial value problem (IVP)

y''' + 2y' − (π^2 + 1)y = π(π^2 + 1)e^{−t} \sin(πt),
y(0) = 1, \quad y'(0) = −1, \quad y''(0) = 1 − π^2.

Reformulate the IVP as a system of first-order differential equations

x′(t) = f(t, x)

with the appropriate initial condition.


Begin by observing that the highest derivative present is of order 3. Hence, we use

\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} y \\ y' \\ y'' \end{pmatrix}.

Then we see that

\mathbf{x}' = \begin{pmatrix} x_1' \\ x_2' \\ x_3' \end{pmatrix} = \begin{pmatrix} y' \\ y'' \\ y''' \end{pmatrix} = \begin{pmatrix} x_2 \\ x_3 \\ y''' \end{pmatrix},

where y''' is just

y''' = π(π^2 + 1)e^{−t} \sin(πt) − 2x_2 + (π^2 + 1)x_1.


So we obtain

\mathbf{x}' = \mathbf{f}(t, \mathbf{x}) = \begin{pmatrix} x_2 \\ x_3 \\ π(π^2 + 1)e^{−t} \sin(πt) − 2x_2 + (π^2 + 1)x_1 \end{pmatrix}.

To find the appropriate initial condition, take t = 0 to get

\mathbf{x}(0) = \begin{pmatrix} y(0) \\ y'(0) \\ y''(0) \end{pmatrix} = \begin{pmatrix} 1 \\ −1 \\ 1 − π^2 \end{pmatrix}.
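For completeness, a sketch of how this right-hand side and initial condition might be coded; the names `f` and `x0` are ours, and only the formulas come from the working above.

```python
import numpy as np

def f(t, x):
    """Right-hand side of x' = f(t, x) for the reformulated third-order IVP."""
    x1, x2, x3 = x
    y3 = (np.pi * (np.pi**2 + 1) * np.exp(-t) * np.sin(np.pi * t)
          - 2 * x2 + (np.pi**2 + 1) * x1)
    return np.array([x2, x3, y3])

# Initial condition x(0) = (y(0), y'(0), y''(0))
x0 = np.array([1.0, -1.0, 1.0 - np.pi**2])
```

This right-hand side can then be passed to any of the time-stepping methods that follow.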


2-stage Runge-Kutta method

k_1 = f(t_n, y_n), \quad k_2 = f\!\left(t_n + \tfrac{2}{3}h,\; y_n + \tfrac{2}{3}h k_1\right),
y_{n+1} = y_n + \frac{h}{4}\left[k_1 + 3k_2\right].

4-stage Runge-Kutta method

k_1 = f(t_n, y_n), \quad k_2 = f\!\left(t_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}h k_1\right),
k_3 = f\!\left(t_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}h k_2\right), \quad k_4 = f(t_n + h,\; y_n + h k_3),
y_{n+1} = y_n + \frac{h}{6}\left[k_1 + 2k_2 + 2k_3 + k_4\right].
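A sketch of a single classical 4-stage Runge-Kutta step, written so it works for scalar IVPs as well as systems; the function name `rk4_step` is ours:

```python
def rk4_step(f, t, y, h):
    """One step of the classical 4-stage Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Because the arithmetic is elementwise, the same step function works when y is a NumPy array, e.g. for the system reformulated on the previous slides.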


Example
Find the solution of the initial value problem

y' = 3y + 3t, \quad y(0) = 1, \quad \text{at } t = 0.2,

using Euler's method with h = 0.2;

using the fourth-order Runge-Kutta method with h = 0.2.


Using Euler's method: We observe that here f(t_n, y_n) = 3y_n + 3t_n. From the initial value y(t_0) = y_0, i.e. y(0) = 1, we obtain t_0 = 0 and y_0 = 1. We now want to approximate y(0.2), which takes a single step of size h = 0.2:

y(0.2) ≈ y_1 = y_0 + h\, f(t_0, y_0) = 1 + 0.2\,[3y_0 + 3t_0] = 1 + 0.2\,[3 + 0] = 1.6.


Using the fourth-order Runge-Kutta method: Observe that

y(0.2) ≈ y_1 = y_0 + \frac{h}{6}\left[k_1 + 2k_2 + 2k_3 + k_4\right].

Calculate the values of k_1, k_2, k_3 and k_4 respectively:

k_1 = f(t_0, y_0) = 3,
k_2 = f\!\left(t_0 + \tfrac{1}{2}h,\; y_0 + \tfrac{1}{2}h k_1\right) = 4.2,
k_3 = f\!\left(t_0 + \tfrac{1}{2}h,\; y_0 + \tfrac{1}{2}h k_2\right) = 4.56,
k_4 = f(t_0 + h,\; y_0 + h k_3) = 6.336.

Then,

y_1 = 1 + \frac{0.2}{6}\left[3 + 2 \times 4.2 + 2 \times 4.56 + 6.336\right] = 1.8952.
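The hand computation can be reproduced directly in Python; this throwaway check only restates the numbers from the example above.

```python
f = lambda t, y: 3 * y + 3 * t
t0, y0, h = 0.0, 1.0, 0.2

# Euler: one step gives y_1 = 1.6
print(y0 + h * f(t0, y0))

# Fourth-order Runge-Kutta stages
k1 = f(t0, y0)
k2 = f(t0 + h / 2, y0 + h / 2 * k1)
k3 = f(t0 + h / 2, y0 + h / 2 * k2)
k4 = f(t0 + h, y0 + h * k3)
print(k1, k2, k3, k4)                          # approximately 3, 4.2, 4.56, 6.336
y1 = y0 + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(y1)                                      # approximately 1.8952
```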


Other useful methods for solving IVPs

Taylor method of order 2

y_{n+1} = y_n + h\, f(t_n, y_n) + \frac{h^2}{2}\left.\frac{\partial f(t, y)}{\partial t}\right|_{t = t_n,\ y = y_n}

Implicit Euler's method

y_{n+1} = y_n + h\, f(t_{n+1}, y_{n+1})

Trapezoidal method

y_{n+1} = y_n + \frac{h}{2}\left[f(t_n, y_n) + f(t_{n+1}, y_{n+1})\right]

Heun's method (a code sketch follows below)

y_{n+1} = y_n + \frac{h}{2}\left[f(t_n, y_n) + f(t_{n+1}, y_n + h\, f(t_n, y_n))\right]
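As referenced above, a sketch of a single Heun step (an Euler predictor followed by a trapezoidal corrector); the function name `heun_step` is ours:

```python
def heun_step(f, t, y, h):
    """One step of Heun's method: Euler predictor, trapezoidal corrector."""
    predictor = y + h * f(t, y)                        # Euler predictor
    return y + h / 2 * (f(t, y) + f(t + h, predictor))
```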


Part VI: Partial Differential Equations


Partial Differential Equations

Partial differential equations (PDEs) are equations involving the partial derivatives of an unknown function of more than one variable.

Order of the PDE is the order of the highest derivative present!


Finite difference methods

Treat them no differently to functions of one variable.

The only difference is changing the variable in the derivative!

\frac{\partial u(x, t)}{\partial x} = \frac{u(x + h, t) − u(x − h, t)}{2h} + O(h^2).


Types of PDEs
We will restrict ourselves to PDEs involving only two variables. A second order quasi-linear PDE is of the form

A\frac{\partial^2 u}{\partial x^2} + B\frac{\partial^2 u}{\partial x \partial y} + C\frac{\partial^2 u}{\partial y^2} = F\!\left(x, y, u, \frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}\right).

Elliptic if B^2 − 4AC < 0.

Parabolic if B^2 − 4AC = 0.

Hyperbolic if B^2 − 4AC > 0.
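A tiny sketch that classifies a second order quasi-linear PDE from its coefficients A, B, C via the discriminant; the function name `classify_pde` is ours:

```python
def classify_pde(A, B, C):
    """Classify A*u_xx + B*u_xy + C*u_yy = F(...) via the discriminant."""
    disc = B**2 - 4 * A * C
    if disc < 0:
        return "elliptic"
    elif disc == 0:
        return "parabolic"
    return "hyperbolic"

# Examples: Laplace (elliptic), heat (parabolic), wave (hyperbolic)
print(classify_pde(1, 0, 1))    # u_xx + u_yy = 0            -> elliptic
print(classify_pde(1, 0, 0))    # u_xx = u_t (y plays t)     -> parabolic
print(classify_pde(1, 0, -1))   # u_xx - u_tt = 0            -> hyperbolic
```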


Elliptic PDEs

Divide the x interval [0, L_x] into m + 1 equal-length subintervals such that

h_x = \frac{L_x}{m + 1}.

Divide the y interval [0, L_y] into n + 1 equal-length subintervals such that

h_y = \frac{L_y}{n + 1}.

Central difference approximations of O(h^2) at the grid points (x_i, y_j):

\frac{\partial^2 u}{\partial x^2}(x_i, y_j) \approx \frac{u_{i-1,j} − 2u_{i,j} + u_{i+1,j}}{h_x^2},
\qquad
\frac{\partial^2 u}{\partial y^2}(x_i, y_j) \approx \frac{u_{i,j-1} − 2u_{i,j} + u_{i,j+1}}{h_y^2}.
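A sketch of how the resulting five-point discretisation can be assembled as a sparse matrix, assuming homogeneous Dirichlet boundary data that has been moved to the right-hand side; the function name `laplacian_2d` and the use of Kronecker products are our own choices:

```python
import scipy.sparse as sp

def laplacian_2d(m, n, Lx, Ly):
    """Five-point finite difference approximation of u_xx + u_yy on an
    m-by-n grid of interior points (Dirichlet boundary conditions)."""
    hx = Lx / (m + 1)
    hy = Ly / (n + 1)
    # 1D second-difference matrices in x and y
    Tx = sp.diags([1, -2, 1], [-1, 0, 1], shape=(m, m)) / hx**2
    Ty = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / hy**2
    # 2D operator via Kronecker products (x index varies fastest)
    return sp.kron(sp.identity(n), Tx) + sp.kron(Ty, sp.identity(m))

A = laplacian_2d(4, 3, 1.0, 1.0)
print(A.shape)   # (12, 12)
```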


Parabolic PDEs

Method 1 (Explicit method)

Forward difference approximation to the time derivative:

\frac{\partial u}{\partial t}(x_i, t_ℓ) \approx \frac{u_i^{ℓ+1} − u_i^ℓ}{h_t}.

Central difference approximation to the space derivative:

\frac{\partial^2 u}{\partial x^2}(x_i, t_ℓ) \approx \frac{u_{i-1}^ℓ − 2u_i^ℓ + u_{i+1}^ℓ}{h_x^2}.

Substitute into the PDE and multiply by h_t to obtain

u_i^{ℓ+1} = s\, u_{i-1}^ℓ + (1 − 2s)\, u_i^ℓ + s\, u_{i+1}^ℓ, \quad s = \frac{D h_t}{h_x^2}.
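A minimal sketch of one explicit (FTCS) step for the diffusion equation u_t = D u_{xx} with fixed boundary values; the function name `ftcs_step` is ours:

```python
import numpy as np

def ftcs_step(u, s):
    """One explicit step: u_i^{l+1} = s*u_{i-1} + (1 - 2s)*u_i + s*u_{i+1},
    with s = D*ht/hx^2. Boundary values u[0], u[-1] are kept fixed."""
    u_new = u.copy()
    u_new[1:-1] = s * u[:-2] + (1 - 2 * s) * u[1:-1] + s * u[2:]
    return u_new

# Demo: 11 grid points, spike in the middle, s = 0.4 (within the stability limit)
u = np.zeros(11)
u[5] = 1.0
for _ in range(10):
    u = ftcs_step(u, 0.4)
print(u.round(3))
```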


Parabolic PDEs

Method 2 (Implicit method)

Backward difference approximation to the time derivative:

\frac{\partial u}{\partial t}(x_i, t_{ℓ+1}) \approx \frac{u_i^{ℓ+1} − u_i^ℓ}{h_t}.

Central difference approximation to the space derivative:

\frac{\partial^2 u}{\partial x^2}(x_i, t_{ℓ+1}) \approx \frac{u_{i-1}^{ℓ+1} − 2u_i^{ℓ+1} + u_{i+1}^{ℓ+1}}{h_x^2}.

Substitute into the PDE and multiply by h_t to obtain

−s\, u_{i-1}^{ℓ+1} + (1 + 2s)\, u_i^{ℓ+1} − s\, u_{i+1}^{ℓ+1} = u_i^ℓ, \quad s = \frac{D h_t}{h_x^2}.
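A sketch of one implicit step; for simplicity the tridiagonal system is assembled as a dense matrix and solved with `numpy.linalg.solve` (the function name `implicit_step` is ours):

```python
import numpy as np

def implicit_step(u, s):
    """One implicit step: solve
    -s*u_{i-1}^{l+1} + (1 + 2s)*u_i^{l+1} - s*u_{i+1}^{l+1} = u_i^l
    for the interior unknowns; boundary values u[0], u[-1] stay fixed."""
    m = len(u) - 2                        # number of interior points
    A = (np.diag((1 + 2 * s) * np.ones(m))
         + np.diag(-s * np.ones(m - 1), 1)
         + np.diag(-s * np.ones(m - 1), -1))
    b = u[1:-1].copy()
    b[0] += s * u[0]                      # known boundary contributions
    b[-1] += s * u[-1]
    u_new = u.copy()
    u_new[1:-1] = np.linalg.solve(A, b)
    return u_new

# Demo: 11 grid points, spike in the middle
u = np.zeros(11)
u[5] = 1.0
print(implicit_step(u, 2.0).round(3))
```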


Parabolic PDEs

Method 3 (Crank-Nicolson method)

Take the average of the explicit and implicit methods to obtain

u_i^{ℓ+1} = u_i^ℓ + \frac{s}{2}\left[\left(u_{i-1}^ℓ − 2u_i^ℓ + u_{i+1}^ℓ\right) + \left(u_{i-1}^{ℓ+1} − 2u_i^{ℓ+1} + u_{i+1}^{ℓ+1}\right)\right].

Stability of methods

Explicit method: Stable if and only if s ≤ 1/2.

Implicit method: Unconditionally stable.

Crank-Nicolson method: Unconditionally stable.


Hyperbolic PDEs

Method (Explicit method)

Central difference approximation to the time derivative:

\frac{\partial^2 u}{\partial t^2}(x_i, t_ℓ) \approx \frac{u_i^{ℓ-1} − 2u_i^ℓ + u_i^{ℓ+1}}{h_t^2}.

Central difference approximation to the space derivative:

\frac{\partial^2 u}{\partial x^2}(x_i, t_ℓ) \approx \frac{u_{i-1}^ℓ − 2u_i^ℓ + u_{i+1}^ℓ}{h_x^2}.

Substitute into the PDE and multiply through by h_t^2 to obtain

u_i^{ℓ+1} = r\, u_{i-1}^ℓ + 2(1 − r)\, u_i^ℓ + r\, u_{i+1}^ℓ − u_i^{ℓ-1}, \quad r = \frac{c^2 h_t^2}{h_x^2}.
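A sketch of one explicit step for the wave equation u_{tt} = c^2 u_{xx}; note that it requires the two previous time levels (the function name `wave_step` is ours):

```python
import numpy as np

def wave_step(u_prev, u_curr, r):
    """u_i^{l+1} = r*u_{i-1}^l + 2(1 - r)*u_i^l + r*u_{i+1}^l - u_i^{l-1},
    with r = c^2 * ht^2 / hx^2. Boundary values are kept fixed."""
    u_next = u_curr.copy()
    u_next[1:-1] = (r * u_curr[:-2] + 2 * (1 - r) * u_curr[1:-1]
                    + r * u_curr[2:] - u_prev[1:-1])
    return u_next

# Demo: initial displacement with zero initial velocity (u_prev = u_curr)
u_prev = np.zeros(11)
u_prev[5] = 1.0
u_curr = u_prev.copy()
print(wave_step(u_prev, u_curr, 0.5).round(3))
```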


A final problem

The heat conduction equation, which models the temperature in an insulated rod with ends held at constant temperatures, can be written in the dimensionless form

\frac{\partial Θ(x, t)}{\partial t} = \frac{\partial^2 Θ(x, t)}{\partial x^2}.

Write a finite difference approximation of this equation using the Forward-Time, Central-Space (FTCS) scheme and rearrange it to be solved by an explicit method.

By applying the FTCS scheme, we get

\frac{\partial Θ}{\partial t}(x_i, t_ℓ) \approx \frac{Θ_i^{ℓ+1} − Θ_i^ℓ}{h_t},
\qquad
\frac{\partial^2 Θ}{\partial x^2}(x_i, t_ℓ) \approx \frac{Θ_{i-1}^ℓ − 2Θ_i^ℓ + Θ_{i+1}^ℓ}{h_x^2}.


By rearranging to allow explicit solving, we get

Θ_i^{ℓ+1} = \left(\frac{h_t}{h_x^2}\right) Θ_{i-1}^ℓ + \left(1 − \frac{2h_t}{h_x^2}\right) Θ_i^ℓ + \left(\frac{h_t}{h_x^2}\right) Θ_{i+1}^ℓ.
