Lecture 22 - Exam 2 Review CVEN 302 July 29, 2002

Page 1:

Lecture 22 - Exam 2 Review

CVEN 302

July 29, 2002

Page 2:

Lecture’s Goals

• Chapter 6 - LU Decomposition

• Chapter 7 - Eigen-analysis

• Chapter 8 - Interpolation

• Chapter 9 - Approximation

• Chapter 11 - Numerical Differentiation and Integration

Page 3:

Chapter 6

LU Decomposition of Matrices

Page 4:

LU Decomposition

• LU decomposition is a modification of the elimination method: the technique rewrites the matrix as the product of two matrices,

A = LU

Page 5:

LU Decomposition

There are variations of the technique using different methods:
– Crout’s reduction (U has ones on the diagonal).
– Doolittle’s method (L has ones on the diagonal).
– Cholesky’s method (the diagonal terms are the same value for the L and U matrices).

Page 6:

LU Decomposition Solving

Using the LU decomposition

[A]{x} = [L][U]{x} = [L]{[U]{x}} = {b}

Solve

[L]{y} = {b}

and then solve

[U]{x} = {y}
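As a minimal sketch of this two-stage solve (assuming the factors L and U are already known and passed as NumPy arrays; the function name is hypothetical), forward substitution followed by back substitution might look like:

```python
import numpy as np

def lu_solve(L, U, b):
    """Solve [L][U]{x} = {b} by forward then back substitution."""
    n = len(b)
    # Forward substitution: [L]{y} = {b}
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    # Back substitution: [U]{x} = {y}
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```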

Page 7:

LU Decomposition

The matrices are represented as the product of a lower triangular matrix [L] and an upper triangular matrix [U].

Page 8:

LU Decomposition (Crout’s reduction)

Matrix decomposition: [A] = [L][U] with $u_{ii} = 1$.

Page 9:

LU Decomposition (Doolittle’s Method)

Matrix decomposition: [A] = [L][U] with $l_{ii} = 1$.

Page 10:

Cholesky’s Method

The matrix is decomposed into [A] = [L][U],

where $l_{ii} = u_{ii}$
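A hedged sketch for the symmetric positive definite case, where [U] = [L]ᵀ so that the diagonal terms are shared ($l_{ii} = u_{ii}$):

```python
import numpy as np

def cholesky(A):
    """Decompose symmetric positive definite A so that A = L @ L.T."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i + 1):
            s = L[i, :j] @ L[j, :j]
            if i == j:
                L[i, j] = np.sqrt(A[i, i] - s)   # shared diagonal term
            else:
                L[i, j] = (A[i, j] - s) / L[j, j]
    return L
```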

Page 11:

Tridiagonal Matrix

For a banded matrix, Doolittle’s method simplifies considerably. For a tridiagonal matrix:

$$\begin{bmatrix} a_{11} & a_{12} & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 \\ 0 & a_{32} & a_{33} & a_{34} \\ 0 & 0 & a_{43} & a_{44} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ 0 & l_{32} & 1 & 0 \\ 0 & 0 & l_{43} & 1 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & 0 & 0 \\ 0 & u_{22} & u_{23} & 0 \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & 0 & u_{44} \end{bmatrix}$$
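A sketch of the resulting forward-elimination/back-substitution solver (the Thomas algorithm), assuming the three diagonals are passed as NumPy arrays lower, diag, and upper:

```python
import numpy as np

def thomas(lower, diag, upper, b):
    """Solve a tridiagonal system; lower[0] and upper[-1] are unused."""
    n = len(b)
    d = diag.astype(float).copy()
    rhs = b.astype(float).copy()
    # Forward elimination: factor and update the right-hand side in one sweep
    for i in range(1, n):
        m = lower[i] / d[i - 1]        # the l_{i,i-1} multiplier
        d[i] -= m * upper[i - 1]
        rhs[i] -= m * rhs[i - 1]
    # Back substitution
    x = np.zeros(n)
    x[-1] = rhs[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - upper[i] * x[i + 1]) / d[i]
    return x
```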

Page 12:

Pivoting of the LU Decomposition

• Still need pivoting in LU decomposition
• Pivoting messes up the order of [L]
• What to do?
• Pivot both [L] and a permutation matrix [P]
• Initialize [P] as the identity matrix; whenever [A] is pivoted, pivot [P] and [L] as well

Page 13:

Pivoting of the LU Decomposition

• Permutation matrix [P] - a permutation of the identity matrix [I]
• The permutation matrix performs the “bookkeeping” associated with the row exchanges
• Permuted matrix: [P][A]
• LU factorization of the permuted matrix: [P][A] = [L][U]

Page 14:

Chapter 7

Eigen-analysis

Page 15:

Eigen-Analysis

• Matrix eigenvalues arise from discrete models of physical systems
• Discrete models:
– A finite number of degrees of freedom results in a finite number of eigenvalues and eigenvectors.

Page 16:

Eigenvalues

Computing the eigenvalues of a matrix is important in numerous applications:
– In numerical analysis, the convergence of an iterative sequence involving matrices is determined by the size of the eigenvalues of the iteration matrix.
– In dynamic systems, the eigenvalues indicate whether a system is oscillatory, stable (decaying oscillations), or unstable (growing oscillations).
– In oscillatory systems, the eigenvalues of the differential equations or of the coefficient matrix of a finite element model are directly related to the natural frequencies of the system.
– In regression analysis, the eigenvectors of the correlation matrix are used to select new predictor variables that are linear combinations of the original predictor variables.

Page 17:

General Form of the Equations

The general form of the equations:

$$[A]\{x\} = \lambda \{x\}$$

$$([A] - \lambda [I])\{x\} = \{0\}$$

$$\det([A] - \lambda [I]) = 0$$

Page 18:

Power Method

The basic computation of the power method is summarized as:

$$u_k = A u_{k-1}, \qquad \lambda_1 = \lim_{k \to \infty} \frac{\|u_k\|}{\|u_{k-1}\|}$$

The equation can be written as:

$$u_k = A u_{k-1} \approx \lambda_1 u_{k-1} \quad (k \to \infty)$$
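A minimal sketch of the iteration, normalizing by the largest component each pass so the vector does not overflow (the array A is assumed square with a dominant eigenvalue):

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=500):
    """Estimate the dominant eigenvalue and eigenvector of A."""
    u = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        w = A @ u
        lam_new = w[np.argmax(np.abs(w))]  # dominant eigenvalue estimate
        u = w / lam_new                    # normalized iterate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, u
```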


Page 20:

Shift Method

It is possible to obtain another eigenvalue from the set of equations by using a technique known as shifting the matrix.

$$[A]\{x\} = \lambda \{x\}$$

Subtract $s[I]\{x\}$ from each side, thereby changing the maximum eigenvalue:

$$([A] - s[I])\{x\} = (\lambda - s)\{x\}$$

Page 21:

Shift Method

The shift s is taken as the maximum eigenvalue of [A], found with the power method. The matrix is rewritten in the form:

$$[B] = [A] - \lambda_{\max} [I]$$

Use the power method to obtain the largest eigenvalue of [B].

Page 22:

Inverse Power Method

The inverse power method is similar to the power method, except that it finds the smallest eigenvalue, using the following technique:

$$[A]\{x\} = \lambda \{x\}$$

$$[A]^{-1}[A]\{x\} = \lambda [A]^{-1}\{x\}$$

$$[A]^{-1}\{x\} = \frac{1}{\lambda}\{x\}$$

$$[B]\{x\} = \frac{1}{\lambda}\{x\}, \qquad [B] = [A]^{-1}$$

Page 23:

Inverse Power Method

The algorithm is the same as the power method, but the value it converges to is not the smallest eigenvalue of [A]: it is the largest eigenvalue of [B] = [A]⁻¹. To obtain the smallest eigenvalue of [A], take the reciprocal:

$$\lambda_{\min} = \frac{1}{\lambda_B}$$
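A sketch reusing the power-method loop on [A]⁻¹; solving [A]{w} = {u} each iteration (here with numpy.linalg.solve) avoids forming the inverse explicitly:

```python
import numpy as np

def inverse_power_method(A, tol=1e-10, max_iter=500):
    """Estimate the smallest-magnitude eigenvalue of A."""
    u = np.ones(A.shape[0])
    lam_b = 0.0
    for _ in range(max_iter):
        w = np.linalg.solve(A, u)          # w = A^{-1} u
        lam_new = w[np.argmax(np.abs(w))]  # dominant eigenvalue of A^{-1}
        u = w / lam_new
        if abs(lam_new - lam_b) < tol:
            break
        lam_b = lam_new
    return 1.0 / lam_b, u                  # reciprocal gives smallest of A
```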

Page 24:

Accelerated Power Method

The power method can be accelerated by using the Rayleigh quotient instead of the largest $w_k$ value.

The Rayleigh quotient is defined as:

$$\lambda_1 = \frac{z^{T} w}{z^{T} z}, \qquad w = A z$$

Page 25:

Accelerated Power Method

The value of the next z term is defined as:

$$z_{k+1} = \frac{w_k}{\lambda_1}$$

The power method is adapted to use the new value.

Page 26:

QR Factorization

• Another form of factorization:

A = Q*R

• Produces an orthogonal matrix (“Q”) and a right upper triangular matrix (“R”)
• Orthogonal matrix - its inverse is its transpose:

$$Q^{-1} = Q^{T}$$

Page 27:

QR Factorization

Why do we care?

We can use Q and R to find eigenvalues:

1. Get Q and R (A = Q*R)
2. Let A = R*Q
3. Diagonal elements of A are eigenvalue approximations
4. Iterate until converged

Note: the QR eigenvalue method gives all eigenvalues simultaneously, not just the dominant one.
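A compact sketch of that loop using numpy.linalg.qr (which returns Q and R); this plain version is adequate for matrices with real, distinct eigenvalues:

```python
import numpy as np

def qr_eigenvalues(A, iterations=200):
    """Approximate all eigenvalues of A by repeated QR factorization."""
    Ak = A.astype(float).copy()
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)  # 1. factor
        Ak = R @ Q               # 2. reverse the product
    return np.diag(Ak)           # 3. diagonal -> eigenvalue estimates
```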

Page 28:

Householder Matrix

• The Householder matrix reduces the trailing components $x_{k+1}, \ldots, x_n$ of a vector to zero:

$$H = I - 2ww^{T}, \qquad w = \frac{v}{\|v\|_2}$$

$$y = Hx = (y_1, y_2, \ldots, y_k, 0, \ldots, 0)^{T}, \qquad x = (x_1, x_2, \ldots, x_k, x_{k+1}, \ldots, x_n)^{T}$$

Page 29:

Householder Matrix

• To achieve the above operation, v must be a linear combination of x and $e_k$:

$$e_k = (0, \ldots, 0, 1, 0, \ldots, 0)^{T}$$

$$v = x + \alpha e_k = (x_1, \ldots, x_{k-1}, x_k + \alpha, x_{k+1}, \ldots, x_n)^{T}$$
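A sketch of building one such reflector; α is chosen as ±‖x_{k:}‖ with the sign of $x_k$ to avoid cancellation (the index k is zero-based here, and the function name is illustrative):

```python
import numpy as np

def householder(x, k):
    """Return H with (Hx)[k+1:] = 0 and (Hx)[:k] = x[:k]."""
    v = np.zeros_like(x, dtype=float)
    v[k:] = x[k:]
    alpha = np.sign(x[k]) * np.linalg.norm(x[k:])  # sign choice avoids cancellation
    v[k] += alpha
    w = v / np.linalg.norm(v)
    return np.eye(len(x)) - 2.0 * np.outer(w, w)
```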

Page 30:

Chapter 8

Interpolation

Page 31:

Interpolation Methods

Interpolation uses the data to approximate a function that fits all of the data points. All of the data is used to approximate the values of the function inside the bounds of the data.

We will look at polynomial and rational function interpolation of the data, and at piecewise interpolation of the data.

Page 32:

Polynomial Interpolation Methods

• Lagrange interpolation polynomial - a straightforward, but computationally awkward, way to construct an interpolating polynomial.
• Newton interpolation polynomial - there is no difference between the Newton and Lagrange results; the difference between the two is the approach to obtaining the coefficients.
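As a sketch, the Lagrange form can be evaluated directly at a point x (the support-point arrays xs and ys are assumed inputs):

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])  # basis polynomial L_i
        total += term
    return total
```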

Page 33:

Hermite Interpolation

The advantages:
• The segments of the piecewise Hermite polynomial have a continuous first derivative at the support points.
• The shape of the function being interpolated is better matched, because the tangent of that function and the tangent of the Hermite polynomial agree at the support points.

Page 34:

Rational Function Interpolation

Polynomials are not always the best match for the data. A rational function, the ratio of two polynomials, can be used instead. This is useful when fitting complex functions z = x + iy. The Bulirsch-Stoer algorithm creates a function where the numerator is of the same order as the denominator, or one less.

Page 35:

Rational Function Interpolation

Rational function interpolation requires that the locations and function values be known. The interpolant takes the form

$$P(x) = \frac{a_1 + a_2 x + \cdots + a_{m+1} x^{m}}{b_1 + b_2 x + \cdots + b_{n+1} x^{n}}, \qquad m = n \text{ or } m = n - 1$$

Page 36:

Cubic Spline Interpolation

Hermite polynomials produce a smooth interpolation, but they have the disadvantage that the slope of the input function must be specified at each breakpoint.

Cubic spline interpolation uses only the data points to maintain the desired smoothness of the function, and it is piecewise continuous.
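For illustration, a natural cubic spline through sample data, using scipy.interpolate.CubicSpline (the data arrays here are made up; note that no slopes need to be supplied):

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.7, 5.8, 6.6])             # hypothetical data

spline = CubicSpline(x, y, bc_type='natural')  # no slopes required
print(spline(1.5))     # interpolated value
print(spline(1.5, 1))  # first derivative, continuous across breakpoints
```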

Page 37:

Chapter 9

Approximation

Page 38:

Approximation Methods

• Interpolation matches the data points exactly. For experimental data, this assumption is often not true.
• Approximation - we want to find the curve that fits the data with the smallest “error”.

What is the difference between approximation and interpolation?

Page 39:

Least Square Fit Approximations

The solution is the minimization of the sum of the squares of the errors. This gives a least squares solution:

$$S = \sum_{k} e_k^2$$

This is known as the Maximum Likelihood Principle.

Page 40:

Least Square Error

How do you minimize the error? Take the derivative of S with respect to the coefficients and set it equal to zero:

$$\frac{dS}{da} = 0, \qquad \frac{dS}{db} = 0$$

Page 41:

Least Square Coefficients for Quadratic Fit

The equations can be written as:

$$\begin{bmatrix} N & \sum_{i=1}^{N} x_i & \sum_{i=1}^{N} x_i^2 \\ \sum_{i=1}^{N} x_i & \sum_{i=1}^{N} x_i^2 & \sum_{i=1}^{N} x_i^3 \\ \sum_{i=1}^{N} x_i^2 & \sum_{i=1}^{N} x_i^3 & \sum_{i=1}^{N} x_i^4 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{N} Y_i \\ \sum_{i=1}^{N} x_i Y_i \\ \sum_{i=1}^{N} x_i^2 Y_i \end{bmatrix}$$
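A sketch that assembles and solves these normal equations directly (the data arrays x and Y are assumed inputs):

```python
import numpy as np

def quadratic_fit(x, Y):
    """Least squares fit of Y = a + b*x + c*x**2 via the normal equations."""
    S = [np.sum(x**p) for p in range(5)]      # sums of powers x^0 .. x^4
    M = np.array([[S[0], S[1], S[2]],
                  [S[1], S[2], S[3]],
                  [S[2], S[3], S[4]]])
    rhs = np.array([np.sum(Y), np.sum(x * Y), np.sum(x**2 * Y)])
    return np.linalg.solve(M, rhs)            # coefficients [a, b, c]
```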

Page 42:

Polynomial Least Square

The technique can be applied to all polynomials of the form:

$$y = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$$

$$\begin{bmatrix} N & \sum x_i & \cdots & \sum x_i^n \\ \sum x_i & \sum x_i^2 & \cdots & \sum x_i^{n+1} \\ \vdots & \vdots & \ddots & \vdots \\ \sum x_i^n & \sum x_i^{n+1} & \cdots & \sum x_i^{2n} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} \sum Y_i \\ \sum x_i Y_i \\ \vdots \\ \sum x_i^n Y_i \end{bmatrix}$$

(all sums run over i = 1, …, N)

Page 43:

Polynomial Least Square

Solving large sets of linear equations is not a simple task. The systems can have the undesirable property known as ill-conditioning. The result is that round-off errors in solving for the coefficients cause unusually large errors in the curve fits.

Page 44:

Polynomial Least Square

A measure of the variance of the problem is

$$\sigma^2 = \frac{1}{N - n - 1} \sum_{k=1}^{N} \left(Y_k - y_k\right)^2$$

where n is the degree of the polynomial, N is the number of data points, the $Y_k$ are the data values, and

$$y_k = \sum_{j=0}^{n} a_j x_k^{j}$$

Page 45:

Nonlinear Least Squared Approximation Method

How would you handle a problem which is modeled as:

$$y = b\, x^{a} \qquad \text{or} \qquad y = b\, e^{a x}$$

Page 46:

Nonlinear Least Squared Approximation Method

Take the natural log of the equations:

$$y = b\, x^{a} \;\Rightarrow\; \ln y = \ln b + a \ln x$$

and

$$y = b\, e^{a x} \;\Rightarrow\; \ln y = \ln b + a x$$

The coefficients $\ln b$ and a can then be found with a linear least squares fit.
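A sketch of the linearized fit for $y = b e^{ax}$ (the data are hypothetical; np.polyfit does the straight-line least squares on the logged data and returns the slope first):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 7.6, 20.1, 54.2])  # hypothetical data

a, ln_b = np.polyfit(x, np.log(y), 1)      # fit ln y = a*x + ln b
b = np.exp(ln_b)
print(f"y = {b:.3f} * exp({a:.3f} * x)")
```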

Page 47:

Continuous Least Square Functions

Instead of modeling a known complex function over a region, we would like to model the values with a simple polynomial. This technique applies least squares over a continuous region.

The coefficients of the polynomial can be determined using the same technique that was used in the discrete method.

Page 48:

Continuous Least Square Functions

The technique minimizes the error of the fit using an integral:

$$E = \int_a^b \left[f(x) - s(x)\right]^2 dx$$

where

$$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots$$

Page 49:

Continuous Least Square Functions

Take the derivative of the error with respect to the coefficients and set it equal to zero:

$$\frac{dE}{da_i} = 2 \int_a^b \left[f(x) - s(x)\right] \frac{df(x)}{da_i}\, dx = 0$$

Then compute the components of the coefficient matrix. The right-hand side of the system is the integral of the function we are modeling times a power of x.

Page 50:

Continuous Least Square Functions

There are other forms of equations which can be used to represent continuous functions. Examples of these functions are:

• Legendre polynomials
• Tchebyshev polynomials
• Cosines and sines

Page 51:

Legendre Polynomial

The Legendre polynomials are a set of orthogonal functions which can be used to represent a function as a sum of components:

$$f(x) = a_0 P_0(x) + a_1 P_1(x) + \cdots + a_n P_n(x)$$

Page 52:

Legendre Polynomial

These functions are orthogonal over the range [-1, 1]. The range can be scaled to fit the function. Orthogonality means:

$$\int_{-1}^{1} P_i(x) P_j(x)\, dx = \begin{cases} 0 & \text{if } i \neq j \\ \neq 0 & \text{if } i = j \end{cases}$$

Page 53:

Continuous Functions

Other forms of orthogonal functions are sines and cosines, which are used in Fourier approximation. The advantage of sines and cosines is that they can model large time scales.

You will need to clip the ends of the series so that it has zeros at the ends.

Page 54:

Chapter 11

Numerical Differentiation and Integration

Page 55:

Numerical Differentiation

A Taylor series or a Lagrange interpolation of points can be used to find the derivatives. The Taylor series expansion is defined as:

$$f(x) = f(x_0) + \Delta x \left.\frac{df}{dx}\right|_{x_0} + \frac{\Delta x^2}{2!} \left.\frac{d^2 f}{dx^2}\right|_{x_0} + \frac{\Delta x^3}{3!} \left.\frac{d^3 f}{dx^3}\right|_{x_0} + \cdots, \qquad \Delta x = x - x_0$$

Page 56:

Numerical Differentiation

Assume that the data points are equally spaced; the expansions about $x_i$ can be written as:

$$f(x_{i+1}) = f(x_i) + \Delta x\, f'(x_i) + \frac{\Delta x^2}{2!} f''(x_i) + \frac{\Delta x^3}{3!} f'''(x_i) + \cdots$$

$$f(x_{i-1}) = f(x_i) - \Delta x\, f'(x_i) + \frac{\Delta x^2}{2!} f''(x_i) - \frac{\Delta x^3}{3!} f'''(x_i) + \cdots$$

Page 57:

Differential Error

Notice that the forward and backward 1st-derivative formulas have an error of order O(Δx), while the central difference has an error of order O(Δx²). The central difference has better accuracy and lower error than the others. This can be improved further by using more terms to model the first derivative.
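A quick sketch comparing the two error orders on a function with a known derivative (f = sin, exact derivative cos):

```python
import numpy as np

f, dfdx = np.sin, np.cos
x = 1.0
for dx in (0.1, 0.05, 0.025):
    fwd = (f(x + dx) - f(x)) / dx               # O(dx) error
    cen = (f(x + dx) - f(x - dx)) / (2.0 * dx)  # O(dx**2) error
    print(dx, abs(fwd - dfdx(x)), abs(cen - dfdx(x)))
# Halving dx roughly halves the forward error but quarters the central error.
```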

Page 58:

Higher Order Derivatives

To find higher derivatives, use the Taylor series expansions at several points and combine the equations to eliminate the unwanted terms. To improve the error in the problem, add additional terms.

Page 59:

Lagrange Differentiation

Another form of differentiation is to use the Lagrange interpolation between three points. The values can be determined for unevenly spaced points. Given:

$$L(x) = \frac{(x - x_2)(x - x_3)}{(x_1 - x_2)(x_1 - x_3)} y_1 + \frac{(x - x_1)(x - x_3)}{(x_2 - x_1)(x_2 - x_3)} y_2 + \frac{(x - x_1)(x - x_2)}{(x_3 - x_1)(x_3 - x_2)} y_3$$

Page 60:

Lagrange Differentiation

Differentiate the Lagrange interpolation:

$$f'(x) = L'(x) = \frac{2x - x_2 - x_3}{(x_1 - x_2)(x_1 - x_3)} y_1 + \frac{2x - x_1 - x_3}{(x_2 - x_1)(x_2 - x_3)} y_2 + \frac{2x - x_1 - x_2}{(x_3 - x_1)(x_3 - x_2)} y_3$$

Assume a constant spacing $\Delta x$:

$$f'(x) = \frac{2x - x_2 - x_3}{2\Delta x^2} y_1 - \frac{2x - x_1 - x_3}{\Delta x^2} y_2 + \frac{2x - x_1 - x_2}{2\Delta x^2} y_3$$

Page 61:

Richardson Extrapolation

This technique uses the concept of variable grid sizes to reduce the error. It provides a simple method for eliminating the leading error term. Consider the second-order central difference, and write the equation in the form:

$$f'(x_i) = \frac{f(x_{i+1}) - f(x_{i-1})}{2\Delta x} + a_2 \Delta x^2 + a_4 \Delta x^4 + \cdots$$

Page 62:

Richardson Extrapolation

The central difference can be defined as:

$$A(\Delta x) = \frac{f(x_{i+1}) - f(x_{i-1})}{2\Delta x}$$

Write the equation with two different grid sizes:

$$f'(x_i) = A(\Delta x) + a_2 \Delta x^2 + a_4 \Delta x^4 + \cdots$$

$$f'(x_i) = A\!\left(\frac{\Delta x}{2}\right) + a_2 \frac{\Delta x^2}{4} + a_4 \frac{\Delta x^4}{16} + \cdots$$

Page 63:

Richardson Extrapolation

The two equations can be combined to cancel the $\Delta x^2$ term:

$$f'(x_i) = \frac{4 A(\Delta x / 2) - A(\Delta x)}{3} - \frac{a_4}{4} \Delta x^4 + \cdots$$

It can be rewritten in the form:

$$f'(x_i) = B(\Delta x) + b_4 \Delta x^4 + b_6 \Delta x^6 + \cdots$$

Page 64:

Richardson Extrapolation

The technique can be extended to eliminate the higher-order error terms by using a finer grid:

$$f'(x_i) = \frac{16\, B(\Delta x / 2) - B(\Delta x)}{15} + O(\Delta x^6)$$
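A sketch of two levels of this extrapolation applied to the central difference:

```python
import numpy as np

def richardson_derivative(f, x, dx):
    """Central difference with two Richardson extrapolation levels."""
    A = lambda h: (f(x + h) - f(x - h)) / (2.0 * h)   # O(h^2)
    B = lambda h: (4.0 * A(h / 2.0) - A(h)) / 3.0     # O(h^4)
    return (16.0 * B(dx / 2.0) - B(dx)) / 15.0        # O(h^6)

print(richardson_derivative(np.sin, 1.0, 0.1), np.cos(1.0))
```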

Page 65:

Trapezoid Rule

Integrate the linear Lagrange interpolation of the two endpoints to obtain the rule:

$$\int_a^b f(x)\, dx \approx \int_a^b L(x)\, dx = h \int_0^1 \left[ f(a)(1 - \xi) + f(b)\, \xi \right] d\xi = \frac{h}{2}\left[ f(a) + f(b) \right], \qquad h = b - a$$

Page 66:

Simpson’s 1/3-Rule

Integrate the quadratic Lagrange interpolation of three equally spaced points ($h = \frac{b - a}{2}$):

$$\int_a^b f(x)\, dx \approx \int_a^b L(x)\, dx = h \int_{-1}^{1} \left[ \frac{\xi(\xi - 1)}{2} f(x_0) + (1 - \xi^2) f(x_1) + \frac{\xi(\xi + 1)}{2} f(x_2) \right] d\xi = \frac{h}{3}\left[ f(x_0) + 4 f(x_1) + f(x_2) \right]$$

Page 67:

Simpson’s 3/8-Rule

Integrate the cubic Lagrange interpolation of four equally spaced points:

$$L(x) = \frac{(x - x_1)(x - x_2)(x - x_3)}{(x_0 - x_1)(x_0 - x_2)(x_0 - x_3)} f(x_0) + \frac{(x - x_0)(x - x_2)(x - x_3)}{(x_1 - x_0)(x_1 - x_2)(x_1 - x_3)} f(x_1) + \frac{(x - x_0)(x - x_1)(x - x_3)}{(x_2 - x_0)(x_2 - x_1)(x_2 - x_3)} f(x_2) + \frac{(x - x_0)(x - x_1)(x - x_2)}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)} f(x_3)$$

$$\int_a^b f(x)\, dx \approx \int_a^b L(x)\, dx = \frac{3h}{8}\left[ f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3) \right], \qquad h = \frac{b - a}{3}$$

Page 68:

Midpoint Rule

Newton-Cotes open formula:

$$\int_a^b f(x)\, dx \approx (b - a)\, f(x_m) = (b - a)\, f\!\left(\frac{a + b}{2}\right) + \frac{(b - a)^3}{24} f''(\xi)$$

(figure: f(x) over [a, b], evaluated at the midpoint $x_m$)

Page 69:

Composite Trapezoid Rule

$$\int_a^b f(x)\, dx = \int_{x_0}^{x_1} f(x)\, dx + \int_{x_1}^{x_2} f(x)\, dx + \cdots + \int_{x_{n-1}}^{x_n} f(x)\, dx$$

$$= \frac{h}{2}\left[ f(x_0) + f(x_1) \right] + \frac{h}{2}\left[ f(x_1) + f(x_2) \right] + \cdots + \frac{h}{2}\left[ f(x_{n-1}) + f(x_n) \right]$$

$$= \frac{h}{2}\left[ f(x_0) + 2 f(x_1) + 2 f(x_2) + \cdots + 2 f(x_{n-1}) + f(x_n) \right], \qquad h = \frac{b - a}{n}$$

(figure: f(x) sampled at $x_0, x_1, x_2, \ldots$ with uniform spacing h)
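A direct sketch of the composite rule:

```python
import numpy as np

def composite_trapezoid(f, a, b, n):
    """Composite trapezoid rule with n uniform subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return (h / 2.0) * (y[0] + 2.0 * np.sum(y[1:-1]) + y[-1])

print(composite_trapezoid(np.sin, 0.0, np.pi, 100))  # ~2.0
```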

Page 70:

Composite Simpson’s Rule

Multiple applications of Simpson’s rule (n even):

$$\int_a^b f(x)\, dx = \int_{x_0}^{x_2} f(x)\, dx + \int_{x_2}^{x_4} f(x)\, dx + \cdots + \int_{x_{n-2}}^{x_n} f(x)\, dx$$

$$= \frac{h}{3}\left[ f(x_0) + 4 f(x_1) + f(x_2) \right] + \frac{h}{3}\left[ f(x_2) + 4 f(x_3) + f(x_4) \right] + \cdots$$

$$= \frac{h}{3}\left[ f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + 2 f(x_4) + \cdots + 2 f(x_{n-2}) + 4 f(x_{n-1}) + f(x_n) \right]$$
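The matching sketch for Simpson’s rule (n must be even):

```python
import numpy as np

def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return (h / 3.0) * (y[0] + 4.0 * np.sum(y[1:-1:2])
                        + 2.0 * np.sum(y[2:-1:2]) + y[-1])

print(composite_simpson(np.sin, 0.0, np.pi, 100))  # ~2.0, O(h^4) accurate
```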

Page 71:

Richardson Extrapolation

Use the trapezoidal rule as an example, with the number of subintervals n = 2^j = 1, 2, 4, 8, 16, …:

$$\int_a^b f(x)\, dx = \frac{h}{2}\left[ f(x_0) + 2 f(x_1) + \cdots + 2 f(x_{n-1}) + f(x_n) \right] + c\, h^2 + \cdots$$

With h = b - a, the successive estimates are:

$$j = 0,\; n = 1: \quad I_0 = \frac{h}{2}\left[ f(a) + f(b) \right]$$

$$j = 1,\; n = 2: \quad I_1 = \frac{h}{4}\left[ f(a) + 2 f(x_1) + f(b) \right]$$

$$j = 2,\; n = 4: \quad I_2 = \frac{h}{8}\left[ f(a) + 2 f(x_1) + 2 f(x_2) + 2 f(x_3) + f(b) \right]$$

$$j = 3,\; n = 8: \quad I_3 = \frac{h}{16}\left[ f(a) + 2 f(x_1) + \cdots + 2 f(x_7) + f(b) \right]$$

$$j,\; n = 2^j: \quad I_j = \frac{h}{2^{j+1}}\left[ f(a) + 2 f(x_1) + \cdots + 2 f(x_{n-1}) + f(b) \right]$$

Page 72:

Richardson Extrapolation

For the trapezoidal rule:

$$A = \int_a^b f(x)\, dx = A(h) + c_2 h^2 + c_4 h^4 + \cdots$$

$$A = A\!\left(\frac{h}{2}\right) + c_2 \frac{h^2}{4} + c_4 \frac{h^4}{16} + \cdots$$

Eliminate the $h^2$ term:

$$A = \frac{4 A(h/2) - A(h)}{3} + b_4 h^4 + \cdots = B(h) + b_4 h^4 + b_6 h^6 + \cdots$$

Eliminate the $h^4$ term:

$$C(h) = \frac{16\, B(h/2) - B(h)}{15}$$

Page 73:

Richardson Extrapolation

kth level of extrapolation:

$$D(h) = \frac{4^k\, C(h/2) - C(h)}{4^k - 1}$$

Page 74:

Romberg Integration

Accelerated trapezoid rule:

$$I_{j,k} = \frac{4^k\, I_{j+1,\,k-1} - I_{j,\,k-1}}{4^k - 1}, \qquad k = 1, 2, 3, \ldots$$

The tableau starts from trapezoid estimates with step sizes h, h/2, h/4, h/8, …; each column gains two orders of accuracy (Trapezoid O(h²), Simpson O(h⁴), Boole’s O(h⁶), …):

I_{0,0}
I_{1,0}  I_{0,1}
I_{2,0}  I_{1,1}  I_{0,2}
I_{3,0}  I_{2,1}  I_{1,2}  I_{0,3}
I_{4,0}  I_{3,1}  I_{2,2}  I_{1,3}  I_{0,4}

where, for example, $I_{0,1} = \frac{4 I_{1,0} - I_{0,0}}{3}$ and $I_{0,2} = \frac{16 I_{1,1} - I_{0,1}}{15}$, with divisors 3, 15, 63, 255, … at successive levels.
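A sketch of the full tableau built on composite trapezoid estimates, using the recursion above:

```python
import numpy as np

def romberg(f, a, b, levels=5):
    """Romberg integration: trapezoid estimates plus Richardson extrapolation."""
    R = np.zeros((levels, levels))
    for j in range(levels):
        n = 2 ** j
        x = np.linspace(a, b, n + 1)
        h = (b - a) / n
        R[j, 0] = (h / 2.0) * (f(x[0]) + 2.0 * np.sum(f(x[1:-1])) + f(x[-1]))
    for k in range(1, levels):
        for j in range(levels - k):
            R[j, k] = (4.0**k * R[j + 1, k - 1] - R[j, k - 1]) / (4.0**k - 1.0)
    return R[0, levels - 1]

print(romberg(np.sin, 0.0, np.pi))  # ~2.0
```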

Page 75:

Gaussian Quadratures

• Newton-Cotes formulae
– use evenly spaced functional values
• Gaussian quadratures
– select functional values at non-uniformly distributed points to achieve higher accuracy
– change of variables so that the interval of integration is [-1, 1]
– Gauss-Legendre formulae

Page 76:

Gaussian Quadrature on [-1, 1]

$$n = 2: \quad \int_{-1}^{1} f(x)\, dx \approx c_1 f(x_1) + c_2 f(x_2)$$

Requiring the integral to be exact for f = x⁰, x¹, x², x³ gives four equations for the four unknowns:

$$f = 1: \quad \int_{-1}^{1} 1\, dx = 2 = c_1 + c_2$$

$$f = x: \quad \int_{-1}^{1} x\, dx = 0 = c_1 x_1 + c_2 x_2$$

$$f = x^2: \quad \int_{-1}^{1} x^2\, dx = \frac{2}{3} = c_1 x_1^2 + c_2 x_2^2$$

$$f = x^3: \quad \int_{-1}^{1} x^3\, dx = 0 = c_1 x_1^3 + c_2 x_2^3$$

Solving: $c_1 = c_2 = 1$, $x_1 = -\frac{1}{\sqrt{3}}$, $x_2 = \frac{1}{\sqrt{3}}$

Page 77:

Gaussian Quadrature on [-1, 1]

Exact integral for f = x⁰, x¹, x², x³:

$$n = 2: \quad I = \int_{-1}^{1} f(x)\, dx = f\!\left(-\frac{1}{\sqrt{3}}\right) + f\!\left(\frac{1}{\sqrt{3}}\right)$$

Page 78:

Gaussian Quadrature on [-1, 1]

Exact integral for f = x⁰, x¹, x², x³, x⁴, x⁵:

$$n = 3: \quad I = \int_{-1}^{1} f(x)\, dx = \frac{5}{9} f\!\left(-\sqrt{\frac{3}{5}}\right) + \frac{8}{9} f(0) + \frac{5}{9} f\!\left(\sqrt{\frac{3}{5}}\right)$$
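A sketch of both formulas, with the standard change of variables x = (b + a)/2 + (b - a)/2 · t mapping [-1, 1] onto a general [a, b] (the function name and signature are illustrative):

```python
import numpy as np

def gauss_legendre(f, a, b, n=3):
    """2- or 3-point Gauss-Legendre quadrature on [a, b]."""
    if n == 2:
        t = np.array([-1.0, 1.0]) / np.sqrt(3.0)
        c = np.array([1.0, 1.0])
    else:
        t = np.array([-np.sqrt(3.0 / 5.0), 0.0, np.sqrt(3.0 / 5.0)])
        c = np.array([5.0, 8.0, 5.0]) / 9.0
    x = 0.5 * (b + a) + 0.5 * (b - a) * t    # map [-1, 1] -> [a, b]
    return 0.5 * (b - a) * np.sum(c * f(x))  # Jacobian of the mapping

print(gauss_legendre(np.sin, 0.0, np.pi))    # ~2.0 from just 3 points
```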

Page 79:

Summary

• Open book and open notes.
• The exam will be 5-8 problems.
• For short-answer type problems, use a table to differentiate between the techniques.
• Problems are not going to be excessive.
• Make a short summary of the material.
• Only use your notes when you have forgotten something; do not depend on them.