
Chapter 10

Orthogonal Polynomials and

Least-Squares Approximations to

Functions

**4/5/13 ET

10.1 Discrete Least-Squares Approximations

Given a set of data points (x_1, y_1), (x_2, y_2), ..., (x_m, y_m), a common and useful practice in many applications in statistics, engineering, and other applied sciences is to construct a curve that is considered to be the "best fit" for the data, in some sense.

Several types of "fit" can be considered, but the one used most in applications is the "least-squares fit". Mathematically, the problem is the following:

Discrete Least-Squares Approximation Problem

Given a set of m discrete data points (x_i, y_i), i = 1, 2, ..., m, find the algebraic polynomial

P_n(x) = a_0 + a_1 x + a_2 x^2 + ··· + a_n x^n   (n < m)

such that the error E(a_0, a_1, ..., a_n) in the least-squares sense is minimized; that is,

E(a_0, a_1, ..., a_n) = ∑_{i=1}^{m} (y_i − a_0 − a_1 x_i − a_2 x_i^2 − ··· − a_n x_i^n)^2

is minimum.

Here E(a_0, a_1, ..., a_n) is a function of the (n + 1) variables a_0, a_1, ..., a_n. For this function to be minimized, we must have

∂E/∂a_j = 0, j = 0, 1, ..., n.

Now, simple computations of these partial derivatives yield:

∂E/∂a_0 = −2 ∑_{i=1}^{m} (y_i − a_0 − a_1 x_i − ··· − a_n x_i^n)

∂E/∂a_1 = −2 ∑_{i=1}^{m} x_i (y_i − a_0 − a_1 x_i − ··· − a_n x_i^n)

⋮

∂E/∂a_n = −2 ∑_{i=1}^{m} x_i^n (y_i − a_0 − a_1 x_i − ··· − a_n x_i^n)

Setting these partial derivatives to zero, we have

a_0 ∑_{i=1}^{m} 1 + a_1 ∑_{i=1}^{m} x_i + a_2 ∑_{i=1}^{m} x_i^2 + ··· + a_n ∑_{i=1}^{m} x_i^n = ∑_{i=1}^{m} y_i

a_0 ∑_{i=1}^{m} x_i + a_1 ∑_{i=1}^{m} x_i^2 + ··· + a_n ∑_{i=1}^{m} x_i^{n+1} = ∑_{i=1}^{m} x_i y_i

⋮

a_0 ∑_{i=1}^{m} x_i^n + a_1 ∑_{i=1}^{m} x_i^{n+1} + ··· + a_n ∑_{i=1}^{m} x_i^{2n} = ∑_{i=1}^{m} x_i^n y_i

Set

s_k = ∑_{i=1}^{m} x_i^k, k = 0, 1, ..., 2n,

b_k = ∑_{i=1}^{m} x_i^k y_i, k = 0, 1, ..., n.

Using these notations, the above equations can be written as:


s_0 a_0 + s_1 a_1 + ··· + s_n a_n = b_0   (note that ∑_{i=1}^{m} 1 = ∑_{i=1}^{m} x_i^0 = s_0)

s_1 a_0 + s_2 a_1 + ··· + s_{n+1} a_n = b_1

⋮

s_n a_0 + s_{n+1} a_1 + ··· + s_{2n} a_n = b_n

This is a system of (n + 1) equations in the (n + 1) unknowns a_0, a_1, ..., a_n. These equations are called the Normal Equations. This system can now be solved to obtain these (n + 1) unknowns, provided a solution to the system exists. We will now show that this system has a unique solution if the x_i's are distinct.

The system can be written in the following matrix form:

[ s_0   s_1     ···  s_n     ] [ a_0 ]   [ b_0 ]
[ s_1   s_2     ···  s_{n+1} ] [ a_1 ] = [ b_1 ]
[  ⋮     ⋮            ⋮      ] [  ⋮  ]   [  ⋮  ]
[ s_n   s_{n+1} ···  s_{2n}  ] [ a_n ]   [ b_n ]

or

Sa = b,

where S is the (n + 1) × (n + 1) matrix above, a = (a_0, a_1, ..., a_n)^T, and b = (b_0, b_1, ..., b_n)^T.

Define

V = [ 1  x_1  x_1^2  ···  x_1^n ]
    [ 1  x_2  x_2^2  ···  x_2^n ]
    [ 1  x_3  x_3^2  ···  x_3^n ]
    [ ⋮   ⋮    ⋮           ⋮   ]
    [ 1  x_m  x_m^2  ···  x_m^n ]

Then the above system has the form:

V^T V a = b.

The matrix V is known as the Vandermonde matrix, and we have seen in Chapter 6 that this matrix has full rank if the x_i's are distinct. In this case, the matrix S = V^T V is symmetric and positive definite [Exercise] and is therefore nonsingular. Thus, if the x_i's are distinct, the equation Sa = b has a unique solution.

Page 4: Orthogonal Polynomialsand Least-Squares …dattab/MATH435.2013/APPROXIMATION.pdf · ... Using Monomial Polynomials Given a function f(x), ... (Least-Squares Approximation using Monomial

468CHAPTER 10. ORTHOGONAL POLYNOMIALS AND LEAST-SQUARESAPPROXIMATIONS TO FUNCTIONS

Theorem 10.1 (Existence and Uniqueness of Discrete Least-Squares Solutions). Let (x_1, y_1), (x_2, y_2), ..., (x_m, y_m) be m points with distinct abscissas x_1, ..., x_m, and let n < m. Then the discrete least-squares approximation problem has a unique solution.
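The normal-equations construction above translates directly into code. The following is a minimal NumPy sketch (not from the text; the function name and the sample data are invented for illustration): it forms the quantities s_k and b_k, assembles S, and solves Sa = b.

```python
import numpy as np

def discrete_least_squares(x, y, n):
    """Fit P_n(x) = a_0 + a_1 x + ... + a_n x^n to the points (x_i, y_i)
    by forming and solving the normal equations S a = b."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # s_k = sum_i x_i^k for k = 0..2n;  b_k = sum_i x_i^k y_i for k = 0..n
    s = [np.sum(x**k) for k in range(2 * n + 1)]
    b = np.array([np.sum(x**k * y) for k in range(n + 1)])
    # S has (i, j) entry s_{i+j}
    S = np.array([[s[i + j] for j in range(n + 1)] for i in range(n + 1)])
    return np.linalg.solve(S, b)  # coefficients a_0, ..., a_n

# Straight-line fit (n = 1) to five sample points
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.9, 4.8, 7.1, 9.0])
a = discrete_least_squares(x, y, 1)
```

For distinct x_i the matrix S = V^T V is positive definite, so the solve succeeds; in practice one often prefers `np.polyfit` or a QR factorization of V, which avoid the squared conditioning of the normal equations.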

10.1.1 Least-Squares Approximation of a Function

We have described least-squares approximation for fitting a set of discrete data. Here we describe continuous least-squares approximation of a function f(x) by polynomials.

First, consider approximation by a polynomial with the monomial basis {1, x, x^2, ..., x^n}.

Least-Squares Approximation of a Function Using Monomial Polynomials

Given a function f(x), continuous on [a, b], find a polynomial P_n(x) of degree at most n:

P_n(x) = a_0 + a_1 x + a_2 x^2 + ··· + a_n x^n

such that the integral of the square of the error is minimized. That is,

E = ∫_a^b [f(x) − P_n(x)]^2 dx

is minimized.

The polynomial P_n(x) is called the Least-Squares Polynomial.

Since E is a function of a_0, a_1, ..., a_n, we denote it by E(a_0, a_1, ..., a_n). For minimization, we must have

∂E/∂a_i = 0, i = 0, 1, ..., n.

As before, these conditions give rise to a system of (n + 1) normal equations in the (n + 1) unknowns a_0, a_1, ..., a_n, whose solution yields the unknowns.


10.1.2 Setting up the Normal Equations

Since

E = ∫_a^b [f(x) − (a_0 + a_1 x + a_2 x^2 + ··· + a_n x^n)]^2 dx,

we have

∂E/∂a_0 = −2 ∫_a^b [f(x) − a_0 − a_1 x − a_2 x^2 − ··· − a_n x^n] dx

∂E/∂a_1 = −2 ∫_a^b x [f(x) − a_0 − a_1 x − a_2 x^2 − ··· − a_n x^n] dx

⋮

∂E/∂a_n = −2 ∫_a^b x^n [f(x) − a_0 − a_1 x − a_2 x^2 − ··· − a_n x^n] dx

So,

∂E/∂a_0 = 0 ⇒ a_0 ∫_a^b 1 dx + a_1 ∫_a^b x dx + a_2 ∫_a^b x^2 dx + ··· + a_n ∫_a^b x^n dx = ∫_a^b f(x) dx.

Similarly,

∂E/∂a_i = 0 ⇒ a_0 ∫_a^b x^i dx + a_1 ∫_a^b x^{i+1} dx + a_2 ∫_a^b x^{i+2} dx + ··· + a_n ∫_a^b x^{i+n} dx = ∫_a^b x^i f(x) dx, i = 1, 2, 3, ..., n.

So, the (n + 1) normal equations in this case are:

i = 0:  a_0 ∫_a^b 1 dx + a_1 ∫_a^b x dx + a_2 ∫_a^b x^2 dx + ··· + a_n ∫_a^b x^n dx = ∫_a^b f(x) dx

i = 1:  a_0 ∫_a^b x dx + a_1 ∫_a^b x^2 dx + a_2 ∫_a^b x^3 dx + ··· + a_n ∫_a^b x^{n+1} dx = ∫_a^b x f(x) dx

⋮

i = n:  a_0 ∫_a^b x^n dx + a_1 ∫_a^b x^{n+1} dx + a_2 ∫_a^b x^{n+2} dx + ··· + a_n ∫_a^b x^{2n} dx = ∫_a^b x^n f(x) dx

Denote

∫_a^b x^i dx = s_i, i = 0, 1, 2, 3, ..., 2n, and b_i = ∫_a^b x^i f(x) dx, i = 0, 1, ..., n.

Then the above (n + 1) equations can be written as

a_0 s_0 + a_1 s_1 + a_2 s_2 + ··· + a_n s_n = b_0
a_0 s_1 + a_1 s_2 + a_2 s_3 + ··· + a_n s_{n+1} = b_1
⋮
a_0 s_n + a_1 s_{n+1} + a_2 s_{n+2} + ··· + a_n s_{2n} = b_n,


or, in matrix notation,

[ s_0   s_1     s_2     ···  s_n     ] [ a_0 ]   [ b_0 ]
[ s_1   s_2     s_3     ···  s_{n+1} ] [ a_1 ] = [ b_1 ]
[  ⋮     ⋮       ⋮            ⋮      ] [  ⋮  ]   [  ⋮  ]
[ s_n   s_{n+1} s_{n+2} ···  s_{2n}  ] [ a_n ]   [ b_n ]

Denote by S the matrix of this system (its (i, j) entry is s_{i+j}, i, j = 0, 1, ..., n), a = (a_0, a_1, ..., a_n)^T, and b = (b_0, b_1, ..., b_n)^T. Then we have the system of normal equations:

Sa = b.

The solution of these equations yields the coefficients a_0, a_1, ..., a_n of the least-squares polynomial P_n(x).

A Special Case: Let the interval be [0, 1]. Then

s_i = ∫_0^1 x^i dx = 1/(i + 1), i = 0, 1, ..., 2n.

Thus, in this case the matrix of the normal equations is

S = [ 1        1/2      ···  1/(n+1)  ]
    [ 1/2      1/3      ···  1/(n+2)  ]
    [  ⋮        ⋮             ⋮       ]
    [ 1/(n+1)  1/(n+2)  ···  1/(2n+1) ]

which is a Hilbert matrix. It is well known to be ill-conditioned (see Chapter 3).
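The growth of this ill-conditioning is easy to observe numerically. The sketch below (an illustration of ours, not part of the text) builds the (n+1) × (n+1) matrix with entries s_{i+j} = 1/(i + j + 1) and prints its condition number:

```python
import numpy as np

def hilbert(m):
    """m x m Hilbert matrix: entry (i, j) is 1/(i + j + 1)."""
    return np.array([[1.0 / (i + j + 1) for j in range(m)] for i in range(m)])

for n in (2, 4, 6, 8):
    # condition number of the normal-equations matrix for degree n on [0, 1]
    print(n, np.linalg.cond(hilbert(n + 1)))
```

Already for n = 8 the condition number exceeds 10^10, so roughly ten decimal digits can be lost when solving the normal equations in double precision.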


Algorithm 10.2 (Least-Squares Approximation Using Monomial Polynomials).

Inputs: (i) f(x) — a continuous function on [a, b]; (ii) n — the degree of the desired least-squares polynomial.

Output: The coefficients a_0, a_1, ..., a_n of the desired least-squares polynomial P_n(x) = a_0 + a_1 x + ··· + a_n x^n.

Step 1. Compute s_0, s_1, ..., s_{2n}:
For i = 0, 1, ..., 2n do
  s_i = ∫_a^b x^i dx
End

Step 2. Compute b_0, b_1, ..., b_n:
For i = 0, 1, ..., n do
  b_i = ∫_a^b x^i f(x) dx
End

Step 3. Form the matrix S from the numbers s_0, s_1, ..., s_{2n} and the vector b from the numbers b_0, b_1, ..., b_n:

S = [ s_0   s_1     ···  s_n     ]        [ b_0 ]
    [ s_1   s_2     ···  s_{n+1} ] ,  b = [ b_1 ]
    [  ⋮     ⋮            ⋮      ]        [  ⋮  ]
    [ s_n   s_{n+1} ···  s_{2n}  ]        [ b_n ]

Step 4. Solve the (n + 1) × (n + 1) system of equations Sa = b for a = (a_0, a_1, ..., a_n)^T.
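A direct transcription of Algorithm 10.2 is sketched below (an illustration; the helper name and the simple trapezoidal quadrature are choices of this sketch, not of the text). The moments s_i have the closed form (b^{i+1} − a^{i+1})/(i + 1); the b_i are computed numerically.

```python
import numpy as np

def ls_poly_coeffs(f, a, b, n, m=20001):
    """Steps 1-4 of Algorithm 10.2: continuous least-squares fit of f
    on [a, b] by a degree-n polynomial in the monomial basis."""
    # Step 1: s_i = integral of x^i = (b^(i+1) - a^(i+1)) / (i + 1)
    s = [(b**(i + 1) - a**(i + 1)) / (i + 1) for i in range(2 * n + 1)]
    # Step 2: b_i = integral of x^i f(x), composite trapezoidal rule
    x = np.linspace(a, b, m)
    w = np.full(m, (b - a) / (m - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    bvec = np.array([np.sum(w * x**i * f(x)) for i in range(n + 1)])
    # Steps 3-4: form S (entry (i, j) is s_{i+j}) and solve S a = bvec
    S = np.array([[s[i + j] for j in range(n + 1)] for i in range(n + 1)])
    return np.linalg.solve(S, bvec)

coeffs = ls_poly_coeffs(np.exp, -1.0, 1.0, 1)  # linear fit to e^x on [-1, 1]
```

For f(x) = e^x on [−1, 1] this reproduces the coefficients worked out by hand in Example 10.3.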

Example 10.3

Find linear and quadratic least-squares approximations to f(x) = e^x on [−1, 1].


Linear Approximation: n = 1; P_1(x) = a_0 + a_1 x.

s_0 = ∫_{−1}^{1} dx = 2
s_1 = ∫_{−1}^{1} x dx = [x^2/2]_{−1}^{1} = 1/2 − 1/2 = 0
s_2 = ∫_{−1}^{1} x^2 dx = [x^3/3]_{−1}^{1} = 1/3 − (−1/3) = 2/3

Thus,

S = [ s_0  s_1 ] = [ 2   0  ]
    [ s_1  s_2 ]   [ 0  2/3 ]

b_0 = ∫_{−1}^{1} e^x dx = e − 1/e = 2.3504
b_1 = ∫_{−1}^{1} x e^x dx = 2/e = 0.7358

The normal system is:

[ 2   0  ] [ a_0 ] = [ b_0 ]
[ 0  2/3 ] [ a_1 ]   [ b_1 ]

This gives

a_0 = 1.1752, a_1 = 1.1037.

The linear least-squares polynomial is P_1(x) = 1.1752 + 1.1037x.

Accuracy Check:

P_1(0.5) = 1.7270, e^{0.5} = 1.6487

Relative error: |e^{0.5} − P_1(0.5)| / |e^{0.5}| = |1.6487 − 1.7270| / |1.6487| = 0.0475.

Quadratic Fitting: n = 2; P_2(x) = a_0 + a_1 x + a_2 x^2.

s_0 = 2, s_1 = 0, s_2 = 2/3
s_3 = ∫_{−1}^{1} x^3 dx = [x^4/4]_{−1}^{1} = 0
s_4 = ∫_{−1}^{1} x^4 dx = [x^5/5]_{−1}^{1} = 2/5


b_0 = ∫_{−1}^{1} e^x dx = e − 1/e = 2.3504
b_1 = ∫_{−1}^{1} x e^x dx = 2/e = 0.7358
b_2 = ∫_{−1}^{1} x^2 e^x dx = e − 5/e = 0.8789

The system of normal equations is:

[ 2    0    2/3 ] [ a_0 ]   [ 2.3504 ]
[ 0   2/3   0   ] [ a_1 ] = [ 0.7358 ]
[ 2/3  0    2/5 ] [ a_2 ]   [ 0.8789 ]

The solution of this system is:

a_0 = 0.9963, a_1 = 1.1037, a_2 = 0.5368.

The quadratic least-squares polynomial is P_2(x) = 0.9963 + 1.1037x + 0.5368x^2.

Accuracy Check:

P_2(0.5) = 1.6824, e^{0.5} = 1.6487

Relative error: |P_2(0.5) − e^{0.5}| / |e^{0.5}| = |1.6824 − 1.6487| / |1.6487| = 0.0204.

Example 10.4

Find linear and quadratic least-squares polynomial approximations to f(x) = x^2 + 5x + 6 on [0, 1].

Linear Fit: P_1(x) = a_0 + a_1 x.

s_0 = ∫_0^1 dx = 1
s_1 = ∫_0^1 x dx = 1/2
s_2 = ∫_0^1 x^2 dx = 1/3


b_0 = ∫_0^1 (x^2 + 5x + 6) dx = 1/3 + 5/2 + 6 = 53/6

b_1 = ∫_0^1 x(x^2 + 5x + 6) dx = ∫_0^1 (x^3 + 5x^2 + 6x) dx = 1/4 + 5/3 + 6/2 = 59/12

The normal equations are:

[ 1    1/2 ] [ a_0 ] = [ 53/6  ]   ⇒   a_0 = 5.8333, a_1 = 6.
[ 1/2  1/3 ] [ a_1 ]   [ 59/12 ]

The linear least-squares polynomial is P_1(x) = 5.8333 + 6x.

Accuracy Check:

Exact value: f(0.5) = 8.75; P_1(0.5) = 8.8333.

Relative error: |8.8333 − 8.75| / |8.75| = 0.0095.

Quadratic Least-Squares Approximation: P_2(x) = a_0 + a_1 x + a_2 x^2.

S = [ 1    1/2  1/3 ]
    [ 1/2  1/3  1/4 ]
    [ 1/3  1/4  1/5 ]

b_0 = 53/6, b_1 = 59/12,

b_2 = ∫_0^1 x^2 (x^2 + 5x + 6) dx = ∫_0^1 (x^4 + 5x^3 + 6x^2) dx = 1/5 + 5/4 + 6/3 = 69/20.

The solution of the linear system is a_0 = 6, a_1 = 5, a_2 = 1; that is,

P_2(x) = 6 + 5x + x^2 (exact).
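The exactness of this fit can be checked directly by solving the 3 × 3 normal system numerically (a quick check of ours, not in the text):

```python
import numpy as np

# Normal equations of the quadratic case of Example 10.4: the Hilbert-type
# moment matrix on [0, 1] and the right-hand side (53/6, 59/12, 69/20).
S = np.array([[1.0, 1/2, 1/3],
              [1/2, 1/3, 1/4],
              [1/3, 1/4, 1/5]])
b = np.array([53/6, 59/12, 69/20])
a = np.linalg.solve(S, b)
print(a)  # close to [6, 5, 1]: the fit recovers x^2 + 5x + 6 exactly
```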

10.1.3 Use of Orthogonal Polynomials in Least-Squares Approximations

The least-squares approximation using monomial polynomials, as described above, is not numerically effective, since the system matrix S of the normal equations is very often ill-conditioned.


For example, when the interval is [0, 1], we have seen that S is a Hilbert matrix, which is notoriously ill-conditioned even for modest values of n. When n = 5, the condition number of this matrix is cond(S) = O(10^5). Such computations can, however, be made numerically effective by using a special type of polynomials, called orthogonal polynomials.

Definition 10.5. The set of functions {φ_0, φ_1, ..., φ_n} on [a, b] is called a set of orthogonal functions with respect to a weight function w(x) if

∫_a^b w(x) φ_j(x) φ_i(x) dx = 0 if i ≠ j, and = C_j if i = j,

where C_j is a positive real number.

Furthermore, if C_j = 1 for j = 0, 1, ..., n, then the orthogonal set is called an orthonormal set.

Using this interesting property, least-squares computations can be made more numerically effective, as shown below. Without any loss of generality, let's assume that w(x) = 1.

Idea: The idea is to find a least-squares approximation of f(x) on [a, b] by means of a polynomial of the form

P_n(x) = a_0 φ_0(x) + a_1 φ_1(x) + ··· + a_n φ_n(x),

where {φ_k}_{k=0}^{n} is a set of orthogonal polynomials. That is, the basis for generating P_n(x) in this case is a set of orthogonal polynomials.

The question now arises: is it possible to write P_n(x) as a linear combination of the orthogonal polynomials? The answer is yes. It can be shown [Exercise] that, given a set of orthogonal polynomials {Q_i(x)}_{i=0}^{n} with deg Q_i = i, a polynomial P_n(x) of degree ≤ n can be written as

P_n(x) = a_0 Q_0(x) + a_1 Q_1(x) + ··· + a_n Q_n(x)

for some a_0, a_1, ..., a_n.

Finding the least-squares approximation of f(x) on [a, b] using orthogonal polynomials can then be stated as follows:


Least-Squares Approximation of a Function Using Orthogonal Polynomials

Given f(x), continuous on [a, b], find a_0, a_1, ..., a_n using a polynomial of the form

P_n(x) = a_0 φ_0(x) + a_1 φ_1(x) + ··· + a_n φ_n(x),

where {φ_k(x)}_{k=0}^{n} is a given set of orthogonal polynomials on [a, b], such that the error function

E(a_0, a_1, ..., a_n) = ∫_a^b [f(x) − (a_0 φ_0(x) + ··· + a_n φ_n(x))]^2 dx

is minimized.

As before, we set

∂E/∂a_i = 0, i = 0, 1, ..., n.

Now

∂E/∂a_0 = −2 ∫_a^b φ_0(x) [f(x) − a_0 φ_0(x) − a_1 φ_1(x) − ··· − a_n φ_n(x)] dx.

Setting this equal to zero, we get

∫_a^b φ_0(x) f(x) dx = ∫_a^b (a_0 φ_0(x) + ··· + a_n φ_n(x)) φ_0(x) dx.

Since {φ_k(x)}_{k=0}^{n} is an orthogonal set, we have

∫_a^b φ_0^2(x) dx = C_0, and ∫_a^b φ_0(x) φ_i(x) dx = 0, i ≠ 0.

Applying the above orthogonality property, we see that

∫_a^b φ_0(x) f(x) dx = C_0 a_0.

That is,

a_0 = (1/C_0) ∫_a^b φ_0(x) f(x) dx.

Similarly,

∂E/∂a_1 = −2 ∫_a^b φ_1(x) [f(x) − a_0 φ_0(x) − a_1 φ_1(x) − ··· − a_n φ_n(x)] dx.


Again, from the orthogonality property of {φ_j(x)}_{j=0}^{n} we have

∫_a^b φ_1^2(x) dx = C_1 and ∫_a^b φ_1(x) φ_i(x) dx = 0, i ≠ 1,

so, setting ∂E/∂a_1 = 0, we get

a_1 = (1/C_1) ∫_a^b φ_1(x) f(x) dx.

In general, we have

a_k = (1/C_k) ∫_a^b φ_k(x) f(x) dx, k = 0, 1, ..., n,

where

C_k = ∫_a^b φ_k^2(x) dx.

10.1.4 Expressions for a_k with Weight Function w(x)

If the weight function w(x) is included, then a_k is modified to

a_k = (1/C_k) ∫_a^b w(x) f(x) φ_k(x) dx, k = 0, 1, ..., n,

where

C_k = ∫_a^b w(x) φ_k^2(x) dx.

Algorithm 10.6 (Least-Squares Approximation Using Orthogonal Polynomials).

Inputs: f(x) — a continuous function on [a, b]; w(x) — a weight function (an integrable function on [a, b]); {φ_k(x)}_{k=0}^{n} — a set of (n + 1) orthogonal polynomials on [a, b].

Output: The coefficients a_0, a_1, ..., a_n such that

∫_a^b w(x) [f(x) − a_0 φ_0(x) − a_1 φ_1(x) − ··· − a_n φ_n(x)]^2 dx

is minimized.

Step 1. Compute C_k, k = 0, 1, ..., n, as follows:
For k = 0, 1, 2, ..., n do
  C_k = ∫_a^b w(x) φ_k^2(x) dx
End


Step 2. Compute a_k, k = 0, 1, ..., n, as follows:
For k = 0, 1, 2, ..., n do
  a_k = (1/C_k) ∫_a^b w(x) f(x) φ_k(x) dx
End

Generating a Set of Orthogonal Polynomials

The next question is: how do we generate a set of orthogonal polynomials on [a, b], with respect to a weight function w(x)?

The classical Gram-Schmidt process (see Chapter 7) can be used for this purpose. Starting with the set of monomials {1, x, ..., x^n}, we can generate an orthonormal set of polynomials {Q_k(x)}.

10.2 Least-Squares Approximation Using Legendre's Polynomials

Recall from the last chapter that the first few Legendre polynomials are given by:

φ_0(x) = 1
φ_1(x) = x
φ_2(x) = x^2 − 1/3
φ_3(x) = x^3 − (3/5)x

etc.

To use the Legendre polynomials to find the least-squares approximation of a function f(x), we set w(x) = 1 and [a, b] = [−1, 1], and compute

• C_k, k = 0, 1, ..., n, by Step 1 of Algorithm 10.6:

C_k = ∫_{−1}^{1} φ_k^2(x) dx,

and

• a_k, k = 0, 1, ..., n, by Step 2 of Algorithm 10.6:

a_k = (1/C_k) ∫_{−1}^{1} f(x) φ_k(x) dx.


The least-squares polynomial is then given by P_n(x) = a_0 φ_0(x) + a_1 φ_1(x) + ··· + a_n φ_n(x).

A few C_k's are now listed below:

C_0 = ∫_{−1}^{1} φ_0^2(x) dx = ∫_{−1}^{1} 1 dx = 2

C_1 = ∫_{−1}^{1} φ_1^2(x) dx = ∫_{−1}^{1} x^2 dx = 2/3

C_2 = ∫_{−1}^{1} φ_2^2(x) dx = ∫_{−1}^{1} (x^2 − 1/3)^2 dx = 8/45,

and so on.

Example 10.7

Find linear and quadratic least-squares approximations to f(x) = e^x using Legendre polynomials.

Linear Approximation: P_1(x) = a_0 φ_0(x) + a_1 φ_1(x); φ_0(x) = 1, φ_1(x) = x.

Step 1. Compute C_0 and C_1:

C_0 = ∫_{−1}^{1} φ_0^2(x) dx = ∫_{−1}^{1} dx = [x]_{−1}^{1} = 2

C_1 = ∫_{−1}^{1} φ_1^2(x) dx = ∫_{−1}^{1} x^2 dx = [x^3/3]_{−1}^{1} = 1/3 + 1/3 = 2/3.

Step 2. Compute a_0 and a_1:

a_0 = (1/C_0) ∫_{−1}^{1} φ_0(x) e^x dx = (1/2) ∫_{−1}^{1} e^x dx = (1/2)(e − 1/e)

a_1 = (1/C_1) ∫_{−1}^{1} f(x) φ_1(x) dx = (3/2) ∫_{−1}^{1} x e^x dx = 3/e.

The linear least-squares polynomial:

P_1(x) = a_0 φ_0(x) + a_1 φ_1(x) = (1/2)(e − 1/e) + (3/e) x.


Accuracy Check:

P_1(0.5) = (1/2)(e − 1/e) + (3/e)(0.5) = 1.7270, e^{0.5} = 1.6487

Relative error: |1.7270 − 1.6487| / |1.6487| = 0.0475.

Quadratic Approximation: P_2(x) = a_0 φ_0(x) + a_1 φ_1(x) + a_2 φ_2(x)

a_0 = (1/2)(e − 1/e), a_1 = 3/e (already computed above).

Step 1. Compute

C_2 = ∫_{−1}^{1} φ_2^2(x) dx = ∫_{−1}^{1} (x^2 − 1/3)^2 dx = [x^5/5 − (2/9)x^3 + x/9]_{−1}^{1} = 8/45.

Step 2. Compute

a_2 = (1/C_2) ∫_{−1}^{1} e^x φ_2(x) dx = (45/8) ∫_{−1}^{1} e^x (x^2 − 1/3) dx
    = (45/8) [(e − 5/e) − (1/3)(e − 1/e)] = (15/4)(e − 7/e) ≈ 0.5367.

The quadratic least-squares polynomial:

P_2(x) = (1/2)(e − 1/e) + (3/e) x + (15/4)(e − 7/e)(x^2 − 1/3).

Accuracy Check:

P_2(0.5) = 1.6823, e^{0.5} = 1.6487

Relative error: |1.6823 − 1.6487| / |1.6487| = 0.0204.

This agrees with the relative error obtained earlier with the (non-orthogonal) monomial basis of degree 2, as it must: the least-squares polynomial of degree 2 is unique, and only its representation in terms of the basis changes.
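These coefficients can be confirmed numerically. The sketch below (an illustration; the fine-grid trapezoidal quadrature is our choice) computes a_k = ∫ f φ_k dx / ∫ φ_k^2 dx for f(x) = e^x and the basis φ_0 = 1, φ_1 = x, φ_2 = x^2 − 1/3:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)
h = x[1] - x[0]

def integral(g):
    # composite trapezoidal rule on the fixed grid above
    return h * (g.sum() - 0.5 * (g[0] + g[-1]))

phis = [np.ones_like(x), x, x**2 - 1.0 / 3.0]
f = np.exp(x)
a = [integral(f * p) / integral(p * p) for p in phis]  # a_k = <f, phi_k>/C_k

P2_half = a[0] + a[1] * 0.5 + a[2] * (0.25 - 1.0 / 3.0)
# a is close to [1.1752, 1.1036, 0.5367]; P2_half is close to 1.6823
```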


10.3 Chebyshev polynomials: Another wonderful family of orthogonal polynomials

Definition 10.8. The set of polynomials defined by

T_n(x) = cos(n arccos x), n ≥ 0,

on [−1, 1] are called the Chebyshev polynomials.

To see that T_n(x) is a polynomial of degree n in our familiar form, we derive a recursive relation by noting that

T_0(x) = 1 (the Chebyshev polynomial of degree zero),
T_1(x) = x (the Chebyshev polynomial of degree 1).

10.3.1 A Recursive Relation for Generating Chebyshev Polynomials

Substitute θ = arccos x. Then

T_n(x) = cos(nθ), 0 ≤ θ ≤ π,

T_{n+1}(x) = cos((n + 1)θ) = cos nθ cos θ − sin nθ sin θ,

T_{n−1}(x) = cos((n − 1)θ) = cos nθ cos θ + sin nθ sin θ.

Adding the last two equations, we obtain

T_{n+1}(x) + T_{n−1}(x) = 2 cos nθ cos θ.

The right-hand side still does not look like a polynomial in x. But note that cos θ = x. So,

T_{n+1}(x) = 2 cos nθ cos θ − T_{n−1}(x) = 2x T_n(x) − T_{n−1}(x).

This is a three-term recurrence relation for generating the Chebyshev polynomials.

Three-Term Recurrence Formula for Chebyshev Polynomials

T_0(x) = 1, T_1(x) = x,
T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x), n ≥ 1.


Using this recursive relation, the Chebyshev polynomials of successive degrees can be generated:

n = 1: T_2(x) = 2x T_1(x) − T_0(x) = 2x^2 − 1,
n = 2: T_3(x) = 2x T_2(x) − T_1(x) = 2x(2x^2 − 1) − x = 4x^3 − 3x,

and so on.
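The recurrence translates directly into code operating on coefficient lists (a sketch of ours, not from the text; coefficients are stored lowest degree first):

```python
import numpy as np

def chebyshev_coeffs(n):
    """Coefficient arrays (lowest degree first) of T_0, ..., T_n,
    generated by the recurrence T_{k+1} = 2x T_k - T_{k-1}."""
    T = [np.array([1.0]), np.array([0.0, 1.0])]  # T_0 = 1, T_1 = x
    for k in range(1, n):
        two_x_Tk = np.concatenate(([0.0], 2.0 * T[k]))          # multiply T_k by 2x
        prev = np.pad(T[k - 1], (0, len(two_x_Tk) - len(T[k - 1])))
        T.append(two_x_Tk - prev)
    return T[: n + 1]

T = chebyshev_coeffs(4)
# T[2] -> [-1, 0, 2] (i.e. 2x^2 - 1);  T[3] -> [0, -3, 0, 4] (i.e. 4x^3 - 3x)
```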

10.3.2 The orthogonal property of the Chebyshev polynomials

We now show that the Chebyshev polynomials are orthogonal with respect to the weight function

w(x) = 1/√(1 − x^2)

on the interval [−1, 1]. To demonstrate the orthogonality property of these polynomials, we show that:

Orthogonal Property of the Chebyshev Polynomials

∫_{−1}^{1} T_m(x) T_n(x)/√(1 − x^2) dx = 0 if m ≠ n; π/2 if m = n ≥ 1; π if m = n = 0.

First, consider the case m ≠ n:

∫_{−1}^{1} T_m(x) T_n(x)/√(1 − x^2) dx = ∫_{−1}^{1} cos(m arccos x) cos(n arccos x)/√(1 − x^2) dx.

Since θ = arccos x gives

dθ = −dx/√(1 − x^2),


the above integral becomes

−∫_π^0 cos mθ cos nθ dθ = ∫_0^π cos mθ cos nθ dθ.

Now, cos mθ cos nθ can be written as (1/2)[cos(m + n)θ + cos(m − n)θ]. So,

∫_0^π cos mθ cos nθ dθ = (1/2) ∫_0^π cos(m + n)θ dθ + (1/2) ∫_0^π cos(m − n)θ dθ

= (1/2) [sin(m + n)θ/(m + n)]_0^π + (1/2) [sin(m − n)θ/(m − n)]_0^π = 0.

Similarly, it can be shown [Exercise] that

∫_{−1}^{1} T_n^2(x)/√(1 − x^2) dx = π/2 for n ≥ 1.
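This orthogonality is easy to verify numerically through the same substitution x = cos θ (a quick check of ours, not from the text):

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 100001)
h = theta[1] - theta[0]

def cheb_inner(m, n):
    """Integral of T_m(x) T_n(x) / sqrt(1 - x^2) over [-1, 1], computed
    as the integral of cos(m t) cos(n t) over [0, pi] (trapezoidal rule)."""
    g = np.cos(m * theta) * np.cos(n * theta)
    return h * (g.sum() - 0.5 * (g[0] + g[-1]))

# cheb_inner(2, 3) is close to 0; cheb_inner(3, 3) to pi/2; cheb_inner(0, 0) to pi
```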

10.4 The Least-Squares Approximation Using Chebyshev Polynomials

As before, the Chebyshev polynomials can be used to find least-squares approximations to a function f(x), as stated below. In Algorithm 10.6, set

w(x) = 1/√(1 − x^2), [a, b] = [−1, 1], and φ_k(x) = T_k(x).

Then, using the orthogonality property of the Chebyshev polynomials, it is easy to see that:


C_0 = ∫_{−1}^{1} T_0^2(x)/√(1 − x^2) dx = π,

C_k = ∫_{−1}^{1} T_k^2(x)/√(1 − x^2) dx = π/2, k = 1, ..., n.

Thus, from Step 2 of Algorithm 10.6, the least-squares approximating polynomial P_n(x) of f(x) using Chebyshev polynomials is given by

P_n(x) = a_0 T_0(x) + a_1 T_1(x) + ··· + a_n T_n(x),

where

a_0 = (1/π) ∫_{−1}^{1} f(x)/√(1 − x^2) dx

and

a_i = (2/π) ∫_{−1}^{1} f(x) T_i(x)/√(1 − x^2) dx, i = 1, ..., n.

Example 10.9

Find a linear least-squares approximation of f(x) = e^x using Chebyshev polynomials.

Here

P_1(x) = a_0 φ_0(x) + a_1 φ_1(x) = a_0 T_0(x) + a_1 T_1(x) = a_0 + a_1 x,

where

a_0 = (1/π) ∫_{−1}^{1} e^x/√(1 − x^2) dx ≈ 1.2660

a_1 = (2/π) ∫_{−1}^{1} x e^x/√(1 − x^2) dx ≈ 1.1303

Thus, P_1(x) = 1.2660 + 1.1303x.

Accuracy Check:

P_1(0.5) = 1.8312; e^{0.5} = 1.6487


Relative error: |1.6487 − 1.8312| / 1.6487 = 0.1106.
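The two coefficients a_0 and a_1 can be reproduced numerically using the substitution x = cos θ, which removes the singular weight (a sketch of ours, not from the text):

```python
import numpy as np

t = np.linspace(0.0, np.pi, 100001)
h = t[1] - t[0]

def trap(g):
    return h * (g.sum() - 0.5 * (g[0] + g[-1]))  # trapezoidal rule

# With x = cos(t):  a_0 = (1/pi) * int_0^pi e^{cos t} dt,
#                   a_1 = (2/pi) * int_0^pi e^{cos t} cos t dt
a0 = trap(np.exp(np.cos(t))) / np.pi
a1 = 2.0 * trap(np.exp(np.cos(t)) * np.cos(t)) / np.pi
# a0 is close to 1.2660 and a1 to 1.1303, as computed above
```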

10.5 Monic Chebyshev Polynomials

Note that T_k(x) is a Chebyshev polynomial of degree k with leading coefficient 2^{k−1}, k ≥ 1. Thus we can generate a set of monic Chebyshev polynomials from the polynomials T_k(x) as follows:

• The monic Chebyshev polynomials, T̃_k(x), are given by

T̃_0(x) = 1, T̃_k(x) = (1/2^{k−1}) T_k(x), k ≥ 1.

• The k zeros of T̃_k(x) are easily calculated [Exercise]:

x̃_j = cos((2j − 1)π/(2k)), j = 1, 2, ..., k.

• The maximum and minimum values of T̃_k(x) on [−1, 1] occur [Exercise] at

x̃_j = cos(jπ/k), with T̃_k(x̃_j) = (−1)^j/2^{k−1}, j = 0, 1, ..., k.

10.6 Minimax Polynomial Approximations with Chebyshev Polynomials

As seen above, the Chebyshev polynomials can, of course, be used to find least-squares polynomial approximations. However, these polynomials have several other wonderful polynomial approximation properties. First, we state the minimax property of the Chebyshev polynomials.

Minimax Property of the Chebyshev Polynomials

If P_n(x) is any monic polynomial of degree n, then

1/2^{n−1} = max_{x∈[−1,1]} |T̃_n(x)| ≤ max_{x∈[−1,1]} |P_n(x)|.

Moreover, equality holds when P_n(x) ≡ T̃_n(x).


Interpretation: The above result says that the absolute maximum of the monic Chebyshev polynomial of degree n, T̃_n(x), over [−1, 1] is less than or equal to that of any other monic polynomial P_n(x) of degree n over the same interval.

Proof. By contradiction [Exercise].

10.6.1 Choosing the Interpolating Nodes with the Chebyshev Zeros

Recall from Chapter 6 that the error in polynomial interpolation of a function f(x) with the nodes x_0, x_1, ..., x_n by a polynomial P_n(x) of degree at most n is given by

E = f(x) − P_n(x) = (f^{(n+1)}(ξ)/(n + 1)!) Ψ(x),

where Ψ(x) = (x − x_0)(x − x_1) ··· (x − x_n).

The question is: how should the (n + 1) nodes x_0, x_1, ..., x_n be chosen so that the maximum of |Ψ(x)| is minimized on [−1, 1]?

The answer can be given from the above minimax property of the monic Chebyshev polynomials. Note that Ψ(x) is a monic polynomial of degree (n + 1). So, by the minimax property of the Chebyshev polynomials, we have

max_{x∈[−1,1]} |T̃_{n+1}(x)| ≤ max_{x∈[−1,1]} |Ψ(x)|.

Thus:

• The maximum value of |Ψ(x)| = |(x − x_0)(x − x_1) ··· (x − x_n)| on [−1, 1] is smallest when x_0, x_1, ..., x_n are chosen as the (n + 1) zeros of the (n + 1)st-degree monic Chebyshev polynomial T̃_{n+1}(x); that is, when

x_k = cos((2k + 1)π/(2(n + 1))), k = 0, 1, ..., n.

• The smallest absolute maximum value of Ψ(x), with x_0, x_1, ..., x_n chosen as above, is 1/2^n.
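The effect of this choice of nodes is easy to see numerically (an illustration of ours, not from the text): with Chebyshev nodes the maximum of |Ψ(x)| on [−1, 1] is 1/2^n, far smaller than with equally spaced nodes.

```python
import numpy as np

def max_abs_psi(nodes, samples=100001):
    """Approximate max over [-1, 1] of |(x - x_0)...(x - x_n)| on a fine grid."""
    x = np.linspace(-1.0, 1.0, samples)
    psi = np.ones_like(x)
    for xk in nodes:
        psi *= x - xk
    return np.max(np.abs(psi))

n = 8
equi = np.linspace(-1.0, 1.0, n + 1)                              # equispaced nodes
cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1))) # zeros of T_{n+1}
# max_abs_psi(cheb) is close to 1/2**8 = 0.0039; max_abs_psi(equi) is much larger
```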

10.6.2 Working with an Arbitrary Interval

If the interval is [a, b], different from [−1, 1], then the zeros of T̃_{n+1}(x) need to be shifted by using the transformation

x̃ = (1/2)[(b − a)x + (a + b)].


Example 10.10

Let the interpolating polynomial be of degree at most 2 and the interval be [1.5, 2].

The three zeros of T̃_3(x) in [−1, 1] are given by

x̃_0 = cos(π/6), x̃_1 = cos(π/2), and x̃_2 = cos(5π/6).

These zeros are shifted using the transformation:

x_i = (1/2)[(2 − 1.5) x̃_i + (2 + 1.5)], i = 0, 1, 2.

10.6.3 Use of Chebyshev Polynomials to Economize Power Series

Power Series Economization

Let P_n(x) = a_0 + a_1 x + ··· + a_n x^n be a polynomial of degree n obtained by truncating a power series expansion of a continuous function on [a, b]. The problem is to find a polynomial P_r(x) of degree r (< n) such that

max_{x∈[−1,1]} |P_n(x) − P_r(x)|

is as small as possible.

We first consider r = n − 1; that is, the problem of approximating P_n(x) by a polynomial P_{n−1}(x) of degree n − 1. If the total error (the sum of the truncation error and the error of approximation) is still within an acceptable tolerance, we then approximate P_{n−1}(x) by a polynomial P_{n−2}(x), and the process is continued until the accumulated error exceeds the tolerance.

We show below how the minimax property of the Chebyshev polynomials can be gainfully used here. First note that (1/a_n)[P_n(x) − P_{n−1}(x)] is a monic polynomial of degree n. So, by the minimax property, we have

max_{x∈[−1,1]} |(1/a_n)[P_n(x) − P_{n−1}(x)]| ≥ max_{x∈[−1,1]} |T̃_n(x)| = 1/2^{n−1}.

Thus, if we choose

P_{n−1}(x) = P_n(x) − a_n T̃_n(x),

then the minimum value of the maximum error of approximating P_n(x) by P_{n−1}(x) over [−1, 1] is attained:

max_{−1≤x≤1} |P_n(x) − P_{n−1}(x)| = |a_n|/2^{n−1}.

If this quantity, |a_n|/2^{n−1}, plus the error due to the truncation of the power series, is within the permissible tolerance ε, we can then repeat the process by constructing P_{n−2}(x) from P_{n−1}(x) as above. The process can be continued until the accumulated error exceeds the error tolerance ε.


So, the process can be summarized as follows:

Power Series Economization Process by Chebyshev Polynomials

Inputs: (i) f(x) — an (n + 1)-times differentiable function on [a, b]; (ii) n — a positive integer.

Output: An economized power series of f(x).

Step 1. Obtain P_n(x) = a_0 + a_1 x + ··· + a_n x^n by truncating the power series expansion of f(x).

Step 2. Find the truncation error

E_TR(x) = remainder after n terms = |f^{(n+1)}(ξ(x))| |x|^{n+1}/(n + 1)!

and compute an upper bound of |E_TR(x)| for −1 ≤ x ≤ 1.

Step 3. Compute P_{n−1}(x):

P_{n−1}(x) = P_n(x) − a_n T̃_n(x).

Step 4. Check whether the maximum value of the total error (|E_TR| + |a_n|/2^{n−1}) is less than ε. If so, approximate P_{n−1}(x) by P_{n−2}(x). Compute P_{n−2}(x) as:

P_{n−2}(x) = P_{n−1}(x) − a_{n−1} T̃_{n−1}(x).

Step 5. Compute the error of the current approximation:

max |P_{n−1}(x) − P_{n−2}(x)| = |a_{n−1}|/2^{n−2}.

Step 6. Compute the accumulated error (|E_TR| + |a_n|/2^{n−1} + |a_{n−1}|/2^{n−2}). If this error is still less than ε, continue until the accumulated error exceeds ε.

Example 10.11

Find the economized power series for cos x = 1 − x^2/2! + x^4/4! − ··· for 0 ≤ x ≤ 1 with a tolerance ε = 0.05.


Step 1. Let's first try with n = 4:

P_4(x) = 1 − (1/2)x^2 + 0 · x^3 + x^4/4!

Step 2. Truncation error:

TR(x) = (f^{(5)}(ξ(x))/5!) x^5.

Since f^{(5)}(x) = −sin x and |sin x| ≤ 1 for all x, we have

|TR(x)| ≤ (1/120)(1)(1)^5 = 0.0083.

Step 3. Compute P_3(x):

P_3(x) = P_4(x) − a_4 T̃_4(x) = 1 − (1/2)x^2 + x^4/4! − (1/4!)(x^4 − x^2 + 1/8) = −0.4583x^2 + 0.9948.

Step 4. Maximum absolute accumulated error = maximum absolute truncation error + maximum absolute Chebyshev error = 0.0083 + (1/24)(1/8) = 0.0135, which is less than ε = 0.05.

Thus, the economized power series of f(x) = cos x for 0 ≤ x ≤ 1 with an error tolerance of ε = 0.05 is

P_3(x) = −0.4583x^2 + 0.9948

(here the coefficient of x^3 is zero, so P_3(x) reduces to a quadratic).

Accuracy Check: We check below how accurate this approximation is for three values of x in 0 ≤ x ≤ 1: the two extreme values x = 0 and x = 1, and one intermediate value x = 0.5.

For x = 0.5: cos(0.5) = 0.8776, P_3(0.5) = 0.8802; error = |cos(0.5) − P_3(0.5)| = |−0.0026| = 0.0026.

For x = 0: cos(0) = 1, P_3(0) = 0.9948; error = |cos(0) − P_3(0)| = 0.0052.


For x = 1: cos(1) = 0.5403, P_3(1) = 0.5365; error = |cos(1) − P_3(1)| = 0.0038.

Remark: These errors are all well within the theoretical accumulated error bound 0.0135, which is itself far below the tolerance 0.05.
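The economized approximation is easy to verify on a grid (a check of ours, not from the text): build P_3 exactly as in Step 3 and compare it against cos x on [0, 1].

```python
import numpy as np

def P4(x):
    return 1.0 - x**2 / 2.0 + x**4 / 24.0      # truncated cosine series

def T4_monic(x):
    return x**4 - x**2 + 1.0 / 8.0             # monic Chebyshev polynomial of degree 4

def P3(x):
    return P4(x) - (1.0 / 24.0) * T4_monic(x)  # economized polynomial

xs = np.linspace(0.0, 1.0, 1001)
max_err = np.max(np.abs(np.cos(xs) - P3(xs)))
# max_err stays below the accumulated error bound 0.0135
```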