Chapter 2
Least Squares Approximation
This method of curve fitting was suggested early in the nineteenth century by the French mathematician Adrien-Marie Legendre.
The method of least squares takes as the best-fitting line the one for which the sum of the squares of the vertical distances of the points $(x_i, y_i)$ from the line is a minimum.
The simplest example of a least-squares approximation is fitting a straight line to a set of paired observations $(x_1,y_1),(x_2,y_2),\dots,(x_n,y_n)$. The mathematical expression for the straight line is
$$y=a_0+ax+e,$$
where $e$ is the error (residual).
Minimizing the sum of squared errors
The sum of the squared errors is
$$\varphi(a_0,a)=\sum_{k=1}^{n}\left(a_0+ax_k-y_k\right)^2.$$
The necessary conditions for a minimum (from the partial derivatives of calculus) are
$$\frac{\partial\varphi}{\partial a_0}=\frac{\partial\varphi}{\partial a}=0,$$
which leads to the normal equations
$$\sum_{k=1}^{n}2\left(a_0+ax_k-y_k\right)=0,\qquad \sum_{k=1}^{n}2\left(a_0+ax_k-y_k\right)x_k=0.$$
Simplified notation
Let $p=\sum_{k=1}^{n}x_k$, $q=\sum_{k=1}^{n}y_k$, $r=\sum_{k=1}^{n}x_ky_k$ and $s=\sum_{k=1}^{n}x_k^2$. This yields
$$\begin{cases}a_0n+ap=q\\ a_0p+as=r\end{cases}\implies\begin{pmatrix}n & p\\ p & s\end{pmatrix}\begin{pmatrix}a_0\\ a\end{pmatrix}=\begin{pmatrix}q\\ r\end{pmatrix}$$
Solving this simple $2\times2$ system using Cramer's rule gives
$$a=\frac{pq-nr}{p^2-ns},\qquad a_0=\frac{pr-sq}{p^2-ns}.$$
Matrix view of the normal equations
It often happens that $Ax=b$ has no solution. The usual reason is that the matrix has more rows than columns: there are more equations than unknowns. We cannot always get the error $e=b-Ax$ down to zero. When $e$ is zero, $x$ is an exact solution to $Ax=b$. When the length of $e$ is as small as possible, $\hat{x}$ is a least-squares solution. Our goal in this section is to compute $\hat{x}$ and use it. These are real problems and they need an answer.
When $Ax=b$ has no solution, multiply by $A^T$ and solve $A^TA\hat{x}=A^Tb$.
Minimizing the error
How do we make the error $e=b-Ax$ as small as possible? This is an important question with a beautiful answer. The best $x$ (called $\hat{x}$) can be found by geometry, algebra, or calculus: use a $90^\circ$ angle, project using a projection matrix $P$, or set the derivative of the error to zero. We will use the calculus approach only.
By calculus:
Suppose $y=a_0+ax$ is the line which best fits the data $(x_1,y_1),(x_2,y_2),\dots,(x_n,y_n)$. Most functions are minimized by calculus. Here the error function $E$ to be minimized is the sum of squares $e_1^2+e_2^2+e_3^2+\dots+e_n^2$:
$$E=\lVert Ax-b\rVert^2=\left(a_0+ax_1-y_1\right)^2+\left(a_0+ax_2-y_2\right)^2+\dots+\left(a_0+ax_n-y_n\right)^2$$
$$\frac{\partial E}{\partial a_0}=2\sum_{i=1}^{n}\left(a_0+ax_i-y_i\right)=0\implies\sum_{i=1}^{n}\left(a_0+ax_i\right)=\sum_{i=1}^{n}y_i$$
$$\frac{\partial E}{\partial a}=2\sum_{i=1}^{n}\left(a_0+ax_i-y_i\right)x_i=0\implies\sum_{i=1}^{n}\left(a_0x_i+ax_i^2\right)=\sum_{i=1}^{n}x_iy_i$$
Thus
$$\begin{pmatrix}n & \sum_{i=1}^{n}x_i\\ \sum_{i=1}^{n}x_i & \sum_{i=1}^{n}x_i^2\end{pmatrix}=A^TA,\quad\text{where } A=\begin{pmatrix}1 & x_1\\ 1 & x_2\\ \vdots & \vdots\\ 1 & x_n\end{pmatrix}$$
and
$$\begin{pmatrix}\sum_{i=1}^{n}y_i\\ \sum_{i=1}^{n}x_iy_i\end{pmatrix}=A^Tb.$$
The equation is identical with $A^TA\hat{x}=A^Tb$. The best $a_0$ and $a$ are the components of $\hat{x}$. The equations from calculus are the same as the "normal equations" from linear algebra. These are the key equations of least squares:
The partial derivatives of $\lVert Ax-b\rVert^2$ are zero when $A^TA\hat{x}=A^Tb$.
Example: Suppose our data $(x_i,y_i)$ consist of the pairs $(-2,4),(-1,2),(0,1),(2,1)$ and $(3,1)$. Find the least-squares solution $y=a_0+ax$.
Solution
$x_1=-2$, $x_2=-1$, $x_3=0$, $x_4=2$ and $x_5=3$
$$p=\sum_{k=1}^{5}x_k=-2-1+0+2+3=2$$
$y_1=4$, $y_2=2$, $y_3=1$, $y_4=1$ and $y_5=1$
$$q=\sum_{k=1}^{5}y_k=4+2+1+1+1=9$$
$$r=\sum_{k=1}^{5}x_ky_k=x_1y_1+x_2y_2+x_3y_3+x_4y_4+x_5y_5=-8-2+0+2+3=-5$$
$$s=\sum_{k=1}^{5}x_k^2=x_1^2+x_2^2+x_3^2+x_4^2+x_5^2=4+1+0+4+9=18$$
$$a=\frac{pq-nr}{p^2-ns}=\frac{2(9)-5(-5)}{2^2-18(5)}=\frac{43}{-86}=-0.5$$
$$a_0=\frac{pr-sq}{p^2-ns}=\frac{2(-5)-18(9)}{2^2-18(5)}=\frac{-172}{-86}=2$$
Therefore the linear equation which best fits the given data is $y=2-0.5x$.
Or, using matrices:
$$A^TA=\begin{pmatrix}n & \sum_{i=1}^{n}x_i\\ \sum_{i=1}^{n}x_i & \sum_{i=1}^{n}x_i^2\end{pmatrix}=\begin{pmatrix}5 & \sum_{i=1}^{5}x_i\\ \sum_{i=1}^{5}x_i & \sum_{i=1}^{5}x_i^2\end{pmatrix}=\begin{pmatrix}5 & 2\\ 2 & 18\end{pmatrix}$$
$$A^Tb=\begin{pmatrix}\sum_{i=1}^{5}y_i\\ \sum_{i=1}^{5}x_iy_i\end{pmatrix}=\begin{pmatrix}9\\ -5\end{pmatrix}$$
Let $\hat{x}=\begin{pmatrix}a_0\\ a\end{pmatrix}$. Then
$$A^TA\hat{x}=A^Tb\implies\hat{x}=\begin{pmatrix}a_0\\ a\end{pmatrix}=\left(A^TA\right)^{-1}A^Tb=\begin{pmatrix}2\\ -0.5\end{pmatrix}$$
Therefore the best-fitting straight line is $y=2-0.5x$.
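As a numerical check, the matrix route can be reproduced in a few lines. A sketch assuming NumPy is available (the text itself uses no software):

```python
# Sketch: solve the normal equations A^T A x_hat = A^T b for the example data.
import numpy as np

x = np.array([-2, -1, 0, 2, 3], dtype=float)
y = np.array([4, 2, 1, 1, 1], dtype=float)

A = np.column_stack([np.ones_like(x), x])   # columns [1, x], so A @ [a0, a] ~ y
a0, a = np.linalg.solve(A.T @ A, A.T @ y)   # normal equations
print(a0, a)                                # 2.0 -0.5, i.e. y = 2 - 0.5 x
```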
Curve fitting: Suppose we know that the relation between $x$ and $y$ is given by the quadratic law $y=a+bx+cx^2$, so we want to fit a parabola $y=a+bx+cx^2$ to the data. Then our unknowns are $a$, $b$ and $c$, and they should satisfy the equations
$$y_i=a+bx_i+cx_i^2,\quad i=1,2,3,\dots,n$$
In matrix form:
$$\begin{pmatrix}1 & x_1 & x_1^2\\ 1 & x_2 & x_2^2\\ \vdots & \vdots & \vdots\\ 1 & x_n & x_n^2\end{pmatrix}\begin{pmatrix}a\\ b\\ c\end{pmatrix}=\begin{pmatrix}y_1\\ y_2\\ \vdots\\ y_n\end{pmatrix}$$
Example: Using the data $(-2,4),(-1,2),(0,1),(2,1)$ and $(3,1)$, find the least-squares solution $y=a+bx+cx^2$.
Solution
$$\begin{pmatrix}1 & -2 & 4\\ 1 & -1 & 1\\ 1 & 0 & 0\\ 1 & 2 & 4\\ 1 & 3 & 9\end{pmatrix}\begin{pmatrix}a\\ b\\ c\end{pmatrix}=\begin{pmatrix}4\\ 2\\ 1\\ 1\\ 1\end{pmatrix}$$
$$A=\begin{pmatrix}1 & -2 & 4\\ 1 & -1 & 1\\ 1 & 0 & 0\\ 1 & 2 & 4\\ 1 & 3 & 9\end{pmatrix}\implies A^TA=\begin{pmatrix}5 & 2 & 18\\ 2 & 18 & 26\\ 18 & 26 & 114\end{pmatrix}$$
$$A^Ty=\begin{pmatrix}9\\ -5\\ 31\end{pmatrix}$$
$$\begin{pmatrix}a\\ b\\ c\end{pmatrix}=\left(A^TA\right)^{-1}A^Ty=\begin{pmatrix}1.1169\\ -0.8052\\ 0.2792\end{pmatrix}$$
The best-fitting parabola is $y=1.1169-0.8052x+0.2792x^2$.
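The same check works for the parabola; only the design matrix changes. Again a sketch assuming NumPy:

```python
# Sketch: quadratic fit via the normal equations, columns [1, x, x^2].
import numpy as np

x = np.array([-2, -1, 0, 2, 3], dtype=float)
y = np.array([4, 2, 1, 1, 1], dtype=float)

A = np.column_stack([np.ones_like(x), x, x**2])
a, b, c = np.linalg.solve(A.T @ A, A.T @ y)
print(a, b, c)   # approx 1.1169 -0.8052 0.2792
```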
Orthogonal Polynomials
Discrete Least-Squares Approximation Problem
Given a set of $m$ discrete data points $(x_i,y_i)$, $i=1,2,\dots,m$, find the algebraic polynomial
$$P_n(x)=a_0+a_1x+a_2x^2+\dots+a_nx^n\quad(n<m)$$
such that the error $E(a_0,a_1,\dots,a_n)$ in the least-squares sense is minimized; that is,
$$E(a_0,a_1,\dots,a_n)=\sum_{i=1}^{m}\left(y_i-\left(a_0+a_1x_i+a_2x_i^2+\dots+a_nx_i^n\right)\right)^2$$
is minimum.
Here $E(a_0,a_1,\dots,a_n)$ is a function of the $(n+1)$ variables $a_0,a_1,\dots,a_n$. For this function to be minimum, we must have
$$\frac{\partial E}{\partial a_i}=0,\quad i=0,1,2,\dots,n$$
$$\frac{\partial E}{\partial a_0}=-2\sum_{i=1}^{m}\left(y_i-\left(a_0+a_1x_i+a_2x_i^2+\dots+a_nx_i^n\right)\right)$$
$$\frac{\partial E}{\partial a_1}=-2\sum_{i=1}^{m}\left(y_i-\left(a_0+a_1x_i+a_2x_i^2+\dots+a_nx_i^n\right)\right)x_i$$
$$\vdots$$
$$\frac{\partial E}{\partial a_n}=-2\sum_{i=1}^{m}\left(y_i-\left(a_0+a_1x_i+a_2x_i^2+\dots+a_nx_i^n\right)\right)x_i^n$$
Setting these equations to zero, we have
$$a_0\sum_{i=1}^{m}1+a_1\sum_{i=1}^{m}x_i+a_2\sum_{i=1}^{m}x_i^2+\dots+a_n\sum_{i=1}^{m}x_i^n=\sum_{i=1}^{m}y_i$$
$$a_0\sum_{i=1}^{m}x_i+a_1\sum_{i=1}^{m}x_i^2+a_2\sum_{i=1}^{m}x_i^3+\dots+a_n\sum_{i=1}^{m}x_i^{n+1}=\sum_{i=1}^{m}x_iy_i$$
$$\vdots$$
$$a_0\sum_{i=1}^{m}x_i^n+a_1\sum_{i=1}^{m}x_i^{n+1}+a_2\sum_{i=1}^{m}x_i^{n+2}+\dots+a_n\sum_{i=1}^{m}x_i^{2n}=\sum_{i=1}^{m}x_i^ny_i$$
Set
$$s_k=\sum_{i=1}^{m}x_i^k,\quad k=0,1,2,\dots,2n\qquad\text{and}\qquad b_k=\sum_{i=1}^{m}x_i^ky_i,\quad k=0,1,2,\dots,n$$
Using these notations, the above equations can be written as:
$$s_0a_0+s_1a_1+\dots+s_na_n=b_0\quad\left(\text{note that }\sum_{i=1}^{m}x_i^0=s_0\right)$$
$$s_1a_0+s_2a_1+\dots+s_{n+1}a_n=b_1$$
$$\vdots$$
$$s_na_0+s_{n+1}a_1+\dots+s_{2n}a_n=b_n$$
This is a system of $(n+1)$ equations in the $(n+1)$ unknowns $a_0,a_1,\dots,a_n$. These equations are called the normal equations. This system can now be solved to obtain these $(n+1)$ unknowns, provided a solution to the system exists. The system can be written in the following matrix form:
$$\begin{pmatrix}s_0 & s_1 & \cdots & s_n\\ s_1 & s_2 & \cdots & s_{n+1}\\ \vdots & \vdots & & \vdots\\ s_n & s_{n+1} & \cdots & s_{2n}\end{pmatrix}\begin{pmatrix}a_0\\ a_1\\ \vdots\\ a_n\end{pmatrix}=\begin{pmatrix}b_0\\ b_1\\ \vdots\\ b_n\end{pmatrix}$$
or $Sa=b$, where
$$S=\begin{pmatrix}s_0 & s_1 & \cdots & s_n\\ s_1 & s_2 & \cdots & s_{n+1}\\ \vdots & \vdots & & \vdots\\ s_n & s_{n+1} & \cdots & s_{2n}\end{pmatrix},\quad a=\begin{pmatrix}a_0\\ a_1\\ \vdots\\ a_n\end{pmatrix}\quad\text{and}\quad b=\begin{pmatrix}b_0\\ b_1\\ \vdots\\ b_n\end{pmatrix}$$
Define
$$A=\begin{pmatrix}1 & x_1 & x_1^2 & \cdots & x_1^n\\ 1 & x_2 & x_2^2 & \cdots & x_2^n\\ 1 & x_3 & x_3^2 & \cdots & x_3^n\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & x_m & x_m^2 & \cdots & x_m^n\end{pmatrix}$$
Then the above system has the form
$$A^TA\,a=b.$$
In this case, the matrix $S=A^TA$ is symmetric and positive definite, and is therefore nonsingular. Thus, if the $x_i$'s are distinct, the equation $Sa=b$ has a unique solution.
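For a general degree $n$, the system $Sa=b$ can be assembled directly from the moment sums $s_k$ and the right-hand sides $b_k$ defined above. A sketch (the helper name `lsq_poly` and the use of NumPy are our own choices):

```python
# Sketch: build the normal-equation matrix S (S[i][j] = s_{i+j}) and solve S a = b.
import numpy as np

def lsq_poly(x, y, n):
    """Least-squares fit of a0 + a1*x + ... + an*x^n to the points (x_i, y_i)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    s = [np.sum(x**k) for k in range(2 * n + 1)]     # s_k = sum of x_i^k
    b = [np.sum(x**k * y) for k in range(n + 1)]     # b_k = sum of x_i^k * y_i
    S = np.array([[s[i + j] for j in range(n + 1)] for i in range(n + 1)])
    return np.linalg.solve(S, np.array(b))           # coefficients a0..an

print(lsq_poly([-2, -1, 0, 2, 3], [4, 2, 1, 1, 1], 2))   # same parabola as above
```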
Least-Squares Approximation of a Function
We have described least-squares approximation for fitting a set of discrete data. Here we describe continuous least-squares approximation of a function $f(x)$ by polynomials. First, consider approximation by a polynomial with the monomial basis $\{1,x,x^2,\dots,x^n\}$.
Least-Squares Approximation of a Function Using Monomial Polynomials
Given a function $f(x)$, continuous on $[a,b]$, find a polynomial $P_n(x)$ of degree at most $n$:
$$P_n(x)=a_0+a_1x+a_2x^2+\dots+a_nx^n$$
such that the integral of the square of the error is minimized. That is,
$$E=\int_a^b\left(f(x)-P_n(x)\right)^2dx$$
is minimized.
The polynomial $P_n(x)$ is called the least-squares polynomial. Since $E$ is a function of $a_0,a_1,\dots,a_n$, we denote it by $E(a_0,a_1,\dots,a_n)$. For minimization, we must have
$$\frac{\partial E}{\partial a_i}=0,\quad i=0,1,\dots,n.$$
As before, these conditions give rise to a system of $(n+1)$ normal equations in the $(n+1)$ unknowns $a_0,a_1,\dots,a_n$. Solution of these equations will yield the unknowns.
Setting up the Normal Equations
Since
$$E=\int_a^b\left[f(x)-\left(a_0+a_1x+a_2x^2+\dots+a_nx^n\right)\right]^2dx,$$
$$\frac{\partial E}{\partial a_0}=-2\int_a^b\left[f(x)-\left(a_0+a_1x+a_2x^2+\dots+a_nx^n\right)\right]dx$$
$$\frac{\partial E}{\partial a_1}=-2\int_a^bx\left[f(x)-\left(a_0+a_1x+a_2x^2+\dots+a_nx^n\right)\right]dx$$
$$\vdots$$
$$\frac{\partial E}{\partial a_n}=-2\int_a^bx^n\left[f(x)-\left(a_0+a_1x+a_2x^2+\dots+a_nx^n\right)\right]dx$$
Since $\frac{\partial E}{\partial a_i}=0$ for $i=0,1,2,\dots,n$, we have
$$\int_a^bf(x)\,dx=a_0\int_a^bdx+a_1\int_a^bx\,dx+\dots+a_n\int_a^bx^n\,dx$$
$$\int_a^bxf(x)\,dx=a_0\int_a^bx\,dx+a_1\int_a^bx^2\,dx+\dots+a_n\int_a^bx^{n+1}\,dx$$
$$\vdots$$
$$\int_a^bx^nf(x)\,dx=a_0\int_a^bx^n\,dx+a_1\int_a^bx^{n+1}\,dx+\dots+a_n\int_a^bx^{2n}\,dx$$
Denote
$$\int_a^bx^i\,dx=s_i,\quad i=0,1,2,\dots,2n\qquad\text{and}\qquad\int_a^bx^if(x)\,dx=b_i,\quad i=0,1,2,\dots,n$$
Then the above equations can be written as
$$a_0s_0+a_1s_1+a_2s_2+\dots+a_ns_n=b_0$$
$$a_0s_1+a_1s_2+a_2s_3+\dots+a_ns_{n+1}=b_1$$
$$\vdots$$
$$a_0s_n+a_1s_{n+1}+a_2s_{n+2}+\dots+a_ns_{2n}=b_n$$
or in matrix form
$$\begin{pmatrix}s_0 & s_1 & \cdots & s_n\\ s_1 & s_2 & \cdots & s_{n+1}\\ \vdots & \vdots & & \vdots\\ s_n & s_{n+1} & \cdots & s_{2n}\end{pmatrix}\begin{pmatrix}a_0\\ a_1\\ \vdots\\ a_n\end{pmatrix}=\begin{pmatrix}b_0\\ b_1\\ \vdots\\ b_n\end{pmatrix}$$
That is, $Sa=b$, where
$$a=\begin{pmatrix}a_0\\ \vdots\\ a_n\end{pmatrix},\quad b=\begin{pmatrix}b_0\\ b_1\\ \vdots\\ b_n\end{pmatrix}\quad\text{and}\quad S=\begin{pmatrix}s_0 & s_1 & \cdots & s_n\\ s_1 & s_2 & \cdots & s_{n+1}\\ \vdots & \vdots & & \vdots\\ s_n & s_{n+1} & \cdots & s_{2n}\end{pmatrix}$$
Example
Find linear and quadratic least-squares approximations to $f(x)=e^x$ on $[-1,1]$.
Solution
Linear approximation: $n=1$; $P_1(x)=a_0+a_1x$
$$s_0=\int_{-1}^{1}dx=2,\quad s_1=\int_{-1}^{1}x\,dx=0,\quad s_2=\int_{-1}^{1}x^2dx=\frac{2}{3}$$
$$b_0=\int_{-1}^{1}f(x)\,dx=\int_{-1}^{1}e^xdx=e-\frac{1}{e}=2.3504$$
$$b_1=\int_{-1}^{1}xf(x)\,dx=\int_{-1}^{1}xe^xdx=\frac{2}{e}=0.7358$$
$$S=\begin{pmatrix}s_0 & s_1\\ s_1 & s_2\end{pmatrix}=\begin{pmatrix}2 & 0\\ 0 & \frac{2}{3}\end{pmatrix},\qquad b=\begin{pmatrix}b_0\\ b_1\end{pmatrix}=\begin{pmatrix}2.3504\\ 0.7358\end{pmatrix}$$
$$\begin{pmatrix}2 & 0\\ 0 & \frac{2}{3}\end{pmatrix}\begin{pmatrix}a_0\\ a_1\end{pmatrix}=\begin{pmatrix}2.3504\\ 0.7358\end{pmatrix}\implies a_0=1.1752\ \text{and}\ a_1=1.1037$$
Hence $P_1(x)=1.1752+1.1037x$.
Quadratic fitting: $n=2$; $P_2(x)=a_0+a_1x+a_2x^2$
$$s_0=2,\quad s_1=0,\quad s_2=\frac{2}{3},\quad s_3=\int_{-1}^{1}x^3dx=0,\quad s_4=\int_{-1}^{1}x^4dx=\frac{2}{5}$$
$$b_0=2.3504,\quad b_1=0.7358,\quad b_2=\int_{-1}^{1}x^2e^xdx=0.8789$$
$$\begin{pmatrix}2 & 0 & 2/3\\ 0 & 2/3 & 0\\ 2/3 & 0 & 2/5\end{pmatrix}\begin{pmatrix}a_0\\ a_1\\ a_2\end{pmatrix}=\begin{pmatrix}2.3504\\ 0.7358\\ 0.8789\end{pmatrix}$$
Hence
$$\begin{pmatrix}a_0\\ a_1\\ a_2\end{pmatrix}=\begin{pmatrix}0.9963\\ 1.1037\\ 0.5368\end{pmatrix}$$
The quadratic least-squares polynomial is $P_2(x)=0.9963+1.1037x+0.5368x^2$.
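The continuous case has exactly the same structure as the discrete one; only the sums become integrals. A sketch assuming SciPy's `quad` for the integration (any quadrature routine would do) reproduces the coefficients above:

```python
# Sketch: continuous least squares for f(x) = e^x on [-1, 1], degree n = 2.
import numpy as np
from scipy.integrate import quad

n, lo, hi = 2, -1.0, 1.0
f = np.exp

s = [quad(lambda x, k=k: x**k, lo, hi)[0] for k in range(2 * n + 1)]      # s_k
b = [quad(lambda x, k=k: x**k * f(x), lo, hi)[0] for k in range(n + 1)]   # b_k
S = np.array([[s[i + j] for j in range(n + 1)] for i in range(n + 1)])
print(np.linalg.solve(S, np.array(b)))   # approx [0.9963, 1.1037, 0.5368]
```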
Exercise
Find linear and quadratic least-squares approximations to $f(x)=x^2+5x+6$ on $[0,1]$.
Use of Orthogonal Polynomials in Least-Squares Approximations
Definition: The set of functions $\{\phi_0,\phi_1,\dots,\phi_n\}$ on $[a,b]$ is called a set of orthogonal functions with respect to a weight function $w(x)$ if
$$\int_a^bw(x)\phi_j(x)\phi_i(x)\,dx=\begin{cases}0 & \text{if } i\neq j\\ C_j & \text{if } i=j\end{cases}$$
where $C_j$ is a real positive number. Furthermore, if $C_j=1$, $j=0,1,\dots,n$, then the orthogonal set is called an orthonormal set.
Using this interesting property, least-squares computations become numerically more effective, as shown below. Without any loss of generality, let us assume that $w(x)=1$. Given the set of orthogonal polynomials $\{Q_i(x)\}_{i=0}^{n}$, a polynomial $P_n(x)$ of degree $n$ can be written as
$$P_n(x)=a_0Q_0(x)+a_1Q_1(x)+\dots+a_nQ_n(x),\quad\text{for some } a_0,a_1,\dots,a_n.$$
Finding the least-squares approximation of $f(x)$ on $[a,b]$ using orthogonal polynomials can then be stated as follows:
Least-Squares Approximation of a Function Using Orthogonal Polynomials
Given $f(x)$, continuous on $[a,b]$, find $a_0,a_1,\dots,a_n$ using a polynomial of the form
$$P_n(x)=a_0Q_0(x)+a_1Q_1(x)+\dots+a_nQ_n(x),$$
where $\{Q_k(x)\}_{k=0}^{n}$ is a given set of orthogonal polynomials on $[a,b]$, such that the error function
$$E(a_0,a_1,\dots,a_n)=\int_a^b\left(f(x)-\left(a_0Q_0(x)+\dots+a_nQ_n(x)\right)\right)^2dx$$
is minimized. As before, we set
$$\frac{\partial E}{\partial a_i}=0,\quad i=0,1,\dots,n$$
Now
$$\frac{\partial E}{\partial a_0}=0\implies\int_a^bQ_0(x)f(x)\,dx=\int_a^bQ_0(x)\left(a_0Q_0(x)+\dots+a_nQ_n(x)\right)dx$$
Since $\{Q_k(x)\}_{k=0}^{n}$ is an orthogonal set, we have
$$\int_a^bQ_0^2(x)\,dx=C_0\qquad\text{and}\qquad\int_a^bQ_0(x)Q_i(x)\,dx=0,\quad i\neq0.$$
Applying this orthogonality, we see from the above that
$$\int_a^bQ_0(x)f(x)\,dx=C_0a_0,$$
that is,
$$a_0=\frac{\displaystyle\int_a^bQ_0(x)f(x)\,dx}{C_0}$$
Similarly,
$$a_k=\frac{1}{C_k}\int_a^bQ_k(x)f(x)\,dx,\quad k=0,1,2,\dots,n$$
where
$$C_k=\int_a^bQ_k^2(x)\,dx$$
Expressions for $a_k$ with Weight Function $w(x)$
If the weight function $w(x)$ is included, then $a_k$ is modified to
$$a_k=\frac{1}{C_k}\int_a^bw(x)Q_k(x)f(x)\,dx,\quad k=0,1,2,\dots,n$$
where
$$C_k=\int_a^bw(x)Q_k^2(x)\,dx$$
Least-Squares Approximation Using Legendre Polynomials
Legendre's Equation and Legendre Functions
The polynomials
$$Q_n(x)=\frac{1}{2^nn!}\frac{d^n}{dx^n}\left(x^2-1\right)^n,\quad n=0,1,2,3,\dots$$
are called the Legendre polynomials. Recall that the first few Legendre polynomials are given by:
$$Q_0(x)=1$$
$$Q_1(x)=x$$
$$Q_2(x)=\frac{1}{2}\left(3x^2-1\right)$$
$$Q_3(x)=\frac{1}{2}\left(5x^3-3x\right)$$
$$Q_4(x)=\frac{1}{8}\left(35x^4-30x^2+3\right)$$
$$Q_5(x)=\frac{1}{8}\left(63x^5-70x^3+15x\right)$$
To use the Legendre polynomials to approximate the least-squares solution of a function $f(x)$, we set $w(x)=1$ and $[a,b]=[-1,1]$, and compute $C_k$ and $a_k$ for $k=0,1,\dots,n$. That is,
$$C_k=\int_{-1}^{1}Q_k^2(x)\,dx\qquad\text{and}\qquad a_k=\frac{1}{C_k}\int_{-1}^{1}f(x)Q_k(x)\,dx$$
The least-squares polynomial is then given by
$$P_n(x)=a_0Q_0(x)+a_1Q_1(x)+\dots+a_nQ_n(x).$$
A few $C_k$'s are listed below:
$$C_0=\int_{-1}^{1}Q_0^2(x)\,dx=\int_{-1}^{1}1\,dx=2$$
$$C_1=\int_{-1}^{1}Q_1^2(x)\,dx=\int_{-1}^{1}x^2dx=\frac{2}{3}$$
$$C_2=\int_{-1}^{1}Q_2^2(x)\,dx=\frac{9}{4}\int_{-1}^{1}\left(x^2-\frac{1}{3}\right)^2dx=\frac{9}{4}\cdot\frac{8}{45}=\frac{2}{5}$$
and so on.
Example
Find linear and quadratic least-squares approximations to $f(x)=e^x$ using Legendre polynomials.
Solution
Linear approximation: $P_1(x)=a_0Q_0(x)+a_1Q_1(x)$, with $Q_0(x)=1$, $Q_1(x)=x$.
Step 1. Compute
$$C_0=\int_{-1}^{1}Q_0^2(x)\,dx=\int_{-1}^{1}dx=2$$
$$C_1=\int_{-1}^{1}Q_1^2(x)\,dx=\int_{-1}^{1}x^2dx=\frac{2}{3}$$
Step 2. Compute $a_0$ and $a_1$:
$$a_0=\frac{1}{C_0}\int_{-1}^{1}Q_0(x)f(x)\,dx=\frac{1}{2}\int_{-1}^{1}e^xdx=\frac{1}{2}\left(e-\frac{1}{e}\right)$$
$$a_1=\frac{1}{C_1}\int_{-1}^{1}Q_1(x)f(x)\,dx=\frac{3}{2}\int_{-1}^{1}xe^xdx=\frac{3}{e}$$
$$P_1(x)=\frac{1}{2}\left(e-\frac{1}{e}\right)+\frac{3}{e}x$$
Quadratic approximation: $P_2(x)=a_0Q_0(x)+a_1Q_1(x)+a_2Q_2(x)$
From the linear case,
$$a_0=\frac{1}{2}\left(e-\frac{1}{e}\right),\qquad a_1=\frac{3}{e}$$
For the quadratic term,
$$C_2=\int_{-1}^{1}Q_2^2(x)\,dx=\frac{9}{4}\int_{-1}^{1}\left(x^2-\frac{1}{3}\right)^2dx=\frac{2}{5}$$
$$a_2=\frac{1}{C_2}\int_{-1}^{1}Q_2(x)f(x)\,dx=\frac{5}{2}\int_{-1}^{1}\frac{1}{2}\left(3x^2-1\right)e^xdx=\frac{5}{2}\left(e-\frac{7}{e}\right)$$
The quadratic least-squares polynomial is
$$P_2(x)=\frac{1}{2}\left(e-\frac{1}{e}\right)+\frac{3}{e}x+\frac{5}{4}\left(e-\frac{7}{e}\right)\left(3x^2-1\right).$$
Expanding numerically gives $P_2(x)\approx0.9963+1.1037x+0.5368x^2$, the same polynomial found earlier with the monomial basis.
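The Legendre coefficients can also be checked numerically. A sketch assuming SciPy (`eval_legendre` evaluates $Q_k$), applying $a_k=\frac{1}{C_k}\int_{-1}^{1}Q_k(x)f(x)\,dx$:

```python
# Sketch: Legendre least-squares coefficients of f(x) = e^x on [-1, 1].
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

f = np.exp
for k in range(3):
    Ck = quad(lambda x: eval_legendre(k, x) ** 2, -1, 1)[0]        # C_k = 2/(2k+1)
    ak = quad(lambda x: eval_legendre(k, x) * f(x), -1, 1)[0] / Ck
    print(k, ak)   # approx 1.1752, 1.1036, 0.3578
```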
Chebyshev polynomials: Another wonderful family of orthogonal polynomials
Definition: The set of polynomials defined by
$$T_n(x)=\cos\left(n\arccos x\right),\quad n\geq0$$
on $[-1,1]$ are called the Chebyshev polynomials. To see that $T_n(x)$ is a polynomial of degree $n$ in our familiar form, we derive a recurrence relation by noting that
$$T_0(x)=1\quad\text{(the Chebyshev polynomial of degree zero)},$$
$$T_1(x)=x\quad\text{(the Chebyshev polynomial of degree one)}.$$
A Recursive Relation for Generating Chebyshev Polynomials
Substitute $\theta=\arccos x$. Then $T_n(x)=\cos(n\theta)$, $0\leq\theta\leq\pi$, and
$$T_{n+1}(x)=\cos\left((n+1)\theta\right)=\cos n\theta\cos\theta-\sin n\theta\sin\theta$$
$$T_{n-1}(x)=\cos\left((n-1)\theta\right)=\cos n\theta\cos\theta+\sin n\theta\sin\theta$$
Adding the last two equations, we obtain
$$T_{n+1}(x)+T_{n-1}(x)=2\cos n\theta\cos\theta$$
The right-hand side still does not look like a polynomial in $x$. Note that $\cos\theta=x$, so
$$T_{n+1}(x)=2\cos n\theta\cos\theta-T_{n-1}(x)=2xT_n(x)-T_{n-1}(x).$$
The above is a three-term recurrence relation that generates the Chebyshev polynomials.
Three-Term Recurrence Formula for Chebyshev Polynomials
$$T_0(x)=1,\quad T_1(x)=x$$
$$T_{n+1}(x)=2xT_n(x)-T_{n-1}(x),\quad n\geq1$$
Using this recurrence relation, the Chebyshev polynomials of successive degrees can be generated:
$$n=1:\quad T_2(x)=2xT_1(x)-T_0(x)=2x^2-1,$$
$$n=2:\quad T_3(x)=2xT_2(x)-T_1(x)=2x\left(2x^2-1\right)-x=4x^3-3x,$$
and so on.
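The recurrence also translates directly into a routine that returns $T_n$ as a list of coefficients $[c_0,c_1,\dots,c_n]$. A plain-Python sketch (the function name `chebyshev` is our own):

```python
# Sketch: generate Chebyshev coefficient lists with T_{k+1} = 2x T_k - T_{k-1}.
def chebyshev(n):
    t_prev, t = [1], [0, 1]          # T_0 = 1, T_1 = x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        two_x_t = [0] + [2 * c for c in t]                    # multiply T_k by 2x
        padded = t_prev + [0] * (len(two_x_t) - len(t_prev))  # align degrees
        t_prev, t = t, [u - v for u, v in zip(two_x_t, padded)]
    return t

print(chebyshev(2))   # [-1, 0, 2]      i.e. T_2(x) = 2x^2 - 1
print(chebyshev(3))   # [0, -3, 0, 4]   i.e. T_3(x) = 4x^3 - 3x
```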
The orthogonal property of the Chebyshev polynomials
We now show that the Chebyshev polynomials are orthogonal with respect to the weight function $w(x)=\frac{1}{\sqrt{1-x^2}}$ on the interval $[-1,1]$. To demonstrate this, we show that
Orthogonal Property of the Chebyshev Polynomials
$$\int_{-1}^{1}\frac{T_m(x)T_n(x)}{\sqrt{1-x^2}}dx=\begin{cases}0 & \text{if } m\neq n\\ \pi & \text{if } m=n=0\\ \dfrac{\pi}{2} & \text{if } m=n\geq1\end{cases}$$
First, for $m\neq n$:
$$\int_{-1}^{1}\frac{T_m(x)T_n(x)}{\sqrt{1-x^2}}dx=\int_{-1}^{1}\frac{\cos(m\arccos x)\cos(n\arccos x)}{\sqrt{1-x^2}}dx$$
Since $\arccos x=\theta$ gives $d\theta=\frac{-1}{\sqrt{1-x^2}}dx$, the above integral becomes
$$-\int_\pi^0\cos m\theta\cos n\theta\,d\theta=\int_0^\pi\cos m\theta\cos n\theta\,d\theta$$
Now $\cos m\theta\cos n\theta$ can be written as $\frac{1}{2}\left[\cos(m+n)\theta+\cos(m-n)\theta\right]$, so
$$\int_0^\pi\cos m\theta\cos n\theta\,d\theta=\frac{1}{2}\int_0^\pi\left[\cos(m+n)\theta+\cos(m-n)\theta\right]d\theta=0$$
Similarly, it can be shown [Exercise] that
$$\int_{-1}^{1}\frac{T_n^2(x)}{\sqrt{1-x^2}}dx=\frac{\pi}{2},\quad n\geq1$$
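With $x=\cos\theta$ these weighted integrals reduce to plain integrals over $[0,\pi]$, which makes the orthogonality easy to confirm numerically. A sketch assuming SciPy's `quad`:

```python
# Sketch: numerical check of the Chebyshev orthogonality relations.
import numpy as np
from scipy.integrate import quad

def weighted_inner(m, n):
    # integral of T_m T_n / sqrt(1 - x^2) after substituting x = cos(theta)
    return quad(lambda t: np.cos(m * t) * np.cos(n * t), 0, np.pi)[0]

print(weighted_inner(2, 3))   # approx 0       (m != n)
print(weighted_inner(3, 3))   # approx pi/2    (m = n >= 1)
print(weighted_inner(0, 0))   # approx pi      (m = n = 0)
```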
The Least-Squares Approximation Using Chebyshev Polynomials
As before, the Chebyshev polynomials can be used to find least-squares approximations to a function $f(x)$, as stated below. Set
$$w(x)=\frac{1}{\sqrt{1-x^2}},\qquad[a,b]=[-1,1]\qquad\text{and}\qquad Q_k(x)=T_k(x).$$
Then, using the orthogonal property of the Chebyshev polynomials, it is easy to see that
$$C_0=\int_{-1}^{1}\frac{T_0^2(x)}{\sqrt{1-x^2}}dx=\pi$$
$$C_k=\int_{-1}^{1}\frac{T_k^2(x)}{\sqrt{1-x^2}}dx=\frac{\pi}{2},\quad k=1,\dots,n$$
Thus
$$a_0=\frac{1}{\pi}\int_{-1}^{1}\frac{f(x)}{\sqrt{1-x^2}}dx$$
$$a_i=\frac{2}{\pi}\int_{-1}^{1}\frac{f(x)T_i(x)}{\sqrt{1-x^2}}dx,\quad i=1,\dots,n$$
The least-squares approximating polynomial $P_n(x)$ of $f(x)$ using Chebyshev polynomials is then given by
$$P_n(x)=a_0T_0(x)+a_1T_1(x)+\dots+a_nT_n(x),$$
with $a_0$ and $a_i$ as above.
Example
Find a linear least-squares approximation of $f(x)=e^x$ using Chebyshev polynomials.
Solution
$$P_1(x)=a_0T_0(x)+a_1T_1(x)=a_0+a_1x,$$
where
$$a_0=\frac{1}{\pi}\int_{-1}^{1}\frac{f(x)}{\sqrt{1-x^2}}dx=\frac{1}{\pi}\int_{-1}^{1}\frac{e^x}{\sqrt{1-x^2}}dx=1.2660$$
$$a_1=\frac{2}{\pi}\int_{-1}^{1}\frac{f(x)T_1(x)}{\sqrt{1-x^2}}dx=\frac{2}{\pi}\int_{-1}^{1}\frac{xe^x}{\sqrt{1-x^2}}dx=1.1303$$
Thus, $P_1(x)=1.2660+1.1303x$.
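The same substitution $x=\cos\theta$ lets us verify these coefficients numerically, since $a_0=\frac{1}{\pi}\int_0^\pi f(\cos\theta)\,d\theta$ and $a_1=\frac{2}{\pi}\int_0^\pi f(\cos\theta)\cos\theta\,d\theta$. A sketch assuming SciPy:

```python
# Sketch: Chebyshev coefficients of f(x) = e^x via the theta substitution.
import numpy as np
from scipy.integrate import quad

f = np.exp
a0 = quad(lambda t: f(np.cos(t)), 0, np.pi)[0] / np.pi
a1 = quad(lambda t: f(np.cos(t)) * np.cos(t), 0, np.pi)[0] * 2 / np.pi
print(a0, a1)   # approx 1.2661 1.1303
```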