
Manalebish D

Numerical Analysis Math-3221


Chapter 6 : Application of Interpolations

January 5, 2024
Chapter 1

Application of Interpolation

1.1 Finding roots

1.1.1 Muller’s Method

This method is motivated by the secant method.

• The secant method interpolates two function values linearly and takes the intersection of this interpolant with the x-axis as the next approximation of the root.

• Muller's method uses three function values and quadratic interpolation instead.

Given the function values at $x^{(n-2)}$, $x^{(n-1)}$ and $x^{(n)}$, and an interpolating polynomial $P(x) = a(x - x^{(n)})^2 + b(x - x^{(n)}) + c$, the polynomial is fitted to the data as

$$f(x^{(n-2)}) = a(x^{(n-2)} - x^{(n)})^2 + b(x^{(n-2)} - x^{(n)}) + c$$
$$f(x^{(n-1)}) = a(x^{(n-1)} - x^{(n)})^2 + b(x^{(n-1)} - x^{(n)}) + c$$
$$f(x^{(n)}) = c$$

Solving for $a$ and $b$ using Cramer's rule, we have

$$a = \frac{(x^{(n-1)} - x^{(n)})[f(x^{(n-2)}) - f(x^{(n)})] - (x^{(n-2)} - x^{(n)})[f(x^{(n-1)}) - f(x^{(n)})]}{(x^{(n-2)} - x^{(n)})(x^{(n-1)} - x^{(n)})(x^{(n-2)} - x^{(n-1)})} = f[x^{(n)}, x^{(n-1)}, x^{(n-2)}]$$

$$b = \frac{(x^{(n-2)} - x^{(n)})^2[f(x^{(n-1)}) - f(x^{(n)})] - (x^{(n-1)} - x^{(n)})^2[f(x^{(n-2)}) - f(x^{(n)})]}{(x^{(n-2)} - x^{(n)})(x^{(n-1)} - x^{(n)})(x^{(n-2)} - x^{(n-1)})} = f[x^{(n)}, x^{(n-1)}] + f[x^{(n)}, x^{(n-2)}] - f[x^{(n-1)}, x^{(n-2)}]$$

so that $P(x) = a(x - x^{(n)})^2 + b(x - x^{(n)}) + f(x^{(n)})$ with $a$ and $b$ as above.

The next approximation $x^{(n+1)}$ is one of the roots of $P$, and the one closer to $x^{(n)}$ is chosen:

$$x^{(n+1)} - x^{(n)} = \frac{-2c}{b + \operatorname{sgn}(b)\sqrt{b^2 - 4ac}}$$

Note:- $x^{(n+1)}$ can be complex even if all previous approximations have been real.
An alternative representation uses the Newton form of the interpolating polynomial

$$P(x) = f(x^{(n)}) + (x - x^{(n)})f[x^{(n)}, x^{(n-1)}] + (x - x^{(n)})(x - x^{(n-1)})f[x^{(n)}, x^{(n-1)}, x^{(n-2)}]$$

where $f[x^{(n)}, x^{(n-1)}]$ and $f[x^{(n)}, x^{(n-1)}, x^{(n-2)}]$ denote divided differences. One can also see that

$$c = f(x^{(n)}), \qquad a = f[x^{(n)}, x^{(n-1)}, x^{(n-2)}], \qquad b = f[x^{(n)}, x^{(n-1)}] + f[x^{(n)}, x^{(n-2)}] - f[x^{(n-1)}, x^{(n-2)}]$$
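Before the worked examples, here is a minimal Python sketch of the iteration (an illustration only; the names muller and muller_step are assumptions of this sketch, not part of the notes). It builds $a$, $b$, $c$ from divided differences about the most recent point and uses cmath.sqrt so that a complex step is handled; Example 1 below applies the correction to the first guess instead, which is an equally common variant.

```python
import cmath

def muller_step(f, x0, x1, x2):
    """One Muller iteration: fit P(x) = a(x - x2)^2 + b(x - x2) + c through
    (x0, f(x0)), (x1, f(x1)), (x2, f(x2)) and return the root of P closest to x2."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    d01 = (f1 - f0) / (x1 - x0)          # f[x0, x1]
    d12 = (f2 - f1) / (x2 - x1)          # f[x1, x2]
    d02 = (f2 - f0) / (x2 - x0)          # f[x0, x2]
    a = (d12 - d01) / (x2 - x0)          # f[x0, x1, x2]
    b = d12 + d02 - d01                  # f[x2, x1] + f[x2, x0] - f[x1, x0]
    c = f2
    disc = cmath.sqrt(b * b - 4 * a * c)
    # pick the sign that maximises |denominator| (the sgn(b) choice above)
    den = b + disc if abs(b + disc) > abs(b - disc) else b - disc
    return x2 - 2 * c / den

def muller(f, x0, x1, x2, tol=1e-10, maxit=50):
    """Iterate Muller's method, always keeping the three most recent points."""
    for _ in range(maxit):
        x3 = muller_step(f, x0, x1, x2)
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

if __name__ == "__main__":
    f = lambda x: x**3 - 13*x - 12
    print(muller(f, 4.5, 5.0, 5.5))      # converges to the root x = 4 of Example 1
```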

Example 1 Use Muller's method with guesses $x^{(0)}$, $x^{(1)}$ and $x^{(2)}$ as 4.5, 5 and 5.5 respectively to determine a root of the equation $f(x) = x^3 - 13x - 12$.

Solution:-
We first evaluate the function at the guess values: $f(4.5) = 20.625$, $f(5) = 48$ and $f(5.5) = 82.875$.
Our interpolating quadratic polynomial is $P(x) = f(4.5) + (x - 4.5)f[4.5, 5] + (x - 4.5)(x - 5)f[4.5, 5, 5.5]$, where

$$c = f(4.5) = 20.625, \qquad a = f[4.5, 5, 5.5] = 15, \qquad b = f[4.5, 5] + f[4.5, 5.5] - f[5, 5.5] = 47.25$$

Then

$$x^{(3)} - x^{(0)} = \frac{-2c}{b + \operatorname{sgn}(b)\sqrt{b^2 - 4ac}} = \frac{-2(20.625)}{47.25 + \sqrt{(47.25)^2 - 4(15)(20.625)}} = -0.5235$$

so $x^{(3)} = 4.5 - 0.5235 = 3.976$.

Continuing the iteration, but now with the points $x^{(0)}$, $x^{(1)}$ and $x^{(2)}$ as 3.976, 4.5 and 5 respectively, we have $f(3.976) = -0.833$, $f(4.5) = 20.625$ and $f(5) = 48$.
Our interpolating quadratic polynomial is $P(x) = f(3.976) + (x - 3.976)f[3.976, 4.5] + (x - 3.976)(x - 4.5)f[3.976, 4.5, 5]$, where

$$c = f(3.976) = -0.833, \qquad a = f[3.976, 4.5, 5] = 13.48, \qquad b = f[3.976, 4.5] + f[3.976, 5] - f[4.5, 5] = 33.88$$

Then

$$x^{(4)} - x^{(3)} = \frac{-2c}{b + \operatorname{sgn}(b)\sqrt{b^2 - 4ac}} = \frac{-2(-0.833)}{33.88 + \sqrt{(33.88)^2 - 4(13.48)(-0.833)}} = 0.024$$

so $x^{(4)} = 3.976 + 0.024 = 4$, which is a root of the equation $f(x) = x^3 - 13x - 12$.


That is, $f(4) = 4^3 - 13(4) - 12 = 64 - 52 - 12 = 0$.

Example 2 Do two iterations of Muller's method to solve $x^3 - 3x + 1 = 0$, starting with $x^{(0)} = 0.5$, $x^{(1)} = 1$ and $x^{(2)} = 0$.

Solution:-
We first evaluate the function at the guess values: $f(0.5) = -0.375$, $f(1) = -1$ and $f(0) = 1$.
Our interpolating quadratic polynomial is

$$P(x) = f(x^{(0)}) + (x - x^{(0)})f[x^{(0)}, x^{(1)}] + (x - x^{(0)})(x - x^{(1)})f[x^{(0)}, x^{(1)}, x^{(2)}]$$

with

$$c = f(x^{(0)}) = -0.375, \qquad a = f[x^{(0)}, x^{(1)}, x^{(2)}] = f[0.5, 1, 0] = 1.5, \qquad b = f[x^{(0)}, x^{(1)}] + f[x^{(0)}, x^{(2)}] - f[x^{(1)}, x^{(2)}] = -2$$

Then

$$x^{(3)} - x^{(0)} = \frac{-2c}{b + \operatorname{sgn}(b)\sqrt{b^2 - 4ac}} = \frac{-2(-0.375)}{-2 - \sqrt{(-2)^2 - 4(1.5)(-0.375)}} = \frac{0.75}{-4.5} = -0.167$$

so $x^{(3)} = 0.5 - 0.167 = 0.333$.
For the second iteration take $x^{(2)} = 0$, $x^{(0)} = 0.333$ and $x^{(1)} = 0.5$, where

$$c = f(0.333) = 0.037, \qquad a = f[0.333, 0.5, 0] = 0.833, \qquad b = f[0.333, 0.5] + f[0.333, 0] - f[0.5, 0] = -2.6$$

Then

$$x^{(4)} - x^{(3)} = \frac{-2c}{b + \operatorname{sgn}(b)\sqrt{b^2 - 4ac}} = \frac{-2(0.037)}{-2.6 - \sqrt{(2.6)^2 - 4(0.833)(0.037)}} = 0.014$$

so $x^{(4)} = 0.333 + 0.014 \approx 0.347$.

Note that $b^2 - 4ac$ may be negative, which leads to complex numbers.

1.2 Inverse Quadratic Interpolation


The occurrence of complex roots in Muller's method can be avoided by the Inverse Quadratic Interpolation method.
To interpolate the inverse, one uses the Lagrange interpolation formula with the roles of $f$ and $x$ swapped:

$$P(y) = \frac{(y - f(x^{(n-1)}))(y - f(x^{(n)}))}{(f(x^{(n-2)}) - f(x^{(n-1)}))(f(x^{(n-2)}) - f(x^{(n)}))}\, x^{(n-2)} + \frac{(y - f(x^{(n-2)}))(y - f(x^{(n)}))}{(f(x^{(n-1)}) - f(x^{(n-2)}))(f(x^{(n-1)}) - f(x^{(n)}))}\, x^{(n-1)} + \frac{(y - f(x^{(n-2)}))(y - f(x^{(n-1)}))}{(f(x^{(n)}) - f(x^{(n-2)}))(f(x^{(n)}) - f(x^{(n-1)}))}\, x^{(n)}$$

Now, since we are interested in the root of $f$, we substitute $y = 0$ and use $P(0) = x^{(n+1)}$:

$$x^{(n+1)} = \frac{f(x^{(n-1)})f(x^{(n)})}{(f(x^{(n-2)}) - f(x^{(n-1)}))(f(x^{(n-2)}) - f(x^{(n)}))}\, x^{(n-2)} + \frac{f(x^{(n-2)})f(x^{(n)})}{(f(x^{(n-1)}) - f(x^{(n-2)}))(f(x^{(n-1)}) - f(x^{(n)}))}\, x^{(n-1)} + \frac{f(x^{(n-2)})f(x^{(n-1)})}{(f(x^{(n)}) - f(x^{(n-2)}))(f(x^{(n)}) - f(x^{(n-1)}))}\, x^{(n)}$$
Note 1 If the starting points are close to the root then the algorithm converges. However, the algorithm can fail completely if any two of the function values $f(x^{(n)})$, $f(x^{(n-1)})$, $f(x^{(n-2)})$ coincide.
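As an illustration, one inverse quadratic interpolation step and its iteration can be written in a few lines of Python (a sketch, assuming $f$ is a callable and no two of the three function values coincide; the names are hypothetical):

```python
def iqi_step(f, x0, x1, x2):
    """One inverse quadratic interpolation step: interpolate x as a quadratic
    in y through (f(x0), x0), (f(x1), x1), (f(x2), x2) and evaluate at y = 0."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    return (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
            + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
            + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))

def iqi(f, x0, x1, x2, tol=1e-12, maxit=50):
    """Iterate, always keeping the three most recent points."""
    for _ in range(maxit):
        x3 = iqi_step(f, x0, x1, x2)
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

if __name__ == "__main__":
    f = lambda x: x**3 - 15*x + 4
    print(iqi(f, 0.2, 0.3, 0.4))   # ≈ 0.26795, cf. Example 4
```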

Example 3 Find the value of x when y = 19 given the following values (the nodes used in the solution below):

x  0  1  2
y  0  1  20

Solution:-
$x_0 = 0$, $x_1 = 1$, $x_2 = 2$, $y_0 = 0$, $y_1 = 1$, $y_2 = 20$, and we seek $x$ with $y(x) = 19$.

$$x = \frac{(y - y_1)(y - y_2)}{(y_0 - y_1)(y_0 - y_2)}\, x_0 + \frac{(y - y_0)(y - y_2)}{(y_1 - y_0)(y_1 - y_2)}\, x_1 + \frac{(y - y_0)(y - y_1)}{(y_2 - y_0)(y_2 - y_1)}\, x_2$$

$$x = \frac{(19 - 1)(19 - 20)}{(0 - 1)(0 - 20)}\, 0 + \frac{(19 - 0)(19 - 20)}{(1 - 0)(1 - 20)}\, 1 + \frac{(19 - 0)(19 - 1)}{(20 - 0)(20 - 1)}\, 2 = \frac{19(-1)}{-19} + \frac{19 \times 18}{20 \times 19}\, 2 = 1 + 1.8 = 2.8$$

Hence $x \approx 2.8$ when $y = 19$.

Example 4 Suppose we want to compute a zero of the function $f(x) = x^3 - 15x + 4$ starting with 0.2, 0.3 and 0.4. We do quadratic interpolation for the inverse of $f$, i.e. we tabulate $x$ against $f(x)$ and form divided differences with respect to $f(x)$:

f(x)      x      first d.d.     second d.d.
1.008     0.2
                  −0.0675
−0.473    0.3                   0.00028963
                  −0.0684
−1.936    0.4

$$P_2(y) = 0.2 - 0.0675(y - 1.008) + 0.00028963(y - 1.008)(y + 0.473) = 0.00028963y^2 - 0.06765y + 0.2679$$

The approximation to the zero of $f$ is obtained by evaluating this polynomial at $y = 0$:

$$P_2(0) = 0.2679, \qquad f(0.2679) = 0.0007$$

so 0.2679 is a good approximation to a root of $f$.

1.3 Numerical Differentiation
• To determine approximate values of the derivative of a function $f(x)$ which is given by a table on an interval $[a, b]$, we replace $f$ by an interpolating polynomial $P(x)$ and set $f'(x) \approx P'(x)$ for $a < x < b$.

• If the error in $P(x)$ is $E(x) = f(x) - P(x)$, then the error in the derivative is $E'(x) = f'(x) - P'(x)$.

• The closeness of $P(x)$ to $f(x)$ does not guarantee closeness of $P'(x)$ to $f'(x)$ on the interval $[a, b]$, because the latter values are tangents (slopes).

Approximations to the derivative can be obtained numerically using the following two approaches:

• Methods based on finite differences, for equispaced data

• Methods based on divided differences or Lagrange interpolation, for non-uniform data.

1.3.1 Derivative using Newton's forward difference interpolation

Consider the data $(x_i, f(x_i))$ given at equispaced points $x_i = x_0 + ih$, $i = 0, 1, 2, \dots, n$, where $h$ is the step length.
Newton's forward difference formula is given by

$$f(x) = f(x_0) + (x - x_0)\frac{\triangle f_0}{1!\,h} + (x - x_0)(x - x_1)\frac{\triangle^2 f_0}{2!\,h^2} + (x - x_0)(x - x_1)(x - x_2)\frac{\triangle^3 f_0}{3!\,h^3} + \dots + (x - x_0)(x - x_1)\cdots(x - x_{n-1})\frac{\triangle^n f_0}{n!\,h^n}$$

Set $x = x_0 + sh$; then $f(x) = f(x_0 + sh)$ and

$$f(x) = f_0 + s\triangle f_0 + \frac{s(s-1)}{2!}\triangle^2 f_0 + \frac{s(s-1)(s-2)}{3!}\triangle^3 f_0 + \dots + \frac{s(s-1)\cdots(s-n+1)}{n!}\triangle^n f_0$$

Note that $s > 0$. Differentiating $f(x)$ with respect to $x$ we get

$$\frac{df}{dx} = \frac{df}{ds}\frac{ds}{dx}, \quad\text{but } s = \frac{x - x_0}{h} \text{ so } \frac{ds}{dx} = \frac{1}{h}, \quad\text{hence}\quad \frac{df}{dx} = \frac{1}{h}\frac{df}{ds}$$

$$\frac{df}{ds} = \triangle f_0 + \frac{2s-1}{2}\triangle^2 f_0 + \frac{3s^2 - 6s + 2}{6}\triangle^3 f_0 + \dots$$

$$\frac{df}{dx} = \frac{1}{h}\left[\triangle f_0 + \frac{2s-1}{2}\triangle^2 f_0 + \frac{3s^2 - 6s + 2}{6}\triangle^3 f_0 + \dots\right]$$

At $x = x_0$ we have $s = 0$, and we obtain the approximation to the derivative $f'(x)$ as

$$f'(x_0) = \frac{1}{h}\left[\triangle f_0 - \frac{\triangle^2 f_0}{2} + \frac{\triangle^3 f_0}{3} - \dots\right] \qquad (1.1)$$

Taking only the lowest order approximation, i.e. the first term of (1.1), we get

$$f'(x_0) \approx \frac{1}{h}\triangle f_0 = \frac{1}{h}[f(x_1) - f(x_0)]$$

and in general, for any point $x_k$,

$$f'(x_k) \approx \frac{1}{h}\triangle f_k = \frac{1}{h}[f(x_{k+1}) - f(x_k)]$$

Taking two terms of (1.1) we get

$$f'(x_0) = \frac{1}{h}\left[\triangle f_0 - \frac{\triangle^2 f_0}{2}\right] = \frac{1}{h}\left[(f(x_1) - f(x_0)) - \frac{1}{2}(f(x_2) - 2f(x_1) + f(x_0))\right] = \frac{1}{h}\left[-\frac{3}{2}f(x_0) + 2f(x_1) - \frac{1}{2}f(x_2)\right] = \frac{1}{2h}[-3f(x_0) + 4f(x_1) - f(x_2)]$$

or, in general,

$$f'(x_k) = \frac{1}{2h}[-3f(x_k) + 4f(x_{k+1}) - f(x_{k+2})]$$
For second derivatives with respect to $x$ we get

$$\frac{d^2 f}{dx^2} = \frac{d}{dx}\left(\frac{df}{dx}\right) = \frac{d}{ds}\left(\frac{1}{h}\frac{df}{ds}\right)\frac{ds}{dx} = \frac{1}{h^2}\frac{d}{ds}\left(\frac{df}{ds}\right)$$

But $\dfrac{df}{ds} = \triangle f_0 + \dfrac{2s-1}{2}\triangle^2 f_0 + \dfrac{3s^2-6s+2}{6}\triangle^3 f_0 + \dots$, so

$$\frac{d^2 f}{dx^2} = \frac{1}{h^2}\left[\triangle^2 f_0 + \frac{6s - 6}{6}\triangle^3 f_0 + \dots\right]$$

At $x = x_0$ we have $s = 0$, hence

$$f''(x_0) = \frac{1}{h^2}\left[\triangle^2 f_0 - \triangle^3 f_0 + \dots\right]$$
Taking only the first term, we get

$$f''(x_0) = \frac{1}{h^2}\triangle^2 f_0 = \frac{1}{h^2}[f(x_2) - 2f(x_1) + f(x_0)]$$

or, in general,

$$f''(x_k) = \frac{1}{h^2}\triangle^2 f_k = \frac{1}{h^2}[f(x_{k+2}) - 2f(x_{k+1}) + f(x_k)]$$

Taking two terms,

$$f''(x_0) = \frac{1}{h^2}\left[\triangle^2 f_0 - \triangle^3 f_0\right] = \frac{1}{h^2}\left[f(x_2) - 2f(x_1) + f(x_0) - (f(x_3) - 3f(x_2) + 3f(x_1) - f(x_0))\right] = \frac{1}{h^2}[-f(x_3) + 4f(x_2) - 5f(x_1) + 2f(x_0)]$$

or, in general,

$$f''(x_k) = \frac{1}{h^2}[\triangle^2 f_k - \triangle^3 f_k] = \frac{1}{h^2}[-f(x_{k+3}) + 4f(x_{k+2}) - 5f(x_{k+1}) + 2f(x_k)]$$
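As an illustration, the low-order forward-difference formulas above can be written as small Python functions; here $f$ is assumed to be available as a callable rather than as a table, and the function names are only illustrative:

```python
def d1_forward(f, x, h):
    """First derivative, O(h): (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

def d1_forward2(f, x, h):
    """First derivative, O(h^2): (-3f(x) + 4f(x+h) - f(x+2h)) / (2h)."""
    return (-3*f(x) + 4*f(x + h) - f(x + 2*h)) / (2*h)

def d2_forward(f, x, h):
    """Second derivative, one term: (f(x+2h) - 2f(x+h) + f(x)) / h^2."""
    return (f(x + 2*h) - 2*f(x + h) + f(x)) / h**2

def d2_forward2(f, x, h):
    """Second derivative, two terms: (2f(x) - 5f(x+h) + 4f(x+2h) - f(x+3h)) / h^2."""
    return (2*f(x) - 5*f(x + h) + 4*f(x + 2*h) - f(x + 3*h)) / h**2

if __name__ == "__main__":
    import math
    # exact values at x = 1: cos(1) ≈ 0.5403 and -sin(1) ≈ -0.8415
    print(d1_forward(math.sin, 1.0, 0.05), d1_forward2(math.sin, 1.0, 0.05))
    print(d2_forward(math.sin, 1.0, 0.05), d2_forward2(math.sin, 1.0, 0.05))
```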

1.3.2 Error of Approximation

Using a Taylor series expansion, we obtain the error in the formula for $f'(x)$ at $x = x_k$ when only one term of the approximation is used:

$$E(f, x_k) = f'(x_k) - \frac{1}{h}[f(x_k + h) - f(x_k)] = f'(x_k) - \frac{1}{h}\left[hf'(x_k) + \frac{h^2}{2}f''(x_k) + \dots\right] = -\frac{h}{2}f''(x_k) - \dots$$

The error is of order O(h) or the formula is first order


Using a Taylor series expansion, we obtain the error in the formula for $f'(x)$ at $x = x_k$ using two terms of the approximation as

$$E(f, x_k) = f'(x_k) - \frac{1}{2h}[-3f(x_k) + 4f(x_{k+1}) - f(x_{k+2})]$$
$$= f'(x_k) - \frac{1}{2h}\left[-3f(x_k) + 4\left(f(x_k) + hf'(x_k) + \frac{h^2}{2}f''(x_k) + \frac{h^3}{6}f'''(x_k) + \dots\right) - \left(f(x_k) + 2hf'(x_k) + 2h^2 f''(x_k) + \frac{4h^3}{3}f'''(x_k) + \dots\right)\right]$$
$$= f'(x_k) - \frac{1}{2h}\left[2hf'(x_k) - \frac{2h^3}{3}f'''(x_k) + \dots\right] = \frac{h^2}{3}f'''(x_k) + \dots$$

The error is of order O(h2 ) or the formula is second order
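A quick numerical check of these orders (a sketch, using $e^x$ as an arbitrary smooth test function, not one from the notes): halving $h$ should roughly halve the one-term error and quarter the two-term error.

```python
import math

def err_one_term(f, df, x, h):
    # E = f'(x) - (f(x+h) - f(x)) / h, expected O(h)
    return df(x) - (f(x + h) - f(x)) / h

def err_two_term(f, df, x, h):
    # E = f'(x) - (-3f(x) + 4f(x+h) - f(x+2h)) / (2h), expected O(h^2)
    return df(x) - (-3*f(x) + 4*f(x + h) - f(x + 2*h)) / (2*h)

f, df, x = math.exp, math.exp, 0.0
for h in (0.1, 0.05, 0.025):
    print(f"h={h:<6} one-term: {err_one_term(f, df, x, h):+.6f}  "
          f"two-term: {err_two_term(f, df, x, h):+.6f}")
```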

Remark 1.3.1 • The advantage of using forward difference formulas for derivatives is when we need the values of the derivatives at points near the top of the table of values.

• One disadvantage of differentiation using Newton's forward difference formula is that the right hand side is multiplied by the factors $\frac{1}{h}, \frac{1}{h^2}, \dots$ and so on; since $h$ is a small number, its reciprocal is large, and this has a serious effect on the round-off errors in the values of $f(x)$.
Example 5 Find $\dfrac{dy}{dx}$ at $x = 1$ from the following table of values.

x  1  2  3  4
y  1  8  27  64

Solution:- Let us first draw the Newton forward difference table.

x    f(x)   △f    △²f   △³f
1    1
2    8      7
3    27     19    12
4    64     37    18    6

We have $h = x_{i+1} - x_i = 1$, $x_0 = 1$, $x = x_0 + sh \Rightarrow x = 1 + s$, and for $x = 1$, $s = 0$.
For

$$y = f(x) = f(x_0) + s\triangle f_0 + \frac{s(s-1)}{2}\triangle^2 f_0 + \frac{s(s-1)(s-2)}{3!}\triangle^3 f_0$$

$$\frac{dy}{dx} = \frac{\triangle f_0}{h} + \frac{2s-1}{2h}\triangle^2 f_0 + \frac{3s^2 - 6s + 2}{6h}\triangle^3 f_0, \qquad x = 1 \Rightarrow s = 0$$

Therefore, factoring out $\frac{1}{h}$, we have

$$\frac{dy}{dx}(1) = \frac{1}{h}\left[\triangle f_0 - \frac{1}{2}\triangle^2 f_0 + \frac{1}{3}\triangle^3 f_0\right] = 7 - \frac{1}{2}(12) + \frac{1}{3}(6) = 7 - 6 + 2 = 3$$

1.4 Differentiation operator
Let us now define the differential operator $D$ by

$$Df(x) = \frac{df}{dx} = f'(x)$$

Expanding the shift operator in a Taylor series at $x$, we get

$$Ef(x) = f(x + h) = f(x) + hf'(x) + \frac{h^2}{2!}f''(x) + \frac{h^3}{3!}f'''(x) + \dots = \left(1 + hD + \frac{h^2}{2!}D^2 + \frac{h^3}{3!}D^3 + \dots\right)f(x) = e^{hD}f(x)$$

$$\Rightarrow E = e^{hD} \quad\text{and}\quad E = 1 + \triangle \quad\Rightarrow\quad \triangle = e^{hD} - 1$$

Solving for $D$:

$$\triangle + 1 = e^{hD}, \qquad hD = \ln(\triangle + 1), \qquad D = \frac{1}{h}\ln(\triangle + 1)$$

Next, expanding $\ln(1 + \triangle)$ in a Maclaurin series, we have

$$D = \frac{1}{h}\left[\triangle - \frac{\triangle^2}{2} + \frac{\triangle^3}{3} - \frac{\triangle^4}{4} + \dots\right]$$

Thus

$$Df(x_0) = f'(x_0) = \frac{1}{h}\left[\triangle f(x_0) - \frac{\triangle^2 f(x_0)}{2} + \frac{\triangle^3 f(x_0)}{3} - \frac{\triangle^4 f(x_0)}{4} + \dots\right]$$

For higher order derivatives we have

$$D^r = \frac{1}{h^r}[\ln(\triangle + 1)]^r, \qquad r = 1, 2, \dots$$
h
Example 6 Find f ′ (3) and f ′′ (3) for the following data:
x 3.0 3.2 3.4 3.6 3.8 4.0
f (x) −14 −10.032 −5.296 −0.256 6.672 14

Solution:-
We have $h = 0.2$, $x = x_0 + sh$, $x_0 = 3$. Then $x = 3 + 0.2s$, and for $x = 3$ we get $s = 0$.
Now let us draw the forward difference table:

x     f(x)      △fk     △²fk    △³fk     △⁴fk     △⁵fk
3     −14
3.2   −10.032   3.968
3.4   −5.296    4.736   0.768
3.6   −0.256    5.040   0.304   −0.464
3.8   6.672     6.928   1.888   1.584    2.048
4.0   14        7.328   0.400   −1.488   −3.072   −5.120

Then we have

$$f'(x_0) = \frac{1}{h}\left[\triangle f_0 - \frac{\triangle^2 f_0}{2} + \frac{\triangle^3 f_0}{3} - \frac{\triangle^4 f_0}{4} + \frac{\triangle^5 f_0}{5}\right]$$

$$f'(3) = \frac{1}{0.2}\left[3.968 - \frac{0.768}{2} - \frac{0.464}{3} - \frac{2.048}{4} - \frac{5.120}{5}\right] = 9.4667$$

$$f''(x_0) = \frac{1}{h^2}\left[\triangle^2 f_0 - \triangle^3 f_0 + \frac{11}{12}\triangle^4 f_0 - \frac{5}{6}\triangle^5 f_0\right]$$

$$f''(3) = \frac{1}{(0.2)^2}\left[0.768 + 0.464 + \frac{11}{12}(2.048) - \frac{5}{6}(-5.120)\right] = 184.4$$
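The same computation can be scripted; the following Python sketch (the function names are only illustrative) builds the leading forward differences and sums the two series, reproducing the values 9.4667 and 184.4 of Example 6:

```python
def leading_forward_differences(ys):
    """Return [f0, Δf0, Δ²f0, ..., Δⁿf0] from equispaced samples ys."""
    diffs, row = [ys[0]], list(ys)
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[0])
    return diffs

def d1_at_x0(ys, h):
    """f'(x0) = (1/h)[Δf0 - Δ²f0/2 + Δ³f0/3 - Δ⁴f0/4 + ...]"""
    d = leading_forward_differences(ys)
    return sum((-1) ** (k + 1) * d[k] / k for k in range(1, len(d))) / h

def d2_at_x0(ys, h):
    """f''(x0) = (1/h²)[Δ²f0 - Δ³f0 + (11/12)Δ⁴f0 - (5/6)Δ⁵f0 + ...]"""
    d = leading_forward_differences(ys)
    coeffs = [1.0, -1.0, 11 / 12, -5 / 6]        # coefficients of Δ², Δ³, Δ⁴, Δ⁵
    return sum(c * d[k + 2] for k, c in enumerate(coeffs) if k + 2 < len(d)) / h**2

ys = [-14, -10.032, -5.296, -0.256, 6.672, 14]   # f(3.0), f(3.2), ..., f(4.0)
print(d1_at_x0(ys, 0.2))   # 9.4667
print(d2_at_x0(ys, 0.2))   # 184.4
```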

1.4.1 Derivatives using Newton's Backward Difference Formula

Consider the data $(x_i, f(x_i))$ given at equispaced points $x_i = x_0 + ih$, $i = 0, 1, 2, \dots, n$, where $h$ is the step length.

Newton's backward difference formula is given by

$$f(x) = f(x_n) + (x - x_n)\frac{\nabla f_n}{1!\,h} + (x - x_n)(x - x_{n-1})\frac{\nabla^2 f_n}{2!\,h^2} + (x - x_n)(x - x_{n-1})(x - x_{n-2})\frac{\nabla^3 f_n}{3!\,h^3} + \dots + (x - x_n)(x - x_{n-1})\cdots(x - x_1)\frac{\nabla^n f_n}{n!\,h^n}$$

Let $x$ be a point near $x_n$ such that $x - x_n = sh$. Then the formula simplifies to

$$f(x) = f(x_n + sh) = f_n + s\nabla f_n + \frac{s(s+1)}{2!}\nabla^2 f_n + \frac{s(s+1)(s+2)}{3!}\nabla^3 f_n + \dots + \frac{s(s+1)\cdots(s+n-1)}{n!}\nabla^n f_n$$

Note that $s = \dfrac{x - x_n}{h} < 0$.
Differentiating with respect to $x$ we get

$$\frac{df}{dx} = \frac{df}{ds}\frac{ds}{dx} = \frac{1}{h}\frac{df}{ds} = \frac{1}{h}\left[\nabla f_n + \frac{2s+1}{2!}\nabla^2 f_n + \frac{3s^2 + 6s + 2}{3!}\nabla^3 f_n + \dots\right]$$

At $x = x_n$ we have $s = 0$, hence we obtain the approximation to the first derivative $f'(x_n)$ as

$$f'(x_n) = \frac{1}{h}\left[\nabla f_n + \frac{\nabla^2 f_n}{2} + \frac{\nabla^3 f_n}{3} + \dots\right]$$

At $x = x_{n-1}$ we have $h = x_n - x_{n-1}$ and $x_{n-1} = x_n + sh$, that is $s = -1$. Hence

$$f'(x_{n-1}) = \frac{1}{h}\left[\nabla f_n - \frac{\nabla^2 f_n}{2} - \frac{\nabla^3 f_n}{6} - \dots\right]$$

For the second derivative with respect to $x$ we have

$$\frac{d^2 f}{dx^2} = \frac{d}{dx}\left(\frac{df}{dx}\right) = \frac{1}{h^2}\frac{d}{ds}\left(\frac{df}{ds}\right) = \frac{1}{h^2}\left[\nabla^2 f_n + \frac{6s + 6}{6}\nabla^3 f_n + \frac{12s^2 + 36s + 22}{24}\nabla^4 f_n + \dots\right]$$

At $x = x_n$, that is at $s = 0$, we obtain the approximation to the second derivative at $x_n$:

$$f''(x_n) = \frac{1}{h^2}\left[\nabla^2 f_n + \nabla^3 f_n + \frac{11}{12}\nabla^4 f_n + \frac{5}{6}\nabla^5 f_n + \dots\right]$$

At $x = x_{n-1}$, that is at $s = -1$,

$$f''(x_{n-1}) = \frac{1}{h^2}\left[\nabla^2 f_n - \frac{1}{12}\nabla^4 f_n - \frac{1}{12}\nabla^5 f_n + \dots\right]$$

Remark 1.4.1 We use the backward difference formula for derivatives, when
we need the values of the derivatives near the end of the table of values

Using the operator relation, let us derive approximations to $f'(x_n)$ and $f''(x_n)$ in terms of the backward differences.
For the differential operator $D$ defined by $Df(x) = \dfrac{df}{dx} = f'(x)$, we have the relation with the shift operator $E = e^{hD}$, and we also have $E = (1 - \nabla)^{-1}$.
Then $(1 - \nabla)^{-1} = e^{hD}$; solving for $D$:

$$hD = \ln(1 - \nabla)^{-1} = -\ln(1 - \nabla), \qquad D = -\frac{1}{h}\ln(1 - \nabla)$$

Next, expanding $\ln(1 - \nabla)$ in a Maclaurin series, we have

$$D = -\frac{1}{h}\ln(1 - \nabla) = -\frac{1}{h}\left(-\nabla - \frac{\nabla^2}{2} - \frac{\nabla^3}{3} - \frac{\nabla^4}{4} - \dots\right) = \frac{1}{h}\left(\nabla + \frac{\nabla^2}{2} + \frac{\nabla^3}{3} + \frac{\nabla^4}{4} + \dots\right)$$

Thus

$$Df(x_n) = f'(x_n) = \frac{1}{h}\left[\nabla f(x_n) + \frac{\nabla^2 f(x_n)}{2} + \frac{\nabla^3 f(x_n)}{3} + \frac{\nabla^4 f(x_n)}{4} + \dots\right]$$
and for the second order

$$D^2 f(x_n) = f''(x_n) = \frac{1}{h^2}\left[\nabla^2 f(x_n) + \nabla^3 f(x_n) + \frac{11}{12}\nabla^4 f(x_n) + \frac{5}{6}\nabla^5 f(x_n) + \dots\right]$$

For higher order derivatives we have

$$D^r = \frac{1}{h^r}\left[-\ln(1 - \nabla)\right]^r, \qquad r = 1, 2, \dots$$

Example 7 Find $f'(3)$ and $f''(3)$ using Newton's backward difference formula, for the data

x     1     1.5     2.0    2.5     3.0
f(x)  −1.5  −2.875  −3.5   −2.625  0.5

Solution:-
The step length is $h = 0.5$, $x = x_n + sh$; for $x_n = 3$ we have $x = 3 + 0.5s$, and for $x = 3$, $s = 0$.
Let us draw the backward difference table:

x     f(x)      ▽f(x)    ▽²f(x)   ▽³f(x)   ▽⁴f(x)
1     −1.5
1.5   −2.875    −1.375
2.0   −3.5      −0.625   0.75
2.5   −2.625    0.875    1.5      0.75
3.0   0.5       3.125    2.25     0.75     0

Then

$$f'(x_n) = \frac{1}{h}\left[\nabla f_n + \frac{\nabla^2 f_n}{2} + \frac{\nabla^3 f_n}{3} + \frac{\nabla^4 f_n}{4}\right]$$

$$f'(3) = \frac{1}{0.5}\left[3.125 + \frac{2.25}{2} + \frac{0.75}{3} + 0\right] = 9$$

Then for the second derivative

$$f''(x_n) = \frac{1}{h^2}\left[\nabla^2 f_n + \nabla^3 f_n\right]$$

$$f''(3) = \frac{1}{(0.5)^2}[2.25 + 0.75] = 12$$
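A corresponding Python sketch for the backward-difference series (illustrative names; the data are assumed equispaced) reproduces the values of Example 7:

```python
def leading_backward_differences(ys):
    """Return [fn, ∇fn, ∇²fn, ..., ∇ⁿfn] from equispaced samples ys."""
    diffs, row = [ys[-1]], list(ys)
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[-1])
    return diffs

def d1_at_xn(ys, h):
    """f'(xn) ≈ (1/h)[∇fn + ∇²fn/2 + ∇³fn/3 + ...]"""
    d = leading_backward_differences(ys)
    return sum(d[k] / k for k in range(1, len(d))) / h

def d2_at_xn(ys, h):
    """f''(xn) ≈ (1/h²)[∇²fn + ∇³fn + (11/12)∇⁴fn + (5/6)∇⁵fn + ...]"""
    d = leading_backward_differences(ys)
    coeffs = [1.0, 1.0, 11 / 12, 5 / 6]          # coefficients of ∇², ∇³, ∇⁴, ∇⁵
    return sum(c * d[k + 2] for k, c in enumerate(coeffs) if k + 2 < len(d)) / h**2

ys = [-1.5, -2.875, -3.5, -2.625, 0.5]           # f(1), f(1.5), ..., f(3)
print(d1_at_xn(ys, 0.5))   # 9.0
print(d2_at_xn(ys, 0.5))   # 12.0
```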

1.4.2 Derivatives using Divided Difference formula

The divided difference interpolation polynomial fitting the data $(x_i, f(x_i))$, $i = 0, 1, 2, \dots, n$, is given by

$$f(x) = f(x_0) + (x - x_0)f[x_0, x_1] + (x - x_0)(x - x_1)f[x_0, x_1, x_2] + \dots + (x - x_0)(x - x_1)\cdots(x - x_{n-1})f[x_0, x_1, \dots, x_n]$$

Differentiating with respect to $x$,

$$f'(x) = f[x_0, x_1] + ((x - x_0) + (x - x_1))f[x_0, x_1, x_2] + ((x - x_1)(x - x_2) + (x - x_0)(x - x_2) + (x - x_0)(x - x_1))f[x_0, x_1, x_2, x_3] + \dots$$

For the second derivative,

$$f''(x) = 2f[x_0, x_1, x_2] + 2((x - x_0) + (x - x_1) + (x - x_2))f[x_0, x_1, x_2, x_3] + \dots$$

Example 8 Find $f''(3)$ for the following tabular data

x     0  1  2  4
f(x)  3  4  7  19

Solution:- Let us draw the divided difference table:

x   f(x)   first d.d.   second d.d.   third d.d.
0   3
           1
1   4                   1
           3                          0
2   7                   1
           6
4   19

$$f'(x) = 1 + ((x - 0) + (x - 1))\cdot 1 = 2x$$
$$f''(x) = 2(1) + 2((x - 0) + (x - 1) + (x - 2))\cdot 0 = 2$$

Example 9 Find $f''(1.6)$ for the following tabular data

x     1  1.5      2.0      3
f(x)  0  0.40547  0.69315  1.09861

Let us draw the divided difference table:

x     f(x)      first d.d.   second d.d.   third d.d.
1     0
                0.81094
1.5   0.40547                −0.235580
                0.57536                    0.061157
2.0   0.69315                −0.113265
                0.40546
3.0   1.09861

Then

$$f'(x) = f[x_0, x_1] + ((x - x_0) + (x - x_1))f[x_0, x_1, x_2] + \dots$$
$$f''(x) = 2f[x_0, x_1, x_2] + 2[(x - x_0) + (x - x_1) + (x - x_2)]f[x_0, x_1, x_2, x_3]$$
$$f''(1.6) = 2(-0.23558) + 2(0.6 + 0.1 - 0.4)(0.061157) = -0.47116 + 0.03669 = -0.43447$$
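A short Python sketch of the divided-difference approach (illustrative names; it keeps only the first two terms of the twice-differentiated Newton form, as above) reproduces the value of Example 9:

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients [f[x0], f[x0,x1], f[x0,x1,x2], ...]."""
    c = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def second_derivative(xs, ys, x):
    """f''(x) ≈ 2 f[x0,x1,x2] + 2((x-x0)+(x-x1)+(x-x2)) f[x0,x1,x2,x3]."""
    c = divided_differences(xs, ys)
    out = 2 * c[2]
    if len(c) > 3:
        out += 2 * sum(x - xi for xi in xs[:3]) * c[3]
    return out

xs = [1.0, 1.5, 2.0, 3.0]
ys = [0.0, 0.40547, 0.69315, 1.09861]
print(second_derivative(xs, ys, 1.6))   # ≈ -0.43447
```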

Remark 1.4.2 In applications we may require the maximum or minimum of a function given as tabulated data. We use a numerical differentiation formula for the first derivative and equate it to zero to find the stationary (critical) points. The numerical values obtained for the second derivative at these stationary points decide whether there is a maximum or a minimum at these points.

1.4.3 Differentiating using Lagrange interpolation formula

If we approximate $f$ by a second degree Lagrange interpolation polynomial,

$$f(x) \approx P_2(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}f(x_0) + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}f(x_1) + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}f(x_2)$$

For equispaced points, $x_2 - x_1 = x_1 - x_0 = h$ and $x_2 - x_0 = 2h$, so

$$f'(x) \approx P_2'(x) = \frac{2x - x_1 - x_2}{2h^2}f(x_0) - \frac{2x - x_0 - x_2}{h^2}f(x_1) + \frac{2x - x_0 - x_1}{2h^2}f(x_2)$$

Evaluating at the points $x_0$, $x_1$, $x_2$ we obtain the "three point formulas":

$$f'(x_0) = \frac{2x_0 - x_1 - x_2}{2h^2}f(x_0) + \frac{2x_0 - x_0 - x_2}{-h^2}f(x_1) + \frac{2x_0 - x_0 - x_1}{2h^2}f(x_2) = \frac{-3h}{2h^2}f_0 + \frac{2h}{h^2}f_1 + \frac{-h}{2h^2}f_2 = \frac{1}{2h}[-3f_0 + 4f_1 - f_2]$$

$$f'(x_1) = \frac{1}{2h}[-f_0 + f_2]$$

$$f'(x_2) = \frac{1}{2h}[f_0 - 4f_1 + 3f_2]$$

Example 10 For the following data use Lagrange interpolation to find $f'(0.4)$

x     0.0   0.2     0.4     0.6     0.8
f(x)  0.0   0.0016  0.0256  0.1296  0.4096

Solution:-
Using the backward end-point formula with the three points $x_0 = 0.0$, $x_1 = 0.2$, $x_2 = 0.4$ and $h = 0.2$:

$$f'(x_2) = \frac{1}{2h}[f_0 - 4f_1 + 3f_2] = \frac{1}{2(0.2)}[0 - 4(0.0016) + 3(0.0256)] = 0.176$$
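The three-point formulas are easy to package as one helper; the sketch below (an illustration, with an assumed parameter `at` selecting the node) reproduces the value 0.176 of Example 10:

```python
def d1_three_point(f0, f1, f2, h, at=0):
    """Three-point first-derivative formulas for values f0, f1, f2 at
    x0, x0+h, x0+2h; 'at' selects the node (0, 1 or 2)."""
    if at == 0:
        return (-3*f0 + 4*f1 - f2) / (2*h)   # forward end point
    if at == 1:
        return (f2 - f0) / (2*h)             # central point
    return (f0 - 4*f1 + 3*f2) / (2*h)        # backward end point

# Example 10: f'(0.4) from the values at x = 0.0, 0.2, 0.4
print(d1_three_point(0.0, 0.0016, 0.0256, 0.2, at=2))   # 0.176
```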

1.5 Integration (Trapezoidal and Simpson's rule)

We consider the problem of numerical evaluation of a definite integral

$$I = \int_a^b f(x)\,dx$$

of a given function $f$ on an interval $[a, b]$. Geometrically, $I$ is the area of the region $R$ under the graph of $f$ between $a$ and $b$.
If we can find an antiderivative $F$ of $f$, i.e. $f(x) = F'(x)$, then

$$I = \int_a^b f(x)\,dx = F(b) - F(a)$$

But if $F$ is too complicated to find, or $f$ is given by a table of values, we use numerical methods to approximate $I$.
When the nodes $x_k$ are prescribed and equispaced with $x_0 = a$, $x_n = b$ and $h = \dfrac{b - a}{n}$, the integral is approximated by a formula of the form

$$I = \int_a^b f(x)\,dx \approx \sum_{k=0}^{n} \lambda_k f(x_k) = \lambda_0 f(x_0) + \lambda_1 f(x_1) + \lambda_2 f(x_2) + \dots + \lambda_n f(x_n)$$

1.5.1 Trapezoidal Rule

Let the curve $y = f(x)$, $a \le x \le b$, be approximated by the line joining the points $P(a, f(a))$, $Q(b, f(b))$ on the curve.
Using Newton's forward difference formula, the linear polynomial approximation to $f(x)$ interpolating at $P(a, f(a))$, $Q(b, f(b))$ is given by

$$f(x) \approx f(x_0) + \frac{1}{h}(x - x_0)\triangle f(x_0)$$

where $x_0 = a$, $x_1 = b$ and $h = b - a$. Then

$$I = \int_a^b f(x)\,dx = \int_{x_0}^{x_1}\left[f(x_0) + \frac{1}{h}(x - x_0)\triangle f(x_0)\right]dx = f(x_0)[x_1 - x_0] + \frac{1}{h}\triangle f(x_0)\frac{(x_1 - x_0)^2}{2} = hf(x_0) + \frac{h}{2}\triangle f(x_0)$$
$$= hf(x_0) + \frac{h}{2}[f(x_1) - f(x_0)] = \frac{h}{2}[f(x_0) + f(x_1)] = \frac{(b - a)}{2}[f(a) + f(b)]$$

Hence the trapezoidal rule is given by

$$I = \int_a^b f(x)\,dx \approx \frac{h}{2}[f(x_0) + f(x_1)] = \frac{(b - a)}{2}[f(a) + f(b)]$$

1.5.2 Error in the Trapezoidal rule

The error bound is given by

$$|R_1(f, x)| \le \frac{(b - a)^3}{12}M_2 = \frac{h^3}{12}M_2, \qquad M_2 = \max_{a \le x \le b}|f''(x)|$$

In a similar manner, if the interval $[a, b]$ is subdivided into $n$ subintervals, then

$$I = \int_a^b f(x)\,dx = \int_{x_0}^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \dots + \int_{x_{n-1}}^{x_n} f(x)\,dx$$

Using the trapezoidal rule to evaluate each integral we get

$$\int_a^b f(x)\,dx \approx \frac{h}{2}\left[(f(x_0) + f(x_1)) + (f(x_1) + f(x_2)) + \dots + (f(x_{n-2}) + f(x_{n-1})) + (f(x_{n-1}) + f(x_n))\right] = \frac{h}{2}\left[f(x_0) + 2[f(x_1) + f(x_2) + \dots + f(x_{n-1})] + f(x_n)\right]$$

The error expression becomes

$$|R_1(f, x)| \le \frac{h^3}{12}\left|f''(\xi_1) + f''(\xi_2) + \dots + f''(\xi_n)\right|, \qquad x_{k-1} < \xi_k < x_k$$
Remark 1.5.1 The trapezium rule provides exact results for polynomials of degree ≤ 1.

Example 11 Consider the Lagrange interpolating polynomial between the points $P(a, f(a))$, $Q(b, f(b))$. Then

$$f(x) \approx \frac{(x - b)}{a - b}f(a) + \frac{(x - a)}{b - a}f(b) = \frac{1}{b - a}[bf(a) - xf(a) + xf(b) - af(b)] = \frac{1}{b - a}[x(f(b) - f(a)) + bf(a) - af(b)]$$

Substituting in the integral,

$$I = \int_a^b f(x)\,dx = \frac{1}{b - a}\int_a^b [x(f(b) - f(a)) + bf(a) - af(b)]\,dx = \frac{f(b) - f(a)}{b - a}\left[\frac{x^2}{2}\right]_a^b + \frac{bf(a) - af(b)}{b - a}[x]_a^b$$
$$= \frac{f(b) - f(a)}{b - a}\cdot\frac{b^2 - a^2}{2} + \frac{bf(a) - af(b)}{b - a}[b - a] = \frac{f(b) - f(a)}{2}[b + a] + bf(a) - af(b) = \frac{(b - a)}{2}[f(a) + f(b)]$$

Example 12 Find the approximate value of $I = \displaystyle\int_0^1 \frac{dx}{1 + x}$ using the trapezoidal rule with 2 and 4 subintervals.

Solution:-
With $n = 2$, $h = \frac{1 - 0}{2} = \frac{1}{2}$, the nodes are 0, 0.5, 1.
With $n = 4$, $h = \frac{1 - 0}{4} = \frac{1}{4}$, the nodes are 0, 0.25, 0.5, 0.75, 1.
Then we have the following values:

x     0   0.25   0.5     0.75    1
f(x)  1   0.8    0.6667  0.5714  0.5

Now we compute the value of the integral.

For $n = 2$:

$$I = \frac{h}{2}[f(0) + 2f(0.5) + f(1)] = \frac{0.5}{2}[1 + 2(0.6667) + 0.5] = 0.708334$$

For $n = 4$:

$$I = \frac{h}{2}[f(0) + 2f(0.25) + 2f(0.5) + 2f(0.75) + f(1)] = \frac{0.25}{2}[1 + 2(0.8 + 0.6667 + 0.571429) + 0.5] = 0.697024$$

The exact value of the integral is $\ln 2 = 0.693147$, hence the error is $|0.693147 - 0.697024| = 0.003877$.

The error bound is

$$|R_1(f, x)| \le \frac{(1 - 0)^3}{12}\max_{0 \le x \le 1}|f''(x)|$$

Since $f(x) = \dfrac{1}{1 + x}$, $f'(x) = \dfrac{-1}{(1 + x)^2}$ and $f''(x) = \dfrac{2}{(1 + x)^3}$,

$$|R_1(f, x)| \le \frac{(1 - 0)^3}{12}\cdot 2 = \frac{1}{6} = 0.16667$$

and indeed $0.003877 < 0.16667$.
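A composite trapezoidal rule takes a few lines of Python; the sketch below (illustrative names only) reproduces the values of Example 12:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return h * total

import math
f = lambda x: 1.0 / (1.0 + x)
print(trapezoid(f, 0.0, 1.0, 2))                     # ≈ 0.70833
print(trapezoid(f, 0.0, 1.0, 4))                     # ≈ 0.69702
print(abs(math.log(2) - trapezoid(f, 0.0, 1.0, 4)))  # ≈ 0.0039
```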

Example 13 The velocity of a particle which starts from rest is given by the
following table

t sec 0 2 4 6 8 10 12 14 16 18 20
v m/s 0 16 29 40 46 51 32 18 8 3 0

Evaluate the total distance traveled in 20 seconds using Trapezoidal rule.

Solution:-

$$v = \frac{ds}{dt} \;\Rightarrow\; ds = v\,dt \;\Rightarrow\; s = \int v\,dt$$

$$s = \int_0^{20} v\,dt, \quad\text{with } h = 2, \text{ using the trapezoidal rule:}$$

$$s = \frac{h}{2}\left[f(0) + 2[f(2) + f(4) + f(6) + f(8) + f(10) + f(12) + f(14) + f(16) + f(18)] + f(20)\right] = \frac{2}{2}\left[0 + 2[16 + 29 + 40 + 46 + 51 + 32 + 18 + 8 + 3] + 0\right] = 486\ \text{m}$$

1.5.3 Simpson's Rule

• Simpson's rule approximates $f(x)$ by a piecewise quadratic function.

• It is more practical because it is sufficiently simple and more accurate.

• Let the interval $[a, b]$ be subdivided into two equal parts with step length $h = \frac{(b - a)}{2}$; we have three abscissas $x_0 = a$, $x_1 = \frac{a + b}{2}$ and $x_2 = b$.
Then $P(x_0, f(x_0))$, $Q(x_1, f(x_1))$, $R(x_2, f(x_2))$ are three points on the curve $y = f(x)$, which we approximate by the parabola joining $P$, $Q$ and $R$.
Using Newton's forward difference formula, the quadratic polynomial approximation to $f(x)$, interpolating at the points $P(x_0, f(x_0))$, $Q(x_1, f(x_1))$, $R(x_2, f(x_2))$, is given by

$$f(x) \approx f(x_0) + \frac{(x - x_0)}{h}\triangle f(x_0) + \frac{(x - x_0)(x - x_1)}{2h^2}\triangle^2 f(x_0)$$

Substituting in the definite integral we obtain

$$\int_a^b f(x)\,dx = \int_{x_0}^{x_2} f(x)\,dx = \int_{x_0}^{x_2}\left[f(x_0) + \frac{(x - x_0)}{h}\triangle f(x_0) + \frac{(x - x_0)(x - x_1)}{2h^2}\triangle^2 f(x_0)\right]dx$$

$$= f(x_0)(x_2 - x_0) + \frac{\triangle f(x_0)}{h}\left[\frac{(x - x_0)^2}{2}\right]_{x_0}^{x_2} + \frac{\triangle^2 f(x_0)}{2h^2}\left[\frac{x^3}{3} - (x_0 + x_1)\frac{x^2}{2} + x_0 x_1 x\right]_{x_0}^{x_2}$$

Substituting $x_2 = x_0 + 2h$ and $x_1 = x_0 + h$,

$$= 2h f(x_0) + \frac{\triangle f(x_0)}{h}\cdot\frac{(2h)^2}{2} + \frac{\triangle^2 f(x_0)}{2h^2}\cdot\frac{2h^3}{3} = 2h f(x_0) + 2h\triangle f(x_0) + \frac{h}{3}\triangle^2 f(x_0)$$

$$= \frac{h}{3}\left[6f(x_0) + 6[f(x_1) - f(x_0)] + (f(x_2) - 2f(x_1) + f(x_0))\right] = \frac{h}{3}\left[f(x_0) + 4f(x_1) + f(x_2)\right]$$

In terms of the end points,

$$\int_a^b f(x)\,dx \approx \frac{(b - a)}{6}\left[f(a) + 4f\!\left(\frac{a + b}{2}\right) + f(b)\right]$$

This formula is called Simpson's rule of approximation.


Alternatively, let $x = x_0 + sh$; then $s = \frac{x - x_0}{h}$. The limits of integration become $s = 0$ for $x = x_0$ and $s = 2$ for $x = x_2$. We also have $dx = h\,ds$. Hence

$$\int_{x_0}^{x_2} f(x)\,dx = h\int_0^2\left[f(x_0) + s\triangle f(x_0) + \frac{1}{2}s(s - 1)\triangle^2 f(x_0)\right]ds = h\left[f(x_0)s + \frac{s^2}{2}\triangle f(x_0) + \frac{\triangle^2 f(x_0)}{2}\left(\frac{s^3}{3} - \frac{s^2}{2}\right)\right]_0^2$$

$$= h\left[2f(x_0) + 2\triangle f(x_0) + \frac{1}{3}\triangle^2 f(x_0)\right] = \frac{h}{3}\left[6f(x_0) + 6[f(x_1) - f(x_0)] + [f(x_2) - 2f(x_1) + f(x_0)]\right] = \frac{h}{3}\left[f(x_0) + 4f(x_1) + f(x_2)\right]$$

which is the same formula derived above.


Simpson's rule integrates exactly polynomials of degree $\le 3$. That is, using the definition of the error we show that

$$R_2(f, x) = 0 \quad\text{for } f(x) = 1,\ x,\ x^2,\ x^3.$$

For $f(x) = 1$:

$$R_2(f, x) = \int_a^b 1\,dx - \frac{(b - a)}{6}(6) = (b - a) - (b - a) = 0$$

For $f(x) = x$:

$$R_2(f, x) = \int_a^b x\,dx - \frac{(b - a)}{6}\left[a + 4\frac{a + b}{2} + b\right] = \frac{b^2 - a^2}{2} - \frac{(b - a)}{6}(3a + 3b) = \frac{(b - a)(b + a)}{2} - \frac{(b - a)(b + a)}{2} = 0$$
For $f(x) = x^2$:

$$R_2(f, x) = \int_a^b x^2\,dx - \frac{(b - a)}{6}\left[a^2 + 4\left(\frac{a + b}{2}\right)^2 + b^2\right] = \frac{b^3 - a^3}{3} - \frac{(b - a)}{6}\left[a^2 + (a^2 + 2ab + b^2) + b^2\right] = \frac{b^3 - a^3}{3} - \frac{(b - a)(a^2 + ab + b^2)}{3} = \frac{b^3 - a^3}{3} - \frac{b^3 - a^3}{3} = 0$$

For $f(x) = x^3$:

$$R_2(f, x) = \int_a^b x^3\,dx - \frac{(b - a)}{6}\left[a^3 + 4\left(\frac{a + b}{2}\right)^3 + b^3\right] = \frac{b^4 - a^4}{4} - \frac{(b - a)}{6}\left[a^3 + \frac{a^3 + 3a^2 b + 3ab^2 + b^3}{2} + b^3\right] = \frac{b^4 - a^4}{4} - \frac{(b - a)(a^3 + a^2 b + ab^2 + b^3)}{4} = \frac{b^4 - a^4}{4} - \frac{b^4 - a^4}{4} = 0$$

Simpson's rule integrates exactly polynomials of degree $\le 3$; therefore the method is of order 3.
Let $f(x) = x^4$. Then

$$R_2(f, x) = \int_a^b x^4\,dx - \frac{(b - a)}{6}\left[a^4 + 4\left(\frac{a + b}{2}\right)^4 + b^4\right] = \frac{b^5 - a^5}{5} - \frac{(b - a)}{6}\left[a^4 + \frac{a^4 + 4a^3 b + 6a^2 b^2 + 4ab^3 + b^4}{4} + b^4\right]$$
$$= \frac{b^5 - a^5}{5} - \frac{(b - a)}{24}\left[5a^4 + 4a^3 b + 6a^2 b^2 + 4ab^3 + 5b^4\right] = \frac{1}{120}\left[24(b^5 - a^5) - 5(b - a)(5a^4 + 4a^3 b + 6a^2 b^2 + 4ab^3 + 5b^4)\right] = \frac{-(b - a)^5}{120}$$

The general expression for the error is given by

$$R_2(f, x) = \frac{-(b - a)^5}{(120)\,4!}f^{(4)}(\xi) = \frac{-h^5}{90}f^{(4)}(\xi)$$

since $h = \frac{(b - a)}{2}$ and $a \le \xi \le b$, so that

$$|R_2(f, x)| \le \frac{(b - a)^5}{2880}M_4 = \frac{h^5}{90}M_4, \qquad M_4 = \max_{a \le x \le b}|f^{(4)}(x)|$$

In general, if the interval $[a, b]$ is too large, we divide it into an even number of subintervals $2N$, with step length $h = \dfrac{(b - a)}{2N}$:

$$\int_a^b f(x)\,dx = \int_{x_0}^{x_{2N}} f(x)\,dx = \int_{x_0}^{x_2} f(x)\,dx + \int_{x_2}^{x_4} f(x)\,dx + \dots + \int_{x_{2N-2}}^{x_{2N}} f(x)\,dx$$

Note that there are $N$ integrals; evaluating each integral by Simpson's rule,

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[(f(x_0) + 4f(x_1) + f(x_2)) + (f(x_2) + 4f(x_3) + f(x_4)) + \dots + (f(x_{2N-2}) + 4f(x_{2N-1}) + f(x_{2N}))\right]$$
$$= \frac{h}{3}\left[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + 2f(x_4) + \dots + 2f(x_{2N-2}) + 4f(x_{2N-1}) + f(x_{2N})\right]$$

This is of order 3.

Example 14 Evaluate $\displaystyle\int_0^1 e^{-x^2}\,dx$ by Simpson's rule with $n = 10$.

We subdivide $[0, 1]$ into $n = 10$ intervals with $h = \frac{1 - 0}{10} = 0.1$:

x     0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1
f(x)  1   0.99  0.96  0.91  0.85  0.78  0.7   0.61  0.53  0.44  0.37

$$\int_0^1 e^{-x^2}\,dx \approx \frac{0.1}{3}\left[1 + 4(0.99) + 2(0.96) + 4(0.91) + 2(0.85) + 4(0.78) + 2(0.7) + 4(0.61) + 2(0.53) + 4(0.44) + 0.37\right] = 0.7456$$
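A composite Simpson's rule in Python (a sketch, with illustrative names; note that it evaluates the integrand at full precision, so its first result is about 0.74682 rather than the 0.7456 obtained from the two-decimal table above):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule; n (the number of subintervals) must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return h * total / 3

print(simpson(lambda x: math.exp(-x * x), 0.0, 1.0, 10))   # ≈ 0.74682
print(simpson(lambda x: 1.0 / (1.0 + x), 0.0, 1.0, 4))     # ≈ 0.69325, cf. Example 15
```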

Example 15 Evaluate $\displaystyle\int_0^1 \frac{dx}{1 + x}$ with four equal subintervals by Simpson's rule ($n = 4$).

We subdivide $[0, 1]$ into $n = 4$ intervals with $h = \frac{1 - 0}{4} = 0.25$:

x     0   0.25   0.5    0.75   1
f(x)  1   0.8    0.67   0.57   0.5

$$\int_0^1 \frac{dx}{1 + x} \approx \frac{0.25}{3}\left[1 + 4(0.8) + 2(0.67) + 4(0.57) + 0.5\right] = 0.6933$$

The exact value of the integral is $I = \ln 2 = 0.693147$, so

$$|\text{Exact} - I| = |0.693147 - 0.6933| \approx 0.0002$$
