Numerical
March 7, 2025
Contents
1 Introduction
2 Solution of system of linear equations
3 Solution of transcendental equations
4 Interpolation
5 Numerical Integration
1 Introduction
There are many problems in science and engineering which cannot be solved exactly. For example:

    ∫₀¹ dx/(x + sin x),        dy/dx = x + sin x,        tan x = x.

There are no methods which produce the exact solution of such problems. Even when a method exists to solve a problem, it may be too complicated to use. In such situations, we use numerical methods.
Important:
1. When a number is truncated after five decimal places, all digits after the fifth decimal place are removed without rounding.
2. "Correct up to four decimal places with rounding off" means: in the iteration process, when two consecutive iterations give the same value of the desired parameter up to four decimal places, you stop there. The final answer should be written after rounding off.
3. If nothing is mentioned in the question, then you should find the answer correct to at least three decimal places with rounding off.
4. If there is a trigonometric function in your calculation, then all the calculations should be done in radian mode, unless the question specifies degree mode.
2 Solution of system of linear equations
There are two methods in your syllabus to solve a system of linear equations numerically:
1. Jacobi Method
2. Gauss-Seidel Method
Necessary condition for the Jacobi method and the Gauss-Seidel method: the coefficient matrix A has no zero on its main diagonal; namely, a11, a22, . . . , ann are nonzero.
To begin, solve the 1st equation for x1 , the 2nd equation for x2 and so on to obtain the rewritten
equations:
    x1 = (1/a11)(b1 − a12 x2 − a13 x3 − · · · − a1n xn)
    x2 = (1/a22)(b2 − a21 x1 − a23 x3 − · · · − a2n xn)
    ...
    xn = (1/ann)(bn − an1 x1 − an2 x2 − · · · − an,n−1 xn−1)
Then make an initial guess of the solution x^(0) = (x1^(0), x2^(0), . . . , xn^(0)).
Substitute these values into the right-hand side of the rewritten equations to obtain the first approximation (x1^(1), x2^(1), . . . , xn^(1)). This accomplishes one iteration.
In the same way, the second approximation (x1^(2), x2^(2), . . . , xn^(2)) is computed by substituting the first approximation's values into the right-hand side of the rewritten equations.
By repeated iterations, we form a sequence of approximations x^(k) = (x1^(k), x2^(k), . . . , xn^(k)), k = 1, 2, 3, . . ..
The iteration formula of the Jacobi method:
For each k ≥ 1, generate the components xi^(k) of x^(k) from x^(k−1) by

    xi^(k) = (1/aii) [ Σ_{j=1, j≠i}^{n} (−aij xj^(k−1)) + bi ]   for i = 1, 2, . . . , n.
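For concreteness, the iteration formula can be turned into a short program. The following Python sketch is an illustration only; the function name, tolerance-based stopping rule and iteration cap are choices of this sketch, not part of the method:

    import numpy as np

    def jacobi(A, b, x0, tol=1e-6, max_iter=100):
        # Jacobi: every component of x^(k) uses only the previous iterate x^(k-1)
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        x = np.asarray(x0, dtype=float)
        d = np.diag(A)                 # a_11, ..., a_nn (must all be nonzero)
        R = A - np.diagflat(d)         # off-diagonal part of A
        for _ in range(max_iter):
            x_new = (b - R @ x) / d    # x_i = (b_i - sum_{j != i} a_ij x_j) / a_ii
            if np.max(np.abs(x_new - x)) < tol:
                return x_new
            x = x_new
        return x

For the example below (whose rewritten equations can be read off as coming from the system 5x1 − 2x2 + 3x3 = −1, −3x1 + 9x2 + x3 = 2, 2x1 − x2 − 7x3 = 3), jacobi(A, b, [0, 0, 0]) returns approximately (0.186, 0.331, −0.423), matching the iteration table.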
Example: Use the Jacobi method to approximate the solution of the following system of linear
equations.
    x1 = −1/5 + (2/5) x2 − (3/5) x3
    x2 = 2/9 + (3/9) x1 − (1/9) x3
    x3 = −3/7 + (2/7) x1 − (1/7) x2
(x1^(0), x2^(0), x3^(0)) = (0, 0, 0)

    x1^(1) = −1/5 + (2/5)(0) − (3/5)(0) = −0.200
    x2^(1) = 2/9 + (3/9)(0) − (1/9)(0) ≈ 0.222
    x3^(1) = −3/7 + (2/7)(0) − (1/7)(0) ≈ −0.429
n     0      1       2       3       4       5       6       7
x1    0.000  −0.200  0.146   0.192   0.181   0.185   0.186   0.186
x2    0.000  0.222   0.203   0.328   0.332   0.329   0.331   0.331
x3    0.000  −0.429  −0.517  −0.416  −0.421  −0.424  −0.423  −0.423
Convergence of the Jacobi method: If the iteration sequence x^(k) = (x1^(k), . . . , xn^(k)) converges to the exact solution of Ax = b, then we say that the Jacobi method is convergent.
Strictly diagonally dominant: An n×n matrix A is strictly diagonally dominant if the absolute value of each entry on the main diagonal is greater than the sum of the absolute values of the other entries in the same row. That is, |aii| > Σ_{j=1, j≠i}^{n} |aij| for each i = 1, . . . , n.
Result 1. If A is strictly diagonally dominant, then the system of linear equations given by Ax = b has a unique solution, to which the Jacobi method will converge for any initial guess.
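Strict diagonal dominance is easy to test mechanically. A small Python helper (ours, not from the notes) implementing the row condition |aii| > Σ_{j≠i} |aij|:

    import numpy as np

    def is_strictly_diagonally_dominant(A):
        # |a_ii| > sum of |a_ij| over j != i, for every row i
        A = np.abs(np.asarray(A, dtype=float))
        diag = np.diag(A)
        off = A.sum(axis=1) - diag
        return bool(np.all(diag > off))

Note that the condition is sufficient, not necessary: the Jacobi iteration may converge (as in the example above, where the first row gives |5| = |−2| + |3|) even when this test fails.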
A can be written in the form A = D + R, where

    D = [ a11  0    · · ·  0
          0    a22  · · ·  0
          ...
          0    0    · · ·  ann ]

    R = [ 0    a12  a13  · · ·  a1n
          a21  0    a23  · · ·  a2n
          ...
          an1  an2  an3  · · ·  0 ]

Define H = D⁻¹R (−H is called the iteration matrix). Actually H is generated from the iteration formula.
Let λ1, λ2, . . . , λn be the eigenvalues of H, and let ρ(H) = max{|λ1|, . . . , |λn|} denote the spectral radius of H.
Result 2. The Jacobi method is convergent for any initial guess if and only if ρ(H) < 1.
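Result 2 can be checked numerically: build H = D⁻¹R and compute its spectral radius. A short sketch (the helper name is ours):

    import numpy as np

    def jacobi_spectral_radius(A):
        A = np.asarray(A, dtype=float)
        D = np.diagflat(np.diag(A))
        H = np.linalg.solve(D, A - D)          # H = D^{-1} R
        return max(abs(np.linalg.eigvals(H)))  # rho(H) = max |eigenvalue|

The Jacobi iteration for Ax = b converges for every initial guess exactly when this returns a value less than 1.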
The Gauss-Seidel Method: This method is identical to the Jacobi method, except that each newly computed component is used immediately in the computations that follow within the same iteration. Consider the same system:

    x1 = −1/5 + (2/5) x2 − (3/5) x3
    x2 = 2/9 + (3/9) x1 − (1/9) x3
    x3 = −3/7 + (2/7) x1 − (1/7) x2

(x1^(0), x2^(0), x3^(0)) = (0, 0, 0)

    x1^(1) = −1/5 + (2/5)(0) − (3/5)(0) = −0.200
    x2^(1) = 2/9 + (3/9)(−0.200) − (1/9)(0) ≈ 0.156
    x3^(1) = −3/7 + (2/7)(−0.200) − (1/7)(0.156) ≈ −0.508
n     0      1       2       3       4       5
x1    0.000  −0.200  0.167   0.191   0.186   0.186
x2    0.000  0.156   0.334   0.333   0.331   0.331
x3    0.000  −0.508  −0.429  −0.422  −0.423  −0.423
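A Python sketch of the Gauss-Seidel sweep (again, the names and stopping rule are our choices); note how x[i] is overwritten inside the loop, so later components already use the updated values:

    import numpy as np

    def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        x = np.asarray(x0, dtype=float).copy()
        n = len(b)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                # components 0..i-1 are already updated; i+1..n-1 come from the previous sweep
                s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.max(np.abs(x - x_old)) < tol:
                return x
        return x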
Convergence of the Gauss-Seidel method: If the iteration sequence x^(k) = (x1^(k), . . . , xn^(k)) converges to the exact solution of Ax = b, then we say that the Gauss-Seidel method is convergent.
Condition for convergence:
Result 1. If A is strictly diagonally dominant, then the system of linear equations given by Ax = b has a unique solution, to which the Gauss-Seidel method will converge for any initial guess.
A can be written in the form A = L + D + U, where

    D = [ a11  0    · · ·  0
          0    a22  · · ·  0
          ...
          0    0    · · ·  ann ]

    L = [ 0    0    0    · · ·  0
          a21  0    0    · · ·  0
          ...
          an1  an2  an3  · · ·  0 ]

    U = [ 0  a12  a13  · · ·  a1n
          0  0    a23  · · ·  a2n
          ...
          0  0    0    · · ·  0 ]

Define H = (D + L)⁻¹U (−H is called the iteration matrix). Actually H is generated from the iteration formula.
Result 2. The Gauss-Seidel method is convergent for any initial guess if and only if ρ(H) < 1.
Example: Consider the system

    [ −2   1   5 ] [x]   [  15 ]
    [  4  −8   1 ] [y] = [ −21 ]
    [  4  −1  −2 ] [z]   [  −2 ]

Show that the Gauss-Seidel method is divergent.
Solution: Since the coefficient matrix A is not strictly diagonally dominant, we have to use Result 2. Here

    D = [ −2   0   0      L = [ 0   0  0      U = [ 0  1  5
           0  −8   0            4   0  0            0  0  1
           0   0  −2 ],         4  −1  0 ],         0  0  0 ].

    D + L = [ −2   0   0
               4  −8   0
               4  −1  −2 ]

    (D + L)⁻¹ = (1/32) [ −16   0    0
                          −8  −4    0
                         −28   2  −16 ]

    H = (D + L)⁻¹ U = (1/32) [ 0  −16   −80
                               0   −8   −44
                               0  −28  −138 ]

The eigenvalues of H are 0 and (−73 ± √5457)/32, that is, approximately 0, 0.027 and −4.590. Since ρ(H) = 4.590 > 1, the Gauss-Seidel method is divergent.
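The computation above can be verified in a few lines of numpy (the triangular splits tril/triu correspond to D + L and U):

    import numpy as np

    A = np.array([[-2.,  1.,  5.],
                  [ 4., -8.,  1.],
                  [ 4., -1., -2.]])
    H = np.linalg.solve(np.tril(A), np.triu(A, k=1))   # H = (D + L)^{-1} U
    print(np.linalg.eigvals(H))   # approximately [0, 0.027, -4.59]; rho(H) = 4.59 > 1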
Remark: The iteration matrix −H for the Gauss-Seidel method is different from the iteration matrix −H for the Jacobi method.
3 Solution of transcendental equations:
There are four methods in the syllabus to calculate approximate solutions of transcendental equations:
1. Bisection Method
2. Fixed Point Iteration
3. Newton-Raphson Method
4. Regula Falsi Method
Bisection Method: The method is applicable for numerically solving the equation f(x) = 0 for the real variable x, where f is a continuous function on an interval [a, b] and where f(a) and f(b) have opposite signs.
Result: Let f : [a, b] → R be a continuous function. Assume that f(a) and f(b) have opposite signs. Then f must have at least one root in the interval (a, b).
1. The Bisection Method is a successive approximation method that narrows down an interval that contains a root of the function f(x).
2. The Bisection Method is given an initial interval [a, b] that contains a root.
3. The Bisection Method cuts the interval into two halves and checks which half contains a root of the function.
4. The Bisection Method keeps cutting the interval in half until the resulting interval is extremely small. A sketch of the scheme follows.
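A minimal Python sketch of the scheme (the tolerance and iteration cap are our choices):

    def bisection(f, a, b, tol=1e-6, max_iter=100):
        if f(a) * f(b) >= 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            c = (a + b) / 2                   # midpoint of the current interval
            if f(c) == 0 or (b - a) / 2 < tol:
                return c
            if f(a) * f(c) < 0:               # root lies in [a, c]
                b = c
            else:                             # root lies in [c, b]
                a = c
        return (a + b) / 2

For the example below, bisection(lambda x: x**3 - x - 2, 1, 2) returns approximately 1.52138.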
Example: Use the bisection method for the following function: f(x) = x³ − x − 2.
Sol: f(1) = −2 and f(2) = 4, so f(x) has a root in [a, b] = [1, 2]. The following table shows the first 15 iterations. 1st iteration: a1 = a = 1, b1 = b = 2 and c1 = (a1 + b1)/2 = 1.5.
n     an         bn         cn         f(cn)
1     1          2          1.5        −0.1250000
2     1.5        2          1.75       1.6093750
3     1.5        1.75       1.625      0.6660156
4     1.5        1.625      1.5625     0.2521973
5     1.5        1.5625     1.5312500  0.0591125
6     1.5        1.5312500  1.5156250  −0.0340538
7     1.5156250  1.5312500  1.5234375  0.0122504
8     1.5156250  1.5234375  1.5195313  −0.0109712
9     1.5195313  1.5234375  1.5214844  0.0006222
10    1.5195313  1.5214844  1.5205078  −0.0051789
11    1.5205078  1.5214844  1.5209961  −0.0022794
12    1.5209961  1.5214844  1.5212402  −0.0008289
13    1.5212402  1.5214844  1.5213623  −0.0001034
14    1.5213623  1.5214844  1.5214233  0.0002594
15    1.5213623  1.5214233  1.5213928  0.0000780
After 13 iterations, it becomes apparent that the iterates converge to about 1.521, a root of f(x) = x³ − x − 2.
Error of the bisection method: Consider the equation f(x) = 0 which has a root c in the interval [a, b]. Then the error after the nth iteration is

    |cn − c| ≤ (b − a)/2ⁿ.
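This bound tells us in advance how many iterations are enough. For the example above, [a, b] = [1, 2], so to guarantee |cn − c| ≤ 10⁻³ we need (2 − 1)/2ⁿ ≤ 10⁻³, i.e. 2ⁿ ≥ 1000, which holds from n = 10 onwards.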
Fixed point iteration: A point x0 is called a fixed point of a function f if f(x0) = x0.
Example: 1. f : R → R, defined by f(x) = x. All the points in R are fixed points of f.
2. f : R → R, defined by f(x) = x² − 1. Calculate all the fixed points of f(x). Let x0 be a fixed point of f. Then f(x0) = x0 ⟹ x0² − 1 = x0 ⟹ x0² − x0 − 1 = 0. This says that all the roots of this equation are the fixed points of f(x), and those are (1 ± √5)/2.
Fixed point iteration is an iteration method to compute an approximate real root of a transcendental equation f(x) = 0.
Let

    f(x) = 0     (1)

be an equation. By some algebraic means we transform (1) into

    x = g(x).     (2)

It is clear that x0 is a root of f(x) = 0 if and only if x0 is a fixed point of g(x). Starting from an initial guess x0, we compute x1 = g(x0), x2 = g(x1), and in general xn+1 = g(xn).
Convergence of fixed point iteration: If the iteration sequence (xn) converges to a root of f(x) = 0, then we say the fixed point iteration method is convergent.
Conditions for convergence: If g has a fixed point, then that fixed point is a root of f(x) = 0. So we supply a sufficient condition on g(x) to have a fixed point.
Let f(x) = 0 have a root in [a, b]. If g satisfies the following conditions in [a, b], then g must have a fixed point in [a, b] and the iteration converges to it:
1. g(x) must be continuous in [a, b];
2. g′(x) must exist and be continuous in (a, b);
3. there is a constant k < 1 such that |g′(x)| ≤ k for all x ∈ (a, b).
Remark: We are considering the interval where f (x) = 0 has a real root.
Example: Consider f(x) = x³ − 5x + 1 = 0; it has a root in (0, 1). x = g(x) can be written in the following ways, among others:
(i) x = (x³ + 1)/5, g(x) = (x³ + 1)/5.
(ii) x = x³ − 4x + 1, g(x) = x³ − 4x + 1.
You can easily check that g(x) = (1/5)(x³ + 1) satisfies all three of the above conditions in [0, 1]; hence if we consider this g(x), then the fixed point iteration is convergent.
Example: Let f(x) = x − (cos x)/2.
Consider g(x) = (cos x)/2 and initial guess x0 = 1.

n     xn
0     x0 = 1
1     x1 = .2701512
2     x2 = .4818653
3     x3 = .4430660
4     x4 = .4517207
5     x5 = .4498487
6     x6 = .4502565
7     x7 = .4501678
8     x8 = .4501871
9     x9 = .4501829
10    x10 = .4501838
11    x11 = .4501836
12    x12 = .4501836

We notice that x12 = x11; hence the approximate root of f(x) = 0 is .4501836 up to 7 decimal places.
You can easily check that f(x) has a root in (0, 1). It is clear that g(x) is continuous in [0, 1], g′(x) exists and is continuous in (0, 1), and |g′(x)| = |sin x|/2 ≤ 1/2 for all x ∈ (0, 1). So g(x) satisfies all the sufficient conditions for convergence.
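A Python sketch of the iteration xn+1 = g(xn), stopping (as in the notes' convention) when two consecutive iterates agree within a tolerance:

    import math

    def fixed_point(g, x0, tol=1e-7, max_iter=100):
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:     # consecutive iterates agree
                return x_new
            x = x_new
        return x

    # example above: g(x) = cos(x)/2 with x0 = 1 gives approximately 0.4501836
    root = fixed_point(lambda x: math.cos(x) / 2, 1.0)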
Newton Raphson Method: This is also an iteration method to calculate an approximate root of f(x) = 0. Start from the initial guess x0; the iteration formula is

    xn+1 = xn − f(xn)/f′(xn),

where f has a continuous derivative f′ and f′(x0) ≠ 0.
Example: Take f(x) = x − (cos x)/2 as before. Then f′(x) = 1 + (sin x)/2, and the iteration formula becomes

    xn+1 = xn − (2xn − cos xn)/(2 + sin xn).
Our initial guess is x0 = 1.
Here x2 and x3 are both .4501836, so the iteration has converged. Hence x = .4501836 is an approximate root of f(x) = 0, correct up to 7 decimal places.
Remark: In Newton Raphson method, it may happen that f ′ (xn ) = 0 for some n. Then stop the
iteration at that point and try with another initial guess.
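A Python sketch of the Newton-Raphson iteration, including the f′(xn) = 0 safeguard mentioned in the remark:

    import math

    def newton_raphson(f, fprime, x0, tol=1e-7, max_iter=50):
        x = x0
        for _ in range(max_iter):
            d = fprime(x)
            if d == 0:
                # as in the remark: stop and retry with another initial guess
                raise ZeroDivisionError("f'(x_n) = 0; try another x0")
            x_new = x - f(x) / d          # x_{n+1} = x_n - f(x_n)/f'(x_n)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # example above: f(x) = x - cos(x)/2 with x0 = 1 gives approximately 0.4501836
    root = newton_raphson(lambda x: x - math.cos(x) / 2,
                          lambda x: 1 + math.sin(x) / 2, 1.0)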
Regula falsi method: The convergence process in the bisection method is very slow, and it depends only on the choice of the end points of the interval [a, b]. The function f(x) does not have any role in finding the point c (which is just the mid-point of a and b); it is used only to decide the next smaller interval, [a, c] or [c, b]. A better approximation to c can be obtained by taking the straight line L joining the points (a, f(a)) and (b, f(b)) and finding where it intersects the x-axis. To obtain the value of c we can equate the two expressions for the slope m of the line L.
    m = (f(b) − f(a))/(b − a) = (0 − f(b))/(c − b)

    ⟹ (c − b)(f(b) − f(a)) = −(b − a) f(b)

    ⟹ c = b − f(b)(b − a)/(f(b) − f(a)) = (a f(b) − b f(a))/(f(b) − f(a)).

Selecting c by the above expression is called the Regula-Falsi method or false position method.

Algorithm (false position scheme): Given a function f(x) continuous on an interval [a, b] such that f(a) · f(b) < 0, repeat the following while none of the convergence criteria (C1, C2 or C3) is satisfied:
1. Compute c = (a f(b) − b f(a))/(f(b) − f(a)).
2. If f(a) · f(c) < 0, set b = c; if f(a) · f(c) > 0, set a = c; if f(c) = 0, then c is the root.

The false position method is again bound to converge because it brackets the root in the whole of its convergence process.

Numerical example: For f(x) = 3x + sin x − eˣ on [0, 0.5]:

Iteration No.   a       b      c       f(a) · f(c)
1               0       0.5    0.376   1.38 (+ve)
2               0.376   0.5    0.36    −0.102 (−ve)
3               0.376   0.36   0.36    −0.085 (−ve)

So one of the roots of 3x + sin x − eˣ = 0 is approximately 0.36.
Note: Although the length of the interval is getting smaller in each iteration, it is possible that it may not go to zero. If the graph y = f(x) is concave near the root s, one of the endpoints becomes fixed and the other end marches towards the root.
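A Python sketch of the false position scheme above (the stopping rule is our choice):

    import math

    def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
        if f(a) * f(b) >= 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        c = a
        for _ in range(max_iter):
            c_old = c
            c = (a * f(b) - b * f(a)) / (f(b) - f(a))   # chord's x-intercept
            if f(c) == 0 or abs(c - c_old) < tol:
                return c
            if f(a) * f(c) < 0:
                b = c
            else:
                a = c
        return c

    # example above: 3x + sin x - e^x = 0 on [0, 0.5] gives approximately 0.3604
    root = regula_falsi(lambda x: 3*x + math.sin(x) - math.exp(x), 0.0, 0.5)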
4 Interpolation
Motivation: We are given the values of a function f(x) at distinct points x0, x1, . . . , xn. We want to find the approximate value of the function f(x) at a new x that lies between x0, . . . , xn. Let f(xi) = fi for i = 0, . . . , n. To find the value of f at some new point x may be quite impossible, as f might be unknown to us; or even if we know f, it might be very difficult to compute f at x. So we want to replace f by some function g that is quite close to f, where g is a much simpler function. A standard idea in interpolation theory is to replace a general function f by a polynomial function p (p is an approximation of f); polynomial functions have a number of important uses.
Let x0 < x1 < x2 < · · · < xn be n + 1 distinct points. Let f(xi) = fi for i = 0, 1, . . . , n. The problem of polynomial interpolation is to find a polynomial Pn(x) of degree ≤ n such that Pn(xi) = fi for i = 0, 1, . . . , n.
Remark: Through two distinct points, we can always pass a straight line (a polynomial of degree
1). Through three distinct points we can either pass a parabola (a polynomial of degree 2) or a straight
line if all the three points lie on the line. In general, through n + 1 distinct points a polynomial of
degree at most n can always be fitted.
There are four methods to calculate such a polynomial P in your syllabus. Those are the following.
1. Lagrange's Interpolation
2. Newton's Forward Difference Interpolation
3. Newton's Backward Difference Interpolation
4. Newton's Divided Difference Interpolation

Lagrange's Interpolation: Suppose we are given the data

x      x0   x1   x2   . . .   xn
f(x)   f0   f1   f2   . . .   fn

where fi = f(xi). We can fit a polynomial of degree ≤ n to this data. This polynomial must use all the ordinates f0, f1, . . . , fn. It is P(x) = l0 f0 + l1 f1 + · · · + ln fn, where

    li = li(x) = Π_{j=0, j≠i}^{n} (x − xj)/(xi − xj),   i = 0, 1, . . . , n.
Example:
x      1   2   4
f(x)   1   7   61

Find the Lagrange interpolating polynomial and determine the approximate value of f(3).
We have three data values. We can fit an interpolating polynomial of degree ≤ 2. We have P(x) = l0 f0 + l1 f1 + l2 f2, where

    l0 = (x − 2)(x − 4)/((1 − 2)(1 − 4)) = (1/3)(x² − 6x + 8)
    l1 = (x − 1)(x − 4)/((2 − 1)(2 − 4)) = −(1/2)(x² − 5x + 4)
    l2 = (x − 1)(x − 2)/((4 − 1)(4 − 2)) = (1/6)(x² − 3x + 2)

Therefore P(x) = (1/3)(x² − 6x + 8) − (7/2)(x² − 5x + 4) + (61/6)(x² − 3x + 2) = 7x² − 15x + 9, and f(3) ≈ P(3) = 27.
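The same computation in a short Python sketch that evaluates P(x) = Σ li(x) fi directly:

    def lagrange_interpolate(xs, fs, x):
        n = len(xs)
        total = 0.0
        for i in range(n):
            li = 1.0
            for j in range(n):
                if j != i:
                    li *= (x - xs[j]) / (xs[i] - xs[j])   # l_i(x)
            total += li * fs[i]
        return total

    print(lagrange_interpolate([1, 2, 4], [1, 7, 61], 3))   # 27.0, as above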
Finite Difference Operators: Let the tabular points x0, x1, . . . , xn be equispaced with step length h, that is, xi = x0 + ih for i = 1, . . . , n, or xi+1 − xi = h for all i.
Shift operator E: E fi = fi+1. The forward difference operator is △fi = fi+1 − fi, and the backward difference operator is ▽fi = fi − fi−1. Higher differences are given by

    △ⁿfi = Σ_{k=0}^{n} (−1)ᵏ C(n, k) f_{i+n−k}

    ▽ⁿfi = Σ_{k=0}^{n} (−1)ᵏ C(n, k) f_{i−k}

where C(n, k) is the binomial coefficient. In operator form, △ = E − 1 and ▽ = 1 − E⁻¹.
Newton's Forward Difference Interpolation: Suppose we are given the data

x      x0   x1   x2   . . .   xn
f(x)   f0   f1   f2   . . .   fn

where fi = f(xi). We can fit a polynomial of degree ≤ n to this data. This polynomial must use all the ordinates f0, f1, . . . , fn:

    P(x) = f0 + (x − x0) △f0/(1! h) + (x − x0)(x − x1) △²f0/(2! h²) + · · · + (x − x0)(x − x1) · · · (x − x_{n−1}) △ⁿf0/(n! hⁿ).
x      −4    −2   0   2    4     6
f(x)   −139  −21  1   23   141   451

x     f(x)    △f     △²f     △³f     △⁴f     △⁵f
−4    −139
              118
−2    −21             −96
              22              96
0     1               0               0
              22              96              0
2     23              96              0
              118             96
4     141             192
              310
6     451
    P(x) = f0 + (x − x0) △f0/(1! h) + (x − x0)(x − x1) △²f0/(2! h²) + (x − x0)(x − x1)(x − x2) △³f0/(3! h³) + (x − x0)(x − x1)(x − x2)(x − x3) △⁴f0/(4! h⁴) + (x − x0)(x − x1)(x − x2)(x − x3)(x − x4) △⁵f0/(5! h⁵)

    P(x) = −139 + (x + 4)(118/2) + (x + 4)(x + 2)(−96/(2 × 4)) + (x + 4)(x + 2) x (96/(6 × 8)) + 0 + 0

    P(x) = 2x³ + 3x + 1.
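A Python sketch that builds the forward difference table and evaluates the Newton forward polynomial (the function names are ours; the nodes are assumed equispaced):

    def forward_differences(fs):
        # table[k][0] is the leading difference (△^k)f_0
        table = [list(fs)]
        while len(table[-1]) > 1:
            prev = table[-1]
            table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
        return table

    def newton_forward(xs, fs, x):
        h = xs[1] - xs[0]
        table = forward_differences(fs)
        result, product, factorial = float(fs[0]), 1.0, 1
        for k in range(1, len(xs)):
            product *= (x - xs[k - 1])    # (x - x_0)...(x - x_{k-1})
            factorial *= k
            result += product * table[k][0] / (factorial * h**k)
        return result

    # example above: P(1) = 2*1^3 + 3*1 + 1 = 6
    print(newton_forward([-4, -2, 0, 2, 4, 6], [-139, -21, 1, 23, 141, 451], 1))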
Newton's Backward Difference Interpolation: Suppose we are given the data

x      x0   x1   x2   . . .   xn
f(x)   f0   f1   f2   . . .   fn

where fi = f(xi). We can fit a polynomial of degree ≤ n to this data. This polynomial must use all the ordinates f0, f1, . . . , fn:

    P(x) = fn + (x − xn) ▽fn/(1! h) + (x − xn)(x − x_{n−1}) ▽²fn/(2! h²) + · · · + (x − xn)(x − x_{n−1}) · · · (x − x1) ▽ⁿfn/(n! hⁿ).
Example:
x      −4    −2   0   2    4     6
f(x)   −139  −21  1   23   141   451

x     f(x)    ▽f     ▽²f     ▽³f     ▽⁴f     ▽⁵f
−4    −139
              118
−2    −21             −96
              22              96
0     1               0               0
              22              96              0
2     23              96              0
              118             96
4     141             192
              310
6     451

    P(x) = f5 + (x − x5) ▽f5/(1! h) + (x − x5)(x − x4) ▽²f5/(2! h²) + · · · + (x − x5)(x − x4)(x − x3)(x − x2)(x − x1) ▽⁵f5/(5! h⁵).

    P(x) = 2x³ + 3x + 1.
Newton's Divided Difference Interpolation: Let the data (xi, fi) for i = 0, 1, 2, . . . , n be given. Then we define the divided differences as follows.
First divided difference: The first divided difference of any two consecutive data values is defined as

    f[xi, xi+1] = (fi+1 − fi)/(xi+1 − xi).

Second divided difference: The second divided difference using three consecutive data values is defined as

    f[xi, xi+1, xi+2] = (f[xi+1, xi+2] − f[xi, xi+1])/(xi+2 − xi).
Example:
x      −3   −2   −1   1   2   3
f(x)   18   12   8    6   8   12

x     f(x)   First d.d   Second d.d   Third d.d
−3    18
−2    12     −6
−1    8      −4          1
1     6      −1          1            0
2     8      2           1            0
3     12     4           1            0

    P(x) = f0 + (x − x0) f[x0, x1] + (x − x0)(x − x1) f[x0, x1, x2] + (x − x0)(x − x1)(x − x2) f[x0, x1, x2, x3] + (x − x0)(x − x1)(x − x2)(x − x3) f[x0, x1, x2, x3, x4] + (x − x0)(x − x1)(x − x2)(x − x3)(x − x4) f[x0, x1, x2, x3, x4, x5]

With the values from the table (the third and higher divided differences are zero), P(x) = 18 + (x + 3)(−6) + (x + 3)(x + 2)(1) = x² − x + 6.
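A Python sketch of the divided-difference computation and the Newton form of P(x):

    def newton_divided(xs, fs, x):
        n = len(xs)
        coef = [float(v) for v in fs]
        # in-place divided differences: after the loop, coef[k] = f[x_0, ..., x_k]
        for k in range(1, n):
            for i in range(n - 1, k - 1, -1):
                coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
        result, product = 0.0, 1.0
        for k in range(n):
            result += coef[k] * product
            product *= (x - xs[k])        # (x - x_0)...(x - x_k)
        return result

    # example above: P(x) = x^2 - x + 6, so P(0) = 6
    print(newton_divided([-3, -2, -1, 1, 2, 3], [18, 12, 8, 6, 8, 12], 0))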
Error of Interpolation: The difference f(x) − P(x) is called the error of interpolation or truncation error. We write E(f, x) = f(x) − P(x).
Result: Let f : [a, b] → R be an (n + 1)-times continuously differentiable function. Let a ≤ x0 < x1 < · · · < xn ≤ b be n + 1 points such that f(xi) = fi. Then there exists ξ ∈ [x0, xn] such that

    E(f, x) = (x − x0)(x − x1) · · · (x − xn) f^(n+1)(ξ)/(n + 1)!.

If M = max_{a≤x≤b} |f^(n+1)(x)|, then

    |E(f, x)| ≤ (M/(n + 1)!) max_{a≤x≤b} |(x − x0)(x − x1) · · · (x − xn)|.
5 Numerical Integration:
Motivation: Can you calculate the value of the following integral: ∫₀¹ e^(−x²) dx? It is not easy to calculate this value exactly. In such cases we use numerical integration to find the approximate value of ∫₀¹ e^(−x²) dx.
There are two numerical integration techniques in your syllabus: the trapezoidal rule and Simpson's 1/3rd rule.
Trapezoidal rule: The problem is to calculate the approximate value of ∫ₐᵇ f(x) dx. Consider the data

x      a      b
f(x)   f(a)   f(b)

P(x) = f(a) + ((f(b) − f(a))/(b − a))(x − a); this is a linear interpolating polynomial of f(x). Then

    ∫ₐᵇ f(x) dx ≈ ∫ₐᵇ [f(a) + ((f(b) − f(a))/(b − a))(x − a)] dx = ((b − a)/2)[f(a) + f(b)].
Geometrical Interpretation: Geometrically, the trapezoidal rule is equivalent to approximating the area under the curve by the area of the trapezoid under the straight line connecting (a, f(a)) and (b, f(b)).
Remark: To improve the accuracy of the trapezoidal rule, divide the integration interval from a to b into n equal segments. The resulting equation is called the multiple-application or composite integration formula.
We divide the interval [a, b] into n equal sub-intervals [x_{j−1}, xj] for j = 1, . . . , n, such that xj − x_{j−1} = h for j = 1, . . . , n. That is, h = (b − a)/n.
We now calculate the approximate value of ∫_{x_{j−1}}^{xj} f(x) dx for j = 1, . . . , n. That is,

    ∫_{x_{j−1}}^{xj} f(x) dx ≈ (xj − x_{j−1}) (f(x_{j−1}) + f(xj))/2 = h (f(x_{j−1}) + f(xj))/2.

Therefore

    ∫ₐᵇ f(x) dx = Σ_{j=1}^{n} ∫_{x_{j−1}}^{xj} f(x) dx ≈ Σ_{j=1}^{n} h (f(x_{j−1}) + f(xj))/2 = (h/2) [ f(x0) + f(xn) + 2 Σ_{j=1}^{n−1} f(xj) ].

    ∫ₐᵇ f(x) dx ≈ (h/2) [ f(x0) + f(xn) + 2 Σ_{j=1}^{n−1} f(xj) ].
Example: Compute the value of the integral ∫_{0.2}^{1.4} (sin x − ln x + eˣ) dx using the trapezoidal rule with 6 equal subintervals.
h = (b − a)/n = (1.4 − 0.2)/6 = 0.2.
x f (x)
.2 f (.2) = 3.0296
.4 f (.4) = 2.7975
.6 f (.6) = 2.8975
.8 f (.8) = 3.1661
1 f (1) = 3.5598
1.2 f (1.2) = 4.0698
1.4 f (1.4) = 4.7042
    ∫_{0.2}^{1.4} (sin x − ln x + eˣ) dx ≈ (h/2) [ f(x0) + f(x6) + 2(f(x1) + f(x2) + f(x3) + f(x4) + f(x5)) ]
    = (0.2/2) [ 7.7338 + 2(16.4907) ]
    = 4.07152.
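A Python sketch of the composite trapezoidal rule, checked against the example:

    import math

    def composite_trapezoid(f, a, b, n):
        h = (b - a) / n
        interior = sum(f(a + j * h) for j in range(1, n))   # f(x_1) + ... + f(x_{n-1})
        return (h / 2) * (f(a) + f(b) + 2 * interior)

    f = lambda x: math.sin(x) - math.log(x) + math.exp(x)
    print(composite_trapezoid(f, 0.2, 1.4, 6))   # approximately 4.07152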
Simpson's 1/3rd rule:
Simpson's 1/3rd rule is an extension of the trapezoidal rule in which the integrand (the function f(x)) is approximated by a second degree polynomial.
Let P(x) be the 2nd degree interpolating polynomial of f(x) passing through the points (a, f(a)), ((a + b)/2, f((a + b)/2)) and (b, f(b)). By using Lagrange's interpolating polynomial, we have

    ∫ₐᵇ f(x) dx ≈ ∫ₐᵇ P(x) dx = ((b − a)/6) [ f(a) + 4 f((a + b)/2) + f(b) ].
We divide the interval [a, b] into n (even) equal sub-intervals [x_{j−1}, xj] for j = 1, . . . , n, such that xj − x_{j−1} = h. That is, h = (b − a)/n.
We now apply the basic rule on each pair of sub-intervals [x_{j−1}, x_{j+1}] for j = 1, 3, 5, . . . , n − 1. That is,

    ∫_{x_{j−1}}^{x_{j+1}} f(x) dx ≈ ((x_{j+1} − x_{j−1})/6) [ f(x_{j−1}) + 4 f(xj) + f(x_{j+1}) ] = (h/3) [ f(x_{j−1}) + 4 f(xj) + f(x_{j+1}) ].

Therefore

    ∫ₐᵇ f(x) dx = Σ_{j=1,3,5,...}^{n−1} ∫_{x_{j−1}}^{x_{j+1}} f(x) dx ≈ (h/3) Σ_{j=1,3,5,...}^{n−1} [ f(x_{j−1}) + 4 f(xj) + f(x_{j+1}) ]

    = (h/3) [ f(x0) + f(xn) + 4(f(x1) + f(x3) + · · · + f(x_{n−1})) + 2(f(x2) + f(x4) + · · · + f(x_{n−2})) ].

    ∫ₐᵇ f(x) dx ≈ (h/3) [ f(x0) + f(xn) + 4(f(x1) + f(x3) + · · · + f(x_{n−1})) + 2(f(x2) + f(x4) + · · · + f(x_{n−2})) ].
Example: Compute the value of the integral ∫₄^5.2 ln x dx using Simpson's 1/3rd rule with 6 equal subintervals.
Sol: Here n = 6, a = 4 and b = 5.2. Therefore h = (b − a)/6 = (5.2 − 4)/6 = 0.2.
x f (x)
4 f (4) = 1.3862944
4.2 f (4.2) = 1.4350845
4.4 f (4.4) = 1.4816045
4.6 f (4.6) = 1.5260563
4.8 f (4.8) = 1.5686159
5 f (5) = 1.6094379
5.2 f (5.2) = 1.6486586
    ∫₄^5.2 ln x dx ≈ (h/3) [ f(x0) + f(x6) + 2(f(x2) + f(x4)) + 4(f(x1) + f(x3) + f(x5)) ]
    = (0.2/3) [ 3.034953 + 6.1004408 + 18.2823148 ]
    = 1.8278472.
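A Python sketch of the composite Simpson's 1/3rd rule, checked against the example (n must be even):

    import math

    def composite_simpson(f, a, b, n):
        if n % 2 != 0:
            raise ValueError("n must be even")
        h = (b - a) / n
        odd = sum(f(a + j * h) for j in range(1, n, 2))    # f(x_1) + f(x_3) + ...
        even = sum(f(a + j * h) for j in range(2, n, 2))   # f(x_2) + f(x_4) + ...
        return (h / 3) * (f(a) + f(b) + 4 * odd + 2 * even)

    print(composite_simpson(math.log, 4.0, 5.2, 6))   # approximately 1.8278472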
Error:
For the trapezoidal rule: Consider the integral ∫ₐᵇ f(x) dx, where f(x) is a twice continuously differentiable function on [a, b]. We apply the trapezoidal rule with interval length h. Then the absolute value of the error is bounded by

    |E| ≤ ((b − a)/12) h² max_{a≤x≤b} |f″(x)|.
For Simpson's 1/3rd rule: Consider the integral ∫ₐᵇ f(x) dx, where f(x) is a four times continuously differentiable function on [a, b]. We apply Simpson's 1/3rd rule with interval length h. Then the absolute value of the error is bounded by

    |E| ≤ ((b − a)/180) h⁴ max_{a≤x≤b} |f^(iv)(x)|.
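As a check on the example above: for f(x) = ln x on [4, 5.2] we have f^(iv)(x) = −6/x⁴, so max |f^(iv)(x)| = 6/4⁴ ≈ 0.0234, and with h = 0.2 the bound gives |E| ≤ (1.2/180)(0.2)⁴(0.0234) ≈ 2.5 × 10⁻⁷, consistent with the computed value 1.8278472 against the exact value 5.2 ln 5.2 − 5.2 − (4 ln 4 − 4) ≈ 1.8278473.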