
COMPUTATIONAL METHODS LAB

ES-251

LAB MANUAL
LIST OF EXPERIMENTS AS PER GGSIPU

Sr. No.   Title of Lab Experiments                                                          CO

1.    Program for finding roots of f(x)=0 by Newton-Raphson method.                         CO1, CO2
2.    Program for finding roots of f(x)=0 by bisection method.                              CO1, CO2
3.    Program for finding roots of f(x)=0 by secant method.                                 CO1, CO2
4.    To implement Lagrange's Interpolation formula.                                        CO1, CO3
5.    To implement Newton's Divided Difference formula.                                     CO1, CO3
6.    Program for solving numerical integration by Trapezoidal rule.                        CO1, CO4
7.    Program for solving numerical integration by Simpson's 1/3 rule.                      CO1, CO4
8.    To implement Numerical Integration Simpson's 3/8 rule.                                CO1, CO4
9.    Inverse of a system of linear equations using Gauss-Jordan method.                    CO1, CO4
10.   Program for solving ordinary differential equation by Runge-Kutta Method.             CO1, CO4

List of Experiments Beyond the Syllabus


S. No   Experiment                                                                          COs

1.    To implement an adaptive Simpson's rule to numerically evaluate definite integrals.   CO1, CO4
2.    To develop a program to solve partial differential equations using numerical methods
      and to apply it to a sample problem in electrical science.                            CO1, CO4

Experiment No: 1

Aim/Objective: Program for finding roots of f(x)=0 by Newton-Raphson method.

Course Outcome: CO1

Software Used: Turbo C++

Theory:

The Newton-Raphson method is one of the most commonly used techniques for finding the roots
of a given equation. It can be efficiently generalised to find solutions to a system of
equations. Moreover, near a simple root the method converges quadratically.

Newton Raphson Method Formula


Let x0 be an approximate root of f(x) = 0 and let x1 = x0 + h be the exact root. Then f(x1) = 0
⇒ f(x0 + h) = 0   ...(1)
Expanding equation (1) by Taylor's theorem, we get:
f(x0) + h f'(x0) + ... = 0
⇒ h = -f(x0)/f'(x0)
Therefore, x1 = x0 - f(x0)/f'(x0)
Now, x1 is a better approximation than x0.
Similarly, the successive approximations x2, x3, ..., xn+1 are given by
xn+1 = xn - f(xn)/f'(xn)
This is called the Newton-Raphson formula.
Geometrical Interpretation of Newton Raphson Formula
The geometric meaning of the Newton-Raphson method is that a tangent is drawn to the curve
y = f(x) at the point [x0, f(x0)].

Figure – Newton Raphson Method


This tangent cuts the x-axis at x1, which is a better approximation of the root. A tangent is
then drawn at [x1, f(x1)]; it cuts the x-axis at x2, a still better approximation, and the
process is continued till the desired accuracy is achieved.
Sample Problem:
f(x) = x³ + 5x + 3; find an approximate root using the Newton-Raphson method.
f(0) = 3
f(-1) = -3
It means a root lies between -1 and 0.

f'(x) = 3x² + 5
f'(0) = 5

Using the formula:
1st iteration
x1 = x0 - f(x0)/f'(x0)
x1 = 0 - (3/5)
x1 = -0.6
2nd iteration
x1 = -0.6, f(-0.6) = -0.216, f'(-0.6) = 6.08
x2 = x1 - f(x1)/f'(x1)
x2 = -0.6 - (-0.216/6.08)
x2 ≈ -0.5645
3rd iteration
f(-0.5645) ≈ -0.0024, f'(-0.5645) ≈ 5.956
x3 = -0.5645 - (-0.0024/5.956) ≈ -0.5641
Continuing in the same way, the iterates converge to the root x ≈ -0.564.

Code:

#include <iostream.h>
#include <conio.h>
#include <iomanip.h>
#include <math.h>

// f(x) = x*log10(x) - 1.2, the equation whose root is required
float f(float x)
{
    return x*log10(x) - 1.2;
}

// f'(x) = log10(x) + 1/ln(10)
float df(float x)
{
    return log10(x) + 0.43429;
}

int main()
{
    clrscr();
    int itr, maxitr;
    float h, x0, x1, aerr;
    cout << "Enter x0, allowed error, maximum iterations" << endl;
    cin >> x0 >> aerr >> maxitr;
    for (itr = 1; itr <= maxitr; itr++)
    {
        h = f(x0)/df(x0);              // Newton-Raphson correction
        x1 = x0 - h;
        cout << "Iteration no." << setw(3) << itr
             << " X = " << setw(9) << setprecision(6) << x1 << endl;
        if (fabs(h) < aerr)            // stop when the correction is small enough
        {
            cout << "After" << setw(3) << itr << " iterations, root = "
                 << setw(8) << setprecision(6) << x1;
            getch();
            return 0;
        }
        x0 = x1;
    }
    cout << "Iterations not sufficient, solution does not converge" << endl;
    getch();
    return 0;
}

Viva Questions:
1. Can you explain the basic concept of the Newton-Raphson method and how it is used to find the roots
of a function?
Answer: The Newton-Raphson method is an iterative numerical technique used to find approximate solutions (roots)
of a real-valued function. It starts with an initial guess, iteratively refines it by using the function's value and its
derivative at that point to estimate a better guess. The method converges towards a root when certain conditions are
met, such as a sufficiently small value of the function, within a specified tolerance.
2. In the program you provided, what is the significance of the 'x0' variable, and how is it related to the
initial guess for the root?
Answer: The 'x0' variable represents the initial guess for the root of the function. It is the starting point from which the
Newton-Raphson method begins its iterations to approximate the root. The method uses 'x0' as the first estimate and
then refines it with each iteration to get closer to the actual root.
3. What is the role of the 'epsilon' variable in the program, and why is it important in the context of the
Newton-Raphson method?
Answer: 'epsilon' represents the tolerance level, which is the maximum acceptable error between consecutive
iterations. It's essential in the context of the Newton-Raphson method because it determines when to stop the
iterations. When the difference between successive approximations is smaller than 'epsilon,' it indicates that the
method has likely converged to a sufficiently accurate root.
4. How does the program handle cases where the derivative of the function becomes too small, and why is
this important for the method's convergence?
Answer: The listing above does not include an explicit safeguard, but a robust implementation should check whether
the absolute value of the derivative becomes too small (for example, less than 1e-6) during the iteration. If that
condition is met, the method may not converge or may converge very slowly, and the program should print a message
saying that the derivative is too small and stop. A small derivative makes the correction f(x)/f'(x) a division by a
very small number, causing numerical instability and preventing convergence.
5. Could you explain why the choice of an appropriate initial guess is crucial when using the Newton-
Raphson method, and what happens if the initial guess is far from the actual root?
Answer: Choosing an appropriate initial guess is critical because it significantly affects the method's convergence. If
the initial guess is close to the actual root, the method converges quickly. However, if the guess is far from the root,
the method may converge slowly, converge to a different root, or fail to converge altogether. It's important to select an
initial guess based on domain knowledge or experimentation to improve the chances of convergence to the desired
root.
Experiment No: 2

Aim/Objective: Program for finding roots of f(x)=0 by Bisection method.

Course Outcome: CO1

Software Used: Turbo C++

Theory:

The Bisection Method is used to find the roots of an equation. It repeatedly subdivides the
interval in which the root of the equation lies. The principle behind this method is the
intermediate value theorem for continuous functions. It works by narrowing the gap between the
point where the function is positive and the point where it is negative until it closes in on
the correct answer, taking the midpoint of the current interval at each step. It is a simple
method, but it is relatively slow. The bisection method is also known as the interval halving
method, binary search method or dichotomy method.
Let us consider a continuous function f defined on the closed interval [a, b], with f(a) and
f(b) of opposite signs. Then by the intermediate value theorem, there exists a point x belonging
to (a, b) for which f(x) = 0.

Bisection Method Algorithm


Follow the below procedure to get the solution for the continuous function:
For any continuous function f(x),

● Find two points, say a and b such that a < b and f(a)* f(b) < 0
● Find the midpoint of a and b, say “t”
● t is the root of the given function if f(t) = 0; else follow the next step
● Divide the interval [a, b]: if f(t)·f(a) < 0, a root lies between t and a;
  else if f(t)·f(b) < 0, a root lies between t and b
● Repeat the above steps until f(t) = 0 or the interval is sufficiently small.
The bisection method is an approximation method that finds the roots of the given equation by
repeatedly dividing the interval. The method keeps halving the interval until the remaining
interval is extremely small.
Sample Problem:
Given: x² - 3 = 0
Let f(x) = x² - 3
Now, find the value of f(x) at a = 1 and b = 2.
f(1) = 1² - 3 = 1 - 3 = -2 < 0
f(2) = 2² - 3 = 4 - 3 = 1 > 0
The given function is continuous, and the root lies in the interval [1, 2].
Let "t" be the midpoint of the interval,
i.e., t = (1 + 2)/2
t = 3/2
t = 1.5
Therefore, the value of the function at "t" is
f(t) = f(1.5) = (1.5)² - 3 = 2.25 - 3 = -0.75 < 0
If f(t)<0, assume a = t.
and
If f(t)>0, assume b = t.
f(t) is negative, so a is replaced with t = 1.5 for the next iterations.
The iterations for the given functions are:

Iterations a b t f(a) f(b) f(t)

1 1 2 1.5 -2 1 -0.75

2 1.5 2 1.75 -0.75 1 0.062

3 1.5 1.75 1.625 -0.75 0.0625 -0.359

4 1.625 1.75 1.6875 -0.3594 0.0625 -0.1523

5 1.6875 1.75 1.7188 -0.1523 0.0625 -0.0457

6 1.7188 1.75 1.7344 -0.0457 0.0625 0.0081

7 1.7188 1.7344 1.7266 -0.0457 0.0081 -0.0189


So, at the seventh iteration, we get the final interval [1.7266, 1.7344]
Hence, 1.7344 is the approximated solution.

Code:
#include <iostream.h>
#include <conio.h>
#include <iomanip.h>
#include <math.h>

// f(x) = x^3 - 4x - 9, the equation whose root is required
float f(float x)
{
    return (x*x*x - 4*x - 9);
}

// Compute the midpoint of [a, b] and report the iteration
void bisect(float *x, float a, float b, int *itr)
{
    *x = (a + b)/2;
    ++(*itr);
    cout << "Iteration no." << setw(3) << *itr
         << " X = " << setw(7) << setprecision(5) << *x << endl;
}

int main()
{
    clrscr();
    int itr = 0, maxitr;
    float x, a, b, aerr, x1;
    cout << "This program is made by Sidharth" << endl;
    cout << "Enter the values of a, b, allowed error, maximum iterations" << endl;
    cin >> a >> b >> aerr >> maxitr;
    cout.setf(ios::fixed);
    bisect(&x, a, b, &itr);
    do
    {
        if (f(a)*f(x) < 0)     // root lies in [a, x]
            b = x;
        else                   // root lies in [x, b]
            a = x;
        bisect(&x1, a, b, &itr);
        if (fabs(x1 - x) < aerr)
        {
            cout << "After " << itr << " iterations, root = "
                 << setw(6) << setprecision(4) << x1 << endl;
            getch();
            return 0;
        }
        x = x1;
    } while (itr < maxitr);
    cout << "Solution does not converge, iterations not sufficient" << endl;
    getch();
    return 0;
}

VIVA Questions

1. Can you explain the fundamental concept of the Bisection method and how it is employed to find the
roots of a function?
Answer: The Bisection method is a numerical technique used to find the approximate roots of a real-valued function.
It works by narrowing down the interval where a root exists by iteratively bisecting the interval and determining
which subinterval the root lies in. The method continues until the interval becomes sufficiently small or the root is
accurately approximated.
2. In the program you provided, what is the significance of the 'a' and 'b' variables, and how do they
relate to the interval for root approximation?
Answer: The 'a' and 'b' variables represent the endpoints of the initial interval within which the root is assumed to
exist. They define the range of values where the root is sought. The Bisection method starts with this initial interval
and iteratively refines it by splitting it in half to focus on the subinterval where the root is located.
3. What is the role of the 'epsilon' variable in the program, and why is it important in the context of the
Bisection method?
Answer: 'epsilon' is the tolerance level that determines the acceptable size of the interval within which the root is
considered to have been found. It is vital in the context of the Bisection method because it helps decide when to stop
the iterations. When the size of the current interval becomes smaller than 'epsilon,' it indicates that the method has
sufficiently approximated the root.
4. How does the program handle cases where the function does not have a root within the initial interval
'a' and 'b'?
Answer: The program assumes that the root exists within the initial interval 'a' and 'b.' If the function does not have a
root in this interval, the Bisection method will not work effectively. To address this issue, it's crucial to choose an
initial interval ('a' and 'b') based on knowledge of the function's behavior or use other methods to estimate a suitable
interval containing the root.
5. Why is the Bisection method considered a robust and reliable root-finding technique, and what are its
advantages and limitations compared to other methods?
Answer: The Bisection method is considered robust and reliable because it is guaranteed to converge to a root as long
as the function is continuous and changes sign within the initial interval. It doesn't require knowledge of the function's
derivative, making it suitable for a wide range of functions. However, it may converge slowly and may not be efficient
when the root is located far from the initial interval or when the function has multiple roots.
Experiment No: 3

Aim/Objective: Program for finding roots of f(x)=0 by secant method.

Course Outcome: CO1

Software Used: Turbo C++

Theory:

The secant method is also an iterative method for finding a root of an equation by successive
approximation. It is similar to the Regula Falsi method, but here we do not need to check that
f(x1)f(x2) < 0 again and again after every approximation. In this method, the root is
approximated by the secant line, or chord, to the function f(x). A further advantage of this
method is that we do not need to differentiate the given function f(x), as we do in the
Newton-Raphson method.

Figure – Secant Method


Now, we derive the formula for the secant method. The equation of the secant line (slope m)
passing through the two points (x1, f(x1)) and (x0, f(x0)) is:
y - f(x1) = [(f(x0) - f(x1))/(x0 - x1)] (x - x1)        Equation (1)
Since we are finding a root of f(x), set y = f(x) = 0 in Equation (1); the point where the
secant line cuts the x-axis is
x = x1 - [(x0 - x1)/(f(x0) - f(x1))] f(x1)
We use the above result for successive approximations to the root of f(x). The first
approximation is x = x2:
x2 = x1 - [(x0 - x1)/(f(x0) - f(x1))] f(x1)
Similarly, the second approximation would be x = x3:
x3 = x2 - [(x1 - x2)/(f(x1) - f(x2))] f(x2)
And so on, till the kth iteration:
xk+1 = xk - [(xk-1 - xk)/(f(xk-1) - f(xk))] f(xk)
Advantages of Secant Method:
● The speed of convergence of the secant method is faster than that of the Bisection and
  Regula Falsi methods.
● It uses the two most recent approximations of the root to find new approximations, instead
  of using only those approximations which bracket the root.
Disadvantages of Secant Method:
● Convergence of the secant method is not always assured.
● If at any stage of the iteration f(xk) = f(xk-1), the method fails because of division by zero.
● Since convergence is not guaranteed, a limit should be placed on the maximum number of
  iterations when implementing this method on a computer.

Sample Problem:

A real root of the equation f(x) = x³ – 5x + 1 = 0 lies in the interval (0, 1). Perform four
iterations of the secant method.
Solution –
We have x0 = 0, x1 = 1, f(x0) = 1, f(x1) = –3.
x2 = x1 – [(x0 – x1)/(f(x0) – f(x1))] f(x1)
   = 1 – [(0 – 1)/(1 – (–3))](–3)
   = 0.25

f(x2) = –0.234375
The second approximation is
x3 = x2 – [(x1 – x2)/(f(x1) – f(x2))] f(x2)
   = 0.25 – [(1 – 0.25)/(–3 – (–0.234375))](–0.234375)
   = 0.186441

f(x3) = 0.074276
The third approximation is
x4 = x3 – [(x2 – x3)/(f(x2) – f(x3))] f(x3)
   = 0.186441 – [(0.25 – 0.186441)/(–0.234375 – 0.074276)](0.074276)
   = 0.201736

f(x4) = –0.000470
The fourth approximation is
x5 = x4 – [(x3 – x4)/(f(x3) – f(x4))] f(x4)
   = 0.201736 – [(0.186441 – 0.201736)/(0.074276 – (–0.000470))](–0.000470)
   = 0.201640
Code:

#include<iostream.h>
#include<iomanip.h>
#include<math.h>
#include<conio.h>
#include<stdlib.h>

// f(x) = x^3 - 2x - 5, the equation whose root is required
float f(float x)
{
    return x*x*x - 2*x - 5;
}

int main()
{
    clrscr();
    float x0, x1, x2, f0, f1, f2, e;
    int itr = 1, N;

    cout << "Enter first guess: ";
    cin >> x0;
    cout << "Enter second guess: ";
    cin >> x1;
    cout << "Enter tolerable error: ";
    cin >> e;
    cout << "Enter maximum iteration: ";
    cin >> N;

    do
    {
        f0 = f(x0);
        f1 = f(x1);
        if (f0 == f1)                        // secant formula would divide by zero
        {
            cout << "Mathematical Error.";
            getch();
            exit(0);
        }

        x2 = x1 - (x1 - x0) * f1/(f1 - f0);  // secant formula
        f2 = f(x2);

        cout << "Iteration-" << itr << ":\t x2 = " << setw(10) << x2
             << " and f(x2) = " << setw(10) << f2 << endl;

        x0 = x1;                             // shift to the two most recent points
        f0 = f1;
        x1 = x2;
        f1 = f2;
        itr = itr + 1;
        if (itr > N)
        {
            cout << "Not Convergent.";
            getch();
            exit(0);
        }
    } while (fabs(f2) > e);

    cout << endl << "Root is: " << x2;
    getch();
    return 0;
}
VIVA QUESTIONS:
1. Can you explain the basic concept of the Secant method and how it differs from the Newton-Raphson
method for finding roots of a function?
Answer: The Secant method is an iterative numerical technique used to find the roots of a function. Unlike the
Newton-Raphson method, it doesn't require the evaluation of the derivative of the function. Instead, it uses two initial
guesses to approximate the root, updating the guesses iteratively based on the function's values at those points.
2. In the program you provided, what do the 'x0' and 'x1' variables represent, and how are they used in
the Secant method?
Answer: 'x0' and 'x1' represent the two initial guesses for the root. In the Secant method, we use these two points to
construct a line (secant) and find its intersection with the x-axis. The intersection point becomes the new
approximation for the root in each iteration.
3. What is the purpose of the 'epsilon' variable in the program, and why is it important in the context of
the Secant method?
Answer: The 'epsilon' variable determines the tolerance or the maximum acceptable error between consecutive
iterations. It's crucial in the Secant method because it controls when to stop the iterations. When the difference
between successive approximations is smaller than 'epsilon,' it indicates that the method has likely converged to a
sufficiently accurate root.
4. How does the program handle cases where the Secant method fails to converge or converges very
slowly?
Answer: If the Secant method fails to converge or converges very slowly, it might lead to division by a very small
number, causing numerical instability. The program doesn't specifically handle this scenario, but careful selection of
initial guesses closer to the root and monitoring the number of iterations can help address convergence issues.
5. Why is it necessary to provide two initial guesses in the Secant method, and how does this affect the
convergence process compared to methods that require only one initial guess?
Answer: Two initial guesses are required in the Secant method because it approximates the derivative using the secant
line, which is defined by two points. This approach allows the method to handle functions with irregular or complex
behavior. Having two initial guesses makes the method more versatile and suitable for functions where the derivative
is challenging to calculate. However, it also means the Secant method may converge more slowly than methods like
Newton-Raphson, which require just one initial guess.
Experiment – 4

Aim/Objective: To implement Lagrange’s Interpolation formula


Course Outcome: CO2
Software used: Turbo C++
Theory:
Interpolation is a way of finding additional data points within the range of a discrete set of
data points. The Lagrange interpolation formula is a method for determining a polynomial,
known as the Lagrange polynomial, that takes on specified values at arbitrarily spaced points.
Lagrange's interpolation is an nth-degree polynomial approximation to f(x). Interpolation is a
technique for generating new values of a function from a set of existing values; using it, we
may find the unknown value of the function at a point. The linear interpolation formula finds
a new value from two given points, whereas Lagrange's interpolation formula works with a set
of n points, and the new value is then found using Lagrange's approach.
Let y = f(x) be a polynomial of nth degree passing through the (n + 1) points (xk, yk),
k = 0, 1, ..., n.
Then,
y = a0(x − x1)(x − x2) ⋯ (x − xn) + a1(x − x0)(x − x2) ⋯ (x − xn) + ⋯
    + ak(x − x0)(x − x1) ⋯ (x − xk−1)(x − xk+1) ⋯ (x − xn) + ⋯
    + an(x − x0)(x − x1) ⋯ (x − xn−1)                                          (1)
This equation is satisfied by each of the given points (xk, yk), k = 0, 1, ..., n.
For (x0, y0), we have y0 = a0(x0 − x1)(x0 − x2) ⋯ (x0 − xn). Thus,
a0 = y0 / [(x0 − x1)(x0 − x2) ⋯ (x0 − xn)]
Similarly, for (x1, y1), we have y1 = a1(x1 − x0)(x1 − x2) ⋯ (x1 − xn). Hence,
a1 = y1 / [(x1 − x0)(x1 − x2) ⋯ (x1 − xn)]
and so on. For the point (xn, yn),
an = yn / [(xn − x0)(xn − x1) ⋯ (xn − xn−1)]
Substituting these values in equation (1), we get
y = f(x) = [(x − x1)(x − x2) ⋯ (x − xn)] / [(x0 − x1)(x0 − x2) ⋯ (x0 − xn)] · y0
         + [(x − x0)(x − x2) ⋯ (x − xn)] / [(x1 − x0)(x1 − x2) ⋯ (x1 − xn)] · y1 + ⋯
         + [(x − x0)(x − x1) ⋯ (x − xn−1)] / [(xn − x0)(xn − x1) ⋯ (xn − xn−1)] · yn
which is known as Lagrange's interpolation formula for unequal intervals.

Sample Problem:

Use the Lagrange formula to fit a polynomial to the following data and hence find f(1).

x:  −1   0   2   3
y:  −8   3   1   12

Solution:
By Lagrange's formula,
f(1) = [(1−0)(1−2)(1−3)/((−1−0)(−1−2)(−1−3))](−8) + [(1+1)(1−2)(1−3)/((0+1)(0−2)(0−3))](3)
     + [(1+1)(1−0)(1−3)/((2+1)(2−0)(2−3))](1) + [(1+1)(1−0)(1−2)/((3+1)(3−0)(3−2))](12)
     = (−1/6)(−8) + (2/3)(3) + (2/3)(1) + (−1/6)(12)
     = 4/3 + 2 + 2/3 − 2
     = 2
Hence f(1) = 2.
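
Code (illustrative sketch):
The manual does not reproduce a code listing for this experiment, so the following is only a
minimal sketch in standard C++ that hard-codes the sample data above and evaluates Lagrange's
formula at x = 1; the array names and the use of standard headers (rather than Turbo C++
specifics) are assumptions.

#include <iostream>
using namespace std;

int main()
{
    // Sample data from the problem above
    const int n = 4;
    double x[n] = {-1, 0, 2, 3};
    double y[n] = {-8, 3, 1, 12};
    double xp = 1.0;     // point at which f(x) is required
    double yp = 0.0;     // interpolated value

    // Lagrange's formula: yp = sum of y[i] * product over j != i of (xp - x[j])/(x[i] - x[j])
    for (int i = 0; i < n; i++)
    {
        double term = y[i];
        for (int j = 0; j < n; j++)
            if (j != i)
                term *= (xp - x[j]) / (x[i] - x[j]);
        yp += term;
    }

    cout << "f(" << xp << ") = " << yp << endl;
    return 0;
}

For the data above this prints f(1) = 2, matching the hand computation.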
VIVA Questions

1. What is the primary objective of implementing Lagrange's Interpolation formula, and how does it
relate to the broader field of numerical methods and computational mathematics?
Answer: The primary objective of implementing Lagrange's Interpolation formula is to approximate a function based
on a limited set of data points. It's a fundamental technique in numerical analysis used to estimate values between
given data points. It's important in numerical methods for functions that lack a continuous, closed-form representation.
2. Can you explain the concept of Lagrange interpolation and the mathematical foundation behind it?
Answer: Lagrange interpolation constructs a polynomial that passes through given data points. It uses a set of
Lagrange basis polynomials, each associated with a data point, to create the interpolating polynomial. The
interpolating polynomial is a weighted sum of these basis polynomials, and it uniquely passes through all the data
points.
3. How is Turbo C++ utilized in implementing Lagrange's Interpolation formula, and what are the key
components or functions involved in the program?
Answer: Turbo C++ is an integrated development environment used to write, compile, and run C++ programs. In the
context of implementing Lagrange's Interpolation, the program would typically involve functions for data input,
Lagrange polynomial construction, and polynomial evaluation at specific points. Turbo C++ provides a platform for
coding and running these functions.
4. What are the potential challenges or limitations of using Lagrange's Interpolation formula for data
approximation, and are there cases where it may not provide accurate results?
Answer: Lagrange's Interpolation can lead to Runge's phenomenon, where oscillations may occur when interpolating
using high-degree polynomials. This can lead to inaccuracies, especially if the data is not well-behaved or contains
outliers. In such cases, other interpolation methods or polynomial approximation techniques might be more
appropriate.
5. Could you provide an example of a problem or dataset where you would apply Lagrange's
Interpolation, and walk us through the steps involved in using the program to approximate the missing
data points?
Answer: You could present a simple dataset, perhaps a set of x and y coordinates, and demonstrate how the program
in Turbo C++ would take this data as input, perform the Lagrange interpolation, and estimate the y-values for missing
x-values. Walk through the code and calculations involved in the process.
Experiment No: 5

Aim/Objective: To implement Newton's Divided Difference formula.

VIVA QUESTIONS:
1. What is the objective of implementing Newton's Divided Difference formula in Turbo C++?
Answer: The objective of implementing Newton's Divided Difference formula in Turbo C++ is to interpolate a
polynomial that passes through a given set of data points. This interpolation technique is useful in various
applications, such as numerical analysis and approximation of functions.
2. Can you briefly explain what Newton's Divided Difference formula is and how it works?
Answer: Newton's Divided Difference formula is a method for constructing an interpolating polynomial through a set
of data points. It uses divided differences, which are essentially the finite differences of the function values at the data
points. By applying these differences iteratively, the formula generates the coefficients of the polynomial, which can
be used to interpolate values between the given data points.
3. What are the main steps involved in implementing Newton's Divided Difference formula in Turbo
C++?
Answer: The main steps in implementing Newton's Divided Difference formula in Turbo C++ include:
• Input the data points (x, y) from the user or a file.
• Calculate the divided differences for constructing the coefficients of the interpolating polynomial.
• Use the coefficients to form the polynomial.
• Interpolate values using the polynomial as needed.
4. What is the significance of divided differences in the context of interpolation, and how are they
computed in the algorithm?
Answer: Divided differences are essential in Newton's Divided Difference formula as they represent the coefficients of
the interpolating polynomial. They are computed by recursively taking finite differences of the function values at the
data points. The first-order divided differences are simply the differences in the y-values. Higher-order divided
differences involve taking differences of lower-order divided differences. These differences are used to determine the
coefficients of the polynomial.
5. Why is Turbo C++ chosen as the software for implementing this formula? Are there any specific
advantages to using Turbo C++ for this task?
Answer: Turbo C++ was historically popular and widely used for C and C++ programming, especially in the DOS
environment. However, it's important to note that Turbo C++ is considered outdated and has limitations when
compared to modern C++ compilers. It may be chosen for educational purposes or for compatibility with legacy
systems. Specific advantages could include simplicity, a familiar environment for some users, and compatibility with
older hardware and software. However, for modern development, it's advisable to use more up-to-date C++ compilers
and IDEs.
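
For reference, the following is a minimal sketch of Newton's divided-difference interpolation
in standard C++, following the steps described in the answers above. It is not the manual's
original listing, and the sample data points and the evaluation point x = 7 are assumed purely
for illustration.

#include <iostream>
using namespace std;

int main()
{
    // Assumed sample data (not from the manual)
    const int n = 4;
    double x[n] = {5, 6, 9, 11};
    double y[n][n] = {{12}, {13}, {14}, {16}};   // first column holds the y-values
    double xp = 7.0;                             // point to interpolate at

    // Build the divided-difference table column by column:
    // y[i][j] = (y[i+1][j-1] - y[i][j-1]) / (x[i+j] - x[i])
    for (int j = 1; j < n; j++)
        for (int i = 0; i < n - j; i++)
            y[i][j] = (y[i + 1][j - 1] - y[i][j - 1]) / (x[i + j] - x[i]);

    // Evaluate Newton's divided-difference polynomial at xp:
    // P(x) = f[x0] + f[x0,x1](x-x0) + f[x0,x1,x2](x-x0)(x-x1) + ...
    double result = y[0][0], product = 1.0;
    for (int j = 1; j < n; j++)
    {
        product *= (xp - x[j - 1]);
        result += y[0][j] * product;
    }

    cout << "Interpolated value at x = " << xp << " is " << result << endl;
    return 0;
}

The top row of the table (y[0][0], y[0][1], ...) supplies the coefficients of the Newton form
of the interpolating polynomial.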
Experiment No: 6

Aim/Objective: Program for solving numerical integration by Trapezoidal rule.

VIVA Questions:
1. What is the Trapezoidal rule, and how does it work in the context of numerical integration?
Answer: The Trapezoidal rule is a numerical integration technique used to approximate the definite integral of a
function. It divides the area under the curve into trapezoids, calculates the area of each trapezoid, and then sums these
areas to approximate the integral. It's based on approximating the curve with straight-line segments.
2. In the program you provided, what is the role of the 'n' variable, and why is it important in the
Trapezoidal rule for numerical integration?
Answer: The 'n' variable represents the number of subintervals used to divide the integration interval. Increasing 'n'
results in a more accurate approximation of the integral. The Trapezoidal rule becomes more accurate as 'n' increases
because it approximates the curve more closely with smaller trapezoids.
3. How does the program handle user input for the function to be integrated and the integration limits (a
and b)?
Answer: The program should prompt the user to input the function to be integrated and the integration limits (a and b).
It may use functions or loops to read and store these values. Make sure the program can accept different functions, a,
and b values to make it versatile.
4. What is the significance of the 'h' variable in the Trapezoidal rule, and how is it calculated in the
program?
Answer: 'h' represents the width of each subinterval, and it's crucial in the Trapezoidal rule. It's calculated as (b - a) /
n, where 'a' and 'b' are the integration limits and 'n' is the number of subintervals. 'h' is used to determine the width of
each trapezoid.
5. What are the limitations of the Trapezoidal rule for numerical integration, and are there situations
where it may not provide an accurate approximation of the integral?
Answer: The Trapezoidal rule is less accurate when used to approximate integrals of functions with rapidly changing
curvature or when the function has sharp corners. It tends to overestimate the integral for functions that are concave
up and underestimate it for functions that are concave down. In such cases, other numerical integration methods like
Simpson's rule or Gaussian quadrature may be more appropriate.
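
A minimal sketch of the composite Trapezoidal rule in standard C++ is given below for
reference; it is not the manual's original listing, and the integrand f(x) = 1/(1 + x²) is
only an assumed example.

#include <iostream>
using namespace std;

// Assumed example integrand: f(x) = 1/(1 + x^2)
double f(double x)
{
    return 1.0 / (1.0 + x * x);
}

int main()
{
    double a, b;
    int n;
    cout << "Enter lower limit, upper limit and number of subintervals: ";
    cin >> a >> b >> n;

    double h = (b - a) / n;              // width of each subinterval
    double sum = (f(a) + f(b)) / 2.0;    // endpoints carry weight 1/2
    for (int i = 1; i < n; i++)
        sum += f(a + i * h);             // interior points carry weight 1

    cout << "Approximate integral = " << h * sum << endl;
    return 0;
}

With a = 0, b = 1 and a reasonably large n, the output approaches the exact value
arctan(1) = 0.7853981... for this integrand.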
Experiment No: 7

Aim/Objective: Program for solving numerical integration by Simpson's 1/3 rule.

Viva Questions:
1. What is numerical integration? Why is it required?
Numerical integration is the process of approximating the definite integral of a function using numerical methods. It is
required when the anti-derivative of a function cannot be found analytically.
2. What is Simpson's 1/3rd rule? Explain the formula.
Simpson's 1/3rd rule is a method for numerical integration that approximates the integral by fitting quadratic
polynomial segments under the curve and summing their areas. The formula is:
Integral (a to b) f(x) dx ≈ (b-a)/6 * [f(a) + 4f((a+b)/2) + f(b)]
3. How does the program implement Simpson's rule?
The program takes as input the limits of integration a and b, the (even) number of subintervals n, and the function
f(x). It divides the interval [a, b] into n equal subintervals of width h = (b-a)/n and evaluates the function at the
n+1 equally spaced points x0 = a, x1, ..., xn = b. Using the composite formula
(h/3)[f(x0) + 4(f(x1) + f(x3) + ...) + 2(f(x2) + f(x4) + ...) + f(xn)], it computes the areas under the quadratic
segments and sums them to approximate the integral.
4. What are the advantages of Simpson's 1/3rd rule?
Advantages are:
- It is easy to implement
- Provides good accuracy for smooth functions
- Exact for polynomials of degree up to three
- Has an error term of O(h^4) which is better than trapezoidal rule
5. What are the limitations of Simpson's 1/3rd rule?
Limitations are:
- The function needs to be continuous and differentiable in the interval
- Accuracy decreases for highly oscillatory functions
- An even number of subintervals is required for maximum accuracy
- Programming can get complicated for adaptive quadrature with uneven spacing
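
For reference, a minimal sketch of the composite Simpson's 1/3 rule in standard C++ follows;
it is not the manual's original listing, the integrand is the same assumed example as before,
and n must be even.

#include <iostream>
using namespace std;

// Assumed example integrand: f(x) = 1/(1 + x^2)
double f(double x)
{
    return 1.0 / (1.0 + x * x);
}

int main()
{
    double a, b;
    int n;
    cout << "Enter lower limit, upper limit and (even) number of subintervals: ";
    cin >> a >> b >> n;
    if (n % 2 != 0)
    {
        cout << "Simpson's 1/3 rule needs an even number of subintervals." << endl;
        return 1;
    }

    double h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; i++)
        sum += (i % 2 == 1 ? 4.0 : 2.0) * f(a + i * h);   // weights 4 at odd points, 2 at even points

    cout << "Approximate integral = " << (h / 3.0) * sum << endl;
    return 0;
}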
Experiment No: 8

Aim/Objective: To implement Numerical Integration Simpson's 3/8 rule.

VIVA Questions:
1. What is Simpson's 3/8th rule for numerical integration?
Simpson's 3/8th rule is a numerical integration technique used to approximate the integral of a function over an
interval. It is an extension of Simpson's 1/3rd rule that fits cubic, rather than quadratic, interpolating segments.
2. Explain the formula used in Simpson's 3/8th rule.
The formula is:
Integral (a to b) f(x)dx ≈ (b-a)/8 * [f(a) + 3f((2a+b)/3) + 3f((a+2b)/3) + f(b)]
It uses 3 subintervals instead of 2 and evaluates the function at the 4 equally spaced points a, a+h, a+2h and b,
where h = (b-a)/3.
3. What are the advantages of using Simpson's 3/8th rule?
Advantages are:
- It fits cubic segments, so a single application has an error term on the order of O(h^5)
- The composite rule has the same order of accuracy as the 1/3rd rule but a somewhat smaller error constant
- It can be combined with the 1/3rd rule when the total number of subintervals is odd
- Good for integrating smooth functions
4. What are the limitations of Simpson's 3/8th rule?
Limitations are:
- Programming is more complex than the 1/3rd rule
- Requires the number of subintervals to be a multiple of 3
- Decreases in accuracy for highly oscillatory functions
- The function needs to be four times differentiable for the error estimate to hold
5. How is the program implemented in C++?
The implementation includes:
- Inputting the bounds of integration a and b and the number of subintervals n (a multiple of 3)
- Finding the interval spacing h = (b-a)/n
- Evaluating the function at the equally spaced points a, a+h, a+2h, ..., b
- Applying the composite formula (3h/8)[f(x0) + 3f(x1) + 3f(x2) + 2f(x3) + 3f(x4) + 3f(x5) + 2f(x6) + ... + f(xn)]
- Outputting the result
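
A minimal sketch of the composite Simpson's 3/8 rule in standard C++ is shown below; it is not
the manual's original listing, the integrand is an assumed example, and n must be a multiple
of 3.

#include <iostream>
using namespace std;

// Assumed example integrand: f(x) = 1/(1 + x^2)
double f(double x)
{
    return 1.0 / (1.0 + x * x);
}

int main()
{
    double a, b;
    int n;
    cout << "Enter lower limit, upper limit and number of subintervals (multiple of 3): ";
    cin >> a >> b >> n;
    if (n % 3 != 0)
    {
        cout << "Simpson's 3/8 rule needs a multiple of 3 subintervals." << endl;
        return 1;
    }

    double h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; i++)
        sum += (i % 3 == 0 ? 2.0 : 3.0) * f(a + i * h);   // 3, 3, 2 weight pattern

    cout << "Approximate integral = " << (3.0 * h / 8.0) * sum << endl;
    return 0;
}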
Experiment No: 9

Aim/Objective: Inverse of a system of linear equations using Gauss-Jordan method.

VIVA Questions
1. What is the Gauss-Jordan method, and how does it work in the context of finding the inverse of a
system of linear equations?
Answer: The Gauss-Jordan method is a technique for solving a system of linear equations and finding the inverse of a
square matrix. It involves row reduction operations to transform the matrix into an identity matrix, and the process is
applied to both the original matrix and an identity matrix. The resulting identity matrix on the right side represents the
inverse of the original matrix.
2. In the program you provided, how does it handle user input for the coefficient matrix of the system of
linear equations?
Answer: The program should prompt the user to input the elements of the coefficient matrix, typically row by row or
element by element. It may use loops or arrays to store these values. Ensuring that the program can handle different
matrix sizes is important for its versatility.
3. What are the possible outcomes when applying the Gauss-Jordan method to find the inverse of a
matrix?
Answer: There are three possible outcomes:
• If the Gauss-Jordan method is successful, the original matrix will be transformed into an identity
matrix, and the identity matrix on the right side will be the inverse of the original matrix.
• If the matrix is singular (non-invertible), the method will not be able to transform it into an identity
matrix.
• If the matrix is not square, it cannot have an inverse, and the method will not be applicable.
4. Can you explain the role of pivot elements in the Gauss-Jordan method, and why are they important
for the process?
Answer: Pivot elements are the leading non-zero elements in each row when the matrix is in the process of being
transformed into row-echelon form. They are essential as they determine the order in which rows are manipulated to
create zeros below and above the pivot. Proper choice of pivot elements ensures that the method works efficiently and
correctly.
5. What are the limitations or challenges when using the Gauss-Jordan method to find the inverse of a
matrix, and are there situations where it may not provide a valid solution?
Answer: The Gauss-Jordan method may encounter numerical instability when dealing with matrices close to singular
or with large condition numbers. It also requires more computation time for larger matrices. Moreover, it can fail to
provide a solution if the matrix is singular, meaning it does not have an inverse.
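
For reference, a minimal sketch of Gauss-Jordan matrix inversion in standard C++ follows; it
is not the manual's original listing. It augments the input matrix with the identity and
reduces the left block to the identity; for simplicity it performs no row interchanges, so it
simply reports failure if a zero pivot is met.

#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    const int MAX = 10;
    double a[MAX][2 * MAX];
    int n;

    cout << "Enter order of the matrix (up to " << MAX << "): ";
    cin >> n;
    cout << "Enter the matrix row by row:" << endl;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            cin >> a[i][j];

    // Augment with the identity matrix: [A | I]
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            a[i][n + j] = (i == j) ? 1.0 : 0.0;

    // Gauss-Jordan elimination: reduce the left block to the identity
    for (int i = 0; i < n; i++)
    {
        double pivot = a[i][i];
        if (fabs(pivot) < 1e-12)
        {
            cout << "Zero pivot encountered; matrix may be singular." << endl;
            return 1;
        }
        for (int j = 0; j < 2 * n; j++)   // normalise the pivot row
            a[i][j] /= pivot;
        for (int k = 0; k < n; k++)       // eliminate the pivot column from the other rows
        {
            if (k == i) continue;
            double factor = a[k][i];
            for (int j = 0; j < 2 * n; j++)
                a[k][j] -= factor * a[i][j];
        }
    }

    // The right block now holds the inverse
    cout << "Inverse matrix:" << endl;
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
            cout << a[i][n + j] << "  ";
        cout << endl;
    }
    return 0;
}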
Experiment No: 10

Aim/Objective: Program for solving ordinary differential equation by Runge-Kutta Method.

VIVA QUESTIONS:

1. What is the primary objective of using the Runge-Kutta method in the context of solving
ordinary differential equations (ODEs)?
Answer: The primary objective of using the Runge-Kutta method is to approximate the solutions to
ordinary differential equations (ODEs). It's a numerical technique that allows us to compute an
approximate solution to an ODE when an analytical solution is either difficult or impossible to find.
The Runge-Kutta method provides a more accurate approximation of the solution compared to simpler
methods like Euler's method.
2. In the program you provided, what is the significance of the step size 'h,' and how is it chosen
for solving ODEs using the Runge-Kutta method?
Answer: The 'h' (step size) determines how small the intervals between each approximation are. It's
crucial for the accuracy of the solution. Choosing an appropriate 'h' is essential. It's typically
determined based on the specific problem and balancing accuracy and computation time. Smaller 'h'
values generally result in a more accurate solution, but they may require more computation.
3. How does the program handle the input of the ODE, initial conditions, and other relevant
parameters for solving the ODE using the Runge-Kutta method?
Answer: The program should prompt the user for the ODE, initial conditions, and any other relevant
parameters. It may use functions or loops to read and store these values. Users should be able to input
the ODE in a specific format, and the program should handle it accordingly.
4. Can you explain the concept of a higher-order Runge-Kutta method (e.g., the fourth-order
Runge-Kutta method), and why is it preferred over lower-order methods?
Answer: The fourth-order Runge-Kutta method is a numerical technique for solving ODEs that
provides a higher level of accuracy compared to lower-order methods like the Euler method. It
involves four stages of calculations and offers a good balance between accuracy and computational
complexity. Higher-order Runge-Kutta methods provide more accurate approximations by considering
multiple points within each step.
5. What are the potential challenges or limitations when using the Runge-Kutta method for
solving ODEs, and are there situations where other numerical methods might be more
suitable?
Answer: Challenges include choosing an appropriate step size, potential instability for stiff ODEs, and
the need for evaluating the derivative multiple times. In some cases, other numerical methods such as
the Euler method or adaptive step-size methods might be more suitable, depending on the nature of the
ODE and the desired level of accuracy.
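
A minimal sketch of the classical fourth-order Runge-Kutta method in standard C++ is given
below for reference; it is not the manual's original listing, and the ODE dy/dx = x + y is
only an assumed example.

#include <iostream>
using namespace std;

// Assumed example ODE: dy/dx = f(x, y) = x + y
double f(double x, double y)
{
    return x + y;
}

int main()
{
    double x0, y0, xn, h;
    cout << "Enter initial x, initial y, final x and step size h: ";
    cin >> x0 >> y0 >> xn >> h;

    double x = x0, y = y0;
    int steps = (int)((xn - x0) / h + 0.5);   // number of RK4 steps

    for (int i = 0; i < steps; i++)
    {
        // Classical fourth-order Runge-Kutta step
        double k1 = h * f(x, y);
        double k2 = h * f(x + h / 2.0, y + k1 / 2.0);
        double k3 = h * f(x + h / 2.0, y + k2 / 2.0);
        double k4 = h * f(x + h, y + k3);
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6.0;
        x += h;
    }

    cout << "y(" << x << ") = " << y << endl;
    return 0;
}

Each step combines four slope evaluations (k1 to k4), which is what gives the method its
fourth-order accuracy.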
Advanced Experiments
(Beyond the Syllabus)

Advanced Experiment – 1

Aim/Objective: To implement an adaptive Simpson's rule to numerically evaluate definite integrals.
Course Outcome: CO2
Software Used: Turbo C++
Theory: The adaptive Simpson's rule is a numerical integration method used to approximate definite
integrals by dividing the integration interval into smaller subintervals and applying Simpson's rule
within those subintervals. This method is adaptive, meaning it dynamically adjusts the subinterval
sizes to improve accuracy.
1. Simpson's Rule:
Simpson's rule is a numerical integration technique that approximates the integral of a function by
using quadratic interpolating polynomials. It divides the integration interval into equally spaced
subintervals and approximates the integral over each subinterval as a quadratic function of the interval
endpoints. The formula for Simpson's rule is:
In = (h/3) [f(a) + 4f((a + b)/2) + f(b)]

Where:
In is the approximation of the integral over the interval [a, b].
h is the width of each subinterval, calculated as (b - a)/n, where n is the number of subintervals
(for the basic rule above n = 2, so h = (b - a)/2).
f(a) and f(b) are the function values at the interval endpoints.
2. Adaptive Integration:
The basic Simpson's rule divides the interval into a fixed number of subintervals. In contrast, the
adaptive Simpson's rule dynamically refines the subdivision of the interval to improve accuracy. It
does this by comparing the results of Simpson's rule on subintervals with different widths. If the
difference between the approximations for two subintervals is larger than a specified tolerance, the
algorithm subdivides further, increasing the accuracy of the estimate.
3. Algorithm:
The adaptive Simpson's rule algorithm involves the following steps:
a. Divide the integration interval [a, b] into two equal subintervals by calculating the midpoint c = (a
+ b)/2
b. Apply Simpson's rule to approximate the integral over the entire interval and each of the two
subintervals.
c. Compare the approximations over the subintervals to check if they are accurate enough. If the
difference between the two subintervals' approximations is larger than a specified tolerance, continue
to the next step.
d. Recursively apply the adaptive Simpson's rule to each subinterval, further subdividing them.
Repeat until the accuracy criterion is met for all subintervals.
e. Combine the results from all subintervals to obtain the final estimate of the integral over the
original interval [a, b].
4. Error Estimation:
The adaptive Simpson's rule is effective because it automatically adapts the subinterval sizes to the
function's behavior. It subdivides the interval until the error in the estimate falls below a user-specified
tolerance. The error in the approximation is related to the difference between the results of Simpson's
rule on two subintervals.
5. Advantages:
Adaptive Simpson's rule is more accurate than fixed-step Simpson's rule.
It requires fewer function evaluations to achieve a given level of accuracy because it focuses
computational effort where it is most needed.
6. Limitations:
The recursive nature of the algorithm can lead to additional computational overhead.
The effectiveness of the method depends on the smoothness of the function being integrated.
Sample Problem: The adaptive Simpson's rule C++ program with a sample example using the
function f(x) = x^4 and integration limits from 0 to 1:
#include <iostream>
#include <cmath>

using namespace std;

// Function to be integrated
double func(double x) {
return pow(x, 4); // Example function: x^4
}

// Adaptive Simpson's rule on [a, b]; fa, fb, fc are f(a), f(b), f((a+b)/2)
double adaptiveSimpson(double a, double b, double eps, double fa, double fb, double fc, int depth) {
    double c = (a + b) / 2;
    double d = (a + c) / 2;                            // midpoint of the left half
    double e = (c + b) / 2;                            // midpoint of the right half
    double fd = func(d);
    double fe = func(e);
    double whole = (b - a) / 6 * (fa + 4 * fc + fb);   // Simpson on [a, b]
    double S1 = (c - a) / 6 * (fa + 4 * fd + fc);      // Simpson on [a, c]
    double S2 = (b - c) / 6 * (fc + 4 * fe + fb);      // Simpson on [c, b]
    double S = S1 + S2;

    if (depth <= 0 || fabs(S - whole) <= 15 * eps) {
        return S + (S - whole) / 15.0;                 // accept, with Richardson correction
    } else {
        double left = adaptiveSimpson(a, c, eps / 2, fa, fc, fd, depth - 1);
        double right = adaptiveSimpson(c, b, eps / 2, fc, fb, fe, depth - 1);
        return left + right;
    }
}

int main() {
double a = 0.0;
double b = 1.0;
double eps = 1e-6; // Desired accuracy (epsilon)

double fa = func(a);
double fb = func(b);
double c = (a + b) / 2;
double fc = func(c);

int depth = 10; // Maximum recursion depth

double result = adaptiveSimpson(a, b, eps, fa, fb, fc, depth);

cout << "Result of integration: " << result << endl;

return 0;
}
VIVA Question:
1. What is the adaptive Simpson's rule, and how does it differ from the standard Simpson's rule
for numerical integration?
Answer: The adaptive Simpson's rule is a refinement of the standard Simpson's rule, which divides the
integration interval into smaller subintervals. The key difference is that the adaptive version
dynamically subdivides intervals until a specified accuracy is achieved. It's more efficient and accurate
for functions with varying behavior.
2. Explain the significance of the 'depth' variable in the program, and how does it affect the
adaptive subdivision of intervals?
Answer: The 'depth' variable limits the maximum recursion depth. When the difference between the
two-half Simpson estimate and the single-interval estimate over a subinterval is greater than 15
times the desired accuracy, that subinterval is further subdivided and 'depth' decreases by 1;
once 'depth' reaches 0 the current estimate is accepted. A larger 'depth' therefore allows more
subdivisions and a more accurate approximation.
3. What are the key advantages of using the adaptive Simpson's rule over traditional numerical
integration methods, and in what situations is it particularly useful?
Answer: The adaptive Simpson's rule is advantageous for its adaptability, which ensures accuracy even
for functions with rapidly changing behavior. It's useful when dealing with functions that are difficult
to approximate accurately using fixed-step methods. It also reduces computational effort by
concentrating effort where it is most needed.
4. How does the program handle the user's input for the integration limits (a and b) and the
desired accuracy (epsilon), and why are these inputs important for the integration process?
Answer: In the listing above the lower and upper integration limits (a and b) and the desired
accuracy (epsilon) are set directly in main(); the program could just as easily prompt the user
for them. These inputs are vital for defining the integration problem and controlling the
precision of the result: the accuracy ensures the approximation meets the desired level of
precision.
5. Can you provide an example of a real-world scenario or problem in which the adaptive
Simpson's rule for numerical integration would be a valuable tool, and explain how the
program could be applied to that problem?
Answer: An example could be simulating the heat distribution in a complex shape, where analytical
solutions are not available. The program could be applied by defining the heat distribution function,
setting the integration limits to cover the shape, and using the adaptive Simpson's rule to approximate
the total heat within the region. The adaptive nature of the method ensures accurate results even in
complex geometries.
Advanced Experiment – 2
Objective: To develop a program to solve partial differential equations using numerical methods and
to apply it to a sample problem in electrical science.
Course Outcome: CO2
Software Used: Turbo C++
Theory: Partial differential equations (PDEs) are a type of differential equation that involves more
than one independent variable. They are used to model a wide variety of phenomena in physics,
engineering, and other fields.
Numerical methods are used to solve PDEs because there is no general analytical solution for most
PDEs. Numerical methods approximate the solution to the PDE by discretizing the domain and solving
a system of algebraic equations.
Sample problem in electrical science:
The diffusion of charge carriers in a semiconductor is governed by the following PDE:
∂n/∂t = D ∇²n
where n is the concentration of charge carriers and D is the diffusion coefficient.

C++ code:
The following C++ code solves the diffusion equation using the explicit finite difference method:
#include <iostream>
#include <vector>

using namespace std;

// Diffusion coefficient
const double D = 1.0e-4;

// Time step
const double dt = 1.0e-6;

// Spatial grid spacing
const double dx = 1.0e-4;

// Initial concentration of charge carriers on a 100-point grid
vector<double> n(100, 1.0);

int main() {
    // Number of time steps
    int n_steps = 1000;

    // Iterate over time steps
    for (int i = 0; i < n_steps; i++) {
        // Copy the current time level so the update uses only old values
        vector<double> n_old = n;

        // Explicit (FTCS) update of the interior points;
        // the boundary values n[0] and n[99] are held fixed
        for (int j = 1; j < (int)n.size() - 1; j++) {
            n[j] = n_old[j] + D * dt * (n_old[j + 1] - 2 * n_old[j] + n_old[j - 1]) / (dx * dx);
        }
    }

    // Print the concentration of charge carriers at each grid point
    for (int j = 0; j < (int)n.size(); j++) {
        cout << n[j] << endl;
    }
    return 0;
}

Output:
The output of the program is a list of the concentration of charge carriers at each grid point after 1000
time steps.

VIVA Questions:
1. What are partial differential equations (PDEs), and why are numerical methods used to solve
them?
Answer: Partial differential equations are mathematical equations that involve multiple independent
variables and their partial derivatives. They are used to model physical phenomena in various fields,
including electrical science. Numerical methods are employed to solve PDEs when analytical solutions
are not available or practical. These methods discretize the PDEs into a system of algebraic equations
that can be solved on a computer.
2. What numerical methods can be used to solve partial differential equations, and what factors
influence the choice of method for a specific problem?
Answer: Numerical methods for solving PDEs include finite difference, finite element, finite volume,
and spectral methods, among others. The choice of method depends on factors such as the problem's
domain, boundary conditions, type of PDE (e.g., elliptic, parabolic, hyperbolic), and computational
resources available. Each method has its strengths and weaknesses.
3. How does the program you developed handle boundary and initial conditions when solving
partial differential equations, and why are these conditions essential?
Answer: The program should allow the user to input or specify boundary and initial conditions as they
apply to the specific problem. Boundary and initial conditions are crucial in defining the problem and
ensuring that the solution is physically meaningful. They constrain the behavior of the PDE and
provide necessary information for numerical solvers to find a unique solution.
4. Can you explain the general process of discretization in the context of solving PDEs
numerically, and how is it implemented in your program?
Answer: Discretization involves dividing the problem's domain into a grid or mesh, converting the
PDE into algebraic equations at discrete points, and then solving these equations iteratively. In the
program, you might implement this process by defining a grid, approximating derivatives with finite
differences, and using iterative methods to find the solution at each grid point.
5. Can you provide an example of a sample problem in electrical science that your program can
solve, and walk us through the steps of using the program to solve that problem?
Answer: You could present an example problem, such as simulating the heat distribution in an
electrical circuit component, and explain how the program takes input parameters, sets up the PDE,
applies boundary conditions, and iteratively solves the problem. You may also discuss how to
visualize or interpret the results generated by the program.
