Applied Numerical Methods

Krieger Publishing Company
Malabar, Florida
1990

Original Edition 1969
Reprint Edition 1990

Printed and Published by
ROBERT E. KRIEGER PUBLISHING COMPANY, INC.
KRIEGER DRIVE
MALABAR, FLORIDA 32950
Copyright © 1969 by John Wiley and Sons, Inc.
Reprinted by Arrangement

All rights reserved. No part of this book may be reproduced in any form or by any means, electronic or mechanical,
including information storage and retrieval systems, without permission in writing from the publisher.
No liability is assumed with respect to the use of the information contained herein.

Printed in the United States of America

Library of Congress Cataloging-in-Publication Data

Carnahan, Brice.
    Applied numerical methods / Brice Carnahan, H. A. Luther, James O. Wilkes.
        p. cm.
    Reprint. Originally published: New York : Wiley, 1969.
    Includes bibliographical references and index.
    ISBN 0-89464-486-6 (alk. paper)
    1. Numerical analysis. 2. Algorithms. I. Luther, H. A.
II. Wilkes, James O. III. Title.
QA297.C34 1990
519.4—dc20                                    90-36060
                                                   CIP
DONALD L. KATZ
A. H. White University Professor of Chemical Engineering
The University of Michigan

Preface
This book is intended to be an intermediate treatment of the theory and appli-
cations of numerical methods. Much of the material has been presented at the
University of Michigan in a course for senior and graduate engineering students.
The main feature of this volume is that the various numerical methods are not only
discussed in the text but are also illustrated by completely documented computer
programs. Many of these programs relate to problems in engineering and applied
mathematics. The reader should gain an appreciation of what to expect during the
implementation of particular numerical techniques on a digital computer.
Although the emphasis here is on numerical methods (in contrast to numerical
analysis), short proofs or their outlines are given throughout the text. The more
important numerical methods are illustrated by worked computer examples. The
appendix explains the general manner in which the computer examples are presented,
and also describes the flow-diagram convention that is adopted. In addition to the
computer examples, which are numbered, there are several much shorter examples
appearing throughout the text. These shorter examples are not numbered, and
usually illustrate a particular point by means of a short hand-calculation. The
computer programs are written in the FORTRAN-IV language and have been run
on an IBM 360/67 computer. We assume that the reader is already moderately
familiar with the FORTRAN-IV language.
There is a substantial set of unworked problems at the end of each chapter.
Some of these involve the derivation of formulas or proofs; others involve hand
calculations; and the rest are concerned with the computer solution of a variety of
problems, many of which are drawn from various branches of engineering and
applied mathematics.
Brice Carnahan
H. A. Luther
James O. Wilkes

Contents
COMPUTER EXAMPLES

CHAPTER 1
Interpolation and Approximation

1.1  Introduction
1.2  Approximating Functions
1.3  Polynomial Approximation—A Survey
     The Interpolating Polynomial
     The Least-Squares Polynomial
     The Minimax Polynomial
     Power Series
1.4  Evaluation of Polynomials and Their Derivatives
1.5  The Interpolating Polynomial
1.6  Newton's Divided-Difference Interpolating Polynomial
1.7  Lagrange's Interpolating Polynomial
1.8  Polynomial Interpolation with Equally Spaced Base Points
     Forward Differences
     Backward Differences
     Central Differences
1.9  Concluding Remarks on Polynomial Interpolation
1.10 Chebyshev Polynomials
1.11 Minimizing the Maximum Error
1.12 Chebyshev Economization—Telescoping a Power Series
Problems
Bibliography

CHAPTER 2
Numerical Integration

2.1  Introduction
2.2  Numerical Integration with Equally Spaced Base Points
2.3  Newton-Cotes Closed Integration Formulas
2.4  Newton-Cotes Open Integration Formulas
2.5  Integration Error Using the Newton-Cotes Formulas
2.6  Composite Integration Formulas
2.7  Repeated Interval-Halving and Romberg Integration
2.8  Numerical Integration with Unequally Spaced Base Points
2.9  Orthogonal Polynomials
     Legendre Polynomials: P_n(x)
     Laguerre Polynomials: L_n(x)
     Chebyshev Polynomials: T_n(x)
     Hermite Polynomials: H_n(x)
     General Comments on Orthogonal Polynomials
2.10 Gaussian Quadrature
     Gauss-Legendre Quadrature
     Gauss-Laguerre Quadrature
     Gauss-Chebyshev Quadrature
     Gauss-Hermite Quadrature
     Other Gaussian Quadrature Formulas
2.11 Numerical Differentiation
Problems
Bibliography
CHAPTER 3
Solution of Equations

3.1  Introduction
3.2  Graeffe's Method
3.3  Bernoulli's Method
3.4  Iterative Factorization of Polynomials
3.5  Method of Successive Substitutions
3.6  Ward's Method
3.7  Newton's Method
3.8  Regula Falsi and Related Methods
3.9  Rutishauser's QD Algorithm
Problems
Bibliography
CHAPTER 4
Matrices and Related Topics

4.1  Notation and Preliminary Concepts
4.2  Vectors
4.3  Linear Transformations and Subspaces
4.4  Similar Matrices and Polynomials in a Matrix
4.5  Symmetric and Hermitian Matrices
4.6  The Power Method of Mises
4.7  Method of Rutishauser
4.8  Jacobi's Method for Symmetric Matrices
4.9  Method of Danilevski
Problems
Bibliography
CHAPTER 5
Systems of Equations

5.1  Introduction
5.2  Elementary Transformations of Matrices
5.3  Gaussian Elimination
5.4  Gauss-Jordan Elimination
5.5  A Finite Form of the Method of Kaczmarz
5.6  Jacobi Iterative Method
5.7  Gauss-Seidel Iterative Method
5.8  Iterative Methods for Nonlinear Equations
5.9  Newton-Raphson Iteration for Nonlinear Equations
Problems
Bibliography
CHAPTER 6
The Approximation of the Solution of Ordinary Differential Equations

6.1  Introduction
6.2  Solution of First-Order Ordinary Differential Equations
     Taylor's Expansion Approach
6.3  Euler's Method
6.4  Error Propagation in Euler's Method
6.5  Runge-Kutta Methods
6.6  Truncation Error, Stability, and Step-Size Control in the Runge-Kutta Algorithms
6.7  Simultaneous Ordinary Differential Equations
6.8  Multistep Methods
6.9  Open Integration Formulas
6.10 Closed Integration Formulas
6.11 Predictor-Corrector Methods
6.12 Truncation Error, Stability, and Step-Size Control in the Multistep Algorithms
6.13 Other Integration Formulas
6.14 Boundary-Value Problems
Problems
Bibliography
CHAPTER 7
Approximation of the Solution of Partial Differential Equations

7.1  Introduction
7.2  Examples of Partial Differential Equations
7.3  The Approximation of Derivatives by Finite Differences
7.4  A Simple Parabolic Differential Equation
7.5  The Explicit Form of the Difference Equation
7.6  Convergence of the Explicit Form
7.7  The Implicit Form of the Difference Equation
7.8  Convergence of the Implicit Form
7.9  Solution of Equations Resulting from the Implicit Method
7.10 Stability
7.11 Consistency
7.12 The Crank-Nicolson Method
7.13 Unconditionally Stable Explicit Procedures
     DuFort-Frankel Method
     Saul'yev Method
     Barakat and Clark Method
7.14 The Implicit Alternating-Direction Method
7.15 Additional Methods for Two and Three Space Dimensions
7.16 Simultaneous First- and Second-Order Space Derivatives
7.17 Types of Boundary Conditions
7.18 Finite-Difference Approximations at the Interface between Two Different Media
7.19 Irregular Boundaries
7.20 The Solution of Nonlinear Partial Differential Equations
7.21 Derivation of the Elliptic Difference Equation
7.22 Laplace's Equation in a Rectangle
7.23 Alternative Treatment of the Boundary Points
7.24 Iterative Methods of Solution
7.25 Successive Overrelaxation and Alternating-Direction Methods
7.26 Characteristic-Value Problems
Problems
Bibliography

CHAPTER 8
Statistical Methods

8.1  Introduction: The Use of Statistical Methods
8.2  Definitions and Notation
8.3  Laws of Probability
8.4  Permutations and Combinations
8.5  Population Statistics
8.6  Sample Statistics
8.7  Moment-Generating Functions
8.8  The Binomial Distribution
8.9  The Multinomial Distribution
8.10 The Poisson Distribution
8.11 The Normal Distribution
8.12 Derivation of the Normal Distribution Frequency Function
8.13 The Chi-Square Distribution
8.14 Chi-Square as a Measure of Goodness-of-Fit
8.15 Contingency Tables
8.16 The Sample Variance
8.17 Student's t Distribution
8.18 The F Distribution
8.19 Linear Regression and Method of Least Squares
8.20 Multiple and Polynomial Regression
8.21 Alternative Formulation of Regression Equations
8.22 Regression in Terms of Orthogonal Polynomials
8.23 Introduction to the Analysis of Variance
Problems
Bibliography

APPENDIX
Presentation of Computer Examples
Flow-Diagram Convention

INDEX

Computer Examples
CHAPTER 1
1.1  Interpolation with Finite Divided Differences
1.2  Lagrangian Interpolation
1.3  Chebyshev Economization

CHAPTER 2
2.1  Radiant Interchange between Parallel Plates—Composite Simpson's Rule
2.2  Fourier Coefficients Using Romberg Integration
2.3  Gauss-Legendre Quadrature
2.4  Velocity Distribution Using Gaussian Quadrature

CHAPTER 3
3.1  Graeffe's Root-Squaring Method—Mechanical Vibration Frequencies
3.2  Iterative Factorization of Polynomials
3.3  Solution of an Equation of State Using Newton's Method
3.4  Gauss-Legendre Base Points and Weight Factors by the Half-Interval Method
3.5  Displacement of a Cam Follower Using the Regula Falsi Method

CHAPTER 4
4.1  Matrix Operations
4.2  The Power Method
4.3  Rutishauser's Method
4.4  Jacobi's Method
CHAPTER 5
5.1  Gauss-Jordan Reduction—Voltages and Currents in an Electrical Network
5.2  Calculation of the Inverse Matrix Using the Maximum Pivot Strategy—Member Forces in a Plane Truss
5.3  Gauss-Seidel Method
5.4  Flow in a Pipe Network—Successive-Substitution Method
5.5  Chemical Equilibrium—Newton-Raphson Method

CHAPTER 6
6.1  Euler's Method
6.2  Ethane Pyrolysis in a Tubular Reactor
6.3  Fourth-Order Runge-Kutta Method—Transient Behavior of a Resonant Circuit
6.4  Hamming's Method
6.5  A Boundary-Value Problem in Fluid Mechanics

CHAPTER 7
7.1  Unsteady-State Heat Conduction in an Infinite, Parallel-Sided Slab (Explicit Method)
7.2  Unsteady-State Heat Conduction in an Infinite, Parallel-Sided Slab (Implicit Method)
7.3  Unsteady-State Heat Conduction in a Long Bar of Square Cross Section (Implicit Alternating-Direction Method)
7.4  Unsteady-State Heat Conduction in a Solidifying Alloy
7.5  Natural Convection at a Heated Vertical Plate
7.6  Steady-State Heat Conduction in a Square Plate
7.7  Deflection of a Loaded Plate
7.8  Torsion with Curved Boundary
7.9  Unsteady Conduction between Cylinders (Characteristic-Value Problem)
CHAPTER 8
8.1  Distribution of Points in a Bridge Hand
8.2  Poisson Distribution Random Number Generator
8.3  Tabulation of the Standardized Normal Distribution
8.4  Chi-Square Test for Goodness-of-Fit
8.5  Polynomial Regression with Plotting
CHAPTER 1
Interpolation and Approximation
1.1 Introduction
This text is concerned with the practical solution of
problems in engineering, science, and applied mathematics.
Special emphasis is given to those aspects of problem
formulation and mathematical analysis which lead to
the construction of a solution algorithm or procedure suitable
for execution on a digital computer. The identification
and analysis of computational errors resulting from
mathematical approximations present in the algorithms
will be emphasized throughout.

To the question, "Why approximate?", we can only
answer, "Because we must!" Mathematical models of
physical or natural processes inevitably contain some inherent
errors. These errors result from incomplete understanding
of natural phenomena, the stochastic or random
nature of many processes, and uncertainties in experimental
measurements. Often, a model includes only the most
pertinent features of the physical process and is deliberately
stripped of superfluous detail related to second-level
effects.

Even if an error-free mathematical model could be developed,
it could not, in general, be solved exactly on a
digital computer. A digital computer can perform only a
limited number of simple arithmetic operations (principally
addition, subtraction, multiplication, and division)
on finite, rational numbers. Fundamentally important
mathematical operations such as differentiation, integration,
and evaluation of infinite series cannot be implemented
directly on a digital computer. All such computers
have finite memories and computational registers; only a
discrete subset of the real, rational numbers may be
generated, manipulated, and stored. Thus, it is impossible
to represent infinitesimally small or infinitely large quantities,
or even a continuum of the real numbers on a finite
interval.
Algorithms that use only arithmetic operations and certain
logical operations such as algebraic comparison are
called numerical methods. The error introduced in approximating
the solution of a mathematical problem by a
numerical method is usually termed the truncation error
of the method. We shall devote considerable attention to
the truncation errors associated with the numerical
approximations developed in this text.

When a numerical method is actually run on a digital
computer after transcription to computer program form,
another kind of error, termed round-off error, is introduced.
Round-off errors are caused by the rounding of
results from individual arithmetic operations, because only
a finite number of digits can be retained after each operation,
and will differ from computer to computer, even
when the same numerical method is being used.
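Round-off behavior is easy to observe on any present-day machine. The short sketch below is a modern Python illustration (not one of the book's FORTRAN-IV programs) of two characteristic effects:

```python
# Round-off error: only a finite number of digits is retained after
# each arithmetic operation, so computed results differ from exact
# arithmetic and depend on the order of operations.
s = 0.1 + 0.2
print(s == 0.3)             # False: 0.1, 0.2, and 0.3 have no exact
print(abs(s - 0.3))         # binary floating-point representation

# Adding a small number to a much larger one can lose it entirely,
# since the sum must be rounded back to the available digits.
lost = (1.0e16 + 1.0) - 1.0e16
print(lost)                 # 0.0
```

Note that both results would be different on a machine carrying a different number of digits, which is exactly the machine-dependence described above.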
We begin with the important problem of approximating
one function f(x) by another "suitable" function g(x).
This may be written

    f(x) ≅ g(x).

There are two principal reasons for developing such
approximations. The first is to replace a function f(x)
which is difficult to evaluate or manipulate (for example,
differentiate or integrate) by a simpler, more amenable
function g(x). Transcendental functions given in closed
form, such as ln x, sin x, and erf x, are examples of functions
which cannot be evaluated by strictly arithmetic
operations without first finding approximating functions
such as finite power series. The second reason is for interpolating
in tables of functional values. The function f(x)
is known quantitatively for a finite (usually small) number
of arguments called base points; the sampled functional
values may then be tabulated at the n + 1 base points
x_0, x_1, ..., x_n as follows:

    x_0    f(x_0)
    x_1    f(x_1)
    x_2    f(x_2)
    ...    ...
    x_n    f(x_n)
We wish to generate an approximating function that
will allow an estimation of the value of f(x) for x ≠ x_i,
i = 0, 1, ..., n. In some cases, f(x) is known analytically
but is difficult to evaluate. We have tables of functional
values for the trigonometric functions, Bessel functions,
etc. In others, we may know the general class of functions
to which f(x) belongs, without knowing the values of
specific functional parameters.

In the general case, however, only the base-point functional
information is given and little is known about f(x)
for other arguments, except perhaps that it is continuous
in some interval of interest, a ≤ x ≤ b.
According to the Weierstrass approximation theorem, if f(x) is
continuous on a ≤ x ≤ b then, for any ε > 0, there is some
polynomial p_n(x) of degree n = n(ε) such that

    |f(x) − p_n(x)| < ε,    a ≤ x ≤ b.

Unfortunately, although it is reassuring to know that
some polynomial will approximate f(x) to a specified accuracy,
the usual criteria for generating approximating
polynomials in no way guarantee that the polynomial
found is the one which the Weierstrass theorem shows
must exist. If f(x) is in fact unknown except for a few
sampled values, then the theorem is of little relevance. (It
is comforting nonetheless!)
The case for polynomials as approximating functions
is not so strong that other possibilities should be ruled out
completely. Periodic functions can often be approximated
very efficiently with Fourier functions; functions with an
obvious exponential character will be described more
compactly with a sum of exponentials, etc. Nevertheless,
for the general approximation problem, polynomial
approximations are usually adequate and reasonably easy
to generate.

The remainder of this chapter will be devoted to polynomial
approximations of the form

    f(x) ≅ p_n(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n.    (1.1)

For a thorough discussion of several other approximating
functions, see Hamming [2].
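Polynomials of the form (1.1) are most efficiently evaluated by nested multiplication, the subject of Section 1.4. As a brief sketch (in Python for compactness; the book's own examples are in FORTRAN IV):

```python
def poly_eval(a, x):
    """Evaluate p_n(x) = a[0] + a[1]*x + ... + a[n]*x**n by nested
    multiplication: p_n(x) = (...(a_n*x + a_{n-1})*x + ...)*x + a_0,
    requiring only n multiplications and n additions."""
    result = 0.0
    for coeff in reversed(a):
        result = result * x + coeff
    return result

# p(x) = 1 - 2x + 3x^2 at x = 2:  1 - 4 + 12 = 9
value = poly_eval([1.0, -2.0, 3.0], 2.0)   # 9.0
```

Term-by-term evaluation of the same polynomial would cost roughly n^2/2 multiplications if each power x^i were formed independently.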
1.3 Polynomial Approximation—A Survey

After selection of an nth-degree polynomial (1.1) as the
approximating function, we must choose the criterion for
"fitting the data." This is equivalent to establishing the
procedure for computing the values of the coefficients
a_0, a_1, ..., a_n.

The Interpolating Polynomial. Given the paired values
(x_i, f(x_i)), i = 0, 1, ..., n, perhaps the most obvious
criterion for determining the coefficients of p_n(x) is to
require that

    p_n(x_i) = f(x_i),    i = 0, 1, ..., n.    (1.2)

Thus the nth-degree polynomial p_n(x) must reproduce f(x)
exactly for the n + 1 arguments x = x_i. This criterion
seems especially pertinent since (from a fundamental
theorem of algebra) there is one and only one polynomial
of degree n or less which assumes specified values for
n + 1 distinct arguments. This polynomial, called the nth-degree
interpolating polynomial, is illustrated schematically
for n = 3 in Fig. 1.1. Note that requirement (1.2)
Figure 1.1 The interpolating polynomial.
establishes the value of p_n(x) for all x_i, but in no way
guarantees accurate approximation of f(x) for x ≠ x_i,
that is, for arguments other than the given base points.
If f(x) should be a polynomial of degree n or less, agreement
is of course exact for all x.

The interpolating polynomial will be developed in considerable
detail in Sections 1.5 to 1.9.
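The criterion (1.2) can be posed as a system of n + 1 linear equations in the coefficients a_0, ..., a_n (the Vandermonde system). The following Python sketch (an illustration, not from the book) solves that system and shows the exact-agreement property for a polynomial f(x):

```python
def interpolating_coeffs(xs, ys):
    """Solve the Vandermonde system sum_j a_j * x_i**j = y_i for the
    coefficients of the unique polynomial of degree <= n through the
    n + 1 points (xs[i], ys[i]), by Gauss-Jordan elimination with
    partial pivoting."""
    n = len(xs)
    # Augmented matrix: row i is [1, x_i, x_i^2, ..., x_i^n | y_i].
    A = [[x ** j for j in range(n)] + [y] for x, y in zip(xs, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col] / A[col][col]
                A[r] = [a_r - f * a_c for a_r, a_c in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

# Four points sampled from f(x) = x^3: since f is itself of degree
# n = 3, the interpolating polynomial reproduces it exactly.
xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 8.0, 27.0]
a = interpolating_coeffs(xs, ys)   # close to [0, 0, 0, 1]
```

In practice the Vandermonde matrix becomes badly conditioned as n grows, which is one reason the chapter develops the Newton and Lagrange forms instead of solving this system directly.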
The Least-Squares Polynomial. If there is some question
as to the accuracy of the individual values f(x_i), i = 0, 1,
..., n (often the case with experimental data), then it may
be unreasonable to require that a polynomial fit the f(x_i)
exactly. In addition, it often happens that the desired
polynomial is of low degree, say m, but that there are
many data values available, so that n > m. Since the exact
matching criterion of (1.2) for n + 1 functional values can
be satisfied only by one polynomial of degree n or less, it is
generally impossible to find an interpolating polynomial
of degree m using all n + 1 of the sampled functional
values.

Some other measure of goodness-of-fit is needed. Instead
of requiring that the approximating polynomial
reproduce the given functional values exactly, we ask only
that it fit the data as closely as possible. Of the many
meanings which might be ascribed to "as closely as
possible," the most popular involves application of the
least-squares principle. We fit the given n + 1 functional
values with p_m(x), a polynomial of degree m, requiring
that the sum of the squares of the discrepancies between
the f(x_i) and p_m(x_i) be a minimum. If the discrepancy at
the ith base point x_i is given by δ_i = p_m(x_i) − f(x_i), the
least-squares criterion requires that the a_j, j = 0, 1, ..., m,
be chosen so that the aggregate squared error

    E = Σ(i=0 to n) δ_i^2 = Σ(i=0 to n) [p_m(x_i) − f(x_i)]^2
      = Σ(i=0 to n) [Σ(j=0 to m) a_j x_i^j − f(x_i)]^2    (1.3)

be as small as possible. If m should equal n, the minimum
error E is exactly zero, and the least-squares polynomial
is identical with the interpolating polynomial. Figure 1.2
Figure 1.2 The least-squares polynomial.
illustrates the fitting of five functional values (n = 4) with
a least-squares polynomial of degree one (m = 1), that is,
a straight line.
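For the straight-line case m = 1, setting the partial derivatives of E in (1.3) with respect to a_0 and a_1 to zero yields two linear normal equations whose solution is written out below in a short Python sketch (a modern illustration; the book treats the least-squares polynomial in Chapter 8):

```python
def least_squares_line(xs, ys):
    """Fit p_1(x) = a0 + a1*x to the points (xs[i], ys[i]) by
    minimizing E = sum_i (p_1(x_i) - f(x_i))**2.  The normal
    equations dE/da0 = dE/da1 = 0 give a 2-by-2 linear system
    solved here in closed form."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = (sy - a1 * sx) / n
    return a0, a1

# Five values (n = 4) scattered about the line y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a0, a1 = least_squares_line(xs, ys)
```

With these data the fitted line is close to, but not exactly, y = 2x + 1, since no straight line can pass through all five scattered points.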
When the values f(x_i) are thought to be of unequal reliability
or precision, the least-squares criterion is sometimes
modified to require that the squared error at x_i be
multiplied by a nonnegative weight factor w(x_i) before
the aggregate squared error is calculated, that is, (1.3)
assumes the form

    E = Σ(i=0 to n) w(x_i) δ_i^2.

The weight w(x_i) is thus a measure of the degree of precision
or relative importance of the value f(x_i) in determining
the coefficients of the weighted least-squares
polynomial p_m(x).

The least-squares principle may also be used to find an
approximating polynomial p_n(x) for a known continuous
function f(x) on the interval [a,b]. In this case the object
is to choose the coefficients of p_n(x) which minimize E,
where

    E = ∫(a to b) w(x) [p_n(x) − f(x)]^2 dx.

Here, w(x) is a nonnegative weighting function; in many
cases, w(x) = 1.

Since the motivation for the least-squares criterion is
essentially statistical in nature, further description of the
least-squares polynomial will be delayed until Chapter 8.
The Minimax Polynomial. Another popular criterion,
termed the minimax principle, requires that the coefficients
of the approximating polynomial p_n(x) be
chosen so that the maximum magnitude of the differences

    f(x_i) − p_n(x_i),    i = 0, 1, ..., n,

be as small as possible.
where α_0, α_1, α_2 is any permutation of the integers 2, 1, 0.
In general it follows by induction that

    f[x_n, x_{n−1}, ..., x_0] = f[x_{α_0}, x_{α_1}, ..., x_{α_n}],    (1.16)

where the sequence of integers α_0, α_1, ..., α_n is any
permutation of n, n − 1, ..., 0.

It is also apparent from the definition that

    f[x_0, x_1] = f(x_1)/(x_1 − x_0) + f(x_0)/(x_0 − x_1).
The word vector will describe a one-dimensional array of numbers identified by
a common name. Individual
elements of the vector are identified by a single subscript attached
to the name. In this case, the x and y vectors consist of the elements
x_1, x_2, ..., x_n and y_1, y_2, ..., y_n, respectively. The word
matrix will describe a rectangular array (rows and columns)
of numbers identified by a common name. Individual elements
of the matrix are identified by two subscripts, the first to
indicate the row index and the second to indicate the column index.
Thus T_ij is the element appearing in the ith row and jth column of
the matrix T (see Chapter 4).
Note that, unlike the divided-difference tables such as
Table 1.2, there is no base point (x_0, y_0 = f(x_0)). In
order to facilitate the simple subscripting scheme of
(1.1.1), the divided-difference portion of the table is no
longer symmetric about the base points near the middle
of the table. All elements of the T matrix, T_ij for
j > i, j > m, and i ≥ n, are unused. The subroutine
DTABLE assumes that T is a k by k matrix, and checks
for argument consistency to insure that m ≤ n − 1. The
function FNEWT first scans the base points to find the
element x_i nearest, but not smaller than, the interpolation
argument x̄; i is assigned the subscript of that element.
The base points used to determine
the interpolating polynomial are normally x_{max−d}, ...,
x_{max}, where max = i + d/2 for d even and max = i +
(d − 1)/2 for d odd. Should max be smaller than d + 1
or larger than n, then max is reassigned the value d + 1
or n, respectively. This insures that only given base
points are used to fit the polynomial.

In terms of the divided differences, the interpolant
value ȳ(x̄) is described by the polynomial of degree d:

    ȳ(x̄) = y_{max−d} + (x̄ − x_{max−d}) f[x_{max−d+1}, x_{max−d}]
           + (x̄ − x_{max−d})(x̄ − x_{max−d+1}) f[x_{max−d+2}, x_{max−d+1}, x_{max−d}]
           + ... + (x̄ − x_{max−d}) ... (x̄ − x_{max−1}) f[x_{max}, ..., x_{max−d}].    (1.1.2)

The corresponding error term, from (1.32) and (1.39), is

    R_d(x̄) = f(x̄) − ȳ(x̄) = (x̄ − x_{max−d}) ... (x̄ − x_{max}) f[x̄, x_{max}, ..., x_{max−d}],    (1.1.3)

or

    R_d(x̄) = (x̄ − x_{max−d}) ... (x̄ − x_{max}) f^(d+1)(ξ)/(d + 1)!,    ξ in (x_{max−d}, x_{max}).    (1.1.4)

Rewritten in the nested form of (1.10), (1.1.2) becomes

    ȳ(x̄) = {...{{f[x_{max}, ..., x_{max−d}](x̄ − x_{max−1}) + f[x_{max−1}, ..., x_{max−d}]}(x̄ − x_{max−2})
           + f[x_{max−2}, ..., x_{max−d}]}(x̄ − x_{max−3}) + ... + f[x_{max−d+1}, x_{max−d}]}
           × (x̄ − x_{max−d}) + y_{max−d},    (1.1.5)

or, from (1.1.1),

    ȳ(x̄) = {...{{T_{max−1,d}(x̄ − x_{max−1}) + T_{max−2,d−1}}(x̄ − x_{max−2}) + T_{max−3,d−2}}(x̄ − x_{max−3})
           + ... + T_{max−d,1}}(x̄ − x_{max−d}) + y_{max−d}.    (1.1.6)

FNEWT uses this nested form to evaluate ȳ(x̄). Should
there be an inconsistency in the function arguments,
that is, if d > m, the value assigned to FNEWT is zero;
otherwise, the value is ȳ(x̄).

In both DTABLE and FNEWT, a computational switch,
trubl, is set to 1 when argument inconsistency is found;
otherwise, trubl is set to 0. FNEWT does not check to
insure that the elements x_i, i = 1, 2, ..., n, are in
ascending order, although such a test could be incorporated
easily.

Flow Diagram

Main Program: Read n, m, and the base points x_i, i = 1, 2, ..., n;
compute y_i = cos x_i. Compute the finite divided-difference table
for all orders m or less (subroutine DTABLE). Then, for each data
set, read x̄ and d, compute the interpolant value ȳ(x̄) using a
"well-centered" Newton's divided-difference interpolating polynomial
of degree d (function FNEWT), and print ȳ(x̄) and cos x̄.

Subroutine DTABLE (Arguments: x, y, T, n, m, trubl, k): If m > n − 1,
set trubl = 1 and return. Otherwise compute the first-order differences
T_{i,1} = (y_{i+1} − y_i)/(x_{i+1} − x_i), i = 1, ..., n − 1, then the
higher-order differences T_{i,j} = (T_{i,j−1} − T_{i−1,j−1})/(x_{i+1} − x_{i+1−j}),
j = 2, ..., m, i = j, ..., n − 1; set trubl = 0 and return.

Function FNEWT (Arguments: x, y, T, n, m, d, x̄, trubl, k): If d > m,
set trubl = 1 and return the value zero. Otherwise find the first x_i
not smaller than x̄ (or take i = n), set max = i + d/2, reassign
max = d + 1 or max = n if max falls outside the range d + 1, ..., n,
evaluate Newton's divided-difference polynomial by the nested
algorithm (1.1.6), set trubl = 0, and return ȳ(x̄).
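The same computation can be sketched compactly in a modern language. The Python version below mirrors DTABLE and FNEWT (0-based subscripts; the centering and range adjustments follow the description above). It is an illustrative sketch, not the book's program, and it omits the trubl argument checks:

```python
import bisect
import math

def dtable(xs, ys, m):
    """Divided-difference table: T[i][j-1] holds the order-j
    difference ending at base point i+1, for j = 1, ..., m
    (the analogue of subroutine DTABLE, 0-based)."""
    n = len(xs)
    T = [[0.0] * m for _ in range(n - 1)]
    for i in range(n - 1):                       # first-order differences
        T[i][0] = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
    for j in range(1, m):                        # higher-order differences
        for i in range(j, n - 1):
            T[i][j] = (T[i][j - 1] - T[i - 1][j - 1]) / (xs[i + 1] - xs[i - j])
    return T

def fnewt(xs, ys, T, d, xarg):
    """Nested evaluation of the well-centered degree-d Newton
    divided-difference polynomial (the analogue of function FNEWT)."""
    n = len(xs)
    i = min(bisect.bisect_left(xs, xarg), n - 1)  # first x_i >= xarg
    mx = i + d // 2                               # largest base point used
    mx = max(mx, d)                               # keep all d+1 points ...
    mx = min(mx, n - 1)                           # ... inside the table
    yest = T[mx - 1][d - 1]
    for k in range(1, d):                         # nested algorithm (1.1.6)
        yest = yest * (xarg - xs[mx - k]) + T[mx - k - 1][d - k - 1]
    return yest * (xarg - xs[mx - d]) + ys[mx - d]

# The book's test data: y_i = cos x_i at eight unequally spaced points.
xs = [0.0, 0.2, 0.3, 0.4, 0.6, 0.7, 0.9, 1.0]
ys = [math.cos(x) for x in xs]
T = dtable(xs, ys, 6)
y3 = fnewt(xs, ys, T, 3, 0.25)   # close to cos(0.25) = 0.968912...
```

Run in double precision this reproduces the tabulated single-precision results closely, e.g. about 0.968909 for d = 3 at x̄ = 0.25.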
FORTRAN Implementation

List of Principal Variables

Program Symbol    Definition

(Main)
I, J          Subscripts i, j.
IDEG          Degree, d, of the interpolating polynomial.
M             m, highest-order divided difference to be computed by DTABLE.
N             n, the number of paired values (x_i, y_i = f(x_i)).
NM1           n − 1.
TABLE         Matrix of divided differences, T.
TRUBL         Computational switch: set to 1 if argument inconsistency is encountered, otherwise set to 0.
TRUVAL        Value of cos x̄ computed by the library function COS.
X             Vector of base points, x_i.
XARG          Interpolation argument, x̄.
Y             Vector of functional values, y_i = f(x_i).
YINTER        Interpolant value, ȳ(x̄).

(Subroutine DTABLE)
ISUB          i + 1 − j.
K             Row and column dimensions, k, of matrix T.

(Function FNEWT)
IDEGM1        d − 1.
ISUB1, ISUB2  Subscripts, max − i, d − i.
MAX           Subscript of largest base point used to determine the interpolating polynomial, max.
YEST          Variable used in nested evaluation of interpolating polynomial, (1.1.6).
Program Listing
Main Program
C     APPLIED NUMERICAL METHODS, EXAMPLE 1.1
C     NEWTON'S DIVIDED-DIFFERENCE INTERPOLATING POLYNOMIAL
C
C     TEST PROGRAM FOR THE SUBROUTINE DTABLE AND THE FUNCTION
C     FNEWT.  THIS PROGRAM READS A SET OF N VALUES X(1)...X(N),
C     COMPUTES A CORRESPONDING SET OF VALUES Y(1)...Y(N) WHERE
C     Y(I) = COS(X(I)), AND THEN CALLS ON SUBROUTINE DTABLE
C     TO CALCULATE ALL FINITE DIVIDED DIFFERENCES OF ORDER M OR
C     LESS, WITH THE DIVIDED DIFFERENCES STORED IN MATRIX TABLE.
C     THE PROGRAM READS VALUES FOR XARG, THE INTERPOLATION
C     ARGUMENT, AND IDEG, THE DEGREE OF THE INTERPOLATING
C     POLYNOMIAL TO BE EVALUATED BY THE FUNCTION FNEWT.
C     FNEWT COMPUTES THE INTERPOLANT VALUE, YINTER, WHICH IS
C     COMPARED WITH THE TRUE VALUE, TRUVAL = COS(XARG).
C
      DIMENSION X(20), Y(20), TABLE(20,20)
C
C     ..... READ DATA, COMPUTE Y VALUES, AND PRINT .....
      READ (5,100) N, M, (X(I), I=1,N)
      WRITE (6,200)
      DO 1 I=1,N
      Y(I) = COS(X(I))
    1 WRITE (6,201) I, X(I), Y(I)
C
C     ..... COMPUTE AND PRINT DIVIDED DIFFERENCES .....
      CALL DTABLE( X, Y, TABLE, N, M, TRUBL, 20 )
      IF (TRUBL.NE.0.0) CALL EXIT
      WRITE (6,202) M
      NM1 = N - 1
      DO 6 I=1,NM1
      L = I
      IF (I.GT.M) L = M
    6 WRITE (6,203) (TABLE(I,J), J=1,L)
C
C     ..... READ XARG AND IDEG, CALL ON FNEWT TO INTERPOLATE .....
      WRITE (6,204)
    7 READ (5,101) XARG, IDEG
      YINTER = FNEWT( X, Y, TABLE, N, M, IDEG, XARG, TRUBL, 20 )
C
C     ..... COMPUTE TRUE VALUE OF COS(XARG) AND PRINT RESULTS .....
      TRUVAL = COS(XARG)
      WRITE (6,205) XARG, IDEG, YINTER, TRUVAL, TRUBL
      GO TO 7
C
C     ..... FORMATS FOR INPUT AND OUTPUT STATEMENTS .....
  100 FORMAT ( 4X, I3, 10X, I3 / (15X, 5F10.4) )
  101 FORMAT ( 7X, F8.4, 15X, I3 )
  200 FORMAT ( 33H1THE SAMPLE FUNCTIONAL VALUES ARE / 5H0   I, 8X,
     1  4HX(I), 9X, 4HY(I) / 1H  )
  201 FORMAT ( 1H , I4, 2F13.6 )
  202 FORMAT ( 9H1FOR M = , I2, 29H, THE DIVIDED DIFFERENCES ARE )
  203 FORMAT ( 1H  / (1H , 8E16.7) )
  204 FORMAT ( 25H1THE DATA AND RESULTS ARE / 1H0, 5X, 4HXARG, 5X,
     1  4HIDEG, 5X, 6HYINTER, 6X, 6HTRUVAL, 3X, 5HTRUBL / 1H  )
  205 FORMAT ( 1H , F9.4, I8, 2F12.6, F7.1 )
C
      END
Subroutine DTABLE
      SUBROUTINE DTABLE ( X, Y, TABLE, N, M, TRUBL, K )
C
C     DTABLE COMPUTES THE FINITE DIVIDED DIFFERENCES OF
C     Y(1)...Y(N) FOR ALL ORDERS M OR LESS AND STORES THEM IN
C     THE LOWER TRIANGULAR PORTION OF THE FIRST M COLUMNS OF THE FIRST
C     N-1 ROWS OF THE MATRIX TABLE.  FOR INCONSISTENT ARGUMENTS,
C     TRUBL = 1.0 ON EXIT.  OTHERWISE, TRUBL = 0.0 ON EXIT.
C
      DIMENSION X(N), Y(N), TABLE(K,K)
C
C     ..... CHECK FOR ARGUMENT CONSISTENCY .....
      IF (N.GT.M) GO TO 2
      TRUBL = 1.0
      RETURN
C
C     ..... CALCULATE FIRST-ORDER DIFFERENCES .....
    2 NM1 = N - 1
      DO 3 I=1,NM1
    3 TABLE(I,1) = (Y(I+1) - Y(I))/(X(I+1) - X(I))
      IF (M.LE.1) GO TO 6
C
C     ..... CALCULATE HIGHER-ORDER DIFFERENCES .....
      DO 5 J=2,M
      DO 5 I=J,NM1
      ISUB = I + 1 - J
    5 TABLE(I,J) = (TABLE(I,J-1) - TABLE(I-1,J-1))/
     1  (X(I+1) - X(ISUB))
    6 TRUBL = 0.0
      RETURN
C
      END
Function FNEWT
      FUNCTION FNEWT ( X, Y, TABLE, N, M, IDEG, XARG, TRUBL, K )
C
C     FNEWT ASSUMES THAT X(1)...X(N) ARE IN ASCENDING ORDER AND
C     FIRST SCANS THE X VECTOR TO DETERMINE WHICH ELEMENT IS
C     NEAREST (.GE.) THE INTERPOLATION ARGUMENT, XARG.
C     THE IDEG+1 BASE POINTS NEEDED FOR THE EVALUATION OF THE
C     DIVIDED-DIFFERENCE POLYNOMIAL OF DEGREE IDEG ARE THEN
C     CENTERED ABOUT THE CHOSEN ELEMENT WITH THE LARGEST HAVING
C     THE SUBSCRIPT MAX.  IT IS ASSUMED THAT THE FIRST M DIVIDED
C     DIFFERENCES HAVE BEEN COMPUTED BY THE SUBROUTINE
C     DTABLE AND ARE ALREADY PRESENT IN THE MATRIX TABLE.
C     MAX IS CHECKED TO INSURE THAT ALL REQUIRED BASE POINTS ARE
C     AVAILABLE, AND THE INTERPOLANT VALUE IS COMPUTED USING NESTED
C     POLYNOMIAL EVALUATION.  THE INTERPOLANT IS RETURNED AS
C     THE VALUE OF THE FUNCTION.  FOR INCONSISTENT ARGUMENTS,
C     TRUBL = 1.0 ON EXIT.  OTHERWISE, TRUBL = 0.0 ON EXIT.
C
      DIMENSION X(N), Y(N), TABLE(K,K)
C
C     ..... CHECK FOR ARGUMENT INCONSISTENCY .....
      IF (IDEG.LE.M) GO TO 2
      TRUBL = 1.0
      FNEWT = 0.0
      RETURN
C
C     ..... SEARCH X VECTOR FOR ELEMENT .GE. XARG .....
    2 DO 4 I=1,N
      IF (I.EQ.N .OR. XARG.LE.X(I)) GO TO 5
    4 CONTINUE
    5 MAX = I + IDEG/2
C
C     ..... INSURE THAT ALL REQUIRED DIFFERENCES ARE IN TABLE .....
      IF (MAX.LE.IDEG) MAX = IDEG + 1
      IF (MAX.GT.N) MAX = N
C
C     ..... COMPUTE INTERPOLANT VALUE .....
      YEST = TABLE(MAX-1,IDEG)
      IF (IDEG.LE.1) GO TO 13
      IDEGM1 = IDEG - 1
      DO 12 I=1,IDEGM1
      ISUB1 = MAX - I
      ISUB2 = IDEG - I
   12 YEST = YEST*(XARG - X(ISUB1)) + TABLE(ISUB1-1,ISUB2)
   13 ISUB1 = MAX - IDEG
      TRUBL = 0.0
      FNEWT = YEST*(XARG - X(ISUB1)) + Y(ISUB1)
      RETURN
C
      END
Data

N =    8          M =   6
X(1)...X(5)  =     0.0000    0.2000    0.3000    0.4000    0.6000
X(6)...X(8)  =     0.7000    0.9000    1.0000
XARG =   0.2500   IDEG =  1
XARG =   0.2500   IDEG =  2
XARG =   0.2500   IDEG =  3
XARG =   0.2500   IDEG =  4
XARG =   0.2500   IDEG =  5
XARG =   0.2500   IDEG =  6
XARG =   0.4500   IDEG =  1
XARG =   0.4500   IDEG =  2
XARG =   0.4500   IDEG =  3
XARG =   0.0500   IDEG =  1
XARG =   0.0500   IDEG =  2
XARG =   0.0500   IDEG =  3
XARG =   0.9500   IDEG =  1
XARG =   0.9500   IDEG =  2
XARG =   0.9500   IDEG =  3
XARG =   0.1000   IDEG =  4
XARG =  -0.1000   IDEG =  4
XARG =   0.5500   IDEG =  7
XARG =   1.1000   IDEG =  1
XARG =   2.0000   IDEG =  1
XARG =   2.0000   IDEG =  2
XARG =   2.0000   IDEG =  3
XARG =   2.0000   IDEG =  4
XARG =   2.0000   IDEG =  5
XARG =   2.0000   IDEG =  6
Computer Output

THE SAMPLE FUNCTIONAL VALUES ARE

  I       X(I)         Y(I)
  1    0.0          1.000000
  2    0.200000     0.980067
  3    0.300000     0.955337
  4    0.400000     0.921061
  5    0.600000     0.825336
  6    0.700000     0.764842
  7    0.900000     0.621610
  8    1.000000     0.540302

FOR M = 6, THE DIVIDED DIFFERENCES ARE

[Triangular table of divided differences T(i,j), i = 1, ..., 7.
First-order column: -0.9966671E-01, -0.2473009E 00, -0.3427552E 00,
-0.4786271E 00, -0.6049359E 00, -0.7161614E 00, -0.8130769E 00;
the higher-order columns are illegible in this reproduction.]

THE DATA AND RESULTS ARE

    XARG    IDEG     YINTER      TRUVAL    TRUBL

  0.2500       1    0.967702    0.968913     0.0
  0.2500       2    0.968895    0.968913     0.0
  0.2500       3    0.968909    0.968913     0.0
  0.2500       4    0.968912    0.968913     0.0
  0.2500       5    0.968913    0.968913     0.0
  0.2500       6    0.968913    0.968913     0.0
  0.4500       1    0.897130    0.900447     0.0
  0.4500       2    0.900287    0.900447     0.0
  0.4500       3    0.900437    0.900447     0.0
  0.0500       1    0.995017    0.998750     0.0
  0.0500       2    0.998708    0.998750     0.0
  0.0500       3    0.998777    0.998750     0.0
  0.9500       1    0.580956    0.581683     0.0
  0.9500       2    0.581768    0.581683     0.0
  0.9500       3    0.581689    0.581683     0.0
  0.1000       4    0.995005    0.995004     0.0
 -0.1000       4    0.994996    0.995004     0.0
  0.5500       7    0.0         0.852525     1.0
  1.1000       1    0.458995    0.453597     0.0
  2.0000       1   -0.272778   -0.416147     0.0
  2.0000       2   -0.626128   -0.416147     0.0
  2.0000       3   -0.457578   -0.416147     0.0
  2.0000       4   -0.395053   -0.416147     0.0
  2.0000       5   -0.410927   -0.416147     0.0
  2.0000       6   -0.416679   -0.416147     0.0
Discussion of Results

The programs have been written to allow calculation of the desired table of divided differences just once (by calling DTABLE once). Subsequently, an interpolating polynomial of any degree d ≤ m can be evaluated for any argument by a single call on FNEWT.
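The division of labor between DTABLE and FNEWT can be sketched in modern terms as follows. This is a Python re-implementation of the idea, not the book's FORTRAN; the function names merely echo the originals, and all indexing is 0-based.

```python
def dtable(x, y):
    """Build the full triangular table of divided differences once.
    table[k][i] holds the k-th order difference f[x_i, ..., x_(i+k)]."""
    n = len(x)
    table = [list(y)]
    for k in range(1, n):
        prev = table[k - 1]
        table.append([(prev[i + 1] - prev[i]) / (x[i + k] - x[i])
                      for i in range(n - k)])
    return table

def fnewt(x, table, xarg, ideg, imin=0):
    """Evaluate the Newton divided-difference polynomial of degree ideg
    on base points x[imin], ..., x[imin + ideg], by nested multiplication."""
    yest = table[ideg][imin]
    for k in range(ideg - 1, -1, -1):
        yest = yest * (xarg - x[imin + k]) + table[k][imin]
    return yest
```

Building the table once and then calling fnewt repeatedly with different degrees mirrors the test program above; for the cosine data of Example 1.1, fnewt(x, table, 0.25, 1, 1) reproduces the tabulated linear interpolant 0.967702.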
Example 1.2  Lagrangian Interpolation

Problem Statement

[The opening of the problem statement is not legible in this reproduction.] In addition, write a main program that reads data values n, x1, x2, ..., xn, y1, y2, ..., yn, x̄, d, and min, and then calls upon FLAGR to evaluate the appropriate interpolating polynomial and return the interpolant value, ȳ(x̄). As test data, use information from Table 1.2.1 relating observed voltage and temperature for the Platinum to Platinum-10 percent Rhodium thermocouple with cold junctions at 32°F.
Table 1.2.1  Reference Table for the Pt/Pt-10% Rh Thermocouple [21]

   Voltage     Temperature (°F)
       0            32.0
     300           122.4
     500           176.0
    1000           296.4
    1500           405.7
    1700           447.6
    2000           509.0
    2500           608.4
    3000           704.7
    3300           761.4
    3500           799.0
    4000           891.9
    4500           983.0
    5000          1072.6
    5300          1128.7
    5500          1160.8
    5900          1230.3
    6000          1247.5
Read tabulated values for the 13 selected base points x1 = 0, x2 = 500, x3 = 1000, ..., x13 = 6000, and the corresponding functional values y1, y2, ..., y13, where yi = f(xi). Then call on FLAGR to evaluate ȳ(x̄) for arguments x̄ = 300, 1700, 2500, 3300, 5300, and 5900, with various values for d and min. Compare the results with the experimentally observed values from Table 1.2.1.
Method of Solution

In terms of the problem parameters, Lagrange's form of the interpolating polynomial (1.43) becomes

    ȳ(x̄) = Σ Li(x̄) yi ,    i = min, min + 1, ..., min + d,        (1.2.1)

where

    Li(x̄) = Π (x̄ - xj)/(xi - xj) ,    j = min, min + 1, ..., min + d,  j ≠ i.        (1.2.2)

The program that follows is a straightforward implementation of (1.2.1) and (1.2.2), except that some calculations (about d² multiplications and d² subtractions, at the expense of d + 1 divisions) are saved by writing (1.2.2) in the form

    Li(x̄) = C/[(x̄ - xi) Π (xi - xj)] ,    j = min, ..., min + d,  j ≠ i,  x̄ ≠ xi,        (1.2.3)

where

    C = Π (x̄ - xj) ,    j = min, min + 1, ..., min + d.        (1.2.4)

The restriction in (1.2.3), x̄ ≠ xi, causes no difficulty, since, if x̄ = xi, the interpolant ȳ(x̄) is known to be yi; no additional computation is required.
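The saving expressed by (1.2.3) and (1.2.4) can be sketched as follows — a Python re-implementation rather than the book's FORTRAN, with min supplied as a 0-based index:

```python
def flagr(x, y, xarg, deg, imin):
    """Lagrange interpolation of degree deg on base points
    x[imin], ..., x[imin + deg].  The common factor
    C = prod(xarg - x[j]) of (1.2.4) is built once, so each
    term L_i * y_i then needs only divisions, as in (1.2.3)."""
    jmax = imin + deg
    factor = 1.0
    for j in range(imin, jmax + 1):
        if xarg == x[j]:          # argument coincides with a base point
            return y[j]
        factor *= xarg - x[j]     # accumulates C of (1.2.4)
    yest = 0.0
    for i in range(imin, jmax + 1):
        term = y[i] * factor / (xarg - x[i])
        for j in range(imin, jmax + 1):
            if j != i:
                term /= x[i] - x[j]
        yest += term
    return yest
```

With the thermocouple data of Table 1.2.1 as base points, a linear fit through (0, 32.0) and (500, 176.0) evaluated at x̄ = 300 gives 118.4, the first entry of the computer output below.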
Flow Diagram

Main Program

    Compute interpolant value ȳ(x̄) using Lagrange's interpolating
    polynomial of degree d with base points x_min, ..., x_min+d
    (Function FLAGR)

Function FLAGR (Arguments: x, y, x̄, d, min, n)

    C ← Π (x̄ - xj) ,    j = min, min + 1, ..., min + d

[The remaining boxes of the flow diagram are not legible in this reproduction.]
FORTRAN Implementation

List of Principal Variables

Program Symbol    Definition

(Main)
I                 Subscript, i.
IDEG              Degree, d, of the interpolating polynomial.
MIN               Smallest subscript for base points used to determine the interpolating polynomial, min.
N                 n, the number of paired values (xi, yi = f(xi)).
X                 Vector of base points, xi.
XARG              Interpolation argument, x̄.
Y                 Vector of functional values, yi = f(xi).
YINTER            Interpolant value, ȳ(x̄).

(Function FLAGR)
FACTOR            The factor C (see (1.2.4)).
J                 Subscript, j.
MAX               Largest subscript for base points used to determine the interpolating polynomial, min + d.
TERM              A variable that assumes successively the values Li(x̄)yi in (1.2.1).
YEST              Interpolant value, ȳ(x̄).

Example 1.2  Lagrangian Interpolation
Program Listing

Main Program

C     APPLIED NUMERICAL METHODS, EXAMPLE 1.2
C     LAGRANGE'S INTERPOLATING POLYNOMIAL
C
C     TEST PROGRAM FOR THE FUNCTION FLAGR.  THIS PROGRAM READS A
C     SET OF N VALUES X(1)...X(N) AND A CORRESPONDING SET OF
C     FUNCTIONAL VALUES Y(1)...Y(N) WHERE Y(I) = F(X(I)).  THE
C     PROGRAM THEN READS VALUES FOR XARG, IDEG, AND MIN (SEE FLAGR
C     FOR MEANINGS) AND CALLS ON FLAGR TO PRODUCE THE INTERPOLANT
C     VALUE, YINTER.
C
      IMPLICIT REAL*8(A-H,O-Z)
      DIMENSION X(100), Y(100)
C
C     ..... READ N, X AND Y VALUES, AND PRINT .....
      READ (5,100)  N, (X(I), I=1,N)
      READ (5,101)  (Y(I), I=1,N)
      WRITE (6,200)
      DO 1  I=1,N
    1 WRITE (6,201)  I, X(I), Y(I)
C
C     ..... READ INTERPOLATION ARGUMENTS, CALL ON FLAGR, AND PRINT .....
      WRITE (6,202)
    2 READ (5,102)  XARG, IDEG, MIN
      YINTER = FLAGR( X, Y, XARG, IDEG, MIN, N )
      WRITE (6,203)  XARG, IDEG, MIN, YINTER
      GO TO 2
C
C     ..... FORMATS FOR INPUT AND OUTPUT STATEMENTS .....
  100 FORMAT ( 4X, I3 / (15X, 5F10.4) )
  101 FORMAT ( 15X, 5F10.4 )
  102 FORMAT ( 7X, F10.4, 13X, I2, 12X, I2 )
  200 FORMAT ( 35H1THE SAMPLE FUNCTIONAL VALUES ARE / 5H0   I, 8X,
     1   4HX(I), 9X, 4HY(I) / 1H  )
  201 FORMAT ( 1H , I4, 2F13.4 )
  202 FORMAT ( 25H1THE DATA AND RESULTS ARE / 1H0, 5X, 4HXARG, 5X,
     1   4HIDEG, 5X, 3HMIN, 5X, 6HYINTER / 1H  )
  203 FORMAT ( 1H , F9.4, I8, I8, F12.4 )
C
      END
Function FLAGR

      FUNCTION FLAGR ( X, Y, XARG, IDEG, MIN, N )
C
C     FLAGR USES THE LAGRANGE FORMULA TO EVALUATE THE INTERPOLATING
C     POLYNOMIAL OF DEGREE IDEG FOR ARGUMENT XARG USING THE DATA
C     VALUES X(MIN)...X(MAX) AND Y(MIN)...Y(MAX), WHERE
C     MAX = MIN + IDEG.  NO ASSUMPTION IS MADE REGARDING ORDER OF
C     THE X(I), AND NO ARGUMENT CHECKING IS DONE.  TERM IS
C     A VARIABLE WHICH CONTAINS SUCCESSIVELY EACH TERM OF THE
C     LAGRANGE FORMULA.  THE FINAL VALUE OF YEST IS THE INTERPOLATED
C     VALUE.  SEE TEXT FOR A DESCRIPTION OF FACTOR.
C
      IMPLICIT REAL*8(A-H,O-Z)
      REAL*8  X, Y, XARG, FLAGR
      DIMENSION X(N), Y(N)
C
C     ..... COMPUTE VALUE OF FACTOR .....
      FACTOR = 1.0
      MAX = MIN + IDEG
      DO 2  J=MIN,MAX
      IF ( XARG.NE.X(J) )  GO TO 2
      FLAGR = Y(J)
      RETURN
    2 FACTOR = FACTOR*(XARG - X(J))
Program Listing (Continued)

C
C     ..... EVALUATE INTERPOLATING POLYNOMIAL .....
      YEST = 0.0
      DO 4  I=MIN,MAX
      TERM = Y(I)*FACTOR/(XARG - X(I))
      DO 3  J=MIN,MAX
    3 IF ( I.NE.J )  TERM = TERM/(X(I) - X(J))
    4 YEST = YEST + TERM
      FLAGR = YEST
      RETURN
C
      END

Data
N     =  13
X(1)...X(5)   =     0.      500.     1000.     1500.     2000.
X(6)...X(10)  =  2500.     3000.     3500.     4000.     4500.
X(11)...X(13) =  5000.     5500.     6000.
Y(1)...Y(5)   =    32.0     176.0     296.4     405.7     509.0
Y(6)...Y(10)  =   608.4     704.7     799.0     891.9     983.0
Y(11)...Y(13) =  1072.6    1160.8    1247.5
XARG =  300.     IDEG = 1     MIN =  1
XARG =  300.     IDEG = 2     MIN =  1
XARG =  300.     IDEG = 3     MIN =  1
XARG =  300.     IDEG = 4     MIN =  1
XARG = 1700.     IDEG = 1     MIN =  4
XARG = 1700.     IDEG = 2     MIN =  3
XARG = 1700.     IDEG = 2     MIN =  4
XARG = 1700.     IDEG = 3     MIN =  3
XARG = 1700.     IDEG = 4     MIN =  2
XARG = 1700.     IDEG = 4     MIN =  3
XARG = 2500.     IDEG = 1     MIN =  5
XARG = 2500.     IDEG = 1     MIN =  6
XARG = 2500.     IDEG = 2     MIN =  5
XARG = 2500.     IDEG = 3     MIN =  4
XARG = 2500.     IDEG = 3     MIN =  5
XARG = 2500.     IDEG = 4     MIN =  4
XARG = 3300.     IDEG = 1     MIN =  7
XARG = 3300.     IDEG = 2     MIN =  6
XARG = 3300.     IDEG = 2     MIN =  7
XARG = 3300.     IDEG = 3     MIN =  6
XARG = 3300.     IDEG = 4     MIN =  5
XARG = 3300.     IDEG = 4     MIN =  6
XARG = 3300.     IDEG = 5     MIN =  5
XARG = 3300.     IDEG = 6     MIN =  4
XARG = 3300.     IDEG = 6     MIN =  5
XARG = 3300.     IDEG = 7     MIN =  4
XARG = 3300.     IDEG = 8     MIN =  3
XARG = 3300.     IDEG = 8     MIN =  4
XARG = 3300.     IDEG = 8     MIN =  5
XARG = 5300.     IDEG = 1     MIN = 11
XARG = 5300.     IDEG = 2     MIN = 10
XARG = 5300.     IDEG = 2     MIN = 11
XARG = 5300.     IDEG = 3     MIN = 10
XARG = 5300.     IDEG = 4     MIN =  9
XARG = 5900.     IDEG = 1     MIN = 12
XARG = 5900.     IDEG = 2     MIN = 11
XARG = 5900.     IDEG = 3     MIN = 10
XARG = 5900.     IDEG = 4     MIN =  9

Example 1.2  Lagrangian Interpolation
Computer Output

THE SAMPLE FUNCTIONAL VALUES ARE
   I      X(I)         Y(I)
   1       0.0        32.0000
   2     500.0000    176.0000
   3    1000.0000    296.4000
   4    1500.0000    405.7000
   5    2000.0000    509.0000
   6    2500.0000    608.4000
   7    3000.0000    704.7000
   8    3500.0000    799.0000
   9    4000.0000    891.9000
  10    4500.0000    983.0000
  11    5000.0000   1072.6000
  12    5500.0000   1160.8000
  13    6000.0000   1247.5000
THE DATA AND RESULTS ARE

      XARG    IDEG   MIN    YINTER
  300.0000      1     1    118.4000
[Most of the 38 result rows are illegible in this reproduction. Among the recoverable entries: the results for x̄ = 2500 all equal 608.4000, since x̄ coincides with base point x6; two of the results for x̄ = 3300 are 761.4518 and 761.4547 (observed value 761.4); results for x̄ = 5300 range from 1125.5200 for IDEG = 1 to about 1128.690 (observed value 1128.7); and results for x̄ = 5900 range from 1230.1600 for IDEG = 1 to 1230.2915 (observed value 1230.3).]
Discussion of Results

All computations have been carried out using double-precision arithmetic (8-byte REAL operands); for the test data used, however, single-precision arithmetic would have yielded results of comparable accuracy. Since the true function f(x), for which yi = f(xi), is unknown, it is not possible to determine an upper bound for the interpolation error from (1.39b). However, comparison of the interpolant values with known functional values from Table 1.2.1 for the arguments used as test data shows that interpolation of degree 2 or more, when the argument is well-centered with respect to the base points, generally yields results comparable in accuracy to the experimentally measured values. The results for x̄ = 300 are less satisfactory than for the other arguments, possibly because the argument is near the beginning of the table, where the function appears to have considerable curvature, and because centering of the argument among the base points is not practicable for large d.

While the base points used as test data were ordered in ascending sequence and equally spaced, the program is not limited with respect to either ordering or spacing of the base points. For greater accuracy, however, one would normally order the base points in either ascending or descending sequence and attempt to center the interpolation argument among those base points used to determine the interpolating polynomial. This tends to keep the polynomial factor

    Π (x̄ - xi) ,    i = min, ..., min + d,        (1.2.5)

and hence the error term comparable to (1.39b), as small as possible. This practice will usually lead to more satisfactory low-order interpolant values as well. The program does not test to insure that the required base points are in fact available, although a check to insure that min ≥ 1 and min + d ≤ n could easily be incorporated.
[A portion of the text, introducing central differences, is not reproduced here.]

The differences of Table 1.8, with subscripts adjusted as shown in Table 1.10, are illustrated in Table 1.11.

Table 1.11  Central Difference Table  [entries illegible in this reproduction]

Example.  Use the Gauss forward formula of (1.65) and the central differences of Table 1.11 to compute the interpolant value for interpolation argument x̄ = 2.5 with d = 3. Following the zigzag path across the table,

    α = (x̄ - x0)/h = (2.5 - 2)/1 = 0.5,

    p3(2.5) = 9 + (0.5)(16) + (0.5)(-0.5)(8)/2! + (0.5)(-0.5)(1.5)(6)/3!
            = 9 + 8 - 1 - 0.375 = 15.625.
Evaluation of the generating function

    f(x) = p3(x) = x³ - 2x² + 7x - 5

for x = 2.5 yields the same value, as expected.
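The zigzag sum above is simple enough to check by direct arithmetic. The snippet below reproduces the four Gauss forward terms quoted in the example; the cubic x³ - 2x² + 7x - 5 is the generating polynomial as reconstructed from the tabulated differences (the original printing is partly illegible), and the final comparison confirms the two agree.

```python
from math import factorial

alpha = (2.5 - 2.0) / 1.0              # alpha = (xbar - x0)/h = 0.5
terms = [
    9.0,                                         # f0
    alpha * 16.0,                                # alpha * (first difference)
    alpha * (alpha - 1.0) * 8.0 / factorial(2),  # second-difference term
    (alpha + 1.0) * alpha * (alpha - 1.0) * 6.0 / factorial(3),
]
p3 = sum(terms)                        # 9 + 8 - 1 - 0.375 = 15.625

generating = lambda x: x**3 - 2 * x**2 + 7 * x - 5
```

Evaluating generating(2.5) returns 15.625, matching the interpolant exactly, as a degree-3 formula must for a cubic.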
1.9  Concluding Remarks on Polynomial Interpolation

Approximating polynomials which use information about derivatives as well as functional values may also be constructed. For example, a third-degree polynomial could be found which reproduces functional values f(x0) and f(x1) and derivative values f'(x0) and f'(x1) at x0 and x1, respectively. The simultaneous equations to be solved in this case would be (for p3(x) = Σ ai x^i, i = 0, ..., 3):

    a0 + a1 x0 + a2 x0² + a3 x0³ = f(x0)
    a0 + a1 x1 + a2 x1² + a3 x1³ = f(x1)
         a1 + 2 a2 x0 + 3 a3 x0² = f'(x0)
         a1 + 2 a2 x1 + 3 a3 x1² = f'(x1).
This system has the determinant

    | 1    x0    x0²    x0³  |
    | 1    x1    x1²    x1³  |
    | 0    1     2x0    3x0² |
    | 0    1     2x1    3x1² | .

Higher-order derivatives may be used as well, subject to the restriction that the determinant of the system of equations may not vanish. The logical limit to this process, when f(x0), f'(x0), f''(x0), ..., f(n)(x0) are employed, yields the nth-degree polynomial produced by Taylor's expansion, truncated after the term in x^n. The generation of appropriate interpolation formulas for these special cases is somewhat more tedious, but fundamentally no more difficult than cases for which only the f(xi) are specified.
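As a sketch, the 4 × 4 system above can be solved numerically; the helper below (a hypothetical name, not from the text) uses Gaussian elimination with partial pivoting, and the test data come from f(x) = x³, for which the recovered coefficients must be (0, 0, 0, 1).

```python
def hermite_cubic(x0, x1, f0, f1, df0, df1):
    """Solve for a0..a3 in p(x) = a0 + a1*x + a2*x^2 + a3*x^3 with
    p(x0) = f0, p(x1) = f1, p'(x0) = df0, p'(x1) = df1 --
    the simultaneous equations given in the text."""
    A = [[1.0, x0, x0 ** 2, x0 ** 3],
         [1.0, x1, x1 ** 2, x1 ** 3],
         [0.0, 1.0, 2 * x0, 3 * x0 ** 2],
         [0.0, 1.0, 2 * x1, 3 * x1 ** 2]]
    b = [f0, f1, df0, df1]
    n = 4
    # Forward elimination with partial pivoting
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= m * A[k][c]
            b[r] -= m * b[k]
    # Back substitution
    a = [0.0] * n
    for k in range(n - 1, -1, -1):
        a[k] = (b[k] - sum(A[k][c] * a[c] for c in range(k + 1, n))) / A[k][k]
    return a
```

The determinant condition in the text is exactly the requirement that the elimination never encounters a zero pivot; it fails, for instance, when x0 = x1.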
Unfortunately, there are no simple ground rules for deciding what degree of interpolation will yield best results. When it is possible to evaluate higher-order derivatives, then, of course, an error bound can be computed using (1.39). In most situations, however, it is not possible to compute such a bound, and the error estimate of (1.33) is the only information available. As the degree of the interpolating polynomial increases, the interval containing the points x0, x1, ..., xn also increases in size, tending, for a given x, to increase the magnitude of the polynomial term Π (x - xi), i = 0, ..., n, in the error (1.39) or error estimate (1.33). And, of course, the derivatives and divided differences do not necessarily become smaller as n increases; in fact, for many functions [13] the derivatives at first tend to decrease in magnitude with increasing n and then eventually increase without bound as n becomes larger and larger. Therefore the error may well increase rather than decrease as additional terms are retained in the approximation, that is, as the degree of the interpolating polynomial is increased.

One final word of caution. The functional values f(xi) are usually known to a few significant figures at best. Successive differencing operations on these data, which are normally of comparable magnitude, inevitably lead to loss of significance in the computed results; in some cases, calculated high-order differences may be completely meaningless.

On the reassuring side, low-degree interpolating polynomials usually have very good convergence properties; that is, most of the functional value can be represented by low-order terms. In practice, we can almost always achieve the desired degree of accuracy with low-degree polynomial approximations, provided that base-point functional values are available on the interval of interest.
1.10  Chebyshev Polynomials

The only approximating functions employed thus far have been the polynomials, that is, linear combinations of the monomials 1, x, x², ..., x^n. An examination of the monomials on the interval [-1,1] shows that each achieves its maximum magnitude (1) at x = ±1 and its minimum magnitude (0) at x = 0. If a function f(x) is approximated by a polynomial

    pn(x) = a0 + a1 x + a2 x² + ... + an x^n,

where pn(x) is presumably a good approximation, the dropping of high-order terms or modification of the coefficients a0, ..., an will produce little error for small x (near zero), but probably substantial error near the ends of the interval (x near ±1).

Unfortunately, it is in general true that polynomial approximations (for example, those following from Taylor's series expansions) for arbitrary functions f(x) exhibit this same uneven error distribution over arbitrary intervals a ≤ x ≤ b.

[A portion of the text, defining the Chebyshev polynomials Tn(x), the monic polynomials φn(x) = Tn(x)/2^(n-1), and the ordinate differences D(xi) between φn(x) and a proposed competing monic polynomial Sn(x), is not reproduced here. The argument resumes mid-proof:] at the n + 1 extreme points of φn(x), either D(x0) > 0, D(x1) < 0, etc., or, if
D(x0) < 0, then D(x1) > 0, D(x2) < 0, etc. Thus D(x) must change sign n times, or equivalently, have n roots in the interval [-1,1]. But D(x) is a polynomial of degree n - 1, because both φn(x) and Sn(x) have leading coefficient unity. Since an (n - 1)th-degree polynomial has only n - 1 roots, there is no such polynomial Sn(x). The proposition that φn(x) is the monic polynomial of degree n that deviates least from zero on [-1,1] is thus proved by contradiction.

Consider the illustration of the proof for φ2(x) = T2(x)/2 shown in Fig. 1.11. The solid curve is φ2(x), that is, x² - 1/2, which has three extreme values, at x0 = -1, x1 = 0, and x2 = 1. The dotted curve shows a proposed S2(x) which has a smaller maximum magnitude on the interval than φ2(x). The differences in the ordinates of φ2(x) and S2(x) at x0, x1, and x2 are shown as D(x0), D(x1), and D(x2). As indicated by the direction of the arrows, D(x) must change sign twice on the interval, an impossibility, since φ2(x) - S2(x) is only a first-degree polynomial.
1.11  Minimizing the Maximum Error

Since the nth-degree polynomial φn(x) = Tn(x)/2^(n-1) has the smallest maximum magnitude of all possible nth-degree monic polynomials on the interval [-1,1], any error that can be expressed as an nth-degree polynomial can be minimized for the interval [-1,1] by equating it with φn(x). For example, the error term for the interpolating polynomial has been shown to be of the form (see (1.39b))

    Rn(x) = [ Π (x - xi), i = 0, ..., n ] f(n+1)(ξ)/(n + 1)! .

We can do very little about f(n+1)(ξ). The only effective way of minimizing Rn(x) is to minimize the maximum magnitude of the (n + 1)th-degree polynomial Π (x - xi); treat f(n+1)(ξ) as though it were constant. Now equate Π (x - xi) with φn+1(x), and notice that the (x - xi) terms are simply the n + 1 factors of φn+1(x); the xi are therefore roots of φn+1(x), or equivalently, the
Figure 1.11  φ2(x) = T2(x)/2, the second-degree monic polynomial that deviates least from zero on [-1,1]. [The figure, showing φ2(x), the proposed S2(x), and the ordinate differences D(x0), D(x1), D(x2), is not reproduced here.]
roots of the corresponding Chebyshev polynomial Tn+1(x), given by

    xi = cos[ (2i + 1)π / (2n + 2) ] ,    i = 0, 1, ..., n.
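A quick sketch of these interpolation nodes — a re-implementation, with the n + 1 points indexed from zero as in the formula:

```python
from math import cos, pi

def chebyshev_nodes(n):
    """Roots of T_(n+1)(x): x_i = cos[(2i + 1)*pi/(2n + 2)], i = 0..n.
    Using these as interpolation base points on [-1, 1] minimizes the
    maximum magnitude of the polynomial factor in the error term."""
    return [cos((2 * i + 1) * pi / (2 * n + 2)) for i in range(n + 1)]
```

For n = 1 the two nodes are ±√2/2, the roots of T2(x) = 2x² - 1.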
For an arbitrary interval a ≤ z ≤ b, the corresponding points follow from the change of variable z = [(b - a)x + (b + a)]/2, which maps [-1,1] onto [a,b]. [The remainder of this discussion, and the opening pages of Example 1.3, are not reproduced here.]

Example 1.3  Chebyshev Economization

Program Listing (Main Program, continued)

      DO 7  I=1,MP1
    7 CSTAR(I) = CSTAR(I) + B(J)*T(J,I)
      WRITE (6,203)  MP1, (CSTAR(I), I=1,MP1)
C
C     ..... TRANSFORM ECONOMIZED POLYNOMIAL IN X ON INTERVAL .....
C     ..... (-1,1) TO A POLYNOMIAL IN Z ON INTERVAL (AL,AR) .....
      CALL TRANS( M, CSTAR, C, ONEM, ONE, AL, AR )
      WRITE (6,204)  AL, AR, M, EMAX, MP1, (C(I), I=1,MP1)
      GO TO 1
C
C     ..... FORMATS FOR THE INPUT AND OUTPUT STATEMENTS .....
  100 FORMAT ( 4X, I2, 12X, E12.5, 2(10X, F10.5) )
  101 FORMAT ( 14X, 4E14.6 )
  200 FORMAT ( 6H1N   =, I5/ 6H EPS =, E15.5/ 6H AL  =, F10.4/
     1   6H AR  =, F10.4/ 1H0/ 27H THE COEFFICIENTS A(1)...A(, I1,
     2   5H) ARE/ 1H / (1H , 1P5E16.6) )
  201 FORMAT ( 1H0/ 1H0/ 35H THE COEFFICIENTS ASTAR(1)...ASTAR(, I1,
     1   5H) ARE/ 1H / (1H , 1P5E16.6) )
  202 FORMAT ( 1H0/ 1H0/ 27H THE COEFFICIENTS B(1)...B(, I2, 5H) ARE/
     1   1H / (1H , 1P5E16.6) )
  203 FORMAT ( 1H0/ 1H0/ 35H THE COEFFICIENTS CSTAR(1)...CSTAR(, I1,
     1   5H) ARE/ 1H / (1H , 1P5E16.6) )
  204 FORMAT ( 1H0/ 1H0/ 45H THE ECONOMIZED POLYNOMIAL ON THE INTERVAL ( AL =,
     1   F10.4, 2H, , 6H AR = , F10.4, 8H ) IS OF/ 11H DEGREE M =, I2, 2H. ,
     2   55H THE MAXIMUM ERROR ON THIS INTERVAL IS NO LARGER THAN , 1PE15.7,
     3   2H. / 27H0THE COEFFICIENTS C(1)...C(, I1, 5H) ARE/ 1H /
     4   (1H , 1P5E16.6) )
C
      END
Subroutine TRANS

      SUBROUTINE TRANS( N, COEFFI, COEFFT, ENDLI, ENDRI, ENDLT, ENDRT )
C
C     TRANS CONVERTS AN N-TH DEGREE POLYNOMIAL IN ONE VARIABLE
C     (SAY Z) ON THE INTERVAL (ENDLI,ENDRI) HAVING COEFFICIENTS
C     COEFFI(1)...COEFFI(N+1) INTO AN N-TH DEGREE POLYNOMIAL
C     IN A SECOND VARIABLE (SAY X) ON THE INTERVAL (ENDLT,ENDRT)
C     WITH COEFFICIENTS COEFFT(1)...COEFFT(N+1), WHERE THE TWO
C     VARIABLES X AND Z ARE RELATED BY THE TRANSFORMATION
C     Z = ((ENDRI-ENDLI)*X + (ENDLI*ENDRT-ENDRI*ENDLT))/(ENDRT-ENDLT)
C     X = ((ENDRT-ENDLT)*Z + (ENDRI*ENDLT-ENDLI*ENDRT))/(ENDRI-ENDLI)
C
      IMPLICIT REAL*8(A-H,O-Z)
      REAL*8  COEFFI, COEFFT, ENDLI, ENDRI, ENDLT, ENDRT
      DIMENSION COEFFI(10), COEFFT(10)
C
C     ..... COMPUTE CONSTANT PARAMETERS .....
      CON1 = (ENDRI-ENDLI)/(ENDRT-ENDLT)
      CON2 = (ENDLI*ENDRT-ENDRI*ENDLT)/(ENDRT-ENDLT)
      NP1 = N + 1

Example 1.3  Chebyshev Economization
Program Listing (Continued)

C
      DO 4  I=1,NP1
C     ..... CHECK FOR CON2=0 TO AVOID COMPUTING 0.0**0 .....
      IF ( CON2.NE.0.0 )  GO TO 2
      COEFFT(I) = COEFFI(I)
      GO TO 4
    2 COEFFT(I) = 0.0
      DO 3  J=I,NP1
      BINOM = NOMIAL(J-1,I-1)
    3 COEFFT(I) = COEFFT(I) + COEFFI(J)*BINOM*CON2**(J-I)
    4 COEFFT(I) = COEFFT(I)*CON1**(I-1)
      RETURN
C
      END
Function NOMIAL

      FUNCTION NOMIAL( K, L )
C
C     NOMIAL COMPUTES THE BINOMIAL COEFFICIENT (K,L).
C
      NOM = 1
      IF ( K.EQ.L .OR. L.EQ.0 )  GO TO 4
      DO 3  ICOUNT=1,L
    3 NOM = NOM*(K-ICOUNT+1)/ICOUNT
    4 NOMIAL = NOM
      RETURN
C
      END
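The effect of TRANS can be sketched in a few lines of Python — a re-implementation with 0-based coefficient lists, applying the same substitution z = CON1·x + CON2 and collecting powers of x with binomial coefficients. (Python evaluates 0.0**0 as 1, so the special CON2 = 0 branch of the FORTRAN is unnecessary here.)

```python
from math import comb

def trans(coeff, li, ri, lt, rt):
    """Re-express p(z) = sum(coeff[j] * z**j), z on [li, ri], as a
    polynomial in x on [lt, rt], where z = con1*x + con2 -- the same
    linear map documented in the comments of subroutine TRANS."""
    con1 = (ri - li) / (rt - lt)
    con2 = (li * rt - ri * lt) / (rt - lt)
    n = len(coeff) - 1
    out = []
    for i in range(n + 1):
        # coefficient of x**i after expanding z**j = (con1*x + con2)**j
        s = sum(comb(j, i) * coeff[j] * con2 ** (j - i)
                for j in range(i, n + 1))
        out.append(s * con1 ** i)
    return out
```

For p(z) = z³ on [1,3] mapped onto x in [-1,1], the substitution is z = x + 2, and the transformed coefficients are (8, 12, 6, 1) — the ASTAR values printed for the second data set below.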
Data
N =  3     EPS = 0.000000E 00     AL =  1.00000     AR =  3.00000
A(1)...A(4) =  0.000000E 00   0.000000E 00   0.000000E 00   1.000000E 00
N =  3     EPS = 3.260000E 00     AL =  1.00000     AR =  3.00000
A(1)...A(4) =  0.000000E 00   0.000000E 00   0.000000E 00   1.000000E 00
N =  3     EPS = 0.000000E 00     AL = -1.00000     AR =  1.00000
A(1)...A(4) =  2.000000E 00   3.000000E 00   1.000000E 00   0.500000E 00
N =  3     EPS = 0.400000E 00     AL = -1.00000     AR =  1.00000
A(1)...A(4) =  2.000000E 00   3.000000E 00   1.000000E 00   0.500000E 00
N =  3     EPS = 0.700000E 00     AL = -1.00000     AR =  1.00000
A(1)...A(4) =  2.000000E 00   3.000000E 00   1.000000E 00   0.500000E 00
N =  8     EPS = 0.000000E 00     AL =  0.00000     AR =  1.57080
A(1)...A(4) =  1.000000E 00   0.000000E 00  -0.500000E 00   0.000000E 00
A(5)...A(8) =  4.166667E-2    0.000000E 00  -1.388888E-3    0.000000E 00
A(9)        =  2.480159E-5
N =  8     EPS = 5.000000E-5      AL =  0.00000     AR =  1.57080
A(1)...A(4) =  1.000000E 00   0.000000E 00  -0.500000E 00   0.000000E 00
A(5)...A(8) =  4.166667E-2    0.000000E 00  -1.388888E-3    0.000000E 00
A(9)        =  2.480159E-5
N =  8     EPS = 0.000000E 00     AL = -1.00000     AR =  1.00000
A(1)...A(4) =  1.000000E 00   0.000000E 00  -0.500000E 00   0.000000E 00
A(5)...A(8) =  4.166667E-2    0.000000E 00  -1.388888E-3    0.000000E 00
A(9)        =  2.480159E-5
N =  8     EPS = 5.000000E-5      AL = -1.00000     AR =  1.00000
A(1)...A(4) =  1.000000E 00   0.000000E 00  -0.500000E 00   0.000000E 00
A(5)...A(8) =  4.166667E-2    0.000000E 00  -1.388888E-3    0.000000E 00
A(9)        =  2.480159E-5
N =  2     EPS = 0.100000E 00     AL = -1.00000     AR =  1.00000
A(1)...A(3) =  8.250000E 00   0.390000E 00  -0.086300E 00
N =  2     EPS = 0.000000E 00     AL =  1000.00000  AR =  2000.00000
A(1)...A(3) =  6.300000E 00   1.820000E-3   -3.450000E-7
N =  2     EPS = 0.050000E 00     AL =  1000.00000  AR =  2000.00000
A(1)...A(3) =  6.300000E 00   1.820000E-3   -3.450000E-7
Computer Output

Results for the 2nd Data Set

N   =      3
EPS =     0.32600E 01
AL  =     1.0000
AR  =     3.0000

THE COEFFICIENTS A(1)...A(4) ARE
 0.0             0.0             0.0             1.000000D 00

THE COEFFICIENTS ASTAR(1)...ASTAR(4) ARE
 8.000000D 00    1.200000D 01    6.000000D 00    1.000000D 00

THE COEFFICIENTS B(1)...B(4) ARE
 1.100000D 01    1.275000D 01    3.000000D 00    2.500000D-01

THE COEFFICIENTS CSTAR(1)...CSTAR(2) ARE
 1.100000D 01    1.275000D 01

THE ECONOMIZED POLYNOMIAL ON THE INTERVAL ( AL = 1.0000,  AR = 3.0000 ) IS OF
DEGREE M = 1.  THE MAXIMUM ERROR ON THIS INTERVAL IS NO LARGER THAN 3.2500000D 00.

THE COEFFICIENTS C(1)...C(2) ARE
-1.450000D 01    1.275000D 01
Results for the 7th Data Set

N   =      8
EPS =     0.50000D-04
AL  =     0.0
AR  =     1.5708

THE COEFFICIENTS A(1)...A(9) ARE
 1.000000D 00    0.0            -5.000000D-01    0.0             4.166667D-02
 0.0            -1.388888D-03    0.0             2.480159D-05

[The printed ASTAR(1)...ASTAR(9) and B(1)...B(9) coefficient arrays are largely illegible in this reproduction.]

THE COEFFICIENTS CSTAR(1)...CSTAR(6) ARE
 7.070987D-01   -5.553595D-01   -2.179653D-01    5.707401D-02    1.088407D-02
-1.570613D-03

THE ECONOMIZED POLYNOMIAL ON THE INTERVAL ( AL = 0.0,  AR = 1.5708 ) IS OF
DEGREE M = 5.  THE MAXIMUM ERROR ON THIS INTERVAL IS NO LARGER THAN 7.2978080D-06.

THE COEFFICIENTS C(1)...C(6) ARE
 1.000007D 00   -3.383638D-04   -4.974232D-01   -7.241249D-03    5.100312D-02
-5.703891D-03
Results for the 8th Data Set

N   =      8
EPS =     0.0
AL  =    -1.0000
AR  =     1.0000

THE COEFFICIENTS A(1)...A(9) ARE
 1.000000D 00    0.0            -5.000000D-01    0.0             4.166667D-02
 0.0            -1.388888D-03    0.0             2.480159D-05

THE COEFFICIENTS ASTAR(1)...ASTAR(9) ARE
 1.000000D 00    0.0            -5.000000D-01    0.0             4.166667D-02
 0.0            -1.388888D-03    0.0             2.480159D-05

THE COEFFICIENTS B(1)...B(9) ARE
 7.651978D-01    0.0            -2.298069D-01    0.0             4.953343D-03
 0.0            -4.185265D-05    0.0             1.937628D-07

THE COEFFICIENTS CSTAR(1)...CSTAR(9) ARE
 1.000000D 00    0.0            -5.000000D-01    0.0             4.166667D-02
 0.0            -1.388888D-03    0.0             2.480159D-05

THE ECONOMIZED POLYNOMIAL ON THE INTERVAL ( AL = -1.0000,  AR = 1.0000 ) IS OF
DEGREE M = 8.  THE MAXIMUM ERROR ON THIS INTERVAL IS NO LARGER THAN 0.0.

THE COEFFICIENTS C(1)...C(9) ARE
 1.000000D 00    0.0            -5.000000D-01    0.0             4.166667D-02
 0.0            -1.388888D-03    0.0             2.480159D-05
Computer Output (Continued)

Results for the 9th Data Set

N   =      8
EPS =     0.50000D-04
AL  =    -1.0000
AR  =     1.0000

THE COEFFICIENTS A(1)...A(9) ARE
 1.000000D 00    0.0            -5.000000D-01    0.0             4.166667D-02
 0.0            -1.388888D-03    0.0             2.480159D-05

THE COEFFICIENTS ASTAR(1)...ASTAR(9) ARE
 1.000000D 00    0.0            -5.000000D-01    0.0             4.166667D-02
 0.0            -1.388888D-03    0.0             2.480159D-05

THE COEFFICIENTS B(1)...B(9) ARE
 7.651978D-01    0.0            -2.298069D-01    0.0             4.953343D-03
 0.0            -4.185265D-05    0.0             1.937628D-07

THE COEFFICIENTS CSTAR(1)...CSTAR(5) ARE
 9.999580D-01    0.0            -4.992405D-01    0.0             3.962674D-02

THE ECONOMIZED POLYNOMIAL ON THE INTERVAL ( AL = -1.0000,  AR = 1.0000 ) IS OF
DEGREE M = 4.  THE MAXIMUM ERROR ON THIS INTERVAL IS NO LARGER THAN 4.2046413D-05.

THE COEFFICIENTS C(1)...C(5) ARE
 9.999580D-01    0.0            -4.992405D-01    0.0             3.962674D-02
Results for the 12th Data Set

N   =      2
EPS =     0.50000D-01
AL  =     1000.0000
AR  =     2000.0000

THE COEFFICIENTS A(1)...A(3) ARE
 6.300000D 00    1.820000D-03   -3.450000D-07

THE COEFFICIENTS ASTAR(1)...ASTAR(3) ARE
 8.253750D 00    3.925000D-01   -8.625000D-02

THE COEFFICIENTS B(1)...B(3) ARE
 8.210625D 00    3.925000D-01   -4.312500D-02

THE COEFFICIENTS CSTAR(1)...CSTAR(2) ARE
 8.210625D 00    3.925000D-01

THE ECONOMIZED POLYNOMIAL ON THE INTERVAL ( AL = 1000.0000,  AR = 2000.0000 ) IS OF
DEGREE M = 1.  THE MAXIMUM ERROR ON THIS INTERVAL IS NO LARGER THAN 4.3125000D-02.

THE COEFFICIENTS C(1)...C(2) ARE
 7.033125D 00    7.850000D-04
Discussion of Results

Five different polynomials were used in the twelve test data sets, as follows:

Data Sets    Polynomial                                     Interval [L,R]    Maximum Allowable Error, ε
 1, 2        p3(z) = z³                                      [1,3]            0;  3.26
 3, 4, 5     p3(z) = 2 + 3z + z² + 0.5z³                    [-1,1]           0;  0.4;  0.7
 6, 7        truncated cosine series (degree 8)              [0,π/2]          0;  5 × 10⁻⁵
 8, 9        truncated cosine series (degree 8)              [-1,1]           0;  5 × 10⁻⁵
 10          p2(z) = 8.25 + 0.39z - 0.0863z²                [-1,1]           0.1
 11, 12      p2(z) = 6.3 + 0.00182z - 3.45 × 10⁻⁷z²         [1000,2000]      0;  0.05
Data sets 1, 3, 6, 8, and 11 allow no error to be introduced by the economization process. Hence the economized polynomial for these cases must be equivalent to the original polynomial; significant discrepancies could be accounted for only as errors in one or more elements of the X or T matrices, assuming that the executable portion of the program is free of error. Results for the 8th data set, included in the computer output, illustrate these cases. Results for data set 2, shown in the computer output, are those outlined in (1.88) to (1.92). Results for data sets 4 and 5, not shown, are, respectively:

    m = 2,   Emax = 0.125,   p2(z) = 2 + 3.375z + z²,

and

    m = 1,   Emax = 0.625,   p1(z) = 2.5 + 3.375z.

The starting polynomial for data sets 6-9 is the power series for cos z, expanded about z0 = 0 and truncated after the term in z⁸; it has been used as an example in Section 1.12. The results for data set 9, shown in the computer output, correspond to those of (1.82) to (1.87).

The results for data set 7, shown in the computer output, are similar to those for data set 9, except that the interval is 0 ≤ z ≤ π/2. The economized polynomial is

    1.000007 - 3.383638 × 10⁻⁴ z - 0.4974232 z² - 7.241249 × 10⁻³ z³
        + 5.100312 × 10⁻² z⁴ - 5.703891 × 10⁻³ z⁵.        (1.3.16)

In this case, Emax = 7.298 × 10⁻⁶. The total possible error in the approximation is given by Emax plus the maximum possible error introduced in truncating the power series expansion after the term in z⁸, that is, by

    Emax + | z¹⁰ cos ξ | / 10! ,    ξ in [0, π/2].        (1.3.17)

Thus, the maximum possible magnitude of the error in (1.3.16) for the interval [0, π/2] is

    7.298 × 10⁻⁶ + (π/2)¹⁰/10! ≈ 3.25 × 10⁻⁵.        (1.3.18)

By taking advantage of the fact that cos z is periodic with period 2π, that cos(π/2 + α) = -cos(π/2 - α), and that cos(π + β) = cos(π - β), (1.3.16) may, after suitable adjustment of the argument, be used to find the cosine of any angle within the accuracy of (1.3.18). In fact, since cos z = sin(z + π/2), (1.3.16), with an appropriate transformation of variable, could be used to calculate the sine of any angle as well, with the same bound for the error.

Results of the economization process for the 10th data set, not shown, are:

    m = 1,   Emax = 0.04315,   p1(z) = 8.206850 + 0.39z.

Results for the 12th data set (see the computer output) show the first-order minimax polynomial approximation to a second-degree polynomial representation of the molar heat capacity for gaseous nitrogen at low pressures, in the temperature range 1000-2000°K. See Problem 1.46 at the end of the chapter for more details.

Double-precision arithmetic has been used for all calculations. In order to generate accurate coefficients for the economized polynomial, particularly when only small errors, ε, are allowed, it is important to carry as many digits as possible throughout the calculations; double-precision arithmetic should be used, if available. As written, the program can handle only starting
polynomials of degree nine or less. The matrices T and X could be expanded to allow higher-degree starting polynomials, although the storage requirements for T and X could become prohibitive for large n if the subscription scheme outlined earlier were used. An alternative approach would be to pack the coefficients from Tables 1.12 and 1.13 using a more efficient assignment of memory.

Yet another, more elegant, approach to the economization process is to use the recursion relation of (1.72) to generate the needed coefficients. This avoids the need for saving tabulated information. In addition, since truncation always starts with the highest-order term of (1.34), an*x^n can be expanded to yield bn directly. If the term bnTn can be dropped without exceeding the maximum allowable error, the ai*, i = 0, 1, ..., n - 1, can be modified appropriately, using the recursion relation. Next, the new leading coefficient times x^(n-1) can be expanded to yield bn-1 directly. If bn-1 Tn-1 can be dropped, the coefficients for i = 0, 1, ..., n - 2 can be modified appropriately, again using the recursion relation. This process of expanding only the highest-order untruncated power of x in terms of the Chebyshev polynomials, followed by proper adjustment of the coefficients of lower powers of x, leads directly to the economized polynomial, without ever evaluating all of b0, b1, ..., bn. Arden [1] and Hamming [2] suggest some other approaches to the economization process which use only the recursion relation of (1.72).
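The recursion-based scheme just described can be sketched as follows. This is a minimal Python re-implementation valid on [-1,1] only (where |Tn(x)| ≤ 1), not the book's table-driven program: each pass subtracts bn·Tn with bn = an/2^(n-1), which cancels the leading term and perturbs the lower coefficients, exactly the "expand only the highest-order power" strategy.

```python
def cheb_coeffs(n):
    """Power-basis coefficients of T_n(x), built from the recursion
    T_n = 2x*T_(n-1) - T_(n-2)."""
    if n == 0:
        return [1.0]
    if n == 1:
        return [0.0, 1.0]
    tm2, tm1 = [1.0], [0.0, 1.0]
    for _ in range(2, n + 1):
        t = [0.0] + [2.0 * c for c in tm1]   # 2x * T_(n-1)
        for i, c in enumerate(tm2):
            t[i] -= c                         # - T_(n-2)
        tm2, tm1 = tm1, t
    return tm1

def economize(a, eps):
    """Strip leading terms of p(x) = sum(a[i]*x**i) on [-1, 1] by
    subtracting b_n*T_n, b_n = a_n/2**(n-1), while the accumulated
    error stays within eps.  Returns (coefficients, max_error)."""
    a = list(a)
    err = 0.0
    while len(a) > 1:
        n = len(a) - 1
        b = a[-1] / 2.0 ** (n - 1)
        if err + abs(b) > eps:
            break
        t = cheb_coeffs(n)
        a = [c - b * tc for c, tc in zip(a, t)][:-1]  # leading term cancels
        err += abs(b)
    return a, err
```

Applied to the polynomial of data sets 3-5, 2 + 3z + z² + 0.5z³, the sketch reproduces the printed results: with ε = 0.4 it returns 2 + 3.375z + z² with Emax = 0.125, and with ε = 0.7 it returns 2.5 + 3.375z with Emax = 0.625.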
Problems

1.1  [The statement of this problem is largely illegible in this reproduction; it concludes:] ... how large must n be to yield an approximation for e that is accurate within the stated tolerance?
1.2  For small values of x, the approximations

    e^x ≈ 1 + x,        sin x ≈ x

are sometimes employed. In each case, use the error term from Taylor's expansion to estimate how large a value of x (to the nearest 0.001) may be employed with the assurance that the error in the approximation is smaller than 0.01. Check your conclusions against tables of exponentials and sines.
1.3  Let M be the maximum magnitude of f''(x) on the interval [x0, x1]. Show that the error for linear interpolation for f(x), using the functional values at x0 and x1, is bounded by

    M (x1 - x0)²/8

for x0 ≤ x ≤ x1.
1.4  Use the algorithm of (1.12) to evaluate the polynomial

    p3(x) = [coefficients illegible in this reproduction]

and each of its derivatives at x = 2.5.
1.5  Write a function, named POLY, that implements the algorithm of (1.12) to evaluate an nth-degree polynomial pn(x) = Σ ai x^i, i = 0, ..., n, and each of its derivatives of order 1 through n at x = x̄. The function should have the dummy argument list

    (N, A, XBAR, DVAL)

where N is the degree of the polynomial, n, A is a vector (one-dimensional array) containing the coefficients a0, a1, ..., an in elements A(1), A(2), ..., A(N + 1), XBAR is the independent variable value, x̄, and DVAL is a vector containing p'(x̄), p''(x̄), ..., p(n)(x̄) in elements DVAL(1), DVAL(2), ..., DVAL(N) upon return from the function. The value of POLY should be pn(x̄).

Write a short main program that reads values for n, a0, a1, ..., an, and x̄, calls on POLY, prints the values returned for pn(x̄), p'(x̄), ..., p(n)(x̄), and then reads another data set. Test POLY with several different polynomials, including that of Problem 1.4.
1.6  (a) Show that the nth divided difference of y = x^n is unity, no matter which base points x0, x1, ..., xn are chosen.
(b) Show that the nth divided difference of any polynomial pn(x) = Σ ai x^i, i = 0, ..., n, is given by an, regardless of the choice of base points.

1.7  Investigate the relation, if any, between the number of significant figures to which tabulated values of f(x) vs x are given, and the highest order for which finite divided differences are likely to be meaningful.
1.8  Consider the divided differences in Table P1.8. [The table is not legible in this reproduction.]
(a) What is the significance of the zero at the top of the seventh column?
(b) Without renumbering the abscissas, write down the divided-difference polynomial that uses the elements denoted by asterisks.
(c) What is the likely error bound for a fourth-degree interpolation with the given x̄?

[The remaining problems of Chapter 1, and the opening sections of Chapter 2 (Numerical Integration), are not reproduced here.]
2.3  Newton-Cotes Closed Integration Formulas

The simplest case of closed integration is shown schematically in Fig. 2.3. Here, the two base points x0 = a and x1 = b are used to determine a first-degree polynomial, p1(x) = p1(x0 + αh), or straight-line approximation of f(x). The appropriate form of (1.54) is given by

    f(x) = f(x0 + αh) = f(x0) + α Δf(x0) + R1(x0 + αh)
         = p1(x0 + αh) + R1(x0 + αh),        (2.4)

where

    R1(x0 + αh) = α(α - 1) h² f''(ξ)/2! ,    ξ in (x0, x1),        (2.5)

or, equivalently, in terms of divided differences,

    R1(x0 + αh) = h² α(α - 1) f[x, x0, x1].

Figure 2.3  The trapezoidal rule.  [The figure, showing f(x) between α = 0 and α = 1 and the shaded trapezoid beneath the chord, is not reproduced here.]
‘Using this polynomial approximation for f(x) and
transforming the integration variable from x to ,
(a = (x — xy)/h), we have
Jf) ax = [700 dx = [po ae
sh [pile + at) de, o6
ls
where the integral on the right, is given, from (2.4), by
‘
4 ['Ua) + aa/Geald
rfeveor+£ ssc),
= [reo +A ee) @7
From the definition of the first forward difference,
Δf(x₀) = f(x₀ + h) − f(x₀) = f(x₁) − f(x₀), (2.7) assumes the final form,

∫_{x₀}^{x₁} p₁(x) dx = (h/2)[f(x₀) + f(x₁)],   (2.8)

the familiar trapezoidal rule. The required area under the
solid curve of Fig. 2.3 is approximated by the area under
the dotted straight line (the shaded trapezoid).
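The rule (2.8) is easy to exercise directly; the sketch below (modern Python, not the book's FORTRAN) applies a single trapezoid and shows that the rule is exact for a straight line, its degree of precision being one, but only approximate for a curved integrand.

```python
import math

# Single-application trapezoidal rule (2.8): the integral of f over
# [x0, x1] is approximated by the trapezoid area (h/2)(f(x0) + f(x1)).
def trapezoid(f, x0, x1):
    h = x1 - x0
    return (h / 2.0) * (f(x0) + f(x1))

# For a straight line the rule is exact (degree of precision one):
print(trapezoid(lambda x: 3 * x + 1, 0.0, 2.0))   # 8.0, the exact integral

# For sin(x) on [0, pi/2] a single trapezoid underestimates the area 1:
approx = trapezoid(math.sin, 0.0, math.pi / 2)
print(approx, 1.0 - approx)   # approximation and its error
```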
The error involved in using the trapezoidal approxima-
tion is given by the integral of the remainder term (2.5),

E = ∫_{x₀}^{x₁} R₁(x) dx = h ∫_0^1 R₁(x₀ + αh) dα
  = (h³/2) ∫_0^1 α(α − 1) f″(ξ) dα,   ξ in (x₀, x₁).   (2.9)
If f(x) is a continuous function of x, then f″(ξ), or its
equivalent 2f[x, x₀, x₁] [see (2.5)], is a continuous, but
unknown, function of x, so direct evaluation of (2.9) is
impossible. Since ξ is simply a transformed value of
x, f″(ξ) is a continuous function of the integration
variable α; this property simplifies the estimation of (2.9).
The factor f″(ξ) can be taken outside the integral sign by
applying the integral mean-value theorem from the
calculus:

If two functions, g(x) and q(x), are continuous for
a ≤ x ≤ b, and q(x) does not change sign on the interval, then
∫_a^b g(x) q(x) dx = g(η) ∫_a^b q(x) dx for some η in (a, b).
like quantities, resulting from the use of Simpson's second
rule. Then

I* = I₁ + E₁ = I₂ + E₂.   (2.31)

In terms of the integration limits a and b, we can write

E₁/E₂ = [−(b − a)⁵ f⁽ⁱᵛ⁾(ξ₁)/2880] / [−(b − a)⁵ f⁽ⁱᵛ⁾(ξ₂)/6480]
      = (9/4) f⁽ⁱᵛ⁾(ξ₁)/f⁽ⁱᵛ⁾(ξ₂),   (2.32)

where ξ₁ and ξ₂ are different values of ξ in (a,b). If we
assume that f⁽ⁱᵛ⁾(ξ₁) approximately equals f⁽ⁱᵛ⁾(ξ₂), then
(2.32) reduces to

E₁ ≅ (9/4)E₂,   (2.33)

and (2.31) becomes

I* ≅ I₂ + (4/5)(I₂ − I₁).   (2.34)

The validity of (2.34) hinges completely upon the assump-
tion that f⁽ⁱᵛ⁾(ξ₁) and f⁽ⁱᵛ⁾(ξ₂) are equal.
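The value of a combination such as (2.34) can be tested numerically. The sketch below (Python, not code from the text) blends estimates from Simpson's rule and Simpson's second (three-eighths) rule as in (2.34); although each rule alone has degree of precision three, the blend integrates a quintic exactly.

```python
# Numerical check of the combination (2.34): I* ~ I2 + (4/5)(I2 - I1),
# where I1 comes from Simpson's rule and I2 from Simpson's second
# (three-eighths) rule, each applied once over [a, b].

def simpson(f, a, b):
    """Simpson's rule, single application, h = (b - a)/2."""
    h = (b - a) / 2.0
    return (h / 3.0) * (f(a) + 4.0 * f(a + h) + f(b))

def simpson_38(f, a, b):
    """Simpson's second (three-eighths) rule, h = (b - a)/3."""
    h = (b - a) / 3.0
    return (3.0 * h / 8.0) * (f(a) + 3.0 * f(a + h) + 3.0 * f(a + 2 * h) + f(b))

f = lambda x: x**5                       # exact integral on [0, 1] is 1/6
I1, I2 = simpson(f, 0.0, 1.0), simpson_38(f, 0.0, 1.0)
I_est = I2 + 0.8 * (I2 - I1)             # the combination (2.34)
print(I1, I2, I_est)                     # neither rule is exact alone,
                                         # but I_est recovers 1/6
```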
Example. Evaluate

I* = ∫_1^3 f(x) dx = ∫_1^3 (x³ − 2x² + 2x + 5) dx = 62/3   (2.35)

by using the trapezoidal rule (2.21a) and the one-point open
formula of (2.30a), each with degree of precision one. Then
estimate I* by using the technique outlined above. For this
polynomial function [see (2.23)], the estimates of I* computed
from (2.21a) and (2.30a) are, respectively, I₁ = 26 and I₂ = 18.
The ratio of error terms is

E₁/E₂ = [−(b − a)³ f″(ξ₁)/12] / [(b − a)³ f″(ξ₂)/24]
      = −2 f″(ξ₁)/f″(ξ₂),   ξ₁, ξ₂ in (a,b).   (2.36)
Assuming that f″(ξ₁) is equal to f″(ξ₂) leads to E₁ = −2E₂, or

E₂ = (I₁ − I₂)/3 = (26 − 18)/3 = 8/3.   (2.37)

Note that for this case, the open formula is apparently more
accurate than the closed formula of identical degree of pre-
cision (not the usual case). Substitution of (2.37) into (2.31)
leads to

I* ≅ I₂ + E₂ = (I₁ + 2I₂)/3 = (26 + 2×18)/3 = 62/3,   (2.38)
which is the true value of the integral in this case. Upon closer
examination, (2.38) is seen to reduce to

I* = (h/3)[f(x₀) + 4f(x₁) + f(x₂)]   (2.39)

[Simpson's rule (2.21b)]. Thus the expression of the error in
terms of two formulas with degree of precision one has
resulted in a compound formula with degree of precision three.
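The arithmetic of this example is easily verified; the sketch below (Python, not the book's code) computes I₁ and I₂ from the two first-degree formulas and shows that the combination (I₁ + 2I₂)/3 recovers the true value of the integral, 62/3.

```python
# Worked example: f(x) = x^3 - 2x^2 + 2x + 5 on [1, 3], true integral 62/3.
f = lambda x: x**3 - 2 * x**2 + 2 * x + 5
a, b = 1.0, 3.0
m = 0.5 * (a + b)

# Trapezoidal rule (closed, degree of precision one):
I1 = (b - a) / 2.0 * (f(a) + f(b))            # 26.0
# One-point open (midpoint) formula, also degree of precision one:
I2 = (b - a) * f(m)                           # 18.0

# Error combination: E2 = (I1 - I2)/3, so I* ~ I2 + E2 = (I1 + 2*I2)/3,
# which is Simpson's rule and, for a cubic, the exact integral.
I_est = (I1 + 2.0 * I2) / 3.0
print(I1, I2, I_est)                          # 26.0 18.0 20.666...
```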
2.6 Composite Integration Formulas

One way to reduce the error associated with a low-
order integration formula is to subdivide the interval of
integration [a,b] into smaller intervals and then to use the
formula separately on each subinterval. Repeated appli-
cation of a low-order formula is usually preferred to the
single application of a high-order formula, partly because
of the simplicity of the low-order formulas and partly
because of computational difficulties, already mentioned
in the previous section, associated with the high-order
formulas. Integration formulas resulting from interval
subdivision and repeated application of a low-order
formula are called composite integration formulas.
Although any of the Newton-Cotes or other simple
(one-application) formulas can be written in composite
form, the closed formulas are especially attractive since,
except for the base points x = a and x = b, base points
at the ends of each subinterval are also base points for
adjacent subintervals. Thus, although one might suspect
that n repeated applications of an m-point formula would
require nm functional evaluations, in fact only n(m − 1) + 1
such evaluations are needed, a considerable saving,
especially when m is small.
The simplest composite formula is generated by re-
peated application of the trapezoidal rule of (2.21a). For
n applications of the rule, each subinterval is of length
h = (b − a)/n. Let xᵢ = x₀ + ih, i = 0, 1, …, n, with x₀ = a
and xₙ = b. Then

∫_a^b f(x) dx = ∫_{x₀}^{x₁} f(x) dx + ∫_{x₁}^{x₂} f(x) dx + ··· + ∫_{x_{n−1}}^{xₙ} f(x) dx

≅ (h/2)[f(x₀) + f(x₁)] + (h/2)[f(x₁) + f(x₂)] + ···
  + (h/2)[f(x_{n−1}) + f(xₙ)]
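The composite rule above can be sketched as follows (Python, not the book's FORTRAN); note that the implementation makes only n + 1 evaluations of f, that is, n(m − 1) + 1 with m = 2, because interior base points are shared by adjacent subintervals.

```python
def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule: n applications of (2.8) on
    subintervals of length h = (b - a)/n.  Interior base points are
    shared, so only n + 1 evaluations of f are required."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))            # end points weighted by 1/2
    for i in range(1, n):
        total += f(a + i * h)              # interior points weighted by 1
    return h * total

# The error falls roughly as 1/n^2 for a smooth integrand:
f = lambda x: x**3 - 2 * x**2 + 2 * x + 5  # cubic from the earlier example
exact = 62.0 / 3.0
for n in (1, 2, 4, 8):
    print(n, composite_trapezoid(f, 1.0, 3.0, n) - exact)
```

Doubling n cuts the error by about a factor of four, consistent with the h² behavior of the composite trapezoidal error.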