Numerical Methods For Differential Equations and Applications
J. C. Butcher
The University of Auckland
Abstract
This paper surveys a number of aspects of numerical methods for ordinary differential
equations. The discussion includes the method of Euler and introduces Runge-Kutta methods
and linear multistep methods as generalizations of Euler. Stability considerations arising from
stiffness lead to a discussion of implicit methods and implementation issues. To the extent
possible within this short survey, numerical methods are looked at in the context of problems
arising in practical applications.
1 Introduction
Differential equations play a role in the modelling of almost every scientific discipline. However,
it is relatively rare for a differential equation to have a solution that can be written in terms of
elementary functions. Usually, the only information about the solution is that it is known to exist
and to be unique, on theoretical grounds, and that it can be approximated more or less accurately
using computational techniques. In this review paper, we will consider some aspects of numerical
methods for the solution of initial value problems in systems of ordinary differential equations.
There are two standard forms for expressing such problems. The first of these is

\[ y'(x) = f(x, y(x)), \qquad y(x_0) = y_0, \tag{1} \]

and the second is the autonomous form, in which f has no explicit dependence on x,

\[ y'(x) = f(y(x)), \qquad y(x_0) = y_0. \tag{2} \]

Any problem in the first form can be rewritten in autonomous form by adjoining x itself to the system as an additional dependent variable. Much of the discussion in this paper will be in terms of the autonomous form (2),
where the exposition is much simpler and more transparent than would have been possible for the
non-autonomous formulation.
The methods under study in this paper are introduced in Section 2 in the context of the central
role of the Euler method and the relationship that many other methods have as generalizations
of Euler. This leads on to Section 3 which discusses the one-step Runge-Kutta methods and this
leads on to Section 4 where linear multistep methods are reviewed.
Section 5 discusses the important property of stiffness and the effect it has on the performance
of traditional numerical methods. The stability considerations that become essential in the light of
stiffness are the subject of Section 6.
In Section 7 issues concerned with the implementation of methods for stiff and non-stiff problems
are discussed. This is followed by Section 8 which considers some applications for which special
care is required. In Section 9 we make some concluding remarks.
2 The Euler method and its generalizations

Consider an initial value problem written in the form (1), and interpret x as time, y as the distance travelled by a moving particle and f as its velocity. If the velocity is assumed to change very little over a short time interval of length h, then the solution value at x0 + h can be approximated using only the derivative at the start of the interval:

\[ y_1 = y_0 + h f(x_0, y_0). \tag{3} \]
The numerical method originally proposed by Euler extends this idea by generating approximations
at a sequence of points, xi = x0 + hi, i = 1, 2, . . ., where each point is related to the one immediately before it by the formula
yi = yi−1 + hf (xi−1 , yi−1 ), i = 1, 2, . . . . (4)
If it is required to find the solution at a known output point x then it is convenient to choose
h = (x − x0 )/n, where n is an integer.
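The iteration (4) is simple to implement. The following Python sketch is a minimal illustration; the test problem y' = −y and the step count are our own choices, not part of the original discussion.

    import math

    # Euler's method (4): n steps of size h from x0 to x_end.
    def euler(f, x0, y0, x_end, n):
        h = (x_end - x0) / n
        x, y = x0, y0
        for _ in range(n):
            y = y + h * f(x, y)   # formula (4)
            x = x + h
        return y

    # Example: y' = -y, y(0) = 1 has exact solution exp(-x).
    print(euler(lambda x, y: -y, 0.0, 1.0, 1.0, 100), math.exp(-1.0))

Halving h (that is, doubling n) roughly halves the error, in line with the first order behaviour described below.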
Because (3) can be looked at as the first two terms of a Taylor expansion
\[ y(x_0 + h) = y(x_0) + h y'(x_0) + \frac{h^2}{2} y''(x_0) + \cdots , \]
it should be expected that, at least for small values of h, the error in completing each step is approximately proportional to h². Since the total number of steps is proportional to h⁻¹, this
would mean that the total error committed by the time n steps have been completed, to produce an approximation to y(x), would be approximately proportional to h.
The fact that errors behave like the first power of h is regarded as a serious limitation of the
Euler method, because reducing h, and therefore increasing the total computational effort, leads to
only a modest improvement in accuracy. What are preferred are ‘higher order’ methods for which
the error in a step behaves approximately like hᵖ⁺¹ and the total or global error behaves like hᵖ,
where the order p is 2 or more.
Using the velocity-distance interpretation of a differential equation and its solution, we can first
consider how to overcome the limitation of having to use, instead of the average velocity in a time
interval, the value at the beginning of the interval.
Several approaches have been used for approximating the average velocity more accurately than
in the Euler method. One idea is to somehow use an approximation at the mid-point of the interval,
instead of at the left-hand end. A second idea is to use the mean of the values at the two ends of
the interval. The Runge-Kutta method, which we will explore in more detail in the next section,
was originally based by Runge on each of these ideas. To obtain an approximation of the derivative
at the centre or the end of the interval, it is possible to take a temporary step to the desired point
and calculate the derivative using this approximate value as the second argument of the function f .
Denote the temporary approximation as y*_{i−1/2} or y*_i for the ‘mid-point rule’ and the ‘trapezoidal rule’ versions of Runge's methods; we then have the following two sequences of calculations to find the approximation after a single step.

Mid-point rule method:

\[ y^*_{i-1/2} = y_{i-1} + \tfrac{1}{2} h f(x_{i-1}, y_{i-1}), \tag{5} \]
\[ y_i = y_{i-1} + h f\!\left( x_{i-1} + \tfrac{1}{2} h, \; y^*_{i-1/2} \right). \tag{6} \]
Trapezoidal rule method:

\[ y^*_i = y_{i-1} + h f(x_{i-1}, y_{i-1}), \tag{7} \]
\[ y_i = y_{i-1} + \tfrac{1}{2} h \big( f(x_{i-1}, y_{i-1}) + f(x_i, y^*_i) \big). \tag{8} \]
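As an illustration, here is a minimal Python sketch of a single step of each of Runge's two methods; the function names are our own and f(x, y) is the right-hand side of (1).

    # One step of the mid-point rule method, (5)-(6).
    def midpoint_step(f, x, y, h):
        y_half = y + 0.5 * h * f(x, y)           # temporary half step, (5)
        return y + h * f(x + 0.5 * h, y_half)    # derivative at the mid-point, (6)

    # One step of the trapezoidal rule method, (7)-(8).
    def trapezoidal_step(f, x, y, h):
        y_star = y + h * f(x, y)                            # temporary Euler step, (7)
        return y + 0.5 * h * (f(x, y) + f(x + h, y_star))   # mean of end derivatives, (8)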
A different way of approximating the average derivative over a step is to use derivative values already computed in previous steps. Using f(x_{i−1}, y_{i−1}) and f(x_{i−2}, y_{i−2}), a second order approximation is given by the ‘Adams-Bashforth’ method

\[ y_i = y_{i-1} + h \left( \tfrac{3}{2} f(x_{i-1}, y_{i-1}) - \tfrac{1}{2} f(x_{i-2}, y_{i-2}) \right). \tag{9} \]
The order condition for (9) may be verified by expanding the difference of the two sides in Taylor series. That is,

\[
\begin{aligned}
& y(x_i) - y(x_{i-1}) - \tfrac{3}{2} h y'(x_{i-1}) + \tfrac{1}{2} h y'(x_{i-2}) \\
&\quad = y(x_{i-1} + h) - y(x_{i-1}) - \tfrac{3}{2} h y'(x_{i-1}) + \tfrac{1}{2} h y'(x_{i-1} - h) \\
&\quad = \Big( y(x_{i-1}) + h y'(x_{i-1}) + \tfrac{1}{2} h^2 y''(x_{i-1}) + O(h^3) \Big) - y(x_{i-1}) \\
&\qquad - \tfrac{3}{2} h y'(x_{i-1}) + \tfrac{1}{2} h \Big( y'(x_{i-1}) - h y''(x_{i-1}) + O(h^2) \Big) \\
&\quad = O(h^3).
\end{aligned}
\]
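This order 2 behaviour is easy to observe numerically. The following sketch, with our own test problem and a single Euler step to generate the second starting value, shows the global error of (9) shrinking roughly fourfold each time h is halved.

    import math

    # Adams-Bashforth method (9) applied to y' = -y, y(0) = 1.
    def ab2(f, x0, y0, x_end, n):
        h = (x_end - x0) / n
        f_old = f(x0, y0)                 # derivative at x0
        y = y0 + h * f_old                # one Euler step for the starting value
        f_new = f(x0 + h, y)
        for i in range(1, n):
            y = y + h * (1.5 * f_new - 0.5 * f_old)        # formula (9)
            f_old, f_new = f_new, f(x0 + (i + 1) * h, y)
        return y

    for n in (50, 100, 200):
        print(n, abs(ab2(lambda x, y: -y, 0.0, 1.0, 1.0, n) - math.exp(-1.0)))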
Because the Adams-Bashforth methods give the value of yi as a linear combination of yi−1 ,
hf (xi−1 , yi−1 ) and the values of the same quantities but with other subscripts, they are known
as ‘linear multistep methods’. If the average derivative within step number i is computed as the
mean of hf (xi−1 , yi−1 ) and hf (xi , yi ), we get the second order example of what is known as an
‘Adams-Moulton method’. Like all methods in this family, this method is implicit (because yi is
given as the solution to an algebraic equation, rather than by an explicit formula).
\[ y_i = y_{i-1} + \frac{h}{2} \big( f(x_i, y_i) + f(x_{i-1}, y_{i-1}) \big), \qquad i = 1, 2, \dots . \tag{10} \]
Another special example of a linear multistep method, also of implicit type, is written in the form yi = a yi−1 + b yi−2 + ch f(xi, yi). By expanding in Taylor series and matching coefficients, it is found that the unique choice of a, b and c which gives order 2 is a = 4/3, b = −1/3, c = 2/3. The method that results from this choice,

\[ y_i = \tfrac{4}{3} y_{i-1} - \tfrac{1}{3} y_{i-2} + \tfrac{2}{3} h f(x_i, y_i), \tag{11} \]
is known as a BDF or backward difference method.
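Because (11) is implicit, each step requires the solution of an algebraic equation for yi. The following sketch (for a scalar problem, with our own tolerance and iteration limit) solves it by Newton's method, using the derivative ∂f/∂y supplied as dfdy.

    # One step of the BDF method (11): solve
    #   y = (4/3) y1 - (1/3) y2 + (2/3) h f(x_new, y)
    # for y, where y1, y2 approximate the solution at the two previous points.
    def bdf2_step(f, dfdy, x_new, y1, y2, h, tol=1e-12, max_iter=20):
        y = y1   # previous value as initial guess
        for _ in range(max_iter):
            g = y - (4/3) * y1 + (1/3) * y2 - (2/3) * h * f(x_new, y)
            dg = 1.0 - (2/3) * h * dfdy(x_new, y)
            y_next = y - g / dg            # Newton iteration
            if abs(y_next - y) < tol:
                return y_next
            y = y_next
        return y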
In the next sections we will explore some aspects of the Runge-Kutta and linear multistep
methods in further detail.
3 Runge-Kutta methods
Runge-Kutta methods, as we have seen, are ‘one step’ in the sense that the result found at the end
of a step is functionally dependent only on the result given at the end of the previous step. That
is, if yn denotes a computed approximation to y(xn ), then yn is given by a formula of the form
\[ y_n = y_{n-1} + h \sum_{i=1}^{s} b_i F_i , \]

where, in the case of the non-autonomous form (1), the stage derivatives F_i are given by

\[ F_i = f(x_{n-1} + c_i h, \; Y_i), \qquad Y_i = y_{n-1} + h \sum_{j=1}^{s} a_{ij} F_j , \qquad i = 1, 2, \dots, s. \]
It turns out that the components of the c vector are related to the elements of the A matrix by
\[ c_i = \sum_{j=1}^{s} a_{ij}, \qquad i = 1, 2, \dots, s. \]
The number of stages s is the number of Y vectors needed to compute the solution in a method of
this form and is a measure of the complexity of a particular method.
The characteristic coefficients of a specific Runge-Kutta method are conveniently displayed in
a tableau as follows
\[
\begin{array}{c|cccc}
c_1 & a_{11} & a_{12} & \cdots & a_{1s} \\
c_2 & a_{21} & a_{22} & \cdots & a_{2s} \\
\vdots & \vdots & \vdots & & \vdots \\
c_s & a_{s1} & a_{s2} & \cdots & a_{ss} \\
\hline
 & b_1 & b_2 & \cdots & b_s
\end{array}
\]
Even though we have allowed for the possibility of implicit methods in this formulation, the tra-
ditional Runge-Kutta methods, for example those due to Runge, Kutta, Nyström and other early
contributors, have been explicit. This means that aij is precisely zero unless i > j (and conse-
quently c1 = 0) because each quantity used in the computation should be functionally dependent
only on other quantities already computed.
The midpoint and trapezoidal rule methods introduced in Section 2 are given by the tableaus
\[
\begin{array}{c|cc}
0 & & \\
\tfrac{1}{2} & \tfrac{1}{2} & \\
\hline
 & 0 & 1
\end{array}
\qquad\qquad
\begin{array}{c|cc}
0 & & \\
1 & 1 & \\
\hline
 & \tfrac{1}{2} & \tfrac{1}{2}
\end{array}
\]
Note that here, as is usual for explicit methods, the zero elements on and above the diagonal of A
have been omitted.
The next two example tableaus are of orders 3 and 4 respectively
\[
\begin{array}{c|ccc}
0 & & & \\
\tfrac{1}{2} & \tfrac{1}{2} & & \\
1 & -1 & 2 & \\
\hline
 & \tfrac{1}{6} & \tfrac{2}{3} & \tfrac{1}{6}
\end{array}
\qquad\qquad
\begin{array}{c|cccc}
0 & & & & \\
\tfrac{1}{2} & \tfrac{1}{2} & & & \\
\tfrac{1}{2} & 0 & \tfrac{1}{2} & & \\
1 & 0 & 0 & 1 & \\
\hline
 & \tfrac{1}{6} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{6}
\end{array}
\]
The fourth order method has become very popular since it was proposed by Kutta and is sometimes
referred to as ‘the Runge-Kutta method’, as though it were the only one available.
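A general explicit Runge-Kutta step can be driven directly by its tableau. The following Python sketch (the helper name and the test call are our own) fills in the classical fourth order tableau displayed above.

    # One step of an explicit Runge-Kutta method given by its tableau (A, b, c).
    def erk_step(f, x, y, h, A, b, c):
        s = len(b)
        F = [0.0] * s
        for i in range(s):
            Y = y + h * sum(A[i][j] * F[j] for j in range(i))   # explicit: j < i only
            F[i] = f(x + c[i] * h, Y)
        return y + h * sum(b[i] * F[i] for i in range(s))

    # The classical fourth order method.
    A = [[0, 0, 0, 0], [1/2, 0, 0, 0], [0, 1/2, 0, 0], [0, 0, 1, 0]]
    b = [1/6, 1/3, 1/3, 1/6]
    c = [0, 1/2, 1/2, 1]
    print(erk_step(lambda x, y: -y, 0.0, 1.0, 0.1, A, b, c))   # approx. exp(-0.1)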
To verify the order of these and other Runge-Kutta methods it is necessary to expand the exact
and computed solutions in powers of h and to check that the terms up to and including those with
an exponent p agree with each other. For example, it can be shown that the exact solution for (2)
near an initial point (x0 , y0 ) is given by the expansion
\[
\begin{aligned}
y(x_0 + h) ={}& y_0 + h f + \tfrac{h^2}{2} f'(f) + \tfrac{h^3}{6} \big( f''(f, f) + f'(f'(f)) \big) \\
& + \tfrac{h^4}{24} \big( f'''(f, f, f) + 3 f''(f, f'(f)) + f'(f''(f, f)) + f'(f'(f'(f))) \big) + O(h^5),
\end{aligned}
\tag{12}
\]
where f = f(y0), f' = f'(y0), f'' = f''(y0), f''' = f'''(y0). On the other hand, the Taylor expansion for the numerical solution computed by an explicit Runge-Kutta method with s = 4 is equal to
\[
\begin{aligned}
y_0 & + h (b_1 + b_2 + b_3 + b_4) f + h^2 (b_2 c_2 + b_3 c_3 + b_4 c_4) f'(f) \\
& + h^3 \Big( \tfrac{1}{2} (b_2 c_2^2 + b_3 c_3^2 + b_4 c_4^2) f''(f, f) + (b_3 a_{32} c_2 + b_4 a_{42} c_2 + b_4 a_{43} c_3) f'(f'(f)) \Big) \\
& + h^4 \Big( \tfrac{1}{6} (b_2 c_2^3 + b_3 c_3^3 + b_4 c_4^3) f'''(f, f, f) + (b_3 c_3 a_{32} c_2 + b_4 c_4 a_{42} c_2 + b_4 c_4 a_{43} c_3) f''(f, f'(f)) \\
& \qquad + \tfrac{1}{2} (b_3 a_{32} c_2^2 + b_4 a_{42} c_2^2 + b_4 a_{43} c_3^2) f'(f''(f, f)) + b_4 a_{43} a_{32} c_2 \, f'(f'(f'(f))) \Big) + O(h^5).
\end{aligned}
\tag{13}
\]
A comparison of the terms in (12) and (13) shows that agreement up to the h⁴ terms occurs if and only if
\[
\begin{array}{ll}
b_1 + b_2 + b_3 + b_4 = 1, & b_2 c_2 + b_3 c_3 + b_4 c_4 = \tfrac{1}{2}, \\
b_2 c_2^2 + b_3 c_3^2 + b_4 c_4^2 = \tfrac{1}{3}, & b_3 a_{32} c_2 + b_4 a_{42} c_2 + b_4 a_{43} c_3 = \tfrac{1}{6}, \\
b_2 c_2^3 + b_3 c_3^3 + b_4 c_4^3 = \tfrac{1}{4}, & b_3 c_3 a_{32} c_2 + b_4 c_4 a_{42} c_2 + b_4 c_4 a_{43} c_3 = \tfrac{1}{8}, \\
b_3 a_{32} c_2^2 + b_4 a_{42} c_2^2 + b_4 a_{43} c_3^2 = \tfrac{1}{12}, & b_4 a_{43} a_{32} c_2 = \tfrac{1}{24}.
\end{array}
\]
These are easily seen to be satisfied by the values a21 = a32 = c2 = c3 = 1/2, a31 = a41 = a42 = 0, a43 = c4 = 1, b1 = b4 = 1/6, b2 = b3 = 1/3, giving the classical fourth order method. Explicit methods are known with orders at least as high as 10, but these require increasingly many stages. For example, for p = 5, s = 6 is necessary and for p = 8, s = 11 is necessary.
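The eight order conditions above can be checked mechanically; the following sketch (our own) verifies them for the classical coefficients just given.

    a21 = a32 = c2 = c3 = 1/2
    a31 = a41 = a42 = 0.0
    a43 = c4 = 1.0
    b1 = b4 = 1/6
    b2 = b3 = 1/3

    conditions = [
        (b1 + b2 + b3 + b4, 1),
        (b2*c2 + b3*c3 + b4*c4, 1/2),
        (b2*c2**2 + b3*c3**2 + b4*c4**2, 1/3),
        (b3*a32*c2 + b4*a42*c2 + b4*a43*c3, 1/6),
        (b2*c2**3 + b3*c3**3 + b4*c4**3, 1/4),
        (b3*c3*a32*c2 + b4*c4*a42*c2 + b4*c4*a43*c3, 1/8),
        (b3*a32*c2**2 + b4*a42*c2**2 + b4*a43*c3**2, 1/12),
        (b4*a43*a32*c2, 1/24),
    ]
    assert all(abs(lhs - rhs) < 1e-14 for lhs, rhs in conditions)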
In contrast to the complicated relationship between the value of p and the minimal value of s to
achieve this order for explicit methods, for implicit methods there is a simple relationship between
s and p. This is that for any positive integer s there exists an implicit Runge-Kutta method
with order p = 2s (but no higher). In fact methods with these high orders are a generalization
of Gaussian quadrature formulas and reduce to them in the special case of the trivial differential equation y'(x) = f(x).
We give a single example of one of the Gauss family of methods, the method with s = 2 and
p = 4:

\[
\begin{array}{c|cc}
\tfrac{1}{2} - \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} & \tfrac{1}{4} - \tfrac{\sqrt{3}}{6} \\[3pt]
\tfrac{1}{2} + \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} + \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} \\[3pt]
\hline
 & \tfrac{1}{2} & \tfrac{1}{2}
\end{array}
\tag{14}
\]

4 Linear multistep methods

A general linear multistep method computes each new approximation yn from the k previous approximations, together with derivative values, using the formula

\[ y_n = \alpha_1 y_{n-1} + \alpha_2 y_{n-2} + \cdots + \alpha_k y_{n-k} + h \big( \beta_0 f_n + \beta_1 f_{n-1} + \cdots + \beta_k f_{n-k} \big), \]
where fi is defined as f(xi, yi) for problem (1) and as f(yi) for problem (2). Note that if β0 ≠ 0, the method is implicit and the approximation yn has to be determined as the solution of an algebraic equation. Assuming that either αk ≠ 0 or βk ≠ 0, the integer k is a measure of the complexity of these methods and the method is sometimes known as a k-step method.
The order of linear multistep methods is easy to determine by Taylor series analysis. In the conditions for the various orders, the α and β coefficients all occur linearly, and therefore this sort of question is much simpler than for Runge-Kutta methods.
In particular, for order at least 1, the following conditions are necessary and sufficient.
α1 + α2 + · · · + αk = 1, (15)
α1 + 2α2 + · · · + kαk = β0 + β1 + β2 + · · · + βk . (16)
These conditions are of central importance to the study of linear multistep methods and are usually
known as the ‘consistency conditions’. It turns out that for a method to be capable of producing
a sequence of approximations which converge to the exact solution as h → 0, it is necessary and
sufficient that the method be both consistent and ‘stable’ where a stable method is one for which
the polynomial
\[ z^k - \alpha_1 z^{k-1} - \cdots - \alpha_k \]
has zeros only in the closed unit disc and repeated zeros only in the open unit disc.
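Checking this root condition for a given method is straightforward; the sketch below (the function name and tolerance are our own) uses numpy to find the zeros of the polynomial.

    import numpy as np

    # Stability (root) condition for z^k - alpha_1 z^{k-1} - ... - alpha_k.
    def is_stable(alphas, tol=1e-6):
        roots = np.roots([1.0] + [-a for a in alphas])
        for r in roots:
            if abs(r) > 1 + tol:
                return False                 # zero outside the closed unit disc
            if abs(r) > 1 - tol:             # on the unit circle: must be simple
                if sum(abs(roots - r) < tol) > 1:
                    return False
        return True

    print(is_stable([1.0]))          # Adams methods: stable
    print(is_stable([4/3, -1/3]))    # the BDF method (11): stable
    print(is_stable([2.0, -1.0]))    # (z - 1)^2: repeated zero on the circle, unstable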
Amongst explicit methods, the most important are the Adams-Bashforth methods. For these
α1 = 1 and each of α2 , . . ., αk is zero. Furthermore, the values of β1 , β2 , · · ·, βk are chosen in
such a way as to give an order of p = k. Corresponding implicit Adams-Moulton methods have the
same values of the αs and, because β0 is an additional parameter, it is possible to obtain an order
p = k + 1.
There are good reasons, which we will discuss later, for combining an Adams-Bashforth with an Adams-Moulton method of the same order (or possibly with an order greater by 1) into a single algorithm. In these so-called ‘predictor-corrector’ pairs, the Adams-Bashforth predictor is used to obtain an approximate ‘predicted’ value of yn, from which an approximate value of fn is computed. The approximation to yn is then ‘corrected’ using the Adams-Moulton formula, but with fn replaced by the value computed in the predictor stage of the algorithm. Many variants of this scheme are used in practical programs.
It might be thought that the loss of generality in assuming the special form for the α coefficients
is a disadvantage of the Adams methods. However, even though a more general choice of coefficients formally allows orders as high as 2k, stability restrictions generally make such methods useless. The greatest order for a linear multistep method that is also stable, and therefore convergent, is k + 2 (or only k + 1 if k is an odd integer).
Instead of the choice of coefficients made in the Adams methods, it is also possible to use the
values β1 = β2 = · · · = βk = 0 with β0 , α1 , α2 , . . ., αk selected to give order p = k. These backward
difference methods, of which an example is (11), have a special role in the solution of stiff problems.
5 Stiff problems
For many important problems, a phenomenon arises which makes numerical computation compli-
cated and difficult. To illustrate this effect, we will confine ourselves to a simple linear problem but
the difficulty is by no means confined to such problems. Suppose the N × N matrix M has distinct
eigenvalues λ1, λ2, . . ., λN and consider the differential equation

\[ y'(x) = M y(x). \]
It is well-known that the solution to this equation takes the form

\[ y(x) = \sum_{i=1}^{N} C_i \exp(\lambda_i x), \]
where the constant vectors C1 , C2 , . . ., CN depend on the information given in the initial value.
The special difficulty, known as ‘stiffness’, arises when some of the eigenvalues are close to zero, or when some of them have a real part close to zero, whereas other eigenvalues have very negative real
parts. Because of the characteristic behaviour of the exponential function, it is the very negative
eigenvalues that cause trouble. Even though they ultimately have a negligible effect on the exact
solution, they can have an overwhelming effect on the result of a numerical computation using
traditional methods. For example, if a single step is taken using the Euler method, the result
computed is found to be
y1 = y0 + hM y0 = (I + hM )y0 .
Suppose the nonsingular matrix V is such that V⁻¹MV = diag(q1, q2, . . . , qN); then

\[ V^{-1} y_1 = \big( I + h \, \mathrm{diag}(q_1, q_2, \dots, q_N) \big) V^{-1} y_0 , \]

so that, in the transformed variables, component number i is multiplied by 1 + hqi in each step. Repeated use of this one-step operation will be unstable unless each of the (possibly complex) numbers 1 + hq1, 1 + hq2, . . ., 1 + hqN has a magnitude not exceeding 1. This can only be achieved
by making h so small that the ‘non-stiff’ components take many steps to make progress in modelling
the physically interesting behaviour of the problem.
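The severity of this restriction is easy to see numerically. In the following sketch (the eigenvalues −1 and −1000 are our own illustrative choice), stability of the Euler method forces h ≤ 2/1000, even though the component with eigenvalue −1 is the one of physical interest.

    # Growth factors 1 + h*q for the Euler method on a stiff linear problem.
    q = [-1.0, -1000.0]   # eigenvalues of M
    for h in (0.01, 0.002, 0.001):
        factors = [1.0 + h * qi for qi in q]
        ok = all(abs(g) <= 1.0 for g in factors)
        print(h, factors, "stable" if ok else "unstable")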
Problems that have this stiffness property arise in many common situations. They are partic-
ularly common in applications of the method of lines in the solution of many partial differential
equations. Linear and non-linear stiff problems arise also, for example, in the analysis of electrical
circuits and in chemical kinetics.
6 Stability questions
In the example of a stiff problem discussed in the previous section, the behaviour of a single
component of the solution was identified as being of significance. Hence, we consider a problem in
only one dimension of the simple linear form
\[ y' = q y. \tag{17} \]
Since in the analysis we will undertake, the factor q always arises with h as a multiplicative factor,
it is convenient to define z = hq and, because we will be modelling just one component from a
possibly larger system, we will allow z to be complex. For the Euler method we saw in the last
section that a single step applied to (17) results in the numerical approximation being multiplied
by 1 + z. The situations that interest us are when h is determined by criteria that have nothing
to do with q; we can then think of h as being large in magnitude. On the other hand, because q
corresponds to a ‘stiff’ component, we may think of q as being large in magnitude and having a
negative real part. What we want is for the stiff components to simply die away, or at very least
to remain bounded; in this way they will not affect the significance of the dominant part of the
solution of a large stiff system. Hence, we would like 1 + z to have magnitude less than 1 but this
is impossible under the conditions we have specified.
A contrasting method defines yn by the implicit equation
yn = yn−1 + hf (xn , yn ).
If we carry out a similar analysis for this ‘implicit Euler method’, it is found that the computed solution is multiplied in each step by (1 − z)⁻¹. If q, and therefore z, is a complex number with negative real part, then the magnitude of this factor is less than 1. This means that the stiff
components in a large stiff system solved by this implicit method have a desirable property not
possessed by the traditional Euler method. This property is known as A-stability and refers to the
boundedness of any sequence of approximations computed by a method when the problem being
solved is (17) and z is in the left-half complex plane.
Unfortunately, linear multistep methods cannot possess this property unless, like the implicit
Euler method, the implicit mid-point rule, the implicit trapezoidal rule and the second order
backward difference method (11), their orders do not exceed 2.
For Runge-Kutta methods, A-stability is available for any required order. For example the
Gauss methods, such as the fourth order method given by (14), are all A-stable.
7 Implementation considerations
Even though we have introduced several numerical methods and classes of numerical methods in
a constant stepsize setting, it is usually advisable to vary the stepsize in an actual computation.
The reason for this is that, for many problems, the behaviour of the solution varies in such a way
that the contribution to the overall inaccuracy of the computed solution is much greater from some
parts of the trajectory than from others. In the parts where the contribution will be small, large stepsizes are appropriate, because the greater local truncation errors committed there will ultimately do little harm.
On the other hand, for those parts of the trajectory on which the final errors are most dependent,
small stepsizes must be used.
Using a calculus of variations argument it can be concluded that to achieve a suitable balance
in the errors arising from different parts of the trajectory, it is best to make the local truncation
error approximately the same for all steps. Hence, one of the central implementation questions will
be how best to estimate the local truncation error.
In the case of predictor-corrector methods a very simple device makes this task very easy to
accomplish. Consider for example the Adams-Bashforth (9), Adams-Moulton (10) pair written
together in the form
\[ y_i^* = y_{i-1} + h \left( \tfrac{3}{2} f(x_{i-1}, y_{i-1}) - \tfrac{1}{2} f(x_{i-2}, y_{i-2}) \right), \tag{18} \]
\[ y_i = y_{i-1} + h \left( \tfrac{1}{2} f(x_i, y_i^*) + \tfrac{1}{2} f(x_{i-1}, y_{i-1}) \right). \tag{19} \]
Note that the predicted value yi∗ found in (18) is used only to permit the evaluation of an approx-
imate value of the derivative at the end of the current step for use in the corrector (19).
If it is assumed that all previously computed values are exactly correct, so as to restrict con-
sideration to the new errors introduced in step number i, then by Taylor series we easily find
that
\[ y(x_i) - y_i^* = \tfrac{5}{12} h^3 y'''(x_i) + O(h^4), \tag{20} \]
\[ y(x_i) - y_i = -\tfrac{1}{12} h^3 y'''(x_i) + O(h^4). \tag{21} \]
To obtain an approximation to the local truncation error, the difference yi∗ − yi can be calculated and divided by 6; from (20) and (21), this is easily verified to be an asymptotically correct estimate.
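The pair (18), (19), together with this estimate, can be written as a single step in a few lines. In this sketch (our own naming, in ‘PECE’ form) the derivative at the end of the step is re-evaluated at the corrected value for use in the next step.

    # One predictor-corrector step from x to x + h.
    # y1, f1: value and derivative at x; f2: derivative at x - h.
    def pece_step(f, x, y1, f1, f2, h):
        y_pred = y1 + h * (1.5 * f1 - 0.5 * f2)    # predictor (18)
        f_pred = f(x + h, y_pred)                  # evaluate
        y_corr = y1 + h * 0.5 * (f_pred + f1)      # corrector (19)
        err_est = (y_pred - y_corr) / 6.0          # estimate of the local error
        return y_corr, f(x + h, y_corr), err_est   # evaluate again for the next step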
A similar approximation to the local truncation error is available for any Adams-Bashforth,
Adams-Moulton pair of the same orders.
For explicit Runge-Kutta methods the computation of local truncation errors is much more difficult, and it is usual to add extra stages to make it possible to obtain two embedded methods with comparable orders; the difference of the results computed from these gives useful guidance on the errors, but not the asymptotically correct estimates provided by predictor-corrector pairs.
Related to error estimation is the question of actually adjusting stepsizes upwards or downwards
in accordance with the requirement of keeping local errors approximately constant from step to step.
For Runge-Kutta methods this is trivial, because only one value is passed from one step to the next.
However, for linear multistep methods there is no single accepted answer. We will discuss only one
of these, the ‘Nordsieck technique’; this approach has become as popular as any other. In the
Nordsieck method the information is carried from step to step, not in its natural form as single y
value and a set of k f values, but as linear combinations of these quantities which approximate the
sequence of scaled derivatives
\[ y(x_i), \quad h y'(x_i), \quad \frac{h^2}{2!} y''(x_i), \quad \dots , \quad \frac{h^k}{k!} y^{(k)}(x_i). \]
For example, in the case of the Adams-Bashforth, Adams-Moulton pair given by (18), (19), the approximations to y(xi−1), hy'(xi−1) and (h²/2) y''(xi−1) are, respectively, yi−1, hf(xi−1, yi−1) and (h/2)(f(xi−1, yi−1) − f(xi−2, yi−2)). If, before step number i is taken, the stepsize is to be changed from h to rh, then the three incoming approximations can be scaled respectively by 1, r and r².
If r is always equal to 1, then the reformulation in terms of scaled derivatives makes no difference to the computed result; the variable stepsize generalization, however, is simple and convenient only in the scaled derivative formulation.
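The rescaling itself is a one-line operation, as the following sketch (our own naming) shows for a Nordsieck vector of any length.

    # z[j] approximates (h^j / j!) * y^(j); changing h to r*h scales z[j] by r**j.
    def rescale_nordsieck(z, r):
        return [r**j * zj for j, zj in enumerate(z)]

    # Example: halving the stepsize scales three incoming quantities by 1, 1/2, 1/4.
    print(rescale_nordsieck([1.0, -0.1, 0.005], 0.5))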
Implementation difficulties for implicit methods applied to stiff problems centre round the solu-
tion of the algebraic system defining the stage values, or the final step result in the case of a linear
multistep method. The algebraic system is usually solved by a variant of the Newton-Raphson
method. The cost of this solution increases sharply with the dimension of the differential equation
system to be solved and, equally significantly, it rises sharply with the number of stages. Avoiding the s factors by using backward difference methods (for which s = 1) either limits the order to 2 or implies that the advantages of A-stability are sacrificed.
In the case of Runge-Kutta methods, the cost can be lowered considerably by selecting methods
with special structures. The most important of these special structures is for the coefficient matrix
A to have a one-point spectrum (the resulting methods are said to be ‘singly implicit’).
8 Applications requiring special care
Many physical problems are more appropriately modelled using not differential equations alone,
but differential equations combined with algebraic constraints. In some cases these can be viewed as
limiting cases of singular perturbation problems but they are often so-called ‘differential-algebraic
equations’ in their own right. Many of the methods associated with stiff problems can be adapted
to these more difficult problems.
Another type of physical problem for which differential equations alone are not an adequate
means of description is one where the rate of change of some or all of the dependent variables needs to
be written in terms of the variable values at one or more times in the past, as well as at the present
time. These ‘delay-differential equations’ are often non-stiff and the complications arise from the
need to interpolate already computed solutions to evaluate approximations to the delay values. The
solution of these problems by explicit Runge-Kutta methods imposes additional design demands
on the methods because accurate interpolation formulae must be provided; these are not readily
available unless additional stages are added to the method. Another characteristic difficulty with
delay problems is the occurrence of discontinuous behaviour in the solution. Special techniques are
needed to locate the discontinuities and to pass through them without too great a loss of numerical efficiency. Delay differential equations arise, for example, in economic and biological
modelling.
9 Concluding remarks
In this short survey of numerical methods for ordinary differential equations it has been possible to
mention only selected aspects of this very active research area. We have not mentioned methods
that make use of higher derivatives of the solution, for example Taylor series methods. We have
also not mentioned methods that are both multistage (as in Runge-Kutta methods) and multivalue
(as in linear multistep methods); these have many of the desirable properties of each of the main
traditional classes which they generalize. It has also not been possible to discuss recent exciting
work on the solution of problems arising from a Hamiltonian formulation of mechanical problems. It
is important to identify numerical methods with a ‘symplectic’ character because these are capable
of accurately preserving theoretical invariants that, in general, cannot be held constant with other
numerical methods.
It has not been convenient to give detailed references to all the work that has been discussed in
the paper. However, a number of references are listed. Some of these are textbooks and reference
books on the subject and some are historically important papers from the literature of this subject.
The work by Cauchy [6], Coriolis [7] and Euler [12] is fundamental both to the theory of
differential equations and to their numerical solution. The papers by Runge [24], Heun [18], Kutta
[19] and Nyström [23] contain the early work on Runge-Kutta methods while the papers by Gill
[14], Merson [21] and Butcher [3] represent the modern phase of the theory of these methods. Some
papers on some of the aspects of implicit Runge-Kutta methods that we have discussed here are
those by Butcher [4], Ehle [11] and Burrage, Butcher and Chipman [2].
Some of the fundamental publications on linear multistep methods are Adams and Bashforth
[1], Moulton [22], Dahlquist [9] and Curtiss and Hirschfelder [8]. In addition to [8], fundamental
reading on stiffness and A-stability includes a further paper by Dahlquist [10] and the paper by
Wanner, Hairer and Nørsett [26], where the idea of ‘order stars’ was first introduced.
Some of the early and successful techniques used in the implementation of linear multistep
methods, particularly involving the Nordsieck approach, are presented in the book by Gear [13].
Many other monographs and textbooks on the subject of this survey have been produced within
the last 35 years. Amongst them are those by Henrici [17], Lambert [20], Stetter [25], Butcher [5]
and the two-volume set by Hairer, Nørsett and Wanner [15] and by Hairer and Wanner [16].
References
[1] J.C. Adams, Appendix in F. Bashforth, An attempt to test the theories of capillary action by
comparing the theoretical and measured forms of drops of fluid. With an explanation of the
method of integration employed in constructing the tables which give the theoretical form of
such drops, by J.C. Adams. Cambridge Univ. Press (1883).
[2] K. Burrage, J.C. Butcher & F.H. Chipman, An implementation of singly-implicit Runge-Kutta methods, BIT, 20 (1980), 326-340.
[3] J.C. Butcher, Coefficients for the study of Runge-Kutta integration processes, J. Austral. Math. Soc., 3 (1963), 185-201.
[4] J.C. Butcher, Implicit Runge-Kutta processes, Math. Comp., 18 (1964), 50-64.
[5] J.C. Butcher, The Numerical Analysis of Ordinary Differential Equations, John Wiley & Sons (1986).
[6] A.L. Cauchy, Résumé des Leçons données à l’Ecole Royale Polytechnique. Suite du Calcul
Infinitésimal; published: Equations différentielles ordinaires, ed. Chr. Gilain, Johnson 1981.
[7] G. Coriolis, Mémoire sur le degré d’approximation qu’on obtient pour les valeurs numériques
d’une variable qui satisfait à une équation différentielle, en employant pour calculer ces valeurs
diverses équations aux différences plus ou moins approchées, J. de Mathématiques pures et
appliquées (Liouville), 2 (1837), 229-244.
[8] C.F. Curtiss & J.O. Hirschfelder, Integration of stiff equations. Proc. Nat. Acad. Sci., 38
(1952), 235-243.
[9] G. Dahlquist, Convergence and stability in the numerical solution of ordinary differential
equations, Math. Scand. 4 (1956), 33-53.
[10] G. Dahlquist, A special stability problem for linear multistep methods, BIT 3 (1963), 27-43.
[11] B.L. Ehle, On Padé approximations to the exponential function and A-stable methods for the
numerical solution of initial value problems, Research Report CSRR 2010, Dept. AACS, Univ.
of Waterloo, Ontario, Canada (1969).
[12] L. Euler, Institutionum Calculi Integralis. Volumen Primum, (1768), Opera Omnia, Vol.XI.
[13] C. W. Gear, Numerical initial value problems in ordinary differential equations, Prentice-Hall
(1971).
[14] S. Gill, A process for the step-by-step integration of differential equations in an automatic
digital computing machine. Proc. Cambridge Philos. Soc., 47 (1951), 95-108.
[15] E. Hairer, S.P. Nørsett & G. Wanner, Solving ordinary differential equations I. Nonstiff prob-
lems, Springer (1987, Second edition 1993).
12
[16] E. Hairer & G. Wanner, Solving ordinary differential equations II. Stiff and differential-
algebraic problems, Springer (1991, Second edition 1996).
[17] P. Henrici, Discrete variable methods in ordinary differential equations. John Wiley & Sons
(1962).
[18] K. Heun, Neue Methode zur approximativen Integration der Differentialgleichungen einer unabhängigen Veränderlichen, Zeitschr. für Math. u. Phys., 45 (1900), 23-38.
[19] W. Kutta, Beitrag zur näherungsweisen Integration totaler Differentialgleichungen, Zeitschr. für Math. u. Phys., 46 (1901), 435-453.
[20] J.D. Lambert, Numerical methods for ordinary differential equations, John Wiley & Sons (1991).
[21] R.H. Merson, An operational method for the study of integration processes. Proc. Symp. Data
Processing, Weapons Research Establishment, Salisbury, Australia, (1957) 110-1 to 110-25.
[22] F.R. Moulton, New methods in exterior ballistics. Univ. Chicago Press, (1926).
[23] E.J. Nyström, Ueber die numerische Integration von Differentialgleichungen, Acta Soc. Sci. Fenn., 50 (1925), 1-54.
[24] C. Runge, Ueber die numerische Auflösung von Differentialgleichungen. Math. Ann., 46 (1895),
167-178.
[25] H. J. Stetter, Analysis of discretization methods for ordinary differential equations, Springer
(1973).
[26] G. Wanner, E. Hairer & S.P. Nørsett, Order stars and stability theorems, BIT 18 (1978),
475-489.