
RELATION BETWEEN FIXED AND VARIABLE STEP ADAMS

METHODS

H. RAMOS

Abstract. In this paper we develop a procedure to convert a k-step Adams
method into its variable step-size version, obtaining explicitly the coefficients
of the variable-step Adams-Bashforth and Adams-Moulton methods.

Introduction
The Adams methods for solving initial value problems in ordinary differential
equations have a long history, going back to their introduction by John C. Adams
and Francis Bashforth in 1883, and nowadays they still play an essential part in
much modern software. In particular, in predictor-corrector formulations they are
among the most widely used algorithms.
But to be efficient, as some authors have remarked (see e.g. [2], p.397), an
integrator based on a particular formula must be suitable for a variable step-size
formulation. However, changing the step size in a multistep method is not an easy
task. A number of different approaches have been suggested for this problem and,
among them, Krogh's implementation for variable step-size Adams codes appears
to be superior to any other (see e.g. [7], p.76 or [5], p.128). In spite of that, it
suffers from a high computational cost, because at every step some complicated
integration coefficients must be computed by a set of recurrence relations derived
with the use of q-fold integrals. As Krogh observed: "The cost of computing
integration coefficients is the highest disadvantage to permitting arbitrary
variations in the step size" (see [4]).
The aim of this work is to obtain from a fixed k-step Adams method (explicit
or implicit) its corresponding variable-step version by means of an easily computable
matrix of coefficients expressed in terms of certain elementary symmetric polynomials
(in the values H_i = x_i − x_n, i = n−(k−2), …, n, for the explicit case, and in the
values H_i = x_i − x_{n+1}, i = n−(k−2), …, n+1, for the implicit one, where the
grid points x_i are unevenly spaced).
The k-step Adams-Bashforth method with fixed step h = x_{n+1} − x_n can be
derived by considering, for the I.V.P.

(0.0.1)   y′ = f(x, y),   y(x_0) = y_0,   x ∈ [x_0, x_f],

the differential equation in its integral form, and then replacing f(x, y(x)) by an
interpolation polynomial passing through the points

(x_{n−(k−1)}, y_{n−(k−1)}), …, (x_n, y_n),

Key words and phrases. Adams methods, variable step-size.



where the x_i are equally spaced. In the implicit case, the k-step Adams-Moulton
method with fixed step h = x_{n+1} − x_n is obtained similarly, but this time replacing
f(x, y(x)) by an interpolation polynomial passing through the preceding points
together with (x_{n+1}, y_{n+1}).
Of course both types of methods, those of fixed and those of variable step, suffer
the disadvantage of needing some starting values (which must be obtained using
the Taylor series expansion, a Runge-Kutta method, or even an Adams method
with low order and very small step size).
The paper is organized as follows. In the next section we recall the Adams-
Bashforth methods in standard form, and after some transformations we express
them, using a matrix notation, in terms of a vector of derivatives of the function
f . In Section 2 we introduce some definitions about symmetric polynomials and
the Newton divided differences and present the main results for constructing the
variable-step formulae, which appear in the next section. In Section 4 we describe
a particular case and show the formulae explicitly. The remaining sections are devoted
to obtaining similar results for the implicit Adams-Moulton formulae.

1. The Adams-Bashforth method of fixed stepsize


For the initial value problem (0.0.1) it is assumed that the conditions for the
existence of a unique solution hold. In this formulation y is regarded as a scalar-valued
function, but it could be interpreted as a vector-valued function in order to apply the
method to a system of differential equations.
The k-step Adams-Bashforth method is usually expressed in terms of backward
differences (see e.g. [5], p.81 or [3], p.192):

(1.0.2)   y_{n+1} = y_n + h Σ_{m=0}^{k−1} γ*_m ∇^m f_n ,

where x_i = x_0 + i h, and it is assumed that we know the numerical approximations
y_n, y_{n−1}, …, y_{n−(k−1)} to the exact solution of the differential equation (0.0.1),
and the values f_i = f(x_i, y_i) for i = n−(k−1), …, n. The γ*_m are constants independent
of f and k, and can be readily calculated from a recurrence relation easily obtained
using Euler's method of generating functions (see e.g. [2], p.358).
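The recurrence just mentioned is easy to implement; the following Python sketch (the function name is ours) generates the γ*_m exactly, using the classical relation γ*_m + γ*_{m−1}/2 + … + γ*_0/(m+1) = 1:

```python
from fractions import Fraction

def adams_bashforth_gammas(count):
    """First `count` coefficients gamma*_m of the Adams-Bashforth
    backward-difference form (1.0.2), from the recurrence
    sum_{j=0}^{m} gamma*_j / (m + 1 - j) = 1."""
    g = []
    for m in range(count):
        s = sum(g[j] / (m + 1 - j) for j in range(m))
        g.append(Fraction(1) - s)
    return g

print(adams_bashforth_gammas(5))
# gamma*_0 .. gamma*_4 = 1, 1/2, 5/12, 3/8, 251/720
```

Exact rational arithmetic is used so that the printed coefficients can be compared directly with the tabulated values.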
It can be shown (see [5], p.83) that this method has order k and error constant γ*_k.
The formula (1.0.2) may be written as

(1.0.3)   y_{n+1} = y_n + h γ*(k) B_k^T ,

with γ*(k) and B_k the k-vectors

γ*(k) = ( γ*_0, γ*_1, …, γ*_{k−1} ),
B_k = ( ∇^0 f_n, ∇^1 f_n, …, ∇^{k−1} f_n ).
Remark 1.0.1. The superscript T on a vector or a matrix, as in B_k^T, denotes the transpose.

Now, in order to pass from a fixed step-size method to a variable step-size one,
we expand the backward differences in the formula (1.0.3) at the point x_n and,
truncating the series after k terms, we obtain

(1.0.4)   B_k^T = M_k D_k^T + O(h^k),

where M_k denotes the k × k matrix

        ( 1   0     0      0     …  )
        ( 0   h   −h^2    h^3    …  )
M_k =   ( 0   0   2h^2   −6h^3   …  ),
        ( 0   0     0     6h^3   …  )
        ( ⋮   ⋮     ⋮      ⋮     ⋱  )

with elements (M_k)_{ij} = Σ_{m=0}^{i−1} (−1)^{m+j−1} C(i−1, m) (m h)^{j−1}, where
C(i−1, m) denotes a binomial coefficient. D_k is the k-vector of derivatives of f,

D_k = ( f(x_n), f′(x_n)/1!, f″(x_n)/2!, …, f^{(k−1)}(x_n)/(k−1)! ),

and O(h^k) is the k-vector O(h^k) = ( O(h^k), …, O(h^k) ).
It follows that the k-step Adams-Bashforth method with fixed step h may be
written

(1.0.5)   y_{n+1} = y_n + h γ*(k) M_k D_k^T + O(h^{k+1}),

with M_k and D_k^T as in (1.0.4).
The next section briefly describes some results on symmetric polynomials and
Newton divided differences that will be used later to obtain the final variable-step
formulation of the Adams methods.

2. On a connection between symmetric polynomials and the Newton divided differences

Let us first recall some definitions.
Definition 2.0.2. Consider a non-negative integer n and any set of n elements
or variables, say C = {x_1, x_2, …, x_n}. For k = 0, 1, …, n the elementary symmetric
polynomial of degree k in the variables x_1, x_2, …, x_n is defined by

e_{n,0} = 1,   e_{n,k} = Σ_{1≤i_1<i_2<…<i_k≤n} x_{i_1} ⋯ x_{i_k},   k = 1, …, n.

Definition 2.0.3. The complete symmetric polynomial of total degree k ≥ 0 (k an
integer) in the variables x_1, x_2, …, x_n is defined as the sum of all monomials of
degree k. That is,

h_{n,0} = 1,   h_{n,k} = Σ_{1≤i_1≤i_2≤…≤i_k≤n} Π_{m=1}^{k} x_{i_m},   k > 0.

Remark 2.0.4. We observe that en,k = 0 for k < 0 or k > n, and hn,k = 0 for k < 0.
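Both families of polynomials can be evaluated by brute-force enumeration of the index sets in the definitions; a small Python sketch (the helper names are ours):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def elementary_symmetric(xs, k):
    """e_{n,k}: sum over strictly increasing index tuples i1 < ... < ik."""
    if k < 0 or k > len(xs):
        return 0
    return sum(prod(c) for c in combinations(xs, k)) if k else 1

def complete_symmetric(xs, k):
    """h_{n,k}: sum over weakly increasing index tuples i1 <= ... <= ik."""
    if k < 0:
        return 0
    return sum(prod(c) for c in combinations_with_replacement(xs, k)) if k else 1

# e_{2,1} = x1 + x2 and h_{2,2} = x1^2 + x1 x2 + x2^2, at (x1, x2) = (2, 3):
print(elementary_symmetric([2, 3], 1), complete_symmetric([2, 3], 2))  # -> 5 19
```

Note that, in agreement with Remark 2.0.4, `elementary_symmetric` returns 0 whenever k > n.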
Definition 2.0.5. The zeroth divided difference of a function f with respect to x_i,
denoted f[x_i], is simply the evaluation of f at x_i. The divided differences of higher
order are defined recursively in terms of those of lower order by

(2.0.6)   f[x_i, …, x_{i+k}] = ( f[x_i, …, x_{i+k−1}] − f[x_{i+1}, …, x_{i+k}] ) / ( x_i − x_{i+k} ).
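The recursion (2.0.6) translates directly into code; a straightforward (unoptimized) Python sketch:

```python
def divided_difference(xs, fs):
    """Newton divided difference f[x_0, ..., x_m], computed by the recursion
    f[x_i,...,x_{i+k}] = (f[x_i,...,x_{i+k-1}] - f[x_{i+1},...,x_{i+k}]) / (x_i - x_{i+k})."""
    if len(xs) == 1:
        return fs[0]
    return (divided_difference(xs[:-1], fs[:-1])
            - divided_difference(xs[1:], fs[1:])) / (xs[0] - xs[-1])

# The second divided difference of f(x) = x^2 is its leading coefficient, 1:
print(divided_difference([0.0, 1.0, 3.0], [0.0, 1.0, 9.0]))  # -> 1.0
```

In production codes one would tabulate the differences instead of recomputing them recursively, but the sketch mirrors the definition.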

Now we turn back to the question. In (1.0.5) we obtained a formulation of the
k-step Adams-Bashforth method appropriate for solving the problem (0.0.1).
Suppose we want to advance from x_n to x_{n+1} = x_n + h. Let us consider k − 1
unequally spaced points preceding x_n, which we denote with a bar over them to
avoid confusion with the equally spaced points x_i. Let these points be written
x̄_{n−(k−1)}, …, x̄_{n−1}, so that x̄_i = x̄_{i−1} + h_i for i = n−(k−2), …, n. Identifying
x̄_n = x_n we set the k values H_i = x̄_i − x_n for i = n−(k−1), …, n. Consequently
we have

(2.0.7)
H_n = x̄_n − x_n = 0,
H_{n−1} = x̄_{n−1} − x_n = −h_n,
H_{n−2} = x̄_{n−2} − x_n = −(h_n + h_{n−1}),
⋮
H_{n−(k−1)} = x̄_{n−(k−1)} − x_n = −(h_n + … + h_{n−(k−2)}).

The next results will allow us to approximate the vector D_k^T in (1.0.5) using the
divided differences at the x̄_i and the elementary symmetric functions in the above H_i.

Theorem 2.0.6. If H_* = max{ |H_n|, |H_{n−1}|, …, |H_{n−(k−1)}| }, with the H_i as in
(2.0.7), it holds that

( f[x̄_n], f[x̄_n, x̄_{n−1}], …, f[x̄_n, …, x̄_{n−(k−1)}] )^T = P_k D_k^T + O(H_*^k),

where P_k is the k × k matrix

        ( h_{1,0}  h_{1,1}  …  h_{1,k−1} )
P_k =   (   0      h_{2,0}  …  h_{2,k−2} ),
        (   ⋮        ⋮      …     ⋮      )
        (   0        0      …  h_{k,0}   )

with the complete symmetric polynomials h_{i,j} expressed in the first i variables of
H_n, H_{n−1}, …, H_{n−(k−1)}; D_k^T is the same vector as in (1.0.5), and O(H_*^k) is the
k-vector O(H_*^k) = ( O(H_*^k), …, O(H_*^k) )^T.
Proof. It follows immediately by using the formula for the divided differences (see
[1], p.31)

f[x_1, …, x_n] = Σ_{i=1}^{n} f(x_i) / Π_{j=1, j≠i}^{n} (x_i − x_j),

and expanding each of f(x̄_n), …, f(x̄_{n−(k−1)}) in series at the point x_n, in view of
x̄_i = x_n + H_i. For details see [6], p.914. □

Theorem 2.0.7. The matrix P_k in Theorem 2.0.6 has an inverse that may be
expressed in terms of the elementary symmetric polynomials in the variables H_i,
i = n, n−1, …, n−(k−2), namely

        ( 1   −e_{1,1}   e_{2,2}   …   (−1)^{k−1} e_{k−1,k−1} )
S_k =   ( 0      1      −e_{2,1}   …   (−1)^{k−2} e_{k−1,k−2} ).
        ( ⋮      ⋮          ⋮      …            ⋮             )
        ( 0      0          0      …            1             )

Proof. It can be obtained by induction arguments, but for a more elegant approach,
using the generating functions of both the elementary and the complete symmetric
functions, see [6], p.913. □
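Theorem 2.0.7 can be checked numerically: building P_k from complete symmetric polynomials of the H_i and S_k from elementary symmetric polynomials, the product P_k S_k should be the identity. A Python sketch with arbitrarily chosen H values (the helper names are ours):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e_sym(xs, k):
    # elementary symmetric polynomial e_{n,k} (0 when k > n)
    return 1 if k == 0 else sum(prod(c) for c in combinations(xs, k))

def h_sym(xs, k):
    # complete symmetric polynomial h_{n,k}
    return 1 if k == 0 else sum(prod(c) for c in combinations_with_replacement(xs, k))

k = 4
H = [0.0, -1.0, -2.5, -4.0]   # H_n, H_{n-1}, H_{n-2}, H_{n-3}  (H_n = 0)

# (P_k)_{ij} = h_{i, j-i} in the first i variables of H (upper triangular)
P = [[h_sym(H[:i], j - i) if j >= i else 0.0 for j in range(1, k + 1)]
     for i in range(1, k + 1)]
# (S_k)_{ij} = (-1)^{j-i} e_{j-1, j-i} in the first j-1 variables of H
S = [[(-1.0) ** (j - i) * e_sym(H[:j - 1], j - i) if j >= i else 0.0
      for j in range(1, k + 1)] for i in range(1, k + 1)]

product = [[sum(P[a][m] * S[m][b] for m in range(k)) for b in range(k)]
           for a in range(k)]
identity = all(abs(product[a][b] - (a == b)) < 1e-12
               for a in range(k) for b in range(k))
print(identity)  # -> True
```

The same check works for any choice of the H_i, since the underlying identity between elementary and complete symmetric polynomials holds for arbitrary values of the variables.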

3. Explicit expression of the variable step-size Adams-Bashforth method

Setting h = h_{n+1} = x_{n+1} − x_n = H_{n+1}, the formula (1.0.5) may be transformed,
in view of Theorem 2.0.6 and Theorem 2.0.7, to obtain the final expression for the
variable-step Adams-Bashforth method

(3.0.8)   y_{n+1} = y_n + h_{n+1} γ*(k) M_k S_k ( f[x̄_n], f[x̄_n, x̄_{n−1}], …, f[x̄_n, …, x̄_{n−(k−1)}] )^T + O(H^{* k+1}),

where H^* = max{ |H_{n+1}|, |H_n|, …, |H_{n−(k−1)}| }.
Remark 3.0.8. The extension of the explicit Adams-Bashforth method (1.0.2) to
variable step sizes due to Krogh (see [4]) assumes that the back data are unevenly
spaced. Using Newton's interpolation formula, and replacing divided differences
by modified divided differences in order to increase efficiency in the computations,
the resulting method can be written in the form (see [1], p.137 or [2], p.398):

(3.0.9)   y_{n+1} = y_n + h_{n+1} Σ_{j=0}^{k−1} g_j(n) Φ*_j(n),

with

g_j(n) = (1/h_{n+1}) ∫_{x_n}^{x_{n+1}} Π_{i=0}^{j−1} ( (t − x_{n−i}) / (x_{n+1} − x_{n−i}) ) dt,

and Φ*_j(n), the modified divided differences, defined by

(3.0.10)   Φ*_j(n) = Π_{i=0}^{j−1} (x_{n+1} − x_{n−i}) · f[x_n, …, x_{n−j}].

The calculation of the coefficients g_j(n), Φ*_j(n) is complicated and may be
performed by a set of recurrence relations (see [2], p.399).
Nevertheless, we may easily obtain Krogh's formulation from (3.0.8) by rewriting
(3.0.10) for j = 0, …, k−1 in matrix form:

(3.0.11)   Φ*(n)^T = G_k F_k^T,

where Φ*(n) is the k-vector Φ*(n) = ( Φ*_0(n), …, Φ*_{k−1}(n) ) and G_k is the k × k
diagonal matrix

G_k = diag( 1, H_{n+1}, …, Π_{j=0}^{k−2} (H_{n+1} − H_{n−j}) ),

with elements (G_k)_{ii} = Π_{j=0}^{i−2} (H_{n+1} − H_{n−j}), and F_k is the k-vector

F_k = ( f[x̄_n], f[x̄_n, x̄_{n−1}], …, f[x̄_n, …, x̄_{n−(k−1)}] ).

Observe that G_k, being a diagonal matrix, can be easily inverted, and (3.0.11)
may be expressed as

(3.0.12)   F_k^T = G_k^{−1} Φ*(n)^T.

Thus, the formula (3.0.8) results in

(3.0.13)   y_{n+1} = y_n + h_{n+1} γ*(k) M_k S_k G_k^{−1} Φ*(n)^T + O(H^{* k+1}).

Note that the coefficients g_i(n) in (3.0.9) may be obtained by equating the k-vectors

( g_0(n), …, g_{k−1}(n) ) = γ*(k) M_k S_k G_k^{−1}.

4. Some particular cases of the V.S. Adams-Bashforth formulas

In this section we first illustrate the procedure for a selected number of steps,
say k = 4, to avoid large expressions. As the final result from formulas (3.0.8) and
(3.0.13) is the same, we will use the latter (for no special reason).
We consider the grid points x̄_{n−3}, x̄_{n−2}, x̄_{n−1}, x_n, x_{n+1}, with

(4.0.14)
H_{n+1} = x_{n+1} − x_n = h_{n+1},
H_n = x_n − x_n = 0,
H_{n−1} = x̄_{n−1} − x_n = −h_n,
H_{n−2} = x̄_{n−2} − x_n = −(h_n + h_{n−1}),
H_{n−3} = x̄_{n−3} − x_n = −(h_n + h_{n−1} + h_{n−2}),

and it is assumed that the back values y_{n−3}, y_{n−2}, y_{n−1}, y_n are known, and thus,
f_{n−3}, f_{n−2}, f_{n−1}, f_n. We shall express every term in the formula (3.0.13) explicitly
for k = 4:
γ*(4) is the vector of the first four coefficients of the fixed-step Adams-Bashforth
formula (1.0.2), that is,

γ*(4) = ( 1, 1/2, 5/12, 3/8 ).
M_4 is the matrix in (1.0.4), namely,

        ( 1   0     0      0    )
M_4 =   ( 0   h   −h^2    h^3   ).
        ( 0   0   2h^2   −6h^3  )
        ( 0   0     0     6h^3  )

S_4 is the matrix that appears in Theorem 2.0.7, in this case,

        ( 1   0      0              0              )
S_4 =   ( 0   1   −H_{n−1}    H_{n−1} H_{n−2}      ).
        ( 0   0      1      −H_{n−1} − H_{n−2}     )
        ( 0   0      0              1              )

G_4^{−1} is the inverse of the diagonal matrix G_4 (see (3.0.11)), namely

G_4^{−1} = diag( 1, 1/H_{n+1}, 1/( H_{n+1} (H_{n+1} − H_{n−1}) ), 1/( H_{n+1} (H_{n+1} − H_{n−1}) (H_{n+1} − H_{n−2}) ) ).

Φ*(n)^T is the transpose of the vector of modified divided differences

Φ*(n) = ( Φ*_0(n), Φ*_1(n), Φ*_2(n), Φ*_3(n) ).

Carrying out the matrix product γ*(4) M_4 S_4 G_4^{−1}, the formula (3.0.13) for
k = 4 results in

y_{n+1} = y_n + h_{n+1} ( 1, 1/2, (2 + 3c_0)/(6 + 6c_0), (3 + 6c_0^2 (1+c_1) + 4c_0 (2+c_1)) / ( 12 (1+c_0)(1 + c_0 (1+c_1)) ) ) ( Φ*_0(n), Φ*_1(n), Φ*_2(n), Φ*_3(n) )^T

        = y_n + h_{n+1} ( Φ*_0(n) + Φ*_1(n)/2 + Φ*_2(n) (2 + 3c_0)/(6 + 6c_0)
            + Φ*_3(n) (3 + 6c_0^2 (1+c_1) + 4c_0 (2+c_1)) / ( 12 (1+c_0)(1 + c_0 (1+c_1)) ) ),

where in order to simplify we have introduced the notation

(4.0.15)   c_0 = h_n / h_{n+1},   c_1 = h_{n−1} / h_n,   c_2 = h_{n−2} / h_{n−1}

for the step-size ratios.


The coefficients Φ*_j(n), j = 0, 1, 2, 3, can be computed efficiently with the
recurrence relations that appear in [2], p.399. Evaluating these coefficients directly
and using the relations in (4.0.15) we obtain:

Φ*_0(n) = f(x_n),

Φ*_1(n) = ( f(x_n) − f(x_{n−1}) ) / c_0,

Φ*_2(n) = (1+c_0) ( c_1 f(x_n) − (1+c_1) f(x_{n−1}) + f(x_{n−2}) ) / ( c_0^2 c_1 (1+c_1) ),

Φ*_3(n) = [ (1+c_0) c_1^2 (1 + c_0(1+c_1)) c_2 (1+c_2) f(x_n)
            − (1+c_0)(1+c_1)(1 + c_0(1+c_1)) c_2 (1 + c_1(1+c_2)) f(x_{n−1})
            + (1+c_0)(1 + c_0(1+c_1))(1+c_2)(1 + c_1(1+c_2)) f(x_{n−2})
            − (1+c_0)(1+c_1)(1 + c_0(1+c_1)) f(x_{n−3}) ]
          / ( c_0^3 c_1^2 (1+c_1) c_2 (1+c_2)(1 + c_1(1+c_2)) ).
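The closed-form expressions above can be cross-checked against the definition (3.0.10) on a concrete non-uniform grid; a Python sketch for Φ*_2(n) (the grid values are chosen arbitrarily):

```python
def divided_difference(xs, fs):
    # Newton divided difference via the recursion (2.0.6)
    if len(xs) == 1:
        return fs[0]
    return (divided_difference(xs[:-1], fs[:-1])
            - divided_difference(xs[1:], fs[1:])) / (xs[0] - xs[-1])

# Non-uniform grid x_{n-3} < x_{n-2} < x_{n-1} < x_n < x_{n+1} and f(x) = x**3
x = [-1.0, 0.0, 1.0, 3.0, 7.0]
f = [t ** 3 for t in x]
h = [x[i + 1] - x[i] for i in range(4)]                 # h_{n-2}, h_{n-1}, h_n, h_{n+1}
c0, c1, c2 = h[2] / h[3], h[1] / h[2], h[0] / h[1]      # step-size ratios (4.0.15)

# Phi*_2(n) from the definition (3.0.10):
# (x_{n+1}-x_n)(x_{n+1}-x_{n-1}) f[x_n, x_{n-1}, x_{n-2}]
phi2_def = (x[4] - x[3]) * (x[4] - x[2]) * divided_difference(x[3:0:-1], f[3:0:-1])
# Phi*_2(n) from the closed-form expression in the step-size ratios
phi2_ratio = (1 + c0) * (c1 * f[3] - (1 + c1) * f[2] + f[1]) / (c0 ** 2 * c1 * (1 + c1))

print(abs(phi2_def - phi2_ratio) < 1e-10)  # -> True
```

The same comparison can be repeated for Φ*_1(n) and Φ*_3(n) with the corresponding products of grid differences.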

Thus, the variable-step 4-step Adams-Bashforth formula can be written in terms of
the values y_j and f_j = f(x_j, y_j) as

(4.0.16)   y_{n+1} = y_n + (h_{n+1}/E) ( A f_n − B f_{n−1} + C f_{n−2} − D f_{n−3} ),

with

A = c_1^2 c_2 (1+c_2) [ 3 + 12 c_0^3 (1+c_1)(1 + c_1(1+c_2))
        + 4 c_0 (3 + c_1(2+c_2)) + 6 c_0^2 (3 + c_1^2 (1+c_2) + 2 c_1 (2+c_2)) ],

B = (1+c_1) c_2 (1 + c_1(1+c_2)) [ 3 + 6 c_0^2 (1+c_1)(1 + c_1(1+c_2))
        + 4 c_0 (2 + c_1(2+c_2)) ],

C = (1+c_2)(1 + c_1(1+c_2)) [ 3 + 6 c_0^2 (1 + c_1(1+c_2))
        + 4 c_0 (2 + c_1(1+c_2)) ],

D = (1+c_1) [ 3 + 6 c_0^2 (1+c_1) + 4 c_0 (2+c_1) ],

E = 12 c_0^3 c_1^2 (1+c_1) c_2 (1+c_2)(1 + c_1(1+c_2)).
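As a consistency check, at c_0 = c_1 = c_2 = 1 the ratios A/E, B/E, C/E, D/E must reduce to the fixed-step coefficients 55/24, 59/24, 37/24, 9/24; a Python sketch in exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction

def vs_ab4_coeffs(c0, c1, c2):
    """Coefficients A, B, C, D, E of the variable-step 4-step
    Adams-Bashforth formula (4.0.16)."""
    A = c1**2 * c2 * (1 + c2) * (3 + 12*c0**3*(1 + c1)*(1 + c1*(1 + c2))
        + 4*c0*(3 + c1*(2 + c2)) + 6*c0**2*(3 + c1**2*(1 + c2) + 2*c1*(2 + c2)))
    B = (1 + c1) * c2 * (1 + c1*(1 + c2)) * (3 + 6*c0**2*(1 + c1)*(1 + c1*(1 + c2))
        + 4*c0*(2 + c1*(2 + c2)))
    C = (1 + c2) * (1 + c1*(1 + c2)) * (3 + 6*c0**2*(1 + c1*(1 + c2))
        + 4*c0*(2 + c1*(1 + c2)))
    D = (1 + c1) * (3 + 6*c0**2*(1 + c1) + 4*c0*(2 + c1))
    E = 12 * c0**3 * c1**2 * (1 + c1) * c2 * (1 + c2) * (1 + c1*(1 + c2))
    return A, B, C, D, E

one = Fraction(1)
A, B, C, D, E = vs_ab4_coeffs(one, one, one)
print([v / E for v in (A, B, C, D)])  # ratios 55/24, 59/24, 37/24, 9/24
```

For non-constant ratios the same function delivers the variable-step coefficients directly.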

Remark 4.0.9. We can easily recover the fixed-step formula from (4.0.16) by choosing
the values c_i = 1, since in this case the step sizes become constant, and the formula
reduces to the well-known 4-step Adams-Bashforth method (see [5], p.83):

y_{n+1} = y_n + (h/24) ( 55 f_n − 59 f_{n−1} + 37 f_{n−2} − 9 f_{n−3} ).
VARIABLE STEP-SIZE ADAMS METHODS 9

Similarly, for k = 1, 2, 3, 6, setting c_j = h_{n−j}/h_{n−j+1}, j = 0, …, k−2, as in
(4.0.15), one obtains the explicit formulas:

k = 1:  y_{n+1} = y_n + h_{n+1} f_n,

k = 2:  y_{n+1} = y_n + (h_{n+1}/(2 c_0)) ( (1 + 2 c_0) f_n − f_{n−1} ),

k = 3:  y_{n+1} = y_n + (h_{n+1}/(6 c_0^2 c_1 (1+c_1))) ( c_1 (2 + 6 c_0^2 (1+c_1) + 3 c_0 (2+c_1)) f_n
            − (1+c_1)(2 + 3 c_0 (1+c_1)) f_{n−1} + (2 + 3 c_0) f_{n−2} ),

k = 6:  y_{n+1} = y_n + (h_{n+1}/G) ( A f_n − B f_{n−1} + C f_{n−2} − D f_{n−3} + E f_{n−4} − F f_{n−5} ),

with

A = c_1^4 c_2^3 c_3^2 c_4 (1+c_2)(1+c_3)(1 + c_2(1+c_3))(1+c_4)(1 + c_3(1+c_4))(1 + c_2(1 + c_3(1+c_4))) ·
    [ 10 + 60 c_0^5 (1+c_1)(1 + c_1(1+c_2))(1 + c_1(1 + c_2(1+c_3)))(1 + c_1(1 + c_2(1 + c_3 + c_3 c_4)))
      + 12 c_0 ( 5 + c_1 (4 + c_2 (3 + c_3 (2+c_4))) )
      + 15 c_0^2 ( 10 + 4 c_1 (4 + c_2 (3 + c_3 (2+c_4)))
            + c_1^2 (6 + 3 c_2 (3 + c_3 (2+c_4)) + c_2^2 (3 + c_3^2 (1+c_4) + 2 c_3 (2+c_4))) )
      + 20 c_0^3 ( 10 + 6 c_1 (4 + c_2 (3 + c_3 (2+c_4)))
            + 3 c_1^2 (6 + 3 c_2 (3 + c_3 (2+c_4)) + c_2^2 (3 + c_3^2 (1+c_4) + 2 c_3 (2+c_4)))
            + c_1^3 (4 + c_2^3 (1+c_3)(1 + c_3(1+c_4)) + 3 c_2 (3 + c_3 (2+c_4)) + 2 c_2^2 (3 + c_3^2 (1+c_4) + 2 c_3 (2+c_4))) )
      + 30 c_0^4 ( 5 + c_1^4 (1+c_2)(1 + c_2(1+c_3))(1 + c_2(1 + c_3(1+c_4)))
            + 4 c_1 (4 + c_2 (3 + c_3 (2+c_4)))
            + 3 c_1^2 (6 + 3 c_2 (3 + c_3 (2+c_4)) + c_2^2 (3 + c_3^2 (1+c_4) + 2 c_3 (2+c_4)))
            + 2 c_1^3 (4 + c_2^3 (1+c_3)(1 + c_3(1+c_4)) + 3 c_2 (3 + c_3 (2+c_4)) + 2 c_2^2 (3 + c_3^2 (1+c_4) + 2 c_3 (2+c_4))) ) ],

B = c_2^3 c_3^2 c_4 (1+c_1)(1 + c_1(1+c_2))(1+c_3)(1 + c_1(1 + c_2(1+c_3)))(1+c_4)(1 + c_3(1+c_4))(1 + c_1(1 + c_2(1 + c_3 + c_3 c_4))) ·
    [ 10 + 30 c_0^4 (1+c_1)(1 + c_1(1+c_2))(1 + c_1(1 + c_2(1+c_3)))(1 + c_1(1 + c_2(1 + c_3 + c_3 c_4)))
      + 12 c_0 ( 4 + c_1 (4 + c_2 (3 + c_3 (2+c_4))) )
      + 15 c_0^2 ( 6 + 3 c_1 (4 + c_2 (3 + c_3 (2+c_4)))
            + c_1^2 (6 + 3 c_2 (3 + c_3 (2+c_4)) + c_2^2 (3 + c_3^2 (1+c_4) + 2 c_3 (2+c_4))) )
      + 20 c_0^3 ( 4 + 3 c_1 (4 + c_2 (3 + c_3 (2+c_4)))
            + 2 c_1^2 (6 + 3 c_2 (3 + c_3 (2+c_4)) + c_2^2 (3 + c_3^2 (1+c_4) + 2 c_3 (2+c_4)))
            + c_1^3 (4 + c_2^3 (1+c_3)(1 + c_3(1+c_4)) + 3 c_2 (3 + c_3 (2+c_4)) + 2 c_2^2 (3 + c_3^2 (1+c_4) + 2 c_3 (2+c_4))) ) ],

C = c_3^2 c_4 (1+c_2)(1 + c_1(1+c_2))(1 + c_2(1+c_3))(1 + c_1(1 + c_2(1+c_3)))(1+c_4)(1 + c_2(1 + c_3(1+c_4)))(1 + c_1(1 + c_2(1 + c_3 + c_3 c_4))) ·
    [ 10 + 30 c_0^4 (1 + c_1(1+c_2))(1 + c_1(1 + c_2(1+c_3)))(1 + c_1(1 + c_2(1 + c_3 + c_3 c_4)))
      + 12 c_0 ( 4 + c_1 (3 + c_2 (3 + c_3 (2+c_4))) )
      + 15 c_0^2 ( 6 + 3 c_1 (3 + c_2 (3 + c_3 (2+c_4)))
            + c_1^2 (3 + 2 c_2 (3 + c_3 (2+c_4)) + c_2^2 (3 + c_3^2 (1+c_4) + 2 c_3 (2+c_4))) )
      + 20 c_0^3 ( 4 + c_1^3 (1+c_2)(1 + c_2(1+c_3))(1 + c_2(1 + c_3(1+c_4)))
            + 3 c_1 (3 + c_2 (3 + c_3 (2+c_4)))
            + 2 c_1^2 (3 + 2 c_2 (3 + c_3 (2+c_4)) + c_2^2 (3 + c_3^2 (1+c_4) + 2 c_3 (2+c_4))) ) ],

D = c_4 (1+c_1)(1+c_3)(1 + c_2(1+c_3))(1 + c_1(1 + c_2(1+c_3)))(1 + c_3(1+c_4))(1 + c_2(1 + c_3(1+c_4)))(1 + c_1(1 + c_2(1 + c_3 + c_3 c_4))) ·
    [ 10 + 30 c_0^4 (1+c_1)(1 + c_1(1 + c_2(1+c_3)))(1 + c_1(1 + c_2(1 + c_3 + c_3 c_4)))
      + 12 c_0 ( 4 + c_1 (3 + c_2 (2 + c_3 (2+c_4))) )
      + 15 c_0^2 ( 6 + 3 c_1 (3 + c_2 (2 + c_3 (2+c_4)))
            + c_1^2 (3 + c_2^2 (1+c_3)(1 + c_3(1+c_4)) + 2 c_2 (2 + c_3 (2+c_4))) )
      + 20 c_0^3 ( 4 + c_1^3 (1 + c_2(1+c_3))(1 + c_2(1 + c_3(1+c_4)))
            + 3 c_1 (3 + c_2 (2 + c_3 (2+c_4)))
            + 2 c_1^2 (3 + c_2^2 (1+c_3)(1 + c_3(1+c_4)) + 2 c_2 (2 + c_3 (2+c_4))) ) ],

E = (1+c_1)(1+c_2)(1 + c_1(1+c_2))(1+c_4)(1 + c_3(1+c_4))(1 + c_2(1 + c_3(1+c_4)))(1 + c_1(1 + c_2(1 + c_3 + c_3 c_4))) ·
    [ 10 + 30 c_0^4 (1+c_1)(1 + c_1(1+c_2))(1 + c_1(1 + c_2(1 + c_3 + c_3 c_4)))
      + 12 c_0 ( 4 + c_1 (3 + c_2 (2 + c_3(1+c_4))) )
      + 15 c_0^2 ( 6 + 3 c_1 (3 + c_2 (2 + c_3(1+c_4)))
            + c_1^2 (3 + c_2^2 (1 + c_3(1+c_4)) + 2 c_2 (2 + c_3(1+c_4))) )
      + 20 c_0^3 ( 4 + c_1^3 (1+c_2)(1 + c_2(1 + c_3(1+c_4)))
            + 3 c_1 (3 + c_2 (2 + c_3(1+c_4)))
            + 2 c_1^2 (3 + c_2^2 (1 + c_3(1+c_4)) + 2 c_2 (2 + c_3(1+c_4))) ) ],

F = (1+c_1)(1+c_2)(1+c_3)(1 + c_1(1+c_2))(1 + c_2(1+c_3))(1 + c_1(1 + c_2(1+c_3))) ·
    [ 10 + 30 c_0^4 (1+c_1)(1 + c_1(1+c_2))(1 + c_1(1 + c_2(1+c_3)))
      + 12 c_0 ( 4 + c_1 (3 + c_2 (2+c_3)) )
      + 15 c_0^2 ( 6 + 3 c_1 (3 + c_2 (2+c_3)) + c_1^2 (3 + c_2^2 (1+c_3) + 2 c_2 (2+c_3)) )
      + 20 c_0^3 ( 4 + c_1^3 (1+c_2)(1 + c_2(1+c_3)) + 3 c_1 (3 + c_2 (2+c_3))
            + 2 c_1^2 (3 + c_2^2 (1+c_3) + 2 c_2 (2+c_3)) ) ],

G = 60 c_0^5 c_1^4 c_2^3 c_3^2 c_4 (1+c_1)(1+c_2)(1+c_3)(1+c_4)
    (1 + c_1 + c_1 c_2)(1 + c_2 + c_2 c_3)(1 + c_3 + c_3 c_4)
    (1 + c_1 + c_1 c_2 + c_1 c_2 c_3)(1 + c_2 + c_2 c_3 + c_2 c_3 c_4)
    (1 + c_1 + c_1 c_2 + c_1 c_2 c_3 + c_1 c_2 c_3 c_4).
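By construction, the k = 2 formula above integrates exactly any right-hand side that is linear in x, on any uneven grid; a small Python sketch with y′ = 2x, y(0) = 0, whose solution is y = x^2 (grid values chosen arbitrarily):

```python
def vs_ab2_step(y_n, f_n, f_nm1, h_np1, h_n):
    """One step of the variable-step 2-step Adams-Bashforth formula."""
    c0 = h_n / h_np1
    return y_n + h_np1 / (2 * c0) * ((1 + 2 * c0) * f_n - f_nm1)

x = [0.0, 0.1, 0.25, 0.45, 0.8]          # arbitrary uneven grid
f = lambda t, y: 2 * t                   # y' = 2x  ->  y = x**2
y = [0.0, 0.01]                          # exact starting values
for n in range(1, len(x) - 1):
    y.append(vs_ab2_step(y[n], f(x[n], y[n]), f(x[n - 1], y[n - 1]),
                         x[n + 1] - x[n], x[n] - x[n - 1]))

print(max(abs(y[i] - x[i] ** 2) for i in range(len(x))))  # -> 0.0 up to rounding
```

This is the usual polynomial-exactness check for a 2-step formula: the interpolant through two points reproduces a linear f exactly, whatever the step-size ratios.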

5. The Adams-Moulton method of fixed stepsize

The k-step Adams-Moulton method of fixed step h = x_{n+1} − x_n is also usually
expressed in terms of backward differences (see e.g. [5], p.85 or [3], p.194):

(5.0.17)   y_{n+1} = y_n + h Σ_{m=0}^{k} γ_m ∇^m f_{n+1},

where x_i = x_0 + i h, and it is assumed that we know the numerical approximations
y_n, y_{n−1}, …, y_{n−(k−1)} to the exact solution of the differential equation (0.0.1),
and the values f_i = f(x_i, y_i) for i = n−(k−1), …, n. The γ_m are constants independent
of f and k, and can be readily calculated from a recurrence relation easily obtained
using Euler's method of generating functions (see e.g. [2], p.359).
It can be shown (see [5], p.86) that this method has order k+1 and error constant γ_{k+1}.
The formula (5.0.17) may be written as

(5.0.18)   y_{n+1} = y_n + h γ(k+1) B̄_{k+1}^T,

with γ(k+1) and B̄_{k+1} the (k+1)-vectors

γ(k+1) = ( γ_0, γ_1, …, γ_k ),
B̄_{k+1} = ( ∇^0 f_{n+1}, ∇^1 f_{n+1}, …, ∇^k f_{n+1} ).
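The γ_m obey a recurrence analogous to that of the explicit case, namely γ_0 = 1 and γ_m + γ_{m−1}/2 + … + γ_0/(m+1) = 0 for m ≥ 1; a Python sketch in exact arithmetic (the function name is ours):

```python
from fractions import Fraction

def adams_moulton_gammas(count):
    """First `count` coefficients gamma_m of the Adams-Moulton
    backward-difference form (5.0.17)."""
    g = [Fraction(1)]
    for m in range(1, count):
        g.append(-sum(g[j] / (m + 1 - j) for j in range(m)))
    return g

print(adams_moulton_gammas(5))
# gamma_0 .. gamma_4 = 1, -1/2, -1/12, -1/24, -19/720
```

The last value, −19/720, is the error constant of the 4-step method used in Section 7.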
Now, in order to pass from the fixed step-size method to a variable step-size one,
we expand, as we did in Section 1, the backward differences in the formula (5.0.18),
in this case at the point x_{n+1}, and truncating the series after k+1 terms we obtain

(5.0.19)   B̄_{k+1}^T = M_{k+1} D_{k+1}^T + O(h^{k+1}),

where M_{k+1} denotes the same matrix as in (1.0.4) but now of order k+1, with
elements (M_{k+1})_{ij} = Σ_{m=0}^{i−1} (−1)^{m+j−1} C(i−1, m) (m h)^{j−1}; D_{k+1} is the
(k+1)-vector of derivatives of f,

D_{k+1} = ( f(x_{n+1}), f′(x_{n+1})/1!, f″(x_{n+1})/2!, …, f^{(k)}(x_{n+1})/k! ),

and O(h^{k+1}) is the (k+1)-vector O(h^{k+1}) = ( O(h^{k+1}), …, O(h^{k+1}) ).
It follows that the k-step Adams-Moulton method with fixed step h may be
written

(5.0.20)   y_{n+1} = y_n + h γ(k+1) M_{k+1} D_{k+1}^T + O(h^{k+2}),

with M_{k+1} and D_{k+1}^T as in (5.0.19).

6. Explicit expression of the variable step-size Adams-Moulton method

In this section we reproduce the analysis of Section 2, but now for the implicit
formulae.
Suppose we want to advance from x_n to x_{n+1} = x_n + h. Let us consider k − 1
unequally spaced points preceding x_n, together with x_n and x_{n+1}, where the bar
again distinguishes them from the equally spaced points x_i. Let these points be
written x̄_{n−(k−1)}, …, x̄_{n−1}, so that x̄_i = x̄_{i−1} + h_i for i = n−(k−2), …, n.
Identifying x̄_n = x_n and x̄_{n+1} = x_{n+1}, we set the k+1 values Ĥ_i = x̄_i − x_{n+1}
for i = n−(k−1), …, n+1. Consequently we have

(6.0.21)
Ĥ_{n+1} = x_{n+1} − x_{n+1} = 0,
Ĥ_n = x_n − x_{n+1} = −h_{n+1},
Ĥ_{n−1} = x̄_{n−1} − x_{n+1} = −(h_{n+1} + h_n),
⋮
Ĥ_{n−(k−1)} = x̄_{n−(k−1)} − x_{n+1} = −(h_{n+1} + h_n + … + h_{n−(k−2)}).
The next results will allow us to approximate the vector D_{k+1}^T in (5.0.20) using
the divided differences at x̄_i, i = n−(k−1), …, n+1, and the elementary symmetric
functions in the above Ĥ_i. For the implicit case we have two theorems analogous to
those of Section 2:

Theorem 6.0.10. If Ĥ = max{ |Ĥ_{n+1}|, |Ĥ_n|, …, |Ĥ_{n−(k−1)}| }, with the Ĥ_i as in
(6.0.21), then it holds that

( f[x̄_{n+1}], f[x̄_{n+1}, x̄_n], …, f[x̄_{n+1}, …, x̄_{n−(k−1)}] )^T = P̂_{k+1} D_{k+1}^T + O(Ĥ^{k+1}),

where P̂_{k+1} is analogous to the matrix P_k of Theorem 2.0.6, but now of order k+1:

          ( h_{1,0}  h_{1,1}  …  h_{1,k}   )
P̂_{k+1} = (   0      h_{2,0}  …  h_{2,k−1} ),
          (   ⋮        ⋮      …     ⋮      )
          (   0        0      …  h_{k+1,0} )

with the complete symmetric polynomials h_{i,j} expressed in the first i variables of
Ĥ_{n+1}, Ĥ_n, …, Ĥ_{n−(k−1)}; D_{k+1}^T is the same vector as in (5.0.20), and O(Ĥ^{k+1})
is the (k+1)-vector O(Ĥ^{k+1}) = ( O(Ĥ^{k+1}), …, O(Ĥ^{k+1}) )^T.

Proof. It is similar to that of Theorem 2.0.6. □

Theorem 6.0.11. The matrix P̂_{k+1} of Theorem 6.0.10 has an inverse that may
be expressed in terms of the elementary symmetric polynomials in the variables
Ĥ_{n+1}, …, Ĥ_{n−(k−2)}, namely

          ( 1   −e_{1,1}   e_{2,2}   …   (−1)^k e_{k,k}       )
Ŝ_{k+1} = ( 0      1      −e_{2,1}   …   (−1)^{k−1} e_{k,k−1} ).
          ( ⋮      ⋮          ⋮      …            ⋮           )
          ( 0      0          0      …            1           )

Proof. It is similar to that of Theorem 2.0.7. □

Now, in view of the above theorems, we obtain the following result:

D_{k+1}^T = Ŝ_{k+1} ( f[x̄_{n+1}], f[x̄_{n+1}, x̄_n], …, f[x̄_{n+1}, …, x̄_{n−(k−1)}] )^T + O(Ĥ^{k+1}),

and thus, setting h = h_{n+1} = x_{n+1} − x_n = −Ĥ_n, the formula in (5.0.20) may be
rewritten in the form

(6.0.22)   y_{n+1} = y_n + h_{n+1} γ(k+1) M_{k+1} Ŝ_{k+1} ( f[x̄_{n+1}], f[x̄_{n+1}, x̄_n], …, f[x̄_{n+1}, …, x̄_{n−(k−1)}] )^T + O(Ĥ^{k+2}).
Remark 6.0.12. In analogy with Remark 3.0.8, from (6.0.22) we can easily obtain
another formulation of the variable step-size Adams-Moulton method. The formula,
similar to that in (3.0.9), may be expressed as

y_{n+1} = y_n + h_{n+1} Σ_{j=0}^{k} g_j(n+1) Φ_j(n+1),

where the g_j(n+1) are defined by

g_j(n+1) = (1/h_{n+1}) ∫_{x_n}^{x_{n+1}} Π_{i=0}^{j−1} ( (t − x_{n+1−i}) / (x_{n+1} − x_{n−i}) ) dt,

and the Φ_j(n+1) by

(6.0.23)   Φ_j(n+1) = Π_{i=0}^{j−1} (x_{n+1} − x_{n−i}) · f[x_{n+1}, …, x_{n+1−j}].

Expressing the Φ_j(n+1) in matrix form we obtain:

Φ(n+1)^T = ( Φ_0(n+1), …, Φ_k(n+1) )^T = Ĝ_{k+1} F_{k+1}^T,

where Ĝ_{k+1} is the (k+1) × (k+1) diagonal matrix

Ĝ_{k+1} = diag( 1, −Ĥ_n, …, Π_{j=0}^{k−1} (−Ĥ_{n−j}) ),

with elements (Ĝ_{k+1})_{ii} = Π_{j=0}^{i−2} (−Ĥ_{n−j}), and F_{k+1} is the (k+1)-vector

F_{k+1} = ( f[x̄_{n+1}], f[x̄_{n+1}, x̄_n], …, f[x̄_{n+1}, …, x̄_{n−(k−1)}] ).

Now, the matrix Ĝ_{k+1} can be easily inverted, and the formula (6.0.22) results in

y_{n+1} = y_n + h_{n+1} γ(k+1) M_{k+1} Ŝ_{k+1} Ĝ_{k+1}^{−1} Φ(n+1)^T + O(Ĥ^{k+2}).

7. Some particular cases of the V.S. Adams-Moulton formulas

We illustrate the procedure for k = 4, to avoid large expressions. So, we consider
the grid points x̄_{n−3}, x̄_{n−2}, x̄_{n−1}, x_n, x_{n+1}, with

(7.0.24)
Ĥ_{n+1} = x_{n+1} − x_{n+1} = 0,
Ĥ_n = x_n − x_{n+1} = −h_{n+1},
Ĥ_{n−1} = x̄_{n−1} − x_{n+1} = −(h_{n+1} + h_n),
Ĥ_{n−2} = x̄_{n−2} − x_{n+1} = −(h_{n+1} + h_n + h_{n−1}),
Ĥ_{n−3} = x̄_{n−3} − x_{n+1} = −(h_{n+1} + h_n + h_{n−1} + h_{n−2}),

and it is assumed that the back values y_{n−3}, y_{n−2}, y_{n−1}, y_n are known, and thus,
f_{n−3}, f_{n−2}, f_{n−1}, f_n. We shall express every term in the formula (6.0.22) explicitly
for k = 4:
γ(5) is the vector of the first five coefficients of the fixed-step Adams-Moulton
formula (5.0.17), that is,

γ(5) = ( 1, −1/2, −1/12, −1/24, −19/720 ).
M_5 is the matrix that appears in (5.0.19), namely,

        ( 1   0     0      0      0     )
        ( 0   h   −h^2    h^3   −h^4    )
M_5 =   ( 0   0   2h^2   −6h^3  14h^4   ).
        ( 0   0     0     6h^3  −36h^4  )
        ( 0   0     0      0    24h^4   )

Ŝ_5 is the matrix that appears in Theorem 6.0.11, in this case,

        ( 1   0     0            0                          0                                  )
        ( 0   1   −Ĥ_n     Ĥ_n Ĥ_{n−1}              −Ĥ_n Ĥ_{n−1} Ĥ_{n−2}                       )
Ŝ_5 =   ( 0   0     1    −Ĥ_n − Ĥ_{n−1}    Ĥ_n Ĥ_{n−1} + Ĥ_n Ĥ_{n−2} + Ĥ_{n−1} Ĥ_{n−2}        ).
        ( 0   0     0            1              −Ĥ_n − Ĥ_{n−1} − Ĥ_{n−2}                       )
        ( 0   0     0            0                          1                                  )
Evaluating directly the vector of divided differences in (6.0.22) and setting

c_j = h_{n−j} / h_{n−j+1},   j = 0, …, k−2,

for the step-size ratios, the variable-step 4-step Adams-Moulton formula can be written
in terms of the values y_j and f_j = f(x_j, y_j) as

(7.0.25)   y_{n+1} = y_n + (h_{n+1}/F̂) ( Â f_{n+1} + B̂ f_n − Ĉ f_{n−1} + D̂ f_{n−2} − Ê f_{n−3} ),

with

Â = c_0^3 c_1^2 c_2 (1+c_1)(1+c_2)(1 + c_1(1+c_2))
    [ 12 + 15 c_0 (3 + c_1(2+c_2)) + 20 c_0^2 (3 + c_1^2 (1+c_2) + 2 c_1 (2+c_2))
      + 30 c_0^3 (1+c_1)(1 + c_1(1+c_2)) ],

B̂ = c_1^2 c_2 (1+c_0)(1+c_2)(1 + c_0(1+c_1))(1 + c_0(1 + c_1(1+c_2)))
    [ 3 + 5 c_0 (3 + c_1(2+c_2)) + 10 c_0^2 (3 + c_1^2 (1+c_2) + 2 c_1 (2+c_2))
      + 30 c_0^3 (1+c_1)(1 + c_1(1+c_2)) ],

Ĉ = c_2 (1+c_1)(1 + c_0(1+c_1))(1 + c_1(1+c_2))(1 + c_0(1 + c_1(1+c_2)))
    [ 3 + 10 c_0^2 (1+c_1)(1 + c_1(1+c_2)) + 5 c_0 (2 + c_1(2+c_2)) ],

D̂ = (1+c_0)(1+c_2)(1 + c_1(1+c_2))(1 + c_0(1 + c_1(1+c_2)))
    [ 3 + 10 c_0^2 (1 + c_1(1+c_2)) + 5 c_0 (2 + c_1(1+c_2)) ],

Ê = (1+c_0)(1+c_1)(1 + c_0(1+c_1)) [ 3 + 10 c_0^2 (1+c_1) + 5 c_0 (2+c_1) ],

F̂ = 60 c_0^3 c_1^2 c_2 (1+c_0)(1+c_1)(1+c_2)(1 + c_0 + c_0 c_1)
    (1 + c_1 + c_1 c_2)(1 + c_0 + c_0 c_1 + c_0 c_1 c_2).
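Again, at c_0 = c_1 = c_2 = 1 the ratios Â/F̂, …, Ê/F̂ must reduce to the fixed-step Adams-Moulton coefficients 251/720, 646/720, 264/720, 106/720, 19/720; a Python sketch in exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction

def vs_am4_coeffs(c0, c1, c2):
    """Coefficients of the variable-step 4-step Adams-Moulton formula (7.0.25)."""
    Ah = c0**3 * c1**2 * c2 * (1 + c1) * (1 + c2) * (1 + c1*(1 + c2)) * (
        12 + 15*c0*(3 + c1*(2 + c2))
        + 20*c0**2*(3 + c1**2*(1 + c2) + 2*c1*(2 + c2))
        + 30*c0**3*(1 + c1)*(1 + c1*(1 + c2)))
    Bh = c1**2 * c2 * (1 + c0) * (1 + c2) * (1 + c0*(1 + c1)) \
        * (1 + c0*(1 + c1*(1 + c2))) * (
        3 + 5*c0*(3 + c1*(2 + c2))
        + 10*c0**2*(3 + c1**2*(1 + c2) + 2*c1*(2 + c2))
        + 30*c0**3*(1 + c1)*(1 + c1*(1 + c2)))
    Ch = c2 * (1 + c1) * (1 + c0*(1 + c1)) * (1 + c1*(1 + c2)) \
        * (1 + c0*(1 + c1*(1 + c2))) * (
        3 + 10*c0**2*(1 + c1)*(1 + c1*(1 + c2)) + 5*c0*(2 + c1*(2 + c2)))
    Dh = (1 + c0) * (1 + c2) * (1 + c1*(1 + c2)) * (1 + c0*(1 + c1*(1 + c2))) * (
        3 + 10*c0**2*(1 + c1*(1 + c2)) + 5*c0*(2 + c1*(1 + c2)))
    Eh = (1 + c0) * (1 + c1) * (1 + c0*(1 + c1)) \
        * (3 + 10*c0**2*(1 + c1) + 5*c0*(2 + c1))
    Fh = 60 * c0**3 * c1**2 * c2 * (1 + c0) * (1 + c1) * (1 + c2) \
        * (1 + c0 + c0*c1) * (1 + c1 + c1*c2) * (1 + c0 + c0*c1 + c0*c1*c2)
    return Ah, Bh, Ch, Dh, Eh, Fh

one = Fraction(1)
Ah, Bh, Ch, Dh, Eh, Fh = vs_am4_coeffs(one, one, one)
print([v / Fh for v in (Ah, Bh, Ch, Dh, Eh)])
# ratios 251/720, 646/720, 264/720, 106/720, 19/720
```

As in the explicit case, the same function evaluated at non-unit ratios gives the variable-step coefficients directly.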
Similarly, for k = 1, 2, 3, one obtains the implicit formulas:

k = 1:  y_{n+1} = y_n + (h_{n+1}/2) ( f_{n+1} + f_n ),

k = 2:  y_{n+1} = y_n + (h_{n+1}/(6 c_0 (1+c_0))) ( c_0 (2 + 3 c_0) f_{n+1} + (1 + 4 c_0 + 3 c_0^2) f_n − f_{n−1} ),

k = 3:  y_{n+1} = y_n + (h_{n+1}/(12 c_0 c_1 (1+c_0)(1+c_1)(1 + c_0 + c_0 c_1)))
            ( c_0^2 c_1 (1+c_1)(3 + 4 c_0 (2+c_1) + 6 c_0^2 (1+c_1)) f_{n+1}
            + (1+c_0) c_1 (1 + c_0(1+c_1))(1 + 6 c_0^2 (1+c_1) + 2 c_0 (2+c_1)) f_n
            − (1+c_1)(1 + c_0(1+c_1))(1 + 2 c_0 (1+c_1)) f_{n−1}
            + (1+c_0)(1 + 2 c_0) f_{n−2} ).
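Since the k = 2 implicit formula is based on a quadratic interpolant, it integrates exactly any right-hand side that is quadratic in x, on any uneven grid; a Python sketch with y′ = 3x^2, whose solution is y = x^3 (the step is written explicitly here only because f does not depend on y; grid values chosen arbitrarily):

```python
def vs_am2_step(y_n, f_np1, f_n, f_nm1, h_np1, h_n):
    """One step of the variable-step 2-step Adams-Moulton formula."""
    c0 = h_n / h_np1
    return y_n + h_np1 / (6 * c0 * (1 + c0)) * (
        c0 * (2 + 3 * c0) * f_np1 + (1 + 4 * c0 + 3 * c0 ** 2) * f_n - f_nm1)

x = [0.0, 0.2, 0.45, 0.6, 1.0]           # arbitrary uneven grid
f = lambda t: 3 * t ** 2                 # y' = 3x^2  ->  y = x**3
y = [0.0, 0.008]                         # exact starting values
for n in range(1, len(x) - 1):
    y.append(vs_am2_step(y[n], f(x[n + 1]), f(x[n]), f(x[n - 1]),
                         x[n + 1] - x[n], x[n] - x[n - 1]))

err = max(abs(y[i] - x[i] ** 3) for i in range(len(x)))
print(err < 1e-12)  # -> True
```

In general, of course, f depends on y and each step requires solving an implicit equation, typically with a predictor-corrector pairing of the explicit and implicit variable-step formulas.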

References

[1] Crouzeix, M. and Mignot, A.L.: 1984, "Analyse numérique des équations différentielles", Masson, Paris.
[2] Hairer, E., Nørsett, S.P. and Wanner, G.: 1987, "Solving Ordinary Differential Equations I", Springer, Berlin.
[3] Henrici, P.: 1962, "Discrete Variable Methods in Ordinary Differential Equations", John Wiley, New York.
[4] Krogh, F.T.: 1973, "Changing stepsize in the integration of differential equations using modified divided differences", Lecture Notes in Mathematics, 362, 22-71.
[5] Lambert, J.D.: 1991, "Numerical Methods for Ordinary Differential Systems", John Wiley, England.
[6] Vigo-Aguiar, J.: 1999, "An approach to variable coefficients multistep methods for special differential equations", Int. J. Appl. Math., 1 (8), 911-921.
[7] Shampine, L.F. and Gordon, M.K.: 1975, "Computer Solution of Ordinary Differential Equations. The Initial Value Problem", Freeman, San Francisco, CA.

Department of Applied Mathematics, Salamanca University, Spain
