
Acceleration Methods
for Slowly Convergent Sequences
and their Applications

Naoki Osada

January 1993
CONTENTS

Introduction

I. Slowly convergent sequences
1. Asymptotic preliminaries
1.1 Order symbols and asymptotic expansions
1.2 The Euler-Maclaurin summation formula
2. Slowly convergent sequences
2.1 Order of convergence
2.2 Linearly convergent sequences
2.3 Logarithmically convergent sequences
2.4 Criterion of linearly or logarithmically convergent sequences
3. Infinite series
3.1 Alternating series
3.2 Logarithmically convergent series
4. Numerical integration
4.1 Semi-infinite integrals with positive monotonically decreasing integrands
4.2 Semi-infinite integrals with oscillatory integrands
4.3 Improper integrals with endpoint singularities

II. Acceleration methods for scalar sequences
5. Basic concepts
5.1 A sequence transformation, convergence acceleration, and extrapolation
5.2 A classification of convergence acceleration methods
6. The E-algorithm
6.1 The derivation of the E-algorithm
6.2 The acceleration theorems of the E-algorithm
7. The Richardson extrapolation
7.1 The birth of the Richardson extrapolation
7.2 The derivation of the Richardson extrapolation
7.3 Generalizations of the Richardson extrapolation
8. The ε-algorithm
8.1 The Shanks transformation
8.2 The ε-algorithm
8.3 The asymptotic properties of the ε-algorithm
8.4 Numerical examples of the ε-algorithm
9. Levin's transformations
9.1 The derivation of the Levin T transformation
9.2 The convergence theorem of the Levin transformations
9.3 The d-transformation
10. The Aitken Δ² process and its modifications
10.1 The acceleration theorem of the Aitken Δ² process
10.2 The derivation of the modified Aitken Δ² formula
10.3 The automatic modified Aitken formula
11. Lubkin's W transformation
11.1 The derivation of Lubkin's W transformation
11.2 The exact and acceleration theorems of the W transformation
11.3 The iteration of the W transformation
11.4 Numerical examples of the iterated W transformation
12. The ρ-algorithm
12.1 The reciprocal differences and the ρ-algorithm
12.2 The asymptotic behavior of the ρ-algorithm
13. Generalizations of the ρ-algorithm
13.1 The generalized ρ-algorithm
13.2 The automatic generalized ρ-algorithm
14. Comparisons of acceleration methods
14.1 Sets of sequences
14.2 Test series
14.3 Numerical results
14.4 Extraction
14.5 Conclusions
15. Application to numerical integration
15.1 Introduction
15.2 Application to semi-infinite integrals
15.3 Application to improper integrals

Conclusions
References
Appendix
A. Asymptotic formulae of the Aitken Δ² process
B. An asymptotic formula of Lubkin's W transformation
FORTRAN program
The automatic generalized ρ-algorithm
The automatic modified Aitken Δ² formula

INTRODUCTION
Sequence transformations
Convergent numerical sequences occur quite often in natural science and engineering. Some of these sequences converge so slowly that their limits cannot be obtained without a suitable convergence acceleration method. This is the raison d'être of the study of convergence acceleration methods.

A convergence acceleration method is usually represented as a sequence transformation. Let S and T be sets of real sequences. A mapping T : S → T is called a sequence transformation, and we write (t_n) = T((s_n)) for (s_n) ∈ S. Let T : S → T be a sequence transformation and (s_n) ∈ S with limit s. T accelerates (s_n) if

$$ \lim_{n\to\infty} \frac{t_n - s}{s_{\sigma(n)} - s} = 0, $$

where σ(n) is the greatest index used in the computation of t_n.


An illustration: the Aitken Δ² process

The most famous sequence transformation is the Aitken Δ² process defined by

$$ t_n = s_n - \frac{(s_{n+1}-s_n)^2}{s_{n+2}-2s_{n+1}+s_n} = s_n - \frac{(\Delta s_n)^2}{\Delta^2 s_n}, \qquad (1) $$

where (s_n) is a scalar sequence. As C. Brezinski pointed out¹, the first proposer of the Δ² process was the great Japanese mathematician Takakazu Seki (or Kōwa Seki, 1642?-1708). Seki used the Δ² process in computing π in Katsuyō Sampō vol. IV, which was edited by his disciple Murahide Araki in 1712. Let s_n be the

perimeter of the regular polygon with 2^n sides inscribed in a circle of diameter one. From
s15 = 3.14159 26487 76985 6708,
s16 = 3.14159 26523 86591 3571,
s17 = 3.14159 26532 88992 7759,
Seki computed
$$ t_{15} = s_{16} + \frac{(s_{16}-s_{15})(s_{17}-s_{16})}{(s_{16}-s_{15}) - (s_{17}-s_{16})} = 3.14159\ 26535\ 89793\ 2476, \qquad (2) $$

and he concluded π = 3.14159 26535 89.² The formula (2) is nothing but the Δ² process.

¹ C. Brezinski, History of continued fractions and Padé approximants, Springer-Verlag, Berlin, 1991, p.90.
Seki obtained seventeen-figure accuracy from s_15, s_16 and s_17, whose accuracy is less than ten figures.
Seki did not explain the reason for (2), but Yoshisuke Matsunaga (1692?-1744), a disciple of Murahide Araki, explained it in Kigenkai, an annotated edition of Katsuyō Sampō, as follows. Suppose that b = a + ar, c = a + ar + ar². Then

$$ b + \frac{(b-a)(c-b)}{(b-a)-(c-b)} = \frac{a}{1-r}, $$

the sum of the geometric series a + ar + ar² + ⋯.³
It still remains a mystery how Seki derived the Δ² process, but Seki's application can be explained as follows. Generally, if a sequence satisfies

$$ s_n \sim s + c_1 \lambda_1^n + c_2 \lambda_2^n + \cdots, $$

where 1 > λ_1 > λ_2 > ⋯ > 0, then t_n in (1) satisfies

$$ t_n \sim s + c_2 \left( \frac{\lambda_1 - \lambda_2}{1 - \lambda_1} \right)^{2} \lambda_2^n. \qquad (3) $$

This result was proved independently by J. W. Schmidt[48] and P. Wynn[61] in 1966.


Since Seki's sequence (s_n) satisfies

$$ s_n = 2^n \sin\frac{\pi}{2^n} = \pi + \sum_{j=1}^{\infty} \frac{(-1)^j \pi^{2j+1}}{(2j+1)!} \left(2^{-2j}\right)^n, $$

(3) implies that

$$ t_n \sim \pi + \frac{\pi^5}{5!} \left(\frac{1}{16}\right)^{n+1}. $$

² A. Hirayama, K. Shimodaira, and H. Hirose (eds.), Takakazu Seki's collected works, English translation by J. Sudo, Osaka Kyoiku Tosho, 1974, pp.57-58.
³ M. Fujiwara, History of mathematics in Japan before the Meiji era, vol. II (in Japanese), under the auspices of the Japan Academy, Iwanami, 1956, p.180.

In 1926, A. C. Aitken[1] iteratively applied the Δ² process in finding the dominant root of an algebraic equation, and so the process is now named after him. He used (1) repeatedly as follows:

$$ T_0^{(n)} = s_n, \qquad n \in \mathbf{N}, $$
$$ T_{k+1}^{(n)} = T_k^{(n)} - \frac{\left(T_k^{(n+1)} - T_k^{(n)}\right)^2}{T_k^{(n+2)} - 2T_k^{(n+1)} + T_k^{(n)}}, \qquad k = 0, 1, \ldots;\ n \in \mathbf{N}. $$

This algorithm is called the iterated Aitken Δ² process.
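As a concrete illustration, here is a short Python sketch of one pass of (1) and of its iteration, applied to Seki's sequence (the thesis' own programs are the FORTRAN subroutines in the appendix; this sketch is an editorial illustration):

```python
import math

def aitken(s):
    """One pass of the Aitken delta^2 process, formula (1)."""
    return [s[n] - (s[n + 1] - s[n]) ** 2 / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

# Seki's sequence: perimeter of the regular 2^n-gon inscribed in a circle
# of diameter one, s_n = 2^n sin(pi / 2^n) -> pi, with error O((1/4)^n)
s = [2 ** n * math.sin(math.pi / 2 ** n) for n in range(1, 13)]

t = aitken(s)      # one pass: error drops from O((1/4)^n) to O((1/16)^n)
tt = aitken(t)     # the iterated Aitken delta^2 process
```

One pass already turns twelve terms with seven-digit accuracy into an approximation of π accurate to more than eleven digits.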


Derivation of sequence transformations

Many sequence transformations are designed to be exact for sequences of the form

$$ s_n = s + c_1 g_1(n) + \cdots + c_k g_k(n), \qquad n \in \mathbf{N}, \qquad (4) $$

where s, c_1, …, c_k are unknown constants and g_j(n) (j = 1, …, k) are known functions of n. Since s is the solution of the system of linear equations

$$ s_{n+i} = s + c_1 g_1(n+i) + \cdots + c_k g_k(n+i), \qquad i = 0, \ldots, k, $$

the sequence transformation (s_n) ↦ (t_n) defined by

$$ t_n = E_k^{(n)} = \frac{\begin{vmatrix} s_n & s_{n+1} & \cdots & s_{n+k} \\ g_1(n) & g_1(n+1) & \cdots & g_1(n+k) \\ \vdots & \vdots & & \vdots \\ g_k(n) & g_k(n+1) & \cdots & g_k(n+k) \end{vmatrix}}{\begin{vmatrix} 1 & 1 & \cdots & 1 \\ g_1(n) & g_1(n+1) & \cdots & g_1(n+k) \\ \vdots & \vdots & & \vdots \\ g_k(n) & g_k(n+1) & \cdots & g_k(n+k) \end{vmatrix}} $$

is exact for the model sequence (4). This sequence transformation includes many famous sequence transformations:
(i) The Aitken Δ² process: k = 1 and g_1(n) = Δs_n.
(ii) The Richardson extrapolation: g_j(n) = x_n^j, where (x_n) is an auxiliary sequence.
(iii) The Shanks transformation: g_j(n) = Δs_{n+j-1}.
(iv) The Levin u-transformation: g_j(n) = n^{1-j} Δs_{n-1}.

In 1979 and 1980, T. Håvie[20] and C. Brezinski[10] independently gave a recursive algorithm for the computation of E_k^{(n)}, which is called the E-algorithm.
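Numerically, E_k^{(n)} is more conveniently obtained by solving the linear system than by evaluating the two determinants. The following Python sketch (an editorial illustration; the recursive E-algorithm itself is treated in Section 6) does exactly that:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    m = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (M[r][m] - sum(M[r][c] * x[c] for c in range(r + 1, m))) / M[r][r]
    return x

def E(s, g, k, n):
    """E_k^(n): solve s_{n+i} = s + c_1 g_1(n+i) + ... + c_k g_k(n+i),
    i = 0..k, and return the unknown s.  g is a list of functions g_j."""
    A = [[1.0] + [gj(n + i) for gj in g[:k]] for i in range(k + 1)]
    b = [s[n + i] for i in range(k + 1)]
    return solve(A, b)[0]

# exact on the model sequence (4): s_n = 5 + 2*(1/2)^n + 3*(1/4)^n
model = [5 + 2 * 0.5 ** n + 3 * 0.25 ** n for n in range(10)]
g = [lambda n: 0.5 ** n, lambda n: 0.25 ** n]
```

With k = 1 and g_1(n) = Δs_n the solution coincides with the Aitken Δ² process (1).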

By construction, the E-algorithm accelerates sequences having an asymptotic expansion of the form

$$ s_n \sim s + \sum_{j=1}^{\infty} c_j g_j(n), \qquad (5) $$

where s, c_1, c_2, … are unknown constants and (g_j(n)) is a known asymptotic scale. More precisely, in 1990, A. Sidi[53] proved that if the E-algorithm is applied to a sequence satisfying (5), then for fixed k,

$$ \frac{E_k^{(n)} - s}{E_{k-1}^{(n)} - s} = O\!\left( \frac{g_{k+1}(n)}{g_k(n)} \right), \qquad \text{as } n \to \infty. $$

Some sequence transformations are designed to accelerate sequences having a certain asymptotic property. For example, suppose (s_n) satisfies

$$ \lim_{n\to\infty} \frac{s_{n+1} - s}{s_n - s} = \lambda. \qquad (6) $$

The Aitken Δ² process is also obtained by solving

$$ \frac{s_{n+2} - s}{s_{n+1} - s} = \frac{s_{n+1} - s}{s_n - s} $$

for the unknown s. Such a method of obtaining a sequence transformation from a formula with a limit was proposed by C. Kowalewski[25] in 1981 and is designated the technique du sous-ensemble, or TSE for short. Recently, the TSE has been formulated by N. Osada[40].

When -1 ≤ λ < 1 and λ ≠ 0 in (6), the sequence (s_n) is said to be a linearly convergent sequence. In 1964, P. Henrici[21] proved that the Δ² process accelerates any linearly convergent sequence. When |λ| > 1 in (6), the sequence (s_n) diverges but (t_n) converges to s. In this case s is called the antilimit of (s_n).

For particular asymptotic scales (g_j(n)), such as g_j(n) = n^{θ+1-j} or g_j(n) = λ^n n^{θ+1-j} with θ < 0 and -1 ≤ λ < 1, various sequence transformations which cannot be represented as the E-algorithm have been constructed.
A brief history: sequence transformations and asymptotic expansions

Here, we give a history of sequence transformations for a sequence such that

$$ s_n \sim s + n^{\theta} \sum_{j=0}^{\infty} \frac{c_j}{n^j}, \qquad (7) $$

where θ < 0 and c_0 (≠ 0), c_1, … are constants independent of n.

According to K. Knopp[24, p.240], the first sequence transformation was indicated by J. Stirling in 1730. Stirling derived the asymptotic expansion of Σ_{i=1}^n log(1 + ia) and a recursive procedure for the coefficients in its expansion[5, p.156]. The Euler-Maclaurin summation formula, which was discovered by L. Euler and C. Maclaurin independently in about 1740, was a quite important contribution to the study of sequence transformations. In 1755, using this formula, L. Euler found

$$ 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \sim \log n + \gamma + \frac{1}{2n} - \sum_{j=1}^{\infty} \frac{B_{2j}}{2j\, n^{2j}}, \qquad (8) $$

where γ is the Euler constant and the B_{2j} are the Bernoulli numbers. Since the sequence Σ_{i=1}^n 1/i − log n satisfies (7), the formula (8) is the earliest acceleration of a logarithmic sequence satisfying (7).
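Formula (8) is easy to check numerically. The sketch below (an editorial illustration) estimates the Euler constant from only ten terms of the harmonic series:

```python
import math

# Bernoulli numbers B_2, B_4, B_6 (see (1.13) in Section 1.2)
B = {2: 1.0 / 6, 4: -1.0 / 30, 6: 1.0 / 42}

def euler_constant(n, p=3):
    """gamma ~ H_n - log n - 1/(2n) + sum_{j<=p} B_{2j}/(2j n^{2j}), from (8)."""
    h = sum(1.0 / i for i in range(1, n + 1))
    g = h - math.log(n) - 1.0 / (2 * n)
    for j in range(1, p + 1):
        g += B[2 * j] / (2 * j * n ** (2 * j))
    return g
```

euler_constant(10) agrees with γ = 0.57721 56649 … to about ten decimals, while the unaccelerated H_10 − log 10 is off by roughly 0.05.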
For the partial sum s_n = Σ_{i=1}^n a_i of an infinite series, it is convenient to consider the asymptotic expansion of a_n/a_{n-1} = Δs_{n-1}/Δs_{n-2}. In 1936, W. G. Bickley and J. C. P. Miller[2] considered an acceleration method for a slowly convergent series of positive terms such that

$$ \frac{a_n}{a_{n-1}} \sim 1 - \frac{A_1}{n} + \frac{A_2}{n^2} + \frac{A_3}{n^3} + \cdots, \qquad (9) $$

where A_1 (> 1), A_2, A_3, … are constants. We note that an asymptotic expansion (7) implies (9). They assumed

$$ s - s_n \sim \Delta s_{n-1} \left( \beta_{-1} n + \beta_0 + \frac{\beta_1}{n} + \frac{\beta_2}{n^2} + \cdots \right), $$

and determined β_{-1}, β_0, β_1, … by using A_1, A_2, …. The Bickley-Miller method requires the coefficients A_1, A_2, … in (9), so it is not applicable to a sequence without an explicit form of s_n.
In 1952, S. Lubkin[29] studied the Δ² process and proposed the W transformation. He proved that the W transformation accelerates any convergent sequence satisfying

$$ \frac{\Delta s_n}{\Delta s_{n-1}} \sim \alpha_0 + \frac{\alpha_1}{n} + \frac{\alpha_2}{n^2} + \cdots, $$

where α_0, α_1, … are constants, and that the Δ² process accelerates such a sequence if α_0 ≠ 1.

Much earlier, in 1927, L. F. Richardson[44] proposed the deferred approach to the limit, which is now called the Richardson extrapolation. He considered s_n = ((2n+1)/(2n-1))^n with

$$ s_n \sim e + \frac{c_2}{n^2} + \frac{c_4}{n^4} + \cdots. $$

He assumed that the errors s_n − e are proportional to n^{-2} and extrapolated s_5 + (16/9)(s_5 − s_4), or equivalently, solved

$$ \frac{s_5 - e}{s_4 - e} = \frac{16}{25}, $$

and then he obtained e = 2.71817.
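Richardson's computation can be reproduced in a few lines (an editorial sketch):

```python
def s(n):
    """s_n = ((2n+1)/(2n-1))^n, which tends to e with error about c2/n^2."""
    return ((2.0 * n + 1) / (2.0 * n - 1)) ** n

# eliminating the n^{-2} term between s_4 and s_5:
# (s_5 - e)/(s_4 - e) = 16/25  is equivalent to  e = s_5 + (16/9)(s_5 - s_4)
e_est = s(5) + (16.0 / 9) * (s(5) - s(4))
```

This yields e_est ≈ 2.71817, as Richardson found; the true value is e = 2.71828….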


In 1966, P. Wynn[61] applied his ε-algorithm to a sequence satisfying

$$ s_n \sim s + n^{-1} \sum_{j=0}^{\infty} \frac{c_j}{n^j}. \qquad (10) $$

The asymptotic expansion (10) is a special case of (7).

It was difficult to accelerate a sequence satisfying (7) until the 1950s. Lubkin's W transformation, proposed in 1952, and the θ-algorithm of Brezinski[8], proposed in 1971, can accelerate such a sequence. And when θ in (7) is a negative integer, the ρ-algorithm of Wynn[60], proposed in 1956, works well on the sequence. These transformations do not require the knowledge of θ in (7).
In 1981, P. Bjørstad, G. Dahlquist and E. Grosse[4] proposed the modified Aitken Δ² formula defined by

$$ s_0^{(n)} = s_n, $$
$$ s_{k+1}^{(n)} = s_k^{(n)} - \frac{2k+1-\theta}{2k-\theta}\,\frac{\left(s_k^{(n+1)} - s_k^{(n)}\right)\left(s_k^{(n)} - s_k^{(n-1)}\right)}{s_k^{(n+1)} - 2s_k^{(n)} + s_k^{(n-1)}}, $$

and proved that if it is applied to a sequence satisfying (7), then for fixed k,

$$ s_k^{(n)} - s = O(n^{\theta - 2k}), \qquad \text{as } n \to \infty. $$
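In Python, the modified Aitken Δ² formula reads as follows (an editorial sketch with θ supplied by the user; the thesis' own implementation is the FORTRAN subroutine in the appendix). It is applied here to the partial sums of ζ(2) = π²/6, for which θ = −1 in (7):

```python
def modified_aitken(s, theta, K):
    """K sweeps of the modified Aitken delta^2 formula with exponent theta.
    s is the list s_1, s_2, ...; each sweep shortens the list by two."""
    cur = list(s)
    for k in range(K):
        c = (2 * k + 1 - theta) / (2 * k - theta)
        cur = [cur[n] - c * (cur[n + 1] - cur[n]) * (cur[n] - cur[n - 1])
                      / (cur[n + 1] - 2 * cur[n] + cur[n - 1])
               for n in range(1, len(cur) - 1)]
    return cur

# partial sums of zeta(2): s_n ~ pi^2/6 + n^{-1}(c0 + c1/n + ...)
s, tot = [], 0.0
for i in range(1, 21):
    tot += 1.0 / i ** 2
    s.append(tot)

acc = modified_aitken(s, -1.0, 3)   # error O(n^{theta - 2k}) = O(n^{-7})
```

Twenty terms, accurate to barely two digits, yield roughly eight correct digits after three sweeps.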

In 1990, N. Osada[37] proposed the generalized ρ-algorithm defined by

$$ \rho_{-1}^{(n)} = 0, \qquad \rho_0^{(n)} = s_n, $$
$$ \rho_k^{(n)} = \rho_{k-2}^{(n+1)} + \frac{k - 1 - \theta}{\rho_{k-1}^{(n+1)} - \rho_{k-1}^{(n)}}, $$

and proved that if it is applied to a sequence satisfying (7), then for fixed k,

$$ \rho_{2k}^{(n)} - s = O\!\left((n+k)^{\theta - 2k}\right), \qquad \text{as } n \to \infty. $$
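A short Python sketch of the generalized ρ-algorithm (an editorial illustration). It is applied here to the partial sums of ζ(3/2), for which θ = −1/2 in (7) is not an integer, so Wynn's original ρ-algorithm (recovered by setting θ = −1) is not effective:

```python
def generalized_rho(s, theta, k_max):
    """rho_{-1}^{(n)} = 0, rho_0^{(n)} = s_n, and
    rho_k^{(n)} = rho_{k-2}^{(n+1)}
                  + (k - 1 - theta)/(rho_{k-1}^{(n+1)} - rho_{k-1}^{(n)}).
    Returns the column (rho_{k_max}^{(n)})_n; only even k approximate the limit."""
    prev2 = [0.0] * len(s)   # rho_{-1}
    prev1 = list(s)          # rho_0
    for k in range(1, k_max + 1):
        cur = [prev2[n + 1] + (k - 1 - theta) / (prev1[n + 1] - prev1[n])
               for n in range(len(prev1) - 1)]
        prev2, prev1 = prev1, cur
    return prev1

# partial sums of zeta(3/2) = 2.6123753486...; s_n - s ~ n^{-1/2}(c0 + ...)
s, tot = [], 0.0
for i in range(1, 21):
    tot += 1.0 / i ** 1.5
    s.append(tot)

rho6 = generalized_rho(s, -0.5, 6)
```

With only twenty terms of this very slowly convergent series (the last partial sum is still off by more than 0.4), ρ_6 already gives several correct digits.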

The modified Aitken Δ² formula and the generalized ρ-algorithm require the knowledge of the exponent θ in (7). But Osada showed that θ can be computed using these methods as follows. For a given sequence (s_n) satisfying (7), let τ_n denote

$$ \tau_n = 1 + 1 \Big/ \Delta\!\left( \frac{\Delta s_n}{\Delta^2 s_{n-1}} \right); $$

then Bjørstad, Dahlquist and Grosse[4] proved that the sequence (τ_n) has an asymptotic expansion of the form

$$ \tau_n \sim \theta + n^{-2} \sum_{j=0}^{\infty} \frac{t_j}{n^j}. $$

Thus by applying these methods with the exponent −2 to (τ_n), the exponent θ in (7) can be estimated.
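The estimate of θ can be sketched as follows (editorial illustration): for the ζ(2) partial sums, whose true exponent is θ = −1, the computed τ_n approach −1.

```python
# partial sums of zeta(2), for which theta = -1 in (7)
s, tot = [], 0.0
for i in range(1, 31):
    tot += 1.0 / i ** 2
    s.append(tot)

# r_n = Delta s_n / Delta^2 s_{n-1}, then tau_n = 1 + 1/(Delta r_n)
r = [(s[n + 1] - s[n]) / (s[n + 1] - 2 * s[n] + s[n - 1])
     for n in range(1, len(s) - 1)]
tau = [1.0 + 1.0 / (r[n + 1] - r[n]) for n in range(len(r) - 1)]
```

The last computed τ_n differs from −1 only by O(n^{-2}).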
Organization of this paper

In Chapter I, the asymptotic properties of slowly convergent scalar sequences are dealt with, and examples of sequences, which will be taken up in Chapter II as objects of application of acceleration methods, are given. In Section 1, asymptotic preliminaries, i.e., the O-symbol, the Euler-Maclaurin summation formula and so on, are introduced. In Section 2, some terminology for slowly convergent sequences is given. In Sections 3 and 4, partial sums of infinite series, and numerical integration, are taken up as slowly convergent scalar sequences.

In Chapter II, acceleration methods for scalar sequences are dealt with. The methods taken up are as follows:
(i) Methods to which the author has added some new results: the ρ-algorithm, the generalized ρ-algorithm, and the modified Aitken Δ² formula.
(ii) Other important methods: the E-algorithm, the Richardson extrapolation, the ε-algorithm, Levin's transformations, the d-transformation, the Aitken Δ² process, and the Lubkin W transformation.

For these methods, the derivation, convergence theorems or asymptotic behaviour, and numerical examples are given. For the methods mentioned in (i), details are described, but for the others only important facts are described. For other information, see Brezinski[9], Brezinski and Redivo Zaglia[11], Weniger[56][57], and Wimp[58].

In Section 14, these methods are compared using numerical examples of infinite series. In Section 15, these methods are applied to numerical integration.

For convenience's sake, FORTRAN subroutines of the automatic generalized ρ-algorithm and the automatic modified Aitken Δ² formula are appended.

The numerical computations in this paper were carried out on a NEC ACOS-610 computer at the Computer Science Center of Nagasaki Institute of Applied Science, in double precision with approximately 16 digits, unless otherwise stated.

I. Slowly convergent sequences


1. Asymptotic preliminaries
P. Henrici said, "The study of the asymptotic behaviour frequently reveals information which enables one to speed up the convergence of the algorithm" ([21, p.10]). In this section, we set out the necessary asymptotic methods.
1.1 Order symbols and asymptotic expansions
Let a and δ be a real number and a positive number, respectively. The set {x ∈ R | 0 < |x − a| < δ} is called a deleted neighbourhood of a. The open interval (a, a + δ) is called a deleted neighbourhood of a + 0. For a positive number M, the open interval (M, +∞) is called a deleted neighbourhood of +∞.

Let b be one of a, a ± 0, ±∞. Let V be a deleted neighbourhood of b. If a function f(x) defined on V satisfies lim_{x→b} f(x) = 0, then f(x) is said to be infinitesimal at b.

Suppose that f(x) and g(x) are infinitesimal at b. We write

$$ f(x) = O(g(x)) \quad \text{as } x \to b, \qquad (1.1) $$

if there exist a constant C > 0 and a deleted neighbourhood V of b such that

$$ |f(x)| \le C|g(x)|, \qquad x \in V. \qquad (1.2) $$

And we write

$$ f(x) = o(g(x)) \quad \text{as } x \to b, \qquad (1.3) $$

if for any ε > 0 there exists a deleted neighbourhood V_ε of b such that

$$ |f(x)| \le \varepsilon |g(x)|, \qquad x \in V_\varepsilon. \qquad (1.4) $$

In the rest of this subsection b is fixed and the qualifying phrase "as x → b" is omitted. Let f_1(x), f_2(x) and f_3(x) be infinitesimal at b. We write f_1(x) = f_2(x) + O(f_3(x)) if f_1(x) − f_2(x) = O(f_3(x)). Similarly, we write f_1(x) = f_2(x) + o(f_3(x)) if f_1(x) − f_2(x) = o(f_3(x)).

If f(x)/g(x) tends to unity, we write

$$ f(x) \sim g(x). \qquad (1.5) $$

Then g is called an asymptotic approximation to f.

A sequence of functions (f_n(x)) defined in a deleted neighbourhood V of b is called an asymptotic scale or an asymptotic sequence if

$$ f_{n+1}(x) = o(f_n(x)), \qquad n = 1, 2, \ldots. \qquad (1.6) $$

Let (f_n(x)) be an asymptotic scale defined in V and let f(x) be a function defined in V. If there exist constants c_1, c_2, … such that

$$ f(x) = \sum_{k=1}^{n} c_k f_k(x) + o(f_n(x)) \qquad (1.7) $$

is valid for any n ∈ N, then we write

$$ f(x) \sim \sum_{k=1}^{\infty} c_k f_k(x), \qquad (1.8) $$

and (1.8) is called an asymptotic expansion of f(x) with respect to (f_n(x)). We note that (1.8) implies f(x) ∼ c_1 f_1(x) in the sense of (1.5).

If f(x) has an asymptotic expansion (1.8), then the coefficients (c_k) are unique:

$$ c_1 = \lim_{x\to b} f(x)/f_1(x), \qquad (1.9a) $$
$$ c_n = \lim_{x\to b} \left( f(x) - \sum_{k=1}^{n-1} c_k f_k(x) \right) \Big/ f_n(x), \qquad n = 2, 3, \ldots. \qquad (1.9b) $$

When f(x) − g(x) ∼ Σ_{k=1}^∞ c_k f_k(x), we often write

$$ f(x) \sim g(x) + \sum_{k=1}^{\infty} c_k f_k(x). \qquad (1.10) $$

For asymptotic methods, see de Bruijn[12] and Olver[34].


1.2 The Euler-Maclaurin summation formula

The Euler-Maclaurin summation formula is a quite useful theorem not only for numerical integration but also for sequences and infinite series. In this subsection we review the Euler-Maclaurin formulae without proofs, which can be found in Bourbaki[5]. We begin with the Bernoulli numbers.

The Bernoulli numbers B_n are defined by

$$ \frac{x}{e^x - 1} = \sum_{n=0}^{\infty} \frac{B_n}{n!} x^n, \qquad |x| < 2\pi, \qquad (1.11) $$

where the left-hand side of (1.11) equals 1 when x = 0. The Bernoulli numbers can be computed recursively by

$$ B_0 = 1, \qquad (1.12a) $$
$$ \sum_{k=0}^{n-1} \binom{n}{k} B_k = 0, \qquad n = 2, 3, \ldots. \qquad (1.12b) $$

By the relations (1.12), we have

$$ B_0 = 1,\ B_1 = -\tfrac{1}{2},\ B_2 = \tfrac{1}{6},\ B_3 = 0,\ B_4 = -\tfrac{1}{30},\ B_5 = 0, $$
$$ B_6 = \tfrac{1}{42},\ B_7 = 0,\ B_8 = -\tfrac{1}{30},\ B_9 = 0,\ B_{10} = \tfrac{5}{66},\ B_{11} = 0, \qquad (1.13) $$
$$ B_{12} = -\tfrac{691}{2730},\ B_{13} = 0,\ B_{14} = \tfrac{7}{6},\ B_{15} = 0,\ B_{16} = -\tfrac{3617}{510}. $$

It is known that

$$ |B_{2j}| \le \frac{4\,(2j)!}{(2\pi)^{2j}} \qquad \text{for } j \in \mathbf{N}. \qquad (1.14) $$
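The recursion (1.12) can be carried out in exact rational arithmetic (editorial sketch):

```python
from fractions import Fraction
from math import comb

def bernoulli(N):
    """B_0, ..., B_N via (1.12): sum_{k=0}^{n-1} C(n,k) B_k = 0 for n >= 2."""
    B = [Fraction(1)]
    for m in range(1, N + 1):
        # take n = m + 1 in (1.12b) and solve for B_m
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B
```

bernoulli(16) reproduces the table (1.13).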

Theorem 1.1 (The Euler-Maclaurin summation formula)

Let f(x) be a function of class C^{2p+2} in a closed interval [a, b]. Then the following asymptotic formula holds:

$$ T_n - \int_a^b f(x)\,dx = \sum_{j=1}^{p} \frac{B_{2j}}{(2j)!} h^{2j} \left( f^{(2j-1)}(b) - f^{(2j-1)}(a) \right) + O(h^{2p+2}), \quad \text{as } h \to +0, \qquad (1.15a) $$

where

$$ h = (b-a)/n, \qquad T_n = h \left( \tfrac{1}{2} f(a) + \sum_{i=1}^{n-1} f(a + ih) + \tfrac{1}{2} f(b) \right). \qquad (1.15b) $$

T_n in (1.15b) is called an n-panel compound trapezoidal rule, or a trapezoidal rule for short. The following modifications of the Euler-Maclaurin summation formula are useful for infinite series.
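A quick numerical check of Theorem 1.1 (editorial sketch): subtracting the first correction term of (1.15a) raises the trapezoidal rule from O(h²) to O(h⁴) accuracy.

```python
import math

def trapezoid(f, a, b, n):
    """n-panel compound trapezoidal rule T_n of (1.15b)."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

f = math.exp                      # then f' = exp as well
a, b, n = 0.0, 1.0, 32
h = (b - a) / n
exact = math.e - 1.0              # integral of exp over [0, 1]

t = trapezoid(f, a, b, n)
# first term of (1.15a): (B_2/2!) h^2 (f'(b) - f'(a)) = (h^2/12)(f'(b) - f'(a))
corrected = t - (h ** 2 / 12) * (math.exp(b) - math.exp(a))
```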
Theorem 1.2

Let f(x) be a function of class C^{2p+1} in [n, n+m]. Then

$$ \sum_{i=0}^{m} f(n+i) = \int_n^{n+m} f(x)\,dx + \frac{1}{2}\left( f(n) + f(n+m) \right) + \sum_{j=1}^{p} \frac{B_{2j}}{(2j)!} \left( f^{(2j-1)}(n+m) - f^{(2j-1)}(n) \right) + R_p(n,m), \qquad (1.16) $$

$$ |R_p(n,m)| \le \frac{4e^{2\pi}}{(2\pi)^{2p+1}} \int_n^{n+m} |f^{(2p+1)}(x)|\,dx. \qquad (1.17) $$

In particular, if f^{(2p+1)}(x) has a definite sign in [n, n+m], then

$$ |R_p(n,m)| \le \frac{4e^{2\pi}}{(2\pi)^{2p+1}} \left| f^{(2p)}(n+m) - f^{(2p)}(n) \right|. \qquad (1.18) $$

Theorem 1.3

Let f(x) be a function of class C^{2p+1} in [n, n+2m]. Then

$$ \sum_{i=0}^{2m} (-1)^i f(n+i) = \frac{1}{2}\left( f(n) + f(n+2m) \right) + \sum_{j=1}^{p} \frac{B_{2j}(2^{2j}-1)}{(2j)!} \left( f^{(2j-1)}(n+2m) - f^{(2j-1)}(n) \right) + R_p(n,m), \qquad (1.19) $$

$$ |R_p(n,m)| \le \frac{4e^{2\pi}(2^{2p+1}+1)}{(2\pi)^{2p+1}} \int_n^{n+2m} |f^{(2p+1)}(x)|\,dx. \qquad (1.20) $$

In particular, if f^{(2p+1)}(x) has a definite sign in [n, n+2m], then

$$ |R_p(n,m)| \le \frac{4e^{2\pi}(2^{2p+1}+1)}{(2\pi)^{2p+1}} \left| f^{(2p)}(n+2m) - f^{(2p)}(n) \right|. \qquad (1.21) $$

The following theorem gives the asymptotic expansion of the midpoint rule.

Theorem 1.4

Let f(x) be a function of class C^{2p+2} in a closed interval [a, b]. Then the following asymptotic formula holds:

$$ M_n - \int_a^b f(x)\,dx = \sum_{j=1}^{p} \frac{(2^{1-2j}-1)B_{2j}}{(2j)!} h^{2j} \left( f^{(2j-1)}(b) - f^{(2j-1)}(a) \right) + O(h^{2p+2}), \quad \text{as } h \to +0, \qquad (1.22a) $$

where

$$ h = (b-a)/n, \qquad M_n = h \sum_{i=1}^{n} f\!\left(a + \left(i - \tfrac{1}{2}\right)h\right). \qquad (1.22b) $$

Proof. Since M_n = 2T_{2n} − T_n, the formula follows from Theorem 1.1. □

M_n in (1.22b) is called an n-panel compound midpoint rule, or a midpoint rule for short.

2. Slowly convergent sequences

When one deals with the speed of convergence, as in slow convergence and convergence acceleration, it is necessary to represent the speed of convergence quantitatively. To this end, we use the order of convergence, the rate of contraction, and asymptotic expansions.
2.1 Order of convergence

Let (s_n) be a real sequence converging to a limit s. For p ≥ 1, (s_n) is said to have order p, or to be of p-th order convergence, if there exist A, B ∈ R and n_0 ∈ N such that 0 < A ≤ B and

$$ A \le \frac{|s_{n+1} - s|}{|s_n - s|^p} \le B, \qquad n \ge n_0. \qquad (2.1) $$

Second-order convergence is usually called quadratic convergence, and third-order convergence is sometimes called cubic convergence.

If there exist C > 0 and n_0 ∈ N such that

$$ |s_{n+1} - s| \le C|s_n - s|^p, \qquad n \ge n_0, \qquad (2.2) $$

then (s_n) is said to have at least order p, or to be of at least p-th order convergence.

As is well known, under suitable conditions, Newton's iteration

$$ s_{n+1} = s_n - \frac{f(s_n)}{f'(s_n)} \qquad (2.3) $$

converges at least quadratically to a simple solution of f(x) = 0. Similarly, a sequence generated by the secant method

$$ s_{n+1} = s_n - f(s_n) \frac{s_n - s_{n-1}}{f(s_n) - f(s_{n-1})} \qquad (2.4) $$

has order at least (1 + √5)/2 = 1.618…. This convergence is sufficiently fast.
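For instance (editorial sketch), Newton's iteration (2.3) for f(x) = x² − 2 exhibits the quadratic convergence just described: each error is roughly the square of the previous one.

```python
import math

def newton(f, df, x0, steps):
    """Newton's iteration (2.3), returning the whole sequence s_0, ..., s_steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

# f(x) = x^2 - 2, simple root sqrt(2)
xs = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 5)
errs = [abs(x - math.sqrt(2)) for x in xs]
```

Five steps from s_0 = 1 already reach machine precision.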

A p-th order convergent sequence (s_n) is said to have the asymptotic error constant C > 0 if

$$ \lim_{n\to\infty} \frac{|s_{n+1} - s|}{|s_n - s|^p} = C. \qquad (2.5) $$

We note that a p-th order convergent sequence does not necessarily have an asymptotic error constant. For example, let (s_n) be a sequence defined by

$$ s_{n+1} = \left( \frac{1}{2} + \frac{1}{4}(-1)^n \right)(s_n - s)^p + s. \qquad (2.6) $$

If p = 1, or if p > 1 and 0 < 3^{1/(1-p)} (3/4)^{1/(p-1)} (s_0 − s) < 1, then (s_n) converges to s and satisfies (2.1) with A = 1/4, B = 3/4, but |s_{n+1} − s|/|s_n − s|^p does not converge.

If (s_n) has order p > 1 and |s_{n+1} − s|/|s_n − s| converges, then

$$ \lim_{n\to\infty} \frac{s_{n+1} - s}{s_n - s} = 0. \qquad (2.7) $$

A sequence (s_n) satisfying (2.7) is called a super-linearly convergent sequence.

We note that a super-linearly convergent sequence does not necessarily have order p > 1. For example, let (s_n) be a sequence defined by

$$ s_{n+1} = \lambda^n (s_n - s) + s, \qquad 0 < |\lambda| < 1. \qquad (2.8) $$

Then (s_n) converges super-linearly to s, but does not have any order.


2.2 Linearly convergent sequences

When

$$ \lim_{n\to\infty} \frac{s_{n+1} - s}{s_n - s} = \lambda, \qquad -1 \le \lambda < 1,\ \lambda \ne 0, \qquad (2.9) $$

(s_n) is called a linearly convergent sequence and λ is called the rate of contraction.

In practice, linearly convergent sequences occur in the following situations:
(i) Partial sums s_n of an alternating series satisfy (2.9) with the rate of contraction λ = −1.
(ii) Suppose that f(x) has a zero α of multiplicity m > 1 and is of class C² in a neighbourhood of α. If s_0 is sufficiently close to α, then Newton's iteration (2.3) converges linearly to α with the rate of contraction 1 − 1/m.
(iii) Suppose that an equation x = g(x) has a fixed point α, g(x) is of class C² in a neighbourhood of α, and 0 < |g′(α)| < 1. If s_0 is sufficiently close to α, then (s_n) generated by s_{n+1} = g(s_n) converges linearly to α with the rate of contraction g′(α).

The convergence of a linearly convergent sequence whose asymptotic error constant is close to 1 is so slow that it is necessary to accelerate the convergence. However, it is easy to accelerate linearly convergent sequences. For example, the Aitken Δ² process can accelerate any linearly convergent sequence (Henrici[21]).
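Situation (iii) together with Aitken acceleration (editorial sketch): the iteration s_{n+1} = cos s_n converges linearly to α = 0.73908… with rate of contraction −sin α ≈ −0.67, and one pass of the Δ² process (1) shortens the error dramatically.

```python
import math

g = math.cos
s = [1.0]
for _ in range(30):
    s.append(g(s[-1]))       # linear convergence, rate g'(alpha) = -sin(alpha)

alpha = s[-1]
for _ in range(200):         # a much more accurate reference value
    alpha = g(alpha)

# one pass of the Aitken delta^2 process (1)
t = [s[n] - (s[n + 1] - s[n]) ** 2 / (s[n + 2] - 2 * s[n + 1] + s[n])
     for n in range(len(s) - 2)]
```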
Some linearly convergent sequences have an asymptotic expansion of the form

$$ s_n \sim s + \sum_{j=1}^{\infty} c_j \lambda_j^n, \qquad \text{as } n \to \infty, \qquad (2.10) $$

where c_1, c_2, … and λ_1, λ_2, … are constants independent of n. A sequence (s_n) satisfying (2.10) with known constants λ_1, λ_2, … can be quite efficiently accelerated by the Richardson extrapolation. When λ_1, λ_2, … in (2.10) are unknown, the sequence can be efficiently accelerated by the ε-algorithm.

Some other linearly convergent sequences, such as the partial sums of certain alternating series (see Theorem 3.3), have an asymptotic expansion of the form

$$ s_n \sim s + \lambda^n n^{\theta} \sum_{j=0}^{\infty} \frac{c_j}{n^j}, \qquad \text{as } n \to \infty, \qquad (2.11) $$

where −1 ≤ λ < 1, λ ≠ 0, θ < 0 and c_0 (≠ 0), c_1, … are constants independent of n. A sequence (s_n) satisfying (2.11) can be efficiently accelerated by the Levin transformations.
2.3 Logarithmically convergent sequences

When the equality

$$ \lim_{n\to\infty} \frac{s_{n+1} - s}{s_n - s} = 1 \qquad (2.12) $$

holds, (s_n) is called a logarithmically convergent sequence (Overholt[42]), or a logarithmic sequence for short.

A typical example of logarithmically convergent sequences is the sequence of partial sums of the Riemann zeta function ζ(α):

$$ s_n = \sum_{j=1}^{n} \frac{1}{j^{\alpha}}, \qquad \alpha > 1. \qquad (2.13) $$

As we shall show in Example 3.4, (s_n) has the asymptotic expansion

$$ s_n \sim \zeta(\alpha) + n^{1-\alpha} \sum_{j=0}^{\infty} \frac{c_j}{n^j}, \qquad (2.14) $$

where c_0, c_1, … are constants, namely c_0 = 1/(1−α), c_1 = 1/2, c_2 = −α/12, c_3 = 0.
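These constants are easy to check numerically for α = 2 (editorial sketch): multiplying the error by suitable powers of n isolates c_0 and c_1.

```python
import math

alpha = 2
z = math.pi ** 2 / 6                      # zeta(2)
n = 1000
s_n = sum(1.0 / j ** alpha for j in range(1, n + 1))

c0_est = (s_n - z) * n                    # -> c0 = 1/(1 - alpha) = -1
c1_est = ((s_n - z) + 1.0 / n) * n ** 2   # -> c1 = 1/2
```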


Similarly to (2.14), many logarithmic sequences (sn ) have the asymptotic expansion
of the form

cj
sn s + n
,
nj
j=0

(2.15)

where < 0 and c0 (6= 0), c1 , . . . are constants. Some other logarithmic sequences (sn )
have the asymptotic expansion of the form
sn s + n (log n)

j=0 i=0

15

ci,j
,
(log n)i nj

(2.16)

where < 0 or = 0 and < 0, and c0,0 (6= 0), ci,j , . . . are constants. The asymptotic
formula (2.15) or (2.16) is a special case of the following one
sn = s + O(n ),

as n ,

(2.17)

or
sn = s + O(n (log n) ),

as n ,

(2.18)

respectively. When in (2.17) or (2.18) is close to 0, (sn ) converges very slowly to s, but
when is suciently large, e.g. > 10, (sn ) converges rapidly to s.
Furthermore, logarithmic sequences which have the asymptotic formula
sn = s + O((log(log n)) ),

as n ,

( < 0)

(2.19)

occur in some literatures. Sequences satisfying (2.18) with = 0 or (2.19) converge quite
slowly.
According to their origin we can classify practical logarithmic sequences into the
following two categories:
(a) one from continuous problems by applying a discretization method.
(b) one from discrete problems.
There are many numerical problems in class (a), such as numerical differentiation, numerical integration, ordinary differential equations, partial differential equations, integral equations and so on. Let s be the true value of such a problem and h a mesh size. In many cases an approximation T(h) has an asymptotic formula

$$ T(h) = s + \sum_{j=1}^{k} c_j h^{\alpha_j} + O(h^{\alpha_{k+1}}), \qquad (2.20) $$

where c_1, …, c_k and 0 < α_1 < ⋯ < α_{k+1} are constants. Setting s_n = T(c/n) and d_j = c_j c^{α_j} for c > 0, we have

$$ s_n = s + \sum_{j=1}^{k} d_j n^{-\alpha_j} + O(n^{-\alpha_{k+1}}), \qquad (2.21) $$

which is a generalization of (2.15). When we put s'_n = s_{2^n} and λ_j = 2^{-α_j}, we have

$$ s'_n = s + \sum_{j=1}^{k} d_j \lambda_j^n + O(\lambda_{k+1}^n), \qquad (2.22) $$

therefore the subsequence (s'_n) of (s_n) converges linearly to s.


For example, let f(x) be a function of class C^{2p+2} in a closed interval [a, b], and let T_n be the approximation of ∫_a^b f(x) dx by the n-panel compound trapezoidal rule. Then by the Euler-Maclaurin formula (Theorem 1.1),

$$ T_{2^n} = \int_a^b f(x)\,dx + \sum_{j=1}^{p} c_j (2^{-2j})^n + O\!\left((2^{-2p-2})^n\right), \qquad \text{as } n \to \infty, \qquad (2.23) $$

where c_1, …, c_p are constants independent of n. An application of the Richardson extrapolation to T_{2^n} is called the Romberg integration.
Almost all sequences taken up as examples of logarithmic sequences are of class (b): for example, a sequence given by an analytic function, such as s_n = (1 + 1/n)^n, partial sums of infinite series of positive terms, and an iterative sequence of a singular fixed point problem. Then (s_n) satisfies (2.15), (2.16) or the more general form

$$ s_n = s + \sum_{j=1}^{k} c_j g_j(n) + O(g_{k+1}(n)), \qquad (2.24) $$

such that

$$ \lim_{n\to\infty} \frac{g_{j+1}(n)}{g_j(n)} = 0, \qquad j = 1, 2, \ldots, \qquad (2.25) $$

and

$$ \lim_{n\to\infty} \frac{g_j(n+1)}{g_j(n)} = 1, \qquad j = 1, 2, \ldots. \qquad (2.26) $$

A double sequence (g_j(n)) satisfying (2.25) and (2.26) is called an asymptotic logarithmic scale.
2.4 Criterion of linearly or logarithmically convergent sequences

The formulae (2.9) and (2.12) involve the limit s, so they are of no use in practice. However, under certain conditions, s_n − s in (2.9) and (2.12) can be replaced with Δs_n = s_{n+1} − s_n. Namely, if lim_{n→∞} Δs_{n+1}/Δs_n = λ, and if one of the following conditions

(i) (Wimp[58]) 0 < |λ| < 1, or |λ| > 1 (for the divergent case |λ| > 1, s can be any number),

(ii) (Gray and Clark[17]) λ = −1 and

$$ \lim_{n\to\infty} \frac{1 + \dfrac{\Delta s_{n+1}}{\Delta s_n}}{1 + \dfrac{\Delta s_n}{\Delta s_{n-1}}} = 1, \qquad (2.27) $$

is satisfied, then (2.9) holds.

Moreover, if

(iii) (Gray and Clark[17]) λ = 1, (s_n) converges, and the Δs_n all have the same sign,

then (2.12) holds. In particular, if a real monotone sequence (s_n) with limit s satisfies

$$ \lim_{n\to\infty} \frac{\Delta s_{n+1}}{\Delta s_n} = 1, \qquad (2.28) $$

then (s_n) converges logarithmically to the limit s.

3. Infinite series

The most popular slowly convergent sequences are partial sums of alternating series and of logarithmically convergent series. In this section we describe the asymptotic expansions of infinite series and give examples that will be used as test problems.

3.1 Alternating series*

We begin with the definition of the property (C), which is a special case of Widder's completely monotonic functions⁴.

Let a > 0. A function f(x) has the property (C) in (a, ∞) if the following four conditions are satisfied:
(i) f(x) is of class C^∞ in (a, ∞);
(ii) (−1)^r f^{(r)}(x) > 0 for x > a, r = 0, 1, … (complete monotonicity);
(iii) f(x) → 0 as x → ∞;
(iv) for r = 0, 1, …, f^{(r+1)}(x)/f^{(r)}(x) → 0 as x → ∞.

If f(x) has the property (C), then for r = 0, 1, …, f^{(r)}(x) → 0 as x → ∞; thus (f^{(r)}(x))_{r=0,1,…} is an asymptotic scale.
Theorem 3.1 Suppose that a function f (x) has the property (C) in (a, ). Then

1
B2j (22j 1) (2j1)
(1) f (n + i) f (n)
f
(n),
2
(2j)!
i=0
j=1
i

as n ,

(3.1)

where B2j s are the Bernoulli numbers.


Proof. Let m, n, p N with n > a. By the Euler-Maclaurin formula (Theorem 1.3), we
have
2m

i=0

(1)i f (n + i) =

1
(f (n) + f (n + 2m))
2
+

where

p
)

B2j (22j 1) ( (2j1)


f
(n + 2m) f (2j1) (n) + Rp (n, m)
(2j)!
j=1
(3.2)

4e2 (22p+1 + 1)
|Rp (n, m)|
(2)2p+1

n+2m

|f (2p+1) (x)|dx.

(3.3)

*The material in this subsection is taken from the authors paper: N. Osada, Asymptotic expansions and
acceleration methods for alternating series (in Japanese), Trans. Inform. Process. Soc. Japan, 28(1987)
No.5, pp.431436. ( = , , )
4 See, D. V. Widder, The Laplace transform, (Princeton, 1946), p.145.


By the assumption (ii), f^{(2p+1)}(x) < 0 for x > a, thus

    |R_p(n, m)| ≤ (4e^{2π}(2^{2p+1} + 1)/(2π)^{2p+1}) ( f^{(2p)}(n) − f^{(2p)}(n+2m) ).    (3.4)

Letting m → ∞, the series on the left-hand side of (3.2) converges and f^{(r)}(n+2m) → 0 for r = 0, 1, . . . . So we obtain

    Σ_{i=0}^∞ (−1)^i f(n+i) = (1/2) f(n) − Σ_{j=1}^p (B_{2j}(2^{2j} − 1)/(2j)!) f^{(2j−1)}(n) + O(f^{(2p)}(n)).    (3.5)

From (3.5) and the assumption (iv), we obtain (3.1).

Theorem 3.2 Suppose that an alternating series is represented as

    s = Σ_{i=1}^∞ (−1)^{i−1} f(i),    (3.6)

where f(x) has the property (C) in (a, ∞) for some a > 0. Let s_n be the n-th partial sum of (3.6). Then the following asymptotic expansion holds:

    s_n − s ~ (−1)^{n−1} ( (1/2) f(n) + Σ_{j=1}^∞ (B_{2j}(2^{2j} − 1)/(2j)!) f^{(2j−1)}(n) ),    as n → ∞.    (3.7)

Proof. Since

    s_n − s = (−1)^{n−1} f(n) + (−1)^n Σ_{i=1}^∞ (−1)^{i−1} f(n+i−1),    (3.8)

by (3.1) we obtain (3.7).

Notation (Levin and Sidi[27]) Let γ < 0. We denote by A^(γ) the set of all functions of class C^∞ in (a, ∞) for some a > 0 satisfying the following two conditions:
(i) f(x) has the asymptotic expansion

    f(x) ~ x^γ Σ_{j=0}^∞ a_j/x^j,    as x → ∞,    (3.9)

(ii) derivatives of any order of f(x) have asymptotic expansions, which can be obtained by differentiating that in (3.9) formally term by term.


Theorem 3.3 Suppose that f(x) ∈ A^(γ) has the property (C). Let the asymptotic expansion of f(x) be (3.9). Let s_n be the n-th partial sum of the series s = Σ_{i=1}^∞ (−1)^{i−1} f(i). Then s_n − s has the asymptotic expansion

    s_n − s ~ (−1)^{n−1} n^γ Σ_{j=0}^∞ c_j/n^j,    as n → ∞,    (3.10)

where

    c_j = (1/2) a_j + Σ_{k=1}^{⌊(j+1)/2⌋} (B_{2k}(2^{2k} − 1)/(2k)!) a_{j+1−2k} Π_{i=1}^{2k−1} (γ − j + i).    (3.11)

Proof. Since f(x) ∈ A^(γ), we have

    f^{(2k−1)}(x) ~ Σ_{m=0}^∞ a_m ( Π_{i=1}^{2k−1} (γ − m + 1 − i) ) x^{γ−m−2k+1}.    (3.12)

Computing the coefficient of (−1)^{n−1} n^{γ−j} in (3.7), we get

    c_j = (1/2) a_j + Σ_{k,m} (B_{2k}(2^{2k} − 1)/(2k)!) a_m Π_{i=1}^{2k−1} (γ − m + 1 − i),    (3.13)

where the summation on the right-hand side of (3.13) is taken over all integers k, m such that

    k ≥ 1,    m ≥ 0,    m + 2k − 1 = j.    (3.14)

Since the solutions of (3.14) are only k = 1, 2, . . . , ⌊(j+1)/2⌋, we obtain the desired result.

Using Theorem 3.2 and Theorem 3.3, we can obtain the asymptotic expansions of typical alternating series.

Example 3.1 In order to illustrate Theorem 3.2, we consider first a very simple example,

    log 2 = Σ_{i=1}^∞ (−1)^{i−1}/i.    (3.15)

Let f(x) = 1/x and let s_n be the n-th partial sum of (3.15). Since f^{(2j−1)}(x) = −(2j−1)! x^{−2j}, by (3.7) we have

    s_n − log 2 ~ (−1)^{n−1} ( 1/(2n) − Σ_{j=1}^∞ B_{2j}(2^{2j} − 1)/((2j) n^{2j}) ).    (3.16)

The first 5 terms of the right-hand side of (3.16) are as follows:

    s_n − log 2 = (−1)^{n−1} (1/n) ( 1/2 − 1/(4n) + 1/(8n³) − 1/(4n⁵) + 17/(16n⁷) + O(1/n⁹) ).    (3.17)
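The expansion (3.17) is easy to check numerically. The following sketch (an illustration added here, with names of my own choosing; it is not part of the original text) compares the true error of a partial sum with the prediction of (3.17):

```python
import math

def partial_sum(n):
    # n-th partial sum of the alternating harmonic series (3.15)
    return sum((-1) ** (i - 1) / i for i in range(1, n + 1))

def predicted_error(n):
    # first five terms of the asymptotic expansion (3.17) for s_n - log 2
    return ((-1) ** (n - 1) / n) * (
        0.5 - 1 / (4 * n) + 1 / (8 * n ** 3) - 1 / (4 * n ** 5) + 17 / (16 * n ** 7)
    )

n = 50
actual = partial_sum(n) - math.log(2)
print(actual, predicted_error(n))
```

For n = 50 the two values agree essentially to machine precision, since the first neglected term is of order n^{−10}.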

Example 3.2 In order to illustrate Theorem 3.3, we next consider the Leibniz series

    π/4 = Σ_{i=1}^∞ (−1)^{i−1}/(2i − 1).    (3.18)

Since

    1/(2x − 1) = (1/(2x)) ( 1 + Σ_{j=1}^∞ 1/(2x)^j ),    |x| > 1/2,    (3.19)

f(x) = 1/(2x − 1) belongs to A^(−1). Using Theorem 3.3, we have

    s_n − π/4 = (−1)^{n−1} (1/n) ( 1/4 − 1/(16n²) + 5/(64n⁴) − 61/(256n⁶) + 1385/(1024n⁸) + O(1/n^{10}) ),    (3.20)

where s_n is the n-th partial sum of (3.18).


Example 3.3 Let us consider

    (1 − 2^{1−σ}) ζ(σ) = Σ_{i=1}^∞ (−1)^{i−1}/i^σ,    σ > 0, σ ≠ 1,    (3.21)

where ζ(σ) is the Riemann zeta function. For 0 < σ < 1, (3.21) is justified by analytic continuation.⁵ Putting f(x) = 1/x^σ in Theorem 3.2, we have

    s_n − (1 − 2^{1−σ}) ζ(σ) ~ (−1)^{n−1} ( (1/2) f(n) + Σ_{j=1}^∞ (B_{2j}(2^{2j} − 1)/(2j)!) f^{(2j−1)}(n) ),    (3.22)

where s_n is the n-th partial sum of (3.21). The first 4 terms of the right-hand side of (3.22) are as follows:

    s_n − (1 − 2^{1−σ}) ζ(σ)
        = (−1)^{n−1} (1/n^σ) ( 1/2 − σ/(4n) + σ(σ+1)(σ+2)/(48n³) − σ(σ+1)(σ+2)(σ+3)(σ+4)/(480n⁵) ) + O(1/n^{σ+7}).    (3.23)

⁵ See E. C. Titchmarsh, The theory of the Riemann zeta function, 2nd ed. (Clarendon Press, Oxford, 1986), p.21.


3.2 Logarithmically convergent series**

The following theorem is useful for obtaining the asymptotic expansions of certain logarithmically convergent series.
Theorem 3.4 Suppose that a function f(x) has the property (C) in (a, ∞). Let s_n be the n-th partial sum of the series s = Σ_{i=1}^∞ f(i). Suppose that both the infinite integral ∫_a^∞ f(x)dx and the series s converge. Then

    s_n − s = −∫_n^∞ f(x)dx + (1/2) f(n) + Σ_{j=1}^p (B_{2j}/(2j)!) f^{(2j−1)}(n) + O(f^{(2p)}(n)),    as n → ∞,    (3.24)

where the B_{2j} are the Bernoulli numbers.


Proof. Let m, n, p N with n > a. By the Euler-Maclaurin formula (Theorem 1.2), we
have
m

n+m

f (n + i) =

f (x)dx +
n

i=0

1
(f (n) + f (n + m))
2

p
)

B2j ( (2j1)
+
f
(n + m) f (2j1) (n) + Rp (n, m)
(2j)!
j=1

where
|Rp (n, m)|

4e2
|f (2p) (n + m) f (2p) (n)|.
(2)2p+1

(3.25)

(3.26)

Letting m , the series in the left-hand side of (3.25) converges and f (r) (n+m)
0 for r = 0, 1, . . . . So we obtain

s sn1 =

1
B2j (2j1)
f (x)dx + f (n)
f
(n) + O(f (2p) (n)).
2
(2j)!
j=1

By (3.27), we obtain (3.24).

(3.27)

Theorem 3.5 Let f(x) be a function belonging to A^(γ) with γ < −1. Let the asymptotic expansion of f(x) be

    f(x) ~ x^γ Σ_{j=0}^∞ a_j/x^j,    as x → ∞,    (3.28)

where a_0 ≠ 0. Assume that both the series s = Σ_{i=1}^∞ f(i) and the integral ∫_n^∞ f(x)dx converge. Then the n-th partial sum s_n has the asymptotic expansion of the form

    s_n − s ~ n^{γ+1} Σ_{j=0}^∞ c_j/n^j,    as n → ∞,    (3.29)

where

    c_0 = a_0/(γ+1),    c_1 = a_1/γ + a_0/2,

    c_j = a_j/(γ+1−j) + a_{j−1}/2 + Σ_{k=1}^{⌊j/2⌋} (B_{2k}/(2k)!) a_{j−2k} Π_{l=1}^{2k−1} (γ − j + 1 + l)    (j > 1),    (3.30)

where the B_{2k} are the Bernoulli numbers.

**The material in this subsection is taken from the author's paper: N. Osada, Asymptotic expansions and acceleration methods for logarithmically convergent series (in Japanese), Trans. Inform. Process. Soc. Japan, 29 (1988), No.3, pp.256–261.


Proof. By the Theorem 3.4, we have

sn s

1
B2j (2j1)
f
(n),
f (x)dx + f (n) +
2
(2j)!
j=1

as n .

(3.31)

Integrating (3.28) term by term, we have

f (x)dx

k=0

ak
n+1k ,
+1k

(3.32)

and by the assumption f (x) A () , we have


f (2j1) (n)

k=0

ak

(2j1

)
( k + 1 l) nk2j+1 .

(3.33)

l=1

By (3.31),(3.32) and (3.33), the coecient of n+1j in (3.29) coinsides with cj in (3.30).
This completes the proof.

Using Theorem 3.4, we can obtain the well-known asymptotic expansion of the Riemann zeta function.

Example 3.4 (The Riemann zeta function) Let us consider

    s_n = Σ_{i=1}^n 1/i^σ,    σ > 1.    (3.34)

Taking f(x) = x^{−σ} and s = ζ(σ) in Theorem 3.4, we have

    s ~ s_n + (1/(σ − 1)) n^{1−σ} − (1/2) n^{−σ} + Σ_{j=1}^∞ (B_{2j}/(2j)!) ( Π_{l=1}^{2j−1} (σ + l − 1) ) n^{−σ−2j+1},    (3.35)

thus we obtain

    s_n − ζ(σ) ~ −(1/(σ − 1)) n^{1−σ} + (1/2) n^{−σ} − Σ_{j=1}^∞ (B_{2j}/(2j)!) σ(σ+1) · · · (σ+2j−2) n^{−σ−2j+1}.    (3.36)

In particular, if σ = 2 then

    s_n − π²/6 = −(1/n) ( 1 − 1/(2n) + 1/(6n²) − 1/(30n⁴) + 1/(42n⁶) − 1/(30n⁸) + O(1/n^{10}) ),    as n → ∞.    (3.37)
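As a numerical illustration (added here, not taken from the original), the bracket in (3.37) can be used to correct the partial sum s_n, turning 20 terms of the series into a near machine-precision value of π²/6:

```python
import math

def zeta2_partial(n):
    return sum(1 / i ** 2 for i in range(1, n + 1))

def corrected(n):
    # add back the error predicted by the bracket in (3.37)
    bracket = (1 - 1 / (2 * n) + 1 / (6 * n ** 2) - 1 / (30 * n ** 4)
               + 1 / (42 * n ** 6) - 1 / (30 * n ** 8))
    return zeta2_partial(n) + bracket / n

n = 20
plain_error = zeta2_partial(n) - math.pi ** 2 / 6
corrected_error = corrected(n) - math.pi ** 2 / 6
print(plain_error, corrected_error)
```

The uncorrected error is about 5 × 10^{−2}, while the corrected one is limited only by the first neglected term of (3.37), of order n^{−11}.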

The next example is more complicated.

Example 3.5 (Gustafson[18]) Consider

    s = Σ_{i=1}^∞ (i + e^{1/i})^{−2}.    (3.38)

Let f(x) = (x + e^{1/x})^{−2}. Using the Maclaurin expansions of e^x and (1 + x)^{−2}, we have

    f(n) ~ n^{−2} ( 1 + Σ_{j=1}^∞ 1/((j−1)! n^j) )^{−2} ~ n^{−2} Σ_{j=0}^∞ a_j/n^j,    (3.39)

where the coefficients a_j in (3.39) are given in Table 3.1. By Theorem 3.5, the n-th partial sum s_n has the asymptotic expansion of the form

    s_n − s ~ n^{−1} Σ_{j=0}^∞ c_j/n^j,    (3.40)

where s = 1.71379 67355 40301 48654 . . . and the coefficients c_j in (3.40) are also given in Table 3.1.

Table 3.1
Coefficients a_j in (3.39) and c_j in (3.40), j = 0, 1, . . . , 8

There are logarithmic terms in the asymptotic expansions of the following series.

Example 3.6 Let us consider

    s_n = Σ_{i=2}^n (log i)/i^σ,    σ > 1.    (3.41)

Since d(i^{−σ})/dσ = −(log i)/i^σ, s_n converges to −ζ′(σ), where ζ′(s) is the derivative of the Riemann zeta function. Let f(x) = (log x)/x^σ. Then

    f′(x) = (−σ log x + 1)/x^{σ+1},    (3.42a)
    f″(x) = (σ(σ+1) log x − 2σ − 1)/x^{σ+2},    (3.42b)
    f‴(x) = (−σ(σ+1)(σ+2) log x + 3σ² + 6σ + 2)/x^{σ+3}.    (3.42c)

By Theorem 3.4,

    s_n + ζ′(σ) = −∫_n^∞ (log x)/x^σ dx + (log n)/(2n^σ) + Σ_{j=1}^p (B_{2j}/(2j)!) f^{(2j−1)}(n) + O(f^{(2p)}(n)),    (3.43)

thus

    s_n + ζ′(σ) = (log n)/((1−σ) n^{σ−1}) − 1/((1−σ)² n^{σ−1}) + (log n)/(2n^σ)
        − (σ log n − 1)/(12n^{σ+1}) + (σ(σ+1)(σ+2) log n − 3σ² − 6σ − 2)/(720n^{σ+3})
        + O((log n)/n^{σ+5}).    (3.44)

In particular, if σ = 2,

    s_n + ζ′(2) = −(log n)/n − 1/n + (log n)/(2n²) − (log n)/(6n³) + 1/(12n³)
        + (log n)/(30n⁵) − 13/(360n⁵) + O((log n)/n⁷),    (3.45)

where ζ′(2) = −0.93754 82543 15843 . . . .


Example 3.7 Let us consider

    s_n = Σ_{i=2}^n 1/(i (log i)^σ),    σ > 1.    (3.46)

Similarly to Example 3.6, we have

    s_n − s = 1/((1−σ)(log n)^{σ−1}) + 1/(2n (log n)^σ) − (log n + σ)/(12n² (log n)^{σ+1})
        + (6(log n)³ + 11σ(log n)² + 6σ(σ+1) log n + σ(σ+1)(σ+2))/(720n⁴ (log n)^{σ+3})
        + O(1/(n⁶ (log n)^σ)).    (3.47)

In particular, if σ = 2, then s = 2.10974 28012 36891 97448 . . . and

    s_n − s = −1/(log n) + 1/(2n (log n)²) − 1/(12n² (log n)²) − 1/(6n² (log n)³)
        + 1/(120n⁴ (log n)²) + 11/(360n⁴ (log n)³) + 1/(20n⁴ (log n)⁴) + 1/(30n⁴ (log n)⁵)
        + O(1/(n⁶ (log n)²)).    (3.48)

This series converges quite slowly. When σ = 2, by the first 10^{43} terms, we can obtain only two exact digits. P. Henrici⁶ computed this series using the Plana summation formula. The asymptotic expansion (3.47) is due to Osada[38].

⁶ P. Henrici, Computational analysis with the HP-25 Pocket Calculator, (Wiley, New York, 1977).
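Although direct summation is hopeless here, the expansion (3.48) itself can be verified numerically for moderate n. The following check (an added illustration; the value of s is the one quoted above) compares a directly computed partial sum with the value predicted by (3.48):

```python
import math

S = 2.1097428012368919745   # limit of (3.46) for sigma = 2, quoted above

def direct(n):
    return sum(1 / (i * math.log(i) ** 2) for i in range(2, n + 1))

def predicted(n):
    # s_n reconstructed from S and the terms of (3.48)
    L = math.log(n)
    return (S - 1 / L + 1 / (2 * n * L ** 2)
            - 1 / (12 * n ** 2 * L ** 2) - 1 / (6 * n ** 2 * L ** 3)
            + 1 / (120 * n ** 4 * L ** 2) + 11 / (360 * n ** 4 * L ** 3)
            + 1 / (20 * n ** 4 * L ** 4) + 1 / (30 * n ** 4 * L ** 5))

n = 1000
print(direct(n), predicted(n))
```

At n = 1000 the two values agree far beyond what the raw partial sum achieves, since the first omitted term of (3.48) is of order n^{−6}.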


4 Numerical integration

Infinite integrals and improper integrals usually converge slowly. Such an integral gives rise, by a suitable method, to a slowly convergent sequence or infinite series. In this section, we deal with the convergence of numerical integrals and give some examples.

4.1 Semi-infinite integrals with positive monotonically decreasing integrands

Let f(x) be a continuous function defined on [a, ∞). If the limit

    lim_{t→∞} ∫_a^t f(x)dx    (4.1)

exists and is finite, then (4.1) is denoted by ∫_a^∞ f(x)dx, and the semi-infinite integral ∫_a^∞ f(x)dx is said to converge.

Suppose that an integral I = ∫_a^∞ f(x)dx converges. Let (x_n) be an increasing sequence diverging to ∞ with x_0 = a. Then I becomes the infinite series

    I = Σ_{n=1}^∞ ∫_{x_{n−1}}^{x_n} f(x)dx.    (4.2)

Some semi-infinite integrals converge linearly in this sense.


Example 4.1 Let us now consider

    Γ(α) = ∫_0^∞ x^{α−1} e^{−x} dx,    α > 0.    (4.3)

Let s_n be defined by s_n = ∫_0^n x^{α−1} e^{−x} dx. Then

    s_n − Γ(α) = −n^{α−1} e^{−n} ( 1 + (α−1)/n + O(1/n²) ),    as n → ∞.    (4.4)

In particular, if α ∈ N, then

    s_n − Γ(α) = −n^{α−1} e^{−n} ( 1 + Σ_{j=1}^{α−1} (α−1)(α−2) · · · (α−j)/n^j ).    (4.5)

(4.4) or (4.5) shows that (s_n) converges linearly to Γ(α) with the contraction ratio 1/e. Especially when α = 1, the infinite series Σ_{n=1}^∞ (s_n − s_{n−1}) becomes a geometric series with the common ratio 1/e.

Suppose that f(x) has the asymptotic expansion

    f(x) ~ x^γ Σ_{j=0}^∞ a_j/x^j,    as x → ∞,    (4.6)

where γ < −1 and a_0 ≠ 0. Suppose also that the integral I = ∫_a^∞ f(x)dx converges. Then ∫_a^x f(t)dt − I has the asymptotic expansion

    ∫_a^x f(t)dt − I ~ x^{γ+1} Σ_{j=0}^∞ a_j/((γ − j + 1) x^j),    as x → ∞.    (4.7)

It follows from (4.7) that ∫_a^x f(t)dt converges logarithmically to I.

Example 4.2 Let us consider

    π/2 = ∫_0^∞ dx/(1 + x²).    (4.8)

Integrating

    1/(1 + t²) = Σ_{j=1}^∞ (−1)^{j−1}/t^{2j},    t > 1,    (4.9)

term by term, we have

    ∫_x^∞ dt/(1 + t²) = Σ_{j=1}^∞ (−1)^{j−1}/((2j − 1) x^{2j−1}),    x > 1.    (4.10)

We note that the right-hand side of (4.10) converges uniformly to tan^{−1}(1/x) = π/2 − tan^{−1} x provided that x > 1. The equality (4.10) shows that ∫_0^x dt/(1 + t²) converges logarithmically to π/2 as x → ∞.
4.2 Semi-infinite integrals with oscillatory integrands

Let φ(x) be an oscillating function whose zeros in (a, ∞) are x_1 < x_2 < . . . . Set x_0 = a. Let f(x) be a positive continuous function on [a, ∞) such that the semi-infinite integral

    I = ∫_a^∞ f(x)φ(x)dx    (4.11)

converges. Let I_n be defined by

    I_n = ∫_{x_{n−1}}^{x_n} f(x)φ(x)dx.    (4.12)

If I converges, then the infinite series

    Σ_{n=1}^∞ I_n    (4.13)

is an alternating series which converges to I.


Some integrals become geometric series.

Example 4.3 Consider

    I = ∫_0^∞ e^{λx} sin x dx,    λ < 0,    (4.14)

where I = 1/(1 + λ²). Then

    ∫_{(n−1)π}^{nπ} e^{λx} sin x dx = (−1)^{n−1} ((e^{λπ} + 1)/(1 + λ²)) e^{λ(n−1)π}.    (4.15)

Therefore the infinite series (4.13) is a geometric series with the common ratio −e^{λπ}.

If an integrand is a product of an oscillating function and a rational function converging to 0 as x → ∞, then the infinite series (4.13) usually becomes an alternating series satisfying I_{n+1}/I_n → −1 as n → ∞.
Example 4.4 Let us consider

    I = ∫_0^∞ (sin x)/x^α dx,    0 < α < 2,    (4.16)

where I = π/(2Γ(α) sin(απ/2)). Set

    I_n = ∫_{(n−1)π}^{nπ} (sin x)/x^α dx.    (4.17)

Substituting t = nπ − x, we have

    I_n = ∫_0^π (sin(nπ − t))/(nπ − t)^α dt = (−1)^{n−1} (nπ)^{−α} ∫_0^π (1 − t/(nπ))^{−α} sin t dt
        ~ (−1)^{n−1} (nπ)^{−α} ( 2 + Σ_{j=1}^∞ (α(α+1) · · · (α+j−1)/(j!(nπ)^j)) ∫_0^π t^j sin t dt ).    (4.18)

By Theorem 3.3,

    Σ_{k=1}^n I_k − I ~ (−1)^{n−1} n^{−α} Σ_{j=0}^∞ c_j/n^j,    (4.19)

where the c_j are constants.

4.3 Improper integrals with endpoint singularities

Let f(x) be a continuous function on an interval (a, b]. Suppose that lim_{x→a+0} f(x) = ±∞ and that

    lim_{ε→+0} ∫_{a+ε}^b f(x)dx    (4.20)

exists and is finite. Then the above limit is denoted by ∫_a^b f(x)dx. Such an integral is called an improper integral, and the endpoint a is called an integrable singularity. The case lim_{x→b−0} f(x) = ±∞ is similar.

The Euler–Maclaurin formula is extended to the improper integral ∫_a^b f(x)dx. Let M_n be the compound midpoint rule defined by

    M_n = h Σ_{i=1}^n f(a + (i − 1/2)h),    h = (b − a)/n.    (4.21)

Theorem 4.1 (Navot[33]) Let f(x) be represented as

    f(x) = x^γ g(x),    γ > −1,    (4.22)

where g(x) is a C^{2p+1} function on [0, 1]. Then

    M_n − ∫_0^1 f(x)dx ~ Σ_{j=1}^{p+1} ((2^{1−2j} − 1) B_{2j}/(2j)!) f^{(2j−1)}(1) n^{−2j}
        + Σ_{k=0}^{2p+1} (2^{−γ−k} − 1) ζ(−γ − k) (g^{(k)}(0)/k!) n^{−γ−k−1} + O(n^{−2p−2}),    (4.23)

where ζ denotes the Riemann zeta function.


Example 4.5 We apply Theorem 4.1 to the integral

    I = ∫_0^1 x^γ dx,    −1 < γ < 0,    (4.24)

where I = 1/(1 + γ). Since g(x) ≡ 1, we have g^{(j)}(0) = 0 for j = 1, 2, . . . . Since f^{(j)}(x) = γ(γ−1) · · · (γ−j+1) x^{γ−j}, we have

    M_n − I ~ (2^{−γ} − 1) ζ(−γ) n^{−γ−1}
        + Σ_{j=1}^{p+1} ((2^{1−2j} − 1) B_{2j}/(2j)!) γ(γ−1) · · · (γ−2j+2) n^{−2j} + O(n^{−2p−2}).    (4.25)

Putting h = 1/n, we have the asymptotic expansion

    M_n − ∫_0^1 x^γ dx ~ c_0 h^{1+γ} + Σ_{j=1}^∞ c_j h^{2j}.    (4.26)
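For γ = −1/2 the leading error term in (4.25) is (2^{1/2} − 1) ζ(1/2) n^{−1/2}, with ζ(1/2) ≈ −1.4603545. A short numerical check (an added illustration, not part of the original text):

```python
import math

def midpoint(f, n):
    # compound midpoint rule (4.21) on [0, 1]
    h = 1.0 / n
    return h * sum(f((i - 0.5) * h) for i in range(1, n + 1))

# f(x) = x^(-1/2): gamma = -1/2, g(x) = 1, and the exact integral is 2
n = 10000
err = midpoint(lambda x: x ** -0.5, n) - 2.0
leading = (math.sqrt(2) - 1) * (-1.4603545088) / math.sqrt(n)
print(err, leading)
```

The remaining discrepancy is of order n^{−2}, coming from the first regular term of (4.25).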

Theorem 4.2 (Lyness and Ninham[30]) Let f(x) be represented as

    f(x) = x^γ (1 − x)^δ g(x),    γ, δ > −1,    (4.27)

where g(x) is a C^{p−1} function on [0, 1]. Let

    φ_0(x) = (1 − x)^δ g(x),    φ_1(x) = x^γ g(x).    (4.28)

Then

    M_n − ∫_0^1 f(x)dx = Σ_{j=0}^{p−1} (φ_0^{(j)}(0)/j!) (2^{−γ−j} − 1) ζ(−γ − j) n^{−γ−j−1}
        + Σ_{j=0}^{p−1} ((−1)^j φ_1^{(j)}(1)/j!) (2^{−δ−j} − 1) ζ(−δ − j) n^{−δ−j−1} + O(n^{−p}).    (4.29)

An integrable singular point as in Theorem 4.1 or Theorem 4.2 is called an algebraic singularity.
Example 4.6 Let us consider the Beta function

    B(p, q) = ∫_0^1 x^{p−1} (1 − x)^{q−1} dx,    p, q > 0.    (4.30)

Without loss of generality, we assume p ≥ q > 0. Applying Theorem 4.2, we have

    M_n − B(p, q) ~ Σ_{j=0}^∞ ( a_j n^{−p−j} + b_j n^{−q−j} ),    as n → ∞.    (4.31)

Theorem 4.3 (Lyness and Ninham[30]) Let f(x) be represented as

    f(x) = x^γ (1 − x)^δ (log x) g(x),    γ, δ > −1,    (4.32)

where g(x) is a C^{p−1} function on [0, 1]. Then

    M_n − ∫_0^1 f(x)dx = Σ_{j=0}^{p−1} (a_j + b_j log n)/n^{γ+j+1} + Σ_{j=0}^{p−1} c_j/n^{δ+j+1} + O(n^{−p}).    (4.33)

An integrable singular point x = 0 as in Theorem 4.3 is said to be algebraic-logarithmic.


Example 4.7 Consider

    I = ∫_0^1 x^γ log x dx,    γ > −1,    (4.34)

where I = −1/(γ + 1)². Applying Theorem 4.3 with δ = 0, we have

    M_n − I = Σ_{j=0}^{p−1} (a_j + b_j log n)/n^{γ+j+1} + Σ_{j=0}^{p−1} c_j/n^{j+1} + O(n^{−p}).    (4.35)

II. Acceleration methods for scalar sequences

5. Basic concepts
5.1 A sequence transformation, convergence acceleration, and extrapolation

Let S and T be sets of real sequences. A mapping T : S → T is called a sequence transformation, and we write (t_n) = T(s_n) for (s_n) ∈ S. Let σ(n) denote the greatest index used in the computation of t_n. For a convergent sequence (s_n), T is regular if (t_n) converges to the same limit as (s_n). Suppose T is regular for (s_n). T accelerates the convergence of (s_n) if

    lim_{n→∞} (t_n − s)/(s_{σ(n)} − s) = 0.    (5.1)

When (s_n) diverges, s is called the antilimit (Shanks[51]) of (s_n).


A sequence transformation can often be implemented by various algorithms. For example, the Aitken Δ² process

    t_n = s_n − (s_{n+1} − s_n)²/(s_{n+2} − 2s_{n+1} + s_n)    (5.2)

can be represented as

    t_n = s_{n+1} − (s_{n+1} − s_n)(s_{n+2} − s_{n+1})/(s_{n+2} − 2s_{n+1} + s_n),    (5.3)

    t_n = s_{n+2} − (s_{n+2} − s_{n+1})²/(s_{n+2} − 2s_{n+1} + s_n),    (5.4)

    t_n = (s_n s_{n+2} − s_{n+1}²)/(s_{n+2} − 2s_{n+1} + s_n),    (5.5)

          | s_n    s_{n+1}  |     | 1      1        |
    t_n = | Δs_n   Δs_{n+1} |  /  | Δs_n   Δs_{n+1} | ,    (5.6)

where Δ stands for the forward difference, i.e. Δs_n = s_{n+1} − s_n. The algorithm (5.6) coincides with Shanks' e_1 transformation. All the algorithms (5.2) to (5.6) are equivalent in theory but differ in numerical computation, e.g. in the number of arithmetic operations or in rounding errors.
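The equivalence of the forms, and the acceleration itself, can be seen in a few lines of code (an added illustration; the geometric series is chosen because the Aitken Δ² process is exact for it):

```python
def aitken_52(s, n):
    # form (5.2)
    return s[n] - (s[n + 1] - s[n]) ** 2 / (s[n + 2] - 2 * s[n + 1] + s[n])

def aitken_55(s, n):
    # form (5.5)
    return (s[n] * s[n + 2] - s[n + 1] ** 2) / (s[n + 2] - 2 * s[n + 1] + s[n])

# partial sums of the geometric series 1 + 1/2 + 1/4 + ... = 2
s = [sum(0.5 ** i for i in range(n + 1)) for n in range(10)]
t52 = aitken_52(s, 0)
t55 = aitken_55(s, 0)
print(t52, t55)
```

Both forms return the limit 2 already from the first three partial sums, since s_n = 2 − 2^{−n} is exactly of the form s + cλ^n.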
Let T be a sequence transformation satisfying (5.1). Either T or an algorithm for T is called a convergence acceleration method (an acceleration method for short), or a speed-up method. A method for estimating the limit or the antilimit of a sequence (s_n) is called an extrapolation method.

The name extrapolation is explained as follows. We take an extrapolation function f(n) with k + 1 unknown constants. Under suitable conditions, by solving the system of equations

    f(n + i) = s_{n+i},    i = 0, . . . , k,    (5.7)

we can determine the k + 1 unknown constants. Then, letting n tend to infinity, we obtain the approximation to s as ŝ = lim_{n→∞} f(n). Thus the above process is called extrapolation to the limit, or extrapolation for short.

Usually, a sequence transformation, convergence acceleration, and extrapolation are not distinguished.
5.2 A classification of convergence acceleration methods

We divide convergence acceleration methods into three cases according to the given knowledge of the sequence.

I. An explicit knowledge of the sequence generator is given. Then we can usually accelerate the convergence of the sequence using that knowledge. For example, if the n-th term of an infinite series is explicitly given as an analytic function f(n), we can accelerate the series by the Euler–Maclaurin summation formula, the Plana summation formula, or the Bickley and Miller method mentioned in the Introduction.

II. The sequence has an asymptotic expansion with respect to a known asymptotic scale. Then we can accelerate by using the scale. Typical examples of this case are the Richardson extrapolation and the E-algorithm. The modified Aitken Δ² process and the generalized ρ-algorithm are also of this case.

III. Neither an explicit knowledge of the sequence generator nor an asymptotic scale is known. Then we have to estimate the limit using consecutive terms of the sequence s_n, s_{n+1}, . . . , s_{n+k}. Almost all famous sequence transformations, such as the iterated Aitken Δ² process, the ε-algorithm, Lubkin's W transformation, and the θ-algorithm, are of this case.

In this paper we will deal with the above cases II and III.


6. The E-algorithm

Many sequence transformations can be represented as a ratio of two determinants. The E-algorithm is a recursive algorithm for such transformations and is a quite general method.

6.1 The derivation of the E-algorithm

Suppose that a sequence (s_n) with the limit or the antilimit s satisfies

    s_n = s + Σ_{j=1}^k c_j g_j(n).    (6.1)

Here (g_j(n)) is a given auxiliary double sequence, which can depend on the sequence (s_n), whereas c_1, . . . , c_k are constants independent of (s_n) and n. The auxiliary double sequence is not necessarily an asymptotic scale.
Solving the system of linear equations

    s_{n+i} = T_k^(n) + c_1 g_1(n + i) + · · · + c_k g_k(n + i),    i = 0, . . . , k,    (6.2)

for the unknown T_k^(n) by Cramer's rule, we obtain

              | s_n     s_{n+1}    . . .  s_{n+k}   |
              | g_1(n)  g_1(n+1)   . . .  g_1(n+k)  |
              |  . . .                              |
              | g_k(n)  g_k(n+1)   . . .  g_k(n+k)  |
    T_k^(n) = --------------------------------------- .    (6.3)
              | 1       1          . . .  1         |
              | g_1(n)  g_1(n+1)   . . .  g_1(n+k)  |
              |  . . .                              |
              | g_k(n)  g_k(n+1)   . . .  g_k(n+k)  |

If (s_n) satisfies

    s_n ~ s + Σ_{j=1}^∞ c_j g_j(n),    (6.4)

where (g_j(n)) is a given asymptotic scale, then T_k^(n) may be expected to be a good approximation to s.
Many well-known sequence transformations such as the Richardson extrapolation, the Shanks transformation, Levin's transformations, and so on can be represented as (6.3). In 1975, C. Schneider[50] gave a recursive algorithm for T_k^(n) in (6.3). Using different techniques, the same algorithm was later derived by T. Håvie[20], published in 1979, and then by C. Brezinski[10], published in 1980, who called it the E-algorithm.
The two-dimensional array E_k^(n) and the auxiliary three-dimensional array g_{k,j}^(n) are defined as follows⁷:

    E_0^(n) = s_n,    n = 0, 1, . . . ,    (6.5a)
    g_{0,j}^(n) = g_j(n),    j = 1, 2, . . . ; n = 0, 1, . . . .    (6.6a)

For k = 1, 2, . . . and n = 0, 1, . . . ,

    E_k^(n) = ( E_{k−1}^(n) g_{k−1,k}^(n+1) − E_{k−1}^(n+1) g_{k−1,k}^(n) ) / ( g_{k−1,k}^(n+1) − g_{k−1,k}^(n) ),    (6.5b)

    g_{k,j}^(n) = ( g_{k−1,j}^(n) g_{k−1,k}^(n+1) − g_{k−1,j}^(n+1) g_{k−1,k}^(n) ) / ( g_{k−1,k}^(n+1) − g_{k−1,k}^(n) ),    j = k + 1, k + 2, . . . .    (6.6b)

The equality (6.5b) is called the main rule and (6.6b) is called the auxiliary rule.
The following theorem is fundamental.

Theorem 6.1 (Brezinski[10]) Let G_{k,j}^(n), N_k^(n), and D_k^(n) be the (k+1) × (k+1) determinants defined by

    G_{k,j}^(n) = | g_j(n) · · · g_j(n+k) ;  g_1(n) · · · g_1(n+k) ;  · · · ;  g_k(n) · · · g_k(n+k) |,    (6.7)

    N_k^(n) = | s_n · · · s_{n+k} ;  g_1(n) · · · g_1(n+k) ;  · · · ;  g_k(n) · · · g_k(n+k) |,
    D_k^(n) = | 1 · · · 1 ;  g_1(n) · · · g_1(n+k) ;  · · · ;  g_k(n) · · · g_k(n+k) |,    (6.8)

respectively, where the semicolons separate the rows. Then for n = 0, 1, . . . and k = 1, 2, . . . , we have

    g_{k,j}^(n) = G_{k,j}^(n)/D_k^(n),    j > k,    (6.9)

    E_k^(n) = N_k^(n)/D_k^(n).    (6.10)

⁷ When the sequence (s_n) is defined for n ≥ 1, substitute n = 1, 2, . . . for n = 0, 1, . . . in the rest of this chapter.

Since the expressions (6.5b) and (6.6b) are prone to round-off-error effects, it is better to compute E_k^(n) and g_{k,j}^(n) from the following equivalent expressions:

    E_k^(n) = E_{k−1}^(n) − g_{k−1,k}^(n) ( E_{k−1}^(n+1) − E_{k−1}^(n) ) / ( g_{k−1,k}^(n+1) − g_{k−1,k}^(n) ),    (6.11)

and

    g_{k,j}^(n) = g_{k−1,j}^(n) − g_{k−1,k}^(n) ( g_{k−1,j}^(n+1) − g_{k−1,j}^(n) ) / ( g_{k−1,k}^(n+1) − g_{k−1,k}^(n) ),    (6.12)

respectively.
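The rules (6.11) and (6.12) translate directly into code. The sketch below (names and organization are mine, not from the thesis) builds the columns E_0^(n), E_1^(n), . . . ; as a check, it recovers the limit exactly for a model sequence of the form (6.1) with k = 2:

```python
def e_algorithm(s, g):
    """E-algorithm via the stable rules (6.11)-(6.12).

    s : list of s_n, n = 0, 1, ..., N-1
    g : list of auxiliary sequences, g[j][n] = g_{j+1}(n)
    Returns the columns E_0, E_1, ..., E_{len(g)}.
    """
    columns = [list(s)]
    E = list(s)
    G = [list(gj) for gj in g]
    for k in range(1, len(g) + 1):
        gk = G[0]  # plays the role of g_{k-1,k}^{(n)}
        E = [E[n] - gk[n] * (E[n + 1] - E[n]) / (gk[n + 1] - gk[n])
             for n in range(len(E) - 1)]
        G = [[gj[n] - gk[n] * (gj[n + 1] - gj[n]) / (gk[n + 1] - gk[n])
              for n in range(len(gj) - 1)] for gj in G[1:]]
        columns.append(E)
    return columns

# model sequence s_n = 5 + 3 g_1(n) + 2 g_2(n): the column E_2 must equal 5
x = [1.0 / (n + 1) for n in range(6)]
g1, g2 = x, [t * t for t in x]
s_seq = [5.0 + 3.0 * a + 2.0 * b for a, b in zip(g1, g2)]
cols = e_algorithm(s_seq, [g1, g2])
print(cols[2])
```

With g_1(n) = x_n and g_2(n) = x_n² this reduces to the polynomial Richardson extrapolation of Section 7.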
6.2 The acceleration theorems of the E-algorithm

When (g_j(n)) is an asymptotic scale, the following theorem is valid.

Theorem 6.2 (Sidi[53]) Suppose the following four conditions are satisfied:
(i) lim_{n→∞} s_n = s,
(ii) for any j, there exists b_j ≠ 1 such that lim_{n→∞} g_j(n+1)/g_j(n) = b_j, and b_i ≠ b_j if i ≠ j,
(iii) for any j, lim_{n→∞} g_{j+1}(n+1)/g_j(n) = 0,
(iv) s_n has the asymptotic expansion of the form

    s_n ~ s + Σ_{j=1}^∞ c_j g_j(n),    as n → ∞.    (6.13)

Then, for any k,

    E_k^(n) − s ~ c_{k+1} ( Π_{j=1}^k (b_{k+1} − b_j)/(1 − b_j) ) g_{k+1}(n),    as n → ∞,    (6.14)

and

    ( E_k^(n) − s )/( E_{k−1}^(n) − s ) = O( g_{k+1}(n)/g_k(n) ),    as n → ∞.    (6.15)

A logarithmic scale does not satisfy the assumption (ii) of Theorem 6.2, but satisfies the assumptions of the next theorem.

Theorem 6.3 (Matos and Prévost[31]) If the conditions (i), (iii), (iv) of Theorem 6.2 are satisfied, and if for any j, p and any n ∈ N,

    | g_j(n)      · · ·  g_j(n+p)     |
    |  . . .                          |  ≥ 0,    (6.16)
    | g_{j+p}(n)  · · ·  g_{j+p}(n+p) |

then for any k ≥ 0,

    lim_{n→∞} ( E_{k+1}^(n) − s )/( E_k^(n) − s ) = 0.    (6.17)

The above theorem is important because the following examples satisfy the assumption (6.16) (Brezinski et al.[11, p.69]):
(1) Let (g(n)) be a logarithmic totally monotone sequence, i.e. lim_{n→∞} g(n+1)/g(n) = 1 and (−1)^k Δ^k g(n) ≥ 0 for all k. Let (g_j(n)) be defined by g_1(n) = g(n), g_j(n) = (−1)^j Δ^j g(n) (j > 1).
(2) g_j(n) = x_n^{α_j} with 1 > x_1 > x_2 > · · · > 0 and 0 < α_1 < α_2 < · · · .
(3) g_j(n) = λ_j^n with 1 > λ_1 > λ_2 > · · · > 0.
(4) g_j(n) = 1/((n+1)^{α_j} (log(n+2))^{β_j}) with 0 < α_1 ≤ α_2 ≤ · · · and β_j < β_{j+1} if α_j = α_{j+1}.

7. The Richardson extrapolation

The Richardson extrapolation, together with the Aitken Δ² process, is among the most widely used extrapolation methods in numerical computation. Nowadays, the basic idea of the Richardson extrapolation, namely eliminating the first several terms in an asymptotic expansion, is used for obtaining various sequence transformations.

7.1 The birth of the Richardson extrapolation

Similar to the Aitken Δ² process, the Richardson extrapolation originated in the process of computing π.

Let T_n be the perimeter of the regular polygon with n sides inscribed in a circle of circumference C, and let U_n be the perimeter of the regular polygon with n sides circumscribing the same circle. C. Huygens proved geometrically⁸ in his De Circuli Magnitudine Inventa, published in 1654, that

    T_{2n} + (1/3)(T_{2n} − T_n) < C < (2/3) T_n + (1/3) U_n.    (7.1)

The left-hand side of (7.1) is the Richardson extrapolation.


The iterative application of the Richardson extrapolation was rst used by Katahiro
Takebe (or Kenk
o Takebe, 1664-1739), a disciple of T. Seki. Let sn be the
perimeter of the regular polygon with 2n sides inscribed in a circle of diameter one and
let n = s2n . Takebe used the Richardson extrapolation iteratively in Tetsujyutsu Sankei
Chapter 11 Tanens
u , published in 1722. He presumed that

(n+2 n+1 )/(n+1 n ) converged to 1/4. By i=1 (1/4)i = 1/3, he constructed


(n)

= n+1 +
(n+2)

1
(n+1 n ) ,
3
(n+1)

(n+1)

n = 2, 3, . . . .
(n)

Then he presumed that (1

constructed

)
1 ( (n+1)
(n)
+
,

1
15 1

(n)
2

(n+1)
1

)/(1

(7.2)

1 ) converged to 1/16, and he


n = 2, 3, . . . .

(7.3)

Similarly, he constructed
(n)

k
8 A.

(n+1)

= k1 +

(
)
1
(n+1)
(n)

k1 ,
22k 1 k1

n = 2, 3, . . . ; k = 2, 3, . . . .

(7.4)

Hirayama, History of circle ratio (in Japanese), (Osaka Kyoiku Tosho, 1980), pp.75-76. (=,

(), )

39

From 2 , . . . , 10 , he obtained to 41 exact digits,9 i.e.

(2)
8 = 3.14159 26535 89793 23846 26433 83279 50288 41971 2

(7.5)

7.2 The derivation of the Richardson extrapolation

Let T(h) be an approximation, depending on a parameter h > 0, to a fixed value s. Suppose that T(h) satisfies the asymptotic formula

    T(h) = s + Σ_{j=1}^m c_j h^{2j} + O(h^{2m+2}),    as h → +0,    (7.6)

where c_1, c_2, . . . are unknown constants independent of h. For given h_0 > h_1 > 0, by computing ( h_1² T(h_0) − h_0² T(h_1) )/( h_1² − h_0² ), we obtain

    ( h_1² T(h_0) − h_0² T(h_1) )/( h_1² − h_0² ) = s + Σ_{j=2}^m c_j ( h_1² h_0^{2j} − h_0² h_1^{2j} )/( h_1² − h_0² ) + O(h_0^{2m+2}).    (7.7)

We define T_1(h_0, h_1) by the left-hand side of (7.7). Then T_1(h_0, h_1) is a better approximation to s than T(h_1), provided that h_0 is sufficiently small. When we set h_0 = h and h_1 = h/2, (7.7) becomes

    T_1(h, h/2) = s + Σ_{j=2}^m c′_j h^{2j} + O(h^{2m+2}),    (7.8)

where

    T_1(h, h/2) = ( T(h) − 2² T(h/2) )/( 1 − 2² ),    (7.9)

and c′_j = c_j (1 − 2^{2−2j})/(1 − 2²).

Similarly, we define T_2(h, h/2, h/4) by

    T_2(h, h/2, h/4) = ( T_1(h, h/2) − 2⁴ T_1(h/2, h/4) )/( 1 − 2⁴ ),    (7.10)

then we have

    T_2(h, h/2, h/4) = s + Σ_{j=3}^m c″_j h^{2j} + O(h^{2m+2}),    (7.11)

where c″_j = c′_j (1 − 2^{4−2j})/(1 − 2⁴). By (7.6), (7.8) and (7.11), when h is sufficiently small, T_2(h, h/2, h/4) is better than both T_1(h/2, h/4) and T(h/4).

⁹ M. Fujiwara, op. cit., pp.296–298.

The above process was considered by L. F. Richardson in 1927. The formula T_1(h_0, h_1) was named h²-extrapolation and T_2(h_0, h_1, h_2) was named h⁴-extrapolation. Such extrapolations were also called the deferred approach to the limit. Nowadays this process is called the Richardson extrapolation.
Extending (7.9) and (7.10), we define the two-dimensional array (T_k^(n)) by

    T_0^(n) = T(2^{−n} h),    n = 0, 1, . . . ,    (7.12a)
    T_k^(n) = ( T_{k−1}^(n) − 2^{2k} T_{k−1}^(n+1) )/( 1 − 2^{2k} ),    n = 0, 1, . . . ; k = 1, 2, . . . ,    (7.12b)

where h is an initial mesh size. The equation (7.12b) becomes

    T_k^(n) = T_{k−1}^(n+1) + ( 1/(2^{2k} − 1) )( T_{k−1}^(n+1) − T_{k−1}^(n) ).    (7.13)

Takebe's algorithm (7.4) coincides with (7.13). It is clear that T_1^(n) = T_1(h/2^n, h/2^{n+1}) and T_2^(n) = T_2(h/2^n, h/2^{n+1}, h/2^{n+2}). The following table is called the T-table:

    T_0^(0)
    T_0^(1)   T_1^(0)
    T_0^(2)   T_1^(1)   T_2^(0)    (7.14)
    T_0^(3)   T_1^(2)   T_2^(1)   T_3^(0)
    . . .

Using mathematical induction on k, we have

    T_k^(n−k) = s + Σ_{j=k+1}^m c_j ( Π_{i=1}^k (1 − 2^{2i−2j})/(1 − 2^{2i}) ) h^{2j} 2^{−2j(n−k)} + O(2^{−2(m+1)(n−k)}),    (7.15)

where h is an initial mesh size. In particular, we have an asymptotic approximation to T_k^(n−k) − s:

    T_k^(n−k) − s ≈ c_{k+1} ( Π_{i=1}^k (1 − 2^{2i−(2k+2)})/(1 − 2^{2i}) ) h^{2k+2} 2^{−(n−k)(2k+2)}.    (7.16)
Numerical quadrature: an example. Suppose that a function f(x) is of class C^∞ on a closed interval [a, b]. Then the trapezoidal rule

    T_n = T(h) = h ( (1/2) f(a) + Σ_{i=1}^{n−1} f(a + ih) + (1/2) f(b) ),    h = (b − a)/n,    (7.17)

and the midpoint rule

    M_n = T(h) = h Σ_{i=1}^n f(a + (i − 1/2)h),    h = (b − a)/n,    (7.18)

have the asymptotic expansion of the form

    T(h) − ∫_a^b f(x)dx ~ Σ_{j=1}^∞ c_j h^{2j},    as h → +0.    (7.19)

Thus we can apply the Richardson extrapolation to T(h). This method using the trapezoidal rule is called the Romberg quadrature, and its algorithm is as follows:

    T_0^(n) = T_{2^n},    n = 0, 1, . . . ,    (7.20a)
    T_k^(n) = ( T_{k−1}^(n) − 2^{2k} T_{k−1}^(n+1) )/( 1 − 2^{2k} ),    n = 0, 1, . . . ; k = 1, 2, . . . ,    (7.20b)

where T_{2^n} is the 2^n-panel trapezoidal rule.


By the Euler–Maclaurin formula and (7.16), we have

    T_k^(n) − ∫_a^b f(x)dx ≈ (B_{2k+2}/(2k+2)!) ( f^{(2k+1)}(b) − f^{(2k+1)}(a) ) (b − a)^{2k+2} 2^{−n(2k+2)} Π_{i=1}^k (1 − 2^{2i−2k−2})/(1 − 2^{2i}).    (7.21)
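The Romberg scheme (7.20) is a few lines of code (an added sketch, not from the thesis); applied to the integral of e^x over [0, 1], it reproduces the size of the T_4^(0) error predicted by (7.21):

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def romberg(f, a, b, levels):
    # column 0 holds T_{2^n}; later columns follow the rule (7.20b)
    table = [[trapezoid(f, a, b, 2 ** n) for n in range(levels + 1)]]
    for k in range(1, levels + 1):
        prev = table[-1]
        table.append([(prev[n] - 4 ** k * prev[n + 1]) / (1 - 4 ** k)
                      for n in range(len(prev) - 1)])
    return table

T = romberg(math.exp, 0.0, 1.0, 4)
err = abs(T[4][0] - (math.e - 1))
print(err)
```

With only 17 function evaluations, the error drops from about 1.4 × 10^{−1} for the one-panel trapezoidal rule to the order of 10^{−14}.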

Now we illustrate by a numerical example. Numerical computations in this section were carried out on the NEC personal computer PC-9801DA in double precision with approximately 16 digits. Throughout this section, the number of function evaluations will be abbreviated to f.e.

Example 7.1 We apply the Romberg quadrature to the proper integral

    I = ∫_0^1 e^x dx = e − 1 = 1.71828 18284 59045 . . . .    (7.22)

We give the errors of the T-table and the number of function evaluations in Table 7.1. The Romberg quadrature is quite efficient for such proper integrals.

Table 7.1
The errors of the T-table by the Romberg quadrature

    n   f.e.   T_0^(n)     T_1^(n-1)   T_2^(n-2)   T_3^(n-3)   T_4^(n-4)
    0    2     1.41e-01
    1    3     3.56e-02    5.79e-04
    2    5     8.94e-03    3.70e-05    8.59e-07
    3    9     2.24e-03    2.23e-06    1.38e-08    3.35e-10
    4   17     5.59e-04    1.46e-07    2.16e-10    1.34e-12    3.32e-14

By (7.21), the asymptotic error approximation of T_4^(0) is

    (B_{10}/10!) ((1 − 2^{2−10})(1 − 2^{4−10})(1 − 2^{6−10})(1 − 2^{8−10}))/((1 − 2²)(1 − 2⁴)(1 − 2⁶)(1 − 2⁸)) (e − 1) = 3.42 × 10^{−14},    (7.23)

which is close to T_4^(0) − I in Table 7.1.

7.3 Generalizations of the Richardson extrapolation

Nowadays, the Richardson extrapolation (7.4) or (7.13) is extended to the following three types of sequences:

I. Polynomial extrapolation:

    s_n ~ s + Σ_{j=1}^∞ c_j x_n^j,    (7.24)

where (x_n) is a known auxiliary sequence¹⁰ whereas the c_j are unknown constants.

II. Extended polynomial extrapolation:

    s_n ~ s + Σ_{j=1}^∞ c_j x_n^{γ_j},    (7.25)

where the γ_j are known constants, the c_j are unknown constants, and (x_n) is a known particular auxiliary sequence. In particular, when x_n = λ^n and γ_j = j, (7.25) becomes

    s_n ~ s + Σ_{j=1}^∞ c_j (λ^j)^n.    (7.26)

This is a special case of

    s_n ~ s + Σ_{j=1}^∞ c_j λ_j^n,    (7.27)

where the λ_j are known constants and the c_j are unknown constants.

III. General extrapolation:

    s_n ~ s + Σ_{j=1}^∞ c_j g_j(n),    (7.28)

where (g_j(n)) is a known asymptotic scale whereas the c_j are unknown constants.

¹⁰ In this paper, x_i^j means either the j-th power of a scalar x_i or the i-th component of a vector x_j. In (7.24), x_n^j = (x_n)^j.
7.3.1 Polynomial extrapolation

Suppose that a sequence (s_n) satisfies

    s_n ~ s + Σ_{j=1}^∞ c_j x_n^j,    (7.29)

where (x_n) is a known auxiliary sequence and the c_j are unknown constants. Solving the system of equations

    s_{n+i} = T_k^(n) + Σ_{j=1}^k c_j x_{n+i}^j,    i = 0, . . . , k,    (7.30)

for the unknown T_k^(n) by Cramer's rule, we have

              | s_n     s_{n+1}     . . .  s_{n+k}    |
              | x_n     x_{n+1}     . . .  x_{n+k}    |
              |  . . .                                |
              | x_n^k   x_{n+1}^k   . . .  x_{n+k}^k  |
    T_k^(n) = ----------------------------------------- .    (7.31)
              | 1       1           . . .  1          |
              | x_n     x_{n+1}     . . .  x_{n+k}    |
              |  . . .                                |
              | x_n^k   x_{n+1}^k   . . .  x_{n+k}^k  |

By Theorem 6.1 and using the Vandermonde determinants,

    T_k^(n) = T_{k−1}^(n+1) + ( 1/(x_n/x_{n+k} − 1) )( T_{k−1}^(n+1) − T_{k−1}^(n) ),    k = 1, 2, . . . ; n = 0, 1, . . . .    (7.32)

In Takebe's algorithm (7.4) or in the Romberg scheme (7.20), x_n = 2^{−2n}.


Example 7.2 We apply the Richardson extrapolation (7.32) with x_n = 1/n to the logarithmically convergent series

    s_n = Σ_{i=1}^n 1/i².    (7.33)

As is well known, (s_n) converges to s = ζ(2) = π²/6 very slowly. More precisely, by Example 3.4, we have

    s_n − π²/6 ~ −1/n + 1/(2n²) − 1/(6n³) + 1/(30n⁵) − 1/(42n⁷) + 1/(30n⁹) − 5/(66n^{11}) + . . . .    (7.34)

We give the errors T_k^(n−k) − π²/6 in Table 7.2. By the first 12 terms, we obtain 10 significant digits. Between n = 1 and 16, the best result is T_11^(2) − s = 2.78 × 10^{−13}. This method tends to be affected by cancellation of significant digits.


Table 7.2
The errors of the T-table by (7.32)

    n   T_0^(n)     T_1^(n-1)   T_2^(n-2)   T_3^(n-3)   T_4^(n-4)
    1   6.45e-01
    2   3.95e-01    1.45e-01
    3   2.84e-01    6.16e-02    1.99e-02
    4   2.21e-01    3.38e-02    6.05e-03    1.42e-03
    5   1.81e-01    2.13e-02    2.57e-03    2.58e-04    3.12e-05
    6   1.54e-01    1.47e-02    1.32e-03    7.30e-05    1.96e-05
    7   1.33e-01    1.07e-02    7.67e-04    2.67e-05    8.06e-06
    8   1.18e-01    8.14e-03    4.84e-04    1.15e-05    3.57e-06
    9   1.05e-01    6.40e-03    3.25e-04    5.64e-06    1.74e-06
   10   9.52e-02    5.17e-03    2.28e-04    3.01e-06    9.24e-07
   11   8.69e-02    4.26e-03    1.66e-04    1.73e-06    5.24e-07
   12   8.00e-02    3.57e-03    1.25e-04    1.05e-06    3.14e-07

    n   T_5^(n-5)   T_6^(n-6)   T_7^(n-7)   T_8^(n-8)   T_9^(n-9)
    6   1.73e-05
    7   3.43e-06    1.12e-06
    8   8.82e-07    3.18e-08    1.23e-07
    9   2.80e-07    2.14e-08    3.65e-08    2.57e-08
   10   1.04e-07    1.33e-08    9.79e-09    3.11e-09    6.01e-10
   11   4.36e-08    6.70e-09    2.95e-09    3.86e-10    2.18e-10
   12   2.01e-08    3.38e-09    1.01e-09    3.30e-11    8.48e-11

When x_n = 1/n, the subsequence s′_n = s_{2^n} has the asymptotic expansion of the form

    s′_n ~ s + Σ_{j=1}^∞ c_j (2^{−n})^j.    (7.35)

Then we can apply the Richardson extrapolation with x_n = 2^{−n}. This method requires many terms but usually gives high accuracy.
For a sequence (sn ) satisfying (7.27), the Richardson extrapolation is as follows.
(n)

T0

(n)

Tk

= sn
(n+1)

= Tk1

k
(n+1)
(n)
Tk1 Tk1 .
1 k
(nk)

Similar to (7.16), we have an asymptotic approximation to Tk


(
(nk)

Tk

(7.36a)

s ck+1

i+1 i

i=1

1 i

(7.36b)
s:

)
nk+1 .

(7.37)

Example 7.3 We apply the Richardson extrapolation with x_n = 2^{-n} to ζ(2) once more, and give the errors T_k^{(n-k)} - π²/6 in Table 7.3.

Table 7.3
The errors of the T-table by the Richardson extrapolation with x_n = 2^{-n}

  n  terms  T_0^{(n)}  T_1^{(n-1)}  T_2^{(n-2)}  T_3^{(n-3)}  T_4^{(n-4)}
  0     1   6.45e-01
  1     2   3.95e-01   1.45e-01
  2     4   2.21e-01   4.77e-02   1.53e-02
  3     8   1.18e-01   1.37e-02   2.36e-03   5.16e-04
  4    16   6.06e-02   3.66e-03   3.17e-04   2.46e-05   8.76e-06
  5    32   3.08e-02   9.46e-04   4.04e-05   8.97e-07   1.32e-07
  6    64   1.55e-02   2.40e-04   5.08e-06   2.93e-08   1.34e-09
  7   128   7.78e-03   6.06e-05   6.36e-07   9.28e-10   1.14e-11
  8   256   3.90e-03   1.52e-05   7.95e-08   2.91e-11   9.06e-14

  n  terms  T_5^{(n-5)}  T_6^{(n-6)}  T_7^{(n-7)}  T_8^{(n-8)}
  5    32   6.44e-08
  6    64   3.10e-10   1.84e-10
  7   128   8.87e-13   2.82e-13   1.92e-13
  8   256   1.94e-15   2.22e-16   8.33e-17   5.55e-17

Using the partial sums s_1, s_2, s_4, s_8, s_16, s_32, s_64, s_128, and s_256, we obtain 16 significant digits. This method is hardly affected by rounding errors such as cancellation of significant digits.

7.3.2 Extended polynomial extrapolation

Let us consider a sequence satisfying

    s_n ~ s + Σ_{j=1}^{∞} c_j n^{-λ_j},                                      (7.38)

where the exponents λ_j are known constants whereas the c_j are unknown constants. The Richardson extrapolation for (7.38) is defined by

    T_0^{(n)} = s_{2^n},   n = 0, 1, ...,                                    (7.39a)
    T_k^{(n)} = T_{k-1}^{(n+1)} + (T_{k-1}^{(n+1)} - T_{k-1}^{(n)}) / (2^{λ_k} - 1),
                n = 0, 1, ...;  k = 1, 2, ... .                              (7.39b)

If we set h = 1/n and T(h) = s_n, then (7.38) becomes

    T(h) ~ s + Σ_{j=1}^{∞} c_j h^{λ_j}.                                      (7.40)

Similar to (7.39), the Richardson extrapolation for (7.40) is defined by

    T_0^{(n)} = T(2^{-n} h),   n = 0, 1, ...,                                (7.41a)
    T_k^{(n)} = T_{k-1}^{(n+1)} + (T_{k-1}^{(n+1)} - T_{k-1}^{(n)}) / (2^{λ_k} - 1),
                n = 0, 1, ...;  k = 1, 2, ...,                               (7.41b)

where h is an initial mesh size.
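The variant (7.39)/(7.41) with known exponents can be sketched as follows. This is an illustrative Python sketch; the function names and the midpoint-rule driver are mine, with the exponents 1/2, 2, 4, ... taken from (7.43) below.

```python
# Richardson extrapolation (7.39) for known exponents lam[0], lam[1], ...
# applied to s_{2^n}; the factor is 1/(2**lam_k - 1).

def richardson_exponents(s_pow2, lam):
    """s_pow2[n] = s_{2^n}; eliminate the n^{-lam_j} terms in turn."""
    T = list(s_pow2)
    for lk in lam:
        if len(T) < 2:
            break
        T = [T[n + 1] + (T[n + 1] - T[n]) / (2.0**lk - 1.0)
             for n in range(len(T) - 1)]
    return T          # final column; T[0] is the highest-order estimate

def midpoint(f, n):
    """n-panel midpoint rule on [0, 1]."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))
```

For the integral of x^(-1/2) on [0, 1] (Example 7.4), panels 1, 2, 4, ..., 32 and exponents (1/2, 2, 4, 6, 8) recover the value 2 to roughly the accuracy reported there.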


Example 7.4 We apply the Richardson extrapolation (7.41) to the improper integral

    I = ∫_0^1 dx/√x = 2.                                                     (7.42)

Let M_n be the n-panel midpoint rule for (7.42). By Example 4.5, we have

    M_n ~ 2 + c_0 h^{1/2} + Σ_{j=1}^{∞} c_j h^{2j}.                          (7.43)

We show the T-table in Table 7.4. Using 63 function evaluations, we obtain T_5^{(0)} = 1.99999 99993 25, whose error is 6.75e-10.

Table 7.4
Applying the Richardson extrapolation (7.41) to ∫_0^1 dx/√x

  n  f.e.  T_0^{(n)}  T_1^{(n-1)}  T_2^{(n-2)}   T_3^{(n-3)}    T_4^{(n-4)}    T_5^{(n-5)}
  0    1   1.41
  1    3   1.57   1.971
  2    7   1.69   1.9921   1.99914
  3   15   1.78   1.9979   1.999930    1.999983
  4   31   1.84   1.99949  1.9999952   1.99999957    1.99999983
  5   63   1.89   1.99987  1.99999969  1.9999999920  1.9999999986  1.99999999932

Example 7.5 Next we apply the Richardson extrapolation to the Beta function

    B(2/3, 1/3) = ∫_0^1 x^{-1/3} (1 - x)^{-2/3} dx,                          (7.44)

where B(2/3, 1/3) = 2π/√3 = 3.62759 87284 68436. Let M_n be the n-panel midpoint rule for (7.44). By Example 4.6, we have

    M_n ~ B(2/3, 1/3) + Σ_{j=1}^{∞} (c_{2j-1} n^{-j+2/3} + c_{2j} n^{-j+1/3}).   (7.45)

We give the T-table in Table 7.5. Using 255 function evaluations, we obtain T_7^{(0)} = 3.62759 69010, whose error is 1.83e-6. This integral converges more slowly than that in Example 7.4.

Table 7.5
Applying the Richardson extrapolation to B(2/3, 1/3)

  n  f.e.  T_0^{(n)}  T_1^{(n-1)}  T_2^{(n-2)}  T_3^{(n-3)}
  0    1   2.00
  1    3   2.34   3.68
  2    7   2.62   3.70   3.74
  3   15   2.84   3.691  3.665   3.615
  4   31   3.01   3.672  3.639   3.6230
  5   63   3.14   3.657  3.631   3.6262
  6  127   3.25   3.646  3.6289  3.62720
  7  255   3.33   3.639  3.6280  3.62748

  n  f.e.  T_4^{(n-4)}  T_5^{(n-5)}  T_6^{(n-6)}  T_7^{(n-7)}
  4   31   3.6262
  5   63   3.6277    3.6280
  6  127   3.62765   3.62764   3.627563
  7  255   3.62761   3.627601  3.6275935  3.6275969

7.3.3 General extrapolation

The E-algorithm described in the preceding section is the first algorithm for a sequence satisfying (7.28). Subsequently, in 1987, W. F. Ford and A. Sidi[16] gave a more efficient algorithm. See, for details, Sidi[53].

8. The ε-algorithm

The ε-algorithm is a recursive algorithm for the Shanks transformation and is one of the most familiar convergence acceleration methods.

8.1 The Shanks transformation

Suppose that a sequence (s_n) with the limit or the antilimit s satisfies

    Σ_{i=0}^{k} a_i s_{n+i} = ( Σ_{i=0}^{k} a_i ) s,   n ∈ N,                (8.1a)
    Σ_{i=0}^{k} a_i ≠ 0,                                                     (8.1b)

where the a_i are unknown constants independent of n. Then a_0, ..., a_k satisfy the system of linear equations

    a_0 (s_n - s)     + a_1 (s_{n+1} - s)   + ... + a_k (s_{n+k} - s)   = 0
    a_0 (s_{n+1} - s) + a_1 (s_{n+2} - s)   + ... + a_k (s_{n+k+1} - s) = 0
      ...                                                                    (8.2)
    a_0 (s_{n+k} - s) + a_1 (s_{n+k+1} - s) + ... + a_k (s_{n+2k} - s)  = 0.

By (8.1b) and (8.2), we have

    | s_n - s       s_{n+1} - s    ...  s_{n+k} - s   |
    | s_{n+1} - s   s_{n+2} - s    ...  s_{n+k+1} - s |
    |   ...            ...         ...     ...        |  = 0,                (8.3)
    | s_{n+k} - s   s_{n+k+1} - s  ...  s_{n+2k} - s  |

thus we obtain

    | 1            ...  1           |       | s_n          ...  s_{n+k}     |
    | Δs_n         ...  Δs_{n+k}    |       | Δs_n         ...  Δs_{n+k}    |
    |  ...         ...   ...        | s  =  |  ...         ...   ...        | .   (8.4)
    | Δs_{n+k-1}   ...  Δs_{n+2k-1} |       | Δs_{n+k-1}   ...  Δs_{n+2k-1} |

Therefore,

        | s_n          s_{n+1}      ...  s_{n+k}     |
        | Δs_n         Δs_{n+1}     ...  Δs_{n+k}    |
        |  ...          ...         ...   ...        |
        | Δs_{n+k-1}   Δs_{n+k}     ...  Δs_{n+2k-1} |
    s = ---------------------------------------------- .                     (8.5)
        | 1            1            ...  1           |
        | Δs_n         Δs_{n+1}     ...  Δs_{n+k}    |
        |  ...          ...         ...   ...        |
        | Δs_{n+k-1}   Δs_{n+k}     ...  Δs_{n+2k-1} |

The right-hand side of (8.5) is denoted by e_k(s_n) after D. Shanks[51]:

    e_k(s_n) = the right-hand side of (8.5).                                 (8.6)

The sequence transformation (s_n) ↦ (e_k(s_n)) is called the Shanks e-transformation, or the Shanks transformation. In particular, e_1 is the Aitken Δ² process. By construction, e_k is exact on a sequence satisfying (8.1).
If a sequence (s_n) satisfies

    s_n = s + Σ_{j=1}^{m} c_j(n) λ_j^n,   n ∈ N,                             (8.7)

where the c_j(n) are polynomials in n and the λ_j ≠ 1 are constants, then (s_n) satisfies (8.1). Concerning the necessary and sufficient condition for (8.1), see Brezinski and Redivo Zaglia[11, p.79].
The Shanks transformation was first considered by R. J. Schmidt[49] in 1941. He used this method for solving a system of linear equations by iteration, not for accelerating convergence. For that reason his paper was neglected until P. Wynn[59] quoted it in 1956. The Shanks transformation did not receive much attention until Shanks[51] published it in 1955.
The main drawback of the Shanks transformation is the need to compute determinants of large order. This drawback was overcome by the ε-algorithm proposed by Wynn.
8.2 The ε-algorithm

Immediately after Shanks rediscovered the e-transformation, P. Wynn proposed a recursive algorithm, named the ε-algorithm. He proved the following theorem using determinantal identities.

Theorem 8.1 (Wynn[59]) If

    ε_{2k}^{(n)} = e_k(s_n),                                                 (8.8a)
    ε_{2k+1}^{(n)} = 1 / e_k(Δs_n),                                          (8.8b)

then

    ε_{k+1}^{(n)} = ε_{k-1}^{(n+1)} + 1 / (ε_k^{(n+1)} - ε_k^{(n)}),   k = 1, 2, ...,   (8.9)

provided that none of the denominators vanishes.

For a sequence (s_n), the ε-algorithm is defined as follows:

    ε_{-1}^{(n)} = 0,   ε_0^{(n)} = s_n,   n = 0, 1, ...,                    (8.10a)
    ε_{k+1}^{(n)} = ε_{k-1}^{(n+1)} + 1 / (ε_k^{(n+1)} - ε_k^{(n)}),   k = 0, 1, ... .   (8.10b)

There are many research papers on the ε-algorithm. See, for details, Brezinski[9], Brezinski and Redivo Zaglia[11].
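The recursion (8.10) is easy to program column by column. The following Python sketch is illustrative (function name and array layout are mine); as the text notes, the recursion must stop if a denominator vanishes, which this sketch does not guard against.

```python
# The epsilon-algorithm (8.10) in a direct column form.

def epsilon_algorithm(s):
    """Return eps[k][n] with eps[0][n] = s_n; the even columns eps[2k]
    equal the Shanks transforms e_k(s_n)."""
    eps = [[0.0] * (len(s) + 1), list(s)]   # columns k = -1 and k = 0
    while len(eps[-1]) > 1:
        prev2, prev = eps[-2], eps[-1]
        col = [prev2[n + 1] + 1.0 / (prev[n + 1] - prev[n])
               for n in range(len(prev) - 1)]
        eps.append(col)
    return eps[1:]                          # drop the artificial column k = -1
```

Applied to the partial sums of the Leibniz series (Example 8.1 below), the even columns converge rapidly to π/4; the first even column ε_2 reproduces the Aitken Δ² process.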
8.3 The asymptotic properties of the ε-algorithm

P. Wynn gave asymptotic estimates for the quantities ε_{2k}^{(n)} produced by applying the ε-algorithm to particular sequences.

Theorem 8.2 (Wynn[61]) If the ε-algorithm is applied to a sequence (s_n) satisfying

    s_n ~ s + Σ_{j=1}^{∞} a_j (n+b)^{-j},   a_1 ≠ 0,                         (8.11)

then for fixed k,

    ε_{2k}^{(n)} ~ s + a_1 / ((k+1)(n+b)),   as n → ∞.                       (8.12)

Theorem 8.2 shows that the ε-algorithm cannot accelerate a logarithmically convergent sequence satisfying (8.11), for, by (8.12),

    lim_{n→∞} (ε_{2k}^{(n)} - s) / (s_{n+2k} - s) = 1/(k+1).                 (8.13)

Theorem 8.3 (Wynn[61]) If the ε-algorithm is applied to a sequence (s_n) satisfying

    s_n ~ s + (-1)^n Σ_{j=1}^{∞} a_j (n+b)^{-j},   a_1 ≠ 0,                  (8.14)

then for fixed k,

    ε_{2k}^{(n)} ~ s + (-1)^n (k!)² a_1 / (4^k (n+b)^{2k+1}),   as n → ∞.    (8.15)

Theorem 8.3 shows that the ε-algorithm works well on partial sums of alternating series

    s_n = Σ_{i=1}^{n} (-1)^{i-1} / (ai + b),                                 (8.16)

where a ≠ 0 and b are constants.


Theorem 8.4 (Wynn[61]) If the ε-algorithm is applied to a sequence (s_n) satisfying

    s_n ~ s + Σ_{j=1}^{∞} a_j λ_j^n,                                         (8.17)

where 1 > λ_1 > λ_2 > ... > 0, then for fixed k,

    ε_{2k}^{(n)} ~ s + a_{k+1} (λ_{k+1}-λ_1)² ... (λ_{k+1}-λ_k)² λ_{k+1}^n / ((1-λ_1)² ... (1-λ_k)²),
    as n → ∞.                                                                (8.18)

Theorem 8.5 (Wynn[61]) If the ε-algorithm is applied to a sequence (s_n) satisfying

    s_n ~ s + (-1)^n Σ_{j=1}^{∞} a_j λ_j^n,                                  (8.19)

where 1 > λ_1 > λ_2 > ... > 0, then for fixed k,

    ε_{2k}^{(n)} ~ s + (-1)^n a_{k+1} (λ_{k+1}-λ_1)² ... (λ_{k+1}-λ_k)² λ_{k+1}^n / ((1+λ_1)² ... (1+λ_k)²),
    as n → ∞.                                                                (8.20)

The above theorems were further extended by J. Wimp. The following theorem is an extension of Theorem 8.3.

Theorem 8.6 (Wimp[58, p.127]) If the ε-algorithm is applied to a sequence (s_n) satisfying

    s_n ~ s + λ^n (n+b)^θ Σ_{j=0}^{∞} c_j n^{-j},                            (8.21)

where c_0 ≠ 0, λ ≠ 1, and θ ≠ 0, 1, ..., k-1, then for fixed k > 0,

    ε_{2k}^{(n)} ~ s + ( c_0 λ^{n+2k} (n+b)^{θ-2k} k! (-θ)_k / (λ-1)^{2k} ) [ 1 + O(1/n) ],   (8.22)

where (-θ)_k = (-θ)(-θ+1) ... (-θ+k-1), which is called the Pochhammer symbol.


8.4 Numerical examples of the ε-algorithm

Numerical computations reported in the rest of this paper were carried out on the NEC ACOS-610 computer in double precision, with approximately 16 digits.

Example 8.1 We apply the ε-algorithm to the partial sums of the alternating series

    s_n = Σ_{i=1}^{n} (-1)^{i-1} / (2i - 1).                                 (8.23)

As we described in Example 3.2,

    s_n = π/4 + (-1)^{n-1} (1/n) ( 1/4 - 1/(16n²) + O(1/n⁴) ).               (8.24)

We give s_n and ε_{2k}^{(n-2k)} in Table 8.1, where k = ⌊(n-1)/2⌋. With the first 13 terms we obtain 10 exact digits, and with the first 20 terms, 15 exact digits.

Table 8.1
The ε-algorithm applied to (8.23)

  n    s_n     ε_{2k}^{(n-2k)}
  1   1.000
  2   0.666
  3   0.866   0.791
  4   0.723   0.7833
  5   0.834   0.78558
  6   0.744   0.78534
  7   0.820   0.78540
  8   0.754   0.78539 ...
  9   0.813   0.78539 ...
 10   0.760   0.78539 ...
 11   0.808   0.78539 ...
 12   0.764   0.78539 ...
 13   0.804   0.78539 81633 ...
 14   0.767   0.78539 ...
 15   0.802   0.78539 81633 ...
 20   0.772   0.78539 81633 97448

Example 8.2 Let us consider

    s_n = Σ_{i=1}^{n} (-1)^{i-1} / √i.                                       (8.25)

As we described in Example 3.3,

    s_n = (1 - √2) ζ(1/2) + (-1)^{n-1} (1/√n) ( 1/2 - 1/(8n) + 5/(128n³) + O(1/n⁵) ),   (8.26)

where (1 - √2) ζ(1/2) = 0.60489 86434 21630. Theorem 8.6 and (8.26) show that the ε-algorithm accelerates the convergence of (s_n). We give s_n and ε_{2k}^{(n-2k)} in Table 8.2, where k = ⌊(n-1)/2⌋. The results are similar to Example 8.1.

Table 8.2
The ε-algorithm applied to (8.25)

  n    s_n    ε_{2k}^{(n-2k)}
  1   1.00
  2   0.29
  3   0.87   0.6107
  4   0.37   0.6022
  5   0.81   0.60504
  6   0.40   0.60484
  7   0.78   0.60490
  8   0.43   0.60489 ...
  9   0.76   0.60489 ...
 10   0.45   0.60489 ...
 11   0.75   0.60489 ...
 12   0.46   0.60489 ...
 13   0.74   0.60489 86434 ...
 14   0.47   0.60489 ...
 15   0.73   0.60489 86434 ...
 20   0.49   0.60489 86434 21630

The ε-algorithm can extrapolate certain divergent series.

Example 8.3 Let us now consider

    s_n = Σ_{i=1}^{n} (-2)^{i-1} / i.                                        (8.27)

(s_n) diverges from (1/2) log 3 = 0.54930 61443 34055, which is the antilimit of (s_n). We apply the ε-algorithm to (8.27). We give s_n and ε_{2k}^{(n-2k)} in Table 8.3, where k = ⌊(n-1)/2⌋.

Table 8.3
The ε-algorithm applied to (8.27)

  n       s_n       ε_{2k}^{(n-2k)}
  1        1.0
  2        0.0
  3        1.3     0.571
  4       -0.6     0.533
  5        2.5     0.5507
  6       -2.8     0.5485
  7        6.3     0.54940
  8       -9.6     0.54926
  9       18.7     0.54931
 10      -32.4     0.54930 ...
 11       60.6     0.54930 ...
 12     -109.9     0.54930 61443 34119
 23   119788.2     0.54930 61443 34055
 24  -229737.0     0.54930 61443 34055

9. Levin's transformations

In an influential survey[54], Smith and Ford concluded that the Levin u-transform is the best available across-the-board method. In this section we deal with Levin's transformations.
9.1 The derivation of the Levin T transformation

If a sequence (s_n) satisfies

    s_n = s + R_n Σ_{j=0}^{k-1} c_j / n^j,                                   (9.1)

where the R_n are nonzero functions of n and c_0 (≠ 0), ..., c_{k-1} are constants independent of n, then the limit or the antilimit s is represented as

        | s_n            s_{n+1}               ...  s_{n+k}               |
        | R_n            R_{n+1}               ...  R_{n+k}               |
        |  ...            ...                  ...   ...                  |
        | R_n/n^{k-1}    R_{n+1}/(n+1)^{k-1}   ...  R_{n+k}/(n+k)^{k-1}   |
    s = ----------------------------------------------------------------- .   (9.2)
        | 1              1                     ...  1                     |
        | R_n            R_{n+1}               ...  R_{n+k}               |
        |  ...            ...                  ...   ...                  |
        | R_n/n^{k-1}    R_{n+1}/(n+1)^{k-1}   ...  R_{n+k}/(n+k)^{k-1}   |

Thus, if (s_n) satisfies the asymptotic expansion

    s_n ~ s + R_n Σ_{j=0}^{∞} c_j / n^j,                                     (9.3)

then the ratio of determinants

    T_k^{(n)} = the same ratio of determinants as in (9.2)                   (9.4)

may be expected to be a good approximation to s. The transformation of (s_n) into the set of sequences {(T_k^{(n)})} is called the Levin T transformation.
By properties of determinants, R_n can be multiplied by any constant without affecting T_k^{(n)}. Levin[26] proposed three different choices of R_n: R_n = Δs_n, R_n = nΔs_{n-1}, and R_n = Δs_{n-1}Δs_n/(Δs_n - Δs_{n-1}), where Δs_n = s_{n+1} - s_n. These transformations are called the Levin t-transform, the Levin u-transform, and the Levin v-transform, respectively.
There is no need to compute the determinants in (9.4). Levin himself gave the following formula, where C(k, j) denotes the binomial coefficient:

    T_k^{(n)} = [ Σ_{j=0}^{k} (-1)^j C(k, j) ((n+j)/(n+k))^{k-1} s_{n+j}/R_{n+j} ]
              / [ Σ_{j=0}^{k} (-1)^j C(k, j) ((n+j)/(n+k))^{k-1} 1/R_{n+j} ].   (9.5)

T_k^{(n)} can also be recursively computed by the E-algorithm, which was described in Section 6:

    E_0^{(n)} = s_n,   g_{0,j}^{(n)} = n^{1-j} R_n,   n = 1, 2, ...;  j = 1, 2, ...,   (9.6a)
    E_k^{(n)} = E_{k-1}^{(n)} - g_{k-1,k}^{(n)} ΔE_{k-1}^{(n)} / Δg_{k-1,k}^{(n)},
                n = 1, 2, ...;  k = 1, 2, ...,                               (9.6b)
    g_{k,j}^{(n)} = g_{k-1,j}^{(n)} - g_{k-1,k}^{(n)} Δg_{k-1,j}^{(n)} / Δg_{k-1,k}^{(n)},
                n = 1, 2, ...;  k = 1, 2, ...;  j > k,                       (9.6c)

where Δ operates on the upper index n. By Theorem 6.1, we have E_k^{(n)} = T_k^{(n)}.
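Levin's explicit formula (9.5) can be sketched directly in Python. This is an illustrative sketch (the function name and dict-based indexing are mine); the caller supplies the remainder estimates R_n, e.g. the u-choice R_n = nΔs_{n-1}.

```python
# Levin's formula (9.5) for T_k^(n); C(k, j) is the binomial coefficient.

import math

def levin_T(s, R, k, n):
    """s[m] and R[m] must be defined for m = n, ..., n+k; R[m] nonzero."""
    num = den = 0.0
    for j in range(k + 1):
        w = (-1)**j * math.comb(k, j) * ((n + j) / (n + k))**(k - 1)
        num += w * s[n + j] / R[n + j]
        den += w / R[n + j]
    return num / den
```

With the u-choice on the partial sums of Σ 1/(i√i), a single T_10^{(1)} already gives many more digits than the raw partial sums, in line with Example 9.2.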

9.2 The convergence theorem of the Levin transformations

We denote by A the set of all functions f defined on [b, ∞) for some b > 0 that have asymptotic expansions in inverse powers of x of the form

    f(x) ~ Σ_{j=0}^{∞} c_j / x^j   as x → ∞.                                 (9.7)

The next theorem was given by Sidi[52] and Wimp[58] independently.

Theorem 9.1 (Sidi[52]; Wimp[58]) Suppose that the sequence (s_n) has an asymptotic expansion of the form

    s_n ~ s + λ^n n^θ [ c_0 + c_1/n + c_2/n² + ... ],                        (9.8)

where 0 < |λ| ≤ 1, θ < 0, and c_0 (≠ 0), c_1, ... are constants independent of n.

(1) Suppose that λ ≠ 1. If we set R_n = Δs_n or R_n = Δs_{n-1}Δs_n/(Δs_n - Δs_{n-1}), then

    T_k^{(n)} - s = O( λ^n n^{θ-2k} )   as n → ∞.                            (9.9)

(2) Suppose that λ = 1. If we set R_n = nΔs_{n-1} or R_n = Δs_{n-1}Δs_n/(Δs_n - Δs_{n-1}), then

    T_k^{(n)} - s = O( n^{θ-k} )   as n → ∞.                                 (9.10)

Theorem 9.1(1) shows that the Levin t- and v-transforms accelerate certain alternating series.
Recently, N. Osada[41] has extended the Levin transforms to vector sequences. These vector sequence transformations satisfy asymptotic properties similar to Theorem 9.1.
Example 9.1 Let us consider

    s_0 = 0,   s_n = Σ_{i=1}^{n} (-1)^{i-1} / √i,   n = 1, 2, ... .          (9.11)

We apply the Levin u-, v-, and t-transforms to (9.11). For the Levin u-transform we take T_{n-1}^{(1)}, while for the v- and t-transforms we take T_{n-2}^{(1)}. We give the significant digits of s_n, defined by -log_10 |s_n - s|, and those of T_{n-k}^{(1)} in Table 9.1.

Table 9.1
The significant digits of the Levin transforms applied to (9.11)

  n    s_n    u: T_{n-1}^{(1)}   v: T_{n-2}^{(1)}   t: T_{n-2}^{(1)}
  1   0.40
  2   0.51     0.99
  3   0.58     2.30     2.30     2.23
  4   0.63     3.69     4.14     3.12
  5   0.67     5.93     4.45     4.19
  6   0.71     6.33     5.53     5.44
  7   0.74     7.52     6.93     6.95
  8   0.77     9.46     9.17     8.82
  9   0.79    10.14     9.42     9.43
 10   0.81    11.27    10.70    10.77
 11   0.83    13.06    13.52    12.68
 12   0.85    13.94    13.18    13.16
 13   0.87    14.89    14.36    14.40
 14   0.88    15.56    15.56    15.52

Theorem 9.1(2) shows that the Levin u- and v-transforms accelerate logarithmically convergent sequences of the form

    s_n ~ s + n^θ [ c_0 + c_1/n + c_2/n² + ... ],                            (9.12)

where θ < 0 and c_0 (≠ 0), c_1, ... are constants.

Example 9.2 Let us now consider the partial sums of (1.5):

    s_n = Σ_{i=1}^{n} 1/(i√i).                                               (9.13)

We apply the Levin u-, v-, and t-transforms to (9.13) with s_0 = 0. As in the above example, we show the significant digits of s_n and T_{n-k}^{(1)} in Table 9.2. Both the u- and v-transforms accelerate, but the t-transform does not.
Table 9.2
The significant digits of the Levin transforms applied to (1.5)

  n    s_n    u: T_{n-1}^{(1)}   v: T_{n-2}^{(1)}   t: T_{n-2}^{(1)}
  1  -0.21
  2  -0.10    0.39
  3  -0.03    1.22    1.22    0.08
  4   0.03    2.37    2.26    0.18
  5   0.07    3.74    2.35    0.27
  6   0.11    4.16    3.00    0.35
  7   0.14    5.42    3.93    0.42
  8   0.16    6.18    5.29    0.48
  9   0.19    6.93    6.25    0.53
 10   0.21    9.01    7.08    0.58
 11   0.23    8.72    8.34    0.62
 12   0.25    8.89    8.55    0.66
 13   0.26    8.14    7.57    0.70
 14   0.28    7.58    6.97    0.73

When the asymptotic expansion of a logarithmically convergent sequence has logarithmic terms such as log n/n, the Levin u- and v-transforms do not work effectively.

Example 9.3 Consider the sequence defined by

    s_n = Σ_{j=2}^{n} log j / j².                                            (9.14)

As we described in Example 3.6, the sequence (s_n) converges to -ζ'(2) and has the asymptotic expansion

    s_n ~ -ζ'(2) - log n/n - 1/n + log n/(2n²) - log n/(6n³) + 1/(12n³) - ...,   (9.15)

where ζ(s) is the Riemann zeta function and ζ'(s) is its derivative. We apply the Levin u-, v-, and t-transforms to (9.14) and show the significant digits in Table 9.3.
Table 9.3
The significant digits of the Levin transforms applied to (9.14)

  n    s_n    u: T_{n-2}^{(2)}   v: T_{n-3}^{(2)}   t: T_{n-3}^{(2)}
  1   0.03
  2   0.12
  3   0.19    0.47    0.12    0.45
  4   0.26    0.49    0.49    0.47
  5   0.31    0.71    0.33    0.58
  6   0.36    1.77    1.18    0.69
  7   0.40    1.82    1.62    0.79
  8   0.43    2.23    3.18    0.87
  9   0.47    2.28    2.50    0.95
 10   0.50    2.40    2.72    1.02
 11   0.52    2.50    2.81    1.09
 12   0.55    2.59    2.91    1.15
 13   0.57    2.67    3.00    1.21
 14   0.60    2.75    3.08    1.26
 15   0.62    2.82    3.16    1.31

9.3 The d-transformation

Levin and Sidi[27] extended the Levin T transformation. Suppose that a sequence (s_n) satisfies

    s_n = s + Σ_{i=0}^{m-1} R_i^{(n)} Σ_{j=0}^{k-1} c_{i,j} / n^j,           (9.16)

where the R_i^{(n)} (i = 0, ..., m-1) are nonzero functions of n and the c_{i,j} are unknown constants.
Using Cramer's rule, s can be represented as the ratio of two determinants:

        | s_n                    ...  s_{n+mk}                      |
        | R_0^{(n)}              ...  R_0^{(n+mk)}                  |
        |  ...                   ...   ...                          |
        | R_{m-1}^{(n)}          ...  R_{m-1}^{(n+mk)}              |
        | R_0^{(n)}/n            ...  R_0^{(n+mk)}/(n+mk)           |
        |  ...                   ...   ...                          |
        | R_{m-1}^{(n)}/n^{k-1}  ...  R_{m-1}^{(n+mk)}/(n+mk)^{k-1} |
    s = ------------------------------------------------------------ .   (9.17)
        | 1                      ...  1                             |
        | R_0^{(n)}              ...  R_0^{(n+mk)}                  |
        |  ...                   ...   ...                          |
        | R_{m-1}^{(n)}          ...  R_{m-1}^{(n+mk)}              |
        | R_0^{(n)}/n            ...  R_0^{(n+mk)}/(n+mk)           |
        |  ...                   ...   ...                          |
        | R_{m-1}^{(n)}/n^{k-1}  ...  R_{m-1}^{(n+mk)}/(n+mk)^{k-1} |

We denote by T_{mk,n}^{(m)} the right-hand side of (9.17). The transformation of (s_n) into the set of sequences {(T_{mk,n}^{(m)})} is called the d^{(m)}-transformation.

On the analogy of the Levin u-transform, we take R_{q-1}^{(n)} = n^q Δ^q s_{n-q} (q = 1, ..., m) for the d^{(m)}-transformation. The d^{(m)}-transform can be computed by the E-algorithm. For n = 1, 2, ...,

    E_0^{(n)} = s_n,                                                         (9.18a)
    g_{0,pm+q}^{(n)} = n^{q-p} Δ^q s_{n-q},   p = 0, 1, ...;  q = 1, ..., m,   (9.18b)
    E_k^{(n)} = E_{k-1}^{(n)} - g_{k-1,k}^{(n)} ΔE_{k-1}^{(n)} / Δg_{k-1,k}^{(n)},   k = 1, 2, ...,   (9.18c)
    g_{k,j}^{(n)} = g_{k-1,j}^{(n)} - g_{k-1,k}^{(n)} Δg_{k-1,j}^{(n)} / Δg_{k-1,k}^{(n)},   k = 1, 2, ...;  j > k,   (9.18d)

where Δ operates on the index n. Then E_{mk}^{(n)} = T_{mk,n}^{(m)}.
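The E-algorithm recursion (9.18c,d) is generic: it only needs the sequence and a list of auxiliary sequences. The following Python sketch is illustrative (the function name and data layout are mine); the auxiliary sequences for the d^{(2)}-transform are built from (9.18b).

```python
# The E-algorithm (9.18c,d).  s: terms s_n; g: auxiliary sequences g[j][i],
# one list per column g_{0,j+1}.  Differences act along the index.

def e_algorithm(s, g):
    """Return the diagonal [E_0, E_1, ..., E_len(g)] at the first index."""
    E, G, diag = list(s), [list(gj) for gj in g], [s[0]]
    for _ in range(len(g)):
        gk = G.pop(0)
        step = lambda a, n: a[n] - gk[n] * (a[n + 1] - a[n]) / (gk[n + 1] - gk[n])
        E = [step(E, n) for n in range(len(E) - 1)]
        G = [[step(gj, n) for n in range(len(gj) - 1)] for gj in G]
        diag.append(E[0])
    return diag
```

For s_n = Σ log j/j² (Example 9.4) with m = 2, the six sequences n^{q-p} Δ^q s_{n-q} (p = 0, 1, 2; q = 1, 2) give the d^{(2)}-transform, which handles the log n terms that defeat the plain Levin transforms.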


Example 9.4 Consider s_n = Σ_{j=2}^{n} log j/j² once more. By (9.15), the n^k Δ^k s_{n-k} have asymptotic expansions of the form

    Σ_{j=1}^{∞} (a_j log n + b_j) / n^j.                                     (9.19)

Thus (s_n) can be represented as

    s_n ~ -ζ'(2) + Σ_{k=1}^{2} n^k Δ^k s_{n-k} Σ_{j=0}^{∞} c_{kj} n^{-j}.    (9.20)

This asymptotic expansion suggests that the d-transformation works well on (s_n). We apply the Levin u-transform, which can be regarded as the d^{(1)}-transform, the d^{(2)}-transform, and the d^{(3)}-transform to (s_n), and show the significant digits in Table 9.4.
Table 9.4
The significant digits of the d-transforms applied to (9.14)

  n    s_n    Levin u   d^{(2)}   d^{(3)}
  1   0.03
  2   0.12
  3   0.19    0.47
  4   0.26    0.49
  5   0.31    0.71    0.67
  6   0.36    1.77    0.88
  7   0.40    1.82    2.06    1.12
  8   0.43    2.23    2.11    1.31
  9   0.47    2.28    3.59    1.54
 10   0.50    2.40    4.23    2.68
 11   0.52    2.50    4.99    3.21
 12   0.55    2.59    5.36    3.99
 13   0.57    2.67    7.07    5.17
 14   0.60    2.75    6.66    5.65
 15   0.62    2.82    5.75    5.99

10. The Aitken Δ² process and its modifications

The Aitken Δ² process is the most famous as well as the oldest nonlinear sequence transformation. It can accelerate linearly convergent sequences, but it cannot accelerate some logarithmically convergent sequences. In this section we make this fact clear and describe some modifications of the Aitken Δ² process.

10.1 The acceleration theorem of the Aitken Δ² process

First we define two difference operators Δ and ∇. The forward difference operator Δ is defined by

    Δ⁰ s_n = s_n,                                                            (10.1a)
    Δ^k s_n = Δ^{k-1} s_{n+1} - Δ^{k-1} s_n,   k = 1, 2, ... .               (10.1b)

Similarly, the backward difference operator ∇ is defined by

    ∇⁰ s_n = s_n,                                                            (10.2a)
    ∇^k s_n = ∇^{k-1} s_n - ∇^{k-1} s_{n-1},   k = 1, 2, ...;  n ≥ k.        (10.2b)
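The two operators are related by ∇^k s_n = Δ^k s_{n-k}, which a few lines of Python make concrete. This is an illustrative sketch; the function names are mine.

```python
# Forward and backward differences (10.1)-(10.2) on a list of terms.

def fwd_diff(s, k):
    """Delta^k applied along the list: [Delta^k s_0, Delta^k s_1, ...]."""
    for _ in range(k):
        s = [s[i + 1] - s[i] for i in range(len(s) - 1)]
    return s

def bwd_diff(s, k, n):
    """nabla^k s_n (0-based index n, needs n >= k); equals Delta^k s_{n-k}."""
    return fwd_diff(s, k)[n - k]
```

For s = (1, 4, 9, 16, 25) the second forward differences are all 2, as expected for a quadratic.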

Suppose that a sequence (s_n) satisfies s_n - s = cλ^n. Then we deduce

    (s_{n+2} - s)/(s_{n+1} - s) = (s_{n+1} - s)/(s_n - s).                   (10.3)

Solving (10.3) for the unknown s, we have

    s = (s_n s_{n+2} - s_{n+1}²) / Δ²s_n = s_n - (Δs_n)² / Δ²s_n.            (10.4)

The Aitken Δ² process is defined by

    t_n = s_n - (Δs_n)² / Δ²s_n,                                             (10.5)

or equivalently

    t_n = s_{n+1} - Δs_{n+1} ∇s_{n+1} / (Δs_{n+1} - ∇s_{n+1}),               (10.6)
    t_n = s_{n+2} - (∇s_{n+2})² / ∇²s_{n+2}.                                 (10.7)

As is well known, the Aitken Δ² process accelerates any linearly convergent sequence.
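Formula (10.5) and its iteration (10.10) below can be sketched as follows. This is an illustrative Python sketch (function names are mine); it is exact on s + cλ^n and, per Example 10.1, very effective on alternating series.

```python
# The Aitken Delta^2 process (10.5) and the iterated process (10.10).

def aitken(s):
    """One sweep: t_n = s_n - (Delta s_n)^2 / Delta^2 s_n."""
    return [s[n] - (s[n + 1] - s[n])**2 / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

def iterated_aitken(s):
    """Repeat the sweep until fewer than three terms remain."""
    cols = [list(s)]
    while len(cols[-1]) >= 3:
        cols.append(aitken(cols[-1]))
    return cols
```

On the first 11 partial sums of the Leibniz series, the deepest column agrees with π/4 to about 10 digits, matching Example 10.1.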

Theorem 10.1 (Henrici[21]) Let (s_n) be a sequence satisfying

    s_{n+1} - s = (A + η_n)(s_n - s),                                        (10.8)

where A is a constant, A ≠ 1, and η_n → 0 as n → ∞. Let (t_n) be defined by (10.5). Then

    lim_{n→∞} (t_n - s)/(s_{n+2} - s) = 0.                                   (10.9)

Proof. See [21, p.73].

The Aitken Δ² process can be applied iteratively as follows. The two-dimensional array (T_k^{(n)}) is defined by

    T_0^{(n)} = s_n,   n = 1, 2, ...,                                        (10.10a)
    T_{k+1}^{(n)} = T_k^{(n)} - (T_k^{(n+1)} - T_k^{(n)})² / (T_k^{(n+2)} - 2T_k^{(n+1)} + T_k^{(n)}),
                    k = 0, 1, ...;  n = 1, 2, ... .                          (10.10b)

The algorithm (10.10) is called the iterated Aitken Δ² process. Since T_1^{(n)} = ε_2^{(n)} for any n ∈ N, by Theorem 8.6 we have the following theorem.¹¹

Theorem 10.2 Suppose that a sequence (s_n) satisfies

    s_n ~ s + λ^n (n+b)^θ Σ_{j=0}^{∞} c_j (n+b)^{-j},   c_0 ≠ 0,             (10.11)

where 0 < |λ| ≤ 1, λ ≠ 1, and θ ≠ 0, 2, ..., 2k-2. Then

    T_k^{(n)} ~ s + ( c_0 λ^{n+2k} (n+b)^{θ-2k} (-θ)(-θ+2) ... (-θ+2k-2) / (λ-1)^{2k} )
                    [ 1 + O(1/(n+b)) ].                                      (10.12)

Proof. By induction on k, the proof follows from Theorem 8.6.

Example 10.1 We apply the iterated Aitken Δ² process to the partial sums of the alternating series

    s_n = Σ_{i=1}^{n} (-1)^{i-1} / (2i - 1).                                 (10.13)

We give s_n and T_k^{(n-2k)} in Table 10.1, where k = ⌊(n-1)/2⌋. With the first 11 terms, we obtain 10 significant digits. For comparison, the significant digits, abbreviated to SD, of ε_{2k}^{(n-2k)} are also given in Table 10.1. The iterated Aitken Δ² process is better than the ε-algorithm, because 0 < (-θ)(-θ+2) ... (-θ+2k-2) < k!(-θ)_k.

¹¹ Theorem 10.2 was given by K. Murota and M. Sugihara[32].


Table 10.1
The iterated Aitken Δ² process applied to (10.13)

  n    s_n     T_k^{(n-2k)}    SD of Aitken Δ²   SD of ε-algorithm
  1   1.000
  2   0.666
  3   0.866   0.791             2.20              2.20
  4   0.723   0.7833            2.69              2.69
  5   0.834   0.78552           3.89              3.73
  6   0.744   0.78536           4.45              4.30
  7   0.820   0.78539 ...       5.78              5.25
  8   0.754   0.78539 ...       6.35              5.87
  9   0.813   0.78539 ...       7.82              6.78
 10   0.760   0.78539 ...       8.39              7.43
 11   0.808   0.78539 ...       8.31              8.31
 12   0.764   0.78539 81634    10.55              8.98
 13   0.804   0.78539 81633    12.46              9.84
 14   0.767   0.78539 81633    12.86             10.52
 15   0.802   0.78539 81633    14.37             11.37

10.2 The derivation of the modified Aitken Δ² formula

On the other hand, the Aitken Δ² process cannot accelerate logarithmic sequences, so some modifications of the Δ² process have been considered. To illustrate, we take up a model sequence (s_n) satisfying

    s_n = s + c_0 n^θ + c_1 n^{θ-1} + c_2 n^{θ-2} + O(n^{θ-3}),              (10.14)

where θ < 0 and c_0 (≠ 0), c_1, c_2 are constants.


Lemma 10.3 Suppose that a sequence satisfies (10.14). Then the following asymptotic formulae hold.

    (1)  s_n - (Δs_n)²/Δ²s_n = s + (c_0/(1-θ)) n^θ + O(n^{θ-1}).
    (2)  s_n - ((θ-1)/θ) (Δs_n)²/Δ²s_n = s + O(n^{θ-1}).
    (3)  s_n - ((θ-1)/θ) Δs_n ∇s_n/(Δs_n - ∇s_n) = s + O(n^{θ-2}).

Proof. See Appendix A.

A sequence transformation

    s_n ↦ s_n - ((θ-1)/θ) (Δs_n)²/Δ²s_n                                      (10.15)

was considered by Overholt[42]. A sequence transformation

    s_n ↦ s_n - ((θ-1)/θ) Δs_n ∇s_n/(Δs_n - ∇s_n)                            (10.16)

was first considered by Drummond[15]. For a sequence satisfying (10.14), Drummond's modification (10.16) is better than Overholt's modification (10.15).
Drummond's modification has been extended by Bjørstad, Dahlquist and Grosse[4] as follows:

    s_0^{(0)} = 0,                                                           (10.17a)
    s_0^{(n)} = s_n,   n = 1, 2, ...,                                        (10.17b)
    s_{k+1}^{(n)} = s_k^{(n)} - ((θ-2k-1)/(θ-2k)) Δs_k^{(n)} ∇s_k^{(n)} / (Δs_k^{(n)} - ∇s_k^{(n)}),
                    k = 0, 1, ...;  n ≥ k + 1,                               (10.17c)

where Δ and ∇ operate on the upper indexes.¹² The formula (10.17) is called the modified Aitken Δ² formula. The following two theorems are fundamental.
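The scheme (10.17) for a known exponent θ can be sketched as follows. This is an illustrative Python sketch (the function name and column layout are mine); Δ and ∇ act along the list of terms.

```python
# The modified Aitken Delta^2 formula (10.17) for a known exponent theta.

def modified_aitken(s, theta):
    """Sweep with factor (theta - 2k - 1)/(theta - 2k) at level k."""
    cols, k = [list(s)], 0
    while len(cols[-1]) >= 3:
        c = cols[-1]
        fac = (theta - 2 * k - 1) / (theta - 2 * k)
        new = []
        for n in range(1, len(c) - 1):
            f, b = c[n + 1] - c[n], c[n] - c[n - 1]   # Delta and nabla
            new.append(c[n] - fac * f * b / (f - b))
        cols.append(new)
        k += 1
    return cols
```

For the partial sums of Σ 1/(i√i) the exponent is θ = -1/2, and 11 terms give roughly 10 exact digits of ζ(3/2), in line with Example 10.2.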
Theorem 10.4 (Bjørstad, Dahlquist and Grosse[3, p.7]) If s_k^{(n)} is represented as

    s_k^{(n)} - s = n^{θ-2k} [ c_0^{(k)} + c_1^{(k)}/n + c_2^{(k)}/n² + O(1/n³) ],   c_0^{(k)} ≠ 0,   (10.18)

then

    s_{k+1}^{(n)} - s = n^{θ-2k-2} [ c_0^{(k+1)} + O(1/n) ],                 (10.19)

where

    c_0^{(k+1)} = (c_0^{(k)}/12)(1 + 2k - θ) - (c_1^{(k)})²/(c_0^{(k)}(2k-θ)²)
                  + 2c_2^{(k)}/((2k-θ)(2k+1-θ)).                             (10.20)

¹² The quantity defined by Bjørstad, Dahlquist and Grosse agrees with s_k^{(n)} in (10.17).

Theorem 10.5 (Bjørstad, Dahlquist and Grosse) With the above notation, if c_0^{(j)} ≠ 0 for j = 0, ..., k-1, then

    s_k^{(n)} - s = O(n^{θ-2k})   as n → ∞.                                  (10.21)

Proof. By induction on k, the proof follows from Theorem 10.4.

Example 10.2 We apply the iterated Aitken Δ² process and the modified Aitken Δ² formula to the partial sums of (1.5). We give s_n, T_k^{(n-2k)}, and s_l^{(n-l)} in Table 10.2, where k = ⌊(n-1)/2⌋ and l = ⌊n/2⌋. With the first 11 terms, the modified Aitken Δ² formula gives 10 exact digits.

Table 10.2
The iterated Aitken Δ² process and the modified Aitken Δ² formula applied to (1.5)

  n    s_n    T_k^{(n-2k)}   s_l^{(n-l)}
  1   1.00
  2   1.35                   2.640
  3   1.54    1.77           2.6205
  4   1.67    1.90           2.6159
  5   1.76    2.14           2.61232 ...
  6   1.82    2.19           2.61237 ...
  7   1.88    2.33           2.61237 ...
  8   1.92    2.36           2.61237 ...
  9   1.96    2.44           2.61237 ...
 10   1.99    2.46           2.61237 ...
 11   2.02    2.539          2.61237 53487
 12   2.04    2.524          2.61237 53486 ...
 13   2.06    2.525          2.61237 53486 ...
 14   2.08    2.522          2.61237 53486 ...

10.3 The automatic modified Aitken Δ² formula

The main drawback of the modified Aitken Δ² formula is that it needs explicit knowledge of the exponent θ such that

    s_n ~ s + n^θ Σ_{j=0}^{∞} c_j / n^j,                                     (10.22)

where θ < 0 and c_0 (≠ 0), c_1, ... are constants. Drummond[15] remarked that for a sequence satisfying (10.22),

    θ ≈ 1 / Δ( Δs_n / Δ²s_{n-1} ) + 1,                                       (10.23)

where the sign ≈ denotes approximate equality. Moreover, Bjørstad et al.[4] showed that

    θ_n ~ θ + n^{-2} Σ_{j=0}^{∞} t_j / n^j,                                  (10.24)

where θ_n is defined by the right-hand side of (10.23) and t_0 (≠ 0), t_1, ... are constants. Thus the sequence (θ_n) itself can be accelerated by the modified Aitken Δ² process with θ = -2 in (10.17c).
Suppose that the first n terms s_1, ..., s_n of a sequence satisfying (10.22) are given. Then we define (t_k^{(m)}) as follows.

Initialization: t_0^{(0)} = 0.
For m = 1, ..., n-2,

    t_0^{(m)} = 1 / Δ( Δs_m / Δ²s_{m-1} ) + 1,                               (10.25a)
    t_{k+1}^{(m)} = t_k^{(m)} - ((2k+3)/(2k+2)) Δt_k^{(m)} ∇t_k^{(m)} / (Δt_k^{(m)} - ∇t_k^{(m)}),
                    k = 0, ..., ⌊m/2⌋ - 1,                                   (10.25b)

where Δt_k^{(m)} = t_k^{(m+1)} - t_k^{(m)} and ∇t_k^{(m)} = t_k^{(m)} - t_k^{(m-1)}. Then we put

    θ_n = t_{k-1}^{(k)}   if n is odd,
          t_k^{(k)}       if n is even,                                      (10.26)
where k = ⌊(n-1)/2⌋. Substituting θ_n for θ in the definition of the modified Aitken Δ² formula (10.17), we obtain the following.

Initialization: s_{n,0}^{(0)} = 0.
For m = 1, ..., n,

    s_{n,0}^{(m)} = s_m,                                                     (10.27a)
    s_{n,k+1}^{(m)} = s_{n,k}^{(m)} - ((θ_n-2k-1)/(θ_n-2k)) Δs_{n,k}^{(m)} ∇s_{n,k}^{(m)} / (Δs_{n,k}^{(m)} - ∇s_{n,k}^{(m)}),
                      k = 0, ..., ⌊n/2⌋ - 1,                                 (10.27b)

where Δs_{n,k}^{(m)} = s_{n,k}^{(m+1)} - s_{n,k}^{(m)} and ∇s_{n,k}^{(m)} = s_{n,k}^{(m)} - s_{n,k}^{(m-1)}.
This scheme is called the automatic modified Aitken Δ² formula. The data flow of this scheme, in the case n = 4, is as follows: the terms s_1, ..., s_4 feed the table entries s_{4,0}^{(1)}, ..., s_{4,0}^{(4)} of (10.27) and the exponent estimates t_0^{(1)}, t_0^{(2)} (with t_0^{(0)} = 0); these are combined into t_1^{(1)} = θ_4, which supplies the factors used to compute s_{4,1}^{(1)}, s_{4,1}^{(2)}, s_{4,1}^{(3)}, and finally s_{4,2}^{(2)}.
For a given tolerance ε, this scheme is stopped if n is even and

    |s_{n,k}^{(k)} - s_{n,k-1}^{(k)}| < ε,                                   (10.28a)

or if n is odd and

    |s_{n,k}^{(k+1)} - s_{n,k}^{(k)}| < ε,                                   (10.28b)

where k = ⌊n/2⌋.
A FORTRAN subroutine of the automatic modified Aitken Δ² formula is given in the Appendix.
Example 10.3 We apply the automatic modied Aitken 2 formula to the partial sums
(nk)

of (1.5). We give sn , n in (10.26) and sn,k

in Table 10.3, where k = bn/2c. By the

rst 11 terms, we obtain 11 exact digits by the automatic modied Aitken 2 formula.

70

Table 10.3
The automatic modied Aitken 2 formula
applying to (1.5)
n

sn

1
2
3
4
5
6
7
8
9
10
11
12
13
14

1.00
1.35
1.54
1.67
1.76
1.82
1.88
1.92
1.96
1.99
2.02
2.04
2.06
2.08
2.61

(nk)

0.544
0.5071
0.50015
0.50001
0.49999
0.49999
0.50000
0.50000
0.50000
0.49999
0.49999
0.50000
0.50000

sn,k

3
9938
9967
017
0068
00001
99992
99999
00000
00000

71

3
3
23
79
00

2.55
2.604
2.61218
2.61236
2.61235
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237

00
4
541
525
5314
53486
53488
53487
53486
53486

35
2
2
20
8549

11. Lubkins W transformation


Lubkins W transformation is the rst nonlinear sequence transformation that can
accelerate not only linear sequences but also some logarithmic sequences.
11.1 The derivation of Lubkins W transformation
Almost all logarithmically convergent sequences that occur in practice satisfy
lim

sn+1 s
sn+1
= lim
= 1.
n
sn s
sn

(11.1)

Suppose that a sequence (sn ) satises


sn+2 s
sn+1 s
= ,
sn+1
1
sn

1
lim

(11.2)

where (6= 0), s R. As Kowalewski[25, p.268] proved, 0 < 1. The equality (11.2)
is equivalent to
lim

sn+1 sn
= .
(sn+1 s)2 sn

(11.3)

If in (11.3) is known, solving the equation


sn+1 sn
= ,
(sn+1 s)2 sn

(11.4)

for the unknown s, we have


s = sn+1

1 sn sn+1
.
2 sn

(11.5)

When = 1, the right-hand side of (11.5) coincides with the Aitken 2 process. When
0 < < 1, that of (11.5) coincides with the modied Aitken 2 formula.
If in (11.3) is unknown, solving the equation
sn+2 sn+1
sn+1 sn
=
2
(sn+2 s) sn+1
(sn+1 s)2 sn

(11.6)

for the unknown s, we obtain


s = sn+1

sn+1 sn 2 sn+1
.
sn+2 2 sn sn 2 sn+1
72

(11.7)

A sequence transformation
sn+1 sn 2 sn+1
W : sn
7 sn+1
sn+2 2 sn sn 2 sn+1

(11.8)

is called Lubkins W transformation[29], or the W transform for short. The relation


between the W transform and the modied Aitken 2 process will be treated in subsection
11.3.
For a sequence (sn ), we dene Wn by13
Wn = sn+1

sn+1 sn 2 sn+1
.
sn+2 2 sn sn 2 sn+1

(11.9)

Wn can be represented as various forms.


sn+1 sn+2 2 sn
Wn = sn+2
sn+2 2 sn sn 2 sn+1
(
)
sn
2

s
( n)
=
1
2
sn
sn+1
1
sn
= sn+2
.
2
1
1

+
sn+2
sn+1
sn
Since the Aitken 2 process can be represented as
(
)
sn

sn
),
tn = (
1

sn

(11.10)

(11.11)

(11.12)

(11.13)

the formula (11.11) means that the W transform is a modication of the Aitken 2
process. Lubkin[29], Tucker[55] and Wimp[58] studied the relationship between the accelerativeness for the W transform and the Aitken 2 process.
We remark that Wn is also represented as

sn+1
sn+2

sn sn+1 /2 sn sn+1 sn+2 /2 sn+1


,
Wn =

1
1

sn sn+1 /2 sn sn+1 sn+2 /2 sn+1


13 W

in (11.9) coincides with Lubkins original Wn+2 .

73

(11.14)

(n+1)

which is the Levin v-transform T1

with Rn = sn1 sn /2 sn1 in (9.5).

11.2 The exact and acceleration theorems of the W transformation


By construction, the W transform is exact on (sn ) satisfying (11.6). More precisely,
the following theorem holds.
Theorem 11.1 (Cordellier[13]) Suppose that sn+2 2 sn 6= sn 2 sn+1 for n N0 .
Then the W transform is exact on (sn ) if and only if sn can be represented as
sn = s + K

n1

j=1

ja + b + 1
,
ja + b

(11.15)

where K is a nonzero real, a 0 and


{
b < 12 and b 6= 1 if a = 0,
ja + b 6= 0, 1 for j N, if a < 0.
Proof. See [13, p.391] or Osada[40, Theorem 2].

Example 11.2 Let K be a nonzero real.


(1) (Wimp[58]) Setting a = 0 in (11.15), the W transform is exact on sn = s+K(1+
n1

1/b)

, even if (sn ) deverges.

(2) Setting a = 1 and b = 1 in (11.15), the W transform is exact on sn = s+K/n.


We cite two theorems that were proved by Lubkin.
Theorem 11.3 (Lubkin[29, Theorem 10]) Suppose that a sequence (sn ) converges and
lim sn+1 /sn = . Suppose that one of the following three conditions holds:

(i) 6= 0, 1,
(ii) = 0 and sn+1 /sn is of constant sign for suciently large n,

(iii) = 1 and (1 + sn+2 /sn+1 )/(1 + sn+1 /sn ) > 1.


Then the W transform accelerates (sn ).
Theorem 11.4 (Lubkin[29, Theorem 12]) Suppose that a sequence (sn ) converges and
sn+1 /sn has an asymptotic expansion of the form
sn /sn1 c0 +

c1
c2
+ 2 + ...,
n
n

(11.16)

where c0 , c1 , . . . are constants. Then the W transform accelerates (sn ).


The preceding theorems show that the W transform accelerates not only linear
sequences (Theorem 11.3 (i)(iii)) but also a large class of logarithmic sequences (the case
c0 = 1 in Theorem 11.4). However, the Aitken 2 process has not this property.
74

11.3 The iteration of the W transformation

The W transform can be applied iteratively as follows¹⁴: For n = 0, 1, \ldots,¹⁵

W_0^{(n)} = s_n,   (11.17a)

W_{k+1}^{(n)} = W_k^{(n+1)} - \frac{\Delta W_k^{(n)}\,\Delta W_k^{(n+1)}\,\Delta^2 W_k^{(n+1)}}{\Delta W_k^{(n+2)}\,\Delta^2 W_k^{(n)} - \Delta W_k^{(n)}\,\Delta^2 W_k^{(n+1)}},\qquad k = 0, 1, \ldots,   (11.17b)

where \Delta W_k^{(n)} = W_k^{(n+1)} - W_k^{(n)}. The algorithm (11.17) is called the iterated W transformation.
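The recursion (11.17) can be sketched compactly by iterating the closed-form W step; each application consumes three terms of the current row. The helper name and the demonstration series (the alternating series of Example 11.1 below) are ours.

```python
import math

def iterated_w(s, levels):
    """Iterated W transformation (11.17): apply the W step `levels` times."""
    row = list(s)
    for _ in range(levels):
        ds = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        d2 = [ds[i + 1] - ds[i] for i in range(len(ds) - 1)]
        nxt = []
        for n in range(len(row) - 3):
            den = ds[n + 2] * d2[n] - ds[n] * d2[n + 1]
            if den == 0:            # fully converged in double precision
                return row
            nxt.append(row[n + 1] - ds[n] * ds[n + 1] * d2[n + 1] / den)
        row = nxt
    return row

# Partial sums of sum_{i>=1} (-1)^(i-1)/sqrt(i), which converge to 0.60489 86434 21630.
s, t = [], 0.0
for i in range(1, 18):
    t += (-1) ** (i - 1) / math.sqrt(i)
    s.append(t)
print(iterated_w(s, 5)[-1])  # close to 0.60489864342163
```

With 17 terms and five W levels this reproduces the behavior reported in Table 11.1.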
In order to give an asymptotic formula of the iterated W transform, we define the \theta_\lambda transformation (Sablonnière[47]). For a sequence (s_n), the \theta_\lambda transform is defined by

\theta_\lambda(s_n) = s_{n+1} - \frac{\lambda+1}{\lambda}\,\frac{\Delta s_n\,\Delta s_{n+1}}{\Delta^2 s_n},   (11.18)

where \lambda is a positive parameter. For a sequence (s_n) satisfying

s_n \sim s + n^{-\lambda} \sum_{j=0}^{\infty} c_j n^{-j},   (11.19)

we have \theta_\lambda(s_n) = \bar{s}_1^{(n+1)} in (10.17).

Lemma 11.5 (Sablonnière) Under the above notation,

W_n = s_{n+1} - \frac{\Delta s_{n+1}\,\bigl(s_{n+1} - \theta_\lambda(s_n)\bigr)}{\Delta\bigl(s_{n+1} - \theta_\lambda(s_n)\bigr)},\qquad \lambda > 0.   (11.20)

Proof. By the definition of \theta_\lambda, we have

s_{n+1} - \theta_\lambda(s_n) = \frac{\lambda+1}{\lambda}\,\frac{\Delta s_n\,\Delta s_{n+1}}{\Delta^2 s_n},   (11.21)

and

\Delta\bigl(s_{n+1} - \theta_\lambda(s_n)\bigr) = \frac{\lambda+1}{\lambda}\left(\frac{\Delta s_{n+1}\,\Delta s_{n+2}}{\Delta^2 s_{n+1}} - \frac{\Delta s_n\,\Delta s_{n+1}}{\Delta^2 s_n}\right).   (11.22)

Thus we can obtain the result (11.20).

¹⁴ W_k^{(n)} in (11.17) coincides with T_k^{(n)} of Weniger[57] and W_{n+3k,k} of Osada[39, p.363].
¹⁵ When the sequence s_n is defined for n \ge 1, substitute n = 1, 2, \ldots for n = 0, 1, \ldots.

Using the above lemma and the asymptotic formula of the modified Aitken \Delta^2 formula (Theorem 10.4), Sablonnière proved the following theorem.

Theorem 11.6 (Sablonnière[47]) Suppose that a sequence (s_n) satisfies (11.19). If W_k^{(n)} is represented as

W_k^{(n)} - s = n^{-\lambda-2k}\left[c_0^{(k)} + \frac{c_1^{(k)}}{n} + \frac{c_2^{(k)}}{n^2} + O\!\left(\frac{1}{n^3}\right)\right],\qquad c_0^{(k)} \ne 0,   (11.23)

then

W_{k+1}^{(n)} - s = n^{-\lambda-2k-2}\left[c_0^{(k+1)} + O\!\left(\frac{1}{n}\right)\right],   (11.24)

where

c_0^{(k+1)} = \frac{c_0^{(k)}(\lambda+1+2k)}{6(\lambda+2k)} - \frac{2\bigl(c_0^{(k)}k(\lambda+2k) - c_1^{(k)}\bigr)^2}{c_0^{(k)}(\lambda+2k)^3} + \frac{c_0^{(k)}k^2(\lambda+2k)(\lambda+2k+1) - 4c_1^{(k)}k(\lambda+2k+1) + 4c_2^{(k)}}{(\lambda+2k)^2(\lambda+2k+1)}.   (11.25)

Proof. See Appendix B.

Theorem 11.7 (Sablonnière[47]) With the above notation, if c_0^{(j)} \ne 0 for j = 0, 1, \ldots, k-1, then

W_k^{(n)} - s = O(n^{-\lambda-2k})\qquad as\ n \to \infty.   (11.26)

Proof. By induction on k, the proof follows from Theorem 11.6.

Sablonnière has also given an asymptotic formula of the W transform applied to a sequence satisfying

s_n \sim s + n^{\lambda} \sum_{j=0}^{\infty} c_j n^{-j/2},   (11.27)

where \lambda < 0 and c_0 (\ne 0), c_1 (\ne 0), c_2, \ldots are constants.

Theorem 11.8 (Sablonnière[47]) Suppose that a sequence (s_n) satisfies (11.27). If W_k^{(n)} has an asymptotic formula of the form

W_k^{(n)} - s = c_0^{(k)} n^{\lambda-k/2} + c_1^{(k)} n^{\lambda-k/2-1/2} + O(n^{\lambda-k/2-1}),\qquad c_0^{(k)} \ne 0,   (11.28)

then

W_{k+1}^{(n)} - s = c_0^{(k+1)} n^{\lambda-k/2-1/2} + c_1^{(k+1)} n^{\lambda-k/2-1} + O(n^{\lambda-k/2-3/2}),   (11.29)

where

c_0^{(k+1)} = \frac{c_1^{(k)}}{2(k-2\lambda)(k+2-2\lambda)}   (11.30)

and

c_1^{(k+1)} = \frac{(c_1^{(k)})^2 (k-1-2\lambda)(k+1-2\lambda)^2}{c_0^{(k)} (k-2\lambda)^4 (k+2-2\lambda)^2}.   (11.31)

Recently, Osada[39] has extended the iterated W transform to vector sequences; the
Euclidean W transform and the vector W transform. A similar property to Theorem
11.7 holds for both transforms.
11.4 Numerical examples of the iterated W transformation

For linearly convergent sequences the W transform works well.

Example 11.1 Let us consider

s_n = \sum_{i=1}^{n} \frac{(-1)^{i-1}}{\sqrt{i}}.   (11.32)

We apply the iterated W transform to (11.32). We give s_n and W_k^{(n-3k)} in Table 11.1, where k = \lfloor (n-1)/3 \rfloor. By the first 17 terms, we obtain 15 exact digits.

Table 11.1
The iterated W transform applied to (11.32)

  n    s_n     W_k^{(n-3k)}
  1    1.00
  2    0.29
  3    0.87
  4    0.37    0.6061
  5    0.81    0.60442
  6    0.40    0.60511
  7    0.78    0.60490
  8    0.43    0.60489
  9    0.76    0.60489
 10    0.45    0.60489
 11    0.75    0.60489
 12    0.46    0.60489
 13    0.74    0.60489
 14    0.47    0.60489
 17    0.72    0.60489

Example 11.2 We apply the iterated W transform to the partial sums of (1.5). We
(n3k)

give sn , Wk

in Table 11.2, where k = b(n 1)/3c. By the rst 15 terms, we obtain

9 exact digits. For this series, the iterated Aitken 2 process cannot accelerate but the
iterated W transform can do. However, the W transform is inferior to the automatic
modied Aitken 2 formula.
Table 11.2
The iterated W transform applying to (1.5)
n

sn

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15

2.0
1.35
1.54
1.67
1.76
1.82
1.88
1.92
1.96
1.99
2.02
2.04
2.06
2.08
2.10
2.61

(n3k)

Wk

2.590
2.6019
2.6063
2.61234
2.61236
2.61236
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237

3
2
90
527
5326
5337
5330
5365
53440
53486

Example 11.3 We apply the iterated W transform to the partial sums of \zeta(1.5) + \zeta(2) = 4.25730 94155 33714:

s_n = \sum_{i=1}^{n} \frac{\sqrt{i}+1}{i^2}.   (11.33)

We give s_n and W_k^{(n-3k)} in Table 11.3, where k = \lfloor (n-1)/3 \rfloor. For comparison, we show the Levin v-transform T_{n-2}^{(1)}. The W transform is slightly better than the Levin v-transform.

Table 11.3
The iterated W transform applied to \zeta(1.5) + \zeta(2)

  n    s_n     W_k^{(n-3k)}    T_{n-2}^{(1)}
  1    2.0
  2    2.6                     4.05
  3    2.9     4.14            4.18
  4    3.09    4.17            4.212
  5    3.22    4.19            4.226
  6    3.31    4.2568          4.234
  7    3.39    4.2590          4.240
  8    3.45    4.2596          4.243
  9    3.50    4.2596          4.246
 10    3.54    4.2596          4.248
 11    3.58    4.2596          4.249
 12    3.61    4.2596          4.2509
 13    3.63    4.2596          4.2518
 14    3.66    4.2591          4.2525
 15    3.70    4.25727 2       4.2546
 20    3.77    4.25730 9       4.25730

12. The ρ-algorithm

As Smith and Ford[54] pointed out, the ρ-algorithm of Wynn works well on some logarithmic sequences but fails on other logarithmic sequences. In this section we make this fact clear.

12.1 The reciprocal differences and the ρ-algorithm

Since the ρ-algorithm is a special case of reciprocal differences, we begin with the definition of reciprocal differences. Let f(x) be a function. The reciprocal differences of f(x) with arguments x_0, x_1, \ldots are defined recursively by

\rho_0(x_0) = f(x_0),   (12.1a)

\rho_1(x_0, x_1) = \frac{x_0 - x_1}{\rho_0(x_0) - \rho_0(x_1)},   (12.1b)

\rho_k(x_0, \ldots, x_k) = \rho_{k-2}(x_1, \ldots, x_{k-1}) + \frac{x_0 - x_k}{\rho_{k-1}(x_0, \ldots, x_{k-1}) - \rho_{k-1}(x_1, \ldots, x_k)},\qquad k = 2, 3, \ldots.   (12.1c)

Substituting x for x_0 in (12.1), we have the following continued fraction:

f(x) = f(x_1) + \cfrac{x - x_1}{\rho_1(x_1, x_2) + \cfrac{x - x_2}{\rho_2(x_1, x_2, x_3) - \rho_0(x_1) + \cfrac{x - x_3}{\ddots}}}   (12.2a)

The last two constituent partial fractions are as follows:

\cfrac{x - x_{l-1}}{\rho_{l-1}(x_1, \ldots, x_l) - \rho_{l-3}(x_1, \ldots, x_{l-2}) + \cfrac{x - x_l}{\rho_l(x, x_1, \ldots, x_l) - \rho_{l-2}(x_1, \ldots, x_{l-1})}}.   (12.2b)

The equality of (12.2) holds for x = x_1, \ldots, x_l. The right-hand side of (12.2) is called Thiele's interpolation formula.

The ρ-algorithm of Wynn[60] is defined by substituting s_n for f(x_n) and \rho_k^{(n)} for \rho_k(x_n, \ldots, x_{n+k}) in the reciprocal differences, with x_n = n:

\rho_0^{(n)} = s_n,   (12.3a)

\rho_1^{(n)} = \frac{1}{\rho_0^{(n+1)} - \rho_0^{(n)}},   (12.3b)

\rho_k^{(n)} = \rho_{k-2}^{(n+1)} + \frac{k}{\rho_{k-1}^{(n+1)} - \rho_{k-1}^{(n)}},\qquad k = 2, 3, \ldots.   (12.3c)

Thiele's interpolation formula implies the continued fraction

s_n = s_m + \cfrac{n-m}{\rho_1^{(m)} + \cfrac{n-m-1}{\rho_2^{(m)} - \rho_0^{(m)} + \cfrac{n-m-2}{\rho_3^{(m)} - \rho_1^{(m)} + \ddots}}}.   (12.4)

Neglecting the term

\cfrac{n-m-2k}{\rho_{2k+1}^{(m)} - \rho_{2k-1}^{(m)} + \ddots},   (12.5)

we obtain

s_n \fallingdotseq \frac{\rho_{2k}^{(m)}\,n^k + a_1 n^{k-1} + \cdots + a_k}{n^k + b_1 n^{k-1} + \cdots + b_k},   (12.6)

where a_1, \ldots, a_k, b_1, \ldots, b_k are constants independent of n and the sign \fallingdotseq denotes approximate equality. By construction, the equality of (12.6) holds for n = m, \ldots, m+2k.

Suppose a sequence (s_n) with the limit s satisfies

s_n = \frac{s\,n^k + a_1 n^{k-1} + \cdots + a_k}{n^k + b_1 n^{k-1} + \cdots + b_k}.   (12.7)

Then, by the above discussion, \rho_{2k}^{(m)} = s for any m. Thus the ρ-algorithm is a rational extrapolation method which is exact on a sequence satisfying (12.7).

A sequence satisfying (12.7) has an asymptotic expansion of the form

s_n \sim s + n^{\theta}\left(c_0 + \frac{c_1}{n} + \frac{c_2}{n^2} + \cdots\right),\qquad as\ n \to \infty,   (12.8)

where \theta is a negative integer and the c_j are constants independent of n. Conversely, suppose that \theta is a negative integer. If we truncate the terms up to c_k/n^k in (12.8), then s_n satisfies (12.7). This fact suggests that the ρ-algorithm works well on (12.8) if and only if \theta is a negative integer, which will be proved at the end of this section.

Recently, Osada[39] has extended the ρ-algorithm to vector sequences: the vector ρ-algorithm and the topological ρ-algorithm.
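The recursion (12.3) is easy to implement; the sketch below (our own code, with x_n = n) returns the highest even-order entry that can be formed from the given terms, and is tried on the partial sums of \zeta(2), for which \theta = -1 is a negative integer.

```python
import math

def rho_algorithm(s):
    """Wynn's rho-algorithm (12.3) with interpolation points x_n = n.
    Returns the highest even-column entry rho_{2k}^{(0)} reachable."""
    n = len(s)
    prev2 = [0.0] * (n + 1)   # plays the role of rho_{-1}
    prev1 = list(s)           # rho_0
    best = s[-1]
    for k in range(1, n):
        diffs = [prev1[m + 1] - prev1[m] for m in range(n - k)]
        if any(d == 0 for d in diffs):   # converged to machine precision
            break
        cur = [prev2[m + 1] + k / d for m, d in enumerate(diffs)]
        prev2, prev1 = prev1, cur
        if k % 2 == 0:
            best = cur[0]
    return best

s = [sum(1.0 / i ** 2 for i in range(1, m + 1)) for m in range(1, 14)]
print(abs(rho_algorithm(s) - math.pi ** 2 / 6))  # very small, ~1e-10 or less
```

This mirrors the behavior seen in Table 12.1: with only a dozen terms the ρ-algorithm extracts roughly twelve digits of π²/6.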

12.2 The asymptotic behavior of the ρ-algorithm

In order to describe the asymptotic behavior of the ρ-algorithm, we shall use the following sequence. For a given non-integer \theta and a given nonzero real c, we define the sequence (C_n) as follows:

C_{-1} = 0,   (12.9a)

C_0 = c,   (12.9b)

C_{2k-1} = C_{2k-3} + \frac{2k-1}{\theta\,C_{2k-2}},\qquad k = 1, 2, \ldots,   (12.9c)

C_{2k} = C_{2k-2} + \frac{2k}{(1-\theta)\,C_{2k-1}},\qquad k = 1, 2, \ldots.   (12.9d)

This sequence (C_n) is called the associated sequence of the ρ-algorithm with respect to \theta and c.

For the associated sequence of the ρ-algorithm, the following two theorems hold.

Theorem 12.1 Under the above notation,

C_{2k-1} = \frac{k(2-\theta)(3-\theta)\cdots(k-\theta)}{\theta c(1+\theta)\cdots(k-1+\theta)},\qquad k = 1, 2, \ldots,   (12.10a)

C_{2k} = \frac{c(1+\theta)\cdots(k+\theta)}{(1-\theta)(2-\theta)\cdots(k-\theta)},\qquad k = 1, 2, \ldots.   (12.10b)

Proof. By induction on k. For k = 1, C_1 = C_{-1} + 1/(\theta c) = 1/(\theta c) and C_2 = C_0 + 2/((1-\theta)C_1) = c(1+\theta)/(1-\theta). Assume that they are valid for k \ge 1. By the induction hypothesis, we have

C_{2k+1} = C_{2k-1} + \frac{2k+1}{\theta C_{2k}} = \frac{k(2-\theta)\cdots(k-\theta)}{\theta c(1+\theta)\cdots(k-1+\theta)} + \frac{(2k+1)(1-\theta)\cdots(k-\theta)}{\theta c(1+\theta)\cdots(k+\theta)} = \frac{(k+1)(2-\theta)\cdots(k-\theta)(k+1-\theta)}{\theta c(1+\theta)\cdots(k+\theta)}.   (12.11)

Similarly,

C_{2k+2} = \frac{c(1+\theta)(2+\theta)\cdots(k+1+\theta)}{(1-\theta)\cdots(k+1-\theta)}.   (12.12)

This completes the proof.
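The recursion (12.9) and the closed forms (12.10) can be cross-checked numerically; a small sketch, with function names of our own choosing:

```python
def assoc_recursive(theta, c, K):
    """Associated sequence (12.9) of the rho-algorithm, up to C_{2K}."""
    C = {-1: 0.0, 0: c}
    for k in range(1, K + 1):
        C[2 * k - 1] = C[2 * k - 3] + (2 * k - 1) / (theta * C[2 * k - 2])
        C[2 * k] = C[2 * k - 2] + 2 * k / ((1 - theta) * C[2 * k - 1])
    return C

def assoc_closed(theta, c, k):
    """Closed form (12.10b): C_{2k} = c(1+theta)...(k+theta) / ((1-theta)...(k-theta))."""
    v = c
    for j in range(1, k + 1):
        v *= (j + theta) / (j - theta)
    return v

C = assoc_recursive(-0.5, 2.0, 6)
print(all(abs(C[2 * k] - assoc_closed(-0.5, 2.0, k)) < 1e-12 for k in range(1, 7)))
```

For instance, with \theta = -0.5 and c = 2 one gets C_2 = 2/3 and C_4 = 2/5, in agreement with (12.10b).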

We remark that Theorem 12.1 is still valid when \theta is an integer and k < |\theta|.

Theorem 12.2 Under the above notation,

\lim_{k \to \infty} \frac{C_{2k}}{k^{2\theta}} = \frac{c\,\Gamma(1-\theta)}{\Gamma(1+\theta)},   (12.13)

where \Gamma(x) is the Gamma function.

Proof. By means of Euler's limit formula for the Gamma function,

\Gamma(x) = \lim_{k \to \infty} \frac{k!\,k^x}{x(x+1)\cdots(x+k)},   (12.14)

we obtain the result.

Now we have the asymptotic behavior of the ρ-algorithm.

Theorem 12.3 Let (s_n) be a sequence satisfying

s_n \sim s + n^{\theta}\left(c_0 + \frac{c_1}{n} + \frac{c_2}{n^2} + \cdots\right),\qquad as\ n \to \infty.   (12.15)

Let (C_n) be the associated sequence of the ρ-algorithm with respect to \theta and c_0 in (12.15). Let A = (1-\theta)(1/2 + c_1/c_0). Then the following formulae are valid.

(1)

\rho_1^{(n)} = C_1 (n+1)^{1-\theta}\left[1 + \frac{A}{n+1} + \frac{B_1}{(n+1)^2} + O((n+1)^{-3})\right],   (12.16)

where

B_1 = \frac{\theta^2 - 1}{12} + \frac{c_1(1-\theta)}{2c_0} + \frac{(1-\theta)^2 c_1^2}{c_0^2} + \frac{c_2(2-\theta)}{c_0}.   (12.17)

(2)

\rho_2^{(n)} = s + C_2 (n+1)^{\theta}\left[1 + \frac{c_1}{c_0(n+1)} + \frac{B_2}{(n+1)^2} + O((n+1)^{-3})\right],   (12.18)

where

B_2 = \frac{\theta(1+\theta)}{6(1-\theta)} + \frac{2c_1^2}{c_0^2(1-\theta)} + \frac{c_2(5-2\theta)}{c_0(1-\theta)^2}.   (12.19)

(3) Suppose that \theta \ne -1, \ldots, 1-k. For j = 1, \ldots, k,

\rho_{2j-1}^{(n)} = C_{2j-1}(n+j)^{1-\theta}\left[1 + \frac{A}{n+j} + O((n+j)^{-2})\right],   (12.20)

\rho_{2j}^{(n)} = s + C_{2j}(n+j)^{\theta}\left[1 + \frac{c_1}{c_0(n+j)} + O((n+j)^{-2})\right].   (12.21)

Proof. (1) Using the binomial expansion, we have

s_{n+1} - s_n = c_0\theta(n+1)^{\theta-1}\left[1 - \frac{A}{n+1} + \left(\frac{\theta^2-1}{6} + \frac{c_1(1-\theta)}{2c_0} + \frac{2c_2}{c_0}\right)\frac{1}{(n+1)^2} + O((n+1)^{-3})\right].   (12.22)

Hence, we obtain

\rho_1^{(n)} = C_1(n+1)^{1-\theta}\left[1 + \frac{A}{n+1} + \frac{B_1}{(n+1)^2} + O((n+1)^{-3})\right].   (12.23)

(2) and (3): Similarly to (1).

By Theorem 12.3, when \theta in (12.15) is a non-integer, for fixed k,

\rho_{2k}^{(n)} - s \approx \frac{C_{2k}}{c_0}\,(s_{n+2k} - s),\qquad as\ n \to \infty,   (12.24)

where the sign \approx denotes asymptotic approximation. Hence, when \theta is a non-integer, the ρ-algorithm cannot accelerate (s_n).

When \theta is a negative integer, say -k, we have C_0 \ne 0, \ldots, C_{2k-2} \ne 0 and C_{2k} = 0. Thus, it follows from Theorem 12.3 that

\rho_{2k}^{(n)} = s + O((n+k)^{-k-2}),\qquad as\ n \to \infty.   (12.25)

As illustrations, we give two examples.


Example 12.1 We apply the ρ-algorithm to the partial sums of \zeta(2):

s_n = \sum_{i=1}^{n} \frac{1}{i^2}.   (12.26)

We give s_n and \rho_{2k}^{(n-2k)} in Table 12.1, where k = \lfloor (n-1)/2 \rfloor. By the first 12 terms, we obtain 12 exact digits.

Table 12.1
The ρ-algorithm applied to \zeta(2)

  n    s_n      \rho_{2k}^{(n-2k)}
  1    1.00
  2    1.25
  3    1.36     1.650
  4    1.42     1.6468
  5    1.46     1.64489 2
  6    1.49     1.64492 437
  7    1.511    1.64493 414
  8    1.527    1.64493 40643
  9    1.539    1.64493 40662
 10    1.549    1.64493 40668
 11    1.558    1.64493 40668
 12    1.564    1.64493 40668 48
 13    1.570    1.64493 40668 48
 14    1.575    1.64493 40668 48
 15    1.580    1.64493 40668 48
 20    1.596    1.64493 40668 48

Example 12.2 We apply the ρ-algorithm to the partial sums of \zeta(1.5):

s_n = \sum_{i=1}^{n} \frac{1}{i\sqrt{i}}.   (12.27)

We give s_n and \rho_{2k}^{(n-2k)} in Table 12.2, where k = \lfloor (n-1)/2 \rfloor. Since \theta = -0.5, the ρ-algorithm cannot accelerate (12.27).

Table 12.2
The ρ-algorithm applied to \zeta(1.5)

  n    s_n     \rho_{2k}^{(n-2k)}
  1    1.00
  2    1.35    2.19
  3    1.54    2.25
  4    1.67    2.40
  5    1.76    2.42
  6    1.82    2.48
  7    1.88    2.49
  8    1.92    2.525
  9    1.96    2.528
 10    1.99    2.520
 11    2.02    2.553
 12    2.04    2.552
 13    2.06    2.553
 14    2.08    2.564
 15    2.10    2.612

13. Generalizations of the ρ-algorithm*

Let (s_n) be a sequence satisfying

s_n \sim s + n^{\theta}\left(c_0 + \frac{c_1}{n} + \frac{c_2}{n^2} + \cdots\right),\qquad as\ n \to \infty,   (13.1)

where \theta < 0 and c_0 (\ne 0), c_1, \ldots are constants independent of n. As we proved in the preceding section, the ρ-algorithm works well on a sequence satisfying (13.1) if and only if \theta is a negative integer. In this section we extend the ρ-algorithm so that it works well on (13.1) for any \theta < 0.

13.1 The generalized ρ-algorithm

For a sequence (s_n) satisfying (13.1), we put s_0 = 0 if s_0 is not defined. We define \rho_k^{(n)} as follows:

\rho_{-1}^{(n)} = 0,   (13.2a)

\rho_0^{(n)} = s_n,   (13.2b)

\rho_k^{(n)} = \rho_{k-2}^{(n+1)} + \frac{k-1-\theta}{\rho_{k-1}^{(n+1)} - \rho_{k-1}^{(n)}},\qquad k = 1, 2, \ldots.   (13.2c)

This procedure is called the generalized ρ-algorithm with a parameter \theta [37]. It is obvious that, when \theta = -1, the generalized ρ-algorithm coincides with the ρ-algorithm defined in (12.3).

We now derive asymptotic behaviors for the quantities \rho_k^{(n)} produced by applying the generalized ρ-algorithm with parameter \theta to the sequence satisfying (13.1).
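The definition (13.2) differs from (12.3) only in the numerator k-1-\theta, so the implementation is a one-line change to the ρ-algorithm sketch; the function name is ours, and the test uses \zeta(1.5), for which \theta = -0.5.

```python
def gen_rho(s, theta):
    """Generalized rho-algorithm (13.2) with parameter theta.
    Returns the highest even-column entry reachable from the given terms."""
    n = len(s)
    prev2 = [0.0] * (n + 1)   # rho_{-1}
    prev1 = list(s)           # rho_0
    best = s[-1]
    for k in range(1, n):
        diffs = [prev1[m + 1] - prev1[m] for m in range(n - k)]
        if any(d == 0 for d in diffs):
            break
        cur = [prev2[m + 1] + (k - 1 - theta) / d for m, d in enumerate(diffs)]
        prev2, prev1 = prev1, cur
        if k % 2 == 0:
            best = cur[0]
    return best

zeta32 = 2.61237534868549  # zeta(3/2)
s = [sum(i ** -1.5 for i in range(1, m + 1)) for m in range(1, 15)]
print(abs(gen_rho(s, -0.5) - zeta32))  # small, as in Table 13.1
```

With \theta = -1 the same routine reproduces the ordinary ρ-algorithm.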


Theorem 13.1 (Osada) If \rho_{2k-1}^{(n)} and \rho_{2k}^{(n)} satisfy asymptotic formulae of the forms

\rho_{2k-1}^{(n)} = \frac{1}{d_0^{(k-1)}}(n+k-1)^{2k-1-\theta}\left[1 + O((n+k-1)^{-1})\right],   (13.3)

\rho_{2k}^{(n)} = s + (n+k)^{\theta-2k}\left[d_0^{(k)} + \frac{d_1^{(k)}}{n+k} + \frac{d_2^{(k)}}{(n+k)^2} + O((n+k)^{-3})\right],   (13.4)

with d_0^{(k-1)} \ne 0 and d_0^{(k)} \ne 0, then

\rho_{2k+2}^{(n)} = s + (n+k+1)^{\theta-2k-2}\left[d_0^{(k+1)} + O((n+k+1)^{-1})\right],   (13.5)

where

d_0^{(k+1)} = \frac{d_0^{(k)}(2k-\theta)(2k-\theta-1)}{12} - \frac{(d_1^{(k)})^2(2k-\theta+1)}{d_0^{(k)}(2k-\theta)^2} + \frac{2d_2^{(k)}}{(2k-\theta)(2k-\theta+1)} - \frac{(d_0^{(k)})^2(2k-\theta-1)}{d_0^{(k-1)}(2k-\theta+1)}.   (13.6)

Proof. Using (13.4) and the binomial expansion, we have

\rho_{2k}^{(n+1)} - \rho_{2k}^{(n)} = d_0^{(k)}(\theta-2k)(n+k+1)^{\theta-2k-1}\Bigl[1 + \Bigl(\frac{2k-\theta+1}{2} + \frac{d_1^{(k)}}{d_0^{(k)}(2k-\theta)}\Bigr)\frac{1}{n+k+1} + \Bigl(\frac{2k-\theta-1}{6} + \frac{d_1^{(k)}(2k-\theta+1)}{2d_0^{(k)}(2k-\theta)} + \frac{d_2^{(k)}(2k-\theta+2)}{d_0^{(k)}(2k-\theta)}\Bigr)\frac{1}{(n+k+1)^2} + O((n+k+1)^{-3})\Bigr].   (13.7)

Hence, we obtain

\frac{2k+1-\theta}{\rho_{2k}^{(n+1)} - \rho_{2k}^{(n)}} = \frac{2k+1-\theta}{d_0^{(k)}(\theta-2k)}\,(n+k+1)^{2k+1-\theta}\Bigl[1 - \Bigl(\frac{2k-\theta+1}{2} + \frac{d_1^{(k)}}{d_0^{(k)}(2k-\theta)}\Bigr)\frac{1}{n+k+1} + \Bigl(\frac{2k-\theta-1}{12} + \frac{d_1^{(k)}}{2d_0^{(k)}} + \frac{(d_1^{(k)})^2(2k-\theta+1)}{(d_0^{(k)})^2(2k-\theta)^2} - \frac{d_2^{(k)}(2k-\theta+2)}{d_0^{(k)}(2k-\theta)(2k-\theta+1)}\Bigr)\frac{1}{(n+k+1)^2} + O((n+k+1)^{-3})\Bigr].   (13.8)

By means of (13.2) and (13.8),

\rho_{2k+1}^{(n)} = \frac{1}{d_0^{(k)}}(n+k+1)^{2k+1-\theta}\Bigl[1 - \Bigl(\frac{2k-\theta+1}{2} + \frac{d_1^{(k)}}{d_0^{(k)}(2k-\theta)}\Bigr)\frac{1}{n+k+1} + \Bigl(\frac{2k-\theta-1}{12} + \frac{d_1^{(k)}}{2d_0^{(k)}} + \frac{(d_1^{(k)})^2(2k-\theta+1)}{(d_0^{(k)})^2(2k-\theta)^2} - \frac{d_2^{(k)}(2k-\theta+2)}{d_0^{(k)}(2k-\theta)(2k-\theta+1)} + \frac{d_0^{(k)}}{d_0^{(k-1)}(2k-\theta+1)}\Bigr)\frac{1}{(n+k+1)^2} + O((n+k+1)^{-3})\Bigr].   (13.9)

Similarly we obtain

\rho_{2k+2}^{(n)} = \rho_{2k}^{(n+1)} + \frac{2k+1-\theta}{\rho_{2k+1}^{(n+1)} - \rho_{2k+1}^{(n)}} = s + (n+k+1)^{\theta-2k-2}\left[d_0^{(k+1)} + O((n+k+1)^{-1})\right].   (13.10)

This completes the proof.

Theorem 13.2 (Osada) With the notation above, if d_0^{(j)} \ne 0 for j = 0, 1, \ldots, k, then

\rho_{2k}^{(n)} = s + O((n+k)^{\theta-2k}).   (13.11)

Proof. By means of induction on k, the proof follows from Theorem 13.1.

It is easy to see that \rho_2^{(n)} = \bar{s}_1^{(n+1)} in (10.17) when \lambda = -\theta. Moreover, under the assumption of Theorem 13.2, \rho_{2k}^{(n)} - s has the same order as \bar{s}_k^{(n+k)} - s defined in (10.17).

For further information on the generalized ρ-algorithm, see Weniger[57].

*The material in this section is taken from the author's paper: N. Osada, A convergence acceleration method for some logarithmically convergent sequences, SIAM J. Numer. Anal. 27(1990), pp.178-189.


Example 13.1 We apply the generalized ρ-algorithm to the partial sums of \zeta(1.5). We give s_n and \rho_{2k}^{(n-2k)} in Table 13.1, where k = \lfloor n/2 \rfloor. For comparison, we also give the modified Aitken \Delta^2 formula \bar{s}_l^{(n-l)}, where l = \lfloor n/2 \rfloor.

Table 13.1
The generalized ρ-algorithm and the modified Aitken \Delta^2 formula applied to \zeta(1.5)

  n   k    s_n     \rho_{2k}^{(n-2k)}    \bar{s}_l^{(n-l)}
  1   0    1.00
  2   1    1.35    2.640                 2.640
  3   1    1.54    2.6205                2.6205
  4   2    1.67    2.61215               2.61217
  5   2    1.76    2.61232               2.61232
  6   3    1.82    2.61237 3             2.61237 9
  7   3    1.88    2.61237 71            2.61237 657
  8   4    1.92    2.61237 572           2.61237 560
  9   4    1.96    2.61237 5334          2.61237 53431
 10   5    1.99    2.61237 53458         2.61237 53475
 11   5    2.02    2.61237 53488         2.61237 53487
 12   6    2.04    2.61237 53487         2.61237 53487
 13   6    2.06    2.61237 53486         2.61237 53486
 14   7    2.08    2.61237 53486         2.61237 53486

13.2 The automatic generalized ρ-algorithm

The generalized ρ-algorithm requires the knowledge of the exponent \theta in (13.1). But, as described in Section 10, \theta can be computed using the generalized ρ-algorithm with parameter -2.

Consider a given sequence (s_n) satisfying

s_n \sim s + n^{\theta}\left(c_0 + \frac{c_1}{n} + \frac{c_2}{n^2} + \cdots\right),\qquad as\ n \to \infty,   (13.1)

where \theta < 0 and c_0 (\ne 0), c_1, \ldots are unknown constants independent of n. We define \nu_n by

\nu_n = 1 + \frac{1}{\Delta\left(\dfrac{\Delta s_n}{\Delta^2 s_{n-1}}\right)}.   (13.12)

The sequence (\nu_n) has an asymptotic expansion of the form

\nu_n \sim \theta + n^{-2}\left(t_0 + \frac{t_1}{n} + \frac{t_2}{n^2} + \cdots\right),\qquad as\ n \to \infty,   (13.13)

where t_0 (\ne 0), t_1, \ldots are unknown constants independent of n. Thus, by applying the generalized ρ-algorithm with parameter -2 to (\nu_n), we can estimate the exponent \theta.
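The estimate (13.12) is cheap to compute; a minimal sketch (function name ours) applied to the \zeta(1.5) partial sums shows the estimates approaching \theta = -0.5, as in Table 13.2.

```python
def nu(s, n):
    """Exponent estimate (13.12): nu_n = 1 + 1/Delta(Delta s_n / Delta^2 s_{n-1}).
    The list s is 0-based with s[0] = s_1, so s_n corresponds to s[n-1]."""
    def r(m):  # Delta s_m / Delta^2 s_{m-1}
        return (s[m] - s[m - 1]) / (s[m] - 2 * s[m - 1] + s[m - 2])
    return 1.0 + 1.0 / (r(n + 1) - r(n))

s, t = [], 0.0
for i in range(1, 25):
    t += i ** -1.5
    s.append(t)
print(nu(s, 20))  # about -0.5
```

Since \nu_n - \theta = O(n^{-2}), even a modest n gives several correct digits of the exponent.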
Suppose that the first n terms of a sequence (s_n) satisfying (13.1) are given. Then we put s_0 = 0 and define \sigma_k^{(m)} as follows:

\sigma_{-1}^{(m)} = 0,   (13.14a)

\sigma_0^{(m)} = \nu_m,\qquad m \ge 1,   (13.14b)

\sigma_k^{(m)} = \sigma_{k-2}^{(m+1)} + \frac{k+1}{\sigma_{k-1}^{(m+1)} - \sigma_{k-1}^{(m)}},\qquad k = 1, 2, \ldots.   (13.14c)

Next, we define \hat\theta_n (n \ge 3) by

\hat\theta_n = \begin{cases} \sigma_{n-3}^{(1)} & \text{if } n \text{ is odd},\\ \sigma_{n-2}^{(0)} & \text{if } n \text{ is even}. \end{cases}   (13.15)

Then we can apply the generalized ρ-algorithm with parameter \hat\theta_n to (s_m):

\bar\rho_{n,0}^{(m)} = s_m,\qquad m = 0, \ldots, n,   (13.16a)

\bar\rho_{n,1}^{(m)} = \frac{-\hat\theta_n}{\bar\rho_{n,0}^{(m+1)} - \bar\rho_{n,0}^{(m)}},\qquad m = 0, 1, \ldots, n-1,   (13.16b)

\bar\rho_{n,k}^{(m)} = \bar\rho_{n,k-2}^{(m+1)} + \frac{k-1-\hat\theta_n}{\bar\rho_{n,k-1}^{(m+1)} - \bar\rho_{n,k-1}^{(m)}},\qquad k = 2, \ldots, n;\ m = 0, \ldots, n-k.   (13.16c)

This scheme is called the automatic generalized ρ-algorithm. The data flow of this scheme is as follows (case n = 4): from s_1, \ldots, s_4 the estimates \nu_m are formed; the \sigma table built from the \nu_m yields the exponent estimate \hat\theta_4; and with parameter \hat\theta_4 the table \bar\rho_{4,0}^{(m)}, \bar\rho_{4,1}^{(m)}, \ldots, \bar\rho_{4,4}^{(0)} is computed column by column.

For a given tolerance \varepsilon, the stopping criterion of this scheme is as follows:

(i) n is even and |\bar\rho_{n,n}^{(0)} - \bar\rho_{n,n-2}^{(1)}| < \varepsilon,

(ii) n is odd and |\bar\rho_{n,n-1}^{(0)} - \bar\rho_{n,n-1}^{(1)}| < \varepsilon.


Example 13.2 We apply the automatic generalized ρ-algorithm to the partial sums of \zeta(1.5). We give s_n, \hat\theta_n in (13.15), and \bar\rho_{n,2k}^{(n-2k)} in Table 13.2, where k = \lfloor n/2 \rfloor.

Table 13.2
The automatic generalized ρ-algorithm applied to \zeta(1.5)

  n    s_n     \hat\theta_n         \bar\rho_{n,2k}^{(n-2k)}
  1    1.00
  2    1.35
  3    1.54    -0.544               2.55
  4    1.67    -0.5071              2.604
  5    1.76    -0.50015 4           2.61217 60
  6    1.82    -0.50001 0052        2.61236 568
  7    1.88    -0.49999 9980        2.61237 53453
  8    1.92    -0.49999 99946       2.61237 53487
  9    1.96    -0.50000 00002       2.61237 53487
 10    1.99    -0.50000 00001       2.61237 53488
 11    2.02    -0.50000 00002       2.61237 53488
 12    2.04    -0.49999 99999       2.61237 53487
 13    2.06    -0.50000 00000       2.61237 53486
 14    2.08    -0.50000 00000       2.61237 53486

A FORTRAN subroutine of the automatic generalized ρ-algorithm is given in the Appendix.

14. Comparisons of acceleration methods*

In this section we compare acceleration methods for scalar sequences using a wide range of slowly convergent infinite series.

14.1 Sets of sequences

Whether an acceleration method works effectively on a given sequence or not depends on the type of the asymptotic expansion of the sequence. Conversely, when we know the type of expansion we can choose a suitable method.

The sets of sequences S_\lambda and LOGSF are defined by

S_\lambda = \Bigl\{ (s_n) \;\Big|\; s_n \sim s + \lambda^n n^{\theta} \sum_{j=0}^{\infty} c_j n^{-j},\ c_0 \ne 0,\ \lambda \ne 0 \Bigr\},   (14.1)

LOGSF = \Bigl\{ (s_n) \;\Big|\; \lim_{n\to\infty} \frac{s_{n+1}-s}{s_n-s} = \lim_{n\to\infty} \frac{\Delta s_{n+1}}{\Delta s_n} = 1 \Bigr\},   (14.2)

respectively. For (s_n) \in S_\lambda, (s_n) converges if 0 < |\lambda| < 1, and (s_n) diverges if |\lambda| > 1. We consider subsets of S_\lambda and LOGSF as follows:

Alt = \Bigl\{ (s_n) \;\Big|\; s_n \sim s + (-1)^n n^{\theta} \sum_{j=0}^{\infty} c_j n^{-j},\ c_0 \ne 0,\ \theta < 0 \Bigr\},   (14.3)

L1 = \Bigl\{ (s_n) \;\Big|\; s_n \sim s + n^{\theta} \sum_{j=0}^{\infty} c_j n^{-j},\ c_0 \ne 0,\ -\theta \in \mathbb{N} \Bigr\},   (14.4)

L2 = \Bigl\{ (s_n) \;\Big|\; s_n \sim s + n^{\theta} \sum_{j=0}^{\infty} c_j n^{-j},\ c_0 \ne 0,\ \theta < 0 \Bigr\},   (14.5)

L3 = \Bigl\{ (s_n) \;\Big|\; s_n \sim s + \sum_{i=1}^{m} n^{\theta_i} \sum_{j=0}^{\infty} c_{ij} n^{-j},\ 0 > \theta_1 > \theta_2 > \cdots > \theta_m \Bigr\},   (14.6)

L4 = \Bigl\{ (s_n) \;\Big|\; s_n \sim s + n^{\theta} \sum_{j=0}^{\infty} \frac{a_j + b_j \log n}{n^j},\ \theta < 0 \Bigr\},   (14.7)

L5 = \Bigl\{ (s_n) \;\Big|\; s_n \sim s + n^{\theta} (\log n)^{\tau} \sum_{j=0}^{\infty}\sum_{i=0}^{\infty} \frac{c_{ij}}{(\log n)^i n^j},\ \theta < 0, \text{ or } \theta = 0 \text{ and } \tau < 0 \Bigr\}.   (14.8)

We remark that Alt \subset S_{-1} and L1 \subset S_1. We also remark that

L1 \subset L2 \subset L3 \subset LOGSF,\qquad L2 \subset L4 \subset LOGSF,\qquad L2 \subset L5 \subset LOGSF.   (14.9)

*The material in this section is an improvement of the author's informal paper: N. Osada, Asymptotic expansion and acceleration methods for certain logarithmically convergent sequences, RIMS Kokyuroku 676(1988), pp.195-207.

14.2 Test series

We take up infinite series for each set. The test series are shown in Table 14.1. Some of them, No. 1, 3, 4, 5, and 8, were tested by Smith and Ford[54].

Table 14.1
Test series

set   No.  partial sum of series                        asymptotic expansion          sum
S      1   \sum_{i=1}^{n} (0.8)^i                       \lambda = 0.8                 4
       2   \sum_{i=1}^{n} (0.8)^i/i                     \lambda = 0.8, \theta = -1    log 5
Alt    3   \sum_{i=1}^{n} (-1)^{i-1}/i                  \theta = -1   Example 3.1     log 2
       4   \sum_{i=1}^{n} (-1)^{i-1}/\sqrt{i}           \theta = -0.5  Example 3.3    0.60489 86434 21630
L1     5   \sum_{i=1}^{n} 1/i^2                         \theta = -1   Example 3.4     \pi^2/6
       6   \sum_{i=1}^{n} 1/i^3                         \theta = -2   Example 3.4     1.20205 69031 59594
L2     7   \sum_{i=1}^{n} 1/(i\sqrt{i})                 \theta = -0.5  Example 3.4    2.61237 53486 85488
       8   \sum_{i=1}^{n} (i + e^{1/i})^{-2}            \theta = -1   Example 3.5     1.71379 67355 40301
L3     9   \sum_{i=1}^{n} (\sqrt{i}+1)/i^2              \theta_1 = -0.5, \theta_2 = -1  4.25730 94155 33714
L4    10   \sum_{i=1}^{n} \log i / i^2                  \theta = -1   Example 3.6     0.93754 82543 15844
      11   \sum_{i=1}^{n} \log i /(i\sqrt{i})           \theta = -0.5  Example 3.6    3.93223 97374 31101
L5    12   \sum_{i=2}^{n+1} 1/(i(\log i)^2)             \tau = -1   Example 3.7       2.10974 28012 36892

14.3 Numerical results

Let (s_n) converge to s or diverge from s. Let T be a sequence transformation and (t_n) the transformed sequence by T, where t_n depends on s_1, \ldots, s_n but not on s_{n+k}, k > 0. The maximum significant digits from m terms is defined by

\max_{1 \le n \le m} \{ -\log_{10} |t_n - s| \}.   (14.10)

Acceleration methods taken up in this section are as follows: the ε-algorithm, the Levin u-, v-, and t-transforms, the d^{(2)}- and d^{(3)}-transforms, the iterated Aitken \Delta^2 process, the automatic modified Aitken \Delta^2 formula, the iterated Lubkin W transform, the ρ-algorithm, and the automatic generalized ρ-algorithm. All methods require no knowledge of the asymptotic expansion of the objective sequence. We compare the quantities listed in Table 14.2.

Table 14.2
Acceleration methods

acceleration method                        definition   quantity
ε-algorithm                                (8.10)       \varepsilon_{2k}^{(n-2k)}, k = \lfloor (n-1)/2 \rfloor
Levin u-transform                          9.1          T_{n-1}^{(1)}
Levin v-transform                          9.1          T_{n-2}^{(1)}
Levin t-transform                          9.1          T_{n-2}^{(1)}
d^{(2)}-transform                          9.3          E_{2l}^{(n-2l+2)}, l = \lfloor n/2 \rfloor
d^{(3)}-transform                          9.3          E_{3m}^{(n-3m+3)}, m = \lfloor n/3 \rfloor
iterated Aitken \Delta^2 process           (10.10)      T_k^{(n-2k)}, k = \lfloor (n-1)/2 \rfloor
automatic modified Aitken \Delta^2 formula 10.3         \bar{s}_{n,l}^{(n-l)}, l = \lfloor n/2 \rfloor
iterated Lubkin W transform                (11.17)      W_p^{(n-3p)}, p = \lfloor (n-1)/3 \rfloor
ρ-algorithm                                (12.3)       \rho_{2l}^{(n-2l)}, l = \lfloor n/2 \rfloor
automatic generalized ρ-algorithm          13.2         \bar\rho_{n,2k}^{(n-2k)}, k = \lfloor (n-1)/2 \rfloor

For each acceleration method we show the maximum significant digits from 20 terms of each test series in Table 14.3. In Table 14.3, the number of terms is abbreviated to NT, and the number of significant digits is abbreviated to SD. Numerical computations reported in this section were carried out on the NEC ACOS-610 computer in double precision with approximately 16 digits.

Table 14.3
The maximum significant digits

                 partial sum     ε-algorithm     Levin u         Levin v
set   No.        NT     SD       NT     SD       NT     SD       NT     SD
S      1         20     2.72     20     8.01     19    10.78     20    10.40
       2         diverge         20     7.81     16    10.78     16    11.58
Alt    3         20     1.61     20    15.11     15    15.90     15    15.86
       4         20     0.96     20    15.74     14    15.56     14    15.56
L1     5         20     1.31     20     2.06     14    11.46     12     9.64
       6         20     2.92     20     4.17     12    11.49     12    11.01
L2     7         20     0.35     20     0.78     10     9.01     12     8.55
       8         20     0.17     20     0.53     13     8.44     13     7.58
L3     9         20     0.31     20     0.75     20     2.58     19     2.96
L4    10         20     0.71     20     1.62     20     3.11     20     3.47
      11         20     0.35     20     0.01     20     0.98     20     1.24
L5    12         20     0.49     20     0.69     20     1.07     20     1.13

Table 14.3 (Continued)

                 Levin t         d^{(2)}-trans   d^{(3)}-trans   Aitken \Delta^2
set   No.        NT     SD       NT     SD       NT     SD       NT     SD
S      1         20    10.49     20     7.16     20     6.45     17     9.62
       2         16    10.54     19     8.36     19     6.18     17    11.77
Alt    3         14    15.95     20    13.85     20    12.84     19    16.08
       4         15    15.54     20    15.90     19    15.31     16    16.08
L1     5         20     2.28     18    11.29     20    10.69     20     3.18
       6         20     4.60     16    12.44     16    12.20     19     5.72
L2     7         20     0.89     14     9.82     19    10.20     19     1.42
       8         20     0.16     15    10.76     18     9.63      7     1.05
L3     9         20     0.87     12     5.54     15     8.54     19     1.35
L4    10         20     1.52     13     7.07     20     6.99     15     1.33
      11         20     0.03     13     4.98     17     4.80     20     0.18
L5    12         20     0.74     19     1.29     20     1.59     20     0.81

Table 14.3 (Continued)

                 aut mod \Delta^2  Lubkin W       ρ-algorithm     aut gen ρ
set   No.        NT     SD         NT     SD      NT     SD       NT     SD
S      1         13     6.76       20    10.69    decelerate      20     8.06
       2         18    10.52       19    11.13    decelerate      20     7.99
Alt    3         19    16.01       19    16.26    decelerate      19    14.51
       4         19    15.50       17    15.54    decelerate      19    14.59
L1     5         12    11.02       15     9.66    18    11.66     20    12.18
       6         19    12.53       18    10.92    20    12.81     14    13.05
L2     7         11     9.79       15     8.34    18     1.52      9    10.51
       8         19     9.70       13     8.38    20     1.03     17    11.29
L3     9         19     3.04       20     4.43    17     1.54     18     3.08
L4    10         16     3.03       12     2.25    20     3.03     19     3.58
      11         14     0.75       20     0.46    18     0.44     17     1.27
L5    12         20     1.13       20     1.31    20     0.89     20     1.15

14.4 Extraction

As Delahaye and Germain-Bonne[14] proved, there is no acceleration method that can accelerate all sequences belonging to LOGSF. However, if a logarithmic sequence (s_n) satisfies the asymptotic form

s_n = s + O(n^{\theta}),   (14.11)

or

s_n = s + O(n^{\theta}(\log n)^{\tau}),   (14.12)

where \theta < 0, then the subsequence (s_{2^n}) of (s_n) satisfies

s_{2^n} = s + O((2^{\theta})^n),   (14.13)

or

s_{2^n} = s + O((2^{\theta})^n n^{\tau}),   (14.14)

respectively. Both (14.13) and (14.14) converge linearly to s with contraction ratio 2^{\theta}. In particular, if a sequence (s_n) belongs to L3, the subsequence (s_{2^n}) satisfies an asymptotic expansion of the form

s_{2^n} \sim s + \sum_{j=1}^{\infty} c_j \lambda_j^n,   (14.15)

where the c_j and 1 > \lambda_1 > \lambda_2 > \cdots > 0 are constants. By Theorem 8.4, the ε-algorithm and the iterated Aitken \Delta^2 process can accelerate the subsequence efficiently.
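The extraction device can be sketched as follows, using series No. 7 (\zeta(1.5)) and the iterated Aitken \Delta^2 process; the function names are ours.

```python
def iterated_aitken(s):
    """Iterated Aitken Delta^2 process: t_n = s_n - (Ds_n)^2 / D^2 s_n."""
    row = list(s)
    while len(row) >= 3:
        nxt = []
        for n in range(len(row) - 2):
            d2 = row[n + 2] - 2 * row[n + 1] + row[n]
            if d2 == 0:              # converged to machine precision
                return row[-1]
            nxt.append(row[n] - (row[n + 1] - row[n]) ** 2 / d2)
        row = nxt
    return row[-1]

# Subsequence s_{2^n} of the partial sums of zeta(1.5), n = 0, ..., 13:
partial, t, i = {}, 0.0, 0
for n in range(14):                  # up to 2^13 = 8192 terms
    while i < 2 ** n:
        i += 1
        t += i ** -1.5
    partial[n] = t
sub = [partial[n] for n in range(14)]
print(iterated_aitken(sub))  # close to zeta(1.5) = 2.61237534868...
```

The subsequence converges linearly with ratio 2^{-1/2}, which is why the Aitken iteration succeeds here although it fails on (s_n) itself.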
If (s_n) satisfies

s_n = s + O((\log n)^{\tau}),   (14.16)

where \tau < 0, then

s_{2^n} = s + O(n^{\tau}),   (14.17)

that is, (s_{2^n}) converges logarithmically. Therefore the d-transform or the automatic generalized ρ-algorithm is expected to accelerate the convergence of (s_{2^n}).

In Table 14.4, we take up the Levin t-transform, the ε-algorithm, the iterated Aitken \Delta^2 process, and the d^{(2)}-transform as acceleration methods, and we apply them to the last 6 series in Table 14.1. Though we do not list them in Table 14.4, the Levin t-transform is slightly better than the Levin u- and v-transforms, and the d^{(2)}-transform is slightly better than the d^{(3)}-transform.
Table 14.4
The maximum significant digits

series   partial sum      Levin t          ε-algorithm      Aitken \Delta^2   d^{(2)}
No.      NT      SD       NT      SD       NT      SD       NT      SD        NT      SD
 7       16384   1.81     16384   6.49     16384  12.02      4096  11.45      16384   6.96
 8       16384   1.36     16384   5.28     16384  10.92      8192  10.77       8192   4.31
 9       16384   1.80     16384   5.81     16384   9.56     16384   8.09      16384   6.32
10       16384   3.18     16384   8.52      4096   9.86      8192   7.41      16384   8.91
11       16384   0.74     16384   5.14      8192   6.69      2048   5.97       8192   4.14
12       16384   0.99     16384   1.67      8192   1.49       512   1.45      16384   4.63

Table 14.5
The maximum significant digits for s_n = \sum_{i=2}^{2^n+1} \frac{1}{i(\log i)^2}

series   Levin u          Lubkin W        aut. gen. ρ      aut. mod. \Delta^2   d^{(3)}
No.      NT      SD       NT     SD       NT      SD       NT      SD           NT     SD
12       16384   2.60     4096   3.48     8192    3.91     2048    3.11         8192   4.04

14.5 Conclusions

Table 14.3 shows that the best available methods are the d^{(2)}- and d^{(3)}-transforms. For S_\lambda, all tested methods except the ρ-algorithm work well. For L1 and L2, the automatic generalized ρ-algorithm is the best.

The Levin u- and v-transforms, the automatic generalized ρ-algorithm, the automatic modified Aitken \Delta^2 formula, and Lubkin's W transform are good methods. The automatic generalized ρ-algorithm and the automatic modified Aitken \Delta^2 formula are generalizations of the sequence transformation

s_n \mapsto s_n - \frac{\theta-1}{\theta}\cdot\frac{\Delta s_{n-1}\,\Delta s_n}{\Delta^2 s_{n-1}},   (14.18)

and therefore they perform similarly.

The ε-algorithm, the Levin t-transform, and the iterated Aitken \Delta^2 process also perform similarly, because these three methods are extensions of the Aitken \Delta^2 process.

For a sequence (s_n) belonging to L3 or L4, no acceleration method listed in Table 14.2 can give a highly accurate result. However, when we apply the ε-algorithm or the iterated Aitken \Delta^2 process to the subsequence (s_{2^n}), we can obtain good results.

For the last series, the d^{(2)}- and d^{(3)}-transforms, the automatic generalized ρ-algorithm, Lubkin's W transform, and the automatic modified Aitken \Delta^2 formula accelerate the convergence of (s_{2^n}).

15. Application to numerical integration

Infinite integrals and improper integrals usually converge slowly. Such an integral yields a slowly convergent sequence or an infinite series. Various acceleration methods have been applied to such slowly convergent integrals. In this section we deal with the application of acceleration methods to numerical integration.

15.1 Introduction

Acceleration methods are applied to numerical integration in the following ways.

I. The semi-infinite integral I = \int_a^{\infty} f(x)\,dx.

Let a = x_0 < x_1 < x_2 < \cdots be an increasing sequence diverging to \infty. Then I becomes an infinite series

I = \sum_{j=1}^{\infty} \int_{x_{j-1}}^{x_j} f(x)\,dx = \sum_{j=1}^{\infty} I_j.   (15.1)

Let S_n be the nth partial sum of (15.1).

I-1. Suppose that f(x) converges monotonically to zero as x \to \infty. Let a = x_0 < x_1 < x_2 < \cdots be equidistant points. Then S_n sometimes either converges linearly or satisfies

S_n \sim I + n^{\theta} \sum_{j=1}^{\infty} c_j n^{-j}.   (15.2)

As we described in the preceding section, we can accelerate (S_n) or (S_{2^n}).

I-2. Suppose that f(x) is a product of an oscillating function and a positive decreasing function. Let x_1 < x_2 < \cdots be the zeros of f(x). Then the infinite series (15.1) becomes an alternating series, so it is easy to accelerate (S_n).

The first proposer of this method was I. M. Longman[28]. In 1956, he applied the Euler transformation to semi-infinite integrals involving Bessel functions.

In this paper we consider f(x) = g(x)\sin\omega x or f(x) = g(x)\cos\omega x, where g(x) converges monotonically to zero as x \to \infty and \omega > 0 is a known constant.

II. The finite integral I = \int_a^b f(x)\,dx.

Let S_n be an approximation of I obtained by applying an n-panel compound quadrature formula such as the n-panel midpoint rule. As we described in Section 4, S_n often has an asymptotic expansion of the form

S_n \sim I + \sum_{j=0}^{\infty} c_j n^{\theta_j},   (15.3)

or

S_n \sim I + n^{\theta} \sum_{j=0}^{\infty} \frac{a_j + b_j \log n}{n^j} + n^{\tau} \sum_{j=0}^{\infty} \frac{c_j + d_j \log n}{n^j}.   (15.4)

If f(x) is of class C^{\infty} in [a, b] and the quadrature formula is either the trapezoidal rule or the midpoint rule, then \theta_j = -2j-2 in (15.3).

II-1. When the \theta_j in (15.3), or \theta and \tau in (15.4), are known, by applying the Richardson extrapolation or the E-algorithm to (S_n) or (S_{2^n}), the convergence of (S_n) is accelerated.

In 1955, W. Romberg[45] applied the Richardson extrapolation to (S_{2^n}) when \theta_j = -2j-2 in (15.3). Since 1961, many authors such as I. Navot[33] and H. Rutishauser[46] have applied it to improper integrals; see Joyce's survey paper[22].

II-2. When the asymptotic scale in the asymptotic expansion is unknown, acceleration methods taken up in the previous section are applied to (S_n) or (S_{2^n}). As we saw in the previous section, when there are integers i and j such that \theta_i - \theta_j is not an integer, we cannot obtain a highly accurate result by applying an acceleration method to (S_n) itself. However, for (15.3) and (15.4) a good result is expected by applying the ε-algorithm or the iterated Aitken \Delta^2 process to (S_{2^n}). The first proposer of this method was C. Brezinski[6,7]. In 1970 and 1971, he applied the ρ-algorithm with a parameter to (S_{2^n}) and the ε-algorithm when \theta_j = -2j-2 in (15.3). Subsequently many authors such as D. K. Kahaner[23] applied acceleration methods to finite integrals.

III. For other ways of applying extrapolation methods to numerical integration, see Brezinski and Redivo Zaglia[11, pp.366-386] and Rabinowitz[43].
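Method I-2 can be sketched end to end on a small example of our own (not one of the test integrals of this section): \int_1^{\infty} \sin x / x^2\,dx = \sin 1 - \mathrm{Ci}(1) \approx 0.504067. The integrand is summed between consecutive zeros x = k\pi by composite Simpson quadrature, and the alternating partial sums are accelerated with Wynn's ε-algorithm.

```python
import math

def simpson(f, a, b, m=64):
    """Composite Simpson rule with m (even) panels on [a, b]."""
    h = (b - a) / m
    acc = f(a) + f(b)
    for j in range(1, m):
        acc += f(a + j * h) * (4 if j % 2 else 2)
    return acc * h / 3

def wynn_epsilon(s):
    """Wynn's epsilon-algorithm; returns the highest even-order entry."""
    e0, e1 = [0.0] * (len(s) + 1), list(s)
    best = s[-1]
    for k in range(1, len(s)):
        diffs = [e1[m + 1] - e1[m] for m in range(len(e1) - 1)]
        if any(d == 0 for d in diffs):
            break
        e2 = [e0[m + 1] + 1.0 / d for m, d in enumerate(diffs)]
        e0, e1 = e1, e2
        if k % 2 == 0:
            best = e2[0]
    return best

# I = int_1^oo sin(x)/x^2 dx, split at the zeros x = k*pi (method I-2).
f = lambda x: math.sin(x) / x ** 2
xs = [1.0] + [k * math.pi for k in range(1, 14)]
partial, acc = [], 0.0
for a, b in zip(xs, xs[1:]):
    acc += simpson(f, a, b)
    partial.append(acc)
print(wynn_epsilon(partial))  # close to 0.504067...
```

Only a dozen subinterval integrals are needed once the alternating tail is extrapolated, instead of the thousands required by direct summation.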
15.2 Application to semi-infinite integrals

We apply the above method I-1 to the integrals listed in Table 15.1.

Table 15.1
Semi-infinite integrals with a monotonically decreasing integrand

No.   integral                                 S_n                                                              exact
 1    \int_0^{\infty} e^{-x}\,dx               geometric series                                                 1
 2    \int_0^{\infty} x^2 e^{-x}\,dx           linearly convergent sequence                                     2
 3    \int_0^{\infty} \frac{dx}{1+x^2}         S_n = \int_0^{2n} f(x)\,dx \sim \pi/2 + \sum_{j=1}^{\infty} c_j n^{-(2j-1)}   \pi/2

We take x_j = 2j and compute \int_{x_{j-1}}^{x_j} f(x)\,dx by the Romberg method. Acceleration methods that we apply are the Levin u-transform, the ε-algorithm, Lubkin's W transform, the iterated Aitken \Delta^2 process, the d^{(2)}-transform, and the automatic generalized ρ-algorithm. The tolerances are \varepsilon = 10^{-6} and 10^{-12}. For \int_0^{\infty} dx/(1+x^2), we take \varepsilon = 10^{-9} instead of 10^{-12}. The stopping criterion is |T_n - T_{n-1}| < \varepsilon, where (T_n) is the accelerated sequence. The results are shown in Table 15.2. Throughout this section, the number of terms is abbreviated to T, and the number of functional evaluations is abbreviated to FE.
Table 15.2
Number of terms, functional evaluations, and errors

[The body of Table 15.2 is largely illegible in this copy. It lists T, FE and the error for each of the six methods on the three integrals of Table 15.1. Recoverable entries: for ε = 10^-6 the Levin u-transform gives errors 6.16×10^-9 (T = 5, FE = 43), 1.24×10^-9 (T = 8, FE = 96) and 6.46×10^-9 (T = 10, FE = 102) on integrals 1-3, the other methods reaching errors between about 10^-7 and 10^-10 with comparable T and FE, apart from two failures on integral 3; for ε = 10^-12 the errors on integrals 1 and 2 lie between about 10^-14 and 10^-17 with FE between 125 and 340; for integral 3 with ε = 10^-9, the Levin u-transform, the d^{(2)}-transform and the automatic generalized ρ-algorithm each give T = 14, FE = 226 with errors 6.20×10^-11, 1.29×10^-11 and 1.21×10^-10, respectively.]
When ε = 10^-12, all methods except the automatic generalized ρ-algorithm (T = 27,
FE = 557, error = 3.08×10^-13) fail on ∫_0^∞ dx/(1+x²).


Next we apply the above method I-2 to the integrals listed in Table 15.3. All integrals
were tested by Hasegawa and Torii[19].
Table 15.3
Semi-infinite integrals with an oscillatory integrand

No.  integral                     exact
1    ∫_0^∞ e^{-x} cos x dx       0.5
2    ∫_0^∞ x sin x/(x²+1) dx     π/(2e)
3    ∫_0^∞ cos x/(x²+1) dx       π/(2e)
4    ∫_0^∞ cos x/√(x²+1) dx      0.42102 44382 40708
5    ∫_1^∞ sin x/x² dx           0.50406 70619 06919

We compute the integrals between two consecutive zeros of the integrand by the Romberg method. The acceleration methods considered here are the Levin u-transform, the ε-algorithm, and the
d^{(2)}-transform. These methods require no knowledge of the asymptotic expansion of the integrand or of the integral. The tolerances are ε = 10^-6 and 10^-12. The results are shown
in Table 15.4.
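The between-zeros scheme of method I-2 can be sketched in Python (an illustration only, not the program used for Table 15.4): for integral 2 of Table 15.3, composite Simpson's rule on each half-period stands in for the Romberg method, and Wynn's ε-algorithm accelerates the alternating partial sums.

```python
import math

def simpson(f, a, b, m=128):
    """Composite Simpson rule with an even number m of panels on [a, b]."""
    h = (b - a) / m
    acc = f(a) + f(b)
    for i in range(1, m):
        acc += f(a + i * h) * (4.0 if i % 2 else 2.0)
    return acc * h / 3.0

def wynn_epsilon(s, max_cols=12):
    """Wynn's epsilon-algorithm; returns the last entry of the highest
    even-order column reached (even columns approximate the limit)."""
    cols = [[0.0] * (len(s) + 1), list(s)]      # orders -1 and 0
    while len(cols[-1]) >= 2 and len(cols) < max_cols:
        p2, p1 = cols[-2], cols[-1]
        cols.append([p2[i + 1] + 1.0 / (p1[i + 1] - p1[i])
                     for i in range(len(p1) - 1)])
    k = len(cols) - 2
    if k % 2:
        k -= 1
    return cols[k + 1][-1]

f = lambda x: x * math.sin(x) / (x * x + 1.0)
partial, S = 0.0, []
for k in range(12):                 # the zeros of the integrand are at k*pi
    partial += simpson(f, k * math.pi, (k + 1) * math.pi)
    S.append(partial)

est = wynn_epsilon(S)               # exact value is pi/(2e)
```

The raw partial sums converge only like an alternating series, while the accelerated value is accurate to many digits, limited mainly by the quadrature error of the inner rule.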
Table 15.4
Number of terms, functional evaluations, and errors

ε = 10^-6
      Levin u-transform         ε-algorithm              d^{(2)}-transform
No.   T   FE   error            T   FE   error           T   FE   error
1     5   81   7.38×10^-9       4   73   7.38×10^-9      5   81   7.44×10^-9
2     8  129   1.54×10^-7      10  145   1.13×10^-7      9  137   1.46×10^-7
3     7   89   1.62×10^-7       9  105   7.37×10^-9      8   97   7.75×10^-8
4     8  145   3.01×10^-8      10  177   9.09×10^-8      9  161   1.21×10^-7
5     7   89   5.01×10^-8       9  105   4.77×10^-8      8   97   1.05×10^-8

ε = 10^-12
      Levin u-transform         ε-algorithm              d^{(2)}-transform
No.   T   FE    error           T   FE    error          T   FE    error
1     5   353   2.50×10^-16     4   321   4.16×10^-17    6   358   2.78×10^-16
2    12   641   4.53×10^-14    18   833   9.40×10^-15   14   705   2.16×10^-14
3    12   897   1.82×10^-14    17  1217   8.10×10^-14   13   961   1.56×10^-13
4    12   897   2.01×10^-13    18  1281   6.90×10^-14   14  1025   4.33×10^-15
5    12  1025   9.01×10^-15    17  1345   3.68×10^-14   12  1025   1.71×10^-13

The Levin v- and t-transforms perform similarly to the Levin u-transform. The iterated Aitken Δ² process and Lubkin's W transform are slightly better than the ε-algorithm. The d^{(2)}-transform is better than the d^{(3)}-transform. The best acceleration
methods we tested for semi-infinite oscillatory integrals are the Levin transforms.
These results are inferior to those of Hasegawa and Torii[19], but they are useful
in practice because they require no knowledge of the integrand.


15.3 Application to improper integrals


We apply the above method II-2 to the integrals listed in Table 15.5.
Table 15.5
Improper integrals

Here S_n = h Σ_{i=1}^n f(a + (2i−1)h/2), h = (b−a)/n, is the n-panel midpoint rule.

No.  integral                 exact                 asymptotic behaviour of S_n
1    ∫_0^1 √x dx             2/3                   S_n ≈ I + c_0 n^{-1.5} + Σ_{j=1}^∞ c_j n^{-2j}
2    ∫_0^2 …                 …                     S_n ≈ I + c_0 n^{-0.5} + Σ_{j=1}^∞ c_j n^{-2j}
3    ∫_0^4 dx/∛(x²(4−x))     B(2/3, 1/3) = 2π/√3   S_n ≈ I + Σ_{j=1}^∞ (c_{2j-1} n^{1/3-j} + c_{2j} n^{2/3-j})
4    ∫_0^1 (log x)/√x dx     −4                    S_n ≈ I + Σ_{j=0}^∞ (a_j n^{-1/2-j} + b_j n^{-1/2-j} log n + c_j n^{-j-1})

(Entries marked … are illegible in the source copy.)
We use the midpoint rule as the quadrature formula. The acceleration methods are
the Levin u-transform, the ε-algorithm, Lubkin's W transform, the iterated Aitken Δ² process, the d^{(2)}-transform and the automatic generalized ρ-algorithm. The tolerance is
ε = 10^-6 and the maximum number of terms is 15. The results are shown in Table 15.6.
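When the asymptotic expansion of S_n is known, as it is for integral 1 of Table 15.5, Richardson extrapolation with the exponents 3/2, 2, 4, 6, … of the quoted error expansion can be used instead of the scale-unknown methods above. A minimal Python sketch (an illustration only, not the thesis program):

```python
def midpoint(f, a, b, m):
    """m-panel midpoint rule on [a, b]."""
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

# S_j uses 2^j panels; the error expansion is c0 n^{-3/2} + sum_j c_j n^{-2j}
S = [midpoint(lambda x: x ** 0.5, 0.0, 1.0, 2 ** j) for j in range(11)]

T = S
for g in (1.5, 2.0, 4.0, 6.0):     # exponents of the known asymptotic scale
    r = 2.0 ** -g                  # ratio of the error term under panel doubling
    T = [(T[i + 1] - r * T[i]) / (1.0 - r) for i in range(len(T) - 1)]

print(abs(S[-1] - 2.0 / 3.0), abs(T[-1] - 2.0 / 3.0))
```

Each pass of the loop eliminates one term of the expansion, so four passes leave a remainder far below the raw midpoint error.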


Table 15.6
Number of terms, functional evaluations, and errors

ε = 10^-6
      Levin u-transform           ε-algorithm              Lubkin's W transform
No.   T    FE     error           T    FE     error        T    FE     error
1      8    255   3.70×10^-9       7    127   4.86×10^-9    8    255   3.30×10^-9
2     13   8191   2.09×10^-7       8    255   1.78×10^-8   11   2047   2.46×10^-10
3     15  32767   7.21×10^-6      13   8191   1.13×10^-7        failure
4     14  16383   4.90×10^-7      11   2047   3.91×10^-9   15  32767   2.04×10^-6

      Aitken Δ² process           d^{(2)}-transform        automatic generalized ρ-algorithm
No.   T    FE     error           T    FE     error        T    FE     error
1      6     63   4.22×10^-7       8    255   1.20×10^-7    8    255   2.24×10^-11
2      9    511   9.10×10^-10     14  16383   3.36×10^-8   15  32767   8.26×10^-9
3     15  32767   6.56×10^-6      15  32767   1.64×10^-5   15  32767   9.97×10^-6
4     15  32767   2.01×10^-6      15  32767   1.06×10^-7   14  16383   3.20×10^-7

The ε-algorithm is the best. For the tolerance ε = 10^-9, only the ε-algorithm
succeeds on all integrals listed in Table 15.5, provided that the number of terms is less
than or equal to 15.


CONCLUSIONS
In this paper we studied acceleration methods for slowly convergent scalar sequences
from the asymptotic viewpoint, and applied these methods to numerical integration.
In conclusion, our opinion is as follows.
1. Suppose that a sequence (s_n) has an asymptotic expansion of the form

    s_n ≈ s + Σ_{j=1}^∞ c_j g_j(n).    (1)

Let T be a sequence transformation and (t_n) = T(s_n). If we know the asymptotic scale
(g_j(n)), then we can often obtain an asymptotic formula

    t_n = s + O(g(n)).    (2)

2. By 1 above, if we know (g_j(n)), we can choose a suitable acceleration method
for (s_n).
3. We show the most suitable methods in Table 1. We append the
number of the theorem giving the asymptotic formula.
4. For a logarithmically convergent sequence (s_n), we can usually obtain higher
accuracy when we apply an acceleration method to (s_{2^n}).
5. There is no all-purpose acceleration method. The best method of all we treated
is the d-transform, and the second best method is the automatic generalized ρ-algorithm.
In applications, we can usually determine the type of the asymptotic expansion of an objective
sequence. For example, when we apply the midpoint rule to an improper integral with an
endpoint singularity, the objective sequence has an asymptotic expansion of the form

    s_n ≈ s + n^α Σ_{j=0}^∞ c_j n^{-j} + n^β Σ_{j=0}^∞ d_j n^{-j}.    (3)

In such a case, we recommend the methods listed in Table 1.


6. If (s_n) satisfies (3) and the ε-algorithm is applied to (s_{2^n}), then ε_{2k}^{(n−2k)} gives a highly
accurate result. In particular, it is a good method to apply the ε-algorithm to (M_{2^n}),
where M_{2^n} is the 2^n-panel midpoint rule.
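Point 6 is easy to illustrate. The following Python sketch (not the thesis program) applies Wynn's ε-algorithm to the midpoint values M_{2^n} for ∫_0^1 √x dx, whose error expansion is of the form (3):

```python
def midpoint(f, a, b, m):
    """m-panel midpoint rule on [a, b]."""
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

def wynn_epsilon(s, max_cols=10):
    """Wynn's epsilon-algorithm, stopped at a moderate column for stability;
    returns the last entry of the highest even-order column reached."""
    cols = [[0.0] * (len(s) + 1), list(s)]      # orders -1 and 0
    while len(cols[-1]) >= 2 and len(cols) < max_cols:
        p2, p1 = cols[-2], cols[-1]
        cols.append([p2[i + 1] + 1.0 / (p1[i + 1] - p1[i])
                     for i in range(len(p1) - 1)])
    k = len(cols) - 2
    if k % 2:
        k -= 1
    return cols[k + 1][-1]

# M_{2^n}: midpoint values with doubled panel numbers; the error terms become
# geometric in n, which is exactly what the epsilon-algorithm eliminates
M = [midpoint(lambda x: x ** 0.5, 0.0, 1.0, 2 ** n) for n in range(10)]
est = wynn_epsilon(M)
```

On the doubled sequence the logarithmic error terms n^{-3/2}, n^{-2}, … turn into geometric components, so the ε-algorithm recovers many correct digits from a handful of midpoint values.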


Table 1
Suitable acceleration methods

asymptotic expansion                               sequence    asymptotic scale known           asymptotic scale unknown
s_n ≈ s + Σ_{j≥1} c_j λ_j^n                        (s_n)       Richardson extrapolation 1)      ε-algorithm 2)
s_n ≈ s + λ^n n^θ Σ_{j≥0} c_j n^{-j}               (s_n)       E-algorithm 3)                   Levin transforms 4)
s_n ≈ s + n^θ Σ_{j≥0} c_j n^{-j}                   (s_n)       generalized ρ-algorithm 5),      automatic generalized
                                                                modified Aitken Δ² formula 6)    ρ-algorithm
                                                   (s_{2^n})   Richardson extrapolation 1)      ε-algorithm 2)
s_n ≈ s + Σ_{i,j} c_{ij} n^{θ_i − j}               (s_n)       E-algorithm 3)                   d-transform
                                                   (s_{2^n})   Richardson extrapolation 1)      ε-algorithm 2)
s_n ≈ s + Σ_{j≥0} (a_j + b_j log n) n^{θ−j}        (s_n)       E-algorithm 3)                   d-transform
                                                   (s_{2^n})   E-algorithm 3)                   ε-algorithm 2)
s_n ≈ s + n^θ Σ_{i,j} c_{ij} (log n)^i n^{-j}      (s_n)       E-algorithm 3)                   d-transform
                                                   (s_{2^n})   E-algorithm 3)

1) Formula (7.37), 2) Theorem 8.4, 3) Theorem 6.2, 4) Theorem 9.1, 5) Theorem 13.2, 6) Theorem 10.5.

We raise the following questions.

1. Find an efficient algorithm for the automatic generalized ρ-algorithm.
2. Find an acceleration method that works well on a sequence (s_n) satisfying

    s_n ≈ s + Σ_{i=1}^m n^{θ_i} Σ_{j=0}^∞ (a_{ij} + b_{ij} log n) n^{-j}.    (4)

Such sequences occur in numerical integration.

3. Find an acceleration method that works well on a sequence (s_n) satisfying

    s_n ≈ s + n^θ Σ_{j=0}^∞ ( Σ_{i=0}^∞ c_{i,j} (log n)^i ) n^{-j}.    (5)

Such sequences occur in singular fixed point problems.

4. Extend results for scalar sequences to vector sequences. In particular, study
acceleration methods for logarithmically convergent vector sequences.


Acknowledgements
I would like to express my deepest gratitude to Professor T. Torii of Nagoya University for his constant guidance and encouragement in the preparation of this thesis.
I would like to heartily thank Professor T. Mitsui of Nagoya University for his advice,
which improved this thesis.
I am indebted to Professor S. Kuwabara of Nagoya University for taking the
trouble to referee this thesis.
I would like to give my thanks to Professor C. Brezinski of Université des Sciences et
Technologies de Lille for the invitation to an international congress in Luminy, September
1989, which was the turning point of my research.
I am grateful to Professor M. Iri of the University of Tokyo and Professor K.
Nakashima of Waseda University for their help and encouragement. I would also like
to thank Professor I. Ninomiya of Chubu University, who taught me the fundamentals of
numerical analysis on various occasions when I began to study it.


REFERENCES
[1] A.C. Aitken, On Bernoulli's numerical solution of algebraic equations, Proc. Roy. Soc. Edinburgh Ser. A 46(1926), 289-305.
[2] W.G. Bickley and J.C.P. Miller, The numerical summation of slowly convergent series of positive terms, Philos. Mag. 7th Ser. 22(1936), 754-767.
[3] P. Bjørstad, G. Dahlquist and E. Grosse, Extrapolation of asymptotic expansions by a modified Aitken Δ²-formula, STAN-CS-79-719 (Computer Science Dept., Stanford Univ., 1979).
[4] P. Bjørstad, G. Dahlquist and E. Grosse, Extrapolation of asymptotic expansions by a modified Aitken Δ²-formula, BIT 21(1981), 56-65.
[5] N. Bourbaki, Éléments de mathématique, Fonctions d'une variable réelle, (Hermann, Paris, 1961).
[6] C. Brezinski, Application du ρ-algorithme à la quadrature numérique, C. R. Acad. Sc. Paris t.270(1970), 1252-1253.
[7] C. Brezinski, Études sur les ε- et ρ-algorithmes, Numer. Math. 17(1971), 153-162.
[8] C. Brezinski, Accélération de suites à convergence logarithmique, C. R. Acad. Sc. Paris t.273(1971), 727-730.
[9] C. Brezinski, Accélération de la convergence en analyse numérique, Lecture Notes in Math. 584 (Springer, Berlin, 1977).
[10] C. Brezinski, A general extrapolation algorithm, Numer. Math. 35(1980), 175-187.
[11] C. Brezinski and M. Redivo Zaglia, Extrapolation methods: theory and practice, (Elsevier, Amsterdam, 1991).
[12] N. G. de Bruijn, Asymptotic Methods in Analysis (Dover Publ., New York, 1981).
[13] F. Cordellier, Caractérisation des suites que la première étape du θ-algorithme transforme en suites constantes, C. R. Acad. Sc. Paris t.284(1977), 389-392.
[14] J. P. Delahaye and B. Germain-Bonne, Résultats négatifs en accélération de la convergence, Numer. Math. 35(1980), 443-457.
[15] J.E. Drummond, Summing a common type of slowly convergent series of positive terms, J. Austral. Math. Soc. 19 Ser. B(1976), 416-421.
[16] W.F. Ford and A. Sidi, An algorithm for a generalization of the Richardson extrapolation process, SIAM J. Numer. Anal. 24(1987), 1212-1232.
[17] H.L. Gray and W.D. Clark, On a class of nonlinear transformations and their applications to the evaluation of infinite series, J. Res. Nat. Bur. Stand. 73B(1969), 251-274.
[18] S. Gustafson, A method of computing limit values, SIAM J. Numer. Anal. 10(1973), 1080-1090.
[19] T. Hasegawa and T. Torii, Indefinite integration of oscillatory functions by the Chebyshev series expansion, J. Comput. Appl. Math. 17(1987), 21-29.
[20] T. Håvie, Generalized Neville type extrapolation schemes, BIT 19(1979), 204-213.
[21] P. Henrici, Elements of numerical analysis, (John Wiley and Sons, New York, 1964).
[22] D.C. Joyce, Survey of extrapolation processes in numerical analysis, SIAM Rev. 13(1971), 435-490.
[23] D. K. Kahaner, Numerical quadrature by the ε-algorithm, Math. Comp. 26(1972), 689-693.
[24] K. Knopp, Theory and application of infinite series, 2nd English ed., (Dover Publ., New York, 1990).
[25] C. Kowalewski, Accélération de la convergence pour certaines suites à convergence logarithmique, Lecture Notes in Math. 888(1981), 263-272.
[26] D. Levin, Development of non-linear transformations for improving convergence of sequences, Intern. J. Computer Math. 3(1973), 371-388.
[27] D. Levin and A. Sidi, Two new classes of nonlinear transformations for accelerating the convergence of infinite integrals and series, Appl. Math. and Comp. 9(1981), 175-215.
[28] I. M. Longman, Note on a method for computing infinite integrals of oscillatory functions, Proc. Cambridge Phil. Soc. 52(1956), 764-768.
[29] S. Lubkin, A method of summing infinite series, J. Res. Nat. Bur. Stand. 48(1952), 228-254.
[30] J.N. Lyness and B.W. Ninham, Numerical quadrature and asymptotic expansions, Math. Comp. 21(1967), 162-178.
[31] A.C. Matos and M. Prévost, Acceleration property for the columns of the E-algorithm, Numer. Algorithms 2(1992), 393-408.
[32] K. Murota and M. Sugihara, A remark on Aitken's Δ²-process, Trans. Inform. Process. Soc. Japan 25(1984), 892-894 (in Japanese).
[33] I. Navot, An extension of the Euler-Maclaurin summation formula to functions with a branch singularity, J. Math. and Phys. 40(1961), 271-276.
[34] F.W.J. Olver, Asymptotics and Special Functions, (Academic Press, New York, 1974).
[35] N. Osada, Asymptotic expansions and acceleration methods for alternating series, Trans. Inform. Process. Soc. Japan 28(1987), 431-436 (in Japanese).
[36] N. Osada, Asymptotic expansions and acceleration methods for logarithmically convergent series, Trans. Inform. Process. Soc. Japan 29(1988), 256-261 (in Japanese).
[37] N. Osada, A convergence acceleration method for some logarithmically convergent sequences, SIAM J. Numer. Anal. 27(1990), 178-189.
[38] N. Osada, Accelerable subsets of logarithmic sequences, J. Comput. Appl. Math. 32(1990), 217-227.
[39] N. Osada, Acceleration methods for vector sequences, J. Comput. Appl. Math. 38(1991), 361-371.
[40] N. Osada, A method for obtaining sequence transformations, IMA J. Numer. Anal. 12(1992), 85-94.
[41] N. Osada, Extension of Levin's transformations to vector sequences, Numer. Algorithms 2(1992), 121-132.
[42] K.J. Overholt, Extended Aitken acceleration, BIT 5(1965), 122-132.
[43] P. Rabinowitz, Extrapolation methods in numerical integration, Numer. Algorithms 3(1992), 17-28.
[44] L.F. Richardson, The deferred approach to the limit, Part I: Single lattice, Philos. Trans. Roy. Soc. London Ser. A 226(1927), 299-349.
[45] W. Romberg, Vereinfachte numerische Integration, Kgl. Norske Vid. Selsk. Forhandlinger 32(1955), 30-36.
[46] H. Rutishauser, Ausdehnung des Rombergschen Prinzips, Numer. Math. 5(1963), 48-54.
[47] P. Sablonnière, Asymptotic behaviour of iterated modified Δ² and θ₂ transforms on some slowly convergent sequences, Numer. Algorithms 3(1992), 401-410.
[48] J.W. Schmidt, Asymptotische Einschließung bei konvergenzbeschleunigenden Verfahren, Numer. Math. 8(1966), 105-113.
[49] R.J. Schmidt, On the numerical solution of linear simultaneous equations by an iterative method, Phil. Mag. 32(1941), 369-383.
[50] C. Schneider, Vereinfachte Rekursionen zur Richardson-Extrapolation in Spezialfällen, Numer. Math. 24(1975), 177-184.
[51] D. Shanks, Non-linear transformations of divergent and slowly convergent sequences, J. Math. Phys. 34(1955), 1-42.
[52] A. Sidi, Analysis of convergence of the T-transformation for power series, Math. Comp. 35(1980), 833-850.
[53] A. Sidi, On a generalization of the Richardson extrapolation process, Numer. Math. 57(1990), 365-377.
[54] D.A. Smith and W.F. Ford, Acceleration of linear and logarithmic convergence, SIAM J. Numer. Anal. 16(1979), 223-240.
[55] R.R. Tucker, The δ²-process and related topics II, Pacific J. Math. 28(1969), 455-463.
[56] E.J. Weniger, Nonlinear sequence transformations for the acceleration of convergence and the summation of divergent series, Comput. Phys. Rep. 10(1989), 189-371.
[57] E.J. Weniger, On the derivation of iterated sequence transformations for the acceleration of convergence and the summation of divergent series, Comput. Phys. Comm. 64(1991), 19-45.
[58] J. Wimp, Sequence transformations and their applications, (Academic Press, New York, 1981).
[59] P. Wynn, On a device for computing the e_m(S_n) transformation, MTAC 10(1956), 91-96.
[60] P. Wynn, On a procrustean technique for the numerical transformation of slowly convergent sequences and series, Proc. Camb. Phil. Soc. 52(1956), 663-671.
[61] P. Wynn, On the convergence and stability of the epsilon algorithm, SIAM J. Numer. Anal. 3(1966), 91-122.

Appendix A. Asymptotic formulae of the Aitken Δ² process

Lemma 10.3 Suppose that a sequence (s_n) satisfies

    s_n = s + c_0 n^θ + c_1 n^{θ−1} + c_2 n^{θ−2} + O(n^{θ−3}),    (A.1)

where θ < 0 and c_0 (≠ 0), c_1, c_2 are constants. Then the following asymptotic formulae
hold.

(1) s_n − (Δs_n)²/Δ²s_n = s + (c_0/(1−θ)) n^θ + O(n^{θ−1}).

(2) s_n − ((θ−1)/θ) (Δs_n)²/Δ²s_n = s + O(n^{θ−1}).

(3) s_n − ((θ−1)/θ) (Δs_{n−1} Δs_n)/(Δs_n − Δs_{n−1}) = s + O(n^{θ−2}).

Proof. (1) Using (A.1) and the binomial expansion, we have

    Δs_n = c_0 θ n^{θ−1} + (θ−1)( c_0 θ/2 + c_1 ) n^{θ−2}
           + (θ−2)( c_0 θ(θ−1)/6 + c_1 (θ−1)/2 + c_2 ) n^{θ−3} + O(n^{θ−4})

and

    Δ²s_n = c_0 θ(θ−1) n^{θ−2} [ 1 + (1 + c_1/(c_0 θ))(θ−2)(1/n) + O(1/n²) ].    (A.2)

By (A.2),

    (Δs_n)² = c_0² θ² n^{2θ−2} [ 1 + (1 + 2c_1/(c_0 θ))(θ−1)(1/n) + O(1/n²) ],

so that

    (Δs_n)²/Δ²s_n = (c_0 θ/(θ−1)) n^θ [ 1 + (1 + c_1/c_0)(1/n) + O(1/n²) ].    (A.3)

Thus we obtain

    s_n − (Δs_n)²/Δ²s_n = s + (c_0/(1−θ)) n^θ + O(n^{θ−1}).

(2) By (A.3), we have

    s_n − ((θ−1)/θ)(Δs_n)²/Δ²s_n = s − c_0 n^{θ−1} + O(n^{θ−2}).

(3) Similarly,

    Δs_{n−1} = c_0 θ n^{θ−1} − (θ−1)( c_0 θ/2 − c_1 ) n^{θ−2}
               + (θ−2)( c_0 θ(θ−1)/6 − c_1 (θ−1)/2 + c_2 ) n^{θ−3} + O(n^{θ−4}).    (A.4)

By (A.2) and (A.4) we have

    Δs_{n−1} Δs_n = c_0² θ² n^{2θ−2} [ 1 + (2c_1/(c_0 θ))(θ−1)(1/n) + O(1/n²) ]

and

    Δs_n − Δs_{n−1} = c_0 θ(θ−1) n^{θ−2} [ 1 + (c_1/(c_0 θ))(θ−2)(1/n) + O(1/n²) ].

Thus

    (Δs_{n−1} Δs_n)/(Δs_n − Δs_{n−1}) = (c_0 θ/(θ−1)) n^θ [ 1 + (c_1/c_0)(1/n) + O(1/n²) ].

Therefore we obtain

    s_n − ((θ−1)/θ)(Δs_{n−1} Δs_n)/(Δs_n − Δs_{n−1}) = s + O(n^{θ−2}),

as desired.
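The three formulae of Lemma 10.3 can be checked numerically. In the following Python sketch the model sequence s_n = 1 + n^{-1/2} + 0.3 n^{-3/2} + 0.1 n^{-5/2} (an artificial example of the form (A.1) with θ = -1/2, c_0 = 1, c_1 = 0.3, c_2 = 0.1, chosen here for illustration) is transformed by (1), (2) and (3), and the errors decrease in the predicted order:

```python
def s(n, theta=-0.5):
    """Model sequence of the form (A.1): s = 1, c0 = 1, c1 = 0.3, c2 = 0.1."""
    return 1.0 + n ** theta + 0.3 * n ** (theta - 1.0) + 0.1 * n ** (theta - 2.0)

def transforms(n, theta=-0.5):
    """Apply formulae (1), (2), (3) of Lemma 10.3 at index n."""
    sm, s0, s1, s2 = s(n - 1), s(n), s(n + 1), s(n + 2)
    d1 = s1 - s0                     # delta s_n
    d1m = s0 - sm                    # delta s_{n-1}
    d2 = s2 - 2.0 * s1 + s0          # delta^2 s_n
    t1 = s0 - d1 * d1 / d2                                    # formula (1)
    t2 = s0 - (theta - 1.0) / theta * d1 * d1 / d2            # formula (2)
    t3 = s0 - (theta - 1.0) / theta * d1m * d1 / (d1 - d1m)   # formula (3)
    return t1, t2, t3

t1, t2, t3 = transforms(200)
```

The respective errors behave like n^θ, n^{θ−1} and n^{θ−2}, so at n = 200 each transform gains roughly two more digits than the previous one.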

Appendix B. An asymptotic formula of Lubkin's W transformation

For a sequence (s_n), the transform Δ̃_θ is defined by

    Δ̃_θ(s_n) = s_{n+1} − ((θ−1)/θ) (Δs_n Δs_{n+1}) / Δ²s_n,

where θ is a nonzero parameter.

Theorem 11.6 (Sablonnière) Suppose that a sequence (s_n) satisfies

    s_n ≈ s + n^θ Σ_{j=0}^∞ c_j / n^j.    (B.1)

If W_{n,k} is represented as

    W_{n,k} − s = n^{θ−2k} [ c_0^{(k)} + c_1^{(k)}/n + c_2^{(k)}/n² + O(1/n³) ],   c_0^{(k)} ≠ 0,    (B.2)

then

    W_{n,k+1} − s = n^{θ−2k−2} [ c_0^{(k+1)} + O(1/n) ],

where

    c_0^{(k+1)} = c_0^{(k)} (1 + θ − 2k)/(6(θ−2k))
                  − 2 ( c_0^{(k)} k(θ−2k) − c_1^{(k)} )² / ( c_0^{(k)} (θ−2k)³ )
                  + ( 2 c_0^{(k)} k²(θ−2k)(θ−2k−1) − 4 c_1^{(k)} k(θ−2k−1) + 4 c_2^{(k)} ) / ( (θ−2k)² (θ−2k−1) ).    (B.3)

Proof. Let c̃_0^{(k)}, c̃_1^{(k)} and c̃_2^{(k)} be defined by

    W_{n,k} − s = (n+k)^{θ−2k} [ c̃_0^{(k)} + c̃_1^{(k)}/(n+k) + c̃_2^{(k)}/(n+k)² + O(1/(n+k)³) ].

Then

    W_{n,k} − s = c̃_0^{(k)} n^{θ−2k} + ( c̃_0^{(k)} k(θ−2k) + c̃_1^{(k)} ) n^{θ−2k−1}
                  + ( (1/2) c̃_0^{(k)} k²(θ−2k)(θ−2k−1) + c̃_1^{(k)} k(θ−2k−1) + c̃_2^{(k)} ) n^{θ−2k−2}
                  + O(n^{θ−2k−3}).    (B.4)

By (B.2) and (B.4), we have

    c_0^{(k)} = c̃_0^{(k)},
    c_1^{(k)} = c̃_0^{(k)} k(θ−2k) + c̃_1^{(k)},
    c_2^{(k)} = (1/2) c̃_0^{(k)} k²(θ−2k)(θ−2k−1) + c̃_1^{(k)} k(θ−2k−1) + c̃_2^{(k)}.

By Theorem 10.4,

    Δ̃_{θ−2k}(W_{n,k}) − s = d_0^{(k+1)} (n+k+1)^{θ−2k−2} + O((n+k+1)^{θ−2k−3}),

where

    d_0^{(k+1)} = c̃_0^{(k)} (1 + θ − 2k)/12 − ( c̃_1^{(k)} )² / ( c̃_0^{(k)} (θ−2k)² ) + 2 c̃_2^{(k)} / ( (θ−2k)(θ−2k−1) )
                = c_0^{(k)} (1 + θ − 2k)/12 − ( c_1^{(k)} − c_0^{(k)} k(θ−2k) )² / ( c_0^{(k)} (θ−2k)² )
                  + ( c_0^{(k)} k²(θ−2k)(θ−2k−1) − 2 c_1^{(k)} k(θ−2k−1) + 2 c_2^{(k)} ) / ( (θ−2k)(θ−2k−1) ).    (B.5)

We put

    W_{n,k+1} = W_{n+1,k} − N/D,

where N = ΔW_{n+1,k} ( W_{n+1,k} − Δ̃_{θ−2k}(W_{n,k}) ) and D = ΔW_{n+1,k} − Δ( Δ̃_{θ−2k}(W_{n,k}) ).
A straightforward but lengthy computation, expanding W_{n+1,k}, N and D in powers of 1/n by means of (B.2) and (B.5) (formulae (B.6)-(B.9)), gives

    N/D = c_0^{(k)} n^{θ−2k} [ 1 + ( θ − 2k + c_1^{(k)}/c_0^{(k)} )(1/n)
          + ( … − 2 d_0^{(k+1)}/( c_0^{(k)} (θ−2k) ) )(1/n²) + O(1/n³) ],    (B.10)

so that the terms of order n^{θ−2k} and n^{θ−2k−1} in W_{n+1,k} − s − N/D cancel. Since
W_{n,k+1} − s = W_{n+1,k} − s − N/D, we obtain

    W_{n,k+1} − s = ( 2 d_0^{(k+1)} / (θ − 2k) ) n^{θ−2k−2} + O(n^{θ−2k−3}),

which together with (B.5) gives (B.3). This completes the proof.

FORTRAN PROGRAM
Here we give a FORTRAN program that includes the subroutines GENRHO, the
generalized ρ-algorithm, and MODAIT, the modified Aitken Δ² formula. The main routine given below is an example of application of the subroutines to the series

    s_n = Σ_{i=1}^n 1/(i√i).

The parameters NMAX, EPSTOR, DMINTOR, XTV in GENRHO and MODAIT are as
follows:

NMAX       The maximum number of iterations.
EPSTOR     The absolute error tolerance.
DMINTOR    A positive number. For a variable x, if |x| < DMINTOR then the program stops.
XTV        The true value, i.e. the limit of the sequence.

The variables N, ILL, TH in GENRHO and MODAIT are as follows:

N          A positive integer n such that s_n is the n-th term.
ILL        A non-negative integer. If ILL > 0, then the program stops.
TH         A real number: the exponent θ in the asymptotic expansion.
The variables XX, RHO, KOPT in GENRHO are as follows:

XX         The n-th term s_n (input); ρ_{2k}^{(n−2k)}, where k = ⌊n/2⌋ (output).
RHO        An array of dimension (0:1,0:NMAX).
           RHO(0,k): ρ_k^{(n−1−k)} (input); RHO(1,k): ρ_k^{(n−k)} (output).
KOPT       A non-negative integer such that ρ_{2k}^{(n−2k)}, 2k = KOPT, is
           the accelerated value.

The variables XX, S, KOPT, DS in MODAIT are as follows:

XX         The n-th term s_n (input); s_k^{(n−k)}, where k = ⌊n/2⌋ (output).
S          An array of dimension (0:1,0:NMAX).
           S(0,k): s_k^{(n−1−k)} (input); S(1,k): s_k^{(n−k)} (output).
KOPT       A non-negative integer such that s_k^{(n−k)} is
           the accelerated value, where k = KOPT.
DS         An array of dimension (0:1,0:NMAX).
           DS(0,k): s_k^{(n−k−1)} − s_k^{(n−k−2)} (input);
           DS(1,k): s_k^{(n−k)} − s_k^{(n−k−1)} (output).

The function TERM(N) returns the n-th term of the infinite series.


C     ACCELERATION METHODS FOR INFINITE SERIES
      PROGRAM INFSER
      IMPLICIT REAL*8 (A-H,O-Z)
      IMPLICIT INTEGER*4 (I-N)
      CHARACTER CEQ*60,CACCL*60
      PARAMETER (NMAX=20,EPSTOR=1.0D-12,DMINTOR=1.0D-30)
      EXTERNAL TERM
      REAL*8 X(1:NMAX)
      REAL*8 RHO(0:1,0:NMAX)
      REAL*8 TRHO(0:1,0:NMAX)
      REAL*8 S(0:1,0:NMAX),DS(0:1,0:NMAX)
      REAL*8 TS(0:1,0:NMAX),DTS(0:1,0:NMAX)
      CEQ=' A_I=1/SQRT(I)/I'
      XTV=2.61237534868549D0
      DO 101 IACCL=1,2
      GO TO (102,103),IACCL
  102 CACCL=' AUTOMATIC GENERALIZED RHO ALGORITHM'
      GO TO 109
  103 CACCL=' AUTOMATIC MODIFIED AITKEN DELTA SQUARE'
      GO TO 109
  109 CONTINUE
      WRITE (*,3000)
      WRITE (*,*) CEQ
      WRITE (*,*) CACCL
      WRITE (*,3100)
      ILL=0
      XX=0.0D0
      DO 201 N=1,NMAX
      X0=XX
      XX=XX+TERM(N)
      X(N)=XX
      IF (N.EQ.1) THEN
        DX=XX
        GO TO 209
      ENDIF
      IF (N.EQ.2) THEN
        DX0=DX
        DX=XX-X0
        D2X=DX-DX0
        DD=DX/D2X
        GO TO 209
      ENDIF
      DX0=DX
      DX=XX-X0
      D2X=DX-DX0
      DD0=DD
      DD=DX/D2X
      ALPHA=1.0D0/(DD-DD0)+1.0D0
      TH=-2.0D0
      NN=N-2
      GO TO (202,203),IACCL
  202 CALL GENRHO(ALPHA,TRHO,NN,DMINTOR,KOPT,NMAX,ILL,TH)
      GO TO 209
  203 CALL MODAIT(ALPHA,TS,DTS,NN,DMINTOR,KOPT,NMAX,ILL,TH)
      GO TO 209
  209 CONTINUE
      ERX=ABS(XX-XTV)
      SDXER=-LOG10(ERX)
      XP=XX
      IF (N.LE.2) GO TO 229
      GO TO (210,220),IACCL
  210 CONTINUE
      DO 211 NN=1,N
      XP=X(NN)
      TH=ALPHA
      CALL GENRHO(XP,RHO,NN,DMINTOR,KOPT,NMAX,ILL,TH)
  211 CONTINUE
      GO TO 229
  220 CONTINUE
      DO 221 NN=1,N
      XP=X(NN)
      TH=ALPHA
      CALL MODAIT(XP,S,DS,NN,DMINTOR,KOPT,NMAX,ILL,TH)
  221 CONTINUE
      GO TO 229
  229 CONTINUE
      ER=ABS(XP-XTV)
      SDER=-LOG10(ER)
      IF (N.LE.2) GO TO 232
  231 WRITE (*,2000) N,XX,ALPHA,XP,SDXER,SDER,KOPT
      GO TO 239
  232 WRITE (*,2010) N,XX
      GO TO 239
  239 CONTINUE
      DXP=ABS(XP-XP0)
      XP0=XP
      IF (DXP.LT.EPSTOR) GO TO 300
      IF (ILL.GE.1) GO TO 300
      IF (N.LE.2) GO TO 201
  201 CONTINUE
  300 CONTINUE
      WRITE (*,3100)
      IF (ILL.GE.1) THEN
        WRITE (*,*) 'ABNORMALLY ENDED'
      ENDIF
      WRITE (*,3100)
  101 CONTINUE
 2000 FORMAT (I4,3D25.15,2F7.2,I5)
 2010 FORMAT (I4,1D25.15)
 3000 FORMAT (1H1)
 3100 FORMAT (/)
 9999 STOP
      END

FUNCTION TERM(N)
REAL*8 X,TERM
X=DBLE(N)
TERM=1.0D0/X/SQRT(X)
RETURN
END
SUBROUTINE GENRHO(XX,RHO,N,DMINTOR,KOPT,NMAX,ILL,TH)
REAL*8 DMINTOR,ER,TH
REAL*8 RHO(0:1,0:NMAX)
REAL*8 DRHO,XX
KOPT=0
IF (N.EQ.1) GO TO 110
KEND=N-1
DO 101 K=KEND,0,-1
RHO(0,K)=RHO(1,K)
101 CONTINUE
110 CONTINUE
RHO(1,0)=XX
IF (N.EQ.1) THEN
RHO(1,1)=-TH/XX
GO TO 199
ENDIF
DRHO=RHO(1,0)-RHO(0,0)
ER=ABS(DRHO)
KOPT=0
IF (ER.LT.DMINTOR) THEN
ILL=1
GO TO 199
ENDIF
RHO(1,1)=-TH/DRHO
KEND=N
DO 121 K=2,KEND
DRHO=RHO(1,K-1)-RHO(0,K-1)
ER=ABS(DRHO)
IF ((ER.LT.DMINTOR).AND.(MOD(K,2).EQ.1)) THEN
KOPT=K-1
GO TO 140
ENDIF
IF ((ER.LT.DMINTOR).AND.(MOD(K,2).EQ.0)) THEN

ILL=1
GO TO 199
ENDIF
RHO(1,K)=RHO(0,K-2)+(DBLE(K-1)-TH)/DRHO
121 CONTINUE
IF (MOD(N,2).EQ.0) THEN
KOPT=N
ELSE
KOPT=N-1
ENDIF
140 CONTINUE
XX=RHO(1,KOPT)
199 RETURN
END

      SUBROUTINE MODAIT(XX,S,DS,N,DMINTOR,KOPT,NMAX,ILL,TH)
      REAL*8 DMINTOR,TH,COEF
      REAL*8 W1,W2,XX
      REAL*8 S(0:1,0:NMAX),DS(0:1,0:NMAX)
      KOPT=0
      IF (N.EQ.1) GO TO 110
      KEND=INT((N-1)/2)
      DO 101 K=0,KEND
      S(0,K)=S(1,K)
      IF ((MOD(N,2).EQ.1).AND.(K.EQ.KEND)) GO TO 101
      DS(0,K)=DS(1,K)
  101 CONTINUE
  110 CONTINUE
      S(1,0)=XX
      IF (N.EQ.1) THEN
        DS(1,0)=XX
        GO TO 199
      ENDIF
      DS(1,0)=XX-S(0,0)
      KEND=INT(N/2)
      DO 111 K=1,KEND
      W1=DS(0,K-1)*DS(1,K-1)
      W2=DS(1,K-1)-DS(0,K-1)
      IF (ABS(W2).LT.DMINTOR) THEN
        ILL=1
        GO TO 199
      ENDIF
      COEF=(DBLE(2*K-1)-TH)/(DBLE(2*K-2)-TH)
      S(1,K)=S(0,K-1)-COEF*W1/W2
      IF (N.EQ.2*K-1) GO TO 111
      DS(1,K)=S(1,K)-S(0,K)
  111 CONTINUE
  120 KOPT=INT(N/2)
  140 CONTINUE
      XX=S(1,KOPT)
  199 RETURN
      END
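For readers without a FORTRAN compiler, the recursion that GENRHO implements can be sketched in a few lines of Python (an illustrative re-implementation, not part of the thesis program) and checked on the same series s_n = Σ_{i=1}^n 1/(i√i), whose limit is XTV = ζ(3/2) ≈ 2.61237534868549:

```python
def gen_rho(s, theta):
    """Generalized rho-algorithm with exponent parameter theta:
    rho_{-1}^{(n)} = 0, rho_0^{(n)} = s_n,
    rho_{k+1}^{(n)} = rho_{k-1}^{(n+1)} + (k - theta)/(rho_k^{(n+1)} - rho_k^{(n)}).
    Returns the highest even-order entry (even orders approximate the limit)."""
    cols = [[0.0] * (len(s) + 1), list(s)]      # orders -1 and 0
    k = 0
    while len(cols[-1]) >= 2:
        p2, p1 = cols[-2], cols[-1]
        cols.append([p2[n + 1] + (k - theta) / (p1[n + 1] - p1[n])
                     for n in range(len(p1) - 1)])
        k += 1
    m = len(cols) - 2                           # highest order reached
    if m % 2:
        m -= 1
    return cols[m + 1][-1]

s, acc = [], 0.0
for i in range(1, 13):
    acc += 1.0 / (i * i ** 0.5)
    s.append(acc)

# s_n - s behaves like n^{-1/2}(c0 + c1/n + ...), so theta = -1/2 here
est = gen_rho(s, -0.5)
```

With only twelve terms of this very slowly convergent series, the generalized ρ-algorithm with the correct exponent θ = −1/2 recovers the limit to many digits, mirroring the behaviour of the FORTRAN routine above.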
