Editors:
Ch. Blanc, Lausanne; A. Ghizzetti, Roma; P. Henrici, Zurich;
A. Ostrowski, Montagnola; J. Todd, Pasadena
VOL. 14
Basic Numerical Mathematics
Vol. 1:
Numerical Analysis
by
John Todd
Professor of Mathematics
California Institute of Technology
1979
BIRKHÄUSER VERLAG
BASEL AND STUTTGART
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any
form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission
of the copyright owner.
Birkhäuser Verlag, Basel, 1979
Softcover reprint of the hardcover 1st edition 1979
Contents
Preface
Notations
Chapters 1-10
Bibliographical Remarks
Index
We use the standard logical symbolism, e.g., ∈ for belongs to, ⊂ for is included in, ⇒ for implies.
R is the set of real numbers.
If X ⊂ R is bounded above, so that there is an M ∈ R such that x ∈ X ⇒ x ≤ M, then there is a Λ ∈ R such that x ∈ X ⇒ x ≤ Λ and such that if ε > 0 there is an x₀ ∈ X such that x₀ > Λ − ε. We write Λ = lub X; it is the least upper bound. If Λ ∈ X, we write Λ = max X. Conventionally, if X is not bounded above, e.g., in the case X = {1, 2, 3, ...}, we write lub X = ∞. In the same way we define glb X, min X.
[a, b] denotes the set of real numbers x such that a ≤ x ≤ b.
(a, b) denotes the set of real numbers x such that a < x < b.
These are, respectively, closed and open intervals. Half-open intervals [a, b), (a, b], [a, ∞), (−∞, b] are defined in an obvious way.
The derivatives or differential coefficients of a real function of a real variable, f, when they exist, are denoted by ∂f(x), ∂²f(x), ..., where
∂^{r+1} f(x) = ∂(∂^r f(x)), r = 1, 2, ....
f(x) = O(g(x)) as x → ∞ means that for some constant A and some x₀
x ≥ x₀ ⇒ |f(x)| ≤ A |g(x)|;
f(x) = o(g(x)) as x → ∞ means that, given ε > 0, there is an x₁ such that
x ≥ x₁ ⇒ |f(x)| < ε |g(x)|.
Preface
which show some of the difficulties which are present in our subject and
which can curb reckless use of equipment.
In the Appendix we have included some relatively unfamiliar parts of
the theory of Bessel functions which are used in the construction of some of
our examples. This is at a markedly higher level than the rest of the volume.
In the second volume, subtitled "Numerical Algebra", we assume that
the fundamental ideas of linear algebra: vector space, basis, matrix, determinant, characteristic values and vectors, have been absorbed. We use
repeatedly the existence of an orthogonal matrix which diagonalizes a real
symmetric matrix; we make considerable use of partitioned or block matrices, but we need the Jordan normal form only incidentally. After an initial
chapter on the manipulation of vectors and matrices we study norms,
especially induced norms. Then the direct solution of the inversion problem
is taken up, first in the context of theoretical arithmetic (i.e., when round-off
is disregarded) and then in the context of practical computation. Various
methods of handling the characteristic value problems are then discussed.
Next, several iterative methods for the solution of systems of linear equations
are examined. It is then feasible to discuss two applications: the first, the
solution of a two-point boundary value problem, and the second, that of
least squares curve fitting. This volume concludes with an account of the
singular value decomposition and pseudo-inverses.
Here, as in Volume 1, the ideas of "controlled computational experiments" and "bad examples" are emphasized. There is, however, one
marked difference between the two volumes. In the first, on the whole, the
machine problems are to be done entirely by the students; in the second,
they are expected to use the subroutines provided by the computing
system - it is too much to expect a beginner to write efficient matrix
programs; instead we encourage him to compare and evaluate the various
library programs to which he has access.
The problems have been collected in connection with courses given
over a period of almost 30 years beginning at King's College, London, in
1946 when only a few desk machines were available. Since then such
machines as SEAC, various models of UNIVAC, Burroughs, and IBM
equipment and, most recently, PDP 10, have been used in conjunction with
the courses which have been given at New York University, and at the
California Institute of Technology.
We recommend the use of systems with "remote consoles" because, for
instance, on the one hand, the instantaneous detection of clerical slips and,
on the other, the sequential observation of convergents is especially valuable
to beginners. The programming language used is immaterial. However, most
CHAPTER 1
x_{n+1} = f(x_n, y_n),
y_{n+1} = g(x_n, y_n),
where f, g are given functions and x₀, y₀ are given, from the point of view of the rates of convergence of the sequences {x_n}, {y_n}. In most cases x₀, y₀ will be non-negative, and the sequences {x_n}, {y_n} will be monotonic and bounded and therefore convergent. The limits of {x_n}, {y_n} will be the same in each case but the rates of convergence to these limits will differ markedly from case to case.
The generation of the sequences {x_n}, {y_n} on a computer of any kind is an exercise suited to a complete novice.
The theoretical discussions which establish the observed rates of convergence require only the basic facts of calculus.
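As a first machine exercise, the Gaussian arithmetic-geometric iteration studied below can be programmed in a few lines; the pairs (a_n, b_n) illustrate the kind of coupled sequences just described. A minimal sketch in Python (the function name and printing format are ours, not the text's):

```python
# Gaussian AGM: a_{n+1} = (a_n + b_n)/2, b_{n+1} = sqrt(a_n b_n), with a_0 >= b_0 > 0.
from math import sqrt

def agm(a, b, tol=1e-15):
    """Iterate the Gaussian AGM, printing the gap a_n - b_n at each step."""
    n = 0
    while a - b > tol:
        a, b = (a + b) / 2, sqrt(a * b)
        n += 1
        print(n, a, b, a - b)
    return a

M = agm(sqrt(2), 1.0)   # the case a0 = sqrt(2), b0 = 1 used later in this chapter
```

The printed column a_n − b_n shrinks quadratically: the number of correct digits roughly doubles at each step.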
a_{n+1} = ½(a_n + b_n), b_{n+1} = √(a_n b_n).
Each of a_{n+1}, b_{n+1} lies between b_n and a_n, and a_n − b_n ≥ 0 for all n.
Thus the sequences {a_n}, {b_n} are bounded monotone sequences and so convergent with limits α, β such that α ≥ β. We show that α = β.
Clearly
(a_{n+1} − b_{n+1})/(a_{n+1} + b_{n+1}) = [(√a_n − √b_n)/(√a_n + √b_n)]²,
so that, since
(√x − √y)/(√x + √y) = (x − y)/(√x + √y)² ≤ (x − y)/(x + y),
which can be proved by dividing across by √x + √y, it follows that
(a_n − b_n)/(a_n + b_n) ≤ [(a₀ − b₀)/(a₀ + b₀)]^{2ⁿ},
and hence
a_n − b_n ≤ 2a₀[(a₀ − b₀)/(a₀ + b₀)]^{2ⁿ}.
Let us estimate for what value of n this quantity will be < ½(10⁻⁸) and so negligible to 8D. In the case (a₀ − b₀)/(a₀ + b₀) = 1 − 10⁻⁴ we want to find n so that
[1 − 10⁻⁴]^{2ⁿ} < ½ × 10⁻⁸.
Now, from tables, e⁻²⁰ = 2.06... × 10⁻⁹ < ½ × 10⁻⁸, and since 1 − 10⁻⁴ < e^{−10⁻⁴} it suffices to choose n so that 2ⁿ × 10⁻⁴ ≥ 20, i.e., n ≥ 18.
2.
The Borchardt sequences are defined by
α_{n+1} = ½(α_n + β_n), β_{n+1} = √(α_{n+1} β_n).
Here
β_{n+1} − α_{n+1} = [√α_{n+1}/(2(√α_{n+1} + √β_n))](β_n − α_n),
and since the first factor on the right is less than ¼, the limits of the sequences coincide. The convergence in this case is ultimately much slower than that in the Gaussian case. The same results obtain when β₀ < α₀.
We shall now determine the limit in closed form. Although the defining relations are still homogeneous they are not symmetrical. Assume first that β₀ = 1 > α₀. Let θ be defined by cos θ = α₀.
Using the relation 1 + cos x = 2 cos²(x/2) repeatedly we find
2ⁿ sin(2⁻ⁿθ) β_n = 2ⁿ⁻¹ cos(2⁻¹θ) ⋯ cos(2⁻ⁿ⁺¹θ)[2 sin(2⁻ⁿθ) cos(2⁻ⁿθ)]
= 2ⁿ⁻¹ sin(2⁻ⁿ⁺¹θ) β_{n−1} = ⋯ = sin θ.
Hence
β_n = sin θ/[2ⁿ sin(2⁻ⁿθ)] → (sin θ)/θ as n → ∞.
If instead α₀ > 1 we write α₀ = cosh t and find in the same way the limit (sinh t)/t.
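The limit (sin θ)/θ is easy to check numerically. A short sketch (our code, not the text's):

```python
# Borchardt's algorithm: alpha_{n+1} = (alpha_n + beta_n)/2, beta_{n+1} = sqrt(alpha_{n+1} * beta_n).
# With alpha_0 = cos(theta) < 1 = beta_0 the common limit should be sin(theta)/theta.
from math import sqrt, acos, sin

def borchardt(alpha, beta, steps=60):
    for _ in range(steps):
        alpha = (alpha + beta) / 2
        beta = sqrt(alpha * beta)
    return alpha

alpha0 = 0.2
theta = acos(alpha0)
limit = borchardt(alpha0, 1.0)
# compare limit with sin(theta)/theta (about 0.7155 here)
```

The gap β_n − α_n shrinks by roughly a factor 4 per step, in line with the ¼ bound above.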
3.
Suppose that a₀ > b₀. Then we find from the definitions that
a_{n+1} − b_{n+1} = [√{(a_n + b_n)/2}/(√a_n + √b_n)](a_n − b_n),
where the first factor on the right is always less than √½ < 1 and approaches ½; it follows that the sequences {a_n}, {b_n} have a common limit, say l. The rate of convergence is, however, ultimately much slower than in the Gaussian case.
The limit can be expressed in closed form. Consider the quantity
(a_n² − b_n²)/[2 log(a_n/b_n)].
Near the common limit l we have a_n² − b_n² ≈ 2l(a_n − b_n) and 2 log(a_n/b_n) ≈ 2(a_n − b_n)/l, so that this quantity tends to l²; from the defining relations it is unchanged by the iteration, whence for all n
(a_n² − b_n²)/[2 log(a_n/b_n)] = (a₀² − b₀²)/[2 log(a₀/b₀)].
This gives
l² = (a₀² − b₀²)/[2 log(a₀/b₀)],
so that
l = √{(a₀² − b₀²)/[2 log(a₀/b₀)]}.
4. HISTORICAL REMARKS
It seems clear that he was searching for a formula for M(a, b), of the kind given in 1.2, 1.3 and some of the problems. It was not until 1799 that he made progress. At that time he computed the definite integral
A = ∫₀¹ dt/√(1 − t⁴).
He then recalled his value of M(√2, 1) given above and observed that the product A·M(√2, 1) coincided to many decimal places with ½π. In his diary,
on 30 May 1799, Gauss wrote that if one could prove rigorously that A·M(√2, 1) = ½π, then new fields of mathematics would open. In his diary, on 23 December 1799, Gauss noted that he had proved this result, and more; in later years his prophecy was fulfilled.
Stirling had actually obtained A to 16D in 1730. In Chapter 9 below we shall show how to evaluate A approximately, using however another method due to Gauss, quite different from that of Stirling. It has been shown that this integral, which can be interpreted geometrically as the quarter-perimeter of the lemniscate of Bernoulli, is a transcendental number.
There is no obvious way to establish Gauss' result. All known methods have the same character, which is similar to the developments in 1.3, in Problem 1.8 and in Problem 1.12. The identity A·M(√2, 1) = ½π is the special case k = 1/√2 of the following theorem.
THEOREM. M(a₀, b₀) = a₀π/[2K(k)], where k² = 1 − (b₀/a₀)² and
K(k) = ∫₀^{π/2} dθ/√(1 − k² sin²θ)
is the complete elliptic integral of the first kind.
The proof depends on the integral
R(a², b²) = (1/π) ∫₀^∞ dx/√[x(x + a²)(x + b²)],
which, by the substitution
cos θ = (a² − t)/(a² + t), dθ = a dt/[(t + a²)√t],
is connected with the trigonometric form
R(a², b²) = (2/π) ∫₀^{π/2} dθ/√(a² cos²θ + b² sin²θ),
and on the further substitution
t = x(x + b₁²)/(x + a₁²), dt = (x + a₁a)(x + a₁b) dx/(x + a₁²)²,
from which we find
R(a², b²) = R(a₁², b₁²):
R is invariant under the Gaussian iteration, so that R(a₀², b₀²) = R(a_n², b_n²) for all n. Letting n → ∞, with M = M(a₀, b₀),
R(a₀², b₀²) = R(M², M²) = (1/π) ∫₀^∞ dx/[√x (x + M²)] = (1/π)(π/M) = M⁻¹.
Hence
M = [R(a₀², b₀²)]⁻¹ = πa₀/[2K(k)],
where the last step comes from evaluating R(a₀², b₀²) in the trigonometric form.
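Gauss's numerical observation, and the theorem above, can be checked directly on a machine: compute M(√2, 1) by the AGM and A = ∫₀¹ dt/√(1 − t⁴) by quadrature (after the substitution 1 − t = v², which removes the singularity at t = 1), and verify that A·M(√2, 1) agrees with ½π. A sketch (the quadrature choice is ours):

```python
from math import sqrt, pi

def agm(a, b):
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def lemniscate_A(panels=1000):
    # A = integral of dt/sqrt(1 - t^4) over [0, 1]; with t = 1 - v^2 this becomes
    # integral of 2 dv / sqrt((2 - v^2)(1 + (1 - v^2)^2)), which is smooth;
    # integrate by Simpson's rule (panels must be even).
    def g(v):
        t = 1 - v * v
        return 2 / sqrt((2 - v * v) * (1 + t * t))
    h = 1.0 / panels
    s = g(0) + g(1)
    for k in range(1, panels):
        s += (4 if k % 2 else 2) * g(k * h)
    return s * h / 3

A = lemniscate_A()
M = agm(sqrt(2), 1.0)
# A * M should equal pi/2 to near machine accuracy
```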
Chapter 1, Problems
1.1. Read again a proof of the fundamental theorem on monotonic sequences: if x_n ≤ x_{n+1} and if x_n ≤ M for all n = 0, 1, 2, ..., then there is an x ≤ M such that
lim x_n = x.
1.2. Show that if x_n ≥ y_n for all n = 0, 1, 2, ... and if x_n → x and y_n → y then x ≥ y.
1.5. Show that as x → ∞ the sequence (1 − x⁻¹)ˣ increases to e⁻¹ and the sequence (1 − x⁻¹)^{x−1} decreases to the same limit.
1.6. Observe the behavior of the arithmetic-geometric sequences {a_n}, {b_n}, e.g., when a₀ = 1, b₀ = 0.2 and when a₀ = √2, b₀ = 1.
Specifically, compute enough terms of the sequences to find the arithmetic-geometric means to 8 decimal places. At each step print out the values of
a_n, b_n, a_n − b_n, 2a₀[(a₀ − b₀)/(a₀ + b₀)]^{2ⁿ}.
1.7. If M(a₀, b₀) is the arithmetic-geometric mean of a₀, b₀, show that for any t ≥ 0
M(ta₀, tb₀) = tM(a₀, b₀),
i.e., M is homogeneous (of degree one).
Use the fact to determine, with the help of the result of Problem 1.6, M(6000, 1200).
1.8. For a₀, b₀ given (both different from zero) define
a_{n+1} = ½(a_n + b_n), the arithmetic mean,
b_{n+1} = 2/{(1/a_n) + (1/b_n)}, the harmonic mean.
Show that a₀ ≥ b₀ implies that {a_n} is monotone decreasing, that {b_n} is monotone increasing and that both sequences converge to √(a₀b₀), the geometric mean of a₀, b₀.
Show that
a_{n+1} − b_{n+1} = [a_n − b_n]²/(4a_{n+1}).
Observe the behavior of the sequences {a_n}, {b_n} in special cases.
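The limit √(a₀b₀) claimed in Problem 1.8 is easy to observe; a sketch (our code, not the text's):

```python
# Arithmetic-harmonic iteration: the product a*b is invariant, so the common
# limit is the geometric mean sqrt(a0*b0).
from math import sqrt

def arith_harm(a, b, steps=30):
    for _ in range(steps):
        a, b = (a + b) / 2, 2 / (1 / a + 1 / b)
    return a, b

a0, b0 = 3.0, 1.0
a, b = arith_harm(a0, b0)
# both a and b should now agree with sqrt(3) to machine accuracy
```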
1.9. For a₀, b₀ given (both different from zero) define
a_{n+1} = 2/{(1/a_n) + (1/b_n)}, b_{n+1} = √(a_n b_n).
Discuss the convergence of the sequences {a_n}, {b_n} either directly, or by relating them to the Gaussian sequences for a₀⁻¹, b₀⁻¹.
1.10. a) Observe the behavior of the Borchardt sequences in the cases when α₀ = 0.2, β₀ = 1 and when α₀ = √2, β₀ = 1. Compare the rates of convergence with those in the Gaussian case. Check the limits you obtain from tables.
b) Repeat a) for the Carlson sequences.
1.11. If A₀, B₀ are non-negative and if we define for n = 0, 1, 2, ...
B_{n+1} = √(A_n B_n), A_{n+1} = ½(A_n + B_{n+1}),
a₀ = ½(1 + x), g₀ = √x,
a_{n+1} = ½(a_n + g_n), g_{n+1} = √(a_{n+1} g_n), n = 0, 1, ...,
where x > 0.
Calculate log √2. Examine the convergence of the sequence {⅓(a_n + 2g_n)}.
b) Discuss the convergence of the algorithm
a₀ = 1,
a_{n+1} = ½(a_n + g_n), g_{n+1} = √(a_{n+1} g_n),
where x > 0.
Calculate arctan 1.
1.17. Let a₀ > b₀ > 0. Define, for n = 0, 1, ...,
Show that {a_n}, {b_n} have a common limit. Determine this limit and discuss the rate of convergence.
1.18. (B. C. Carlson)
Show that the sequences {x_n}, {y_n}, where 0 ≤ y₀ < x₀ and where for n ≥ 0,
x_{n+1} = ½(x_n + y_n), y_{n+1} = √(x_n x_{n+1}),
converge to a common limit l = l(x₀, y₀). First of all print out {x_n}, {y_n} for n = 0(1)20 when x₀ = 1, y₀ = 0. Is the convergence monotonic? Is the convergence ultimately geometric, and if so, what is the common ratio?
By means of a suitable change of variable show that
∫₀^∞ (t + x₀²)^{−3/4}(t + y₀²)^{−1/2} dt
is invariant under the iteration. Conclude that
4[l(x₀, y₀)]^{−1/2} = ∫₀^∞ (t + x₀²)^{−3/4}(t + y₀²)^{−1/2} dt
and that
[l(1, 0)]^{−1/2} = ∫₀¹ (1 − τ⁴)^{−1/2} dτ.
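Problem 1.18 can be checked numerically against the lemniscate integral of the preceding section: with x₀ = 1, y₀ = 0 the common limit should equal A⁻², where A = ∫₀¹(1 − τ⁴)^{−1/2} dτ. A sketch (our code; the quadrature substitution τ = 1 − v² is as before):

```python
from math import sqrt

def carlson(x, y, steps=80):
    # x_{n+1} = (x_n + y_n)/2, y_{n+1} = sqrt(x_n * x_{n+1})
    for _ in range(steps):
        x1 = (x + y) / 2
        y = sqrt(x * x1)
        x = x1
    return x

def lemniscate_A(panels=1000):
    # integral of d(tau)/sqrt(1 - tau^4) over [0,1], smoothed by tau = 1 - v^2
    def g(v):
        t = 1 - v * v
        return 2 / sqrt((2 - v * v) * (1 + t * t))
    h = 1.0 / panels
    s = g(0) + g(1)
    for k in range(1, panels):
        s += (4 if k % 2 else 2) * g(k * h)
    return s * h / 3

l = carlson(1.0, 0.0)
A = lemniscate_A()
# expect l to agree with A**(-2)
```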
1.19. Show that
∫₀^{π/2} dθ/√(1 − k² sin²θ) = [2/(1 + k')] ∫₀^{π/2} dφ/(1 − k₁² sin²φ)^{1/2},
where k'² = 1 − k² and k₁ = (1 − k')/(1 + k').
1.20. (J. Schwab) Find expressions for a_n, the radius of the circle inscribed in a regular n-gon with perimeter 1, and for b_n, the radius of the circumscribing circle. Hence, or otherwise, show that
CHAPTER 2
1. ORDERS OF MAGNITUDE
We shall be concerned with statements such as:
(1) f(x) = O(x²) as x → ∞.
(2) g(x) = O(1) as x → ∞.
(3) h(x) = o(x) as x → 0.
(4) k(x) = o(1) as x → 0.
The relation (1) means that for some constant A and some x₀
x ≥ x₀ ⇒ |f(x)| ≤ Ax².
For instance, this is true for any x₀ with A = 2 if f(x) = 2x², and for x₀ ≥ 1 with A = 5 if f(x) = 2x² + 3x. Similarly (2) means that for some constant A and some x₀
x ≥ x₀ ⇒ |g(x)| ≤ A.
For instance, this is true for any x₀ and A = 1 when g(x) = sin x. In other words (2) means that g(x) is bounded as x → ∞.
The relation (3) means that given any ε > 0 there is a δ = δ(ε) such that
0 < |x| ≤ δ ⇒ |h(x)| ≤ ε|x|.
25
f(x)
= O(g(x))
as
x~oo
(7)
h(x) = o(k(x))
as
x~oo
(9)
= A (xo) (which
Xo
= x o( e),
such that
REMARKS: (10) The definition is phrased for the case when x is a real variable; a similar definition applies when we consider functions of a positive integral variable n.
(11) Usually g(x) and k(x) are standard positive functions like xⁿ or eˣ and the absolute value signs are not needed on the right of (8) and (9).
The vagueness about the interdependence of A and x₀ in (8) and of ε and x₀ in (9) can be disturbing to the numerical analyst. For instance, the bare statement that f(x) = O(1) is useless, for this is true for f(x) = 10¹⁰⁰ sin x. In practice, a calculation involving O's and o's, while useful in preliminary investigations, must be reworked, getting explicit estimates for the constants involved, before being employed in actual computation.
The meaning of statements like
f(x) = O(g(x)) as x → 0
or
h(x) = o(k(x)) as x → a
is clear. In the first case the conclusion of (8) must hold for
0 < |x| ≤ η
for some η > 0, and in the second the conclusion of (9) must hold for
0 < |x − a| ≤ η
for some η > 0.
(12) f(x) = O(g(x)) and g(x) = O(h(x)) imply f(x) = O(h(x)).
(13) f(x) = O(g(x)) and h(x) = O(g(x)) imply f(x)h(x) = O([g(x)]²).
Observe that if f(x) = O(g(x)) it does not follow that g(x) = O(f(x)); nor do the relations
f₁(x) = O(g(x)), f₂(x) = O(g(x))
imply anything about the relative orders of f₁, f₂, let alone their equality.
Before proving in detail a general result of use in Chapter 4 we discuss
two examples.
Examples
(a) To prove that g(n) = 2n³ + 3n² + 4n + 5 = O(n³) as n → ∞ we have various possibilities which are exemplified by the following:
(15) g(n) ≤ 14n³ if n ≥ 1,
(16) g(n) ≤ 3n³ if n ≥ 5,
(17) g(n) ≤ 2.12n³ if n ≥ 100.
Each of the three relations (15), (16), (17) justifies the statement g(n) = O(n³) and the interdependence of the n₀ and A is clear. The constant A can be taken to be any number bigger than 2 but the n₀ increases as we take A closer to 2.
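The interplay between A and n₀ can be checked mechanically; a sketch (our code, with thresholds as in (15)-(17)):

```python
# g(n) = 2n^3 + 3n^2 + 4n + 5 = O(n^3): check bound/threshold pairs.
def g(n):
    return 2 * n**3 + 3 * n**2 + 4 * n + 5

assert all(g(n) <= 14 * n**3 for n in range(1, 2000))      # A = 14,   n0 = 1
assert all(g(n) <= 3 * n**3 for n in range(5, 2000))       # A = 3,    n0 = 5
assert g(4) > 3 * 4**3                                     # ... and n0 = 5 is sharp
assert all(g(n) <= 2.12 * n**3 for n in range(100, 2000))  # A = 2.12, n0 = 100
```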
(b) To prove that x¹¹ = o(eˣ) as x → ∞ we can proceed as follows. Note that, for x > 0,
x¹¹/eˣ < x¹¹/(x¹²/12!) = 12!/x.
Hence, given ε > 0, we have x¹¹ < εeˣ provided x ≥ x₀ = 12!/ε; for ε = 10⁻⁶ this gives x₀ ≈ 4.79 × 10¹⁴, since 12! = 4.790016 × 10⁸. This is not a very realistic value; indeed a more realistic value is x₀ = 60, for
60¹¹ = 3.62... × 10¹⁹ < 10⁻⁶e⁶⁰ = 1.14... × 10²⁰
and
(xⁿe⁻ˣ)' = (n − x)e⁻ˣxⁿ⁻¹ < 0 if x > n.
THEOREM. If, as x → 0,
(19) f(x) = 1 + ax + O(x²), g(x) = 1 + bx + O(x²),
then, as x → 0,
(20) f(x)/g(x) = 1 + (a − b)x + O(x²).
Proof. This result can be obtained by combining the following results, which follow from the same hypotheses (19), again as x → 0:
(21) f(x)g(x) = 1 + (a + b)x + O(x²),
(22) 1/f(x) = 1 − ax + O(x²).
We shall establish (21) and (22). The meaning of (19) is that there are positive constants A, B, α, β such that
(23) |f(x) − 1 − ax| ≤ Ax² if |x| ≤ α,
(24) |g(x) − 1 − bx| ≤ Bx² if |x| ≤ β.
If we write A₁ = max(A, B), α₁ = min(1, α, β) then (23), (24) both hold with A, B replaced by A₁ and α, β by α₁. We have to show that there are positive constants C, γ such that
(25) |f(x)g(x) − [1 + (a + b)x]| ≤ Cx² if |x| ≤ γ.
Now
(26) f(x)g(x) − [1 + (a + b)x] = (f − 1 − ax)(g − 1 − bx) + ax(g − 1) + bx(f − 1) + (f − 1 − ax) + (g − 1 − bx) − abx²,
and, using (23), (24), we find for |x| ≤ α₁
|f(x)g(x) − [1 + (a + b)x]| ≤ A₁²x⁴ + |a||x|[|b||x| + A₁x²] + |b||x|[|a||x| + A₁x²] + A₁x² + A₁x² + |a||b|x² ≤ Cx²
if
C = A₁² + 2|a||b| + (|a| + |b|)A₁ + 2A₁ + |a||b| = A₁² + (2 + |a| + |b|)A₁ + 3|a||b|,
since |x| ≤ α₁ ≤ 1. This gives (21), i.e., (25) with γ = α₁.
To establish (22) we use the identity
(27) 1/f(x) − (1 − ax) = [a²x² − (f − 1 − ax)(1 − ax)]/f(x).
Since f(x) = 1 + ax + O(x²) as x → 0 we can find an α₂ such that |f(x)| ≥ ½ if |x| ≤ α₂. We shall find α₂ explicitly. First we take α₁ = min(1, α). Then (23) gives
|f(x)| ≥ 1 − |a||x| − Ax².
We want to have
|a||x| + Ax² ≤ ½,
which will follow, since we are assuming |x| ≤ α₁ ≤ 1, from
|x|(|a| + A) ≤ ½,
i.e., from |x| ≤ 1/[2(|a| + A)]. We therefore take
α₂ = min(α₁, 1/[2(|a| + A)]).
Observe that while a can be zero, we may always assume that A is not zero so α₂ is well defined. We now return to the identity (27) and, taking absolute values, we find
|1/f(x) − (1 − ax)| ≤ 2[a²x² + Ax²(1 + |a|)] if |x| ≤ α₂,
which is (22).
2. RATES OF CONVERGENCE
In Chapter 1 we studied sequences which had differing rates of convergence. We shall now give a formal definition of order of convergence.
DEFINITION. Suppose x_n → x and write e_n = x_n − x. If
lim |e_{n+1}|/|e_n|^λ = m, where 0 < m < ∞,
we say that {x_n} converges (to x) with order λ.
It is not difficult to construct sequences which converge to zero with any prescribed order, λ, say. Take any x, 0 < x < 1 and then consider the sequence x_n = x^{λⁿ}. In this case e_n = x^{λⁿ} and e_{n+1} = e_n^λ so that lim (e_{n+1}/e_n^λ) = 1, and the sequence converges to zero with order λ.
Certain variants of this definition are in use. For instance, we may only require that
|e_{n+1}| ≤ c|e_n|^λ for all n, for some constant c.
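The model sequence x_n = x^{λⁿ} shows the definition in action; a sketch (our code):

```python
# For x_n = x**(lam**n) we have e_n = x_n and e_{n+1}/e_n**lam = 1 exactly:
# convergence of order lam (here lam = 2, i.e., quadratic convergence to 0).
x, lam = 0.5, 2
e = [x ** (lam ** n) for n in range(8)]
ratios = [e[n + 1] / e[n] ** lam for n in range(7)]
# every ratio equals 1
```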
as required.
Similarly, if B is the limit of the Borchardt sequence we have
β_n > β_{n+1} > α_{n+1} = ½(α_n + β_n) > α_n
and
B − α_{n+1} ≤ β_{n+1} − α_{n+1} = [√α_{n+1}/(2(√α_{n+1} + √β_n))](β_n − α_n)
< ¼(β_n − α_n)
≤ ½(B − α_n).
Chapter 2, Problems
2.1. Read an account of the Integral Test for the convergence and divergence of a series with positive terms.
2.2. The following series are well known to be divergent:
Σ 1/n, Σ 1/ln n, Σ 1/(n ln n), Σ 1/(n ln n ln ln n).
Estimate for each series an integer N such that the sum of the first N terms exceeds 10. Where reasonable, obtain the least such N on your computer.
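For the harmonic series the least such N can be found by brute force (for the slower series it is astronomically large); a sketch (our code):

```python
# Find the least N with 1 + 1/2 + ... + 1/N > 10.
s, n = 0.0, 0
while s <= 10:
    n += 1
    s += 1.0 / n
# since H_N is about ln N + 0.5772, N should be near e**(10 - 0.5772), i.e. about 1.24e4
```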
2.3. Obtain rough estimates for the sums of the series
2.4. If 0 < a < 1, discuss the behavior, as n → ∞, of
f_a(n) = (1 − a)⁻¹n^{1−a} − Σ_{r=1}^{n} r⁻ᵃ.
What happens when a = 1?
2.5. Show that there are constants C₁, C₂ such that
C₁n⁻ᵃ < Σ_{r=n}^{∞} r^{−1−a} < C₂n⁻ᵃ,
and determine appropriate C₁, C₂.
2.6. Use the results of Problem 2.5 to show that it is not profitable to try to evaluate
Σ_{n=1}^{∞} a_n, where a_n = (2n − 1)!!/[(2n)!!(4n + 1)],
by direct summation.
c) If y_n = Σ_{r=1}^{n} r⁻¹ − ln n, show that, for n ≤ N,
(1/2n) − (1/2N) − (B/n²) ≤ y_n − y_N ≤ (1/2n) + (1/2N) + (B/n²)
for some constant B.
... as n → ∞, x being fixed, ...
CHAPTER 3
We are concerned with the computation of N^p, where N is a given number, 0 < N < 1, and p is a real number, ½, ⅓, −1, for instance. We shall discuss various sequences {x_n} whose limit is the solution sought and whose terms are defined by recurrence relations, usually of the form
x_{n+1} = f(x_n).
1. RECIPROCALS
The linear scheme for the computation of N⁻¹ is
x_{n+1} = (1 − N)x_n + 1, for which x_{n+1} − x_n = (1 − Nx_n).
The first shows the monotony of the sequence and the second its boundedness (consider the graphs of y = x and y = (1 − N)x + 1). Let l denote the limit. Passage to the limit in the defining relation gives
l = (1 − N)l + 1, i.e., Nl = 1,
For the quadratic scheme x_{n+1} = x_n(2 − Nx_n) we have
x_{n+1} − N⁻¹ = 2x_n − Nx_n² − N⁻¹ = −N[x_n² − 2N⁻¹x_n + N⁻²] = −N(x_n − N⁻¹)²,
and, if 0 < x_n < N⁻¹, we have
x_{n+1} − x_n = x_n(1 − Nx_n) > 0.
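The two schemes are easy to compare on a machine; a sketch (our code, with N = 0.7 as an arbitrary test value):

```python
# Compute 1/N for N = 0.7 by the linear scheme x <- (1-N)x + 1 (error multiplied
# by 1-N each step) and the quadratic scheme x <- x(2 - Nx) (error squared,
# up to the factor -N, each step).
N = 0.7
lin = quad = 1.0
for _ in range(60):
    lin = (1 - N) * lin + 1
for _ in range(8):
    quad = quad * (2 - N * quad)
# both converge to 1/0.7 = 1.428571..., the quadratic scheme in far fewer steps
```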
Similarly, for the cubic scheme
x_{n+1} = x_n[3(1 − Nx_n) + (Nx_n)²],
passage to the limit gives
l(Nl − 2)(Nl − 1) = 0.
2. SQUARE ROOTS
The classical scheme for the computation of √N is
(5) x_{n+1} = ½(x_n + Nx_n⁻¹).
This is easily motivated: if x_n ≥ √N then N/x_n ≤ √N, and it would appear that their average ½(x_n + (N/x_n)) would be a better approximation than either.
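Scheme (5) is the standard machine exercise; a sketch (our code, with N = 2):

```python
# Square roots by x <- (x + N/x)/2: the number of correct digits roughly
# doubles at each step (quadratic convergence).
N = 2.0
x = 1.5
errs = []
for _ in range(5):
    x = (x + N / x) / 2
    errs.append(abs(x - 2.0 ** 0.5))
```

The recorded errors satisfy errs[n+1] < errs[n]², until rounding error takes over.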
Among the variants of (5) which we shall discuss is the Dedekind recurrence
(9) x_{n+1} = x_n(x_n² + 3N)/(3x_n² + N).
3. ITERATION
Suppose that g satisfies a Lipschitz condition
|g(x) − g(y)| ≤ θ|x − y|, 0 < θ < 1,
for x, y in the interval J = [x₀ − h, x₀ + h], and that |g(x₀) − x₀| ≤ (1 − θ)h.
Then (1) the relation x_{n+1} = g(x_n) makes sense for all n and all the points x₀, x₁, x₂, ... lie in J and (2) x_n → x* ∈ J, the only fixed point of g in J.
We have
|x_{n+1} − x_n| = |g(x_n) − g(x_{n−1})| ≤ θ|x_n − x_{n−1}| ≤ ⋯ ≤ θ·θⁿ⁻¹|x₁ − x₀| = θⁿ|x₁ − x₀|.
Now
|x_{n+1} − x₀| ≤ (θⁿ + θⁿ⁻¹ + ⋯ + 1)|x₁ − x₀| ≤ [1/(1 − θ)]|x₁ − x₀| ≤ [1/(1 − θ)]h(1 − θ) = h,
i.e., x_{n+1} ∈ J.
The inequality |x_n − x_{n−1}| ≤ θⁿ⁻¹|x₁ − x₀| shows that the sequence x_{n+1} = x₀ + (x₁ − x₀) + ⋯ + (x_{n+1} − x_n) consists of the partial sums of a convergent series and is convergent to a limit, x*, necessarily in J.
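A concrete instance: g(x) = cos x on J = [0.5, 1] satisfies such a Lipschitz condition with θ = sin 1 ≈ 0.84 (our example, not the text's), and the a priori bound hθⁿ can be checked along the way:

```python
# Fixed-point iteration for g(x) = cos(x) on J = [x0-h, x0+h] = [0.5, 1].
from math import cos, sin

x0, h = 0.75, 0.25
theta = sin(1.0)                                # max |g'(x)| on J
assert abs(cos(x0) - x0) <= (1 - theta) * h     # hypothesis of the theorem

x = x0
xs = [x]
for _ in range(100):
    x = cos(x)
    xs.append(x)
xstar = x   # about 0.739085, the unique fixed point of cos in J
```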
The limit x* is the unique fixed point (in J). For if we had another fixed point y we would have
g(y) = y, g(x*) = x*
and so
|x* − y| = |g(x*) − g(y)| ≤ θ|x* − y|,
which, since θ < 1, forces y = x*. Further,
|x_{n+p} − x_n| ≤ (θⁿ + ⋯ + θ^{n+p−1})|x₁ − x₀| ≤ [θⁿ/(1 − θ)]|x₁ − x₀| ≤ hθⁿ.
Let p → ∞ and we get
(11) |x* − x_n| ≤ hθⁿ.
4.
The general Newton process for the solution of f(x) = 0 is
(12) x_{n+1} = x_n − f(x_n)/f'(x_n).
Taking f(x) = N − x⁻¹ we get f'(x) = x⁻² and
x_{n+1} = x_n − (N − x_n⁻¹)/x_n⁻²,
i.e.,
x_{n+1} = x_n(2 − Nx_n);
taking f(x) = x² − N we get x_{n+1} = ½(x_n + Nx_n⁻¹).
It is not obvious whether or not some of the other relations can be put in the Newtonian form. This problem can be examined in the following way (cf. also Problem 3.12).
Consider the Dedekind recurrence (9), which we can write as
x_{n+1} = x_n − f(x_n)/f'(x_n), with
f'(x)/f(x) = (3x² + N)/(2x³ − 2Nx).
We solve this by expressing the right hand side in partial fractions getting
f'/f = −1/(2x) + 2x/(x² − N),
which can be integrated at sight to give
log f(x) = −½ log x + log (x² − N) + constant.
In general, if the recurrence x_{n+1} = g(x_n) is to arise from a Newton process we need
f'(x)/f(x) = 1/(x − g(x)),
and hence
log f(x) = ∫ˣ (t − g(t))⁻¹ dt.
For the linear scheme x_{n+1} = (1 − N)x_n + 1 this gives
f(x) = (Nx − 1)^{1/N},
and for one of the other schemes it gives
f(x) = [(√N − x)/(√N + x)]^{2/√N}.
Note that in each case f'(l) = 0, so there is no contradiction with the fact that the Newton process is quadratically convergent; this fact will be established assuming that f'(l) ≠ 0.
In the case where f(x) = x² − N we have f'(x) = 2x.
5. PRACTICAL COMPUTATION AND THEORETICAL ARITHMETIC
In the first place we must be concerned with the size of the numbers which occur in our computation; in the second we must be sure that all intermediate results of our computation remain within scale, i.e., do not cause overflow or underflow on the machine being used. To guarantee this again requires a detailed examination of the algorithm and of the machine.
Another example of the distinction we are making is the following:
Theoretically the harmonic series
1 + ½ + ⅓ + ¼ + ⋯
is divergent. If one tries to sum this on a computer we will have a finite sum, for n⁻¹ will become and remain zero from the point of view of the machine, and apparently we have convergence.
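This is easy to see by simulating a short word length: round each partial sum to a few significant digits and the "sum" stops changing. A sketch (our code; 4 digits is our choice, made so that the stall happens quickly):

```python
# Sum the harmonic series on a simulated 4-significant-digit decimal machine.
from math import floor, log10

def round_sig(x, digits=4):
    """Round x > 0 to the given number of significant decimal digits."""
    return round(x, digits - 1 - floor(log10(x)))

s, n = 0.0, 0
while True:
    n += 1
    t = round_sig(s + 1.0 / n) if s > 0 else 1.0 / n
    if t == s:          # the term no longer changes the rounded sum: "convergence"
        break
    s = t
# on this machine the series "converges" to s after n terms
```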
Chapter 3, Problems
3.1. Demonstrate the relative speeds of the various algorithms for N⁻¹ and for N^{1/2} by printing out in parallel columns the corresponding values of x_n.
Observe how the choice of Xo affects the speed of convergence.
x₀ is given and
3.3. Discuss the behavior of the following sequence when y₀ is given and when 0 < N < 1:
y_{n+1} = ½y_n(3 − Ny_n²).
and by
x_{n+1} = ½Y(x_n) + ½Z(x_n),
where
X(x) = ½[x + Nx⁻¹], ...
x₀ is given and ... n = 1, 2, ..., N > 0, 0 < a < m.
z_{n+1} = ½[z_n + A/z_n]
when A is real and positive and when z₀ is complex. Can you conjecture what sets of z₀ give convergence?
Continue your experiments in the case when A = ae^{iθ} is complex. For what set of initial values z₀ do you expect convergence?
Examine the case when θ = π and z₀ is real. Examine the case when A = −3 and z₀ = 1.
3.12. Show that the recurrence relation
x_{n+1} = x_n(p + 1 − Nx_nᵖ)/p
arises from a Newton process for N^{−1/p}.
x₀ is given.
3.14. Discuss the linear scheme for N⁻¹ in the light of the general theorem.
3.15. Discuss the convergence of the iteration scheme for a zero of x + x³
3.16. Use one of the recurrence relations for √N to make a table of √N for N = 12451(1)12500. Describe carefully your stopping rule. If V is the 50-dimensional vector with components v_N = √N = SQRT N, N = 12451(1)12500, find the following three norms of V:
‖V‖₂ = √(Σ v_N²), ...
Here SQRT N indicates the output of the machine subroutine for √N.
CHAPTER 4
1. QUADRATIC EQUATIONS
The formulas for the case of cubics and quartics are given in algebra texts and the reader should try to use them, checking the results obtained from tables.
The simple formula above need not be the answer to the problem of solving a quadratic. If we use a typical floating point machine in the case q = 1 and p large and negative, say, there will be severe cancellation in the calculation of the smaller root from the above formula: e.g., p = …, q = 1 gives x₋ = …, which has a relative error of … %. It is easy to avoid this pitfall. We have
x₊ = −½p + √{(p²/4) − q}
and
x₋ = q/x₊ = 1/x₊.
For very large p (e.g., p = …, q = …) in our machine we fail since p²/4 is out of range. We can counter this by calculating the radical as
p SQRT [¼ − (q/p)/p].
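A sketch of the two ways of computing the smaller root, on an IEEE double machine (our code; the specific p is chosen so that the cancellation is total):

```python
# x^2 + p x + q = 0 with q = 1 and p large and negative: the naive formula for
# the smaller root cancels catastrophically; q / x_plus does not.
from math import sqrt

p, q = -1e9, 1.0
r = sqrt(p * p / 4 - q)         # the radical (p*p/4 is still in range here)
x_plus = -p / 2 + r
naive_minus = -p / 2 - r        # catastrophic cancellation: gives exactly 0.0
stable_minus = q / x_plus       # about 1e-9, essentially full accuracy
```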
2. BAD EXAMPLES
The equation
(1) (z − 1)⁴ = 0
has four roots 1, 1, 1, 1. If we change the right hand side to 49 × 10⁻⁸z², i.e., change the coefficient of z² by less than 1 part in 10⁷, then (1) becomes
(z − 1)⁴ = (7 × 10⁻⁴z)², i.e., {(z − 1)² − 7 × 10⁻⁴z}{(z − 1)² + 7 × 10⁻⁴z} = 0.
Similarly, z¹⁰ = 0 has 10 zero roots; if we change the right hand side to 10⁻¹⁰ the roots of the new equation all have absolute value 10⁻¹.
Next consider
(3) ∏_{r=1}^{20} (z − r) = 0, i.e., z²⁰ − 210z¹⁹ + ⋯ = 0.
If the coefficient of z¹⁹ is changed by a relative amount of about 10⁻⁹, the real roots of the new equation are
1.000000000, 2.000000000, 3.000000000, 4.000000000, 4.999999928,
6.000006944, 6.999697234, 8.007267603, 8.917250249, 20.846908101,
the remaining roots being complex.
Do we mean that
|x_a − x₀| is small
or that
|f(x_a)| is small?
is small?
Results of the first kind are called "forward" error estimates and those
of the second are "backward". Results of the second kind are very acceptable to the experimental scientist whose data is generally subject to experimental error and to the numerical analyst who cannot introduce most
numbers exactly into his machine. Let us quote two results about the simple case of square roots, i.e., when f(x) ≡ x² − N.
(4) Given an s-place fixed point binary machine, there is a square root algorithm such that the alleged square root and the true square root never differ by more than 0.76 × 2⁻ˢ in absolute value.
(5) Given a floating point machine, there is an algorithm which produces from an input ā = 2^q × A, ¼ ≤ A ≤ 1, an output whose square differs from ā by at most
2^q × 2⁻²⁷ × 4.38.
The Newton process
x_{n+1} = x_n − [f(x_n)]/f'(x_n),
which is motivated in the diagram (Fig. 4.1), is well suited for the approximate determination of the roots of f(x) = 0, once they have been separated. We outline a proof that the sequence of approximations has quadratic convergence.
Fig. 4.1
Suppose f(ξ) = 0, f'(ξ) ≠ 0 and f'' is continuous near ξ. By Taylor's theorem
(6) f(x) = (x − ξ)f'(ξ)[1 + ½(x − ξ)f''(ξ)/f'(ξ) + O((x − ξ)²)],
f'(x) = f'(ξ)[1 + (x − ξ)f''(ξ)/f'(ξ) + O((x − ξ)²)].
It follows, by the theorem of Chapter 2, that
f(x)/f'(x) = (x − ξ)[1 − ½(x − ξ)f''(ξ)/f'(ξ) + O((x − ξ)²)],
and hence
(7) x_{n+1} − ξ = x_n − ξ − [f(x_n)/f'(x_n)] = ½[f''(ξ)/f'(ξ)](x_n − ξ)² + O((x_n − ξ)³),
as announced.
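By (7), the ratio (x_{n+1} − ξ)/(x_n − ξ)² should settle near ½f''(ξ)/f'(ξ); for f(x) = x² − 2 this constant is 1/(2√2) ≈ 0.3536. A sketch (our example):

```python
# Quadratic convergence of Newton's process for f(x) = x^2 - 2, xi = sqrt(2):
# e_{n+1}/e_n^2 tends to f''(xi)/(2 f'(xi)) = 1/(2 sqrt(2)).
from math import sqrt

xi = sqrt(2.0)
x = 2.0
es = []
for _ in range(4):
    x = x - (x * x - 2) / (2 * x)
    es.append(x - xi)
ratio = es[3] / es[2] ** 2
```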
To establish convergence properly we require more elaborate considerations. The main difficulties are that we do not know ξ, nor how close x₀ must be to it to make sure x_n → ξ.
We give here an account following Henrici. We shall introduce the
conditions as we require them in the proof - these conditions are ascribed to
Fourier. The convergence of Newton's Method can be established under
much weaker conditions and in much wider contexts.
We first want to be sure that there are zeros of f(x) and that we know
what zero we are going after, i.e., we want to separate the zeros. To deal
with this point we assume that in an interval [a, b], where a < b we have,
say,
(8) f(a) < 0, f(b) > 0.
If, further,
(9) f'(x) > 0 in [a, b],
so that f is increasing, then there can be at most one zero in [a, b]. We shall denote this zero by ξ.
We shall assume that
(10) f''(x) ≤ 0 in [a, b]
and that the tangent to the curve at x = b intersects the axis between a, b.
That this suffices is almost clear geometrically. The reader is advised to solve
Problems 4.5, 4.10 before continuing.
We now proceed with our proof. Take an initial guess x₀ such that a ≤ x₀ ≤ b. Let ξ be the unique zero of f(x). We distinguish two cases:
a ≤ x₀ ≤ ξ, ξ < x₀ ≤ b. [If x₀ = ξ we have ξ = x₀ = x₁ = ⋯.]
51
We shall prove that x,.::;, for all n and that x,. t . It follows that this
bounded monotone sequence converges to an 'I'J::; f Since f, f' are continuous, and f' f:. 0, passing to the limit in
gives
so that
But , is the only zero of f in [a, b J. Hence ,= lim x,..
Assume that x,.::;f The First Mean Value Theorem gives f(')-f(x,.)=
-f(x,.) = (' - x,.)f'(') where x,. ::;,::; g. Since ["(x)::; 0 it follows that f' is not
increasing and so f'(')::; f'(xn) giving
so that
where
~::;,::; Xo.
Since
f'
so that
Xl = Xo -
[f(xo)/f'(xo)]
The fact that a::; Xl follows from the condition (8). To justify our
diagram we have to show that the point (xo, f(xo)) is below the tangent at b.
by the First Mean Value Theorem, where x₀ < ζ < b, and because f'(b) − f'(ζ) ≤ 0. The fact that the tangent at x₀ has a greater slope than that at b follows because f'(x) is not increasing.
This completes the proof of convergence of the Newton Process.
Provided that we can "separate" the roots, the Newton process is
reasonably satisfactory. We shall not discuss the problem of separation
here - Sturm's Theorem can be used in the important case when all the roots
are known to be real.
4.
A polynomial f(x) = a₀xⁿ + a₁xⁿ⁻¹ + ⋯ + a_n is best evaluated by the scheme
f₀ = a₀, f_r = xf_{r−1} + a_r (r = 1, 2, ..., n), f_n = f(x).
Is this the best possible algorithm? It is certainly better, e.g., than direct calculation of each term and then summation. Consider, e.g., 3x² + 2x + 1. If we proceed thus:
x·x, 3(x·x), 2·x, 3x² + 2x, 3x² + 2x + 1,
we use 3 multiplications and 2 additions as compared to the 2 multiplications and 2 additions required if we use the Horner scheme
3x, (3x + 2), x(3x + 2), x(3x + 2) + 1.
53
/' 8 22 48
ul . r . b 2
4 11"" 24.1' 49 = f(2))"m tip lcation y
4
/'!.)"38
19 62 =1'(2)
as follows
fo=ao,
fn = f(x),
go=O,
gn = f'(x).
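The two recurrences fit in one routine; a sketch (our code, checked against the worked example above):

```python
def horner(coeffs, x):
    """Evaluate f(x) and f'(x) by the scheme above; coeffs = [a0, a1, ..., an]."""
    f = coeffs[0]
    g = 0.0
    for a in coeffs[1:]:
        g = x * g + f        # g_r = x g_{r-1} + f_{r-1}
        f = x * f + a        # f_r = x f_{r-1} + a_r
    return f, g

# the worked example: f(x) = 4x^3 + 3x^2 + 2x + 1 at x = 2
val, der = horner([4, 3, 2, 1], 2)
```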
Chapter 4, Problems
4.1. Find the root of 3x⁴ + 4.6x³ − 0.7x² + 9.2x + 10 = 0, near x = −1, using a desk calculator.
4.2. Write a program for the Newton process, applied to a polynomial of degree ≤ 10, using the scheme outlined in the text to evaluate the function and its derivative. In the first place this program may be written in the case of real coefficients and for real roots and tested on the quartic equation of the preceding problem. A complex program should then be written and used on the quartic, to find roots near 0.7 ± 1.2i.
4.3. (L. Collatz) Write a program to obtain the quotient q(x) and remainder r(x) when a polynomial
a₀xⁿ + a₁xⁿ⁻¹ + ⋯ + a_{n−1}x + a_n
4.4. Discuss the behavior of the Newton process, when the root sought is a
multiple root.
4.5. Discuss the Newton method for the determination of the solution of
H(x) = 3x³ − 3x + 28 = 0
4.6. Justify the scheme given for the calculation of f(a), f'(a) when
f(x) = a₀xⁿ + a₁xⁿ⁻¹ + ⋯ + a_n.
x_{n+1} = x_n − 2f(x_n)f'(x_n)/[2{f'(x_n)}² − f(x_n)f''(x_n)]
4.9. Find the two least positive roots of the equation tan x = x. Find an approximation to the root near (n + ½)π in the form
x = …
CHAPTER 5
The remainder of a convergent series Σ u_r with sum u is
r_n = u − Σ_{r=0}^{n−1} u_r = u_n + u_{n+1} + ⋯.
Series are added termwise:
Σ u_r + Σ v_r = Σ (u_r + v_r),
and multiplied according to
(Σ u_r)(Σ v_r) = Σ w_r, where w_r = u₀v_r + u₁v_{r−1} + ⋯ + u_rv₀.
1. UNIFORM CONVERGENCE
We are concerned with the validity of relations such as
(1) lim_{x→a} Σ u_r(x) = Σ u_r(a)
or
(2) ∫_a^b [Σ u_r(x)] dx = Σ ∫_a^b u_r(x) dx,
it being assumed, for instance, that each u_r(x) is continuous and that Σ u_r(x) is convergent for x near a in the first case, and for x between a and b in the second.
It will appear that the concept of uniform convergence is the appropriate one in the present context. This concept is particularly natural in the numerical study of convergence. We discuss, for two examples, the dependence of the n₀(ε, x) on the argument x.
Consider first the series
Σ_{n=0}^∞ 1/[(n + x + 1)(n + x + 2)], x ≥ 0.
Since
1/[(n + x + 1)(n + x + 2)] = 1/(n + x + 1) − 1/(n + x + 2),
we have
Σ_{r=0}^{n−1} 1/[(r + x + 1)(r + x + 2)] = 1/(1 + x) − 1/(n + 1 + x),
so that the sum is (1 + x)⁻¹ and the remainder after n terms is (n + 1 + x)⁻¹.
Our second example is more complicated. It was studied by G. G. Stokes in 1847. Since
u_n ≡ [x(x + 2)n² + x(4 − x)n + 1 − x]/{n(n + 1)[(n − 1)x + 1](nx + 1)}
= [1/n − 1/(n + 1)] + [2/((n − 1)x + 1) − 2/(nx + 1)],
we have, for the remainder after n terms,
r_n = 1/(n + 1) + 2/(nx + 1).
For given ε > 0 and given x let n₀(ε, x) be the least n₀ such that
|r_n(x)| < ε if n ≥ n₀(ε, x).
A graph of n₀(ε, x) in the case of the first series is of the form in Fig. 5.1 while that in the second series is of the form in Fig. 5.2.
Fig. 5.1 (n₀(ε, x) for the first series)
Fig. 5.2 (n₀(ε, x) for the second series)
These examples suggest the following definition.
Definition 3. A series Σ u_r(x), whose terms depend on a variable x, which is convergent to a sum u(x) for x ∈ X, is said to be uniformly convergent
to u(x) in X, if, given any ε > 0, there is an n₀ = n₀(ε), independent of x, such that
|r_n(x)| < ε for all x ∈ X, if n ≥ n₀(ε).
It is clear that our first series is uniformly convergent to its sum in the
interval X =[0,1] and that the second is not uniformly convergent in the
interval X = [0, 1].
It can be shown that the results (1) and (2) are true if the series is
uniformly convergent in an interval including a or in the interval [a, b].
An equivalent approach to the definition of uniform convergence and
one which is equally appropriate in the numerical context is the following.
Take the series Σ u_r(x) convergent to u(x) for x ∈ X. Fix n and consider r_n(x) as a function of x: it will have an upper bound, M_n, say. This means that the error committed by truncating our series after n terms does not exceed M_n for any x ∈ X. If we now let n vary, we can ask whether M_n → 0 as n → ∞, or not. In the first case, we can get an arbitrarily good approximation to u(x), for all x ∈ X, by taking an appropriate (fixed) number of terms and in the second, this is not possible. The first case is exactly that of uniform convergence. [Cf. Problem 5.4.]
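For the two examples above M_n is computable directly from the remainders; a sketch (our code, using a grid for the maxima):

```python
# M_n = max over [0, 1] of |r_n(x)| for the two remainders of this section.
def Mn_first(n, grid=1000):
    return max(1.0 / (n + 1 + k / grid) for k in range(grid + 1))

def Mn_stokes(n, grid=1000):
    return max(1.0 / (n + 1) + 2.0 / (n * (k / grid) + 1) for k in range(grid + 1))

# Mn_first(n) = 1/(n+1) -> 0: uniform convergence on [0, 1].
# Mn_stokes(n) -> 2 (attained near x = 0): the convergence is not uniform.
```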
2.
The question arises whether an arbitrary continuous function f on [a, b] can be represented in the form
f(x) = Σ f_r(x),
where each f_r is a polynomial (not necessarily of degree r). If this were so we could then calculate f(x) approximately throughout [a, b] by truncating the series at an appropriate point. This, fortunately, is true. The result was proved by Weierstrass in 1885 and there have been many proofs given since then. One of these was given by S. N. Bernstein in 1912. It has apparently an enormous advantage over most of the others in so far as the polynomial approximations to an arbitrary f(x) can be written down in terms of the values of f(x) at the rational points.
Theorem (S. N. Bernstein). Let f be continuous on [0, 1] and write
B_n(f, x) = Σ_{r=0}^{n} f(r/n)p_{n,r}(x), p_{n,r}(x) = [n!/(r!(n − r)!)]xʳ(1 − x)ⁿ⁻ʳ.
Then B_n(f, x) → f(x) uniformly in [0, 1].
Take the case f(x) = x². Then an easy calculation with binomial coefficients (cf. Problem 5.5) gives
B_n(x², x) = x² + x(1 − x)/n.
The p_{n,r}(x) can be interpreted as the probability of exactly r successes (S) in n independent trials with success probability x and failure (F) probability 1 − x; e.g., for n = 3, r = 1 the outcomes and probabilities are
S F F: x(1 − x)(1 − x),
F S F: (1 − x)x(1 − x),
F F S: (1 − x)(1 − x)x.
For r near nx, p_{n,r}(x) is appreciable; for r not near nx, p_{n,r}(x) ≐ 0.
Hence the significant part of the sum for Bn (f, x) occurs for r near nx but,
since f is continuous, f(rln) is then near f(x). Thus
B_n(f, x) ≐ f(x).
We have noted the apparent advantage of the Bn(f, x), that they can be
written down as a mean of the values of f at the points rln, r = 0, 1, 2, ... , n.
However, they have a disadvantage in so far as that the convergence is slow:
thus even for f(x) = x^2, the error is O(n^{-1}), according to Problem 5.5. This
order result is true at any point Xo when f(x) has a second derivative at Xo.
This means that in order to get a 6 decimal approximation to sin x we have
to take the Bernstein polynomial of order about a million. Much better
results are possible - see NBS Handbook, p. 76.
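The Bernstein polynomial B_n(f, x) = Σ_r f(r/n) C(n, r) x^r (1 − x)^{n−r} can be evaluated directly; the sketch below (my own illustration) shows the slow O(n^{-1}) convergence for f(x) = x^2, for which the error is exactly x(1 − x)/n:

```python
from math import comb

def bernstein(f, n, x):
    # B_n(f, x) = sum over r of f(r/n) * C(n, r) * x^r * (1 - x)^(n - r)
    return sum(f(r / n) * comb(n, r) * x**r * (1 - x)**(n - r)
               for r in range(n + 1))

f = lambda t: t * t
for n in (10, 100, 1000):
    print(n, bernstein(f, n, 0.5) - 0.25)   # error = 0.25/n at x = 0.5
```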
3.
The Chebyshev polynomials T_n(x) = cos (n arccos x), defined on a ≤ x ≤ b by a linear change of variable, are taken here on −1 ≤ x ≤ 1. They satisfy the orthogonality relation

∫_{-1}^{+1} T_m(x) T_n(x) dx/√(1 − x^2) = ∫_0^π cos mθ cos nθ dθ
   = 0     if m ≠ n,
   = ½π    if m = n ≠ 0,
   = π     if m = n = 0.

The coefficients in a Chebyshev expansion of f are accordingly obtained from integrals of the form

∫_{-1}^{+1} T_m(x) f(x) dx/√(1 − x^2).
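The orthogonality relation can be checked numerically. The sketch below (my own construction) uses the N-point Gauss-Chebyshev rule, which is exact for polynomial integrands of degree less than 2N:

```python
from math import cos, acos, pi

def T(j, x):
    # Chebyshev polynomial T_j(x) = cos(j arccos x)
    return cos(j * acos(x))

def inner(m, n, N=64):
    # N-point Gauss-Chebyshev approximation to
    # integral over [-1, 1] of T_m T_n dx / sqrt(1 - x^2)
    nodes = [cos((2 * k - 1) * pi / (2 * N)) for k in range(1, N + 1)]
    return (pi / N) * sum(T(m, x) * T(n, x) for x in nodes)

print(inner(3, 5), inner(4, 4), inner(0, 0))   # ~0, ~pi/2, ~pi
```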
Chapter 5, Problems
5.1. Let n_0(ε, x) be the least n_0 such that (n + 1 + x)^{-1} < ε for n ≥ n_0(ε, x). Tabulate n_0(ε, x) for, say, several values of ε and x.
5.3. Calculate
M_n = max over 0 ≤ x ≤ 1 of |r_n(x)|
when
r_n(x) = 1/(n + 1 + x)
and when
r_n(x) = 1/(n + 1) + 1/(nx + 1).
5.4. Suppose that u(x) = Σ u_r(x), the series being convergent in the interval
[0, 1], and that u(x) = u_0(x) + u_1(x) + ... + u_{n−1}(x) + r_n(x). Let M_n =
max over 0 ≤ x ≤ 1 of |r_n(x)|. Let n_0 = n_0(ε, x) be the least integer such that if n ≥ n_0 then
|r_n(x)| ≤ ε. Let n_0(ε) = lub over 0 ≤ x ≤ 1 of n_0(ε, x). Prove that
n_0(ε) < ∞ for all ε > 0
if and only if M_n → 0 as n → ∞.
ql(X) = _kX4+~X2+~,
q_2(x) = −1.065537x^4 + 1.930297x^2 + 0.067621,
q_3(x) = (1/8)(−7x^4 + 14x^2 + 1),
q_4(x) = 2(−16x^4 + 36x^2 + 3)/(15π).
Which q_i(x) gives the best uniform approximation to |x| in [0, 1]?
5.7. a) Sum the series
√2{½ − e cos θ + e^2 cos 2θ − e^3 cos 3θ + ...}
where e = 3 − 2√2, θ = arccos (2x − 1) and 0 ≤ x ≤ 1.
b) Show that for a certain φ,
(1 + x)^{-1} − π_n(x) = ((−1)^{n−1} e^n/4) cos (nθ + φ),
where
π_n(x) = √2{[½ − e T_1*(x) + ... + (−1)^{n−1} e^{n−1} T*_{n−1}(x)] + ((−1)^{n−1}/2)(e^n/(1 − e)) T*_n(x)}.
c) Calculate max over 0 ≤ x ≤ 1 of |(1 + x)^{-1} − t(x)| when t(x) is the power series truncated after the term in x^r and when
t_2(x) = ½ + ¼√2 − ½(2 − √2)x = π_1(x).
e) Use the computer to find the least number of terms of the series
(1 + x)^{-1} = 1 − x + x^2 − x^3 + ...
-t:s;y:s;t
5.9. The Pade fractions r_{μ,ν}(x), where μ, ν = 0, 1, 2, ..., corresponding to a
power series C(x) = Σ c_n x^n, are the unique rational functions
r_{μ,ν}(x) = N_{μ,ν}(x)/D_{μ,ν}(x)
5.10. Consider the approximation of f(x) by the truncated Chebyshev expansion
Σ'_{r=0}^{3} a_r T_r(x),
where
a_0 = 2.53213
and where the prime indicates that the first term of the sum is ½a_0. Confine your
attention to the interval [−1, 1].
5.11. Obtain the formal expansions:
(a) |x| = 2/π + (4/π) Σ_{n=1}^∞ ((−1)^{n+1}/(4n^2 − 1)) T_{2n}(x).
(b) sin (½πx) = 2J_1(½π)T_1(x) − 2J_3(½π)T_3(x) + 2J_5(½π)T_5(x) − ... .
5.12. Suppose an experiment has a probability 0.6 of success. What is the
probability P_{20,k}(0.6) of exactly k successes in 20 independent repetitions of
this experiment? Give the computed values of P_{20,k}(0.6) for k = 0(1)20
and draw a rough graph of these values. Repeat in the case of 50 repetitions.
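These probabilities are binomial, P_{n,k}(x) = C(n, k) x^k (1 − x)^{n−k}, the same quantities as the p_{n,r} of Section 2. A minimal computation (function names are my own):

```python
from math import comb

def P(n, k, x):
    # probability of exactly k successes in n independent trials
    return comb(n, k) * x**k * (1 - x)**(n - k)

probs = [P(20, k, 0.6) for k in range(21)]
# the distribution is concentrated near k = nx = 12
print(max(range(21), key=lambda k: probs[k]))
```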
|x| ≤ 0.1
5.14. Find theoretically and practically estimates for the error incurred in
truncating the power series for the Bessel function J_0(x), specifically in
taking as an approximation
J_0(x) ≈ 1 − a_1(x/3)^2 + a_2(x/3)^4 − a_3(x/3)^6 + a_4(x/3)^8 − a_5(x/3)^10 + a_6(x/3)^12,
where
a_1 = 2.24999 97,  a_2 = 1.26562 08,  a_3 = 0.31638 66,
a_4 = 0.04444 79,  a_5 = 0.00394 44,  a_6 = 0.00021 00.
Discuss similarly the series
Σ_{n=0}^∞ x^2(1 + x^2)^{-n},
the sequence
s_n(x) = n^2 x e^{-nx}
and the truth of the relation lim ∫_0^1 s_n(x) dx = ∫_0^1 [lim s_n(x)] dx.
CHAPTER 6
SEQUENCE
Suppose that x_n → x approximately geometrically, i.e., that
x_{n+1} − x ≐ λ(x_n − x),  |λ| < 1.  (1)
Then
(x_{n+1} − x)/(x_n − x) ≐ λ,
and eliminating λ between two consecutive ratios and solving for x we obtain the Aitken estimate
x̂_n = x_{n+2} − (x_{n+2} − x_{n+1})^2/(x_{n+2} − 2x_{n+1} + x_n).
Consider the sequence x_n = 2 − 2^{-n}. In this case
x̂_0 = 1.75 − (1.75 − 1.5)^2/(1.75 − 2(1.5) + 1) = 1.75 + 0.25 = 2.
We may expect that the sequence {x̂_n} converges faster than {x_n}. This is
indeed the case. We have the following qualitative result: if
x_n − x = Aλ^n + o(λ^n),  |λ| < 1,
then
x̂_n − x = o(λ^n)  (3)
as n → ∞, so that
(x̂_n − x)/(x_n − x) → 0
and the new sequence converges faster than the original one.
If the original sequence does not have the approximate geometric
behavior, even if it is more rapidly convergent, the new sequence can have
slower convergence. As an example, consider the sequence of approximations to N^{-1} given by z_0 = 1, z_{n+1} = z_n(2 − Nz_n) which have quadratic convergence. The behavior of the differences of this sequence and of its Aitken
transform ẑ_n is given below in the case when N = ½.

n    z_n       Δz_n      Δ^2 z_n    ẑ_n
0    1.0000    0.5       −0.125     3.0000
1    1.5000    0.375     −0.2578    2.0455
2    1.8750    0.1172
3    1.9922
We note that it does not pay to go too far with this improvement. The
correction term −(x_{n+2} − x_{n+1})^2/(x_{n+2} − 2x_{n+1} + x_n) is the ratio of two quantities which tend to zero as n → ∞ and will in practice be poorly determined.
We have stated that we want to have cheap acceleration devices. The
Aitken process is such a one. The main effort involved is in the determination of the terms of the sequence - the computation of the transformed
sequence only requires a few operations.
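As a sketch (my own illustration, not the book's program), the transform costs only a few operations per term; for the exactly geometric sequence x_n = 2 − 2^{-n} of the text it recovers the limit 2 exactly:

```python
def aitken(x):
    # x-hat_n = x_{n+2} - (x_{n+2} - x_{n+1})^2 / (x_{n+2} - 2 x_{n+1} + x_n)
    return [x[n + 2] - (x[n + 2] - x[n + 1])**2 / (x[n + 2] - 2 * x[n + 1] + x[n])
            for n in range(len(x) - 2)]

x = [2 - 2.0**(-n) for n in range(6)]   # the text's example
print(aitken(x))                        # every entry equals 2
```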
Formally, with the shift operator E = 1 + Δ, the sum of an alternating series is
u_0 − u_1 + u_2 − ... = (1 + E)^{-1} u_0
  = (2 + Δ)^{-1} u_0
  = ½(1 + ½Δ)^{-1} u_0
  = ½[u_0 − ½Δu_0 + ¼Δ^2 u_0 − ...].
ln 2 = 1 − ½ + 1/3 − 1/4 + 1/5 − ... .
It is possible to apply the Euler transformation directly to this series,
but it is more convenient to apply it to the tail. We obtain
Taking the head to be 1 − ½ + ... − 1/8 and applying the transformation to the tail terms u_n = 1/(n + 9), the difference table is

n   u_n           Δu_n           Δ^2 u_n       Δ^3 u_n        Δ^4 u_n       Δ^5 u_n
0   0.111111111   −0.011111111   0.002020202   −0.000505051   0.000155402   −0.000055505
1   0.100000000   −0.009090909   0.001515151   −0.000349649
2   0.090909091   −0.007575758
3   0.083333333
giving
In 2 = 0.69314 7169,
which we compare with the true value
In 2 = 0.69314 7181.
It can be proved that the Euler transform of a convergent series always
converges (and to the same sum). This is an exercise in the manipulation of
binomial coefficients. [Problem 6.12].
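A sketch of the computation described above (my own program; the division into the head 1 − ½ + ... − 1/8 and the tail u_n = 1/(n + 9) is an assumption consistent with the figures in the table):

```python
import math

def euler_sum(u, terms):
    # Euler transform of the alternating series u0 - u1 + u2 - ... :
    #   sum over k of (-1)^k (Delta^k u0) / 2^(k+1)
    total, diffs = 0.0, list(u)
    for k in range(terms):
        total += (-1)**k * diffs[0] / 2**(k + 1)
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return total

head = sum((-1)**(n + 1) / n for n in range(1, 9))   # 1 - 1/2 + ... - 1/8
tail = [1.0 / (n + 9) for n in range(12)]            # 1/9 - 1/10 + ...
approx = head + euler_sum(tail, 8)
print(approx)   # close to ln 2 = 0.693147181
```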
We conclude our account of the Euler transform by showing that the
following conditions are sufficient to ensure that the transformed series
converges faster than the original one.
(a) u_n is a null sequence such that for all m, n ≥ 0
(−Δ)^m u_n ≥ 0
and
= ~u,,+l
>!uok n + 1
|Δ^{n+1} u_0| ≤ (u_0/2^{n+2})[1 + ½ + 1/2^2 + ...] = u_0/2^{n+1}.
Hence the terms of the transformed series are dominated by u_0 2^{-n-1}.
Thus if u_{n+1}/u_n → 1 then the Euler transform converges practically like
2^{-n}.
Chapter 6, Problems
6.1. If x_n − x = Aλ^n + o(λ^n) where |λ| < 1 show that, in the notation of the
text,
x̂_n − x = o(λ^n).
6.4. Apply the Aitken process twice to the sequence {θ_n}, checking your
results with those given below.
0.734347668
2.146907221
1.180755792
1.832664021
1.39566 2355
1.68771 8271
1.49279 9008
1.62280 9187
1.536116743
1.59391 7337
1.55538 1939
1.57317 3615
1.57078 8289
1.57079 6499
1.57079 6327
6.5. Write a program to apply the Euler transformation to a series, either when the terms are given by a formula, e.g.,
u_n = (ln n)/n,
or when the terms are stored as numbers. The program should also allow for
a choice of the division of the series into a head and tail.
Alternatively, thoroughly check the Euler subroutine available in your
system.
Apply this program to compute
Σ_{n=2}^∞ (−1)^n (ln n)/n  and  Σ_{n=2}^∞ (−1)^n (ln n)^3/n.
6.7. Show that, formally,
Σ_{n=0}^∞ u_n x^n = (1/(1 − x)) Σ_{n=0}^∞ (Δ^n u_0)(x/(1 − x))^n.
Then write down the Euler transform of 1 − 1/3 + 1/5 − ... and sum it.
6.8. Extend the results of Problem 6.7 to the case where there is a delay of r
in starting the Euler transform.
6.9. Evaluate the Euler transforms of the three series, each summed from
s = 0 to s = ∞.
Verify that each converges to the proper sum and state whether or not the
speed of convergence has been improved.
Show that
π/4 = arctan 1 = Σ_{n=0}^∞ (−1)^n/(2n + 1)
can be written as
½[v_0 − ½Δv_0 + ¼Δ^2 v_0 − ...].
6.12. Show that if s_n is the sum of the first n terms of a series (s_0 = 0) and if
S_n is the sum of the first n terms of the Euler transform of the series, then
s_n → s implies S_n → s.
E_1(x) = −γ − log x + Σ_{k=1}^∞ (−1)^{k−1} x^k/(k(k!))
6.15. Apply the Euler transform to determine the sums of the series
Σ_{n=0}^∞ (−1)^n/(3n + 1),  Σ_{n=0}^∞ (−1)^n/(4n + 1),  Σ_{n=0}^∞ (−1)^n/(5n + 1),  Σ_{n=0}^∞ (−1)^n/(6n + 1).
6.16. Discuss the behavior of the Aitken transform {ẑ_n} of the sequence {z_n}
defined by
z_0 = 1,  z_{n+1} = z_n(2 − Nz_n),  0 < N < 1.
CHAPTER 7
Asymptotic Series
1. A CLASSICAL EXAMPLE
Consider
f(x) = ∫_x^∞ t^{-1} e^{x-t} dt,  g(X) = ∫_x^X t^{-1} e^{x-t} dt.
Since the integrand is positive g(X) increases with X, and, from the
fundamental theorem on monotone functions, limx->oo g(X) will exist i.e., the
infinite integral f(x) will converge, if we can show that g(X) is bounded as
X → ∞. Clearly, for t ≥ x, t^{-1} ≤ x^{-1}, and so, integrating,
g(X) ≤ x^{-1} e^x ∫_x^X e^{-t} dt = x^{-1} e^x [e^{-x} − e^{-X}] ≤ x^{-1}.
Integrating by parts repeatedly we find
f(x) = 1/x − 1/x^2 + 2!/x^3 − 3!/x^4 + ... + (−1)^{n−1}(n − 1)!/x^n + r_n(x),
where
r_n(x) = (−1)^n n! ∫_x^∞ t^{-n-1} e^{x-t} dt.
This series is never convergent for its terms do not tend to zero. Nevertheless
it can be properly used to estimate f(x). To see this let us consider r_n(x): since t ≥ x in the range of integration,
|r_n(x)| ≤ n! x^{-n-1} ∫_x^∞ e^{x-t} dt = n! x^{-n-1}.
Our estimate for |r_n(x)| decreases as n increases from 1 to [x], the integral
part of x, and then increases to ∞. Thus for a fixed value of x, there is a limit
to the accuracy to which we can approximate f(x) by taking a number of
terms of the series - if this accuracy is good enough for our purposes, we
have a very convenient way of finding f(x) but if it is not, we have to
consider other means. Note that for fixed n, the estimate for r_n(x) can be
made arbitrarily small by increasing x. Thus this method is likely to be
successful for large x.
We examine the case when x = 15 numerically. We note that in our case
but not in general the error estimate is just the first term omitted. It is
convenient to write a program to print out consecutive terms as well as the
partial sums. We find
Writing s_n = u_1 + u_2 + ... + u_n with u_n = (−1)^{n−1}(n − 1)!/15^n:

n     u_n                s_n
1     0.06666 66667      0.06666 66667
2     −0.00444 44444     0.06222 22223
...
14    −0.00000 02133     0.06272 01779
15    +0.00000 01991     0.06272 03770
16    −0.00000 01991     0.06272 01779
17    +0.00000 02124     0.06272 03903
f(15)
= 0.06272 028,
so that the actual error is about 0.00000 008, i.e. about half the estimate.
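A sketch of the x = 15 experiment (my own program; the value f(15) = 0.06272028 is taken from the text):

```python
import math

x = 15.0
partials, s = [], 0.0
for k in range(1, 18):
    # u_k = (-1)^(k-1) (k-1)! / x^k
    s += (-1)**(k - 1) * math.factorial(k - 1) / x**k
    partials.append(s)

true_value = 0.06272028            # f(15), from the text
errors = [abs(p - true_value) for p in partials]
# the error decreases until n is near x = 15, then the series degrades
print(min(errors))
```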
We shall now indicate an alternative approach to this problem. We
change the variable in the infinite integral from t to T where t = x + T. then
dt = dT and the limits become T = 0, T = 00. We find
f(x) = ∫_0^∞ (e^{-τ}/(x + τ)) dτ.
Expanding (x + τ)^{-1} and integrating between 0 and ∞ we find, again,
f(x) = 1/x − 1/x^2 + ... + (−1)^{n−1}(n − 1)!/x^n + r_n(x),
which is the result already obtained and which establishes the asymptotic
expansion for f(x).
Let us emphasize the difference between convergent series and asymptotic series such as that discussed. In the first case, for any x, limn-->oo rn(x) =
0; in the second, for any n, limx-->oo rn(x) = O.
The formal definition of an asymptotic series is due to Poincare:
F(x) ~ A_0 + A_1 x^{-1} + A_2 x^{-2} + ...
as
x---+oo
if
lim over x → ∞ of [F(x) − (A_0 + ... + A_n x^{-n})] x^n = 0 for
n = 0,1,2, ....
For instance
∫_x^∞ t^{-1} e^{x-t} dt ~ 1/x − 1/x^2 + 2!/x^3 − 3!/x^4 + ...,
and, since e^{-x} x^n → 0 for every n, the function f(x) + e^{-x} has the same asymptotic series.
This means that the same asymptotic series can represent many functions.
It can be shown that asymptotic series can be manipulated fairly freely
e.g., it is clear that we can add and subtract asymptotic series. Formal
multiplication is generally legitimate and asymptotic series can usually be
integrated term by term. We shall not develop this theory but instead shall
discuss several more examples.
2. THE FRESNEL INTEGRALS
U(x) = ∫_x^∞ (cos t/√t) dt,  V(x) = ∫_x^∞ (sin t/√t) dt.
This is not the standard notation but is convenient in this section: see
Problem 7.7. It is convenient to handle these together by writing w =
U + iV. We shall actually discuss
w(a) = ∫_x^∞ e^{it} t^{-a} dt.
Integrating by parts repeatedly we find
w(a) = (i e^{ix}/x^a)[1 + a/(ix) + a(a + 1)/(ix)^2 + ... + a(a + 1)...(a + n − 1)/(ix)^n] + r_{n+1},
where
r_{n+1} = i^{-n-1} a(a + 1)...(a + n) w(a + n + 1).
Since
|w(a + n + 1)| ≤ ∫_x^∞ t^{-a-n-1} dt,
we have
|r_{n+1}| ≤ |u_{n+1}|,
the absolute value of the first term omitted.
In particular, for a = ½,
w(x) ~ (i e^{ix}/√x)[X − iY]
where
X = 1 − (1.3)/(2x)^2 + (1.3.5.7)/(2x)^4 − ...
and
Y = 1/(2x) − (1.3.5)/(2x)^3 + (1.3.5.7.9)/(2x)^5 − ... .
The error function and the complementary error function are defined
by
erf x = (2/√π) ∫_0^x e^{-t^2} dt,
erfc x = 1 − erf x = (2/√π) ∫_x^∞ e^{-t^2} dt.
We shall actually prove that |r_n| ≤ |t_{n+1}|, i.e., the absolute value of the error
committed by truncating the series Σ t_n at any stage is less than the absolute
value of the first term omitted.
To do this we change the variable in the integral in r_n from t to y where
t = √(y + x^2).
The limits of integration in y are 0 and
00.
Hence
so that certainly
x~oo.
'15
1 .
-7
=(15)! . 295:::;: 1. x 10
g(x) = ∫_0^∞ x e^{-y} (y + x^2)^{-1/2} dy;
we now use the binomial expansion of (y + x^2)^{-1/2} and integrate term by term. See Problem 7.9.
4.
A famous asymptotic result is Stirling's formula
n! ~ (n/e)^n √(2πn),
and we shall establish this in Chapter 9.
5.
80
Chapter 7
i.e.,
y'=y-x- I
o.
In order to get a "power" series for f(x) we need the following lemma.
Lemma.
γ = ∫_0^1 (1 − e^{-t}) t^{-1} dt − ∫_1^∞ e^{-t} t^{-1} dt.
Proof. Both integrals on the right hand side are convergent. We use the
definition
γ = lim over n → ∞ of (1 + ½ + ... + 1/n − log n)
together with
lim over n → ∞ of (1 − x/n)^n = e^{-x}
and the identity
∫_0^1 ((1 − (1 − t)^n)/t) dt = ∫_0^1 ((1 − τ^n)/(1 − τ)) dτ = 1 + ½ + ... + 1/n.
Hence
γ = lim over n → ∞ of {∫_0^n ((1 − (1 − t/n)^n)/t) dt − log n},
where all the integrals are meaningful. Our lemma shows that the first two
integrals give -y, the third gives -log x and the last can be integrated
term by term after expanding the exponential. We get
E_1(x) = −γ − log x + Σ_{k=1}^∞ (−1)^{k−1} x^k/(k(k!)).
Chapter 7, Problems
7.1. Evaluate f(x) = ∫_x^∞ t^{-1} e^{x-t} dt for x = 5(5)20.
7.2. Repeat Problem 7.1 for the Fresnel integrals, i.e., evaluate U(x), V(x)
for x = 5(5)20.
7.3. Using the asymptotic series developed for g(x) = .firxe X2 erfc x evaluate
g(x) as accurately as you can for x = 3(1)6, estimating the errors in your
result.
Obtain the correct values of erfc x and g(x) from tables and give the
actual errors.
7.4. (E. T. Goodwin-J. Staton) Obtain an asymptotic expansion for
F(x) = ∫_0^∞ (e^{-u^2}/(u + x)) du,
as x → ∞. How many terms of the series are needed to give F(x) correct to
4D for x ≥ 10? Compute, in particular, F(10).
7.5. Show that the function F(x) of Problem 7.4 satisfies the differential
equation
y' + 2xy = √π − x^{-1},  x > 0.
7.6. Obtain the following representation for the function F(x) of Problem
7.4, in terms of the auxiliary functions e^{-x^2} and log x (which are well
tabulated), and power series:
F(x) = e^{-x^2}[√π Σ_{n=0}^∞ x^{2n+1}/(n!(2n + 1)) − Σ_{n=1}^∞ x^{2n}/(n!(2n)) − log x − ½γ].
7.7. (L. Fox-D. H. Sadler) Discuss computationally convenient representations of the function
f(x) = ∫_0^∞ (sin t/√(x + t)) dt.
7.9. Justify the third method suggested for the determination of the asymptotic expansion for
g(x) = 2x e^{x^2} ∫_x^∞ e^{-t^2} dt.
CHAPTER 8
Interpolation
1.
LAGRANGIAN INTERPOLATION
We assert that f(x) is the unique polynomial of degree 1 agreeing with the
data at xo, Xl' Compare Fig. 8.1, where p is taken between xo, Xl only for
convenience.
This is the simplest example of polynomial interpolation - interpolation
by rational functions (the ratio of two polynomials) or by splines (functions
which are "piece-wise" polynomials) or by trigonometric or exponential
polynomials is often considered, but cannot be discussed here. [See, however, Problem 8.17]
In order to discuss polynomial interpolation in general we need the
"Fundamental Theorem of Algebra" which is originally due to Gauss.
Theorem 1. Every (polynomial) equation
a_0 z^n + a_1 z^{n-1} + ... + a_n = 0,  n ≥ 1,  a_0 ≠ 0,
has a root.
Many proofs of this are available. We note that in order that it be true
we must admit complex roots even though we restrict the coefficients to be
real- the equation Z2 + 1 = 0 shows this. However it is remarkable that if we
admit complex numbers as solutions, then the result is true even if we allow
complex numbers as coefficients.
Fig. 8.1
If f(a) = 0 then
f(z) = f(z) − f(a) = (z − a)q(z)
for a polynomial q of degree n − 1, because each power z^k − a^k is divisible by z − a.
It follows from this remark (sometimes called the Remainder Theorem) and
the Fundamental Theorem of Algebra that we can factorize any f(z) in the
form
f(z) = a_0(z − α_1)(z − α_2)...(z − α_n),  (2)
where the ai are not necessarily distinct. From this fact we can conclude that
if a polynomial vanishes for more than n distinct values of x it vanishes
identically. Suppose that f(z) vanishes for z = α_1, ..., α_n and for z = α_0. The
relation (2) must hold. If we put z = α_0 in this we get
0 = a_0(α_0 − α_1)(α_0 − α_2)...(α_0 − α_n).
None of the factors α_0 − α_i can vanish. Hence a_0 = 0.
Hence we have a polynomial of degree n -1,
f_1(z) = a_1 z^{n-1} + ... + a_n
which certainly vanishes for more than n - 1 distinct values of z. The same
argument shows that a l = O.
Proceeding we see that f must be identically zero.
It is a corollary to this that we need to establish uniqueness of
polynomial interpolation. In fact, if two polynomials of degree n at most
agree for more than n distinct values of the variable, they must coincideindeed, their difference, of degree n at most, vanishes for more than n
distinct values of the variable.
l_i(x) = Π over j = 0, ..., n, j ≠ i, of (x − x_j)/(x_i − x_j).  (3)
Hence
L(f, x) = Σ_{i=0}^{n} f_i l_i(x).  (4)
Fig. 8.2
We give, without comment, two examples which show how the scheme
can be carried out and how labor can be saved by dropping common initial
figures. Many extensions of the method have been given by Aitken and
Neville.
Given /(1) = 1, /(2) = 125, /(3) = 729, /(4) = 2197, find /(2.5).
x:  1 = a      2 = b      3 = c      4 = d
f:  1 = A      125 = B    729 = C    2197 = D
               187 = B1
               547 = C1   367 = C2
               1099 = D1  415 = D2   343 = P.
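The scheme can be carried out mechanically; the following Neville-style sketch (my own code, one of several equivalent arrangements of the Aitken process) reproduces the value P = 343:

```python
def neville(xs, fs, p):
    # successive linear cross-interpolation (Aitken-Neville scheme):
    # q[i] at level L holds the interpolant on nodes x_{i-L}, ..., x_i at p
    q = list(fs)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - 1, level - 1, -1):
            q[i] = ((p - xs[i - level]) * q[i] - (p - xs[i]) * q[i - 1]) \
                   / (xs[i] - xs[i - level])
    return q[-1]

# the text's example: f(1)=1, f(2)=125, f(3)=729, f(4)=2197, find f(2.5)
print(neville([1, 2, 3, 4], [1, 125, 729, 2197], 2.5))   # 343.0
```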
o
1
2
3
47.434165
539437
644517
749346
0.584954
682
517
837
60
-1.4321
-0.4311
+0.5689
24 + 1.5689
x_1, ..., x_n. Consider
f^{1)}(x) = (1/(x − x_0)) det [f(x_0)  x_0 − p; f(x)  x − p],
x-xo
Xo
f(xo)
Xl
f(Xl)
X2
f(x 2) fl)(X2)
x"
fl)(Xl)
f 2)(X2)
3. INVERSE INTERPOLATION
We have noted that the Aitken process does not require the Xi to be
equally spaced. It can be used to solve equations: for to solve
f(x)=O
is just the evaluation of
f-l(O).
f       x
−342    0
−218    1    2.7581
 386    2    0.9396    2.1018
1854    3    0.4672    2.5171    1.9926
This is a very bad result. Indeed f(x) = (4x + 1)^3 − 343 and the zero is 1.5, not
1.9926.
Inverse interpolation - except linear inverse interpolation, which is the
same as direct - is tricky and great care is required.
4. ERRORS IN INTERPOLATION
Suppose f'' exists in [a, b] and write b = a + h. We have, for the error in linear interpolation for f(x) between f(a) and f(b),
f(a + ph) − [f(a) + p{f(b) − f(a)}] = ½h^2{p^2 f''(c) − p f''(d)},  (7)
where c = c(p), d = d(p) and a ≤ c ≤ b, a ≤ d ≤ b.
and choose a p_0, 0 < p_0 < 1, and then choose K so that F(p_0) = 0. Then since
F(p) also vanishes for p = 0, p = 1 we conclude that F'(p) vanishes twice in
[0, 1], and hence that F''(p) vanishes once there. Since
F''(p) = h^2 f''(a + ph) − 2K,
it follows that
K = ½h^2 f''(a + θh),  θ = θ(p) ∈ J.

5. HERMITE INTERPOLATION
We seek a polynomial H of degree 2n + 1 such that
H(x_i) = f_i,  H'(x_i) = f_i',  i = 0, 1, ..., n.
H(x) is such an interpolant, with H(x_i) = f(x_i) and H'(x_i) = f'(x_i). If
f^{(2n+2)}(x) exists then
f(x) − H(x) = (f^{(2n+2)}(ξ)/(2n + 2)!) Π_{i=0}^{n} (x − x_i)^2,  (11)
where ~ is in the interval including xo, Xl, ... , Xn, x. This can be done by
repeated applications of Rolle's Theorem to
(12)
Σ_{r=0}^{n} f^{(r)}(x_0)(x − x_0)^r / r!.
We note that the process of counting the coefficients which works in the
Lagrangian, Hermitian and Taylor case does not always work. For instance
it is not possible to find a quadratic q(x) for which, say, q(1) = 0, q'(O) = 1.
The so-called Hermite-Birkhoff problem arises in this area.
6.
J_3(x)
10.0   0.05858
10.2   0.10400
10.4   0.14497
10.6   0.17992
10.8   0.20768
Linear interpolation gives 0.08129 at 10.1 and 0.19380 at 10.7.
Values to 8D, from the British Association Tables, Vol. 10, p. 40, are
10.1   0.08167273
10.7   0.19475993
Using 2-, 3-, 4-, and 5-point interpolation for x = 10.5 gives
0.16244, 0.16320, 0.16327, 0.16328
Let l(x) = Π_{j=0}^{n} (x − x_j). Show that
l_i(x) = l(x)/[(x − x_i) l'(x_i)].
8.3. Write a program which when given a, b, f(a), feb), p as input gives as
output the linear interpolant of f at p.
8.4. Write a program to implement the Aitken algorithm, say for up to 11
points. Arrange to have it give as output the appropriate interpolant only, or
the whole triangular array, according to a choice of an entry or a parameter.
Alternatively, thoroughly check the Aitken subroutine available in your
system.
8.5. No specific problems on direct and inverse interpolation will be set.
The reader should set up his own problems and at the same time familiarize
himself with the special functions by using the NBS Handbook as follows:
Take a function tabulated, say at interval 0.001. Use values at interval
0.05 as data and interpolate using various values of n and varying positions
of x relative to x_0, x_1, ..., x_n. Compare the results obtained with the
tabulated value.
Take values of x near where a function, e.g., Jr(x), changes sign and
find its zero by inverse interpolation. Check the result from the tables, if the
zeros are tabulated.
8.6. Consider the interpolation of f(x) = x^4 by a cubic, L_3(x), the chosen
abscissas being −1, 0, 1, 2. In particular discuss the behavior of the error
E(x) = x^4 − L_3(x) in the range [−1, 2].
8.7. Write down the fundamental Lagrangian polynomials li(x) in the case
of four-point interpolation at the nodes -1, 0, 1, 2. Tabulate these for
x = -1(0.1)2 and show how to use these tables to facilitate interpolation.
8.8. What is a bound for the error incurred in linear interpolation in a table
of sin x, at interval 0.2?
8.9. Evaluate
L xl~ (0)
n
Sj =
for j
0, 1, 2, ... ,n, n + 1.
i=O
n = 3,  x_0 = 1,  x_1 = 2,  x_2 = 3,  x_3 = 4.
x_k^{(n)} = −1 + 2k/n,  k = 0, 1, ..., n.
Evaluate Λ_n for n = 1(1)12 using the computer. Evaluate Λ_4(0), Λ_4(1).
IL = max (\f(O)\,
is least.
and
f(x) − H(x) = (f^{(2n+2)}(ξ)/(2n + 2)!) Π (x − x_i)^2
V_n = V(x_0, x_1, ..., x_n) = det
| 1   x_0   x_0^2   ...   x_0^n |
| 1   x_1   x_1^2   ...   x_1^n |
| ...                           |
| 1   x_n   x_n^2   ...   x_n^n |   ≠ 0
x    f(x)
0    −855 17443
1    −513 73755
2    −172 86901
3    +167 39929
4    +507 03555
5    +846 00809
8.15. (a) If f(x) is a real function of the real variable x, 0 ≤ x ≤ 1, and
f(0) = 0, f(1) = 1, what can you say about f(½)?
(b) If, in addition, f is positive and convex, what can you say about
f(½)?
(c) If, in addition, f is a quadratic in x, what can you say about f(½)?
(d) If, alternatively, f'(x) exists in [0, 1] and is bounded by 1 in
absolute value, what can you say about f(½)?
(e) If, alternatively, all the derivatives of f at 0 exist, and are bounded
by 1 in absolute value and if the Maclaurin series for f converges to f in
[0, 1], what can you say about f(½)?
8.16. Discuss the existence of Hermite interpolants, e.g., in the two-point
case, along the lines of Problem 8.13.
8.17. Establish the existence of a (cubic) spline in the two panel case.
Specifically, show that there is an f_0' such that, no matter what f_0, f_{±1}, f'_{±1}
are, the Hermite interpolants in [x_{−1}, x_0] and [x_0, x_1] fit smoothly together at
x_0, i.e., that the second derivatives there coincide.
Work out the case when
x_{±1} = ±1,  x_0 = 0,  f_0 = 1,  f_{±1} = 0,  f'_{±1} = 0.
Generalize.
8.18. Show how to determine the turning point x̄ and the extreme value
f̄ = f(x̄) of a function f(x) given its values at three points near x̄, by
assuming that f(x) = a + bx + cx^2.
8.19. Show that if f_r = f(r), r = 0, 1, 2, ..., n and if Δ has its usual meaning
then
CHAPTER 9
Quadrature
We are concerned with the approximate evaluation of integrals
I = ∫_a^b f(x) dx  (1)
and, with a weight function w,
∫_a^b f(x) w(x) dx.  (2)

1. TRAPEZOIDAL QUADRATURE
The simplest approximation to (1) is
Q = (b − a)·[½f(a) + ½f(b)].
If |f''(x)| ≤ M_2 in [a, b], using the result about the error in linear interpolation, we can conclude that
I − Q = −(1/12)(b − a)^3 f''(c).  (4)
More generally we can integrate the interpolation error
(f^{(n+1)}(c(x))/(n + 1)!) Π_{i=0}^{n} (x − x_i)  (6)
to get
|I − Q| ≤ (max |f^{(n+1)}(x)|/(n + 1)!) ∫_a^b Π_{i=0}^{n} |x − x_i| dx.  (7)
REMARK: Note first that (7) implies that any (n + 1)-point quadrature of a
polynomial of degree n is exact. Note also that this estimate is a very crude
one: we cannot apply the Mean Value Theorem of the Integral Calculus
∫ φ(x)ψ(x) dx = φ(c) ∫ ψ(x) dx
unless we know that ψ is of one sign in [a, b]. [Contrast this with Problem
9.12.]
For instance, for the nodes −1, 0, 1, 2 we obtain the "three-eighths rule"
Q = (3/8)[f(−1) + 3f(0) + 3f(1) + f(2)],
with an error estimate
−(3/80) f''''(c),  −1 ≤ c ≤ 2,
while for the nodes x_0 = 1, x_1 = 0, x_2 = −1 we obtain Simpson's rule
Q = (1/3)[f(−1) + 4f(0) + f(1)] = ∫_{−1}^{1} [Q_2(x) + cx(x^2 − 1)] dx,
with an error estimate of the form M_4/90.
The numerical factors are −1/6480 and −1/2880. To discuss cost-effectiveness we must remember that we have to make an extra function
evaluation in the first case; this discussion is included in Problem 9.20.
2. RICHARDSON EXTRAPOLATION - ROMBERG QUADRATURE
If this sort of error is still not satisfactory what can we do? There is
available a very elegant technique due to W. Romberg, which is based on
ideas popularized by L. F. Richardson. We shall begin by discussing the
Richardson device, which can also be applied with profit in many other areas
of numerical analysis. [See Problem 9.22 and Chapter 10, p. 126.]
Suppose we are trying to evaluate a quantity φ, or a function φ(x),
which is the solution to a continuous problem e.g., the fundamental frequency of a vibrating string (length l, density ρ, under tension T) or the
circumference of a circle of radius unity. We approximate the continuous
problem by a discrete one, characterized by a mesh size h. In the first case
we replace the continuous string by beads of mass ½m, m, ..., m, ½m, where
m = ρl/(n + 1), placed at distance h = l/(n + 1) apart, the beads of mass ½m being at
the endpoints. In the second we replace the circle by a polygon with n = h^{-1}
sides. If this discretization is done reasonably we often find that if φ(x, h) is
the solution to the discrete problem then
φ(x, h) = φ(x) + h^2 φ_2(x) + R_2  (8)
for some φ_2(x), where R_2 is small. If we neglect R_2 and observe or calculate
φ(x, h) for two values of h, h_1, h_2 say, we have, approximately,
φ(x) ≐ [h_1^2 φ(x, h_2) − h_2^2 φ(x, h_1)]/(h_1^2 − h_2^2).
Fig. 9.1
Let us carry this out in the two cases mentioned. In each case we shall
be trying to find a constant, not a function.
We know from the theory of vibrations that the frequency in the
discrete case, with n interior beads, is u(h) where
u(h) = (2/h)√(T/ρ) sin (πh/2l),  u = (π/l)√(T/ρ).
The effect of extrapolation can be best seen if we consider the following
table:
l h^{-1} = (n + 1):   2        ...    ∞
u(h)/u:               0.9003   ...    1.000.
We know that the perimeter of a square inscribed in the unit circle is
4J2 while that of a regular hexagon is 6. Extrapolating we get as an estimate
for 7T
((1/16)(6) − (1/36)(4√2)) / (2((1/16) − (1/36))) = (108 − 32√2)/20 = 3.1373... .
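A sketch of this extrapolation (my own arithmetic layout, with the known half-perimeters of the inscribed square and hexagon):

```python
from math import sqrt, pi

# half-perimeters of the inscribed square and regular hexagon (exact values)
p4, p6 = 2 * sqrt(2), 3.0
h4, h6 = 1 / 4, 1 / 6                  # mesh size h = 1/n
# eliminate the h^2 error term between the two approximations
extrapolated = (h4**2 * p6 - h6**2 * p4) / (h4**2 - h6**2)
print(extrapolated)                    # 3.13726, cf. pi = 3.14159
```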
In each of these cases the extrapolation is legitimate because the
existence of a relation of the form
φ(h) = φ + h^2 φ_2 + R_2
We choose this successive halving of the panels so that we can always make
use of functional values already computed. [A more detailed investigation
shows that, taking more things into consideration, this scheme is not the
most economical one.] We now refer back to the result (6) and see that
h 2 -extrapolation is indicated. This means we compute a new column T)k) by
'T't'k) _
Ii -
4n + )-n
k
0
k)
0
and we can expect this to be more rapidly convergent than the first column.
This is indeed the case for we have really obtained the values given by
Simpson's Rule for which
k )-
64T2k + 1) - T2k )
63
0.694444444
0.693253967   0.693174603
0.693154532   0.693147901   0.693147479
0.693147652   0.693147193   0.693147182   0.693147181
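A minimal Romberg sketch (my own implementation of the scheme described above), applied to ∫_1^2 dx/x = ln 2:

```python
import math

def romberg(f, a, b, steps=5):
    # first column: trapezoidal sums with 1, 2, 4, ... panels;
    # later columns: T = (4^k T_finer - T_coarser) / (4^k - 1)
    h, n = b - a, 1
    trap = h * (f(a) + f(b)) / 2
    T = [[trap]]
    for i in range(1, steps):
        h /= 2
        n *= 2
        # refine the trapezoidal sum using only the new midpoints
        trap = trap / 2 + h * sum(f(a + (2 * j - 1) * h) for j in range(1, n // 2 + 1))
        row = [trap]
        for k in range(1, i + 1):
            row.append((4**k * row[k - 1] - T[i - 1][k - 1]) / (4**k - 1))
        T.append(row)
    return T

table = romberg(lambda x: 1.0 / x, 1.0, 2.0)
print(table[-1][-1])    # converges to ln 2 = 0.693147181
```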
3. GAUSSIAN QUADRATURE
We can orthogonalize the sequence
1, x, x^2, ...
with respect to the inner product
(f, g) = ∫_a^b f(x)g(x)w(x) dx.
[We must assume that all the moments μ_n = ∫_a^b x^n w(x) dx are finite.] That is,
we can find a sequence of polynomials π_n(x) of exact degree n, such that
∫_a^b π_n(x)π_m(x)w(x) dx = 0  if  m ≠ n.
In the case
a = −1,  b = 1,  w(x) = 1
the π_n are the Legendre polynomials; in the case
a = 0,  b = ∞,  w(x) = e^{-x}
they are the Laguerre polynomials. The polynomial π_n has n distinct zeros x_r,
r = 1, 2, ..., n, in the interval. The interpolatory quadrature based on nodes x_i is
∫_a^b f(x)w(x) dx ≐ Q = Σ_i f(x_i) W_i,  W_i = ∫_a^b l_i(x)w(x) dx.
Consider now weighted quadratures with the nodes chosen to be the zeros
of the polynomial π_n(x) orthogonal with respect to w(x) in [a, b]. Let f be a
polynomial of degree 2n − 1 at most and write
f(x) = q(x)π_n(x) + r(x),  (10)
where q and r are polynomials of degree n − 1 at most. Then
∫_a^b f(x)w(x) dx = ∫_a^b q(x)π_n(x)w(x) dx + ∫_a^b r(x)w(x) dx.
By orthogonality, the first of the two integrals on the right is zero and so
I = ∫_a^b r(x)w(x) dx = Σ r(x_i) W_i,
the quadrature being exact for r, which has degree n − 1 at most.
Since π_n(x_i) = 0 at each node x_i,
f(x_i) = r(x_i).
Hence
I = Σ f(x_i) W_i.
An error estimate of the form
|I − Q| ≤ c_n M_{2n}
could be established for integrands in C^{2n}[a, b], where c_n is a constant
depending on the weight function. See Problem 9.12.
Tables of the x_i, W_i and the c_n are available in all the most important
cases. See e.g., the NBS Handbook, Chapters 22, 25.
4.
For the Chebyshev weight the coefficients can be found explicitly. With
x_m = cos θ_m, θ_m = (2m − 1)π/(2n), we have
A_m = (1/T_n'(x_m)) ∫_{−1}^{+1} (T_n(x)/(x − x_m)) dx/√(1 − x^2)
    = (1/T_n'(x_m)) ∫_0^π (cos nθ/(cos θ − cos θ_m)) dθ,
and we shall show that
A_m = π/n.
Actually T_n'(x) = −sin (n arccos x) × n × (−1/√(1 − x^2)) giving
T_n'(x_m) = (−1)^{m−1} n/sin θ_m.
We evaluate I_m = ∫_0^π (cos nθ/(cos θ − cos θ_m)) dθ directly. We
Method 1
We first notice that the integral I_m is "improper" for the integrand
becomes infinite for θ = θ_m. We must interpret I_m as a "Cauchy Principal
Value Integral":
I_m = lim over s → 0+ of [∫_0^{θ_m − s} ... dθ + ∫_{θ_m + s}^{π} ... dθ].
The theory of Principal Value Integrals is important but rather delicate. The
prototype is, where 0 < a < 1,
CPV ∫_0^1 dx/(x − a) = lim over ε → 0 of [∫_0^{a−ε} dx/(x − a) + ∫_{a+ε}^1 dx/(x − a)].
In this sense
CPV ∫_0^π dθ/(cos θ − cos θ_m) = 0,
and hence
∫_0^π (cos θ/(cos θ − cos θ_m)) dθ = ∫_0^π [1 + cos θ_m/(cos θ − cos θ_m)] dθ = π.
Proceeding in this way we find generally
I_m = CPV ∫_0^π (cos nθ/(cos θ − cos θ_m)) dθ = π sin nθ_m/sin θ_m.
Thus we have
A_m = (π sin nθ_m/sin θ_m)/T_n'(x_m) = π/n,
as announced.
Method 2
The integrand can be factorized in the form
cos nθ/(cos θ − cos θ_m) = 2^{n−1} Π over r = 0, ..., n − 1, r ≠ m, of [cos θ − cos θ_r]
- this because T_n(x) = 2^{n−1} Π_{r=0}^{n−1} (x − cos θ_r). We can reconstitute this product as a sum of cosines of multiples of θ, say,
cos nθ/(cos θ − cos θ_m) = a_0 + a_1 cos θ + ... + a_{n−1} cos (n − 1)θ.  (12)
Integrating (12) over [0, π] gives I_m = πa_0, so it remains to find a_0. With the usual notation
(cos nθ − cos nθ_m)/(cos θ − cos θ_m) = (T_n(x) − T_n(x_m))/(x − x_m)
(we have inserted the zero term cos nθ_m in the numerator). Summing (12) over the angles θ_m, θ_m + 2π/n, ..., θ_m + (n − 1)2π/n picks out na_0,
since the coefficients of all the other a's vanish - the formula for the sum of
the cosines of angles in an arithmetic progression gives
cos rθ_m + cos r(θ_m + 2π/n) + ... + cos r(θ_m + (n − 1)2π/n)
  = cos (rθ_m + πr(n − 1)/n) sin rπ / sin (rπ/n) = 0.
For another evaluation of all the coefficients a_i, see Problem 9.24.
Method 3
We begin with a special case of the Christoffel-Darboux formula.
Lemma 12. If x ≠ y then
1 + 2 Σ_{r=1}^{n−1} cos rx cos ry
  = ½[sin (n − ½)(x + y)/sin ½(x + y) + sin (n − ½)(x − y)/sin ½(x − y)].
Proof. Write
2 cos rx cos ry = cos r(x + y) + cos r(x − y)
and use the formula
1 + 2 Σ_{r=1}^{n−1} cos rθ = sin (n − ½)θ/sin ½θ.
Fig. 9.2
A = ∫_0^1 dt/√(1 − t^4),  B = ∫_0^1 t^2 dt/√(1 − t^4).
We evaluate
∫_{−1}^{1} dt/√(1 − t^4) = ∫_{−1}^{1} dt/(√(1 + t^2)·√(1 − t^2))
by Gauss-Chebyshev quadrature, with nodes
x_r = cos ((2r − 1)π/2n).
n    Q_n                             R_n
1    π = 3.14159...
2    √(2/3) π = 2.56510...           ½√(2/3) π = 1.28255...
3    (π/3){1 + 4/√7} = 2.63041...    π/√7 = 1.18741...
...
∞    2A = 2.62205755                 2B = 1.19814 02347
4AB = π.
5. STIRLING'S FORMULA
n! ~ (n/e)^n √(2πn).  (13)
The basic idea is that log n! = Σ_{r=1}^{n} log r; we approximate this first by
∫_1^n log t dt and then approximate the integral by trapezoidal sums.
Since (x ln x − x)' = ln x we have
∫_1^n ln x dx = n ln n − n + 1.
We now write
∫_1^n = ∫_1^2 + ∫_2^3 + ... + ∫_{n−1}^n  (14)
and approximate each ∫_r^{r+1} ln x dx by the trapezoidal rule, with error e_r.
Hence we have
n ln n − n + 1 = ½[(ln 1 + ln 2) + (ln 2 + ln 3) + ... + (ln (n − 1) + ln n)] + Σ_{r=1}^{n−1} e_r,
i.e.,
(n + ½) ln n − n = ln (n!) + Σ_{r=1}^{n−1} e_r − 1
or
ln (n!) = (n + ½) ln n − n + 1 − Σ_{r=1}^{n−1} e_r,  (15)
where
σ = 1 − Σ_{r=1}^∞ e_r  and  ρ_n = Σ_{r=n}^∞ e_r,
so that
ln (n!) = (n + ½) ln n − n + σ + ρ_n.  (16)
We now have two problems, the first to estimate ρ_n and the second to
evaluate σ. It is clear that
A Σ_{r=n}^∞ (r + 1)^{-2} ≤ ρ_n ≤ A Σ_{r=n}^∞ r^{-2}
for a suitable constant A, and hence
1/(12(n + 1)) < ρ_n < 1/(12(n − 1)),  n ≥ 2.
To evaluate σ we use the Wallis formula
2^{4n}(n!)^4/(((2n)!)^2(2n + 1)) → ½π.  (17)
Take logarithms in (17) and use (16) twice (once as is, and once with n
replaced by 2n) to get
4n ln 2 + 4(n + ½) ln n − 4n + 4σ + 4ρ_n
− 2[(2n + ½) ln 2n − 2n + σ + ρ_{2n}] − ln (2n + 1) → ln ½π.
Simplifying and noting that
ln (2n + 1) − ln 2 − ln n = ln (1 + (2n)^{-1}) → 0,
we get, remembering that σ is a constant,
σ = ½ ln 2π.
Hence we have
ln n! = (n + ½) ln n − n + ln √(2π) + ρ_n,
which gives (13).
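The formula is easily checked numerically (a sketch; the ratio n!/[(n/e)^n √(2πn)] behaves like 1 + 1/(12n)):

```python
import math

def stirling(n):
    # Stirling's approximation (n/e)^n sqrt(2 pi n)
    return (n / math.e)**n * math.sqrt(2 * math.pi * n)

for n in (5, 10, 20):
    print(n, math.factorial(n) / stirling(n))   # ratio tends to 1 from above
```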
Chapter 9, Problems
9.1. Find the coefficients of the Lagrangian quadrature on the interval
( -1,2), based on the nodes -1, 0, 1, 2. Find an error estimate.
9.2. Find the coefficients of the Lagrangian quadrature, on the interval
( - 2, 2), based on the nodes 0, 1. Find an error estimate.
9.3. Derive the error estimate for the n-panel trapezoidal formula
I − Q = −(1/12)(b − a)^3 f''(c)/n^2,  a ≤ c ≤ b.
9.4. Show that the error E in the Simpson quadrature of ∫_{−1}^{1} f(x) dx satisfies
|E| ≤ (1/90) M_4.
Extend this to the many-panel case.
9.5. Estimate the difference between the Milne quadrature (Problem 9.2)
and the Simpson quadrature (Problem 9.4.)
9.6. Prove that the second column of the Romberg array gives the Simpson
quadratures,
9.7. Show that the perimeter of a regular n-gon inscribed in the unit circle
is
2n sin (π/n)
and that the ratio of this to the circumference of the circle is
1 − π^2/(6n^2) + O(n^{-4}).
9.8. Write a program for carrying out a Romberg quadrature for ∫_a^b f(x) dx,
where f(x) is specified by a subroutine. Allow for up to 10 steps and
arrange for printing out the whole Romberg triangle. Alternatively,
familiarize yourself with the Romberg subroutine available to you, and
check it thoroughly.
9.9. Complete the table of Q_n, R_n.
9.10. Prove that
∫_0^x dt/√(1 − t^4) = Σ_{n=0}^∞ ((2n)!/(4^n (n!)^2(4n + 1))) x^{4n+1},  |x| < 1,
and that
A = ∫_0^1 dx/√(1 − x^4) = Σ_{n=0}^∞ (2n)!/(4^n (n!)^2(4n + 1)).
By writing (1 − x^4)^{-1/2} = (1 − x^2)^{-1/2}(1 + x^2)^{-1/2} and using the fact that
∫_0^1 t^{2n}(1 − t^2)^{-1/2} dt = (π/2)(2n)!/(4^n (n!)^2),
h = 0.05:     0.2966574757
h = 0.0125:   0.2966575010
h = 0.005:    0.2966575011.
I_1 = ∫ dx/J_0(x),   I_2 = ∫ x dx/J_1(x),
I_3 = ∫ J_1(x) dx/J_0(x),   I_4 = ∫ {xJ_0(x)/J_1(x)} dx,
I_5 = ∫ J_2(x) dx/J_1(x),
2^{4n}(n!)^4/(((2n)!)^2(2n + 1)) → π/2
and deduce that
∫_0^1 (dt/√(1 − t^4)) × ∫_0^1 (t^2 dt/√(1 − t^4)) = π/4.
9.18. Check the expression derived for the error in the "3/8" Rule (Problem
9.1) in the case of a function f(x) which is such that f ∈ C^4[a, b] by
evaluating the quadrature error in the case of ∫ x^4 dx.
Suppose the interval [a, b] is divided into N equal panels and the "3/8
rule" is applied to each. Give an estimate for the total error in the general
case. How many panels are required if the total error is to be less than
ε(b − a)^5? How many times is it necessary to evaluate the integrand?
S:
r
a
i = 1, 2, ... , n.
and the
f(x) dx, where
~-Rule
J~
(4)
If(x)1 ::51,
9.21. Draw a rough graph of I₀(x) in the range 0 ≤ x ≤ 5. Using a library
subroutine (which you should check (and say how)), or your own subroutine,
for I₀(x), evaluate
f(x) = ∫₀ˣ I₀(t) dt
for x = 1.2(0.2)3.
9.22. Experiment with the Richardson idea in connection with the evaluation
of Euler's constant γ = lim γₙ, where γₙ = 1 + 1/2 + ... + (1/n) - log n, so that
γ = 0.57721566..., along the following lines.
From Problem 2.10, in the first place, we observe that
γₙ = γ + O(n⁻¹).
Evaluating γₙ for two values of n and tentatively replacing the O(n⁻¹) by
An⁻¹ enables an estimate of γ to be obtained.
From the final result of Problem 2.10 we observe that
γₙ = γ + (1/2)n⁻¹ + O(n⁻²).
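The extrapolation suggested above can be tried directly (a sketch; the reference value of γ and the choice n = 100 are mine):

```python
import math

def gamma_n(n):
    # partial sum of the harmonic series minus log n
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

GAMMA = 0.5772156649015329          # Euler's constant, for comparison
n = 100
g1, g2 = gamma_n(n), gamma_n(2 * n)
# tentatively replace the O(1/n) term by A/n; then gamma ~ 2*g(2n) - g(n)
extrap = 2 * g2 - g1
print(g1 - GAMMA, extrap - GAMMA)
```

The raw error is about 1/(2n) = 0.005, while the extrapolated error drops to O(n⁻²), consistent with γₙ = γ + (1/2)n⁻¹ + O(n⁻²).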
lOx
105 f(x)
37500
-41206
33794
-23300
23200
+20794
7294
10
+100000
-11300
11
+224294
-28906
12
+404700
-40800
lOx
= i.J;.
CHAPTER 10
DIFFERENCE EQUATIONS
(1)
Uo given
a;eO,
is manifestly
(2)
and as
(u_{n+2} - αu_{n+1}) = β(u_{n+1} - αu_n),
so that
u_{n+1} - αu_n = (u₁ - αu₀)βⁿ,
u_{n+1} - βu_n = (u₁ - βu₀)αⁿ.
J_n(x) = (1/π) ∫₀^π cos (x sin θ - nθ) dθ.
This function satisfies
J_{n+1}(x) = 2nx⁻¹J_n(x) - J_{n-1}(x).
J₂ = 0.11490 34848
J₃ = 0.01956 33535
J₄ = 0.00247 66362
J₅ = 0.00024 97381
J₆ = 0.00002 07248
J₇ = -0.00000 10385
It is clear that this seemingly attractive approach is completely unsuitable.
The trouble is caused by the "numerical instability" of the process - the
unavoidable rounding errors in the data are amplified by the factor 2nx- 1
and there is also some cancellation.
However something can be saved: the fact that the ascending recurrence is unstable suggests that the descending one (i.e., solving for 1n - 1 in
terms of In and I n+1) might be stable! There is a difficulty here since we have
no initial values but it turns out that arbitrary initial values will do!
For instance if we take i₄₀ = 0, i₃₉ = 1 × 10⁻⁸ and use
i_{n-1} = 2n·i_n - i_{n+1}
we obtain
i₄₀ = 0
i₃₉ = 0.00000 001
i₃₈ = 0.00000 078
i₃₇ = 0.00005 927
i₃₆ = 0.00438 520
i₃₅ = 0.31567 512
...
i₂₀ = 0.43716 256 × 10²⁶
This, of course, is not likely to be the correct result since the difference
equation is homogeneous and we can multiply by any scale factor. There are
several ways of finding the appropriate multiplier: the simplest is to carry on
the above recurrence to obtain
io = 0.86360016 x 1050
Comparing this with the correct value of J₀ we see that the appropriate scale
factor is
k = J₀/i₀ = 0.88605 550 × 10⁻⁵⁰.
Applying this to
we obtain
i₀ = 0.80486703 × 10²³,
k = 0.95071315 × 10⁻²³,
J₂₀ = 0.38735029 × 10⁻²⁴.
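The whole backward-recurrence (Miller) computation fits in a few lines of Python. As a sketch, I normalize with the identity J₀(x) + 2(J₂(x) + J₄(x) + ...) = 1 (established in the Appendix) instead of a known value of J₀; the starting order 40 and trial value 10⁻³⁰ are arbitrary, as the text explains:

```python
N = 40
x = 1.0
j = [0.0] * (N + 2)
j[N], j[N - 1] = 0.0, 1e-30          # arbitrary trial values at high order
for n in range(N - 1, 0, -1):
    # J_{n-1}(x) = (2n/x) J_n(x) - J_{n+1}(x); here x = 1
    j[n - 1] = (2 * n / x) * j[n] - j[n + 1]

# normalize via J0(x) + 2*(J2(x) + J4(x) + ...) = 1
s = j[0] + 2 * sum(j[m] for m in range(2, N, 2))
J = [v / s for v in j]
print(J[0], J[1], J[7])
```

Running downward is stable: the dominant growing solution of the recurrence is exactly the family J_n, so the contamination from the arbitrary start dies away.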
Another way to obtain the normalizing factor k is to use the fact that
DIFFERENCING
and, generally,
(5)
if F ∈ Cʳ then
Δʳ(F(nh)) = hʳ F⁽ʳ⁾(ρ)
when F
is a smooth function. The differences of order r + 1 or higher of a polynomial of degree r are exactly zero, no matter what the interval is. Indeed
Δ_h xʳ = (x + h)ʳ - xʳ = rxʳ⁻¹h + ...
a triangle with a vertex at the wrong entry and binomial coefficients along
the base, thus:
values:            0   0   0   1   0   0   0
1st differences:     0   0   1  -1   0   0
2nd differences:       0   1  -2   1   0
3rd differences:         1  -3   3  -1
DIFFERENTIATION
e^{hD} = 1 + Δ,
and we can obtain h²D² by formally squaring the logarithmic series to get
h²D² = Δ² - Δ³ + (11/12)Δ⁴ - ...
δ_h f(0) = f(h/2) - f(-h/2),
δ_h² f(0) =
Note that, as compared with the previous representation, there are no odd
terms present and the coefficient of the even ones are smaller. [Collections
of useful representations are available, e.g., in Interpolation and Allied
Tables.]
We shall make use of the first two terms of (6). Let us see what sort of
approximation they give. If f ∈ C⁴,
f(x ± h) = f(x) ± hf'(x) + (h²/2!)f''(x) ± (h³/3!)f'''(x) + O(h⁴);
if f ∈ C⁶, we find
(7)
The comparative strength of these, in approximating f''(1) = -f(1), is indicated by the following table, which also indicates the effect of varying h.

x:             0.6       0.8       1.0       1.2       1.4
f(x) = sin x:  0.564642  0.717356  0.841471  0.932039  0.985450

Estimates of f''(1), 3-point formula:
interval 0.4:   -0.8303125
interval 0.2:   -0.838675
interval 0.1:   -0.840770
interval 0.05:  -0.8412
interval 0.01:  -0.84
5-point formula: -0.841562
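The 3-point and 5-point estimates can be reproduced with full-precision function values (a sketch; note that with exact values of sin rather than the 6D table, the 3-point estimate at interval 0.4 comes out as -0.830311 rather than -0.8303125):

```python
import math

f = math.sin
x = 1.0
exact = -math.sin(1.0)              # f''(1) = -sin 1 = -0.841471...

def d2_3pt(h):
    # 3-point central second difference
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

def d2_5pt(h):
    # 5-point central second difference
    return (-f(x - 2 * h) + 16 * f(x - h) - 30 * f(x)
            + 16 * f(x + h) - f(x + 2 * h)) / (12 * h**2)

for h in (0.4, 0.2, 0.1):
    print(h, d2_3pt(h), d2_5pt(h))
```

The 3-point error shrinks like h², while the 5-point formula is already accurate at interval 0.2, illustrating the "comparative strength" in the table.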
y'=f(x, y),
y(a) = b
(9)
y(O) = 0,
y(l) =0
y(O) = 1
(10)
is (see Appendix for details)
(11)
y = x f(i)IlIih 2) + 2f(i)Il/4(h 2 )
where
(12)    y' = √x + √y,
(13)    y'' = xy,
(14)    y'' = 6y² + x,
(15)    y'' = -y.
No formulas for the solutions to (12) or to (14) are available. The solution to
(15) is, of course, y = a cos x + b sin x. The solution for (13) which satisfies
the initial conditions
y(0) = 3^{-2/3}/Γ(2/3) = 0.35502 8...,    y'(0) = -3^{-1/3}/Γ(1/3)
is the Airy integral
(16)    Ai(x) = (1/π) ∫₀^∞ cos ((1/3)t³ + xt) dt.
where
= [Y']
Y
and F
[01 -1]O
y'' + λxy = 0,    y(0) = 0 = y(1),
y_{n+1}(x) = b + ∫ₐˣ f(t, yₙ(t)) dt
then the sequence Yn(x) converges to the required solution. Since we know
how to integrate numerically this is, in principle, a solution to our problem.
A few experiments, in which the integration is done analytically, shows that
convergence is likely to be slow. This method, however, is usable for small
ranges of x.
Another scheme, due to Euler, consists in approximating the integral
curve by polygons: we write
x(n)=a+nh,
y(n) = y(x(n))
[Fig. 10.1]
(20)
Our approximate solution (19) only agrees with (20) as far as the linear term
since y'(x)=f(x, y). The local error is therefore O(h2). Since there will be
O(h- 1 ) steps required to cover a finite interval we might expect the total
error to be O(h- 1 )O(h 2) = O(h) so that we would have to take a step size of
10-6 to get a 6D solution. By a slight modification of this, due to Heun, we
can improve things by an order of magnitude.
We define, for n ≥ 0,
y*(n + 1) = y(n) + hf(a + nh, y(n)),
(21)    y(n + 1) = y(n) + (h/2)[f(a + nh, y(n)) + f(a + (n + 1)h, y*(n + 1))].
Compare diagram.
If we assume that y can be expanded in a power series we find, after a
little manipulation, that the local error is 0 (h 3 ); in fact, with some care, we
find it does not exceed in absolute value
3
h
12
Y"(~)_3y,,(~)(af)
ay (~,'11)
I.
y(0) = 1
4Yo.o~-
which is correct.
Continuing the integration we get at x = 2(1)5:
x = 2:  0.73662 × 10    0.73831 × 10    (0.73831 + (1/3)(0.00169)) × 10 = 0.73888 × 10
x = 3:  0.19993 × 10²   0.20061 × 10²
x = 4:  0.54261 × 10²   0.54511 × 10²
Thus the errors are -1, -1, -1, 1 in the last place.
The popular Runge-Kutta method is a development of the Heun
algorithm: in it 4 values are averaged compared with 2 in the Heun case.
The local error in the Runge-Kutta method is O(h 5 ) and the global error is
O(h4) so that h 4 -extrapolation can be applied.
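The orders just described — O(h) for Euler, O(h²) for Heun, O(h⁴) for Runge-Kutta — are easy to verify numerically (a sketch on the test problem y' = y, y(0) = 1, which is my choice; halving h should divide the error at x = 1 by about 2, 4, and 16 respectively):

```python
import math

def euler_step(f, x, y, h):
    return y + h * f(x, y)

def heun_step(f, x, y, h):
    # predict with Euler, then average the slopes at both ends
    y_star = y + h * f(x, y)
    return y + 0.5 * h * (f(x, y) + f(x + h, y_star))

def rk4_step(f, x, y, h):
    # classical Runge-Kutta: 4 slope values are averaged
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def solve(step, f, a, b, y0, n):
    h, x, y = (b - a) / n, a, y0
    for _ in range(n):
        y, x = step(f, x, y, h), x + h
    return y

f = lambda x, y: y                  # exact solution y = e^x
errors = {}
for n in (20, 40):
    for name, step in (("euler", euler_step), ("heun", heun_step), ("rk4", rk4_step)):
        errors[name, n] = abs(solve(step, f, 0.0, 1.0, 1.0, n) - math.e)
for name in ("euler", "heun", "rk4"):
    print(name, errors[name, 20] / errors[name, 40])
```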
Among other methods are some in which extrapolation is built-in. We
shall not discuss these but now turn to two less sophisticated methods.
6.
MILNE-SIMPSON PREDICTOR-CORRECTOR
∫ f dx = (4h/3)(2f₁ - f₂ + 2f₃),
with error
(14/45) h⁵ f⁽⁴⁾(θ),
we can "predict"
(22)    y₄ = y₀ + (4h/3)(2f₁ - f₂ + 2f₃)
by Milne's quadrature. We then compute f₄ and apply Simpson's quadrature
to get a better value of y₄ from
(23)    y₄ = y₂ + (h/3)(f₂ + 4f₃ + f₄).
If there is satisfactory agreement between the two values of y₄ we can
proceed.
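A minimal sketch of a Milne-Simpson predictor-corrector in Python. The three extra starting values are generated here with classical Runge-Kutta, which is a common choice but not one the text specifies, and only a single correction is made per step:

```python
import math

def milne_simpson(f, a, b, y0, n):
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    ys = [y0]
    # starting values by classical RK4 (an assumption of this sketch)
    for i in range(3):
        x, y = xs[i], ys[-1]
        k1 = f(x, y); k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2); k4 = f(x + h, y + h * k3)
        ys.append(y + h * (k1 + 2*k2 + 2*k3 + k4) / 6)
    fs = [f(x, y) for x, y in zip(xs, ys)]
    for i in range(3, n):
        # Milne predictor over four panels
        yp = ys[i - 3] + 4 * h / 3 * (2 * fs[i] - fs[i - 1] + 2 * fs[i - 2])
        fp = f(xs[i + 1], yp)
        # Simpson corrector over two panels
        yc = ys[i - 1] + h / 3 * (fs[i - 1] + 4 * fs[i] + fp)
        ys.append(yc)
        fs.append(f(xs[i + 1], yc))
    return ys[-1]

approx = milne_simpson(lambda x, y: y, 0.0, 1.0, 1.0, 50)
print(approx, math.e)
```

On y' = y over [0, 1] with 50 steps the global error is of order h⁴, in line with the Simpson error term quoted above.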
7.
y"=xy.
For a discussion of this equation see Appendix. The Airy integral given by
(16) is one solution. We differentiate this n times by Leibniz' Theorem and
find, in the T-notation,
(n + 1)(n + 2)T_{n+2} = h²xT_n + h³T_{n-1},    n > 0,
2T₂y = h²xy.
We recall that Ai(x) is defined as the solution of (25) for which
(28)
We proceed as follows, choosing h = 0.1 and working to 6 places:

         x = 0        x = 0.1     ...    x = 0.5
y        0.355028     0.329203    ...    0.231694
Ty      -0.025882    -0.025713    ...   -0.022491
T²y      0            0.000165
T³y      0.000059
T⁴y     -0.000002
Σ₊       0.329203
Σ₋       0.380849
Σ′₊     -0.025713
Σ′₋     -0.025697
The numbers in the first column are obtained as follows. The first two
come from the initial conditions (28), the third from (27). The fourth comes
from (26₁) and the fifth from (26₂). The entries opposite Σ₊, Σ₋ come from (24)
and give y(0.1): the first gives us a new value to be entered at the top of
the second column and the second would enable us to check y(-0.1) had
this been available. The entries opposite Σ′₊, Σ′₋
give the values of Ty(0.1): the first is entered in the second column and
the second would enable us to check Ty(-0.1) had this been available. We
then compute T²y(0.1) from (27) and we are ready to proceed.
This method has the advantage that we check our values as we go along
and do not have to wait until, for instance, we can check our numerical
solution from an asymptotic representation of the solution.
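The Taylor-series scheme just described can be sketched compactly (the starting values Ai(0) = 0.3550280539..., Ai'(0) = -0.2588194038... are the tabulated ones; truncation at 8 terms per step is my choice):

```python
def taylor_step(x, y, yp, h, terms=8):
    """One step for y'' = x*y via the recurrence
    (n+1)(n+2) T_{n+2} = h^2 x T_n + h^3 T_{n-1},  with T_n = h^n y^(n)/n!."""
    T = [0.0] * (terms + 2)
    T[0], T[1] = y, h * yp
    T[2] = h * h * x * y / 2.0            # from 2 T_2 = h^2 x y
    for n in range(1, terms):
        T[n + 2] = (h * h * x * T[n] + h**3 * T[n - 1]) / ((n + 1) * (n + 2))
    y_next = sum(T)                                        # Sigma_+
    yp_next = sum(n * T[n] for n in range(1, terms + 2)) / h
    return y_next, yp_next

# tabulated initial values of the Airy function
y, yp = 0.3550280539, -0.2588194038
x, h = 0.0, 0.1
for _ in range(10):
    y, yp = taylor_step(x, y, yp, h)
    x += h
print(y)        # approximates Ai(1)
```

The first step reproduces the worked values above: T₃ = 0.000059, T₄ = -0.000002, and y(0.1) = 0.329203.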
8.
INSTABILITY
We want to discuss the equation (15) further in the case when y(O) = 0,
y'(O) = 1. We replace it by
y(n + 1) = (2- h 2 )y(n) - y(n -1),
(29)
y =:=
h2
Using h = 0.1 and the correct value for y(l) we get the solutions given in
column (2) - the correct 10D values of sin x are given in column (1). The
values obtained are good to about 3D. If we want better results we seem to
have two possibilities: take a smaller h, or take a better approximation to
the differential equation. We choose the latter and use the better 5 point
approximation to y'' which we have derived earlier. We get, for h = 0.1,
(30)    y(n + 2) = 16y(n + 1) - (30 - 12h²)y(n) + 16y(n - 1) - y(n - 2).
If we use the correct values for y(l), y(2) and y(3) we get, according to
whether we work to 5D or 10D, the results given in columns (4) and (3).
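The experiment is easy to repeat (a sketch: the 3-point scheme is started from exact values, and the 5-point scheme — written here as my rearrangement of the 5-point second-derivative formula — is started from values rounded to 5D, as in the 5D run):

```python
import math

h = 0.1
# 3-point scheme (29): y(n+1) = (2 - h^2) y(n) - y(n-1), exact starting values
y = [0.0, math.sin(h)]
for n in range(1, 16):
    y.append((2 - h * h) * y[n] - y[n - 1])

# 5-point scheme: y(n+2) = 16y(n+1) - (30 - 12h^2)y(n) + 16y(n-1) - y(n-2),
# started from exact values rounded to 5 decimals
z = [round(math.sin(k * h), 5) for k in range(4)]
for n in range(2, 12):
    z.append(16 * z[n + 1] - (30 - 12 * h * h) * z[n] + 16 * z[n - 1] - z[n - 2])

print(y[10], z[10], math.sin(1.0))
```

The 3-point run stays good to a few units in the fourth decimal at x = 1, while the rounding errors in the 5-point run are amplified at every step and the computed values are soon meaningless.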
How is this to be explained?
The general theory of linear difference equations with constant coefficients which was outlined earlier indicates that the solution to (30) is of the
form
  x    (1) sin x, 10D   (2)       (3)            (4)
 0.0   0.0000000000     0.00000   0.0000000000    0.00000
 0.1   0.0998334166     0.09983   0.0998334166    0.09983
 0.2   0.1986693308     0.19866   0.1986693308    0.19867
 0.3   0.2955202067     0.29550   0.2955202067    0.29552
 0.4   0.3894183423     0.38939   0.3894183865    0.38934
 0.5   0.4794255386     0.47939   0.4794259960    0.47819
 0.6   0.5646424734     0.56460   0.5646490616    0.54721
 0.7   0.6442176872     0.64416   0.6443099144    0.40096
 0.8   0.7173560909     0.71728   0.7186422373   -2.67357
 0.9   0.7833269096     0.78323   0.8012545441
 1.0   0.8414709848     0.84135   1.0913522239
 1.1   0.8912073601     0.89106   4.3741156871
 1.2   0.9320390860     0.93186
 1.3   0.9635581854     0.96334
 1.4   0.9854497300     0.98519
 1.5   0.9974949866     0.99719
 1.6   0.9995736030     0.99922

9. TWO POINT PROBLEMS
(31)    y'' = -λy,    y(0) = 0 = y(1)
are, on the whole, representative of the behavior of the general case (9). The
general solution of (31) is
y = A sin √λ x + B cos √λ x
and the condition y(O) = 0 gives B = O. For non-trivial solutions we must
-AY1
Y 2 -2Y3 + Y4 =-AY.
(1/16)
by values
μY₁ + Y₂ = 0,
Y₁ + μY₂ + Y₃ = 0,
Y₂ + μY₃ = 0,
and for a non-trivial solution we must have
det [ μ  1  0 ]
    [ 1  μ  1 ]  = 0;
    [ 0  1  μ ]
we shall use this to get the exact values of y(l) which the reader should
compare with those got on the computer:
λ = 9,      λ^{1/2} = 3,
λ = 9.61,   λ^{1/2} = 3.1,
λ = 10.24,  λ^{1/2} = 3.2,
=0.047040,
1341
61
62.79
45699
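The determinant condition above can be solved numerically; here is a sketch in an equivalent scaled form (diagonal 2/h² - λ, off-diagonals -1/h², with n interior points and h = 1/(n+1) — the scaling, the bracket (0, 20), and the bisection are my choices):

```python
import math

def char_det(lam, n):
    # determinant of the n x n tridiagonal matrix from the 3-point
    # discretization of -y'' = lam*y, y(0) = y(1) = 0, h = 1/(n+1),
    # computed by the standard three-term recurrence
    h = 1.0 / (n + 1)
    d = 2.0 / h**2 - lam
    p_prev, p = 1.0, d
    for _ in range(n - 1):
        p_prev, p = p, d * p - (1.0 / h**4) * p_prev
    return p

def smallest_eigenvalue(n, lo=0.0, hi=20.0):
    # bisection; for small n the interval (0, 20) brackets only the
    # first eigenvalue, which tends to pi^2 = 9.8696... as n grows
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if char_det(lo, n) * char_det(mid, n) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for n in (3, 7, 15):
    print(n, smallest_eigenvalue(n), math.pi**2)
```

The computed values agree with the closed form 4(n+1)² sin²(π/(2(n+1))) for the discrete problem, and approach π² like O(h²).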
u_{n+1} = u_n + u_{n-1},    n ≥ 1,    u₀ = u₁ = 1,
when
Uo
is given and
a,
b, c are real.
I_n = ∫₀¹ xⁿ e^{x-1} dx
satisfies I_n = 1 - nI_{n-1}. Given
I₀ = 1 - e⁻¹ = 0.632121 to 6D,
calculate I₁, I₂, I₃, ..., I₁₀ using the above recurrence relation.
Is there anything obviously wrong with your results so far?
Calculate, correct to 6D, the value of I₁₀ = 1334961 - 3628800e⁻¹.
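Both directions of the recurrence can be tried in a few lines (the backward run, starting from the crude guess I₂₀ = 0, is my addition in the spirit of the backward-recurrence discussion earlier in the chapter):

```python
import math

# forward: I_n = 1 - n*I_{n-1}, seeded with the 6D value of I_0
I = 0.632121                      # I_0 = 1 - 1/e, rounded to 6D
forward = [I]
for n in range(1, 11):
    I = 1 - n * I
    forward.append(I)

# backward: I_{n-1} = (1 - I_n)/n is stable; even the crude guess
# I_20 = 0 gives I_10 accurately, since errors are divided by n at each step
J = 0.0
for n in range(20, 10, -1):
    J = (1 - J) / n               # on exit, J approximates I_10

exact_I10 = 1334961 - 3628800 / math.e
print(forward[10], J, exact_I10)
```

The forward run magnifies the 6D seeding error by n! — by n = 10 the computed value is off by more than 1, although 0 < I₁₀ < 1/11 must hold.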
10.5. Experiment further with the numerical differentiation of a table of
sin x, using various intervals and various formulas. Discuss the balancing of
the round-off and truncation errors.
10.6. Examine theoretically the Picard and Euler methods in the case of the
differential equation
y' = y,    y(0) = 1
which has the solution y = eˣ. Show that in the first case
yₙ(x) = 1 + x + x²/2! + ... + xⁿ/n!
and in the second case
y(n) = (1 + h)ⁿ.
l --,
x+t
Chapter 10
136
already been given (using these you can conveniently check your computations from standard tables). In (14) you may take y(O) = 1 and y'(O) = ~m3.
You should observe that in this case the solution tends to infinity as
x ~ X(A) where X(A) depends on the value A of y'(O). In order to estimate
X(A) it will be necessary to decrease h as x increases.
10.12. Discuss the differential equation
y(O) = 0 = y(l)
y"= -AXY,
Ixl<l,
V(x)=O,
Ixl~1.
V(x) = 0,
Show that there is exactly one value of E for which there is an even or odd
(which?) solution with I/I(x) ~ 0 as Ixl ~ 00. Find E to 2D and the corresponding I/I(x) for x =0(0.1)l.
10.15. Write a program to print out the differences of a set of say 12 values
-98912726
-77220413
-55444184
-33583877
-1639330
218
219
220
221
222
10389609
32530132
54701371
76984498
99352675
137
y(1) = 0.809525,
y'(l) = -0.232199.
Find y(1.1), y(1.2), ... , y(2) and y'(1.1), y'(1.2), ... , y'(2) using h = 0.01,
h =0.0025.
Xl
given.
f(x)
1 771
91
2
48
36
5
32
-41
6
7
10.19. Is it likely that the following sequences are the values of a polynomial at equally spaced arguments?
(a)
2, 3, 5, 7,11,13,17,19,23, ...
(b)
(c) 41, 43, 47, 53, 61, 71, 83, 97, ...
at
138
Chapter 10
Aix
1.1
1.2
1.3
1.4
1.5
1.6
1.7
0.12004943
0.10612576
0.09347467
0.08203805
0.07174950
0.06253691
0.05432479
y = 0.13529242,
y' = -0.15914744
where
h2 x
h3
along the following lines. Express x(O), x'(O) in terms of r -functions. Show
that
y(x) = exp (!x 2 )X(x)
yHO) =0,
Y2(0)=0,
y~(O)
= 1.
139
k>l.
APPENDIX
Bessel Functions
(2)    J_n(x) = Σ_{r=0}^∞ [(-1)ʳ/(r!(n + r)!)] (x/2)^{n+2r}.
Our first objective is to establish the equiValence of (1), (2) and we begin by
finding some properties of the In's. We indicate by subscripts on the
numbers of our formulas the definitions on which they are based.
Using the fact that cos A - cos B = 2 sin (1/2)(A + B) sin (1/2)(B - A) we obtain
J_{n-1}(x) - J_{n+1}(x) = 2J'_n(x),
assuming it permissible to differentiate (1) with respect to x, i.e., to interchange the operations d/ dx and I ... dO on the right.
We note that when n = 1 we have
J₁(x) = (1/2π) ∫₀^{2π} cos (θ - x sin θ) dθ
=
'IT
'IT ",/2
1.
We first note that the series (2) is convergent for all x: the ratio of the
(r+1)-st term to the r-th is -x²/(4(r + 1)(n + r + 1)) and this tends to zero as r → ∞ for
"
In(x) -
(-1Y{n+2r)(n+2r-1) (x)n+2r-2
-2
.
r=O
r. n+r)'2
. 2
'(
co
x²J''_n(x) + xJ'_n(x) + (x² - n²)J_n(x):
it is
J_v(x) = (x/2)ᵛ Σ_{r=0}^∞ [(-1)ʳ/(r!Γ(r + 1 + v))] (x/2)^{2r}.
Indeed, if v is not an integer, J₋ᵥ(x) also satisfies this equation and Jᵥ, J₋ᵥ
form a pair of independent solutions to (8). We note from (2') that, for
n = 0, 1, 2, ...
J₋ₙ(x) = (x/2)⁻ⁿ Σ_{r=0}^∞ [(-1)ʳ/(r!Γ(r + 1 - n))] (x/2)^{2r}
and the coefficients of the terms with index 0, 1, ..., n - 1 on the right
vanish because of the behavior of Γ(t) when t = 1 - n, 2 - n, ..., 0.
Hence, putting s = r - n,
J₋ₙ(x) = (x/2)⁻ⁿ Σ_{r=n}^∞ [(-1)ʳ/(r!Γ(r + 1 - n))] (x/2)^{2r}
       = (x/2)⁻ⁿ Σ_{s=0}^∞ [(-1)^{s+n}/(Γ(s + 1 + n)Γ(s + 1))] (x/2)^{2s+2n}
       = (-1)ⁿ (x/2)ⁿ Σ_{s=0}^∞ [(-1)ˢ/(s!Γ(s + 1 + n))] (x/2)^{2s}
       = (-1)ⁿ Jₙ(x).
These are simple consequences of (2). We deal only with the first
relation. The coefficient of x^{2v+2r-1} on the right is
[(-1)ʳ/(2ᵛ r!Γ(r + 1 + v) 2^{2r})] · (2r + 2v) = (-1)ʳ(r + v)/(2^{v-1} r!(r + v)Γ(r + v) 2^{2r})
which, after cancelling the factor (r + v), is exactly the coefficient of x^{2v+2r-1}
on the left.
If we differentiate out the left hand sides of (9₂) and multiply across by
x⁻ᵛ and -xᵛ we obtain
-J'_v(x) + (v/x)J_v(x) = J_{v+1}(x),
2J'_v(x) = J_{v-1}(x) - J_{v+1}(x).
We now show how to derive the first part of (1l~ on the basis of the
original definition (1). [The second part of (11 2 ) is just (3 1),]
Elementary trigonometry gives, if x -F 0,
cos n + 1)8- x sin 8)+cos n -1)8- x sin 8)
= 2n cos (n8 x
x sin 8)
We are now able to identify (1), (2). We do this by checking the results
first for n = 0, and then using the fact the recurrence relations are satisfied
on the basis of (1), (2).
145
Bessel Functions
Since the series for cos t is convergent for all t we can expand the
integrand in (1) as a power series and integrate using the fact that for all
even exponents
(1/π) ∫₀^π sin^{2r} θ dθ = (2r)!/(4ʳ(r!)²)
to find
(1/π) ∫₀^π cos (x sin θ) dθ = Σ_{r=0}^∞ [(-1)ʳ x^{2r}/(2r)!] [(1/π) ∫₀^π sin^{2r} θ dθ]
                            = Σ_{r=0}^∞ [(-1)ʳ/(r!)²] (x/2)^{2r}.
3.
We write J = J_{1/3}((2/3)λ^{1/2}x^{3/2}) and shall show that y = x^{1/2}J is the solution.
Differentiating and substituting we find
We now replace x by ~A 1I2X3/2 in this equation and solve for h 3/2J" getting
AX3/2J" = -~A 1/2J' - (h 3/2 -ix-3/2)J
which when substituted in the expression for Y" + AXY cancels out everything.
4.
The properties of the I's can be obtained by translation of those of the 1's
just as we obtain properties of the hyperbolic functions from those of the
trigonometric ones, or by a parallel development. Some care has to be taken
in the determination of the multiplier on the right in (12) when we deal with
cases when n is not an integer.
We show that the complete solution of the differential equation
u'' - x²u = 0 is
(14)
Differentiating we find, omitting the argument 1X2 of the I's,
u' =1X-1/2(CIIl/4 + ... )+ XlI2 . X(clI~/4 + ... )
and
so that
u" - x 2u = -iX- 3/2 [(1 + 4X4) (clIlI4 + .. .)-8x2(clI~/4 + ...)-4x 4(clIlI4 + ...)].
Now I_{1/4}(t) satisfies
t²y'' + ty' - (1/16 + t²)y = 0
Bessel Functions
147
I_v(x) ~ (x/2)ᵛ/Γ(v + 1)
so that
y ~2clf(3/4)/c2f(1/4).
y = X f(~)Il/4(h2)+2f(~)Il/4(h2) .
5.
which was mentioned in Chapter 10. We do this on the basis of the series
definition (2).
For s ≥ m the coefficient of ((1/2)z)^{2s} in J_{2m}(z) is (-1)^{s-m}[(s - m)!(s + m)!]⁻¹.
Hence the coefficient of ((1/2)z)^{2s} in J₀(z) + 2Σ_{m=1}^∞ J_{2m}(z) is
(-1)ˢ [1/(s!s!) - 2/((s - 1)!(s + 1)!) + ... + (-1)ˢ 2/(0!(2s)!)]
= ~;:;;
2(s-r
2S) b
y
(2S) (2S)
s-r + s+r .
"~ (zZ
11 1)2S (2s)!
1 (
)2s _ " IzI2s _
II
1+1 - ~(2s)!-cosh Z <00.
We shall next establish the following relation of P. A. Hansen, which
can also be used for normalization in backward recurrences:
J₀²(x) + 2J₁²(x) + 2J₂²(x) + ... = 1.
We note that for x real and r = 1, 2, ... this gives the inequalities
(-1)n
n - r)!(s - r)!
r=O
( -1)n s-n
(S) (
= (S!)2
r~o
= (-1)n
(.!.)2(
2s )
s!
s-n
s-n-r
s;e 1
s=1
6.
BOUNDS ON DERIVATIVES OF
Jo(X)
Bessel Functions
149
In fact from the representation (1), if we can differentiate under the integral
SIgn,
J₀^{(2r)}(x) = ((-1)ʳ/π) ∫₀^π sin^{2r} θ cos (x sin θ) dθ
and
J₀^{(2r+1)}(x) = ((-1)^{r+1}/π) ∫₀^π sin^{2r+1} θ sin (x sin θ) dθ.
The stated result follows since the integrands do not exceed 1 in absolute
value.
7.
We shall establish part of (15) and (16), (17), (18). Bessel and Lommel
showed that if -1/2 ≤ v ≤ 1/2 then J_v(x) has the following signs in the
intervals (mπ, (m + 1)π):
(0, π): +,   (π, 2π): -,   (2π, 3π): +,   (3π, 4π): -,   ...
If we write
2m+8
= L2 +
14 + ... +1
2m
m-2
2m 8
+
m
The step from -1/2 ≤ v ≤ 1/2 to general v is carried out as follows: Rolle's
theorem and the recurrence relations
show that between any consecutive pair of zeros of x⁻ᵛJ_v(x) there is at least
one zero of x⁻ᵛJ_{v+1}(x) and between any consecutive pair of zeros of
x^{v+1}J_{v+1}(x) there is at least one zero of x^{v+1}J_v(x). Thus we can move up and
down from -1/2 ≤ v ≤ 1/2.
To establish (16) we use the differential equation
z²y'' + zy' + (z² - v²)y = 0
to conclude that if there were a double zero at a point x' ≠ 0 then y''(x') and all
higher derivatives would vanish there and so y would be identically zero.
To establish (17) we use, as before, the recurrence relations.
To establish (18) we proceed as follows:
When v > 0 the power series of J_v and J'_v show that they are positive for
small positive x. The differential equation, written in the form
shows that if x < v and J_v > 0 then xJ'_v is positive and increasing and so J_v is
increasing. That is, so long as 0 < x < v both J_v and xJ'_v are positive and
increasing and so cannot vanish.
(19)    Ai(x) = (1/π) ∫₀^∞ cos ((1/3)t³ + xt) dt
y"=xy.
(20)
A direct proof of this fact, according to Hardy, "is not a particularly simple
matter". It, together with
Bi(x) =
(21)
has been tabulated extensively [British Association, Math. Tables, Partvolume B, 1946]. The pair Ai(x), Bi(x) are an independent pair of solutions
of the equation (20) and were expressed in terms of the Bessel functions
I_{1/3}, J_{1/3} with argument (2/3)x^{3/2} by Wirtinger and Nicholson. (See (25)
below.)
First we have to discuss the convergence of (19). This follows from the
fact which can be proved by integration by parts, that,
∫^∞ Q(x)e^{iP(x)} dx,
where P, Q are polynomials, P having real coefficients, is convergent if and
only if q ≤ p - 2 where q = degree Q, p = degree P. We shall discuss the
problem directly by a method which we use again.
Since 1 = ((t² + x) - x)/t² we have
I(b) =
f"
-x
fO cos(~t3+xt)dt/f=I1+12'
say.
so that
Clearly
Thus
(22)
and then to show that this integral is uniformly convergent with respect to x,
so that we can apply a standard theorem on differentiation under the
integral sign. [See e.g., Titchmarsh, 59.]
Since
we have
where
11 =
I 2 =-x
i=
dt,
11 = b- 1 sin GBi+ xB 1 )
12 = -xb- 3 sin GBi +xB 2)
where b :5B1> b :5B 2. The third integral is estimated trivially as
1131 :5~x3b-2.
Hence
and so the integral I(b) is convergent and indeed uniformly convergent with
respect to x in any interval [-X, X]. It follows, from the theorem cited,
that
Ai'(x) = -π⁻¹ ∫₀^∞ t sin ((1/3)t³ + xt) dt.
Differentiating once more under the integral sign would lead to an integral like
∫^∞ sin y dy
which is divergent.
To get the result we require we follow Stolz and Gibson in using a
technique of de la Vallee Poussin. We describe it formally in a general case
and then deal with the special case rigorously.
Suppose that
F(x) =
f(x, t) dt
IX fx(x, t) dx.
(23)
00
we get
F(X)- F(O) = ;~
IX dx 1fx(x, t) dt
00
Appendix
154
fx(x, t) dt.
IT
iT I{I(x, t) dt
where
while
IT I{I(x, t) dt
~~ i
dx
In these circumstances
F(X) - F(O) =
and, as before
F'(x) =
LX dx L I{I(x, t) dt
oo
L I{I(x, t) dt.
oo
In the special case we have f(x, t) = t sin (it 3 + xt) and we want to find
appropriate (f), 1/1 so that
We write
so that
155
Bessel Functions
3
3
+ ILT)d IL = cos(iT )-cos(iT +x1)--+ 0
(lT3
SID 3
IT cos
(it 3 + xt) dt
d~ [-1T-1
cos Gt 3 +xt) dt
i.e.
Ai"(x) = xAi(x)
as required.
REFERENCES
9.
REPRESENTATION OF
Ai(x)
AS A POWER SERIES
Let
= 0 we get:
156
Appendix
(l+ ;, x +...)
ao
Ai(x) =
where
-a1(x+ 41 x 4+ ... )
u(8)== U =
v(8) == v =
1
1
00
00
Both these integrals are uniformly convergent with respect to 8 in -7T/2 <
-8o !5:.8 !5:.80 < 7T/2 because they are dominated by
1"
o
e-COS6oYyR dy =
r(n +1)
.
(cos 8ot+1
du =
d8
dy
dv
d8=nu
= -nv
157
Bessel Functions
so that
u(O) = A cos nO + B sin nO.
When 0=0
v (0) = 0,
so that
u(O) = r(n) cos nO,
v(O)
oo
-ax
xn
-1
cos b d
r(n) cos
X x=-nO
sin
rn sin
that
n7T
coscos x dx = ____2
o x 1- n
bn
'
oo
r(n)
ooSinbX
_ r(n)sinT
~dxo X
10.
REpRESENTATION OF
Ai(x)
bn
O<l-n<l
0<1-n<2.
Chapter 1
1.3. Solution
Take θ with 0 < θ < 1 and choose a so that θ(1 + a) = 1. This implies a > 0 and so
θⁿ = (1 + a)⁻ⁿ ≤ (1 + na)⁻¹ < (na)⁻¹.
Hence for any ε > 0, θⁿ < ε if n > n₀(ε) = (aε)⁻¹. Thus θⁿ → 0. In case θ < 0 we
observe that |θⁿ| = |θ|ⁿ.
1.5. Solution
Differentiating we find
dx
1-1. x
1
2
=-2+-3+ ... >0.
2x 3x
We use the logarithmic series and the binomial series (in the geometric case)
at *.
Similarly
1 ) +-=
1 -1 -1 - ... <0.
-d [ log (l)X-l]
1-=log (1-dx
x
x
X
2X2 3x 3
1.6. Solution
See Problem 1.13 below for the value of M(1,0.2). The results of
Gauss in the case ao =./2, bo = 1 are:
a₀ = 1.41421 35623 73095 04880 2      b₀ = 1.00000 00000 00000 00000 0
a₁ = 1.20710 67811 86547 52440 1      b₁ = 1.18920 71150 02721 06671 7
a₂ = 1.19815 69480 94634 29555 9      b₂ = 1.19812 35214 03120 12260 7
a₃ = 1.19814 02347 93877 20908 3      b₃ = 1.19814 02346 77307 20579 8
a₄ = 1.19814 02347 35592 20744 1      b₄ = 1.19814 02347 35592 20743 9
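Gauss's computation above is easy to reproduce (a sketch; the relative tolerance is my choice — double precision cannot of course reproduce all 21 decimals):

```python
import math

def agm(a, b, tol=1e-15):
    # Gauss's arithmetic-geometric mean iteration; quadratic convergence
    while abs(a - b) > tol * max(abs(a), abs(b)):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return (a + b) / 2.0

m = agm(math.sqrt(2.0), 1.0)
print(m)    # compare with Gauss's value 1.19814 02347 35592 207...
```

Four iterations already agree to all the digits double precision can carry, matching the row a₄, b₄ above.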
Note that the observed values of aₙ - bₙ are much smaller than the
estimated values of aₙ - bₙ.
1.7. Solution
so that aₙbₙ = a₀b₀ and, passing to the limit, l² = a₀b₀. Since l > 0, it follows
that l is the geometric mean of a₀, b₀.
To discuss the rate of convergence we note that
aₙ₊₁ - bₙ₊₁ = (1/2)(aₙ + bₙ) - [2aₙbₙ/(aₙ + bₙ)]
            = (aₙ - bₙ)²/(4aₙ₊₁).
Thus the convergence is of the same type as that of the Gaussian
algorithm.
Observe that this algorithm suggests a method for finding square roots.
If we want to find √N then we start with
a₀ = x₀,    b₀ = N/x₀.
A moment's thought reveals that this is really the familiar algorithm
aₙ₊₁ = (1/2)(aₙ + N/aₙ).
1.9. Solution
Let α₀ = a₀⁻¹, β₀ = b₀⁻¹ and form the Gaussian sequences for α₀, β₀. We
assert that αₙ = aₙ⁻¹, βₙ = bₙ⁻¹. This is true for n = 0. Assume that it is true
for n = r; then
and
a₀ = 0.2,    b₀ = 1,    cos θ = 0.2,
If
0.2 = cos (1.369 + e) = cos 1.369 - e sin 1.369 - (1/2)e² cos 1.369 + ...
then a first approximation to e is
e₁ = 0.0004295292/sin 1.369 = 0.000438.
We now compute
cos (1.36943 8) = cos 1.369 - e₁ sin 1.369 - (1/2)e₁² cos 1.369 - ...
               = 0.2004295292 - 0.0004291122 - 0.0000000191
               = 0.20000 03979.
The next approximation is given by
e2
We take
θ = 1.36943 841
and then
l = (sin θ)/θ = 0.71547 2776.
b₀ = 1,    cosh t = √2,    sinh t = 1,
l = (sinh t)/t = 1.13459 266.
b) Carlson.
(Q.96 _ ~0.48 _ I
0.48
- V2lOg5 - log 5 - V1.60943 79124
1-
bo =0.2:
= 0.546114246.
bo = 1:
ao=..fi,
I = V 210: J2 =
and
g(n) = 2n(~~-a~l/2 = 2n cos (2- 1 6) ... (cos 2-n8)[1-cos2 2-n8]112
=
2n~n
sin 2-n6
= sin 8
so that !(n), g(n) are independent of n.
Next we note that
f(n) arcsin ~~-a~)112/~n)
g(n) = ~n~~-a~)112/~n)
Since an ~ land
~n ~ l
so that I = g(O)/f(O).
we have
Chapter 1
165
1.14. Solution
Since K' depends on k through k 2 some tables have m = k 2 as the
argument. Then
0t 1/2 dO
{(1-t 2)(1-(1-m)f)}-1/2dt.
0.52080 1638.
1.15. Solution
What is involved here is essentially the relation
r/\imfn(t~) d6
when fn(6)
= 2n + 2 '7T- 1
t'IT/2Iim gn(O) dO = O.
For further remarks on similar problems see Chapter 5. Since
it is clear that
an-1<{.
- n (1I)<b-1
11 n
[Figure: graphs of bₙ⁻¹, M⁻¹, aₙ⁻¹ on (0, π/2)]
Also
Suppose the common value of the integrals ∫₀^{π/2} fₙ(θ) dθ differed from
(π/2)M⁻¹ by E₀ ≠ 0. This supposition leads to a
contradiction in the following way:
Since aₙ ↓ M, bₙ ↑ M we have |bₙ⁻¹ - aₙ⁻¹|(π/2) ≤ (aₙ - bₙ)(π/2)/bₙ². Since
aₙ - bₙ → 0 we can find an n₀ such that for n ≥ n₀,
(aₙ - bₙ)(π/2)/bₙ² < |E₀|.
The last two displayed formulae are in contradiction. Thus we have established the relation required.
1.16. Solution
a) If A is the common limit of {aₙ} and {gₙ} then A/√x is the common
limit of the Borchardt sequences with initial values (1 + x)/(2√x), 1. The limit
in this case is (sinh t)/t where cosh t = (1 + x)/(2√x). It is clear that sinh t =
√(cosh² t - 1) = (1 - x)/(2√x) and that
exp t = cosh t + sinh t = 1/√x,
i.e.,
A = (x - 1)/log x,   or   log x = (x - 1)/A.
The sequence {aₙ - gₙ} converges to zero geometrically, with approximate ratio 1/4, whereas the sequence {(1/3)(aₙ + 2gₙ)} converges to its limit
geometrically with approximate ratio 1/16.
b) If B is the common limit of {aₙ} and {gₙ} then B/√(1 + x²) is the
common limit of the Borchardt sequences with initial values 1/√(1 + x²), 1.
Hence
B/√(1 + x²) = (sin θ)/θ   with   cos θ = 1/√(1 + x²),
so that
B = x/arctan x,   i.e.,   arctan x = x/B.
These give
Let
n~ao
and we have
1.18. Solution
Clearly xₙ₊₁ lies between xₙ, yₙ; further, yₙ₊₁ lies between xₙ, xₙ₊₁ and so
also between xₙ, yₙ.
Hence if xₙ ≤ yₙ then xₙ₊₁ ≤ yₙ₊₁. It follows that the odd terms and the
even terms of each sequence are monotonic and bounded and therefore
have limits. The relation established above ensures the equality of all the
limits and that as n → ∞
The argument used in Problem 1.13 shows that
1
00
L+
oo
(s
L+
oo
(s
Ft 3/4(S + F)-1/2 ds
12 )-5/4 ds
1
00
(T+1)-3/4y--1I2 dT=4
cos qJ(dqJ/d8)
sin2 8)-3/2.
i.e.,
∫₀^{π/2} dθ (a₀² cos² θ + b₀² sin² θ)^{-1/2} = ∫₀^{π/2} dφ (a₁² cos² φ + b₁² sin² φ)^{-1/2}.
1.20. Solution
0.159154931,
while
(2π)⁻¹ = 0.159154943.
Chapter 2
2.1. Solution
Let f be a positive decreasing function defined for all x ≥ 1. Then the
sequence {sₙ}, where sₙ = Σ_{r=1}^n f(r), and the integral ∫₁^∞ f(x) dx converge or
diverge together. The "infinite integral" ∫₁^∞ f(x) dx is said to converge or
diverge according as lim_{X→∞} F(X) exists or not, where
F(X) = ∫₁^X f(x) dx.
Since f(r) ≥ f(x) ≥ f(r + 1) for r ≤ x ≤ r + 1,
f(x) dx = ~t:
n~2,
dx ~ O.
Rewriting this as
f(1) + f(2) + ... + f(n - 1) ≥ ∫₁ⁿ f(x) dx
we see that
Σ_{r=1}^∞ f(r) ≥ ∫₁^∞ f(x) dx.
Moreover, e(n + 1) - e(n) = ∫_n^{n+1} (f(n) - f(x)) dx ≥ 0, so that the sequence {e(n)} is an increasing one. Since
e(n) = f(1) -
J: 1'+1
I~1
f(x) dx
n +1
The sum of the shaded areas to the right of the ordinate through 1 is
manifestly positive; translating them to the left as indicated, the sum is
manifestly less than f(1).
It follows in the case of convergence, that
Σ f(n) ≤ f(1) + ∫₁^∞ f(x) dx.
We have, therefore, obtained both upper and lower bounds for the
infinite series in terms of the infinite integral.
2.2. Solution
We discuss
we have
S_N ≥ ∫₁^{N+1} x^{-1/2} dx = 2(N + 1)^{1/2} - 2.
Also,
S_{N-1} ≤ 1 + ∫₁^N x^{-1/2} dx = 2N^{1/2} - 1.
These two relations show that SN = (J(Nl/ 2 ) exactly. [Compare Problem 2.5
below.]
The first of these relations gives
SN> 10 if 2(N + 1)1/2> 12, i.e., if (N + 1) > 36, i.e., if N> 35.
100 =
cc
=101.
Similarly
cc
cc
1/9:5L n- 1:510/9,
1
Chapter 2
173
Actually
Ln00
Ln00
lO
= 1.00099 4575,
2.4. Solution
In the notation of Problem 2.2 we have
1
fa(n) =-e(n)+--n-a
I-a
and, letting
n~oo,
we find
limfa(n)=-I+-11 .
-a
When a
= 1 we have
2√2 - (1 + 1/√2) = 1.12132034,
2√3 - (1 + 1/√2 + 1/√3) = 1.1796446,
4 - (1 + 1/√2 + 1/√3 + 1/2) = 1.2155429
174
n-a
a
-::5
L f(r) = L r00
00
r=1
r=n
1- a
n-a
a
h f(x) dx
and n- 1 - a +
(1 1)
+- = n-a -+a
2.6. Solution
The first series is obtained by expanding the integrand in the lemniscate
integral ∫₀¹ (1 - x⁴)^{-1/2} dx by the binomial theorem and integrating term-by-term.
From Stirling's Formula [p. 111] we find that the general term aₙ
is approximately (1/(4√π))n^{-3/2}. It follows that the error after n terms is
O(n^{-1/2}) so that direct summation is unfeasible.
The second series is also obtained from the lemniscate integral by
writing the integrand as (1 - x²)^{-1/2}(1 + x²)^{-1/2}, expanding the second
factor by the binomial theorem, and then integrating term-by-term using the
fact that
we have
a.. - C
a..+1- C
2.8. Solution
This follows by induction from the fact that if f(x) = o(h(x)) and
g(x) = o(h(x)) then f(x) + g(x) = o(h(x)), and the fact that xʳ = o(eˣ) as
x → ∞.
Let x → 1
and use the fact that if f(x) = o(h(x)) then Af(x) = o(h(x)), for any constant A.
2.9. Solution
f(x) = Σ_{k=0}^n f⁽ᵏ⁾(a)(x - a)ᵏ/k! + Eₙ(x)
where
|Eₙ(x)| ≤ M|x - a|ⁿ⁺¹/(n + 1)!
or as
2.11. Solution
Since
d/dt {eᵗ(1 - t/n)ⁿ} = -(t/n)eᵗ(1 - t/n)ⁿ⁻¹,
176
Chapter 3
3.2. Solution
If {xₙ} converges to a limit l ≠ 0 then l = (1/2)(l + N/l) so that l = √N.
Graphical considerations suggest that there will be convergence to √N if x₀
is positive and to -√N if x₀ is negative. It will be enough to discuss the first
case. Further, εₙ₊₁ = (1/2)εₙ²xₙ⁻¹ so that convergence is quadratic. If x₀ < √N then
x₁ > √N and we find x₁ > x₂ > ... > √N. The sequence {xₙ}_{n≥1} is therefore a
bounded decreasing one and its limit is necessarily √N.
3.3. Solution
Deal with this as in Problem 3.4 below, or transform it algebraically
into Problem 3.4. Quadratic convergence to N 1/ 2 takes place for appropriately chosen Yo since
3.4. Solution
For further details see W. G. Hwang and John Todd, J. Approx. Theory
9 (1973), 299-306. See diagram opposite.
When 0 < x₀ < (3N)^{1/2} the sequence converges quadratically to N^{1/2}. When
x₀ > (5N)^{1/2} the sequence oscillates infinitely. There is an increasing sequence βᵣ with β₋₁ = (3N)^{1/2} which converges to γ = (5N)^{1/2} and is such that
when βᵣ < x₀ < βᵣ₊₁ the sequence {xₙ} converges to (-1)ʳN^{1/2}. For x₀ =
0, β₋₁, β₀, ... the sequence converges to 0. For x₀ = γ the sequence oscillates: xₙ = (-1)ⁿγ. The behavior for negative x₀ is obtained by symmetry.
Chapter 3
177
3.5. Solution
If we use the relation (8) of 3.2 the only division we need to do is by 6
which can be done mentally. We have
x₁ - √N = -(x₀ - √N)²(x₀ + 2√N)/(2N),
x₀ = 1.73205 081.
We compute (exactly)
x₀² = 3.00000 00084 21656 1
which guarantees x₀ - √3 < 10⁻⁸. We can write the recurrence relation in the
form
x₁ = x₀[1 + (N - x₀²)/(2N)]
and complete our calculation as follows:
From our computation of x₀² we find (N - x₀²)/(2N) and hence x₁. Adding
x₀ we get
√3 = 1.73205 08075 68877 29(38).
O. Emersleben gives 29(353) in place of the last two figures 38 so that our
result appears just good to 15D. NBS Handbook (p. 2) gives 29(35).
3.6. Solution
We find, by elementary algebra, when xₙ₊₁ = (3/2)X(xₙ) - (1/2)Y(xₙ), that
xₙ₊₁ - √N = (3/2)[(1/2)(xₙ + Nxₙ⁻¹) - √N] - (1/2)[{2xₙ³/(3xₙ² - N)} - √N]
          = (3/(4xₙ))(xₙ - √N)² - (xₙ - √N)²(2xₙ + √N)/(2(3xₙ² - N))
          = (xₙ - √N)³(5xₙ + 3√N)/(4xₙ(3xₙ² - N)).
Chapter 3
179
3.7. Solution
Draw the graph of y = x[4 - Nx³]/3. It is obvious geometrically that if
0 < x₀ < 4^{1/3}N^{-1/3} the sequence will converge quadratically to N^{-1/3}. To
prove this we note the identity
xₙ₊₁ - N^{-1/3} = -(N/3)[xₙ - N^{-1/3}]²[xₙ² + 2xₙN^{-1/3} + 3N^{-2/3}].
then the above equation shows that T_{p+2} = T_{p+1} + T_p and the solution to this
Fibonacci difference equation (cf. Chapter 10) is T_p = Aθᵖ + Bθ⁻ᵖ ~ Aθᵖ.
3.10. Solution
In case i) show that Tₙ decreases steadily to √N.
In case ii) show that T₂ₙ decreases steadily to √N - this means that we
can obtain upper and lower bounds for √N.
For further information on this problem, and extensions to deal with
the solution of quadratics and the determination of cube and other roots, see
A. N. Khovanskii. The application of continued fractions and their generalizations to problems in approximation theory, Noordhoff, 1963.
3.11. Solution
This problem is fully discussed theoretically by K. E. Hirst, J. London Math. Soc. (2) 6 (1972), 56-60 and earlier in E. T. Whittaker and G. Robinson, The Calculus of Observations, Blackie, London (4th ed.), 1944, p. 79.
3.13. Solution
Suppose x_0 > 0. Then x_{n+1} ≷ √N according as x_n ≷ √N. Also any limit l satisfies l² = Nl, i.e., l = 0 or l = √N.
3.16. Solution
This is not entirely trivial. See e.g., J. L. Blue, ACM Trans. Math.
Software 4 (1978), 15-23.
Chapter 4
4.1. Solution
-0.882027570.
4.2. Solution
0.67434714 ± 1.1978487 i; the remaining root is -2.
4.3. Solution
If q(x) = q_0 x^{n-2} + ... + q_{n-3}x + q_{n-2} and r(x) = q_{n-1}x + q_n then, for k = 0, 1, 2, ..., n - 2, ...
4.4. Solution
We take the case of a double root. Then f(ξ) = f'(ξ) = 0 but f''(ξ) ≠ 0. We find

x_{n+1} - ξ = (1/2)(x_n - ξ) + O((x_n - ξ)²)
(8)
H(-1) = 26 > 0
Next,
(9)
H'(x) = 9x² - 3 > 0   if |x| > 1/√3.
Then
(10)
Finally the tangent to the curve y = H(x) at (-1, 26) has equation

(y - 26)/(x + 1) = 6,   i.e.,   y = 6x + 6 + 26 = 6x + 32.
x_1 = -(4/3) = -1.3333,
x_2 = -(148/117) = -1.2650,
x_3 = -1.2406,
lim x_n = -1.2600.
= (x -a)q(x)+ r.
t give
i.e.,
(x - a)q(x)+ fn = f(x).
4.7. Solution
0.77091 70,
4.8. Solution
See M. Davies and B. Dawson, Numer. Math. 24 (1975), 133-135,
George H. Brown, Jr., Amer. Math. Monthly 84 (1977), 726-727.
We may assume the zero at ξ = 0 and we use Maclaurin series for convenience instead of the Taylor series used on page 49. We assume f⁽⁴⁾(x) is bounded in the neighborhood of the origin. Then
2f(x)f'(x)/[2f'(x)² - f(x)f''(x)]
  = x[2f' + xf'' + (1/3)x²f''' + O(x³)][f' + xf'' + (1/2)x²f''' + O(x³)]
    / {2[f' + xf'' + (1/2)x²f''' + O(x³)]² - x[f' + (1/2)xf'' + (1/6)x²f''' + O(x³)][f'' + xf''' + O(x²)]}
  = x[2f'² + 3xf'f'' + x²(f''² + (4/3)f'f''') + O(x³)]
    / [2f'² + 3xf'f'' + x²((3/2)f''² + f'f''') + O(x³)],

the derivatives being evaluated at 0.
The new estimate is therefore
4.4934094579, 7.7252518369.
a = 1, b =~, c = U.
4.10. Solution
f(a) = -1,   f(b) = 1/2,
f'(x) = -4x ≠ 0 in [a, b].
The tangent to the curve at x = b has equation y = 2x + 3/2 and cuts the x-axis at x = -3/4, between a and b.
Thus all the conditions (8) (9) (10) (11) are satisfied.
If x_0 = -0.8 then x_1 = -0.7125 < ξ = -2^{-1/2} = -0.7071 while if x_0 = -0.6 > ξ then x_1 = -0.7142 < ξ.
Chapter 5
5.4 Solution
Suppose lim M_n = 0. Take any ε > 0 and then n_0 = n_0(ε) such that M_n ≤ ε if n ≥ n_0; this implies that |r_n(x)| ≤ ε for all x and all n ≥ n_0. Hence n_0(ε, x) ≤ n_0 and so n_0(ε) is finite.
On the other hand, suppose n_0(ε) < ∞ for all ε > 0.
If M_n → 0 is false there is an ε > 0 such that for every N (however large) there is an n'' > N such that M_{n''} > ε. Choose N = n_0(ε). Then |r_n(x)| < ε for n > N and so

M_{n''} ≤ ε,

a contradiction.
5.5. Solution
The solutions are 1; x; x² + [x(1-x)]/n and x³ + [3x²(1-x)]/n + [x(1-x)(1-2x)]/n². We shall establish the result in the cases k = 1 and k = 2.
In the case k = 1 we have

Σ (r/n)(n choose r) x^r (1-x)^{n-r} = x Σ (n-1 choose r-1) x^{r-1}(1-x)^{(n-1)-(r-1)} = x.

The general term in the case k = 2, which is

(n choose r) x^r (1-x)^{n-r} (r/n)²,

can be split into two by cancelling out an r and an n as in the case k = 1 and then writing r = (r-1) + 1. We find

Σ (n choose r)(r/n)² x^r (1-x)^{n-r}
  = ((n-1)/n) Σ (n-2 choose r-2) x^r (1-x)^{n-r} + (1/n) Σ (n-1 choose r-1) x^r (1-x)^{n-r}
  = ((n-1)/n) x² + x/n
  = x² + x(1-x)/n,

as stated.
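The closed forms for the Bernstein polynomials of t² and t³ can be verified numerically; a sketch:

```python
from math import comb

# B_n(f; x) = sum_r C(n,r) x^r (1-x)^(n-r) f(r/n)
def bernstein(f, n, x):
    return sum(comb(n, r) * x ** r * (1 - x) ** (n - r) * f(r / n)
               for r in range(n + 1))

n, x = 20, 0.3
b2 = bernstein(lambda t: t * t, n, x)    # should equal x^2 + x(1-x)/n
b3 = bernstein(lambda t: t ** 3, n, x)   # x^3 + 3x^2(1-x)/n + x(1-x)(1-2x)/n^2
```

Both identities hold exactly (up to rounding), for any n and x.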
5.6. Solution
ql(X) is the Bernstein polynomial, adjusted to the interval [-1,1] and
obtained in Problem 5.8 below.
qz(x) is (approximately) the polynomial of best approximation, obtained
by Remez.
q3(X) is the truncated Legendre expansion.
qix) is obtained by truncating the Chebyshev expansion obtained in
Problem 5.11 below.
In the evaluation of ‖e_i(x)‖ we can confine our attention to 0 ≤ x ≤ 1. The value of ‖e_1(x)‖ is shown to be 0.375 in Problem 5.8. It is less easy to find the other norms exactly and they can be estimated as

max |e_i(x)|,   x = 0(0.01)1,

which gives

‖e_2(x)‖ = 0.067621,   ‖e_3(x)‖ =
)}
2
3+cos 0
1
1+x
b) e,.(x) =
{1!X
-1Tn
(X)}
J2
3+cosO
{!(1- e
2) -
(-l)"J2e n
e"-l cos n6
4
(-1)"e"-1
J2
[4 cos nO+4e cos (n-1)6-3J2cos nO
4 2(3+cosO)
- J2 cos 0 COS nO]
sin φ_1 = 2√2 sin θ/(3 + cos θ)
-W98 * 0.0502.
d) B_1 = 1·(1 - x) + (1/2)·x = 1 - (1/2)x.
5.8. Solution
Since
Consider the difference e*(y) = |y| - B_n(y) in the range [0, 1]. Its derivative
is
5.9. Solution
We deal with
r2,1
we find
a_0 = β_0,
a_1 = β_1 + β_0,
...
and so we find
r2,l(x) = (6+2x)/(6-4x+ x 2),
The general case is handled in the same way. The μ equations arising from t^{ν+1}, ..., t^{μ+ν} involve the μ + 1 coefficients β_0, ..., β_μ, and we can find a non-trivial solution. Putting these in the equations arising from 1, t, ..., t^ν gives the α_0, ..., α_ν directly.
Observe that if x^λ divides D, so that β_0 = β_1 = ... = β_{λ-1} = 0, then x^λ divides N, for α_0 = ... = α_{λ-1} = 0. Thus we can divide out by x^λ and normalize with the constant term in the denominator unity.
Observe also that the rational function N/D is necessarily unique. To see this suppose that N/D and N̄/D̄ are μ, ν entries in the Padé table for f(x). Then Df - N and D̄f - N̄ each involve powers of x larger than μ + ν. Hence E = N̄D - ND̄ involves powers of x larger than μ + ν but E is of degree μ + ν at most: hence E = 0 and so N/D = N̄/D̄.
We can therefore assume N/D irreducible and, say, D(0) = 1. This will give uniqueness.
In some cases the general Padé fraction can be written down explicitly: see e.g., Y. L. Luke (The special functions and their approximations, I, II, Academic Press, New York, 1969) who gives, e.g., information on (1 + z⁻¹)^{1/2}, (1 + z⁻¹)^{1/3}, z log(1 + z⁻¹), z⁻¹ arctan z.
1,             1 + x,               1 + x + x²/2,

1/(1 - x),     (2 + x)/(2 - x),     (6 + 4x + x²)/(6 - 2x),

2/(2 - 2x + x²),   (6 + 2x)/(6 - 4x + x²),   (12 + 6x + x²)/(12 - 6x + x²).
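Any entry of the table can be checked by expanding the rational function in a power series and comparing with the exponential series; a sketch for the (6+2x)/(6-4x+x²) entry (μ = 1, ν = 2, so agreement through degree μ + ν = 3):

```python
from fractions import Fraction
from math import factorial

# Expand num(x)/den(x) as a power series by solving den * c = num
# coefficient by coefficient (den[0] != 0; constant terms first).
def series_of_rational(num, den, order):
    c = []
    for k in range(order + 1):
        s = Fraction(num[k]) if k < len(num) else Fraction(0)
        s -= sum(Fraction(den[j]) * c[k - j]
                 for j in range(1, min(k, len(den) - 1) + 1))
        c.append(s / den[0])
    return c

coeffs = series_of_rational([6, 2], [6, -4, 1], 3)
exp_coeffs = [Fraction(1, factorial(k)) for k in range(4)]
```

Exact rational arithmetic shows the first four coefficients are 1, 1, 1/2, 1/6, i.e. those of e^x.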
5.11. Solution
a) Suppose |cos θ| = (1/2)a_0 + Σ a_n cos nθ. If we multiply across by cos mθ and integrate between -π, π we get, on the right, πa_m. Similarly, on the left, we get

I_m = ∫_{-π}^{π} cos mθ |cos θ| dθ = 2∫₀^{π} cos mθ |cos θ| dθ
    = 2∫₀^{π/2} cos mθ cos θ dθ - 2∫_{π/2}^{π} cos mθ cos θ dθ.
The second integral is reduced to the first by putting θ = π - φ; splitting the integral as ∫₀^{π/2} + ∫_{π/2}^{π} and evaluating each part gives the coefficients. It can be shown that if
(1)   J_n(z) = (2π)⁻¹ ∫_{-π}^{π} cos(nθ - z sin θ) dθ
then, if n is odd,

(2)   J_n(z) = (1/π)[∫₀^{π} cos nθ cos(z sin θ) dθ + ∫₀^{π} sin nθ sin(z sin θ) dθ].

Putting θ = π - φ shows that the first integral vanishes for n odd so that in this case

J_n(z) = (1/π) ∫₀^{π} sin nθ sin(z sin θ) dθ,   n odd.
5.14. Solution
The last approximation to J_0(x) is due to E. E. Allen (see NBS Handbook, p. 369) and it is asserted that for -3 ≤ x ≤ 3, |e_n(x)| ≤ 5 × 10⁻⁸.
5.15. Solution
For x = 0, every term is zero and so the sum is zero. For x ≠ 0 the series is a geometric one with first term x² and common ratio (1 + x²)⁻¹ < 1; it is therefore convergent with sum

x²/(1 - (1 + x²)⁻¹) = 1 + x².
5.16. Solution
Since s_n(0) ≡ 0, lim s_n(0) = 0. For x ≠ 0, by Problem 2.8, lim s_n(x) = 0. Hence s(x) = lim s_n(x) is identically zero in [0, 1]. Hence ∫₀¹ s(x) dx = 0. On the other hand, integrating by parts,

∫₀¹ s_n(x) dx = ∫₀¹ (nx)e^{-nx}(n dx) = ∫₀ⁿ t e^{-t} dt
             = [-t e^{-t}]₀ⁿ + ∫₀ⁿ e^{-t} dt = -(n + 1)e^{-n} + 1 → 1, as n → ∞.
We show that there is non-uniform convergence. It is more convenient to use the M_n definition. Since s_n(x) - s(x) = s_n(x) we need to
evaluate M_n = max s_n(x). Since s_n'(x) = n²[1 - nx]e^{-nx}, s_n(x) increases from 0 at x = 0 to n/e at x = n⁻¹, and decreases to n²/eⁿ at x = 1. Clearly M_n = n/e and this is certainly not a null sequence.
Chapter 6
6.3. Solution
Try, for instance, x_n = 2(1/2)ⁿ + 3(1/3)ⁿ + 4(1/4)ⁿ.
6.4. Solution
See John Todd and S. E. Warschawski, On the solution of the
Lichtenstein - Gerschgorin integral equation in conformal mapping, II, NBS
Applied Math. Series 42 (1955), 31-44, esp. 41.
6.5. Solution
See J. Liang and John Todd, The Stieltjes Constants. J. Research Nat.
Bureau Stand. 76B (1972), 161-178. The first sum is 0.1598689037, the
second 0.0653725926 and the third 0.0094139502.
6.6. Solution
Σ a_n xⁿ = (1/(1 - x)) Σ (x/(1 - x))ⁿ Δⁿa_0.
6.7. Solution
By induction, beginning with Δ⁰a_0 = (-1)⁰[a_0]. If Δⁿa_0 is in the given form we have

Δⁿ⁺¹a_0 = Δ(Δⁿa_0) = Δⁿa_1 - Δⁿa_0
        = (-1)ⁿ[a_1 - (n choose 1)a_2 + ... + (-1)ⁿa_{n+1}] - (-1)ⁿ[a_0 - (n choose 1)a_1 + ... + (-1)ⁿa_n]
        = (-1)ⁿ⁺¹[a_0 - (n+1 choose 1)a_1 + ... + (-1)ⁿ(n+1 choose n)a_n + (-1)ⁿ⁺¹a_{n+1}].
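The closed form proved above can be checked against direct differencing; a sketch with an arbitrary test sequence:

```python
from math import comb

# Closed form of 6.7: Delta^n a_0 = (-1)^n [a_0 - C(n,1)a_1 + ... + (-1)^n a_n]
def delta_n(a, n):
    # n-fold forward difference of the sequence a, taken at index 0
    seq = list(a)
    for _ in range(n):
        seq = [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]
    return seq[0]

a = [5, 3, 8, 1, 9, 2]   # arbitrary illustrative data
n = 4
closed = (-1) ** n * sum((-1) ** k * comb(n, k) * a[k] for k in range(n + 1))
```

For the data above, Δ⁴a_0 = 46 both ways.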
∫₀¹ (1 - x)ⁿ dx = [x - (n choose 1)x²/2 + ... + (-1)ⁿ xⁿ⁺¹/(n + 1)]₀¹
               = 1 - (n choose 1)/2 + ... + (-1)ⁿ/(n + 1),

and the result follows from the series, valid for |x| < 1, by putting x = 1/2.
6.8. Solution
Observe that

Δⁿu_r = (-1)ⁿ ∫₀¹ x^r (1 - x)ⁿ dx = (-1)ⁿ r! n!/(n + r + 1)!
6.9. Solution
We use the result of Problem 6.7 above to compute Δ^m f(0) when f(s) = n^{-s}. In fact

Δ^m f(0) = (-1)^m [1 - n^{-1}]^m.

The Euler transform of Σ (-n)^{-s} (which has sum n/(n + 1) when n > 1) is therefore

Σ_{s=0}^{∞} (1/2)^{s+1}(1 - n^{-1})^s.
6.10. Solution
We consider using a head of r terms

1 - 1/3 + 1/5 - ... + (-1)^{r-1}/(2r - 1)

and applying the transformation to the tail [1/(2r + 1) - ...]. Here

Δⁿu_r = (-1)ⁿ ∫₀¹ x^{2r}(1 - x²)ⁿ dx = (-1)ⁿ ∫₀^{π/2} sin^{2r}θ cos^{2n+1}θ dθ
      = (-1)ⁿ 2ⁿ n!(2r - 1)!!/(2n + 2r + 1)!!,

with (2r - 1)!! read as 1 when r = 0.
These results are easily established using the reduction formulas for
I(m, n) = ∫₀^{π/2} sin^m x cos^n x dx,   I(0, 1) = ∫₀^{π/2} cos x dx = 1.

Integration by parts gives

I(m, n) = [sin^{m+1}x cos^{n-1}x/(m + 1)]₀^{π/2} + ((n - 1)/(m + 1)) ∫₀^{π/2} sin^{m+2}x cos^{n-2}x dx

so that

(m + 1)I(m, n) = (n - 1) ∫₀^{π/2} sin^{m+2}x cos^{n-2}x dx = (n - 1)[I(m, n - 2) - I(m, n)].

Hence

(m + n)I(m, n) = (n - 1)I(m, n - 2).
We therefore obtain, if we start the transform from the beginning,

π/4 = Σ_n u_n.

Observe that this series is much more rapidly convergent than the Gregory series. In fact

u_n = (1/2)·2ⁿ n!²/(2n + 1)! ~ (1/2)√(π/n)·2^{-n-1}.
We can check that the new series actually has the proper sum, e.g., as
follows. Consider the Maclaurin series for
so that
If we put x = 0 then ... This gives

f(x) = x + (2²/3!)x³ + (2²·4²/5!)x⁵ + ....

Putting x = 1/√2 we find

(π/4)√2 = (1/√2)[1 + 1/3 + (1·2)/(3·5) + ...].
= (1/2)(I - M)⁻¹v_0 = (1/2)[v_0 + Mv_0 + M²v_0 + ...].
6.12. Solution
by induction hypothesis
give a direct proof in the present special case. Because (n choose 0) + (n choose 1) + ... + (n choose n) = (1 + 1)ⁿ = 2ⁿ we have ... and therefore it will be sufficient to establish the result when {s_n} is a null sequence. For convenience put a_{n,r} = 2⁻ⁿ(n choose r) so that 1 ≥ a_{n,r} > 0, Σ_{r=0}ⁿ a_{n,r} = 1.
Notice that for r fixed a_{n,r} is a polynomial in n (of degree r) divided by an exponential 2ⁿ = e^{n log 2} - hence a_{n,r} → 0 as n → ∞. [Compare Problem 2.8.]
Given any ε > 0, we can find n_0 = n_0(ε) such that |s_n| < (1/2)ε if n ≥ n_0.
Then
1
        -1/2
1/2              1/3
        -1/6            -1/4
1/3              1/12
        -1/12
1/4

r = 4:   1 - 1/2 + 1/3 - 1/4 = 112/192;   192 logₑ 2 = 133.08.
6.15. Solution

∫₀¹ dx/(1 + x^r) = ∫₀¹ [1 - x^r + x^{2r} - ...] dx = Σ₀^∞ (-1)ⁿ/(1 + rn);

in particular Σ₀^∞ (-1)ⁿ/(1 + 3n) = ∫₀¹ dx/(1 + x³), and

∫₀¹ dx/(1 + x³) = [(1/3) log((x + 1)/√(x² - x + 1)) + (1/√3) arctan((2x - 1)/√3)]₀¹
               = (1/3) log 2 + π/(3√3) = 0.83564885.

For the other integrals see Problem 9.13. The results are

Σ₀^∞ (-1)ⁿ/(1 + 4n) = 0.86697299,
Σ₀^∞ (-1)ⁿ/(1 + 5n) = 0.88831357,
Σ₀^∞ (-1)ⁿ/(1 + 6n) = 0.90377177.
6.16. Solution
This gives
N²B_n²(1 - (1/2)ε_n)². The second term on the right is O(ε_n) and swamps the first which is O(ε_n²). Hence ε_{n+2} = O(ε_n²) → 0. However

B_{n+2} = NB_n(1 + O(ε_n)) → ∞.
Chapter 7
7.1. Solution
In the usual notation f(x) = eˣE_1(x) and we find from NBS Handbook, pp. 242, 243
7.2. Solution
We have U(x) = ..., and the tabulated values are:

U(5) = 0.42999 5,    S_2(5) = 0.46594 2,    V(5) = 0.08537 1;
C_2(10) = 0.43696 4,  U(10) = 0.15800 8,   S_2(10) = 0.60843 6,   V(10) = -0.27180 9;
C_2(15) = 0.56933 5,  U(15) = -0.17379 7,  S_2(15) = 0.57580 3,   V(15) = -0.19001 0;
C_2(20) = 0.58038 9,  U(20) = -0.20150 5,  S_2(20) = 0.46164 6,   V(20) = 0.96139 2.
1.0000000 -0.05555 56
0.00925 93 -0.00257 20
0.0010002 -0.0005001
0.00030 56 -0.0002207
0.00018 34 -0.00017 37 ~ least term
0.0001834
x=5:
x=6:
The correct values of erfc x were obtained from pp. 242, 295 of National Bureau of Standards Applied Math. Series #41, Tables of the error function and its derivatives, Washington, D.C., 1954. From these the correct values of g can then be obtained from

g(x) = √π x e^{x²} erfc x.
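The least-term truncation illustrated by the table of terms above can be sketched with the standard asymptotic series for erfc (an assumption here: this is the series behind g(x); the sum is stopped as soon as the terms stop decreasing):

```python
import math

# erfc(x) ~ e^{-x^2}/(x*sqrt(pi)) * (1 - 1/(2x^2) + 1*3/(2x^2)^2 - ...),
# summed to its least term (the classical rule for asymptotic series).
def erfc_asymptotic(x):
    prefac = math.exp(-x * x) / (x * math.sqrt(math.pi))
    term, total, k = 1.0, 0.0, 0
    while True:
        nxt = term * -(2 * k + 1) / (2 * x * x)
        total += term
        if abs(nxt) >= abs(term):   # stop at the least term
            break
        term, k = nxt, k + 1
    return prefac * total
```

At x = 5 the least term is of relative size about 10⁻¹¹, so the truncated sum agrees with math.erfc to that accuracy.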
1/(u + x) = (1/x)[1 + u/x]⁻¹ = (1/x) Σ_{r=0}^{n-1} (-u)^r/x^r + (-u)ⁿ/(xⁿ(u + x)),

so that the remainder r_n(x) is a multiple of ∫₀^∞ uⁿ⁻¹e^{-u²} du. Hence |r_n(x)| does not exceed the absolute value of the first term omitted.
The integrals occurring can be expressed in terms of the Gamma function (by using the change of variables from u² to t):

∫₀^∞ e^{-u²}u^{r-1} du = (1/2)Γ((1/2)r).
rm = J;. we find
The error being bounded above by the absolute value of the first term
omitted we see that, correct to 4 decimals for x ~ 0,
.;; {1 I} -2X2-1
F(x)=- -+2 x 2x 3
since
1 1
-4
Ir3 (x) I<-4:5-X
10 .
2x
oo ue-u2 du
u+x
1
2x
1 roo e- u2 du
21
(u+xY
Write u/(u+x) in the left hand side of the above relation as 1-[x/(u+x)]
and we find
foo
o
F(x)=
foo e- u2 du
-1
(U+X)2'
F(x) = ∫₀^∞ e^{-t²} dt/(x + t),

and note that for t ~ 0, e^{-t²} ~ 1/(1 + t²). We therefore suspect that F(x) behaves like

F_0(x) = ∫₀^∞ dt/((1 + t²)(x + t))

and, integrating,

F_0(x) = (1/(1 + x²))[log((x + t)/(1 + t²)^{1/2}) + x arctan t]₀^∞
       = (1/(1 + x²))[-log x + (1/2)πx].
We therefore have

F(x) - F_0(x) = ∫₀^∞ [e^{-t²} - 1/(1 + t²)] dt/(x + t),

so that

lim_{x→0} {F(x) + log x} = (1/2)∫₀^∞ [e^{-τ} - 1/(1 + τ)] dτ/τ.

Easy manipulations of the integrals in the lemma (p. 80) show that

∫₀^∞ [e^{-τ} - 1/(1 + τ)] dτ/τ = -γ,

so that

lim_{x→0} {F(x) + log x} = -(1/2)γ.
(1)
x2
x- 1 (e X2 -1).
e^{x²}F(x) + log x = √π Σ_{n=0}^∞ x^{2n+1}/(n!(2n + 1)) - Σ_{n=1}^∞ x^{2n}/(n!(2n)) + const.
-h and hence
7.7. Solution
The standard definitions of the Fresnel integrals are

C_2(x) = (1/√(2π)) ∫₀^x (cos t/√t) dt = (1/2)∫₀^x J_{-1/2}(t) dt,
S_2(x) = (1/√(2π)) ∫₀^x (sin t/√t) dt = (1/2)∫₀^x J_{1/2}(t) dt.

C(x), S(x) and C_2((1/2)πx²), S_2((1/2)πx²) are tabulated in the NBS Handbook. In the present context it is more convenient to have C_2(x), S_2(x). If we write u = √(x + t) we find
f(x) = 2 cos x ∫_{√x}^∞ sin u² du - 2 sin x ∫_{√x}^∞ cos u² du.
sin 1 = 0.84147 1,   cos 1 = 0.54030 2,   C_2(1) = 0.72170 6,

to get

f(1) = 0.80952 55.
rJi
= -2nIn -2nxIn - 1
so that
Also
Hence
12 ,+1
22(2r + 1)(2r)x 2
= (4r+ 1)(4r+3) 12 ,-1
so that
I
_ -22,+I2r+ 1)!)X(4'+3)/2
2,+1 -
(4r+3)!!
and
f(x) = √((1/2)π)(cos x - sin x) - 2 Σ_{r=0}^∞ (-1)^r I_{2r+1}/(2r + 1)!.
Checking the ratio of terms in the last series, or estimating the general
term, we see that convergence is reasonable for, e.g., 0:5 x:5 3.
An asymptotic representation of f(x) can be found by repeated integration by parts. Thus

f(x) = [-(x + t)^{-1/2} cos t]₀^∞ - (1/2)∫₀^∞ cos t dt/(x + t)^{3/2}
     = x^{-1/2} - (1/2)∫₀^∞ cos t dt/(x + t)^{3/2}.

Generally we find

f'(x) = -(1/2)∫₀^∞ sin t dt/(x + t)^{3/2},   f''(x) = (3/4)∫₀^∞ sin t dt/(x + t)^{5/2}.

If we integrate the expression for f(x) by parts once more we get

f(x) = x^{-1/2} - (3/4)∫₀^∞ sin t dt/(x + t)^{5/2} = x^{-1/2} - f''(x),

the required equation which can be used in the intermediate range 3-10.
x      f(x)
0      1.2533141
1      0.8095255
2      0.6429039
3      0.5468906
4      0.4828942
5      0.436552
10     0.314027
15     0.257368
20     0.223196
7.8. Solution
We have ... Hence

x^{2n-1}|G(x) - {...}| = (2n)!/x² → 0 as x → ∞,

and also

x^{2n}|G(x) - {...} + O(x^{-2n})| = (2n)!/x → 0 as x → ∞,

so that

G(x) ~ 1/x - 2!/x³ + 4!/x⁵ - ....
For x = 5, two terms give G(x) to within 10⁻²; for x = 10 five terms give G(x) to within 4 × 10⁻⁵ and for x = 15 seven terms give G(x) to within 2 × 10⁻⁷.
7.9. Solution
The only trouble is finding the error in the binomial expansion and from
it to show the true asymptotic character of the series derived formally.
Integrating the relation
d
- (1 + t)v = v(1 + t)v-l
dt
from 0 to x we get

(1 + x)^ν - 1 = ν ∫₀^x (1 + t)^{ν-1} dt = ν ∫₀^x (1 + x - τ)^{ν-1} dτ.

Repeated integration by parts gives

r_{n+1}(x) = ν(ν - 1)···(ν - n) ∫₀^x (1 + x - t)^{ν-n-1}(tⁿ/n!) dt,

i.e., the remainder is less in absolute value than the first omitted term.
where
tn
2nn!
x2n
and where
.(2n + 1) yn+1
2n+1(n + 1)!
x 2n '
rn+1
(1)
=f
1...
n=O
2r
2r
n+1
where
Chapter 8
8.2. Solution
+ ...
+(x - X1)(X - x2)(x - X3) ... (x - Xn-1)'
l'(~)
=n (~- Xj),
the
8.6. Solution
We can write down L_3(x) in the general form and simplify it to L_3(x) = x(2x² + x - 2). Draw a graph of L_3(x) using its values at the nodes and the fact that it has turning points at x = (-1 ± √13)/6 with values (19 ∓ 13√13)/54.
The error can be written down directly by noting that it is a polynomial
of degree 4 with leading coefficient 1 which vanishes at 1, 0, 2. Thus
The results just obtained indicate how the error in interpolation varies
with the position at which we interpolate - as common sense suggests, it is
better to interpolate near the center of a table, than at its ends.
8.7. Solution
We have L_{-1}(x) = -x(x - 1)(x - 2)/6, etc. giving

          L_{-1}    L_0      L_1      L_2
x = 0.4   -0.064    0.672    0.448    -0.056

To find the interpolant f(0.4) given f_{-1}, f_0, f_1, f_2 we have just to form the scalar product of these numbers with the numbers in the row labelled x = 0.4 above.
For further information on interpolation, and much good advice on
computation, see Interpolation and Allied Tables.
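The row of coefficients can be regenerated directly (and the check that the weights sum to 1 applied); a sketch:

```python
# Lagrangian coefficients at x = 0.4 for the equally spaced nodes -1, 0, 1, 2;
# the interpolant is the scalar product of these weights with (f_{-1}, f_0, f_1, f_2).
def lagrange_weights(nodes, x):
    ws = []
    for i, xi in enumerate(nodes):
        w = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                w *= (x - xj) / (xi - xj)
        ws.append(w)
    return ws

weights = lagrange_weights([-1.0, 0.0, 1.0, 2.0], 0.4)
# weights come out as -0.064, 0.672, 0.448, -0.056
```

Since the weights reproduce f(x) = 1 exactly, they must sum to 1, a useful check on any such row.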
8.8. Solution
Since (sin x)'' = -sin x and since |sin x| ≤ 1 the error does not exceed (1/8) × (0.2)² × 1 = 0.005. This means that it is not reasonable to use more than two decimals in the table since the round-off error would then be much
8.9. Solution
If f(x) is a polynomial of degree n at most then

f(x) = Σ_{i=0}ⁿ f(x_i)l_i(x).

If we take f(x) = x^j we get S_0 = 1, ...
To deal with the case j = n + 1 we note that if

π_n(x) = (x - x_0)(x - x_1)···(x - x_n),   then π_n(x_i) = 0.

Special case: nodes 1, 2, 3, 4:

l_0(x) = (x - 2)(x - 3)(x - 4)/(-6),    l_1(x) = (x - 1)(x - 3)(x - 4)/2,
l_2(x) = (x - 1)(x - 2)(x - 4)/(-2),    l_3(x) = (x - 1)(x - 2)(x - 3)/6,

so that, with π_3(x) = (x - 1)(x - 2)(x - 3)(x - 4),

S_0 = 1,   S_1 = S_2 = S_3 = 0,   S_4 = -24.
8.10. Solution
Lemma. If c_0, c_1, ..., c_n are the coefficients in (1 + x)ⁿ then

c_0 - c_1 + c_2 - c_3 + ... + (-1)^r c_r = (-1)^r (n - 1)!/(r!(n - r - 1)!).

Proof: the sum is the coefficient of x^r in

(1 + x)ⁿ[x^r - x^{r-1} + x^{r-2} - ... + (-1)^r],

i.e., the coefficient of x^r in (-1)^r[1 - x + x² - ...][1 + x]ⁿ, i.e., in (-1)^r(1 + x)^{n-1}, which is the result given.
L,.(1) is clearly +1 since 1 are always nodes. Hence L,.(1)~ 1.
If n is odd it is clear from symmetry that L,. (0) = O. We deal now with
L 2n (0). We write
k=1,2, ... , n,
=(-1)n-1[(2n-3)!!]2
2 2n - 1
=(-1)
(signxk)(2n-l)
n-1 [(2n-3)!!]2
k.
(2n-l)
k-l .
Remembering that x_1, ..., x_n are negative and x_{n+1}, ..., x_{2n} positive and that the coefficients in a binomial series are symmetric we see that the sum in the last expression is just

2[c_0 - c_1 + ...]

where c_0, c_1, ... are the coefficients in (1 + x)^{2n-1}. From the lemma this is
(2n-2)!
Hence
L 2n (0)
J2
[ 2(2n-3)!!
n- l (n -I)! .
L2n(0)~(1Tn)-1.
Indeed Stirling's
so that
IL
L Z..'( n _.)'~1.
I.
i=O
However

Σ_i 1/(i!(n - i)!) = (1/n!) Σ_i (n choose i) = 2ⁿ/n!.

Hence μ ≥ n!/2ⁿ but since we may take f_i = (-1)^i n!/2ⁿ we can have equality and uniqueness, apart from sign.
8.12. Solution
4(~)
and again
H'(x_i) = [2l_i(x_i)l_i'(x_i) - 2l_i'(x_i)]f_i + f_i' = f_i'.

With E(z) defined by (12) it is clear that E(x) = 0 and E(x_i) = 0 for i = 0, 1, ..., n. Hence E'(z) vanishes for (n + 1) values of z distinct from the x_i. But it is clear that E'(x_i) = 0 for i = 0, 1, ..., n. Thus E'(z) vanishes for 2n + 2 distinct values of z. Repeated applications of Rolle's Theorem show that E^{(2n+2)}(z) must vanish somewhere in the interval including x and the x_i, say at ξ. Since H is a polynomial of degree 2n + 1 at most we have
xY]
n~i>j
For n = 1,

V_1 = det [1  x_0; 1  x_1] = x_1 - x_0.
= k(x - x_1)···(x - x_{r+1}),   k = Π_{r≥i>j≥1} (x_i - x_j),

and so

V(x_0, x_1, ..., x_{r+1}) = Π_{r+1≥i>j} (x_i - x_j).
xC:
x~
x~
1n(x)
x = 2.50781103.
There is a zero of f(x) between 2 and 3. Using all six points for Lagrangian interpolation to subtabulate indicates a zero between 2.5 and 2.6 since

f(2.5) = -265784,   f(2.6) = +3136604.

Estimating the position of the zero by linear interpolation gives 2.5078. We therefore subtabulate again getting

f(2.507) = ...,   f(2.508) = ...,   f(2.509) = ...
8.15. Solution
For discussions of wide generalizations of parts of this problem see, e.g.
P. W. Gaffney, J. Inst. Math. Appl. 21 (1978), 211-226.
C. A. Micchelli, T. J. Rivlin, S. Winograd, Numer. Math. 26 (1978),
191-200.
For instance, if f(0) = 0, f(1) = 1 and |f'(x)| ≤ 2 in [0, 1] then the graph of f(x) must be within the left hand parallelogram. If f(0) = 0, f(1) = 1 and |f''(x)| ≤ 2 in [0, 1] the graph of f(x) must lie within the lens shaped region on the right, bounded by y = 2x - x².
8.16. Solution
We want to show that there is a cubic H(x) = a + bx + cx² + dx³ such that

(1)   H(x_i) = f_i,   H'(x_i) = f_i',   i = 0, 1.

The determinant of the system is

h = det [1  x_0  x_0²  x_0³;  0  1  2x_0  3x_0²;  1  x_1  x_1²  x_1³;  0  1  2x_1  3x_1²].
o 1 2XI 3xi
Taking row2 -row l and row3 -rOWI we find, on dividing through the second
and third rows by (Xl - xo),
and
-6foh~2 + 6flh~2 - 4nh~l- 2f~h~l.
Equating these gives a linear equation for f~ with coefficient 4h=1 +4h~1 f= O.
Hence f~ is determined uniquely.
n,
3X2
+1
in [0, 1].
8.18. Solution
For simplicity suppose the three points are equally spaced and, without loss, take them to be x_{-1} = -1, x_0 = 0, x_1 = 1. Then we have f_0 = a, f_{-1} = a - b + c, f_1 = a + b + c so that b = (1/2)(f_1 - f_{-1}) and c = (1/2)(f_{-1} - 2f_0 + f_1). It is clear that x̂ = -b/(2c) and f̂ = a - b²/(4c). For instance, given f_{-1} = -0.3992, f_0 = -0.4026, f_1 = -0.4018 we estimate

x̂ = 13/42 = 0.3095,   f̂ = -0.4028.
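A sketch of the computation (the extremum estimate follows directly from the fitted coefficients):

```python
# Fit a parabola a + b*x + c*x^2 through equally spaced points at
# x = -1, 0, 1 and estimate the extremum position and value.
def parabola_extremum(fm1, f0, f1):
    a = f0
    b = 0.5 * (f1 - fm1)
    c = 0.5 * (fm1 - 2 * f0 + f1)
    xhat = -b / (2 * c)
    fhat = a - b * b / (4 * c)
    return xhat, fhat

xhat, fhat = parabola_extremum(-0.3992, -0.4026, -0.4018)
# xhat = 13/42 = 0.3095..., fhat = -0.40280...
```

The same three lines are the basis of the familiar "maximum from three tabular values" rule.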
8.19. Solution
This is Newton's interpolation formula. It is, by uniqueness, necessarily
a rearrangement of the Lagrangian expression. It can be obtained formally
by writing
The quartic is
so that q(2.5) = 2884.
x    q(x)     Δ      Δ²     Δ³     Δ⁴
0     789
              567
1    1356            345
              912            123
2    2268            468           200
             1380            323
3    3648            791           200
             2171            523
4    5819           1314
             3485
5    9304
The constant term in the quartic is clearly q(O) = 789 and the leading
term must be (200/24)x 4 to give a constant fourth difference of 200. If we
subtract off these terms and divide through by x we are left with a quadratic
whose coefficients are those of the three middle terms of the quartic.
The Newton interpolation formula is:

q(2.5) = 789 + (2.5 × 567) + (2.5 × 1.5)345/2! + (2.5 × 1.5 × 0.5)123/3! + (2.5 × 1.5 × 0.5 × (-0.5))200/4!,

the successive partial sums being

2206.5,   2853.375,   2891.8125,   2884.
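The whole computation can be reproduced from the tabulated values alone; a sketch of Newton forward interpolation:

```python
# Newton forward interpolation from the leading entries of each
# difference column of the tabulated values.
def newton_forward(values, s):
    diffs, col = [], list(values)
    while col:
        diffs.append(col[0])                                   # leading entry
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    total, coeff = 0.0, 1.0
    for k, d in enumerate(diffs):
        total += coeff * d
        coeff *= (s - k) / (k + 1)        # builds s(s-1)...(s-k)/(k+1)!
    return total

q25 = newton_forward([789, 1356, 2268, 3648, 5819, 9304], 2.5)
# q25 evaluates to 2884.0
```

All quantities here are dyadic rationals, so the floating-point result is exact.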
Chapter 9
9.1. Solution
L_{-1}(x) = x(x - 1)(x - 2)/((-1)(-2)(-3)) = -(1/6)[x³ - 3x² + 2x],

and

∫_{-1}^{2} L_{-1}(x) dx = 3/8,   ∫_{-1}^{2} L_0(x) dx = 9/8,

etc., giving the '3/8 Rule'

∫_{-1}^{2} f(x) dx ≈ (3/8)[f(-1) + 3f(0) + 3f(1) + f(2)].

The fact that the sum of the weights is 3, which is the integral of f(x) = 1 between -1, 2, is a check.
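The weights can be checked by integrating the Lagrange basis polynomials; since each basis function here is a cubic, a single Simpson application (exact for cubics) recovers each integral exactly. A sketch:

```python
# Each Lagrange basis polynomial for the nodes -1, 0, 1, 2 is a cubic,
# so Simpson's rule (exact for polynomials of degree <= 3) gives its
# integral over [-1, 2] exactly; these integrals are the quadrature weights.
def lagrange_basis(nodes, i, x):
    w = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            w *= (x - xj) / (nodes[i] - xj)
    return w

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

nodes = [-1.0, 0.0, 1.0, 2.0]
weights = [simpson(lambda x, i=i: lagrange_basis(nodes, i, x), -1.0, 2.0)
           for i in range(4)]
# weights come out as 3/8, 9/8, 9/8, 3/8, summing to 3
```

This is the general recipe behind all interpolatory quadrature rules: weights are integrals of the basis polynomials.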
We shall show that if Ie C 4 [-1,2] then
t:
1
3k
-3k
_~h5/(4)(~)
i.e.,
~E'(h) = [f-3 - 1-1 - 11 +13]+ h[f~3 +1~1 - I~ - I~];
~E"(h) = -2[f~3 - 1~1 + I~ - I~]- h[3f~3 + 1"--1 +
n+ 3/~];
= -8hr)(~l) -
54hr)(~2) - 2hr)(~3)
= -64hr)(~4)
hr)(~it)) dt = -72f4)(~s)h2;
integrating again
E"(h) = -24hy4)(~6)'
E'(h) = -6h4f4)(~7)'
and, finally,
where
-3h~~~3h.
9.2. Solution
∫_{-2h}^{2h} q(x) dx = (4h/3)[2f(-h) - f(0) + 2f(h)].
We note that this quadrature is of "open" type; it does not involve the
values at the end points and can therefore be used as a "predictor" in the
solution of differential equations e.g., in Milne's method.
We also note that it is to be expected that the error incurred in open
formulas is larger than that in closed ones.
It does not seem possible to obtain an error estimate in this case by the
method used in the previous problem. We use an essentially general method
based on a modification of Steffensen's classical account given by D. R.
Hayes and L. Rubin (Amer. Math. Monthly, 77 (1970), 1065-1072).
Another general method for this type of problem is due to Peano (cf. A.
Ghizzetti and A. Ossicini, Quadrature Formulae, Birkhiiuser and Academic
Press, 1970); see also B. Wendroff, Theoretical Numerical Analysis,
Academic Press, 1966).
Let L(t, x) denote the Lagrangian polynomial based on the nodes
-h, 0, h and let 7T(X) = (x + h)x(x - h). Then we put
(1)
and get
(2)
I - Q = ∫_{-2h}^{2h} π(x)R(x) dx.
This defines R(x) except at the nodes where we define it by continuity. With
this convention it is easy to verify that R(x) is continuously differentiable.
If we write

l(x) = ∫_{-2h}^{x} π(t) dt

and integrate by parts, we find

I - Q = [l(t)R(t)]_{-2h}^{2h} - ∫_{-2h}^{2h} l(t)R'(t) dt = -∫_{-2h}^{2h} l(t)R'(t) dt.

Now as l(x) ≤ 0 in (-2h, 2h) we may apply the Mean Value Theorem to get

I - Q = -R'(ξ) ∫_{-2h}^{2h} l(t) dt,
(3)
Since L,
11'
(4)
~(4)(X) =
(4)(X)-4!R'(x*).
~'(x*)=O
Csuch that
~"(x*) =
~(4)(C) = O.
We observe that
0 if x* is a node.
= f'(x) -
L'(x)-1T(x)R'(x*)-1T'(X)[R(x*) + (x - x*)R'(x*)]
= f'(x)-.L"(x)-21T'(x)R'(x*)-1T"(X)[R(x*)+(x -
x*)R'(x*)J
= 1T(x*)R"(x*)
When x* is a node, φ(x) has three zeros and φ'(x) has two zeros by Rolle's Theorem and an additional distinct one by (5). Hence φ''(x) has two zeros by Rolle's Theorem and an additional distinct one at x* by the second part of (5). Again φ''(x) has three zeros.
In both cases, by Rolle's Theorem, φ⁽⁴⁾(x) has a zero, say at ξ. It then follows from (4) that

f⁽⁴⁾(ξ)/4! = R'(x*)

where ξ depends on x*, but x* was arbitrary. This completes the proof of (3).
9.3. Solution
Assume that m ≤ f''(x) ≤ M in [a, b]. By the error estimate for linear interpolation applied to f(x) in the subinterval [a_r, a_{r+1}], where a_r = a + rh, h = (b - a)/n, we have

f(x) - L(x) = (x - a_r)(x - a_{r+1})f''(c_r)/2!

where a_r ≤ c_r ≤ a_{r+1}. This is true for r = 0, 1, 2, ..., n - 1. Since ∫_{a_r}^{a_{r+1}} (x - a_r)(x - a_{r+1}) dx = -h³/6, integrating and adding over the subintervals gives

m ≤ (12/(nh³)) ∫_a^b [L(x) - f(x)] dx ≤ M.

Now for a fixed h, the middle term above is a constant between m and M; since f''(x) is continuous in [a, b] and bounded there by m, M, it must assume this value (at least once), say at c. Hence

∫_a^b [L(x) - f(x)] dx = T - I = (nh³/12) f''(c) = ((b - a)³/(12n²)) f''(c).
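The (b-a)³f''(c)/(12n²) behaviour means doubling n should divide the trapezoidal error by about 4; a quick sketch:

```python
import math

# Composite trapezoidal rule; its error is (b-a)^3 f''(c) / (12 n^2),
# so halving the step should cut the error by a factor of about 4.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

exact = math.e - 1.0                      # integral of e^x over [0, 1]
e1 = trapezoid(math.exp, 0.0, 1.0, 50) - exact
e2 = trapezoid(math.exp, 0.0, 1.0, 100) - exact
# e1/e2 is close to 4, the 1/n^2 behaviour
```

The residual deviation of the ratio from 4 is itself O(h²), from the next Euler-Maclaurin term.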
9.4. Solution
We give two rather similar proofs.
(1) We verify by integration by parts, that
t:
Since
uv", dx.
we have
t:
(x + h)3(3x - h)P4)(X) dx +
(x - h)(3x + h)3p4)(X) dx = 72
t:
F(x) dx.
Now, L_2(x) being a quadratic, P⁽⁴⁾(x) = f⁽⁴⁾(x) and so |P⁽⁴⁾(x)| ≤ M_4. Hence

M = (4/3)[2f(-1) - f(0) + 2f(1)],   S = (2/3)[f(-2) + 4f(0) + f(2)]

so that

S - M = (2/3)[f(-2) - 4f(-1) + 6f(0) - 4f(1) + f(2)] = (2/3)Δ⁴f(-2).
b - a = 1,   T⁽¹⁾ = (1/4)f(0) + (1/2)f(1/2) + (1/4)f(1)
Since

∫_a^b π_n(x)² w(x) dx = ∫_a^b π_n(x)·k_n xⁿ w(x) dx = 1,

we have ∫_a^b π_n(x)xⁿw(x) dx = k_n⁻¹, and similarly ∫_a^b π_n⁽¹⁾(x)xⁿw(x) dx = (k_n⁽¹⁾)⁻¹. Hence

∫_a^b π_n(x)π_n⁽¹⁾(x)w(x) dx = k_n/k_n⁽¹⁾ = k_n⁽¹⁾/k_n,

so that k_n² = (k_n⁽¹⁾)² and

∫_a^b [π_n(x) - π_n⁽¹⁾(x)]² w(x) dx = 1 - 2 + 1 = 0.
Hence
a contradiction.
9.12. Solution
Let x_1, ..., x_n be the zeros of π_n(x). Let H(x) be the Hermite interpolant introduced in Problem 8.12. If we multiply the error estimate (11), (p. 91), across by w(x) and integrate between [a, b] we get

∫_a^b (f(x) - H(x))w(x) dx = ∫_a^b (f^{(2n)}(ξ(x))/(2n)!) Π_{i=1}^n (x - x_i)² w(x) dx.

Also ∫_a^b H(x)w(x) dx = Σ λ_i H(x_i) = Σ λ_i f(x_i). Thus

I - Q = ∫_a^b (f^{(2n)}(ξ(x))/(2n)!) [Π_{i=1}^n (x - x_i)²] w(x) dx
      = (f^{(2n)}(ξ)/(2n)!) ∫_a^b [Π_{i=1}^n (x - x_i)²] w(x) dx,

using the Mean Value Theorem, since the last integrand is positive.
In the Chebyshev case we have

I - Q = (f^{(2n)}(ξ)/((2n)! 2^{2n-2})) ∫₀^π cos² nθ dθ = π f^{(2n)}(ξ)/((2n)! 2^{2n-1}).
To deal with the Legendre case we note that it can be verified (e.g., by
integration by parts, Apostol II, 179) that if
P_n(x) = [1/(2ⁿ n!)] Dⁿ{(x² - 1)ⁿ}
then
Also
Hence
f 2n )(e) is
9.13. Solution
(a) The indefinite integral is

(1/6) log((x + 1)²/(x² - x + 1)) + (1/√3) arctan(x√3/(2 - x))

when -1 < x < 2. Hence the definite integral is

(1/6) log 4 + (1/√3) arctan √3 = (1/3) log 2 + π/(3√3).
+-log
5
(1
) J(10+2v'5)
+x +
10
arctan
4x+v'S-1
4x-v'S-1
arctan
;
J(10+2v'5)
J(10-2,J5)
JlO-2.j5
10
3-J"S
arctan -;::::====
JlO-2.j5
J"S
TrJ"S
= Slog !(J"S + 1)+! log 2+50 J(10+2.j5) = 0.88831357;
IOg(~J3) +~=0.90377177.
The general results of which these are special cases are due to A. F.
Timofeyev (1933)-see, e.g., formula 2.142 in I. S. Gradshteyn and I. M.
Ryzhik, Table of integrals, series and products, Academic Press, 1965. See
also I. J. Schwatt, An introduction to the operations with series. Philadelphia,
1924.
9.14. Solution
A short table of this integral in which the argument of y is in degrees is
given in NBS Handbook p. 1001. When y =0.5 in radians we find
S(0.5, 0.5) = 0.296657503.
The differences between the given estimates are 9312, 253, 1 in units of
the tenth decimal. The ratio 9312/253 is about the fourth power of the ratio
of the intervals. We may suspect that Simpson's Rule, or some other of the
same accuracy was used. We can use Richardson extrapolation to get a
correction of
(16/(625 - 16)) × 9312 = 245

leading to an estimate of 0.2966575002.
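The correction applied above is Richardson extrapolation for a fourth-order rule; a sketch with step ratio 2 (the book's data used a different step ratio, hence its 16/(625 - 16) factor), demonstrated on composite Simpson for ∫₀^π sin x dx = 2:

```python
import math

def simpson(f, a, b, n):                  # composite Simpson, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

def richardson(s_coarse, s_fine, r, p=4):
    # errors scale like (step)^p; steps are in the ratio r
    return s_fine + (s_fine - s_coarse) / (r ** p - 1)

s_coarse = simpson(math.sin, 0.0, math.pi, 4)
s_fine = simpson(math.sin, 0.0, math.pi, 8)
improved = richardson(s_coarse, s_fine, 2)
```

The extrapolated value removes the h⁴ error term, leaving only the much smaller h⁶ contribution.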
9.15. Solution
These integrals were evaluated by E. Brixy (ZAMM 20 (1940), 236-238); unfortunately some of his results seem wrong. It is desirable to draw rough graphs of the integrands and to check that the integrals all exist. Note
that, e.g., the integrand in I_6 is of the form 0/0 at x = 0 and so some minor adjustment in the program may be necessary.
Since J_0'(x) = -J_1(x) we have

I_5 = -∫₀¹ [J_0'(x)/J_0(x)] dx = [-log J_0(x)]₀¹ = -log J_0(1),

I_6 = ∫₀¹ ([J_1(x) - xJ_1'(x)]/(xJ_1(x))) dx = ∫₀¹ (d/dx) log{x/J_1(x)} dx = [log{x/J_1(x)}]₀¹.
= 2-[x
10g{xIJl(x)}~+
log {xIJ1(x)} dx
I_2 = 2.08773,   I_3 = 0.08673 4,   I_4 = 1.91448 5,   I_7 = 0.04220 4,   I_8 = 0.02795 4.
9.16. Solution
If 0 ≤ θ ≤ (1/2)π then 0 ≤ cos θ ≤ 1 and so

cos^{2n+1}θ ≤ cos^{2n}θ ≤ cos^{2n-1}θ,

which implies I_{2n-1} ≥ I_{2n} ≥ I_{2n+1}, where I_n = ∫₀^{π/2} cosⁿθ dθ. Using the reduction formula nI_n = (n - 1)I_{n-2}, n ≥ 2, we find

I_{2n+1} = (2ⁿ(n!))²/(2n + 1)!.
we get

[2^{n-1}((n - 1)!)]²/(2n - 1)! > [(2n)!/(2^{2n}(n!)²)](π/2) > [2ⁿ(n!)]²/(2n + 1)!

which gives

1 + 1/(2n) > [(2n)!(2n + 1)!/(2^{4n}(n!)⁴)](π/2) > 1,

which is the result required.
9.17. Solution
This is established in a way similar to that used to get Wallis' Formula.
We write

I_n = ∫₀¹ tⁿ dt/(1 - t⁴)^{1/2}

to conclude that ... and

I_{4r+3} = (1/2) · (2·4·····(2r))/(3·5·····(2r + 1)).
9.18. Solution
1=
+3(a~2br +b 4]
9.19. Solution
We have to show that

J = ∫_a^b l_i(x)(1 - l_i(x))w(x) dx = 0.

Since l_i(x_i) = 1 it follows that (x - x_i) divides 1 - l_i(x), the quotient being a polynomial Q_{n-2}(x) of degree at most n - 2. Thus

J = ∫_a^b {(x - x_i)l_i(x)}Q_{n-2}(x)w(x) dx
and 3h 5 /80,
h being the interval. Suppose we use N panels. Then the total error in the Simpson case is

N · [(b - a)/(2N)]⁵ · (1/90)

and that in the 3/8-case is

N · 3[(b - a)/(3N)]⁵ · (1/80),

giving, for error ε,

N = (1/(2880ε))^{1/4}   and   N = (1/(6480ε))^{1/4}
when we use the interval (-1, 1). For a general interval (a, (3) we have to
write

∫_α^β f(x) dx = ∫_{-1}^{1} F(t) dt,   F(t) = (1/2)(β - α) f((1/2)(β - α)t + (1/2)(α + β)),

where t = (2x - α - β)/(β - α). As F^{(r)}(t) = [(1/2)(β - α)]^r f^{(r)}(x) our error estimate should be multiplied by [(1/2)(β - α)]^{2n+1}. For n = 2 we find a total error

(b - a)⁵/(4320 N⁴).

There are two evaluations per subinterval (at "awkward" abscissas) and so the corresponding cost is

2N = (16/(4320ε))^{1/4} = (1/(270ε))^{1/4}.
1.22 S; 1(i).
9.21. Solution
See NBS Handbook, p. 492. In particular
f(3)
= 1.3875672520-0.9197304101 = 0.4678368419.
9.22. Solution
We find
γ_10 = 0.62638316;   γ'_10 = γ_10 - ...
γ_30 = 0.59378975;   γ'_30 = γ_30 - ...
9.23. Solution
This is a table of P_4(x) = (35x⁴ - 30x² + 3)/8; the endings 4, 6 are exactly 375, 625. The exact value of the integral is ...; the estimates are

0.467352,   0.466304,   0.467439.

We shall examine these results more closely. The errors in the first three methods are estimated as multiples of a mean of the fourth derivative of the integrand, i.e., 105. The multiples are obtained in Problems 9.4, 9.2, 9.1. We have to be careful in scaling. The observed errors are respectively

-72 × 10⁻⁶,   976 × 10⁻⁶,   -159 × 10⁻⁶

against predicted values ... -7 × 10⁻⁵, -15.75 × 10⁻⁵, in close agreement.
9.24. Solution
2.
SID2CP
to get
{ }=cos(nx+!(a-x-cos(oo+!(a-x
...
2sin!(a-x)
+
Using the relation 2 cos A sin B = sin (A + B) - sin (A - B) four times and the fact that

4 sin ½(a - x) sin ½(a + x) = 2(cos x - cos a)

we find

2(cos x - cos a){...}
  = sin (nx + a) - sin ((n - 1)x) - sin ((n + 1)a) + sin (na - x)
    + sin ((n - 1)x) - sin (nx - a) - sin ((n + 1)a) + sin (na + x)
    - 2 sin na cos x + 2 sin na cos a
  = sin (nx + a) - sin (nx - a) - 2 sin ((n + 1)a) + sin (na + x) + sin (na - x)
    - 2 sin na cos x + 2 sin na cos a
  = 2 cos nx sin a - 2 sin na cos a - 2 sin a cos na + 2 sin na cos x
    - 2 sin na cos x + 2 sin na cos a
  = 2 sin a (cos nx - cos na),
which gives the result required.
If we take a = θ_m = (2m - 1)π/(2n), x = θ we get

cos nθ/(cos θ - cos θ_m) = (1/sin θ_m){(-1)^{m-1} + 2 sin ((m - 1)θ_m) cos θ + 2 sin ((m - 2)θ_m) cos 2θ + ... + 2 sin θ_m cos ((m - 1)θ)}

so that

a_0 = (-1)^{m-1}/sin θ_m,   a_r = 2 sin ((m - r)θ_m)/sin θ_m,   r = 1, 2, ..., m - 1.
9.25. Solution
(a) For a popular method using double integrals see T. M. Apostol,
Calculus, II, 11.28, Exercise 16.
(b) The following method is given by A. M. Ostrowski, Aufgabensammlung ..., III, p. 51, 257.
We prove that

(1)   [∫₀^x e^{-t²} dt]² = (1/4)π - ∫₀¹ e^{-x²(1+t²)} dt/(1 + t²)

by observing first that both sides vanish for x = 0 since ∫₀¹ dt/(1 + t²) = (1/4)π, and then noting that the derivatives of the two sides are identical: the derivative of the right hand side is

-(-2x) ∫₀¹ e^{-x²(1+t²)} dt = 2xe^{-x²} ∫₀¹ e^{-x²t²} dt = 2e^{-x²} ∫₀^x e^{-u²} du,   if u = xt.
If we now let x → ∞ in (1) the left hand side has limit [∫₀^∞ e^{-t²} dt]². The
Since I_{2n+2} < I_{2n+1} < I_{2n} and I_{2n+1}I_{2n} = π/(2(2n + 1)), we have

((2n + 1)/(2n + 2)) I_{2n} < π/(2(2n + 1)I_{2n}) < I_{2n}

which gives

((2n + 1)/(2n + 2))(nI²_{2n}) < nπ/(2(2n + 1)) < nI²_{2n},

so that lim nI²_{2n} = π/4.
so that
and taking n-th powers and multiplying across by √n and integrating we get

√n ∫₀¹ (1 - x²)ⁿ dx < √n ∫₀^∞ e^{-nx²} dx = ∫₀^∞ e^{-t²} dt < √n ∫₀^∞ (1 + x²)⁻ⁿ dx.

Changing the variable in the integral on the left by x = sin θ shows that it is √n I_{2n+1}; the integral on the right is √n I_{2n-2} (where we use a change of variable x = tan θ). Hence the infinite integral
Chapter 10
235
by the formula of Chapter 7. This tail will certainly be less, e.g., than
10-6 if x ~ 3.5. We can therefore restrict ourselves to J~.5 e-x2 dx which
can be handled by any of the usual formulas.
If we use the trapezoidal rule it is found experimentally that remarkable accuracy is obtained for quite large steps. For further discussion of this see, e.g.,
E. T. Goodwin, Proc. Cambridge Phil. Soc. 45 (1949), 241-245.
D. R. Hartree, Numerical Analysis, Oxford, 1952, p. 111.
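That remark is easy to reproduce (a modern check, not part of the original text): even with the crude step h = 0.5 on [0, 3.5], the trapezoidal rule gives about six correct figures for ½√π.

```python
import math

def trapezoidal(f, a, b, n):
    # composite trapezoidal rule with n subintervals of width h = (b - a)/n
    h = (b - a)/n
    s = 0.5*(f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n))
    return h*s

f = lambda x: math.exp(-x*x)
approx = trapezoidal(f, 0.0, 3.5, 7)   # step h = 0.5
exact = math.sqrt(math.pi)/2
print(approx - exact)                  # error is of the order 1e-6
```

The surprising accuracy comes from the Euler–Maclaurin expansion: the odd derivatives of e^{-x²} nearly vanish at both ends of the range.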
Chapter 10
10.1. Solution
Write u_n = Aα^n + Bβ^n, where α = ½(1+√5), β = ½(1-√5) are the roots of
  x² - x - 1 = 0,
and we obtain A, B from
  1 = A + B,
  1 = Aα + Bβ,
getting A = (1+√5)/(2√5), B = -(1-√5)/(2√5), so that u_n = (α^{n+1} - β^{n+1})/√5. In particular
  u_34 = 9227465,
  u_35 = 14930352,
  u_36 = 24157817,
  u_37 = 39088169,
  u_38 = 63245986,
  u_39 = 102334155.
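A quick check (ours, not the book's; it assumes the initial values u_0 = u_1 = 1 implied by the conditions 1 = A + B, 1 = Aα + Bβ):

```python
import math

def u(n):
    # u_0 = u_1 = 1, u_{n+1} = u_n + u_{n-1}
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

alpha = (1 + math.sqrt(5))/2
beta = (1 - math.sqrt(5))/2
n = 39
closed_form = (alpha**(n + 1) - beta**(n + 1))/math.sqrt(5)
print(u(39), round(closed_form))   # both give 102334155
```

The closed form is exact in principle; in floating point it is reliable here because the rounding error is far below one half unit.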
10.2. Solution
Then, if λ_n = k_{n+1}/k_n, the difference π_{n+1}(x) - λ_n x π_n(x) is a polynomial of degree at most n and can be expressed as a linear combination of π_0, π_1, ..., π_n. The coefficient of π_n is
  ∫ π_{n+1} π_n w dx - λ_n ∫ x π_n² w dx = -λ_n ∫ x π_n² w dx.
The coefficient of π_{n-1} is
  ∫ π_{n+1} π_{n-1} w dx - λ_n ∫ x π_n π_{n-1} w dx = 0 - λ_n ∫ π_n (x π_{n-1}) w dx
    = -λ_n k_{n-1}/k_n = -k_{n+1} k_{n-1}/k_n².
The results for special cases can be obtained from NBS Handbook, p. 782, and include
(a)   (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1},
(b)   T_{n+1} = 2x T_n - T_{n-1},
(c)   H_{n+1} = 2x H_n - 2n H_{n-1},
(d)   (n+1) L_{n+1} = (2n+1-x) L_n - n L_{n-1}.
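Recurrence (a) is easily exercised numerically (our check, not the book's):

```python
def legendre(n, x):
    # evaluate P_n(x) by the recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    if n == 0:
        return 1.0
    p0, p1 = 1.0, x
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1)*x*p1 - k*p0)/(k + 1)
    return p1

# P_4(x) = (35x^4 - 30x^2 + 3)/8, so P_4(0.5) = -0.2890625
print(legendre(4, 0.5))
```

Forward use of this recurrence is numerically stable for |x| ≤ 1, which is what makes it the standard way to evaluate Legendre polynomials.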
10.3. Solution
  v_n = a u_n + b.
  I_9 = 0.0916123,
  I_10 = 0.0838771.
10.7. Solution
The exact solution of this equation is y = (1-x)^{-1}, with a pole at x = 1. Consider integrating at an interval h = N^{-1} and write x_r = rh, y_r for the Euler approximations to y(rh), where
  y_{r+1} = y_r + h y_r².
We have
  1/y_{r+1} = (1/y_r) × 1/(1 + h y_r) = 1/y_r - h + h² y_r/(1 + h y_r),   r = 0, 1, ..., N-1.
Summing we get
  1/y_r = 1/y_0 - rh + h² Σ_{s=0}^{r-1} y_s/(1 + h y_s) = 1 - rh + h R_r, say.
We shall now estimate the R_r, which obviously increase steadily from R_0 = 0 to R_N. Clearly
  R_{r+1} - R_r = h y_r/(1 + h y_r) = 1/(N + 1 + R_r - r),
since (h y_r)^{-1} = N - r + R_r. We use the method of the "Integral Test". First, neglecting the positive quantities R_r, we have
  R_N < Σ_{r=0}^{N-1} 1/(N+1-r) < ∫_0^N dx/(N+1-x) = log (N+1).
Next,
  R_N > Σ_{r=0}^{N-1} 1/(N + 1 + log (N+1) - r) > log [(N + log (N+1))/(log (N+1) + 2)].
and hence
  y_N ≈ (1/(h log h^{-1})) {1 + O(log log (h^{-1})/log (h^{-1}))}.
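The two bounds on R_N can be observed directly (a modern illustration, ours; the choice N = 1000 is arbitrary):

```python
import math

def euler_y(N):
    # Euler's method for y' = y^2, y(0) = 1, step h = 1/N, integrated to x = 1
    h = 1.0/N
    y = 1.0
    for _ in range(N):
        y += h*y*y
    return y

N = 1000
y = euler_y(N)
L = math.log(N + 1)
# since 1/y_N = R_N/N exactly, the bounds on R_N translate into bounds on y_N
low = N/L
high = N/math.log((N + L)/(L + 2))
print(low < y < high)   # prints True
```

So the computed value at x = 1 stays finite but grows like 1/(h log h^{-1}) as the step is refined, even though the true solution blows up there.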
For the equation y' = f(x, y) with y(0) = 0 the computed values are:

  x        h = 0.00025    h = 0.00125    h = 0.0025     h = 0.0125
  0        0.0000000000   0.0000000000   0.0000000000   0.0000000000
  0.0125   0.0054793700   0.0054084766   0.0052730731   0.0034938562
  0.0250   0.0159912174   0.0159108463   0.0157549032   0.0135795112
  0.0375   0.0299888889   0.0299020417   0.0297323965   0.0273099005
  0.0500   0.0469064935   0.0468145066   0.0466341119   0.0440237794
  0.0625   0.0664121481   0.0663158054   0.0661263636   0.0633609674
  0.0750   0.0882794453   0.0881792703   0.0879819092   0.0850825638
  0.0875   0.1123406696   0.1122370425   0.1120325726   0.1090141626
  0.1000   0.1384648727   0.1383580841   0.1381471217   0.1350207833
  0.1125   0.1665459906   0.1664362708   0.1662193028   0.1629937288
  0.1250   0.1964957501   0.1963832876   0.1961607102   0.1928429044
  0.1375   0.2282391333   0.2281240853   0.2278962286   0.2244919744
  0.1500   0.2617113183   0.2615938188   0.2613609625   0.2578751306
  0.1625   0.2968555308   0.2967356957   0.2964980810   0.2929348394
  0.1750   0.3336214844   0.3334994148   0.3332572516   0.3296202153
  0.1875   0.3719642145   0.3718399993   0.3715934729   0.3678858123
  0.2000   0.4118431874   0.4117169062   0.4114661815   0.4076907044
  0.2125   0.4532216083   0.4530933327   0.4528385574   0.4489977709
  0.2250   0.4960658706   0.4959356649   0.4956769724   0.4917731319
  0.2375   0.5403451139   0.5402130364   0.5399505479   0.5359856951
  0.2500   0.5860308623   0.5858969672   0.5856307932   0.5816067873
10.9. Solution
The solution is y = e^x E_1(x) and the following values are obtained from NBS Handbook, p. 243:

  x       e^x E_1(x)
  1      0.596347361
  2      0.361328617
  3      0.262083740
  4      0.206345650
  5      0.177297535
  10     0.0915633339
  20     0.0477185455
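The first entries can be reproduced from the classical series E_1(x) = -γ - log x + Σ_{k≥1} (-1)^{k+1} x^k/(k·k!) (our check, not the book's; the series is practical only for moderate x):

```python
import math

GAMMA = 0.5772156649015329   # Euler's constant

def e1(x, terms=60):
    # E_1(x) = -gamma - log x + sum_{k>=1} (-1)^(k+1) x^k/(k * k!)
    s = 0.0
    term = 1.0
    for k in range(1, terms + 1):
        term *= x/k                 # now term == x^k/k!
        s += (-1)**(k + 1)*term/k
    return -GAMMA - math.log(x) + s

print(math.e*e1(1.0))   # compare with the tabulated 0.596347361
```

For large x one would instead use the asymptotic expansion or a continued fraction, since the alternating series loses accuracy to cancellation.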
10.11. Solution
Writing z = y' we replace our scalar differential equation by a vector differential equation for the vector (y, z)^T, which is given for x = 0:
  (y(0), z(0))^T.
We have
and
giving
Clearly y(1), z(1) only depend on y(O), z(O), hand f. We have to evaluate f
twice, once with arguments 0, y(O), z(O) (which we use three times), and
once with arguments
  h,   y(0) + h z(0),   ... .
The computed values are
  1.70 ...,   2.97 ... .
10.12. Solution
Discretizing using a mesh size h = ¼, we get a system of three homogeneous linear equations for y(¼), y(½), y(¾):
  y(0) - 2y(¼) + y(½) = -(1/16)(¼)λ y(¼),
  y(¼) - 2y(½) + y(¾) = -(1/16)(½)λ y(½),
  y(½) - 2y(¾) + y(1) = -(1/16)(¾)λ y(¾).
Since y(0) = y(1) = 0, a nontrivial solution requires

  | (λ - 128)    64          0         |
  |    32      (λ - 64)     32         |  = 0.
  |     0       64/3      (λ - 128/3)  |

This reduces to
  λ³ - λ² [11×64/3] + λ [128×64×5/3] - [64×64×128/3] = 0,
i.e., with λ = 64μ,
  3μ³ - 11μ² + 10μ - 2 = 0.
The last equation has an obvious root μ = 1 (i.e., λ = 64) and the residual equation is
  3μ² - 8μ + 2 = 0
which has roots μ = (4 ± √10)/3. Thus the smallest root is
  λ_1 = 64(4 - √10)/3 = 17.8714.
For the assigned values
  λ = 18.9225,   19.1844,   19.4481
the corresponding determinant values are
  0.00122,   -0.00816,   -0.01747,
and the characteristic solution is a multiple of the solution with
  y(0) = 0,   y'(0) = 1.
The exact characteristic values are λ = ((3/2) j)², where j = 2.9026, 6.0327, 9.1705, ... are the zeros of J_{1/3}; this gives
  18.9564,   81.8844,   189.219, ...
(which we can compare with the roots of the cubic). It is λ_1 = 18.9564 with which we are concerned.
The assigned values of λ correspond to the following values of (2/3)√λ:
  (2/3)(18.6624)^{1/2} = (2/3)(4.32) = 2.88,   2.90,   2.92,   2.94.
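The characteristic cubic quoted in this solution and its roots can be checked mechanically (our code, not the book's):

```python
import math

def p(lam):
    # characteristic cubic: lam^3 - (11*64/3) lam^2 + (128*64*5/3) lam - 64*64*128/3
    return lam**3 - (11*64/3)*lam**2 + (128*64*5/3)*lam - 64*64*128/3

roots = [64.0, 64*(4 - math.sqrt(10))/3, 64*(4 + math.sqrt(10))/3]
print([abs(p(r)) < 1e-5 for r in roots])   # [True, True, True]
print(64*(4 - math.sqrt(10))/3)            # smallest eigenvalue, about 17.87
```

The residuals are not exactly zero only because of floating-point rounding in the coefficients and in √10.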
This is the equation which determines the characteristic values (if any) of E. We now return to the special V(x) of the problem and note that an odd solution of
  ψ''(x) + (E + 5)ψ(x) = 0
is
  φ(x) = sin (√(E+5) x)/√(E+5).
For a trial value of E we then compute, e.g.,
  f(-1) = tan 2 + 2,
and examine whether it is or is not zero. If it is not zero we try to get a better guess for E and repeat the process until we get f(E) close enough to zero.
Various artifices can be used to determine E, e.g., inverse interpolation as on p. 133.
The result is that there is an even solution corresponding to E ≈ 0.337.
10.15. Solution
The table is meant to be one of (3x + 912)³ - 3829999877. There were errors in the entries for arguments 217, 218, 219: the corrected values are respectively -11639330, 10389619, 32503132.
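The corrected entries are easy to confirm (our check):

```python
def entry(x):
    # the tabulated function of Problem 10.15
    return (3*x + 912)**3 - 3829999877

print([entry(x) for x in (217, 218, 219)])
# [-11639330, 10389619, 32503132]
```

In practice such errors are detected by differencing the table: an isolated error of size e produces the telltale pattern e, -3e, 3e, -e in the third differences.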
10.16. Solution
Compare Problem 7.7.
y(1.5) = 0.7127926,
y(2.0) = 0.6429039.
10.17. Solution
(a)   x_n = a^n Γ(n + (b/a))/Γ(b/a) × x_0;
(b)   x_n = (A i_n + B i_{n+1})/C,
where
  A = {c i_0 + (a + b) i_1} x_0 - i_1 x_1,
and
  i_n = (-c)^{n/2} J_{n+(b/a)} (2a^{-1}(-c)^{1/2}).
f(7) = 2983.
10.19. Solution
(a) These are the odd prime numbers.
(b) If p_n is the n-th prime number of the form 4r + 1, so that p_1 = 5, p_2 = 13, p_3 = 17, ..., then the sequence is that of the least integer N for ...
x(0) = ∫_0^∞ 2t² e^{-t⁴} dt = ½ ∫_0^∞ e^{-T} T^{-1/4} dT = ½Γ(¾) = 0.612708351.
x'(0) = ...
Differentiating we find
  y'' = x²y + ∫_0^∞ t(4t³ - 4xt) exp (-½x² + 2xt² - t⁴) dt - y
and, integrating by parts,
  ∫_0^∞ t(4t³ - 4xt) exp (-½x² + 2xt² - t⁴) dt
    = [-t exp (-½x² + 2xt² - t⁴)]_0^∞ + ∫_0^∞ exp (-½x² + 2xt² - t⁴) dt.
The first term on the right vanishes at both limits and the second cancels the third term on the right in the previous display. Hence y'' = x²y.
The solution is ...
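The value 0.612708351 quoted above equals Γ(3/4)/2 and can be confirmed numerically (our check; truncating at t = 6 is safe since the integrand is utterly negligible there):

```python
import math

def integrand(t):
    return 2*t*t*math.exp(-t**4)

# trapezoidal rule on [0, 6] with a fine step
n, b = 20000, 6.0
h = b/n
total = h*(0.5*(integrand(0.0) + integrand(b))
           + sum(integrand(i*h) for i in range(1, n)))
print(total, math.gamma(0.75)/2)   # both give 0.61270835...
```

The substitution T = t⁴ that reduces the integral to ½Γ(¾) is exactly the one used in the text.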
10.23. Solution
See British Assoc. Math. Tables, Vol. 2. For recent theoretical work see papers by E. Hille.
10.24. Solution
See J. L. Synge, On a certain non-linear equation, Proc. Royal Irish Academy 62A (1961), 17-41; Z. Nehari, On a non-linear differential equation arising in nuclear physics, Proc. Royal Irish Academy 62A (1963), 117-135.
10.25. Solution
We use an interval h = 0.1 and work to 4D. From the power-series
  y(x) = 1 + x + x² + (x³/3) + (x⁴/12) + ...
we compute y(0.1), y(0.2), y(0.3) and f(0.1), f(0.2), f(0.3), and we then predict
  y_P(0.4) = 1 + (4/30)[2.4206 - 1.4428 + 3.3994] = 1.5836.
We can accept this value and proceed, or alternatively, try a larger h = 0.2.
  x      y_P       y_C       y' = f(x, y) = x + y
  0                1.0000    1.0000
  0.1              1.1103    1.2103
  0.2              1.2428    1.4428
  0.3              1.3997    1.6997
  0.4    1.5836    1.5836    1.9836
  0.5    1.7974    1.7974    2.2974
  0.6
  0.7
  0.8
  0.9
  1.0
Bibliographical Remarks
RECOMMENDED LITERATURE
TEXTS AND MONOGRAPHS
TABULAR MATERIAL
M. ABRAMOWITZ and I. A. STEGUN, eds., Handbook of mathematical functions, National Bureau of Standards, Applied Math. Series, 55 (U.S. Government Printing Office, 1964).
BARLOW'S TABLES, ed. L. J. Comrie (Spon, 1961).
L. J. COMRIE, Chambers's shorter six-figure mathematical tables (Chambers, 1950).
A. FLETCHER, J. C. P. MILLER, L. ROSENHEAD and L. J. COMRIE, An index of
mathematical tables, 2nd ed., 2 vols. (Addison-Wesley, 1962).
In addition to the literature mentioned above, there are many useful
expository articles available, and for up-to-date surveys of special topics,
reference can be made to Symposia Proceedings, such as those which appear
in the ISNM series.
There is a developing interest in the history of numerical mathematics
and computing machines. For the classical material see, respectively,
H. H. GOLDSTINE, A history of numerical analysis from the 16th through the 19th century (Springer, 1978),
B. RANDELL, The origins of digital computers, 2nd ed. (Springer, 1975).
For more recent history see the obituaries of the founders and the
periodical Annals of the history of computing, 1979-.
Contents
Vol. 2, Numerical Algebra

Preface . . . . . . . . . . 13
Chapter 1 . . . . . . . . . 16
Chapter 2 . . . . . . . . . 19
Chapter 3 . . . . . . . . . 29
Chapter 4 . . . . . . . . . 44
Chapter 5 . . . . . . . . . 53
Chapter 6 . . . . . . . . . 65
Chapter 7 . . . . . . . . . 71
Chapter 8 . . . . . . . . . 83
Chapter 9 . . . . . . . . . 99
Chapter 10 . . . . . . . . 105
Chapter 11 . . . . . . . . 110
Chapter 12 . . . . . . . . 117
Bibliographical Remarks . . 212
Index . . . . . . . . . . 214
INDEX
M. Davies 182
B. Dawson 182
R. Dedekind 37, 40
Deferred approach 100
Difference equation 118
Differencing 121
Elliptic integral 18, 21, 165
F. Emde 141
R. Emden equation 138
O. Emersleben 178
Error analysis
backward 48
forward 48
interpolation 89, 91
quadrature 98, 99, 105, 109, 113, 218,
229
Error function 77, 117
L. Euler 116
constant 31, 116, 231
method 125
transform 68, 69, 71, 72, 192, 193
Exponential integral 73, 81
R. M. Federova 82
W. Feller 87
Fibonacci 133, 179, 235
Fixed point 36, 38
A. Fletcher 81
FMRC 81, 202
G. E. Forsythe 46, 47
J. B. J. Fourier
coefficients 61
method 50
series 64
L. Fox 83, 202
A. J. Fresnel integrals 76, 202
Fundamental theorem of algebra 84, 213
P. W. Gaffney 214
Gamma function 79
K. F. Gauss arithmetic-geometric mean 161
K. F. Gauss - P. L. Chebyshev quadrature
105, 224
K. F. Gauss - A. M. Legendre quadrature
116,224,225,230
Walter Gautschi 121
A. Ghizzetti 219
G. A. Gibson 153, 155
E. T. Goodwin 82, 199, 235
I. S. Gradshteyn 226
J. P. Gram - E. Schmidt process 103
J. A. Greenwood 81
E. Halley 54, 182
E. Hansen 134, 237
P. A. Hansen 148
G. H. Hardy 151, 155, 172
H. O. Hartley 81
D. R. Hartree 138, 135, 245
C. Hastings 61
M. L. J. Hautus 135, 237
D. R. Hayes 219
P. Henrici 50
C. Hermite 232
interpolation 90, 224
K. Heun method 125, 135, 238, 239
E. Hille 246
K. E. Hirst 180
W. G. Horner's method 52
W. G. Hwang 43, 176
Instability 119, 130
Interpolation
Aitken algorithm 86
errors 89, 91
Hermite 90, 224
inverse 88, 241
Lagrange 84, 210
Newton 96, 216
spline 96, 215
Inverse interpolation 88, 241
Iteration 38
C. G. J. Jacobi relation 147
E. Jahnke 141
E. Kamke 124
A. N. Khovanski 180
K. Knopp 195
J. L. Lagrange interpolation 84
C. A. Laisant 235
E. Landau 24
J. Landen 22, 168
A. V. Lebedev 82
H. Lebesgue constants 91
A. M. Legendre
expansion 184
polynomial 184, 236
D. H. Lehmer 244
G. W. Leibniz' Theorem 129, 207
Lemniscate constants 17, 109, 114
J. Liang 191
L. Lichtenstein - S. A. Gerschgorin equation 191
Local Taylor series 129
F. Lösch 141
E. C. J. von Lommel 149
Y. L. Luke 188
Mean value theorems
first 51, 89, 98, 99, 167, 224
second 151, 152
C. A. Micchelli 214
J. C. P. Miller 81
W. E. Milne quadrature 113
I. P. Natanson 211
NBS Handbook 76, 81, 92, 93, 94, 163,
164, 165, 178, 190, 198, 199, 200, 202,
231, 236, 239
Z. Nehari 246
E. H. Neville 87
I. Newton
process 39, 48
interpolation formula 216
J. W. Nicholson 151, 157
F. W. J. Olver 121
Order of convergence 29
Order symbols 24
Orthogonal polynomials 103
Chebyshev 104, 134, 236
Hermite 134, 236
Laguerre 104, 134, 236
Legendre 104, 134, 236
A. Ossicini 219
A. M. Ostrowski 47, 233
H. Pade fractions 64, 188
P. Painleve 240
J. F. Pfaff 19
Eric Phillips 237
E. Picard method 125
S. K. Picken 62
H. Poincare 75
Practical computation 42
Principal value integral 106
Quadrature
Gaussian 103, 115
Lagrangian 98
Milne 113, 219, 232
Romberg 100
~ Rule 99, 113, 218, 229, 232
Simpson 99, 113, 222, 232
Trapezoidal 97, 221
Predictor - corrector 127
Recurrence relations 33
for reciprocals 33
for square root 37
Remainder Theorem 85
E. Ja. Remez 184
L. F. Richardson 66, 100, 126
T. J. Rivlin 214
G. Robinson 180
M. Rolle's Theorem 90, 212, 220
W. Romberg quadrature 100
L. Rosenhead 81
G. C. Rota 242
L. Rubin 219
H. Rutishauser 102
I. M. Ryzhik 226
D. H. Sadler 3, 83, 202
H. E. Salzer 215, 216
I. J. Schoenberg 19, 169
E. Schrödinger 136, 242
J. Schwab 22, 169
I. J. Schwatt 226
Second mean value theorem 151, 152
R. M. Sievert integral 115
T. Simpson's rule 102, 113, 222, 230
Spline 96
J. Staton 82, 199
I. A. Stegun 141, 269
E. Stiefel 102
J. Stirling 18
formula 79, 111, 174, 211
G. G. Stokes 56
O. Stolz 153, 155
Stopping rule 42
J. C. F. Sturm - J. Liouville problem 124,
131
Subtabulation 95, 214
J. L. Synge 246
Synthetic division 53
Olga Taussky Todd 3
H. C. Thacher 121
Theoretical arithmetic 42
~ quadrature 99, 116, 229, 230
A. F. Timofeyev 226
E. C. Titchmarsh 152, 155
O. Toeplitz 195
UNIVAC 238
C. J. de la Vallée Poussin 153, 155
A. Vandermonde 95, 148, 213
G. W. Veltkamp 135, 237