Consider solving
$$y' = y\cos x, \qquad y(0) = 1$$
Expanding the true solution about $x = 0$,
$$Y(h) \approx Y(0) + hY'(0) + \frac{h^2}{2}Y''(0) + \frac{h^3}{6}Y'''(0) + \cdots$$
We can calculate $Y'(0) = Y(0)\cos(0) = 1$. How do we calculate $Y''(0)$ and higher order derivatives?
Differentiating $Y'(x) = Y(x)\cos(x)$ repeatedly,
$$Y''(x) = -Y(x)\sin(x) + Y'(x)\cos(x)$$
$$Y'''(x) = -Y(x)\cos(x) - 2Y'(x)\sin(x) + Y''(x)\cos(x)$$
Then
$$Y''(0) = -Y(0)\sin(0) + Y'(0)\cos(0) = 1$$
$$Y'''(0) = -Y(0)\cos(0) - 2Y'(0)\sin(0) + Y''(0)\cos(0) = 0$$
Thus
$$Y(h) = Y(0) + hY'(0) + \frac{h^2}{2}Y''(0) + \frac{h^3}{6}Y'''(0) + \cdots \approx 1 + h + \frac{h^2}{2} - \frac{h^4}{8}$$
the $h^4$ term coming from $Y^{(4)}(0) = -3$.
The order-2 Taylor method is then
$$y_{n+1} = y_n + hy_n' + \frac{h^2}{2}y_n'', \qquad n \ge 0$$
with
$$y_n' = y_n\cos(x_n)$$
$$y_n'' = -y_n\sin(x_n) + y_n'\cos(x_n)$$
We give a numerical example of computing the numerical solution with Taylor series methods of orders
2, 3, and 4.
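As an illustration (not from the original notes), the order-2 Taylor method above can be sketched in a few lines of Python; the problem is $y' = y\cos x$, $y(0) = 1$, whose true solution is $Y(x) = e^{\sin x}$:

```python
import math

def taylor2(h, b):
    """Order-2 Taylor method for y' = y*cos(x), y(0) = 1, on [0, b],
    using y'' = -y*sin(x) + y'*cos(x)."""
    x, y = 0.0, 1.0
    for _ in range(round(b / h)):
        yp = y * math.cos(x)                       # y'_n
        ypp = -y * math.sin(x) + yp * math.cos(x)  # y''_n
        y = y + h * yp + 0.5 * h**2 * ypp
        x += h
    return y

# True solution Y(x) = exp(sin(x)); compare errors at two stepsizes.
exact = math.exp(math.sin(2.0))
e1 = abs(taylor2(0.1, 2.0) - exact)
e2 = abs(taylor2(0.05, 2.0) - exact)
print(e1 / e2)  # ratio near 4 for a second-order method
```

Halving $h$ should reduce the error by roughly $2^2 = 4$, consistent with the $O(h^2)$ error bound derived later in these notes.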
A 4th-ORDER EXAMPLE
Consider solving
$$y' = -y, \qquad y(0) = 1$$
Differentiating $y' = -y$ repeatedly, we obtain
$$Y'' = -Y' = Y, \qquad Y''' = -Y'' = -Y, \qquad Y^{(4)} = -Y''' = Y$$
Then
$$Y(x_{n+1}) = Y_n + hY_n' + \frac{h^2}{2}Y_n'' + \frac{h^3}{6}Y_n''' + \frac{h^4}{24}Y_n^{(4)} + \frac{h^5}{120}Y^{(5)}(\xi_n)$$
The order-4 Taylor method drops the remainder term; since every derivative of $Y$ equals $\pm Y$,
$$y_{n+1} = y_n - hy_n + \frac{h^2}{2}y_n - \frac{h^3}{6}y_n + \frac{h^4}{24}y_n = \left(1 - h + \frac{h^2}{2} - \frac{h^3}{6} + \frac{h^4}{24}\right)y_n, \qquad n \ge 0$$
with $y_0 = 1$. By induction,
$$y_n = \left(1 - h + \frac{h^2}{2} - \frac{h^3}{6} + \frac{h^4}{24}\right)^n, \qquad n \ge 0$$
Recall that
$$e^{-h} = 1 - h + \frac{h^2}{2} - \frac{h^3}{6} + \frac{h^4}{24} - \frac{h^5}{120}e^{-\xi}$$
with $0 < \xi < h$. Then
$$y_n = \left[e^{-h} + \frac{h^5}{120}e^{-\xi}\right]^n = e^{-nh}\left[1 + \frac{h^5}{120}e^{h-\xi}\right]^n \approx e^{-x_n}\left[1 + x_n\frac{h^4}{120}e^{h-\xi}\right]$$
using $nh = x_n$. Thus
$$Y(x_n) - y_n \approx -\frac{x_n e^{-x_n}}{120}h^4 e^{h-\xi} = O(h^4)$$
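The fourth-order behavior is easy to confirm numerically; the following sketch (mine, not from the notes) uses the closed-form one-step factor derived above:

```python
import math

def taylor4(h, b):
    """Order-4 Taylor method for y' = -y, y(0) = 1: every derivative
    is +/- y, so one step multiplies y by 1 - h + h^2/2 - h^3/6 + h^4/24."""
    factor = 1 - h + h**2 / 2 - h**3 / 6 + h**4 / 24
    return factor ** round(b / h)

# Error at x = 1 should shrink by about 2^4 = 16 when h is halved.
e1 = abs(taylor4(0.1, 1.0) - math.exp(-1.0))
e2 = abs(taylor4(0.05, 1.0) - math.exp(-1.0))
print(e1 / e2)  # ratio near 16
```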
ERROR
For $y' = y\cos x$, the true solution satisfies
$$Y(x_{n+1}) = Y_n + hY_n' + \frac{h^2}{2}Y_n'' + \frac{h^3}{6}Y'''(\xi_n)$$
while the numerical solution satisfies
$$y_{n+1} = y_n + hy_n' + \frac{h^2}{2}y_n'', \qquad n \ge 0$$
Subtracting, with $e_n = Y(x_n) - y_n$, $e_n' = Y'(x_n) - y_n'$, $e_n'' = Y''(x_n) - y_n''$,
$$e_{n+1} = e_n + he_n' + \frac{h^2}{2}e_n'' + \frac{h^3}{6}Y'''(\xi_n)$$
Similarly,
$$e_n' = e_n\cos(x_n), \qquad e_n'' = -e_n\sin(x_n) + e_n'\cos(x_n)$$
so that
$$|e_n'| \le |e_n|, \qquad |e_n''| \le |e_n| + |e_n'| \le 2|e_n|$$
Applying these bounds in
$$e_{n+1} = e_n + he_n' + \frac{h^2}{2}e_n'' + \frac{h^3}{6}Y'''(\xi_n)$$
we have
$$|e_{n+1}| \le |e_n| + h|e_n'| + \frac{h^2}{2}|e_n''| + \frac{h^3}{6}|Y'''(\xi_n)| \le (1 + h + h^2)|e_n| + \frac{h^3}{6}\|Y'''\|_\infty$$
We can now imitate the proof of convergence for Euler's method, obtaining
$$|e_n| \le e^{2x_n}|e_0| + h^2\left[\frac{e^{2x_n} - 1}{12}\right]\|Y'''\|_\infty$$
provided $h \le 1$. Thus
$$|Y(x_n) - y_h(x_n)| \le O(h^2)$$
RUNGE-KUTTA METHODS

We return to the general initial value problem
$$y' = f(x, y), \qquad y(x_0) = Y_0$$
Nonetheless, most researchers still consider Taylor series methods to be too expensive for most practical
problems (a point contested by others). This leads us
to look for other one-step methods which imitate the
Taylor series methods, without the necessity to calculate the higher order derivatives. These are called
Runge-Kutta methods. There are a number of ways in
which one can approach Runge-Kutta methods, and
I will follow a fairly classical approach. Later I may
introduce some other approaches to the development
of such methods.
A one-step method has the form
$$y_{n+1} = y_n + hF(x_n, y_n, h; f)$$
with $F(x_n, y_n, h; f)$ some carefully chosen approximation to $f(x, y)$ on the interval $[x_n, x_{n+1}]$. In particular, write
$$F(x, y, h; f) = \gamma_1 f(x, y) + \gamma_2 f(x + \alpha h, y + \beta h f(x, y))$$
h
Y (x) + hY (x) + 2 Y (x) +
2
Y (x) + hf + h2 [fx + fy f ]
3
h
+ 6 fxx + 2fxy f + fyy f 2 + fy fx
+ fy2f
Also expand $F$ using Taylor's theorem for functions of two variables,
$$f(a + \delta, b + \epsilon) = \sum_{j=0}^{r}\frac{1}{j!}\left(\delta\frac{\partial}{\partial x} + \epsilon\frac{\partial}{\partial z}\right)^{j} f(x, z)\bigg|_{\substack{x=a \\ z=b}} + \frac{1}{(r+1)!}\left(\delta\frac{\partial}{\partial x} + \epsilon\frac{\partial}{\partial z}\right)^{r+1} f(x, z)\bigg|_{\substack{x=a+\theta\delta \\ z=b+\theta\epsilon}}$$
for some $0 \le \theta \le 1$.
Matching the two expansions, the truncation error is
$$Y(x + h) - Y(x) - hF(x, Y(x), h; f) = c_1 h + c_2 h^2 + c_3 h^3 + \cdots$$
with
$$c_1 = (1 - \gamma_1 - \gamma_2)f$$
$$c_2 = \left(\frac{1}{2} - \alpha\gamma_2\right)f_x + \left(\frac{1}{2} - \beta\gamma_2\right)f_y f$$
$$c_3 = \left(\frac{1}{6} - \frac{\alpha^2\gamma_2}{2}\right)f_{xx} + \left(\frac{1}{3} - \alpha\beta\gamma_2\right)f_{xy}f + \left(\frac{1}{6} - \frac{\beta^2\gamma_2}{2}\right)f_{yy}f^2 + \frac{1}{6}f_y f_x + \frac{1}{6}f_y^2 f$$
We try to set to zero as many as possible of the coefficients $c_1, c_2, c_3, \ldots$ Note that $c_3$ cannot be made to equal zero in general, because of the final terms
$$\frac{1}{6}f_y f_x + \frac{1}{6}f_y^2 f$$
which are independent of the choice of the coefficients $\gamma_1, \gamma_2, \alpha, \beta$.
Requiring $c_1 = c_2 = 0$ gives
$$\gamma_1 = 1 - \gamma_2, \qquad \alpha = \beta = \frac{1}{2\gamma_2}$$
Case $\gamma_2 = \frac{1}{2}$:
$$y_{n+1} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_n + h, y_n + hf(x_n, y_n))\right]$$
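This $\gamma_2 = \tfrac{1}{2}$ formula can be sketched directly in Python (an illustrative implementation, not from the notes); the test problem $y' = -y$ is the model equation used elsewhere in these notes:

```python
import math

def rk2(f, x0, y0, h, n):
    """Second-order method y_{n+1} = y_n + (h/2)[f(x_n, y_n)
    + f(x_n + h, y_n + h f(x_n, y_n))]  (the case gamma_2 = 1/2)."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y = y + (h / 2) * (k1 + k2)
        x += h
    return y

# Model problem y' = -y, y(0) = 1, integrated to x = 1 with h = 0.1.
approx = rk2(lambda x, y: -y, 0.0, 1.0, 0.1, 10)
print(abs(approx - math.exp(-1.0)))  # small second-order error
```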
We can derive other second order formulas by choosing other values for $\gamma_2$. Sometimes this is done by attempting to minimize the next term in the truncation error. In the case of the above formula,
$$T_n(Y) = c_3 h^3 + \cdots$$
with
$$c_3 = \left(\frac{1}{6} - \frac{\alpha^2\gamma_2}{2}\right)f_{xx} + \left(\frac{1}{3} - \alpha\beta\gamma_2\right)f_{xy}f + \left(\frac{1}{6} - \frac{\beta^2\gamma_2}{2}\right)f_{yy}f^2 + \frac{1}{6}f_y f_x + \frac{1}{6}f_y^2 f$$
Substituting $\alpha = \beta = 1/(2\gamma_2)$,
$$c_3 = \left(\frac{1}{6} - \frac{1}{8\gamma_2}\right)f_{xx} + \left(\frac{1}{3} - \frac{1}{4\gamma_2}\right)f_{xy}f + \left(\frac{1}{6} - \frac{1}{8\gamma_2}\right)f_{yy}f^2 + \frac{1}{6}f_y f_x + \frac{1}{6}f_y^2 f$$
Then
$$|c_3| \le d_1(\gamma_2)\,d_2(f)$$
with
$$d_1(\gamma_2) = \left|\frac{1}{6} - \frac{1}{8\gamma_2}\right| + \left|\frac{1}{3} - \frac{1}{4\gamma_2}\right| + \left|\frac{1}{6} - \frac{1}{8\gamma_2}\right| + \frac{1}{6} + \frac{1}{6}$$
and $d_2(f)$ a bound on the derivative terms $f_{xx}$, $f_{xy}f$, $f_{yy}f^2$, $f_y f_x$, $f_y^2 f$. The first three terms of $d_1(\gamma_2)$ vanish simultaneously at $\gamma_2 = \frac{3}{4}$, which minimizes $d_1(\gamma_2)$ and yields the method
$$y_{n+1} = y_n + \frac{h}{4}\left[f(x_n, y_n) + 3f\left(x_n + \frac{2}{3}h,\; y_n + \frac{2}{3}hf(x_n, y_n)\right)\right]$$
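A small Python comparison of the two second-order methods (my sketch, not from the notes); the test problem is the example equation used later in these notes, $y' = 1/(1 + x^2) - 2y^2$, $y(0) = 0$, with true solution $Y(x) = x/(1 + x^2)$:

```python
def f(x, y):
    return 1.0 / (1.0 + x * x) - 2.0 * y * y  # true solution x/(1+x^2)

def step_half(x, y, h):
    # gamma_2 = 1/2: the trapezoidal-type formula
    k1 = f(x, y)
    return y + (h / 2) * (k1 + f(x + h, y + h * k1))

def step_opt(x, y, h):
    # gamma_2 = 3/4: minimizes the h^3 truncation coefficient
    k1 = f(x, y)
    return y + (h / 4) * (k1 + 3 * f(x + 2 * h / 3, y + (2 * h / 3) * k1))

h, n = 0.05, 20  # integrate to x = 1, where Y(1) = 0.5
ya = yb = 0.0
x = 0.0
for _ in range(n):
    ya = step_half(x, ya, h)
    yb = step_opt(x, yb, h)
    x += h
print(abs(ya - 0.5), abs(yb - 0.5))  # both second-order accurate
```

Note that on linear problems such as $y' = \lambda y$ the two choices of $\gamma_2$ produce identical results; a nonlinear problem is needed to see any difference.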
3-STAGE FORMULAS
We have just studied 2-stage formulas. To obtain a
higher rate of convergence, we must use more derivative evaluations. A 3-stage formula looks like
$$y_{n+1} = y_n + hF(x_n, y_n, h; f)$$
$$F(x, y, h; f) = \gamma_1 V_1 + \gamma_2 V_2 + \gamma_3 V_3$$
$$V_1 = f(x, y)$$
$$V_2 = f(x + \alpha_2 h, y + \beta_{2,1}hV_1)$$
$$V_3 = f(x + \alpha_3 h, y + \beta_{3,1}hV_1 + \beta_{3,2}hV_2)$$
p-STAGE FORMULAS
$$y_{n+1} = y_n + h\sum_{i=1}^{p}\gamma_i V_i$$
$$V_1 = f(x, y)$$
$$V_2 = f(x + \alpha_2 h, y + \beta_{2,1}hV_1)$$
$$\vdots$$
$$V_p = f\left(x + \alpha_p h,\; y + h\sum_{j=1}^{p-1}\beta_{p,j}V_j\right)$$
The classical fourth-order Runge-Kutta method is
$$y_{n+1} = y_n + \frac{h}{6}\left[V_1 + 2V_2 + 2V_3 + V_4\right]$$
$$V_1 = f(x, y)$$
$$V_2 = f(x + \tfrac{1}{2}h, y + \tfrac{1}{2}hV_1)$$
$$V_3 = f(x + \tfrac{1}{2}h, y + \tfrac{1}{2}hV_2)$$
$$V_4 = f(x + h, y + hV_3)$$
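The four stages translate directly into code; this is an illustrative sketch (the model problem $y' = -y$ is my choice of test, reused from earlier in the notes):

```python
import math

def rk4(f, x0, y0, h, n):
    """Classical fourth-order Runge-Kutta method."""
    x, y = x0, y0
    for _ in range(n):
        v1 = f(x, y)
        v2 = f(x + h / 2, y + (h / 2) * v1)
        v3 = f(x + h / 2, y + (h / 2) * v2)
        v4 = f(x + h, y + h * v3)
        y = y + (h / 6) * (v1 + 2 * v2 + 2 * v3 + v4)
        x += h
    return y

# y' = -y, y(0) = 1: one RK4 step reproduces e^{-h} through the h^4 term.
err = abs(rk4(lambda x, y: -y, 0.0, 1.0, 0.1, 10) - math.exp(-1.0))
print(err)  # far smaller than the second-order error
```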
with a suitable $d(x)$. Finally, one can prove an asymptotic error formula
$$Y(x) - y_h(x) = D(x)h^4 + O(h^5)$$
EXAMPLE. Solve
$$y' = \frac{1}{1 + x^2} - 2y^2, \qquad y(0) = 0$$
Its true solution is
$$Y(x) = \frac{x}{1 + x^2}$$
Use stepsizes $h = .25$ and $2h = .5$.
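A sketch of this experiment in Python, using the classical fourth-order method above (the endpoint $x = 2$ is my choice; the notes do not specify it):

```python
def f(x, y):
    return 1.0 / (1.0 + x * x) - 2.0 * y * y  # y' = 1/(1+x^2) - 2y^2

def Y(x):
    return x / (1.0 + x * x)                  # true solution, Y(0) = 0

def rk4_solve(h, b):
    """Integrate with classical RK4 from (0, 0) to x = b."""
    x, y = 0.0, 0.0
    for _ in range(round(b / h)):
        v1 = f(x, y)
        v2 = f(x + h / 2, y + (h / 2) * v1)
        v3 = f(x + h / 2, y + (h / 2) * v2)
        v4 = f(x + h, y + h * v3)
        y += (h / 6) * (v1 + 2 * v2 + 2 * v3 + v4)
        x += h
    return y

err_h = abs(rk4_solve(0.25, 2.0) - Y(2.0))
err_2h = abs(rk4_solve(0.5, 2.0) - Y(2.0))
print(err_2h / err_h)  # roughly 2^4 = 16 for a fourth-order method
```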
CONVERGENCE
Consider the one-step method
$$y_{n+1} = y_n + hF(x_n, y_n, h; f)$$
for solving $y' = f(x, y)$, $y(x_0) = Y_0$. Introduce
$$\tau_n(Y) = \frac{1}{h}T_n(Y) = \frac{Y(x_{n+1}) - Y(x_n)}{h} - F(x_n, Y(x_n), h; f)$$
where $T_n(Y) = Y(x_{n+1}) - Y(x_n) - hF(x_n, Y(x_n), h; f)$ is the truncation error.
We will want $\tau_n(Y) \to 0$ as $h \to 0$. In this connection, introduce
$$\delta(h) = \max_{\substack{x_0 \le x \le b \\ -\infty < y < \infty}} |f(x, y) - F(x, y, h; f)|$$
and assume
$$\delta(h) \to 0 \quad \text{as} \quad h \to 0$$
Writing
$$\tau_n(Y) = \left[\frac{Y(x_{n+1}) - Y(x_n)}{h} - Y'(x_n)\right] + \left[f(x_n, Y(x_n)) - F(x_n, Y(x_n), h; f)\right]$$
shows $\tau_n(Y) \to 0$ as $h \to 0$. From the definition of $\tau_n(Y)$, we rewrite it as
$$Y(x_{n+1}) = Y(x_n) + hF(x_n, Y(x_n), h; f) + h\tau_n(Y)$$
To continue with this approach, we need to know what
happens to F when the y argument is perturbed. In
particular, we assume
$$|F(x, y, h; f) - F(x, z, h; f)| \le L|y - z|$$
This holds, for example, for the second-order method above, in which
$$F(x, y, h; f) = \frac{1}{2}f(x, y) + \frac{1}{2}f(x + h, y + hf(x, y))$$
Then
$$F(x, y, h; f) - F(x, z, h; f) = \frac{1}{2}\left[f(x, y) - f(x, z)\right] + \frac{1}{2}\left[f(x + h, y + hf(x, y)) - f(x + h, z + hf(x, z))\right]$$
If $f$ satisfies a Lipschitz condition $|f(x, y) - f(x, z)| \le K|y - z|$, then
$$|F(x, y, h; f) - F(x, z, h; f)| \le \frac{1}{2}K|y - z| + \frac{1}{2}K\left[|y - z| + hK|y - z|\right] = \left(K + \frac{h}{2}K^2\right)|y - z|$$
For $h \le 1$, take $L = K(1 + K/2)$. Then
$$|F(x, y, h; f) - F(x, z, h; f)| \le L|y - z|$$
Subtracting $y_{n+1} = y_n + hF(x_n, y_n, h; f)$ from
$$Y(x_{n+1}) = Y(x_n) + hF(x_n, Y(x_n), h; f) + h\tau_n(Y)$$
gives, with $e_n = Y(x_n) - y_n$,
$$e_{n+1} = e_n + h\left[F(x_n, Y(x_n), h; f) - F(x_n, y_n, h; f)\right] + h\tau_n(Y)$$
Taking bounds,
$$|e_{n+1}| \le |e_n| + hL|e_n| + h\tau(h)$$
with
$$\tau(h) = \max_{x_0 \le x_n \le b}|\tau_n(Y)|$$
We analyze
$$|e_{n+1}| \le (1 + hL)|e_n| + h\tau(h)$$
just as for Euler's method.
STABILITY
Using the ideas introduced above, we can do a stability
and rounding error analysis that is essentially the same
as that done for Euler's method. We omit it here.
Consider now applying a Runge-Kutta method to the
model equation
$$y' = \lambda y, \qquad y(0) = 1$$
For the second-order method, with $f(x, y) = \lambda y$, this yields
$$y_{n+1} = y_n + \frac{h}{2}\left[\lambda y_n + \lambda(y_n + h\lambda y_n)\right] = y_n + h\lambda y_n + \frac{1}{2}(h\lambda)^2 y_n = \left[1 + h\lambda + \frac{1}{2}(h\lambda)^2\right]y_n$$
Thus
$$y_n = \left[1 + h\lambda + \frac{1}{2}(h\lambda)^2\right]^n, \qquad n \ge 0$$
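The closed form is easy to check against direct stepping; a minimal sketch (mine), using the second-order method with $\lambda = -5$, $h = 0.1$:

```python
# For y' = lambda*y the second-order method multiplies y_n by
# 1 + h*lam + (h*lam)^2/2 each step; compare against direct stepping.
lam, h, n = -5.0, 0.1, 20

y = 1.0
for _ in range(n):
    k1 = lam * y
    k2 = lam * (y + h * k1)
    y = y + (h / 2) * (k1 + k2)

closed = (1 + h * lam + (h * lam) ** 2 / 2) ** n
print(abs(y - closed))  # agreement to rounding error
```

Here $h\lambda = -0.5$ gives a growth factor $0.625$ per step, so the numerical solution decays, mimicking the true solution $e^{\lambda x}$.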
ERROR CONTROL
Let $u_n$ denote the solution of the differential equation passing through $(x_n, y_n)$:
$$u_n' = f(x, u_n), \quad x \ge x_n, \qquad u_n(x_n) = y_n$$
This would have to be analyzed for stability and convergence. But with it, an error per step criterion can be applied to $y_h(x_n + 2h)$, accepting the step when the estimated truncation error satisfies a test such as
$$(.5)\,\epsilon(2h) \le |trunc| \le \epsilon(2h)$$
$$F(x, y, h; f) = \frac{1}{6}\left[V_1 + 2V_2 + 2V_3 + V_4\right]$$
$$V_1 = f(x, y)$$
$$V_2 = f(x + \tfrac{1}{2}h, y + \tfrac{1}{2}hV_1)$$
$$V_3 = f(x + \tfrac{1}{2}h, y + \tfrac{1}{2}hV_2)$$
$$V_4 = f(x + h, y + hV_3)$$
RUNGE-KUTTA-FEHLBERG METHODS
There are 6-stage methods of order 5. Fehlberg thought of embedding an order 4 method in the 6-stage method, and then using the difference of the two solutions to estimate the local error in the lower order method. In particular, he introduced pairs of methods
$$y_{n+1} = y_n + h\sum_{j=1}^{6}\gamma_j V_j, \qquad \hat{y}_{n+1} = y_n + h\sum_{j=1}^{6}\hat{\gamma}_j V_j$$
$$V_1 = f(x_n, y_n)$$
$$V_i = f\left(x_n + \alpha_i h,\; y_n + h\sum_{j=1}^{i-1}\beta_{i,j}V_j\right), \qquad i = 2, \ldots, 6$$
with $y_{n+1}$ of order 4 and $\hat{y}_{n+1}$ of order 5.
DISCUSSION
In determining the formulas for $y_{n+1}$ and $\hat{y}_{n+1}$, Fehlberg had some additional freedom in choosing the coefficients. He looked at the leading terms in the expansion of the errors
$$u_n(x_{n+1}) - y_{n+1}, \qquad u_n(x_{n+1}) - \hat{y}_{n+1}$$
in powers of $h$, say
$$u_n(x_{n+1}) - y_{n+1} = c_5 h^5 + c_6 h^6 + \cdots$$
$$u_n(x_{n+1}) - \hat{y}_{n+1} = \hat{c}_6 h^6 + \hat{c}_7 h^7 + \cdots$$
Subtracting,
$$\hat{y}_{n+1} - y_{n+1} = c_5 h^5 + (c_6 - \hat{c}_6)h^6 + \cdots \approx c_5 h^5$$
which serves as an estimate of the local error in $y_{n+1}$.
A general $p$-stage implicit Runge-Kutta method has the form
$$y_{n+1} = y_n + h\sum_{i=1}^{p}\gamma_i V_i$$
$$V_i = f\left(x_n + \alpha_i h,\; y_n + h\sum_{j=1}^{p}\beta_{i,j}V_j\right), \qquad i = 1, \ldots, p$$
Since each $V_i$ now appears on both sides of its defining equation, the $\{V_i\}$ must be obtained by solving a (generally nonlinear) system at each step.
COLLOCATION METHODS
An important category of implicit Runge-Kutta methods is obtained as follows. Let $p \ge 1$ be an integer, and let the points
$$0 \le \tau_1 < \tau_2 < \cdots < \tau_p \le 1$$
be given. Assuming we are at the point $(x_n, y_n)$, find a polynomial $q(x)$ of degree $\le p$ for which
$$q(x_n) = y_n$$
$$q'(x_n + \tau_i h) = f(x_n + \tau_i h, q(x_n + \tau_i h)), \qquad i = 1, \ldots, p$$
and then define $y_{n+1} = q(x_{n+1})$.
ORDER OF CONVERGENCE
Assume the $\{\tau_i\}$ have been so chosen that
$$\int_0^1 \Pi(\tau)\,\tau^j\,d\tau = 0, \qquad j = 0, 1, \ldots, m - 1$$
where
$$\Pi(\tau) = (\tau - \tau_1)\cdots(\tau - \tau_p)$$
Then the collocation method has order of convergence $p + m$.
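As a concrete check (my example, not from the notes): for $p = 2$ the Gauss points $\tau_{1,2} = \tfrac{1}{2} \mp \tfrac{\sqrt{3}}{6}$ have $\tau_1 + \tau_2 = 1$ and $\tau_1\tau_2 = \tfrac{1}{6}$, so $\Pi(\tau) = \tau^2 - \tau + \tfrac{1}{6}$ exactly, and the orthogonality condition holds with $m = 2$ (order $p + m = 4$):

```python
from fractions import Fraction

def integral_Pi_times_tau_j(j):
    """integral_0^1 (tau^2 - tau + 1/6) * tau^j dtau, computed exactly
    term by term with rational arithmetic."""
    return (Fraction(1, j + 3) - Fraction(1, j + 2)
            + Fraction(1, 6) * Fraction(1, j + 1))

# j = 0 and j = 1 vanish (m = 2); j = 2 does not, so m cannot be raised.
print(integral_Pi_times_tau_j(0), integral_Pi_times_tau_j(1),
      integral_Pi_times_tau_j(2))
```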
ADDITIONAL REFERENCES
Larry Shampine, Numerical Solution of Ordinary Differential Equations, Chapman & Hall, 1994.

Larry Shampine and Mark Reichelt, The MATLAB ODE Suite, SIAM Journal on Scientific Computing 18 (1997), pp. 1-22.

Arieh Iserles, A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, 1996. This also includes an extensive development of the numerical solution of partial differential equations.

Uri Ascher, Robert Mattheij, and Robert Russell, Numerical Solution of Boundary Value Problems for Ordinary Differential Equations, Prentice-Hall, 1988.

Uri Ascher and Linda Petzold, Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, 1998.