
EXAMPLE - TAYLOR SERIES METHOD

Consider solving

y' = y cos x,    y(0) = 1

Imagine writing a Taylor series for the solution Y(x),
say initially about x = 0. Then

Y(h) = Y(0) + h Y'(0) + (h^2/2) Y''(0) + (h^3/6) Y'''(0) + ...

We can calculate Y'(0) = Y(0) cos(0) = 1. How do
we calculate Y''(0) and higher order derivatives?

Y'(x) = Y(x) cos(x)
Y''(x) = -Y(x) sin(x) + Y'(x) cos(x)
Y'''(x) = -Y(x) cos(x) - 2Y'(x) sin(x) + Y''(x) cos(x)

Then

Y''(0) = -Y(0) sin(0) + Y'(0) cos(0) = 1
Y'''(0) = -Y(0) cos(0) - 2Y'(0) sin(0) + Y''(0) cos(0) = 0

Thus

Y(h) = Y(0) + h Y'(0) + (h^2/2) Y''(0) + (h^3/6) Y'''(0) + ...
     = 1 + h + h^2/2 + ...

We can generate as many terms as desired, obtaining
added accuracy as we do so. In this particular case,
the true solution is Y(x) = exp(sin x). Thus

Y(h) = 1 + h + h^2/2 - h^4/8 + ...

We can truncate the series after a particular order,
and then continue with the same process to generate
approximations to Y(2h), Y(3h), .... Letting x_n = nh
and using the order 2 Taylor approximation, we have

Y(x_{n+1}) = Y(x_n) + h Y'(x_n) + (h^2/2) Y''(x_n) + (h^3/6) Y'''(ξ_n)

with x_n <= ξ_n <= x_{n+1}. Drop the truncation error
term, and then define

y_{n+1} = y_n + h y'_n + (h^2/2) y''_n,    n >= 0

with

y'_n = y_n cos(x_n)
y''_n = -y_n sin(x_n) + y'_n cos(x_n)

We give a numerical example of computing the numerical solution
with Taylor series methods of orders 2, 3, and 4.
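As a quick illustration of how the order 2 method above might be programmed, here is a minimal Python sketch (the function names and the choice of endpoint x = 2 are mine, not from the notes):

```python
import math

def taylor2(d1, d2, x0, y0, h, nsteps):
    """Order 2 Taylor method: y_{n+1} = y_n + h*y'_n + (h^2/2)*y''_n."""
    x, y = x0, y0
    for _ in range(nsteps):
        y = y + h * d1(x, y) + (h**2 / 2) * d2(x, y)
        x += h
    return y

# y' = y cos x  and  y'' = -y sin x + y' cos x, as derived above
d1 = lambda x, y: y * math.cos(x)
d2 = lambda x, y: -y * math.sin(x) + d1(x, y) * math.cos(x)

exact = math.exp(math.sin(2.0))            # true solution Y(x) = exp(sin x)
for h in [0.25, 0.125]:
    approx = taylor2(d1, d2, 0.0, 1.0, h, int(round(2.0 / h)))
    print(h, exact - approx)               # error drops by about 4x when h is halved
```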

A 4th-ORDER EXAMPLE

Consider solving

y' = -y,    y(0) = 1

whose true solution is Y(x) = e^{-x}. Differentiating
the equation

Y'(x) = -Y(x)

we obtain

Y'' = -Y' = Y
Y''' = -Y'' = -Y,    Y^{(4)} = Y

Then expanding Y(x_n + h) in a Taylor series,

Y(x_{n+1}) = Y_n + h Y'_n + (h^2/2) Y''_n + (h^3/6) Y'''_n
             + (h^4/24) Y_n^{(4)} + (h^5/120) Y^{(5)}(ξ_n)

Dropping the truncation error, we have the numerical
method

y_{n+1} = y_n - h y_n + (h^2/2) y_n - (h^3/6) y_n + (h^4/24) y_n
        = (1 - h + h^2/2 - h^3/6 + h^4/24) y_n

with y_0 = 1. By induction,


y_n = (1 - h + h^2/2 - h^3/6 + h^4/24)^n,    n >= 0

Recall that

e^{-h} = 1 - h + h^2/2 - h^3/6 + h^4/24 - (h^5/120) e^{-ξ}

with 0 < ξ < h. Then

y_n = [e^{-h} + (h^5/120) e^{-ξ}]^n
    = e^{-nh} [1 + (h^5/120) e^{h-ξ}]^n
    ≈ e^{-x_n} [1 + x_n (h^4/120) e^{h-ξ}]

Thus

Y(x_n) - y_n ≈ -x_n e^{-x_n} (h^4/120) e^{h-ξ} = O(h^4)
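This O(h^4) behaviour is easy to confirm numerically; a minimal check (my own, with the endpoint x = 1 chosen arbitrarily):

```python
import math

# Order 4 Taylor method for y' = -y, y(0) = 1:
# y_{n+1} = (1 - h + h^2/2 - h^3/6 + h^4/24) y_n
def taylor4_error(h, x_end=1.0):
    n = int(round(x_end / h))
    amp = 1 - h + h**2 / 2 - h**3 / 6 + h**4 / 24
    return math.exp(-x_end) - amp**n       # Y(x_end) - y_n

for h in [0.2, 0.1, 0.05]:
    print(h, taylor4_error(h))              # errors shrink by roughly 16x per halving of h
```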

ERROR

Returning to the preceding order 2 approximation, we
have

Y(x_{n+1}) = Y(x_n) + h Y'(x_n) + (h^2/2) Y''(x_n) + (h^3/6) Y'''(ξ_n)

y_{n+1} = y_n + h y'_n + (h^2/2) y''_n,    n >= 0

Subtracting, and writing e_n = Y(x_n) - y_n,

e_{n+1} = e_n + h e'_n + (h^2/2) e''_n + (h^3/6) Y'''(ξ_n)

where

e'_n = Y(x_n) cos(x_n) - y_n cos(x_n) = e_n cos(x_n)
|e'_n| <= |e_n|

Similarly,

e''_n = -e_n sin(x_n) + e'_n cos(x_n)
|e''_n| <= |e_n| + |e'_n| <= 2 |e_n|

Returning to the error equation

e_{n+1} = e_n + h e'_n + (h^2/2) e''_n + (h^3/6) Y'''(ξ_n)

we have

|e_{n+1}| <= |e_n| + h |e'_n| + (h^2/2) |e''_n| + (h^3/6) |Y'''(ξ_n)|
          <= (1 + h + h^2) |e_n| + (h^3/6) ||Y'''||_∞

We can now imitate the proof of convergence for Euler's method, obtaining

|e_n| <= e^{2 x_n} |e_0| + h^2 [(e^{2 x_n} - 1)/12] ||Y'''||_∞

provided h <= 1. Thus

|Y(x_n) - y_h(x_n)| = O(h^2)

By similar means, we can show that the Taylor
series method of order r converges with

|Y(x_n) - y_h(x_n)| = O(h^r)

We can introduce the Taylor series method for the
general problem

y' = f(x, y),    y(x_0) = Y_0

Simply imitate what was done above for the particular
problem y' = y cos x. In general,

Y'(x) = f(x, Y(x))
Y''(x) = f_x(x, Y(x)) + f_y(x, Y(x)) Y'(x)
       = f_x(x, Y(x)) + f_y(x, Y(x)) f(x, Y(x))
Y'''(x) = f_xx + 2 f_xy f + f_yy f^2 + f_y f_x + f_y^2 f

and we can continue in this manner. Thus we can
calculate derivatives of any order for Y(x), and then
we can define Taylor series methods of any desired order.
This used to be considered much too arduous a task
for practical problems, because everything had to be
done by hand. But with symbolic programs such as
Mathematica and Maple, Taylor series can be considered a serious framework for numerical methods.
Programs that implement this in an automatic way,
with varying order and stepsize, are available.
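As a small illustration of the symbolic approach (my own sketch; the notes do not prescribe a particular tool), the higher derivatives can be generated automatically. The following uses Python with sympy on the earlier example Y'(x) = Y(x) cos x; for a general f(x, y) the same differentiate-and-substitute loop applies:

```python
import sympy as sp

x = sp.symbols('x')
Y = sp.Function('Y')(x)

Yp = Y * sp.cos(x)                 # Y'(x) = Y(x) cos x, the notes' example

# Differentiate repeatedly, replacing Y'(x) by Y cos x after each differentiation
derivs = [Yp]
for _ in range(2):
    d = sp.diff(derivs[-1], x).subs(sp.Derivative(Y, x), Yp)
    derivs.append(sp.expand(d))

for k, d in enumerate(derivs, start=1):
    print(f"Y^({k})(x) =", d)
# Y^(2)(x) = -Y(x)*sin(x) + Y(x)*cos(x)**2, i.e. -Y sin x + Y' cos x, as before
```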

RUNGE-KUTTA METHODS

Nonetheless, most researchers still consider Taylor series methods to be too expensive for most practical
problems (a point contested by others). This leads us
to look for other one-step methods which imitate the
Taylor series methods, without the necessity to calculate the higher order derivatives. These are called
Runge-Kutta methods. There are a number of ways in
which one can approach Runge-Kutta methods, and
I will follow a fairly classical approach. Later I may
introduce some other approaches to the development
of such methods.

We begin by considering explicit Runge-Kutta methods of order 2. We want to write

y_{n+1} = y_n + h F(x_n, y_n, h; f)

with F(x_n, y_n, h; f) some carefully chosen approximation to f(x, y) on the interval [x_n, x_{n+1}]. In particular, write

F(x, y, h; f) = γ_1 f(x, y) + γ_2 f(x + αh, y + βh f(x, y))

This is some kind of average derivative. Intuitively,
we should restrict α so that 0 <= α <= 1. Beyond that,
how should the constants γ_1, γ_2, α, β be chosen?
We attempt to make the leading terms in the Taylor
series of Y(x + h) look like the corresponding terms
in a Taylor series expansion of

Y(x) + h F(x, Y(x), h; f)

More precisely, introduce the truncation error

T_n(Y) = Y(x + h) - [Y(x) + h F(x, Y(x), h; f)]

Expand Y(x + h), obtaining

Y(x + h) = Y(x) + h Y'(x) + (h^2/2) Y''(x) + ...
         = Y(x) + h f + (h^2/2) [f_x + f_y f]
           + (h^3/6) [f_xx + 2 f_xy f + f_yy f^2 + f_y f_x + f_y^2 f] + ...

in which all functions f, f_x, etc. are evaluated at
(x, Y(x)).
Next expand f(x + αh, Y(x) + βh f(x, Y(x))) in a
series in powers of h. This requires the multivariable
Taylor series:

f(a + ε, b + δ) = Σ_{j=0}^{r} (1/j!) (ε ∂/∂x + δ ∂/∂z)^j f(x, z) |_{x=a, z=b}
                  + (1/(r+1)!) (ε ∂/∂x + δ ∂/∂z)^{r+1} f(x, z) |_{x=a+θε, z=b+θδ}

in which 0 < θ < 1. Use this with a = x, b = Y(x),
ε = αh, and δ = βh f(x, Y(x)).

We write out the first few terms:

f(x + αh, Y(x) + βh f(x, Y(x)))
  = f(x, Y(x)) + αh f_x + βh f_y f
    + (1/2) [(αh)^2 f_xx + 2(αh)(βh f) f_xy + (βh f)^2 f_yy] + ...

in which all terms f, f_x, f_y, etc. are evaluated at
(x, Y(x)). Introduce this into the formula

F(x, Y(x), h; f) = γ_1 f(x, Y(x)) + γ_2 f(x + αh, Y(x) + βh f(x, Y(x)))

and collect together common powers of h. Then substitute this into

T_n(Y) = Y(x + h) - [Y(x) + h F(x, Y(x), h; f)]

and also substitute the earlier expansion of Y(x + h).
This leads to the formula

T_n(Y) = c_1 h + c_2 h^2 + c_3 h^3 + ...

c_1 = [1 - γ_1 - γ_2] f(x, Y(x))
c_2 = [1/2 - α γ_2] f_x + [1/2 - β γ_2] f_y f
c_3 = [1/6 - (α^2/2) γ_2] f_xx + [1/3 - αβ γ_2] f_xy f
      + [1/6 - (β^2/2) γ_2] f_yy f^2 + (1/6) f_y f_x + (1/6) f_y^2 f

We try to set to zero as many as possible of the coefficients c_1, c_2, c_3, .... Note that c_3 cannot be made
equal to zero in general, because of the final terms

(1/6) f_y f_x + (1/6) f_y^2 f

which are independent of the choice of the coefficients
γ_1, γ_2, α, β.

We can make both c_1 and c_2 zero by requiring

1 - γ_1 - γ_2 = 0
1/2 - α γ_2 = 0
1/2 - β γ_2 = 0

Note that we cannot choose γ_2 = 0, as that would
lead to a contradiction in the last two equations. Since
there are 3 equations and 4 unknowns, we let γ_2 be
unspecified and then solve for α, β, γ_1 in terms of γ_2.
This yields

α = β = 1/(2 γ_2),    γ_1 = 1 - γ_2

Case γ_2 = 1/2:

y_{n+1} = y_n + (h/2) [f(x_n, y_n) + f(x_n + h, y_n + h f(x_n, y_n))]

This is one iteration of the trapezoidal rule with an
Euler predictor.
Case γ_2 = 1:

y_{n+1} = y_n + h f(x_n + h/2, y_n + (h/2) f(x_n, y_n))
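Both of these second order formulas are easily programmed; a minimal sketch (my own naming), tested on the earlier problem y' = y cos x:

```python
import math

def rk2_trapezoid(f, x0, y0, h, nsteps):
    """Case gamma_2 = 1/2: trapezoidal rule with an Euler predictor."""
    x, y = x0, y0
    for _ in range(nsteps):
        k1 = f(x, y)
        y = y + (h / 2) * (k1 + f(x + h, y + h * k1))
        x += h
    return y

def rk2_midpoint(f, x0, y0, h, nsteps):
    """Case gamma_2 = 1: Euler half-step, then a full step with the midpoint slope."""
    x, y = x0, y0
    for _ in range(nsteps):
        y = y + h * f(x + h / 2, y + (h / 2) * f(x, y))
        x += h
    return y

f = lambda x, y: y * math.cos(x)
exact = math.exp(math.sin(1.0))
for h in [0.1, 0.05]:
    n = int(round(1.0 / h))
    print(h, exact - rk2_trapezoid(f, 0.0, 1.0, h, n),
             exact - rk2_midpoint(f, 0.0, 1.0, h, n))
```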

We can derive other second order formulas by choosing other values for γ_2. Sometimes this is done by
attempting to minimize the next term in the truncation error. In the case of the above formulas,

T_n(Y) = c_3 h^3 + ...

with

c_3 = [1/6 - (α^2/2) γ_2] f_xx + [1/3 - αβ γ_2] f_xy f
      + [1/6 - (β^2/2) γ_2] f_yy f^2 + (1/6) f_y f_x + (1/6) f_y^2 f

We can regard this as the dot product of two vectors:

c_3 = [1/6 - (α^2/2) γ_2,  1/3 - αβ γ_2,  1/6 - (β^2/2) γ_2,  1/6,  1/6]
      · [f_xx,  f_xy f,  f_yy f^2,  f_y f_x,  f_y^2 f]

Then

|c_3| <= d_1(γ_2) d_2(f)

with

d_1(γ_2) = {[1/6 - 1/(8γ_2)]^2 + [1/3 - 1/(4γ_2)]^2 + [1/6 - 1/(8γ_2)]^2 + 1/18}^{1/2}

(using α = β = 1/(2γ_2)) and d_2(f) the length of the second vector. Pick γ_2
to minimize d_1(γ_2). The minimum occurs at γ_2 = 3/4, and this
yields the formula

y_{n+1} = y_n + (h/4) [f(x_n, y_n) + 3 f(x_n + (2/3)h, y_n + (2/3)h f(x_n, y_n))]

3-STAGE FORMULAS

We have just studied 2-stage formulas. To obtain a
higher rate of convergence, we must use more derivative evaluations. A 3-stage formula looks like

y_{n+1} = y_n + h F(x_n, y_n, h; f)
F(x, y, h; f) = γ_1 V_1 + γ_2 V_2 + γ_3 V_3
V_1 = f(x, y)
V_2 = f(x + α_2 h, y + β_{2,1} h V_1)
V_3 = f(x + α_3 h, y + β_{3,1} h V_1 + β_{3,2} h V_2)

Again it can be analyzed by expanding the truncation
error

T_n(Y) = Y(x + h) - [Y(x) + h F(x, Y(x), h; f)]

in powers of h. Then the coefficients are determined
by setting the leading coefficients to zero. This can be
generalized to p-stage formulas. The algebra becomes
very complicated, and the equations one obtains turn
out to be dependent.

p-STAGE FORMULAS

The same construction extends to any number of stages.
To obtain a higher rate of convergence, we use more
derivative evaluations. A p-stage formula looks like

y_{n+1} = y_n + h F(x_n, y_n, h; f)
F(x, y, h; f) = Σ_{j=1}^{p} γ_j V_j
V_1 = f(x, y)
V_2 = f(x + α_2 h, y + β_{2,1} h V_1)
...
V_p = f(x + α_p h, y + h Σ_{j=1}^{p-1} β_{p,j} V_j)

Again it can be analyzed by expanding the truncation
error

T_n(Y) = Y(x + h) - [Y(x) + h F(x, Y(x), h; f)]

in powers of h. Then the coefficients are determined
by setting the leading coefficients to zero.

How high an order can be obtained?


A number of years ago, John Butcher derived the results shown in the following table.
p          1   2   3   4   5   6   7   8
Max order  1   2   3   4   4   5   6   6
This is counter to what people had believed, and it has
some important consequences. We will return to this
when considering the definition of some new Runge-Kutta formulas.

A POPULAR FOURTH ORDER METHOD

A popular classical formula uses

y_{n+1} = y_n + h F(x_n, y_n, h; f)
F(x, y, h; f) = (1/6) [V_1 + 2 V_2 + 2 V_3 + V_4]
V_1 = f(x, y)
V_2 = f(x + h/2, y + (h/2) V_1)
V_3 = f(x + h/2, y + (h/2) V_2)
V_4 = f(x + h, y + h V_3)

For y' = f(x), this becomes

y_{n+1} = y_n + (h/6) [f(x_n) + 4 f(x_n + h/2) + f(x_n + h)]

which is simply Simpson's integration rule.
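A minimal implementation of this classical method (my own sketch), again checked on y' = y cos x, where the error should fall by roughly 16x when h is halved:

```python
import math

def rk4(f, x0, y0, h, nsteps):
    """Classical fourth order Runge-Kutta method."""
    x, y = x0, y0
    for _ in range(nsteps):
        v1 = f(x, y)
        v2 = f(x + h / 2, y + (h / 2) * v1)
        v3 = f(x + h / 2, y + (h / 2) * v2)
        v4 = f(x + h, y + h * v3)
        y = y + (h / 6) * (v1 + 2 * v2 + 2 * v3 + v4)
        x += h
    return y

f = lambda x, y: y * math.cos(x)
exact = math.exp(math.sin(2.0))
for h in [0.2, 0.1]:
    print(h, exact - rk4(f, 0.0, 1.0, h, int(round(2.0 / h))))
```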

It can be shown that the truncation error is

T_n(Y) = c(x_n) h^5 + O(h^6)

for a suitable function c(x). In addition, we can show

|Y(x_n) - y_h(x_n)| <= d(x_n) h^4

with a suitable d(x). Finally, one can prove an asymptotic error formula

Y(x) - y_h(x) = D(x) h^4 + O(h^5)

This leads to the Richardson error estimate

Y(x) - y_h(x) ≈ (1/15) [y_h(x) - y_{2h}(x)]
EXAMPLE. Solve

y' = 1/(1 + x^2) - 2y^2,    y(0) = 0

Its true solution is

Y(x) = x/(1 + x^2)

Use stepsizes h = 0.25 and 2h = 0.5.
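To see the Richardson estimate on this example, one can run the classical fourth order method at the two stepsizes and compare (1/15)[y_h - y_{2h}] with the true error. A minimal sketch (it repeats the rk4 routine from the earlier sketch so it can be run on its own; the output point x = 2 is my choice):

```python
def rk4(f, x0, y0, h, nsteps):
    x, y = x0, y0
    for _ in range(nsteps):
        v1 = f(x, y); v2 = f(x + h/2, y + (h/2)*v1)
        v3 = f(x + h/2, y + (h/2)*v2); v4 = f(x + h, y + h*v3)
        y += (h/6) * (v1 + 2*v2 + 2*v3 + v4)
        x += h
    return y

f = lambda x, y: 1.0/(1.0 + x**2) - 2.0*y**2     # the example ODE
Y = lambda x: x/(1.0 + x**2)                     # its true solution

x_end, h = 2.0, 0.25
yh  = rk4(f, 0.0, 0.0, h,     int(round(x_end / h)))
y2h = rk4(f, 0.0, 0.0, 2 * h, int(round(x_end / (2 * h))))
print("true error         :", Y(x_end) - yh)
print("Richardson estimate:", (yh - y2h) / 15)
```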

CONVERGENCE

For the numerical method

y_{n+1} = y_n + h F(x_n, y_n, h; f)

we define the truncation error as

T_n(Y) = Y(x_{n+1}) - [Y(x_n) + h F(x_n, Y(x_n), h; f)]

with Y(x) the true solution of

y' = f(x, y),    y(x_0) = Y_0

Introduce

τ_n(Y) = (1/h) T_n(Y)
       = [Y(x_{n+1}) - Y(x_n)]/h - F(x_n, Y(x_n), h; f)
We will want τ_n(Y) → 0 as h → 0. In this connection,
introduce

δ(h) = max_{x_0 <= x <= b, -∞ < y < ∞} |f(x, y) - F(x, y, h; f)|

and assume

δ(h) → 0  as  h → 0

Note then that

τ_n(Y) = [ (Y(x_{n+1}) - Y(x_n))/h - Y'(x_n) ]
         + [f(x_n, Y(x_n)) - F(x_n, Y(x_n), h; f)]

The first term on the right side goes to zero by the
definition of the derivative; the second term is
bounded by δ(h), and thus it too goes to zero with h.
Returning to

T_n(Y) = Y(x_{n+1}) - [Y(x_n) + h F(x_n, Y(x_n), h; f)]

we rewrite it as

Y(x_{n+1}) = Y(x_n) + h F(x_n, Y(x_n), h; f) + h τ_n(Y)

The numerical method is

y_{n+1} = y_n + h F(x_n, y_n, h; f)

To analyze the convergence, we subtract to get

e_{n+1} = e_n + h [F(x_n, Y_n, h; f) - F(x_n, y_n, h; f)] + h τ_n(Y)
To continue with this approach, we need to know what
happens to F when the y argument is perturbed. In
particular, we assume

|F(x, y, h; f) - F(x, z, h; f)| <= L |y - z|

for all x_0 <= x <= b, -∞ < y, z < ∞, and all small
values of h, say 0 < h <= h_0 for some h_0. This
generally can be derived from the Lipschitz condition
for the function f(x, y).
For example, recall the second order method

y_{n+1} = y_n + (h/2) [f(x_n, y_n) + f(x_n + h, y_n + h f(x_n, y_n))]

in which

F(x, y, h; f) = (1/2) f(x, y) + (1/2) f(x + h, y + h f(x, y))

Then

F(x, y, h; f) - F(x, z, h; f)
  = (1/2) [f(x, y) - f(x, z)]
    + (1/2) [f(x + h, y + h f(x, y)) - f(x + h, z + h f(x, z))]

Use the Lipschitz condition on f:

|F(x, y, h; f) - F(x, z, h; f)|
  <= (1/2) |f(x, y) - f(x, z)|
     + (1/2) |f(x + h, y + h f(x, y)) - f(x + h, z + h f(x, z))|
  <= (1/2) K |y - z| + (1/2) K [|y - z| + h |f(x, y) - f(x, z)|]

This leads to

|F(x, y, h; f) - F(x, z, h; f)| <= (K + (h/2) K^2) |y - z|

For h <= 1, take L = K (1 + K/2). Then

|F(x, y, h; f) - F(x, z, h; f)| <= L |y - z|

Return to the error equation

e_{n+1} = e_n + h [F(x_n, Y_n, h; f) - F(x_n, y_n, h; f)] + h τ_n(Y)

Taking bounds,

|e_{n+1}| <= |e_n| + h L |e_n| + h τ(h)

with

τ(h) = max_{x_0 <= x_n <= b} |τ_n(Y)|

We analyze

|e_{n+1}| <= (1 + hL) |e_n| + h τ(h)

exactly as was done for Euler's method. This yields

|Y(x_n) - y_n| <= e^{(b - x_0) L} |e_0| + [(e^{(b - x_0) L} - 1)/L] τ(h)

Generally e_0 = 0, and the speed of convergence is
that of τ(h).

STABILITY

Using the ideas introduced above, we can do a stability
and rounding error analysis that is essentially the same
as that done for Euler's method. We omit it here.
Consider now applying a Runge-Kutta method to the
model equation

y' = λ y,    y(0) = 1

As an example, use the second order method

y_{n+1} = y_n + h f(x_n + h/2, y_n + (h/2) f(x_n, y_n))

This yields

y_{n+1} = y_n + h λ [y_n + (h/2) λ y_n]
        = y_n + h λ y_n + (1/2) (hλ)^2 y_n
        = [1 + hλ + (1/2) (hλ)^2] y_n

Thus

y_n = [1 + hλ + (1/2) (hλ)^2]^n,    n >= 0

If we consider λ < 0, or Real(λ) < 0, we want to know
when y_n → 0. That will be true if

|1 + hλ + (1/2) (hλ)^2| < 1

For λ real, this is true if

-2 < hλ < 0

This is the region of absolute stability for this second
order Runge-Kutta method. It is not much different
from that of some of the Adams family of methods.
It can be shown that there are no A-stable explicit
Runge-Kutta methods. Thus explicit Runge-Kutta
methods are not suitable for stiff differential equations.
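A quick numerical confirmation of the real stability interval (scanning only real values of hλ; a plot over the complex plane works the same way):

```python
# Amplification factor of this second order method applied to y' = lambda*y, z = h*lambda:
# y_{n+1} = R(z) y_n,  with  R(z) = 1 + z + z^2/2
def R(z):
    return 1 + z + z**2 / 2

stable = [z / 100 for z in range(-300, 1) if abs(R(z / 100)) < 1]
print(min(stable), max(stable))   # about -2.0 and just below 0, matching -2 < h*lambda < 0
```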

ERROR CONTROL

Again we want to estimate the local error in solving
the differential equation. In particular, let u_n(x) denote the true solution of the problem

y' = f(x, y),    x >= x_n,    y(x_n) = y_n

In the text, on page 428, we derive the formula

u_n(x_n + 2h) - y_h(x_n + 2h) ≈ (1/(2^m - 1)) [y_h(x_n + 2h) - y_{2h}(x_n + 2h)]
where m is the order of the Runge-Kutta method. We
denote the error estimate on the right by trunc. Then
for error control purposes, as earlier with Detrap, we
want to have trunc satisfy a local error control per
unit stepsize:

0.5 ε h <= |trunc| <= 2 ε h

If this is violated, then we seek a new stepsize ĥ for which
trunc ≈ ε ĥ, and we continue with the solution process.

We could also use the extrapolated solution

ŷ_h(x_n + 2h) = y_h(x_n + 2h) + trunc

This would have to be analyzed for stability and convergence. But with it, an error per step criterion applied
to y_h(x_n + 2h),

0.5 ε <= |trunc| <= 2 ε

while keeping ŷ_h(x_n + 2h), will usually suffice. Many
codes do something of this kind.
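A minimal sketch of this kind of step-doubling control, assuming the classical fourth order method (m = 4) and a user-chosen tolerance eps; it only rejects and halves the stepsize when |trunc| is too large (a production code would also enlarge h when |trunc| falls below 0.5*eps*h):

```python
def rk4_step(f, x, y, h):
    v1 = f(x, y); v2 = f(x + h/2, y + (h/2)*v1)
    v3 = f(x + h/2, y + (h/2)*v2); v4 = f(x + h, y + h*v3)
    return y + (h/6) * (v1 + 2*v2 + 2*v3 + v4)

def advance_2h(f, x, y, h, eps, m=4):
    """Advance from x to x + 2h, enforcing the per-unit-stepsize test |trunc| <= 2*eps*h."""
    while True:
        y1      = rk4_step(f, x, y, h)            # y_h(x + h)
        y_small = rk4_step(f, x + h, y1, h)       # y_h(x + 2h): two steps of size h
        y_big   = rk4_step(f, x, y, 2 * h)        # y_2h(x + 2h): one step of size 2h
        trunc = (y_small - y_big) / (2**m - 1)    # Richardson estimate of the local error
        if abs(trunc) <= 2 * eps * h:
            return x + 2 * h, y_small, h          # accept the step
        h *= 0.5                                  # reject: halve h and try again
```

Each pass through the loop costs 11 evaluations of f, as counted in the next section.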

COST IN FUNCTION EVALUATIONS


We are advancing from x_n to x_{n+2}. What does this
cost, if we include calculation of trunc? Consider it
for the classical RK method

y_{n+1} = y_n + h F(x_n, y_n, h; f)
F(x, y, h; f) = (1/6) [V_1 + 2 V_2 + 2 V_3 + V_4]
V_1 = f(x, y)
V_2 = f(x + h/2, y + (h/2) V_1)
V_3 = f(x + h/2, y + (h/2) V_2)
V_4 = f(x + h, y + h V_3)

Calculating y_h(x_n + h): 4 new evaluations of f
Calculating y_h(x_n + 2h): 4 new evaluations of f
Calculating y_{2h}(x_n + 2h): 3 new evaluations of f
Total evaluations: 11

For comparison, an implicit multistep method will typically use 4 evaluations to go from x_n to x_{n+2}.

RUNGE-KUTTA-FEHLBERG METHODS

There are 6-stage methods of order 5. Fehlberg thought
of embedding an order 4 method into the 6-stage method,
and then using the difference of the two solutions to
estimate the local error in the lower order method. In
particular, he introduced methods

y_{n+1} = y_n + h Σ_{j=1}^{6} γ_j V_j
ŷ_{n+1} = y_n + h Σ_{j=1}^{6} δ_j V_j

V_1 = f(x_n, y_n)
V_i = f(x_n + α_i h, y_n + h Σ_{j=1}^{i-1} β_{i,j} V_j),    i = 2, ..., 6

He showed the coefficients could be chosen so that
y_{n+1} was an order 4 method and ŷ_{n+1} was an order
5 method. Then he estimated the local error using

u_n(x_{n+1}) - y_{n+1} ≈ ŷ_{n+1} - y_{n+1}

DISCUSSION

In determining the formulas for y_{n+1} and ŷ_{n+1}, Fehlberg
had some additional freedom in choosing the coefficients. He looked at the leading terms in the expansions of the errors

u_n(x_{n+1}) - y_{n+1},    u_n(x_{n+1}) - ŷ_{n+1}

in powers of h, say

u_n(x_{n+1}) - y_{n+1} = c_5 h^5 + c_6 h^6 + ...
u_n(x_{n+1}) - ŷ_{n+1} = ĉ_6 h^6 + ĉ_7 h^7 + ...

He chose the coefficients so as

(1) to make ŷ_{n+1} - y_{n+1} an accurate estimator of
u_n(x_{n+1}) - y_{n+1}:

( [u_n(x_{n+1}) - y_{n+1}] - [ŷ_{n+1} - y_{n+1}] ) / [u_n(x_{n+1}) - y_{n+1}]
  = (ĉ_6 h^6 + ĉ_7 h^7 + ...) / (c_5 h^5 + c_6 h^6 + ...)
  ≈ (ĉ_6 / c_5) h

(2) to make c_5 as small as possible, while keeping ĉ_6
of comparable size to c_5.

His choice of coefficients for the order (4,5) pair of
formulas is given in the text.
EXAMPLE. Apply the fourth order formula to the simple problem

y' = x^4,    y(0) = 0

The RKF method of order 4 has the truncation error

T_n(Y) ≈ -0.00048 h^5

The classical RK method, discussed earlier, has the
truncation error

T_n(Y) ≈ -0.0083 h^5

Thus the RKF method appears to be more accurate
than the classical formula.

COSTS. For its computational cost when stepping from
x_n to x_{n+2}, 12 evaluations of f are required, as compared to the 11 evaluations required in estimating the
local error in the classical fourth order method. In
fact, we do not need to take two steps to estimate the
local error, but can change the stepsize at each x_n
if needed.
PROGRAMS

In MATLAB, look at the programs ode23 and ode45.
Also look at the associated demo programs.
In the class account, look at the Fortran program
fehlberg-45.f. It takes a single step, returning both
y_{n+1} and the estimated local error ŷ_{n+1} - y_{n+1}.
In addition, I have included the program rkf45.f, which
implements the Fehlberg (4,5) pair with automatic
control of the local error. It is from Sandia National
Labs, and it is a very popular implementation.
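If MATLAB or the class Fortran codes are not at hand, the same kind of embedded-pair integrator with automatic local error control is available in Python through scipy. Note that scipy's "RK45" is the Dormand-Prince (4,5) pair rather than Fehlberg's, but it is used in the same way; a minimal sketch on the earlier example y' = 1/(1 + x^2) - 2y^2:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x, y):
    return 1.0 / (1.0 + x**2) - 2.0 * y**2       # right-hand side of the earlier example

sol = solve_ivp(f, (0.0, 2.0), [0.0], method="RK45", rtol=1e-6, atol=1e-9)
exact = sol.t / (1.0 + sol.t**2)                 # true solution Y(x) = x/(1 + x^2)
print("max error  :", np.max(np.abs(sol.y[0] - exact)))
print("steps taken:", len(sol.t) - 1)            # stepsizes chosen automatically
```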

IMPLICIT RUNGE-KUTTA METHODS

As before, define a p-stage method

y_{n+1} = y_n + h F(x_n, y_n, h; f)
F(x, y, h; f) = Σ_{i=1}^{p} γ_i V_i

But now, let

V_i = f(x + α_i h, y + h Σ_{j=1}^{p} β_{i,j} V_j),    i = 1, ..., p

Thus we must solve p equations in p unknowns at
each step of solving the differential equation.
The reason for doing this is that it leads to numerical
methods with better stability properties when solving
stiff differential equations. In particular, we can obtain
A-stable methods of any desired order.
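As a concrete one-stage illustration (my example, not one worked in the notes), the implicit midpoint rule is the case p = 1 with α_1 = 1/2, β_{1,1} = 1/2, γ_1 = 1. The single stage equation is solved below by fixed-point iteration, which is adequate for non-stiff problems; a stiff solver would use Newton's method on the stage equation instead:

```python
import math

def implicit_midpoint(f, x0, y0, h, nsteps, iters=10):
    """Implicit midpoint rule: V = f(x + h/2, y + (h/2) V), then y_{n+1} = y_n + h V."""
    x, y = x0, y0
    for _ in range(nsteps):
        V = f(x, y)                        # initial guess for the stage value
        for _ in range(iters):             # fixed-point iteration on the stage equation
            V = f(x + h / 2, y + (h / 2) * V)
        y = y + h * V
        x += h
    return y

# Example: y' = -y, y(0) = 1; the method is A-stable and of order 2
print(implicit_midpoint(lambda x, y: -y, 0.0, 1.0, 0.1, 10), math.exp(-1.0))
```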

COLLOCATION METHODS

An important category of implicit Runge-Kutta methods is obtained as follows. Let p >= 1 be an integer,
and let the points

0 <= τ_1 < τ_2 < ... < τ_p <= 1

be given. Assuming we are at the point (x_n, y_n), find
a polynomial q(x) of degree p for which

q(x_n) = y_n
q'(x_n + τ_i h) = f(x_n + τ_i h, q(x_n + τ_i h)),    i = 1, ..., p

Then this is equivalent to an implicit Runge-Kutta
method; and it is also called a block-by-block method.

ORDER OF CONVERGENCE

Assume the {τ_i} have been chosen so that

∫_0^1 ψ(τ) τ^j dτ = 0,    j = 0, 1, ..., m - 1
ψ(τ) = (τ - τ_1) ··· (τ - τ_p)

Then for our collocation method,

T_n(Y) = O(h^{p+m+1})

and the order of convergence is p + m.
Defining {τ_1, ..., τ_p} as the zeros of the Legendre
polynomial of degree p, normalized to [0, 1], leads to
a method of order 2p. These methods are A-stable
for every p >= 1.

ADDITIONAL REFERENCES

Larry Shampine, Numerical Solution of Ordinary Differential Equations, Chapman & Hall, 1994.
Larry Shampine and Mark Reichelt, The MATLAB ODE Suite, SIAM Journal on Scientific Computing 18 (1997), pp. 1-22.
Arieh Iserles, Numerical Analysis of Differential Equations, Cambridge University Press, 1996. This also includes an extensive development of the numerical solution of partial differential equations.
Uri Ascher, Robert Mattheij, and Robert Russell, Numerical Solution of Boundary Value Problems for Ordinary Differential Equations, Prentice-Hall, 1988.
Uri Ascher and Linda Petzold, Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, 1998.
