
2. INITIAL-VALUE PROBLEMS

2.1 The general initial-value problem
2.2 One-step methods
    2.2.1 The Euler method
    2.2.2 Order of accuracy
    2.2.3 Variants of the Euler method
    2.2.4 Runge-Kutta methods
2.3 Multi-step methods
    2.3.1 Basic multi-step predictor
    2.3.2 Predictor-corrector methods
    2.3.3 Error estimation for predictor-corrector methods
2.4 General properties of numerical methods
2.5 Error estimation by Richardson extrapolation

SPRING 2005 - revised

2.1 The General Initial-Value Problem


dy/dx = f(x, y),   y(x0) = y0

Important: y may be a scalar or a vector! The objective is to estimate the values y1, y2, y3, ... at discrete points x1, x2, x3, ... separated by a step length h = xi - xi-1.

All initial-value problems are solved by forward integration, but there are two main types of numerical procedure:
- one-step methods: yi+1 depends only on yi;
- multi-step methods: yi+1 may depend on yi, yi-1, yi-2, ...

[Figure: one-step method - the new point at xi+1 is obtained from the last point at xi only; multi-step method - the new point at xi+1 is obtained from several old points xi, xi-1, xi-2, ...]

Note that the function f always gives the gradient of the curve at any point.


2.2 One-Step Methods

Because no information is required from the solution prior to the last known point, in one-step methods each step can be considered as a new initial-value problem:
(x0, y0) -> (x1, y1) = (x0 + h, y0 + k)
where, formally,
y1 = y0 + ∫[x0, x0+h] f dx = y0 + h fav
(fav being the average value of the gradient over the interval). The problem is that if the expression for the gradient f depends on y then we do not know how it varies over the interval without actually knowing the solution.

2.2.1 The Euler Method (aka forward-differencing)

Estimate the average derivative from the gradient at the start of the interval.

Equation:               dy/dx = f(x, y),   y(x0) = y0
Estimate of increment:  k = h f(x0, y0)
Update:                 y1 = y0 + k

[Figure: Euler estimate vs the actual solution over the interval from x0 to x1 = x0 + h.]
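To make the stepping procedure concrete, here is a minimal Python sketch of the Euler method (the names euler_step and integrate are illustrative, not part of the notes):

```python
def euler_step(f, x, y, h):
    """One Euler step: y_new = y + h * f(x, y)."""
    return y + h * f(x, y)

def integrate(f, x0, y0, h, n_steps, step=euler_step):
    """March forward n_steps from (x0, y0) with a fixed step length h."""
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(n_steps):
        y = step(f, x, y, h)
        x = x + h
        xs.append(x)
        ys.append(y)
    return xs, ys

# Example: dy/dx = x + y, y(0) = 1, integrated to x = 1 with h = 0.1
xs, ys = integrate(lambda x, y: x + y, 0.0, 1.0, 0.1, 10)
```

The same driver can be re-used for the other one-step methods below by swapping the step function.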

2.2.2 Order of Accuracy

The main problem with the Euler method is the error associated with neglecting the change in dy/dx over the interval. This is related to the order (of accuracy) of the method.

Taylor's Theorem:
y(x0 + h) = y(x0) + h y'(x0) + (h^2/2) y''(x0) + ... + (h^(n-1)/(n-1)!) y^(n-1)(x0) + (h^n/n!) y^(n)(ξ)
for some ξ in the interval (x0, x0 + h).

The Euler method corresponds to just the first two terms on the RHS:
y(x0 + h) ≈ y(x0) + h y'(x0)
For each step, the truncation error is proportional to h^2. However, the number of steps required to cover a fixed distance L is L/h. Hence, the total accumulated error is proportional to (1/h) × h^2 = h. The Euler method is said to be of order 1, or of first-order accuracy.

General Definition of Order
If, for a numerical scheme, the error in approximating the value at a fixed position is proportional to (step size)^n then that numerical scheme is said to be of order n.


In general, methods which approximate y(x0 + h) by retaining terms in the Taylor series up to order h^n, or whose first omitted term is of order h^(n+1), will be of order n. The higher the value of n, then, in general, the faster the error falls off as the step size is reduced and hence the more accurate the scheme. However, this usually means more work per step.
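As a quick, self-contained check of this behaviour (illustrative only), the accumulated error of the Euler method for dy/dx = y, y(0) = 1 should roughly halve each time the step size is halved:

```python
import math

def euler(f, x0, y0, h, n):
    """Plain Euler integration over n steps; returns the final y value."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: y          # exact solution at x = 1 is e
for h in (0.1, 0.05, 0.025):
    err = abs(euler(f, 0.0, 1.0, h, round(1.0 / h)) - math.e)
    print(f"h = {h:5.3f}   error at x = 1: {err:.5f}")
# The error ratio between successive rows should be close to 2 (first-order behaviour).
```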

2.2.3 Variants of the Euler Method

Modified Euler Method

Make successive estimates of the average slope, and hence the y increment:
k0 = h f(x0, y0)             from the gradient at the start of the interval
k1 = h f(x0 + h, y0 + k0)    from the (estimated) gradient at the end of the interval
Use the average of these changes (equivalent to going half-way at one gradient, then half-way at the other):
k = (k0 + k1)/2

Equation:                dy/dx = f,   y(x0) = y0
Estimates of increment:  k0 = h f(x0, y0),   k1 = h f(x0 + h, y0 + k0)
Update:                  y1 = y0 + (k0 + k1)/2

[Figure: modified-Euler and Euler estimates over the interval from x0 to x1.]
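A minimal sketch of the modified-Euler update, compatible with the integrate driver sketched earlier (names illustrative):

```python
def modified_euler_step(f, x, y, h):
    """Average of the slopes at the start and (estimated) end of the interval."""
    k0 = h * f(x, y)
    k1 = h * f(x + h, y + k0)
    return y + 0.5 * (k0 + k1)
```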

Mid-Point Method

Set:
k0 = h f(x0, y0)                 from the gradient at the start of the interval
k1 = h f(x0 + h/2, y0 + k0/2)    from the gradient at the (estimated) mid-point of the interval
Estimate the change in y from the gradient at the mid-point only:
k = k1

Equation:                dy/dx = f,   y(x0) = y0
Estimates of increment:  k0 = h f(x0, y0),   k1 = h f(x0 + h/2, y0 + k0/2)
Update:                  y1 = y0 + k1

[Figure: mid-point-method and Euler estimates over the interval from x0 to x1.]
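And the corresponding mid-point update (again a sketch, with illustrative naming):

```python
def midpoint_step(f, x, y, h):
    """Advance using the gradient estimated at the mid-point of the interval."""
    k0 = h * f(x, y)
    k1 = h * f(x + 0.5 * h, y + 0.5 * k0)
    return y + k1
```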


By writing these two methods as:
Modified Euler:    y(x0 + h) = y(x0) + (h/2)[f(x0, y0) + f(x0 + h, y0 + h f(x0, y0))]
Mid-point method:  y(x0 + h) = y(x0) + h f(x0 + h/2, y0 + (h/2) f(x0, y0))
and expanding using Taylor's theorem, it can be shown (see the Example Sheet) that the truncation error in each case is of order h^3, and hence both are second-order accurate.

If the expression f(x, y) for the derivative happens to be linear in both x and y, e.g.
dy/dx = x + y,
then the modified-Euler and mid-point methods give the same answer.

2.2.4 Runge-Kutta Methods

The Euler method is first-order accurate and requires only one evaluation of f at each step. The modified-Euler and mid-point methods are second-order accurate, but require two evaluations of f at each step, so increasing the computational cost. In general, the higher the accuracy required, the more function evaluations are necessary. The methods described so far are special cases of so-called Runge-Kutta methods:
y1 = y0 + Σ(j=0 to r-1) wj kj,    where    Σ(j=0 to r-1) wj = 1

wj are the weights. The order of the method is less than or equal to r. The scheme is explicit if each kj only depends upon k0, k1, ..., kj-1; that is, on estimates of the y increment that have already been computed.
The Euler and modified-Euler methods are 1st- and 2nd-order explicit Runge-Kutta methods, respectively. However, the most popular variant is the 4th-order explicit Runge-Kutta method, often referred to simply as the Runge-Kutta method; it is probably the single most popular method in engineering:

Equation:  dy/dx = f,   y(x0) = y0
Estimates of increment:
k0 = h f(x0, y0)
k1 = h f(x0 + h/2, y0 + k0/2)
k2 = h f(x0 + h/2, y0 + k1/2)
k3 = h f(x0 + h, y0 + k2)
Update:  y1 = y0 + (k0 + 2k1 + 2k2 + k3)/6

Note:
(i) If f is actually independent of y then k1 = k2 and the scheme reduces to Simpson's rule for numerical integration.
(ii) There are 4 function evaluations in all, none of which will be used again. There is always a trade-off between accuracy and computational expense.
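A direct transcription of the classical 4th-order scheme into Python (function name illustrative):

```python
def rk4_step(f, x, y, h):
    """Classical 4th-order Runge-Kutta step."""
    k0 = h * f(x, y)
    k1 = h * f(x + 0.5 * h, y + 0.5 * k0)
    k2 = h * f(x + 0.5 * h, y + 0.5 * k1)
    k3 = h * f(x + h, y + k2)
    return y + (k0 + 2.0 * k1 + 2.0 * k2 + k3) / 6.0
```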


A slightly more complex method is Runge-Kutta-Merson:

Equation:  dy/dx = f,   y(x0) = y0
Estimates of increment:
k0 = h f(x0, y0)
k1 = h f(x0 + h/3, y0 + k0/3)
k2 = h f(x0 + h/3, y0 + (k0 + k1)/6)
k3 = h f(x0 + h/2, y0 + (k0 + 3k2)/8)
k4 = h f(x0 + h, y0 + (k0 - 3k2 + 4k3)/2)
Update:  y1 = y0 + (k0 + 4k3 + k4)/6

Although this achieves the same (4th-order) accuracy from five function evaluations, it does also permit an estimate of the local truncation error:
E = (1/15)(k0 - (9/2)k2 + 4k3 - (1/2)k4)
and hence allows some degree of error control / step-size adjustment.
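A sketch of the Runge-Kutta-Merson step, returning both the update and the local error estimate given above (names illustrative):

```python
def rkm_step(f, x, y, h):
    """Runge-Kutta-Merson step: returns (y1, estimated local truncation error)."""
    k0 = h * f(x, y)
    k1 = h * f(x + h / 3.0, y + k0 / 3.0)
    k2 = h * f(x + h / 3.0, y + (k0 + k1) / 6.0)
    k3 = h * f(x + 0.5 * h, y + (k0 + 3.0 * k2) / 8.0)
    k4 = h * f(x + h, y + (k0 - 3.0 * k2 + 4.0 * k3) / 2.0)
    y1 = y + (k0 + 4.0 * k3 + k4) / 6.0
    err = (k0 - 4.5 * k2 + 4.0 * k3 - 0.5 * k4) / 15.0
    return y1, err
```

A simple controller might, for example, halve h whenever |err| exceeds the tolerance and increase it when |err| is much smaller.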


2.3 Multi-Step Methods

For the initial-value problem:
dy/dx = f(x, y),   y(x0) = y0
forward integration from yi to yi+1 depends on previous values yi, yi-1, ..., yi-l. There is a formal update:
yi+1 = yi-m + ∫[xi-m, xi+1] f(x, y) dx
but evaluation of the integral on the RHS requires knowledge of y(x) between xi and xi+1, and this depends on the solution itself.

[Figure: multi-step method - the new point at xi+1 is obtained from old points at xi, xi-1, xi-2, ...]

General Assessment
- (con) Multi-step methods are not self-starting (one-step methods are needed to supply the first few values).
- (pro) Multi-step methods re-use, and hence make more efficient use of, function evaluations.
- (pro) Used as part of a predictor-corrector methodology (see below), multi-step methods are amenable to error estimation and hence step-size adjustment.

2.3.1 Basic Multi-Step Predictor

Basic Idea

Write fi = f(xi, yi).
(i)  Fit a simple polynomial f(x) through the values at nodes i, i-1, i-2, ..., i-l.
(ii) Integrate the fitted polynomial:
     yi+1 = yi-m + ∫[xi-m, xi+1] f(x) dx
     to advance the solution to the next node i+1.

Since the number of points through which the polynomial is fitted and the lower limit of integration can be chosen arbitrarily, there are an infinite number of possible multi-step methods, but the most popular are given below.


Fitting a Function by Lagrange Interpolation

A good method is to sum over simple polynomials which have the given value at a particular node and zero at the others. The following examples show how this can be done.

Two points can be fitted with a straight line, e.g. points (x, f) = (2, 4) and (3, 7):
f = 4 (x - 3)/(2 - 3) + 7 (x - 2)/(3 - 2)
  = -4(x - 3) + 7(x - 2)
  = 3x - 2
(The first basis polynomial is 1 at x = 2 and 0 at x = 3; the second is 1 at x = 3 and 0 at x = 2.)

Three points can be fitted with a quadratic polynomial, e.g. points (x, f) = (2, 4), (3, 7) and (4, 12):
f = 4 (x - 3)(x - 4)/[(2 - 3)(2 - 4)] + 7 (x - 2)(x - 4)/[(3 - 2)(3 - 4)] + 12 (x - 2)(x - 3)/[(4 - 2)(4 - 3)]
  = 2(x^2 - 7x + 12) - 7(x^2 - 6x + 8) + 6(x^2 - 5x + 6)
  = x^2 - 2x + 4
(Each basis polynomial is 1 at its own node and 0 at the other two.)

4th-order schemes can be derived by fitting a function through 4 points. If the nodes are equally spaced with step size h then, using the above method, the simplest function that can be fitted through fi, fi-1, fi-2 and fi-3 is (after much algebra):
f = -(fi-3/6)(X + 2)(X + 1)X + (fi-2/2)(X + 3)(X + 1)X - (fi-1/2)(X + 3)(X + 2)X + (fi/6)(X + 3)(X + 2)(X + 1)
where X = (x - xi)/h.

In general, to fit a polynomial to the set of points {(xi, fi)}:
f = Σi fi Pi(x)
where the polynomials Pi(x) have the value 1 at xi and 0 at all the other points:
Pi(x) = Π(r ≠ i) (x - xr)/(xi - xr)
(The symbol Π means a product, just like Σ means a sum.)
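The same construction in code, as a small sketch (lagrange_fit is an illustrative name, not a library routine):

```python
def lagrange_fit(xs, fs):
    """Return a callable polynomial through the points (xs[i], fs[i]):
    f(x) = sum_i fs[i] * P_i(x), with P_i = 1 at xs[i] and 0 at the other nodes."""
    def interpolant(x):
        total = 0.0
        for i, xi in enumerate(xs):
            basis = 1.0
            for r, xr in enumerate(xs):
                if r != i:
                    basis *= (x - xr) / (xi - xr)
            total += fs[i] * basis
        return total
    return interpolant

# The quadratic example above: (2,4), (3,7), (4,12)  ->  x^2 - 2x + 4
quad = lagrange_fit([2.0, 3.0, 4.0], [4.0, 7.0, 12.0])
print(quad(3.0))   # 7.0
print(quad(5.0))   # 19.0 (= 25 - 10 + 4)
```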


4th-Order Schemes

If we integrate the above function f between X = -3 and X = 1 we obtain Milne's formula:
yi+1 = yi-3 + h ∫[-3, 1] f dX = yi-3 + (4/3) h (2fi-2 - fi-1 + 2fi)

[Figure: the polynomial fitted through fi-3, fi-2, fi-1, fi is integrated from xi-3 to xi+1.]

Alternatively, if we integrate between X = 0 and X = 1 we obtain the Adams-Bashforth formula:
yi+1 = yi + h ∫[0, 1] f dX = yi + (1/24) h (-9fi-3 + 37fi-2 - 59fi-1 + 55fi)

[Figure: the polynomial fitted through fi-3, fi-2, fi-1, fi is integrated from xi to xi+1.]

In general, fitting a function to n points produces an nth-order method. Note that the function evaluations fi are used in several update steps; this makes for good efficiency.
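A sketch of a single Adams-Bashforth update, written so that the stored gradient values are re-used rather than recomputed (names illustrative; the first three steps would have to be supplied by a one-step method such as Runge-Kutta):

```python
def adams_bashforth4_step(fs, y_i, h):
    """4th-order Adams-Bashforth step.
    fs = [f_{i-3}, f_{i-2}, f_{i-1}, f_i]: gradient values stored from earlier steps."""
    fi3, fi2, fi1, fi = fs
    return y_i + h / 24.0 * (-9.0 * fi3 + 37.0 * fi2 - 59.0 * fi1 + 55.0 * fi)
```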


2.3.2 Predictor-Corrector Methods

Basic idea. Once one has a predicted value for yi+1, one can then find a new interpolation formula for f and use the integral formulation again to improve the estimate:
yi+1 = yi-r + ∫[xi-r, xi+1] f(x, y) dx

The general methodology is:
- Predict yi+1^(0) from yi, yi-1, yi-2, ..., yi-r
- Correct: yi+1^(1) from yi+1^(0), yi, yi-1, ..., yi-s
- and (if desired) again: yi+1^(2) from yi+1^(1), yi, yi-1, ..., yi-s, etc.

The predictor is applied just once and extrapolates the function f to get a reasonable estimate of yi+1. The corrector can be (and often is) applied multiple times; this effectively forms an iterative solution of an implicit equation.

Simplest Example of a Predictor-Corrector Method: Modified Euler
(This is actually a one-step method.)

Predictor:  yi+1^(0) = yi + h f(xi, yi)
Corrector:  yi+1^(1) = yi + (h/2)[f(xi, yi) + f(xi+1, yi+1^(0))]

The difference from the previous scheme, however, is that the corrector step could be applied repeatedly:
yi+1^(2) = yi + (h/2)[f(xi, yi) + f(xi+1, yi+1^(1))]
yi+1^(3) = yi + (h/2)[f(xi, yi) + f(xi+1, yi+1^(2))]
...
until convergence is achieved. In fact, we end up solving, iteratively, the discrete approximation to the implicit equation (the unknown yi+1 appears on both sides):
yi+1 = yi + (h/2)[f(xi, yi) + f(xi+1, yi+1)]
This is not the same as the differential equation, except in the limit h → 0.
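A minimal sketch of this iterated scheme, assuming a fixed number of corrector sweeps rather than a formal convergence test (names and the default sweep count are illustrative):

```python
def modified_euler_pc_step(f, x, y, h, n_corrector=3):
    """Euler predictor followed by repeated trapezoidal corrector sweeps."""
    fx = f(x, y)
    y_next = y + h * fx                           # predictor
    for _ in range(n_corrector):                  # corrector, applied repeatedly
        y_next = y + 0.5 * h * (fx + f(x + h, y_next))
    return y_next
```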

Some popular 4th-order methods are as follows.


Milne-Simpson method

Milne predictor:     yi+1^(0) = yi-3 + (4/3) h [2fi-2 - fi-1 + 2fi]
Simpson corrector:   yi+1^(1) = yi-1 + (1/3) h [fi-1 + 4fi + fi+1^(0)]

(where, for example, fi+1^(0) ≡ f(xi+1, yi+1^(0))).

Advantage: only 3 function evaluations at each step, yet 4th-order accurate.
Disadvantage: prone to numerical instability.
(You may note that the corrector step is precisely Simpson's rule for numerical integration.)

Adams-Bashforth-Moulton method

Adams-Bashforth predictor:
yi+1^(0) = yi + (1/24) h [-9fi-3 + 37fi-2 - 59fi-1 + 55fi]
Local error:  y - yi+1^(0) = (251/720) h^5 y^(v)(ξ)

[Figure: the predictor extrapolates the fit through fi-3, fi-2, fi-1, fi forward over (xi, xi+1).]

Adams-Moulton corrector:
yi+1^(1) = yi + (1/24) h [fi-2 - 5fi-1 + 19fi + 9fi+1^(0)]
Local error:  y - yi+1^(1) = -(19/720) h^5 y^(v)(ξ)

[Figure: the corrector uses the fit through fi-2, fi-1, fi, fi+1 over (xi, xi+1).]

4th-order accurate, but formal error slightly larger than Milne-Simpson. Less prone to numerical instability.
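A sketch of one Adams-Bashforth-Moulton step, returning both the predicted and corrected values so that the error estimate of the next section can be formed (names illustrative):

```python
def abm4_step(f, x_i, y_i, fs, h, n_corrector=1):
    """One ABM step. fs = [f_{i-3}, f_{i-2}, f_{i-1}, f_i], stored from earlier steps."""
    fi3, fi2, fi1, fi = fs
    # Adams-Bashforth predictor
    y_pred = y_i + h / 24.0 * (-9.0 * fi3 + 37.0 * fi2 - 59.0 * fi1 + 55.0 * fi)
    # Adams-Moulton corrector (may be applied more than once)
    y_corr = y_pred
    for _ in range(n_corrector):
        f_new = f(x_i + h, y_corr)
        y_corr = y_i + h / 24.0 * (fi2 - 5.0 * fi1 + 19.0 * fi + 9.0 * f_new)
    return y_pred, y_corr
```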


2.3.3 Error Estimation For Predictor-Corrector Methods

If the predictor and corrector steps are of the same order of accuracy then they yield a particularly straightforward means of error estimation. For example, with the Adams-Bashforth-Moulton method, let yi+1 be the exact solution. Then, from the given error terms:
Predictor:  yi+1 - yi+1^(0) = (251/720) h^5 y^(v)(ξ)
Corrector:  yi+1 - yi+1^(1) = -(19/720) h^5 y^(v)(η)
(The coefficient in the corrector error is smaller, as one might hope: doing a correction step should improve the accuracy of the solution!)

Basic idea
Assume the higher-order derivatives in the error terms are the same, i.e. y^(v)(ξ) ≈ y^(v)(η). Then subtraction eliminates the exact solution and gives
yi+1^(1) - yi+1^(0) ≈ (270/720) h^5 y^(v)
so that
h^5 y^(v) ≈ (720/270)(yi+1^(1) - yi+1^(0))
Finally, substituting in the error term (for the corrector) gives an estimated error
E ≈ -(19/720) h^5 y^(v) ≈ -(19/270)(yi+1^(1) - yi+1^(0))
Hence,
error ≈ -(19/270) × (corrector - predictor)

Thus, the estimated error is proportional to the difference between corrector and predictor.
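In code this is a one-line helper on top of the abm4_step sketch above (the sign convention, exact minus corrected, is as reconstructed here; treat it as an assumption and use the magnitude for step-size control):

```python
def abm4_error_estimate(y_pred, y_corr):
    """Estimated local error of the corrected value, from the predictor-corrector difference."""
    return -19.0 / 270.0 * (y_corr - y_pred)

# Typical (illustrative) use for step-size control:
# y_pred, y_corr = abm4_step(f, x_i, y_i, fs, h)
# if abs(abm4_error_estimate(y_pred, y_corr)) > tol:
#     ...halve h and repeat the step...
```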


2.4 General Properties of Numerical Schemes

Denote the analytical solution by an overbar, i.e. ȳ.

Consistency
A numerical method is consistent (with the differential equation it approximates) if the numerical approximation tends to the actual differential equation as the step size tends to zero; e.g.
(yi+1 - yi)/h - (dy/dx)|i → 0   as h → 0, for all i

Convergence
A method is convergent (with respect to the differential equation it approximates) if the maximum deviation from the exact solution vanishes as the step size tends to zero; i.e.
max_i |yi - ȳ(xi)| → 0   as h → 0

Stability
A method is stable if small changes to (x0, y0) produce small changes in the solution; i.e. small errors do not grow excessively.

Stiff Equations
Certain types of equations are particularly stiff, i.e. stability of their numerical solution requires a very small step size. An important example is exponential decay:
dy/dx = -λy,   y(0) = y0

The exact solution y = y0 e^(-λx) decays with x for any positive λ; however, for too large a step size, the magnitude of the numerical solution may actually increase exponentially. For example, with the Euler method:
yi+1 = yi + h f(xi, yi) = yi + h(-λyi) = (1 - λh) yi
The solution after n steps is
yn = (1 - λh)^n y0
If λh > 1 then the numerical solution oscillates in sign. If λh > 2 the magnitude of these oscillations will increase exponentially.

(This analysis has been performed for the equation dy/dx = -λy. It is easily applied, on a local level, to the general equation dy/dx = f(x, y) if we take λ to be -∂f/∂y.)
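The three regimes are easy to demonstrate numerically; a small illustrative check with λ = 10:

```python
lam = 10.0
for h in (0.05, 0.15, 0.25):        # lam*h = 0.5, 1.5, 2.5
    y = 1.0
    for _ in range(20):
        y = (1.0 - lam * h) * y     # Euler update for dy/dx = -lam*y
    print(f"lam*h = {lam * h:3.1f}   y after 20 steps = {y:.3e}")
# lam*h = 0.5: smooth decay; 1.5: decaying oscillation; 2.5: exponentially growing oscillation.
```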


2.5 Error Estimation by Richardson Extrapolation

Basic Idea: solve the problem with two different step sizes in order to:
- improve the solution;
- estimate the error.

Suppose the global (i.e. accumulated) error of a numerical method is
En = C h^n
where n is the order of the method and C is some (unknown) constant. Let yh and y2h be the predicted solutions at some point using step sizes h and 2h, and let ȳ be the exact (but unknown) solution. Then the errors (here defined as yexact - ynumerical) are:
ȳ - y2h = C (2h)^n
ȳ - yh = C h^n
These are two equations for two unknowns (ȳ and C). Subtraction eliminates ȳ:
yh - y2h = C h^n (2^n - 1)
and hence one can estimate the error Eh on the finer grid:
Eh = C h^n = (yh - y2h)/(2^n - 1)
In practice, one would find the maximum value of this quantity over all the solution nodes to establish whether one's predicted solution met the desired error tolerance.

Alternatively, eliminating C yields a new estimate of the actual solution:
ȳ ≈ (2^n yh - y2h)/(2^n - 1)

This method of using successively refined grids to estimate the error and/or find a better solution is called Richardson extrapolation. In practice, it is more often used for error estimation than for improving the solution, because the latter relies more heavily on accurate knowledge of the order n and on the assumption that higher-order error terms are small.
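A minimal sketch of the two formulas (richardson is an illustrative name):

```python
def richardson(y_h, y_2h, order):
    """Richardson extrapolation from solutions with step sizes h and 2h.
    Returns (estimated error on the finer grid, improved solution estimate)."""
    factor = 2 ** order
    err_h = (y_h - y_2h) / (factor - 1)              # E_h = C h^n
    y_improved = (factor * y_h - y_2h) / (factor - 1)
    return err_h, y_improved

# Illustrative use with the first-order Euler integrator sketched in section 2.2:
# y_h  = euler(f, 0.0, 1.0, 0.05, 20)   # step size h
# y_2h = euler(f, 0.0, 1.0, 0.10, 10)   # step size 2h
# err, better = richardson(y_h, y_2h, order=1)
```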

