
Differential Equations

Differential equations (DEs) appear throughout science, e.g. Newton's second law:

$$ f = ma = m\frac{d^2x}{dt^2} $$

Solving DEs is a process of integration (the derivative terms are eliminated). For example, a first-order rate law:


$$ \text{rate} = -\frac{d[A]}{dt} = k[A] $$

$$ \frac{1}{[A]}\,d[A] = -k\,dt $$

$$ \int_{[A]_0}^{[A]} \frac{1}{[A]}\,d[A] = -k\int_0^t dt $$

$$ \ln[A] - \ln[A]_0 = -kt \quad\Rightarrow\quad \ln\frac{[A]}{[A]_0} = -kt $$

Any differential equation of the type y' = f(x, y) will give an infinite set of solutions, because the process of integration introduces arbitrary constants. Extra conditions must therefore be given: a 1st-order equation needs one extra condition; an nth-order equation needs n extra conditions.

e.g.

$$ \frac{dy}{dx} = x \quad\Rightarrow\quad dy = x\,dx \quad\Rightarrow\quad y = \frac{x^2}{2} + c $$

An extra condition is needed to find c. Say y(0) = 1 (i.e. y = 1 when x = 0); then c = 1. Use analytical solutions where possible, as it is difficult to achieve the same accuracy numerically.
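This is easy to check symbolically; a minimal sketch, assuming SymPy is available (the variable names are illustrative):

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# General solution of dy/dx = x: one arbitrary constant appears
general = sp.dsolve(sp.Eq(y(x).diff(x), x), y(x))
print(general)       # Eq(y(x), C1 + x**2/2)

# Adding the condition y(0) = 1 fixes the constant (c = 1)
particular = sp.dsolve(sp.Eq(y(x).diff(x), x), y(x), ics={y(0): 1})
print(particular)    # Eq(y(x), x**2/2 + 1)
```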

Types of numerical solutions:
- One-step methods use information from a single previous step to find the next point on the solution curve, e.g. Euler's method and the Runge-Kutta method.
- Predictor-corrector methods (multi-step methods) use information from more than one previous step to find the next point; this is an iterative process, e.g. the improved Euler method.

Sources of error in numerical approximations:
- Round-off error: due to the numerical limitations of the computer being used (the number of significant digits it can store and manipulate).
- Truncation error: due to the infinite series used to approximate a function being truncated after a few terms. e.g. the Taylor expansion is an infinite series:

$$ f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \dots + \frac{f^{(n)}(a)}{n!}(x-a)^n + \dots $$

and is usually terminated after some term (see the sketch after this list).
- Propagation error: due to the accumulation of previous errors in a numerical scheme.
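As flagged in the truncation bullet above, the truncation error can be seen directly by cutting off a Taylor series after a few terms. A minimal Python sketch, using e^x about a = 0 (the function and expansion point are illustrative choices):

```python
import math

def taylor_exp(x, n_terms):
    """Partial sum of the Taylor series of e**x about a = 0."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0
for n in (2, 4, 8, 16):
    approx = taylor_exp(x, n)
    print(f"{n:2d} terms: {approx:.10f}  truncation error = {math.e - approx:.2e}")
```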

These three sources give rise to two types of observed error:
- Local error: the amount of error that enters the computational process at any given computational step.
- Global error: the difference between the computed value and the exact solution at any point in the computation; it accounts for the total accumulation of error from the start of the computational process.

Euler-Cauchy Method
- the most basic method
- uses a Taylor series:

$$ y(x+h) = y(x) + hf + \frac{h^2}{2}f' + \frac{h^3}{6}f'' + \dots $$

For small h the h², h³, etc. terms will be small, so we approximate:

$$ y(x+h) \approx y(x) + hf $$

Since f = y' (the slope), a single step of width h gives

$$ \text{slope} = y' = f = \frac{y_1 - y_0}{h} \quad\Rightarrow\quad y_1 = y_0 + hf $$

[Figure: one Euler step of width h from (x0, y0) to (x1, y1) along the slope f.]

Stepwise:

$$ y_1 = y_0 + hf(x_0, y_0) $$
$$ y_2 = y_1 + hf(x_1, y_1) $$

In general:

$$ y_{n+1} = y_n + hf(x_n, y_n) \qquad \text{(Euler-Cauchy formula)} $$

In the type of curve shown, the approximation will always lie below the true curve and the errors will accumulate.

[Figure: true solution vs. computed solution through y0, y1, y2 at x0, x1 = x0 + h, x2 = x1 + h; the computed points fall below the true curve.]

Best results are obtained close to x0 and with small values of h.

Example: Apply the Euler-Cauchy method to the initial value problem y' = x + y with y(0) = 0, in the range x = 0 to 0.8, using h = 0.2. (True solution: y = e^x - x - 1.)

Euler-Cauchy: yn+1 = yn + 0.2(xn + yn)

n   xn    yn       yn+1     True yn+1
0   0.0   0        0        0.021
1   0.2   0        0.04     0.092
2   0.4   0.04     0.128    0.222
3   0.6   0.128    0.274    0.426
4   0.8   0.2736   0.488    0.718

For h = 0.1 the formula becomes yn+1 = yn + 0.1(xn + yn), and the smaller h gives improved results:

n   xn    yn       yn+1     True yn+1
0   0.0   0        0        0.005
1   0.1   0        0.010    0.021
2   0.2   0.010    0.031    0.050
3   0.3   0.031    0.064    0.092
4   0.4   0.0641   0.111    0.149
5   0.5   0.1105   0.172    0.222
6   0.6   0.1716   0.249    0.314
7   0.7   0.2487   0.344    0.426
8   0.8   0.3436   0.458    0.560

(At x = 0.8 the computed value improves from 0.274 with h = 0.2 to 0.344 with h = 0.1; the true value is 0.426.)
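These tables are easy to reproduce with a short script; a minimal sketch of the Euler-Cauchy iteration in Python (the function names are illustrative):

```python
import math

def euler(f, x0, y0, h, n_steps):
    """Euler-Cauchy: y_{n+1} = y_n + h*f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: x + y
true = math.exp(0.8) - 0.8 - 1              # true solution y = e^x - x - 1
for h in (0.2, 0.1, 0.01):
    approx = euler(f, 0.0, 0.0, h, round(0.8 / h))
    print(f"h = {h}: y(0.8) = {approx:.5f}  (true {true:.5f})")
```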

Improved Euler-Cauchy method

Problem with the basic Euler-Cauchy method: only the slope at the previous point is used to get the next point. The improved Euler-Cauchy method instead:
1) obtains an approximation y1* to y1 at x0 + h;
2) gets y1', so that slopes are available at both ends of the interval, i.e. y0' and y1';
3) averages these two slopes, which should give a better direction.

So, first obtain the predictor:

$$ y^*_{n+1} = y_n + hf(x_n, y_n) $$

4) The corrected value uses the average slope:

$$ y_{n+1} = y_n + \frac{h}{2}\left[ f(x_n, y_n) + f(x_{n+1}, y^*_{n+1}) \right] $$

The improved Euler method is thus a predictor-corrector method.

Example: Use the improved Euler-Cauchy method on the previous example: y' = x + y with y(0) = 0, in the range x = 0 to 0.8, using h = 0.2. (True solution: y = e^x - x - 1.)

$$ y^*_{n+1} = y_n + 0.2(x_n + y_n) $$
$$ y_{n+1} = y_n + 0.1\left[ (x_n + y_n) + (x_{n+1} + y^*_{n+1}) \right] $$
n   xn    yn       y*n+1     yn+1    True yn+1
0   0.0   0        0         0.02    0.021
1   0.2   0.02     0.064     0.088   0.092
2   0.4   0.0884   0.18608   0.216   0.222
3   0.6   0.2158   0.37902   0.415   0.426
4   0.8   0.4153   0.6584    0.703   0.718

Better, but still not very good. One could successively halve h until reasonable results are obtained, as in the sketch below.
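A minimal Python sketch of the improved Euler step, halving h as suggested (the function names are illustrative):

```python
import math

def improved_euler(f, x0, y0, h, n_steps):
    """Improved Euler: predictor y* = y + h*f(x, y), then a corrector
    that averages the slopes at both ends of the interval."""
    x, y = x0, y0
    for _ in range(n_steps):
        y_star = y + h * f(x, y)                     # predictor
        y += (h / 2) * (f(x, y) + f(x + h, y_star))  # corrector
        x += h
    return y

f = lambda x, y: x + y
true = math.exp(0.8) - 0.8 - 1                       # true y(0.8)
for h in (0.2, 0.1, 0.05, 0.025):                    # successively halve h
    approx = improved_euler(f, 0.0, 0.0, h, round(0.8 / h))
    print(f"h = {h}: y(0.8) = {approx:.5f}  (true {true:.5f})")
```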
Predictor-Corrector Methods

These methods start with the initial value and some additional points; say we have the initial value y0 and three additional points y1, y2, and y3. Start with the predictor. From our first-order differential equation

$$ \frac{dy}{dx} = f(x, y) $$

Integrate over the interval [x0, x4]:

$$ \int_{x_0}^{x_4} dy = \int_{x_0}^{x_4} f(x, y)\,dx $$

where

$$ \int_{x_0}^{x_4} dy = \big[\,y\,\big]_{x_0}^{x_4} = y(x_4) - y(x_0) $$

so

$$ y(x_4) = y(x_0) + \int_{x_0}^{x_4} f(x, y(x))\,dx $$

Approximating the integral gives yp(x4), the predicted value of y(x4). The predictor formula (an extrapolation formula), in terms of five consecutive points xn-3 to xn+1 (initially x0 to x3, plus x4), is:


$$ y_p(x_{n+1}) = y(x_{n-3}) + \frac{4h}{3}\left[ 2f(x_n, y_n) - f(x_{n-1}, y_{n-1}) + 2f(x_{n-2}, y_{n-2}) \right] $$


The value of yp thus predicted allows us to get a value of f(xn+1, yn+1). The corrector formula is:

$$ y_c(x_{n+1}) = y(x_{n-1}) + \frac{h}{3}\left[ f(x_{n+1}, y_{n+1}) + 4f(x_n, y_n) + f(x_{n-1}, y_{n-1}) \right] $$

where yn+1 = yp(xn+1).


Example: Given

$$ \frac{dy}{dx} = xy $$

with

y(0.0) = 1
y(0.2) = 1.02020
y(0.4) = 1.08329
y(0.6) = 1.19722

use the predictor-corrector method given above to compute y(0.8) and y(1.0). Here h = 0.2.

For this problem f(x, y) = xy, so the predictor becomes

$$ y_p(x_4) = y(x_0) + \frac{4h}{3}\left[ 2(x_3 y_3) - (x_2 y_2) + 2(x_1 y_1) \right] $$


$$ y_p(0.8) = 1 + \frac{4(0.2)}{3}\left[ 2(0.6)(1.19722) - (0.4)(1.08329) + 2(0.2)(1.02020) \right] = 1.37638 $$


The corrector similarly becomes

$$ y_c(x_4) = y(x_2) + \frac{h}{3}\left[ (x_4 y_4) + 4(x_3 y_3) + (x_2 y_2) \right] $$

where y4 is the predicted value.

$$ y_c(0.8) = 1.08329 + \frac{0.2}{3}\left[ (0.8)(1.37638) + 4(0.6)(1.19722) + (0.4)(1.08329) \right] = 1.37714 $$


n   xn    yn        xn*yn     yp        yc        True yn+1
0   0.0   1         0
1   0.2   1.02020   0.20404
2   0.4   1.08329   0.43332
3   0.6   1.19722   0.71833
4   0.8             1.10110   1.37638   1.37714   1.37713
5   1.0             1.64700   1.64700   1.64854   1.64872

(For n = 4 and 5 the product xn*yn is evaluated with the predicted value yp; the true solution is y = e^{x²/2}.)
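A sketch of one step of this predictor-corrector pair in Python, using the starting values from the table (the function names are illustrative):

```python
def pc_step(f, xs, ys, h):
    """One predictor-corrector step as above.

    xs, ys hold the last four points x_{n-3}..x_n and y_{n-3}..y_n.
    Returns (y_predicted, y_corrected) at x_{n+1} = x_n + h.
    """
    x_next = xs[-1] + h
    fs = [f(x, y) for x, y in zip(xs, ys)]                        # f at x_{n-3}..x_n
    y_p = ys[0] + (4 * h / 3) * (2 * fs[3] - fs[2] + 2 * fs[1])   # predictor
    y_c = ys[2] + (h / 3) * (f(x_next, y_p) + 4 * fs[3] + fs[2])  # corrector
    return y_p, y_c

f = lambda x, y: x * y
xs = [0.0, 0.2, 0.4, 0.6]
ys = [1.0, 1.02020, 1.08329, 1.19722]
print(pc_step(f, xs, ys, 0.2))   # (1.37638, 1.37714); true y(0.8) = e**0.32 = 1.37713
```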

Disadvantages of predictor-corrector methods:
- not self-starting, so less convenient for computer solution
- difficult to alter the step size h

Advantages:
- tend to be rapid, since they use information from previous steps

Runge-Kutta methods
- an application of Simpson's rule
- the method most commonly used for computer solution

Advantages:
- simple to use
- allows adjustment of the integration length
- self-starting

Disadvantage:
- requires more calculations at each step, since information from previous steps is not employed


There are various formulae, but the most usual is the fourth-order Runge-Kutta method. We have y' = f(x, y) with y(xn) = yn (initial value). The value yn+1 at xn+1 = xn + h is computed as follows:


$$ k_1 = hf(x_n, y_n) $$
$$ k_2 = hf\left(x_n + \tfrac{h}{2},\; y_n + \tfrac{k_1}{2}\right) $$
$$ k_3 = hf\left(x_n + \tfrac{h}{2},\; y_n + \tfrac{k_2}{2}\right) $$
$$ k_4 = hf(x_n + h,\; y_n + k_3) $$

and then
$$ y_{n+1} = y_n + \frac{1}{6}\left( k_1 + 2k_2 + 2k_3 + k_4 \right) $$


Example: Find y(0.5) from y' = xy with y(0) = 1 and h = 0.1. (True solution: y = e^{0.5x²}.)

n   xn    yn        k1        k2        k3        k4        yn+1      True
0   0.0   1         0         0.00500   0.00501   0.01005   1.00501   1.00501
1   0.1   1.00501   0.01005   0.01515   0.01519   0.02040   1.02020   1.02020
2   0.2   1.02020   0.02040   0.02576   0.02583   0.03138   1.04603   1.04603
3   0.3   1.04603   0.03138   0.03716   0.03726   0.04333   1.08329   1.08329
4   0.4   1.08329   0.04333   0.04972   0.04987   0.05666   1.13315   1.13315
5   0.5   1.13315
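A minimal Python sketch of the fourth-order Runge-Kutta step applied to this example (the function names are illustrative):

```python
import math

def rk4(f, x0, y0, h, n_steps):
    """Classical fourth-order Runge-Kutta."""
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: x * y
print(rk4(f, 0.0, 1.0, 0.1, 5))    # 1.13315 (to 5 decimal places)
print(math.exp(0.5 * 0.5**2))      # true y(0.5) = e^{0.5*0.5^2} = 1.13315
```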

