For a first-order reaction the rate law separates as:

d[A]/[A] = -k dt

Integrating from [A]0 at t = 0 to [A] at time t:

∫ (from [A]0 to [A]) d[A]/[A] = -k ∫ (from 0 to t) dt

ln[A] - ln[A]0 = -kt

ln([A]/[A]0) = -kt
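As a quick numerical check of the integrated rate law, the sketch below confirms that [A](t) = [A]0 e^(-kt) satisfies ln([A]/[A]0) = -kt. The values k = 0.5 and [A]0 = 2.0 are illustrative choices, not from the notes.

```python
import math

# Illustrative values (assumptions, not from the notes): k = 0.5, [A]0 = 2.0.
k, A0 = 0.5, 2.0

def A(t):
    """Analytic solution of d[A]/dt = -k[A]: [A](t) = [A]0 * exp(-k t)."""
    return A0 * math.exp(-k * t)

# Check the integrated rate law ln([A]/[A]0) = -k t at a few times.
for t in [0.0, 1.0, 2.0]:
    assert abs(math.log(A(t) / A0) + k * t) < 1e-12
```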
Any differential equation of the type y' = f(x, y) will give an infinite set of solutions, because the process of integration introduces arbitrary constants. Extra conditions must therefore be given:

- for a 1st-order equation, one extra condition is needed
- for an nth-order equation, n extra conditions are needed

e.g.

dy/dx = x

dy = x dx

y = x²/2 + c

Need an extra condition to find c. Say y(0) = 1 (i.e. y = 1 when x = 0), then c = 1. Use analytical solutions if possible, as it is difficult to achieve accuracy numerically.
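The particular solution picked out by the initial condition can be checked numerically; a minimal sketch, using a central finite difference to approximate dy/dx:

```python
# Particular solution of dy/dx = x with y(0) = 1: y = x**2/2 + 1 (c = 1).
def y(x):
    return x**2 / 2 + 1

assert y(0) == 1  # the extra condition y(0) = 1 is satisfied

h = 1e-6
for x in [0.0, 0.5, 2.0]:
    slope = (y(x + h) - y(x - h)) / (2 * h)  # central difference ≈ dy/dx
    assert abs(slope - x) < 1e-6             # dy/dx should equal x
```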
Types of numerical solutions:

- One-step methods use info from a single previous step to find the next point on the solution curve y(x), e.g. Euler's method and the Runge-Kutta method.
- Predictor-corrector methods (multi-step methods) use info from more than one previous step to find the next point on the solution curve y(x); this is an iterative process, e.g. the Improved Euler method.
Sources of error when using numerical approximations:

- Round-off error: due to the numerical limitations of the computer being used (the number of significant digits that it can store and manipulate).
- Truncation error: due to the infinite series used to approximate a function being truncated after a few terms, e.g. the Taylor expansion is an infinite series:

f(x) = f(a) + f'(a)(x - a) + f''(a)/2! (x - a)² + ... + f⁽ⁿ⁾(a)/n! (x - a)ⁿ + ...
These sources of error give rise to two types of observed error:

- Local error: the amount of error that enters the computational process at any given computational step.
- Global error: the difference between the computed value and the exact solution at any point in the computation. The global error accounts for the total accumulation of error from the start of the computational process.
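Truncation error can be illustrated by cutting off the Taylor series after a few terms. The sketch below truncates the series for e^x about a = 0; the function and expansion point are illustrative choices.

```python
import math

def taylor_exp(x, n_terms):
    """Approximate e**x by the first n_terms terms of its Taylor series about 0."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0
errors = [abs(math.exp(x) - taylor_exp(x, n)) for n in (2, 4, 8)]
# The truncation error shrinks as more terms of the series are kept.
assert errors[0] > errors[1] > errors[2]
```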
For small h the values of h², h³, etc. will be small, so we approximate:

y(x + h) ≈ y(x) + h f

where f = y' is the slope. Over one step from (x0, y0):

slope = y' = f = (y1 - y0)/h

y1 = y0 + h f

[Figure: one step of size h along the slope f, from (x0, y0) to (x1, y1).]
Stepwise:

y1 = y0 + h f(x0, y0)
y2 = y1 + h f(x1, y1)

In general:

yn+1 = yn + h f(xn, yn)   (the Euler-Cauchy formula)

In the type of curve shown, the approximation will always lie below the true curve and the errors will accumulate.

[Figure: true solution vs. computed solution y0, y1, y2 at x0, x1, x2, each step of width h.]
Example: Apply the Euler-Cauchy method to the initial value problem y' = x + y with y(0) = 0 in the range x = 0 to 0.8, using h = 0.2. (True solution: y = e^x - x - 1.)
Euler-Cauchy: yn+1 = yn + 0.2(xn + yn)

n   xn    yn      yn+1    True yn+1
0   0.0   0       0       0.021
1   0.2   0       0.04    0.092
2   0.4   0.04    0.128   0.222
3   0.6   0.128   0.2736  0.426
4   0.8   0.2736  0.488   0.718

For h = 0.1 (smaller values of h give improved results), yn+1 = yn + 0.1(xn + yn):

n   xn    yn      yn+1    True yn+1
0   0.0   0       0       0.005
1   0.1   0       0.010   0.021
2   0.2   0.010   0.031   0.050
3   0.3   0.031   0.064   0.092
4   0.4   0.064   0.111   0.149
5   0.5   0.111   0.172   0.222
6   0.6   0.172   0.249   0.314
7   0.7   0.249   0.344   0.426
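The h = 0.2 run can be reproduced with a few lines of code; a minimal sketch (the function name euler is an arbitrary choice):

```python
import math

def euler(f, x0, y0, h, n_steps):
    """Basic Euler-Cauchy: y_{n+1} = y_n + h * f(x_n, y_n)."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

f = lambda x, y: x + y
xs, ys = euler(f, 0.0, 0.0, 0.2, 4)
# y4 ≈ 0.2736 vs the true y(0.8) = e**0.8 - 1.8 ≈ 0.4255:
# the computed curve lags below the true one, as the notes describe.
assert abs(ys[4] - 0.2736) < 1e-4
assert abs((math.exp(0.8) - 1.8) - 0.4255) < 1e-4
```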
Improved Euler-Cauchy method

Problem with the basic Euler-Cauchy method: only the slope at the previous point is used to get the next point. Improved Euler-Cauchy:

1) Obtain an approximation to y1 at x0 + h.
2) Get y1', so that we have values of the slope at both ends of the interval, i.e. y0' and y1'.
3) The average of these two slopes, y0' and y1', should give a better direction.

So, first obtain the predictor:

y*n+1 = yn + h f(xn, yn)

then apply the corrector:

yn+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y*n+1)]
Example: Use the Improved Euler-Cauchy method on the previous example: y' = x + y with y(0) = 0 in the range x = 0 to 0.8, using h = 0.2. (True solution: y = e^x - x - 1.)

y*n+1 = yn + 0.2(xn + yn)

yn+1 = yn + 0.1[(xn + yn) + (xn+1 + y*n+1)]
n   xn    yn      y*n+1   yn+1    True yn+1
0   0.0   0       0       0.02    0.021
1   0.2   0.02    0.064   0.0884  0.092
2   0.4   0.0884  0.1861  0.2158  0.222
3   0.6   0.2158  0.3790  0.4153  0.426
4   0.8   0.4153  0.6584  0.7027  0.718
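The improved scheme can also be generated programmatically; a minimal sketch (the function name improved_euler is an arbitrary choice):

```python
def improved_euler(f, x0, y0, h, n_steps):
    """Improved Euler-Cauchy: predict with Euler, correct with the averaged slope."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        x, y = xs[-1], ys[-1]
        y_star = y + h * f(x, y)                            # predictor
        y_next = y + h / 2 * (f(x, y) + f(x + h, y_star))   # corrector
        xs.append(x + h)
        ys.append(y_next)
    return xs, ys

xs, ys = improved_euler(lambda x, y: x + y, 0.0, 0.0, 0.2, 4)
# y(0.8) ≈ 0.4153 vs true 0.4255 -- much closer than basic Euler's 0.2736.
assert abs(ys[4] - 0.41533) < 1e-4
```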
Better, but still not very good. One could successively halve h until reasonable results are obtained.

Predictor-Corrector Methods
Predictor-corrector methods start with the initial value and some additional points. Say we have the initial value y0 and three additional points y1, y2, and y3. Start with the predictor. Starting from our first-order differential equation:

dy/dx = f(x, y)

integrate both sides from x0 to x4:

∫ (from x0 to x4) dy = ∫ (from x0 to x4) f(x, y) dx

where

∫ (from x0 to x4) dy = y(x4) - y(x0)

so

y(x4) = y(x0) + ∫ (from x0 to x4) f(x, y(x)) dx

yp(x4) = predicted value of y(x4). The predictor formula (an extrapolation formula) in terms of five consecutive points xn-3 to xn+1 (initially x0 to x3, and x4) is:

yp(xn+1) = y(xn-3) + (4h/3)[2 f(xn, yn) - f(xn-1, yn-1) + 2 f(xn-2, yn-2)]
The corrector formula is:

yc(xn+1) = y(xn-1) + (h/3)[f(xn+1, yn+1) + 4 f(xn, yn) + f(xn-1, yn-1)]

where yn+1 = yp(xn+1).
Example: Given dy/dx = xy, with

y(0.0) = 1
y(0.2) = 1.02020
y(0.4) = 1.08329
y(0.6) = 1.19722

use the predictor-corrector method given above to compute y(0.8) and y(1.0). Here h = 0.2.
Applying the corrector with f(x, y) = xy:

yc(x4) = y(x2) + (h/3)[x4 y4 + 4 x3 y3 + x2 y2]
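The full computation of y(0.8) and y(1.0) can be sketched as follows. The helper name milne_step is a hypothetical choice; the predictor and corrector formulas inside it are those given above.

```python
import math

def milne_step(f, xs, ys, h):
    """One predictor-corrector step using the last four computed points."""
    n = len(ys) - 1
    x_next = xs[n] + h
    # Predictor: y_{n+1} = y_{n-3} + (4h/3)[2 f_n - f_{n-1} + 2 f_{n-2}]
    y_pred = ys[n - 3] + 4 * h / 3 * (
        2 * f(xs[n], ys[n]) - f(xs[n - 1], ys[n - 1]) + 2 * f(xs[n - 2], ys[n - 2]))
    # Corrector: y_{n+1} = y_{n-1} + (h/3)[f_{n+1} + 4 f_n + f_{n-1}]
    y_corr = ys[n - 1] + h / 3 * (
        f(x_next, y_pred) + 4 * f(xs[n], ys[n]) + f(xs[n - 1], ys[n - 1]))
    return x_next, y_corr

f = lambda x, y: x * y
xs = [0.0, 0.2, 0.4, 0.6]
ys = [1.0, 1.02020, 1.08329, 1.19722]  # starting values from the example
h = 0.2
for _ in range(2):  # advance to x = 0.8 and then x = 1.0
    x_next, y_next = milne_step(f, xs, ys, h)
    xs.append(x_next)
    ys.append(y_next)
# True solution is y = exp(x**2 / 2).
assert abs(ys[4] - math.exp(0.8**2 / 2)) < 1e-3
assert abs(ys[5] - math.exp(1.0**2 / 2)) < 1e-3
```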
Predictor-corrector methods

Disadvantages:
- not self-starting, so less convenient for computer solution
- difficult to alter the step size h

Advantage:
- tend to be rapid, since they use info from previous steps

Runge-Kutta methods (an application of Simpson's rule) - the method most commonly used for computer solution.

Advantages:
- simple to use
- allows adjustment of the integration length
- self-starting

Disadvantage:
- requires more calculations at each step, since info from previous steps is not employed
There are various formulae, but the most usual one is the fourth-order Runge-Kutta. We have y' = f(x, y) with y(xn) = yn (initial value). Then:

k1 = h f(xn, yn)
k2 = h f(xn + h/2, yn + k1/2)
k3 = h f(xn + h/2, yn + k2/2)
k4 = h f(xn + h, yn + k3)

and then

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)
Example: Find y(0.5) from y' = xy with y(0) = 1 and h = 0.1. (True solution: y = e^(0.5x²).)

[Table of yn for n = 0 to 5 not recovered.]
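A sketch of the fourth-order Runge-Kutta scheme above applied to this example (the function name rk4 is an arbitrary choice):

```python
import math

def rk4(f, x0, y0, h, n_steps):
    """Fourth-order Runge-Kutta for y' = f(x, y) with y(x0) = y0."""
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

y_05 = rk4(lambda x, y: x * y, 0.0, 1.0, 0.1, 5)  # 5 steps of h = 0.1 to x = 0.5
# True solution y = exp(0.5 * x**2) gives y(0.5) = exp(0.125) ≈ 1.13315.
assert abs(y_05 - math.exp(0.125)) < 1e-6
```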