
Overview of numerical analysis

Topics: numerical calculus, linear algebraic systems, interpolation, ordinary differential equations, relaxation.
1. System of linear algebraic equations

$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}, \qquad \text{i.e. } Ax = b$$
The system above is of order n. The execution time of Gaussian elimination depends on the number of floating-point operations (FLOPs). Cramer's rule requires ~ (n+1)! FLOPs.

Say for a 99 x 99 matrix in a 2 GHz computer:
- Gaussian elimination: FLOPs ~ 3.5 x 10^5, t = 1.8 x 10^-4 s
- Cramer's rule: FLOPs ~ 100! = 9.3 x 10^157, t = 1.3 x 10^142 years!


Counting the operations in the elimination loops gives

$$G = \sum_{k=1}^{n-1} (n-k)(n-k+2) = \sum_{k=1}^{n-1} \left[ (n-k)^2 + 2(n-k) \right] = \frac{n^3}{3} + \frac{n^2}{2} - \frac{5n}{6} \sim \frac{n^3}{3} + O[n^2]$$
1.1 Band systems (Thomas method)

In typical finite-difference or finite-element schemes, band systems occur often. A tridiagonal system has the form

$$\begin{aligned} d_1 x_1 + c_1 x_2 &= b_1 \\ a_1 x_1 + d_2 x_2 + c_2 x_3 &= b_2 \\ a_2 x_2 + d_3 x_3 + c_3 x_4 &= b_3 \\ &\;\;\vdots \\ a_{n-1} x_{n-1} + d_n x_n &= b_n \end{aligned}$$

In matrix form,

$$\begin{pmatrix} d_1 & c_1 & & & 0 \\ a_1 & d_2 & c_2 & & \\ & a_2 & d_3 & c_3 & \\ & & \ddots & \ddots & \ddots \\ 0 & & & a_{n-1} & d_n \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$$

Instead of storing the full n x n matrix, we need only store the three vectors a, d and c.
Eliminating x_1 from the second equation (pivoting on d_1) gives

$$d_2^{(1)} = d_2 - \frac{a_1}{d_1} c_1, \qquad b_2^{(1)} = b_2 - \frac{a_1}{d_1} b_1$$

so the system becomes

$$\begin{aligned} d_1 x_1 + c_1 x_2 &= b_1 \\ d_2^{(1)} x_2 + c_2 x_3 &= b_2^{(1)} \\ a_2 x_2 + d_3 x_3 + c_3 x_4 &= b_3 \\ &\;\;\vdots \\ a_{n-1} x_{n-1} + d_n x_n &= b_n \end{aligned}$$
Similarly, at each subsequent step x_k is eliminated from equation k+1 by pivoting on $d_k^{(k-1)}$:

$$d_{k+1}^{(k)} = d_{k+1} - \frac{a_k}{d_k^{(k-1)}} c_k, \qquad b_{k+1}^{(k)} = b_{k+1} - \frac{a_k}{d_k^{(k-1)}} b_k, \qquad k = 1, \ldots, n-1$$

(with $d_1^{(0)} = d_1$ and $b_1^{(0)} = b_1$).
Continuing until only the main and upper diagonals remain:

$$\begin{aligned} d_1 x_1 + c_1 x_2 &= b_1 \\ d_2^{(1)} x_2 + c_2 x_3 &= b_2^{(1)} \\ d_3^{(2)} x_3 + c_3 x_4 &= b_3^{(2)} \\ &\;\;\vdots \\ d_n^{(n-1)} x_n &= b_n^{(n-1)} \end{aligned}$$

Then use back substitution:

$$x_n = \frac{b_n^{(n-1)}}{d_n^{(n-1)}}, \qquad x_k = \frac{b_k^{(k-1)} - c_k x_{k+1}}{d_k^{(k-1)}}, \qquad k = n-1, \ldots, 1$$
1.1.1 Thomas method example

$$\begin{aligned} x_1 + 2x_2 &= 2 \\ x_1 + 3x_2 + 2x_3 &= 7 \\ x_2 + 4x_3 + 2x_4 &= 15 \\ 4x_3 + x_4 &= 11 \end{aligned}$$

Eliminating the sub-diagonal one pivot at a time:

$$\left(\begin{array}{cccc|c} 1 & 2 & 0 & 0 & 2 \\ 1 & 3 & 2 & 0 & 7 \\ 0 & 1 & 4 & 2 & 15 \\ 0 & 0 & 4 & 1 & 11 \end{array}\right) \to \left(\begin{array}{cccc|c} 1 & 2 & 0 & 0 & 2 \\ 0 & 1 & 2 & 0 & 5 \\ 0 & 1 & 4 & 2 & 15 \\ 0 & 0 & 4 & 1 & 11 \end{array}\right) \to \left(\begin{array}{cccc|c} 1 & 2 & 0 & 0 & 2 \\ 0 & 1 & 2 & 0 & 5 \\ 0 & 0 & 2 & 2 & 10 \\ 0 & 0 & 4 & 1 & 11 \end{array}\right) \to \left(\begin{array}{cccc|c} 1 & 2 & 0 & 0 & 2 \\ 0 & 1 & 2 & 0 & 5 \\ 0 & 0 & 2 & 2 & 10 \\ 0 & 0 & 0 & -3 & -9 \end{array}\right)$$

Back substitution then gives

$$x_4 = \frac{-9}{-3} = 3, \qquad x_3 = \frac{10 - 2(3)}{2} = 2, \qquad x_2 = \frac{5 - 2(2)}{1} = 1, \qquad x_1 = \frac{2 - 2(1)}{1} = 0$$
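The forward elimination and back substitution above translate directly into code. A Python sketch (an illustration, not part of the original notes; the array names follow the a, d, c, b notation used above):

```python
def thomas(a, d, c, b):
    """Solve a tridiagonal system: a = sub-diagonal (length n-1),
    d = main diagonal (length n), c = super-diagonal (length n-1),
    b = right-hand side (length n)."""
    n = len(d)
    dp, bp = list(d), list(b)        # work on copies, keep the inputs intact
    for k in range(n - 1):           # forward elimination sweep
        m = a[k] / dp[k]
        dp[k + 1] -= m * c[k]
        bp[k + 1] -= m * bp[k]
    x = [0.0] * n
    x[-1] = bp[-1] / dp[-1]
    for k in range(n - 2, -1, -1):   # back substitution
        x[k] = (bp[k] - c[k] * x[k + 1]) / dp[k]
    return x
```

Applied to the example system it reproduces the hand calculation, x = (0, 1, 2, 3).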
1.2 Gauss-Seidel method

Writing $Ax = b$ out and solving the i-th equation for $x_i$:

$$\begin{aligned} x_1 &= \frac{1}{a_{11}}\left(b_1 - a_{12}x_2 - a_{13}x_3 - \cdots - a_{1n}x_n\right) \\ x_2 &= \frac{1}{a_{22}}\left(b_2 - a_{21}x_1 - a_{23}x_3 - \cdots - a_{2n}x_n\right) \\ &\;\;\vdots \\ x_n &= \frac{1}{a_{nn}}\left(b_n - a_{n1}x_1 - a_{n2}x_2 - \cdots - a_{n,n-1}x_{n-1}\right) \end{aligned}$$

Iterate, always using the most recently computed components:

$$\begin{aligned} x_1^{(k+1)} &= \frac{1}{a_{11}}\left(b_1 - a_{12}x_2^{(k)} - a_{13}x_3^{(k)} - \cdots - a_{1n}x_n^{(k)}\right) \\ x_2^{(k+1)} &= \frac{1}{a_{22}}\left(b_2 - a_{21}x_1^{(k+1)} - a_{23}x_3^{(k)} - \cdots - a_{2n}x_n^{(k)}\right) \\ &\;\;\vdots \\ x_n^{(k+1)} &= \frac{1}{a_{nn}}\left(b_n - a_{n1}x_1^{(k+1)} - a_{n2}x_2^{(k+1)} - \cdots - a_{n,n-1}x_{n-1}^{(k+1)}\right) \end{aligned}$$
1.2.1 Gauss-Seidel method example

$$\begin{pmatrix} 64 & -3 & -1 \\ -2 & 90 & -1 \\ 1 & 1 & 40 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 14 \\ 5 \\ 20 \end{pmatrix}$$

$$\begin{aligned} x_1^{(k+1)} &= 0.21875 + 0.04688\,x_2^{(k)} + 0.01563\,x_3^{(k)} \\ x_2^{(k+1)} &= 0.05556 + 0.02222\,x_1^{(k+1)} + 0.01111\,x_3^{(k)} \\ x_3^{(k+1)} &= 0.50000 - 0.02500\,x_1^{(k+1)} - 0.02500\,x_2^{(k+1)} \end{aligned}$$
      k = 0     k = 1     k = 2     k = 3
x1    0.21875   0.22916   0.22955   0.22955
x2    0.05556   0.06621   0.06613   0.06613
x3    0.50000   0.49262   0.49261   0.49261
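A minimal Python sketch of the sweep (illustrative only; the function name and interface are my own, not from the notes):

```python
def gauss_seidel(A, b, x0=None, iterations=10):
    """Gauss-Seidel iteration for Ax = b (A should be diagonally dominant)."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            # sum uses the components of x updated earlier in this same sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

On the 3 x 3 example above it converges to the same values as the table.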
1.2.2 Convergence criterion

A sufficient condition for convergence is diagonal dominance:

$$|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|$$

FLOPs for Gauss-Seidel ~ 4n^2 (k+1), where the number of iterations k depends on the accuracy required.

For a 999 x 999 matrix in a 2 GHz single-processor computer:
- Gaussian elimination: FLOPs ~ 3.3 x 10^8, t = 0.167 s
- Gauss-Seidel method (say 5 iterations): FLOPs ~ 2.4 x 10^7, t = 0.0121 s
1.3 Relaxation

To accelerate the Gauss-Seidel iteration for Ax = b, we can consider

$$z_i^{(m+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(m+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(m)}\right)$$

$$x_i^{(m+1)} = \omega z_i^{(m+1)} + (1 - \omega)\, x_i^{(m)}$$

where $\omega$ is the relaxation parameter.
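The relaxed update can be sketched as follows (illustrative Python; `omega` stands for the relaxation parameter, and `omega = 1` recovers plain Gauss-Seidel):

```python
def sor(A, b, omega, x0=None, iterations=30):
    """Gauss-Seidel with relaxation: x_i <- omega*z_i + (1 - omega)*x_i."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            z = (b[i] - s) / A[i][i]              # plain Gauss-Seidel value
            x[i] = omega * z + (1 - omega) * x[i]  # relaxed update
    return x
```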
2. Interpolation

Find a polynomial P that passes exactly through the data:

$$P[x_i] = f[x_i], \qquad i = 1, \ldots, n$$

Interpolation is NOT curve-fitting!
2.1 Lagrange polynomials

$$P_{n-1}[x] = \sum_{i=0}^{n-1} f[x_i]\, l_i[x], \qquad l_i[x] = \prod_{\substack{j=0 \\ j \neq i}}^{n-1} \frac{x - x_j}{x_i - x_j}, \qquad i = 0, 1, \ldots, n-1 \qquad \text{(Lagrange polynomial)}$$

Lagrange error:

$$f[x] - P_{n-1}[x] = \left[\prod_{i=0}^{n-1} (x - x_i)\right] \frac{f^{(n)}[\xi]}{n!}$$
Suppose for each j (0 <= j <= n-1) there exists a polynomial $l_j$ with

$$l_j[x_k] = \begin{cases} 1, & k = j \\ 0, & k \neq j \end{cases}$$

Then

$$P_{n-1}[x] = \sum_{i=0}^{n-1} f[x_i]\, l_i[x]$$

interpolates the data. To satisfy this, every $x_i$ with $i \neq j$ must be a root of $l_j$:

$$l_j[x] = C (x - x_0)(x - x_1) \cdots (x - x_{j-1})(x - x_{j+1}) \cdots (x - x_{n-1}) = C \prod_{\substack{i=0 \\ i \neq j}}^{n-1} (x - x_i)$$

Since $l_j[x_j] = 1$, thus

$$C = \prod_{\substack{i=0 \\ i \neq j}}^{n-1} \frac{1}{x_j - x_i}$$
2.2 Example

Interpolate sin x at the nodes:

i        0       1       2
x_i      0       1       2
f[x_i]   sin 0   sin 1   sin 2

$$l_0[x] = \frac{(x-1)(x-2)}{(0-1)(0-2)} = \frac{(x-1)(x-2)}{2}, \qquad l_1[x] = \frac{(x-0)(x-2)}{(1-0)(1-2)} = -x(x-2), \qquad l_2[x] = \frac{(x-0)(x-1)}{(2-0)(2-1)} = \frac{x(x-1)}{2}$$

$$P_2[x] = \sin(0)\,\frac{(x-1)(x-2)}{2} - \sin(1)\,x(x-2) + \sin(2)\,\frac{x(x-1)}{2}$$
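The basis-polynomial construction can be sketched in Python (an illustration, not from the notes; `lagrange` is my own helper name):

```python
def lagrange(xs, fs, x):
    """Evaluate the Lagrange interpolant through the points (xs[i], fs[i]) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # basis polynomial l_i[x]
        total += fi * li
    return total
```

By construction the interpolant reproduces the data exactly at the nodes, and the basis polynomials sum to 1 everywhere.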
2.3 Pitfalls of interpolating polynomials

High-degree interpolants on equally spaced nodes can oscillate wildly between the nodes, e.g. for

$$f[x] = \frac{1}{a^2 + b^2 x^2}$$
3. Numerical differentiation: 2-point formula

$$f'[x_0] = \lim_{h \to 0} \frac{f[x_0 + h] - f[x_0]}{h} \approx \frac{f[x_0 + h] - f[x_0]}{h} \qquad \text{(two-point formula)}$$

From the Taylor expansion

$$f[x_0 + h] = f[x_0] + h f'[x_0] + \frac{h^2}{2} f''[\xi]$$

it follows that

$$\frac{f[x_0 + h] - f[x_0]}{h} = f'[x_0] + \frac{h}{2} f''[\xi] = f'[x_0] + O[h]$$
3.1 Error of 2-point formula

Take $f[x] = e^x$ at $x_0 = 1$ with $h = 10^{-n}$, $n = 1, \ldots, 10$ (exact answer $f'[1] = e = 2.7182818\ldots$):

n    f[1+h]        f[1+h] - e     f'[1] estimate
1    3.004166024   0.2858841600   2.8588416
2    2.745601015   0.0273191870   2.7319187
3    2.721001470   0.0027196410   2.7196410
4    2.718553670   0.0002718420   2.7184200
5    2.718309012   0.0000271830   2.7183000
6    2.718284547   0.0000027180   2.7180000
7    2.718282100   0.0000002720   2.7200000
8    2.718281856   0.0000000270   2.7000000
9    2.718281831   0.0000000003   3.0000000
10   2.718281829   0.0000000000   0.0000000

The error first shrinks as h decreases, then grows again once rounding error in the difference dominates.
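The behaviour in the table can be checked numerically (a Python sketch, not part of the notes):

```python
import math

def two_point(f, x0, h):
    """Forward-difference estimate of f'[x0]; truncation error O(h)."""
    return (f(x0 + h) - f(x0)) / h

# For moderate h the error falls roughly tenfold per decade of h,
# until rounding error in f[x0 + h] - f[x0] takes over (as in the table).
```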
3.2 Error analysis of 2-point formula

In practice we only have rounded values $\tilde f[x] = f[x] + e[x]$ with $|e[x]| \le \varepsilon$. Then, with $x_1 = x_0 + h$ and $M = \max |f''|$,

$$\left| f'[x_0] - \frac{\tilde f[x_1] - \tilde f[x_0]}{h} \right| \le \left| f'[x_0] - \frac{f[x_1] - f[x_0]}{h} \right| + \frac{|e[x_1]| + |e[x_0]|}{h} \le \frac{M h}{2} + \frac{2\varepsilon}{h}$$

The truncation error shrinks with h, but the rounding error grows as 1/h.
3.3 Step-size optimisation

For a formula with truncation error of order $h^n$ and rounding error $\delta$ in the function values, the total error is of the form

$$Q[h] = \frac{M}{(n+1)!} h^n + \frac{\delta}{h}$$

and setting $dQ/dh = 0$ gives

$$h_{\text{optimal}} = \left[ \frac{(n+1)!\,\delta}{n\,M} \right]^{1/(n+1)}$$

Example: for $f[x] = \cos x$ with $\delta = 0.5 \times 10^{-10}$,

$$h_{\text{optimal}} = \left(6 \times 0.5 \times 10^{-10}\right)^{1/6} \approx 0.02587$$
3.4 Interpolatory differentiation

Take the nodes $x_{-1} = x_0 - h$, $x_0$, $x_1 = x_0 + h$ and differentiate the Lagrange interpolant:

$$P_2[x] = f[x_{-1}] \frac{(x - x_0)(x - x_1)}{(x_{-1} - x_0)(x_{-1} - x_1)} + f[x_0] \frac{(x - x_{-1})(x - x_1)}{(x_0 - x_{-1})(x_0 - x_1)} + f[x_1] \frac{(x - x_{-1})(x - x_0)}{(x_1 - x_{-1})(x_1 - x_0)}$$

$$P_2'[x] = f[x_{-1}] \frac{2x - x_0 - x_1}{2h^2} - f[x_0] \frac{2x - x_{-1} - x_1}{h^2} + f[x_1] \frac{2x - x_{-1} - x_0}{2h^2}$$

Evaluating at $x = x_0$, and using the interpolation error $f[x] - P_2[x] = (x - x_{-1})(x - x_0)(x - x_1)\, \frac{f^{(3)}[\xi]}{6}$:

$$f'[x_0] = \frac{f[x_0 + h] - f[x_0 - h]}{2h} - \frac{h^2}{6} f^{(3)}[\xi] \approx \frac{f[x_0 + h] - f[x_0 - h]}{2h}$$

$$f''[x_0] \approx \frac{f[x_0 + h] - 2f[x_0] + f[x_0 - h]}{h^2}$$
3.5 Method of undetermined coefficients

From the Taylor expansions

$$f[x_0 + h] = f[x_0] + h f'[x_0] + \frac{h^2}{2} f''[x_0] + \frac{h^3}{6} f'''[x_0] + \frac{h^4}{24} f^{(4)}[\xi_1]$$

$$f[x_0 + 2h] = f[x_0] + 2h f'[x_0] + 2h^2 f''[x_0] + \frac{4h^3}{3} f'''[x_0] + \frac{2h^4}{3} f^{(4)}[\xi_2]$$

seek a combination of the form

$$f''[x_0] \approx a f[x_0] + b f[x_0 + h] + c f[x_0 + 2h]$$

Substituting the expansions,

$$a f[x_0] + b f[x_0 + h] + c f[x_0 + 2h] = (a + b + c) f[x_0] + (b + 2c)\, h f'[x_0] + \left(\frac{b}{2} + 2c\right) h^2 f''[x_0] + \left(\frac{b}{6} + \frac{4c}{3}\right) h^3 f'''[\xi]$$

Determine the derivatives by comparing coefficients to the desired degree of accuracy.

For f': requiring $a + b + c = 0$, $(b + 2c)h = 1$ and $\frac{b}{2} + 2c = 0$ gives $a = -\frac{3}{2h}$, $b = \frac{2}{h}$, $c = -\frac{1}{2h}$, i.e.

$$f'[x_0] \approx \frac{-3 f[x_0] + 4 f[x_0 + h] - f[x_0 + 2h]}{2h}$$

Similarly for f'':

$$f''[x_0] \approx \frac{f[x_0] - 2 f[x_0 + h] + f[x_0 + 2h]}{h^2}$$
3.6 Finite difference formulae

Central difference:

$$f'[x_i] \approx \frac{f[x_{i+1}] - f[x_{i-1}]}{2h} + O[h^2], \qquad f''[x_i] \approx \frac{f[x_{i+1}] - 2f[x_i] + f[x_{i-1}]}{h^2} + O[h^2]$$

Forward / backward difference:

$$f'[x_i] \approx \frac{-f[x_{i+2}] + 4f[x_{i+1}] - 3f[x_i]}{2h} + O[h^2], \qquad f''[x_i] \approx \frac{-f[x_{i+3}] + 4f[x_{i+2}] - 5f[x_{i+1}] + 2f[x_i]}{h^2} + O[h^2]$$
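These formulae are one-liners in code. A Python sketch (illustrative; the function names are mine):

```python
def central_first(f, x, h):
    """Central difference for f'(x), error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def forward_first(f, x, h):
    """One-sided (forward) difference for f'(x), error O(h^2)."""
    return (-f(x + 2 * h) + 4 * f(x + h) - 3 * f(x)) / (2 * h)

def central_second(f, x, h):
    """Central difference for f''(x), error O(h^2)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
```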
4. Newton-Cotes integration formulae

Approximate the integral as a weighted sum of function values:

$$\int_a^b f[x]\,dx \approx \sum_{i=0}^{n} w_i f[x_i]$$

Substituting the Lagrange representation

$$f[x] = \sum_i f[x_i]\, l_i[x] + \frac{f^{(n+1)}[\xi]}{(n+1)!} \prod_i (x - x_i)$$

gives

$$\int_a^b f[x]\,dx \approx \sum_i f[x_i] \int_a^b l_i[x]\,dx, \qquad \text{i.e. } w_i = \int_a^b l_i[x]\,dx$$
4.1 Trapezoidal rule

On each interval $[x_i, x_{i+1}]$ use the linear interpolant

$$P_1[x] = f[x_i]\, \frac{x_{i+1} - x}{x_{i+1} - x_i} + f[x_{i+1}]\, \frac{x - x_i}{x_{i+1} - x_i}$$

$$\int_{x_i}^{x_{i+1}} P_1[x]\,dx = (x_{i+1} - x_i)\, \frac{f[x_i] + f[x_{i+1}]}{2} = \frac{h}{2}\left(f[x_i] + f[x_{i+1}]\right) \qquad \text{(elemental trapezoidal rule)}$$

Summing over the intervals:

$$\int_a^b f[x]\,dx \approx \frac{h}{2} \sum_{i=0}^{n-1} \left(f[x_i] + f[x_{i+1}]\right) = \frac{h}{2}\left(f[x_0] + 2f[x_1] + \cdots + 2f[x_{n-1}] + f[x_n]\right) \qquad \text{(composite trapezoidal rule)}$$

Errors:

- interpolation error on $[x_i, x_{i+1}]$: $(x - x_i)(x - x_{i+1})\, \dfrac{f^{(2)}[\xi_i]}{2!}$
- integration error: $\displaystyle \int_{x_i}^{x_{i+1}} (x - x_i)(x - x_{i+1})\, \frac{f^{(2)}[\xi_i]}{2!}\,dx = -\frac{h^3}{12} f^{(2)}[\xi_i]$
- total error: $\displaystyle \sum_{i=0}^{n-1} \left(-\frac{h^3}{12} f^{(2)}[\xi_i]\right) = -\frac{(b-a)\,h^2}{12} f^{(2)}[\bar\xi]$
FUNCTION Trap(h, n, f)
  sum = f_0
  DO i = 1, n-1
    sum = sum + 2*f_i
  END DO
  sum = sum + f_n
  Trap = h * sum / 2
END Trap
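The same routine in Python, taking the function itself rather than pre-sampled values (an illustrative sketch, not from the notes):

```python
def trap(f, a, b, n):
    """Composite trapezoidal rule with n panels on [a, b]."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += 2 * f(a + i * h)   # interior points are counted twice
    return h * s / 2
```

The rule is exact for linear integrands; for smooth f the error shrinks like h^2.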
4.2 Simpson's 1/3 rule

Fit a quadratic through $x_{i-1}$, $x_i$, $x_{i+1}$:

$$P_2[x] = f[x_{i-1}] \frac{(x - x_i)(x - x_{i+1})}{(x_{i-1} - x_i)(x_{i-1} - x_{i+1})} + f[x_i] \frac{(x - x_{i-1})(x - x_{i+1})}{(x_i - x_{i-1})(x_i - x_{i+1})} + f[x_{i+1}] \frac{(x - x_{i-1})(x - x_i)}{(x_{i+1} - x_{i-1})(x_{i+1} - x_i)}$$

$$\int_{x_{i-1}}^{x_{i+1}} P_2[x]\,dx = \frac{h}{3}\left(f[x_{i-1}] + 4f[x_i] + f[x_{i+1}]\right) \qquad \text{(elemental Simpson's rule)}$$

$$\int_a^b f[x]\,dx \approx \frac{h}{3}\left(f[x_0] + 4f[x_1] + 2f[x_2] + 4f[x_3] + \cdots + 2f[x_{n-2}] + 4f[x_{n-1}] + f[x_n]\right) \qquad \text{(composite Simpson's rule)}$$
Errors:

$$\text{error from } x_{i-1} \text{ to } x_{i+1}: \; -\frac{h^5}{90} f^{(4)}[\xi_i], \qquad \text{total error: } -\frac{(b-a)\,h^4}{180} f^{(4)}[\bar\xi]$$
FUNCTION Simpint(a, b, n, f)
  h = (b-a)/n
  Simpint = Simp13(h, n, f)
END Simpint

FUNCTION Simp13(h, n, f)
  sum = f_0
  DO i = 1, n-2, 2
    sum = sum + 4*f_i + 2*f_(i+1)
  END DO
  sum = sum + 4*f_(n-1) + f_n
  Simp13 = h * sum / 3
END Simp13
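A Python version of the composite 1/3 rule (illustrative sketch, not from the notes):

```python
def simpson13(f, a, b, n):
    """Composite Simpson's 1/3 rule on [a, b]; n (number of panels) must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)   # 4, 2, 4, 2, ... pattern
    return h * s / 3
```

Because the elemental error involves f^(4), the rule is exact for cubics.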
4.3 Simpson's 3/8 rule

Fitting a cubic through four points gives the elemental rule

$$\int_{x_i}^{x_{i+3}} P_3[x]\,dx = \frac{3h}{8}\left(f[x_i] + 3f[x_{i+1}] + 3f[x_{i+2}] + f[x_{i+3}]\right)$$

and the composite rule

$$\int_a^b f[x]\,dx \approx \frac{3h}{8}\left(f[x_0] + 3f[x_1] + 3f[x_2] + 2f[x_3] + 3f[x_4] + 3f[x_5] + \cdots + 3f[x_{n-1}] + f[x_n]\right)$$

$$\text{error from } x_i \text{ to } x_{i+3}: \; -\frac{3h^5}{80} f^{(4)}[\xi], \qquad \text{total error: } -\frac{(b-a)\,h^4}{80} f^{(4)}[\bar\xi]$$

Not much improvement is gained over the 1/3 rule.


4.4 Gauss-Legendre quadrature

Objective:

$$\int_a^b f[x]\,dx = \sum_{i=0}^{n} w_i f[x_i]$$

Problem: find the nodes $x_k$ and corresponding weights $w_k$ for various n so that the error is minimum (the nodes are unknowns too, not fixed in advance).

Let $P_j[x] = x^j$ on $[a, b] = [-1, 1]$. Requiring the rule to be exact for each power:

$$\sum_{i=0}^{n} w_i x_i^k = \int_{-1}^{1} x^k\,dx = \frac{1 - (-1)^{k+1}}{k+1} = \begin{cases} 0, & k \text{ odd} \\[4pt] \dfrac{2}{k+1}, & k \text{ even} \end{cases}$$

One point: $w_0 = 2$, $w_0 x_0 = 0 \Rightarrow x_0 = 0$, so

$$\int_{-1}^{1} f[x]\,dx \approx 2 f[0]$$

Two points:

$$w_0 + w_1 = 2, \qquad w_0 x_0 + w_1 x_1 = 0, \qquad w_0 x_0^2 + w_1 x_1^2 = \frac{2}{3}, \qquad w_0 x_0^3 + w_1 x_1^3 = 0$$

$$\Rightarrow \qquad w_0 = w_1 = 1, \qquad x_0 = -\frac{1}{\sqrt{3}}, \quad x_1 = \frac{1}{\sqrt{3}}$$

For example,

$$\int_{-1}^{1} e^x\,dx \approx f\left[-\frac{1}{\sqrt{3}}\right] + f\left[\frac{1}{\sqrt{3}}\right] = 2.343$$
4.4.1 Gauss-Legendre nodes
Points Weight Arguments
1 1 0
2 1 0.577350269
3 0.5555556 0.774596669
0.8888889 0
4 0.3478548 0.861136312
0.6521452 0.339981044
5 0.2369269 0.906179846
0.4786287 0.538469310
0.5688889 0
6 0.1713245 0.932469514
0.3607616 0.661209386
0.4679139 0.238619186
4.4.2 Gaussian limit-transform

The tabulated nodes live on [-1, 1]; map [a, b] onto it with the linear transform $x = \alpha t + \beta$. Requiring $x(-1) = a$ and $x(1) = b$ gives

$$\alpha = \frac{b - a}{2}, \qquad \beta = \frac{a + b}{2}$$

so that

$$I = \int_a^b f[x]\,dx = \frac{b - a}{2} \int_{-1}^{1} f\left[\frac{b - a}{2} t + \frac{a + b}{2}\right]\,dt \approx \frac{b - a}{2} \sum_{i=0}^{n} w_i f\left[\frac{b - a}{2} x_i + \frac{a + b}{2}\right]$$
4.4.3 Example

$$\int_{2.0}^{2.5} e^x\,dx, \qquad \frac{b - a}{2} = 0.25, \qquad \frac{a + b}{2} = 2.25$$

With the 3-point rule, $w_0, w_1, w_2 = \frac{5}{9}, \frac{8}{9}, \frac{5}{9}$ and $x_0, x_1, x_2 = -\sqrt{\frac{3}{5}}, 0, \sqrt{\frac{3}{5}}$:

$$I \approx 0.25\left(\frac{5}{9}\, e^{2.25 - 0.25\sqrt{3/5}} + \frac{8}{9}\, e^{2.25} + \frac{5}{9}\, e^{2.25 + 0.25\sqrt{3/5}}\right) = 4.793 \qquad (\text{exact: } 4.793)$$
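The 3-point rule with the limit transform, as a Python sketch (illustrative; not part of the notes):

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1]: nodes +-sqrt(3/5), 0; weights 5/9, 8/9
GL3 = [(-math.sqrt(0.6), 5.0 / 9.0), (0.0, 8.0 / 9.0), (math.sqrt(0.6), 5.0 / 9.0)]

def gauss3(f, a, b):
    """3-point Gauss-Legendre quadrature on [a, b] via x = (b-a)t/2 + (a+b)/2."""
    half, mid = (b - a) / 2.0, (a + b) / 2.0
    return half * sum(w * f(half * t + mid) for t, w in GL3)
```

With three points the rule is exact for polynomials up to degree 5, and already gives the example integral of e^x to about four digits.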
4.4.4 Characteristics of Gauss-Legendre quadrature

Advantages:
- Can handle singular integrals
- A very small number of points gives very accurate results
- Simple calculations

Disadvantages:
- Complicated theory behind the formulae
- Requires a table of weights and nodes
- Difficult to estimate errors
- Function values from a preceding (coarser) formula cannot be reused when refining: inefficiency
5. Ordinary differential equations

Initial-value problems (IVP): ODEs in which information about the solution is given at a single point, e.g. first-order systems.
Boundary-value problems (BVP): ODEs in which information about the solution is given at two or more points, e.g. higher-order systems.

$$y'[x] = f[x, y[x]], \qquad y[x_0] = y_0$$

That is, given the point $(x_0, y[x_0])$ and the slope $y'[x] = f[x, y]$ on the domain under consideration, find $y[x]$. A general one-step method advances the solution as

$$y_{i+1} = y_i + h\,\phi[x_i, y_i; h]$$

where $\phi$ is the increment function.
5.1 Euler's method

Truncate the Taylor series after the first-order term (m = 1):

$$y_{j+1} = y_j + h\,y_j' = y_j + h\,f[x_j, y_j]$$

- No differentiation needed
- Very simple algorithm
5.1.1 Example

$$y' = x^2 + y^2, \qquad y[0] = 0, \qquad h = 0.1$$

$$y_1 = y_0 + h f[x_0, y_0] = 0, \qquad y_2 = y_1 + h f[x_1, y_1] = 0.001, \qquad \ldots$$

(Plot: Euler solution vs exact solution for 0 <= x <= 1.)

x_j   y_j
0.0   0.000000
0.1   0.000000
0.2   0.001000
0.3   0.005000
0.4   0.014003
0.5   0.030022
0.6   0.055112
0.7   0.091416
0.8   0.141252
0.9   0.207247
1.0   0.292542
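The table above is reproduced by a few lines of Python (illustrative sketch, not from the notes):

```python
def euler(f, x0, y0, h, steps):
    """Explicit Euler: y_{j+1} = y_j + h*f(x_j, y_j); returns y after `steps` steps."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y
```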
5.2 Foundations of Runge-Kutta's method

Consider the Taylor series truncated at m = 2:

$$y_{j+1} = y_j + h\,y_j' + \frac{h^2}{2!} y_j'', \qquad y_j' = f[x_j, y_j]$$

Now approximate $y_j''$ using a small step h* (smaller than h):

$$y_j'' = y''[x_j] \approx \frac{y'[x_j + h^*] - y'[x_j]}{h^*} = \frac{f[x_j + h^*, y[x_j + h^*]] - f[x_j, y_j]}{h^*} \approx \frac{f[x_j + h^*,\; y_j + h^* f[x_j, y_j]] - f[x_j, y_j]}{h^*}$$
Let h* = h:

$$y_{j+1} = y_j + h\left(\frac{1}{2} f[x_j, y_j] + \frac{1}{2} f[x_j + h,\; y_j + h f[x_j, y_j]]\right)$$

Comparing with the two-stage form

$$k_1 = f[x_j, y_j], \qquad k_2 = f[x_j + h,\; y_j + h k_1], \qquad y_{j+1} = y_j + h\left(\alpha_1 k_1 + \alpha_2 k_2\right)$$

gives $\alpha_1 = \alpha_2 = \frac{1}{2}$.
5.2.1 General Runge-Kutta formula

In general,

$$y_{j+1} = y_j + h \sum_{i=1}^{s} \alpha_i k_i$$

$$k_1 = f[x_j, y_j], \qquad k_i = f\left[x_j + \gamma_i h,\; y_j + h \sum_{m=1}^{i-1} \beta_{im} k_m\right], \qquad i = 2, \ldots, s$$

where s is the number of stages.
5.2.2 Euler's method and its modification

s = 1 (Euler's method):

$$y_{j+1} = y_j + \alpha_1 h f[x_j, y_j], \qquad \text{error minimized when } \alpha_1 = 1$$

s = 2 with $\alpha_1 = 0$, $\alpha_2 = 1$ (modified Euler's method):

$$k_1 = f[x_j, y_j], \qquad k_2 = f\left[x_j + \frac{h}{2},\; y_j + \frac{h}{2} k_1\right], \qquad y_{j+1} = y_j + h k_2$$

(The choice $\alpha_1 = \alpha_2 = \frac{1}{2}$ from the previous slide is Heun's method.)
5.2.3 Heun's and Ralston's methods

Heun's 2nd method:

$$y_{j+1} = y_j + h\left(\frac{1}{4} k_1 + \frac{3}{4} k_2\right), \qquad k_1 = f[x_j, y_j], \qquad k_2 = f\left[x_j + \frac{2}{3}h,\; y_j + \frac{2}{3}h k_1\right]$$

Ralston's method:

$$y_{j+1} = y_j + h\left(\frac{1}{3} k_1 + \frac{2}{3} k_2\right), \qquad k_1 = f[x_j, y_j], \qquad k_2 = f\left[x_j + \frac{3}{4}h,\; y_j + \frac{3}{4}h k_1\right]$$
5.2.4 Classical 4th-order Runge-Kutta method

$$y_{j+1} = y_j + \frac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$$

$$k_1 = f[x_j, y_j], \qquad k_2 = f\left[x_j + \frac{h}{2},\; y_j + \frac{h}{2} k_1\right], \qquad k_3 = f\left[x_j + \frac{h}{2},\; y_j + \frac{h}{2} k_2\right], \qquad k_4 = f\left[x_j + h,\; y_j + h k_3\right]$$
SUB Rk4(x, y, h, ynew)
  CALL Derive(x, y, k1)
  ym = y + k1*h/2
  CALL Derive(x+h/2, ym, k2)
  ym = y + k2*h/2
  CALL Derive(x+h/2, ym, k3)
  ye = y + k3*h
  CALL Derive(x+h, ye, k4)
  slope = (k1 + 2*(k2+k3) + k4)/6
  ynew = y + slope*h
  x = x + h
END SUB
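The same step in Python (an illustrative sketch, not from the notes):

```python
def rk4_step(f, x, y, h):
    """One step of the classical 4th-order Runge-Kutta method."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
```

Stepping this through the example y' = x^2 + y^2, y[0] = 0 with h = 0.1 reproduces the worked table in the next subsection.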
5.2.5 Errors of various RK methods
5.2.6 Runge-Kutta method example

$$y' = x^2 + y^2, \qquad y[0] = 0, \qquad h = 0.1$$

i   x_i  y_i          k1           k2           k3           k4           y_{i+1}
0   0    0            0            0.0025       0.002500016  0.010000063  0.000333335
1   0.1  0.000333335  0.010000111  0.022500694  0.022502127  0.040006675  0.002666875
2   0.2  0.002666875  0.040007112  0.062521783  0.062533558  0.090079571  0.009003498
3   0.3  0.009003498  0.090081063  0.122682454  0.122729148  0.160452686  0.021359447
4   0.4  0.021359447  0.160456226  0.203363317  0.20349399   0.251739628  0.041791288
5   0.5  0.041791288  0.251746512  0.305457034  0.305756316  0.365236971  0.072448125
6   0.6  0.072448125  0.365248731  0.430728406  0.431333095  0.503359068  0.115660305
7   0.7  0.115660305  0.503377306  0.582332855  0.583460365  0.670278207  0.174081004
8   0.8  0.174081004  0.670304196  0.765596188  0.767597115  0.872921065  0.250907869
9   0.9  0.250907869  0.872954758  0.989263005  0.992722749  1.122626133  0.350233742
10  1    0.350233742  1.122663674  1.267634078  1.273577737  1.438093656  0.477620091
5.2.7 Runge-Kutta parameters

The central principle of the Runge-Kutta approach is to choose the parameters $\alpha_i$, $\gamma_i$ and $\beta_{im}$ for fixed s so that a power series expansion in h agrees with the Taylor expansion for as high a power of h as possible.

Define

$$g[x_j, y_j, h] = \sum_{i=1}^{s} \alpha_i k_i$$

and require

$$y[x_j + h] = y_j + \sum_{q=1}^{p} \frac{h^q}{q!} y^{(q)}[x_j] = y_j + h \sum_{q=0}^{p-1} \frac{h^q}{q!} \left.\frac{d^q g[x_j, y_j, h]}{dh^q}\right|_{h=0}$$

The resulting equations are almost impossible to solve directly; in practice, trial and error.
5.3 Predictor-corrector algorithms

Use more previous or intermediate data to predict values.

5.3.1 Adams-Bashforth-Moulton formulae

$$y_{j+1} = y_j + h \sum_{m=0}^{p} \beta_m f[x_{j+1-m}, y_{j+1-m}]$$

- $\beta_0 = 0$: Adams-Bashforth predictor (explicit)
- $\beta_0 \neq 0$: Adams-Moulton corrector (implicit)

Not self-starting: need Runge-Kutta to obtain the first few $y_j$.

Procedure:
1. Obtain $y_j, \ldots, y_{j+1-p}$ by a Runge-Kutta formula.
2. Use the Adams-Bashforth predictor to obtain $y_{j+1}$ ($= y_{j+1}^{I}$).
3. Compute $f[x_{j+1}, y_{j+1}^{I}]$.
4. Use Adams-Moulton to construct $y_{j+1}^{II}$ with $f[x_{j+1}, y_{j+1}^{I}]$.
5. Compute $f[x_{j+1}, y_{j+1}^{II}]$.
6. Repeat until converged.
5.3.2 ABM coefficients

$$y_{j+1} = y_j + h \sum_{m=0}^{p} \beta_m f[x_{j+1-m}, y_{j+1-m}]$$

Adams-Bashforth predictor ($\beta_0 = 0$):

p    beta_1      beta_2       beta_3      beta_4       beta_5
1    1
2    3/2         -1/2
3    23/12       -16/12       5/12
4    55/24       -59/24       37/24       -9/24
5    1901/720    -2774/720    2616/720    -1276/720    251/720

Adams-Moulton corrector:

p    beta_0      beta_1      beta_2      beta_3      beta_4
1    1/2         1/2
2    5/12        8/12        -1/12
3    9/24        19/24       -5/24       1/24
4    251/720     646/720     -264/720    106/720     -19/720
5.3.3 Example

The second-order pair for $y' = f[x, y]$:

Adams-Bashforth predictor:

$$y_{j+1}^{P} = y_j + h\left(\frac{3}{2} f[x_j, y_j] - \frac{1}{2} f[x_{j-1}, y_{j-1}]\right)$$

Adams-Moulton corrector:

$$y_{j+1}^{C} = y_j + h\left(\frac{1}{2} f[x_{j+1}, y_{j+1}^{P}] + \frac{1}{2} f[x_j, y_j]\right)$$
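The second-order pair can be sketched in Python (illustrative; the start-up step here uses Heun's RK2 method, one reasonable choice, and a single corrector pass):

```python
def abm2(f, x0, y0, h, steps):
    """2nd-order Adams-Bashforth predictor + Adams-Moulton corrector.
    Not self-starting: the first step uses Heun's (RK2) method."""
    xs = [x0, x0 + h]
    k1 = f(x0, y0)
    ys = [y0, y0 + h * (k1 + f(x0 + h, y0 + h * k1)) / 2]   # RK2 start
    for j in range(1, steps):
        fj = f(xs[j], ys[j])
        fjm1 = f(xs[j - 1], ys[j - 1])
        yp = ys[j] + h * (1.5 * fj - 0.5 * fjm1)        # AB predictor
        yc = ys[j] + h * (f(xs[j] + h, yp) + fj) / 2    # AM corrector (one pass)
        xs.append(xs[j] + h)
        ys.append(yc)
    return xs, ys
```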
5.4 System of ODEs

$$\frac{dy_1}{dx} = f_1[x, y_1, y_2, \ldots, y_n], \qquad \frac{dy_2}{dx} = f_2[x, y_1, y_2, \ldots, y_n], \qquad \ldots, \qquad \frac{dy_n}{dx} = f_n[x, y_1, y_2, \ldots, y_n]$$

All the methods discussed can be extended to the system shown above with no apparent difficulty.
5.5 Higher-order BVP

$$y^{(k)}[x] = f[x, y[x], y'[x], \ldots, y^{(k-1)}[x]], \qquad y^{(j)}[x_0] = y_0^{(j)}, \quad 0 \le j \le k-1$$

Decompose the ODE into a system of 1st-order ODEs by introducing the new variables $y_i$:

$$y_1[x] = y[x], \qquad y_2[x] = y'[x], \qquad \ldots, \qquad y_k[x] = y^{(k-1)}[x]$$

so that

$$\begin{aligned} y_1'[x] &= y_2[x], & y_1[x_0] &= y_0^{(0)} \\ y_2'[x] &= y_3[x], & y_2[x_0] &= y_0^{(1)} \\ &\;\;\vdots \\ y_k'[x] &= f[x, y_1[x], \ldots, y_k[x]], & y_k[x_0] &= y_0^{(k-1)} \end{aligned}$$

In vector form,

$$\mathbf{y}'[x] = \mathbf{f}[x, \mathbf{y}[x]], \qquad \mathbf{y}[x_0] = \begin{pmatrix} y_0^{(0)} \\ \vdots \\ y_0^{(k-1)} \end{pmatrix}$$

Solve as a system of 1st-order ODEs.

5.6 Shooting method

Based on converting the BVP into an IVP and solving by trial and error.

$$T'' - 0.01(T - 20) = 0, \qquad T[0] = 40, \quad T[10] = 200$$

Define $z = \dfrac{dT}{dx}$:

$$\frac{dT}{dx} = z, \qquad \frac{dz}{dx} = 0.01(T - 20)$$

Make guesses for the unknown initial slope z[0]:

$$z[0] = 10 \;\Rightarrow\; T[10] = 168.3797, \qquad z[0] = 20 \;\Rightarrow\; T[10] = 285.8980$$

Since the target T[10] = 200 lies between these two results, the required slope must lie between z[0] = 10 and z[0] = 20. Solve for it with any root-finding technique, e.g. the secant method:

$$z[0] = 10 + \frac{20 - 10}{285.8980 - 168.3797}\,(200 - 168.3797) = 12.6907$$

Continue until converged.
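The whole procedure, RK4 integration of the first-order system plus a secant iteration on the unknown slope, can be sketched in Python (illustrative; the step size h = 0.01 and function names are my own choices):

```python
def solve_temperature(z0, h=0.01, steps=1000):
    """Integrate T' = z, z' = 0.01*(T - 20) from x = 0 to x = 10 with RK4,
    starting from T(0) = 40 and the guessed slope z(0) = z0; returns T(10)."""
    def f(T, z):
        return z, 0.01 * (T - 20.0)
    T, z = 40.0, z0
    for _ in range(steps):
        k1T, k1z = f(T, z)
        k2T, k2z = f(T + h * k1T / 2, z + h * k1z / 2)
        k3T, k3z = f(T + h * k2T / 2, z + h * k2z / 2)
        k4T, k4z = f(T + h * k3T, z + h * k3z)
        T += h * (k1T + 2 * k2T + 2 * k3T + k4T) / 6
        z += h * (k1z + 2 * k2z + 2 * k3z + k4z) / 6
    return T

def shoot(target=200.0, z0=10.0, z1=20.0, tol=1e-8):
    """Secant iteration on the unknown initial slope z = T'(0)."""
    T0, T1 = solve_temperature(z0), solve_temperature(z1)
    while abs(T1 - target) > tol:
        z_new = z1 + (target - T1) * (z1 - z0) / (T1 - T0)
        z0, T0 = z1, T1
        z1, T1 = z_new, solve_temperature(z_new)
    return z1
```

Because this particular ODE is linear, T[10] depends linearly on the initial slope and the secant step lands on the answer essentially in one iteration.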
