Root-finding techniques
[Figure: graphs of y = x and y = cos x on [0, 2]; they intersect where x = cos x]
Alternative (equivalent) questions:
- Fixed point form: what x solves x = g(x)?
- Zero-finding form: what x solves f(x) = 0?
[Figure: graphs of y = x² − 1 and y = x³ − x, each crossing y = 0 on an interval [a, b]]
f(a) = −1, f(b) = +1: opposite signs, but f is discontinuous, so the Intermediate Value Theorem (IMVT) does not hold.
[Figure: graph of y = 1/x on [a, b] with a < 0 < b; the sign change is caused by the discontinuity at 0, not by a root]
NB: E_{n+1} ≤ (b_1 − a_1)/2^{n+1}, so to guarantee E_{n+1} ≤ ε we need

n + 1 > (log(b_1 − a_1) − log ε) / log 2
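The bound above can be checked in code. A minimal sketch of interval bisection; the test function f(x) = x − cos x, the interval [0, 1] and the tolerance are illustrative choices, not from the slides:

```python
import math

def bisect(f, a, b, eps):
    """Interval bisection: repeatedly halve [a, b], keeping the half
    across which f changes sign, until the half-width is below eps."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > eps:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m                # root lies in [a, m]
        else:
            a, fa = m, f(m)      # root lies in [m, b]
    return (a + b) / 2

# Solve x = cos x in zero-finding form: f(x) = x - cos x = 0 on [0, 1].
root = bisect(lambda x: x - math.cos(x), 0.0, 1.0, 1e-8)
```

The number of iterations taken agrees with the bound: n + 1 > (log(b_1 − a_1) − log ε)/log 2.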
Examples
Does the fixed point iteration always work?
g(x) = cos x, x_0 = 0: converges OK.
g(x) = x², x_0 = 2: oh dear, diverges to infinity.
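Both behaviours are easy to reproduce; a quick sketch (the iteration count and the overflow cutoff are illustrative choices):

```python
import math

def fixed_point(g, x0, n):
    """Iterate x_{n+1} = g(x_n) for n steps, flagging divergence."""
    x = x0
    for _ in range(n):
        x = g(x)
        if abs(x) > 1e100:       # treat as divergence to infinity
            return float('inf')
    return x

print(fixed_point(math.cos, 0.0, 50))          # settles near 0.7391
print(fixed_point(lambda x: x * x, 2.0, 50))   # inf: diverges
```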
Chaos ...
Starting with x_0 = 1/3 or x_0 = 1/3 + 0.001:

x_0 = 1/3    x_0 = 1/3 + 0.001
0.3333       0.3343
0.8889       0.8902
0.3951       0.3909
0.9560       0.9524
0.1684       0.1813
0.5602       0.5938
0.9855       0.9648
0.0572       0.1357
0.2158       0.4692
0.6770       0.9962
0.8747       0.0151
so nearby starting values quickly produce completely different iterates: sensitive dependence on initial conditions.
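The tabulated iterates are consistent with the logistic map g(x) = 4x(1 − x); that identification is an assumption, since the slide does not name the map. A sketch of the sensitivity under that assumption:

```python
def g(x):
    return 4 * x * (1 - x)   # logistic map (assumed from the tabulated iterates)

x, y = 1/3, 1/3 + 0.001
for n in range(11):
    print(f"{x:.4f}  {y:.4f}")
    x, y = g(x), g(y)
# After roughly ten steps the two orbits bear no resemblance to each other.
```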
General iteration:
x_{n+1} = x_n − (x_n − cos x_n)/(1 + sin x_n)
Upshot: the Newton method is much more efficient. Why? How can fixed point methods be analysed in general?
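The efficiency gap shows up directly in iteration counts for x = cos x. A sketch comparing the plain fixed point iteration with the Newton iteration above (the stopping tolerance and starting point are illustrative choices):

```python
import math

def count_iters(step, x0, tol=1e-12, max_n=1000):
    """Steps until successive iterates agree to within tol."""
    x = x0
    for n in range(1, max_n + 1):
        x_new = step(x)
        if abs(x_new - x) < tol:
            return n, x_new
        x = x_new
    return max_n, x

# Newton step for f(x) = x - cos x, f'(x) = 1 + sin x
newton = lambda x: x - (x - math.cos(x)) / (1 + math.sin(x))

n_fp, _ = count_iters(math.cos, 1.0)
n_newton, _ = count_iters(newton, 1.0)
print(n_fp, n_newton)   # fixed point takes dozens of steps, Newton a handful
```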
Gives F at points near to x in terms of local information at x alone. Truncation of the series gives useful approximations:

F(x + h) ≈ F(x)
F(x + h) ≈ F(x) + F′(x) h
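The quality of these truncations can be checked numerically; here with F = exp at x = 0 and h = 0.1 (illustrative choices):

```python
import math

x, h = 0.0, 0.1
exact = math.exp(x + h)
zeroth = math.exp(x)                     # F(x + h) ≈ F(x)
first = math.exp(x) + math.exp(x) * h    # F(x + h) ≈ F(x) + F'(x) h

print(exact - zeroth)   # error of order h
print(exact - first)    # error of order h^2, much smaller
```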
Rearrangement gives:

E_{n+1} = g′(x*) E_n + (1/2!) g″(x*) E_n² + (1/3!) g‴(x*) E_n³ + ...

Error at the (n + 1)th step in terms of the error at the nth step. How fast does the error get smaller?
First order (g′(x*) ≠ 0): E_{n+1} ≈ g′(x*) E_n (fast)
Second order (g′(x*) = 0): E_{n+1} ≈ (1/2!) g″(x*) E_n² (even faster)
Examples
Newton-Raphson is second order. The scheme

x_{n+1} = A + x_n − x_n²

is proposed for finding √A. Analyse its order and rate of convergence.
Successive error ratios E_{n+1}/E_n should give answers more or less independent of n (NB: this ratio gives the rate of convergence k).
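For the scheme above, g(x) = A + x − x² has fixed point x* = √A and g′(x*) = 1 − 2√A, so it is first order with k = 1 − 2√A (second order only when A = 1/4). An empirical check of the error ratios; the value A = 0.16 and starting point are illustrative choices:

```python
import math

A = 0.16                        # illustrative; the fixed point is sqrt(A) = 0.4
g = lambda x: A + x - x * x
root = math.sqrt(A)

x = 0.5
errs = []
for _ in range(8):
    x = g(x)
    errs.append(abs(x - root))

# Ratios E_{n+1}/E_n should settle near k = 1 - 2*sqrt(A) = 0.2
ratios = [errs[i + 1] / errs[i] for i in range(len(errs) - 1)]
print(ratios)
```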
Summary (I)
Three main methods discussed:
- Interval bisection
- Fixed point method
- Newton method
Interval bisection uses the Intermediate Value Theorem repeatedly and is a robust first order method with k = 1/2. Fixed point and Newton methods can be generalised to solve systems of equations.
Summary (II)
Fixed point method (solves g (x) = x):
xn+1 = g (xn )
Newton method (solves f(x) = 0):

x_{n+1} = x_n − f(x_n)/f′(x_n)

converges much faster than the usual fixed point method, but is itself a special case of the fixed point method (take g(x) = x − f(x)/f′(x)).
Summary (III)
Analysis of convergence of fixed point methods is based on Taylor series expansions.

First order methods: k := g′(x*) ≠ 0,

E_{n+1} ≈ k E_n
Second order methods: g′(x*) = 0,

E_{n+1} ≈ (1/2!) g″(x*) E_n² (faster convergence)