Aleksandar Donev
Courant Institute, NYU
donev@courant.nyu.edu
Course G63.2010.001 / G22.2420-001, Fall 2010
Conclusions
The final project writeup will be due Sunday Dec. 26th by midnight (I
have to start grading by 12/27 due to University deadlines).
You will also need to give a 15 minute presentation in front of me and
other students.
Our last class is officially scheduled for Tuesday 12/14, 5-7pm, and
the final exam Thursday 12/23, 5-7pm. Neither of these are good!
By the end of next week, October 23rd, please let me know the
following:
Are you willing to present early, Thursday December 16th, during the usual class time?
Do you want to present during the officially scheduled final exam slot, Thursday 12/23, 5-7pm?
If neither of the above, tell me when you cannot present between Monday Dec. 20th and Thursday Dec. 23rd (finals week).
Fundamentals
Order of convergence
Consider one-dimensional root finding: let the actual root be α, so that f(α) = 0.
A sequence of iterates x^k that converges to α has order of convergence p > 1 if, as k → ∞,

    |x^{k+1} − α| / |x^k − α|^p = |e^{k+1}| / |e^k|^p → C = const,

where e^k = x^k − α is the error.
The conditioning of the root follows from a perturbation analysis: f(α + δα) ≈ f(α) + f'(α) δα = δf, so

    κ_abs = |δα| / |δf| = |f'(α)|^{-1}.
Bisection
First step is to locate a root by searching for a sign change, i.e.,
finding a0 and b0 such that
f (a0 )f (b0 ) < 0.
Then simply bisect the interval,

    x^k = (a^k + b^k) / 2,

and choose the half in which the function changes sign by looking at
the sign of f(x^k).
Observe that each step we need one function evaluation, f (x k ), but
only the sign matters.
The convergence is essentially linear because

    |x^k − α| ≤ (b^0 − a^0) / 2^{k+1},

so the error bound is halved at every step.
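The bisection loop can be sketched in Python; this is an illustrative translation (the example function x^2 − 2 and all names are mine, not from the lecture):

```python
# Bisection: find a root of f on [a, b], assuming f(a)*f(b) < 0.
def bisect(f, a, b, tol=1e-12, max_iter=200):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "interval must bracket a sign change"
    for _ in range(max_iter):
        x = (a + b) / 2.0          # midpoint: one new evaluation per step
        fx = f(x)
        if fx == 0.0 or (b - a) / 2.0 < tol:
            return x
        if fa * fx < 0:            # sign change in the left half
            b, fb = x, fx
        else:                      # sign change in the right half
            a, fa = x, fx
    return (a + b) / 2.0

root = bisect(lambda x: x**2 - 2.0, 0.0, 2.0)
```

Note that each iteration costs exactly one new function evaluation, and only its sign is used to update the bracket.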
A. Donev (Courant Institute) Lecture VI 10/14/2010 9 / 31
One Dimensional Root Finding
Newton's Method
Newton's method is based on linearizing f around the current iterate:

    f(x^{k+1}) ≈ f(x^k) + (x^{k+1} − x^k) f'(x^k) = 0

    ⇒ x^{k+1} = x^k − f(x^k) / f'(x^k).

Taylor-expanding f around x^k and evaluating at the root α gives

    0 = f(α) = f(x^k) + (α − x^k) f'(x^k) + (1/2) (α − x^k)^2 f''(ξ),

so, dividing by f'(x^k) and using the iteration above,

    x^{k+1} − α = e^{k+1} = (1/2) (e^k)^2 f''(ξ) / f'(x^k),

where ξ lies between x^k and α,
which shows second-order convergence:

    |x^{k+1} − α| / |x^k − α|^2 = |e^{k+1}| / |e^k|^2 = |f''(ξ)| / (2 |f'(x^k)|) → |f''(α)| / (2 |f'(α)|).

Taking a supremum over a neighborhood of the root containing the iterates,

    |e^{k+1}| ≤ M |e^k|^2,   where M = sup over |x − α|, |y − α| ≤ |e^0| of |f''(x)| / (2 |f'(y)|),

and multiplying both sides by M,

    M |e^{k+1}| = E^{k+1} ≤ (M |e^k|)^2 = (E^k)^2.
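The iteration itself is very short; here is an illustrative Python sketch with a hand-supplied derivative (the example function x^2 − 2 is mine, not from the lecture):

```python
# Newton's method: x^{k+1} = x^k - f(x^k)/f'(x^k), with fp = f' given analytically.
def newton(f, fp, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fp(x)        # Newton correction
        x -= step
        if abs(step) < tol:        # stop when the step is tiny
            break
    return x

root = newton(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0)
```

Starting from x^0 = 1, only a handful of iterations reach machine precision, illustrating the quadratic convergence derived above.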
Fixed-Point Iteration
    x^{k+1} = φ(x^k)

Convergence theory: if α = φ(α) is a fixed point and |φ'(α)| < 1, then for x^k sufficiently close to α

    |x^{k+1} − α| ≤ K |x^k − α|   with K < 1,

so the iteration converges (at least) linearly.
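As an illustration (my example, not from the lecture), a Python sketch of fixed-point iteration for φ(x) = cos(x), for which |φ'(x)| = |sin(x)| < 1 near the fixed point, so the theory applies:

```python
import math

# Fixed-point iteration x^{k+1} = phi(x^k), stopping when successive
# iterates agree to within tol.
def fixed_point(phi, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

alpha = fixed_point(math.cos, 1.0)   # converges to the solution of cos(x) = x
```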
Stopping Criteria
A good library function for root finding has to implement careful
termination criteria.
An obvious option is to terminate when the residual becomes small,

    |f(x^k)| < ε,

but a small residual need not mean a small error |x^k − α| when the root is ill-conditioned (|f'(α)| small).
In practice
A robust but fast algorithm for root finding would combine bisection
with Newton's method.
Specifically, a method like Newtons that can easily take huge steps in
the wrong direction and lead far from the current point must be
safeguarded by a method that ensures one does not leave the search
interval and that the zero is not missed.
Once x k is close to , the safeguard will not be used and quadratic or
faster convergence will be achieved.
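The safeguarding idea can be sketched in Python; this is a simplified illustration of the principle (not the algorithm used by fzero), with an example cubic of my choosing:

```python
# Safeguarded Newton: take the Newton step when it stays inside the
# current bracket [a, b]; otherwise fall back to a bisection step.
def safeguarded_newton(f, fp, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "interval must bracket a root"
    x = (a + b) / 2.0
    for _ in range(max_iter):
        fx = f(x)
        if fa * fx < 0:            # maintain the bracket around the root
            b, fb = x, fx
        else:
            a, fa = x, fx
        d = fp(x)
        x_new = x - fx / d if d != 0 else None
        if x_new is None or not (a < x_new < b):
            x_new = (a + b) / 2.0  # safeguard: Newton left the bracket
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = safeguarded_newton(lambda x: x**3 - 2.0 * x - 5.0,
                          lambda x: 3.0 * x**2 - 2.0, 2.0, 3.0)
```

Once the iterate is close to the root, the Newton step always lands inside the bracket, the safeguard is never triggered, and quadratic convergence takes over.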
Newton's method requires first-order derivatives, so other methods
that require only function evaluations are often preferred.
MATLAB's function fzero combines bisection, the secant method, and
inverse quadratic interpolation, and is fail-safe.
% f = @mfile uses a function in an m-file
figure(1)
ezplot(f, [-5, 5]); grid
x1 = fzero(f, [-2, 0])
[x2, f2] = fzero(f, 2.0)

x1 = -1.227430849357917
x2 = 3.155366415494801
f2 = 2.116362640691705e-16
[Figure: plot of f(x) = a sin(x) + b exp(-x^2/2) for x in [-5, 5].]
For a system of equations f(x) = 0, f : R^n → R^n, Newton's method linearizes around the current iterate using the Jacobian J(x^k):

    x^{k+1} = x^k + Δx = x^k − [J(x^k)]^{−1} f(x^k).

For the error e^k = x^k − x*,

    e^{k+1} = x^{k+1} − x* = x^k − [J(x^k)]^{−1} f(x^k) − x* = e^k − [J(x^k)]^{−1} f(x^k).

Expanding f(x^k) = J e^k + (1/2) (e^k)^T H e^k + …, where H collects the second derivatives of f, gives

    e^{k+1} = −(1/2) J^{−1} (e^k)^T H e^k,

so

    ‖e^{k+1}‖ ≤ (‖J^{−1}‖ ‖H‖ / 2) ‖e^k‖^2,

i.e., second-order convergence just as in one dimension.
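A minimal Python sketch of Newton's method for a 2×2 system (the example system, which has the solution (1, 1), is mine, not from the lecture); for n = 2 the linear solve J Δx = −f can be done by Cramer's rule:

```python
# Newton's method for a 2x2 nonlinear system f(x) = 0 with analytic Jacobian J.
def newton_system(f, J, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f1, f2 = f(x)
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        dx1 = (-f1 * d + f2 * b) / det   # Cramer's rule for J dx = -f
        dx2 = (-f2 * a + f1 * c) / det
        x = [x[0] + dx1, x[1] + dx2]
        if abs(dx1) + abs(dx2) < tol:
            break
    return x

# Example: x1^2 + x2^2 = 2 and x1 = x2, so the solution is (1, 1).
f = lambda x: (x[0]**2 + x[1]**2 - 2.0, x[0] - x[1])
J = lambda x: ((2.0 * x[0], 2.0 * x[1]), (1.0, -1.0))
sol = newton_system(f, J, [2.0, 0.5])
```

For larger n one would replace the Cramer's-rule step with a general linear solve of J Δx = −f.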
Fixed-point iteration theory generalizes to multiple variables: replace the scalar condition |φ'(α)| < 1 with ρ(J(x*)) < 1, where ρ denotes the spectral radius of the Jacobian of the iteration function at the fixed point x*.
Systems of Non-Linear Equations
Take damped steps,

    x^{k+1} = x^k + α^k Δx,

with a step length 0 < α^k ≤ 1 chosen to ensure progress.
Update an approximation J̃ of the Jacobian by a simple update, e.g., a
rank-1 update (recall homework 2).
In practice
Formulation
Optimization problems are among the most important in engineering
and finance, e.g., minimizing production cost, maximizing profits,
etc.
    min_{x ∈ R^n} f(x)

where x are some variable parameters and f : R^n → R is a scalar
objective function.
Observe that one need only consider minimization, since

    max_{x ∈ R^n} f(x) = − min_{x ∈ R^n} [−f(x)].
A local minimum x* is optimal in some neighborhood:

    f(x*) ≤ f(x)  for all x such that ‖x − x*‖ ≤ R,  for some R > 0.
(think of finding the bottom of a valley)
Finding the global minimum is generally not possible for arbitrary
functions
(think of finding Mt. Everest without a satellite).
Intro to Unconstrained Optimization
Sufficient Conditions
Direct-Search Methods
% Rosenbrock or banana function:
a = 1;
banana = @(x) 100*(x(2) - x(1)^2)^2 + (a - x(1))^2;
% This function must accept array arguments!
banana_xy = @(x1, x2) 100*(x2 - x1.^2).^2 + (a - x1).^2;
figure(1); ezsurf(banana_xy, [0, 2, 0, 2])
[x, y] = meshgrid(linspace(0, 2, 100));
figure(2); contourf(x, y, banana_xy(x, y), 100)
% Correct answers are x = [1, 1] and f(x) = 0
[x, fval] = fminsearch(banana, [-1.2, 1], optimset('TolX', 1e-8))

x = 0.999999999187814   0.999999998441919
fval = 1.099088951919573e-18
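fminsearch implements the Nelder-Mead simplex method; to illustrate the direct-search idea itself, here is a much cruder derivative-free method (compass/coordinate search) sketched in Python on a simple quadratic of my choosing. This is not the algorithm fminsearch uses, just a minimal example of searching with function values only:

```python
# Compass (coordinate) search: try steps +/-step along each coordinate,
# move greedily to any improvement, and halve the step when stuck.
def compass_search(f, x, step=0.5, tol=1e-8, max_iter=10000):
    x = list(x)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:                 # accept any improving move
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0                     # refine the mesh when stuck
            if step < tol:
                break
    return x

# Minimize (x1 - 1)^2 + (x2 + 2)^2, whose minimum is at (1, -2).
xmin = compass_search(lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2, [0.0, 0.0])
```

On badly scaled valleys like the banana function such coordinate-wise search crawls very slowly, which is why fminsearch uses the more flexible Nelder-Mead simplex.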
[Figure: surface and contour plots of the banana function 100(x2 - x1^2)^2 + (a - x1)^2.]
Conclusions/Summary