
UNCONSTRAINED OPTIMIZATION
Chapter 3
Outcomes
Describe methods for optimizing unconstrained functions of a single
variable.
Perform the bracketing, Newton and secant methods for optimizing
unconstrained functions of a single variable.
Describe deterministic approaches for optimizing multivariable functions.
Describe stochastic approaches for optimizing multivariable functions.
Introduction
An unconstrained optimization problem is defined mathematically as follows:
Min f(x) or Max f(x)
A constrained optimization problem is defined mathematically as follows:
Min f(x) or Max f(x)
Subject to (s.t.): g(x) = 0
and/or: h(x) <= 0 or h(x) >= 0, where x may be a vector of variables
(dimensions).
In general (for single-variable or multivariable problems), there are two different
methods for searching for optimal solutions:
i) Derivative-based methods: Involve derivatives at each iteration stage
and determine potential optima by the necessary condition (analytical
solution)
ii) Function-value-based methods: Search for an optimum by comparison of
function values at a sequence of trial points (numerical solution)
For maximization, minimize -f(x).
One-Dimensional Unconstrained Optimization
Analytical Approach
The necessary condition for x* to be an extremum is:
f'(x*) = 0, that is, a stationary point exists at x*.
For a single-variable function, if both the first and the second
derivatives are zero at x*, then successive higher derivatives
should be evaluated at x*.
Continue until one of the higher derivatives is not zero, say the nth
derivative:
If n is even: f(n)(x*) < 0  =>  x* is a maximum
              f(n)(x*) > 0  =>  x* is a minimum
If n is odd: x* is a saddle (inflection) point
Examples:
1. Does f(x) = x^4 have an extremum? Prove it by the analytical approach.
2. Determine the optimal solution for f(x) = x^4 - 2x + 1
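The higher-derivative test above can be sketched in code. This is a minimal illustration with hand-coded derivatives for f(x) = x^4 at x* = 0; the function and helper names are illustrative, not from the slides.

```python
def classify_stationary_point(higher_derivs, x_star, tol=1e-12):
    """higher_derivs[0] is f'', higher_derivs[1] is f''', and so on.
    Find the first nonzero derivative at x_star and classify the point."""
    for n, d in enumerate(higher_derivs, start=2):
        value = d(x_star)
        if abs(value) > tol:
            if n % 2 == 1:            # n odd: saddle (inflection) point
                return "saddle point"
            return "minimum" if value > 0 else "maximum"
    return "inconclusive"

# f(x) = x**4: f''(x) = 12x^2, f'''(x) = 24x, f''''(x) = 24
derivs = [lambda x: 12*x**2, lambda x: 24*x, lambda x: 24.0]
result = classify_stationary_point(derivs, 0.0)
# first nonzero derivative is the 4th (n even, positive) -> minimum at x* = 0
```

This confirms analytically that x* = 0 is a minimum of x^4 even though f''(0) = 0.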
Numerical Approach
General procedure for minimization (for maximization, minimize
-f(x)):
Start with an initial value, x0
Calculate f(x0)
Change x0 to the next point, x1
Calculate f(x1)
Make sure f(xk+1) < f(xk) at each stage
The calculation stops when |f(xk+1) - f(xk)| < e,
where e is a pre-specified tolerance or criterion of precision.
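The stepwise procedure above can be sketched as a loop. This is a minimal illustration; the particular update rule (a fixed-step gradient step), the step size and the test function are assumptions made for the example.

```python
def minimize_1d(f, dfdx, x0, step=0.1, eps=1e-8, max_iter=1000):
    """Iterate x_{k+1} from x_k, stopping when |f(x_{k+1}) - f(x_k)| < eps."""
    x = x0
    fx = f(x)
    for _ in range(max_iter):
        x_new = x - step * dfdx(x)      # move to the next trial point
        fx_new = f(x_new)
        if abs(fx_new - fx) < eps:      # stop when the improvement < tolerance
            return x_new
        x, fx = x_new, fx_new
    return x

# f(x) = (x - 2)^2 has its minimum at x = 2
x_min = minimize_1d(lambda x: (x - 2)**2, lambda x: 2*(x - 2), x0=0.0)
```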
Numerical Methods for Solving Optimization Problems with One Variable
Most algorithms for unconstrained and constrained optimization make use of an efficient
unidimensional optimization technique to locate a local minimum of a function of one variable.
In selecting a search method to minimize or maximize a function of a single variable, the most
important concerns are:
Software availability
Ease of use
Robustness
Efficiency
Sometimes the function may take a long time to compute, and then efficiency becomes more
important.
In some problems, a simulation may be required to generate the function values, such as
determining the optimal number of trays in a distillation column.
In other cases, you have no functional description of the physical-chemical model of the process
to be optimized and are forced to operate the process at various input levels to evaluate the
process output.
The generation of a new value of the objective function in such circumstances may be
extremely costly, and no doubt the number of trials would be limited.
Hence, efficiency is a key criterion in selecting a minimization strategy.
Cont..
Methods for numerically solving one-dimensional optimization problems are as
follows:
Bracketing method
Newton's method
Finite difference approximation of Newton's method
Quasi-Newton (secant) method
Polynomial approximation methods (quadratic interpolation and
cubic interpolation)
Bracketing Method
Narrows down the solution region to avoid excessive search within a
wide range.
This search method requires a bracket of the minimum as the first part
of the strategy; the bracket is then narrowed.
Along with the statement of the objective function f(x), there must be
some statement of bounds on x, or else the implicit assumption that
x is unbounded (-inf < x < inf).
In general, the procedure for this method is as follows:
First assume an initial bracket, D0, which should contain the
optimum of the function.
Then determine the reduced bracket, Dk, at stage k.
The procedure terminates when an optimum is found.
The step formula is xk+1 = xk + 2^(k-1)*D, where D is the initial step size.
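A bracketing search of this kind can be sketched as follows: starting from x0 with an initial step, the step is doubled at each stage until the function value turns upward, at which point an interval containing the minimum has been found. The function and parameter names are illustrative assumptions.

```python
def bracket_minimum(f, x0, delta=0.1, max_iter=50):
    """Return an interval (lo, hi) that brackets a minimum of f."""
    x_prev, x_curr = x0, x0 + delta
    if f(x_curr) > f(x_prev):          # wrong direction: search the other way
        delta = -delta
        x_curr = x0 + delta
    step = delta
    for _ in range(max_iter):
        step *= 2.0                    # step grows like 2^(k-1) * delta
        x_next = x_curr + step
        if f(x_next) > f(x_curr):      # f turned upward: minimum is bracketed
            return (min(x_prev, x_next), max(x_prev, x_next))
        x_prev, x_curr = x_curr, x_next
    raise RuntimeError("no bracket found")

# f(x) = (x - 3)^2 has its minimum at x = 3, which the bracket must contain
lo, hi = bracket_minimum(lambda x: (x - 3.0)**2, x0=0.0)
```

The returned interval can then be narrowed by any of the interval methods that follow.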
Newton's Method
The advantages of Newton's method:
For a quadratic function, the minimum is obtained in one iteration.
As long as f''(x) != 0, the procedure is locally convergent at a
quadratic rate.
The disadvantages of the method are:
You have to calculate both f'(x) and f''(x).
If f''(x) is close to 0, the method converges slowly.
If the initial point is not close enough to the minimum, the method
as described earlier will not converge. Modified versions
guarantee convergence from poor starting points.
If the function has multiple extrema (a non-convex function), it may
not converge to the global optimum.
Cont..
Recall that the necessary condition for f(x) to have an optimum is that
f'(x) = 0.
Consequently, you can solve the equation f'(x) = 0 by Newton's
method to get

xk+1 = xk - f'(xk)/f''(xk)

making sure at each stage k that f(xk+1) < f(xk) for a minimum.
Let us do some examples.
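The Newton iteration can be sketched for the sample objective f(x) = x^4 - 2x + 1, whose derivatives f'(x) = 4x^3 - 2 and f''(x) = 12x^2 are coded by hand here; the starting point and tolerance are illustrative.

```python
def newton_1d(df, d2f, x0, eps=1e-10, max_iter=100):
    """Newton's method for f'(x) = 0: x_{k+1} = x_k - f'(x_k)/f''(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - df(x) / d2f(x)
        if abs(x_new - x) < eps:       # stop when the step becomes negligible
            return x_new
        x = x_new
    return x

x_star = newton_1d(lambda x: 4*x**3 - 2, lambda x: 12*x**2, x0=1.0)
# the stationary point satisfies 4x^3 = 2, i.e. x* = (1/2)**(1/3), about 0.794
```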
Finite Difference Approximation of Newton's Method
If f'(x) is not given by a formula, or the formula is so complicated that
analytical derivatives cannot be formulated, we can use a finite
difference approximation of the Newton step:

xk+1 = xk - { [f(xk + h) - f(xk - h)] / (2h) } / { [f(xk + h) - 2f(xk) + f(xk - h)] / h^2 }

where h is the step size (h is selected to match the difference formula
and the computer (machine) precision with which the calculations are
to be executed).
The disadvantage is the error introduced by the finite differencing.
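The finite-difference version of the Newton step can be sketched as below, using central differences for both derivatives; the objective (the same sample function f(x) = x^4 - 2x + 1), the step size h and the tolerance are illustrative assumptions.

```python
def fd_newton_1d(f, x0, h=1e-4, eps=1e-8, max_iter=100):
    """Newton's method with central-difference estimates of f' and f''."""
    x = x0
    for _ in range(max_iter):
        d1 = (f(x + h) - f(x - h)) / (2*h)            # f'(x), central difference
        d2 = (f(x + h) - 2*f(x) + f(x - h)) / h**2    # f''(x), central difference
        x_new = x - d1 / d2
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

x_star = fd_newton_1d(lambda x: x**4 - 2*x + 1, x0=1.0)
```

Note that the attainable accuracy is limited by the differencing error, which is why h must balance truncation error against machine round-off.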
Secant Method
The secant method (a quasi-Newton method) starts out with two points,
xp and xq, spanning the interval.
The first derivative of the function, f'(x), should have opposite signs at
xp and xq.
f'(x) is approximated by a straight line.
The interval bounds are updated and narrowed at each iteration.
The procedure terminates when |f'(xk)| < e.
The formula:

xk+1 = xq - f'(xq)(xq - xp) / (f'(xq) - f'(xp))

It uses only first-order derivatives.
Let us do some examples using the Newton method, the finite
difference method and the secant method.
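The secant iteration can be sketched as follows, again for the sample objective with f'(x) = 4x^3 - 2; the bracketing update (keeping the two points that still straddle the sign change) and the starting interval are illustrative choices.

```python
def secant_minimize(df, x_p, x_q, eps=1e-10, max_iter=100):
    """Locate f'(x) = 0 by linear interpolation between x_p and x_q."""
    assert df(x_p) * df(x_q) < 0, "f' must change sign on [x_p, x_q]"
    for _ in range(max_iter):
        # intersection of the straight-line approximation of f' with zero
        x_new = x_q - df(x_q) * (x_q - x_p) / (df(x_q) - df(x_p))
        if abs(df(x_new)) < eps:       # terminate when f' is nearly zero
            return x_new
        # keep two points that still bracket the sign change in f'
        if df(x_new) * df(x_q) < 0:
            x_p = x_q
        x_q = x_new
    return x_q

x_star = secant_minimize(lambda x: 4*x**3 - 2, x_p=0.0, x_q=2.0)
```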
Polynomial Approximation Methods
Another class of unidimensional minimization methods locates a
point x near x*, the value of the independent variable corresponding
to the minimum of f(x), by extrapolation and interpolation using
polynomial approximations as models of f(x).
The methods divide into two:
Quadratic interpolation (quadratic function)
Cubic interpolation (cubic function)
Let us do an example of quadratic interpolation.
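One quadratic-interpolation step can be sketched as follows: a parabola is fitted through three trial points and its minimizer is taken as the new estimate of x*. The three-point formula used is the standard one; the test function is an illustrative assumption.

```python
def quadratic_interp_step(f, x1, x2, x3):
    """Minimizer of the parabola through (x1, f1), (x2, f2), (x3, f3)."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# for a quadratic objective the fitted parabola is exact, so one step suffices:
x_star = quadratic_interp_step(lambda x: (x - 1.5)**2 + 2.0, 0.0, 1.0, 3.0)
# x_star = 1.5, the exact minimizer
```

For a non-quadratic f, the step is repeated with the new point replacing one of the three old ones.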
Rate of Convergence
Typically, three rates of convergence are used to compare the
effectiveness of different search methods:
Linear
Order p
Superlinear
Unconstrained Optimization for Multivariable Problems
Multivariable Optimization
Search methods used can be divided into:
Deterministic approach
Stochastic approach
Deterministic methods follow a predetermined search pattern and do not involve any
guessed or random steps.
Deterministic methods can be further classified into direct and indirect methods. Direct
search methods do not require derivatives (gradients) of the function. Indirect methods
use derivatives, even though the derivatives might be obtained numerically rather than
analytically.
Direct methods:
Univariate method
Simplex method
Conjugate directions method
Powell's method
Indirect methods:
Gradient method
Newton's method
Secant method
Two of the most popular stochastic methods are simulated annealing and the genetic algorithm.
Deterministic Methods (Direct Search Methods)

An example of a direct search method is a univariate search (see the
related figure).
All of the variables except one are fixed, and the remaining variable is
optimized.
Once a minimum or maximum point has been reached, this variable is
fixed and another variable is optimized, with the remaining variables
being fixed.
This is repeated until there is no further improvement in the objective
function.
The figure shows a two-dimensional search in which x1 is first fixed and x2
optimized.
Then x2 is fixed and x1 optimized, and so on until no further
improvement in the objective function is obtained.
The starting point is critical: use different starting points so as to improve
the chance of reaching the global optimum.
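The univariate search can be sketched in two variables as below. For illustration, each one-variable subproblem is solved by a brute-force scan over a grid (in practice one of the unidimensional methods above would be used); the objective, bounds and grid size are assumptions made for the example.

```python
def line_min(f, x, i, lo=-10.0, hi=10.0, n=2001):
    """Minimize f along coordinate i by a crude grid scan (illustration only)."""
    best_t, best_f = x[i], f(x)
    for k in range(n):
        t = lo + (hi - lo) * k / (n - 1)
        trial = list(x)
        trial[i] = t
        ft = f(trial)
        if ft < best_f:
            best_t, best_f = t, ft
    x[i] = best_t
    return x

def univariate_search(f, x0, sweeps=20):
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):        # fix all variables except one, optimize it
            x = line_min(f, x, i)
    return x

# quadratic test objective with minimum at about (1.6, -2.4)
f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2 + 0.5*x[0]*x[1]
x = univariate_search(f, [0.0, 0.0])
```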
Simplex Method
The method uses a regular geometric shape (a simplex) to generate
search directions.
In two dimensions, the simplest shape is an equilateral triangle. In three
dimensions, it is a regular tetrahedron.
The objective function is evaluated at the vertices of the simplex. (See
the related Figure).
The objective function must first be evaluated at Vertices A, B, and C.
The general search direction is projected away from the worst vertex (in
this case Vertex A) through the centroid of the remaining vertices (B and C).
A new simplex is formed by replacing the worst vertex with a new point
that is its mirror image through the centroid (Vertex D).
Vertex A is an inferior point, replaced by Vertex D. The simplex vertices
for the next step are B, C, and D.
This process is repeated for successive moves in a zigzag fashion.
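The basic simplex move can be sketched in two dimensions as below: the worst vertex is reflected through the centroid of the other two. This is only the reflection step; the objective and starting simplex are illustrative, and the full method adds expansion and contraction moves that this sketch omits.

```python
def reflect_worst(vertices, f):
    """vertices: list of three (x, y) tuples forming the current simplex."""
    ordered = sorted(vertices, key=f)                 # best ... worst
    best, mid, worst = ordered
    cx = (best[0] + mid[0]) / 2.0                     # centroid of the two
    cy = (best[1] + mid[1]) / 2.0                     # retained vertices
    new_vertex = (2*cx - worst[0], 2*cy - worst[1])   # mirror image of worst
    return [best, mid, new_vertex]

f = lambda p: (p[0] - 1)**2 + (p[1] - 1)**2           # minimum at (1, 1)
simplex = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
for _ in range(10):
    simplex = reflect_worst(simplex, f)
```

Pure reflection can cycle near the optimum; the full method's contraction step (shrinking the simplex) is what terminates the zigzag in practice.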
Indirect Search Methods
For the gradient method, examples are steepest descent
(minimization) and steepest ascent (maximization).
Both steepest descent and steepest ascent use only first-order
derivatives to determine the search direction.
Newton's method for single-variable optimization can be adapted to
carry out multivariable optimization, taking advantage of both first-
and second-order derivatives to obtain better search directions.
One fundamental practical difficulty with both the direct and indirect
search methods is that, depending on the shape of the solution space,
the search can locate a local optimum rather than the global optimum.
Hence, start the search for the optimum from different initial points
and repeat the process.
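Steepest descent can be sketched as below: the search direction is the negative gradient (first-order information only). A fixed step size is used here for simplicity, although in practice the step is usually chosen by a line search; the objective and parameters are illustrative assumptions.

```python
def steepest_descent(grad, x0, step=0.1, eps=1e-8, max_iter=10000):
    """Move opposite the gradient until the gradient (nearly) vanishes."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if max(abs(gi) for gi in g) < eps:            # stationary point reached
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]  # step along -grad f
    return x

# f(x, y) = (x - 3)^2 + 2*(y + 1)^2, with gradient (2(x - 3), 4(y + 1))
x_star = steepest_descent(lambda x: [2*(x[0] - 3), 4*(x[1] + 1)], [0.0, 0.0])
# converges to the minimum at (3, -1)
```

For steepest ascent (maximization), the sign of the step is simply reversed.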
Stochastic Methods
These methods use random choice to guide the search and can allow deterioration of
the objective function during the search. (In the deterministic
methods above, the algorithms seek to improve the objective
function at each step, and this can mean the search is attracted
to a local minimum.)
It is important to recognize that a randomized search does not
mean a directionless search.
The method generates a randomized path to the solution on the basis
of probabilities.
Improvement in the objective function becomes the ultimate rather
than the immediate goal, and some deterioration of the objective
function is tolerated, especially in the early stages of the search.
Ironically, this helps to reduce the problem of being trapped in a local
optimum.
Two Stochastic Methods in Optimization
Simulated Annealing (SA): Emulates the physical process of annealing metals.
In the physical process of metal annealing, at high temperatures the molecules of the liquid
move freely with respect to one another.
If the liquid is cooled slowly, thermal mobility is lost and the atoms are able to line themselves
up and form a perfect crystal.
This crystal is the state of minimum energy for the system. If the liquid metal is cooled quickly, it
does not reach this state, but rather ends up in a polycrystalline or amorphous state of
higher energy. So cooling must be slow, allowing ample time for redistribution of the atoms
as they lose mobility, in order to reach the state of minimum energy.
An algorithm is suggested based on this physical process.
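The annealing analogy can be sketched for a one-variable function with several local minima: a Metropolis-style acceptance rule allows uphill moves early on, and the "temperature" is lowered slowly. All parameters (proposal width, cooling rate, objective) are illustrative assumptions.

```python
import math, random

def simulated_annealing(f, x0, T0=5.0, cooling=0.999, steps=5000, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = T0
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, 1.0)            # random trial move
        f_new = f(x_new)
        # accept downhill moves always, uphill moves with probability e^(-dF/T)
        if f_new < fx or rng.random() < math.exp(-(f_new - fx) / T):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling                               # slow "cooling" schedule
    return best_x, best_f

# objective with several local minima; the global minimum lies near x = 2
f = lambda x: 0.1*(x - 2)**2 - math.cos(3*(x - 2))
x_best, f_best = simulated_annealing(f, x0=-5.0)
```

Early in the run, almost all moves are accepted (high T); late in the run, only improving moves survive, so the search settles into a deep minimum.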
Genetic Algorithm (GA): Draws its inspiration from biological evolution, moving from one set
of points (a population) to another set of points.
Populations of strings (the analogy of chromosomes) are created to represent an underlying
set of parameters (temperatures, pressures, concentrations, etc.).
A simple GA exploits three basic operators:
Reproduction
Crossover
Mutation
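The three operators can be sketched in a toy GA that maximizes a simple fitness over 16-bit strings encoding x in [0, 1]. Everything here (the encoding, rates, population size and fitness function) is an illustrative assumption, not part of the slides.

```python
import random

def decode(bits):                        # bit string -> x in [0, 1]
    return int(bits, 2) / (2**len(bits) - 1)

def ga(fitness, n_bits=16, pop_size=20, generations=60, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = ["".join(rng.choice("01") for _ in range(n_bits))
           for _ in range(pop_size)]
    for _ in range(generations):
        # reproduction: tournament selection copies fitter strings forward
        def select():
            a, b = rng.sample(pop, 2)
            return a if fitness(decode(a)) > fitness(decode(b)) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)           # crossover: swap the tails
            child = p1[:cut] + p2[cut:]
            child = "".join(b if rng.random() > p_mut    # mutation: flip bits,
                            else ("1" if b == "0" else "0")  # but only rarely
                            for b in child)
            children.append(child)
        pop = children
    return max(pop, key=lambda b: fitness(decode(b)))

# fitness peaks at x = 0.7 (an arbitrary illustrative choice)
best = ga(lambda x: -(x - 0.7)**2)
# decode(best) should lie close to 0.7
```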
