OPTIMIZATION
Chapter 3
Outcomes
Describe methods for optimizing unconstrained single-variable functions.
Perform the bracketing, Newton, and secant methods for optimizing
unconstrained single-variable functions.
Describe deterministic approaches for optimizing multivariable functions.
Describe stochastic approaches for optimizing multivariable functions.
Introduction
An unconstrained optimization problem is defined mathematically as follows:
Min f(x) or Max f(x)
A constrained optimization problem is defined mathematically as follows:
Min f(x) or Max f(x)
Subject to (s.t.): g(x) = 0
and/or: h(x) ≤ 0 or h(x) ≥ 0, where x is a vector of variables
(dimensions)
In general (for single-variable or multivariable problems), there are two
different methods for searching for optimal solutions:
i) Derivative-based methods: involve derivatives at each iteration stage
and determine a potential optimum by the necessary condition (analytical
solution)
ii) Function-value-based methods: search for an optimum by comparing
function values at a sequence of trial points (numerical solution)
For maximization, minimize -f(x)
One-Dimensional Unconstrained Optimization
Analytical Approach
The necessary condition for x* to be an extremum is:
f'(x*) = 0, that is, a stationary point exists at x*.
For a single-variable function, if both the first and the second
derivatives are zero at x*, then successive higher derivatives
should be evaluated at x*.
Continue until one of the higher derivatives is nonzero, say the nth
derivative. Then:
If n is even: f^(n)(x*) < 0  =>  x* is a maximum
              f^(n)(x*) > 0  =>  x* is a minimum
If n is odd:  x* is an inflection (saddle) point
Examples:
1. Does f(x) = x^4 have an extremum? Prove it analytically.
2. Determine the optimal solution for f(x) = x^4 - 2x + 1.
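Assuming Example 2's function is f(x) = x^4 - 2x + 1 (the operator lost in extraction is read here as a minus sign), the analytical conditions can be verified with a short sketch:

```python
# Analytical solution sketch, assuming Example 2 is f(x) = x**4 - 2*x + 1
# (the minus sign is an assumption; the operator was lost in the slides).
# Necessary condition: f'(x) = 4x^3 - 2 = 0  =>  x* = (1/2)**(1/3)
f = lambda x: x**4 - 2*x + 1
x_star = 0.5 ** (1 / 3)                  # stationary point, approx 0.7937
assert abs(4 * x_star**3 - 2) < 1e-12    # f'(x*) = 0 (stationary point)
assert 12 * x_star**2 > 0                # f''(x*) > 0  =>  x* is a minimum
```

Since the second derivative is nonzero and positive at the stationary point, no higher derivatives need to be checked.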
Numerical Approach
General procedure for minimization (for maximization, minimize -f(x)):
Start with an initial value, x0
Calculate f(x0)
Change x0 to the next point, x1
Calculate f(x1)
Make sure f(xk+1) < f(xk) at each stage
The calculation stops when |f(xk+1) - f(xk)| < ε,
where ε is a pre-specified tolerance (criterion of precision).
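The loop above can be sketched in Python; the trial step size h and the step-halving rule when no improvement is found are illustrative assumptions, not part of the text:

```python
# Minimal sketch of the generic descent loop: accept a trial point only
# if it decreases f, and refine the step when neither direction improves.
# The step size h and the halving rule are illustrative assumptions.
def minimize_1d(f, x0, h=0.5, eps=1e-8, max_iter=1000):
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        for trial in (x + h, x - h):      # try a step in each direction
            if f(trial) < fx:             # enforce f(x_{k+1}) < f(x_k)
                x_new, fx_new = trial, f(trial)
                break
        else:
            h /= 2                        # no improvement: refine the step
            continue
        if abs(fx_new - fx) < eps:        # stop: |f(x_{k+1}) - f(x_k)| < eps
            return x_new
        x, fx = x_new, fx_new
    return x
```

For example, minimize_1d(lambda x: (x - 2)**2, 0.0) walks toward the minimizer at x = 2.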
Numerical Methods for Solving One-Variable Optimization Problems
Most algorithms for unconstrained and constrained optimization make use of an efficient
unidimensional optimization technique to locate a local minimum of a function of one variable.
In selecting a search method to minimize or maximize a function of a single variable, the most
important concerns are:
Software availability
Ease of use
Robustness
Efficiency
Sometimes the function may take a long time to compute, and then efficiency becomes more
important.
In some problems, a simulation may be required to generate the function values, such as
determining the optimal number of trays in a distillation column.
In other cases, you have no functional description of the physical-chemical model of the process
to be optimized and are forced to operate the process at various input levels to evaluate the
process output.
The generation of a new value of the objective function in such circumstances may be
extremely costly, and no doubt the number of plant tests would be limited.
Hence, efficiency is a key criterion in selecting a minimization strategy.
Cont..
Methods for numerically solving one-dimensional optimization problems are as
follows:
Bracketing method
Newton's method
Finite difference approximation of Newton's method
Quasi-Newton (secant) method
Polynomial approximation methods (quadratic interpolation and
cubic interpolation)
Bracketing Method
Narrow down the solution region to avoid excessive searching within a
wide range.
This search method requires a bracket of the minimum as the first part
of the strategy; the bracket is then narrowed.
Along with the statement of the objective function f(x), there must be
some statement of bounds on x, or else the implicit assumption that
x is unbounded (-∞ < x < ∞).
In general, the procedure for this method is as follows:
First assume an initial bracket, Δ0, which should contain the
optimum of the function.
Then determine the reduced bracket, Δk, at stage k.
This procedure terminates when an optimum is found.
The step formula is xk+1 = xk + 2^(k-1) Δ
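A minimal sketch of bracketing by step doubling, following the rule xk+1 = xk + 2^(k-1) Δ (the initial step delta, the downhill-direction check, and the stopping test are assumptions):

```python
# Bracketing by step doubling: march with steps 2**(k-1) * delta until f
# turns upward; the minimum then lies between the last three points.
# The initial step delta and direction check are illustrative assumptions.
def bracket_minimum(f, x0, delta=0.1, max_iter=50):
    x_prev, x = x0, x0 + delta
    if f(x) > f(x_prev):          # wrong direction: search downhill instead
        delta = -delta
        x = x0 + delta
    for k in range(2, max_iter):
        x_next = x + 2 ** (k - 1) * delta
        if f(x_next) > f(x):      # f increased: minimum is bracketed
            return (x_prev, x_next) if delta > 0 else (x_next, x_prev)
        x_prev, x = x, x_next
    raise RuntimeError("no bracket found")
```

For example, bracket_minimum(lambda x: (x - 3)**2, 0.0) returns an interval containing the minimizer x = 3, which a narrowing method can then refine.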
Newton's Method
The advantages of Newton's method:
For a quadratic function, the minimum is obtained in one iteration.
As long as f''(x) ≠ 0, the procedure is locally convergent with a
quadratic rate of convergence.
The disadvantages of the method are:
You have to calculate both f'(x) and f''(x).
If f''(x) → 0, the method converges slowly.
If the initial point is not close enough to the minimum, the method
as described earlier will not converge. Modified versions
guarantee convergence from poor starting points.
If the function has multiple extrema (a non-convex function), it may
not converge to the global optimum.
Cont..
Recall
that the necessary condition for f(x) to have an optimum is that
f'(x) = 0.
Consequently, you can solve the equation f'(x) = 0 by Newton's
method to get
xk+1 = xk - f'(xk) / f''(xk)
making sure at each stage k that f(xk+1) < f(xk) for a minimum.
Let us do some examples.
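As one worked example, the Newton update xk+1 = xk - f'(xk)/f''(xk) can be sketched and applied to f(x) = x^4 - 2x + 1 (assuming that is Example 2's function):

```python
# Newton's method for a one-variable minimum:
#   x_{k+1} = x_k - f'(x_k) / f''(x_k)
def newton_min(df, d2f, x0, eps=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)     # Newton step from first/second derivatives
        x -= step
        if abs(step) < eps:       # stop when the update is negligible
            return x
    return x

# f(x) = x^4 - 2x + 1  =>  f'(x) = 4x^3 - 2, f''(x) = 12x^2
x_star = newton_min(lambda x: 4*x**3 - 2, lambda x: 12*x**2, x0=1.0)
```

Starting from x0 = 1, the iterates converge to the stationary point x* = (1/2)^(1/3), matching the analytical solution.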
Finite Difference Approximation of Newton's Method
If
f(x) is not given by a formula, or the formula is so complicated that
analytical derivatives cannot be formulated, we can use the following
finite difference approximation:
xk+1 = xk - {[f(xk + h) - f(xk - h)] / (2h)} / {[f(xk + h) - 2 f(xk) + f(xk - h)] / h^2}
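A sketch of the Newton step built from central finite differences of f alone (the step size h is an assumption; too small an h amplifies round-off error):

```python
# Newton step with central-difference estimates of f'(x) and f''(x):
#   f'(x)  ~ [f(x+h) - f(x-h)] / (2h)
#   f''(x) ~ [f(x+h) - 2 f(x) + f(x-h)] / h^2
# Only function values are needed; h = 1e-4 is an illustrative choice.
def fd_newton_min(f, x0, h=1e-4, eps=1e-8, max_iter=100):
    x = x0
    for _ in range(max_iter):
        d1 = (f(x + h) - f(x - h)) / (2 * h)          # estimate of f'(x)
        d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # estimate of f''(x)
        step = d1 / d2
        x -= step
        if abs(step) < eps:
            return x
    return x
```

Applied to f(x) = x^4 - 2x + 1, this reproduces the Newton result without requiring derivative formulas.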