
MATLAB Optimization Toolbox

Selection of Optimization Algorithms


MATLAB Optimization Toolbox separates "medium-scale" algorithms from "large-scale"
algorithms. "Medium-scale" is not a standard term; it is used here only to differentiate these
algorithms from the large-scale algorithms, which are designed to handle large-scale
problems efficiently.
Medium-Scale Algorithms
• The Optimization Toolbox routines offer a choice of algorithms and line search
strategies.
o The principal algorithms for unconstrained minimization are the Nelder-Mead
simplex search method and the BFGS quasi-Newton method.
o For constrained minimization, minimax, goal attainment, and semi-infinite
optimization, variations of Sequential Quadratic Programming are used.
• Nonlinear least squares problems use the Gauss-Newton and Levenberg-Marquardt
methods.
• A choice of line search (or 1-D search) strategy is given for unconstrained
minimization and nonlinear least squares problems. The line search strategies use
safeguarded cubic interpolation (with analytical gradient functions) and quadratic
interpolation and extrapolation (without analytical gradient functions) methods.
Large-Scale Algorithms
All the large-scale algorithms, except linear programming, are trust-region methods.
o Bound constrained problems are solved using reflective Newton methods.
o Equality constrained problems are solved using a projective preconditioned
conjugate gradient iteration.
o You can use sparse iterative solvers or sparse direct solvers in solving the linear
systems to determine the current step. Some choice of preconditioning in the
iterative solvers is also available.
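For instance, in releases that expose these choices through optimset, the PrecondBandWidth parameter selects between the preconditioned conjugate gradient iteration and a direct factorization of the step equations (a sketch only; check help optimset in your release):

% Diagonal preconditioning for the PCG iteration (bandwidth 0) ...
options_pcg = optimset('LargeScale','on','PrecondBandWidth',0);
% ... or a direct (Cholesky) solve of the linear systems instead.
options_direct = optimset('LargeScale','on','PrecondBandWidth',Inf);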

Linear Programming Problems


The linear programming method is a variant of Mehrotra's predictor-corrector
algorithm, a primal-dual interior-point method.
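As a hedged illustration (the data below are made up), such a problem is passed to linprog in the form min f'*x subject to A*x <= b and bound constraints:

% Minimal linprog sketch with made-up data:
%   minimize   -5*x1 - 4*x2
%   subject to  6*x1 + 4*x2 <= 24,  x1 + 2*x2 <= 6,  x >= 0
f  = [-5; -4];
A  = [6 4; 1 2];
b  = [24; 6];
Lb = [0; 0];
[x,fval] = linprog(f,A,b,[],[],Lb);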

Use of MATLAB Optimization Routines


1. Standard Form of the Optimization Problem
In order to use the optimization routines, the formulated optimization problem must first be
converted into the standard form required by the particular routine being used.
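For fmincon, for example, the standard form is

minimize    f(x)
subject to  c(x) <= 0,    ceq(x) = 0
            A*x <= b,     Aeq*x = beq
            lb <= x <= ub

so a "greater than or equal to" constraint such as g(x) >= 3 (a hypothetical example) must be rewritten as 3 - g(x) <= 0 before it is supplied to the routine.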
2. Definition of Objective and Constraint Functions
Most of these optimization routines require the definition of an M-file containing the
function to be minimized. The M-file, named objfun.m, returns the function value.
Alternatively, an inline object created from a MATLAB expression can be used.
The constraints are specified in a second M-file, confun.m, that returns the value of the
constraints at the current x in vector c.
The constrained minimization routine is then invoked. Maximization is achieved by
supplying the routines with -f, where f is the function being optimized.
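As a brief sketch (the quadratic expression below is only an illustration, not part of the example that follows), the objective can also be supplied as an inline object, and a maximization is carried out by minimizing its negative:

% Hypothetical objective supplied as an inline object.
f = inline('4 - x(1)^2 - 3*x(2)^2','x');
% To maximize f, minimize -f instead.
neg_f = inline('-(4 - x(1)^2 - 3*x(2)^2)','x');
x_max = fminunc(neg_f,[1; 1]);   % converges to a point near [0; 0], the maximizer of f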
3. Optimization Parameter Setting
Optimization options passed to the routines change the optimization parameters. Default
parameter values are used unless they are changed through an options structure.
Optimization options allow you to
• select "medium-scale" or "large-scale" algorithms,
• select the kind of output to be displayed,
• set tolerance values, etc.
help optimset provides information that defines the different parameters and describes
how to use them.
The initial search point, x0, has to be given by the user based upon the problem.
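A minimal sketch of how such a structure is created and queried (the parameter values shown are arbitrary):

% Create an options structure and read back one parameter.
options = optimset('LargeScale','off','Display','iter','TolX',1e-6);
tol = optimget(options,'TolX');
% User-chosen initial search point (example value only).
x0 = [0; 0];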
4. Gradient Calculations
Gradients are calculated using an adaptive finite-difference method unless they are
supplied in a function. Analytical expressions of the gradients of the objective and constraint
functions can be incorporated through additional gradient outputs (G for the objective and
DC for the constraints) together with the proper options settings.
Parameters can be passed directly to functions, avoiding the need for global variables.
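For example, assuming obj_fun and con_fun were written to accept an extra problem parameter a (a hypothetical name), the parameter can be bound with an anonymous function so that no global variable is needed; in older toolbox releases the same effect was obtained by appending the parameter after the options argument:

% Hypothetical problem parameter bound into the objective and constraint functions.
% x0, lb, ub and options are assumed to be defined as in the example below.
a = 2.5;
x = fmincon(@(X) obj_fun(X,a),x0,[],[],[],[],lb,ub,@(X) con_fun(X,a),options);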
Example of Nonlinear Inequality Constrained Optimization Problem

Consider the problem of finding a set of values [x1, x2] that solves

minimize    f(x) = exp(x1)*(4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)

subject to

x1*x2 - x1 - x2 - 1.5 <= 0
-x1*x2 - 10 <= 0
To solve this two-dimensional problem,
• Write an M-file, obj_fun.m, that specifies the objective function and its gradient.
• Create another M-file, con_fun.m, that specifies the nonlinear constraints and their
gradients.
• The constrained minimization routine, fmincon, is then invoked.

Write an M-file obj_fun.m


function [fun,Grad] = obj_fun(X)
% Reassign the variables.
x1 = X(1);
x2 = X(2);
% Calculate the objective function.
fun = exp(x1)*(4*x1^2+2*x2^2+4*x1*x2+2*x2+1);
if nargout > 1   % obj_fun called with two output arguments
    % Gradient of the objective function.
    Grad(1,1) = exp(x1)*(4*x1^2+2*x2^2+4*x1*x2+2*x2+1) + exp(x1)*(8*x1+4*x2);
    Grad(2,1) = exp(x1)*(4*x2+4*x1+2);
end

Write an M-file con_fun.m


function [Ineq_cons,Eq_cons,Grad_ineq,Grad_eq] = con_fun(X)
% Reassign the variables.
x1 = X(1);
x2 = X(2);
% Nonlinear inequality constraints.
Ineq_cons(1) = x1*x2 - x1 - x2 - 1.5;
Ineq_cons(2) = -x1*x2 - 10;
% Nonlinear equality constraints (none).
Eq_cons = [];
if nargout > 2   % con_fun called with four output arguments
    % Gradients of the inequality constraints (one column per constraint).
    Grad_ineq(1,1) = x2 - 1;
    Grad_ineq(2,1) = x1 - 1;
    Grad_ineq(1,2) = -x2;
    Grad_ineq(2,2) = -x1;
    % Gradients of the equality constraints (none).
    Grad_eq = [];
end
Invoke the constrained optimization routine
% ------ Solve the optimization problem using SQP method --------
options = optimset('Display','off', ...
                   'LargeScale','off', ...
                   'GradObj','on', ...
                   'Hessian','off', ...
                   'GradConstr','on', ...
                   'TolCon',1e-8, ...
                   'TolFun',1e-8, ...
                   'TolX',1e-8);

% Set the bounds on the variables.


Lb = [-Inf; -Inf];
Ub = [Inf; Inf];
X0 = [-1; 1];

% Solve the constrained minimization problem starting from X0.


[x,fval,exitflag,output] = ...
    fmincon('obj_fun',X0,[],[],[],[],Lb,Ub,'con_fun',options);

After 12 iterations (35 function evaluations), this produces the solution:


x = 0.5000 -1.0000
The function value at the solution x is returned in fval:
fval = 0.0
The exitflag indicates whether the algorithm converged. An exitflag > 0 means a local minimum
was found:
exitflag = 1
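In a script, the flag can be checked directly, for instance:

if exitflag > 0
    disp('fmincon converged to a solution.')
else
    disp('fmincon did not converge; inspect the output structure.')
end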

The output structure gives more details about the optimization. For fmincon, it includes the
number of iterations in iterations, the number of function evaluations in funcCount, the final
step-size in stepsize, a measure of first-order optimality in firstorderopt, and the type of
algorithm used in algorithm:
output =
       iterations: 12
        funcCount: 35
         stepsize: 1
    firstorderopt: []
        algorithm: 'medium-scale: SQP, Quasi-Newton, line-search'

When there exists more than one local minimum, the initial guess for the vector [x1, x2] affects
both the number of function evaluations and the value of the solution point. In the example above,
X0 is initialized to [-1, 1].
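As a sketch, this sensitivity can be checked by re-running the solver from another (arbitrary) starting point; the returned point and cost may differ from the run above:

% Arbitrary alternative starting point, for illustration only.
X0_alt = [10; 10];
[x_alt,fval_alt] = ...
    fmincon('obj_fun',X0_alt,[],[],[],[],Lb,Ub,'con_fun',options);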

The variable options can be passed to fmincon to change characteristics of the optimization
algorithm, as in
x = fmincon('obj_fun',X0,[],[],[],[],Lb,Ub,'con_fun',options);
options is a structure that contains values for termination tolerances and algorithm choices. An
options structure can be created using the optimset function:
options = optimset('Display','off',...
'LargeScale','off', ...
'GradObj','on',...
'Hessian','off',...
'GradConstr','on');

In this example we have turned off the default selection of the large-scale algorithm, so the
medium-scale algorithm is used. The 'Display' option controls how much command-line output is
shown during the optimization iterations. When 'GradObj' is 'on', the routine uses the gradient
supplied by the objective function. When 'Hessian' is 'off', the Hessian matrix is approximated
numerically.
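An existing structure can also be updated without retyping every field; parameters that are not named keep their current values, for example:

% Update two parameters of an existing options structure.
options = optimset(options,'Display','iter','TolFun',1e-10);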
