
© 2008 Solutions 4U Sdn Bhd.

Introduction to Optimization Methods
Introduction to Non-Linear Optimization
Fundamental of Optimization

Optimization in Process Plants
Optimization Tree
Figure 1: Optimization tree.
What is Optimization?
Optimization is an iterative process by which a desired solution (max/min) of a problem is found while satisfying all of its constraints or bound conditions.
An optimization problem may be linear or non-linear.
Non-linear optimization is accomplished by numerical search methods.
Search methods are applied iteratively until a solution is reached.
The search procedure is termed an algorithm.
Figure 2: Optimum solution is found while satisfying its constraints (the derivative must be zero at the optimum).
What is Optimization? (Cont.)
Linear problems are solved by the Simplex or graphical methods.
The solution of a linear problem lies on the boundaries of the feasible region.
The solution of a non-linear problem lies within and on the boundaries of the feasible region.
Figure 3: Solution of a linear problem.
Figure 4: Three-dimensional solution of a non-linear problem.
Fundamentals of Non-Linear Optimization
Single objective function f(x): maximization or minimization.
Design variables x_i, i = 0, 1, 2, 3, ...
Constraints: inequality and equality.

Figure 5: Example of design variables and constraints used in non-linear optimization.
Maximize X1 + 1.5 X2
Subject to:
X1 + X2 ≤ 150
0.25 X1 + 0.5 X2 ≤ 50
X1 ≤ 50
X2 ≤ 25
X1 ≥ 0, X2 ≥ 0
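A minimal MATLAB sketch of this LP, assuming the constraint directions reconstructed above (linprog minimizes, so the objective is negated):

f  = -[1; 1.5];                 % negate to maximize X1 + 1.5*X2
A  = [1 1; 0.25 0.5];           % X1 + X2 <= 150, 0.25*X1 + 0.5*X2 <= 50
b  = [150; 50];
lb = [0; 0];                    % X1 >= 0, X2 >= 0
ub = [50; 25];                  % X1 <= 50, X2 <= 25
[x, fval] = linprog(f, A, b, [], [], lb, ub);
maxval = -fval                  % optimal objective value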
Optimal points
Local minimum/maximum point: a solution x* is a local optimum if no other x in its neighborhood has a better function value than x*.
Global minimum/maximum point: a solution x** is a global optimum if no other x in the entire search space has a better function value than x**.
Figure 6: Global versus local optimization.
Figure 7: The local point is equal to the global point if the function is convex.
Fundamentals of Non-Linear Optimization (Cont.)
A function f is convex if, for any point X_a on the line segment joining X_1 and X_2, f(X_a) is less than or equal to the value of the corresponding point on the chord joining f(X_1) and f(X_2).
Convexity condition: the Hessian (second-order derivative) matrix of f must be positive semi-definite (eigenvalues positive or zero).
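In symbols, f is convex if for any two points X_1, X_2 and any λ ∈ [0, 1]:

$$f\big(\lambda X_1 + (1-\lambda)X_2\big) \le \lambda f(X_1) + (1-\lambda) f(X_2)$$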
Fundamentals of Non-Linear Optimization (Cont.)
Figure 8: Convex and nonconvex sets.
Figure 9: Convex function.
Mathematical Background
The slope or gradient of the objective function f represents the direction in which the function will decrease/increase most rapidly:

$$\frac{df}{dx} = \lim_{\Delta x \to 0} \frac{\Delta f}{\Delta x} = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}$$

Taylor series expansion:

$$f(x + \Delta x) \approx f(x) + \frac{df}{dx}\,\Delta x + \frac{1}{2!}\frac{d^2 f}{dx^2}\,\Delta x^2 + \cdots$$

Jacobian matrix of the gradients of f and g with respect to several variables:

$$J = \begin{bmatrix} \dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial y} & \dfrac{\partial f}{\partial z} \\ \dfrac{\partial g}{\partial x} & \dfrac{\partial g}{\partial y} & \dfrac{\partial g}{\partial z} \end{bmatrix}$$
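As an illustration of the gradient definition above, a minimal MATLAB sketch approximating the gradient by forward differences (the objective and step size are assumptions for illustration):

f  = @(x) 3 + (x(1) - 1.5*x(2))^2 + (x(2) - 2)^2;  % example objective
x0 = [0.5; 0.5];
dx = 1e-6;                         % small Delta x
g  = zeros(2,1);
for k = 1:2
    e    = zeros(2,1); e(k) = dx;
    g(k) = (f(x0 + e) - f(x0))/dx; % (f(x+dx)-f(x))/dx, per the limit above
end
g   % numerical gradient at x0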
Hessian: the second derivative of a function of several variables; its definiteness indicates a minimum (+ve definite) or a maximum (-ve definite).
Second-order condition (SOC) for a minimum:
Eigenvalues of H(X*) are all positive.
Determinants of all lower-order (leading principal) minors of H(X*) are +ve.
$$H = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x\,\partial y} \\ \dfrac{\partial^2 f}{\partial y\,\partial x} & \dfrac{\partial^2 f}{\partial y^2} \end{bmatrix}$$

Slope, first-order condition (FOC): provides the function's slope information,

$$\nabla f(X^*) = 0$$
Mathematical Background (Cont.)
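A minimal Symbolic Math Toolbox sketch, checking the FOC and SOC above for the function used in the steepest-ascent example later in these notes:

syms x y
f = -1.5*x^2 + 2.25*x*y - 2*y^2 + 1.75*y;
g = gradient(f, [x y]);                   % FOC: solve grad f = 0
s = solve(g == 0, [x y]);
H = hessian(f, [x y]);                    % SOC: check eigenvalue signs
eig(double(subs(H, [x y], [s.x s.y])))    % all negative -> maximum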
Optimization Algorithm
Deterministic: specific rules to move from one iteration to the next.
Stochastic: probabilistic rules are used for the subsequent iteration.
Optimal design: engineering design based on an optimization algorithm.
Lagrangian method: the sum of the objective function and a linear combination of the constraints (see the expression below).
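In symbols, for equality constraints h_j(x) = 0 and multipliers λ_j, the Lagrangian described above is

$$L(x, \lambda) = f(x) + \sum_j \lambda_j h_j(x)$$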
Optimization Methods
Multivariable techniques make use of single-variable techniques, especially Golden Section (see the sketch after this list).
Deterministic:
Direct search: uses objective-function values to locate the minimum.
Gradient-based: uses first- or second-order derivatives of the objective function.
Minimization of the objective function f(x) is used; with a -ve sign, i.e. -f(x), for a maximization problem.
Single variable:
Newton-Raphson: gradient-based technique (FOC).
Golden Section search: step-size-reducing iterative method.
Unconstrained optimization:
a.) Powell's Method: the objective function is approximated by a quadratic (degree 2) polynomial.
b.) Gradient-based: Steepest Descent (FOC) or Least Mean Squares (LMS).
c.) Hessian-based: Conjugate Gradient (FOC) and BFGS (SOC).
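A minimal sketch of the Golden Section step-size search named above, for a single-variable minimization on an interval [a, b] (the objective and tolerance are assumptions for illustration):

f = @(h) (h - 2)^2 + 1;                % example single-variable objective
a = 0; b = 5;
tau = (sqrt(5) - 1)/2;                 % golden-section ratio, ~0.618
x1 = b - tau*(b - a); x2 = a + tau*(b - a);
while (b - a) > 1e-4                   % interval shrinks each iteration
    if f(x1) < f(x2)
        b = x2; x2 = x1; x1 = b - tau*(b - a);
    else
        a = x1; x1 = x2; x2 = a + tau*(b - a);
    end
end
hmin = (a + b)/2                       % ~2, the minimizer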
Optimization Methods - Constrained
Constrained optimization:
a.) Indirect approach: transform into an unconstrained problem.
b.) Exterior Penalty Function (EPF) and Augmented Lagrange Multiplier methods (see the penalty form below).
c.) Direct methods: Sequential Linear Programming (SLP), SQP, and the Generalized Reduced Gradient method (GRG).
Figure 10: Gradient Descent (LMS).
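As a sketch of the EPF idea listed above, for inequality constraints g_i(x) ≤ 0 the exterior penalty function is

$$\phi(x, r) = f(x) + r \sum_i \left[\max\big(0,\ g_i(x)\big)\right]^2$$

where the penalty parameter r is increased between successive unconstrained minimizations.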
Optimization Methods (Cont.)
Global optimization: stochastic techniques (a minimal SA sketch follows below).
Simulated Annealing (SA): based on the minimum-energy principle of a cooling metal's crystalline structure.
Genetic Algorithm (GA): based on the survival-of-the-fittest principle of evolutionary theory.
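A minimal sketch of the SA idea (the objective, neighbourhood move, and cooling schedule are illustrative assumptions, not a tuned implementation):

f = @(x) 3 + (x(1) - 1.5*x(2))^2 + (x(2) - 2)^2;   % example objective
x = [0 0]; T = 1.0;                % start point and initial temperature
for k = 1:5000
    xn = x + 0.1*randn(1,2);       % random neighbour move
    dE = f(xn) - f(x);
    if dE < 0 || rand < exp(-dE/T) % accept worse moves with prob exp(-dE/T)
        x = xn;
    end
    T = 0.999*T;                   % cooling: minimum-energy principle
end
x   % near the minimum [3 2]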
Multivariable gradient-based optimization
J is the cost function to be minimized in two dimensions.
The contours of the J paraboloid shrink as it decreases.
function retval = Example6_1(x)
% example 6.1: quadratic cost with minimum at x = [3 2], f = 3
retval = 3 + (x(1) - 1.5*x(2))^2 + (x(2) - 2)^2;

>> SteepestDescent('Example6_1', [0.5 0.5], 20, 0.0001, 0, 1, 20)
Where
[0.5 0.5] - initial guess value
20 - number of iterations
0.0001 - Golden search tolerance
0 - initial step size
1 - step interval
20 - scanning steps
>> ans
2.7585 1.8960
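The exact minimum of Example6_1 is x = [3 2] (where both squared terms vanish), so the 20-iteration result above is only partially converged. As a cross-check, assuming the Optimization Toolbox is available:

>> fminunc(@Example6_1, [0.5 0.5])   % converges to [3 2]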

Figure 11: Multivariable Gradient based optimization
Figure 12: Steepest Descent
Optimization Methods (Examples)
Numerical Optimization
Newton-Raphson Method
1. Root solver, for a system of nonlinear equations (MATLAB Optimization Toolbox: fsolve):

$$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}$$

2. One-dimensional optimizer (MATLAB Optimization Toolbox):

$$x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)}$$

Methods
The iterate is updated along a search direction d_i with step size α:

$$x_{i+1} = x_i + \alpha\, d_i$$
$$f(x_{i+1}) = f(x_i) + \alpha\, \nabla f(x_i) \cdot d_i$$

where ∇f(x_i)·d_i is the magnitude of the descent direction.
d_i = -∇f(x_i) achieves the steepest descent; d_i = +∇f(x_i) achieves the steepest ascent.
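A minimal MATLAB sketch of the root-solver formula above (the test function is an assumption for illustration):

f  = @(x) x^3 - 2;            % root at 2^(1/3)
fp = @(x) 3*x^2;              % derivative f'(x)
x  = 1;                       % initial guess
for i = 1:6
    x = x - f(x)/fp(x);       % x(i+1) = x(i) - f(x(i))/f'(x(i))
end
x   % ~1.2599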
Solve the following for two steps of steepest ascent:

$$f(x, y) = 2.25xy + 1.75y - 1.5x^2 - 2y^2$$

The partial derivatives can be evaluated at the initial guesses, x = 1 and y = 1:

$$\frac{\partial f}{\partial x} = -3x + 2.25y = -3(1) + 2.25(1) = -0.75$$
$$\frac{\partial f}{\partial y} = 2.25x - 4y + 1.75 = 2.25(1) - 4(1) + 1.75 = 0$$

Therefore, the search direction is -0.75i.

$$f(1 - 0.75h,\ 1) = 0.5 + 0.5625h - 0.84375h^2$$

This can be differentiated, set equal to zero, and solved for h* = 0.33333. Therefore, the result for the first iteration is x = 1 - 0.75(0.3333) = 0.75 and y = 1 + 0(0.3333) = 1. For the second iteration, the partial derivatives can be evaluated as:

$$\frac{\partial f}{\partial x} = -3(0.75) + 2.25(1) = 0$$
$$\frac{\partial f}{\partial y} = 2.25(0.75) - 4(1) + 1.75 = -0.5625$$

Therefore, the search direction is -0.5625j.

$$f(0.75,\ 1 - 0.5625h) = 0.59375 + 0.316406h - 0.632813h^2$$

This can be differentiated, set equal to zero, and solved for h* = 0.25. Therefore, the result for the second iteration is x = 0.75 + 0(0.25) = 0.75 and y = 1 - 0.5625(0.25) = 0.859375.
Figure: contour plot of f(x, y) over [0, 1.2] × [0, 1.2] with the maximum marked.
%chapra14.5 Contd
clear
clc
clf
ww1=0:0.01:1.2;
ww2=ww1;
[w1,w2]=meshgrid(ww1,ww2);
J=-1.5*w1.^2+2.25*w2.*w1-2*w2.^2+1.75*w2;
cs=contour(w1,w2,J,70);
%clabel(cs);
hold
grid
w1=1; w2=1; h=0;
for i=1:10
syms h
dfw1=-3*w1(i)+2.25*w2(i);        % partial derivative df/dw1
dfw2=2.25*w1(i)-4*w2(i)+1.75;    % partial derivative df/dw2
fw1=-1.5*(w1(i)+dfw1*h).^2+2.25*(w2(i)+dfw2*h).*(w1(i)+dfw1*h)-2*(w2(i)+dfw2*h).^2+1.75*(w2(i)+dfw2*h); % f along the search line
J=-1.5*w1(i)^2+2.25*w2(i)*w1(i)-2*w2(i)^2+1.75*w2(i)
g=solve(fw1);                    % the two roots of the quadratic in h
h=double(sum(g)/2);              % average of roots = parabola vertex = optimal step h*
w1(i+1)=w1(i)+dfw1*h;
w2(i+1)=w2(i)+dfw2*h;
plot(w1,w2)
pause(0.05)
end
MATLAB OPTIMIZATION TOOLBOX

function J=chaprafun(x)
w1=x(1);
w2=x(2);
J=-(-1.5*w1^2+2.25*w2*w1-2*w2^2+1.75*w2);  % negated: fminunc minimizes

%startchapra.m
clc
clear
x0=[1 1];
options=optimset('LargeScale','off','Display','iter','Maxiter',20,'MaxFunEvals',100,'TolX',1e-3,'TolFun',1e-3);
[x,fval]=fminunc(@chaprafun,x0,options)
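For reference, setting both partial derivatives to zero (-3x + 2.25y = 0 and 2.25x - 4y + 1.75 = 0) gives the analytic maximum at approximately x = [0.5676 0.7568], which the fminunc run above should reproduce.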
Newton-Raphson: Four-Bar Mechanism
Sine and cosine angle components; all angles are referenced from the global x-axis.
In the equations below, θ1 = 0 as link 1 lies along the x-axis; the other three angles are time-varying.
The 1st derivative of the position equations gives the angular velocities.
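The position (loop-closure) equations, reconstructed from the fouropt.m code below, are:

$$r_2\cos\theta_2 + r_3\cos\theta_3 - r_1\cos\theta_1 - r_4\cos\theta_4 = 0$$
$$r_2\sin\theta_2 + r_3\sin\theta_3 - r_1\sin\theta_1 - r_4\sin\theta_4 = 0$$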
If the input is applied to link 2 by a DC motor, then θ2 is the input to the system.
In matrix form:
2. Numerical Solution for Non-Algebraic Equations
Newton-Raphson iteration in vector form:

$$\bar{x}_{i+1} = \bar{x}_i - \frac{f(\bar{x}_i)}{f'(\bar{x}_i)}$$

Applied to the mechanism's position equations in the joint variables:

$$\Delta \bar{q} = -\frac{f(\bar{q}_i)}{f'(\bar{q}_i)}$$

and for the unknown angles:

$$\bar{\theta}_{i+1} = \bar{\theta}_i - \frac{f(\bar{\theta}_i)}{f'(\bar{\theta}_i)}$$
Newton-Raphson
%fouropt.m
function f=fouropt(x)
the = 0;                   % input angle theta2 (rad)
r1=12; r2=4; r3=10; r4=7;  % link lengths
f=-[r2*cos(the)+r3*cos(x(1))-r1*cos(0)-r4*cos(x(2));
r2*sin(the)+r3*sin(x(1))-r1*sin(0)-r4*sin(x(2))];

%startfouropt.m
clc
clear
x0=[0.1 0.1];
options=optimset('LargeScale','off','Display','iter','Maxiter',200,'MaxFunEvals',100,'TolX',1e-8,'TolFun',1e-8);
[x,fval]=fsolve(@fouropt,x0,options);
theta3=x(1)*57.3           % rad to deg (57.3 ~ 180/pi)
theta4=x(2)*57.3

See also: Foursimmechm.m, foursimmech.mdl and possol4.m