Outline
Introduction to Optimization
Optimization Procedure
Different Optimization Algorithms
Different Global Optimization Algorithms
Problems
Optimization
As ageless as time (Calculus)
Optimization is Everywhere
The more we know about something,
the more we see where optimization
can be applied.
Some personal decision making
- Finding the fastest route home or to class
- Optimal allocation of time for homework
- Optimal budgeting
Goal of Optimization
Components of an Optimization Problem
Objective Function:
An objective function
which we want to minimize or maximize.
For example, in a manufacturing process, we might
want to maximize the profit or minimize the cost.
In fitting experimental data to a user-defined
model, we might minimize the total deviation of
observed data from predictions based on the model.
In designing an inductor, we might want to
maximize the Quality Factor and minimize the
area.
Components of an Optimization Problem
Design Variables:
A set of unknowns or
variables which affect the value of the objective
function.
In the manufacturing problem, the variables might
include the amounts of different resources used or the
time spent on each activity.
In the data-fitting problem, the unknowns are the
parameters that define the model.
In the inductor design problem, the variables
define the layout geometry of the inductor.
Components of an Optimization Problem
Constraints: A set of constraints that allow the
variables to take on certain values but exclude others.
Mathematical Formulation of Optimization Problems
Minimize the objective function:
    min f(x),  x = (x1, x2, ..., xn)
subject to constraints:
    c_i(x) = 0   (equality constraints)
    c_i(x) >= 0  (inequality constraints)

Example
    min (x1 - 2)^2 + (x2 - 1)^2
    subject to: x1^2 - x2 <= 0
                x1 + x2 <= 2
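As a quick sanity check of this example, a brute-force grid search that filters out infeasible points (a sketch only; in practice an off-the-shelf constrained solver would be used) recovers the optimum x* = (1, 1) with objective value 1:

```python
# Example problem: min (x1-2)^2 + (x2-1)^2
#   subject to x1^2 - x2 <= 0 and x1 + x2 <= 2
def f(x1, x2):
    return (x1 - 2) ** 2 + (x2 - 1) ** 2

def feasible(x1, x2):
    return x1 ** 2 - x2 <= 0 and x1 + x2 <= 2

# Brute-force search over a grid covering the feasible region
best = None
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1):
        x1 = -2 + 4 * i / steps   # x1 in [-2, 2]
        x2 = 4 * j / steps        # x2 in [0, 4] (x2 >= x1^2 >= 0)
        if feasible(x1, x2) and (best is None or f(x1, x2) < best[0]):
            best = (f(x1, x2), x1, x2)

print(best)  # optimum at x = (1, 1) with objective value 1
```

Both constraints are active at the optimum, which is why a finer grid does not improve the answer.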
Constraints
Variable Bounds
Maximum and minimum bounds on each
design variable.
Without variable bounds the constraints
completely surround the feasible region.
Variable bounds are used to confine the
search algorithm within these bounds.
Ex: x_i^(L) <= x_i <= x_i^(U)
Classification of Optimization
Methods
Single variable
Multi-variable
Constrained
Unconstrained
Single objective
Multi-objective
Linear
Non-linear
Classifications of Optimization
Methods
Local methods find a point x_B* such that f(x_B*) <= f(x) for all x in a neighborhood B.
Ex: Gradient-based search methods, Newton-Raphson
algorithms, Steepest Descent, Conjugate-Gradient algorithms,
the Levenberg-Marquardt algorithm, etc.
Shortcomings:
1) One requires an initial guess to start with.
2) Convergence to an optimal solution depends on the chosen initial guess.
3) Most algorithms tend to get stuck at a suboptimal solution.
4) An algorithm efficient in solving one optimization problem may not be efficient in solving another one.
5) These are useful over a relatively narrow range.
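Shortcomings 2 and 3 are easy to demonstrate. A minimal steepest-descent sketch on f(x) = x^4 - 2x^2 + 0.5x (a made-up test function with two local minima, not from the slides) lands in different minima depending on the initial guess:

```python
def f(x):
    return x ** 4 - 2 * x ** 2 + 0.5 * x

def df(x):
    return 4 * x ** 3 - 4 * x + 0.5   # analytic derivative

def steepest_descent(x0, lr=0.01, iters=2000):
    # Repeatedly step against the gradient from the initial guess x0
    x = x0
    for _ in range(iters):
        x -= lr * df(x)
    return x

xa = steepest_descent(1.0)    # converges near x = 0.93 (suboptimal minimum)
xb = steepest_descent(-1.0)   # converges near x = -1.06 (better minimum)
print(xa, f(xa))
print(xb, f(xb))
```

Starting at x0 = 1.0 the method gets stuck at the worse of the two minima, exactly as shortcoming 3 warns.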
Continued
PSO uses a number of agents (particles) that constitute
a swarm moving around the search space looking for the best
solution. Each particle keeps track of the best fitness it
has achieved so far; this value is called pbest.
Continued
Another best value that is tracked by the PSO is
the best value obtained so far by any particle in the
population; this is called gbest.
Continued
Each particle tries to modify its current position and velocity
according to the distance between its current position and pbest,
and the distance between its current position and gbest.

v[n+1] = v[n] + c1*rand1()*(pbest[n] - CurrentPosition[n]) + c2*rand2()*(gbest[n] - CurrentPosition[n])
CurrentPosition[n+1] = CurrentPosition[n] + v[n+1]

v[n+1]: velocity of the particle at the (n+1)th iteration
CurrentPosition[n+1]: position of the particle at the (n+1)th iteration
CurrentPosition[n]: position of the particle at the nth iteration
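Plugging made-up numbers into the two update equations (c1 = c2 = 2, with the two random draws fixed at 0.5 and 0.25 purely for illustration) shows a single particle's step:

```python
c1, c2 = 2.0, 2.0          # acceleration coefficients
rand1, rand2 = 0.5, 0.25   # stand-ins for the two random draws

v_n, pos_n = 0.1, 1.0      # current velocity and position
pbest, gbest = 0.8, 0.5    # personal-best and global-best positions

# Velocity update, then position update
v_next = v_n + c1 * rand1 * (pbest - pos_n) + c2 * rand2 * (gbest - pos_n)
pos_next = pos_n + v_next

print(v_next, pos_next)    # v_next is about -0.35, pos_next about 0.65
```

Both attraction terms pull the particle back toward its bests, so the velocity reverses sign and the particle moves left.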
PSO Algorithm
For each particle
Initialize particle with feasible random number
END
Do
For each particle
Calculate the fitness value
If the fitness value is better than the best fitness value (pbest) in history
Set current value as the new pbest
End
Choose the particle with the best fitness value of all the particles as the gbest
For each particle
Calculate particle velocity according to velocity update equation
Update particle position according to position update equation
End
While the maximum-iteration or minimum-error criterion is not met
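The pseudocode above translates almost line for line into Python. This sketch minimizes a sphere function f(x) = x1^2 + x2^2 (a stand-in objective, not from the slides); note that it adds a standard inertia weight w to the velocity update, a common stabilizing assumption that the slides' equation omits, and clamps velocities to [-vmax, vmax] as described later:

```python
import random

def fitness(x):
    # Sphere function: minimum value 0 at the origin (lower is better)
    return sum(xi ** 2 for xi in x)

def pso(dim=2, n_particles=30, iters=300, lo=-5.0, hi=5.0,
        w=0.72, c1=1.49, c2=1.49, vmax=1.0):
    random.seed(1)
    # Initialize each particle with a feasible random position and velocity
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[random.uniform(-vmax, vmax) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity update equation, clamped to [-vmax, vmax]
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))
                # Position update equation
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:           # better than pbest in history
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:          # better than gbest
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best_x, best_f = pso()
print(best_x, best_f)   # best_f ends up very close to 0
```

The gbest check sits inside the pbest check because no particle can beat the global best without first beating its own personal best.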
local version:
vx[ ][ ] = vx[ ][ ] + 2*rand( )*(pbestx[ ][ ] - presentx[ ][ ]) +
2*rand( )*(pbestx[ ][lbest] - presentx[ ][ ])
Two neighborhood topologies are common in the literature:
Ring Topology (neighborhood of 3)
Star Topology (global neighborhood)
[Figure: five particles I0-I4 connected in a ring vs. fully connected in a star]
PSO Parameters:
Velocity
An important parameter in PSO; typically the only
one adjusted
Clamps particle velocities in each dimension
Determines fineness with which regions are
searched
If too high, can fly past optimal solutions
If too low, can get stuck in local minima
[Flowchart: PSO-based parameter extraction]
1. Initialize randomly the position and velocity of each particle.
2. For each particle, simulate the model parameters (Y11, Y12, Y21, Y22) and evaluate the fitness function against the measurement data.
3. If fitness(current pos.) > fitness(gbest), update gbest = current pos.
4. If fitness(current pos.) > fitness(lbest), update lbest = current pos.
5. Go to the next particle until particle > max. no.
6. Evaluate the fitness function E based on gbest. If fitness error < defined error, stop: the extracted parameters are found.
7. Otherwise, if iteration count > max. count, stop; else update the velocity (limited to [Vmin, Vmax]) and the position (limited to [Pmin, Pmax]), and go to the next iteration.
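The fitness function in the flowchart compares simulated and measured two-port Y-parameters. A plausible least-squares form is sketched below; this is an assumption, since the slides do not give the exact expression, and the data format is purely hypothetical:

```python
def fitness(simulated, measured):
    """Least-squares fitness: larger is better, 1.0 means a perfect match.

    `simulated` and `measured` map 'Y11', 'Y12', 'Y21', 'Y22' to
    complex values (hypothetical format for illustration).
    """
    err = sum(abs(simulated[k] - measured[k]) ** 2
              for k in ("Y11", "Y12", "Y21", "Y22"))
    return 1.0 / (1.0 + err)

# Made-up measurement data
measured = {"Y11": 1 + 2j, "Y12": -0.5j, "Y21": -0.5j, "Y22": 0.8 + 1j}
print(fitness(measured, measured))   # perfect match gives 1.0
print(fitness({"Y11": 1 + 2.1j, "Y12": -0.5j,
               "Y21": -0.5j, "Y22": 0.8 + 1j}, measured))  # below 1.0
```

The 1/(1 + err) shape turns an error to be minimized into a fitness to be maximized, matching the "greater than" comparisons in the flowchart.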
Comparison of Genetic Algorithm and PSO
Tested in a MATLAB program, P4 1.7 GHz CPU, 256 MB RAM.
No. of particles / population size = 100
No. of simulation runs: 10000
[Figure: GA vs. PSO convergence curves; y-axis 0-70, x-axis: number of iterations, 0-10000]
Model Fitting
Inductor Optimization
maximize    Q(n, d, w, s)
subject to  (1 - tol) * L_target <= L(n, d, w, s) <= (1 + tol) * L_target
            n_min <= n <= n_max
            d_min <= d <= d_max
            w_min <= w <= d
            s_min <= s <= s_max
            d >= 2n(w + s) - 2s
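The last constraint says the n turns of width w with (n - 1) gaps of spacing s must fit across the outer diameter d, since 2n(w + s) - 2s = 2nw + 2(n - 1)s. A quick feasibility check is sketched below; the numeric values are made up for illustration, and the Q and L models are not given in the slides:

```python
def winding_fits(n, d, w, s):
    # n turns of width w separated by (n - 1) gaps of width s must
    # fit across the outer diameter d: d >= 2*n*w + 2*(n - 1)*s
    return d >= 2 * n * (w + s) - 2 * s

# Hypothetical geometry in micrometres
print(winding_fits(n=5, d=200.0, w=10.0, s=2.0))   # 116 <= 200, fits
print(winding_fits(n=10, d=200.0, w=10.0, s=2.0))  # 236 > 200, does not fit
```

A PSO run over (n, d, w, s) would use a check like this to keep particles inside the feasible region before evaluating Q.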
Thank You