
Particle Swarm Optimization (PSO) Algorithm
and Its Application in Engineering Design Optimization

By
Sushanta Kumar Mandal
Research Scholar
School of Information Technology
Indian Institute of Technology Kharagpur
September 9, 2005

Outline
Introduction to Optimization
Optimization Procedure
Different Optimization Algorithms
Different Global Optimization Algorithms
Particle Swarm Optimization (PSO) Algorithm
Application of PSO in Design Optimization Problems

Optimization
As ageless as time.

Calculus
The maximum and minimum of a smooth function are attained at a stationary point, where its gradient vanishes.

Optimization is Everywhere
The more we know about something, the more we see where optimization can be applied.
Some personal decision making:
- Finding the fastest route home or to class
- Optimal allocation of time for homework
- Optimal budgeting

Goal of Optimization
Find values of the variables that minimize or maximize the objective function while satisfying the constraints.

Components of an Optimization Problem
Objective Function: the function we want to minimize or maximize.
For example, in a manufacturing process, we might want to maximize the profit or minimize the cost.
In fitting experimental data to a user-defined model, we might minimize the total deviation of the observed data from the predictions of the model.
In designing an inductor, we might want to maximize the quality factor and minimize the area.

Components of an Optimization Problem
Design Variables: a set of unknowns or variables that affect the value of the objective function.
In the manufacturing problem, the variables might include the amounts of different resources used or the time spent on each activity.
In the data-fitting problem, the unknowns are the parameters that define the model.
In the inductor design problem, the variables define the layout geometry.

Components of an Optimization Problem
Constraints: a set of conditions that allow the unknowns to take on certain values but exclude others.
For the manufacturing problem, it does not make sense to spend a negative amount of time on any activity, so we constrain all the "time" variables to be non-negative.
In the inductor design problem, we would probably want to limit the upper and lower values of the layout parameters and to target an inductance value within the tolerance level.

Are All These Ingredients Necessary?
Almost all optimization problems have an objective function.
No objective function: in some cases (for example, the design of integrated circuit layouts), the goal is to find a set of variables that satisfies the constraints of the model. The user does not particularly want to optimize anything, so there is no reason to define an objective function. This type of problem is usually called a feasibility problem.

Are All These Ingredients Necessary?
Variables are essential. If there are no variables, we cannot define the objective function or the problem constraints.
Constraints are not essential. In fact, the field of unconstrained optimization is large and important, and many algorithms and software packages are available for it. Still, it has been argued that almost all problems really do have constraints.

What We Need for Optimization
Models: Modeling is the process of identifying the objective function, variables and constraints. The goal of a model is insight, not numbers. A good mathematical model of the optimization problem is needed.
Algorithms: Typically, an interesting model is too complicated to solve with paper and pencil, so an effective and reliable numerical algorithm is needed. There is no universal optimization algorithm. An algorithm should have robustness (good performance for a wide class of problems), efficiency (not too much computer time) and accuracy (the ability to identify and control numerical error).

Flowchart of the Optimal Design Procedure
1. Recognize the need for optimization
2. Choose design variables
3. Formulate constraints
4. Formulate the objective function
5. Set up variable bounds
6. Select an optimization algorithm
7. Obtain solution(s)

Mathematical Formulation of Optimization Problems
Minimize the objective function

$\min f(x), \quad x = (x_1, x_2, \ldots, x_n)$

subject to constraints

$c_i(x) \le 0, \qquad c_i(x) = 0$

Example:

$\min \; (x_1 - 2)^2 + (x_2 - 1)^2$
subject to: $x_1^2 - x_2 \le 0, \quad x_1 + x_2 \le 2$
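A quick sketch of this example in Python (the names f and feasible are ours, for illustration only):

def f(x1, x2):
    # Objective: (x1 - 2)^2 + (x2 - 1)^2
    return (x1 - 2)**2 + (x2 - 1)**2

def feasible(x1, x2):
    # Both constraints must hold: x1^2 - x2 <= 0 and x1 + x2 <= 2
    return x1**2 - x2 <= 0 and x1 + x2 <= 2

print(f(1.0, 1.0), feasible(1.0, 1.0))  # prints "1.0 True": (1, 1) is feasible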

Constraints
Inequality constraints: $x_1^2 - x_2 \le 0$
Equality constraints: $x_1 = 2$

Variable Bounds
Maximum and minimum bounds on each design variable.
Without variable bounds, the constraints alone define the feasible region.
Variable bounds are used to confine the search algorithm within these bounds.
Ex: $x_i^{(L)} \le x_i \le x_i^{(U)}$

Classification of Optimization
Methods
Single variable
Multi-variable
Constrained
Unconstrained
Single objective
Multi-objective
Linear
Non-linear

Classification of Optimization Methods

Local and Global Optimizers
A local minimizer, $x_B^*$, of the region $B$, is defined so that

$f(x_B^*) \le f(x), \quad \forall x \in B$.

Ex: gradient-based search methods, Newton-Raphson algorithms, Steepest Descent, Conjugate-Gradient algorithms, the Levenberg-Marquardt algorithm, etc.
Shortcomings: 1) They require an initial guess to start with. 2) Convergence to an optimal solution depends on the chosen initial guess. 3) Most algorithms tend to get stuck at a suboptimal solution. 4) An algorithm efficient in solving one optimization problem may not be efficient in solving another. 5) They are useful only over a relatively narrow range.

Local and Global Optimizers
The global optimizer, $x^*$, is defined so that

$f(x^*) \le f(x), \quad \forall x \in S$

where $S$ is the search space.
Ex: Simulated Annealing, Genetic Algorithms, Ant Colony Optimization, Geometric Programming, Particle Swarm Optimization, etc.


Particle Swarm Optimization
An evolutionary computational technique based on the movement and intelligence of swarms looking for the most fertile feeding location.
It was developed in 1995 by James Kennedy and Russell Eberhart.
A simple algorithm, easy to implement, with few parameters to adjust (mainly the velocity).
A swarm is an apparently disorganized collection (population) of moving individuals that tend to cluster together, while each individual seems to be moving in a random direction.

Continued
It uses a number of agents (particles) that constitute a swarm moving around in the search space looking for the best solution.
Each particle is treated as a point in a D-dimensional space that adjusts its flight according to its own flying experience as well as the flying experience of the other particles.
Each particle keeps track of the coordinates in the problem space associated with the best solution (fitness) it has achieved so far. This value is called pbest.

Continued
Another best value tracked by PSO is the best value obtained so far by any particle in the neighborhood of the particle. This value is called gbest.
The PSO concept consists of changing the velocity of (accelerating) each particle toward its pbest and the gbest position at each time step.

Continued
Each particle tries to modify its current position and velocity according to the distance between its current position and pbest, and the distance between its current position and gbest.

$v_{n+1} = v_n + c_1 \, \mathrm{rand}_1() \times (p_{\mathrm{best},n} - x_n) + c_2 \, \mathrm{rand}_2() \times (g_{\mathrm{best},n} - x_n)$
$x_{n+1} = x_n + v_{n+1}$

where
$x_n$: position of the particle at the nth iteration
$x_{n+1}$: position of the particle at the (n+1)th iteration
$v_n$: velocity of the particle at the nth iteration
$v_{n+1}$: velocity of the particle at the (n+1)th iteration
$c_1$: acceleration factor related to pbest
$c_2$: acceleration factor related to gbest
$\mathrm{rand}_1()$: random number between 0 and 1
$\mathrm{rand}_2()$: random number between 0 and 1
pbest: pbest position of the particle
gbest: gbest position of the swarm
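As a concrete illustration, one update step for a single particle in one dimension might look like this in Python (a sketch under the common assumption c1 = c2 = 2; all variable names and values here are ours):

import random

c1, c2 = 2.0, 2.0          # acceleration factors (common default, assumed)
v, x = 0.5, 1.0            # current velocity v_n and position x_n
pbest, gbest = 1.5, 3.0    # best positions found so far

# Velocity update: the particle is pulled toward pbest and gbest by random amounts
v = v + c1 * random.random() * (pbest - x) + c2 * random.random() * (gbest - x)
x = x + v                  # position update: x_{n+1} = x_n + v_{n+1}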

PSO Algorithm
For each particle
    Initialize the particle with feasible random values
End

Do
    For each particle
        Calculate the fitness value
        If the fitness value is better than the best fitness value (pbest) in its history
            Set the current value as the new pbest
    End
    Choose the particle with the best fitness value of all the particles as the gbest
    For each particle
        Calculate the particle velocity according to the velocity update equation
        Update the particle position according to the position update equation
    End
While maximum iterations or minimum error criterion is not attained
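The loop above translates into a short NumPy program. This is a minimal sketch for the earlier example objective, with unconstrained search and commonly used parameter values (all assumptions, not taken from the slides):

import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, c1=2.0, c2=2.0,
        lo=-5.0, hi=5.0, vmax=1.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))       # random initial positions
    v = rng.uniform(-vmax, vmax, (n_particles, dim))  # random initial velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = np.clip(v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x),
                    -vmax, vmax)                      # velocity update + clamp
        x = np.clip(x + v, lo, hi)                    # position update + bounds
        val = np.array([f(p) for p in x])
        improved = val < pbest_val                    # update personal bests
        pbest[improved] = x[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[pbest_val.argmin()].copy()      # update global best
    return gbest

print(pso(lambda z: (z[0] - 2)**2 + (z[1] - 1)**2))   # approaches [2. 1.]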

gbest and lbest

Global version:
vx[][] = vx[][] + 2*rand()*(pbestx[][] - presentx[][])
       + 2*rand()*(pbestx[][gbest] - presentx[][])

Local version:
vx[][] = vx[][] + 2*rand()*(pbestx[][] - presentx[][])
       + 2*rand()*(pbestx[][lbest] - presentx[][])

Particle Swarm Optimization: Swarm Topology
In PSO, two basic topologies have been used in the literature:
- Ring topology (neighborhood of 3)
- Star topology (global neighborhood)

[Figures: five particles I0 through I4, connected in a ring topology and in a star topology.]
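In code, the only difference between the two topologies is how each particle's best neighbor is chosen. A hedged sketch of the ring neighborhood of 3 (itself plus its left and right neighbors; the function name and arguments are ours):

def ring_lbest(pbest_val, i):
    # Local best over particle i and its two ring neighbors
    n = len(pbest_val)
    neighbors = [(i - 1) % n, i, (i + 1) % n]
    return min(neighbors, key=lambda j: pbest_val[j])  # index of the local best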

PSO Parameters: Velocity
An important parameter in PSO; typically the only one adjusted.
Clamps the particles' velocities in each dimension.
Determines the fineness with which regions are searched:
- If too high, particles can fly past optimal solutions
- If too low, particles can get stuck in local minima
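A sketch of per-dimension velocity clamping (the Vmax values are assumed tuning parameters, not from the slides):

import numpy as np

vmax = np.array([1.0, 0.5])   # one velocity limit per dimension
v = np.array([2.3, -0.7])
v = np.clip(v, -vmax, vmax)   # -> [1.0, -0.5]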

Flowchart of the Extraction Procedure Using PSO
1. Input: number of particles, iteration count, error = 0.05.
2. Initialize randomly: the position of each particle and the velocity of each particle.
3. For each particle: simulate the model parameters Y11, Y12, Y21, Y22 and evaluate the fitness function against the measurement data.
4. If fitness(current position) > fitness(gbest), update gbest = current position.
5. If fitness(current position) > fitness(lbest), update lbest = current position.
6. Go to the next particle until the particle index exceeds the maximum number.
7. Evaluate the fitness function E based on gbest.
8. If the fitness error < the defined error, or the iteration count > the maximum count, output the extracted parameters.
9. Otherwise, update the velocity (limited to [Vmin, Vmax]) and the position (limited to [Pmin, Pmax]), and start the next iteration.

Comparison of Genetic Algorithm and PSO
Tested in a MATLAB program on a P4 1.7 GHz CPU with 256 MB RAM.
Number of particles / population size = 100
Number of simulation runs: 10000
Crossover probability = 0.9
Mutation probability = 0.01

[Figure: target value of error minimization (0 to 70) versus number of iterations (0 to 10000) for GA and PSO.]

Model Fitting

[Figures: model-fitting results.]

Inductor Optimization

maximize    $Q(n, d, w, s)$
subject to  $(1 - \mathrm{tol}) \, L_{\mathrm{target}} \le L(n, d, w, s) \le (1 + \mathrm{tol}) \, L_{\mathrm{target}}$
            $n_{\min} \le n \le n_{\max}$
            $d_{\min} \le d \le d_{\max}$
            $w_{\min} \le w \le d$
            $s_{\min} \le s \le s_{\max}$
            $d \ge 2n(w + s) - 2s$

Constrained PSO Optimization
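The slides do not spell out how the constraints are enforced inside PSO. One common approach, shown here purely as an illustrative assumption, is to add a penalty for constraint violations to the fitness function so that infeasible particles score poorly:

def penalized_fitness(f, constraints, x, weight=1e6):
    # constraints: list of functions g with the convention g(x) <= 0 when satisfied
    violation = sum(max(0.0, g(x))**2 for g in constraints)
    return f(x) + weight * violation  # feasible points are unaffected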

Thank You
