
Iwan Sofana (iwansofana@gmail.com)

Kota Baru Parahyangan
Bandung, West Java, Indonesia
2017
These slides are adapted from many related
slides, papers, & books.

Special thanks to:


James Kennedy & Russell Eberhart
Maurice Clerc
Riccardo Poli, Tim Blackwell, Andry Pinto, Hugo Alves
Inês Domingues, Luís Rocha, Susana Cruz
Jaco F. Schutte
Matthew Settles
Satyobroto Talukder
and many more
Optimization

Basic PSO

PSO Algorithm
Optimization
Optimization determines the best-suited solution to a
problem under given circumstances.

Non-linear optimization problems are generally very
difficult to solve.

Linear/non-linear problems are those whose objective is a
linear/non-linear function.
Function Example
Case study
Please find the maxima/minima of the following
functions:
f(x) = x^2 + 2
h(t) = 3 + 14t - 5t^2
g(y) = 5y^3 + 2y^2 - 3y
How do we know it is a minimum or maximum?
Clue: use derivatives & math tools.
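For example, h(t) above can be checked with derivatives. Here is a minimal sketch using the sympy library (the tool choice is an assumption; the slides only say to use derivatives and math tools):

import sympy as sp

t = sp.symbols('t')
h = 3 + 14*t - 5*t**2
critical = sp.solve(sp.diff(h, t), t)     # h'(t) = 14 - 10t = 0  ->  t = 7/5
concavity = sp.diff(h, t, 2)              # h''(t) = -10 < 0  ->  a maximum
print(critical, concavity, h.subs(t, critical[0]))   # [7/5] -10 64/5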
Case study

[Worked-solution slides: quadratic solution, derivative, derivative rules (figures not reproduced)]
Optimization

Based on problem characteristics:

Unconstrained Optimization.
Constrained Optimization.
Dynamic Optimization.
Unconstrained Optimization

Many optimization problems place no restrictions on the
values that can be assigned to the variables of the
problem.
Constrained Optimization

The process of optimizing an objective function with
respect to some variables in the presence of constraints
on those variables.
Dynamic Optimization

Many optimization problems have objective functions that
change over time; such changes in the objective function
cause the positions of the optima to change.
Optimization Techniques
There are two types of optimization techniques.

Global optimization technique

Local optimization technique


Global Opt. Technique
A global optimization technique seeks to find the global
minimum (the lowest function value) and its
corresponding global minimizer.

Global minimizer: X*


Local Opt. Technique
A local optimization technique tries to find a local minimum
and its corresponding local minimizer.

Local minimizer: XL*


Case Study

[Figure: a function with a local minimizer XL* and the global minimizer X* (not reproduced)]
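As a small illustration (a sketch, not from the slides; the test function and the use of scipy are assumptions), a local optimization technique converges to whichever minimizer lies nearest its starting point:

from scipy.optimize import minimize

def f(v):
    x = v[0]                          # scipy passes a 1-D array of variables
    return x**4 + x**3 - 6*x**2       # one local and one global minimum

print(minimize(f, x0=[2.0]).x)        # starts near XL*, stops at the local minimizer (~ 1.40)
print(minimize(f, x0=[-3.0]).x)       # starts near X*, finds the global minimizer (~ -2.15)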
Real-Life
For a known (differentiable) function f, calculus can
fairly easily provide us with the minima and maxima of f.

However, in real-life optimization tasks, this objective
function f is often not directly known.

The objective function is a black box.


Basic PSO
Inspired by the social behavior and dynamic, communicative
movements of insects, birds, and fish in nature.

In 1986, Craig Reynolds (a computer graphics expert) studied
the flocking behavior of birds.
He described this process with 3 simple behaviors:

+ Separation: avoid crowding local flockmates (neighbors).
+ Alignment: move towards the average heading of local flockmates (neighbors).
+ Cohesion: move towards the average position of local flockmates (neighbors).
In 1990, Heppner and Grenander researched bird flocks
searching for corn.

In 1995, James Kennedy (a social psychologist) and
Russell Eberhart (an electrical engineer), influenced by
Heppner and Grenander's work, developed a powerful
optimization method: Particle Swarm Optimization
(PSO).

I believe there are many more researchers... :-)


PSO Origins
Kennedy and Eberhart first introduced the Particle
Swarm Optimization (PSO) algorithm as a solution to
complex non-linear optimization problems, imitating
the behavior of bird flocks.

The Particle Swarm Optimization (PSO) algorithm is a
multi-agent parallel search technique which maintains
a swarm of particles, where each particle represents a
potential solution.
Basic Idea
All particles fly through a multidimensional search
space, where each particle adjusts its position
according to its own experience and that of its
neighbors.

Each particle adjusts its travelling speed
dynamically according to the flying
experience of itself and its neighbors.
PSO Algorithm
Basically, there are two types of PSO algorithms:
+ Local Best (lbest) PSO.
+ Global Best (gbest) PSO.

The two differ in the size of
their neighborhoods.
Local vs Global
The local best or personal best PSO (lbest/pbest
PSO) method only allows each particle to be
influenced by the best-fit particle chosen from its
neighborhood.

The global best PSO (or gbest PSO) is a method
where the position of each particle is influenced by
the best-fit particle in the entire swarm.
Local vs Global
Due to the larger particle interconnectivity of the gbest
PSO, it sometimes converges faster than the lbest
PSO.

On the other hand, due to the larger diversity of the lbest
PSO, it is less susceptible to being trapped in local
minima.
Local vs Global
The local best or personal best PSO (lbest/pbest PSO)
uses a ring social topology/structure.

The global best PSO (or gbest PSO) uses a star social
network topology/structure.

[Figure: ring social topology (local) vs. star social topology (global)]
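To make the lbest idea concrete, here is a minimal sketch (an assumed helper, not from the slides) of picking the neighborhood best in a ring topology with k neighbors on each side:

def ring_neighbor_best(positions, fitness, i, k=1):
    # Particles i-k .. i+k (wrapping around the ring) form particle i's neighborhood.
    n = len(positions)
    neighborhood = [(i + d) % n for d in range(-k, k + 1)]
    best = min(neighborhood, key=lambda j: fitness[j])   # best-fit neighbor (minimization)
    return positions[best]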
PSO Algorithm
The PSO algorithm consists of just three steps:

1. Evaluate the fitness of each particle.
2. Update individual and global best fitnesses and positions.
3. Update velocity and position of each particle.

These steps are repeated until some stopping condition is met.
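These three steps can be put together in a short program. Below is a minimal sketch of a gbest PSO for minimization (the function name gbest_pso, the default parameter values, and the clamping of positions to the bounds are illustrative assumptions, not taken from the slides):

import random

def gbest_pso(f, dim, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    # random initial positions, zero initial velocities
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest, pbest_val = [x[:] for x in X], [f(x) for x in X]
    g_idx = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g_idx][:], pbest_val[g_idx]

    for _ in range(iters):
        for i in range(n_particles):
            val = f(X[i])                         # 1. evaluate fitness
            if val < pbest_val[i]:                # 2. update personal best ...
                pbest_val[i], pbest[i] = val, X[i][:]
            if val < gbest_val:                   #    ... and global best
                gbest_val, gbest = val, X[i][:]
            for d in range(dim):                  # 3. update velocity and position
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
    return gbest, gbest_val

The roles of w, C1, C2 and of the uniform random factors r1, r2 match the characteristics discussed later in this deck.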
PSO Algorithm
PSO algorithms:

+ Gbest PSO
+ Lbest PSO
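In a commonly used form with an inertia weight w, the gbest PSO updates each particle i as:

v_i(t+1) = w*v_i(t) + c1*r1*(pbest_i - x_i(t)) + c2*r2*(gbest - x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)

where r1 and r2 are random numbers drawn uniformly from [0, 1]. For the lbest PSO, gbest is replaced by the best position found in particle i's neighborhood.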
Uniform Distribution
A uniform distribution (also called a rectangular distribution) is
a distribution where the probability of occurrence is
the same for all values; it has constant probability.

For instance, if a die is thrown, the probability of
obtaining any one of the six possible outcomes is 1/6.
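In PSO, the stochastic factors r1 and r2 in the velocity update are typically drawn from the continuous uniform distribution U(0, 1); a one-line sketch (the module choice is assumed):

import random
r1, r2 = random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)   # fresh draws for every particle and dimension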
Uniform Distribution
Geometrical Illustration
[Figures not reproduced]
Characteristics of PSO
1. Number of particles: usually between 10-50 or 20-60.

2. C1 is the importance of the personal best value.

3. C2 is the importance of the neighborhood best value.
Usually C1 + C2 = 4, or C1 = C2 = 2 (empirically chosen values). Wrong initialization of C1
and C2 may result in divergent or cyclic behavior.

4. If the velocity is too low, the algorithm is too slow.

5. If the velocity is too high, the algorithm is too unstable (see the velocity-clamping sketch after item 11).


Characteristics of PSO
6. When C1 = C2 = 0, all particles continue flying at
their current speed.

7. When C1 > 0 and C2 = 0, all particles are
independent.

8. When C2 > 0 and C1 = 0, all particles are attracted to a
single point (i.e., gbest).

9. When C1 = C2, all particles are attracted towards the
average of pbest and gbest.
Characteristics of PSO
10. When C1 >> C2, each particle is more strongly
influenced by its pbest position, resulting in excessive
wandering.

11. When C2 >> C1, all particles are much more strongly
influenced by the global best position, which causes
all particles to rush prematurely towards the optima.
Characteristics of PSO
[Diagram: the velocity update terms (inertia/memory, personal influence, social influence) balance diversification and intensification]
Characteristics of PSO
+ Intensification: explores the previous solutions, finds the best solution of a
given region.

+ Diversification: searches new solutions, finds the regions with potentially
the best solutions.
Characteristics of PSO
Advantages
Insensitive to scaling of design variables
Simple implementation
Easily parallelized for concurrent processing
Derivative free
Very few algorithm parameters
Very efficient global search algorithm

Disadvantages
Tendency to fast and premature convergence at mid-optimum points
Slow convergence in the refined search stage (weak local search ability)
Case study
Please find the maxima/minima of the function:
f(x) = -x^2 + 5x + 20 ;  -10 <= x <= 10
Use 9 particles:
X1 = -9.6   X2 = -6
X3 = -2.6   X4 = -1.1
X5 = 0.6    X6 = 2.3
X7 = 2.8    X8 = 8.3   X9 = 10
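Reusing the gbest_pso sketch from earlier (the particle count follows the slide; the iteration count is an assumption, and the sketch initializes positions randomly rather than with the listed X1 to X9), the maximum can be found by minimizing -f:

def f(x):
    return -x[0]**2 + 5*x[0] + 20          # the slide's objective, as a function of a 1-D position

best, neg_val = gbest_pso(lambda x: -f(x), dim=1, bounds=(-10, 10),
                          n_particles=9, iters=50)
print(best, -neg_val)                       # analytically, the maximum is f(2.5) = 26.25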
Modified PSO
Several approaches
2-D Otsu PSO
Active Target PSO
Adaptive PSO
Adaptive Mutation PSO
Adaptive PSO Guided by Acceleration Information
Attractive Repulsive Particle Swarm Optimization
Binary PSO
Cooperative Multiple PSO
Dynamic and Adjustable PSO
Extended Particle Swarms

Davoud Sedighizadeh and Ellips Masehian, "Particle Swarm Optimization Methods, Taxonomy and Applications,"
International Journal of Computer Theory and Engineering, Vol. 1, No. 5, December 2009.
Key Concepts
+ Particle Swarm Optimization (PSO) and Ant Colony
Optimization (ACO) are part of Swarm Intelligence (SI).

+ Swarm intelligence (SI) is the collective behavior of
decentralized, self-organized systems, natural or artificial.

+ The expression was introduced by Gerardo Beni and
Jing Wang in 1989, in the context of cellular robotic systems.
Key Concepts
1. The PSO algorithm basically learns from animal activity
or behavior to solve optimization problems.

2. Each member of the population is called a particle, and
the population is called a swarm.

3. It does not require any gradient information of the
function to be optimized and uses only primitive
mathematical operators.
Key Concepts
4. PSO is well suited to solving non-linear, non-convex,
continuous, discrete, and integer-variable problems.

5. In PSO, each particle flies through the multidimensional
space and adjusts its position at every step, using its own
experience and that of its peers to move the entire swarm
toward an optimum solution.

6. It doesn't always work well and still has room for improvement.
