Eric Granger
Ismail Ben Ayed
COURSE CONTENT
• Asynchronous updates:
– New best positions updated after each particle position update
– Immediate feedback about best regions of the search space
– Better for lbest PSO
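As a concrete sketch, one asynchronous sweep can be written so that the best position is refreshed immediately after each particle moves, rather than once at the end of the sweep as in the synchronous scheme. The particle layout (dicts with `x`, `v`, `pbest`), the gbest neighborhood, and the ϕ = 2.0 defaults below are illustrative assumptions, not taken from the slides:

```python
import random

def pso_step_async(swarm, f, phi1=2.0, phi2=2.0):
    # One asynchronous PSO iteration (minimization): gbest is updated
    # right after each particle moves, so later particles in the same
    # sweep get immediate feedback about the best regions found so far.
    gbest = min((p["pbest"] for p in swarm), key=f)
    for p in swarm:
        for d in range(len(p["x"])):
            p["v"][d] += (random.uniform(0, phi1) * (p["pbest"][d] - p["x"][d])
                          + random.uniform(0, phi2) * (gbest[d] - p["x"][d]))
            p["x"][d] += p["v"][d]
        if f(p["x"]) < f(p["pbest"]):
            p["pbest"] = list(p["x"])
        if f(p["pbest"]) < f(gbest):   # immediate best-position update
            gbest = list(p["pbest"])
    return gbest
```

A synchronous variant would move both `min(...)` computations outside the particle loop; the asynchronous form is what makes the scheme attractive for lbest neighborhoods, where fresh local information propagates faster.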
SYS843: Réseaux de neurones et systèmes flous
D2-21
2) Canonical PSO algorithm
Acceleration Coefficients: ϕ
• The boxes show the distribution of the random vectors of the attracting forces of the local best and global best
• The acceleration coefficients determine the scale distribution of the random cognitive component and social component vectors
[Figure: particle position xi(t), velocity vi(t), personal best pi(t), and global best gi(t); left panel: ϕ1 = ϕ2 = 1, right panel: ϕ1, ϕ2 > 1]
(Figure credit: LIACS Natural Computing Group, Leiden University)
Original PSO - Stability Problems
• The acceleration coefficients should be set sufficiently high, but higher acceleration coefficients result in less stable systems in which the velocity vi has a tendency to explode
• Solution: keep the velocity vi within the range [-vmax, +vmax]
• However, limiting the velocity does not necessarily prevent particles from leaving the search space, nor does it guarantee convergence
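A minimal sketch of the velocity-clamping fix, assuming the usual component-wise convention (the function name is illustrative):

```python
def clamp_velocity(v, vmax):
    # Keep each velocity component inside [-vmax, +vmax]; this caps the
    # step size per dimension but, as noted above, does not confine the
    # particle positions themselves to the search space.
    return [max(-vmax, min(vmax, vd)) for vd in v]
```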
Inertia weighted PSO
• Solution: an inertia weight ω was introduced to control the velocity explosion:

vi ← ω·vi + U(0, ϕ1)·(pi − xi) + U(0, ϕ2)·(gi − xi)
• If ω, ϕ1 and ϕ2 are set correctly, this update rule allows for convergence without the use of vmax
• The inertia weight can be used to control the balance between exploration and exploitation:
– ω ≥ 1: velocities increase over time, swarm diverges
– 0 < ω < 1: particles decelerate; convergence depends on the ϕ1 and ϕ2 settings
• Rule-of-thumb settings: ω = 0.7298 and ϕ1 = ϕ2 = 1.49618
• Other schemes for a dynamically changing inertia weight have also been proposed
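The inertia-weighted update rule above can be sketched for one particle, with the rule-of-thumb constants from the slide as defaults (the function name and list-based vectors are illustrative assumptions):

```python
import random

def update_velocity(v, x, p, g, w=0.7298, phi1=1.49618, phi2=1.49618):
    # vi <- w*vi + U(0, phi1)*(pi - xi) + U(0, phi2)*(gi - xi)
    # Each component draws its own uniform random scaling factors.
    return [w * vd
            + random.uniform(0, phi1) * (pd - xd)
            + random.uniform(0, phi2) * (gd - xd)
            for vd, xd, pd, gd in zip(v, x, p, g)]
```

With 0 < ω < 1 the first term shrinks the previous velocity at every step, which is exactly the deceleration effect described above; ω ≥ 1 would let the velocity grow without bound.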
Eberhart, R. C. and Shi, Y., 'Comparing inertia weights and constriction factors in particle swarm optimization', Proc. IEEE Congress on Evolutionary Computation, 2000.
Examples of functions for benchmarking

[Figures: Rastrigin and Griewank benchmark functions]
MOPSO algorithms:
• Pareto-based approaches use leader selection techniques
based on Pareto dominance [Coello Coello, 2008].
• Leaders are defined as particles that are non-dominated with
respect to the swarm.
• Most authors use additional information (e.g., provided by a density estimator) to avoid a purely random selection of the leader from the current set of non-dominated solutions.
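A minimal sketch of Pareto-based leader selection under these ideas; the mean-distance sparsity measure below stands in for a real density estimator (e.g., crowding distance) and is an illustrative simplification, as are the function names:

```python
def dominates(a, b):
    # a Pareto-dominates b (minimization): no worse in every objective
    # and strictly better in at least one.
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def select_leader(swarm_objs):
    # Leaders are the particles non-dominated with respect to the swarm;
    # among them, prefer the one in the sparsest region of objective
    # space rather than picking uniformly at random.
    nd = [o for o in swarm_objs
          if not any(dominates(p, o) for p in swarm_objs if p is not o)]
    def sparsity(o):
        return sum(sum((oi - pi) ** 2 for oi, pi in zip(o, p)) ** 0.5
                   for p in nd if p is not o)
    return max(nd, key=sparsity) if len(nd) > 1 else nd[0]
```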
[Figure: static/video facial model; mapping between spaces for optimizing an ensemble of classifiers C1, ..., CK with fusion:
– Feature space: input feature vectors a = (a1, ..., aI)
– Decision space: class labels W = {C1, ..., CK}
– Predefined search space: hyperparameters h = (h1, ..., hD)
– Objective space: objectives o = (f1(h), …, fO(h))]
[Figure: ADNPSO architecture: the ADNPSO module exchanges hyperparameters and fitness values with a swarm of incremental classifiers (pool), supported by an LTM and an archive, followed by selection and fusion of the classifiers]
D.2(4) FAM optimization
Influence of particle selection

[Figure: error rate (%) and classification rate (%) as a function of the number of classifiers used for classification]