
1 EE6701 Evolutionary Computation Chapter 1

Evolutionary Computation
for Multi-objective Optimization
A/Prof Tan Kay Chen
Department of Electrical and Computer Engineering
National University of Singapore
Tel: 6516 2127; Email: eletankc@nus.edu.sg; Office: E4 08-09
2 EE6701 Evolutionary Computation Chapter 1
Rationale and Motivation
Some real-world problems have several (possibly conflicting)
objectives to be optimized
Many of these problems are transformed (or aggregated) into a
single-objective problem (SOP) and are solved using single
objective optimization techniques
Transforming a multi-objective problem (MOP) into an SOP requires
a priori knowledge of the relative importance of the different objectives
Defining the problem in a multi-objective framework is more general,
and it provides the decision maker with the different trade-offs
between the objectives
Multi-objective optimization (also known as multi-criteria
optimization) aims to find a set of vectors which satisfies the given
constraints and optimizes a number of objective functions.
3 EE6701 Evolutionary Computation Chapter 1
Example:
Two objective functions, f_1 and f_2, to be minimized are given as

f_1(x_1, ..., x_8) = 1 − exp( −Σ_{i=1}^{8} (x_i − 1/√8)² )
f_2(x_1, ..., x_8) = 1 − exp( −Σ_{i=1}^{8} (x_i + 1/√8)² )

where −2 ≤ x_i ≤ 2. The Pareto optimal front is all the points on the line defined by

x_1 = x_2 = ... = x_8 = x,  −1/√8 ≤ x ≤ 1/√8

[Figure: the trade-off curve in the (f_1, f_2) plane and the unfeasible region]
4 EE6701 Evolutionary Computation Chapter 1
Formal Definitions
A multi-objective optimization problem (MOP) can be written as

min F(x) = [f_1(x), f_2(x), ..., f_M(x)]^T
s.t. g(x) = [g_1(x), g_2(x), ..., g_J(x)] ≤ 0
     h(x) = [h_1(x), h_2(x), ..., h_K(x)] = 0
     x_l ≤ x ≤ x_u

where x = (x_1, x_2, ..., x_n) is the n-dimensional decision variable vector,
M is the number of objectives,
g and h are the inequality and equality constraints,
x_l and x_u are respectively the lower and upper bounds for each decision
variable.
5 EE6701 Evolutionary Computation Chapter 1
For an MOP, the number of objectives M must be greater than unity
Based on the constraints, an MOP can be classified into one of the
following classes

g(x)       h(x)       Bounds           Type of MOP
none       none       none             Unconstrained MOP
none       none       c_1 ≤ x ≤ c_2    Bound constrained MOP
g(x) ≤ 0   none       -                Inequality constrained MOP
none       h(x) = 0   -                Equality constrained MOP

where c_1 and c_2 are some constants.
For the sake of simplicity, we use Ω to denote the feasible decision
space, i.e., the region of the decision space which satisfies the
constraints
By using this convention, an MOP can be rewritten as

min_{x ∈ Ω} F(x) = [f_1(x), f_2(x), ..., f_M(x)]^T
6 EE6701 Evolutionary Computation Chapter 1
Total-order and Partial-order
Order theory provides a formal framework for describing statements
such as "this is less than that" or "this precedes that"
A relation ≼ is a total order on a set S if the following properties hold
Reflexivity: a ≼ a for all a ∈ S
Anti-symmetry: a ≼ b and b ≼ a implies a = b
Transitivity: a ≼ b and b ≼ c implies a ≼ c
Comparability: for any a, b ∈ S, either a ≼ b or b ≼ a
For a partially ordered set, the comparability property does not hold
In single-objective optimization, the feasible set is totally ordered
according to the objective function f
In multi-objective optimization, the feasible set is only partially ordered
according to the objective function set f.

7 EE6701 Evolutionary Computation Chapter 1
Due to the partial ordering in an MOP, a multitude of trade-off solutions
exists in the objective space
This is one of the major differences between an MOP and an SOP
Dominance relation
Definition
A solution x_1 is said to dominate another solution x_2 (written
x_1 ≼ x_2) if and only if both of the following conditions are true:
The solution x_1 is no worse than x_2 in all objectives
The solution x_1 is strictly better than x_2 in at least one objective
Mathematically, x_1 ≼ x_2 if and only if

f_i(x_1) ≤ f_i(x_2) for all i ∈ {1, ..., M}, and
∃ j ∈ {1, ..., M} such that f_j(x_1) < f_j(x_2)
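The two dominance conditions translate directly into code. A minimal Python sketch, assuming minimization and objective vectors of equal length (the function name is illustrative):

```python
def dominates(f1, f2):
    """True if objective vector f1 dominates f2 under minimization:
    no worse in every objective, strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f1, f2))
    strictly_better = any(a < b for a, b in zip(f1, f2))
    return no_worse and strictly_better
```

For example, `dominates([1, 2], [2, 2])` holds, while `dominates([1, 2], [2, 1])` fails because the two vectors trade off against each other.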
8 EE6701 Evolutionary Computation Chapter 1
Non-dominated Set
Given a set of solutions, we can perform all possible pairwise
comparisons and find the solutions which are not dominated by any
solution in the set
Definition - Non-Dominated Set
Among a set of solutions P, the non-dominated set P' consists of the
solutions that are not dominated by any member of P
Definition - Pareto Optimal Set (in decision space)
The set of solutions that are not dominated by any solution in the
entire feasible decision space
Definition - Pareto Optimal Front (in objective space)
The image of the Pareto optimal set in the objective space, i.e., the
set of non-dominated objective vectors
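The pairwise-comparison procedure described above can be sketched as a naive O(n²) pass over the set (assuming minimization; names are illustrative):

```python
def non_dominated(points):
    """Return the subset of objective vectors not dominated by any
    other vector in the set (minimization), via pairwise comparison."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

Faster non-dominated sorting schemes exist, but the quadratic pass is the direct reading of the definition.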
9 EE6701 Evolutionary Computation Chapter 1
Classical Approaches
Generating Pareto optimal solutions plays an important role in multi-
objective optimization. The term vector optimization is sometimes
used to denote the problem of identifying the Pareto optimal set
Classical approaches often solve the MOP by scalarization.
Scalarization means converting the MOP into a single SOP, or a family
of SOPs, with a real-valued objective function
Since we are interested in generating Pareto optimal solutions (POS),
only a posteriori methods are discussed in this module
A Posteriori Methods
Generate the Pareto optimal set (or a part of it)
Present it to the decision maker (DM)
Let the DM select one
10 EE6701 Evolutionary Computation Chapter 1
Weighted Sum Method
This method scalarizes MOP into SOP by multiplying each objective
with a predefined weight
min Σ_{i=1}^{M} w_i f_i(x)

where w_i is the weight of the i-th objective function. The weight
coefficients w_i must be real and positive. It is usual practice to
normalize the weights such that Σ_{i=1}^{M} w_i = 1
To use the WSM as an a posteriori method, a set of weight vectors has to be
specified. The correlation and some nonlinear effects between the weights
and the MOP are not easy to understand
The weighted sum method is relatively simple and easy to use
Evenly distributed weight vectors do not necessarily produce an evenly
distributed representation of the Pareto optimal set. When the MOP is
non-convex, some of the Pareto optimal solutions may fail to be found
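The a posteriori use of the weighted sum method can be sketched as follows: each weight vector defines one SOP whose minimizer over a candidate set approximates one Pareto optimal point. Function and variable names are illustrative, and the candidate grid stands in for a real single-objective optimizer:

```python
def weighted_sum_solutions(objectives, candidates, weight_vectors):
    """For each weight vector, solve one SOP: pick the candidate that
    minimizes the weighted sum of the objective values."""
    solutions = []
    for w in weight_vectors:
        best = min(candidates,
                   key=lambda x: sum(wi * f(x) for wi, f in zip(w, objectives)))
        solutions.append(best)
    return solutions

# Two convex objectives: f1(x) = x^2, f2(x) = (x - 2)^2
f1 = lambda x: x * x
f2 = lambda x: (x - 2) ** 2
grid = [i / 100 for i in range(-100, 301)]
sols = weighted_sum_solutions([f1, f2], grid,
                              [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)])
```

Sweeping the weights traces the trade-off between the minima at x = 0 and x = 2; for this convex pair, evenly spaced weights recover well-spaced Pareto points, which is exactly what fails on non-convex fronts.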



11 EE6701 Evolutionary Computation Chapter 1
Multi-objective Evolutionary Algorithms
(MOEA)
Evolutionary Multi-objective Optimization (EMO) is the application of
evolutionary algorithms to solving multi-objective optimization problems
MOEAs are used to approximate the Pareto-optimal front of multi-
objective problems
Many MOEAs have been proposed in the literature
Multi-objective Genetic Algorithm (MOGA)
Non-dominated Sorting Genetic Algorithm II (NSGA-II)
Incrementing Multi-objective Evolutionary Algorithm (IMOEA)
Multiobjective Evolutionary Algorithm based on Decomposition
(MOEA/D)
These algorithms differ mainly in, e.g., the fitness assignment
scheme, mating selection scheme, environmental selection scheme,
diversity preservation, sharing and elitism

12 EE6701 Evolutionary Computation Chapter 1
Common Terms Used in EA

13 EE6701 Evolutionary Computation Chapter 1
[Figure: one cycle of an EA on binary-coded designs — an initial/random
population of coded designs is decoded and evaluated on a fitness
landscape (e.g., f(P1) = 5%, f(P2) = 60%, f(P3) = 35%); selection,
crossover and mutation then produce the new generation, ending with
the final optimized designs]
14 EE6701 Evolutionary Computation Chapter 1
Multi-objective Evolutionary Algorithms
[Flowchart of an MOEA:]
Initialization: generate the initial solutions x_1, x_2, ..., x_N
Fitness Evaluation: compute f(x_1), f(x_2), ..., f(x_N)
Mating Selection: how to select parent solutions
Recombination/Crossover: how to combine parent solutions to create
offspring solutions
Mutation: perturb the generated offspring solutions
Fitness Evaluation: evaluate the offspring f(x_1), f(x_2), ..., f(x_N)
Selection: select the individuals to survive
Termination: check the termination condition
15 EE6701 Evolutionary Computation Chapter 1
To achieve a good approximation of the POF, the output of any MOEA
should have the following characteristics: (1) sufficient proximity to
the exact Pareto-optimal front (convergence); (2) good distribution
along the Pareto-optimal front (diversity)
Concept of Sharing and Elitism
To achieve the goals of convergence and diversity, the concepts of
sharing and elitism are introduced
Sharing:
To avoid non-dominated solutions clustering on some portions of the
Pareto-optimal front, a penalty is applied to solutions in crowded
regions
Example: a and b will have smaller fitness compared to c
Elitism:
The elites (best solutions or non-dominated solutions) of the
population should be given the opportunity to be carried over directly
to the next generation
[Figure: three non-dominated solutions in the (f_1, f_2) plane — a and
b lie close together while c is isolated]
16 EE6701 Evolutionary Computation Chapter 1
EMO needs to address several issues
Fitness assignment, preference
Good spread, uniform distribution, minimum proximity
Many objectives, constraints
Noise, dynamic landscape, robust optimization
[Figure: Pareto front/non-dominated solutions and the unfeasible region
in the (f_1, f_2) plane (Fonseca and Fleming, 1995)]
EA is a powerful tool for solving MO optimization problems
Population-based, and capable of searching for the global trade-off
Robust and applicable to a wide range of problems
Capable of handling discontinuous and multi-dimensional problems
17 EE6701 Evolutionary Computation Chapter 1
The Pareto ranking scheme assigns the same smallest cost to all
non-dominated individuals, while the dominated individuals are ranked
according to how many individuals in the population dominate them
The rank of an individual x in a population can be given by
rank(F_x) = 1 + q_x, where q_x is the number of individuals dominating
the individual F_x in the objective domain.
[Figure: Pareto optimal ranking in the (f_1, f_2) plane —
non-dominated points receive rank 1; dominated points receive
ranks 2, 2, 4 and 5]
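The formula rank(F_x) = 1 + q_x can be computed with a double loop over the population. A minimal sketch, assuming minimization:

```python
def pareto_rank(points):
    """Pareto ranking: each objective vector gets 1 + the number of
    population members dominating it; non-dominated vectors get rank 1."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [1 + sum(dominates(q, p) for q in points) for p in points]
```

Note how all non-dominated points share rank 1, while a heavily dominated point like (3, 3) below is pushed to a high rank by its three dominators.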
18 EE6701 Evolutionary Computation Chapter 1
Goal (MOGA)
Desired value for each objective
Can be used to specify practical
design specification/requirements
Individuals that satisfy the goal
setting have a lower (better) rank
Tan, K. C., Khor, E. F., Lee, T. H. and Sathikannan, R., 'An evolutionary
algorithm with advanced goal and priority specification for multi-
objective optimization', Journal of Artificial Intelligence Research, vol. 18,
pp. 183-215, 2003.
In design optimization, a set of specifications or design requirements is
often given a priori
19 EE6701 Evolutionary Computation Chapter 1
MOEA minimization with a feasible goal setting
[Figures: population at Gen = 5 and Gen = 70]
20 EE6701 Evolutionary Computation Chapter 1
[Figures: unfeasible goal setting; feasible but extreme goal setting]
21 EE6701 Evolutionary Computation Chapter 1
Logical OR and AND connectives among goals
[Figures: goal settings (G1, G2, G3, G4) and (G1, G2, G3) combined via
logical OR and AND connectives]
22 EE6701 Evolutionary Computation Chapter 1
Priority/Preference
Assign relative importance of each
objective for practical applications
Among strings of equal rank, priority can
be used to determine superiority
Example: Stability in control system design
Soft/hard Objectives
Soft: Always considered in the evolution
Hard: Considered while goal is not satisfied
Example: SS error and actuator saturation
23 EE6701 Evolutionary Computation Chapter 1

Dynamic Population
Compared with SO problems, MO optimization often requires a
larger population size in order to cover the trade-off surface
The population size can be changed according to the population
distribution at each generation
Generally, we can start with a small population for the initial search,
and subsequently increase or decrease the population size according to
the current Pareto front in the evolution process
The additional individuals can be obtained via local fine-tuning,
generating good individuals to fill up gaps or discontinuities in the
current Pareto front
Tan, K. C., Lee, T. H. and Khor, E. F., 'Evolutionary algorithm with
dynamic population size and local exploration for multiobjective
optimization', IEEE Transactions on Evolutionary Computation, vol.
5, no. 6, pp. 565-588, 2001.
24 EE6701 Evolutionary Computation Chapter 1
Local Perturbation
25 EE6701 Evolutionary Computation Chapter 1
[Figures: population size versus generation; population distribution]
26 EE6701 Evolutionary Computation Chapter 1
MO Handling Elements
Handling elements provide the basic usefulness of finding the
non-dominated individuals
Min-Max, Sub-Pop and others have received less interest compared
to Pareto, Weights, Goals and Pref
Weights attracted significant attention from 1985-2000
The popularity of Pareto continues to grow significantly
[Figure: cumulative number of methods per year (1961-2002) for the
handling elements Weights, Min-Max, Pareto, Goals, Pref, Gene,
Sub-Pop, Fuzzy and Others]
27 EE6701 Evolutionary Computation Chapter 1
Supporting Elements
Supporting elements play an indirect role of supporting the algorithm
to achieve better performance
They were developed more recently than the MO handling elements
The Dist feature and the Elitism/Archive feature have been
incorporated in many algorithms
Other features, such as those for noise and many-objective problems,
are gaining more attention recently
[Figure: cumulative number of methods per year (1961-2002) for the
supporting elements Dist, Mat, Sub-Reg, Ext-Pop, Elitism and A-Evo]
28 EE6701 Evolutionary Computation Chapter 1
Generational Distance (GD) (Veldhuizen, 1999)
Measures how far the evolved solution set is from
the true Pareto front.
Spacing (S) (Schott, 1995)
Measures how evenly the evolved
solutions distribute themselves.
Maximum Spread (MS) (Zitzler, 1999)
Measures how well the true Pareto front
is covered by the evolved solution set.
Hyper-Volume Ratio (HVR) (Veldhuizen, 1999)
Calculates the volume covered by the evolved solutions.
[Figure: non-dominated solutions and the Pareto frontier in a
two-objective minimization problem]
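The generational distance can be sketched as below. One common variant averages the per-point nearest distances (the original definition uses a root-mean form, so treat this as an illustrative simplification):

```python
import math

def generational_distance(evolved, true_front):
    """Average Euclidean distance from each evolved objective vector
    to its nearest neighbour on a sampled true Pareto front."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return (sum(min(dist(p, q) for q in true_front) for p in evolved)
            / len(evolved))
```

A GD of zero means every evolved point lies exactly on the sampled front; GD says nothing about how well the front is covered.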
29 EE6701 Evolutionary Computation Chapter 1
Non-dominance Ratio (NR) (Goh and Tan, 2009)
Compares the quality of the solution sets from various algorithms
Measures the ratio of non-dominated solutions contributed by a
particular solution set to the non-dominated solutions provided by
all solution sets
[Figure: two solution sets in the objective space with NR values of
0.8 and 0.2]
30 EE6701 Evolutionary Computation Chapter 1
Inverted Generational Distance (IGD) (M. A. Villalobos-Arias, 2005)
Measures the proximity as well as the diversity between the obtained
Pareto front and the Pareto optimal front
The Euclidean distance is measured from each point of the Pareto
optimal (reference) set to its nearest obtained solution

IGD = (Σ_{i=1}^{n} d_i) / n

where n is the number of points in the reference set and d_i is the
Euclidean distance from the i-th reference point to its nearest
obtained solution
[Figures: Pareto optimal front and evolved solutions in the (F1, F2)
plane, illustrating the distances averaged by IGD]
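The IGD computation mirrors GD with the roles of the two sets swapped, which is what lets it capture diversity as well as proximity. A minimal sketch:

```python
import math

def inverted_generational_distance(evolved, reference_front):
    """Average distance from each reference (true-front) point to its
    nearest evolved solution; gaps in the evolved set raise the score."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return (sum(min(dist(r, p) for p in evolved) for r in reference_front)
            / len(reference_front))
```

If the evolved set covers only one end of the front, the uncovered reference points contribute large distances, so IGD penalizes poor spread even when every evolved point is optimal.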
31 EE6701 Evolutionary Computation Chapter 1
Test Problem and Features
1 ZDT1: Pareto front is convex.
2 ZDT2: Pareto front is non-convex.
3 ZDT3: Pareto front consists of several noncontiguous convex parts.
4 ZDT4: Pareto front is highly multi-modal; there are 21^9 local
Pareto fronts.
5 ZDT6: The Pareto optimal solutions are non-uniformly distributed
along the global Pareto front. The density of the solutions is low
near the Pareto front and high away from the front.
[Figures: Pareto fronts of ZDT1, ZDT2, ZDT3, ZDT4 and ZDT6]
32 EE6701 Evolutionary Computation Chapter 1
Test Problem and Features
6 FON: Pareto front is non-convex.
7 KUR: Pareto front consists of several noncontiguous convex parts.
8 POL: Pareto front and Pareto optimal solutions consist of several
noncontiguous convex parts.
9 TLK: Noisy landscape.
10 TLK2: Non-stationary environment.
[Figures: Pareto fronts of FON, KUR, POL, TLK and TLK2]
33 EE6701 Evolutionary Computation Chapter 1
Problem (variables) and Remarks
1 ZDT1 (100): Convex Pareto front
2 ZDT2 (100): Concave Pareto front
3 ZDT3 (100): Non-continuous concave Pareto front
4 ZDT4 (10): Convex Pareto front and high multi-modality
5 ZDT6 (100): Non-uniform concave Pareto front
6 DTLZ1 (20): High dimensionality and linear Pareto optimal front
7 DTLZ2 (20): High dimensionality and spherical Pareto optimal front
8 DTLZ3 (20): High dimensionality and high multi-modality
[Figures: Pareto fronts of ZDT1, ZDT2, ZDT3 and the three-objective
DTLZ1, DTLZ2 and DTLZ3]
34 EE6701 Evolutionary Computation Chapter 1
1. Increasing Dimensionality
Tan, K. C., Lee, T. H. and Khor, E. F., 'Evolutionary algorithm with
dynamic population size and local exploration for multiobjective
optimization', IEEE Transactions on Evolutionary Computation,
vol. 5, no. 6, pp. 565-588, 2001.
Liu, D. S., Tan, K. C., Goh, C. K. and Ho, W. K., 'A multiobjective
memetic algorithm based on particle swarm optimization', IEEE
Trans on Systems, Man and Cybernetics: Part B (Cybernetics),
vol. 37, no. 1, pp. 42-50, 2007.
2. Expensive Function Evaluations
Tan, K. C., Tay, A. and Cai, J., 'Design and implementation of a
distributed evolutionary computing software', IEEE Trans on
Systems, Man and Cybernetics: Part C, vol. 33, issue 3, pp. 325-
338, 2003.
Tan, K. C., Yang, Y. J. and Goh, C. K., 'A distributed cooperative
coevolutionary algorithm for multiobjective optimization', IEEE
Transactions on Evolutionary Computation, vol. 10, issue 5, pp.
527- 549, 2006.
35 EE6701 Evolutionary Computation Chapter 1
Decompose a complex problem into smaller problems via
co-evolving subpopulations cooperatively

A divide-and-conquer strategy
Each subpopulation evolves a
different decision variable
Fitness is dependent on the
collaboration among the
subpopulations
[Figure: cooperative co-evolution with subpopulations 1..m, one per
decision variable — individuals in subpopulation i are combined with
representatives from the other subpopulations into complete solutions,
which are evaluated, used to update the archive, and ranked to assign
fitness]
36 EE6701 Evolutionary Computation Chapter 1
Cooperation Methods
The model of cooperation among the subpopulations is an
important issue in cooperative co-evolution
The simplest approach is to select the best individual as the representative.
However, this approach tends to perform poorly for problems with high
parameter inter-dependency
Alternatively, a random individual can be selected as the representative.
However, this often results in a slower convergence rate
Four different methods have been examined.
C1: The best individual is selected as the representative
C2: The best and a random individual are selected as representatives.
The better solution resulting from the two cooperations is retained
C3: The best two individuals are selected as representatives. The better
solution resulting from the cooperations with the two representatives
is retained
C4: The best two individuals are selected as the representatives.
Cooperation is performed with either one of the representatives
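The representative-selection part of C1-C4 can be sketched as below, assuming a scalar fitness where lower is better; how the resulting collaborations are retained (the difference between C3 and C4) is not shown:

```python
import random

def pick_representatives(subpop, fitness, method):
    """Select representatives from a subpopulation for cooperation.
    C1: best only; C2: best plus one random individual;
    C3/C4: the best two individuals."""
    ranked = sorted(subpop, key=fitness)
    if method == "C1":
        return [ranked[0]]
    if method == "C2":
        return [ranked[0], random.choice(subpop)]
    if method in ("C3", "C4"):
        return ranked[:2]
    raise ValueError(method)
```

C1 is the greediest choice and C2-C4 progressively trade greediness for diversity, which matches the behavior reported later for problems with high parameter inter-dependency.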
37 EE6701 Evolutionary Computation Chapter 1
A Distributed CCEA
Parallelization Strategy
The subpopulations are partitioned
into a number of groups and
assigned to peer computers
Indirect cooperation is
achieved through the
exchange of archives and
representatives between the peers
and a central server
The peers are synchronized at
fixed intervals to ensure better
cooperation
[Figure: six subpopulations partitioned into three peers, which
exchange information through a central server]
38 EE6701 Evolutionary Computation Chapter 1
Built upon the Java 2 Platform,
Enterprise Edition (J2EE)
Exploits the inherent parallelism of
evolutionary algorithms
Incorporates the features of robustness,
security, and workload balancing
The DCCEA is designed and embedded in a distributed computing
framework named Paladin-DEC.
39 EE6701 Evolutionary Computation Chapter 1
Comparative Results
Generational Distance
[Box plots: generational distance of CCEA, PAES, PESA, NSGAII, SPEA2
and IMOEA on ZDT4, ZDT6, FON, KUR, DTLZ2 and DTLZ3]
Observation:
CCEA escapes from the local optima of
ZDT4 and DTLZ3
CCEA scales well for high-dimensional
MO problems
CCEA performs less well for KUR due to
the high parameter inter-dependency
40 EE6701 Evolutionary Computation Chapter 1
Comparative Results
Spacing
[Box plots: spacing of CCEA, PAES, PESA, NSGAII, SPEA2 and IMOEA on
ZDT4, ZDT6, FON, KUR, DTLZ2 and DTLZ3]
Observation:
CCEA generally shows good
performance in spacing
CCEA handles the non-uniform
distribution of ZDT6 well
41 EE6701 Evolutionary Computation Chapter 1
Comparative Results
Maximum Spread
[Box plots: maximum spread of CCEA, PAES, PESA, NSGAII, SPEA2 and
IMOEA on ZDT4, ZDT6, FON, KUR, DTLZ2 and DTLZ3]
Observation:
CCEA performs competitively
compared to other MOEAs
42 EE6701 Evolutionary Computation Chapter 1
Behaviors of CCEA
ZDT4
Oscillations of x_1 help to
sample the solutions along the
Pareto front uniformly
Variables x_2 to x_10 converge to
the optimal location gradually
FON
All the variables oscillate and
converge at a similar pace
This behavior is related to the
high inter-dependency among
the parameters
[Figures: traces of x_1 and x_2-x_10 versus generation number for
ZDT4, and of x_1-x_4 and x_5-x_8 for FON]
43 EE6701 Evolutionary Computation Chapter 1
Effects of the Cooperation
C1 is the greediest method. It performs well for problems with low parameter
inter-dependency, but gives poor results for those with high inter-dependency
C4 is the least greedy approach. It performs well for problems with high
parameter inter-dependency, but gives poor results for those with low
inter-dependency
C2 and C3 provide a balance between greedy search and diversity maintenance,
achieving generally good and robust performance

Median GD for CCEA
Problem  C1        C2        C3        C4
ZDT1     4.13E-04  4.24E-04  4.04E-04  3.14E-01
ZDT2     4.05E-04  4.81E-04  6.35E-04  5.09E-01
ZDT3     1.45E-03  1.41E-03  1.27E-03  3.64E-01
ZDT4     6.98E-05  7.08E-05  1.48E-04  1.26E+00
ZDT5     1.27E-06  1.27E-06  1.26E-06  2.90E+00
ZDT6     4.89E-07  5.00E-07  4.86E-07  1.43E+00
FON      2.35E-02  2.32E-02  2.28E-02  2.13E-02
KUR      3.55E-02  2.31E-02  2.35E-02  1.76E-02
TLK      4.80E-01  5.23E-01  4.90E-01  5.34E-01
DTLZ2    8.57E-04  4.07E-01  2.08E-02  5.44E-01
DTLZ3    1.82E+00  2.24E+00  1.65E+00  5.82E+02

Median S for CCEA
Problem  C1      C2      C3      C4
ZDT1     0.1757  0.1771  0.1667  0.8240
ZDT2     0.1613  0.1553  0.1600  0.9328
ZDT3     0.2815  0.2867  0.2733  0.9952
ZDT4     0.1333  0.1436  0.1409  1.9624
ZDT5     0.7114  0.7125  0.7125  1.1676
ZDT6     0.1754  0.1822  0.1881  0.8174
FON      0.2687  0.3497  0.5291  0.2676
KUR      0.7919  0.7145  0.7154  0.6817
TLK      1.0905  1.0284  1.0135  1.0772
DTLZ2    0.1245  0.2658  0.1604  0.2929
DTLZ3    0.6638  0.8103  0.7525  0.7201
44 EE6701 Evolutionary Computation Chapter 1
Configurations
PC Configuration and CPU (MHz) / RAM (MB) — 11 PCs in a LAN:
server   PIV 1600 / 512
peer 1   PIII 800 / 512
peer 2   PIII 800 / 512
peer 3   PIII 800 / 256
peer 4   PIII 933 / 384
peer 5   PIII 933 / 128
peer 6   PIV 1300 / 128
peer 7   PIV 1300 / 128
peer 8   PIII 933 / 512
peer 9   PIII 933 / 512
peer 10  PIII 933 / 256

Populations: subpopulation size 20; archive size 100
Chromosome length: 30 bits for each variable
Selection: tournament selection
Crossover operator: uniform crossover
Crossover rate: 0.8
Mutation operator: bit-flip mutation
Mutation rate: 2/L, where L is the chromosome length
Number of evaluations: 120,000
Exchange interval: 5 generations
Sync. interval: 10 generations
Configurations
45 EE6701 Evolutionary Computation Chapter 1
[Figures: generational distance, spacing, maximum spread and
hyper-volume ratio]
46 EE6701 Evolutionary Computation Chapter 1
Observation
Effective in reducing simulation runtime without sacrificing performance
The achievable speedup is more significant for large problems
The increase in communication cost counteracts the reduction in
computation cost as the number of peers increases

Speedup
Number of peers  ZDT1  ZDT2  ZDT3  ZDT4  ZDT6
2                1.52  1.70  1.47  1.23  1.01
3                2.01  1.99  1.88  1.47  1.11
4                2.25  2.21  1.95  1.50  1.14
5                2.48  2.69  2.15  1.56  1.14
6                2.81  3.03  2.83  1.70  1.29
7                2.87  3.32  2.77  1.88  1.25
8                3.38  3.27  2.92  1.82  1.26
9                3.46  3.36  2.96  1.83  1.26
10               3.46  3.18  2.79  1.82  1.25

Runtime
[Figure: run time (s) versus number of peers for ZDT1, ZDT2, ZDT3,
ZDT4 and ZDT6]
47 EE6701 Evolutionary Computation Chapter 1
3. Noise and Uncertainty
Goh, C. K., and Tan, K. C., 'An investigation on noisy
environments in evolutionary multi-objective optimization', IEEE
Transactions on Evolutionary Computation, vol. 11, no. 3, pp.
354-381, 2007.
Tan, K. C. and Goh, C. K., 'A competitive-cooperative
coevolutionary paradigm for dynamic multi-objective
optimization', IEEE Transactions on Evolutionary Computation,
vol. 13, no. 1, pp. 103-127, 2009.
4. Estimation of Distribution Algorithms
Shim, V. A., Tan, K. C., Chia, J.Y. and Mamun, A. Al., 'Multi-
objective optimization with estimation of distribution algorithm in
noisy environment', Evolutionary Computation (MIT Press),
2012.
Shim, V. A., Tan, K. C. and Cheong, C. Y., 'A hybrid estimation
of distribution algorithm with decomposition for solving the
multiobjective multiple traveling salesman problem', IEEE
Transactions on Systems, Man, and Cybernetic: Part C, vol. 42,
no. 5, pp. 682-691, Sep 2012.
48 EE6701 Evolutionary Computation Chapter 1
Data encountered in practical applications may often be
influenced by noise
Noise may arise from different sources, e.g., sensor measurement
errors, incomplete simulation of computational models etc
The problem with noise is that it does not simply add a bias or offset
to the objective function
When a system is presented with noise, each evaluation of the same
solution may result in different objective function values
Noise encountered in EMO optimization is often modeled as a
random perturbation to the objective functions
Performance is greatly influenced by the noise model adopted and
the level of noise intensity encountered
49 EE6701 Evolutionary Computation Chapter 1
Effect of Noise
Noise may change the way we perceive a solution
E.g., a good solution may be perceived
as a bad solution and vice versa. The
problem is worse for EMO.
Non-dominated solutions in noisy
optimization may be inferior or,
worse, infeasible solutions.
[Figure: two objective-space plots in the (f_1, f_2) plane — noise
displaces solutions A and B to their perceived locations A' and B']
50 EE6701 Evolutionary Computation Chapter 1
Existing approaches for handling noise in SO and MO
problems can be classified as follows (Jin and Branke, 2005)
Explicit Averaging (Singh, 2003; Bui et al, 2005; Basseur and
Zitzler, 2006)
Implicit Averaging (Buche et al, 2002)
Selection Modification (Hughes, 2001; Teich, 2001)
Heuristic Approach (Goh and Tan, 2007)
Design considerations for handling noise in EMO
To minimize error in the selection and elitism process
To improve the efficiency of the noise-handling mechanism
To handle the final set of solutions in the archive for decision-making
51 EE6701 Evolutionary Computation Chapter 1
Explicit Averaging
Each solution is evaluated a number of times and the results are
averaged to compute the expected objective value
By increasing the number of samples, the degree of
uncertainty in the optimization is reduced
[Figure: a solution in the (f_1, f_2) plane surrounded by its scattered
noisy samples]
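Explicit averaging amounts to wrapping the noisy evaluation in a resampling loop. A minimal sketch, where `noisy_eval` stands for any user-supplied function returning a list of noisy objective values:

```python
def averaged_objectives(solution, noisy_eval, samples=10):
    """Evaluate the same solution several times under noise and average
    each objective to estimate its expected value."""
    evals = [noisy_eval(solution) for _ in range(samples)]
    m = len(evals[0])
    return [sum(e[i] for e in evals) / samples for i in range(m)]
```

The trade-off is cost: halving the standard error of the estimate requires roughly four times as many samples.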
52 EE6701 Evolutionary Computation Chapter 1
Implicit Averaging
A large population size is used
There may be many similar solutions, so the influence of noise is
compensated as the search revisits the same region repeatedly
This approach can be computationally expensive
[Figure: decision-space plots of a small population and a large
population]
53 EE6701 Evolutionary Computation Chapter 1
Selection Modification
The ranking and selection procedures are modified to account for
uncertainties, such that solution A is judged to be better than
solution B only if it satisfies certain criteria
Examples include possibilistic dominance, etc.
[Figure: deterministic dominance versus possibilistic dominance in the
(f_1, f_2) plane]
54 EE6701 Evolutionary Computation Chapter 1
Preliminary Investigation
Basic Algorithm
Employs a fixed-size population and an archive.
When the predetermined archive size is reached, a recurrent
truncation process based on niche count is used.
Binary tournament selection is used for the combined archive and
evolving population.

Additive Noise Model

F_i(x) = f_i(x) + N(0, σ²)

σ² is represented as a percentage of the maximum of the i-th objective
in the true Pareto front.

Parameter Settings
Chromosome: binary coding; 15 bits per decision variable
Population: population size 100; archive (or secondary population)
size 100
Selection: binary tournament selection
Crossover operator: uniform crossover
Crossover rate: 0.8
Mutation operator: bit-flip mutation
Mutation rate: 1/chromosome_length for FON and KUR;
1/bit_number_per_variable for ZDT1, ZDT4 and ZDT6
Ranking scheme: Pareto ranking
Diversity operator: niche count with radius 0.01 in the normalized
objective space
Evaluation number: 50,000
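The additive noise model above can be sketched as a wrapper around a deterministic objective function. Here `sigma` is an absolute standard deviation; converting the slide's percentage noise levels into a per-objective sigma is left out:

```python
import random

def make_noisy(f, sigma):
    """Wrap a deterministic multi-objective function f with additive
    zero-mean Gaussian noise of standard deviation sigma per objective."""
    def noisy(x):
        return [fi + random.gauss(0.0, sigma) for fi in f(x)]
    return noisy
```

With `sigma = 0` the wrapper reduces to the original function, which makes it easy to run the same experiment at increasing noise levels.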
55 EE6701 Evolutionary Computation Chapter 1
Preliminary Investigation
Noise impact on convergence
[Figures: GD (log scale) versus generation for ZDT1, ZDT4, ZDT6, FON
and KUR at noise levels of 0% (no noise), 0.2%, 0.5%, 1.0%, 5.0%,
10.0% and 20.0%]
56 EE6701 Evolutionary Computation Chapter 1
Preliminary Investigation
Noise impact on diversity
[Figures: MS versus generation for ZDT1, ZDT4, ZDT6, FON and KUR at
noise levels from 0% to 20%]
57 EE6701 Evolutionary Computation Chapter 1
Preliminary Investigation
Noise impact on archiving
Although the MOEA is able to search
for better solutions, the noise-
enhanced solutions in the archive
keep the newly found non-dominated
solutions out, thus reducing the
number of archived solutions.
[Figures: archive size versus noise level (0.0% to 20.0%) for ZDT1,
ZDT4, ZDT6, FON and KUR]
58 EE6701 Evolutionary Computation Chapter 1
Preliminary Investigation
Noise impact on decision-making in the selection process
[Figures: decision-error ratio versus generation for ZDT1, ZDT4, ZDT6,
FON and KUR at noise levels from 0% to 20%]
59 EE6701 Evolutionary Computation Chapter 1
Preliminary Investigation
[Figure: search range of an arbitrarily selected variable x vs. generation (0-500) for ZDT1 and FON at 0% and 20% noise levels]
The BMOEA is capable of narrowing down the search range for better evolutionary search optimization in a noise-free environment.
On the other hand, the mean location of individuals remains relatively the same, and the range is concentrated near the optimal region despite the presence of noise, which could be due to correct decision-making in the selection process.
Trace of the search range for an arbitrarily selected parameter
60 EE6701 Evolutionary Computation Chapter 1
Better decision-making at early generations: introduce a momentum
term to accelerate movement in the direction of improvement while
restricting movement otherwise
[Figure: decision-making error — decision-error ratio vs. generation (0-200)]
Experiential Learning Directed Perturbation
Δp_j(t) = Δp_j(t) + α · Δp_j(t−1)

p_j(t+1) = p_j(t) + α · Δp_j(t),   if Δ_min < Δp_j(t+1) < Δ_max
p_j(t+1) = bit-flip,               otherwise
The variation increases in magnitude in the same direction of change.
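The momentum update above can be sketched in code; the value of α, the Δ bounds, and the magnitude of the undirected fallback move are illustrative assumptions, and the bit-flip fallback is approximated here by a small random perturbation on a real-valued gene.

```python
import random

# Sketch of the ELDP momentum update (assumed parameter values).
def eldp_step(p, delta_prev, delta_now, alpha=0.5, d_min=-1.0, d_max=1.0):
    """Return (new gene value, accumulated change) for one decision variable."""
    # Momentum: reinforce the current change with a fraction of the previous one,
    # so variation grows in magnitude along a consistent direction of improvement.
    delta = delta_now + alpha * delta_prev
    if d_min < delta < d_max:
        return p + delta, delta            # directed perturbation
    # Otherwise fall back to an undirected random move (bit-flip analogue).
    return p + random.uniform(-0.1, 0.1), 0.0
```

For example, a current change of +0.1 following a previous change of +0.2 is accelerated to +0.2, moving the gene further in the improving direction.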
61 EE6701 Evolutionary Computation Chapter 1
Gene Adaptation Selection Strategy
Mean location of the search range remains relatively invariant: adjust the
search range according to the distribution of solutions
This has the advantage of concentrating resources on a smaller region; by
re-evaluating similar solutions, it also gives some form of implicit averaging
Convergence model:
a_j = meanbd_j − w · meanbd_j
b_j = meanbd_j + w · meanbd_j

Divergence model:
a_j = lowbd_j − w · meanbd_j
b_j = uppbd_j + w · meanbd_j

[Figure: activation of gene adaptation — the current search region [a_j, b_j] of variable x_j is adjusted within the defined search region [lowbd_j, uppbd_j] around meanbd_j]
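A minimal sketch of the GASS range adjustment; the mapping of the two models to the formulas and the width factor w are assumptions based on the slide notation (meanbd_j, lowbd_j, uppbd_j).

```python
# Sketch of GASS: adapt the search range [a, b] of one decision variable.
def gass_range(lowbd, uppbd, meanbd, w, model):
    if model == "convergence":   # shrink the range around the population mean
        a = meanbd - w * meanbd
        b = meanbd + w * meanbd
    else:                        # divergence: widen towards the outer bounds
        a = lowbd - w * meanbd
        b = uppbd + w * meanbd
    return a, b
```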
62 EE6701 Evolutionary Computation Chapter 1
Possibilistic Archiving
Based on the concept of possibilistic Pareto dominance to
store true non-dominated solutions in the archive
Minimize removal of true non-dominated individuals due to noise
Provide a chance for individuals degraded by noise to survive
[Figure: a solution in the two-objective space (x_1, x_2) with an uncertainty box of side 2L around it]
L is a reflection of the uncertainty level present in the system.
The possibilistic archiving removes a solution only if another solution necessarily dominates it.
The gray region denotes the region dominated by this solution.
Because of uncertainty, the solution can lie anywhere within the box, so any solution in the box may in fact be non-dominated.
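One way to formalize "necessarily dominates" under a box of half-width L is to compare the pessimistic view of one solution against the optimistic view of the other; this particular formalization is an assumption, not the exact rule of the slides.

```python
# Sketch: necessary Pareto dominance under box uncertainty (minimization).
def necessarily_dominates(a, b, L):
    """True if a dominates b even when a is shifted up and b shifted down by L."""
    worst_a = [ai + L for ai in a]   # worst case for a
    best_b = [bi - L for bi in b]    # best case for b
    return (all(wa <= bb for wa, bb in zip(worst_a, best_b))
            and any(wa < bb for wa, bb in zip(worst_a, best_b)))
```

An archived solution is removed only when this test holds, so individuals merely degraded by noise get a chance to survive.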
63 EE6701 Evolutionary Computation Chapter 1
[Flowchart of MOEA-RF: initialize population; then loop — evaluation and Pareto ranking, possibilistic archiving, combine archive and evolving population, binary tournament selection, GASS (convergence or divergence model; adapt genes based on the adopted model), crossover, ELDP, add offspring to population — until the stopping criterion is met; then possibilistic archiving and return the archive]

MOEA-RF
Experiential Learning Directed Perturbation
Gene Adaptation Selection Strategy
Possibilistic Archiving
64 EE6701 Evolutionary Computation Chapter 1
[Figure: GD, MS, and HVR vs. noise level (0%, 5%, 10%, 20%) for MOEA-RF, RMOEA, NTSPEA, MOPSEA, SPEA2, NSGAII, and PAES]
Performance in Noisy Environment
ZDT1 with different noise levels
65 EE6701 Evolutionary Computation Chapter 1
Performance in Noisy Environment
FON with 20% noise
[Figure: GD and MS vs. number of evaluations (0-50,000) for MOEA-RF, RMOEA, NTSPEA, MOPSEA, SPEA2, NSGAII, and PAES]
66 EE6701 Evolutionary Computation Chapter 1
Performance in Noisy Environment
                     MOEA-RF  RMOEA  NTSPEA  MOPSEA  SPEA2  NSGAII  PAES
ZDT1  1st quartile   28       6      5       7       18     17      4
      Median         31       7.5    6       9       21.5   19.5    4.5
      3rd quartile   32       10     8       10      27     23      5
ZDT4  1st quartile   29       5      6       9       12     20      5
      Median         33       7      8       11      26.5   26      6
      3rd quartile   41       8      10      13      29     32      9
ZDT6  1st quartile   82       2      4       3       8      8       2
      Median         85       3      5       4       9      9       4
      3rd quartile   88       5      6       5       11     11      6
FON   1st quartile   9        1      1       1       6      6       1
      Median         11       2      1.5     2       8.5    8.5     2
      3rd quartile   17       2      2       3       12     12      3
KUR   1st quartile   25       6      5       8       23     25      7
      Median         27       8      5.5     9       25     27      9
      3rd quartile   30       9      7       11      28     30      10
Number of non-dominated individuals found for the various problems
(20% noise level)
67 EE6701 Evolutionary Computation Chapter 1
Generation (a) 0, (b) 10, (c) 60, (d) 200, and (e) 350 for ZDT4 with 21^9 local Pareto fronts.

[Figure: population distributions at generations (a)-(e) for the baseline MOEA, the baseline MOEA with ELDP, and the baseline MOEA with GASS]
Effects of the Proposed Features
ELDP and GASS have the advantage of overcoming local optimality for ZDT4
The population distribution converges faster when ELDP is incorporated
68 EE6701 Evolutionary Computation Chapter 1
[Figure: box plots of Generational Distance, Spacing, Maximum Spread, and Hypervolume Ratio for NSGAII, NSGAII-RF, SPEA2, and SPEA2-RF on ZDT4 with 0% noise and with 20% noise]
Effects of ELDP and GASS on
NSGA-II and SPEA2
69 EE6701 Evolutionary Computation Chapter 1
Performance in Noisy Environment
ZDT6 with 20% noise
ZDT6 has a non-uniformly distributed and
discontinuous tradeoff
Most algorithms are unable to find the global
tradeoff; MOEA-RF gives consistently good
performance with a relatively small variance
in the performance metrics
[Figure: box plots of Generational Distance, Spacing, Maximum Spread, and Hypervolume Ratio for MOEA-RF, RMOEA, NTSPEA, MOPSEA, SPEA2, NSGAII, and PAES]
70 EE6701 Evolutionary Computation Chapter 1
Summary
The behaviors of EMO in noisy environments are examined
Three noise-handling features are proposed:
Two heuristics, experiential learning directed perturbation (ELDP) and gene adaptation selection strategy (GASS), are applied to exploit MOEA behaviors for better performance in noisy environments
A possibilistic archiving model reduces the impact of noise on the archive
The proposed features also improve the performance of general MOEAs, e.g., SPEA2 and NSGA-II, to exhibit competitive or better performance in noisy environments
71 EE6701 Evolutionary Computation Chapter 1
Hard Disk Drive Servo Specifications
The control input should not exceed 2 volts due to physical
constraints on the actual actuator
The overshoots and undershoots of the step response should
be kept less than 5% as the R/W head can start to read or
write within 5% of the target
The 5% settling time in the step response should be less than
2 milliseconds
Excellent steady-state accuracy
Robust in terms of disturbance rejection and uncertainty
Tan, K. C., Sathikannan, R., Tan, W.W. and Loh, A. P., 'Evolutionary
design and implementation of a hard disk drive servo control system',
Soft Computing, vol. 11, no. 2, pp. 131-139, 2007.
72 EE6701 Evolutionary Computation Chapter 1
[Figure: measured and identified frequency responses — magnitude (dB) and phase (degrees) vs. frequency (rad/sec)]

HDD Model (sampling frequency of 4 kHz):

G_v(s) = (4.4×10^10 s + 4.87×10^15) / (s^2 (s^2 + 1.45×10^3 s + 1.1×10^8))

x(k+1) = [1  1.664; 0  1] x(k) + [1.384; 1.664] u

Controllers in Discrete Form:

K_p = K_f (z + ff_1)/(z + ff_2)
K_s = K_b (z + fb_1)/(z + fb_2)
73 EE6701 Evolutionary Computation Chapter 1
Time domain and frequency domain design specifications

Customer specifications                              Objective              Goal
1. Stability (closed-loop poles)                     Nr([eig(A_clp)] > 0)   0
Frequency Domain
2. Closed-loop sensitivity / disturbance rejection   σ[S(e^jω)]             ≤ 1
3. Plant uncertainty                                 σ[T(e^jω)]             ≤ 1
4. Actuator saturation                               Max(u)                 ≤ 2 volts
Time Domain
5. Rise time                                         T_rise                 ≤ 2 ms
6. Overshoots                                        O_shoot                ≤ 0.05
7. Settling time (5%)                                T_settling             ≤ 2 ms
8. Steady-state error                                SS_error               0
[Block diagram: reference input → feedforward controller and sum → plant (x(n+1) = Ax(n) + Bu(n), y(n) = Cx(n) + Du(n)) → position output, with a position feedback controller closing the loop]

Resulting controllers:

K_p = 0.038597 (z + 0.63841)/(z + 0.39488)
K_s = 0.212 (z − 0.783)/(z + 0.014001)
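A first-order discrete controller of the form K·(z + c1)/(z + c2) can be realized as the difference equation u(n) = K·e(n) + K·c1·e(n−1) − c2·u(n−1); the sketch below instantiates it with the K_p coefficients from the slide, while the class name and interface are illustrative.

```python
# Sketch of a first-order discrete controller K(z) = K*(z + c1)/(z + c2).
class FirstOrderController:
    def __init__(self, K, c1, c2):
        self.K, self.c1, self.c2 = K, c1, c2
        self.e_prev = 0.0  # previous error sample
        self.u_prev = 0.0  # previous control output

    def step(self, e):
        # u(n) = K*e(n) + K*c1*e(n-1) - c2*u(n-1)
        u = self.K * e + self.K * self.c1 * self.e_prev - self.c2 * self.u_prev
        self.e_prev, self.u_prev = e, u
        return u

# Position feedback controller K_p with the coefficients from the slide.
Kp = FirstOrderController(0.038597, 0.63841, 0.39488)
```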
74 EE6701 Evolutionary Computation Chapter 1
Layout of MOEA Toolbox
Population Handling
Simulation Objectives
Remote Control
Graphical Results
Quick Setup
Master Window
75 EE6701 Evolutionary Computation Chapter 1
Design Trade-off Graph Progress Ratio
Tan, K. C., Lee, T. H., Khoo, D. and Khor, E. F., 'A multi-objective
evolutionary algorithm toolbox for computer-aided multi-objective
optimization', IEEE Transactions on Systems, Man and Cybernetics:
Part B (Cybernetics), vol. 31, no. 4, pp. 537-556, 2001.
76 EE6701 Evolutionary Computation Chapter 1
Servo Output Response Disturbance Rejection
77 EE6701 Evolutionary Computation Chapter 1
Response to Change in Resonant Frequency
[Figure: responses for ω_n = 14.82 kHz and ω_n = 7.41 kHz]
78 EE6701 Evolutionary Computation Chapter 1
Real-time Implementation
79 EE6701 Evolutionary Computation Chapter 1
Vehicle routing problem (VRP) involves
the routing of a set of identical vehicles
with limited capacity from a central depot
to a set of geographically dispersed
customers to satisfy their demands
For VRPSD, customers' demands are
stochastic, while other parameters such as
vehicle capacity, customers, and depots
are known a priori
Tan, K. C., Cheong, C. Y. and Goh, C. K., 'Solving multiobjective vehicle
routing problem with stochastic demand via evolutionary computation',
European Journal of Operational Research, vol. 177, pp. 813-839, 2007.
80 EE6701 Evolutionary Computation Chapter 1
Capacity constraint
Each customer has a demand; vehicle capacity cannot be exceeded
Treated as a hard constraint, e.g., a route failure occurs when this
constraint is violated
Time constraint
Service and travel time for each route should not exceed the length of
a driver's workday, e.g., 8 hours
Treated as a soft constraint in the form of remuneration for drivers:
$10 for each of the first 8 hours of work and $20 for each additional hour
This penalizes exceedingly long routes, which may not be feasible
to implement in practice
Constraints in VRPSD
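The soft time constraint above translates into a simple remuneration function; the sketch below assumes hours may be fractional.

```python
# Driver remuneration: $10/hour for the first 8 hours, $20/hour thereafter.
def remuneration(hours):
    return 10 * min(hours, 8) + 20 * max(hours - 8, 0)
```

An 8-hour route costs $80, while a 10-hour route costs $120, which penalizes exceedingly long routes.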
81 EE6701 Evolutionary Computation Chapter 1
Stochastic demand
Demand of each customer is a random variable whose
distribution is known
A normal distribution is usually used
Actual demand is not known but is revealed only when the
vehicle arrives at the customer's location
Examples of VRPSD
Beer and soft drink distribution
Provision of ATMs with cash
Trash collection
Stochastic Demands
82 EE6701 Evolutionary Computation Chapter 1
Route failure
A vehicle may find that it is unable to satisfy a customer's
demand upon arrival due to the capacity constraint
Recourse policy
The vehicle returns to the depot to restock and then continues
delivery according to the originally planned route
Main obstacle
The actual cost of a solution cannot be known before it is
actually implemented, i.e., the main obstacle in solving
VRPSD is finding a suitable objective function
Difficulty in Solving VRPSD
83 EE6701 Evolutionary Computation Chapter 1
Route Simulation Method (RSM)
RSM is applied to evaluate the cost of routes for a particular
realization of the customers' demands
Generate N sets of demands based on the known
distributions of the customers' demands
An averaging technique is used to obtain the expected cost of
the solution

Example (vehicle capacity: 15):

Customer          1   2   3   4   5   6
Generated demand  5   6   2   13  9   5

[Figure: depot and customers 1-6 linked in a delivery route]
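The RSM steps can be sketched as follows; the recourse handling (restock at the depot when a customer's demand would exceed the remaining capacity), the distance matrix, and the truncated-normal demand model are illustrative assumptions.

```python
import random

def route_cost(route, demands, dist, capacity):
    """Cost of one route under the return-to-depot recourse policy."""
    load, cost, pos = 0, 0.0, 0              # node 0 is the depot
    for c in route:
        cost += dist[pos][c]                 # travel to the customer
        if load + demands[c] > capacity:     # route failure: restock
            cost += dist[c][0] + dist[0][c]
            load = 0
        load += demands[c]
        pos = c
    return cost + dist[pos][0]               # return to the depot

def expected_cost(route, mean, std, dist, capacity, N=1000, seed=0):
    """Average the route cost over N sampled demand sets (the RSM average)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N):
        demands = {c: max(0.0, rng.gauss(mean[c], std[c])) for c in route}
        total += route_cost(route, demands, dist, capacity)
    return total / N
```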
84 EE6701 Evolutionary Computation Chapter 1
Multiple objectives
Travel distance
Driver remuneration
Number of vehicles
Variable-length chromosome
Encodes the number of routes/vehicles and the order of
customers served by these vehicles
Every chromosome can have a different number of routes

Example chromosome: 0 2 5 7 0 | 0 1 3 4 8 9 0 | 0 6 10 0

Each route contains a sequence of customers delimited by depot visits (0)
A chromosome encodes a complete routing solution
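Decoding such a chromosome into its routes is straightforward; the gene string below is the example from the slide, and the helper name is illustrative.

```python
# Split a flat gene string into routes; depot visits (0) delimit routes.
def decode(chromosome):
    routes, current = [], []
    for gene in chromosome:
        if gene == 0:
            if current:                # close the route in progress
                routes.append(current)
                current = []
        else:
            current.append(gene)       # customer index
    if current:
        routes.append(current)
    return routes

# Chromosome from the slide: three routes.
genes = [0, 2, 5, 7, 0, 0, 1, 3, 4, 8, 9, 0, 0, 6, 10, 0]
```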
85 EE6701 Evolutionary Computation Chapter 1
Route-exchange Crossover
Allows good sequences of routes to be shared
Infeasibility after the change can be eradicated easily
A random shuffling operator is applied to increase the diversity
of chromosomes to explore the large VRPSD search space
86 EE6701 Evolutionary Computation Chapter 1
Multi-mode Mutation
Partial swap
Merge shortest route
Split longest route
[Flowchart: a chromosome selected for mutation is dispatched to partial swap, random shuffle, merge shortest route, or split longest route according to rand[0,1) comparisons against the elastic rate and squeeze rate]
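One plausible reading of the mutation dispatch is sketched below; the branch order and the default rate values are assumptions, and each returned string stands in for the corresponding mutation operator.

```python
import random

# Sketch: pick one mutation mode by comparing random draws with the rates.
def choose_mutation_mode(rng, elastic_rate=0.3, squeeze_rate=0.5):
    if rng.random() < elastic_rate:          # structural change to the routes
        if rng.random() < squeeze_rate:
            return "merge_shortest_route"
        return "split_longest_route"
    return "partial_swap"                    # in-place reordering
```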
87 EE6701 Evolutionary Computation Chapter 1
Shortest Path (SP) search
Exploits the structure that route failures are more likely to occur
at the end of a route
Which Directional (WD) search
Exploits the structure that the cost of a route differs between
the two directions
Rebuilds the route in the opposite direction
For both methods, the new route is compared with the original one
and the better route is retained
[Flowchart: build customer database → population initialization → generation loop: Route Simulation Method and Pareto ranking → update archive → tournament selection → route-exchange crossover → multi-mode mutation → local search exploitation (when selected) → elitism → repeat until the stopping criterion is met]
While evolutionary operators focus on global exploration, local
search contributes to the intensification of optimization results
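The Which Directional search, as described, amounts to comparing a route against its reversal under a cost function and keeping the cheaper one; the cost function itself is left abstract here.

```python
# Sketch of the WD local search: try the opposite direction, keep the better.
def wd_search(route, cost):
    reversed_route = list(reversed(route))
    return route if cost(route) <= cost(reversed_route) else reversed_route
```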
88 EE6701 Evolutionary Computation Chapter 1
Performance of Local Search Operators
All Solutions Non-dominated Solutions
[Figure: average travel distance and average driver remuneration vs. generation (0-400) for WD/SP, SP/WD, SP, WD, RAN, and NLS, over all solutions and over non-dominated solutions]
89 EE6701 Evolutionary Computation Chapter 1
Comparison of Different Optimization Criteria
Multiobjective Performance
[Figure: travel distance × driver remuneration (0-7×10^6) for all solutions and non-dominated solutions, comparing MO, DORV, DODV, DODR, SOD, SOR, and SOV]
90 EE6701 Evolutionary Computation Chapter 1
Search Space of MO
91 EE6701 Evolutionary Computation Chapter 1
Performance for different values of N (i.e., the expected cost of a solution
compared with the actual cost of implementing the solution).
Travel distance
Driver remuneration
Robust VRPSD solutions
[Figure: increase in travel distance and increase in driver remuneration vs. N (1-70) for Test Demand Sets 1-4]
92 EE6701 Evolutionary Computation Chapter 1
Robust VRPSD solutions (Travel Distance)
[Figure: increase in travel distance for GEG, GEM, AEM, and DET on Test Demand Sets 1-4]
(Dror and Trudeau)
93 EE6701 Evolutionary Computation Chapter 1
Robust VRPSD solutions (Driver Remuneration)
[Figure: increase in driver remuneration for GEG, GEM, AEM, and DET on Test Demand Sets 1-4]
94 EE6701 Evolutionary Computation Chapter 1
Robust VRPSD solutions

      Expected  Increase   Actual    Expected      Increase in   Actual        Multiplicative
      Travel    in Travel  Travel    Driver        Driver        Driver        Aggregate
      Distance  Distance   Distance  Remuneration  Remuneration  Remuneration  (×10^6)
GEG   1086.59   33.85      1120.44   985.95        32.87         1018.82       1.142
GEM   1120.52   16.54      1137.06   990.33        18.64         1008.97       1.147
AEM   1002.08   122.35     1124.43   947.04        125.04        1072.08       1.205
DET   970.17    213.40     1183.57   909.59        217.35        1126.94       1.334

Averaged over the non-dominated solutions at the termination of the simulation.
Test demand set 1
95 EE6701 Evolutionary Computation Chapter 1
Significance of RSM
The RSM was implemented using demand sets randomly
generated based on the customers' demand distributions
In actual implementation, there may not be a need to randomly
generate the demand sets if the company keeps past demand
records of its customers
Past records are useful and can be used to provide the demand sets
for the RSM to operate on
This is important if the customers' demand distributions are not known
Multiple objectives were considered in solving the VRPSD problem
New way of assessing the quality of solutions (RSM)
Solutions of the MOEA are robust to the stochastic nature of the problem
Summary
96 EE6701 Evolutionary Computation Chapter 1
Goh, C. K. and Tan, K. C., Evolutionary Multi-objective Optimization in Uncertain Environments: Issues and Algorithms, Springer-Verlag, 2009.

Tan, K. C., Khor, E. F. and Lee, T. H., Multiobjective Evolutionary Algorithms and Applications, Springer-Verlag, United Kingdom, 2005.