
Multi-objective Optimization Based Differential Evolution Constrained Optimization Algorithm
Mengling Zhao (1,2), Ruochen Liu (2,3), Wenfeng Li (1), Hongwei Liu (2)
1: Xi'an University of Science and Technology, Xi'an, China
2: School of Science, Xidian University, Xi'an, China
3: Institute of Intelligent Information Processing, Xidian University, Xi'an, China
zhaomengling@126.com
Abstract—Based on the differential evolution algorithm and the concept of multi-objective optimization, this paper proposes a differential evolution algorithm for constrained optimization problems that handles constraints through multi-objective techniques. The differential evolution operators allow the algorithm to perform global search and local search effectively. For constrained optimization problems, this paper presents a new comparison mechanism based on the concept of Pareto optimality: Pareto optimal solutions, feasible solutions, and infeasible solutions are assigned grades that determine the probability with which they are selected during evolution, which steers the search toward the global optimum. In addition, an infeasible-solution replacement mechanism is given: when the algorithm falls into a local optimum, infeasible solutions that carry useful information replace redundant, duplicated feasible solutions, improving the exploration capability of the algorithm in the search space. Compared with the Evolutionary Algorithm based on Homomorphous Maps (EAHM), the Artificial Immune Response Constrained Evolutionary Strategy (AIRCES), Constraint Handling Differential Evolution (CHDE), and Evolutionary Strategies based on Stochastic Ranking (ESSR), the results on 13 standard test problems show that the proposed algorithm has advantages in convergence speed and solution accuracy.
Keywords—differential evolution algorithm; constrained optimization; multi-objective optimization
I. INTRODUCTION
In recent years, Evolutionary Algorithms (EAs) have been successfully applied to a wide range of engineering problems, including both unconstrained and constrained optimization [1-3]. There is now an extensive literature on EAs for unconstrained optimization, but research on the Constrained Optimization Problem (COP) is comparatively scarce; a challenging task for EA-based COP solvers is how to deal with the constraints. Michalewicz [4] divided the existing constraint-handling techniques for EAs into five categories: penalty functions; special representations and operators; repair algorithms; separation of objectives and constraints; and hybrid methods. The penalty function is the most common way of dealing with constraints, but the traditional penalty function method depends strongly on its penalty parameters; to address this problem, many investigators have proposed improved methods. Reference [5] gives a detailed review of work in this field.
Differential Evolution (DE) was proposed by Storn and Price in 1995 [6], and since then it has been used in many practical applications; the original DE has been modified and many new versions proposed [6-10]. DE is a population-based algorithm that, like genetic algorithms, uses recombination, mutation, and selection operators. The main difference in how better solutions are constructed is that genetic algorithms rely on crossover while DE relies on mutation, an operation based on the differences between randomly sampled pairs of solutions in the population. However, like any other EA, DE lacks a mechanism for dealing with constrained search spaces. In recent years, multi-objective optimization has been introduced into COP: constraints are treated as one or more additional objective functions, so that the COP is converted into a multi-objective optimization problem.
In this paper, the constraints are aggregated into a single additional objective function, and the constrained optimization problem is converted into a multi-objective optimization problem with two objectives. Based on the Pareto concept, individuals are assigned grades that determine the selection probability of each individual during evolution. At the same time, since the ultimate goal of constrained optimization is the global optimal solution rather than the Pareto optimal set, we propose a replacement criterion under which infeasible solutions carrying important information are retained, so as to increase the diversity of the population. The performance of the algorithm is tested on 13 standard test problems, and the results are compared with four existing algorithms: the Evolutionary Algorithm based on Homomorphous Mapping (EAHM) [11], the Artificial Immune Response Constrained Evolutionary Strategy (AIRCES) [12], Constraint Handling Differential Evolution (CHDE) [13], and Evolutionary Strategies based on Stochastic Ranking (ESSR) [5].
II. RELEVANT BACKGROUND
A. Constrained Optimization Problem (COP)
Without loss of generality, constrained optimization
problems (in the minimization sense) can be described as
equation (1):
2010 Second WRI Global Congress on Intelligent Systems
978-0-7695-4304-8/10 $26.00 © 2010 IEEE
DOI 10.1109/GCIS.2010.50
Minimize f(x), where x = (x_1, x_2, ..., x_n) \in \mathbb{R}^n,    (1)

where x \in S \cap \Omega. S is an n-dimensional rectangular space in \mathbb{R}^n defined by the parametric constraints

x_i^{(L)} \le x_i \le x_i^{(U)},  i = 1, ..., n,

and f(x) is the objective function defined on \mathbb{R}^n. The feasible region \Omega is defined by a set of m (m \ge 0) additional linear or nonlinear constraints:

g_j(x) \le 0,  j = 1, ..., l,
h_j(x) = 0,   j = l + 1, ..., m,

where l is the number of inequality constraints and m - l is the number of equality constraints. An inequality constraint that satisfies g_j(x) = 0 at a point x \in \Omega is said to be active at x; all equality constraints h_j(x) are considered active at all points of \Omega.
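As a concrete illustration of this formulation, the following sketch checks a point against the constraints of a small COP. The problem itself is an illustrative placeholder, not one of the benchmarks used later; the tolerance δ for equality constraints anticipates the treatment used in Section IV.

```python
# A toy COP in the form of equation (1): minimize f subject to
# inequality constraints g_j(x) <= 0 and equality constraints h_j(x) = 0.
# The functions below are illustrative only, not one of the G01-G13 benchmarks.

def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g1(x):                      # inequality constraint: g1(x) <= 0
    return x[0] ** 2 - x[1]

def h1(x):                      # equality constraint: h1(x) = 0
    return x[0] + x[1] - 2.0

def is_feasible(x, delta=1e-4):
    """Check x against all constraints; the equality is relaxed to
    |h(x)| - delta <= 0, as done for the benchmarks in Section IV."""
    return g1(x) <= 0.0 and abs(h1(x)) <= delta

print(is_feasible([1.0, 1.0]))   # on both constraint boundaries -> True
print(is_feasible([2.0, 1.0]))   # violates both constraints -> False
```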
B. Differential Evolution Algorithm
In this section, the basic Differential Evolution algorithm is first described; then its applications to constrained optimization problems are briefly reviewed. Like other EAs, DE randomly initializes the population members X_0 = (X_{1,0}, X_{2,0}, ..., X_{NP,0}), where NP is the population size. Each individual is encoded as a real-valued vector of dimension D equal to the number of objective function parameters. The individuals are then evolved through mutation, recombination, and selection. The mutation operation differs from that of other EAs: whereas EAs with real encoding usually adopt Gaussian mutation, Cauchy mutation, or a hybrid of the two, DE generates a new individual by adding the weighted difference of two individual vectors to a third individual vector, which is called difference mutation. Suppose three individuals X_{r1,k}, X_{r2,k}, X_{r3,k} are randomly chosen from the k-th generation population for mutation; a new mutant individual V_{i,k+1} is produced by difference mutation as follows:

V_{i,k+1} = X_{r1,k} + F (X_{r2,k} - X_{r3,k})    (2)
F is a real, constant parameter that controls the amplification of the differential variation; it is usually chosen from the interval F \in [0, 2]. The indices r1, r2, r3 denote three mutually different individuals that are also distinct from the current individual X_i. Difference mutation is performed for every individual of the population.
Each mutant vector V_{i,k+1} is recombined with the corresponding target vector X_i to build the trial vector U_{k+1} = (U_{1,k+1}, U_{2,k+1}, ..., U_{NP,k+1}), as shown in equation (3):

U_{ij,k+1} = V_{ij,k+1},  if rand(j) \le CR or j = j_rand,
U_{ij,k+1} = X_{ij,k},    otherwise,
i = 1, 2, ..., NP;  j = 1, 2, ..., D.    (3)
CR is a real control parameter chosen from the interval CR \in [0, 1]. Based on a comparison with the random variable rand(j), drawn from the same interval, it is decided from which vector the trial vector inherits each component. Randomly choosing the index j_rand \in {1, ..., D} for every individual in each generation ensures that the trial vector differs from the target vector in at least one component.
The DE algorithm adopts one-to-one greedy selection. Individuals for the subsequent generation k+1 are selected according to their objective function values: each trial individual is compared with its target vector, and (for minimization problems) the one with the lower objective function value is chosen for the next generation:

X_{i,k+1} = U_{i,k},  if f(U_{i,k}) < f(X_{i,k}),
X_{i,k+1} = X_{i,k},  otherwise,
i = 1, 2, ..., NP.    (4)
These are the main steps of the DE algorithm; in practice, several variants of DE have been proposed [14]. This paper proposes a differential evolution algorithm for constrained optimization based on the concept of multi-objective optimization.
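The three DE operators above, equations (2)-(4), can be sketched as follows. The sphere objective and the parameter values are illustrative choices for demonstration, not settings taken from this paper.

```python
import random

def de(f, bounds, NP=30, F=0.5, CR=0.9, generations=200):
    """Basic DE/rand/1/bin: difference mutation (2), binomial
    recombination (3), and one-to-one greedy selection (4)."""
    D = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    for _ in range(generations):
        for i in range(NP):
            r1, r2, r3 = random.sample([r for r in range(NP) if r != i], 3)
            # (2) mutant vector V = X_r1 + F * (X_r2 - X_r3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]
            # (3) binomial crossover; j_rand guarantees at least one mutant component
            j_rand = random.randrange(D)
            u = [v[j] if random.random() <= CR or j == j_rand else pop[i][j]
                 for j in range(D)]
            # (4) one-to-one greedy selection (minimization)
            if f(u) < f(pop[i]):
                pop[i] = u
    return min(pop, key=f)

sphere = lambda x: sum(xi * xi for xi in x)
best = de(sphere, bounds=[(-5.0, 5.0)] * 3)
print(sphere(best))  # close to 0 after 200 generations
```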
III. DIFFERENTIAL EVOLUTION CONSTRAINED
OPTIMIZATION ALGORITHM BASED ON MULTI-OBJECTIVE
OPTIMIZATION
A. Constrained Optimization Problem Based on Multi-objective Optimization
To avoid setting the penalty parameter r required by the penalty function method, more and more researchers have tried to convert the constrained optimization problem into a multi-objective optimization problem. The main feature of this kind of algorithm is that the constraints are treated as one or more objectives, so that the constrained single-objective problem becomes a multi-objective one, and the full range of multi-objective techniques can then be used to solve the constrained optimization problem. According to the number of objective functions converted from the constraints, these algorithms can be divided into two categories.
The first is to redefine the optimization problem as a two-objective problem. Generally speaking, one objective is the original target function f(x) and the other is the degree of constraint violation

G(x) = \sum_i [max(0, g_i(x))]^\beta,

where the sum runs over the constraints and \beta is usually 1 or 2. Deb et al. exploited the population-based approach of the GA and its ability to make pair-wise comparisons in tournament selection to devise a penalty function approach that does not require any penalty parameter [15]. Lin et al. proposed a fixed-proportion and direct-comparison method to handle the constraints [16], which combines direct comparison with a strategy that keeps a fixed proportion of infeasible individuals in the population.
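The violation degree G(x) above can be computed as in the following sketch. The equality-constraint tolerance δ follows the treatment used in Section IV, and the constraint lists are illustrative placeholders, not benchmark constraints.

```python
def violation_degree(x, ineqs, eqs, beta=2, delta=1e-4):
    """Degree of constraint violation G(x): each inequality g(x) <= 0
    contributes max(0, g(x))^beta; each equality h(x) = 0 is first
    relaxed to |h(x)| - delta <= 0 (as in Section IV) and treated alike."""
    total = sum(max(0.0, g(x)) ** beta for g in ineqs)
    total += sum(max(0.0, abs(h(x)) - delta) ** beta for h in eqs)
    return total

# Illustrative constraints (not from the G01-G13 suite):
ineqs = [lambda x: x[0] + x[1] - 3.0]      # x0 + x1 <= 3
eqs = [lambda x: x[0] - x[1]]              # x0 = x1

print(violation_degree([1.0, 1.0], ineqs, eqs))  # feasible -> 0.0
print(violation_degree([3.0, 1.0], ineqs, eqs))  # infeasible -> positive
```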
It has been successfully integrated with an ordinary GA. Runarsson et al. proposed a stochastic ranking method for dealing with constrained problems by introducing a random number [17]; they achieved a balance between the objective function and the constraint functions.
The second method considers each constraint as a separate objective function and converts the single-objective optimization problem with m constraints into a multi-objective optimization problem with m + 1 objective functions. Carlos et al. proposed a population-based multi-objective technique similar to the Vector Evaluated Genetic Algorithm [18]. Schaffer proposed a sorting process based on Pareto ranking [19]. Coello et al. proposed a method based on a Pareto niched genetic algorithm in 2002 [20]. Aguirre et al. proposed a method based on the Pareto Archived Evolution Strategy [21]. Zhou [22] used the Pareto dominance relation between vectors and adopted the definition of an individual's Pareto strength value to sort the individuals. Mezura made an experimental comparison of four typical multi-objective-based algorithms for COP [23].
In essence, a constrained optimization algorithm has two clear goals: 1) the population should quickly enter, or get close to, the feasible region; and 2) the algorithm should converge to the global optimal solution. The main problem in existing constraint-handling techniques is how to design reasonable selection criteria and a comparison mechanism, which directly affect convergence. In addition, the search space of the optimization problem is composed of a feasible region and an infeasible region, so it is very important to use infeasible solutions effectively; this plays a decisive role in achieving the first goal when the global optimal solution lies on the boundary of the feasible region or when the proportion of feasible space is small.
Based on the above considerations, a new differential evolution constrained optimization algorithm is proposed in this paper. It belongs to the first kind of multi-objective constraint-handling method: it converts the constrained optimization problem into a multi-objective problem with two objectives, one being f(x) and the other the violation degree G(x). For convenience of exposition, we write F(x) = (f(x), G(x)). Here we consider only minimization problems; a maximization problem can be transformed into a minimization one.
B. Comparison Mechanism
In a multi-objective evolutionary algorithm, every individual is assigned a grade, and all Pareto optimal solutions share the same grade; for example, the grade of a Pareto optimal solution is 1 in [24] and 0 in [22]. In this section, we define the grade of all feasible solutions to be 1, the grade of Pareto optimal solutions to be 2, and the grade of all other individuals to be 3. All individuals are then sorted according to the following rules:
1) If two individuals have different grades, the one with the lower grade wins;
2) If two individuals are both feasible solutions, the original target function is the fitness function, and the individual with the smaller function value wins;
3) If two individuals are both Pareto optimal solutions, each of them wins with probability 0.5;
4) If two individuals both have grade 3, either the original target function or the constraint function serves as the fitness function, each with probability 0.5, and the individual with the smaller function value wins.
A special individual may receive two grades during the assignment: if the population contains feasible solutions, then the best feasible solution is not only a feasible solution but also a Pareto optimal solution. In this case, we define the grade of this special individual to be 1.
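A minimal sketch of this grading and pairwise comparison, assuming individuals are stored as (f, G) pairs where G is the violation degree (G = 0 means feasible); the grade numbering and tie-breaking follow rules 1)-4) above.

```python
import random

def dominates(a, b):
    """Pareto dominance on (f, G) pairs, both minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def assign_grades(pop):
    """Grades per Section III.B: feasible (G == 0) -> 1, non-dominated
    (Pareto optimal) -> 2, all remaining individuals -> 3.  A point that
    would qualify for both grade 1 and grade 2 receives grade 1."""
    grades = {}
    for p in pop:
        if p[1] == 0.0:
            grades[p] = 1
        elif not any(dominates(q, p) for q in pop if q != p):
            grades[p] = 2
        else:
            grades[p] = 3
    return grades

def better(a, b, grades):
    """Pairwise comparison, rules 1)-4)."""
    if grades[a] != grades[b]:                 # rule 1: lower grade wins
        return a if grades[a] < grades[b] else b
    if grades[a] == 1:                         # rule 2: compare f among feasible
        return a if a[0] < b[0] else b
    if grades[a] == 2:                         # rule 3: coin flip among Pareto points
        return a if random.random() < 0.5 else b
    key = 0 if random.random() < 0.5 else 1    # rule 4: f or G, each with prob 0.5
    return a if a[key] < b[key] else b

pop = [(5.0, 0.0), (3.0, 0.0), (1.0, 2.0), (4.0, 1.0), (6.0, 3.0)]
g = assign_grades(pop)
print(g)                                    # feasible -> 1, non-dominated -> 2, rest -> 3
print(better((5.0, 0.0), (3.0, 0.0), g))    # (3.0, 0.0): smaller f wins
```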
C. Diversity Mechanism
Pareto optimal solutions play an important role in the population. For instance, consider three Pareto optimal solutions marked e, f and g, where e is the best feasible individual in the population, f is the infeasible individual with the minimum degree of constraint violation, and g is the infeasible individual with the minimum objective function value, as shown in Figure 1. We are mainly interested in the Pareto optimal solutions, since they carry the main information.

Figure 1. Pareto optimal solutions in the population

Based on the above reasons, this section presents a new diversity mechanism: an infeasible-solution replacement mechanism. We define an extra set A that stores infeasible individuals selected after each generation's evolution, namely the infeasible solutions with the minimum objective function value or the minimum degree of constraint violation. If no feasible solution appears within m generations of evolution, we randomly select n individuals from A to replace n individuals of the population P.
This diversity mechanism promotes the recombination of individuals in the population, and in particular the recombination of infeasible solutions with feasible solutions, which improves the exploration ability of the algorithm in the search space and hence its ability to find the optimal solution, especially when the global optimal solution lies on the boundary of the feasible region, a case that general search methods cannot handle well.
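A sketch of the replacement mechanism, under the assumption (the paper leaves these details open) that the archive A keeps the best infeasible points by objective value and by violation degree, and that individuals are (f, G) pairs:

```python
import random

def update_archive(archive, pop, cap=20):
    """Keep the infeasible points with the smallest f or the smallest G."""
    infeasible = [p for p in pop if p[1] > 0.0]
    merged = list(set(archive + infeasible))
    by_f = sorted(merged, key=lambda p: p[0])[:cap // 2]   # best objective values
    by_g = sorted(merged, key=lambda p: p[1])[:cap // 2]   # smallest violations
    return list(set(by_f + by_g))

def replace_if_stalled(pop, archive, stalled_generations, m=10, n=5):
    """If no feasible solution appeared for m generations, overwrite n
    randomly chosen population slots with archive members."""
    if stalled_generations >= m and archive:
        for idx in random.sample(range(len(pop)), min(n, len(pop))):
            pop[idx] = random.choice(archive)
    return pop
```

The cap, m, and n values are illustrative; the paper does not fix them.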
D. The Proposed Algorithm
In this paper, a Multi-objective Optimization based Differential Evolution constrained optimization algorithm (denoted MODE) is proposed. The steps of the algorithm are as follows.
Step 1: Generate NP individuals randomly to form the initial population X_0 = (X_{1,0}, X_{2,0}, ..., X_{NP,0}). These individuals are drawn from the given bounds of the search space and are generally assumed to follow a uniform probability distribution. Set the initial parameters of the algorithm, including the control parameters F and CR and the maximum number of generations G_max.
Step 2: Evaluate the objective function value of each individual.
Step 3: Check the halting criterion; if it is satisfied, exit; otherwise, continue.
Step 4: Perform difference mutation using equation (2).
Step 5: Perform difference recombination using equation (3).
Step 6: Perform selection. Most DE algorithms use one-to-one greedy selection; in this paper, we adopt a global comparison strategy instead: MODE sorts the mixed population produced by recombination and mutation, generates the offspring population, and then returns to Step 2.
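The step sequence above can be sketched as a compact loop. This is a structural sketch under simplifying assumptions: the grading of Section III.B is replaced by a deterministic lexicographic (violation, objective) key, which realizes rules 1)-2) but omits the stochastic rules 3)-4) and the archive of Section III.C; the toy problem and parameters are illustrative, not the paper's benchmarks.

```python
import random

def mode(f, G, bounds, NP=40, F=0.5, CR=0.9, g_max=300):
    """Structural sketch of MODE (Steps 1-6) with simplified grading."""
    D = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]  # Step 1
    for _ in range(g_max):                                  # Steps 2-3: fixed budget
        trials = []
        for i in range(NP):
            r1, r2, r3 = random.sample([r for r in range(NP) if r != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]  # Step 4
            j_rand = random.randrange(D)
            trials.append([v[j] if random.random() <= CR or j == j_rand
                           else pop[i][j] for j in range(D)])                   # Step 5
        mixed = pop + trials                                # Step 6: global comparison
        mixed.sort(key=lambda x: (G(x), f(x)))              # feasible (G=0) ranked by f
        pop = mixed[:NP]
    return pop[0]

# Toy problem: minimize f(x) = x0^2 + x1^2 subject to x0 + x1 >= 1.
f = lambda x: x[0] ** 2 + x[1] ** 2
G = lambda x: max(0.0, 1.0 - (x[0] + x[1]))                # violation degree
best = mode(f, G, bounds=[(-2.0, 2.0)] * 2)
print(best)  # near the constrained optimum (0.5, 0.5)
```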
IV. EXPERIMENTAL RESULTS AND ANALYSIS
The experiments consist of two parts. First, a comparison between the proposed algorithm and four other algorithms is made; second, two important parameters used in MODE are analyzed.
All experiments were carried out on a desktop computer with an Intel(R) Core(TM) 2 CPU (1.86 GHz) and 2 GB of RAM, running Windows XP.
A. Comparative Tests
To evaluate the effectiveness of the proposed algorithm, we compare its performance with that of four other algorithms on the 13 benchmark problems described in [18]. The test functions chosen contain characteristics that are representative of what can be considered difficult global optimization problems for an evolutionary algorithm; their expressions can be found in [18]. The parameters used for MODE are the following: NP = 60 and G_max = 5800; F was generated randomly per run (using a uniform distribution) in [0.3, 0.9], and CR was likewise generated randomly in [0.8, 1]. The intervals for both parameters were defined empirically. Equality constraints were transformed into inequalities using a tolerance value \delta = 0.0001 (except for problems G03, G11, and G13, where the tolerance was \delta = 0.001). The parameters of the other four algorithms are set as in the literature [18].
Each test problem was run 30 independent times. The best, worst, and mean results and the standard deviation are recorded for each test.
A comparison of the performance of MODE with the four other algorithms EAHM, AIRCES, CHDE, and ESSR is presented in Table 1, in which NA = not available, "---" = no feasible solution found, and "No" = the literature did not give the results.
Table 1. Comparison of our approach (MODE) with respect to EAHM, AIRCES, CHDE and ESSR.

Problem  Optimal      Item   EAHM         AIRCES      CHDE        ESSR        MODE
G01      -15.000      best   -14.7886     -15.000     -15.000     -15.000     -15.000
                      mean   -14.7082     -15.000     -14.792134  -15.000     -15.000
                      worst  -14.6154     -15.000     -12.743044  -15.000     -15.000
G02      0.803619     best   0.79953      0.803575    0.803619    0.803515    0.803277
                      mean   0.79671      0.779465    0.746236    0.781975    0.802937
                      worst  0.79119      0.716312    0.302179    0.726288    0.802399
G03      1.000        best   0.9997       1.000       1.000       1.000       1.0
                      mean   0.9989       1.000       0.640326    1.000       1.0
                      worst  0.9978       0.999       0.029601    1.000       1.0
G04      -30665.539   best   -30664.5     -30665.539  -30665.539  -30665.539  -30665.539
                      mean   -30655.3     -30665.539  -30592.154  -30665.539  -30665.539
                      worst  -30645.9     -30665.539  -29986.214  -30665.539  -30665.539
G05      5126.498     best   ---          No          5126.4967   5126.497    5126.4967
                      mean   ---          No          5218.723    5128.881    5126.4967
                      worst  ---          No          5502.4103   5142.472    5126.4967
G06      -6961.814    best   -6952.1      -6961.814   -6961.814   -6961.814   -6961.814
                      mean   -6342.6      -6695.987   -6367.575   -6875.940   -6961.814
                      worst  -5473.9      -6030.333   -2236.9503  -6350.262   -6957.814
G07      24.306       best   24.620       No          24.306      24.307      24.306
                      mean   24.826       No          104.5992    24.374      24.306
                      worst  25.069       No          1120.541    24.642      24.306
G08      0.095825     best   0.095825     0.095825    0.095825    0.095825    0.095825
                      mean   0.0891568    0.095825    0.091292    0.095825    0.095825
                      worst  0.0291438    0.095825    0.027188    0.095825    0.095825
G09      680.63       best   680.91       680.630     680.630     680.630     680.630
                      mean   681.16       680.652     692.4723    680.656     680.630
                      worst  683.18       680.801     839.783     680.763     680.630
G10      7049.25      best   7147.9       No          7049.248    7054.316    7049.248
                      mean   8163.6       No          8442.657    7559.192    7049.253
                      worst  9659.3       No          15580.37    8835.655    7049.268
G11      0.75         best   0.75         0.75        0.749       0.750       0.7499
                      mean   0.75         0.75        0.761823    0.750       0.7499
                      worst  0.75         0.75        0.870984    0.750       0.7499
G12      1.000        best   0.999999875  1.000       1.000       1.00000     1.00000
                      mean   0.999134613  1.000       1.000       1.00000     1.00000
                      worst  0.991950498  1.000       1.000       1.00000     1.00000
G13      0.053950     best   NA           0.053950    0.053866    0.053957    0.054048
                      mean   NA           0.054716    0.747277    0.057006    0.37277
                      worst  NA           0.440825    2.259875    0.216915    0.614138
With respect to EAHM, our approach obtained a better best and mean solution in twelve problems, the exception being G02; MODE also provided a better worst result except for G02. It is clear that MODE was superior to EAHM in quality of results and competitive on the statistical measures.
With respect to AIRCES, MODE was able to find a better best result from G08 to G13 and a similar best result in G01, G03, G04, and G06. MODE also provided a better mean result in G02, G04, and G07-G13, and a similar mean result in G03 and G04. Besides this, MODE provided a better worst result in all problems. However, the analysis is incomplete because the results of AIRCES for problems G05, G07, and G10 were not available.
With respect to CHDE, MODE found a better best result for problems G08 to G13, while CHDE obtained a better best result on G02. MODE also found better worst and mean results on most problems, G12 being the exception, where the results of MODE were similar to those of CHDE. Both CHDE and MODE are DE-based algorithms for constrained optimization problems, and MODE presented the better performance on most test problems.
With respect to ESSR, the two algorithms had similar results on eleven problems, the exceptions being G02 and G05. For G02, MODE found better mean and worst results than ESSR; for G05, MODE had a better result than ESSR on all three measures.
From the above comparison, we can see that MODE produces competitive results in terms of quality with respect to four techniques representative of the state of the art in constrained optimization.
B. Analysis of the Parameters of the Algorithm
Most optimization algorithms based on natural computation are stochastic search algorithms, and some of the parameters employed in these algorithms have a strong effect on stability and convergence speed. In this section, we focus on the two important parameters F and CR and analyze their influence on the performance of the proposed algorithm. As shown in Figure 2 (a)-(m), among the thirteen test functions, G02, G03, G08, and G12 are maximization problems and the others are minimization problems; for ease of presentation, the minimization problems are converted into maximization problems here. It is easy to see that, for most of the problems except G13, the parameters CR and F have a greater effect on the performance of the algorithm when they are small, and little effect when they are large. This confirms that our initial parameter settings for the algorithm are appropriate.
The Pareto-based sorting makes it possible to fully exploit the information carried by infeasible solutions, and the global competition of the differential evolution algorithm concentrates the evolution of the population near good solutions. Thus, once a potential solution appears, it gets more room to develop, which makes the algorithm converge faster. These two mechanisms complement each other and give MODE its superior performance in solving constrained optimization problems.
V. CONCLUSION AND PROSPECTS
Based on the relevant theory of multi-objective optimization and the differential evolution algorithm, this paper proposes a new differential evolution algorithm. Its main feature is that the constraints are converted into an additional objective, turning the problem into a two-objective optimization problem. The Pareto concept from multi-objective optimization is used to handle the constraints and to define a survival grade for each individual, implementing survival of the fittest. The test results on thirteen standard test functions show that, compared with the related algorithms, the new algorithm is slightly inferior on G13 but has obvious advantages on the other twelve test functions. Future research will improve the constraint handling, increase the diversity of the population, and further improve the performance of the algorithm.
VI. ACKNOWLEDGMENTS
This work was supported by: the National Research Foundation for the Doctoral Program of Higher Education of China (20070701022); the National Natural Science Foundation of China under Grants No. 60803098 and No. 60703108; the China Postdoctoral Science Foundation special funded project (No. 200801426); the Applied Materials Innovation Fund of Xi'an city in 2008, "HDV (High Definition Video) Acquisition Platform for Low Illumination Environments based on Optical MEMS" (XA-AM-200813); the Subsidized Item of Scientific and Technical Research from the Education Bureau of Shaanxi Province in 2007 (07JC11); and the Technology Innovation Fund for Small and Medium-sized Enterprises from the National Science and Technology Department in 2009, "Intrinsically Safe Industrial Ethernet Equipment for Mines" (09C26226115674).
REFERENCES
[1] S. Forrest, ed. Proceedings of the Fifth International Conference on Genetic Algorithms and Their Applications. San Mateo, CA: Morgan Kaufmann, 1993.
[2] D. B. Fogel and W. Atmar, eds. Proceedings of the First Annual Conference on Evolutionary Programming. La Jolla, CA: Evolutionary Programming Society, 1992.
[3] Z. Michalewicz and D. B. Fogel. How to Solve It: Modern Heuristics, 2nd edn. Springer, Germany, 2004.
[4] Z. Michalewicz and M. Schoenauer. Evolutionary algorithms for constrained parameter optimization problems. Evolutionary Computation, 1996, 4(1): 1-32.
[5] T. P. Runarsson and X. Yao. Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation, 2000, 4(3): 284-294.
[6] K. V. Price. An introduction to differential evolution. In: New Ideas in Optimization, 1999, pp. 79-108.
[7] J. P. Chou and F. S. Wang. A hybrid method of differential evolution with application to optimal control problems of a bioprocess system. Proceedings of the IEEE Evolutionary Computation Conference, 1998, pp. 627-632.
[8] R. Gamperle, S. D. Muller, and P. Koumoutsakos. A parameter study for differential evolution. International Conference on Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, 2002, pp. 293-298.
[9] J. Liu and J. Lampinen. A fuzzy adaptive differential evolution algorithm. Proc. IEEE Conf. on Computers, Communications, Control and Power Engineering, 2002, pp. 606-611.
[10] W. J. Zhang and X. F. Xie. DEPSO: hybrid particle swarm with differential evolution operator. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 2003, 4: 3816-3821.
[11] S. Koziel and Z. Michalewicz. Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation, 1999, 7(1): 19-44.
[12] M. G. Gong, L. C. Jiao, et al. A novel evolutionary strategy based on artificial immune response for constrained optimizations. Chinese Journal of Computers, 2007, 30(1): 37-47.
[13] E. Mezura-Montes, C. A. Coello Coello, and E. I. Morales. Simple feasibility rules and differential evolution for constrained optimization. Lecture Notes in Computer Science. Berlin: Springer, 2004, pp. 707-716.
[14] K. Price. Differential evolution: a fast and simple numerical optimizer. 1996 Biennial Conference of the North American Fuzzy Information Processing Society, New York, 1996, pp. 524-527.
[15] K. Deb. An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 2000, 186: 311-338.
[16] D. Lin, M. Q. Li, and J. S. Kou. A GA-based method for solving constrained optimization problems. Journal of Software, 2001, 12(4): 628-632.
[17] T. P. Runarsson and X. Yao. Stochastic ranking for constrained evolutionary optimization. IEEE Trans. Evolutionary Computation, 2000, 4(3): 284-294.
[18] C. A. Coello Coello. Treating constraints as objectives for single-objective evolutionary optimization. Engineering Optimization, 2000, 32(3): 275-308.
[19] J. D. Schaffer. Multiple objective optimization with vector evaluated genetic algorithms. In: Proc. 1st Int. Conf. Genetic Algorithms and Their Applications, J. J. Grefenstette, ed., Hillsdale, NJ, 1985, pp. 93-110.
[20] C. A. Coello Coello and E. Mezura-Montes. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Advanced Engineering Informatics, 2002, 16(3): 193-203.
[21] A. H. Aguirre, S. B. Rionda, C. A. Coello Coello, et al. Handling constraints using multiobjective optimization concepts. International Journal for Numerical Methods in Engineering, 2004, 59(15): 1989-2017.
[22] Y. R. Zhou, Y. X. Li, Y. Wang, et al. A Pareto strength evolutionary algorithm for constrained optimization. Journal of Software, 2003, 14(7): 1243-1249.
[23] E. Mezura-Montes and C. A. Coello Coello. A numerical comparison of some multiobjective-based techniques to handle constraints in genetic algorithms. Tech. Rep. EVOCINV-03-2002, Evolutionary Computation Group at CINVESTAV, Department of Electrical Engineering, CINVESTAV-IPN, Mexico D.F., Mexico, 2002.
[24] S. H. Leung, et al. Image segmentation using fuzzy clustering incorporating an elliptic shape function. IEEE Transactions on Image Processing, 2004, 13(1).

Figure 2. The influence of CR and F on the performance of MODE. Panels (a)-(m) plot, for the test functions G01-G13 respectively, the median of the best value found in 30 runs as a function of CR and F, each varied over [0, 1].
