
Global Optimization

Project 2: Genetic Algorithm


Miguel Díaz-Rodríguez
June 16, 2014
1 Abstract
In this report the canonical form of the GA is implemented. Next, a parametric study
is performed in order to analyze the effect of the different parameters involved in the GA
implementation. Then, the algorithm is tested on eight unconstrained functions and five
constrained problems. The online and offline performance of the algorithm for the test
functions is evaluated. Finally, conclusions are presented.
2 Genetic Algorithm (GA)
A GA is an algorithm for global optimization based on a heuristic search that mimics the process
of natural selection. Some of the concepts involved in the GA include:
Encoding : In the GA the variables of the problem are encoded in binary form. The actual
variables are called the phenotype form and the binary encoding is the genotype form.
Each variable is encoded with a string of length l containing zeros and ones. The
initial population is generated randomly according to the following function:
function pop=encoding(popsize,stringlength,dimension)
% Random binary population; the last dimension+2 columns are placeholders
% later filled by decoding (phenotype, objective value, constraint value).
pop=round(rand(popsize,dimension*stringlength+dimension+2));
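For example, a population of four members for a two-variable problem with a string length of
six would be created as follows (a small usage sketch; the trailing dimension+2 columns are
random placeholders at this stage and are only given meaning later by decoding):

popsize=4; stringlength=6; dimension=2;
pop=encoding(popsize,stringlength,dimension);
size(pop)      % 4 rows and 2*6+2+2 = 16 columns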
Decoding : The objective function is evaluated for each population member. In order to
evaluate the function, first the genotype form is decoded to the phenotype form,
and second, the objective function is computed for each member of the population. This
is done by the following code:
function [pop]=decoding(pop,stringlength,dimension,x_bound)
popsize=size(pop,1);
temp=2.^(stringlength-1:-1:0);               % weights of the binary digits
for i=1:dimension
    bound(i)=(x_bound(i,2)-x_bound(i,1))/(2^stringlength-1);
end
for i=1:popsize
    for j=1:dimension
        m(:,j)=pop(i,stringlength*(j-1)+1:stringlength*j);
    end
    x=temp*m;                                % decimal value of each substring
    x=x.*bound+x_bound(:,1)';                % map to the variable bounds
    pop(i,dimension*stringlength+1:dimension*stringlength+dimension)=x;
    y=funname(x);
    pop(i,dimension*stringlength+dimension+1)=y.f;   % objective function value
    pop(i,dimension*stringlength+dimension+2)=y.c;   % constraint evaluation
end
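In other words, each binary substring b_i of length l (l = stringlength) is mapped to its
variable range according to

x_i = x_{i,\min} + \mathrm{dec}(b_i)\,\frac{x_{i,\max}-x_{i,\min}}{2^{l}-1},

which is exactly what the weight vector temp and the scale factor bound compute in the code above.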
Note that the last two columns of the variable pop hold the objective function
value and the constraint function evaluation. Thus, in this step, the constraints can
be included for solving constrained optimization problems.
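For illustration, a hypothetical objective-function file compatible with this layout might look
as follows. This is a sketch only: the constraint, the penalty factor value, the negation of the
objective for maximization, and the quadratic penalty form are assumptions based on the
descriptions in Sections 3.5 and 3.7, not the actual test functions used in this project.

function y=funname(x)
% Hypothetical constrained problem: minimize x1^2+x2^2 subject to x1+x2-1 <= 0
penalty=100;                 % assumed penalty factor ("bound" in Table 1)
f=x(1)^2+x(2)^2;             % objective value (minimization problem)
g=x(1)+x(2)-1;               % inequality constraint, feasible when g(x) <= 0
y.f=-f;                      % negated objective: the GA maximizes fitness
y.c=-penalty*max(0,g)^2;     % penalty term: zero when feasible, negative otherwise
end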
Selection : In order to decide which members of the current generation will take part in the
new generation, the fitness of each population member is computed as follows:
function selected=selection(pop,popsize,stringlength,dimension,minf)
popsize_new=size(pop,1);
fitness=pop(:,dimension*stringlength+dimension+1) ...
       +pop(:,dimension*stringlength+dimension+2);
if min(fitness)<0
    fitness=fitness+minf*abs(min(fitness));   % shift so all values are positive
end
fitness=fitness./sum(fitness);                % selection probabilities
fitness=cumsum(fitness);                      % cumulative probabilities
Note that in the above code the objective function is shifted by minf times the absolute value
of the minimum fitness. For instance, if minf=1.01 the function is shifted 1% beyond the minimum
fitness value, so that the fitness function avoids negative values.
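For example, if the fitness values of a three-member population are {-2, 1, 3} and minf=1.01,
the population is shifted by 1.01*|-2| = 2.02, giving {0.02, 3.02, 5.02}, so the cumulative
selection probabilities remain well defined.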
In this project three selection methods were implemented. The first one is the roulette
wheel (stochastic) method with two pointers. That is:
case 'stochastic'
    for i=1:popsize_new/2
        pointer=0.5*rand+(0:0.5:0.5)';        % two pointers, half a turn apart
        selected(2*i-1,:)=pop(find(fitness>=pointer(1),1),:);
        selected(2*i,:)=pop(find(fitness>=pointer(2),1),:);
    end
The second approach is stochastic universal sampling. In one turn of the wheel, all
the population members that will form the mating pool are selected. This is
done by placing as many equally spaced pointers on the roulette wheel as members to be selected. That is:
case 'general stochastic'
    pointer=rand*1/popsize_new+(0:1/popsize_new:1-1/popsize_new)';
    for i=1:popsize_new
        selected(i,:)=pop(find(fitness>=pointer(i),1),:);
    end
The third approach is based on Deb's method (Coello, 2002), where a binary tournament
selection with a set of rules is applied to build the mating pool for the new generation.
The following rules are applied when comparing two individuals:
a. A feasible solution is always preferred over an unfeasible one.
b. Between two feasible solutions, the one having a better objective function value
is preferred.
c. Between two unfeasible solutions, the one having smaller constraint violation is
preferred.
The code for applying this approach can be written as follows:
case 'deb'
    selected=[];                              % mating pool, built row by row
    for i=1:popsize_new
        j=find(fitness>=rand,1);
        k=find(fitness>=rand,1);
        competitor1=pop(j,:);
        competitor2=pop(k,:);
        f1=pop(j,dimension*stringlength+dimension+1);
        f2=pop(k,dimension*stringlength+dimension+1);
        c1=pop(j,dimension*stringlength+dimension+2);
        c2=pop(k,dimension*stringlength+dimension+2);
        if (c1==0 && c2==0)                   % both feasible: better fitness wins
            if f1>f2
                selected=[selected; competitor1];
            else
                selected=[selected; competitor2];
            end
        end
        if (c1~=0 && c2~=0)                   % both unfeasible: smaller violation wins
            if abs(c1)<abs(c2)
                selected=[selected; competitor1];
            else
                selected=[selected; competitor2];
            end
        end
        if (c1==0 && c2~=0)                   % only the first is feasible
            selected=[selected; competitor1];
        end
        if (c1~=0 && c2==0)                   % only the second is feasible
            selected=[selected; competitor2];
        end
    end
Crossover : Once the mating pool is obtained, the crossover is performed. In this code
two crossover methods were implemented: 1) single-point crossover and 2) multi-point
crossover. That is:
function [child1,child2]=cross_running(parent1,parent2,stringlength,dimension,pc)
if rand < pc
    cross=2;                                  % 1: single point, 2: multi-point
    if cross==1
        cpoint=round(rand*(2*stringlength-1))+1;
        child1=[parent1(:,1:cpoint) parent2(:,cpoint+1:stringlength*dimension)];
        child2=[parent2(:,1:cpoint) parent1(:,cpoint+1:stringlength*dimension)];
    end
    if cross==2                               % one crossover point per variable
        cpoint=round((stringlength-1)*rand(1,dimension))+1;
        for j=1:dimension
            child1((j-1)*stringlength+1:j*stringlength)= ...
                [parent1((j-1)*stringlength+1:(j-1)*stringlength+cpoint(j)) ...
                 parent2((j-1)*stringlength+cpoint(j)+1:j*stringlength)];
            child2((j-1)*stringlength+1:j*stringlength)= ...
                [parent2((j-1)*stringlength+1:(j-1)*stringlength+cpoint(j)) ...
                 parent1((j-1)*stringlength+cpoint(j)+1:j*stringlength)];
        end
    end
else
    child1=parent1;
    child2=parent2;
end
end
Mutation : According to the probability of mutation, one gene in some of the population
members is flipped from 1 to 0 or vice versa, so that mutation is performed. That is:
function new_pop=mutation(new_pop,stringlength,dimension,pm)
new_popsize=size(new_pop,1);
for i=1:new_popsize
    if rand<pm
        mpoint=round(rand(1,dimension)*(stringlength-1))+1;
        for j=1:dimension
            new_pop(i,(j-1)*stringlength+mpoint(j))= ...
                1-new_pop(i,(j-1)*stringlength+mpoint(j));   % flip the bit
        end
    end
end
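With the operators above (encoding, decoding, selection, crossover and mutation), one generation
of the canonical GA can be assembled roughly as follows. This is only a sketch under assumed
interfaces: the way the mating pool returned by selection is split into parents and the use of
generational replacement are assumptions, not necessarily the implementation used for the
results reported below.

% Sketch of a possible main loop (popsize assumed even)
popsize=200; stringlength=12; dimension=2;
pc=0.75; pm=0.05; minf=1.05; maxgen=200;
x_bound=[-2.048 2.048; -2.048 2.048];          % e.g. bounds for F2
pop=encoding(popsize,stringlength,dimension);
for gen=1:maxgen
    pop=decoding(pop,stringlength,dimension,x_bound);            % phenotype, f and c
    selected=selection(pop,popsize,stringlength,dimension,minf); % mating pool
    new_pop=[];
    for i=1:2:popsize
        p1=selected(i,1:stringlength*dimension);                 % genotype of parent 1
        p2=selected(i+1,1:stringlength*dimension);               % genotype of parent 2
        [c1,c2]=cross_running(p1,p2,stringlength,dimension,pc);
        new_pop=[new_pop; c1; c2];
    end
    new_pop=mutation(new_pop,stringlength,dimension,pm);
    pop=[new_pop zeros(popsize,dimension+2)];    % placeholders refilled by decoding
end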
Niching and Speciation : Niching methods extend simple GAs by promoting the forma-
tion of stable subpopulations in the neighborhood of optimal solutions (Sareni and
Krahenbuhl, 1998). In this project the sharing method is implemented. The method
searches the landscape while reducing the payoff in densely populated regions.
That is:
if niching==1
    d=distanceMatrix(pop(:,dimension*stringlength+1: ...
                         dimension*stringlength+dimension));
    alpha=1;
    t(1:popsize_new)=1;
    for i=1:popsize_new
        a=find(d(i,:)<sigma);                     % neighbours within sigma
        m(i)=sum(t(a)-(d(i,a)/sigma).^alpha);     % niche count (sharing function)
        fitness(i)=fitness(i)/m(i);               % shared fitness
    end
end
where d is a matrix of the Euclidean distances between the population members.
Here the distance is measured in the phenotypic space.
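The helper distanceMatrix is not listed in the report; a minimal sketch consistent with this
description (pairwise Euclidean distances between the decoded variables) could be:

function d=distanceMatrix(X)
% X: popsize-by-dimension matrix of decoded (phenotype) variables
n=size(X,1);
d=zeros(n,n);
for i=1:n
    for j=i+1:n
        d(i,j)=norm(X(i,:)-X(j,:));    % Euclidean distance in phenotype space
        d(j,i)=d(i,j);                 % the matrix is symmetric
    end
end
end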
3 Parameters of the Implemented GA
Table 1 summarizes the parameters involved in the implemented GA. For each parameter,
six values were tested. Their effect on the solution is studied by varying one parameter at a time. The
parametric study is performed by considering the test functions listed in Table 2, and a set
of constrained problems chosen from Floudas and Pardalos (1990).
Table 1: Parameters of the implemented GA and values used for the parametric study

Parameter                               Subroutine        Values
popsize (population size)               Encoding          {20 50 70 100 200 400}
st (string length)                      Encoding          {6 9 12 18 24 32}
pc (crossover probability)              Crossover         {0.60 0.65 0.70 0.75 0.85 0.90}
pm (mutation probability)               Mutation          {0.1 0.07 0.05 0.02 0.01 0.0}
minF (minimum fitness function)         Selection         {1.005 1.01 1.02 1.03 1.05 1.10}
sigma (radius of the sharing function)  Niching           {0.05 0.1 0.3 0.5 0.7 1} x bound
penalty factor                          Penalty function  {1 10 50 100 1000 2000}
3.1 Population Size
Increasing the population size increases the computational time. However, a small pop-
ulation size can guide the algorithm to poor solutions. For each function, the population size
was varied according to Table 1. The results for test functions F2 to F8 and constrained
function C1 are shown in Fig. 1. As expected, in most of the cases, the larger the population
size is, the better the solution. However, in some cases the solution did not improve while
increasing the population size; this was because the number of generations was set to 200,
and the results improved as the number of generations increased. For the analyzed functions,
the results show that a population size of 200 represents a proper trade-off between accuracy
and computational time.
Table 2: Test objective functions, unconstrained optimization problems, unimodal and mul-
timodal functions.

Name  Function                                                                                 Limits
F1    f_1 = \sum_{i=1}^{2} x_i^2                                                               -5.12 <= x_i <= 5.12
F2    f_2 = 100 (x_1^2 - x_2)^2 + (1 - x_1)^2                                                  -2.048 <= x_i <= 2.048
F3    f_3 = \sum_{i=1}^{5} \mathrm{int}(x_i)                                                   -5.12 <= x_i <= 5.12
F4    f_4 = \sum_{i=1}^{30} ( i x_i^4 + \mathrm{Gauss}(0,1) )                                  -1.28 <= x_i <= 1.28
F5    1/f_5 = 0.002 + \sum_{j=1}^{25} 1 / ( j + \sum_{i=1}^{2} (x_i - a_{ij})^6 )              -655.36 <= x_i <= 655.35
F6    f_6 = 10V - \sum_{i=1}^{10} x_i \sin(\sqrt{|x_i|}),  10V = 4189.829101                   -500 <= x_i <= 500
F7    f_7 = 20A + \sum_{i=1}^{20} ( x_i^2 - 10 \cos(2\pi x_i) ),  A = 10                       -5.12 <= x_i <= 5.12
F8    f_8 = 1 + \sum_{i=1}^{10} x_i^2/4000 - \prod_{i=1}^{10} \cos(x_i/\sqrt{i})               -500 <= x_i <= 500
3.2 String length
The string length defines the size of the search grid: the larger the string length,
the more refined the grid, so the resolution increases. However, increasing the string
length also increases the computational time. A parametric study of the string length
versus the computational time shows that the computational time increases
almost linearly with the string length. From this study, the string length was set
equal to 12.
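For example, with a string length of 12 and a variable bounded by -5.12 <= x_i <= 5.12, the
grid step is (5.12 - (-5.12))/(2^12 - 1), which is approximately 0.0025.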
3.3 Crossover
Table 3 shows the objective function values with respect to the crossover probability for F2-F8.
F1 is not included because the results for this test function show an insignificant variation.
The minimum values were obtained when using pc between 70% and 90%.
Table 3: Parametric study for parameter crossover probability.
Function/pc (%) 60 65 70 75 85 90
F2 0.060 0.059 0.060 0.050 0.064 0.051
F3 -24.8 -24.8 -24.8 -25.0 -24.9 -25.0
F4 -6.81 -7.62 -7.47 -6.04 -7.73 -6.64
F5 1.59 1.30 1.20 1.34 1.37 1.30
F6 219.39 266.90 304.18 222.76 214.30 151.23
F7 49.12 54.83 52.04 46.26 47.82 44.556
F8 2.81 3.13 2.89 2.78 2.83 3.02
From the above results, a crossover probability of 75% will be used in Section 4.
3.4 Mutation
Table 4 shows the objective function values with respect to the mutation probability
for F2-F8. The minimum values were obtained when using pm between 2% and 7%.
Figure 1: Analysis of the population size (stringlength=12, pc=75%, pm=5%, minF=1.05,
200 generations, 30 runs): (a) F2; (b) F3; (c) F4; (d) F5; (e) F6 (400 generations); (f) F7
(600 generations); (g) F8 (800 generations); and (h) C1 (400 generations, with niching).
[Plots of objective function value versus population size omitted.]
Based on these results, a mutation probability of 5% will be used in Section 4.
Table 4: Parametric study for parameter mutation probability.
Function/pm (%) 0.0 1.0 2.0 5.0 7.0 10.0
F2 0.084 0.042 0.050 0.062 0.032 0.11
F3 -23.5 -23.8 -24.2 -24.5 -24.5 -24.3
F4 -6.56 -7.74 -7.57 -7.84 -6.83 -6.84
F5 4.31 2.24 1.17 1.17 1.18 1.20
F6 480.10 232.99 272.57 186.66 197.57 204.91
F7 104.26 49.57 50.62 45.74 47.13 44.81
F8 11.13 1.30 1.50 1.97 2.68 4.34
3.5 Minimum Value for Shifting the Function
The GA maximizes fitness while the test problems are minimization problems; thus, the function
within the GA is computed as the negative of the objective function. However, the fitness
function only takes positive values, so, in order to keep the fitness positive, the objective
function is shifted according to the fmin value. The shifting parameter is defined here as the
minimum value of the fitness function (before shifting) plus a percentage of it. The values
tested are the ones listed in Table 1.
Table 5 shows the objective function values with respect to fmin for F2-F8. The minimum
values were obtained when fmin takes values between 1.03 and 1.05. Based on the results,
fmin = 1.05 was used for Section 4. However, since fmin was only evaluated up to 1.05, a
further parametric study is recommended in order to evaluate values from fmin = 1.10 to 1.15.
Table 5: Parametric study for parameter fmin.
Function/fmin 1.005 1.01 1.015 1.02 1.03 1.05
F2 0.052 0.050 0.045 0.063 0.045 0.053
F3 -24.90 -24.90 -24.90 -24.80 -25.00 -24.90
F4 -7.66 -6.70 -5.97 -6.30 -4.61 -7.18
F5 1.30 1.19 1.17 1.18 1.18 1.32
F6 240.69 222.54 195.45 183.58 163.77 143.76
F7 51.00 53.03 49.77 51.24 48.18 43.25
F8 1.33 1.4 1.38 1.38 1.32 1.38
3.6 Niching
In order to evaluate how niching affects the solution of the implemented GA, three test func-
tions were studied. The sharing method was implemented, and the evaluation was performed
by varying the radius sigma of the sharing function. The values are those presented in Table 1.
Table 6 shows the objective function values with respect to sigma for F2, F5 and C1. These
functions were selected because they are multimodal functions with several local minima.
The minimum values were obtained when sigma takes values between 0.7 and 1 times the bound
of the search space. In this case the results improve considerably with respect to the cases
that disregard the sharing method.
Table 6: Parametric study for evaluating the niching (sharing) method.
Function/sigma 0.05 bound 0.1 bound 0.3 bound 0.5 bound 0.7 bound 1 bound
F2 0.2084 0.2232 0.9041 0.4364 0.1908 0.3558
F5 1.6068 1.1782 1.7502 1.4397 1.1677 1.1768
C1 -14.20 -14.84 -14.04 -14.58 -14.78 -15.81
3.7 Constraint Handling Techniques
Two approaches were implemented for dealing with the constraints: the penalty method
and a set of rules based on Deb's method. Figure 2 shows the objective function value for
function C1. The penalty approach requires setting the penalty factor. In the figure,
the numbers 1-6 on the horizontal axis indicate the penalty factors according to the values
of Table 1. The result of Deb's method is represented by bar 7.
Figure 2: Objective function value for function C1. Blue and red bars: penalty method with
and without the niching (sharing) method; green bar: Deb's method; red line: global minimum
of the function. [Bar chart omitted.]
In the figure, bars 1 (both blue and red) represent the results when using the lowest penalty
factor; the solution corresponds to the mean of the minimum values over 30 runs of the
GA. However, this solution violates the constraints, indicating that the penalty factor is too
low. Increasing the factor yields a solution satisfying the constraints. On the other
hand, bars 2-6 show similar results whether or not the sharing method is used. The best result
was obtained using Deb's method (green bar), but both approaches yield similar results as
the number of iterations increases.
Figure 3 shows the objective function value for the constrained problem C5 when imple-
menting the penalty factor method and Deb's method. As can be seen, both methods
provide similar results, but Deb's method finds a solution that violates the constraints.
4 Experimental Results
This section presents the results for the offline and online performance of the GA when solving
test functions F1-F8 and C1-C5. The performance indices were computed as follows:
Offline performance : The offline measure tracks the best value in the current population over
the entire run (number of generations). The offline performance indicates how well the
GA tracked the moving optimum.
Online performance : The online performance is simply the average of all evaluations over
the entire run.
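These indices appear to follow De Jong's standard definitions. As a rough sketch of how they
can be computed, assuming the objective values of every member at every generation are stored
in a hypothetical matrix f_hist of size generations-by-popsize (the actual bookkeeping in the
implementation may differ):

[G,popsize]=size(f_hist);                 % G generations, popsize members
best_so_far=zeros(G,1);
best_so_far(1)=min(f_hist(1,:));
for g=2:G
    best_so_far(g)=min(best_so_far(g-1),min(f_hist(g,:)));  % best value found so far
end
offline_perf=mean(best_so_far);           % offline: average of the best values
online_perf=mean(f_hist(:));              % online: average of all evaluations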
4.1 Unconstrained Problems
First, the results for the set of unconstrained optimization problems are presented.
Table 7 shows the online and offline performance of the algorithm; the values
are expressed as a percentage of the optimal value achieved by the GA across the eight test
problems (Digalakis and Margaritis, 2000). The objective function values are also presented.
Figure 3: Objective function value for function C5, comparing the penalty method (penalty
factor 100) with Deb's method. [Bar chart omitted.]
Table 7: Online and offline performance for unconstrained test problems

Function   Online %   Offline %   Online    Offline
F1         95.24      99.99       0.0168    2.0078E-4
F2         92.83      99.99       0.0379    8.0470E-6
F3         96.43      100         -24.63    -25
F4         44.56      58.11       -3.86     -17.25
F5         6.88       99.80       1.55      0.998
F6         -          -           281.66    6.96
F7         -          -           65.33     30.03
F8         -          -           4.68      0.96
The results in Table 7 indicate that the algorithm was able to find the global minimum for
functions F1-F5. For functions F6-F8, which are multimodal functions with many variables
(10- or 20-dimensional spaces), the algorithm could not find the global minimum, and in
some cases it converged to a local minimum. The evolution of the mean value of the best solution
over 30 runs is plotted in Figure 4. As can be seen from the figure, most of the solutions
seem to converge (horizontal line) as the number of generations increases.
5 Numerical Tests on Constrained Problems
Table 8 summarizes the results from applying the GA to problems 2.1, 2.3,
2.7, 3.1 and 4.6 described by Floudas and Pardalos (1990).
Table 8: Online and offline performance for constrained test problems

Function   Online %   Offline %   Online      Offline
C1         79.32      99.98       -14.49      -15.1
C2         85.75      95.87       -12.8625    -16.99
C3         -          -           -62.40      -100.80
C4         -          -           2.0014E-3   544.43
C5         88.85      99.98       -5.72       -5.48
The results in Table 8 indicate that the algorithm was able to find the global minimum for
functions C1, C2 and C5. For functions C3 and C4 the algorithm could not find the global
minimum. Test function C3 corresponds to a quadratic problem with linear constraints, while
C4 corresponds to a linear objective function subject to nonconvex inequality constraints.
Figure 4: Minimum objective function value (30 runs): (a) F1; (b) F2; (c) F3; (d) F4; (e)
F5; (f) F6; (g) F7; and (h) F8. [Plots of objective function value versus generation omitted.]
These problems happen to be difficult to solve with the implemented GA.
The mean value of the best solution over 30 runs is plotted in Figure 5. The objective function
tends to converge as the number of generations increases. Comparing the unconstrained
functions with the constrained case, the curves for the former happen to be
smoother than those for the latter because of the introduction of the penalty factor for handling
constraints.
Figure 5: Minimum objective function value (30 runs): (a) C1; (b) C2; (c) C3; (d) C4; and
(e) C5. [Plots of objective function value versus generation omitted.]
6 Conclusion
In this report a canonical genetic algorithm was put forward. The algorithm was tested on
13 study cases: eight unconstrained and five constrained problems. The report presented all
the steps necessary to implement the method: encoding, decoding, selection, crossover, and
mutation. In order to handle the constrained problems, two approaches were implemented.
In addition, the sharing method for niching was implemented. The implemented algorithm
shows good performance in most of the study cases. The genetic algorithm can therefore be
considered a suitable approach for finding the optimum of unconstrained and constrained
optimization problems.
References
Coello, C. A. C. (2002). Theoretical and numerical constraint-handling techniques used with
evolutionary algorithms: a survey of the state of the art. Computer Methods in Applied
Mechanics and Engineering, 191(11-12):1245-1287.
Digalakis, J. and Margaritis, K. (2000). An experimental study of benchmarking functions
for genetic algorithms. In Systems, Man, and Cybernetics, 2000 IEEE International Con-
ference on, volume 5, pages 3810-3815.
Floudas, C. A. and Pardalos, P. M. (1990). A Collection of Test Problems for Constrained
Global Optimization Algorithms. Springer-Verlag New York, Inc., New York, NY, USA.
Sareni, B. and Krahenbuhl, L. (1998). Fitness sharing and niching methods revisited. IEEE
Transactions on Evolutionary Computation, 2(3):97-106.