
Simple Genetic Algorithm (by TRB)

Genetic Algorithms (GA) are the most popular techniques in evolutionary computing [72].
Various applications of GAs have been reported in the literature. In this research, a GA is
proposed for optimizing the HEN cleaning schedule; no report has been found in the
literature on the application of GAs to HEN cleaning schedule optimization.

The major differences between GAs and conventional optimization methods are summarized in
Table 5.1. A GA is recommended for solving optimization problems under the following
conditions [72]:

1 The search space is large, complex, or unpredictable

2 Domain knowledge is scarce or expert knowledge is difficult to encode to narrow the search
space

3 Mathematical analysis is unavailable

4 Traditional search methods fail.

Among the various GA variants reported, the simple GA is the easiest to implement, the best known, and the most widely employed.

Table 5.1. The major differences between GA and conventional optimization methods

No  Conventional methods                           Genetic Algorithm
1   Work with the solution itself.                 Work with a coding of the solution set.
2   Almost all conventional techniques             Always operate on a whole population of
    search from a single point.                    points (strings).
3   Use derivatives for evaluation.                Use a fitness function for evaluation; as a
                                                   result, GAs can be applied to any kind of
                                                   continuous or discrete optimization problem.
4   For continuous optimization, apply             Use probabilistic transition rules.
    deterministic transition rules.

GAs have a few disadvantages: they are not well suited to real-time applications and can
take a long time to reach the optimal solution. It should be noted that in most cases where
conventional optimization methods can be applied, GAs are much slower because they do not
take secondary information such as derivatives into account. The real potential of GAs lies
in problems whose objective functions are non-differentiable or even discontinuous, such as
the optimization of the HEN cleaning schedule.

In the simple GA, the cleaning period of each heat exchanger is encoded within a chromosome.
A generation contains a population of different chromosomes. The chromosomes are processed using
three operators, namely selection, crossover, and mutation. In this study, elitism is also
implemented in the optimization of the cleaning schedule. Elitism is a mechanism that guarantees
that the top-fit chromosomes in a population are always carried over to the next generation.

Selection

The role of parent selection (mating selection) is to distinguish among individuals based on
their quality so that the better individuals become parents of the next generation. Parent
selection is probabilistic: high-quality individuals get a higher chance of becoming parents
than low-quality ones. The proportionate reproduction operator is commonly used, where a
string is selected from the mating pool with a probability proportional to its fitness.

Roulette selection is one of the traditional GA selection techniques. The principle of roulette
selection is a linear search through a roulette wheel with the slots in the wheel weighted in
proportion to the individual’s fitness values. A target value is set, which is a random proportion
of the sum of the fitness values in the population. The population is stepped through until the
target value is reached. The wheel is spun N times, where N is the number of individuals in the
population. On each spin, the individual under the wheel’s marker is chosen to be in the pool of
parents for the next generation.

The method applied is as follows:

1) Sum the total expected value of the individuals in the population. Let it be T.
2) Repeat N times:
i. Choose a random integer, r, between 0 and T.
ii. Loop through the individuals in the population, summing the expected values, until the
sum is greater than or equal to r. The individual whose expected value pushes the sum
over this limit is the one selected.
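
A minimal MATLAB sketch of this roulette-wheel procedure is given below. The fitness values, variable names, and the use of a real-valued target r (rather than an integer) are illustrative choices for this sketch only and are not taken from the thesis code.

% Roulette-wheel (fitness-proportionate) selection sketch.
fitness = [4; 1; 3; 2];            % illustrative fitness values (assumed non-negative)
N       = numel(fitness);          % one spin per individual
T       = sum(fitness);            % total fitness ("T" in the procedure above)
parents = zeros(N,1);              % indices of the selected parents

for n = 1:N
    r = rand * T;                  % random target in [0, T)
    s = 0;                         % running sum of fitness values
    for i = 1:N
        s = s + fitness(i);
        if s >= r                  % the individual pushing the sum past r is selected
            parents(n) = i;
            break;
        end
    end
end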

Crossover

Crossover is the method of taking two parent solutions and producing a child from them. After
the selection (reproduction) process, the population is enriched with better individuals.
Reproduction makes clones of good strings but does not produce new ones.

Crossover is a recombination operator that is carried out in three steps:

1. The reproduction operator randomly chooses a pair of individual strings for mating.
2. A crossover site is chosen at random along the string length.
3. The position values are exchanged between the two strings following the crossover site, i.e.,
all the bits before and after this point are copied from the first and second parents,
respectively, to produce the child, as shown in Figure 5.1.
Figure 5.1. Single point crossover

If an appropriate crossover site is chosen, better children can be obtained by combining good
parents; otherwise, crossover can severely degrade string quality. The basic parameter of the
crossover method is the crossover probability (Pc), which expresses the proportion of crossover
in the new generation. Without crossover, offspring are exact copies of their parents. If
crossover is applied, offspring are built from parts of both parents' chromosomes. If the
crossover probability is 100%, all offspring are made by crossover; if it is 0%, the whole new
generation consists of exact copies of chromosomes from the old population. Crossover is
performed in the expectation that the new chromosomes will contain the good parts of the old
chromosomes and thus be better. However, it is good to retain some part of the old population
so that it survives to the next generation.
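
A short MATLAB sketch of single-point crossover governed by the crossover probability is shown below; the parent bit strings and the Pc value are illustrative only.

% Single-point crossover of two bit strings with crossover probability Pc.
Pc      = 0.7;                                    % crossover probability (illustrative)
parent1 = [1 0 0 1 1 0 1 0];
parent2 = [0 1 1 0 0 1 0 1];
L       = numel(parent1);

if rand <= Pc
    site   = randi(L-1);                          % crossover site, between 1 and L-1
    child1 = [parent1(1:site) parent2(site+1:L)]; % head of parent 1, tail of parent 2
    child2 = [parent2(1:site) parent1(site+1:L)]; % head of parent 2, tail of parent 1
else
    child1 = parent1;                             % no crossover: exact copies of the parents
    child2 = parent2;
end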

Mutation

Mutation plays the role of improving the genetic material by randomly disturbing the genetic
information. Mutation has traditionally been considered a simple genetic operator. Whereas
crossover is expected to exploit the current chromosomes in the population to find better ones,
mutation is supposed to help explore the whole search space. It introduces new chromosomes
into the population by randomly modifying some of their building blocks. Mutation helps the
search escape from the trap of local minima and maintains chromosome diversity in the
population. It also keeps the gene pool well stocked, thus ensuring the dynamics of gene
processing. Mutation of a bit involves flipping it, changing 0 to 1 and vice versa. A simple
mutation inverts the value of each gene with a low probability.

The essential parameter of the mutation operation is the mutation probability (Pm), which
expresses the proportion of genes (bits) that will be mutated. Without mutation, the
offspring are copied without any change. If mutation is performed, one or more parts of a
chromosome are modified. If the mutation probability is 100%, the whole chromosome is modified;
if it is 0%, nothing is modified. The mutation probability should be low, since a GA with a
high mutation probability degenerates into a pure random-search algorithm.

Figure 5.2 illustrates the mutation concept. A parent chromosome is considered and a child
chromosome is produced by mutation. If the random value generated for a bit is lower than the
mutation probability, the corresponding bit in the parent chromosome is flipped (0 to 1 or
1 to 0) to generate the child chromosome. In the example, mutation occurs at two positions
(the third and fourth bits): the corresponding bits in the parent chromosome are flipped and
the child is generated.

Parent:  1 0 0 1 1
Child:   1 0 1 0 1

Figure 5.2. Mutation flipping
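
The bit-flip mutation of Figure 5.2 can be sketched in MATLAB as follows; the parent chromosome matches the figure, while the Pm value is illustrative.

% Bit-flip mutation: each gene is flipped with probability Pm.
Pm     = 0.05;                     % mutation probability (illustrative, kept low)
parent = [1 0 0 1 1];              % parent chromosome from Figure 5.2
child  = parent;

for j = 1:numel(parent)
    if rand < Pm                   % random value below Pm: flip this bit
        child(j) = 1 - parent(j);  % 0 becomes 1 and 1 becomes 0
    end
end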

Elitism

The method by which the offspring and parents merge to form the new population for the next
generation is another GA design consideration that has a direct effect on the optimization
performance. A non-elitist method replaces all individuals in the current population, while an
elitist one always keeps the best solutions found in the previous population. The former approach
may result in slow convergence, while the latter may cause the search to be trapped in a local
optimum.

An elitism mechanism is employed in the GA for faster convergence. The elitism method
involves picking a number of elite chromosomes (5% of the population size) from the current
population and combining them with the processed chromosomes in the evolving population. The
flowchart of elitism is shown in Figure 5.3.

Figure 5.3. Flowchart of elitism
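
A minimal sketch of this elitism step, assuming the population is stored row-wise with one fitness value per individual, is given below. The 5% elite fraction follows the text, while the population size, chromosome length, and the stand-in population, fitness, and offspring data are illustrative.

% Elitism: carry the top 5% of the current population unchanged into the
% next generation and fill the remainder with GA-processed offspring.
Npop       = 100;                            % population size (illustrative)
Nbits      = 55;                             % chromosome length (11 exchangers x 5 bits)
population = randi([0 1], Npop, Nbits);      % current population (stand-in data)
fitness    = rand(Npop, 1);                  % fitness of each individual (stand-in data)
offspring  = randi([0 1], Npop, Nbits);      % selection/crossover/mutation output (stand-in)

nElite        = round(0.05 * Npop);          % number of elite chromosomes (5%)
[~, order]    = sort(fitness, 'descend');    % indices sorted from best to worst
elite         = population(order(1:nElite), :);
newPopulation = [elite; offspring(1:Npop-nElite, :)];   % elites plus processed chromosomes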

1.1 Optimization of Cleaning Schedule of HEN using GA

In order to solve the mixed-integer nonlinear programming (MINLP) problem of the HEN cleaning
schedule, the simple genetic algorithm is utilized. The solution provides the optimum cleaning
period for the 11 heat exchangers by minimizing the objective in (5.8). The cleaning period of
each heat exchanger is designed to be between 0 and 31 months, represented by 5 bits in a
chromosome (Table 5.2). Hence, the total number of bits in a chromosome for 11 heat exchangers
is 55. The fitness value is obtained by simulating the HEN under fouling conditions at different
cleaning-period scenarios over the time horizon 0 to tF.

Table 5.2. Cleaning period of a heat exchanger in a chromosome

Binary       Decimal   Cleaning period (month)
0 0 0 0 0    0         No cleaning
0 0 0 0 1    1         1
0 0 0 1 0    2         2
...          ...       ...
1 1 1 1 1    31        31
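
As an illustration of this encoding, the sketch below decodes an arbitrary 55-bit chromosome into the 11 cleaning periods. The chromosome here is a random stand-in, and bi2de (Communications Toolbox, also used in the script that follows Figure 5.4) converts each 5-bit gene to its decimal value.

% Decode a 55-bit chromosome into 11 cleaning periods (0..31 months).
nHex       = 11;                                  % number of heat exchangers
bitsPerHex = 5;                                   % 5 bits per exchanger -> 0..31 months
chromosome = randi([0 1], 1, nHex*bitsPerHex);    % stand-in chromosome

cleaningPeriod = zeros(1, nHex);
for i = 1:nHex
    bits = chromosome((i-1)*bitsPerHex+1 : i*bitsPerHex);
    cleaningPeriod(i) = bi2de(bits, 'left-msb');  % 0 means no cleaning (Table 5.2)
end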
The flowchart of the solution of the HEN cleaning schedule optimization problem is shown in Figure
5.4. A brief explanation of the steps shown in the flowchart is as follows:

Step 1. GA creates a population of 100 individuals of 55 bits each, with each bit assigned randomly.

Step 2. Calculate the fitness of all chromosomes in the population using the HEN process simulation
model under fouling conditions.

Step 3. GA processing

a. Select a pair of parent chromosomes from the current population, the probability of
selection being an increasing function of fitness. Selection is done with replacement,
meaning that the same chromosome can be selected more than once to become a parent.
b. With probability Pc = 0.7 (the crossover probability or crossover rate), cross over the pair at
a randomly chosen point (chosen with uniform probability) to form two offspring. If no
crossover takes place, form two offspring that are exact copies of their respective parents.
c. Mutate the two offspring at each locus with probability Pm = 0.001 (the mutation
probability or mutation rate), and place the resulting chromosomes in the new population.
Step 4. Replace the current population with the new population.

Step 5. Calculate the fitness of each chromosome in the new population.

Step 6. The elitism mechanism automatically keeps the top-fit 5% of individuals from the current
population. The rest of the new population is filled by the normal selection, crossover,
and mutation procedure.

Step 7. Stopping criteria. If the change in the fitness value between the current and the previous
generations is less than the tolerance limit, it is assumed that the optimum has been reached.
Otherwise, go to Step 3. In this case, the maximum number of generations is fixed at 100,
and if no optimum is found within 100 generations, the optimization is restarted with a
different starting population.
Figure 5.4. GA flowchart
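
The MATLAB script below illustrates the simple GA operators described above. As written, it maximizes a two-variable analytical test function, with each variable encoded as a 10-bit gene and decoded to the range 0 to 10; for the HEN cleaning-schedule optimization, the cost evaluation would instead call the HEN simulation model under fouling conditions.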
% Simple GA script: population initialization, fitness evaluation, selection,
% crossover, mutation, and elitism.
clear all;
close all;
clc;

Hasilmax=[];                    % best chromosome of the current generation
fitnessvector =[];

Population = 100;               % population size
MaxGeneration = 200;            % maximum number of generations
Kromosom = 20;                  % chromosome length in bits (two 10-bit genes)
elitism = 0.1;                  % elite fraction
ProbCrossOver = 0.8;            % crossover probability (Pc)
ProbMutate = 0.5;               % mutation probability (Pm)
% Individu = [];
IndividuInteger = [];
eIntSc = [];
esched = [];
Datafit = [];
Data1fit = [];
DataSort = [];
ElitIndividu = [];
HMI = [];
DataFGAfit = [];
maxall = [];

%=====Generate the initial population=====
Individu = floor(rand(Population,Kromosom)+rand());   % random 0/1 strings

%=====Binary to Int=====
for i = 1:(Kromosom./10)
for j = 1:Population
IndividuInteger(j,i) = bi2de(Individu(j,((i*10)-9):(i*10)),'left-msb');
end
end

Datafit = [];

for k = 1:Population

X0 = (((IndividuInteger(k,1)-0)/1024)*10);   % decode gene 1 (0..1023) to 0..10
Y0 = (((IndividuInteger(k,2)-0)/1024)*10);   % decode gene 2 (0..1023) to 0..10

%=======fitness function==============
% cost = -(X0*sin(4*X0)) - (1.1 * Y0 * sin(2*Y0));
cost = -((((X0.^2)+(Y0.^2)).^0.5).*cos((X0)-(Y0))).*exp(cos(((X0).*(Y0+5))./7));

%======================================
fitness = cost;
Datafit = [Datafit;fitness];
[fitemax,nmax] = max(Datafit);

end
disp('GA Processing');
for Generasi = 1:MaxGeneration

%=====GA Processing=====

if (Generasi > 1)

%=====Sort the previous population by fitness (last column)=====
sort_fit = sortrows(sort, Kromosom + 1);   % "sort" holds [Individu Datafit] from the previous generation
Individu1 = sort_fit(round((1-elitism)*Population+1):Population,:);   % fittest individuals (mating pool)
Remain = sort_fit(round(elitism*Population) + 1:Population, :);   % individuals carried over to the next population

X = Individu1;
M = size(X,1);

for i=1:M
fitnessvector(i) = X(i,Kromosom + 1);
end

fitnessvector = fitnessvector';

%=====Setting Probability=====
for i=1:M
Probability(i) = fitnessvector(i) / sum(fitnessvector);
end

for i=2:M
Probability(i) = Probability(i) + Probability(i-1);   % cumulative selection probabilities (roulette wheel)
end

for i = 1:M
n=rand;
k=1;
for j =1:M-1
if (n>Probability(j))
k = j+1;
end
end
Xparents(i,:) = X(k,:);
end

%=====Crossover=====
[M,d] = size(Xparents);
Xcrossed = Xparents;
for i=1:2:M-1
c = rand;
if (c<=ProbCrossOver)
p = ceil((d-1)*rand);   % random crossover site
Xcrossed(i,:) = [Xparents(i,1:p) Xparents(i+1,p+1:d)];
Xcrossed(i+1,:) = [Xparents(i+1,1:p) Xparents(i,p+1:d)];
end
end
if (M/2~=floor(M/2))
c = rand;
if (c<=ProbCrossOver)
p = ceil((d-1)*rand);
str = ceil((M-1)*rand);
Xcrossed(M,:) = [Xparents(M,1:p) Xparents(str,p+1:d)];
end
end

%=====Mutation=====
[M,d] = size(Xcrossed);
Xnew = Xcrossed;
for i=1:M
for j=1:d
p = rand;
if (p<=ProbMutate)
Xnew(i,j) = 1-Xcrossed(i,j);
end
end
end

%=====Assemble the new population=====

Individu = [Xnew(:,1:Kromosom);Remain(:,1:Kromosom)];   % offspring plus retained individuals
end

ElitIndividu = [ElitIndividu; Individu];

%=====Binary to integer (genes in rows, individuals in columns)=====
for i = 1:(Kromosom./10)
for j = 1:Population
IndividuInteger(i,j) = bi2de(Individu(j,((i*10)-9):(i*10)),'left-msb');
end
end

Datafit = [];

for po = 1:Population
X0 = (((IndividuInteger(1,po)-0)/1024)*10);   % decode gene 1 to 0..10
Y0 = (((IndividuInteger(2,po)-0)/1024)*10);   % decode gene 2 to 0..10

%=======fitness function==============
% cost = -(X0*sin(4*X0)) - (1.1 * Y0 * sin(2*Y0));
cost = -((((X0.^2)+(Y0.^2)).^0.5).*cos((X0)-(Y0))).*exp(cos(((X0).*(Y0+5))./7));
%=====================================
fitness = cost;
error = 0.001;   % tolerance value (not used further in this script)
Datafit = [Datafit;fitness];
end

Data1fit = Datafit;
[fitnessmax, nmax] = max(Data1fit);
DataFGAfit = [DataFGAfit;fitnessmax];   % best fitness of this generation
IndividuMax = Individu(nmax,:);   % best chromosome of this generation
IndividuMaxLast = IndividuMax;
Hasilmax = IndividuMax;
sort = [Individu Datafit];   % population with fitness appended (the name shadows MATLAB's sort function)
maxall = [maxall; sort];
for i = 1:(Kromosom./10)
HasilMaxInt(1,i) = bi2de(Hasilmax(1,((i*10)-9):(i*10)),'left-msb');
end
HMI = [HMI; HasilMaxInt(1,1), HasilMaxInt(1,2)];
end

plot(DataFGAfit);   % best fitness versus generation
hold on

[fitnessmaxf, nmaxf] = max(DataFGAfit);   % generation with the overall best fitness

% Decode the best chromosome back to the 0..10 range
X0maxfix = (((HMI(nmaxf,1)-0)/1024)*10);
Y0maxfix = (((HMI(nmaxf,2)-0)/1024)*10);

X0maxfix
Y0maxfix
[fitnessmaxf, nmaxf] = max(DataFGAfit)
