
Genetic algorithms (GAs) are search techniques based on the mechanism of natural selection.
The GA was developed by John Holland, professor of Computer Science and Psychology at the University of Michigan, in 1975.

His two aims were:
To understand the adaptive processes of natural systems.
To design artificial system software that retains the robustness of natural systems.


Important steps involved in GA:
1. Encoding
2. Initialization
3. Selection
4. Reproduction (crossover and mutation)
5. Termination criteria

Representation
1. Binary strings
2. Real-number coding
3. Integer coding

Binary strings
Difficult to apply because it is not a natural coding.
Example: 01110011 is a chromosome; each bit is a gene.

Real-number coding
Mainly useful for constrained optimization problems.
Example chromosome: 3.44 4.65 6.29 1.34 8.27

Integer coding
Mainly applicable to combinatorial optimization problems.
Example chromosome: 5 9 5 6 1

GA works on the coding space and the solution space alternately.
Genetic operations work on the coding space (chromosomes).
Evaluation and selection work on the solution space.
[Diagram: encoding maps the solution space to the coding space; decoding maps the coding space back to the solution space.]

Three critical issues arise concerning the encoding and decoding between chromosomes and solutions:
1. The feasibility of a chromosome.
2. The legality of a chromosome.
3. The uniqueness of the mapping.

Feasibility, as the name suggests, means whether a solution decoded from a chromosome lies in the feasible region or not.
Legality means whether a chromosome represents a solution to the given problem or not.
The mapping has 3 cases:
1-to-n mapping (undesired)
n-to-1 mapping (undesired)
1-to-1 mapping (desired)

[Figure: feasibility and legality — the coding space contains legal and illegal chromosomes; the solution space contains feasible and infeasible solutions.]
[Figure: 1-to-n, n-to-1, and 1-to-1 mappings from the coding space to the solution space.]

Create the initial population of chromosomes:
1. Randomly
2. By local search
3. From known feasible solutions
But in most cases the initial population is randomly generated.
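The random option can be sketched in a few lines of Python (function and parameter names are illustrative, not from the slides):

```python
import random

def init_population(pop_size, n_bits, seed=None):
    """Randomly create pop_size chromosomes of n_bits bits each."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(n_bits)]
            for _ in range(pop_size)]

# 6 parents with 8-bit chromosomes, as in the example that follows
pop = init_population(pop_size=6, n_bits=8, seed=1)
```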

Example: assume an initial population of 6 parents, each with a chromosome of 8 bits.

Example: the chromosomes of the initial population are substituted into the objective function, giving one fitness value per parent:
14.3567, 9.3245, 21.7632, 17.6543, 16.5463, 18.2453

From the previous slide, chromosome 4 is the fittest one (highest fitness value) and chromosome 3 is the weakest one (lowest fitness value).

It is clear that a fitter chromosome should have a greater chance of becoming a parent than a less fit chromosome.

We must ensure that the best individuals have a high probability of becoming parents.
But worse chromosomes must also have an opportunity to reproduce: they can contribute useful genetic information in the reproduction process.

There are three basic issues involved in the selection process:
1. Sampling space
2. Sampling mechanism
3. Selection probability

Two types of sampling space:
Regular sampling space
Enlarged sampling space

Regular sampling space
A produced offspring may be weaker than its parent, so if we replace a parent with its offspring we may lose some fitter chromosomes.
Different replacement techniques have therefore been examined.
In one, the dying parent is chosen as the parent that most resembles the new offspring.

[Figure: regular sampling space — crossover and mutation produce offspring from the initial sample space; the offspring replace their parents in the new sample space.]

Enlarged sampling space
When selection is performed on an enlarged sampling space, both parents and offspring have the same chance of competing for survival.
If λ offspring are generated from μ parents, then the number of chromosomes in the enlarged sampling space is (μ + λ).

[Figure: enlarged sampling space — crossover and mutation produce offspring; the new sample space contains both the initial sample space and the offspring.]

Sampling mechanism: how to select chromosomes from the sampling space.
1. Stochastic sampling
Roulette wheel selection: the survival probability of each individual is proportional to its fitness value.
Randomly generate a number between 0 and 1 and select the individual whose zone of the wheel contains it.
The zone of the k-th individual is

  p_k = f_k / Σ_{j=1}^{pop_size} f_j
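A minimal Python sketch of this spin (helper name illustrative): each individual owns a zone of size f_k on a wheel of circumference Σ f_j, and a uniform spin picks the zone it lands in.

```python
import random

def roulette_select(fitnesses, rng):
    """Pick one index with probability f_k / sum(f_j), the k-th zone of the wheel."""
    total = sum(fitnesses)
    r = rng.random() * total          # the spin: a point in [0, total)
    cum = 0.0
    for k, f in enumerate(fitnesses):
        cum += f
        if r < cum:
            return k
    return len(fitnesses) - 1         # guard against floating-point rounding

rng = random.Random(0)
fit = [14.3567, 9.3245, 21.7632, 17.6543, 16.5463, 18.2453]
picks = [roulette_select(fit, rng) for _ in range(1000)]
```

Over many spins, the fittest chromosome (index 2 here) should be picked more often than the weakest (index 1).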

2. Deterministic sampling
Select the best pop_size individuals from the parents and offspring.
No duplication of individuals.

3. Mixed sampling
Both random and deterministic sampling are done simultaneously.

The next main step is to generate a second-generation population from those selected as parents.
The main processes are crossover and mutation.

The crossover operator is applied with a crossover probability.
It also depends on the representation.
The recombination must produce valid chromosomes.
Types of crossover:
1. Single-point crossover
2. Double-point crossover
3. Uniform crossover
4. Arithmetic crossover, etc.

Single-point crossover, double-point crossover, uniform crossover: bit-string examples are worked out later in these notes.

Arithmetic crossover: for parents (a, b, c, d, e, f) and (A, B, C, D, E, F), the offspring is
((a+A)/2, (b+B)/2, (c+C)/2, (d+D)/2, (e+E)/2, (f+F)/2).
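Single-point crossover on bit strings can be sketched as follows (a hypothetical helper; the cut site is drawn uniformly from the interior positions):

```python
import random

def single_point_crossover(p1, p2, rng):
    """Swap the tails of two equal-length bit strings at a random cut point."""
    site = rng.randint(1, len(p1) - 1)   # cut between positions site-1 and site
    return p1[:site] + p2[site:], p2[:site] + p1[site:]

rng = random.Random(3)
c1, c2 = single_point_crossover("0100001010", "0111000110", rng)
```

Whatever the cut point, the two children together contain exactly the same bits as the two parents.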

Let the crossover probability be 0.8. For each chromosome a random number is drawn and compared with 0.8:

Random number | Comparison | Chromosome selected for crossover?
0.56          | < 0.8      | no
0.92          | > 0.8      | yes
0.74          | < 0.8      | no
0.21          | < 0.8      | no
0.84          | > 0.8      | yes
0.87          | > 0.8      | yes

The mutation operator is applied with a mutation probability pm.
Rule of thumb: pm = 1 / (number of bits in the chromosome).
The size of the chromosome must be controlled, otherwise the value of pm becomes very low.
Mutation must produce valid chromosomes.
It is the main operator for global search, as it can reach each and every bit of the chromosome.

Some important points:
Provided gene j needs to be mutated, we make a bit-flip change for gene j, i.e., 1 to 0 or 0 to 1. This operator is called bit-flip mutation.
The individual after mutation is called the mutant.
Mutation should not occur very often, because then the GA would in effect turn into random search.
Mutation plays an important role in recovering lost information and in randomly disturbing the genetic information.
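Bit-flip mutation can be sketched as (helper name illustrative):

```python
import random

def bit_flip_mutation(chrom, pm, rng):
    """Flip each bit independently with probability pm (rule of thumb: pm = 1/len(chrom))."""
    return [(1 - b) if rng.random() < pm else b for b in chrom]

rng = random.Random(42)
parent = [0, 1, 0, 0, 0, 0, 1, 0, 1, 0]
mutant = bit_flip_mutation(parent, pm=1 / len(parent), rng=rng)
```

With pm = 0 the parent passes through unchanged; with pm = 1 every bit flips.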

Example: the chromosome has 10 bits, so by the rule of thumb the mutation probability is pm = 1/10 = 0.1.
For each bit a random number is drawn from [0, 1]:
0.21, 0.45, 0.04, 0.61, 0.46, 0.84, 0.05, 0.73, 0.19, 0.44
Any bit whose random number is less than 0.1 (here the third, 0.04, and the seventh, 0.05) mutates, i.e., changes its value from 0 to 1 or from 1 to 0.

In a typical genetic algorithm the population size is constant, so one has to decide which individuals to keep and which to discard.
The replacement strategy chooses the individuals that survive, i.e., are kept for the next iteration, and those that are discarded.
The main replacement strategies are:
1. Generational replacement
2. Steady-state replacement

Generational replacement
In generational replacement the new generation completely replaces the old generation.
This scheme has a big disadvantage: if the whole generation is replaced, the risk of dismissing a very promising solution is high.
De Jong therefore introduced the concept of elitism within generational replacement: the most promising solution, termed the elite, is always kept in the population.

Steady-state replacement
In steady-state replacement only one solution is replaced; all the rest remain the same.

The above steps are repeated until the termination criterion is satisfied.
Every problem has its own termination criterion.
There can be many termination criteria for different types of problem. Some of these are:
1. Reaching the maximum number of generations given in the problem.
2. No improvement for some number of generations.
3. A solution is found that satisfies minimum criteria.

P(t) and C(t) are the parents and offspring in the current generation t.

begin
  t := 0;
  initialize P(t);
  evaluate P(t);
  while (not termination condition) do
    recombine P(t) to yield C(t);
    evaluate C(t);
    select P(t+1) from P(t) and C(t);
    t := t + 1;
  end
end
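A compact Python rendering of this pseudocode, specialized to the sin(x) example used later in these notes; all names and parameter values are illustrative, and a generational scheme with roulette-wheel selection, single-point crossover, and bit-flip mutation is assumed:

```python
import math
import random

def run_ga(fitness, n_bits, pop_size, pc, pm, generations, seed=0):
    """Minimal generational GA following the pseudocode above."""
    rng = random.Random(seed)
    decode = lambda bits: int("".join(map(str, bits)), 2)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(decode(c)) for c in pop]
        total = sum(scores)
        def pick():  # roulette-wheel selection
            r, cum = rng.random() * total, 0.0
            for c, s in zip(pop, scores):
                cum += s
                if r < cum:
                    return c
            return pop[-1]
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick()[:], pick()[:]
            if rng.random() < pc:  # single-point crossover
                site = rng.randint(1, n_bits - 1)
                a, b = a[:site] + b[site:], b[:site] + a[site:]
            # bit-flip mutation on both children
            nxt += [[1 - g if rng.random() < pm else g for g in c] for c in (a, b)]
        pop = nxt[:pop_size]
    return max(pop, key=lambda c: fitness(decode(c)))

# maximize sin(x) on 0 <= x <= pi with a 5-bit coding (decoded value 0..31)
best = run_ga(lambda dv: math.sin(dv * math.pi / 31),
              n_bits=5, pop_size=4, pc=1.0, pm=0.03, generations=20)
```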

1. Genetic algorithms work with a coding of the solution set, not the solutions themselves.
2. Genetic algorithms search a population of solutions, not a single solution.
3. Genetic algorithms use a fitness function, not derivatives or other auxiliary knowledge.

The concept is easy to understand.
Modular, separate from the application.
Supports multi-objective optimization.
Good for noisy environments.
There is always an answer, and the answer gets better with time.
Inherently parallel; easily distributed.

There are many ways to speed up and improve a GA-based application as knowledge about the problem domain is gained.
Easy to exploit previous or alternate solutions.
Flexible building blocks for hybrid applications.
Substantial history and range of use.

When to use a GA:
Alternate solutions are too slow or overly complicated.
An exploratory tool is needed to examine new approaches.
The problem is similar to one already solved successfully with a GA.
You want to hybridize with an existing solution.
The benefits of GA technology meet key problem requirements.

Solving the following problem with a GA:
Maximize sin(x),
subject to 0 ≤ x ≤ π.

In this problem a 5-bit string is needed to represent the variable with one decimal place of accuracy.
Assume the population size = 4.
String (00000) represents the x = 0 solution and string (11111) represents the x = π solution.
To get x from the decoded value DV we use
  x = x_min + ((x_max − x_min) / (2^l − 1)) × DV

For the initial population:

String | DV | x     | f(x)  | fi/favg | AC | Mating pool
01001  |  9 | 0.912 | 0.791 | 1.39    | 1  | 01001
10100  | 20 | 2.027 | 0.898 | 1.58    | 2  | 10100
00001  |  1 | 0.101 | 0.101 | 0.18    | 0  | 10100
11010  | 26 | 2.635 | 0.485 | 0.85    | 1  | 11010

favg = 0.569
DV = decoded value; AC = actual count of copies of the string in the mating pool.

For string 1:
Decoded value: 01001 → 0×2^4 + 1×2^3 + 0×2^2 + 0×2^1 + 1×2^0 = 9
The string value: x = x_min + ((x_max − x_min)/(2^l − 1)) × DV = 0 + ((π − 0)/(2^5 − 1)) × 9 = 0.912
The function value: f(x) = sin(0.912) = 0.791
fi/favg = 0.791/0.569 = 1.39

The proportionate reproduction scheme assigns a number of copies according to a string's fitness.
The expected number of copies for each string is calculated.
For example, for string 10100 the expected number of copies is 2, while for string 00001 it is zero: its fitness is very small compared to the average fitness of the population, so it is eliminated.
The second string is a good one and gets two copies in the mating pool.

Simple problem: maximize f(x) = (x)^(1/2)
subject to 1 ≤ x ≤ 16.
GA approach:
- Representation: binary code, e.g. 100101 → 37
- Population size: 6
- Single-point crossover, bitwise mutation
- Roulette wheel selection
- Random initialization
We show one generation done by hand.

String no | Initial population | Decoded value | x value | f(x) = (x)^(1/2) | P_selection = fi/Σf | Expected count (fi/favg)
1         | 100101             | 37            | 9.81    | 3.13             | 0.18                | 1.07
2         | 011010             | 26            | 7.19    | 2.68             | 0.15                | 0.91
3         | 010110             | 22            | 6.24    | 2.50             | 0.14                | 0.85
4         | 111010             | 58            | 14.81   | 3.85             | 0.22                | 1.31
5         | 101100             | 44            | 11.48   | 3.39             | 0.19                | 1.16
6         | 001101             | 13            | 4.09    | 2.02             | 0.12                | 0.69

Sum = 17.57; average f = 2.92; maximum f = 3.85; average P_selection = 0.166.

For string 100101:
100101 = 1×2^5 + 0×2^4 + 0×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 37
The value of x: x = 1 + 37 × (16 − 1)/(2^6 − 1) = 9.81
Fitness value = (9.81)^(1/2) = 3.13
Total fitness = 3.13 + 2.68 + 2.50 + 3.85 + 3.39 + 2.02 = 17.57
Avg fitness = 17.57/6 = 2.92
Maximum fitness = 3.85
Probability = 3.13/17.57 = 0.18
Avg probability = (0.18 + 0.15 + 0.14 + 0.22 + 0.19 + 0.12)/6 = 0.166

From the calculation it is clear that strings 3 and 6 are not carried to the next step, because their selection probability is below the average.
Strings 4 and 5 each get two copies in the mating pool for crossover.

Roulette wheel selection, crossover and mutation:

Actual counts from the roulette wheel: 1, 1, 0, 2, 2, 0 (strings 1–6), giving the mating pool
{100101, 011010, 111010, 111010, 101100, 101100}.
Pairs from the mating pool are crossed at randomly chosen sites and mutated, producing the new population
101010, 110101, 011100, 101000, 111100, 101010.

Decoding the new population:

Decoded value | x value | f(x) = (x)^(1/2)
42            | 11.00   | 3.32
53            | 13.62   | 3.69
28            | 7.67    | 2.77
40            | 10.52   | 3.24
60            | 15.28   | 3.91
42            | 11.00   | 3.32

Sum = 20.25; average f = 3.37.

From the table it is clear that the average fitness is greater than that of the previous population.

Consider a supply chain of a computer manufacturing industry.
The demand of each customer zone is satisfied by exactly one production plant.
Each production plant supplies exactly one customer zone.

Distance (in km) matrix:

        | Zone 1 | Zone 2 | Zone 3 | Zone 4
Plant 1 |   20   |   25   |   17   |   22
Plant 2 |   14   |    9   |   19   |    6
Plant 3 |   13   |   15   |   23   |   30
Plant 4 |    8   |   25   |   12   |    7

Objective function: minimize the total distance D.

Fitness value of a string: f = 100 − D.
favg = average fitness of all strings.
Selection: expected count = f/favg.

Crossover selection: a random number is drawn for each string and compared with the crossover probability Cp.
In iteration 1 the random numbers of strings 3 and 4 are less than Cp, so strings 3 and 4 are selected for crossover.
In iteration 2 the random numbers of strings 1 and 2 are less than Cp, so strings 1 and 2 are selected for crossover.

[Figure: crossover procedure — parents Pr1 and Pr2 before crossover; children Ch1 and Ch2 after crossover.]

Selection criterion: string number.
Let the crossover probability Cp = 0.8.
Random numbers used for crossover:

String | Iteration 1 | Iteration 2
1      | 0.98        | 0.47
2      | 0.93        | 0.55
3      | 0.45        | 0.88
4      | 0.57        | 0.93

Initial population (the i-th gene of a string gives the plant assigned to customer zone i):
1) 1 2 3 4
2) 2 4 1 3
3) 3 1 4 2
4) 4 3 2 1

Objective function and fitness values:
For string 1: D = 20 + 9 + 23 + 7 = 59; fitness f = 100 − D = 100 − 59 = 41
For string 2: D = 14 + 25 + 17 + 30 = 86; fitness f = 100 − 86 = 14
For string 3: D = 13 + 25 + 12 + 6 = 56; fitness f = 100 − 56 = 44
For string 4: D = 8 + 15 + 19 + 22 = 64; fitness f = 100 − 64 = 36
String  | Objective function value (D) | Fitness value (f) | Expected count | Actual count
1 2 3 4 | 59                           | 41                | 1.21           | 1
2 4 1 3 | 86                           | 14                | 0.41           | 0
3 1 4 2 | 56                           | 44                | 1.30           | 2
4 3 2 1 | 64                           | 36                | 1.07           | 1

ftotal = 135; favg = 33.75

Mating pool:
1) 1 2 3 4
2) 3 1 4 2
3) 3 1 4 2
4) 4 3 2 1

Crossover between strings 3 and 4:
3) 3 1 4 2  →  Ch1: 1 3 2 4
4) 4 3 2 1  →  Ch2: 2 1 4 3

After crossover:
1) 1 2 3 4
2) 3 1 4 2
3) 1 3 2 4
4) 2 1 4 3

Iteration 2: objective function and fitness value calculation
For string 1: D = 20 + 9 + 23 + 7 = 59; fitness f = 100 − 59 = 41
For string 2: D = 13 + 25 + 12 + 6 = 56; fitness f = 100 − 56 = 44
For string 3: D = 20 + 15 + 19 + 7 = 61; fitness f = 100 − 61 = 39
For string 4: D = 14 + 25 + 12 + 30 = 81; fitness f = 100 − 81 = 19

String  | Objective function value (D) | Fitness value (f) | Expected count | Actual count
1 2 3 4 | 59                           | 41                | 1.14           | 1
3 1 4 2 | 56                           | 44                | 1.23           | 2
1 3 2 4 | 61                           | 39                | 1.03           | 1
2 1 4 3 | 81                           | 19                | 0.53           | 0

ftotal = 143; favg = 35.75

Mating pool:
1) 1 2 3 4
2) 3 1 4 2
3) 3 1 4 2
4) 1 3 2 4

Crossover: strings 1 and 2 participate in crossover.
1) 1 2 3 4  →  Ch1: 2 1 4 3
2) 3 1 4 2  →  Ch2: 4 2 3 1

After crossover:
1) 2 1 4 3
2) 4 2 3 1
3) 3 1 4 2
4) 1 3 2 4

Objective function and fitness values:
For string 1: D = 14 + 25 + 12 + 30 = 81; fitness f = 100 − 81 = 19
For string 2: D = 8 + 9 + 23 + 22 = 62; fitness f = 100 − 62 = 38
For string 3: D = 13 + 25 + 12 + 6 = 56; fitness f = 100 − 56 = 44
For string 4: D = 20 + 15 + 19 + 7 = 61; fitness f = 100 − 61 = 39

String  | Objective function value (D) | Fitness value (f) | Expected count | Actual count
2 1 4 3 | 81                           | 19                | 0.54           | 0
4 2 3 1 | 62                           | 38                | 1.08           | 1
3 1 4 2 | 56                           | 44                | 1.25           | 2
1 3 2 4 | 61                           | 39                | 1.11           | 1

ftotal = 140; favg = 35

Maximum fitness value = 44.
The best solution is the assignment 3 1 4 2, with total distance D = 56.
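The whole evaluation can be checked mechanically. The sketch below (names illustrative) encodes the distance matrix, with the plant-2 and plant-4 entries that are hard to read in the slide filled in from the distance sums worked out above:

```python
# D[plant-1][zone-1]; rows are plants 1-4, columns are customer zones 1-4
D = [[20, 25, 17, 22],
     [14,  9, 19,  6],
     [13, 15, 23, 30],
     [ 8, 25, 12,  7]]

def total_distance(assignment):
    """assignment[i] = plant serving customer zone i+1, as in the strings above."""
    return sum(D[plant - 1][zone] for zone, plant in enumerate(assignment))

def fitness(assignment):
    return 100 - total_distance(assignment)
```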

For m1:
(12.1 − (−3.0)) × 10,000 = 151,000
2^17 < 151,000 ≤ 2^18 − 1
So m1 = 18 bits, and similarly m2 = 15 bits.
Therefore the total length of the chromosome is m1 + m2 = 33, e.g.
000001 01010010100 11011110111 11110

Initial population: the initial population is randomly generated.

Selection: in most cases the roulette wheel approach is adopted.
F = sum of all fitness values
  = 19.805119 + 17.380769 + … + 10.252480
  = 178.135372
Probability of chromosome k = fitness of k / F

Chromosome no | Probability | Cumulative probability
1             | 0.111180    | 0.111180
2             | 0.097515    | 0.208695
3             | 0.053839    | 0.262534
4             | 0.165077    | 0.427611
5             | 0.088057    | 0.515668
6             | 0.066806    | 0.582475
7             | 0.100815    | 0.683290
8             | 0.110945    | 0.794234
9             | 0.148211    | 0.942446
10            | 0.057554    | 1.000000

Now we spin the roulette wheel 10 times, and each time we select a single chromosome for the new population. Let the random numbers be
0.301451, 0.881893, 0.177618, 0.197577, 0.322062, 0.350871, 0.343242, 0.766505, 0.583392, 0.032685

For 0.301451 we select chromosome 4, because 0.301451 falls between the cumulative probabilities of chromosomes 3 and 4 (0.262534 and 0.427611).

Bits selected for mutation (with 10 chromosomes of 33 bits there are 330 bits in total):

bit_no | Chromosome no | Bit no within chromosome | Random no
105    | 4             | 6                        | 0.009857
164    | 5             | 32                       | 0.003113
199    | 7             | 1                        | 0.000946
329    | 10            | 32                       | 0.001282

The corresponding decimal values of the variables give, e.g. for chromosome 1,
f(6.159951, 4.109598) = 29.406122.
For the whole new population:
f(6.159951, 4.109598) = 29.406122
f(−0.330256, 4.694977) = 19.763190
f(11.907206, 4.694977) = 5.702781
f(8.024130, 4.170248) = 19.91025
f(9.342067, 5.121702) = 17.958717
f(6.159951, 4.109598) = 29.406122
f(6.159951, 4.109598) = 29.406122
f(−2.687969, 5.361653) = 19.805119
f(0.474101, 4.170248) = 17.370896

Tournament selection
In tournament selection, tournaments are played between two solutions; the better solution is chosen and placed in the mating pool.

Proportionate selection
Solutions are assigned copies, the number of which is proportional to their fitness values.
Based on the roulette-wheel selection method.

Here Fi is the fitness of the i-th solution and N is the total number of solutions.

  p_i = F_i / Σ_{j=1}^{N} F_j,  i ∈ (1, N)

The cumulative probability is P_i = Σ_{j=1}^{i} p_j, and the expected number of copies of a particular solution is p_i × N.

Solution i | Fi   | p_i  | P_i  | p_i × N
1          | 25.0 | 0.25 | 0.25 | 1.25
2          | 5.0  | 0.05 | 0.30 | 0.25
3          | 40.0 | 0.40 | 0.70 | 2.00
4          | 10.0 | 0.10 | 0.80 | 0.50
5          | 20.0 | 0.20 | 1.00 | 1.00

Solution 3 gets 2 copies (the maximum).

For solution 1:
Total sum of fitness F = 25 + 5 + 40 + 10 + 20 = 100
The probability p_1 = 25/100 = 0.25
p_1 × N = 0.25 × 5 = 1.25
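The three columns can be computed directly (helper name illustrative):

```python
def selection_stats(fitnesses):
    """p_i, cumulative P_i, and expected copies p_i * N for proportionate selection."""
    total = float(sum(fitnesses))
    n = len(fitnesses)
    p = [f / total for f in fitnesses]
    cum, P = 0.0, []
    for pi in p:
        cum += pi
        P.append(cum)
    expected = [pi * n for pi in p]
    return p, P, expected

p, P, exp_copies = selection_stats([25.0, 5.0, 40.0, 10.0, 20.0])
```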

In crossover, two strings are picked from the mating pool at random and some portions of the strings are exchanged between them to create two new strings.
The main types of crossover are:
1. Single-point crossover
2. Double-point crossover
3. Uniform crossover

In a single-point crossover operator, a crossing site is chosen at random along the string and all bits to the right of the crossing site are exchanged.

Parent strings:           Offspring:
0 1 0 | 0 0 0 1 0 1 0     0 1 0 | 1 0 0 0 1 1 0
0 1 1 | 1 0 0 0 1 1 0     0 1 1 | 0 0 0 1 0 1 0

In a double-point crossover operator, two crossing sites are chosen at random along the string and all bits between the crossing sites are exchanged.

Parent strings:             Offspring:
0 1 0 | 0 0 0 1 | 0 1 0     0 1 0 | 1 0 0 0 | 0 1 0
0 1 1 | 1 0 0 0 | 1 1 0     0 1 1 | 0 0 0 1 | 1 1 0

In uniform crossover, each bit of the offspring is chosen with probability p (usually p = 0.5) from either parent.

Parent strings:         Offspring:
0 1 0 0 0 0 1 0 1 0     0 1 1 1 0 0 1 1 1 0
0 1 1 1 0 0 0 1 1 0     0 1 0 0 0 0 0 0 1 0

The mutation operator is also responsible for the search aspect of a genetic algorithm.
A bit-wise mutation operator changes a 1 to a 0 and vice versa with a mutation probability pm.
Bit-wise mutation requires the creation of a random number for every bit.

Parent string:  0 1 0 0 0 0 1 0 1 0
Mutated string: 0 1 1 0 0 0 1 0 1 0

Suggested by Goldberg in 1989:
After a bit is mutated, the location of the next mutated bit is determined by an exponential distribution, whose mean is assumed to be 1/pm.
First create a random number r in (0, 1); then estimate the next bit to mutate by skipping
  −(1/pm) ln(1 − r)
bits from the current bit.

GAs work with a population of solutions instead of a single solution.
Some classical direct search methods work under the assumption that the function to be optimized is unimodal; GAs do not impose any such restriction.
GAs use probabilistic rules to guide their search.
GAs can be easily and conveniently used in parallel systems.

Definition: a set of strings with certain similarities at certain string positions is called a schema.
Representation: in binary coding a triplet (0, 1, *) is used to represent a schema, where * stands for 0 or 1.

The order of a schema H, O(H), is defined as the number of defined positions in the schema.
e.g. H = (* 1 0 * * 0 * * *) has 3 defined positions (2, 3 and 6), so O(H) = 3.

The defining length of a schema H, δ(H), is defined as the distance between the outermost defined positions.
e.g. for H = (* 1 0 * * 0 * * *):
δ(H) = rightmost defined position − leftmost defined position = 6 − 2 = 4

The schema theorem estimates the growth of a schema in the next generation.
Goldberg's equation for the lower bound on the schema growth in the next generation, for single-point crossover, is

  m(H, t+1) ≥ m(H, t) × (f(H)/favg) × [1 − pc × δ(H)/(l − 1) − pm × O(H)]

where
m(H, t) = number of copies of schema H in generation t
f(H) = fitness of schema H
favg = average fitness of the population
pc = crossover probability
pm = mutation probability
δ(H) = defining length of H
O(H) = order of H
l = length of the string

Example: maximize sin(x), variable bound 0 ≤ x ≤ π.
Initial population:

String     | x (decoded) | Sin(x) | fi/favg | AC | Mating pool
01001      | 0.912       | 0.791  | 1.39    | 1  | 01001
10100 (H1) | 2.027       | 0.898  | 1.58    | 2  | 10100
00001 (H2) | 0.101       | 0.101  | 0.18    | 0  | 10100
11010      | 2.635       | 0.485  | 0.85    | 1  | 11010

favg = 0.569; pc = 1, pm = 0; AC = actual count.
H1 = (1 0 * * *), H2 = (0 0 * * *)

m(H1, 1) ≥ (1)(0.898/0.569)[1 − 1 × 1/(5 − 1) − 0 × 2] = 1.184, so H1 must increase.
m(H2, 1) ≥ 0.133.
m(H2,1)0.133
After crossover at randomly chosen sites, the new population is:

Mating pool | New string | DV | x     | Sin(x)
01001       | 01000      | 8  | 0.811 | 0.725
10100       | 10101      | 21 | 2.128 | 0.849
10100       | 10010      | 18 | 1.824 | 0.968
11010       | 11100      | 28 | 2.838 | 0.299

favg = 0.710

H1 = (1 0 * * *) represents strings with x varying from 1.621 to 2.330, where the function value varies from 0.725 to 0.999.
H2 = (0 0 * * *) represents strings with x from 0.0 to 0.709, where the function value lies between 0.0 and 0.651.
Since our objective is to maximize the function, we would like more copies of strings representing schema H1 than H2.
From the inequality, we estimate the growth of H1 and H2.

For H1 = (1 0 * * *):
m(H1, 0) = 1
Crossover probability pc = 1
Mutation probability pm = 0
f(H1) = 0.898
Order o(H1) = 2
Defining length δ(H1) = 2 − 1 = 1
Average fitness favg = 0.569

m(H1, 1) ≥ (1) × (0.898/0.569) × [1 − (1.0) × 1/(5 − 1) − 0 × 2] = 1.184

For H2 = (0 0 * * *):
m(H2, 0) = 1
Crossover probability pc = 1
Mutation probability pm = 0
f(H2) = 0.101
Order o(H2) = 2
Defining length δ(H2) = 2 − 1 = 1
Average fitness favg = 0.569

m(H2, 1) ≥ (1) × (0.101/0.569) × [1 − (1.0) × 1/(5 − 1) − 0 × 2] = 0.133
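Both bounds can be reproduced with a one-line function implementing Goldberg's inequality (function name illustrative):

```python
def schema_growth_bound(m, f_H, f_avg, pc, delta, pm, order, l):
    """Lower bound on the number of copies of schema H in the next generation."""
    return m * (f_H / f_avg) * (1 - pc * delta / (l - 1) - pm * order)

b1 = schema_growth_bound(1, 0.898, 0.569, pc=1.0, delta=1, pm=0.0, order=2, l=5)
b2 = schema_growth_bound(1, 0.101, 0.569, pc=1.0, delta=1, pm=0.0, order=2, l=5)
```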

The above calculation suggests that the number of strings representing schema H1 must increase.
For schema H2, no representative string exists in the new population, because the growth of schema H1 is greater than that of schema H2.

Schemata that are
1. short,
2. low-order, and
3. above average
are known as building blocks.
In each generation, building blocks combine to form better and bigger building blocks.

Population size plays a significant role in obtaining the optimal solution.
It depends on the complexity of the problem.
An adequate number of strings should be present in each region of the solution space.

Fig. 1 represents a bimodal Gaussian function.
Number of regions = 2.
H1, with an above-average fitness value, becomes the building block, and the optimal solution is achieved.
[Figure 1: bimodal Gaussian function on [0, 1] with regions H0 and H1; the average value is marked, and H1 contains the higher peak.]

Population size required to solve the above problem = 2k, where k (> 1) is the number of copies of the schema required in each region to adequately represent the fitness variation.

Fig. 2 represents a modified bimodal Gaussian function.
If the number of regions = 2, H0 becomes the building block and only a local optimal solution may be reached.
[Figure 2: modified bimodal Gaussian function f(x) on [0, 1] with regions H0 and H1.]

If the number of regions = 4:
H0 or H1 becomes the building block, and a sub-optimal solution may be reached.
[Figure: the same function divided into four regions H0–H3.]

If the number of regions = 8:
H6 becomes the building block, and the optimal solution is achieved.
Required population size = 8.
[Figure: the same function divided into eight regions H0–H7; the global peak lies in H6.]

GA searches the solution space by:
1. Exploitation — through the selection operation.
2. Exploration — through crossover and mutation.
A balance between exploration and exploitation is required to avoid premature convergence to local optima.

Different types of real-parameter crossover operators are:
1. Linear crossover
2. Naive crossover
3. Blend crossover (BLX)
4. Simulated binary crossover (SBX)
5. Unimodal normally distributed crossover (UNDX)
6. Simplex crossover
7. Fuzzy connectives based crossover
8. Unfair average crossover

Linear crossover:
Three solutions are generated from the two parent solutions xi(1,t) and xi(2,t), where t is the generation number:
1. 0.5(xi(1,t) + xi(2,t))
2. 1.5 xi(1,t) − 0.5 xi(2,t)
3. −0.5 xi(1,t) + 1.5 xi(2,t)
The best two are chosen as the offspring.

Naive crossover:
This crossover operator is similar to the crossover operators used in binary-coded GAs.
Cross sites are only allowed to be chosen at the variable boundaries:
Parent 1: (x1(1,t), x2(1,t), x3(1,t), …, xn(1,t))
Parent 2: (x1(2,t), x2(2,t), x3(2,t), …, xn(2,t))
Offspring 1: (x1(1,t), x2(1,t), x3(2,t), …, xn(1,t))
Offspring 2: (x1(2,t), x2(2,t), x3(1,t), …, xn(2,t))
This crossover operator does not have adequate search power, so the search has to rely mainly on the mutation operator.

For two parent solutions xi(1,t) and xi(2,t) (assuming xi(1,t) < xi(2,t)), BLX-α randomly picks a solution in the range
  [xi(1,t) − α(xi(2,t) − xi(1,t)), xi(2,t) + α(xi(2,t) − xi(1,t))].
If ui is a random number between 0 and 1, the offspring is
  xi(1,t+1) = (1 − γi) xi(1,t) + γi xi(2,t)   … (1)
where γi = (1 + 2α) ui − α.
It has been found that BLX-0.5 (α = 0.5) gives better results than other values of α.

A property of BLX-α: the location of the offspring depends on the position of the parent solutions.
Rewriting equation (1):
  (xi(1,t+1) − xi(1,t)) = γi (xi(2,t) − xi(1,t))
If the difference between the parents is small, the difference between offspring and parent will also be small.
This allows an adaptive search: the entire space is searched early on, and the search stays focused when the population tends to converge.
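A sketch of BLX-α for one variable (helper name illustrative; the module-level random can be replaced by a seeded generator):

```python
import random

def blx_alpha(x1, x2, alpha=0.5, rng=random):
    """Offspring in [x1 - alpha*d, x2 + alpha*d] with d = x2 - x1 (assumes x1 <= x2)."""
    gamma = (1 + 2 * alpha) * rng.random() - alpha
    return (1 - gamma) * x1 + gamma * x2

rng = random.Random(7)
children = [blx_alpha(2.0, 4.0, alpha=0.5, rng=rng) for _ in range(100)]
```

With parents 2.0 and 4.0 and α = 0.5, every child lies in [1.0, 5.0].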

Spread factor:
The spread factor βi is the ratio of the absolute difference in offspring values to that of the parents:
  βi = |xi(2,t+1) − xi(1,t+1)| / |xi(2,t) − xi(1,t)|

Procedure (simulated binary crossover, SBX):
Select parent strings xi(1,t) and xi(2,t).
Choose a random number ui ∈ [0, 1].
Determine βqi such that the area under the following probability curve from 0 to βqi equals ui:
  p(βi) = 0.5(c + 1) βi^c,          if βi ≤ 1;
        = 0.5(c + 1) (1/βi^(c+2)),  otherwise.
Here c is any non-negative real number.

βqi can be calculated from:
  βqi = (2ui)^(1/(c+1)),            if ui ≤ 0.5;
      = (1/(2(1 − ui)))^(1/(c+1)),  otherwise.

Calculate the offspring from:
  xi(1,t+1) = 0.5[(1 + βqi) xi(1,t) + (1 − βqi) xi(2,t)]
  xi(2,t+1) = 0.5[(1 − βqi) xi(1,t) + (1 + βqi) xi(2,t)]
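A direct transcription of this procedure for one variable (helper name illustrative):

```python
import random

def sbx(x1, x2, c=2, rng=random):
    """Simulated binary crossover for one variable."""
    u = rng.random()
    if u <= 0.5:
        beta_q = (2 * u) ** (1 / (c + 1))
    else:
        beta_q = (1 / (2 * (1 - u))) ** (1 / (c + 1))
    y1 = 0.5 * ((1 + beta_q) * x1 + (1 - beta_q) * x2)
    y2 = 0.5 * ((1 - beta_q) * x1 + (1 + beta_q) * x2)
    return y1, y2

y1, y2 = sbx(10.53, 0.68, c=2, rng=random.Random(11))
```

Note that y1 + y2 = x1 + x2 for any beta_q, i.e., SBX preserves the mean of the parents.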

A special case of SBX, with c = 1 and a triangular probability distribution with its apex at the parent solutions and base d|xi(2,t) − xi(1,t)|, is called fuzzy recombination crossover.
d is a tunable parameter.

Unimodal normally distributed crossover (UNDX):
Three or more parent solutions are used to create two or more offspring.
Offspring are created from an ellipsoidal probability distribution with one axis formed along the line joining two of the three parent solutions; the extent in the orthogonal direction is decided by the perpendicular distance of the third parent from that axis.

Simplex crossover:
Select n + 1 parent strings to form a simplex, where n is the number of decision variables.
Calculate the centroid of the parents.
Enlarge the simplex by extending each apex away from the centroid, each apex being placed on the line joining the centroid and the corresponding parent.
With a uniform probability distribution, H solutions (H = 200 is suggested) are created inside the extended simplex.
Generally, two parents are replaced from the H-solution set: the first offspring is the best solution of the H, and the second is chosen by rank-based roulette wheel selection.

Unfair average crossover:
This crossover creates offspring toward one of the parent solutions. With α ∈ [0, 0.5] and a cross site j:

  xi(1,t+1) = (1 + α) xi(1,t) − α xi(2,t),   for i = 1, …, j
            = −α xi(1,t) + (1 + α) xi(2,t),  for i = j + 1, …, n

  xi(2,t+1) = (1 − α) xi(1,t) + α xi(2,t),   for i = 1, …, j
            = α xi(1,t) + (1 − α) xi(2,t),   for i = j + 1, …, n

The parameter j is a randomly chosen integer between 1 and n, indicating the cross site.
[Figure: for two variables (n = 2) and a cross site at j = 2, the shaded region marks the possible locations of the offspring x(1,t+1) and x(2,t+1) relative to the parents x(1,t) and x(2,t), showing the bias toward one parent.]

A crossover operator should satisfy the following postulates:
1. The population mean should not change.
2. The population diversity should, in general, increase.

Different types of mutation:
1. Random mutation
2. Non-uniform mutation
3. Normally distributed mutation
4. Polynomial mutation

Random mutation is:
1. Independent of the parent solution.
2. Equivalent to random re-initialization.
3. Given by the equation:
  xi(1,t+1) = xi(L) + ri (xi(U) − xi(L)),  where ri ∈ [0, 1].

Another way of doing random mutation is to create a solution in the vicinity of the parent instead of in the entire search space:
  xi(1,t+1) = xi(1,t) + (ri − 0.5) Δi
where Δi is the user-defined perturbation.
Care must be taken to generate the solution within the lower and upper bounds.

Non-uniform mutation is given by the equation:
  yi(1,t+1) = xi(1,t) + τ (xi(U) − xi(L)) [1 − ri^((1 − t/tmax)^b)]
where τ = −1 or 1 with equal probability, tmax is the maximum number of generations, and b is a user-defined parameter.
The probability of creating a solution close to the parent is higher than that of creating one far away, and it increases with the generation number.

Normally distributed mutation is given by the equation:
  yi(1,t+1) = xi(1,t) + N(0, σi)
where σi is a user-defined parameter that can change every generation by a predefined rule.
Care must be taken to generate the solution within the lower and upper bounds.

Polynomial mutation is given by:
  yi(1,t+1) = xi(1,t) + (xi(U) − xi(L)) δi
where δi is calculated from the polynomial distribution:
  δi = (2ri)^(1/(nm+1)) − 1,         if ri < 0.5;
     = 1 − [2(1 − ri)]^(1/(nm+1)),   if ri ≥ 0.5.
nm is a user-defined parameter.
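Polynomial mutation, transcribed directly (names illustrative):

```python
import random

def polynomial_mutation(x, x_lo, x_hi, n_m=20, rng=random):
    """Perturb x by a delta drawn from the polynomial distribution with index n_m."""
    r = rng.random()
    if r < 0.5:
        delta = (2 * r) ** (1 / (n_m + 1)) - 1
    else:
        delta = 1 - (2 * (1 - r)) ** (1 / (n_m + 1))
    return x + (x_hi - x_lo) * delta

rng = random.Random(5)
ys = [polynomial_mutation(1.0, 0.0, 2.0, n_m=20, rng=rng) for _ in range(100)]
```

Since delta lies in (−1, 1), mutants stay within one full range width of the parent; with a large n_m most mutants cluster near the parent.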

Constraint-handling methods can be classified into the following categories:
1. Methods based on preserving the feasibility of solutions.
2. Methods based on penalty functions.
3. Methods biasing feasible over infeasible solutions.
4. Methods based on decoders.

In this approach decision variables are eliminated using equality constraints.
e.g. for h(x) ≡ 2x1 − x2^2 + x3 = 0, x1 can be expressed as x1 = 0.5(x2^2 − x3).
Writing x1 in terms of x2 and x3 satisfies the constraint h(x) = 0 automatically.
When optimizing a problem with n variables and k equality constraints (n > k), k decision variables can be eliminated, which automatically satisfies the k equality constraints.

A penalty term is added to the objective function for infeasible solutions.
Any optimization problem can be written as
  min f(x),
  s.t. gj(x) ≥ 0, for j = 1, …, J,
       hk(x) = 0, for k = 1, …, K.
After adding the penalty, the new objective function is
  F(x) = f(x) + Σj Rj ⟨gj(x)⟩ + Σk rk |hk(x)|
where Rj and rk are user-defined penalty parameters and
  ⟨gj(x)⟩ = |gj(x)| if gj(x) < 0,
          = 0 otherwise.

Static penalty function:
A single penalty parameter R is used for all constraint violations:
  F(x) = f(x) + R [Σj ⟨gj(x)⟩ + Σk |hk(x)|]
Dynamic penalty function:
The penalty parameter is changed with the generation number t:
  F(x) = f(x) + (C·t)^α [Σj ⟨gj(x)⟩^β + Σk |hk(x)|^β]
where C, α and β are user-defined constants.
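The static penalty can be sketched as a higher-order function (names and the toy problem are illustrative, with constraints in g(x) ≥ 0 form):

```python
def penalized(f, gs, hs, R):
    """F(x) = f(x) + R * (sum of <g_j(x)> + sum of |h_k(x)|)."""
    def bracket(v):              # <g> = |g| when the constraint g >= 0 is violated
        return -v if v < 0 else 0.0
    def F(x):
        return f(x) + R * (sum(bracket(g(x)) for g in gs)
                           + sum(abs(h(x)) for h in hs))
    return F

# toy problem: minimize x^2 subject to g(x) = x - 1 >= 0
F = penalized(lambda x: x * x, gs=[lambda x: x - 1], hs=[], R=100.0)
```

A feasible point pays no penalty; an infeasible one pays R times its violation.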

The fitness value of the best infeasible solution is made equivalent to that of the worst feasible solution, so that:
1. Any feasible solution is preferred over any infeasible solution.
2. Among two feasible solutions, the one with the better objective function value is preferred.
3. Among two infeasible solutions, the one with the smaller constraint violation is preferred.

Methods:
1. By adding a generation-dependent penalty term to the static penalty:
  F(x) = f(x) + R [Σj ⟨gj(x)⟩ + Σk |hk(x)|] + θ(t, x)
where θ(t, x) is the difference between the static penalized values of the best infeasible solution and the worst feasible solution.

2. By modifying the objective function:
  F(x) = f(x), if x is feasible;
       = fmax + [Σj ⟨gj(x)⟩ + Σk |hk(x)|], otherwise,
where fmax is the objective function value of the worst feasible solution.

In decoder methods, the chromosome stores information about how to fix an infeasible solution.
1. By keeping information about the ordering of decision variables and altering an infeasible solution accordingly to make it feasible.
2. By mapping between feasible and infeasible solutions.

Q 1. We would like to use genetic algorithms to solve the
following NLP problem:

Minimize (x1 − 1.5)² + (x2 − 4)²

subject to 4.5x1 + x2² − 18 ≤ 0,
           2x1 − x2 − 1 ≤ 0,
           0 ≤ x1, x2 ≤ 4.

We decide to have three and two decimal places of
accuracy for variables x1 and x2 respectively.
(a) How many bits are required for coding the variables?
(b) Write down the fitness function which you would be
using in the selection procedure.

(a) Since x1 and x2 lie between 0 and 4:
For three decimal places of accuracy, the string must
distinguish at least 4 × 10³ = 4000 values (one digit for the
integer part, three for the decimal part).
Since 2¹² = 4096 ≥ 4000, the number of bits required for x1 is 12.
For two decimal places of accuracy, the string must
distinguish at least 4 × 10² = 400 values (one digit for the
integer part, two for the decimal part).
Since 2⁹ = 512 ≥ 400, the number of bits required for x2 is 9.

(b)
Each variable is encoded over the range 0 to 4, so
0 ≤ x1, x2 ≤ 4 need not be handled as a constraint.
Forming the penalized (Lagrangian-like) fitness function:

F(x) = (x1 − 1.5)² + (x2 − 4)² + λ1·max(4.5x1 + x2² − 18, 0) + λ2·max(2x1 − x2 − 1, 0)

where λ1 and λ2 are penalty multipliers applied when a
solution enters the infeasible region.
The max(·, 0) terms check whether the solution is
feasible: if both constraints are satisfied, each
max(·, 0) term is 0 and F(x) reduces to f(x).
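The bit-count calculation of part (a) can be checked mechanically. A small sketch (the helper name is illustrative):

```python
import math

def bits_needed(lo, hi, decimals):
    """Smallest n with 2**n covering (hi - lo) * 10**decimals + 1 distinct values."""
    levels = (hi - lo) * 10 ** decimals + 1
    return math.ceil(math.log2(levels))

print(bits_needed(0, 4, 3))   # 12 bits for x1 (4001 levels <= 2**12 = 4096)
print(bits_needed(0, 4, 2))   # 9 bits for x2 (401 levels <= 2**9 = 512)
```

The decoded value of a binary substring s with n bits is then lo + int(s, 2) · (hi − lo) / (2**n − 1), which is the usual binary-coded GA mapping.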

Q 2. Using the simulated binary crossover (SBX) with
ηc = 2, find two offspring from parent solutions
x(1) = 10.53 and x(2) = 0.68. Use the random number
u = 0.723.

Since the random number is greater than 0.5, we use the
formula

βq = [1 / (2(1 − u))]^(1/(ηc + 1))

Taking u = 0.723 and ηc = 2, we get

βq = [1 / (2(1 − 0.723))]^(1/3) = 1.218

The formulas for the offspring are:

x(1,t+1) = 0.5[(1 + βq)·x(1,t) + (1 − βq)·x(2,t)]
x(2,t+1) = 0.5[(1 − βq)·x(1,t) + (1 + βq)·x(2,t)]

We get:

x(1,t+1) = 0.5[2.218 × 10.53 − 0.218 × 0.68] ≈ 11.60
x(2,t+1) = 0.5[−0.218 × 10.53 + 2.218 × 0.68] ≈ −0.39
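The SBX computation above can be sketched as follows; `sbx` is an illustrative helper name, and the inputs are the worked example's parents and random number. Both branches of the βq formula are included for completeness.

```python
def sbx(p1, p2, u, eta_c):
    """One-variable simulated binary crossover for a given random number u."""
    if u <= 0.5:
        beta = (2 * u) ** (1.0 / (eta_c + 1))
    else:
        beta = (1.0 / (2 * (1 - u))) ** (1.0 / (eta_c + 1))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

c1, c2 = sbx(10.53, 0.68, 0.723, 2)
print(round(c1, 2), round(c2, 2))   # 11.6 -0.39
```

A quick sanity check on any SBX result: the offspring midpoint always equals the parents' midpoint, i.e. c1 + c2 = p1 + p2.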

Q 3. Using real-coded GAs with simulated binary
crossover (SBX) having ηc = 2, find the probability of
creating children solutions in the range 0 ≤ x ≤ 1 with
two parents x(1) = 0.5 and x(2) = 3.0. Recall that for the SBX
operator, the children solutions are created using the
following probability distribution:

P(β) = 0.5(ηc + 1)·β^ηc,          if β ≤ 1
     = 0.5(ηc + 1) / β^(ηc + 2),  otherwise

where β = (c2 − c1)/(p2 − p1), and c1, c2 are children solutions
created from parent solutions p1, p2.

Calculating β (the spread factor) for children solutions
between (0, 1):
β = (1 − 0)/(3 − 0.5) = 0.4

Since β is less than 1, we use the formula P(β) = 0.5(ηc + 1)·β^ηc:

P(β) = 0.5 × (2 + 1) × 0.4² = 0.24

Thus, the probability of creating children solutions in (0, 1)
is 0.24.

Q 4. Two parents are shown below:

x(1) = (10.0, 3.0)ᵀ, x(2) = (5.0, 5.0)ᵀ

Calculate the probability of finding the child in the
range xi ∈ [0.0, 3.0] for i = 1, 2 using
(a) the simulated binary crossover operator with ηc = 2,
(b) the blend crossover (BLX) operator with α = 0.67.

(a)
Considering x1:
The range of x1 for the children solutions is (0, 3) and the
parent values are (5, 10), so the spread
factor is
β = (3 − 0)/(10 − 5) = 0.6

Calculating the probability using P1(β) = 0.5(ηc + 1)·β^ηc:
P1(β) = 0.5 × (2 + 1) × 0.6² = 0.54

Considering x2:
The range of x2 for the children solutions is (0, 3) and the
parent values are (3, 5), so the spread factor is
β = (3 − 0)/(5 − 3) = 1.5, which is greater than 1.
Calculating the probability using P2(β) = 0.5(ηc + 1)/β^(ηc + 2):

P2(β) = 0.5 × (2 + 1) / 1.5⁴ ≈ 0.296

Therefore the net probability is
P1(β) × P2(β) = 0.54 × 0.296 ≈ 0.16

(b) Considering x1:
x1 for the parent solutions is (10, 5). Calculating the
children's range with α = 0.67:
lower bound: 5 − 0.67 × (10 − 5) = 1.65
upper bound: 10 + 0.67 × (10 − 5) = 13.35

Calculating the probability by the linear (uniform) formula:
P1 = (3 − 1.65)/(13.35 − 1.65) ≈ 0.115

Considering x2:
x2 for the parent solutions is (3, 5). Calculating the
children's range with α = 0.67:
lower bound: 3 − 0.67 × (5 − 3) = 1.66
upper bound: 5 + 0.67 × (5 − 3) = 6.34

Calculating the probability by the linear (uniform) formula:
P2 = (3 − 1.66)/(6.34 − 1.66) ≈ 0.286

Therefore the probability is
P1 × P2 = 0.115 × 0.286 ≈ 0.033
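The BLX calculation of part (b) can be sketched as follows. The helper names are illustrative; the key assumption (as in the solution above) is that BLX-α samples each child uniformly on the extended interval.

```python
def blx_range(p1, p2, alpha):
    """BLX-alpha sampling interval: [pmin - alpha*d, pmax + alpha*d]."""
    lo, hi = min(p1, p2), max(p1, p2)
    d = hi - lo
    return lo - alpha * d, hi + alpha * d

def prob_in(target_lo, target_hi, lo, hi):
    """Probability a uniform sample on [lo, hi] lands in [target_lo, target_hi]."""
    overlap = max(0.0, min(hi, target_hi) - max(lo, target_lo))
    return overlap / (hi - lo)

lo1, hi1 = blx_range(10.0, 5.0, 0.67)   # x1: (1.65, 13.35)
lo2, hi2 = blx_range(3.0, 5.0, 0.67)    # x2: (1.66, 6.34)
p = prob_in(0.0, 3.0, lo1, hi1) * prob_in(0.0, 3.0, lo2, hi2)
print(round(p, 3))   # 0.033
```

Multiplying the per-coordinate probabilities assumes the two genes are crossed independently, as in the worked solution.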

Q 5. Apply the polynomial mutation operator to create
a mutated child of the solution x(t) = 5.0 using the
random number ri = 0.675. Take x(u) = 10, x(L) = 0 and
ηm = 2.

For polynomial mutation with ri = 0.675:

y(t+1) = x(t) + [x(u) − x(L)]·δ̄i

where, if ri ≥ 0.5,

δ̄i = 1 − [2(1 − ri)]^(1/(ηm + 1))

δ̄i = 1 − [2(1 − 0.675)]^(1/(2 + 1)) = 0.134

Calculating the mutated child with
x(t) = 5.0, x(u) = 10, x(L) = 0:

y(t+1) = 5 + [10 − 0] × 0.134 = 6.34
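The polynomial mutation step above can be sketched in a few lines; `poly_mutate` is an illustrative name, and both branches of the δ̄ formula are included.

```python
def poly_mutate(x, x_lo, x_hi, r, eta_m):
    """Polynomial mutation of x for a supplied random number r in [0, 1]."""
    if r < 0.5:
        delta = (2 * r) ** (1.0 / (eta_m + 1)) - 1.0
    else:
        delta = 1.0 - (2 * (1 - r)) ** (1.0 / (eta_m + 1))
    return x + (x_hi - x_lo) * delta

print(round(poly_mutate(5.0, 0.0, 10.0, 0.675, 2), 2))   # 6.34
```

In a real GA loop, r would be drawn fresh per gene (e.g. with random.random()) and the result clipped to [x_lo, x_hi].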

Most real-world problems involve the simultaneous
optimization of several objective functions.
Generally, these objective functions are
non-commensurable and often competing and conflicting.
Multi-objective optimization with such conflicting
objective functions gives rise to a set of optimal
solutions, instead of one optimal solution, because
no solution can be considered better than any
other with respect to all objectives. These optimal
solutions are known as Pareto-optimal solutions.

Generally, a multi-objective optimization
problem consisting of a number of
objectives and several equality and
inequality constraints can be formulated
as follows:

Minimize/Maximize fi(x),   i = 1, …, Nobj

subject to
gk(x) = 0,   k = 1, …, K
hl(x) ≤ 0,   l = 1, …, L

where fi is the i-th objective function, x is a
decision vector that represents a
solution, and Nobj is the number of objectives. K
and L are the numbers of equality and
inequality constraints respectively.

Most optimization problems naturally have
several objectives to be achieved
(normally conflicting with each other), but
in order to simplify their solution, the
remaining objectives are often handled
as constraints.

Evolutionary algorithms seem particularly
suitable for solving multi-objective
optimization problems, because they deal
simultaneously with a set of possible
solutions (the so-called population).

They are capable of finding several members of the
Pareto optimal set in a single run of the
algorithm.

Additionally, evolutionary algorithms are
less susceptible to the shape or
continuity of the Pareto front:
they can easily deal with discontinuous
or concave Pareto fronts.

Classification of Evolutionary Multi-Objective Optimization (EMOO):

Non-Pareto Techniques
Pareto Techniques

Non-Pareto techniques include the following:

Aggregating approaches
Vector evaluated genetic algorithm (VEGA)
Lexicographic ordering
The ε-constraint method
Target-vector approaches

Pareto-based techniques include the
following:
Multi-objective genetic algorithm (MOGA)
Non-dominated sorting genetic algorithm
(NSGA-II)
Multi-objective particle swarm optimization
(MOPSO)
Pareto archived evolution strategy (PAES)
Strength Pareto evolutionary algorithm
(SPEA-II)

These are approaches that do not directly
incorporate the concept of Pareto optimality.
They may be incapable of producing certain
portions of the Pareto front.
They are efficient and easy to implement, but
appropriate for handling only a few
objectives.

Suggested by Goldberg (1989) to solve


the problems with Schaffer's VEGA.
Use of non-dominated ranking and
selection to move the population towards
the Pareto front.
Requires a ranking procedure and a
technique to maintain diversity in the
population.

In the absence of weights for the objectives,
no one Pareto-optimal (non-dominated)
solution can be said to be better than
another; therefore it is desirable to find them all.
Classical optimization methods, including
MCDM methods, can find one such solution
at a time.
With a population of solutions, GAs seem to
be well suited for approximating the Pareto-optimal
frontier in a single run.

[Figure: the GA search operates in decision space (x1, x2, …, xn); evaluation maps each decision vector to objective space (y1, y2, …, yn). The Pareto set approximation in decision space corresponds to the Pareto front approximation in objective space.]

[Figure: two goals in objective space (f1, f2):
Convergence to the Pareto-optimal frontier
Diversity (representation of the entire Pareto-optimal frontier)]

[Figure: minimization of f1 and f2; the population is layered into the first, second and third non-dominated fronts.]

Fitness assignment:

Solutions in the first


non-dominated
front have the
highest fitness
(they are all ranked
1)

Solutions in the
same front have the
same fitness (they
all have the same
rank)

NSGA-II key features:


Fast non-dominated sorting

Diversity preservation

Crowding comparison operator

Fast non-dominated sorting:
1. For each solution p in the population, find
   np: the number of solutions that dominate p
   Sp: the set of solutions that p dominates
2. Place all p with np = 0 in set F1, the first front
   (assign rank Rp = 1).
3. For each p in F1, visit each q in Sp and reduce nq by
   one. In doing this, if nq becomes 0, then place q in
   set F2 (q belongs to the second front, Rq = 2).
4. Repeat step 3 with each member of F2 to find the
   third front, and so on.
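The four steps above can be sketched directly in code. This is a minimal sketch assuming all objectives are minimized; function names and the sample population are illustrative.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(pop):
    """pop: list of objective tuples; returns fronts as lists of indices."""
    n = [0] * len(pop)                 # n_p: how many solutions dominate p
    S = [[] for _ in pop]              # S_p: solutions that p dominates
    fronts = [[]]
    for p, fp in enumerate(pop):
        for q, fq in enumerate(pop):
            if dominates(fp, fq):
                S[p].append(q)
            elif dominates(fq, fp):
                n[p] += 1
        if n[p] == 0:
            fronts[0].append(p)        # step 2: first front
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:            # step 3: peel off the next front
            for q in S[p]:
                n[q] -= 1
                if n[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)             # step 4: repeat for subsequent fronts
    return fronts[:-1]                 # drop the trailing empty front

pop = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(fast_nondominated_sort(pop))     # [[0, 1, 2], [3], [4]]
```

This bookkeeping (np and Sp computed once) is what makes the procedure "fast" relative to re-scanning the population for each front.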

[Figure: solutions i−1, i and i+1 on a front in (f1, f2) space, both objectives minimized; a cuboid is formed around solution i by its nearest neighbours.]

Sharing in NSGA is
replaced with
crowded
comparison.
The crowding distance of
solution i in a front is
the average side
length of the cuboid.

1. Sort all l solutions in a front in ascending order
   of objective fm and compute

   CDim = [fm(xi+1) − fm(xi−1)] / [fm(xmax) − fm(xmin)],   i = 2, …, l−1

   (the boundary solutions i = 1 and i = l are assigned an
   infinite crowding distance).
2. Repeat step 1 for each objective and find the
   crowding distance of solution i as

   CDi = Σm CDim
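The two steps above can be sketched as follows; the function name and the three-solution front are illustrative, and both objectives are assumed minimized (the formula itself is order-based, so the direction does not matter).

```python
def crowding_distance(front):
    """front: list of objective tuples; returns CD_i per the formula above."""
    l = len(front)
    cd = [0.0] * l
    for m in range(len(front[0])):
        # step 1: sort indices by objective m
        order = sorted(range(l), key=lambda i: front[i][m])
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        cd[order[0]] = cd[order[-1]] = float("inf")   # boundary solutions
        if f_max == f_min:
            continue                                  # avoid division by zero
        for k in range(1, l - 1):
            i = order[k]
            cd[i] += (front[order[k + 1]][m] - front[order[k - 1]][m]) / (f_max - f_min)
    return cd                                          # step 2: sum over objectives

front = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
print(crowding_distance(front))   # [inf, 2.0, inf]
```

Assigning infinity to the extreme solutions guarantees the ends of the front are always kept, which helps preserve its full spread.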

Given two solutions i and j, solution i is
preferred to solution j if

Ri < Rj, or (Ri = Rj and CDi > CDj)

Between two solutions with different non-domination
ranks, the one with the lower (better)
rank is preferred.
When two solutions have the same non-domination
rank (belong to the same front), the
one located in a less crowded region of the
front is preferred.

Steps:
1. Initialize the population
2. Fast non-dominated sorting
3. Calculate crowding distance
4. Tournament selection using the crowding comparison
operator
5. Crossover
6. Mutation
7. Combine the parent population with the offspring population
8. Elitist replacement: select the population from the combined
population using the crowding comparison operator
9. Repeat steps 2 to 8 for the next generation until the stopping
rule is satisfied

Population size: depends on the nature of
the problem
Probability of crossover (Pc): 0.65 to 0.8
Mutation probability (Pm):
usually small, in the range 0.001 to 0.01
Rule of thumb: pm = 1/(no. of bits in the
chromosome)

Number of generations: depends on
convergence and the search space of the
problem

Maximum number of generations
No improvement in fitness value for a fixed
number of generations

The example consists of 18 activities in total.
Each activity has alternatives for completing
it, each with its own time, cost and quality.
Each alternative influences three objective
functions, viz. project time, cost and quality.
There are conflicts among the three objective functions.
In the example there are on average 3.12 options
for completing each activity; this creates nearly
745 million (3.12¹⁸) combinations for
scheduling the entire project.

The project network has 11 paths for completing
the project.
Path durations vary with the chosen alternatives.
It is therefore very difficult to decide which path is
critical.

Goal: to find the best alternatives, balancing
three fitness functions, viz. project time,
cost and quality:
Minimization of project duration
Minimization of project cost
Maximization of project quality

[Chromosome layout: X1 X2 X3 … X18, each gene holding a value n]

X represents a gene.
Indices 1, 2, 3, …, 18 indicate the activities.
n represents the chosen alternative for an activity.
Each alternative represents a different time, cost and quality.

[Flowchart:
START → population initialization → evaluate fitness →
fast non-dominated sorting → calculate crowding distance →
create child population using the genetic operators, viz.
selection, crossover and mutation →
combine the parent population of size N with the child
population of size N →
fast non-dominated sorting on the combined population →
create the new population of size N using the crowding
comparison operator →
if last generation: STOP; otherwise proceed to the next
generation.]

Create the initial population of solutions:
Randomly
By local search
From feasible solutions

For the optimization problem:
Minimize project time and cost
Maximize project quality

Population size = 300

[Table: sample of the integer-coded initial population. Each row
lists the solution number, the 18 activity-alternative genes
(the decision variables), and the three fitness values: time,
cost and quality. For example, the listed solutions have costs
between about 114,620 and 148,365 and quality between about
73.5 and 85.5; the table runs to solution number 300.]

Identify the best non-dominated set:
- a set of solutions that are not dominated by any
individual in the population.

Discard them from the population
temporarily.
Identify the next best non-dominated set.
Continue until all solutions are classified.

[Figure: population classified into fronts F1 (Rank = 1), F2 (Rank = 2), …]

Crowding (niche preservation) is done in objective
space.
Each solution is assigned a crowding
distance:
- crowding distance reflects the front density in the
neighborhood
- the distance of each solution from its nearest
neighbors

[Figure: in (f1, f2) space, solution B is more crowded than A.]

Perform tournament selection using the
crowding comparison operator.
The crowding comparison operator guides the
selection process toward a uniformly spread-out
Pareto-optimal front.
Every solution has two attributes:
1. Non-domination rank
2. Crowding distance

Crossover operation (based
on crossover probability):
Select parents from the
population based on the
crossover probability.
Randomly select two
points in the strings to
perform the crossover
operation.
Perform the crossover
operation on the selected
strings.
Crossover is known as a local search
operation.

[Figure: two-point crossover — the segments between the crossover
points are exchanged between Parent 1 and Parent 2 to produce
Offspring 1 and Offspring 2.]

Mutation operation (based on mutation
probability pm):

Each bit of every individual is modified
with probability pm.
It is the main operator for global search (looking
at new areas of the search space).
pm is usually small, in the range 0.001 to 0.01.
Rule of thumb: pm = 1/(no. of bits in the
chromosome)

For the optimization problem, consider the i-th solution
string from the population.
Let pm = 1/18 = 0.055.
Select the genes whose random number is less than
pm.
Mutate (interchange) the selected genes.

[Figure: a random number in [0, 1] is drawn for each gene of the
integer string; only the genes whose random number falls below
pm = 0.055 are mutated.]

Combine the parent population of size N with
the offspring population of size N to form a combined
population of size 2N.
Select the best N members from the combined
population of size 2N.
Elitism ensures that the fitness of the best
solutions does not deteriorate as generations advance.
Elitism can speed up the performance of the
genetic algorithm.
It helps to prevent the loss of good solutions once
they are found.

Select better-ranking individuals and use crowding
distance to break ties.

[Figure: elitist replacement — the combined population Rt = Pt ∪ Qt
is sorted into non-dominated fronts F1, F2, F3, …; Pt+1 is filled
front by front, with the crowding comparison operator selecting from
the last admissible front; the remaining solutions are rejected.
Pt = parent population, Qt = offspring population,
Pt+1 = population for the next generation.]

The combined population Rt is sorted according to
non-domination.
Solutions belonging to the non-dominated set F1 are
the best solutions in the combined
population.
If the size of F1 is smaller than the population size N,
then all members of F1 are chosen for the new
population Pt+1.
The remaining members of the population Pt+1 are
chosen from the subsequent fronts in order of their
ranking.
In general, the count of solutions from F1 to FL
would be larger than the population size N.
The crowding comparison operator is used to choose
exactly N population members from the last front FL.
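The front-by-front fill described above can be sketched as follows. This is a hedged sketch: the fronts and crowding distances are given as inputs (they would come from the sorting and crowding-distance steps), and the indices and values are illustrative.

```python
def elitist_replacement(fronts, cd, N):
    """Fill P_{t+1}: take whole fronts while they fit, then the least
    crowded members of the last admissible front."""
    new_pop = []
    for front in fronts:
        if len(new_pop) + len(front) <= N:
            new_pop.extend(front)                       # whole front fits
        else:
            # partial front: prefer larger crowding distance
            ranked = sorted(front, key=lambda i: cd[i], reverse=True)
            new_pop.extend(ranked[: N - len(new_pop)])
            break
    return new_pop

fronts = [[0, 1], [2, 3, 4], [5]]
cd = {0: 9, 1: 9, 2: float("inf"), 3: 0.4, 4: 1.2, 5: 9}
print(elitist_replacement(fronts, cd, 4))   # [0, 1, 2, 4]
```

In the example, front F1 = {0, 1} fits entirely; front F2 = {2, 3, 4} does not, so its two least-crowded members (2, then 4) complete the population of size N = 4.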

The best parameters for NSGA-II were selected through
several test simulation runs:

NSGA-II parameter            Parameter value
Population size              300
Number of generations        1000
Crossover probability (Pc)   0.8
Mutation probability (Pm)    0.1

[Figures: time-cost-quality tradeoff surface; cost-quality
tradeoff analysis; time-cost tradeoff analysis.]
