
Adaptation of Genetic Algorithms for

Engineering Design Optimization


Khaled Rasheed

shehata@cs.rutgers.edu

Andrew Gelsey

gelsey@cs.rutgers.edu

Computer Science Department


Rutgers University
New Brunswick, NJ 08903
USA
(908) 445-2001, FAX -5691

Abstract

Genetic algorithms have been extensively used in different domains as a means of
doing global optimization in a simple yet reliable manner. However, in some realistic
engineering design optimization domains it was observed that a simple classical
implementation of the GA based on binary encoding and bit mutation and crossover was
sometimes inefficient and unable to reach the global optimum. Using floating point
representation alone does not eliminate the problem. In this paper we describe a way
of augmenting the GA with new operators and strategies that take advantage of the
structure and properties of such engineering design domains. Empirical results (initially
in the domain of conceptual design of supersonic transport aircraft and the domain of
high performance supersonic missile inlet design) demonstrate that the newly formulated
GA can be significantly better than the classical GA in terms of efficiency and
reliability.

http://www.cs.rutgers.edu/~shehata/papers.html

Appeared in the Fourth International Conference on Artificial Intelligence in Design:
Evolutionary Systems in Design Workshop, 22 June 1996

1 Introduction
Genetic Algorithms [7] are search algorithms that mimic the behavior of natural selection.
In Genetic Algorithms (GAs) an attempt to find the best solution to some problem (e.g. the
maximum of a function) is made by generating a collection (population) of potential solutions
(individuals) to the problem; through mutation and recombination (crossover) operations,
better solutions are hopefully generated from the current ones. This process continues until
an acceptably good solution is found. GAs have many advantages over other search techniques,
one of which is the ability to deal with different kinds of domains, such as continuous
variable domains, discrete or quantized variable domains, or mixed-type variable domains.
The classical approach has been to convert all the variables involved in the search to the
domain of binary integers and encode each individual in the solution population as a bit
string. Subsequently, the mutation and recombination operations are bit manipulation
operations. This representation has been successful in solving numerous problems in various
domains of science and engineering.
However, if the problem involves only continuous variables, a case that is not uncommon
in engineering design domains, more effective operators based on floating point
representations may also be used. In the design domains we investigated in this research,
the search space was induced by a number of continuous design parameters, so a potential
solution was a vector (point) in a multidimensional vector space. The value (fitness) of
each point was determined by a simulator that may or may not be able to accurately produce
the value of the point. Therefore there are evaluable and unevaluable points in the search
space, and among the evaluable points there are feasible points (representing physically
realizable designs) and infeasible points. The evaluable points were observed to form
slab-like evaluable regions that are not axis parallel, and therefore, as [20] pointed out,
the classical crossover operator of the GA is not likely to give good results. In addition,
the sparsity of the evaluable (feasible) region implies that the classical mutation
operator, which flips a few bits at random, is not likely to do any better than random
search. For these reasons a non-standard crossover operator (the line crossover) and
mutation operator (the shrinking window mutation) were also used.
The fact that evaluating the fitness of an individual entails a run of a numerical simulator
that may take several seconds makes it both necessary and desirable to use techniques from AI
and machine learning to make the search more focused and to avoid evaluating the fitness of
an individual if the knowledge gathered along the search suggests that it is unlikely to be
better than the existing individuals. Therefore the guided crossover and the machine
learning screening module (MLSM) were introduced in this research, and experimental results
indicate they tremendously improved the performance of the GA in such domains.

2 GA Modifications
The GA was adapted to the design optimization domain by selecting from the existing
methods and policies what is suitable for the problem as well as proposing new techniques.
The following adaptations were applied to the GA to make it suitable for the search spaces
under consideration:

Representation: Each individual in the GA population represented a list of the design
parameters of an aircraft or missile. Each parameter had a continuous interval range.
Both the binary and the floating point representation of each individual were used in
this research. The fitness of each individual was based on the sum of the takeoff mass
of the corresponding aircraft and the penalty function, if any.
Selection: Selection by rank was used because of the wide range of fitness values
caused by the use of a penalty function. Rank selection prevents the first discovered
evaluable/feasible points from dominating the population.
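As a sketch of how rank selection sidesteps extreme penalized fitness values, the following fragment selects by linear rank on a minimization problem. This is an illustration only: the paper does not specify the exact rank weighting, so a linear weighting is assumed.

```python
import random

def rank_select(population, fitnesses, rng=random):
    """Select one individual by linear rank.  Lower fitness = better,
    since the paper's problems are minimizations.  Selection pressure
    depends only on the ordering of fitness values, not on their
    magnitudes, so huge penalty terms cannot dominate."""
    n = len(population)
    # Indices sorted best-first (lowest penalized fitness first).
    order = sorted(range(n), key=lambda i: fitnesses[i])
    # Rank r (0 = best) receives weight n - r (assumed linear scheme).
    weights = [n - r for r in range(n)]
    idx = rng.choices(order, weights=weights, k=1)[0]
    return population[idx]
```

With four individuals the best is four times as likely to be picked as the worst, regardless of whether the worst's penalized fitness is 4 or 4 million.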
Line crossover: This operator may be viewed as a generalized version of the linear
crossover operator introduced in [20]. After two candidates for crossover have been
selected, the crossover is done by drawing a line between the two candidates and choosing
a point at random along this line or its extensions on either side. Though there is
no formal proof that this method works, an informal argument is: the line joining two
points in the design space represents a design trend, like increasing some length and
decreasing another length by the same proportion. A line joining two points that
represent relatively good designs is likely to contain points that represent good designs
as well, and a line joining a point that represents a bad design to a point that
represents a good design represents an improving design trend and is likely to lead
to good designs, most probably in its extension part. In the case of search spaces whose
evaluable regions have a slab structure, the line joining two evaluable points is likely
to remain in the evaluable region and is not unlikely to uncover better points of the
search space as well.
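The operator described above can be sketched as follows; the size of the extension on either side of the segment is an assumed parameter, since the paper does not give one.

```python
import random

def line_crossover(p1, p2, extension=0.5, rng=random):
    """Pick a random point on the line through parents p1 and p2,
    allowing the point to fall in an extension of the segment on
    either side.  With extension=0.5 the mixing coefficient t is
    drawn from [-0.5, 1.5]; t in [0, 1] lands between the parents,
    values outside [0, 1] land on the extensions."""
    t = rng.uniform(-extension, 1.0 + extension)
    return [a + t * (b - a) for a, b in zip(p1, p2)]
```

Because the child is always on the line through the parents, two parents inside a slab-like evaluable region tend to produce a child in that same region.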
Shrinking window mutation: This operator may be viewed as a simplified version
of the Dynamic Mutation operator introduced in [9]. When crossover is done using
the line crossover operator described above, the point resulting from crossover is not
directly introduced into the GA population as a new individual. A mutation operation
is done first, in which the parameters composing the point are perturbed by a random
amount whose amplitude is a function of the value of the parameter and the stage of
the optimization. In the early stages of the optimization the perturbation is allowed
to be relatively large, and the amplitude then decreases with the number of iterations
elapsed. This idea is intended to add the flavor of simulated annealing to the search
and helps in exploring the search space by preventing premature clustering.
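A minimal sketch of such a mutation, assuming a linearly shrinking window and a perturbation amplitude proportional to the parameter's magnitude; the paper says only that the amplitude depends on the parameter value and the search stage, so both choices are assumptions.

```python
import random

def shrinking_window_mutate(point, iteration, max_iterations,
                            initial_fraction=0.1, rng=random):
    """Perturb each parameter by a random amount whose amplitude is
    proportional both to the parameter's magnitude and to how early
    we are in the search.  The 'window' shrinks linearly to zero at
    the final iteration (assumed schedule), giving a simulated
    annealing flavor: broad exploration early, fine steps late."""
    window = initial_fraction * (1.0 - iteration / max_iterations)
    return [x + rng.uniform(-window, window) * abs(x) for x in point]
```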
Machine learning screening: This operation would not have been useful if point
evaluations were not expensive. The screening module decides whether a point
is likely to correspond to a good design without invoking any simulator; instead,
the module relies on the history of the current search to make its decision.
Currently the module uses a simple machine learning approach: the module keeps a
relatively large random sample of the points encountered in the search so far.
Typically the sample should be at least ten times the size of the GA population,
and it can be as large as the total number of points encountered in the search so
far. The size of the sample should be selected based on the speed of the simulator
and the domain knowledge, if available. Before a candidate point generated by
crossover and mutation is evaluated, the module finds the nearest neighbor of the
candidate point among the sample. If this nearest neighbor had a fitness better
than some threshold, the point is evaluated and added to the GA population;
otherwise the point is discarded. A good choice of the threshold is very important
for the success of the whole search; we used the fitness of the worst member of the
current GA population as this threshold, and it gave good results.
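The screening logic above can be sketched as follows; the class name, the random-eviction policy for keeping the sample bounded, and the `max_sample` default are illustrative assumptions.

```python
import math
import random

class ScreeningModule:
    """Nearest-neighbour screen: keep a sample of (point, fitness)
    pairs seen so far; a candidate is worth simulating only if its
    nearest sampled neighbour beat the threshold.  Lower fitness is
    better, since the paper's problems are minimizations."""

    def __init__(self, max_sample=1000):
        self.sample = []          # list of (point, fitness) pairs
        self.max_sample = max_sample

    def record(self, point, fitness):
        """Add an evaluated point; evict a random entry when full."""
        self.sample.append((point, fitness))
        if len(self.sample) > self.max_sample:
            self.sample.pop(random.randrange(len(self.sample)))

    def worth_evaluating(self, candidate, threshold):
        """True if the candidate's nearest recorded neighbour was
        better than the threshold (e.g. the fitness of the worst
        member of the current GA population)."""
        if not self.sample:
            return True           # nothing to compare against yet
        nearest = min(self.sample,
                      key=lambda pf: math.dist(candidate, pf[0]))
        return nearest[1] < threshold
```

Candidates that land near previously-seen bad regions are discarded before the expensive simulator run.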
Guided crossover: In the late stages of the search this operator starts to take part.
The guided crossover operation works as follows:
1. One candidate point is selected from the GA population using the normal selection
rule (selection by rank in this research) and called candidate1.
2. The second candidate point is also selected from the GA population but in a
different way: for each point X in the GA population other than candidate1, a
quantity Q(X, candidate1) is computed, where

Q(A, B) = (fitness(A) - fitness(B))^2 / distance(A, B)

The value of X that maximizes Q(X, candidate1) is taken to be candidate2.
3. candidate1 and candidate2 are swapped if necessary, to make candidate1 the point
that has the higher fitness of the two.
4. The result of the crossover is a point along the line joining candidate1 to candidate2
which is selected at random from the small region around candidate1 (the better
point). In other words,

Result = L * candidate1 + (1 - L) * candidate2

where L is drawn from the interval [0.9, 1.1].
The guided crossover operator is greedy and consequently it is used instead of the line
crossover only a small percentage of the time and only in the late stages of the search
(in the current implementation it is used 10% of the time in the second half of the
search). No mutation is used with the guided crossover operator.
The guided crossover operator is believed to contribute a lot to the task of improving
the steady state behavior of the search, defined here as how close the final solution is
to the global optimum. The main reason for introducing this operator was the desire
to endow the GA with a way to get very close to the optimum, an advantage that
gradient-based methods usually had over the GA, without actually having to compute
gradients, which may be expensive in domains with high dimensions and expensive
simulation-based evaluations.
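Steps 2 to 4 of the guided crossover (candidate1 having already been chosen by the normal rank selection) can be sketched as follows, treating lower fitness values as better since the paper's problems are minimizations.

```python
import math
import random

def guided_crossover(population, fitnesses, candidate1_idx, rng=random):
    """Pick candidate2 as the point maximizing
        Q(A, B) = (fitness(A) - fitness(B))**2 / distance(A, B),
    order the pair so candidate1 is the better (lower-valued, since
    we minimize) point, then sample near candidate1 with the mixing
    coefficient L drawn from [0.9, 1.1]."""
    c1 = population[candidate1_idx]
    f1 = fitnesses[candidate1_idx]

    def q(j):
        d = math.dist(c1, population[j])
        return (f1 - fitnesses[j]) ** 2 / d if d > 0 else 0.0

    j = max((j for j in range(len(population)) if j != candidate1_idx),
            key=q)
    c2, f2 = population[j], fitnesses[j]
    if f2 < f1:                  # swap so c1 is the better point
        c1, c2 = c2, c1
    L = rng.uniform(0.9, 1.1)
    return [L * a + (1.0 - L) * b for a, b in zip(c1, c2)]
```

Because L stays close to 1, the result is a small, gradient-free step in the neighbourhood of the better point, which is what gives the operator its fine-tuning character.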
Replacement strategy: The replacement strategy used takes into consideration both
the fitness and the proximity of the points in the GA population. Its goal is to
select for replacement a point that both:
1. has a relatively low fitness, and
2. is relatively close to the point being introduced.
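The paper states only the two goals, not how they are combined; one plausible sketch (an assumption, not the authors' formula) sums two ranks, badness of fitness and closeness to the incoming point, and replaces the member with the smallest combined rank.

```python
import math

def choose_replacement(population, fitnesses, new_point):
    """Return the index of the member to replace.  Rank every member
    by badness of fitness (higher value = worse, minimization) and by
    closeness to the incoming point; the member with the best (lowest)
    combined rank is both bad and crowded, so removing it preserves
    diversity while discarding poor material."""
    n = len(population)
    by_fitness = sorted(range(n), key=lambda i: -fitnesses[i])
    fit_rank = {i: r for r, i in enumerate(by_fitness)}
    by_dist = sorted(range(n),
                     key=lambda i: math.dist(population[i], new_point))
    dist_rank = {i: r for r, i in enumerate(by_dist)}
    return min(range(n), key=lambda i: fit_rank[i] + dist_rank[i])
```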

[Plot: percent deviation from global optimum (y-axis, 0 to 50) versus number of iterations (x-axis, 0 to 30000); curves: new_ga_results, classical_ga_results.]
Figure 1: Performance comparison of the Classical and Adapted GA in the Aircraft Design
Domain

3 Experimental Results

3.1 First Experiment (Aircraft Design Domain)

In the first experiment to test the effect of the adaptations presented in the previous section,
we used an eleven-dimensional design space in which the optimizer varied eleven of the
aircraft conceptual design parameters over a continuous range of values. The domain is
described in detail in [5].
The following experiment was done: Five random populations of 100 points each were
generated. Then for each population the GA was allowed to proceed for 30000 iterations (an
iteration denotes a call to the simulator that takes 0.2 seconds on average), one time using
floating point representation, the line crossover operator, the shrinking window mutation
operator, the guided crossover operator, and the machine learning screening module; and
another time using binary bit string representation, the classical bit string crossover
operator, the bit mutation operator, and no machine learning screening module. In both
cases the same selection and replacement strategies were used.
The results are shown in Figure 1. The figure clearly demonstrates how the introduced
modifications strongly improved the GA performance. It is important to note here that since
the evaluation function is the output of a numerical simulator, we refer to the best design
found throughout all optimization attempts as the global optimum, when in fact we have
no formal proof that it is indeed global. The global optimum was found using a GA run of
100000 iterations.
All five runs of the classical implementation of the GA failed to get within 10% deviation
from the global optimum (note that this is a minimization problem), and the average deviation
for these runs was 12.72%. On the other hand, all five runs of the adapted implementation
of the GA got within 1% deviation from the global optimum, and the average deviation
for these runs was 0.61%. Therefore the steady state deviation from the global optimum
was improved by a factor of 20.79. This superiority of the adapted GA is no surprise:
the line crossover operator is likely to produce child points that stay within the slab-like
evaluable (or feasible) region provided that their parent points were also in it, unlike
the classical crossover operator, which is more likely to produce child points outside the
evaluable (feasible) region in this domain. Moreover, the machine learning screening
module plays an important role in pruning the search and saving the time that would
otherwise be lost in examining bad points, and the guided crossover operator does the
job of fine-tuning the optima.
It is important to note here that a collection of conventional optimization techniques,
representative of most categories of optimization paradigms, was applied to this
minimization problem, but the adapted GA was much more successful than these methods in
the aircraft design optimization domain. The best conventional method was found to be a
multistart CFSQP [10], a state-of-the-art implementation of the Sequential Quadratic
Programming method.¹ 1000 multistarts of CFSQP with a total cost of 851,237 iterations
failed to reach the global optimum found by 100,000 iterations of the adapted GA, and out
of those 1000 multistarts only 26 got to within 1% deviation from the global optimum.

3.2 Second Experiment (Supersonic Missile Inlet Design Domain)

This experiment was conducted in the supersonic missile inlet design domain with eight
continuous design variables. The domain is described in detail in [21]. The main purpose
of the experiment was to investigate the effect of the machine learning screening module
(MLSM) on performance.
The following experiment was done: Three random populations of 80 points each were
generated. Then for each population the GA was allowed to proceed for 30000 iterations (an
iteration denotes a call to the simulator that takes six seconds on average) one time using the
fully adapted GA (as in the above experiment) and another time using all the adaptations
except for the MLSM.
The results are shown in Figure 2. The figure clearly demonstrates how the MLSM
sped up the convergence of all three runs to the global optimum region. With the MLSM,
all three runs got within 1% of the global optimum in at most 11700 iterations, whereas
the runs without the MLSM were in the vicinity of 5% away from the global optimum given
the same number of iterations.

¹Sequential Quadratic Programming is a quasi-Newton method that solves a nonlinear
constrained optimization problem by fitting a sequence of quadratic programs.

[Plot: percent deviation from global optimum (y-axis, 0 to 20) versus number of iterations (x-axis, 0 to 30000); curves: new_ga_results, new_ga_no_ML_results.]

Figure 2: Performance comparison of the Adapted GA with and without the MLSM in the Missile
Inlet Design Domain

4 Related Work
A great deal of work has been done in the area of numerical optimization algorithms [6,
18, 13, 11, 12], though not much has been published about the particular difficulties of
attempting to optimize functions defined by large "real-world" numerical simulators. A
number of research efforts have combined AI techniques with numerical optimization [17,
14, 15, 3, 2, 16, 1, 19, 8, 4]. The GA was used in some of these works [14, 2, 15], but no
attempt was made to extensively adapt it to the domain as done in this research; instead an
off-the-shelf implementation of the GA was usually used.

5 Limitations and Future Work


The modifications made to the GA have increased its reliability as a global optimizer with a
high potential for finding supreme points of the design search space in a reasonable amount
of time. However, the GA-based search can be further improved in many ways. Some of the
ways we intend to investigate are:

The use of an AI agent module for the online adjustment of the GA parameters (such
as population size, mutation rate, etc.) as the optimization progresses. Such a module
would supplement the machine learning module toward the goal of speeding up the search
and making it more reliable.
It may be possible to achieve a great gain in performance by reformulating the GA
search problem in continuous variable search spaces as follows: the algorithm maintains
two active populations rather than one; the first is a population of points as usual, and
the second is a population of directions. The individuals of the direction population may
be obtained by joining lines between the individuals of the point population or in other
ways. The search may proceed by selecting a point from the point population and a
direction from the direction population and using these to hopefully produce better
points and/or better directions to add to the populations. Many ideas can be researched
on how to update the two populations and what selection and replacement strategies to
use; the question of how to come up with new points and directions using existing ones
is also very interesting. This idea will be investigated in the near future.
It is important to note that some of the modifications to the classical GA introduced in
this paper, such as machine learning screening, are only applicable when evaluations are
expensive (as in the case of using a simulator); otherwise the additional computational
overhead they introduce may not be justified.

6 Conclusion
An adaptation of the GA to design optimization spaces has been presented in this paper.
Experimental results in the domain of aircraft design optimization demonstrate that the
adapted GA had a steady state error 20 times smaller than that of the classical GA using
the same amount of time. This improvement is valuable in design domains, where a 1%
improvement in design may translate to millions of dollars of savings in the long run.

Acknowledgments

We thank our aircraft design expert, Gene Bouchard of Lockheed, for his invaluable assistance
in this research. We also thank all members of the HPCD project, especially Donald Smith,
Keith Miyake, and Mark Schwabacher. This research was partially supported by NASA
under grant NAG2-817 and is also part of the Rutgers-based HPCD (Hypercomputing and
Design) project supported by the Advanced Research Projects Agency of the Department of
Defense through contract ARPA-DABT 63-93-C-0064.

References
[1] A. M. Agogino and A. S. Almgren. Techniques for integrating qualitative reasoning and
symbolic computing. Engineering Optimization, 12:117-135, 1987.
[2] E. E. Bouchard. Concepts for a future aircraft design environment. In 1992 Aerospace
Design Conference, Irvine, CA, February 1992. AIAA-92-1188.
[3] E. E. Bouchard, G. H. Kidwell, and J. E. Rogan. The application of artificial
intelligence technology to aeronautical system design. In AIAA/AHS/ASEE Aircraft Design
Systems and Operations Meeting, Atlanta, Georgia, September 1988. AIAA-88-4426.
[4] G. Cerbone. Machine learning in engineering: Techniques to speed up numerical
optimization. Technical Report 92-30-09, Oregon State University Department of Computer
Science, 1992. Ph.D. Thesis.
[5] Andrew Gelsey, Mark Schwabacher, and Don Smith. Using modeling knowledge to
guide design space search. In Fourth International Conference on Artificial Intelligence
in Design '96, 1996. To appear.
[6] Philip E. Gill, Walter Murray, and Margaret H. Wright. Practical Optimization.
Academic Press, London; New York, 1981.
[7] David E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning.
Addison-Wesley, Reading, Mass., 1989.
[8] D. Hoeltzel and W. Chieng. Statistical machine learning for the cognitive selection of
nonlinear programming algorithms in engineering design optimization. In Advances in
Design Automation, Boston, MA, 1987.
[9] Cezary Janikow and Zbigniew Michalewicz. An experimental comparison of binary
and floating point representations in genetic algorithms. In Proceedings of the Fourth
International Conference on Genetic Algorithms, pages 31-36. Morgan Kaufmann, 1991.
[10] C. Lawrence, J. Zhou, and A. Tits. User's guide for CFSQP version 2.3: A C code for
solving (large scale) constrained nonlinear (minimax) optimization problems, generating
iterates satisfying all inequality constraints. Technical Report TR-94-16r1, Institute for
Systems Research, University of Maryland, August 1995.
[11] Jorge J. More and Stephen J. Wright. Optimization Software Guide. SIAM,
Philadelphia, 1993.
[12] P. Papalambros and J. Wilde. Principles of Optimal Design. Cambridge University
Press, New York, NY, 1988.
[13] Anthony L. Peressini, Francis E. Sullivan, and J. J. Uhl, Jr. The Mathematics of
Nonlinear Programming. Springer-Verlag, New York, 1988.
[14] D. Powell. Inter-GEN: A hybrid approach to engineering design optimization. Technical
report, Rensselaer Polytechnic Institute Department of Computer Science, December
1990. Ph.D. Thesis.
[15] D. Powell and M. Skolnick. Using genetic algorithms in engineering design optimization
with non-linear constraints. In Proceedings of the Fifth International Conference on
Genetic Algorithms, pages 424-431, University of Illinois at Urbana-Champaign, July
1993. Morgan Kaufmann.
[16] J. Sobieszczanski-Sobieski, B. B. James, and A. R. Dovi. Structural optimization by
multilevel decomposition. AIAA Journal, 23(11):1775-1782, November 1985.
[17] Siu Shing Tong, David Powell, and Sanjay Goel. Integration of artificial intelligence
and numerical optimization techniques for the design of complex aerospace systems. In
1992 Aerospace Design Conference, Irvine, CA, February 1992. AIAA-92-1189.
[18] Garret N. Vanderplaats. Numerical Optimization Techniques for Engineering Design:
With Applications. McGraw-Hill, New York, 1984.
[19] Brian C. Williams and Jonathan Cagan. Activity analysis: the qualitative analysis of
stationary points for optimal reasoning. In Proceedings, 12th National Conference on
Artificial Intelligence, pages 1217-1223, Seattle, Washington, August 1994.
[20] Alden Wright. Genetic algorithms for real parameter optimization. In The First
Workshop on the Foundations of Genetic Algorithms and Classifier Systems, pages 205-218,
Indiana University, Bloomington, July 1990. Morgan Kaufmann.
[21] G.-C. Zha, D. Smith, M. Schwabacher, K. Rasheed, A. Gelsey, and D. Knight. High
performance supersonic missile inlet design using automated optimization. In AIAA
Symposium on Multidisciplinary Analysis and Optimization '96, 1996. To appear.
