
For action:

M. Tourbier 64230 TCR RUC 4 31


M. Biaggio 64230 TCR RUC 4 45
M. Egee 64230 TCR RUC 4 29
M. Buchou 64230 TCR RUC 4 30
M. Blain 64260 TCR DIA 0 02
M. Caquot 12250 TCR PLU 1 10
M. Awade 64130 TCR LAB 0 35
M. Arai Nissan
M. Torigaki Nissan

For information:
M. Guimier* 64230 TCR RUC 4 47
M. Hebrard* 64260 TCR RUC 4 24
M. Moussy* 64140 TCR LAB 0 50
M. Dornez* 65301 TCR AVA 1 08
M. Bonnet* 64260 TCR RUC 4 05
M. Bonte* 64230 TCR RUC 4 45
M. Lory* 64230 TCR RUC 4 29
M. Nedelec* 64230 TCR RUC 4 30

Internal:
M. Demonsant 64230 TCR RUC 4 31

For archiving:
Mme Hery 64230
* : recipients of the condensed version of the document. The full
technical report (61 pages) is available on request.
DR / OPTIMISATION PRODUIT PROCESS
Groupe Qualité Statistique et Optimisation

L. JUILLEN
Tel: (01.34.9)5.74.60
Fax: (01.34.9)5.77.73
Service 64230
e-mail: Lionel.Juillen@renault.com
API: TCR RUC 4 31


H10.400
Optimisation de pièces mécaniques (Optimization of mechanical parts)

08/10/2000

Technical Report n°64230/H1/2000.040
Theme: Theoretical reference manual for the Adaptive Response Surface Methodology (A-RSM) software

Objective: For your information

Expected feedback: Remarks and questions

Summary - Conclusions
This technical report describes a new global optimization strategy (A-RSM) built to solve engineering
design problems in a numerical simulation context:
- we are dealing with expensive black-box finite element computations,
- computer model functions are sometimes numerically noisy or accurate to only a few digits,
- the total number of numerical experiments is severely limited by designers' lead times,
- we need methods that search more globally than existing ones, in order to reach a better solution.

Our Matlab software can easily be linked with external finite element solvers and is available from the author.
We have already used it with some success on several real-world design optimization problems:
- mass minimization of a crash energy absorption device for car/truck crash compatibility purposes
(TR n°64230/H1/2000.041),
- optimization of the paint thickness distribution for an electrostatic paint process (TR n°64230/H1/2000.060),
- calibration of CFD Wave models (TR n°64230/H1/2000.056).
In the near future, we will continue to apply our method to engineering design problems as they arise.
We will embed our algorithm in the open software architecture iSight, which will become the main tool for Renault
and Nissan's optimization applications (TR n°64230/H1/2000.057).

Lionel Juillen



Summary - Conclusions (translated from the French résumé)

This technical note describes a new global optimization method (A-RSM) designed to solve
design problems in a numerical simulation context:
- we often deal with very expensive, black-box finite element simulations,
- the numerical models often contain numerical noise, and their accuracy is limited to a few decimal places,
- the maximum number of numerical simulations we are allowed is severely limited by the very short lead
times of designers, who expect a quick turnaround,
- we need methods that explore the design parameter space more globally than classical methods do,
in order to obtain larger gains.

Our software, written in Matlab, can easily be interfaced with finite element codes, and
is available on request from the author.
We have already used it successfully on industrial optimization problems:
- mass minimization of an underrun protection device (DPEA) for RVI (technical report
n°64230/H1/2000.041),
- optimization of the paint thickness distribution for an electrostatic deposition process (technical report
n°64230/H1/2000.060),
- calibration of CFD models in fluid mechanics (technical report n°64230/H1/2000.056).

We plan to continue applying our method to other design problems submitted to us.
We plan to integrate the A-RSM methodology into the iSight software, which will soon become the main
tool for optimization applications at Renault and Nissan (technical report n°64230/H1/2000.057).

Lionel Juillen
Introduction

In the automotive industry, as in many others, there is a growing emphasis on designing
products using mathematical and/or computer models. Computer models make it easier to
explore alternative designs and reduce the need for expensive prototypes. This approach is
often made difficult by the fact that computer model runtimes are very long. Designing
optimization algorithms that take such practical constraints into account with maximum efficiency
is a great challenge.

In such a context, we can note that:
- we are very often dealing with expensive black-box finite element solvers,
- computer model functions are sometimes numerically noisy or accurate to only a few
digits,
- function gradients are generally not available,
- the total number of numerical experiments is limited by designers' lead times,
- we need methods that search more globally than classical methods (descent ones, for
example), in order to reach a better solution.

In this technical report, a new approach based on an adaptive response surface methodology (A-RSM),
a Bayesian procedure in statistical terms, is designed and tuned for engineering
optimization applications with deterministic computer models.
This research work was mainly inspired by the work of many researchers: Trosset M.W., Torczon V.,
Schonlau M., Welch W.J., Koehler J.R., Owen A.B., Jones D.R. and Sacks J., among others.

Several real-world design problems have recently been solved with our A-RSM algorithm.
The results obtained by Mr. Lidon (GECI company) in designing an automotive mechanical part are
described in technical report n°64230/H1/2000.041. The design of an electrostatic paint
process has just been completed (technical report 64230/H1/2000.060). Our methodology is also
embedded in a software tool used to calibrate CFD Wave models (technical report
64230/H1/2000.056).

The basic idea of our optimization algorithm is to iteratively build cheaper surrogate models of the
expensive black-box solvers involved in the simulation of the physical phenomena, and at each
iteration to find the optimum of a simplified optimization problem. The surrogates are numerical
approximations whose quality improves as the number of numerical experiments grows.
A similar idea is also used by Yves Tourbier at Renault (Research Division, Sce 64230), within his
StatOpt project for Diesel engine calibration purposes.
In our optimization procedure, kriging metamodels are used to approximate the expensive function
outputs.

We start this report by briefly describing general metamodeling techniques (a metamodel is a model of a
model), with a focus on the kriging statistical model and the way its parameters are computed.
In the second part we describe the adaptive procedure and define a merit
function that balances the search between a local and a more global one.
Then, we describe the three main ways to use the A-RSM software: non-iterative and iterative building
of kriging metamodels, global unconstrained optimization, and global constrained optimization.

1) Metamodeling techniques

Building approximations of simulation codes involves the following steps:
- choosing an experimental design to sample the computer code,
- choosing a model to represent the data,
- fitting the model to the observed data,
- validating the model (optional, but risky if skipped).

There is a wide variety of options for each of these steps, as can be seen in the taxonomy figure
taken from [3].


In our case, the model class implemented in the A-RSM software is the kriging one.
Designs of experiments can be classical ones (factorial, central composite, orthogonal arrays) or
space-filling designs (random, Latin hypercube or maximin designs).

2) Kriging metamodels
Kriging is based on a semi-parametric model which allows much more flexibility than parametric
models, since no specific model structure has to be assumed.
Kriging postulates a combination of a polynomial model (a linear regression parametric model) and
departures (non-parametric) of the following form:

y(x) = f(x) + Z(x)
(response = linear model + departure)

where:
y(x) is the unknown function of interest,
f(x) is a known polynomial function of x,
and Z(x) is the realization of a Gaussian stochastic process with mean zero, variance \sigma_z^2,
and non-zero covariance.

The f(x) term is similar to the polynomial model in a response surface, providing a global model
of the design space (a trend). In many cases f(x) is simply taken to be a constant term, but in the
A-RSM procedure the user can take f(x) as a linear, linear with interactions, pure quadratic or
complete quadratic polynomial. While f(x) globally approximates the design space, Z(x) creates
localized deviations so that the kriging model interpolates the n_s sampled data points. We suppose
that Z(x) depends on the distances between points, too.

The covariance matrix of Z(x), which dictates the local deviations, is given by:

Cov[Z(x^i), Z(x^j)] = \sigma_z^2 \, [R(x^i, x^j)],

where R = [R(x^i, x^j)] is the correlation matrix and R(x^i, x^j) is the correlation function between any
two of the n_s sampled data points x^i and x^j. R is an n_s x n_s symmetric, positive definite matrix
with ones along the diagonal. The correlation function R(x^i, x^j) is specified by the user; it quantifies
how quickly and smoothly the function moves from point x^j to point x^i, and determines how the
metamodel fits the data.
For A-RSM, R(x^i, x^j) has been chosen to be:

R(x^i, x^j) = \exp\left( - \sum_{k=1}^{n_{dv}} \theta_k \, |d_k|^{p_k} \right),

where n_dv is the number of design variables, (\theta_k, p_k) are the unknown correlation parameters
used to fit the model, and d_k = x_k^i - x_k^j is the distance between the k-th components of the sample
points x^i and x^j. p_k = 2 is a common choice for R(x^i, x^j).
There are other forms of correlation functions; see reference [3] for more details.
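As an illustration of this formula, here is a minimal Matlab sketch (written for this report, not part of the A-RSM distribution; the function name is illustrative) that assembles the correlation matrix R for a set of sample points, given the parameters theta_k and p_k:

function R = corr_matrix(X, theta, p)
% X     : ns x ndv matrix of sample points (one row per experiment)
% theta : 1 x ndv vector of correlation parameters theta_k
% p     : 1 x ndv vector of exponents p_k (p_k = 2 is the common choice)
ns = size(X, 1);
R  = ones(ns);                         % ones along the diagonal
for i = 1:ns-1
    for j = i+1:ns
        d = abs(X(i,:) - X(j,:));      % componentwise distances |d_k|
        R(i,j) = exp(-sum(theta .* d.^p));
        R(j,i) = R(i,j);               % R is symmetric
    end
end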
Once a correlation function has been selected, predicted estimates \hat{y}(x) of the response y(x) at
untried values of x are given by:

\hat{y}(x) = f^t(x)\,\hat{\beta} + r^t(x)\,R^{-1}\,(y - F\hat{\beta}),

where y is a vector of length n_s which contains the values of the response at each sample point, and
r(x) is the correlation vector between x and the sampled points:

r(x) = \left[ R(x, x^1), R(x, x^2), \ldots, R(x, x^{n_s}) \right]^t,

with R = [R(x^i, x^j)] and V = \sigma_z^2 \, R.
We define the matrix F from the regression terms evaluated at the sample points:

F = \begin{bmatrix} f^t(x^1) \\ \vdots \\ f^t(x^{n_s}) \end{bmatrix},

and we have:

\hat{f}(x) = f^t(x)\,\hat{\beta}   and   \hat{Z}(x) = r^t(x)\,R^{-1}\,(y - F\hat{\beta}).
The mean-square error (MSE) of the prediction (or process variance) is:

\phi^2(x) = \sigma_z^2 - \left[ f^t(x)\;\; r^t(x) \right] \begin{bmatrix} 0 & F^t \\ F & V \end{bmatrix}^{-1} \begin{bmatrix} f(x) \\ r(x) \end{bmatrix}.

As \hat{y}(x) is an interpolator, the conditions \phi^2(x^k) = 0 are satisfied at every sampled point x^k.
The kriging metamodel belongs to the family of Best Linear Unbiased Predictors (BLUP).
The prediction \hat{y}(x) depends on the parameters \theta_k and p_k of the covariance (or correlation)
function R. Assuming that the stochastic process Z(x) is Gaussian, the \theta_k and p_k parameters can
be estimated by maximum likelihood, together with \beta and \sigma_z^2. We have to identify the
metamodel parameters that will give us the best prediction by maximizing the likelihood ([14]).
To achieve this goal, we have to maximize the following function with respect to \theta_k and p_k:

MLE = -\tfrac{1}{2} \left( n_s \ln(\hat{\sigma}_z^2) + \ln(\det R) \right),

with

\hat{\sigma}_z^2 = \frac{(y - F\hat{\beta})^t \, R^{-1} \, (y - F\hat{\beta})}{n_s}   and   \hat{\beta} = (F^t R^{-1} F)^{-1} F^t R^{-1} y,

where \hat{\sigma}_z^2 and \hat{\beta} are the maximum likelihood estimators of \sigma_z^2 and \beta
respectively.
The optimization problem of maximizing the likelihood function is a hard global one.
In our case, we have decided to approximate the solution with a genetic algorithm (other global
optimization algorithms could be used, for example the Alienor algorithm, see reference [17]).
Solving the likelihood optimization problem exactly is not necessary: we found that a reasonably
good solution is enough to build a correct kriging metamodel.
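The following sketch ties the pieces together: given a candidate (theta, p), it computes beta_hat, sigma2_hat and the concentrated log-likelihood to be maximized, then evaluates the predictor at a new point. It is an illustration of the formulas above written for this report, not the actual metam.m code, and it reuses the corr_matrix sketch given earlier:

function [mle, yhat] = krig_mle_and_predict(X, y, F, fx, x, theta, p)
% X : ns x ndv sample points       y  : ns x 1 observed responses
% F : ns x q regression matrix [f'(x^1); ...; f'(x^ns)]
% fx: q x 1 regression terms f(x) at the new point x (1 x ndv row vector)
ns     = size(X, 1);
R      = corr_matrix(X, theta, p);           % see the sketch in section 2
Rinv   = inv(R);                             % A-RSM uses an SVD for robustness
beta   = (F' * Rinv * F) \ (F' * Rinv * y);  % MLE of the trend coefficients
res    = y - F * beta;
sigma2 = (res' * Rinv * res) / ns;           % MLE of the process variance
mle    = -(ns * log(sigma2) + log(det(R))) / 2;  % to maximize w.r.t. theta, p
r = zeros(ns, 1);                            % correlation vector r(x)
for i = 1:ns
    d    = abs(x - X(i,:));
    r(i) = exp(-sum(theta .* d.^p));
end
yhat = fx' * beta + r' * Rinv * res;         % kriging predictor y_hat(x)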

Multiples of the square root of the mean square error at a location x provide an
approximate bound on the actual prediction error of the kriging metamodel at x. The mean square error
is high when prediction points are far away from any experimental design point; this notion of distance
is measured in terms of the weighted distance of the correlation function.
For example, \hat{y}(x) \pm 2\sqrt{MSE(x)} is an envelope that gives 95% confidence in the model.
This means that, for a location x, if the unknown function were truly generated by a random
function with the postulated trend f^t(x)\,\hat{\beta} and correlation function R(x, x^j), then
approximately 95% of the kriging models (\theta_k, p_k) that go through the observed design points
would lie in that envelope (see reference [10]). Of course, the real unknown function is not a random
function, but we can make the strong hypothesis that the approximated error function will not be far
from the real one in the case of continuous and differentiable functions. We see kriging approximation
models as a way to expand a polynomial function basis with an exponential one that depends on the data
(sampling points).
To be careful, we must use these ideas and hypotheses in an adaptive context, and that is the
purpose of the A-RSM procedure.

3) A-RSM Software - Basic principles

As said in the introduction, A-RSM is an iterative procedure whose basic algorithm is described
in the following flow chart:

- Build an initial plan of experiments.
- Build an approximation interpolating the initial experiments.
- For i = 1 to the maximum number of experiments:
    - Make a new experiment at the location that minimizes a merit function: a weighted
      combination of the surrogate model and the error approximation.
    - Build an approximation that interpolates the augmented plan of experiments.
    - Update the relative weights of the merit function to balance between reducing the
      approximation error and minimizing the statistical (kriging) model.
- End loop on i.
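In Matlab-like pseudo-code (an illustrative sketch; the actual implementation is the metam function documented in Appendix B, and the helper names used here are hypothetical), the loop reads:

% Illustrative skeleton of the A-RSM loop; the helper functions are
% hypothetical names, the real implementation is metam.m (Appendix B).
X  = initial_design;                  % initial plan of experiments
y  = run_solver(X);                   % expensive black-box evaluations
w1 = 0;  w2 = 1;                      % initial merit-function weights
for i = 1:Nf_Maxi
    model = fit_kriging(X, y);        % interpolating kriging metamodel
    % new experiment at the minimizer of the merit function
    % phi(x) = w1*yhat(x) - w2*2*sqrt(MSE(x))
    xnew = minimize_merit(model, w1, w2);
    X = [X; xnew];                    % augment the plan of experiments
    y = [y; run_solver(xnew)];
    [w1, w2] = update_weights(w1, w2);  % shift emphasis from error
end                                     % reduction to objective minimization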

3.1) Merit Function

To balance between global and local search, at each iteration we minimize a merit
function that represents a trade-off between increasing the model precision and minimizing the
approximation of the objective function.
Our choice is:

\phi(x) = w_1 \, \hat{y}(x) - w_2 \, 2\sqrt{MSE(x)}.

What we want is to obtain the smallest value of the objective at a cost less than or equal to our global
budget (of function evaluations).
At the beginning of the procedure, we need to improve the quality of our model drastically; that is
why we give w_2 a value greater than w_1. Then, at the end of each iteration, we decrease w_2 and
increase w_1, to gradually give more importance to minimizing \hat{y}(x).
The initial w_1 is equal to 0 and the initial w_2 is equal to 1. The maximum number of experiments is
limited to Nf_Maxi function evaluations.
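The exact update rule for the weights is an implementation detail of the software; a simple linear schedule consistent with the description above (w1 growing from 0, w2 decaying from 1) would be, at iteration i:

% Hypothetical linear schedule at iteration i of Nf_Maxi (the exact rule
% used by metam.m may differ):
w1 = i / Nf_Maxi;    % weight on the predicted objective, grows towards 1
w2 = 1 - w1;         % weight on the error term, decays towards 0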

3.2) One-dimensional example

We want to minimize the univariate function f(x) on the range [-3.1, 2.1]:

f(x) = 0.0001 \, x \, (x-2)^2 \, (x+2)^2 \, (x+3)

(the graph of this function is shown in figure 1 below).
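In Matlab, this test function can be reproduced directly from the formula above:

f = @(x) 0.0001 .* x .* (x - 2).^2 .* (x + 2).^2 .* (x + 3);
x = linspace(-3.1, 2.1, 400);
plot(x, f(x));       % reproduces the curve of figure 1 (three local optima)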
Figure 1 : graph of the example function
The purpose of this example is to show quickly, on a simple problem, how the A-RSM
methodology works.
We can see that this function admits three local optima.

The global one is located in the interval [-1, -0.5]. Our objective is to find the global optimum
location with a budget limited to ten function evaluations. Since we suppose we have no a-priori
knowledge of the objective function, we decided to start the optimization with only two
experiments (x1 = 0.5 and x2 = 1). For this example, we choose p_k = 2.

On the next pages are shown the metamodels (in blue) built at each iteration, with the real function
(in red) and the envelope (in green).

Figure 2 : first DOE - 2 experiments

We begin with very little information about the function.

Figure 3 : first iteration - nexp = 3
The third point is found far from the first two. It makes the kriging metamodel more informative,
even if we still lack information to achieve a good approximation.

Figure 4 : 2nd iteration - nexp = 4


Figure 5 : 3rd iteration - nexp = 5
We have found the right basin of attraction of the global optimum of the function.

Figure 6 : 4th iteration - nexp = 6
Now the points begin to cluster.

Figure 7 : 5th iteration - nexp = 7

Figure 8 : 6th iteration - nexp = 8

Figure 9 : 7th and last iteration - nexp = 10

Remarks on the A-RSM execution:
We can see in the last figures that as the number of experiments increases, the quality of the
approximation gets better. We can also note, as stated in the A-RSM algorithm definition, that the
points added at each iteration tend to cluster; this is seen in the last three iterations. It is a
drawback: the final metamodel is not necessarily the best one (compare with the true function in
red), but it is the price we sometimes have to pay to seek the optimum location.
With this simple example, we see that the A-RSM heuristic quickly finds the basin of attraction of
the global minimizer of the objective function, and that even with a small number of experiments (10)
the global optimum search is successful.
The main purpose of this small example is to show how the A-RSM algorithm works and its underlying
idea: a global search progressively becoming a local one. This basic principle is the same for
multidimensional optimization problems.




4) Building a Plan of experiments (Matlab API : Read Appendix B)

We need an initial plan of experiments for the first phase of the optimization procedure.
It allows a first simplified global exploration of the design space and is used to build the trend of
the initial approximation model. There are many ways to achieve this task. Some of them can be
taken from the classical theory of design of experiments (cf. [18]): full and fractional factorial
designs, central composite designs, orthogonal arrays and optimal designs (A-, D-, G-, etc.).

4.1) Full Factorial sampling
A good picture is worth many words: the following drawings (figure 10) show
two full factorial plans, for two and three design parameters with five and three levels respectively.

Figure 10 : examples of full factorial samplings
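Such plans can be generated with the LJfullfact function described in Appendix B; for the two-parameter, five-level case of figure 10, a generic equivalent using only built-in Matlab would be:

levels = linspace(-1, 1, 5);          % 5 levels per factor on [-1, 1]
[X1, X2] = ndgrid(levels, levels);    % all 5^2 = 25 combinations
plan = [X1(:), X2(:)];                % one experiment per row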


4.2) Latin Hypercube sampling

A Latin hypercube plan belongs to the space-filling family of plans of experiments.
It is a matrix of n_s rows and n_dv columns. Each column contains the levels 1, 2, ..., n_s randomly
permuted, and the n_dv columns are matched at random to form a Latin hypercube. These designs are
the earliest space-filling experimental designs intended for computer experiments. Due to their
random nature, it is practically impossible to obtain the same Latin hypercube twice, even with the
same parameters. The basic algorithm to generate a Latin hypercube sampling is as follows (taken
from [19]): each of the n_dv parameters (or factors) is associated with a random value between 0 and 1.
A sample of size n_s is formed by selecting the i-th realization of the j-th factor as:

v_j^{(i)} = \frac{\pi_j(i) - 1 + u_j^{(i)}}{n_s},

where:
- \pi_1(\cdot), \pi_2(\cdot), \ldots, \pi_{n_{dv}}(\cdot) are permutations of the integers 1, ..., n_s,
sampled randomly, independently, and with replacement from the set of n_s! possible permutations,
and \pi_j(i) is the i-th element of the j-th permutation;
- the u_j^{(i)} are sampled independently from [0,1].

On the following figures, you can see examples of Latin Hypercube samplings for 2 and 4 design
parameters respectively (figures 11 and 12).

Figure 11 : Latin Hypercube Sampling on [-2.048, 2.048]^2

Figure 12 : Latin Hypercube Sampling on [-2.048, 2.048]^4

Remark: Latin hypercube samplings have stratification properties similar to orthogonal arrays. This
means, as can be seen in the 4D example above, that the projections onto subspaces show a nice
spreading of the point set. This can be useful, for example, when we want to use the sampling to
reduce the number of factors.


4.3) MaxiMin design sampling (for mathematical formulations, read Appendix A)

This type of sampling design is characterized by the fact that we want to maximize the minimum
distance (defined by a vector norm) between any two sample points, thus spreading the points out
as much as possible in the design space. Our motivation for examining this kind of sampling is that
several authors ([3],[10]) have shown that the quality of kriging models improves as the
experiments fill the parameter space.
The following figures show examples of maximin designs in 2- and 4-dimensional spaces
(figures 13 and 14).
Figure 13 : Maximin design 2 parameters, 20 experiments

Figure 14 : Maximin design - 4 parameters, 40 experiments
(symmetric with respect to the diagonal)
5) Some ways to use the A-RSM software (Matlab API: Appendix B)

In this section we look more precisely at different ways to use our A-RSM software tools.
The first one is to build approximation models of expensive numerical models and use these
models for various purposes (visualization, integration, etc.). The other way is to optimize
an objective function, with or without constraints.
We provide a Matlab script (ARSMexample.m in Appendix D) showing many different ways to use the
A-RSM software. In the following paragraphs, we used this piece of Matlab code to generate the
results and the pictures.
A-RSM can be used to build approximation models and to solve global unconstrained optimization
problems. Constrained ones can be solved by means of a penalization function. We will use as our
main example the Rosenbrock function, whose rather bad behavior is well known. For visualization
convenience, we restrict our examples to 2-dimensional problems.


5.1) Building response surface

5.1.1 - One-shot construction

The plan of experiments is limited to twenty-five points. In the following figures, we tried a
maximin design, a random design, a Latin hypercube design and a full factorial plan with 5 levels
per factor.
For each trial, we draw the final metamodel, the quadratic trend, the mean square error and the
true error between the kriging metamodel and the analytical Rosenbrock function. Using a very
refined grid, we also plot the metamodel versus the real function in figures called scatter plots. If the
kriging approximation is good, all points should be close to the line y = x.
In this study, we show the effect of the choice of the initial plan of experiments (maximin
design, Latin hypercube design, random design and full factorial design) on the quality of the
approximation. We also used A-RSM to adaptively build an approximation model by minimizing
the approximation error at each step.
Figure 15 : maximin design Rosenbrock function

5.1.1.1) Maximin design
Figure 16 : maximin design kriging metamodel

Figure 17 : maximin design - quadratic trend

The point pattern is almost symmetric, so the trend is also found symmetric with respect to the
vertical axis.
Figure 18 : maximin design - Mean Square Error (MSE)

Figure 19 : Maximin design True absolute Error

We can see from figures 18 and 19 that the mean square error prediction is a quite good approximation
of the true difference between the kriging metamodel and the Rosenbrock function. The mean square
error prediction points us to the area of maximum error (in absolute value).

Figure 20 : Maximin design - scatter plot

The scatter plot confirms that the approximation obtained is fairly good: all the points of a
refined grid are close to the first diagonal line.



5.1.1.2) Latin Hypercube design
Figure 21 : latin hypercube sampling kriging metamodel

As the point pattern is less regular and less spread out, the approximation is not as good as with the
maximin design.
Figure 22 : latin hypercube sampling quadratic trend


Figure 23 : latin hypercube sampling MSE

Figure 24 : latin hypercube sampling True Absolute Error

Mean square error prediction gives us the general trend of the true error.


Figure 25 : Latin Hypercube sampling - scatter plot

The metamodel is good for a lot of points, but underestimates the Rosenbrock function in some
areas.
5.1.1.3) Random design

Figure 26 : random sampling kriging metamodel

The random design does not possess the geometric properties of the Latin hypercube and maximin
designs, so the kriging metamodel is not as good as the others.

Figure 27 : random sampling quadratic trend

The point pattern is not symmetric, so neither is the quadratic trend.
Figure 28 : random sampling MSE


Figure 29 : random sampling True absolute Error

The mean square error prediction is poor. It is close to 0 near the experiments, but its shape does not
really match that of the true error.

Figure 30 : random sampling - scatter plot

The kriging metamodel underestimates and overestimates the true function over large portions of the
design parameter space.


5.1.1.4) Full factorial design

Figure 31 : Full factorial design (5 levels) kriging metamodel
Figure 32 : Full factorial design (5 levels) quadratic trend

The point pattern is symmetric, so the trend and the metamodel show the same structure.

Figure 33 : Full factorial design (5 levels) MSE
Figure 34 : Full factorial design (5 levels) True Absolute Error

The mean square error prediction is biased by the regularity of the point pattern, but it succeeds in
finding some of the band structure of the true error.



Figure 35 : Full factorial design (5 levels) scatter plot

The approximation obtained is fairly good: the metamodel neither overestimates nor underestimates
the true function too much.


5.1.2) Adaptive Construction :

In this section, we present results obtained by using the A-RSM software to sequentially improve the
quality of the kriging metamodel (by reducing the maximum mean square error).

We start with a small number of experiments (10). Our global budget is limited to 25
experiments, and we let the A-RSM methodology build the best metamodel.
As in the previous sections, we take a full quadratic trend.
Figure 36 : maximin design (10) kriging metamodel

Ten points are not enough to build a good approximation of the Rosenbrock function!




Figure 37 : maximin design (10) MSE

Figure 38 : maximin design (10) True absolute error


Figure 39 : maximin design (10) : scatter plot

This confirms that our starting approximation is not a good one. We need to improve it!
Figure 40 : A-RSM (25) kriging metamodel

The final result looks good. A lot of points have been added on the borders of the design space:
to build a good approximation, it must first be good on the borders.


Figure 41 : A-RSM (25) - MSE

Figure 42 : A-RSM (25) True absolute error

The final mean square error prediction looks like the true error distribution, with
approximately the same order of magnitude.


Figure 43 : A-RSM (25) scatter plot

We have a good final approximation.

5.1.3) Conclusions on building kriging approximations

We examined several one-shot methods, which differ only in the type of experimental design, and an
adaptive method, to build an approximation of a black-box function.
For one-shot methods, maximin designs and Latin hypercube designs seem to be the best
choices. The full factorial design is not bad, but it becomes too expensive for large numbers of
parameters and levels. Maximin designs are better than Latin hypercube samplings but are
considerably harder to find (see Appendix A). If you are in a hurry, Latin hypercube sampling is
the best compromise between speed and quality of approximation.
Adaptive construction was found to be nearly as good as the one-shot maximin design method, although
the sequential process was a bit slower. Adaptive construction is the best choice when there is no
alternative: when we are obliged to start with a small number of experiments, or when we begin with
an existing sampling.



5.2) Unconstrained global optimization

Our A-RSM software tries to solve the global unconstrained optimization problem directly, given a
limited budget of experiments.
Again, the 2D Rosenbrock function is used.
We start with a small number of experiments (10). Our global budget is limited to 25
experiments.

Figure 44 : initial metamodel (10 maximin design)

We ran the A-RSM methodology to minimize the Rosenbrock function with budgets of 25, 30, 35 and 40
experiments.
With 25 experiments, we obtained the following final metamodel (figure 45) :
Figure 45 : A-RSM final metamodel budget : 25

With 30 experiments, we obtained (figure 46) :

Figure 46 : A-RSM final metamodel (budget =30)

With a budget of 35 experiments :
Figure 47 : A-RSM final metamodel budget =35

Points tend to be located along the narrow valley.


And with a budget of 40 experiments :
Figure 48 : A-RSM final metamodel (budget =40)
Points concentrate more and more in the valley, and tend to form a cluster in the area of the optimum
([1, 1]).

The results obtained are collected in the following table:

Budget   Minimum obtained    Xoptimum
25       0.14210075272397    (1.26878085463878, 1.63623544636559)
30       0.12302666156750    (0.85731909196360, 0.70295404984306)
35       0.10461430741770    (1.30212529286577, 1.70707783859490)
40       0.00229023035194    (1.01304450410719, 1.02165474414081)
Figure 49 : Convergence of the algorithm

From Appendix C, we already know that the true optimum of the Rosenbrock function is located at
[1, 1] and that its optimal value is 0. We can see that as the budget increases, the best point of the
A-RSM procedure gets closer to the optimal location (figure 49).

We now compare the relative efficiency of our method to a classical SQP method (Sequential
Quadratic Programming, from the Matlab Optimization Toolbox) by limiting the maximum number of
function evaluations of the SQP algorithm.

The results obtained are collected in the following table:

Budget   Minimum obtained    Xoptimum
25       0.63237333845480    (0.20542085573013, 0.04538727644189)
30       0.54577720628813    (0.29421444849716, 0.06473464492223)
35       0.33547639050801    (0.44037028611572, 0.17899582685641)
40       0.24058283370509    (0.51641572310708, 0.25848210157977)

(starting point = [-2, 1], f = 901)

We can now draw, for each method, the results obtained versus the cost of the search (number of
function evaluations only) on the same figure (figure 50).

Figure 50: SQP (blue) vs A-RSM (green) on the unconstrained Rosenbrock function

For the SQP method, we reported the last point, which is very close to the solution of the problem.
We did not report CPU times, because we assume the function evaluation time is very long (this is not
the case for the Rosenbrock function, but imagine crash finite element analyses instead). The A-RSM
method seems to converge faster than the SQP method, but we do not know whether SQP would
reach the same level of precision with the same number of function evaluations. We can partially
conclude that A-RSM is better when the number of function evaluations is severely limited.

5.3) Constrained global optimization

Solving the general constrained global optimization problem with our software requires transforming it
into an unconstrained one. Several basic techniques exist. We show the simplest one, which
involves the use of a penalization function (or merit function) that adds a measure of the constraint
violation to the objective.
Figure 51 : True penalized function with initial maximin design (10)

Figure 52 : A-RSM final metamodel - 25 experiments

Figure 53 : A-RSM final metamodel 30 experiments

Figure 54 : A-RSM final metamodel 35 experiments
Figure 55 : A-RSM final metamodel 40 experiments


Figure 56 : A-RSM final metamodel - 50 experiments

Budget   Minimum obtained    Xoptimum                                Norm(Xoptimum)
25       0.86015889910006    (0.72523220920637, 0.43738063285649)    0.84691414870003
30       0.14738770748001    (0.66357357061054, 0.45882446797913)    0.80675261141762
35       0.44168559739016    (0.75368953233007, 0.62977446547076)    0.98217299316510
40       0.08312147663122    (0.71215485959505, 0.50553159647533)    0.87334228059779
50       0.09290510254356    (0.69812568228276, 0.48316402184951)    0.84901527681931

The solution of the problem is the point that minimizes the Rosenbrock function on a circle of unit
radius, so the distance of the solution from the origin (norm(Xoptimum)) must be close to 1.

Figure 57 : Minimum value obtained for the constrained Rosenbrock optimization problem
Figure 58 : Distance to the constraint boundary

We can see that we get closer to the solution as we increase the budget, but the convergence to the
minimum is not monotone. We solved a transformed version of the constrained problem via a
penalization function that combines two opposing objectives: we have to minimize the
Rosenbrock function and minimize the constraint violation at the same time.
As the functions are highly nonlinear, the convergence of the algorithm behaves strangely in
figures 57 and 58: when the real objective function decreases, the distance to the
boundary increases. The convergence is not monotone, and had we run additional
optimizations with larger budgets (e.g. 60, 70, ...), we would presumably have seen the distance to the
constraint boundary come closer to 0.

We can compare the relative efficiency of our method to a classical SQP method (from the Matlab
Optimization Toolbox) by limiting the maximum number of function evaluations of the SQP
algorithm.

The results obtained are collected in the following table:
Budget   Minimum obtained    Xoptimum                                Norm(Xoptimum)
25       0.64561595718386    (0.19796274012057, 0.03433930735380)    0.20091897497644
30       0.62484498672076    (0.22291646955880, 0.03520514705707)    0.22567931845845
35       0.55402471171105    (0.30302374026579, 0.06569891490627)    0.31006408141629
40       0.46207156712246    (0.37159444144307, 0.11216370573736)    0.38815348226716
50       0.13865390401953    (0.62812620386744, 0.39263521634420)    0.74074620559175

(starting point = [-2, 1], f = 2428.86)

We can now draw, for each method, the results obtained versus the cost of the search (number of
function evaluations only) on the same figure (figure 59).
Figure 59: A-RSM (green) vs SQP (blue) on the constrained Rosenbrock function

We did not report CPU times, because we assume the function evaluation time is very long. The final
result obtained by the SQP method is drawn on the picture.
A-RSM converges faster than the SQP algorithm in the [30, ..., 60] budget range for this problem.
The convergence of A-RSM is not monotone (but its theory does not claim it should be), and as
the budget increases (beyond 50), its convergence would be as slow as the SQP method's.



6) Conclusions

In this report we gave many examples (Appendix D) to show how to use the A-RSM Matlab software
and its components (DOE, visualization, optimization, approximation).
We presented a new global optimization method dedicated to a certain class of problems
whose properties are:
- small to moderate number of design parameters (< 30),
- expensive black-box function evaluations,
- numerically noisy or low-precision functions,
- total number of function evaluations limited by lead time or cost.

This method is quite new, even if its theoretical components are not (kriging, design of
experiments, Bayesian heuristics, etc.).

We showed on a very small example that A-RSM convergence can be superior to good
standard optimization methods (e.g. SQP) when we are severely constrained by time or budget.
To the author's knowledge, this kind of method was not, and at the time of writing still is not,
available in commercial software. Industrial applications have already been carried out (technical
reports 64230/H1/2000.041, 64230/H1/2000.056 and 64230/H1/2000.060), but many others remain to
be done.
The A-RSM methodology is not a method that can solve every problem. It has its weaknesses,
and the author thinks that the best way to find a good approximation of the global optimum in a
numerical simulation context is to combine A-RSM, to identify the most interesting areas of the
design space, with a local optimization method (SQP or FAIPA).

At Renault, our plan is to embed the A-RSM methodology in the iSight software to gain efficiency
(the large overhead computations increase exponentially with the number of parameters and the number
of experiments) and to make it available to a large number of users.

Research work still needs to be done:
- improve the efficiency of the method for larger sets of design parameters;
- nowadays, some finite element codes provide gradients (MSC/Nastran, Radioss,
Sense) and automatic differentiation is an active research domain; it would be useful to build
kriging metamodels with additional information such as gradients;
- adapting the A-RSM methodology to mixed integer nonlinear programming (MINLP) and
global constrained optimization would also be of great interest.

From an application point of view, shape optimization problems for mechanical parts with costly
physical simulations (CFD, acoustics, crash) are well suited to the A-RSM algorithm.

Bibliography

[1] Jones D.R., Schonlau M., Welch W.J.
Efficient Global Optimization of expensive black-box functions
Journal of Global Optimization, 1998

[2] Schonlau M.
Computer experiments and Global Optimization
PhD thesis, University of Waterloo, 1997

[3] Simpson T.
A concept exploration method for product family design
PhD thesis, Georgia Institute of Technology, 1998

[4] Torczon V., Trosset M.W.
Using approximations to accelerate engineering design optimization
7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization,
pp 738-748, 1998

[5] Torczon V., Trosset M.W.
Numerical optimization using computer experiments
Technical report 97-02, Department of Computational & Applied Mathematics, Rice University,
1997

[6] Trosset M.W.
Optimization on a limited budget
Proceedings of the Section on Physical and Engineering Sciences, American Statistical Association,
1998

[7] Booker A.J.
Examples of surrogate modelling of computer simulations
7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization,
pp 118-128, 1998

[8] Powell M.J.D.
Recent research at Cambridge on radial basis functions
Research Report DAMTP 1998/NA05, Cambridge University, 1998

[9] Guttman H.M.
A radial basis function for global optimization
International Workshop on Global Optimization, Florence, Italy, 1 October 1999

[10] Koehler J.R., Owen A.B.
Computer experiments
in: Ghosh, Rao, Eds., Handbook of Statistics, Volume 13, Elsevier Science, 1996

[11] Barthelemy P., Haftka R.T.
Approximation concepts for optimum structural design - A review
Structural Optimization, Vol 5, pp 129-144, 1993

[12] Cox D.D., John S.
SDO: a statistical method for global optimization
in: Alexandrov, Hussaini, Eds., Multidisciplinary Design Optimization: State of the Art,
SIAM, 1997

[13] Sacks J., Welch W.J., Mitchell T.J., Wynn H.P.
Design and analysis of computer experiments
Statistical Science, Vol 4, pp 409-435, 1989

[14] Walter E., Pronzato L.
Identification de modèles paramétriques à partir de données expérimentales
Masson, 1994

[15] Costa J.P., Pronzato L., Thierry E.
A comparison between kriging and radial basis function networks for nonlinear prediction
in: Proc. NSIP'99 (IEEE EURASIP Workshop on Nonlinear Signal and Image Processing),
paper n°155, 1999
http://www.i3s.unice.fr/~pronzato/biblio.html

[16] Zhigljavsky A.A.
Theory of global random search
Kluwer Academic Publishers, 1991

[17] Delbauve A.
La transformation réductrice ALIENOR. Application à l'optimisation globale et aux calculs
d'intégrales multiples
Rapport de stage effectué chez Renault (DR Service 64230), Université d'Orsay, 2000

[18] Myers R.H., Montgomery D.C.
Response Surface Methodology - Process and Product Optimization using designed experiments
John Wiley & Sons, 1995

[19] Gentle J.E.
Random number generation and Monte Carlo methods
Springer-Verlag, 1998

[20] Trosset M.W.
Approximate Maximin distance designs
Proceedings of the Statistical Computing Section, ASA, 1999


Appendix A :
MaxiMin Designs - Mathematical Formulations

We suppose we have n_dv factors and n_s sampling points.
The total number of variables n_var for this problem is equal to n_dv times n_s.
Let dist(x^i, x^j) denote the distance between points x^i and x^j.

Basic Maximin Formulation:

Maximize  min_{i<j} dist(x^i, x^j),

subject to:
vlb <= x^i, x^j <= vub.

This problem is nonsmooth and very difficult to solve due to the nonconvexity of the
objective function. It requires algorithms such as simulated annealing, genetic algorithms or random
searches.

Formulation 1 ([20]):
We add an artificial variable r.

Maximize r (or minimize -r),

subject to:
2r - dist(x^i, x^j) <= 0,
vlb <= x^i, x^j <= vub.

This formulation gives us a smooth objective function, but with the additional complexity of an extra
parameter and a lot of nonlinear constraints. The number of these constraints is equal to:

n_var (n_var - 1) / 2.

So it grows rapidly with n_dv and n_s. We expect many constraints to be active at the
optimum, and the feasible domain may not be convex, so standard optimization algorithms
based on KKT optimality conditions and Newton iterations may be trapped in local optima.

Michael Trosset's Formulation ([20]):

Minimize  \sum_{i > j} \phi(\| x^i - x^j \|)   with   \phi(t) = \log\left( 1 + \frac{1}{t^{\lambda}} \right),

subject to:
vlb <= x^i, x^j <= vub.

This formulation gives us an optimization problem with only bound constraints and the same number of
variables as Formulation 1. It avoids nonlinear constraints, even if the objective function remains
strongly nonlinear.
The objective function used here can be seen as an approximation (or relaxation) of the L-infinity norm.
From a practical point of view, we take \lambda in the range [8,15].
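A Matlab sketch of this objective function (illustrative, with a hypothetical function name; in the toolbox, plan_MAXIMIN2 with method = 14 implements this formulation):

function obj = trosset_obj(xvec, ns, ndv, lambda)
% xvec   : the nvar = ns*ndv design variables stacked in a single vector
% lambda : relaxation exponent, taken in [8,15]
X   = reshape(xvec, ns, ndv);
obj = 0;
for i = 2:ns
    for j = 1:i-1
        t   = norm(X(i,:) - X(j,:));           % pairwise distance
        obj = obj + log(1 + 1 / t^lambda);     % phi(t) summed over pairs
    end
end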
Appendix B : Matlab Software API

Sampling point generation procedures:

Full factorial plan:
The Matlab call to build a full factorial plan of experiments is:

[plan, tab] = LJfullfact(Nb_parameters, Nb_levels)

This function returns the variable plan, the full factorial plan with values in [-1, 1].
The variable tab contains the corresponding levels.
Nb_parameters is the number of variables (n_dv).
Nb_levels is a vector whose component i is the number of levels of parameter i (if it is a scalar,
all parameters have the same number of levels).

Latin Hypercube Sampling:
The Matlab call to build a Latin hypercube sampling is:

[Design] = latinhc(Nb_Experiments, Nb_parameters, lower, upper)

Design is an Nb_Experiments by Nb_parameters matrix whose rows represent the experiments.
Nb_Experiments is the number of points (n_s).
Nb_parameters is the dimensionality of the design space (n_dv).
lower and upper are vectors (size: Nb_parameters) filled respectively with the lower and upper
bounds of the design parameters.

Maximin Design:

The plan_MAXIMIN2 Matlab function is a general procedure to compute maximin designs:

[plan_maximin, maximin] = plan_MAXIMIN2(Nb_Experiments, ...
                                        Nb_parameters, ...
                                        lower, ...
                                        upper, ...
                                        init, ...
                                        funRESTRAINT, ...
                                        method, ...
                                        penalization, ...
                                        itermax)

plan_maximin is the maximin design matrix.
Nb_Experiments is the number of points (n_s).
Nb_parameters is the dimensionality of the design space (n_dv).
lower and upper are vectors (size: Nb_parameters) filled respectively with the lower and upper
bounds of the design parameters.
init is a vector of 2 elements that must be set to [1 0] (for example: init = [1 0]).
funRESTRAINT is the name of a Matlab function that returns the values of a set of constraint
equations (useful only in the case of a nonlinearly constrained design space). Set it to [] in the
unconstrained case.
method: number corresponding to the mathematical optimization algorithm used.
We recommend:
method = 7  (random search on the basic maximin formulation (Appendix A)),
method = 12 (constr (Matlab Optimization Toolbox SQP) with Formulation 1),
method = 14 (constr with the Trosset formulation).
penalization: penalization coefficient. Set it to a large value in the case of a constrained design
space (e.g. 1e4), otherwise 0.
itermax: maximum number of iterations for the chosen algorithm.

Remark: note that the third method is several orders of magnitude slower than the first two.

A-RSM Matlab procedure call description (metam.m):

function [xopt, fxopt, mseopt, ...
          plan_final, y_final, ...
          trend_final, theta_final, ...
          sigma2_final, Rinv_final, ...
          Fn_final, beta_final, bet_final] = ...
    metam(plan, y0, vlb, vub, funobj, indf, indg, ...
          gradfobj, indgradf, indgradg, ...
          methode, strategie, merite, options, ...
          funARRET, funRESTRAINT, restart, racine, ...
          varargin);
% INPUT:
%
% plan      : plan of experiments
% y0        : values of the function for each experiment
% vlb       : lower bounds of the design parameters
% vub       : upper bounds of the design parameters
% funobj    : name of the Matlab function for the objective function evaluation
% indf      : indf(i) = 1 : evaluation of f(i) is requested, else indf(i) = 0
% indg      : indg(i) = 1 : evaluation of g(i) is requested, else indg(i) = 0
%             (NOT USED! => indg = [])
% gradfobj  : NOT USED! (gradfobj = '')
% indgradf  : NOT USED! indgradf = [];
% indgradg  : NOT USED! indgradg = [];
% methode   : 'weight'
% strategie : 'pareto'
% merite    : 1 = metamodel + MSE
%             2 = metamodel + relative MSE
%             3 = metamodel + Maximin
%             4 = metamodel
%             5 = MSE
%             6 = Maximin distance
%
% options(1)  = maximum number of iterations
% options(2)  = maximum number of funobj evaluations
% options(3)  = precision requested for the stopping rule
% options(4)  = 4
% options(5)  = polynomial trend of the kriging metamodel
%               0 = constant
%               1 = linear without interactions
%               2 = linear with interactions
%               3 = pure quadratic
%               4 = full quadratic
% options(6)  = choice of the number of theta parameters
%               0 : one parameter theta, p = 2
%               1 : one parameter theta per dimension, p = 2
%               2 : one parameter theta and one exponent p per dimension
% options(7)  = optimization method used to maximize the MLE function
%               0 : SQP (constr), Matlab Optimization Toolbox
%               1 : GAOT genetic algorithm
%               2 : improved Nelder-Mead type algorithm: SIMPS
%               9 : ossrs (random search type algorithm)
% options(8)  = optimization method used to minimize the merit function
%               1 : GAOT genetic algorithm
% options(10) = tolerance for the SVD decomposition of the covariance matrix (1e-14)
% options(20) = 0
% options(21) = number of points used to draw the real function
% options(22) = number of points used to draw the kriging metamodel
% options(23) = number of contour levels
% options(24) = current iteration (set it to 0 if restart is not equal to 1)
%
% restart = 1 : if we need to restart from a precomputed iteration (options(24)),
%               else set it to 0 (normal mode)
%
% OUTPUT:
%
% xopt       : location of the best minimum value found
% fxopt      : best minimum value found
% mseopt     : MSE value at xopt
% plan_final : final plan of experiments
% y_final    : values of the function for plan_final
%
% The following outputs are used only when we need to rebuild the final kriging
% metamodel (see the save_metamodel and load_metamodel functions):
%
% trend_final
% theta_final
% sigma2_final
% Rinv_final
% Fn_final
% beta_final
% bet_final
%
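A typical call sequence, assembled from the functions documented in this appendix, might look as follows (illustrative: 'myfun' stands for a user-supplied objective function file, and the argument values are examples, not prescribed defaults):

% Illustrative call sequence for a 2-parameter unconstrained problem.
vlb  = [-2.048 -2.048];  vub = [2.048 2.048];
plan = latinhc(10, 2, vlb, vub);                        % initial sampling
y0   = generic_batch(plan, 'myfun', '', 1, [], [], []); % evaluate the black box
[methode, strategie, merite, options] = metam_options(2, 10);
options(2) = 25;                                        % global budget
[xopt, fxopt] = metam(plan, y0, vlb, vub, 'myfun', 1, [], ...
                      '', [], [], methode, strategie, merite, options, ...
                      [], [], 0, 'run1');               % 'run1': root name (assumed)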


Basic 2D visualization functions:

function [error, ifig] = visplan2D2(vlb, vub, plan);
%
% Visualization of a 2D point set
%
% INPUT:
%   vlb  : lower bounds vector
%   vub  : upper bounds vector
%   plan : point locations (plan(i,1) = X(i), plan(i,2) = Y(i))
%
% OUTPUT:
%   error : error message
%   ifig  : list of figure pointers
%

function [error, ifig] = ...
    visplan2DC(titre, vlb, vub, plan, ftab, nbc, np, xopt, ix, iy, funfun, varargin);
%
% Visualization of a 2D point set with contours from an external function
%
% INPUT:
%   titre    : title string of the figures
%   vlb      : lower bounds vector
%   vub      : upper bounds vector
%   plan     : point locations
%   ftab     : function values vector
%   nbc      : number of contour levels
%   np       : number of points per dimension for the computation of the
%              contour levels
%   xopt     : reference location (generally the optimum location)
%   ix       : index of the parameter used for X
%   iy       : index of the parameter used for Y
%   funfun   : Matlab function needed for the contour level computation
%   varargin : list of additional parameters for the funfun function
%
% OUTPUT:
%   error : error message
%   ifig  : list of figure pointers

Basic n-D visualization functions:

function [error, ifig] = visplanND(titre, vlb, vub, plan);
%
% Visualization of an n-D point set
%
% INPUT:
%   titre : title string of the figures
%   vlb   : lower bounds vector
%   vub   : upper bounds vector
%   plan  : coordinates of the points
%
% OUTPUT:
%   error : error message
%   ifig  : list of figure pointers
%

function [error, ifig] = ...
    visplanNDC(titre, vlb, vub, plan, ftab, nbc, np, xopt, funfun, varargin);
%
% Visualization of an n-D point set with contour levels for a specified external
% function
%
% INPUT:
%   titre : title string of the figures
%   vlb   : lower bounds vector (if vlb == [], the routine takes the minimum)
%   vub   : upper bounds vector
%   plan  : array containing the coordinates of the points
%   ftab  : array of function values
%   nbc   : number of contour levels
%
% OUTPUT:
%   error :
%   ifig  :
%

Input-Output metamodel functions

f unct i on [ i er r ] = save_met amodel e( nom_f i chi er , METAM) ;
%
%
%I NPUT :
% nom_f i chi er : name of t he f i l e t o save t he met amodel i n
% METAM : st r uct ur e t ype var i abl e whose f i el ds ar e
% dat as needed t o eval uat e a kr i gi ng met amodel
%
%OUTPUT :
% i er r : er r or message


f unct i on [ METAM] = l oad_met amodel e( nom_f i chi er )
%
%
%I NPUT :
% nom_f i chi er : dat af i l e t o l oad
%
%OUTPUT :
% METAM : met amodel l oaded ( st r uct ur e)


Other functions :
Page 49

f unct i on [ met hode, st r at egi e, mer i t e, opt i ons] = met am_opt i ons( nvar , nexp) ;
%
%I ni t i al i zat i on of def aul t par amet er s used by met amf unct i on
%
%I NPUT :
% nvar : number of par amet er s
% nexp : number of exper i ment s
%
%OUTPUT :
% met hode :
% st r at egi e : par amet er s used by met am
% mer i t e :
% opt i ons :


function [ftab, conttab, grad_ftab, grad_conttab, erreur] =
generic_batch(plan, funobj, gradobj, indf, indg, indgradf, indgradg, varargin);
%
% Automatic batch execution of evaluation functions.
% This function launches funobj and gradobj sequentially on
% a set of points (plan)
%
% INPUT :
%   plan     : design of experiments
%   funobj   : function to evaluate
%   gradobj  : gradient function
%   indf     : index of the objective functions to evaluate
%   indg     : index of the constraint functions to evaluate
%   indgradf : index of the objective function gradients to evaluate
%   indgradg : index of the constraint function gradients to evaluate
%   varargin : additional parameters for the funobj and/or gradobj functions
%
% OUTPUT :
%   ftab         : values of the objective functions
%   conttab      : values of the constraints
%   grad_ftab    : values of the objective function gradients
%   grad_conttab : values of the constraint function gradients
%   erreur       : error message
%
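A minimal usage sketch, as used in the Appendix D script to evaluate an initial design of experiments (funobj = 'rosen_fun'; no gradients or constraints are requested) :

gradobj = [] ;
indf = 1 ;
indg = [] ; indgradf = [] ; indgradg = [] ;
y = generic_batch(plan, funobj, gradobj, indf, indg, indgradf, indgradg) ;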

function [f, tend, noise] = eval_krigmodel(xx);
%
% eval_krigmodel.m
%
% This function file evaluates the kriging metamodel and returns the values
% of the metamodel itself (f), the trend part (tend) and the noise part (noise).
%
% WARNING ! : xx must be normalized in [0,1]
% The following global variables must be initialized before using the
% eval_krigmodel function :
% global x y beta Rinv theta Fn trend sigma2 bet metPARAM;
%
% INPUT :
%   xx : point where we want to evaluate the metamodel
%
% OUTPUT :
%   f     : value of the metamodel at xx
%   tend  : trend value at xx
%   noise : noise value at xx
%
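A minimal usage sketch (assuming the global variables above have been filled from a loaded metamodel, as in Part 6 of the Appendix D script; vlb1 and vub1 are the bounds of that metamodel) :

global x y beta Rinv theta Fn trend sigma2 bet metPARAM ;
xdum = ones(1, 2) ;                               % evaluation point in original units
xdum_normalized = (xdum - vlb1)./(vub1 - vlb1) ;  % normalize in [0,1]
[krig_xdum, trend_xdum, noise_xdum] = eval_krigmodel(xdum_normalized) ;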
Appendix C : Rosenbrock function
F(x) = \sum_{i=1}^{n_{dv}/2} \left( 100\,(x_{2i} - x_{2i-1}^2)^2 + (1 - x_{2i-1})^2 \right),
\qquad x \in [-2.048, 2.048]^{n_{dv}}

There exists only one optimum, located at (1, 1, ..., 1) \in R^{n_{dv}}; the minimum value of the function is
equal to 0.
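As a quick check in two variables (n_dv = 2) : F(1, 1) = 100.(1 - 1^2)^2 + (1 - 1)^2 = 0, while F(0, 0) = 100.(0 - 0^2)^2 + (1 - 0)^2 = 1.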
In the following figures, we draw the Rosenbrock function for 2 and 4 parameters.
[Figure : contour plot of the real function ("Fonction reelle") over [-2, 2]^2, contour levels 500 to 3500. Rosenbrock function, 2 variables]

[Figure : Rosenbrock function, 4 variables]

We used the SQP method implementation from the MATLAB Optimization Toolbox, with forward
finite-difference gradients, to find the solution of the 2D Rosenbrock problem.
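A minimal sketch of how such a run can be launched (assuming the Optimization Toolbox constr interface used elsewhere in this report; 'rosen_obj' is a hypothetical wrapper returning the objective and constraint values in the form constr expects, and the starting point is illustrative) :

options = foptions ;                            % default options vector
x0 = [-1.9 0] ;                                 % illustrative starting point
vlb = [-2.048 -2.048] ; vub = [2.048 2.048] ;   % bounds of the problem
[xopt, options] = constr('rosen_obj', x0, options, vlb, vub) ;

The run gave the following iteration history :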

f-COUNT    FUNCTION        MAX{g}   STEP        Procedures
  3        2509            0        1
  6        461.76          0        1
  9        233.829         0        1
 13        0.772339        0        0.5
 18        0.664555        0        0.25
 21        0.642115        0        1
 24        0.632373        0        1
 27        0.588289        0        1
 30        0.545777        0        1
 33        0.493587        0        1
 36        0.335476        0        1
 39        0.240583        0        1
 43        0.199519        0        0.5
 46        0.0892429       0        1
 50        0.0698957       0        0.5
 53        0.0403858       0        1
 57        0.0327867       0        0.5
 60        0.0223813       0        1
 63        0.00840244      0        1
 66        0.00319637      0        1
 69        0.00303542      0        1
 72        0.000644356     0        1
 75        0.000219762     0        1           Hessian modified
 78        2.24883e-005    0        1           Hessian modified
 81        1.53686e-006    0        1           Hessian modified
 84        4.96639e-009    0        1           Hessian modified
 87        3.73017e-012    0        1           Hessian modified twice
104        3.05854e-012    0        -6.1e-005   Hessian modified twice
119        3.05732e-012    0        -6.1e-005   Hessian modified twice
Optimization Converged Successfully
No Active Constraints
Optimum =
x =

   1.00000030537696   1.00000043858955

Optimum value of the function
ans =

   3.057315616996783e-012

Number of function evaluations
ans =

   119
The constrained Rosenbrock function problem is as follows :
Minimize the Rosenbrock function

F(x) = \sum_{i=1}^{n_{dv}/2} \left( 100\,(x_{2i} - x_{2i-1}^2)^2 + (1 - x_{2i-1})^2 \right)

Subject to : g(x) = \left( \sum_{i=1}^{n_{dv}} x_i^2 \right)^{1/2} - 1 \le 0

With x \in [-2.048, 2.048]^{n_{dv}}

We solve it with the SQP method :

f-COUNT    FUNCTION     MAX{g}          STEP   Procedures
  3        2509         1.23607         1
  7        28.3498      -0.475451       0.5
 12        3.19575      -0.765158       0.25
 15        0.741385     -0.840748       1
 18        0.706189     -0.838414       1
 21        0.704594     -0.837305       1      Hessian modified
 24        0.688777     -0.822903       1
 27        0.665528     -0.796984       1
 30        0.624792     -0.747467       1
 33        0.561365     -0.692827       1
 36        0.414019     -0.591794       1
 39        0.297447     -0.498391       1
 43        0.257299     -0.379071       0.5
 46        0.176777     -0.321301       1
 49        0.134228     -0.140957       1
 52        0.0823377    -0.124507       1
 55        0.0491848    -0.00628552     1
 58        0.0457367    9.33923e-006    1
 61        0.0456748    1.02737e-007    1      Hessian modified
 62        0.0456748    1.03162e-012    1      Hessian modified
Optimization Converged Successfully
Active Constraints:
     1
Optimum =
x =

   0.78641516511815   0.61769829858456

Optimum value of the function
ans =

   0.04567480871935

Number of function evaluations
ans =

   62

Now we solve the unconstrained formulation of our constrained problem via a penalization function
(see the constrained_rosen_fun.m MATLAB code on page 57 for the mathematical formulation).
The MATLAB run gives :

f-COUNT    FUNCTION     MAX{g}   STEP      Procedures
  3        4036.86      0        1
  7        28.3498      0        0.5
 12        3.91421      0        0.25
 15        0.754423     0        1
 18        0.648972     0        1
 21        0.647771     0        1         Hessian modified
 24        0.645616     0        1         Hessian modified
 27        0.639032     0        1
 30        0.624845     0        1
 33        0.597786     0        1
 36        0.554025     0        1
 39        0.462072     0        1
 42        0.323804     0        1
 45        0.244906     0        1
 48        0.138654     0        1
 52        0.11845      0        0.5
 55        0.0831644    0        1
 59        0.0620863    0        0.5
 64        0.0498578    0        0.25
 72        0.0483572    0        0.0313
 81        0.0475846    0        0.0156
 92        0.0475287    0        0.00391
 95        0.0459494    0        1
 98        0.0456651    0        1         Hessian modified
102        0.0456601    0        0.5       Hessian modified
105        0.0456601    0        1         Hessian modified
109        0.0456601    0        0.125     Hessian modified
Optimization Converged Successfully
No Active Constraints
Optimum =

x =

   0.78648418351974   0.61780709301725

Optimum value of the function
ans =

   0.04566005301450

Number of function evaluations :
ans =

   109

Matlab code for Rosenbrock function :
(rosen_fun.m)

function [f, g, no, ni, ne] = rosen_fun(x, indf, indg, varargin);
%
% Rosenbrock Function
%
% INPUT :
%   x    : variables
%   indf : indf(i) = 1 : evaluation of f(i) is requested,
%          else indf(i) = 0
%   indg : indg(i) = 1 : evaluation of g(i), else indg(i) = 0
%
% OUTPUT :
%   f  : objective functions
%   g  : constraints
%   no : number of objective functions
%   ni : number of inequality type restraints
%   ne : number of equality constraints
%
no = 1 ;  % number of objectives
ni = 0 ;  % number of inequality restraints
ne = 0 ;  % number of equality restraints
%
nvar = length(x);
%
fvec = zeros(1, 2*nvar);
%
if (isempty(indf) | (no==0));
   f = [];
else
   if mod(nvar, 2) == 1;
      error('number of variables must be even !');
   else
      for i = 1:(nvar/2);
         fvec(2*i-1) = 10*(x(2*i) - x(2*i-1)^2);
         fvec(2*i)   = 1 - x(2*i-1);
      end;
   end;
   f = sum(fvec.^2);
end;

if (isempty(indg) | ((ni+ne)==0));
   g = [];
else
   g = 0 ;
end;
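A quick sanity check of this listing (the values follow directly from the definition of the function) :

[f, g] = rosen_fun([1 1], 1, []) ;   % f = 0 at the optimum (1, 1)
[f, g] = rosen_fun([0 0], 1, []) ;   % f = 1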

Matlab code for the Rosenbrock function gradients:
(gradrosen_fun.m)

function [df, dg, varargout] = gradrosen_fun(fun, x, indgradf, indgradg, varargin);
%
% Rosenbrock function Gradients
%
% INPUT :
%   fun      : objective function Matlab procedure
%   x        : variables
%   indgradf : indgradf(i) = 1 means compute grad f(i) (look at indf)
%   indgradg : indgradg(i) = 1 means compute grad g(i) (look at indg)
%
% OUTPUT :
%   df : objective functions gradients with respect to x
%   dg : constraints gradients
%
nvar = length(x);
%
no = 1 ;
ni = 0 ;
ne = 0 ;
%
fvec = zeros(1, 2*nvar);
if (isempty(indgradf));
   df = [];
else
   if mod(nvar, 2) == 1;
      error('number of variables must be even !');
   else
      dfvec = zeros(2*nvar, nvar);
      for i = 1:(nvar/2);
         fvec(2*i-1) = 10*(x(2*i) - x(2*i-1)^2);
         fvec(2*i)   = 1 - x(2*i-1);
         dfvec(2*i-1, 2*i-1) = dfvec(2*i-1, 2*i-1) + 2*fvec(2*i-1)*(-20*x(2*i-1));
         dfvec(2*i-1, 2*i)   = dfvec(2*i-1, 2*i)   + 2*fvec(2*i-1)*10 ;
         dfvec(2*i, 2*i-1)   = dfvec(2*i, 2*i-1)   + 2*fvec(2*i)*(-1);
         dfvec(2*i, 2*i)     = dfvec(2*i, 2*i)     + 0 ;
      end;

      for i = 1:nvar;
         df(i, :) = sum(dfvec(:, i));
      end;
   end;
end;
%
if (isempty(indgradg));
   dg = [];
else
   dg = 0 ;
end;
%
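A quick check of this listing (the analytic gradient of the 2D Rosenbrock function at the origin is (-2, 0)) :

[df, dg] = gradrosen_fun('rosen_fun', [0 0], 1, []) ;   % df = [-2 ; 0]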
Matlab code for the unconstrained version of the constrained Rosenbrock optimization problem :
(constrained_rosen_fun.m)


The constrained optimization problem :

Minimize the Rosenbrock function

F(x) = \sum_{i=1}^{n_{dv}/2} \left( 100\,(x_{2i} - x_{2i-1}^2)^2 + (1 - x_{2i-1})^2 \right)

Subject to : g(x) = \left( \sum_{i=1}^{n_{dv}} x_i^2 \right)^{1/2} - 1 \le 0

With x \in [-2.048, 2.048]^{n_{dv}}

is changed to the unconstrained problem :

Minimize \Phi(x) = F(x) + 1000 \cdot (\max(g(x), 0))^2

With x \in [-2.048, 2.048]^{n_{dv}}

function [f, g, no, ni, ne] = constrained_rosen_fun(x, indf, indg, varargin);
%
no = 1 ;
ni = 0 ;
ne = 0 ;
%
nvar = length(x);
%
fvec = zeros(1, 2*nvar);
%
if (isempty(indf) | (no==0));
   f = [];
else
   if mod(nvar, 2) == 1;
      error('number of variables must be even !');
   else
      for i = 1:(nvar/2);
         fvec(2*i-1) = 10*(x(2*i) - x(2*i-1)^2);
         fvec(2*i)   = 1 - x(2*i-1);
      end;
   end;
   g = norm(x) - 1 ;
   f = sum(fvec.^2) + 1000*(max(g, 0))^2;
end;

if (isempty(indg) | ((ni+ne)==0));
   g = [];
else
   g = 0 ;
end;
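A quick check of the penalized formulation at the constrained optimum found by SQP above (the penalty term is essentially inactive there, since g(x) is close to 0) :

x = [0.78641516511815 0.61769829858456] ;
f = constrained_rosen_fun(x, 1, []) ;   % close to 0.0457, as in the SQP runs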


Appendix D : A-RSM examples
(ARSMexample.m)

%
% HOW TO USE A-RSM !!!!
%
% ARSMexample.m
%
format long ;
warning off ;
%
prompt = {'Number of parameters', ...
          'Name of the Matlab black box function', ...
          'Name of the session log file', ...
          'Initial number of experiments', ...
          'Maximum number of experiments', ...
          'Maximum number of iterations', ...
          'Trend', ...
          'Stopping rule value', ...
          'Number of points per parameters for drawing the real function', ...
          'Number of contour levels'} ;

def = {'2', 'rosen_fun', 'rosen_function.out', '10', '25', '15', '4', '0.001', '100', '50'};

titre = 'Data Input';
lineNo = 1;
answer = inputdlg(prompt, titre, lineNo, def);
nvar = str2num(answer{1});
funobj = answer{2};
logfile = answer{3};
nexp = str2num(answer{4});
nexp_maxi = str2num(answer{5});
itmaxi = str2num(answer{6});
% Trend :
% 0 = constant, 1 = pure linear, 2 = linear with interactions, 3 = pure quadratic,
% 4 = full quadratic
trend = str2num(answer{7});
precision = str2num(answer{8});
np = str2num(answer{9});
nbc = str2num(answer{10});

if (isempty(np))
   np1 = 100 ;
else
   np1 = np ;
end;
%
diary(logfile);
%
% boundary constraints for the parameters
%
vu = 2.048 ;
% vu = 10 ;
vl = -vu ;
low_var = vl*ones(1, nvar);
up_var = vu*ones(1, nvar);



%
% Selection of the initial plan of experiment type
%
plan_str = {'maximin', 'latinHS', 'random', 'full_factorial_5levels'};
[s, v] = listdlg('PromptString', 'Choose a plan:', ...
                 'SelectionMode', 'single', ...
                 'ListString', plan_str);
choix_plan = plan_str{s} ;
%
if (strcmp(choix_plan, 'maximin'));
   disp('MaxiMin design');
   init = [1 100] ;
   mplan = 12 ;
   p0 = 1000 ;
   iterMX = 1 ;
   [plan, maximin_dist] = plan_MAXIMIN2(nexp, nvar, low_var, up_var, ...
                                        init, '', mplan, p0, iterMX);
elseif (strcmp(choix_plan, 'latinHS'));
   disp('Latin Hypercube Sampling');
   [plan] = latinhc(nexp, nvar, low_var, up_var);
elseif (strcmp(choix_plan, 'random'));
   disp('Random design');
   [plan] = LJunifrnd(low_var, up_var, nexp, nvar);
elseif (strcmp(choix_plan, 'full_factorial_5levels'));
   disp('Full Factorial design - 5 levels per factors');
   nb_nivo = 5 ;
   [plan, index_exp] = LJfullfact(nvar, nb_nivo);
   [nexp, nvar] = size(plan);
   for j = 1:nvar;
      plan(:, j) = ((plan(:, j)+1).*((up_var(j)-low_var(j))/2)) + low_var(j);
   end;
elseif (strcmp(plan_str{s}, ''));
   disp('Latin Hypercube Sampling');
   [plan] = latinhc(nexp, nvar, low_var, up_var);
end;
%
% visualization of the sampling points
%
nfig = [];
if (nvar == 2);
   [err, ifig] = visplan2D2(low_var, up_var, plan);
else
   [err, ifig] = visplanND('', low_var, up_var, plan);
end;
nfig = [nfig ifig];
%
gradobj = [] ;
indf = 1 ;
indg = [] ; indgradf = [] ; indgradg = [] ;
y = generic_batch(plan, funobj, gradobj, indf, indg, indgradf, indgradg);
%
% we visualize the shape of the function near its optimum
%
% you have to change it for another function
x_optimum = ones(1, nvar) ;  % optimum location for Rosenbrock function
if (np > 0);
   titre = ['Function ' funobj] ;
   if (nvar > 2);
      [err, ifig] = visplanNDC(titre, low_var, up_var, plan, y, nbc, np, x_optimum, funobj, 1, []);
   else
      [err, ifig] = visplan2DC(titre, low_var, up_var, plan, y, nbc, np, x_optimum, 1, 2, funobj, 1, []);
   end;
end;
nfig = [nfig ifig];
%
% Part 1 - Building kriging metamodel with an initial plan of experiments
%
disp('Building kriging metamodel with an initial plan of experiments');
indf = 1 ; indg = [] ;
gradfobj = '' ; indgradf = [] ; indgradg = [] ;
methode = 'weight' ;
strategie = 'pareto' ;
merite = 5 ;
% options(20) = 1 == debug trace during the iterations
options = [0 nexp_maxi precision 2 trend 1 2 1 1e-3 1e-14 10 0 0 0 0 0 0 0 0 2 np np1 nbc 0];
funARRET = '' ;      % no stopping rule
funRESTRAINT = '' ;  % no restraints
restart = 0 ;
racine = 'ARSM_example_part1' ;
%
[xopt, fxopt, msefinal, ...
 planfinal, yfinal, ...
 trendfinal, thetafinal, ...
 sigma2final, rinvfinal, ...
 Fnfinal, betafinal, betfinal] = ...
   metam(plan, y, low_var, up_var, funobj, ...
         indf, indg, gradfobj, indgradf, indgradg, ...
         methode, strategie, merite, options, ...
         funARRET, funRESTRAINT, restart, racine);

%
% saving the results in a structure
%
METAM(1).xopt = xopt ;
METAM(1).fxopt = fxopt ;
METAM(1).mse = msefinal ;
METAM(1).plan = planfinal ;
METAM(1).y = yfinal ;
METAM(1).trend = trendfinal ;
METAM(1).theta = thetafinal ;
METAM(1).sigma2 = sigma2final ;
METAM(1).rinv = rinvfinal ;
METAM(1).fn = Fnfinal ;
METAM(1).beta = betafinal ;
METAM(1).bet = betfinal ;
METAM(1).metPARAM = options(6) ;
METAM(1).low_var = low_var ;
METAM(1).up_var = up_var ;

%% keyboard
%
% Part 2 - Building kriging metamodel by increasing the quality of an initial plan of experiments
%
disp('Building kriging metamodel by increasing the quality of an initial plan of experiments');
indf = 1 ; indg = [] ;
gradfobj = '' ; indgradf = [] ; indgradg = [] ;
methode = 'weight' ;
strategie = 'pareto' ;
merite = 5 ;
options = [itmaxi nexp_maxi precision 2 trend 1 2 1 1e-3 1e-14 10 0 0 0 0 0 0 0 0 2 np np1 nbc 0];
funARRET = '' ;      % no stopping rule
funRESTRAINT = '' ;  % no restraints
restart = 0 ;
racine = 'ARSM_example_part2' ;
%
[xopt, fxopt, msefinal, ...
 planfinal, yfinal, ...
 trendfinal, thetafinal, ...
 sigma2final, rinvfinal, ...
 Fnfinal, betafinal, betfinal] = ...
   metam(plan, y, low_var, up_var, funobj, ...
         indf, indg, gradfobj, indgradf, indgradg, ...
         methode, strategie, merite, options, ...
         funARRET, funRESTRAINT, restart, racine);

METAM(2).xopt = xopt ;
METAM(2).fxopt = fxopt ;
METAM(2).mse = msefinal ;
METAM(2).plan = planfinal ;
METAM(2).y = yfinal ;
METAM(2).trend = trendfinal ;
METAM(2).theta = thetafinal ;
METAM(2).sigma2 = sigma2final ;
METAM(2).rinv = rinvfinal ;
METAM(2).fn = Fnfinal ;
METAM(2).beta = betafinal ;
METAM(2).bet = betfinal ;
METAM(2).metPARAM = options(6) ;
METAM(2).low_var = low_var ;
METAM(2).up_var = up_var ;

keyboard
%
% Part 3 - unconstrained global optimization with a global limited budget
%
disp('unconstrained global optimization with a global limited budget');
indf = 1 ; indg = [] ;
gradfobj = '' ; indgradf = [] ; indgradg = [] ;
methode = 'weight' ;
strategie = 'pareto' ;
merite = 1 ;
options = [itmaxi nexp_maxi precision 2 trend 1 2 1 1e-3 1e-14 10 0 0 0 0 0 0 0 0 2 np np1 nbc 0];
funARRET = '' ;      % no stopping rule
funRESTRAINT = '' ;  % no restraints
restart = 0 ;
racine = 'ARSM_example_part3' ;
%
[xopt, fxopt, msefinal, ...
 planfinal, yfinal, ...
 trendfinal, thetafinal, ...
 sigma2final, rinvfinal, ...
 Fnfinal, betafinal, betfinal] = ...
   metam(plan, y, low_var, up_var, funobj, ...
         indf, indg, gradfobj, indgradf, indgradg, ...
         methode, strategie, merite, options, ...
         funARRET, funRESTRAINT, restart, racine);
METAM(3).xopt = xopt ;
METAM(3).fxopt = fxopt ;
METAM(3).mse = msefinal ;
METAM(3).plan = planfinal ;
METAM(3).y = yfinal ;
METAM(3).trend = trendfinal ;
METAM(3).theta = thetafinal ;
METAM(3).sigma2 = sigma2final ;
METAM(3).rinv = rinvfinal ;
METAM(3).fn = Fnfinal ;
METAM(3).beta = betafinal ;
METAM(3).bet = betfinal ;
METAM(3).metPARAM = options(6) ;
METAM(3).low_var = low_var ;
METAM(3).up_var = up_var ;

keyboard
%
% Part 4 - constrained global optimization with a global limited budget
%
% the constrained formulation is changed to an unconstrained one via a penalization function
%
funobj = 'constrained_rosen_fun' ;
gradobj = [] ;
indf = 1 ;
indg = [] ; indgradf = [] ; indgradg = [] ;
y2 = generic_batch(plan, funobj, gradobj, indf, indg, indgradf, indgradg);
%
% we visualize the shape of the function near its optimum
%
% you have to change it for another function
x_optimum = ones(1, nvar) ;  % optimum location for Rosenbrock function
if (np > 0);
   titre = ['Function ' funobj] ;
   if (nvar > 2);
      [err, ifig] = visplanNDC(titre, low_var, up_var, plan, y2, nbc, np, x_optimum, funobj, 1, []);
   else
      [err, ifig] = visplan2DC(titre, low_var, up_var, plan, y2, nbc, np, x_optimum, 1, 2, funobj, 1, []);
   end;
end;
nfig = [nfig ifig];


disp('constrained global optimization with a global limited budget');
funobj = 'constrained_rosen_fun' ;
indf = 1 ; indg = [] ;
gradfobj = '' ; indgradf = [] ; indgradg = [] ;
methode = 'weight' ;
strategie = 'pareto' ;
merite = 1 ;
options = [itmaxi nexp_maxi precision 2 trend 1 2 1 1e-3 1e-14 10 0 0 0 0 0 0 0 0 2 np np1 nbc 0];
funARRET = '' ;      % no stopping rule
funRESTRAINT = '' ;  % no restraints
restart = 0 ;
racine = 'ARSM_example_part4' ;
%
[xopt, fxopt, msefinal, ...
 planfinal, yfinal, ...
 trendfinal, thetafinal, ...
 sigma2final, rinvfinal, ...
 Fnfinal, betafinal, betfinal] = ...
   metam(plan, y2, low_var, up_var, funobj, ...
         indf, indg, gradfobj, indgradf, indgradg, ...
         methode, strategie, merite, options, ...
         funARRET, funRESTRAINT, restart, racine);

METAM(4).xopt = xopt ;
METAM(4).fxopt = fxopt ;
METAM(4).mse = msefinal ;
METAM(4).plan = planfinal ;
METAM(4).y = yfinal ;
METAM(4).trend = trendfinal ;
METAM(4).theta = thetafinal ;
METAM(4).sigma2 = sigma2final ;
METAM(4).rinv = rinvfinal ;
METAM(4).fn = Fnfinal ;
METAM(4).beta = betafinal ;
METAM(4).bet = betfinal ;
METAM(4).metPARAM = options(6) ;
METAM(4).low_var = low_var ;
METAM(4).up_var = up_var ;

keyboard
%
% Part 5 - saving metamodels.....
%
disp('SAVING KRIGING METAMODELS !');
ficsave = 'ARSMmetamodels' ;
save_metamodele(ficsave, METAM);
%
disp('LOADING KRIGING METAMODELS !');
[NEW_METAM] = load_metamodele(ficsave);
nb_metamodels = length(NEW_METAM);  % number of metamodels in the structure
%
% Part 6 - use of pre-built kriging metamodels
%
global x y beta Rinv theta Fn trend sigma2 bet metPARAM;
%
% loop over all the metamodels
%
for ii = 1:nb_metamodels;
   [nn, ll] = size(NEW_METAM(ii).plan);
   as = ones(nn, 1)*NEW_METAM(ii).low_var ;
   bs = ones(nn, 1)*NEW_METAM(ii).up_var ;
   x = (NEW_METAM(ii).plan - as)./(bs - as);
   y = NEW_METAM(ii).y ;
   beta = NEW_METAM(ii).beta ;
   Rinv = NEW_METAM(ii).rinv ;
   theta = NEW_METAM(ii).theta ;
   Fn = NEW_METAM(ii).fn ;
   trend = NEW_METAM(ii).trend ;
   sigma2 = NEW_METAM(ii).sigma2 ;
   bet = NEW_METAM(ii).bet ;
   metPARAM = NEW_METAM(ii).metPARAM ;
   vlb1 = NEW_METAM(ii).low_var ;
   vub1 = NEW_METAM(ii).up_var ;
   %
   % Visualization of contour levels
   %
   xdum = ones(1, ll) ;  % xdum = [1 1 .... 1];
   nbc = 50 ;   % number of contours
   np = 100 ;   % number of points per dimension for the plots
   [err, infig] = visplanNDC(titre, vlb1, vub1, NEW_METAM(ii).plan, y, nbc, np, xdum, 'eval_krigmodel');
   %
   % evaluation of the metamodel and the MSE at one point (xdum)
   %
   xdum_normalized = (xdum - vlb1)./(vub1 - vlb1);
   [krig_xdum, trend_xdum, noise_xdum] = eval_krigmodel(xdum_normalized);
   %
   % Minimization of the metamodel ii
   %
   % we save it in its own file
   global METAM_FILE ;
   %
   METAM_FILE = strcat('ARSMmetamodel_', num2str(ii));
   %
   save_metamodele(METAM_FILE, NEW_METAM(ii));
   %
   options = foptions ;
   x0 = 0.5*(vlb1 + vub1);
   indf = 1 ; indg = [] ;
   [xopt_metam, options, lambda, hess] = constr('metamodele_fun', x0, options, vlb1, vub1, '', ...
                                                indf, indg);
   xopt_metam
end;
%
% The End
%
diary off ;
