Chapter 1

We can define an experiment as a test or series of runs in which purposeful changes are made to
the input variables of a process or system so that we may observe and identify the reasons for changes
in the output response.

General Model of a Process or System

An objective of the experimenter is to determine the influence that the controllable and
uncontrollable input factors have on the output response of the system. The general approach to planning
and conducting the experiment is called the strategy of experimentation.
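
A compact way to write this general model (assuming the usual convention of denoting controllable factors by x and uncontrollable factors by z; the symbols below are illustrative) is

    y = f(x_1, x_2, \ldots, x_p;\ z_1, z_2, \ldots, z_q) + \epsilon

where y is the output response, x_1, ..., x_p are the controllable input factors, z_1, ..., z_q are the uncontrollable factors, and \epsilon represents the experimental error.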

The three basic principles of experimental design are randomization, replication, and blocking.

By randomization we mean that both the allocation of the experimental material and the order in
which the individual runs of the experiment are to be performed are randomly determined.

By replication we mean an independent repeat run of each factor combination. Replication has two
important properties. First, it allows the experimenter to obtain an estimate of the experimental error.
This estimate of error becomes a basic unit of measurement for determining whether observed differences
in the data are really statistically different. Second, if the sample mean is used to estimate the true
mean response for one of the factor levels in the experiment, replication permits the experimenter to
obtain a more precise estimate of this parameter.

Blocking is a design technique used to improve the precision with which comparisons among the
factors of interest are made. Often blocking is used to reduce or eliminate the variability transmitted from
nuisance factors - that is, factors that may influence the experimental response but in which we are not
directly interested.
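
As a minimal illustration of how these three principles can be put into practice (the factors, levels, and blocks below are hypothetical), a short Python sketch that builds a replicated, blocked, and randomized run sheet might look like this:

    import itertools
    import random

    # Hypothetical two-factor experiment, each factor at two levels.
    factors = {"temperature": [150, 175], "pressure": [20, 30]}
    combinations = list(itertools.product(*factors.values()))

    # Blocking: each day is a block, and running the full set of factor
    # combinations on each day also gives one complete replicate per block.
    blocks = ["day 1", "day 2", "day 3"]

    random.seed(1)  # fixed seed only so the printed run sheet is reproducible

    run_sheet = []
    for block in blocks:
        runs = combinations.copy()
        random.shuffle(runs)  # randomization: random run order within each block
        for temperature, pressure in runs:
            run_sheet.append({"block": block, "temperature": temperature, "pressure": pressure})

    for run_number, run in enumerate(run_sheet, start=1):
        print(run_number, run)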
Guidelines for Designing an Experiment

1. Recognition of and statement of the problem - some of the common reasons for running experiments
include:
a. Factor screening or characterization - it is usually important to learn which factors have
the most influence on the response(s) of interest. Screening experiments are extremely
important when working with new systems or technologies so that valuable resources will
not be wasted using best-guess and OFAT (one-factor-at-a-time) approaches.
b. Optimization - that is, find the settings or levels of the important factors that result in
desirable values of the response.
c. Confirmation - trying to verify that the system operates or behaves in a manner that is
consistent with some theory or past experience.
d. Discovery - the experimenters are usually trying to determine what happens when we
explore new materials, new factors, or new ranges for factors.
e. Robustness - determining how we can set the factors in the system that we can control
so as to minimize the variability transmitted into the response from factors that we
cannot control very well.
2. Selection of the response variable - the experimenter should be certain that this variable really
provides useful information about the process under study. It is usually critically important to
identify issues related to defining the responses of interest and how they are to be measured before
conducting the experiment. Sometimes designed experiments are employed to study and improve the
performance of measurement systems.
3. Choice of factors, levels, and range
a. Some useful classifications – Potential Design Factors:
i. Design factors - are the factors actually selected for study in the experiment;
ii. Held-constant factors - are variables that may exert some effect on the response,
but for purposes of the present experiment these factors are not of interest, so
they will be held at a specific level;
iii. Allowed-to-vary factors - the experimental units or the “materials” to which the
design factors are applied are usually nonhomogeneous, yet we often ignore this
unit-to-unit variability and rely on randomization to balance out any material or
experimental unit effect.
b. Some useful classifications – Nuisance Factors:
i. Controllable factor - is one whose levels may be set by the experimenter;
ii. Uncontrollable factor – if a factor is uncontrollable in the experiment but can be
measured, an analysis procedure called the analysis of covariance can often be used
to compensate for its effect (a rough sketch of this idea appears after this list).
For example, the relative humidity in the process environment may affect process
performance, and if the humidity cannot be controlled, it probably can be measured
and treated as a covariate.
iii. Noise factor – a factor that varies naturally and uncontrollably in the process but
that can be controlled for purposes of an experiment.
4. Choice of experimental design - Choice of design involves consideration of sample size (number of
replicates), selection of a suitable run order for the experimental trials, and determination of
whether or not blocking or other randomization restrictions are involved. In many engineering
experiments, we already know at the outset that some of the factor levels will result in different
values for the response. Consequently, we are interested in identifying which factors cause this
difference and in estimating the magnitude of the response change.
5. Performing the experiment - When running the experiment, it is vital to monitor the process
carefully to ensure that everything is being done according to plan. Prior to conducting the
experiment, a few trial runs or pilot runs are often helpful. These runs provide information about
consistency of experimental material, a check on the measurement system, a rough idea of
experimental error, and a chance to practice the overall experimental technique.
6. Statistical analysis of the data - statistical methods cannot prove that a factor (or factors) has a
particular effect. They only provide guidelines as to the reliability and validity of results. When
properly applied, statistical methods do not allow anything to be proved experimentally, but they
do allow us to measure the likely error in a conclusion or to attach a level of confidence to a
statement. The primary advantage of statistical methods is that they add objectivity to the
decision-making process. Statistical techniques coupled with good engineering or process knowledge
and common sense will usually lead to sound conclusions.
7. Conclusions and recommendations - Once the data have been analyzed, the experimenter must draw
practical conclusions about the results and recommend a course of action.
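
As mentioned in item 3.b.ii above, a nuisance factor that cannot be controlled but can be measured may be handled with the analysis of covariance. The sketch below is only illustrative: the data are made up, and it assumes the pandas and statsmodels packages are available. The measured covariate (relative humidity) enters the model alongside the treatment factor, so comparisons among treatments are adjusted for humidity:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Made-up data: a treatment factor plus the relative humidity measured
    # (but not controlled) during each run.
    data = pd.DataFrame({
        "treatment": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
        "humidity":  [41.0, 55.0, 62.0, 44.0, 57.0, 60.0, 40.0, 52.0, 65.0],
        "response":  [17.2, 18.9, 20.1, 15.4, 16.8, 17.9, 19.0, 20.5, 22.3],
    })

    # Analysis of covariance: the covariate is included as a model term so that
    # its effect is compensated for when the treatments are compared.
    model = smf.ols("response ~ C(treatment) + humidity", data=data).fit()
    print(model.summary())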
Chapter 2
Basic Statistical Concepts

Each of the observations in an experiment is called a run.

Individual runs differ, so there is fluctuation, or noise. This noise is called statistical error,
meaning that it arises from variation that is uncontrolled and generally unavoidable.

The presence of error or noise implies that the response variable (for example, tension bond strength)
is a random variable. A random variable may be either discrete or continuous:

 If the set of all possible values of the random variable is either finite or countably infinite, then
the random variable is discrete;
 If the set of all possible values of the random variable is an interval, then the random variable is
continuous.

Graphical Description of Variability

 Dot diagram - enables the experimenter to see quickly the general location or central tendency of
the observations and their spread or variability; it is a very useful device for displaying a small body
of data (say, up to about 20 observations).
 Histogram - shows the central tendency, spread, and general shape of the distribution of the data.
Histograms should not be used with fewer than 75–100 observations.
 Box plot - displays the minimum, the maximum, the lower and upper quartiles (the 25th percentile
and the 75th percentile, respectively), and the median (the 50th percentile) on a rectangular box
aligned either horizontally or vertically.
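
A short Python sketch of these three displays (the observations below are made up, and the matplotlib package is assumed to be available):

    import matplotlib.pyplot as plt

    # Made-up sample of observations (e.g., tension bond strength measurements).
    data = [16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15, 16.59, 16.57]

    fig, axes = plt.subplots(1, 3, figsize=(12, 3))

    # Dot diagram: each observation plotted at its value along a single axis.
    axes[0].plot(data, [0] * len(data), "o")
    axes[0].set_yticks([])
    axes[0].set_title("Dot diagram")

    # Histogram: central tendency, spread, and general shape of the distribution.
    axes[1].hist(data, bins=5)
    axes[1].set_title("Histogram")

    # Box plot: median, quartiles, and extremes displayed on a rectangular box.
    axes[2].boxplot(data)
    axes[2].set_title("Box plot")

    plt.tight_layout()
    plt.show()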

Probability Distributions
The properties of probability distributions may be summarized quantitatively as follows:

The mean, µ, of a probability distribution is a measure of its central tendency or location.

The variability or dispersion of a probability distribution can be measured by the variance:
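
In the usual notation, with f(y) the probability density function of a continuous random variable and p(y) the probability mass function of a discrete one, the standard definitions of the mean and variance are

    \mu = \int_{-\infty}^{\infty} y \, f(y) \, dy \quad \text{(continuous)} \qquad \text{or} \qquad \mu = \sum_{\text{all } y} y \, p(y) \quad \text{(discrete)}

    \sigma^2 = \int_{-\infty}^{\infty} (y - \mu)^2 f(y) \, dy \quad \text{(continuous)} \qquad \text{or} \qquad \sigma^2 = \sum_{\text{all } y} (y - \mu)^2 p(y) \quad \text{(discrete)}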

Sampling and Sampling Distributions

A random sample is a sample that has been selected from the population in such a way that every
possible sample has an equal probability of being selected. The sample mean ȳ is a point estimator of the
population mean µ, and the sample variance S² is a point estimator of the population variance σ².

Several properties are required of good point estimators:

1. The point estimator should be unbiased. That is, the long-run average or expected value of the
point estimator should be equal to the parameter that is being estimated.
2. An unbiased estimator should have minimum variance. This property states that the minimum
variance point estimator has a variance that is smaller than the variance of any other unbiased
estimator of that parameter.
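
As a small numerical sketch (the observations below are made up), the sample mean and sample variance can be computed directly in Python; note the n - 1 divisor in the sample variance, which is what makes it an unbiased estimator of the population variance:

    # Made-up sample of n observations from a process.
    sample = [16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15, 16.59, 16.57]
    n = len(sample)

    # Sample mean: point estimator of the population mean mu.
    y_bar = sum(sample) / n

    # Sample variance: point estimator of the population variance sigma^2.
    # Dividing by n - 1 (rather than n) makes this estimator unbiased.
    s_squared = sum((y - y_bar) ** 2 for y in sample) / (n - 1)

    print("sample mean     =", round(y_bar, 4))
    print("sample variance =", round(s_squared, 4))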
