ECONOMIC DESIGN OF CONTROL CHARTS USING THE TAGUCHI LOSS FUNCTION
1. BACKGROUND
The design of the Shewhart x̄ chart involves the determination of the sample size (N), the frequency
or time between sampling (H), and the multiplier that defines the spread of the control limits from
the centerline (K).
In practice, the Shewhart x̄ charts have utilized a rational subgroup size for N, normally around
4 or 5. The sampling interval is generally selected based on the production rate and familiarity with
the process. For instance, in the early stages of introduction of control charts to the process the
samples may be taken frequently, such as once every 30 min. In the later stages, when the charts
have been established and preventive measures taken against assignable causes, samples may be
taken less frequently, such as once every shift. The control limits for the control charts are
traditionally set at ±3σx̄.
The rational subgroup size is normally small, since larger sample sizes increase the risk of
process shifts or assignable causes occurring while the sample is taken. Such an occurrence is
undesirable, since this would filter the effect of the shift on the statistic used for monitoring and
also exaggerate the perceived inherent variation of the process. The reduction in power of the
statistical test, resulting from the small sample size, is compensated by taking more frequent
samples. The + 3a~ limits have been found to provide an acceptable level of risk of false alarms
in practice.
The problem with the commonly used "rational" approach to control chart design is that it is
used in almost all processes as the standard procedure for implementing control charts, without
regard to the cost consequences of the design. In order to overcome this shortcoming, a number
of researchers have proposed economic models for the design of control charts. Ho and Case [1]
provide a literature review of such models covering the period 1981-1991. Most of this research
has focused on the design of x̄-charts, e.g. [2-4]. Even though these models have not been widely
used, their value is obvious. One of the reasons economic models are not widely used is because
the models are quite complex, and difficult to evaluate and optimize [5]. Also, these models are
typically optimized for a particular size of shift, frequency of out of control, and cost of diagnosis.
In practice, however, the mean period the process remains in control is not static, the size of the
process shift is not constant, and the cost of diagnosis changes with time. In fact, with an
assumption of continuous improvement, we would expect the frequency of out-of-control
situations, the size of the shift and the cost of diagnosis to be reduced over time. After all, this
is one of the purposes of statistical process control (SPC). In order to address some of these
concerns we attempt to establish the direction of change of the control chart's design parameters,
when the frequency and the size of process shifts and the cost of diagnosis change. With
this information, the practitioner might be able to adjust the "optimized" design parameters
over time.
671
672
The first concern, related to model complexity, is not easily addressed, since the presence of
integral evaluations and optimization over three variables makes the process difficult to simplify.
Taguchi et al. [6] have proposed an on-line control model for which they have developed a
closed-form solution for the selection of optimal control parameters. The closed-form solution
makes the evaluation of process control parameters much easier. However, in their model the sample
size (N) is always one; the costs associated with false alarms and with searching for assignable causes
are ignored; also, the probability of not detecting a process shift is ignored. These simplifications are
unrealistic, especially considering that the Type II error increases with smaller sample sizes. Adams
and Woodall [7] provide a comparison of Taguchi's ideas and Duncan's model. We select
Duncan's [8] cost function for the x̄ chart, which we find more realistic, and we embellish this cost
function with the Taguchi loss function. We determine the optimal control chart design parameters
using this function and suggest changes in these parameters over time.
The Taguchi loss function provides a means of explicitly considering the loss due to process
variability. Whereas Duncan applies a penalty cost for operating out of control, he does not show
how this cost can be obtained or quantified. In this paper we present, evaluate, optimize and
analyze an economic model of the control chart. In the next section we describe our cost model.
We then illustrate its application using a hypothetical example. We conclude this paper by studying
the direction of control chart design parameter changes in the presence of changes in the magnitude
and frequency of process shifts and the costs of discovering and correcting the causes of these shifts.
2. EMBELLISHMENT OF DUNCAN'S COST MODEL WITH TAGUCHI'S LOSS FUNCTION
Duncan's model assumes a single out of control state. Research has confirmed that multiple
assignable cause models can be approximated by an appropriately selected single cause model [2].
Hence we assume that we monitor the process to detect the occurrence of a single assignable cause
that causes a fixed shift in the process. Duncan defines the monitoring and related costs over a cycle.
The elements of the cycle are as follows:
(1) The in control state. (The process starts in this state.)
(2) The out-of-control state. (The process goes to an out-of-control state from an in-control
state. The occurrence of the assignable cause is assumed to be a Poisson process with λ
occurrences per hour.)
(3) Detection of the out-of-control state.
(4) The assignable cause is detected and fixed.
Duncan also assumes that the process is not stopped while investigating the presence of an
assignable cause.
The expected cycle time (E(T)) with Duncan's assumptions is:

E(T) = 1/λ + H/(1 - β) - τ + gN + D

where

H = time between samples
(1 - β) = power, the probability that a sample taken while the process is out of control signals the shift
τ = expected time of occurrence of the shift within the sampling interval in which it occurs
g = time to take and interpret one sampled unit
N = sample size
D = time to find and fix the assignable cause.

The expected cost per cycle under Duncan's model is:

E(C) = (a1 + a2·N)·E(T)/H + a3'·α·exp(-λH)/(1 - exp(-λH)) + a3 + a4·[H/(1 - β) - τ + gN + D]

where

a1 = fixed cost of sampling
a2 = variable cost of sampling (per unit sampled)
a3 = cost of finding and fixing the assignable cause
a3' = cost of investigating a false alarm
a4 = hourly penalty cost of operating out of control
α = probability of a false alarm (Type I error)
λ = expected number of process shifts per hour.
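As a numerical illustration, the cycle-time expression is easy to evaluate. The sketch below uses the hypothetical example values introduced in Section 3 (1/λ = 4 h, g = 0.01 h, D = 2 h) together with the power 1 - β ≈ 0.87 reported in Table 1 for N = 13, and the approximation τ ≈ H/2 - λH²/12 applied later in this section:

```python
def expected_cycle_time(lam, H, power, g, N, D):
    """Duncan's expected cycle time E(T) = 1/lam + H/(1-beta) - tau + g*N + D,
    with tau approximated by H/2 - lam*H^2/12 (the approximation used below)."""
    tau = H / 2 - lam * H**2 / 12
    return 1 / lam + H / power - tau + g * N + D

# Section 3 example: 1/lam = 4 h, H = 1.0 h, power = 0.87 (Table 1, N = 13)
et = expected_cycle_time(lam=0.25, H=1.0, power=0.87, g=0.01, N=13, D=2)
print(round(et, 2))  # prints 6.8 (hours per cycle)
```

A cycle thus lasts about 6.8 h on average: 4 h in control, and roughly 2.8 h spent detecting, sampling for, and repairing the assignable cause.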
The Taguchi loss function for a product is defined below. Consider a product with bilateral
tolerances of equal value (Δ). If the cost to society for manufacturing a product out of specification
is A $/unit, then the Taguchi loss function defines the expected loss to society caused by using a
particular process to produce the product as:

Expected loss/unit = (A/Δ²)·v²     (1)

where

v² = mean squared deviation of the process.

It can easily be shown that v² = σ² + (μ - T)², where σ² = process variance, μ = process mean
and T = process target. We assume that when the process is in control its mean is centered on
target and its v² = v1² = σ². We also assume that when the process shifts, its mean shifts off target
and v² = v2² = σ² + (μ - T)². (Since we are considering only x̄ charts, the consideration of mean
shifts is sufficient.)
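Equation (1) is straightforward to evaluate. A minimal sketch, using the example values from Section 3 (A = $5/part, Δ = 0.003, σ = 0.001, shift δ = μ - T = 0.001):

```python
def taguchi_expected_loss(A, delta_tol, msd):
    """Expected loss per unit, eq. (1): (A / delta_tol^2) * v^2,
    where msd = v^2 is the mean squared deviation of the process."""
    return A / delta_tol**2 * msd

A, tol, sigma, shift = 5.0, 0.003, 0.001, 0.001
loss_in_control = taguchi_expected_loss(A, tol, sigma**2)          # v1^2 = sigma^2
loss_shifted = taguchi_expected_loss(A, tol, sigma**2 + shift**2)  # v2^2 = sigma^2 + (mu - T)^2
print(round(loss_in_control, 4), round(loss_shifted, 4))  # prints 0.5556 1.1111
```

The shift doubles the mean squared deviation and hence doubles the loss rate, from about $0.56 to about $1.11 per unit.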
Using the definition of loss given in (1), we can easily embellish Duncan's model to consider losses
owing to in-control and out-of-control variability. Noting that the expected period in control is
(1/λ) and the expected period out of control is

H/(1 - β) - τ + gN + D

and assuming that the production rate is P units/h, the cost per cycle (c) using the embellished
model is:

c = (a1 + a2·N)·E(T)/H + a3'·α·exp(-λH)/(1 - exp(-λH)) + a3 + (A/Δ²)·v1²·P·(1/λ)
    + (A/Δ²)·v2²·P·[H/(1 - β) - τ + gN + D]     (2)
Dividing (2) by E(T) and applying the following approximations and definitions [2]

τ ≈ H/2 - λH²/12

B = H/(1 - β) - τ + gN + D ≈ H·[1/(1 - β) - 1/2 + λH/12] + D + gN

exp(-λH)/(1 - exp(-λH)) ≈ 1/(λH)

L1 = (A/Δ²)·v1²

L2 = (A/Δ²)·v2²

we obtain the expected cost per hour as

E(L) = (a1 + a2·N)/H + [L1·P + λ·a3 + λ·L2·P·B + a3'·α/H]/(1 + λB).

The optimal values for N, H, and K can be obtained by minimizing the above cost function.
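The cost-per-hour expression can be evaluated directly. The following sketch is our own Python transcription of it (not part of the original paper), with α and 1 - β computed from the standard normal distribution; it reproduces the optimal-point cost reported in Section 3:

```python
from math import sqrt
from statistics import NormalDist

PHI = NormalDist().cdf  # standard normal CDF

def expected_cost_per_hour(N, K, H, lam, a1, a2, a3, a3p, g, D, P, L1, L2, shift_in_sigma):
    """Expected cost per hour of the embellished model:
    (a1 + a2*N)/H + (L1*P + lam*a3 + lam*L2*P*B + a3p*alpha/H) / (1 + lam*B)."""
    alpha = 2 * PHI(-K)                                   # false-alarm probability
    power = PHI(-K + shift_in_sigma * sqrt(N)) + PHI(-K - shift_in_sigma * sqrt(N))
    B = H * (1 / power - 0.5 + lam * H / 12) + D + g * N  # out-of-control period plus overheads
    return (a1 + a2 * N) / H + (L1 * P + lam * a3 + lam * L2 * P * B + a3p * alpha / H) / (1 + lam * B)

# Section 3 example values: A = $5, Delta = 0.003, sigma = 0.001, shift = 1 sigma
L1 = 5 / 0.003**2 * 0.001**2                  # (A/Delta^2) * v1^2, about $0.556/unit
L2 = 5 / 0.003**2 * (0.001**2 + 0.001**2)     # (A/Delta^2) * v2^2, about $1.111/unit
c = expected_cost_per_hour(13, 2.5, 1.0, 0.25, 1.0, 0.10, 50, 50, 0.01, 2, 100, L1, L2, 1.0)
print(round(c, 2))  # prints 88.47
```

The design N = 13, K = 2.5, H = 1.0 evaluates to about $88.47/h, matching the $88.48/h reported in Table 1 to within rounding.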
Suraj M. Alexander et al.
3. APPLICATION OF THE EMBELLISHED MODEL
a1 = $1.00 (fixed cost of sampling)
a2 = $0.10 (variable cost of sampling)
a3 = $50 (cost of finding and fixing the assignable cause)
a3' = $50 (cost of investigating a false alarm)
P = 100 parts/h
A = $5/part
Δ = 0.003
1/λ = 4 h
D = 2 h
g = 0.01 h
v1² = σ² = (0.001)²
δ = (μ - T) = 0.001
v2² = σ² + δ² = (0.001)² + (0.001)² = 0.000002.
Table 1 lists the results of a computer search for the optimum design parameters. For these
conditions the optimal parameters are seen to be N* = 13, K* = 2.5, and H* = 1.0, at a cost of
$88.48/h. The most common values used in U.S. industry are N = 5, K = 3, and H = 0.5, which
result in a cost of $92.88/h, a penalty of $4.40/h. The computer program used to search for an
optimum computes the optimal control limit width K and sampling frequency H for several values
of N and displays the value of the cost function with the associated alpha risk and power, as shown
in Table 1. This is the same approach used by Montgomery [2] and Jaraiedi and Zhuang [9]. The
program is listed in the Appendix. It is easy to run on any IBM-compatible computer with BASIC
and uses a simple grid search. The range and the step size of the search on any of the
parameters can be changed by changing the "FOR" statements in the program.
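The grid search can be sketched compactly in a modern language. This is our own reimplementation (not the BASIC original), evaluating the cost-per-hour expression derived in Section 2 over a 0.1-step grid in K and H for each N, with the Section 3 example values as defaults:

```python
from math import sqrt
from statistics import NormalDist

PHI = NormalDist().cdf  # standard normal CDF

def cost(N, K, H, lam=0.25, a1=1.0, a2=0.10, a3=50.0, a3p=50.0,
         g=0.01, D=2.0, P=100.0, L1=5 / 0.003**2 * 1e-6,
         L2=5 / 0.003**2 * 2e-6, shift_in_sigma=1.0):
    """Expected cost per hour; defaults are the Section 3 example values."""
    alpha = 2 * PHI(-K)
    power = PHI(-K + shift_in_sigma * sqrt(N)) + PHI(-K - shift_in_sigma * sqrt(N))
    B = H * (1 / power - 0.5 + lam * H / 12) + D + g * N
    return (a1 + a2 * N) / H + (L1 * P + lam * a3 + lam * L2 * P * B + a3p * alpha / H) / (1 + lam * B)

# Grid search over N, K, H; step sizes mirror the 0.1 granularity of Table 1
best = min(((cost(N, K / 10, H / 10), N, K / 10, H / 10)
            for N in range(2, 16)        # N = 2 .. 15
            for K in range(15, 31)       # K = 1.5 .. 3.0
            for H in range(1, 21)),      # H = 0.1 .. 2.0
           key=lambda t: t[0])
print(best)  # (minimum cost, N*, K*, H*)
```

The minimum lands in the flat region around N = 13, K = 2.5, H = 1.0 at roughly $88.5/h, consistent with Table 1; as the table shows, costs for nearby N are nearly indistinguishable.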
Power (I - / / )
2
3
4
5
6
7
8
9
10
11
12
13
14
15
1.8
1.9
2. I
2. I
2.2
2.2
2.2
2.3
2.4
2.4
2.4
2.5
2.5
2.6
0.7
0.7
0.7
0.8
0.8
0.8
0.9
0.9
0.9
1.0
1.0
1.0
I. I
I. 1
0.07
0.06
0.04
0.04
0.03
0.03
0.03
0.02
0.02
0.02
0.02
0.01
0.01
0.01
0.35
0.43
0.46
0.55
0.60
0.67
0.74
0.76
0.78
0.82
0.86
0.87
0.89
0.90
Cost
93.07
91.64
90.69
90.02
89.54
89.20
89.95
88.76
88.64
88.55
88.50
88.48~
88.49
88.50
Fig. 1. Sample size (N) and sampling interval (H) vs 1/λ (curves shown for a3 = 50 and a3 = 100; a1 = $1.00, a2 = $0.10).
4. SENSITIVITY ANALYSIS
We study the sensitivity to the magnitude and frequency of process shifts in order to determine
the appropriate adjustment of control chart parameters in the presence of process improvements,
and process deterioration. The frequency of process shifts is changed in the model by adjusting
the value of λ, the expected arrival rate of process shifts. The magnitude of process shift is varied
by changing the value of δ = (μ - T). Note that the cost of investigating and fixing an assignable
cause (a3) is not changed as a function of the magnitude of shift, since we assume that the cost
of investigating small shifts plus the cost of repair of these shifts is equal to the cost of investigating
large shifts plus the cost of repair of the causes of these shifts. However, over time we expect the
average cost a3 to decrease as the teams become more adept at discovering and correcting causes
of process shifts. We therefore also study the change in optimal control chart parameters when a3
changes.
Figures 1, 2, and 3 indicate changes in the "optimum" control chart design parameters, i.e. the
sample size (N) and the sampling interval (H) under conditions of process improvement and
deterioration. The design (K) was found to be relatively robust under the different conditions.
Process improvement is denoted by a reduction in the frequency and magnitude of process
shifts.
Fig. 2. Sample size (N) and sampling interval (H) vs 1/λ (curves shown for a3 = 25, 50, and 100; a1 = $10.00, a2 = $1.00).
Fig. 3. Sample size (N) and sampling interval (H) vs δ (a3 = 25, 1/λ = 2 h).
The curves in Fig. 1 indicate that when the frequency of process shifts decreases, or the mean
time interval between process shifts increases, the sample size (N) increases and the sampling
interval (H) decreases to a steady-state value. This, at first, seems counter-intuitive, i.e. when the
process improves the monitoring effort seems to have increased, albeit to a steady-state value.
However, this can be explained when we observe the rate of convergence to the steady-state values
of the design parameters. The rate of convergence depends on the cost of searching for an
assignable cause (a3), i.e. the higher this cost the slower the rate of convergence. This signifies that
if there is a high cost related to investigating out-of-control signals, owing to the high cost of search
and frequency of occurrence, then the control chart design parameters are set to keep this cost low.
That is, when N is kept low and H is set high, (1 - β) is reduced and H/(1 - β), the time required
to detect an out-of-control state, increases. Hence the number of out-of-control states detected and
investigated per unit time is reduced. Figure 2 illustrates the same curves (optimal N and H vs 1/λ)
as Fig. 1. In Fig. 2, however, we investigate the scenario where the sampling costs have increased
by a factor of 10, i.e. a1 = $10 and a2 = $1. Under these conditions the behavior of N is unchanged,
while the sampling interval H remains at a relatively high value. The latter can be explained by
the high sampling costs.
Figure 3 indicates that increases in the size of the shift from 0.5σ to 5σ warrant a decrease
in the sample size and an increase in the sampling frequency. The smaller sample size, recommended
for larger process shifts, results in a lower cost of sampling, with the probability of detecting the
shift (1 - β) at an acceptable level. The sampling frequency increase can be explained by the
objective of limiting the period of operating out of control and its associated losses.
5. CONCLUSIONS AND RECOMMENDATIONS
In this paper we have embellished Duncan's cost model with the Taguchi loss function. This
embellishment provides a framework for using the Taguchi loss function, which defines losses
owing to the variability caused by both chance and assignable causes, for the economic design of
control charts. We have also investigated the behavior of this embellished model through sensitivity
analysis. Our analysis has indicated that the design parameters for the x̄-chart are fairly robust
when the cost of finding an assignable cause and the frequency of occurrence of an assignable cause
are not too high. The parameters (N and H) do have to be adjusted based on the size of the process
shift that is investigated. Small process shifts require larger values of N and H, while for large shifts
a small N and H are recommended.
REFERENCES
1. C. Ho and K. E. Case. Economic design of control charts: a literature review for 1981-1991. J. Quality Technol. 26, 1-78 (1994).
2. D. C. Montgomery. Economic design of an x̄ control chart. J. Quality Technol. 14, 40-43 (1982).
3. J. J. Pignatiello. Optimal economic design of x̄-control charts when cost model parameters are not precisely known. IIE Trans. 20, 103-110 (1988).
4. G. Tagaras. Economic x̄-charts with asymmetric control limits. J. Quality Technol. 21, 147-154 (1989).
5. E. M. Saniga. Economic statistical control chart designs with an application to x̄ and R charts. Technometrics 31, 313-320 (1989).
6. G. Taguchi, E. A. Elsayed and T. Hsiang. Quality Engineering in Production Systems. McGraw-Hill, New York (1989).
7. B. M. Adams and W. H. Woodall. An analysis of Taguchi's on-line process control procedure under a random walk model. Technometrics 31, 401-413 (1989).
8. A. J. Duncan. The economic design of x̄-charts used to maintain current control of a process. J. Am. Statist. Assoc. 51, 228-242 (1956).
9. M. Jaraiedi and Z. Zhuang. Determination of optimal design parameters of x̄-charts when there is a multiplicity of assignable causes. J. Quality Technol. 23, 253-258 (1991).
10. T. J. Lorenzen and L. Vance. The economic design of control charts: a unified approach. Technometrics 28, 3-10 (1986).
11. T. P. McWilliams. Economic control chart designs and the in-control time distribution: a sensitivity study. J. Quality Technol. 21, 103-110 (1989).
12. D. C. Montgomery. Statistical Quality Control. Wiley, New York (1991).
APPENDIX
1060 NDELTA = DELTA / SQR(V1)
1070 T1 = NDELTA * SQR(N) - K
1080 T2 = -NDELTA * SQR(N) - K
1090 X = -3.5
1100 Y1 = T1 - X
1110 C = Y1 / 8
1120 S1 = C / (3 * SQR(2 * 3.14159)) * (EXP(-.5 * X ^ 2) + 4 * EXP(-.5 * (X + Y1 / 8) ^ 2) + 2 * EXP(-.5 * (X + Y1 / 4) ^ 2) + 4 * EXP(-.5 * (X + 3 * Y1 / 8) ^ 2) + 2 * EXP(-.5 * (X + Y1 / 2) ^ 2) + 4 * EXP(-.5 * (X + 5 * Y1 / 8) ^ 2) + 2 * EXP(-.5 * (X + 6 * Y1 / 8) ^ 2) + 4 * EXP(-.5 * (X + 7 * Y1 / 8) ^ 2) + EXP(-.5 * (X + Y1) ^ 2))
1130 X = -5
1140 Y2 = T2 - X
1150 C2 = Y2 / 8
1160 S2 = C2 / (3 * SQR(2 * 3.14159)) * (EXP(-.5 * X ^ 2) + 4 * EXP(-.5 * (X + Y2 / 8) ^ 2) + 2 * EXP(-.5 * (X + Y2 / 4) ^ 2) + 4 * EXP(-.5 * (X + 3 * Y2 / 8) ^ 2) + 2 * EXP(-.5 * (X + Y2 / 2) ^ 2) + 4 * EXP(-.5 * (X + 5 * Y2 / 8) ^ 2) + 2 * EXP(-.5 * (X + 6 * Y2 / 8) ^ 2) + 4 * EXP(-.5 * (X + 7 * Y2 / 8) ^ 2) + EXP(-.5 * (X + Y2) ^ 2))
1170 REM BETA HOLDS THE POWER (1 - BETA)
1180 BETA = S1 + S2
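Lines 1090-1180 of the BASIC program approximate the two standard normal tail areas that make up the power (1 - β) using composite Simpson's rule over eight subintervals (coefficients 1, 4, 2, ..., 4, 1). The computation can be cross-checked against the exact normal CDF; the sketch below is our own, using math.erf rather than Simpson's rule:

```python
from math import erf, sqrt

def phi_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power(N, K, shift_in_sigma):
    """Probability that a sample mean falls outside the +/-K sigma limits after a
    shift of shift_in_sigma process standard deviations; this is the quantity the
    BASIC code accumulates in BETA (i.e. 1 - beta)."""
    t1 = shift_in_sigma * sqrt(N) - K   # as in BASIC line 1070
    t2 = -shift_in_sigma * sqrt(N) - K  # as in BASIC line 1080
    return phi_cdf(t1) + phi_cdf(t2)

print(round(power(13, 2.5, 1.0), 2))  # prints 0.87, matching Table 1 at N = 13
```

Truncating the integrals at -3.5 and -5, as the BASIC code does, discards only negligible probability mass, so the Simpson approximation agrees with the exact CDF to about three decimal places.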