CHECKING MODELS IN STRUCTURAL DESIGN

By Mark G. Stewart¹ and Robert E. Melchers²

ABSTRACT: A large proportion of structural failures are due to human error in the
design stage of a structural engineering project, and many of these failures could
have been averted if there had been adequate design checking. Results are reported
herein from surveys investigating the effectiveness of three typical design-checking
processes: self-checking, independent detailed design checking, and overview
checking. Following a review of current work in this area, appropriate mathe-
matical models, which examine the effects of error magnitude, time, and experi-
ence, are proposed for each design checking process. These are compared to the
limited data obtained from the surveys. Although preliminary, the results have
interesting implications for practitioners.

INTRODUCTION

Statistical studies have revealed that up to 75% of structural failures, malfunctions, or lack of serviceability can be attributed to human error (Matousek
and Schneider 1976; Walker 1980). Procedures to reduce human errors are
therefore of interest in achieving safer structures. Available statistical data
(e.g., Ellingwood 1987) show that approximately 30-40% of all human errors in the building process are committed in the design stage of a project.
This is a significant proportion and emphasizes the importance of human
errors in design.
A human error in the structural design context may be defined as an event
or process that departs from commonly accepted competent professional
practice. It excludes such unforeseen events as "acts of God," variation in
material properties, etc. Such a definition also accords with legal practice
(Sneath 1979).
It has been suggested that human error may be reduced by one or more
of the following measures: controls, legal sanctions, education, personnel
selection, and task complexity reduction. The effectiveness and practicality
of these measures remain unclear, for there has been much speculation but
little research. However, it has been estimated that as many as half of the
structural failures could have been averted had there been adequate checking
or other controls, particularly in design related areas (Matousek and Schnei-
der 1976; Walker 1980).
In current practice, design checking usually involves one or more of in-
ternal checking, checking by an independent consultant, or checking by au-
thorities. Each of these may include self-checking, independent detailed de-
sign checking, and overview checking. The effectiveness of these procedures
was investigated in the present project by data obtained from surveys. The
self-checking data were based on the reassessment of undergraduate engi-
neering student examination responses, while data for independent detailed
¹Lect., Dept. of Civ. Engrg. and Surv., Univ. of Newcastle, N.S.W. 2308, Australia.
²Prof. of Civ. Engrg., Univ. of Newcastle, N.S.W., Australia.
Note. Discussion open until November 1, 1989. To extend the closing date one month, a written request must be filed with the ASCE Manager of Journals. The manuscript for this paper was submitted for review and possible publication on May 17, 1988. This paper is part of the Journal of Structural Engineering, Vol. 115, No. 6, June, 1989. ©ASCE, ISSN 0733-9445/89/0006-1309/$1.00 + $.15 per page. Paper No. 23546.



design checking and overview checking were obtained from specially con-
ducted surveys among practicing professional engineers.
Self-checking may be defined as the immediate monitoring and correction
of each successive "microtask" (which is a single task in the design pro-
cess—e.g., a calculation or a table look-up) as it is completed by the de-
signer undertaking a design process. Independent detailed design checking is the checking and correction of all microtasks in the design by an independent reviewer after the initial design process is completed. Overview
checking is the check which is typically made by a senior engineer without
resorting to detailed calculations. Clearly, such a check relies on a subjective
judgement about the overall suitability of the member or structure under
consideration.
Understanding the concepts of error detection is essential for the devel-
opment of a structural design task model (using event-tree methodology) to
simulate the effects of design error and design checking (Melchers 1989).
One of the functions to which such a model could be put is to optimize the
level of design checking necessary for the design of a particular structure.
This may in turn lead to suggestions for possible design-checking guidelines.
The structure under study is a typical steel portal frame (Stewart 1987).

SELF-CHECKING
Survey Methodology
There is evidence (Rabbitt 1978) that self-checking efficiency for so-called
"omission" errors (i.e., failure to perform a task) is substantially lower (by
more than an order of magnitude) than self-checking efficiency for errors of
"commission" (incorrect performance of a task). For this reason, the part of
the study considering self-checking was limited largely to the study of self-
checking for errors of commission.
For practical reasons the data set was limited to first-year undergraduate
student examination scripts. For these, intermediate calculations and corrections could be examined for each set task. The number of individual responses totalled just over 800. Each individual response was carefully
examined for calculation errors, and, where self-correction was evident, the
original and amended responses were recorded.
For the purposes of the present study, a self-correction was considered to
have occurred when there was evidence both of an original task response
and an amended response. In particular, self-checking was deemed to have
occurred if an incorrect result was amended in any manner. Typically this
would be by crossing out the original value and replacing it with a corrected
value.
Corrections for round-off error were excluded from consideration (Melch-
ers 1988), giving a sample size of 86 responses for which self-checking
resulted in a correction. There was no evidence of corrections leading to
further error, nor of a correct result changed to an error.
Mathematical Model
For each sample, the incorrect value (x) and the correctly self-checked response (x_m) were used to evaluate the logarithmic error factor

$$ e_{\log} = \log_{10}\frac{x}{x_m} \qquad (1) $$


FIG. 1. Error Factor "Self-Checking" Model Fitted to Self-Checking Efficiency (p_s) Histogram

For those errors which were correctly self-checked, the error factor (x/x_m)
is a measure of the magnitude of the "initial" error. Note that "initial" error
refers to an error before its possible correction.
Previous research has shown that the occurrence rate of error varies with
type of calculation, and that shorter calculations occur more frequently than
larger ones (Melchers 1989). These factors would influence the rate of error
detection and accordingly the survey data obtained were appropriately cor-
rected (Stewart 1987). Fig. 1 shows the histogram obtained for checking
efficiency as a function of error factor. A checking efficiency of unity in-
dicates that all errors were detected, a value of zero that none were detected.
Also shown in Fig. 1 is an empirical model based on fitting to the data a
modified Type I extreme value probability distribution (Stewart and Melch-
ers 1987a).
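To make the model concrete, the short Python sketch below (not part of the original paper) computes the logarithmic error factor of Eq. 1 and evaluates a Gumbel-shaped curve as a stand-in for the "modified Type I extreme value" efficiency model; the mode, scale, and peak values used here are illustrative assumptions only, since the actual fitted form and parameters are those reported in Stewart and Melchers (1987a).

```python
import numpy as np

def log_error_factor(x, x_m):
    """Eq. 1: e_log = log10(x / x_m) for an initial error x and self-corrected value x_m."""
    return np.log10(np.asarray(x, dtype=float) / np.asarray(x_m, dtype=float))

def self_check_efficiency(e_log, mode=0.0, scale=1.0, peak=0.5):
    """Assumed Type I (Gumbel) shaped efficiency curve, scaled so its maximum equals
    `peak` (roughly the 0.5 peak of the Fig. 1 histogram); parameters are hypothetical."""
    z = (np.asarray(e_log, dtype=float) - mode) / scale
    gumbel_pdf = np.exp(-(z + np.exp(-z)))      # standard Gumbel (Type I) density shape
    return peak * gumbel_pdf / np.exp(-1.0)     # Gumbel density peaks at exp(-1)

# Example: an initial value of 125 self-corrected to 12.5 gives e_log = 1.0
e = log_error_factor(125.0, 12.5)
print(float(e), float(self_check_efficiency(e)))
```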
Comment
The process of self-checking is a complex one which appears to operate
within the subconscious level of thought. At best, the data can provide only
an indication of the underlying trends of the self-checking process.
The results support Grill's (1984) proposition, made in relation to prac-
ticing structural designers, that self-checking detects only the small or minor
errors that may occur in calculations, and that self-checking cannot ade-
quately safeguard against errors due to misconceptions, oversights, or mis-
understandings. The latter errors are the results of deliberate and conscious
decisions that, once taken, appear seldom to be doubted by the designer
himself. At present little understanding of these types of errors appears to
be available; it would appear that there can be no effective self-checking
effort for this type of error.
The present survey demonstrates that the detection rate for self-checking
for small, or minor, initial error magnitudes is much greater than for larger
initial error magnitudes. It might be concluded that (as a group) designers


tend to be more concerned with relatively minor details and technicalities
and that they tend to ignore larger errors. Such a conclusion is in broad
agreement with the work of Norman (1981). However, the present results
appear to contradict conventional wisdom, it being commonly assumed that
larger errors are more detectable.

INDEPENDENT DETAILED DESIGN CHECKING

Survey Methodology
In the study of the effectiveness of independent detailed design checking,
two factors influencing checking effectiveness were isolated as being of par-
ticular interest and also obtainable from survey results. These were total time
taken for checking [and therefore, indirectly, checking effort (Lind 1983)]
and error magnitude.
To obtain this information, a mailed survey technique was adopted. It was
recognized that while this approach might alert prospective respondents about
the real nature of the exercise (and thus lead to excessively good error de-
tection rates), no other viable alternative existed. At least indicative data
would be obtained. The survey was mailed to 150 civil engineering orga-
nizations and individuals in the state of Victoria, Australia. The total number
of responses was 47.
Prospective respondents were asked to detect and correct any errors or
mistakes in three pages of computations of design loadings for a steel portal
frame structure, and to record the checking time taken.
The types of errors which might occur in a design task have previously
been reviewed (Stewart and Melchers 1986). The most significant errors ap-
peared to be errors of commission and errors of omission. Because of the
difficulty in data analysis associated with errors of omission, only errors of
commission were included in the task to be checked.
As is shown in Table 1, the design consisted of 93 individual microtasks
that required checking. Of these, deliberate errors were incorporated in nine
microtasks. Successful error detection was defined to occur when the re-
spondent indicated clearly any error and corrected it in some way.

Mathematical Models
Fig. 2 shows the data points for checking efficiency (p_ind), defined as the ratio of errors detected to errors present, plotted against checking time (t) as obtained from the survey.

TABLE 1. Definition of Microtasks for "Independent Detailed Design Checking" Survey

Possible microtask error    Number of correct    Number of incorrect    Totals
                            microtasks           microtasks
(1)                         (2)                  (3)                    (4)
Calculation                 9                    3                      12
Table look-up               13                   1                      14
Transfer of information     50                   2                      52
Code look-up                4                    —                      4
Load direction              8                    3                      11
Totals                      84                   9                      93



FIG. 2. Comparison of Checking Efficiency Models as a Function of Checking Time (A = 376.4, B = 1.4365, t_0 = 10, a_1 = 0.05, a_2 = 0.095; Integers Refer to Number of Concurrent Data Values)

The data points show a lot of scatter. This is
considered to be due to the unavoidable lack of control over test conditions.
Nevertheless, the data do suggest a trend.
It has been suggested (e.g., Kupfer and Rackwitz 1980) that error detection is related to search theory and can be expressed as a negative exponential curve:

$$ p_{ind}(t) = 1 - \exp(-a_1 t) \qquad (2) $$

where p_ind(t) = the average checking efficiency as a function of checking time t. The constant a_1 may be assumed to be proportional to the level of detail examination and to the characteristics of the checker, and inversely proportional to the task size. Statistical tests indicated that this model does not provide a reasonable fit to the data for any value of a_1 (see Fig. 2).
A better description of design checking as a function of checking time is
possible by using an S-shaped "learning curve" as used in the field of psy-
chology of education (Estes 1959; Hull 1952). In terms of design checking,
the use of an S-curve has some appeal. The initial increase in checking ef-
ficiency may be attributed to the designer attempting to understand the de-
sign concept and procedure. This is followed by a period of checking each
microtask for any errors and in which many of the errors would be detected.
Finally the designer would reach the stage of diminishing returns for his
effort, resulting in a reduced rate of checking efficiency. An appropriate S-
curve is
$$ p_{ind}(t) = \frac{1}{1 + A \exp(-B t^{1/2})} \qquad (3) $$

where the constant A must be sufficiently large to ensure p_ind(0) ≈ 0; and constant B is inversely proportional to the task complexity and proportional to the expertise of the checker.


FIG. 3. "Independent Detailed Design Checking" Models as a Function of Error Magnitude (A = 3,000, B = 2.2765, t = 20, c_1 = 0.075, c_2 = 2.5)

Eq. 3 is shown in Fig. 2 (with A = 376.4 and B = 1.436) and is seen to provide a reasonable fit. Other S-shaped
curves could also be postulated.
The negative exponential curve Eq. 2 might be adapted by shifting the time axis by an amount t_0:

$$ p_{ind}(t) = 1 - \exp[-a_2(t - t_0)] \qquad (4) $$

and this is also seen to provide a reasonable fit to the data when t_0 = 10 and a_2 = 0.095 (see Fig. 2). Evidence from education psychology indicates
that "learning curves" progressively change from S-shaped to negative ex-
ponential as the subject's training increases (Harlow 1959). This observation
is of relevance since a superior checking efficiency would be expected from
an engineer with relevant expertise or experience in similar designs. This
could result in t_0 reducing with experience. It is unlikely, however, that t_0 would reduce to zero, since each design is unique and even expert design
checkers would require some effort to become familiar with the design. A
"learning curve" of the form given by Eqs. 3 or 4 therefore appears more
appropriate than the negative exponential curve, Eq. 2.
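A minimal sketch of the three checking-time models of Eqs. 2-4 is given below in Python, using the parameter values quoted for Fig. 2; it is an illustration of the functional forms only and does not reproduce the authors' fitting procedure.

```python
import numpy as np

def p_exp(t, a1=0.05):
    """Eq. 2: negative exponential model."""
    return 1.0 - np.exp(-a1 * t)

def p_scurve(t, A=376.4, B=1.4365):
    """Eq. 3: S-shaped 'learning curve' model."""
    return 1.0 / (1.0 + A * np.exp(-B * np.sqrt(t)))

def p_shifted_exp(t, a2=0.095, t0=10.0):
    """Eq. 4: negative exponential shifted by a familiarisation time t0."""
    return np.clip(1.0 - np.exp(-a2 * (t - t0)), 0.0, None)

t = np.array([5.0, 10.0, 20.0, 30.0, 40.0])   # checking time in minutes
for model in (p_exp, p_scurve, p_shifted_exp):
    print(model.__name__, np.round(model(t), 2))
```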
When the error magnitude (m_e) relative to the correct value x_m, defined as

$$ m_e = \frac{|x - x_m|}{x_m} \qquad (5) $$
is plotted (Fig. 3) against checking efficiency for each error and for re-
sponses with a similar checking time (20 ± 1 min), it is seen that larger
errors are more easily detected than smaller ones. Such an observation seems
reasonable, and may be incorporated in the negative exponential model
(Eq. 2) proposed by Lind (1983) or the shifted negative exponential model
(Eq. 4)



$$ p_{ind}(m_e, t) = 1 - \exp[-h(m_e)\, c_0 (t - t_0)] \qquad (6) $$

where c_0 = a constant. Lind (1983) assumed that h(m_e) = m_e^2, which, with Eq. 6, is plotted in Fig. 3 for t = 20, t_0 = 0 and c_0 = 2.5. It is evident that this curve approaches unity rather too quickly. By assuming h(m_e) = m_e^{1/5} and c_0 = 0.075, the curve fit is improved (Fig. 3), although no justification for this form of h( ) can yet be given. (The values of t_0 and c_0 were chosen in each case to produce a curve of best fit.)


Eq. 3 also may be modified to incorporate the error magnitude function h(m_e), giving

$$ p_{ind}(m_e, t) = \frac{1}{1 + A \exp[-h(m_e)\, B\, t^{1/2}]} \qquad (7) $$

where the function of form h(m_e) = m_e^{1/10} appears to be appropriate for parameter values A = 3,000, B = 2.2765, and t = 20 (Fig. 3). This curve appears to provide an improved fit to the data.
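The magnitude-dependent variants can be sketched in the same way; the h(m_e) exponents below are those read from the text and Fig. 3 and should be treated as indicative only.

```python
import numpy as np

def p_ind_exp(m_e, t, c0=0.075, t0=0.0, power=0.2):
    """Eq. 6 with h(m_e) = m_e**(1/5): shifted negative exponential in time."""
    return 1.0 - np.exp(-(m_e ** power) * c0 * (t - t0))

def p_ind_scurve(m_e, t, A=3000.0, B=2.2765, power=0.1):
    """Eq. 7 with h(m_e) = m_e**(1/10): S-curve modified by error magnitude."""
    return 1.0 / (1.0 + A * np.exp(-(m_e ** power) * B * np.sqrt(t)))

m_e = np.array([0.1, 0.5, 1.0, 2.0])   # relative error magnitude
print(np.round(p_ind_exp(m_e, t=20.0), 2))
print(np.round(p_ind_scurve(m_e, t=20.0), 2))
```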

OVERVIEW CHECKING

Survey Methodology
In this study practicing engineers were again used as subjects, with a ques-
tionnaire mailed to 210 civil engineering organizations and individuals
throughout Australia. A total of 105 survey responses were obtained.
Decisions as to the adequacy of 11 simple structural designs, all simply
supported beam members, each with a different loading configuration, were
required. Nine designs used steel universal beams, two used reinforced con-
crete sections. The possible response options were preselected as "under-
sized," "correct," "oversized," and "unsure." It was clearly stated that the
decision was to be based on personal judgement, without the aid of engi-
neering design aids or detailed calculations, and based on previous experi-
ence with Australian codes and practice. The respondents were also re-
quested to record both their response time and the extent of relevant professional
engineering experience (in years).
In the following, the member sizes shown in the survey sheet will be
termed the "suggested" design for each case, and the theoretically correct
member size as simply the "correct" design.
In the analysis, responses marked as "unsure" were ignored. The reinforced concrete designs led to the highest proportion of
"unsure" responses (5.2%). By comparison, only 0.4% of steel member de-
signs were recorded as "unsure."
The relative degree of adequacy of a "suggested" member design was measured through the percentage resistance error (R_e), defined as the percentage difference between the "correct" design (R_CD) and the "suggested" design (R_SD), using bending moment resistance as a comparative measure

$$ R_e = \frac{R_{SD} - R_{CD}}{R_{CD}} \times 100\% \qquad (8) $$
Member resistance was considered to be adequately described by working stress methods as specified in Australian Standard design codes (AS 1250 and AS 1480). The appropriate "correct" design, the percentage resistance error, and the appropriate response to the survey question are given for each case in Table 2. It is evident that the "suggested" design is in seven cases overdesigned, in three underdesigned, and in one correctly designed.

TABLE 2. Suggested and Corrected Member Design Definitions for "Overview Checking" Survey

Design    "Suggested"    "Correct"               Correct
task      design         design       R_e (%)    response
(1)       (2)            (3)          (4)        (5)
1         350 UB 51      250 UB 31    125.49     Oversized
2         760 UB 148     690 UB 125   28.74      Oversized
3         530 UB 92      310 UB 40    270.77     Oversized
4         610 UB 113     460 UB 82    78.88      Oversized
5         760 UB 220     690 UB 140   74.67      Oversized
6         310 UB 46      360 UB 51    -18.59     Undersized
7         250 UB 31      250 UB 37    -18.66     Undersized
8         310 UB 40      250 UB 31    91.89      Oversized
9         760 UB 148     610 UB 101   78.49      Oversized
10        2-Y28          —            -49.65     Undersized
11        3-Y24          —            0.0        Correct
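As a worked illustration of Eq. 8, the sketch below computes R_e for a pair of hypothetical bending resistances (not the actual capacities of the members in Table 2) and returns the expected survey response; the 5% tolerance band is an assumption introduced only for this example.

```python
def resistance_error(R_SD, R_CD):
    """Eq. 8: percentage difference between suggested and correct resistance."""
    return (R_SD - R_CD) / R_CD * 100.0

def expected_response(R_e, tol=5.0):
    """Classify the suggested design; `tol` is an assumed tolerance band."""
    if R_e < -tol:
        return "undersized"
    if R_e > tol:
        return "oversized"
    return "correct"

R_e = resistance_error(R_SD=180.0, R_CD=145.0)   # hypothetical kN·m resistances
print(round(R_e, 1), expected_response(R_e))      # 24.1 oversized
```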

Mathematical Models
A number of factors are known to influence the effectiveness of control
measures (Ingles 1986). Of these, the following were singled out for atten-
tion in the present study: percentage resistance error, experience, and check-
ing time (and therefore, indirectly, checking effort). To consider these fac-
tors systematically, a model of the overview checking process is required.
It was found difficult to devise a measure of overview checking effec-
tiveness in terms of error detection efficiency. Overview checking tends to
be concerned with the outcome of a number of design steps and processes
and as such is unlikely to be able to detect anything about any one of them.
Design reviewers are more concerned with the functionality of the result.
Accordingly, a simple overview checking model consisting of two decisions
was formulated:

1. Is the member as designed safe?
2. If the member is deemed safe, is it overdesigned?

In the sections to follow, a procedure for evaluating this decision strategy will be described.
The first decision—is the designed member safe?—requires analysis of
the data for the responses "correct" and "oversized." For each design task,
the probability of judging the design as "safe" is the sum of "correct" and
"oversized" responses divided by the total number of responses. Initially,
all data from the 105 respondents were employed, providing, therefore, an
averaged result (i.e., average experience and average response time).
The probability of a suggested member design being judged "safe," as a
function of the percentage resistance error, is shown in Fig. 4, together with
95% confidence intervals calculated for the data and a "best fit" model. To
define the model, let z be defined as
$$ z = \frac{R_e - \bar{x}}{\sigma} \qquad (9) $$

where x̄ and σ = mean and standard deviation of the chosen t distribution.

FIG. 4. Comparison of "Safe" Model and Survey Data


Then the model for a suggested design being judged "safe" is a modified
form of the expression for a cumulative t distribution

$$ p_{safe}(R_e) = 1 - \frac{\delta}{2} + \delta \int_0^{z} f(z,\nu)\,dz; \qquad R_e \ge \bar{x} \qquad (10a) $$

$$ p_{safe}(R_e) = 1 - \frac{\delta}{2} - \delta \int_0^{|z|} f(z,\nu)\,dz; \qquad R_e < \bar{x} \qquad (10b) $$

where f(z,ν) = probability density function for the t distribution; and ν and δ = constants. The model is compared with survey data in Fig. 4. The parameters x̄, σ, ν, and δ are given in Table 3.
Fig. 4 shows that the proposed model intercepts all except one of the 95%
confidence intervals. This suggests that the present model is reasonably ap-
propriate; however, other models could also be postulated.
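The "safe" model can be evaluated directly from the Table 3 parameters. The Python sketch below uses the algebraically equivalent form p_safe = 1 - δ + δ F_t(z), where F_t is the cumulative t distribution; this reading of Eq. 10 is an interpretation offered for illustration and should be checked against Stewart and Melchers (1987a).

```python
import numpy as np
from scipy.stats import t as t_dist

def p_safe(R_e, x_bar=-20.0, sigma=17.5, nu=1, delta=0.925):
    """Probability that a suggested design is judged 'safe' (Eqs. 9-10, Table 3)."""
    z = (np.asarray(R_e, dtype=float) - x_bar) / sigma   # Eq. 9
    return 1.0 - delta + delta * t_dist.cdf(z, df=nu)

print(np.round(p_safe(np.array([-50.0, -20.0, 0.0, 100.0, 270.0])), 2))
```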
The second part of the decision process is concerned with the probability of selecting a member as "oversized" given that the "suggested" member has previously been deemed "safe." This is a conditional probability p_oversized|safe. The general trend of p_oversized|safe against R_e shown in Fig. 5 is not unexpected; as the percentage resistance error increases, so does the degree of oversizing, resulting in a higher proportion of "oversized" responses.
It is possible to develop an oversized|safe decision model also based on a modified form of the expression for a cumulative t distribution.

TABLE 3. Parameters for "Overview Checking" Models

Judgment              x̄        σ       ν      δ
(1)                   (2)      (3)     (4)    (5)
Safe                  -20.0    17.5    1      0.925
Oversized|safe        35.0     17.5    1      0.900
  (Inexperienced)     32.5     17.5    1      0.750
  (Experienced)       30.0     17.5    1      0.910



FIG. 5. Comparison of "Oversized|Safe" Model and Survey Data

$$ p_{oversized|safe}(R_e) = \delta - 1 + p_{safe}(R_e) \qquad (11) $$

where p_safe(R_e) is defined by Eq. 10. The parameters for this model are given
in Table 3. This model is shown with the survey data in Fig. 5 and is seen
to provide a reasonable fit.
The probability of judging a designed member as "correct" is evidently
$$ p_{correct} = p_{safe} \times p_{correct|safe} = p_{safe} \times (1 - p_{oversized|safe}) \qquad (12) $$

Fig. 6 shows the general model for judging the designed member as "correct," and its comparison to the survey data.

FIG. 6. Comparison of "Correct" Model and Survey Data

Effect of Experience
Experience is a term that is widely used in the profession, but one which
lacks precise definition in terms which can be quantitatively interpreted. Due
to the relatively small sample size obtained from the survey, the use of a
continuous variable to represent "experience" for mathematical models was
not possible. Hence a binary variable was adopted: "inexperienced" and "experienced." These two terms were defined arbitrarily (in three alternative ways) to correspond to respondents whose responses fell in the following subsamples:

El. Lower and upper 20th percentiles of experience (remaining 60% ignored).
E2. Lower and upper 50th percentiles of experience.
E3. Less than four years and greater than four years experience.

Both graphical and statistical methods were used to examine the effects
of experience for decision-making effectiveness. Nonparametric statistical
tests of significance were employed because the survey data were in a di-
chotomized format. The most powerful of these tests is the Randomization
Test for Matched Pairs. A more general but less powerful test is the Cochran
Test, which was used to confirm results from the Randomization Test (Siegel 1956).
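For reference, Cochran's Q for matched dichotomous responses can be computed directly from its textbook formula, as sketched below; the data matrix is hypothetical (rows are respondents, columns are conditions, entries are 1 for a correct judgment), not the survey data.

```python
import numpy as np
from scipy.stats import chi2

def cochrans_q(x):
    """Return Cochran's Q statistic and its chi-square p-value (df = k - 1)."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    G = x.sum(axis=0)          # column (condition) totals
    L = x.sum(axis=1)          # row (respondent) totals
    T = G.sum()
    q = (k - 1) * (k * np.sum(G ** 2) - T ** 2) / (k * T - np.sum(L ** 2))
    return q, chi2.sf(q, df=k - 1)

data = np.array([[1, 1], [1, 0], [0, 0], [1, 1], [0, 1], [1, 1], [1, 0], [1, 1]])
print(cochrans_q(data))
```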
The significance tests for p_safe concluded that it was highly likely that the probability of selecting a designed member as safe is not related to the experience level of the overview checker. Further support for this conclusion is found in the observation that the proposed model for p_safe given by Eq. 10 fits within the 95% confidence intervals for all subsamples, irrespective of experience categories (E1, E2, and E3).
For the second decision, however, the null hypothesis of no difference between p_oversized|safe for "inexperienced" and "experienced" conditions was rejected at the 5% level. It can therefore be stated with a considerable degree of certainty that p_oversized|safe increases for positive R_e values when the experience level is high.
The effect of experience on p_oversized|safe may be modeled using Eq. 11, with changed parameters for "inexperienced" and "experienced" conditions. The relevant parameters for the model, as determined by experience level, are given in Table 3. The two resulting models were compared with relevant survey data for each of the three subsamples of "experienced" and "inexperienced." In the main, the two proposed models plotted within the 95% confidence intervals of the respective survey data, hence lending credibility to the models.
If, as has been suggested, p_oversized|safe is experience-dependent, then p_correct evaluated from Eq. 12 must also be experience-dependent. A comparison of the two proposed models for experience (i.e., "inexperienced" and "experienced") with the average experience model for p_correct is shown in Fig. 7. It appears that for R_e ≈ 0 the p_correct values are somewhat contradictory; p_correct for inexperienced engineers is slightly higher than that for experienced engineers. This observation is also supported by the survey data for the "correct" response. The reason for this remains unclear, but is most likely due to variability in the survey data.

Effect of Time
The effects of response time were evaluated statistically in a manner rather
similar to that applied to the study of experience. For both decisions, the
effects of response time were considered to be negligible, as indicated by
the nonrejection (at the 5% level) of the null hypothesis of no differences
due to time. It was also shown that there was no statistical evidence of a
relationship between experience level and response time.


FIG. 7. Effect of Experience on "Correct" Model


Comment
The aforementioned results provide information that is unexpected. It seems fairly certain from the results that the probability of predicting whether a "suggested" member design is "safe" is not a function of experience. This
observation contradicts the popularly held belief that more experience is
preferable to less experience. At the same time, the present results also in-
dicate that if a proposed member design is deemed "safe," then "experi-
enced" engineers tend to be more efficient in assessing whether the member
is oversized or not. This, at least, conforms with conventional wisdom.
The relationship between an engineer's experience and the safety of a de-
signed member is of particular interest. It has been shown by Matousek and
Schneider (1976) and Walker (1980) that lack of experience is a major con-
tributing cause in actual cases of structural failures. However, an analysis
of structural failures by Blockley (1977) shows that while the designer's
experience is a factor, its relative importance when compared with other
causes of failure is very low. On the other hand, a study by Ingles and Nawar
(1983) showed that engineers place great weight on experience for error re-
duction. The results of the present survey indicate that such a perception
may be false.
In a related area, studies investigating decision making as a result of train-
ing (which may be considered analogous to experience) have also produced
contradictory results. For example, Zakay and Wooler (1984) found that
training improved decision making, while Voth (1974) observed no such
improvement. Clearly further research is required.

REVIEW

The relative importance of the models presented herein may be gauged from a simple example. Fig. 8 shows the effect of self-checking and the two
alternative independent detailed design checking models (Eqs. 6 and 7) on
a one-step calculation error distribution (Melchers and Harrington 1984). Note
that the reduction of error occurrence is relatively small for self-checking,
indicating that self-checking as a control measure is relatively ineffective.
On the other hand, a large reduction in error occurrence results from a single
independent detailed design check, suggesting that this is a more effective
control measure.
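The filtering idea behind Fig. 8 can be sketched as follows: errors drawn from an assumed initial distribution of x/x_m survive an independent check with probability (1 - detection efficiency). Both the lognormal spread and the detection curve below are placeholders, not the distributions of Melchers and Harrington (1984).

```python
import numpy as np

rng = np.random.default_rng(0)
error_factor = 10 ** rng.normal(0.0, 1.0, size=10_000)   # assumed spread of x/x_m

def p_detect(ef, c=0.5):
    """Assumed detection efficiency rising with the size of log10(x/x_m)."""
    return 1.0 - np.exp(-c * np.abs(np.log10(ef)))

survives = rng.random(error_factor.size) < (1.0 - p_detect(error_factor))
print("errors remaining after one check: %.0f%%" % (100.0 * survives.mean()))
```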

FIG. 8. One-Step Calculation Error Distribution and Effect of Checking: (a) Initial Error Distribution; (b) Error Distribution After Self-Checking; (c) Error Distribution After "Independent Checking" According to Eq. 6; (d) Error Distribution After "Independent Checking" According to Eq. 7


The results described here and the interpretations made should be seen as
preliminary, and indicative of some of the factors influencing design check-
ing. While it has been shown that design checking processes can be modeled
from appropriate survey data, it must also be recognized that the models are
rather idealistic representations of data having a lot of scatter. Thus, although
in principle, these models can be used to develop effective design-checking
procedures, it is clear that the present work is only a first step in this di-
rection.
Finally, it is readily acknowledged that the methods and techniques em-
ployed may be criticized on a number of grounds (Stewart and Melchers
1985, 1986, 1987). However, the deficiencies in technique reflect the new-
ness of this research area. The only work comparable to the present inves-
tigations is "human factors engineering" or "ergonomics." These deal mainly
with man-machine interfaces and hence mainly with psychomotor tasks. The
tasks involved in design checking are mainly of a cognitive nature.

CONCLUSION

Survey data and mathematical models for three design-checking processes have been described. The processes appropriate to design checking are self-checking, independent detailed design checking, and overview checking. It
was shown that relatively simple models may be developed for components
of these design-checking processes. The newness of the research area and
methodological difficulties were noted. It was noted also that the results are
indicative and not always in accord with conventional wisdom.

APPENDIX I. REFERENCES

Blockley, D. I. (1977). "Analysis of structural failures." Institution of Civil Engineers, Part 1, London, England, 62, 51-74.
Ellingwood, B. (1987). "Design and construction error effects on structural reli-
ability." J. Struct. Engrg., ASCE, 113(2), 409-422.
Estes, W. K. (1959). "The statistical approach to learning theory." In Psychology:
A Study of Science. S. Koch, ed., McGraw-Hill, New York, N.Y., 380-491.
Harlow, H. F. (1959). "Learning set and error factor theory." In Psychology: A Study
of Science. S. Koch, ed., McGraw-Hill, New York, N.Y., 492-537.
Hull, C. L. (1952). A behavior system. Yale Univ. Press, New Haven, Conn.
Grill, L. (1984). "Present trends and relevant applications to increase reliability of
structures." Proc, Seminar on Quality Assurance, Codes, Safety and Risk in Struct.
Engrg. and Geomechanics, Monash Univ., 85-90.
Ingles, O. G. (1986). "Where should we look for error control?" Modeling Human
Error in Structural Design and Construction, A. Nowak, ed., ASCE, New York,
N.Y., 13-21.
Ingles, O., and Nawar, G. (1983). "Evaluation of engineering practice in Australia."
IABSE Workshop on Quality Assurance within Building Process, Rigi, 47, 111-
116.
Kupfer, H., and Rackwitz, R. (1980). "Models for human error and control in structural reliability." IABSE Anniv. Congress, Vienna, Austria, 1019-1024.
Lind, N. C. (1983). "Models of human error in structural reliability." Struct. Safety,
1, 167-175.
Matousek, M., and Schneider, J. (1976). "Untersuchungen zur Struktur des Sicherheitsproblems bei Bauwerken." Institut für Baustatik und Konstruktion, ETH Zürich, Bericht No. 59, Birkhäuser Verlag (in German). See also Hauser, R. (1979). "Lessons from European failures." Concr. Int., 21-25.



Melchers, R. E. (1989). "Human error in structural design tasks." J. Struct. Engrg., ASCE, 115(7), to appear.
Melchers, R. E., and Harrington, M. V. (1984). "Human error in structural reli-
ability—I. Investigations of typical design tasks." Res. Rept. 2/1984, Dept. of
Civ. Engrg., Monash Univ., Melbourne, Australia.
Melchers, R. E., and Stewart, M. G. (1985). "Data-based models for human error
in design." Fourth Int. Conf. on Struct. Safety and Reliability, Kobe, Japan, II,
51-60.
Norman, D. A. (1981). "Categorization of action slips." Psychological Review, 88(1),
1-15.
Rabbitt, P. (1978). "Detection of errors by skilled typists." Ergonomics, 21(11),
945-958.
Siegel, S. (1956). Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, New York, N.Y.
Sneath, N. (1979). "Discussion paper on liability and indemnity under conditions of
finite risk." Third Int. Conf. on Statistics and Probability in Soil and Struct Engrg.,
Sydney, Australia, 419-422.
Standards Association of Australia, (n.d.). SAA Steel Structures Code, AS 1250,
Sydney, Australia.
Standards Association of Australia, (n.d.). SAA Concrete Structures Code, AS 1480,
Sydney, Australia.
Stewart, M. G. (1987). "Control of human errors in structural design." Thesis pre-
sented to the Department of Civil Engineering and Surveying, University of New-
castle, at New South Wales, Australia, in partial fulfillment of the requirements
for the degree of Doctor of Philosophy.
Stewart, M. G., and Melchers, R. E. (1985). "Human error in structural reliability—
IV: Efficiency in design checking." Res. Rept. 3/1985, Dept. of Civ. Engrg.,
Monash Univ., Melbourne, Australia.
Stewart, M. G., and Melchers, R. E. (1986). "Human error in structural reliability—
V: Efficiency in self-checking." Res. Rept. 018.12.86, Dept. of Civ. Engrg. and
Surveying, Univ. of Newcastle, Newcastle, Australia.
Stewart, M. G., and Melchers, R. E. (1987a). "Human error in structural reliabil-
ity—VI: Overview checking." Res. Rept. 019.01.87, Dept. of Civ. Engrg. and
Surveying, Univ. of Newcastle, Newcastle, Australia.
Stewart, M. G., and Melchers, R. E., (1987b). "Structural design and design check-
ing." Proc, First Nat. Struct. Engrg. Conf, Melbourne, I.E. Australia, 700-705.
Voth, R. T. (1974). "An experimental study comparing the effectiveness of three
training methods in human relations." Attitudes and Decision Making Skills, Dis-
sertation Abstracts Int. (A), University Microfilms International, Michigan, 6817-
6818.
Walker, A. C. (1980). "Study and analysis of the first 120 failure cases." Symp.,
Struct. Failures in Bldgs., Inst, of Struct. Engrs., London, U.K., 15-40.
Zakay, D., and Wooler, S. (1984). "Time pressure, training and decision effective-
ness." Ergonomics, 27(3), 273-284.

APPENDIX II. NOTATION

The following symbols are used in this paper:

A = constant;
B = constant;
c_0 = constant;
E1 = lower and upper 20th percentiles of experience;
E2 = lower and upper 50th percentiles of experience;
E3 = less than 4 years and greater than 4 years experience;
e_log = logarithmic error factor;
f(z,ν) = t distribution probability density function;
m_e = error magnitude;
p_correct = probability of judging a designed member as "correct";
p_ind = checking efficiency for independent detailed design checking;
p_oversized|safe = probability of judging a member as "oversized" given that the suggested member is "safe";
p_s = self-checking efficiency;
p_safe = probability of judging a design as "safe";
R_e = percentage resistance error;
R_CD = bending moment resistance for "correct" design;
R_SD = bending moment resistance for "suggested" design;
t = checking time;
t_0 = time to become familiar with design to be checked;
x = incorrect value;
x_m = correctly self-checked response;
x̄ = constant;
z = standard variable;
a_1 = constant;
a_2 = constant;
δ = constant;
ν = constant; and
σ = constant.
