
Automation in Construction 22 (2012) 271–276

Contents lists available at SciVerse ScienceDirect

Automation in Construction
journal homepage: www.elsevier.com/locate/autcon

Supervised vs. unsupervised learning for construction crew productivity prediction


Mustafa Oral a,⁎, Emel Laptali Oral b, Ahmet Aydın c

a Çukurova Üniversitesi, Mühendislik Mimarlık Fakültesi, Bilgisayar Mühendisliği Bölümü, Balcalı, Adana, Turkey
b Çukurova Üniversitesi, Mühendislik Mimarlık Fakültesi, İnşaat Mühendisliği Bölümü, Balcalı, Adana, Turkey
c Çukurova Üniversitesi, Mühendislik Mimarlık Fakültesi, Elektrik Elektronik Mühendisliği Bölümü, Balcalı, Adana, Turkey

article info

Article history:
Accepted 16 September 2011
Available online 24 October 2011
Keywords:
Supervised learning
Unsupervised learning
Self Organizing Maps
Construction crew
Productivity

abstract
Complex variability is a significant problem in predicting construction crew productivity. Neural Networks using supervised learning methods like Feed Forward Back Propagation (FFBP) and General Regression Neural Networks (GRNN) have proved to be more advantageous than statistical methods like multiple regression, considering factors like the modelling ease and the prediction accuracy. Neural Networks using unsupervised learning like Self Organizing Maps (SOM) have additionally been introduced as methods that overcome some of the weaknesses of supervised learning methods through their clustering ability. The objective of this article is thus to compare the performances of FFBP, GRNN and SOM in predicting construction crew productivity. Related data has been collected from 117 plastering crews through a systematized time study, and comparison of the prediction performances of the three methods showed that SOM has a superior performance in predicting plastering crew productivity.
© 2011 Elsevier B.V. All rights reserved.

1. Introduction
Realistic project scheduling is one of the vital issues for successful completion of construction projects, and this can only be achieved if schedules are based on realistic man-hour values. Yet, determination of realistic man-hour values has been a complicated issue due to the complex variability of construction labor productivity [1–9]. Thus, recent research has focused on artificial neural network applications, which provide a flexible environment to deal with such variability. These applications have been based on supervised learning methods, primarily Feed Forward Back Propagation (FFBP) [1,2,10–18], and recently General Regression Neural Networks (GRNN) [19,20]. While the strengths of these methods over multiple regression models, related to the modeling ease and the prediction accuracy, have been well discussed, the weakness of the supervised learning process, i.e. requiring the output vector to be known for training, has also been pointed out [21,22]. In parallel, Self Organizing Maps (SOM), based on unsupervised learning, have been introduced as applications which overcome the weaknesses of both the statistical methods and the neural network applications based on supervised learning [21–25]. However, few researchers, like Hwa and Miklas [26], Du et al. [27] and Mokhnache et al. [28], have used SOM for prediction purposes, and these applications were related to heavy metal removal performance, oil temperature of transformers and thermal aging of transformer oil, respectively. A recent application, alternatively, has
⁎ Corresponding author. Tel.: +90 322 338 60 84/2663 17; fax: +90 322 338 67 02.
E-mail addresses: moral@cu.edu.tr (M. Oral), eoral@cu.edu.tr (E.L. Oral), aaydin@cu.edu.tr (A. Aydın).
0926-5805/$ – see front matter © 2011 Elsevier B.V. All rights reserved.
doi:10.1016/j.autcon.2011.09.002

focused on prediction of construction crew man-hour values for concrete, reinforcement and formwork crews [29], and the prediction results have been compared with the results of previous research based on both multiple regression analysis and Feed Forward Back Propagation (FFBP) [2,30]. The objective of the current research, however, has been to use a specific sample data set and compare the prediction results of the models developed by using SOM, FFBP and GRNN.
2. Data collection and nature of the data
Collecting realistic and consistent productivity related data is one of the key factors in arriving at realistic man-hour estimates by using any of the prediction methods. Various work, labor and site related factors affect construction labor/crew productivity, and these have to be observed and analyzed systematically in order to arrive at realistic man-hour values; time study is a methodical process of directly observing and measuring work [31]. Thus, time studies were undertaken with 1181 construction crews in Turkey through the use of standard time study sheets between the years 2006 and 2008, and details related to concrete pouring, reinforcement and formwork crews have been presented in various publications [17,29,32]. For plastering crews, the quantity and details of the plastering work undertaken by each crew were recorded together with work (location of the site, location of the work on site, the type and the size of the material used, and the weather conditions), labor (age, education, experience, working hours, payment method, absenteeism and crew size), and site (site congestion, transport distances, and the availability of the crew, machinery, materials, equipment and site management) related factors for 31 crews initially. Man-hour values were then


determined by calculating the duration of undertaking 1 m² of plastering work. The required number of observations for the sample size to be representative within a targeted confidence interval (90%) was then determined by using Eq. (1) [31]:

N′ = [A √(n ΣXᵢ² − (ΣXᵢ)²) / ΣXᵢ]²    (1)

where the sums run over the n pilot observations, and:

N′  required number of observations within the targeted confidence interval
A   20 for the 90% confidence level
Xᵢ  unit output of the related labor (crew) during the ith observation
n   number of observations during the pilot study
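Eq. (1) translates directly into code. The sketch below is illustrative; the pilot outputs are made-up values, not the study's data:

```python
import math

def required_observations(pilot_outputs, A=20.0):
    # Eq. (1): N' = [A * sqrt(n * sum(X_i^2) - (sum(X_i))^2) / sum(X_i)]^2
    # A = 20 corresponds to the 90% confidence level used in the study.
    n = len(pilot_outputs)
    s = sum(pilot_outputs)                   # sum of unit outputs X_i
    s2 = sum(x * x for x in pilot_outputs)   # sum of squared unit outputs
    return (A * math.sqrt(n * s2 - s * s) / s) ** 2

# Illustrative pilot observations (unit outputs per crew); the more the
# outputs vary, the more observations Eq. (1) requires.
pilot = [1.8, 2.0, 2.2, 1.9, 2.1, 2.4, 1.7, 2.0]
n_required = required_observations(pilot)
```

Note that perfectly uniform pilot outputs give N′ = 0, since the term under the square root vanishes.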

Calculations showed that a minimum of 71 crews was required for the sample size to be representative of the population within the 90% confidence level. Data was then collected from 117 crews, so the number of crews was satisfactory regarding the sample size requirement. Normality of the collected data was then tested by determining the skewness (to be between ±3) and kurtosis (to be between ±10) coefficients for the productivity values [33]. Table 1 shows that the normality assumptions were satisfied by the data set.
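The skewness and kurtosis screen described above can be reproduced with sample moments. The ±3 and ±10 bounds follow the source; the moment formulas (population form, excess kurtosis) are one common convention and are an assumption here:

```python
def skewness_kurtosis(xs):
    # Third and fourth standardized sample moments (population form).
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0   # excess kurtosis; 0 for a normal distribution
    return skew, kurt

def passes_normality_screen(xs):
    # Screening rule used in the study: |skewness| <= 3 and |kurtosis| <= 10.
    skew, kurt = skewness_kurtosis(xs)
    return abs(skew) <= 3.0 and abs(kurt) <= 10.0
```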
Data analysis and selection additionally focused on the fact that productivity models including fewer significant factors predict better than models based on many factors chosen without considering significance [2]. The collected data/information related to the work and the site factors showed that most of it was unevenly distributed; for example, weather conditions were usually very hot and not rainy due to the prolonged droughts between 2006 and 2008 in Turkey, and there were usually no problems with the availability of resources. The labor related factors of age, education and experience were therefore used as independent variables in the developed models. This decision was in good agreement with the literature findings, which recognize labor related factors as the most important factors affecting crew productivity [3,34–44].
3. Prediction methods: supervised learning vs unsupervised learning
Supervised and unsupervised learning differ theoretically in the causal structure of the learning processes. While inputs are at the beginning and outputs are at the end of the causal chain in supervised learning, observations are at the end of the causal chain in unsupervised learning. Unlike in supervised learning, the output vector is not required to be known in unsupervised learning; i.e., the system does not use pairs consisting of an input and the desired output for training, but instead uses the input and the output patterns and locates remarkable patterns, regularities or clusters among them. Thus, the learning task is often easier with unsupervised learning if the causal relationship between the input and the output observations has a complex variability, there is a deep hierarchy in the model, or some of the input data are missing [45].

3.1. Feed-Forward Back Propagation (FFBP)

A Feed Forward Back Propagation neural network (FFBP) is the generalization of the Widrow–Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions [46]. A FFBP usually consists of three layers: input, hidden and output. The input layer contains as many neurons as the number of parameters affecting the problem. One hidden layer is usually sufficient for nearly all problems; even though multiple hidden layers rarely improve a data model, they may be required for modeling data with discontinuities. The number of neurons in a hidden layer is selected by trial and error, as there is no generally accepted rule. A neuron has a Tan-sigmoid (TSig) activation function in a hidden layer and a linear activation function in the output layer. Thus, the hidden layer squeezes the output to a narrow range, from which the output layer with the linear function can predict all values [46–48]. Fig. 1 illustrates the architecture of a FFBP Neural Network with two hidden layers.
A FFBP network is initialized with random weights and biases, and is then trained with a set of input vectors that sufficiently represents the input space. For the current problem, the input vectors have three components: experience of the crew on the particular site, crew size, and age of the crew members. The target vector, on the other hand, has only one component: the crew productivity.
Training is achieved in two steps. In the first step, a randomly selected input vector from the training data set is fed into the input layer, and the output from the activated neurons is propagated forward from the hidden layer(s) to the output layer. The back propagation step, on the other hand, starts with calculating the error gradient and propagates it backwards to each neuron in the output layer, then the hidden layer. At the end of the second step, the weights and the biases are recomputed. These two steps alternate until the network's overall error is less than a predefined rate, or until the maximum number of epochs is reached.
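The two-step loop above can be illustrated with a minimal single-hidden-layer sketch (tanh hidden units, linear output, plain stochastic gradient descent). The toy data and all parameter values are illustrative, not the study's:

```python
import math
import random

random.seed(1)

def forward(x, w1, b1, w2, b2):
    # Step 1: propagate the input through tanh hidden units to a linear output.
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    y = sum(w * hi for w, hi in zip(w2, h)) + b2
    return h, y

def train(data, n_hidden=7, lr=0.05, epochs=5000):
    n_in = len(data[0][0])
    # random initial weights and biases
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b2 = 0.0
    for _ in range(epochs):
        x, target = random.choice(data)   # randomly selected training vector
        h, y = forward(x, w1, b1, w2, b2)
        err = y - target                  # step 2: back-propagate the error
        for j in range(n_hidden):
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)   # derivative of tanh
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * grad_h
            for i in range(n_in):
                w1[j][i] -= lr * grad_h * x[i]
        b2 -= lr * err
    return w1, b1, w2, b2

# Toy normalized data: (experience, crew size, age) -> productivity.
data = [([0.1, 0.2, 0.3], 0.6), ([0.4, 0.1, 0.2], 0.7),
        ([0.2, 0.5, 0.1], 0.8), ([0.3, 0.3, 0.3], 0.9)]
params = train(data)
```

In practice a stopping criterion on the overall error would replace the fixed epoch count, as the section describes.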

3.2. Generalized Regression Neural Networks (GRNN)

GRNN networks have four layers: input, hidden, pattern (summation) and decision (Fig. 2). As in FFBP, the number of neurons in the input layer is equal to the total number of parameters. The input neurons standardize the input values and feed the standardized values to the neurons in the hidden layer. The hidden layer has one corresponding neuron for each case in the training data set; for the current problem, there are 104 neurons in the hidden layer of the GRNN, since data from 104 crews are used for training purposes, as discussed in Section 4. Each hidden neuron stores the values of the predictor variables together with the target value, computes the Euclidean distance of the test case from the neuron's center point, and then applies the RBF kernel function using the sigma value(s). The sigma value determines

Table 1
Distribution characteristics of productivity values for plastering crews.

Plastering (mh/m²)
Mean productivity          0.5
Standard deviation         0.23
Coefficient of variation   0.47
Skewness coefficient       1.08
Kurtosis coefficient       2.53

Fig. 1. The architecture of a FFBP Neural Network with two hidden layers (input layer: experience, crew size, age; output layer: productivity).

Fig. 2. Schematic diagram of GRNN (input layer: experience, crew size, age; hidden layer; pattern layer; decision layer: productivity).

the spread of the RBF kernel function. The output value of a hidden neuron is passed to the two neurons in the pattern layer, where one neuron is the denominator summation unit and the other is the numerator summation unit. The denominator summation unit adds up the weight values coming from each of the hidden neurons, and the numerator summation unit adds up the weight values multiplied by the actual target value for each hidden neuron. The decision layer then divides the value accumulated in the numerator summation unit by the value in the denominator summation unit and uses the result as the predicted target value [49,50]. The prediction performance of a GRNN strongly relies on the sigma value; it is the only parameter that needs to be tuned by the user.
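The pass through the pattern and decision layers amounts to a kernel-weighted average of the stored targets. The following sketch follows the formulation summarized above; the training cases and the sigma value are illustrative:

```python
import math

def grnn_predict(x, training, sigma):
    # Each training case acts as one hidden neuron holding its target value.
    num = 0.0   # numerator summation unit
    den = 0.0   # denominator summation unit
    for xi, target in training:
        d2 = sum((a - b) ** 2 for a, b in zip(x, xi))  # squared Euclidean distance
        w = math.exp(-d2 / (2.0 * sigma ** 2))         # RBF kernel weight
        num += w * target
        den += w
    return num / den   # decision layer: numerator / denominator

# Illustrative normalized cases: (experience, crew size, age) -> productivity.
training = [([0.1, 0.2, 0.3], 0.5),
            ([0.8, 0.7, 0.9], 1.2),
            ([0.4, 0.5, 0.4], 0.8)]
prediction = grnn_predict([0.75, 0.7, 0.85], training, sigma=0.3)
```

A grid search over sigma, as in Section 4 (0 to 5 in 0.01 steps), would simply loop over candidate values and keep the one minimizing a validation error.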
3.3. Self Organizing Maps (SOM)
Unlike FFBP and GRNN, SOM uses groups of similar instances, i.e. input/output patterns, instead of a predefined classification, i.e. pairs consisting of an input and the desired output, for training. The training data set contains only input variables, and SOM attempts to learn the structure of the data in order to arrive at solutions. It organizes and clusters unknown data into groups of similar patterns according to a similarity criterion (e.g. Euclidean distance), resulting in a reduction in the amount of data and an organization of the data on a low dimensional display. Thus, when the process is complete, each node in the output layer has a topological position, with similar clusters positioned close to each other [51]. Fig. 3 shows the current problem as an example, which maps the 4 dimensional input vectors onto a 6 × 6 map.
Contrary to its common usage for clustering, classification or displaying multidimensional data, SOM can be modified to make accurate predictions for problems whose data model is unknown. A typical SOM has an input vector with as many dimensions as the number of independent parameters, and no target vector. By feeding the experience of the crew on the particular site, the crew size, the age of the crew members and the productivity as the input vector, the relationships between these attributes can be displayed in two dimensional maps produced by SOM. The maps are, in fact, visual representations of the weight values that connect input and output nodes; the weight values originating from the crew size input node, for example, form the crew size map. As seen in Fig. 3, the output nodes do not produce any output; any output has to be derived from the weights. During the training phase of SOM, similar input instances are approximated and grouped together topologically. While some of the output nodes correspond to the input vectors of the data set, the others are estimates of the possible vector instances that lie between these input vectors. This feature allows SOM to predict the outcome of instances that are not in the training data set. For the prediction of crew productivity, FFBP and GRNN require input vectors that have
three components (experience, crew size and age) and target vectors that have only one component (productivity). SOM, on the other hand, requires only input vectors; the input vectors of SOM have four components: experience, crew size, age, and productivity. The weights of the output nodes are calculated according to these inputs.

Fig. 3. SOM structure (input nodes: experience, crew size, age, productivity; weighted connections to output nodes).
After the training process, prediction is achieved as follows. An input vector is fed into the input layer; since one of the input components for the current problem, the productivity value, is unknown and to be estimated, that component of the input vector is missing. Then the Best Matching Unit (BMU), which is the output node whose weight vector best matches the current input vector, is identified; the weight originating from the missing input node is not considered in the calculation of the BMU. The weight connecting the BMU to the missing component of the input vector is the normalized prediction value. Thus, after de-normalization, the prediction process is completed.
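A minimal sketch of this prediction scheme follows. The toy data, map size, rates and radius schedule are illustrative (and far smaller than the 50 × 50 map used in Section 4):

```python
import math
import random

random.seed(0)

def train_som(data, rows=6, cols=6, epochs=3000, lr0=0.1, radius0=3.0):
    dim = len(data[0])
    # random initial weight vectors for the output nodes
    nodes = [[random.random() for _ in range(dim)] for _ in range(rows * cols)]
    pos = [(k // cols, k % cols) for k in range(rows * cols)]
    for t in range(epochs):
        frac = t / float(epochs)
        lr = lr0 * (1.0 - frac)                 # decaying learning rate
        radius = radius0 * (1.0 - frac) + 0.5   # shrinking neighborhood
        x = random.choice(data)
        bmu = min(range(len(nodes)),
                  key=lambda k: sum((a - b) ** 2 for a, b in zip(nodes[k], x)))
        br, bc = pos[bmu]
        for k, w in enumerate(nodes):
            r, c = pos[k]
            h = math.exp(-((r - br) ** 2 + (c - bc) ** 2) / (2.0 * radius ** 2))
            for j in range(dim):
                w[j] += lr * h * (x[j] - w[j])   # pull node towards the input
    return nodes

def som_predict(nodes, vector, missing):
    # The BMU search ignores the missing component; the BMU's weight for
    # that component is the (normalized) prediction.
    known = [j for j in range(len(vector)) if j != missing]
    bmu = min(nodes, key=lambda w: sum((w[j] - vector[j]) ** 2 for j in known))
    return bmu[missing]

# Toy 4-d data: (experience, crew size, age, productivity), with productivity
# synthesized here as the mean of the first three components.
data = [[e, c, a, (e + c + a) / 3.0]
        for e in (0.1, 0.5, 0.9) for c in (0.1, 0.5, 0.9) for a in (0.1, 0.5, 0.9)]
som = train_som(data)
pred = som_predict(som, [0.5, 0.5, 0.5, 0.0], missing=3)
```

With normalized data, `pred` would still need de-normalizing back to man-hour units, as the section notes.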
4. Prediction performance of the models
FFBP, GRNN and SOM neural networks were configured and tuned with different network structures and parameters in order to achieve the best prediction performances. The input data set contains three independent parameters (crew size, experience of the crew on the particular site and age of the crew members) and one dependent variable (crew productivity) for 117 crews.
Nine fold cross validation was carried out for each configuration in order to compare the prediction results obtained from the three methods. The input data set was divided into nine sub-groups containing 13 crews each. Eight sub-groups were used to train the models and the remaining sub-group was used as test data; the cross-validation process was repeated nine times by using each of the nine sub-groups as test data. Three standard error measures, Mean Squared Error (MSE), Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE), were then calculated for each data set by using Eqs. (2)–(4). MAPE, MAE and MSE values for the data sets are given for FFBP, GRNN and SOM in Table 3. The MAPE values together with the Coefficient of Variation (CV) were considered to be the determinants in arriving at conclusions about the accuracy of each model, as both MSE and MAE have the disadvantage of heavily weighting outliers, with MSE accentuating large errors. CV values


Table 2
Validation results for FFBP for different transfer functions.

FFBP configuration   TSig     PLin     PLin-PLin   PLin-TSig   TSig-PLin   TSig-TSig
Nodes                7        9        7-6         10-9        6-6         6-5
MSE                  0.07     0.0614   0.0593      0.0699      0.0699      0.0672
MAE                  0.2072   0.1896   0.1882      0.1988      0.201       0.2027
MAPE                 49.99    46.83    45.97 a     46.61       48.31       47.37

a Best result.

(Eq. (5)), on the other hand, presented the measure of dispersion, meaning that the smaller the CV value, the more reliable the model was in terms of stability in predicting values.

Fig. 4. Initial neighborhood radius vs MAPE values.

MSE = (1/n) Σᵢ (Aᵢ − Pᵢ)²    (2)

MAE = (1/n) Σᵢ |Aᵢ − Pᵢ|    (3)

MAPE = (1/n) Σᵢ |(Aᵢ − Pᵢ)/Aᵢ| × 100%    (4)

CV = (σ/μ) × 100%    (5)

where the sums run over i = 1, …, n, and:

n   number of data sets used for estimation
Aᵢ  actual value of the ith element of the data set
Pᵢ  predicted value of the ith element of the data set
σ   standard deviation
μ   mean

Two different network structures were tested for FFBP. In the first structure, the FFBP had a single hidden layer with the Tan-sigmoid (TSig) transfer function. The number of neurons in the hidden layer was changed from five to ten, and the best prediction performance was obtained for the network structure with seven neurons in the hidden layer, with 49.99, 0.07 and 0.2072 for MAPE, MSE and MAE, respectively (Table 2). The transfer function was then changed to Purelinear (PLin), and better prediction accuracy was obtained for nine neurons in the hidden layer, with 46.83, 0.0614 and 0.1896 for MAPE, MSE and MAE, respectively. Finally, the number of hidden layers was increased to two, and both the number of neurons in the hidden layers and the transfer functions were changed as in the first structure. The best result, 45.97, 0.0593 and 0.1882 for MAPE, MSE and MAE, respectively, was obtained when the transfer functions in both layers were set to Purelinear, with seven neurons in the first layer and six in the second (Table 2).
The GRNN algorithm, on the other hand, needs only one parameter to be tuned: the spread of the radial basis function (σ). The σ parameter was changed from 0 to 5 in 0.01 steps in order to seek the most suitable value for the best prediction performance. The best result was obtained when σ was 0.3; the MAPE, MAE and MSE values were 45.87, 0.117 and 0.023, respectively (Table 3).
SOM, however, requires many parameters to be tuned: output map size, initial learning rate, initial neighborhood radius, even the seed of the random generator. Instead of using a systematic parameter search as in FFBP and GRNN, twelve randomly chosen configurations were tested for SOM. The best results were obtained with an output map size of 50 by 50 and an initial learning rate of 0.1. It was observed that the initial neighborhood radius has a significant effect on the prediction accuracy, so further experiments were carried out by changing the initial neighborhood radius from 70 to 300 with a step size of 1. Fig. 4 shows the effect of the initial neighborhood radius on the prediction accuracy. The best result was obtained when the radius was set to 237; the MAPE, MAE and MSE values obtained with these parameters were 41.27, 0.193 and 0.069, respectively (Table 3).
The results in Table 3 show that the prediction accuracy of SOM is superior to that of GRNN and FFBP with respect to MAPE values. The Best and Worst rows in Table 3, which display the best and the worst

Table 3
Validation results.

          GRNN                       FFBP (PLin-PLin)            SOM
          MSE     MAE     MAPE       MSE     MAE      MAPE       MSE     MAE     MAPE
Fold 1    0.022   0.118   47.85      0.066   0.175    36.87      0.068   0.199   42.93
Fold 2    0.013   0.100   31.15      0.031   0.155    31.02      0.028   0.123   21.25
Fold 3    0.019   0.112   82.12      0.053   0.199    85.50      0.040   0.168   66.12
Fold 4    0.020   0.120   51.07      0.057   0.204    59.52      0.070   0.212   49.96
Fold 5    0.055   0.153   33.59      0.129   0.234    31.63      0.183   0.298   39.52
Fold 6    0.036   0.166   43.88      0.089   0.264    44.71      0.120   0.278   40.59
Fold 7    0.019   0.116   51.74      0.043   0.170    47.50      0.046   0.158   43.50
Fold 8    0.011   0.084   39.91      0.028   0.134    39.74      0.021   0.120   29.30
Fold 9    0.012   0.080   31.56      0.040   0.158    37.29      0.051   0.182   38.23
Best      0.011   0.080   31.15      0.028   0.134    31.02      0.021   0.120   21.25
Worst     0.055   0.166   82.12      0.129   0.264    85.50      0.183   0.298   66.12
Mean      0.023   0.117   45.87      0.059   0.1882   45.97      0.07    0.193   41.27 a
SD        0.014   0.029   15.78      0.032   0.0412   17.23      0.051   0.062   12.53
CV (%)    61      25      34         54      22       38         74      32      30

a Best result.


Table 4
Sensitivity analysis.

(a) Sensitivity analysis 1 (independent variables: age/crew size)
Configuration  TSig     PLin     PLin-PLin  PLin-TSig  TSig-PLin  TSig-TSig  GRNN     SOM
Nodes          9        5        5-7        8-5        9-5        10-8       –        –
MSE            0.0599   0.0861   0.0607     0.0791     0.0636     0.0725     0.0586   0.027
MAE            0.1883   0.2131   0.1881     0.2073     0.1955     0.2057     0.1879   0.120
MAPE           47.09    48.89    45.55      45.91      44.86      46.98      46.50    41.41 a

(b) Sensitivity analysis 2 (independent variables: age/experience)
Configuration  TSig     PLin     PLin-PLin  PLin-TSig  TSig-PLin  TSig-TSig  GRNN     SOM
Nodes          9        6        5-8        5-9        5-5        6-5        –        –
MSE            0.0597   0.0668   0.0626     0.0605     0.0658     0.0592     0.0588   0.0761
MAE            0.1892   0.1954   0.1918     0.1865     0.1929     0.1872     0.1857   0.2085
MAPE           46.43    47.78    45.08      44.63 a    45.82      44.50      45.67    45.82

(c) Sensitivity analysis 3 (independent variables: experience/crew size)
Configuration  TSig     PLin     PLin-PLin  PLin-TSig  TSig-PLin  TSig-TSig  GRNN     SOM
Nodes          9        6        6-8        9-7        5-6        10-5       –        –
MSE            0.0619   0.0652   0.0585     0.0619     0.0661     0.0652     0.0587   0.0660
MAE            0.1909   0.1941   0.1836     0.1867     0.1968     0.1923     0.1871   0.043
MAPE           45.21    46.55    45.26      43.56      46.97      45.46      46.06    40.69 a

(d) Sensitivity analysis 4 (independent variable: age)
Configuration  TSig     PLin     PLin-PLin  PLin-TSig  TSig-PLin  TSig-TSig  GRNN     SOM
Nodes          5        6        8-8        7-10       8-10       9-8        –        –
MSE            0.0637   0.0719   0.0608     0.058      0.0828     0.0745     0.0586   0.0651
MAE            0.1923   0.2027   0.1831     0.1833     0.2115     0.1953     0.1874   0.1860
MAPE           46.93    49.16    42.79 a    44.52      47.79      43.20      46.40    43.96

(e) Sensitivity analysis 5 (independent variable: experience)
Configuration  TSig     PLin     PLin-PLin  PLin-TSig  TSig-PLin  TSig-TSig  GRNN     SOM
Nodes          5        6        8-8        7-8        7-6        9-9        –        –
MSE            0.0574   0.0675   0.0613     0.0681     0.0654     0.0622     0.0585   0.0645
MAE            0.1848   0.1946   0.1856     0.1934     0.1954     0.1888     0.187    0.1939
MAPE           45.44    46.57    43.68 a    43.94      44.61      44.40      46.02    44.56

(f) Sensitivity analysis 6 (independent variable: crew size)
Configuration  TSig     PLin     PLin-PLin  PLin-TSig  TSig-PLin  TSig-TSig  GRNN     SOM
Nodes          10       6        8-8        9-7        10-5       6-9        –        –
MSE            0.0618   0.0613   0.0605     0.0602     0.0614     0.0616     0.0585   0.0581
MAE            0.1892   0.1881   0.1825     0.1863     0.1872     0.1892     0.1866   0.1858
MAPE           45.67    44.82    42.86      44.71      44.83      43.51      45.70    40.54 a

a Best result.

prediction performances amongst the nine folds, together with the CV values, additionally show that SOM provides the most stable model for predicting plastering crew productivity when the crew size, the experience of the crew on the particular site and the age of the crew members are known. SOM displays an excellent performance for Fold 2, with 21.25, 0.123 and 0.028 for MAPE, MAE and MSE, respectively; Fold 2 also gives the best results for both GRNN and FFBP. Meanwhile, the worst results, with very high MAPE values, are obtained for Fold 3 for all of the models, hinting at outliers in the training data set used for that fold.

When the results are compared by considering the MAPE values, SOM's prediction accuracy is better than both FFBP and GRNN for three cases of independent variables (experience/crew size, age/crew size and crew size), while the prediction results for the independent variable combinations of age/experience, age, and experience are better for FFBP. When the overall performances of the prediction models are considered, the prediction accuracy of SOM is superior for 4 out of the 7 combinations, with an average MAPE value of 42.89. Table 5 additionally summarizes the sensitivity analysis of SOM's performance in itself; the results show that the best prediction values were obtained when the data sets included crew size data.

4.1. Sensitivity analysis

Sensitivity analyses were additionally undertaken in order to observe how the prediction performances changed by ignoring one or more of the independent variables. Table 4 (a)–(f) shows the results.

5. Conclusions

Artificial neural network models based on supervised learning have proved to be successful in predicting construction crew productivity

Table 5
Sensitivity results of SOM.

Independent variable        MSE      SD       CV (%)   MAE      SD       CV (%)   MAPE     SD      CV (%)
Age/experience/crew size    0.07     0.051    74       0.193    0.062    32       41.27    12.53   30.36
Age/experience              0.0761   0.0437   57.42    0.2085   0.0551   26.43    45.82    15.70   34.26
Age/crew size               0.0680   0.0400   58.82    0.1889   0.0443   23.45    43.41    19.21   44.25
Experience/crew size        0.0660   0.0304   46.06    0.1926   0.043    22.33    40.69    12.49   30.70
Age                         0.0651   0.0501   76.96    0.1860   0.0772   41.51    43.96    10.93   24.86
Experience                  0.0645   0.0345   53.49    0.1939   0.0614   31.67    44.56    9.58    21.50
Crew size                   0.0581   0.0320   55.08    0.1858   0.0518   27.88    40.54 a  11.77   29.03

a Best result.


significantly better than statistical methods like regression. Meanwhile, applications in various areas other than construction have shown that, if the causal relation between input and output has a complex variability, the learning task is often easier with unsupervised learning. Thus, the current research focused on the application of both supervised and unsupervised learning based methods to plastering crew productivity data in order to compare the prediction results. The results show that SOM has a superior performance to FFBP and GRNN for plastering crew productivity prediction. SOM's performance has also been tested for concrete, reinforcement and formwork crews, and superior prediction performance in comparison to the previous models has been reported [29]. However, future work is required in order to compare those results with the performance of FFBP and GRNN for concrete, reinforcement and formwork crews. The positive effect of crew size related data on the model's performance is an additional point which can guide future data collection and which can be investigated in future applications. Finally, it can be concluded that the current research proved SOM to be an alternative to supervised learning based tools, and SOM can be used in various prediction applications.

Acknowledgment

Data used in this paper has been collected during the research project 106M055, which was supported by TÜBİTAK (The Scientific and Technical Research Council of Turkey). The authors would like to thank G. Mıstıkoğlu, E. Erdiş, E.M. Öcal, and O. Paydak for their invaluable support during data collection.

References

[1] M. Radosavljević, R.M. Horner, The evidence of complex variability in construction labour productivity, Construction Management and Economics 20 (1) (2002) 3–12.
[2] R. Sönmez, J.E. Rowings, Construction labor productivity modelling with neural networks, Journal of Construction Engineering and Management 124 (6) (1998) 498–504.
[3] A. Kazaz, S. Ulubeyli, A different approach to construction labour in Turkey: comparative productivity analysis, Building and Environment 39 (2004) 93–100.
[4] R. Lane, G. Goodman, Wicked Problems: Righteous Solutions, Back to the Future on Large Complex Projects, IGLC-8, Brighton, UK, 2000.
[5] S. Bertelsen, L. Koskela, Managing the Three Aspects of Production in Construction, IGLC-10, Gramado, Brazil, 2002.
[6] S. Bertelsen, Complexity: Construction in a New Perspective, IGLC-11, Blacksburg, Virginia, 2003.
[7] S. Emmit, S. Bertelsen, A. Dam, BygLOK: A Danish Experiment on Cooperation in Construction, IGLC-12, Elsinore, Denmark, 2004.
[8] L. Koskela, G.A. Howell, The Underlying Theory of Project Management is Obsolete, Project Management Institute, 2002.
[9] G.A. Howell, G. Ballard, I.D. Tommelein, L. Koskela, Discussion of "Reducing variability to improve performance as a lean construction principle", Journal of Construction Engineering and Management 130 (2) (2004) 299–300.
[10] L. Chao, M.J. Skibniewski, Estimating construction productivity: neural-network-based approach, Journal of Computing in Civil Engineering 2 (1994) 234–251.
[11] A.S. Ezeldin, L.M. Sharar, Neural networks for estimating the productivity of concreting activities, Journal of Construction Engineering and Management 132 (2006) 650–656.
[12] J. Portas, S. AbouRizk, Neural network model for estimating construction productivity, Journal of Construction Engineering and Management 123 (4) (1997) 399–410.
[13] H.Y. Ersoz, Discussion of "Neural network model for estimating construction productivity", 125 (3) (1999) 211–212.
[14] S. AbouRizk, P. Knowles, U.R. Hermann, Estimating labor production rates for industrial construction activities, Journal of Construction Engineering and Management 127 (6) (2001) 502–511.
[15] C.O. Seung, K.S. Sunil, Construction equipment productivity estimation using artificial neural network model, Construction Management and Economics 24 (2006) 1029–1044.
[16] C.M. Tam, T. Tong, S. Tse, Artificial neural networks model for predicting excavator productivity, Engineering Construction and Architectural Management (2002) 446–452, 5/6.
[17] E. Oral (Laptalı), E. Erdiş, G. Mıstıkoğlu, Kalıp işlerinde ekip profillerinin verimliliğe etkileri, 4. İnşaat Yönetimi Kongresi, İstanbul, 30–31 Ekim 2007.
[18] R. Noori, A. Khakpour, B. Omidvar, A. Farokhnia, Comparison of ANN and principal component analysis-multivariate linear regression models for predicting the river flow based on developed discrepancy ratio statistic, Expert Systems with Applications 37 (8) (2010) 5856–5862, doi:10.1016/j.eswa.2010.02.020.
[19] M. Dissanayake, A.R. Fayek, A.D. Russell, W. Pedrycz, A Hybrid Neural Network for Predicting Construction Labour Productivity, Computing in Civil Engineering, ASCE, 2005.
[20] M. Dissanayake, A.R. Fayek, Soft computing approach to construction performance prediction and diagnosis, Canadian Journal of Civil Engineering 35 (8) (2008).
[21] C.O. Seung, K.S. Sunil, Construction equipment productivity estimation using artificial neural network model, Construction Management and Economics 24 (2006).
[22] A.K. Ghosh, P. Om, Neural models for predicting trajectory performance of an artillery rocket, Journal of Aerospace Computing, Information, and Communication 2 (Feb. 2004) 112–115.
[23] D.H. Grosse, W. Timm, T.W. Nattkemper, REEFSOM, a metaphoric data display for exploratory data mining, Brains, Minds and Media Vol. 2, bmm305 (urn:nbn:de:0009-3-3051) (2006).
[24] M. Oja, S. Kaski, T. Kohonen, Bibliography of Self-Organizing Map (SOM) papers: 1998–2001 addendum, Neural Computing Surveys 3 (2002) 1–156.
[25] M. Oral, E. Genç, İskenderun Körfezi'nde yaşayan Orfoz Balığı (Epinephelus marginatus Lowe 1834)'ndaki parazitlenmenin Öz Örgütlenmeli Haritalarla yeniden değerlendirilmesi, Journal of Fisheries Sciences.com (2008) 293–300.
[26] L.B. Hwa, S. Miklas, Application of the Self-Organizing Map (SOM) to assess the heavy metal removal performance in experimental constructed wetlands, Water Research 40 (18) (2006) 3367–3374.
[27] H. Du, M. Inui, M. Ohki, M. Ohkita, Short-term prediction of oil temperature of a transformer during summer and winter by Self-Organizing Map, Applied Informatics (2002) 351.
[28] L. Mokhnache, A. Boubakeur, N. Nait Said, Comparison of Different Neural Networks Algorithms Used in the Diagnosis and Thermal Ageing Prediction of Transformer Oil, IEEE-SMC CD-ROM, paper WA2P1, Hammamet, Tunisia, 2002.
[29] E. Oral, M. Oral, Predicting construction crew productivity by using Self Organizing Maps, Automation in Construction 19 (6) (2010) 791–797.
[30] S.C. Ok, K.S. Sinha, Construction equipment productivity estimation using artificial neural network model, Construction Management and Economics 24 (2006) 1029–1044 (October).
[31] B. Kobu, Üretim Yönetimi, Avcıol Basım, İstanbul, 1999.
[32] M. Oral, E. Oral, A computer based system for documentation and monitoring of construction labour productivity, CIB 24th W78 Conference, Maribor, 2007.
[33] R.B. Kline, Principles and Practice of Structural Equation Modelling, Guilford Press, USA, 2004.
[34] O.A. Akindele, Craftsmen and Labour Productivity in the Swaziland Construction Industry, CIDB 1st Postgraduate Conference, Port Elizabeth, South Africa, 2003.
[35] J.D. Bocherding, L.F. Alaercon, Quantitative effects on construction productivity, Construction Lawyer 11 (1) (1991) 36–48.
[36] P.F. Kaming, P. Olomolaiye, G.D. Holt, F.C. Haris, Factors influencing craftsmen's productivity in Indonesia, International Journal of Project Management 15 (1) (1997) 21–30.
[37] M. Kuruoğlu, F.Ö. Bayoğlu, Yapı üretiminde adam saat değerlerinin belirlenmesi üzerine bir araştırma ve sonuçları, 16. İnşaat Mühendisliği Teknik Kongresi, Ankara, No: 65, 2001.
[38] L.S. Pheng, C.Y. Meng, Managing Productivity in Construction: JIT Operations and Measurements, Ashgate, Singapore, 1997.
[39] D.G. Proverbs, G.D. Holt, P.O. Olomolaiye, Productivity rates and construction methods for high rise concrete construction: a comparative evaluation of UK, German and French contractors, Construction Management and Economics 17 (1) (1999) 45–52.
[40] P. Olomolaiye, An evaluation of bricklayers' motivation and productivity, PhD thesis, Loughborough University of Technology, UK, 1988.
[41] M.E. Öcal, A. Tat, E. Erdiş, Analysis of Labour Productivity Rates in Public Works Unit Price Book (Bayındırlık işleri birim fiyat analizlerindeki işgücü verimliliklerinin irdelenmesi), 3rd Construction Management Congress, İzmir, 2005.
[42] S. Wang, A methodology for comparing the productivities of the RMC industries in major cities, PhD thesis, The Hong Kong Polytechnic University, Hong Kong, 1995.
[43] G. Winch, B. Carr, Benchmarking on-site productivity in France and the UK: a CALIBRE approach, Construction Management and Economics 19 (6) (2001) 577–590.
[44] M. Zakeri, P.O. Olomolaiye, G.D. Holt, F.C. Harris, A survey of constraints on Iranian construction operatives' productivity, Construction Management and Economics 14 (1996) 417–426.
[45] H. Valpola, Bayesian ensemble learning for nonlinear factor analysis, Acta Polytechnica Scandinavica, Mathematics and Computing Series No. 108, Espoo, 2000, p. 54.
[46] P.J. Werbos, The Roots of Back Propagation: From Ordered Derivatives to Neural Networks and Political Forecasting, John Wiley & Sons, Inc., New York, 1994.
[47] J. Lawrence, Introduction to Neural Networks, California Scientific Software Press, 1994.
[48] T.L. Fine, Feedforward Neural Network Methodology, Springer, 1999.
[49] http://www.dtreg.com/pnn.htm.
[50] D.F. Specht, A generalized regression neural network, IEEE Transactions on Neural Networks 2 (Nov. 1991) 568–576.
[51] T. Kohonen, Self-organized formation of topologically correct feature maps, Biological Cybernetics 43 (1982) 59–69.
