
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/320112369

3D Visualization-based Ergonomic Risk Assessment and Work Modification Framework and Its Validation for a Lifting Task

Article in Journal of Construction Engineering and Management · July 2017
DOI: 10.1061/(ASCE)CO.1943-7862.0001412


All content following this page was uploaded by Sanghyeok Han on 14 November 2017.



3D Visualization-based Ergonomic Risk Assessment and Work Modification Framework and Its Validation for a Lifting Task

Xinming LI1, SangHyeok HAN2, Mustafa GÜL3, Mohamed AL-HUSSEIN4, Marwan EL-RICH5

ABSTRACT

The construction manufacturing industry in North America has a disproportionately high number of lost-time injuries due to the high physical demand of the labour-intensive tasks it involves. It is thus essential to investigate the physical demands of body movement in the construction manufacturing workplace in order to proactively identify worker exposure to ergonomic risk. This paper proposes a methodology that utilizes 3D skeletal modeling to imitate human body movement in an actual construction manufacturing plant for ergonomic risk assessment of a workstation. The inputs for the creation of an accurate and reliable 3D model are also identified. Through 3D modeling, continuous human body motion data (such as joint coordinates and joint angles) can be obtained for risk assessment analysis using existing risk assessment algorithms. The presented framework enables risk evaluation by detecting awkward body postures and evaluating the handled force/load and frequency that cause ergonomic risks during manufacturing operations. The results of the analysis are expected to facilitate the development of modified work for the workstation, which will potentially reduce injuries and workers' compensation insurance costs in the long term for construction manufacturers. The proposed framework can also be expanded to evaluate workstations in the design phase without the need for physical imitation by human subjects.

1 Post-doctoral Fellow, Department of Civil and Environmental Engineering, University of Alberta, Edmonton, Alberta, Canada
2 Assistant Professor, Department of Building, Civil and Environmental Engineering, Concordia University, Montréal, Québec, Canada
3 Corresponding Author: mustafa.gul@ualberta.ca, Associate Professor, Department of Civil and Environmental Engineering, University of Alberta, Edmonton, Alberta, Canada
4 Professor, Department of Civil and Environmental Engineering, University of Alberta, Edmonton, Alberta, Canada
5 Associate Professor, Department of Civil and Environmental Engineering, University of Alberta, Edmonton, Alberta, Canada; Associate Professor, Department of Mechanical Engineering, Khalifa University, Abu Dhabi, United Arab Emirates

In this paper, the proposed 3D visualization-based ergonomic risk assessment methodology is validated through an optical marker-based motion capture experiment for a lifting task in order to prove the feasibility and reliability of the framework. It is also compared to the traditional manual observation method. Three subjects are selected to conduct the experiment, and three levels of comparison are completed: joint angle comparison, risk rating comparison for body segments, and REBA/RULA total risk rating and risk level comparison.

KEYWORDS

Ergonomics, 3D Visualization, Risk Assessment, REBA, RULA, Validation, Motion Capture Experiment

INTRODUCTION

Workers in the construction industry are exposed to tasks with higher physical demands—such as overexertion, repetitive motion, and awkward body posture—than those to which workers in other industries are exposed, thereby resulting in a comparatively high rate of work-related musculoskeletal disorders (WMSDs) (Wang et al. 2015; OSACH 2010). Schneider (2001) reports that construction workers are at risk of developing WMSDs, which may affect productivity and increase construction costs. These risks arise from the fact that many workers in this sector are involved in various construction activities such as cleaning, assembling, preparing the construction site, loading and unloading building material, operating power tools, and operating machinery. To some extent, ergonomic analysis prevents WMSDs and maintains or increases productivity, as it identifies ergonomic risks proactively and reduces inefficient and non-productive motions. Fatigue and lack of motivation are other ergonomics-related factors that can result in a loss of productivity. For construction manufacturers in particular, continuous and regular improvement of the factory ranges from high-level changes in operational production flow to low-level modifications of operational tasks. There are thus numerous avenues for improving the physical and psychological conditions of the workstation that can result in increased safety and productivity.

Partial ergonomic risk analysis of construction activities is implemented using existing ergonomic analysis models such as the Ovako Working posture Analysing System (OWAS) (Karhu et al. 1977), Rapid Entire Body Assessment (REBA) (Hignett and McAtamney 2000; Janowitz et al. 2006), and Rapid Upper Limb Assessment (RULA) (McAtamney and Corlett 1993). Inyang and Al-Hussein (2011) propose a comprehensive framework for evaluating and quantifying ergonomic effects on each body segment while the worker is performing construction activities. However, even though workers are greatly affected by their workplaces, these studies have not considered environmental factors as a means to assist in designing productive and safe operations. From a lean manufacturing perspective, the workplace must be organized and standardized to achieve higher productivity by reducing the number of accidents and errors that occur (Dennis 2002). The existing methods for assessing safety risk can provide a priori risk estimates and can measure the frequency of unsafe behaviors or conditions, but do not provide a way to assess the potential for accidents based on the actual execution of the operation (Mitropoulos et al. 2009). Thus, it is critical to develop a method that proactively identifies and mitigates WMSD risks. The research presented in this paper aims to develop a framework to assess ergonomic risks, even in the design phase of manufacturing plants, in order to reduce potential work-related injuries and claims.
BACKGROUND

The existing body motion data collection methods, such as direct observation, self-reporting, and physiological measurements (direct/indirect), are research-based and aligned with occupational safety and health practitioner requirements. However, they have limitations in real-life implementation in construction manufacturing (David 2005; Li and Buckle 1999). Moreover, these methods can be time-consuming and error-prone, since the required information, such as the joint angles of a body, must be collected accurately and efficiently in order to analyze the potential risks of tasks and workers. Wang et al. (2015) summarize the existing techniques for work-related musculoskeletal disorder (WMSD) risk assessment, outlining their benefits and limitations. To be more specific, although direct observation is conducted with minimal disruption to the work and minimal instrumentation, the results are based on subjective evaluation, and inter-rater differences exist between different observers. The data collection for physical demand and ergonomic posture assessment currently depends on this direct observation. Although direct physiological measurement (such as the use of goniometers, force sensors, accelerometers, electromyography, and optical markers) offers a high level of accuracy and provides information that is more objective, it may be limited by experimental cost, environment, and technical and ethical issues for both non-invasive and invasive approaches. The risk assessment tools, including REBA, RULA, and 3DSSPP, require detailed physical data, such as joint angles, in order to complete human body motion analysis. These risk assessment tools have not been fully implemented in construction cases due to the limitations with respect to physiological measurement. Joint angle and body posture can be obtained not only by direct measurement, but also by indirect measurement (e.g., Kinect range camera, computer vision-based approach) (Alwasel et al. 2011; Ray and Teizer 2012; Han and Lee 2013; Seo et al. 2015a). Seo et al. (2015a) review the current limitations of implementing computer vision techniques, and suggest potential areas for future application. Other studies (Li and Lee 2011; Han and Lee 2013) use a video-based computer visualization approach to automatically assess the captured motion. However, these measurements have limitations in real-life implementation in construction manufacturing, such as illumination and obstacles in the capturing direction, and the time-consuming nature of data post-processing (David 2005; Li and Buckle 1999). A few studies integrate these physiological measurements with biomechanical analysis tools (Seo et al. 2015b; Han et al. 2013; Seo et al. 2013); however, these approaches interfere with the work to some extent, and require a human subject to physically imitate the task prior to the analysis. The limitations of current methods are summarized in Table 1.

Alternatively, human subjects are commonly utilized to simulate tasks in a laboratory setting, where direct and indirect measurements are used to capture the motion and to record physiological data for further ergonomic analysis. It should be noted that laboratory-based simulation can represent tasks with a reasonable level of detail and accuracy (comparing alternative methods), and can thereby facilitate effective ergonomic risk assessment (providing adequate information as the input to these risk assessment tools). However, a laboratory setting has space limitations compared to the field, and thus can only accommodate the simulation of elemental tasks. In addition, experimentation in this setting is usually subject to ethical, technical, and cost issues, and it also requires time-consuming data post-processing and a large number of subjects to imitate the motions and activities being evaluated (Spielholz et al. 2001). The need is thus increasing for a new method which can overcome these difficulties and eliminate work interruption on the job in order to automatically identify ergonomic risk. Wang (2015) points out that there remains an opportunity for researchers to investigate cost-effective methods for risk assessment and mimicking of human behavior in a laboratory setting.

Among the various approaches for human body motion data collection, 3D visualization allows its users to imitate and simulate an operational task on the computer screen through careful editing and drawing, which is less time-consuming, and helps to prevent human error, technical issues, and the need for costly on-site devices. It can also proactively visualize a proposed design prior to implementation in the real world. Many researchers (Al-Hussein et al. 2005; Manrique et al. 2007; Staub-French et al. 2008; Han et al. 2014) have demonstrated that 3D visualization is an effective tool for various purposes such as identifying operations, space conflicts, site layout, and construction sequences of heavy industrial facilities. Dynamic 3D visualization has been proven to benefit construction in both activity and operation planning, in monitoring and controlling aspects with proper machinery, and in resource and material arrangement (Kamat et al. 2011). Feyen et al. (2000) develop a 3D static strength prediction program using the AutoCAD interface as a proactive biomechanical risk analysis tool based on postural and dynamic load analysis functionalities and methods for preventing injury risks at the earliest design stages. However, previous studies have used only some aspects of 3D visualization functionalities for their respective objectives. These studies do not provide detailed or complete descriptions of a project, such as the worker's motions and repetitions in regular construction operations or material handling. In order to overcome this limitation, Golabchi et al. (2015) propose a framework for an automated biomechanical simulation approach to ergonomic job analysis for workplace design using 3D visualization. Their research extends the range of applicability of 3D visualization as a support tool in order to collect all engineering information for ergonomic posture analysis in a production line, and as an educational and training tool for junior workers. In addition, most of these studies analyze broken-down static postures for only a few of the discrete postures assumed during the task, even though in reality the movement is continuous. Thus, there is also strong potential for the application of ergonomic risk assessment to continuous movements.

This paper thus introduces a method which enables automated ergonomic risk assessment of continuous movements with graphical 3D visualization-based modeling as a support tool. The study investigates the feasibility and reliability of accurate operational task imitation for the purpose of analyzing the ergonomic risks associated with existing modular construction operations or with proposed operations under design. The output can help construction planners to proactively eliminate or mitigate potential risks. Ultimately, the goal of this research is to secure a healthy and safe working environment for workers and to reduce insurance claims and injuries. A lifting task is also selected as a pilot study to validate the proposed framework. The results from the 3D modeling are compared with the optical marker-based experimental data and traditional manual observation data. It is shown that 3D visualization is a promising alternative to traditional risk assessment methods, requiring less time on site and thereby leading to both time- and cost-savings.

3D VISUALIZATION-BASED ERGONOMIC RISK ASSESSMENT METHOD

As shown in Figure 1, the framework consists of the following phases: (1) create a 3D model to animate and imitate the operational task; (2) export body posture data from the 3D visualization and obtain the joint angle at each frame (data acquisition); (3) conduct risk assessment during the motion by using existing risk assessment tools, REBA/RULA (data post-processing); and (4) identify any motions that have a high risk rating based on the assessment in the previous step and propose modified work by repeating step 1.

155 In the proposed method, the inputs are the task maneuver, workstation design, work environment,

156 and work schedule, which can be obtained from a brief plant observation or a video record in

157 order to thoroughly understand the working process. In order to create a 3D model that imitates

158 actual operations, the researchers must become familiar with the task maneuver. To maintain the

159 reliability of the motions, the working environment should be built accurately and effectively

160 since the motions of workers are highly influenced by workplace layouts (Golabchi et al. 2015).

161 Measurements may also be needed in order to obtain precise dimensions of the machine and

162 equipment and the handled load.

3ds Max (Autodesk 2016) is chosen to create animations of human body working motions. Based on the working environment, the first step is to draft or import the plant layout, workstations, and work-related elements into the 3D model. Although establishing all 3D components from sketches in the early stages of 3D modeling is time-consuming when no existing 3D component library is available as support, 3ds Max has a massive inventory of models in an existing library to support this framework and the building of 3D models that animate and imitate the working motions in the factory. Following this, the same motion as performed by the worker in the observation or video capture is animated, including human body model creation, human body movement imitation by controlling footsteps, body posture key frame editing using biped control, and determination of the speed of movement by defining the frame rate. 3D modeling is capable of representing all types of body postures, such as bending forward, reaching above the shoulder, kneeling, squatting, or crouching, when the human body model is performing a task. It should also be noted that different workers may exhibit different working behaviors for material handling and in completing operational tasks. In this regard, another strength of this framework is that 3D visualization enables the animation and simulation of diverse methods of completing one task and comparison of the results in order to propose the optimal method. To ensure a precise and accurate human body animation for the physical data extraction, several rounds of modifications may be required. To develop an accurate animation, the 3D model inputs needed are the sizes of the human body segments (for the case where human subject physical imitation is available), the layout of the plant and workstations (the human subject's interaction with the workstation and physical measurements), the primary motions (body postures required) during the operational tasks, as well as the corresponding key time frames of these motions (based on a time study of the motion). In analyzing a workstation in the design phase, i.e., when the workstation has not yet been built, the detailed physical measurements of the workstation cannot be obtained. However, reasonable assumptions can be made, such as the average size of the workers in the plant, the primary motions involved in completing the task based on the dimensions of the workstation, and body posture magnitudes based on the workstation design and the speed of the movement. In this case, the proposed method does not require a human subject to physically imitate the task. In the biped model of 3D modeling, a total of 26 bones must be defined, including the pelvis, spine, neck, and head, as well as the left and right thigh, calf, foot, toe, clavicle, upper arm, forearm, hand, and 3 finger bones. A biped's skeletal structure setting consists of one neck link, one spine link, three leg links, one finger, three finger links, one toe, and one toe link.
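As a rough illustration, the 26-bone inventory described above can be enumerated as follows; the bone names here are hypothetical labels for counting purposes, not actual 3ds Max biped identifiers.

```python
# Hypothetical enumeration of the 26-bone biped described in the text.
# Bone names are illustrative labels, not 3ds Max biped identifiers.
AXIAL = ["pelvis", "spine", "neck", "head"]          # 4 unpaired bones
PER_SIDE = ["thigh", "calf", "foot", "toe", "clavicle",
            "upper_arm", "forearm", "hand",
            "finger_1", "finger_2", "finger_3"]      # 11 bones per side

def biped_bones():
    """Full bone list: 4 axial + 11 per side x 2 sides = 26 bones."""
    bones = list(AXIAL)
    for side in ("left", "right"):
        bones += [f"{side}_{name}" for name in PER_SIDE]
    return bones

print(len(biped_bones()))  # 26
```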

In phase two, 41 joint angles (e.g., flexion angle on the sagittal plane, axial rotation angle, bending angle on the transverse plane) are calculated programmatically from the coordinates of related bones in the human skeleton model; the force required by the body segment is also estimated. To obtain the body posture joint angles for various body postures and continuous movements, MAXScript, the built-in language in the 3ds Max software (Autodesk 2011), is used in conjunction with programming code. Referring to the joint angle calculation methodology from the "3D Static Strength Prediction Program" developed at the University of Michigan (2012), joint angles for the hand, lower arm, upper arm, clavicle, upper leg, lower leg, foot, head, neck, trunk, and pelvis are calculated and generated (as depicted in the images in Figure 2). It should be noted that left and right side, horizontal and vertical, rotation, lateral bend, and flexion are distinguished and calculated separately. In detail, the x-y plane is the horizontal plane (transverse plane), where the front of the pelvis is defined as the forward direction, positive y on the y-axis, while the line connecting the iliac crests is defined as the x-axis (perpendicular to the y-axis). Positive x is in the direction of the body segment pointing away from the pelvis. The z-axis is the vertical axis, perpendicular to the x-y plane, where upward is positive. Horizontal angles, which are within the range [0°,180°] for positive angles and [−180°,0°] for negative angles, are measured on the x-y plane (transverse plane) between the body segment and the positive x-axis. If the y value is positive, then the horizontal angle is positive, and vice versa. The vertical angles, i.e., those angles falling within the range [−90°,90°], are defined and measured on the y-z plane (sagittal plane) between the body segment and the positive y-axis. If the segment is above the transverse plane, the vertical angle is positive, and vice versa. For the data acquisition, the world coordinate system is used to define the joint angle of the pelvis, the local coordinate system is utilized to determine the joint angles of the other body segments (vertical angle and horizontal angle), and the gimbal coordinate system is used to calculate the rotation of the body segments. In addition, inverse kinematics is employed to identify the moving direction of the subject during the animation. It is also applied during the animation creation, as this function reverses the direction of the chain manipulation and facilitates the creation of the animation. By selecting the animation frame and running MAXScript to make the calculation, joint angles for body segments can be captured for each frame of the 3D animation for the continuous movement in a batch file.
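The angle conventions above can be sketched in Python (used here in place of MAXScript purely for illustration). The function below is one hedged reading of those definitions: the horizontal angle is measured in the transverse plane from the positive x-axis, signed by y, and the vertical angle is taken as the elevation of the segment above the transverse plane.

```python
import math

def segment_angles(proximal, distal):
    """Sketch of the angle conventions described in the text, for a body
    segment given as two joint coordinates (x, y, z) in the pelvis-centred
    frame: x along the iliac-crest line, y forward, z up.

    Horizontal angle: measured in the transverse (x-y) plane from the
    positive x-axis; positive when y > 0, negative when y < 0.
    Vertical angle: taken here as the elevation above the transverse
    plane, in [-90, 90], positive when the segment points upward.
    """
    dx, dy, dz = (d - p for d, p in zip(distal, proximal))
    horizontal = math.degrees(math.atan2(dy, dx))                # (-180, 180]
    vertical = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # [-90, 90]
    return horizontal, vertical

# e.g., a segment pointing straight forward and upward at 45 degrees:
# segment_angles((0, 0, 0), (0, 1, 1)) -> (90.0, 45.0)
```

Whether the paper's vertical angle is the elevation above the transverse plane or the projection onto the sagittal plane is an interpretation; the stated [−90°,90°] range and sign rule are consistent with the elevation reading used here.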

In phase three, REBA and RULA are utilized as systematic risk assessment tools for rating each motion, evaluating the body movement during the task by using the body posture, force/load, repetition, and coupling conditions (Hignett and McAtamney 2000; Middlesworth 2012; McAtamney and Corlett 1993). REBA, it should be noted, looks at the entire body working posture, while RULA emphasizes the upper limb postures for ergonomic identification and design. These two risk assessment algorithms are integrated with the framework to provide greater flexibility; the user can choose which algorithm to use depending on their purpose of analysis. A conversion of joint angles among different scenarios is also required in order to satisfy the joint angle requirements of the REBA and RULA risk assessment methods. For example, the leg is judged by the knee angle for the risk assessment, while 3D modeling separates the leg angle into upper leg horizontal and vertical angles as well as lower leg horizontal and vertical angles.
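As an illustration of this conversion, a knee angle can be recovered as the angle between the upper- and lower-leg vectors and then mapped to the published REBA leg adjustment (1 for bilateral support, 2 for unilateral, +1 for 30–60° of knee flexion, +2 beyond 60°). This is a sketch, not the authors' implementation.

```python
import math

def knee_flexion(upper_leg, lower_leg):
    """Knee flexion angle (degrees) as the angle between the upper- and
    lower-leg direction vectors -- one way to merge the model's separate
    upper/lower leg angles into the single knee angle REBA expects."""
    dot = sum(a * b for a, b in zip(upper_leg, lower_leg))
    norm = math.hypot(*upper_leg) * math.hypot(*lower_leg)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def reba_leg_score(knee_angle_deg, bilateral=True):
    """REBA leg score: 1 for bilateral weight bearing (2 for unilateral),
    plus 1 if knee flexion is 30-60 degrees, plus 2 if above 60 degrees."""
    score = 1 if bilateral else 2
    if knee_angle_deg > 60:
        score += 2
    elif knee_angle_deg >= 30:
        score += 1
    return score

print(reba_leg_score(45))  # 2: bilateral support with 45-degree knee flexion
```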

In phase four, the total risk rating is provided in the final REBA and RULA risk rating chart for a continuous working operation process. Aggregating all the body posture movements, the total rating is found to fluctuate during the movement. The final REBA and RULA risk ratings both consider the joint angle of each individual body segment, the force/load added to the body, and the frequency of the activity. Five risk severity levels are categorized in the REBA methodology, while four levels are categorized in the RULA methodology. Not all the human body motions evaluated, it should be noted, involve high ergonomic risk. However, the high-risk motions can be identified through this rating algorithm and the plotted chart. It is also recommended to check the risk rating for each body segment in order to achieve a comprehensive analysis of the simulated task. Tasks with high ergonomic risk can thus be identified, and corresponding analysis can be conducted to help with proposing modified work to reduce the risk and potential injury.
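For reference, the published score-to-level mappings behind those severity levels can be sketched as follows; this reflects the standard REBA and RULA tables, not code from the authors' framework.

```python
def reba_risk_level(total_score):
    """The five REBA risk severity levels, keyed by the total score
    (standard published REBA table)."""
    if total_score == 1:
        return "negligible"
    if total_score <= 3:
        return "low"
    if total_score <= 7:
        return "medium"
    if total_score <= 10:
        return "high"
    return "very high"

def rula_action_level(grand_score):
    """The four RULA action levels, keyed by the grand score
    (standard published RULA table)."""
    if grand_score <= 2:
        return 1  # posture acceptable
    if grand_score <= 4:
        return 2  # further investigation; change may be needed
    if grand_score <= 6:
        return 3  # investigate further; change soon
    return 4      # investigate and implement change immediately
```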

VALIDATION APPROACH

The proposed 3D ergonomic risk assessment methodology is compared with two other approaches—the optical marker-based motion-capture experiment and traditional manual observation data collection—in order to evaluate the accuracy of the motion data collected through 3D modeling. Eichelberger et al. (2016) underscore the reliability of using optical marker-based motion capture to collect physical data. A few studies adopt the marker-based technique as the standard against which to compare the accuracy of other approaches (Ceseracciu et al. 2014; Mündermann et al. 2006). The optical marker-based motion capture experiment in this validation section is likewise the basis for comparison of the other two methods. Manual observation using ergonomic assessment tools has been widely implemented in practice because of its simplicity, validity, and accessibility, as well as its lower cost and time requirements (Golabchi et al. 2016). Traditional REBA and RULA assessments are conducted through manual observation (Hignett and McAtamney 2000; Janowitz et al. 2006; McAtamney and Corlett 1993; Middlesworth 2012; Golabchi et al. 2015) and usually only analyze one or a few primary body postures during continuous movements. A few researchers also compare their proposed work with manual observation in order to verify the work (Golabchi et al. 2015; Levanon et al. 2014; Van der Molen et al. 2007). Thus, these two approaches are selected in order to compare the results from the proposed framework.

With ethics approval, a lifting task is performed by three male subjects with no history of injury at the Syncrude Centre of the Glenrose Rehabilitation Hospital (Edmonton, Alberta, Canada) (Li et al. 2017). Their heights range from 173 cm to 180 cm, their ages vary from 27 years to 31 years, and their body weights vary from 57.0 kg to 80.5 kg. The test procedure and possible risks of injury are explained, and a written consent form is obtained from each subject.

Optical marker-based motion capture is a widely used and effective method for capturing human motion. In this study, thirty-six self-adhesive reflective markers are attached to the subject's skin at specific joints, placed symmetrically in order to identify the left and right sides, and are recorded by a set of cameras to capture the kinematics of body segments while the subject performs the selected task. Li et al. (2017) explain the anatomical positions of the reflective markers while the subject performs the lifting task. The subject is surrounded by an 8-camera Eagle Digital Motion Analysis system (Motion Analysis Corp. 2016) sampling at 120 Hz to collect the 3D coordinates of the reflective markers and to ensure that no marker disappears from camera view at any time. The optical marker-based motion capture system is calibrated before conducting the experiment. Results indicate that the error of this motion capture approach is only 0.5 mm. A wand 500 mm in length is used for calibration, and the system indicates that the length of the wand is 499.98 mm on average, with a deviation of 0.49 mm among eight iterations of the calibration test. After the experiment, data post-processing is carried out in order to clear any data noise from the experiment. The same 41 joint angles mentioned above are calculated based on the coordinates of various markers, as listed in Table 2. Rather than simply using the markers on the two ends of each body segment to calculate joint angles, minor adjustments are made, depending on the reliability of individual markers during the experiment and the thickness of the body and muscles, in order to achieve better joint angle results. However, the joint angles from 3D modeling are estimated based on the two joints of the body segment, a practice which contributes to a slight difference compared with the experimental data calculation.

Manual observation is also conducted in order to compare the manual observation results with the 3D motion data collection. The observer records the data for 41 joint angles based on the video captured during the experiment. Based on the duration of each lifting motion by the subject, a half-second in this case is selected as the time interval to be observed and recorded. The lifting motion durations for the three subjects vary from 6.0 seconds to 8.5 seconds, thus 13 to 18 groups of body posture data are documented as the primary motions from the video, and these are later used in the joint angle comparisons.

The lifting task contains two extensions (bending backward) and two flexions (bending forward) of the spine. The subject starts from a free standing posture, flexes forward without twisting the trunk or bending the legs, lifts a 15 lb, 18″×24″ rectangular window (which is initially positioned on a table whose center is located 20″ from the knees), holds the window close to the body while maintaining an upright standing posture, and then returns it to the table, as illustrated in Figure 3. To improve the accuracy of the 3D modeling, the sizes of the body segments in the 3D model are adjusted to match the body sizes of the subjects, which are calculated from the coordinates of the markers during the standing posture. The motions in this task are divided into five primary body postures, beginning from a standing posture, bending to reach the window, holding the window in a standing posture, bending to return the window, and ending with another standing posture, as presented in Figure 3. To match the primary motions between the 3D modeling and the experiment, a brief time study of these motions is also conducted (i.e., to determine when these primary motions occur). One hundred time frames per second is the frame rate selected for the 3D model. Subject 3 is utilized as an example in Figure 3, where the primary motions occur during time frames 0, 150, 300, 500, and 600.
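At the selected rate of 100 frames per second, the time-study instants of the five primary postures map directly to model key frames; a minimal sketch follows, where the instants shown are illustrative values consistent with Subject 3's reported key frames.

```python
FPS = 100  # frame rate selected for the 3D model

def key_frames(times_s, fps=FPS):
    """Convert time-study instants (seconds) of the primary postures
    into 3D-model key-frame indices."""
    return [round(t * fps) for t in times_s]

# Illustrative instants consistent with Subject 3's key frames:
print(key_frames([0.0, 1.5, 3.0, 5.0, 6.0]))  # [0, 150, 300, 500, 600]
```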

Validation Methods

315 The validation method involves three levels of comparison: (1) joint angle comparison; (2) risk

316 rating comparison for body segments; and (3) REBA/RULA total risk rating and risk level

317 comparison. It should be noted that the manual observation method in the pilot study of this

318 paper collects data at half-second intervals (i.e., every 50 time frames), whereas the experiment

319 and 3D modeling collect data at 0.01 second intervals (i.e., each time frame). The time frame

320 interval and the number of data points in the manual observation method can vary depending on the

321 total number of time frames in the optical marker-based motion capture experiment and the primary

322 motions with their corresponding key time frames. Since the data is collected at different

323 intervals/frequencies for the three respective data collection methods, a direct, comprehensive

324 statistical comparison cannot be conducted.
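Because the three data sources are sampled at different rates, a pointwise comparison first requires mapping them onto a common frame grid. The sketch below shows one way to do this with linear interpolation onto the 3D model's 100 frames-per-second grid; the function names and sample ratings are illustrative assumptions, not data from this study.

```python
def lerp_sample(values, source_times, t):
    """Linearly interpolate the series (source_times, values) at time t (seconds)."""
    if t <= source_times[0]:
        return values[0]
    if t >= source_times[-1]:
        return values[-1]
    for i in range(1, len(source_times)):
        if t <= source_times[i]:
            t0, t1 = source_times[i - 1], source_times[i]
            v0, v1 = values[i - 1], values[i]
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def resample_to_frames(values, source_times, n_frames, frame_rate=100.0):
    """Map a sparsely sampled signal onto the 3D model's frame grid."""
    return [lerp_sample(values, source_times, f / frame_rate)
            for f in range(n_frames)]

# Example: 13 manual observations at 0.5 s intervals mapped onto 601 frames
manual_times = [0.5 * k for k in range(13)]               # 0.0 .. 6.0 s
manual_ratings = [1, 1, 2, 3, 3, 4, 4, 4, 3, 3, 2, 1, 1]  # invented ratings
aligned = resample_to_frames(manual_ratings, manual_times, 601)
```

Nearest-neighbour (step) resampling would be equally defensible for integer risk ratings; linear interpolation is used here only to keep the sketch short.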

325 To compare the joint angles between the experiment and 3D modeling, the correlation coefficient

326 can be determined as per Eq. (1). The manual observation joint angles can be compared based on

327 the fluctuating trend in the figures. Horizontal and vertical angles for the hand, upper arm, lower

328 arm, upper leg, lower leg, foot, and trunk flexion are selected as the main angles for joint angles

329 comparison; it should also be noted that the left side and right side are compared separately.

330 With respect to risk rating comparisons for body segments and total rating, all three data

331 collection methods can use Eq. (2) and Eq. (3) based on the cumulative area under the fluctuating

332 curve and its cumulative area difference in percentage (error). A positive area difference under

333 the curve and a positive area percentage, which occur when most of the risk ratings from the

334 manual observation/3D modeling fall below those from the experiment, indicate that the manual

335 observation/3D modeling underestimates the ergonomic risk. Conversely, a negative area

336 difference under the curve and a negative area percentage, which occur when most of the risk

337 ratings from the manual observation/3D modeling exceed those from the experiment, indicate

338 that the manual observation/3D modeling overestimates the

339 ergonomic risk.

340 However, the objective of the present validation is to assess the feasibility of the motion data from

341 3D modeling compared with the experimental and manual observation data. As such, to improve

342 the accuracy of the comparison of motion data between 3D visualization and experimentation,

343 each joint angle and risk rating at each time frame is compared using the difference and error

344 equations given in Eq. (4) and Eq. (5), respectively. A positive difference and error indicate that

345 the 3D modeling results constitute an underestimate of ergonomic risk, while a negative

346 difference and error indicate that the 3D modeling results constitute an overestimate of

347 ergonomic risk.

348 $\mathrm{Correl}(\mathrm{DE}_i, \mathrm{DD}_i) = \dfrac{\sum_{i=1}^{n}(\mathrm{DE}_i - \overline{\mathrm{DE}})(\mathrm{DD}_i - \overline{\mathrm{DD}})}{\sqrt{\sum_{i=1}^{n}(\mathrm{DE}_i - \overline{\mathrm{DE}})^2 \sum_{i=1}^{n}(\mathrm{DD}_i - \overline{\mathrm{DD}})^2}}$ (1)

349 $\mathrm{Area} = \sum_{i=1}^{n} R_i \times \Delta t_i$ (2)

350 $\mathrm{Area\ Percentage} = \dfrac{\mathrm{Area}_x - \mathrm{Area}_y}{\mathrm{Area}_x} \times 100\%$ (3)

351 $\overline{\mathrm{Difference}} = \dfrac{1}{n}\sum_{i=1}^{n}(\mathrm{DE}_i - \mathrm{DD}_i)$ (4)

352 $\overline{\mathrm{Error}} = \dfrac{1}{n}\sum_{i=1}^{n}\left(\dfrac{\mathrm{DE}_i - \mathrm{DD}_i}{D} \times 100\%\right)$ (5)

353 where

354 $\mathrm{DE}_i$: joint angle collected from experiment at time i, (°)

355 $\overline{\mathrm{DE}}$: mean joint angle collected from experiment, (°)

356 $\mathrm{DD}_i$: joint angle collected from 3D modeling at time i, (°)

357 $\overline{\mathrm{DD}}$: mean joint angle collected from 3D modeling, (°)

358 Correl: correlation coefficient of the two arrays, in the range [−1, 1]

359 $\overline{\mathrm{Difference}}$: average joint angle difference between data collected from experiment and 3D

360 modeling, (°)

361 $\overline{\mathrm{Error}}$: average error of the 3D modeling joint angles compared with the experiment, (%)

362 D: the joint angle range for horizontal and vertical angles (D = 180° for horizontal angles; D =

363 90° for vertical angles, respectively)

364 Area: cumulative area under the curve

365 $R_i$: risk rating at time i

366 $\Delta t_i$: time difference between $t_i$ and $t_{i-1}$

367 Area Percentage: cumulative area difference in percentage of y compared with x through the

368 entire motion, (%)

369 $\mathrm{Area}_x$: area under the experimental result curve

370 $\mathrm{Area}_y$: area under the 3D modeling or manual observation result curve
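Eqs. (1)–(5) reduce to a few lines of code. The sketch below mirrors the paper's symbols (DE for experimental joint angles, DD for 3D modeling joint angles, D for the joint angle range); the sample angle arrays are invented for illustration.

```python
from math import sqrt

def correl(de, dd):
    """Eq. (1): correlation coefficient between experiment (de) and 3D model (dd)."""
    n = len(de)
    mde, mdd = sum(de) / n, sum(dd) / n
    num = sum((e - mde) * (d - mdd) for e, d in zip(de, dd))
    den = sqrt(sum((e - mde) ** 2 for e in de) * sum((d - mdd) ** 2 for d in dd))
    return num / den

def area(ratings, dt):
    """Eq. (2): cumulative area under a risk-rating curve with constant step dt."""
    return sum(r * dt for r in ratings)

def area_percentage(area_x, area_y):
    """Eq. (3): area difference of curve y relative to experimental curve x, in %."""
    return (area_x - area_y) / area_x * 100.0

def mean_difference(de, dd):
    """Eq. (4): average joint angle difference, experiment minus 3D model."""
    return sum(e - d for e, d in zip(de, dd)) / len(de)

def mean_error(de, dd, angle_range):
    """Eq. (5): average error normalized by the joint angle range D, in %."""
    return sum((e - d) / angle_range * 100.0 for e, d in zip(de, dd)) / len(de)

# Illustrative trunk-flexion angles (degrees) at five key frames
de = [0.0, 45.0, 10.0, 45.0, 0.0]  # experiment
dd = [0.0, 40.0, 12.0, 40.0, 0.0]  # 3D model
r = correl(de, dd)                  # close to 1 for these similar curves
err = mean_error(de, dd, 90.0)      # D = 90° for vertical angles
```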

371 Validation Results

372 In this pilot study, the total number of time frames in the 3D simulation models for the three subjects

373 varies from 600 to 850, corresponding to 6.0 to 8.5 seconds at the frame rate of 100 time

374 frames per second. The data collected from the experiment is scaled for the purpose of

375 comparison to match the 3D model time frame, as is the data collected from manual observation.

376 (1) Joint Angle Comparison

377 Using subject 3 as an example, 600 data-point groups from the experiment and 3D modeling are

378 selected for statistical comparison, while manual observation provides 13 data-point groups to

379 plot trends for comparison. As shown in Figure 4, the twenty main joint angles used in the REBA

380 and RULA algorithms are compared. In general, the joint angles from these three methods follow

381 similar trends; however, large gaps can be observed throughout the motion, especially for

382 horizontal angles. The vertical angles perform with higher consistency than do the horizontal

383 angles. Data noise, likely caused by movement of the markers and shaking of the body during the

384 experiment, is observed during experimental data analysis for the upper

385 leg and lower leg motions. Due to the limited number of data points from manual observation, more

386 detailed statistical analysis is provided in comparing data collected from the experiment and 3D

387 modeling later in this section.

388 The average correlation coefficient between the experimental and 3D extracted vertical angles is

389 0.80, with the vertical angles for hand, lower arm, upper leg, and lower leg performing at a high

390 average correlation coefficient of 0.94. Only the upper arm has a slight negative correlation. The

391 results from the other two subjects also show a higher correlation coefficient for vertical angles

392 than for horizontal angles. Consistency is observed between the left and right sides of the body

393 segments since the movement is almost symmetrical in this test. Among these twenty main joint

394 angles, fourteen angles are found to have less than ±14% error with less than 25° variation.

395 The trunk movement results show, for each of the three subjects, a high correlation coefficient

396 between the experiment and 3D modeling. For example, the main body movement of the trunk

397 for subject 3 has a high correlation coefficient of 0.96 for flexion with an average difference of

398 10.24° and an average error of 11%, as expressed in Figure 5. The manual observation results are

399 plotted in a curve, which in turn is compared in the figure to the fluctuating curves of the

400 experiment and the 3D modeling.

401 (2) REBA/RULA Risk Rating Comparison for Body Segments

402 Once the joint angle results for each body segment have been obtained, REBA/RULA risk

403 assessments are used to evaluate the risk rating for each body segment by assigning ratings to

404 defined joint angle ranges. Figure 6 plots the ratings for neck, trunk, leg, upper arm, lower arm,

405 and wrist, respectively, for REBA for subject 3 as an example. The risk ratings for each body

406 segment are integers (i.e., they use a risk rating interval of 1, as categorized by REBA/RULA);

407 thus, if the risk rating difference is less than 1, the risk judgment would not show a significant

408 difference. Accordingly, to be considered acceptable, a given risk rating on average must have a

409 difference of less than 1 and the product of the error and experimental risk rating on average

410 must be less than 1 when comparing the 3D modeling with the experiment. Moreover, the most

411 crucial rating during the motion is the peak rating, where risk has the highest chance of occurring

412 and further modified work may be necessary to proactively reduce the risk. Thus, five factors are

413 selected for comparison: average rating, maximum rating, minimum rating, rating difference, and

414 error. Table 3 takes trunk as an example and shows the comparison of these five factors among

415 the three data collection methods for all the subjects.
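The five comparison factors and the acceptance criterion stated above can be computed as follows. This is a sketch: the sample rating series are invented, and the normalization of the rating error by a fixed scale (here 5) is an assumption, since the denominator for rating errors is not spelled out.

```python
def compare_segment_ratings(exp, model, rating_scale=5.0):
    """Compute the five comparison factors for one body segment:
    average, maximum, minimum, mean difference (exp - model), and mean
    error as a fraction of an assumed rating scale."""
    n = len(exp)
    diff = sum(e - m for e, m in zip(exp, model)) / n
    error = sum((e - m) / rating_scale for e, m in zip(exp, model)) / n
    avg_exp = sum(exp) / n
    factors = {
        "avg_exp": avg_exp, "avg_model": sum(model) / n,
        "max_exp": max(exp), "max_model": max(model),
        "min_exp": min(exp), "min_model": min(model),
        "difference": diff, "error": error,
    }
    # Acceptance criterion from the text: average difference below 1 and
    # the product of error and average experimental rating below 1.
    factors["acceptable"] = abs(diff) < 1 and abs(error * avg_exp) < 1
    return factors

exp_trunk = [2, 3, 4, 3, 2]    # invented REBA trunk ratings (experiment)
model_trunk = [2, 3, 3, 3, 2]  # invented ratings from the 3D model
result = compare_segment_ratings(exp_trunk, model_trunk)
```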

416 (3) REBA/RULA Total Risk Rating and Risk Level Comparison

417 The last point of comparison between the three methods is with respect to total REBA/RULA

418 risk rating and risk level. The risk rating performance and risk level performance for subject 3

419 are plotted in Figure 7 as an example. A force rating of 1 is added to both the REBA and RULA

420 assessments due to the 15 lb load on the hands when the subject lifts the window without the

421 support of the table. The minimum coupling rating, activity rating, and muscle rating are used in

422 this study for REBA and RULA assessment.

423 In the risk ratings, the 3D ergonomic analysis generally follows the same trend for both REBA

424 and RULA, as indicated in Figure 7. Considering the results of the three subjects on average, the 3D

425 modeling method provides highly accurate estimation for REBA, with a difference of 0.38 and

426 −0.87% error on average, and for RULA, with a −0.44 difference and −13.61% error on average.

427 In terms of the cumulative area under the curve compared with the experimental results, the area

428 percentage of 3D modeling is 6.86% for REBA and −7.54% for RULA on average for all the

429 subjects. Evidently, the manual observation method underestimates the risk of the movement by

430 34% and 19% of area percentage on average for REBA and RULA, respectively, giving a rough

431 fluctuation trend. In the risk category level, 3D modeling provides accurate estimation with only

432 a 0.11 difference and 0.18% error on average for REBA, and a −0.15 difference and −9.95%

433 error on average for RULA for all the subjects. Furthermore, the experimental data and the 3D

434 extracted data provide almost the same maximum and minimum risk rating and risk level

435 judgment for both REBA and RULA for all the subjects, with only a slight

436 difference for minimum rating in RULA for subject 3. Table 4 summarizes the risk rating and

437 risk level judgments of all the subjects for RULA.

438 Compared with the experimental risk assessment, 3D modeling offers accurate estimation with an

439 average difference of less than 1 for both risk rating and risk level for all the subjects. Furthermore,

440 in both the REBA and RULA assessments, the results for 3D modeling and experiment are found

441 to be more consistent with one another for motions with maximum risk rating than for motions

442 with minimum risk rating, and the 3D modeling and experiment methods are found to yield equal

443 risk levels. Less than half of the minimum ratings and risk levels are underestimated by a

444 difference of only 1.

445 Both methods—experimental and 3D modeling—conclude that the lifting motion is approaching

446 a maximum risk level of 4 for REBA, a finding which indicates that the motion is exposed to

447 high risk and requires the action of “investigate and implement change”. Both methods also

448 show a maximum risk level of 4 for RULA, again triggering a recommendation to “investigate

449 and implement change”.
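For context, the quoted actions correspond to the published REBA and RULA action-level tables (Hignett and McAtamney 2000; McAtamney and Corlett 1993). The mapping below is a sketch of those standard tables with paraphrased action wording, not code from this study.

```python
def reba_action_level(score):
    """Map a REBA grand score (1-15) to the standard action level (0-4)."""
    if score == 1:
        return 0, "negligible risk; no action necessary"
    if score <= 3:
        return 1, "low risk; change may be needed"
    if score <= 7:
        return 2, "medium risk; investigate further, change soon"
    if score <= 10:
        return 3, "high risk; investigate and implement change soon"
    return 4, "very high risk; implement change now"

def rula_action_level(score):
    """Map a RULA grand score (1-7) to the standard action level (1-4)."""
    if score <= 2:
        return 1, "posture acceptable"
    if score <= 4:
        return 2, "investigate further; change may be needed"
    if score <= 6:
        return 3, "investigate further; change soon"
    return 4, "investigate and implement change"
```

A RULA grand score of 7, for instance, lands in action level 4, matching the "investigate and implement change" recommendation reported above.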

450 Validation and Discussion

451 There are a few reasons for the differences observed across the three levels of comparison. For

452 the joint angle comparison, certain joint angles, such as hand vertical angles, lower arm vertical

453 angles, and upper arm horizontal angles, are less accurate in the 3D model than in the experiment. This is due

454 to the limitation that the locations of the markers on the joints differ slightly from the

455 locations of the joints in the 3D model, as mentioned above. To be more specific, the hand joint

456 angle calculation in the experiment relies on the marker on the index finger and the midpoint

457 between the two wrist markers, while the hand joint angle in the 3D model is estimated based on

458 the positions of the middle fingertip and the wrist. Lower arm angle is calculated based on the

459 markers on the elbow and wrist B for the experiment, whereas in the 3D model the lower arm

460 angle is determined by the joints of the elbow and wrist. The noticeable difference in

461 performance in upper arm horizontal joint angles between the 3D modeling and the experiment is

462 due to the observation direction, which will be further discussed below. Also, the markers are

463 located on the skin of the subjects, which results in different outcomes due to variations in body

464 thickness among the subjects.

465 The horizontal angles, we infer, are less accurate in this validation due to the limited observation

466 direction (view angle of the captured video), as the subject is captured from the side view

467 during the experiment. If the video captured had also been available from the front view, for

468 instance, then whether or not the upper arm is abducted could have been identified. In this case,

469 the upper arm vertical angles, together with all the other horizontal angles, can be imitated more

470 accurately and realistically. Essentially, in this proposed risk assessment, vertical angles are

471 more sensitive to the final risk rating, as the rating varies from 1 to 4 for vertical angles but
472 has less variation (from 0 to 1) for horizontal angles. Thus, vertical angles are more critical in

473 this risk assessment method.

474 Another reason for the joint angle difference between the 3D modeling and the experiment is the

475 different speeds of the motion, including acceleration and deceleration, as indicated in Figure 5,

476 which results in a larger gap in the error values from Eq. (5) between the 3D modeling and the

477 experiment. The moving speed of the 3D model is set by default to be constant. However, in the

478 experiment, the subject determines the speed and acceleration of window lifting movement that

479 is comfortable for them. Thus, the speed varies as the subject spends more time holding the

480 window at lifted heights. In the 3D model, in contrast, the speed is not an important factor for the

481 current study though it is controllable during creation of the animation. The primary motions and

482 the peak joint angles, which are the main ratings to be identified through this method, are vital.

483 Thus, the results of the joint angle comparison indicate that the proposed 3D modeling method is

484 still valid.

485 The variations in average rating between the 3D modeling and the experiment are within 1 for all

486 subjects’ body segments, a finding which validates the 3D modeling. Due to the different

487 functions of REBA and RULA risk assessments the sensitivity of each body segment is also

488 different, producing results of varying accuracy. Throughout the entire motion, the results from

489 the REBA risk assessment show that 3D modeling provides a reasonably accurate risk rating on

490 average (with less than a 0.5 difference) for the neck, trunk, and lower arm, while the outcomes for the leg,

491 upper arm, and wrist still require improvement. The results from the RULA risk assessment

492 show that 3D modeling provides a reasonably accurate risk rating on average (with less than 0.5

493 difference) for the trunk, leg, and wrist, while the outcomes for the neck, upper arm, and lower arm still

494 require improvement. The values of error for the corresponding body segments are less than 30%

495 absolute error for REBA and less than 20% absolute error for RULA due to the differing

496 acceleration and deceleration, and the products of error and the experimental risk rating on

497 average are all less than 0.5. Comparing the peak ratings, including both the maximum and

498 minimum ratings, the three methods generally demonstrate high consistency with one another. The

499 proposed 3D modeling method shows peak rating estimations that are statistically close to those

500 of the experiment, i.e., with a difference of either 0 (as in most of the cases) or 1. Thus, results

501 from risk rating comparison for the body segments also indicate that the proposed 3D modeling

502 method is acceptable.

503 Throughout the entire motion for all the subjects, the 3D visualization-based method provides

504 more accurate estimation for the final REBA rating and a slight overestimation for the final

505 RULA rating compared to the other methods. There are two explanations for the finding that the

506 RULA algorithm overestimates risk while the REBA algorithm does not: (1) the movements of

507 neck are underestimated in this experiment due to the absence of markers on the neck; in the

508 experiment, the neck joint angle assumes that the head and neck of the subject always perform

509 vertical to the ground (i.e., the subject is always looking forward); (2) RULA focuses on the

510 upper body, which means that neck postures and minor movements of other upper body segments,

511 such as rotation/twist, will influence the final risk ratings more strongly than they do in REBA. The

512 results from both methods with regard to both REBA and RULA for all the subjects indicate that

513 the motion is exposed to high risk level, a finding which proves the consistency of the

514 framework.

515 Manual observation is found to generally underestimate the risk rating. In particular, the

516 intricacies of changes in motions such as minor bending and rotation/twist of the body segments

517 are difficult to identify in this method. Manual observation underestimates the peak risk, a

518 shortcoming which in reality may lead to claims or injuries as a result of these unforeseen risks.

519 3D modeling, on the other hand, provides high accuracy for maximum risk identification. Thus,

520 the proposed 3D ergonomic analysis is deemed to be acceptable and within the scope of work of

521 this paper, since identification of the motion with the highest risk (as opposed to identification of

522 the motion with the lowest risk) is the paramount task in this framework.

523 CONCLUSION

524 This research proposes a framework for using 3D visualization to imitate the continuous

525 operation motions and analyze the ergonomic risks of construction manufacturing activities. The

526 critical inputs for the creation of a reliable 3D model are also identified. The framework

527 eliminates the technical, ethical, and cost issues that would be at play when using other

528 ergonomic analysis methods. The use in this framework of the proposed 3D visualization model

529 to animate the task motions offers the ability to (1) obtain reasonably accurate human body

530 continuous motion data; (2) conduct data post-processing and obtain ergonomic risk assessment;

531 (3) proactively provide modified work or conduct ergonomic checking in the design phase of the

532 workstation, such that the actual work can be carried out within safe working conditions; and (4)

533 provide visualization of existing workstations and any proposed changes to the plant.

534 Consequently, the capability of evaluating and identifying ergonomic risk proactively facilitates

535 the development of modified work for the industries. A more robust return-to-work program can

536 be established with the support of this framework and a considerable reduction in the number of

537 insurance claims and injuries can be achieved.

538 Validation Summary

539 A validation approach is developed to prove the feasibility and reliability of the proposed

540 framework through a motion-capture-based experiment for a lifting task. Twenty-one main joint

541 angles that have high sensitivity to REBA and RULA algorithms are compared, proving that

542 vertical angles and trunk flexion perform with high correlation coefficients. The differences in

543 average, maximum, and minimum risk rating for individual body segments and for the entire

544 body between 3D modeling and the experiment are all within 1, and 3D modeling ultimately

545 provides almost the same risk level judgment as the experiment does. To conclude, throughout

546 the entire motion, the 3D visualization-based modeling method provides accurate estimation for

547 REBA and a slight overestimation for RULA compared to the experiment. On the other hand,

548 traditional manual observation underestimates the risk. Thus, the proposed

549 3D modeling method is more accurate than the traditional manual observation method,

550 overcomes that method's limitations, and is faster in this

551 respect because it is automated and capable of reliably analyzing continuous motions. The

552 proposed 3D visualization-based ergonomic risk assessment is thus validated and deemed to be

553 reliable for future use.

554 Limitations and Future Study

555 The proposed framework provides less accuracy than the risk assessment from the experiment

556 for the estimation of small movements such as neck and head movement or hand rotation/twist, a

557 limitation which may have a minor effect on the final risk rating results. Thus, this proposed

558 framework is considered to be more reliable for those activities for which the focus is on main

559 body movements than for activities for which the focus is on more subtle movements such as

560 those described above. The validity of this framework can also be greatly improved if more

561 experimental applications can be implemented and compared with the results from 3D modeling.

562 Additionally, real-time evaluation can potentially be integrated with the proposed framework.

563 Finally, it should be noted that the framework has been implemented for, but is not limited to,

564 the analysis of existing workstations. Application to the design phase of workstations is also

565 feasible.

566 DATA AVAILABILITY STATEMENT

567 Data generated or analyzed during the study are available from the corresponding author by

568 request.

569 ACKNOWLEDGEMENTS

570 This study is funded by the Natural Sciences and Engineering Research Council of Canada

571 (NSERC) through the Industrial Research Chair (IRC) in the Industrialization of Building

572 Construction (File No. IRCPJ 419145-15), Engage Grant EGP470391, and Discovery Grants

573 RGPIN 402046, RGPIN 436051, and RGPIN-2017-06721. The authors would also like to

574 acknowledge the support from their industry partner, All Weather Windows, as well as the

575 contributions of Amin Komeili and Justin Lewicke, who assisted with and participated in the

576 experiment. Special thanks are also extended to the Glenrose Rehabilitation Hospital for

577 allowing the authors to complete the experiment in their motion capture lab.

578 REFERENCES

579 Al-Hussein, M., Niaz, M.A., Yu, H., and Kim, H., 2005. Integrating 3D visualization and

580 simulation for tower crane operations on construction site. Automation in Construction,

581 15(5), 554–562.

582 Alwasel, A., Elrayes, K., Abdel-Rahman, E.M., and Haas, C.T., 2011. Sensing construction

583 work-related musculoskeletal disorders (WMSDs). In: Proceedings, International

584 Symposium on Automation and Robotics in Construction (ISARC), 29 June–2 July 2011

585 Seoul, Korea. London, UK: International Association for Automation and Robotics in

586 Construction, 164–169.

587 Autodesk Inc., 2016. Create amazing worlds in 3ds Max. Retrieved from

588 <http://www.autodesk.com/products/3ds-max/overview-dts?s_tnt=69291:1:0>

589 Autodesk Inc., 2011. MAXScript Introduction. Retrieved from

590 <http://docs.autodesk.com/3DSMAX/14/ENU/MAXScript%20Help%202012/> (accessed in

591 January 2017).

592 Ceseracciu, E., Sawacha, Z., and Cobelli, C., 2014. Comparison of markerless and marker-based

593 motion capture technologies through simultaneous data collection during gait: Proof of

594 concept. PLOS ONE, 9(3), 7 pages.

595 Dennis, P., 2002. Lean Production Simplified, Second Edition. Productivity Press, New York,

596 NY, USA.

597 David, G.C., 2005. Ergonomic methods for assessing exposure to risk factors for work-related

598 musculoskeletal disorders. Occupational Medicine, 55(3), 190–199.

599 Eichelberger, P., Ferraro, M., Minder, U., Denton, T., Blasimann, A., Krause, F., and Baur, H.,

600 2016. Analysis of accuracy in optical motion capture: A protocol for laboratory setup

601 evaluation. Journal of Biomechanics, 49(10), 2085–2088.

602 Feyen, R., Liu, Y., Chaffin, D., Jimmerson, G., and Joseph, B., 2000. Computer-aided

603 ergonomics: A case study of incorporating ergonomics analyses into workplace design.

604 Applied Ergonomics, 31(3), 291–300.

605 Golabchi, A., Han, S., Seo, J., Han, S., Lee, S., and Al-Hussein, M., 2015. An automated

606 biomechanical simulation approach to ergonomic job analysis for workplace design. Journal

607 of Construction Engineering and Management, 141(8), 04015020.

608 Golabchi, A., Han, S., and Robinson Fayek, A., 2016. A fuzzy logic approach to posture-based

609 ergonomic analysis for field observation and assessment of construction manual operations.

610 Canadian Journal of Civil Engineering, 43(4), 294–303.

611 Han, S. and Lee, S., 2013. A vision-based motion capture and recognition framework for

612 behavior-based safety management. Automation in Construction, 35, 131–141.

613 Han, S., Lee, S., and Peña-Mora, F., 2013. Vision-based detection of unsafe actions of a

614 construction worker: Case study of ladder climbing. Journal of Computing in Civil

615 Engineering, 27(6), 635–644.

616 Han, S.H., Hasan, S., Bouferguene, A., Al-Hussein, M., and Kosa, J., 2014. Utilization of 3D

617 visualization of mobile crane operations for modular construction on-site assembly. Journal

618 of Management in Engineering, 31(5), 9 pages.

619 Hignett, S. and McAtamney, L., 2000. Rapid entire body assessment (REBA). Applied

620 Ergonomics, 31(2), 201–205.

621 Inyang, N. and Al-Hussein, M., 2011. Ergonomic hazard quantification and rating of residential

622 construction tasks. In: Proceedings, Canadian Society for Civil Engineering (CSCE) Annual

623 Conference, 14–17 June 2011, Ottawa, Canada. Montréal, Canada: Canadian Society for

624 Civil Engineering.

625 Janowitz, I.L., Gillen, M., Ryan, G., Rempel, D., Trupin, L., Swig, L., … Blanc, P.D., 2006.

626 Measuring the physical demands of work in hospital settings: Design and implementation of

627 an ergonomics assessment. Applied Ergonomics, 37(5), 641–658.

628 Karhu, O., Kansi, P., and Kuorinka, I., 1977. Correcting working postures in industry: A

629 practical method for analysis. Applied Ergonomics, 8(4), 199–201.

630 Kamat, V.R., Martinez, J.C., Fischer, M., Golparvar-Fard, M., Peña-Mora, F., and Savarese, S.,

631 2011. Research in visualization techniques for field construction. Journal of Construction

632 Engineering and Management, 137(10), 853–862.

633 Li, G. and Buckle, P., 1999. Current techniques for assessing physical exposure to work-related

634 musculoskeletal risks, with emphasis on posture-based methods. Ergonomics, 42(5), 674–

635 695.

636 Li, X., Komeili, A., Gül, M., and El-Rich, M., 2017. A framework for evaluating muscle activity

637 during repetitive manual material handling in construction manufacturing. Automation in

638 Construction, 79, 39–48.

639 Li, C. and Lee, S., 2011. Computer vision techniques for worker motion analysis to reduce

640 musculoskeletal disorders in construction. In: Proceedings, International Workshop on

641 Computing in Civil Engineering, 19–22 June 2011, Miami. Reston, VA: American Society

642 of Civil Engineers, 380–387.

643 Levanon, Y., Lerman, Y., Gefen, A., and Ratzon, N.Z., 2014. Validity of the modified RULA for

644 computer workers and reliability of one observation compared to six. Ergonomics, 57(12),

645 1856–1863.

646 Mündermann, L., Corazza, S., and Andriacchi, T.P., 2006. The evolution of methods for the

647 capture of human movement leading to markerless motion capture for biomechanical

648 applications. Journal of NeuroEngineering and Rehabilitation, 3(6), 11 pages.

649 Manrique, J.D., Al-Hussein, M., Telyas, A., and Funston, G., 2007. Constructing a complex

650 precast tilt-up panel structure utilizing an optimization model, 3D CAD, and animation.

651 Journal of Construction Engineering and Management, 133(3), 199–207.

652 McAtamney, L. and Corlett, E.N., 1993. RULA: A survey method for investigation of work

653 related upper limb disorders. Applied Ergonomics, 24(2), 91–99.

654 Middlesworth, M., 2012. A step-by-step guide: Rapid Entire Body Assessment (REBA).

655 Ergonomics Plus, from <www.ergo-plus.com> (accessed January, 2016)

656 Mitropoulos, P., Cupido, G., and Namboodiri, M., 2009. Cognitive approach to construction

657 safety: Task demand-capability model. Journal of Construction Engineering and

658 Management, 135(9), 881–889.

659 Motion Analysis Corp., 2016. Motion analysis. Santa Rosa, CA, USA. Retrieved from

660 <http://www.motionanalysis.com> (accessed January, 2015)

661 Ontario Safety Association for Community and Healthcare, 2010. Musculoskeletal disorders

662 prevention series Part 1: MSD prevention guideline for Ontario.

663 <http://www.osach.ca/misc_pdf> (accessed March 2015).

664 Ray, S.J. and Teizer, J., 2012. Real-time construction worker posture analysis for ergonomics

665 training. Advanced Engineering Informatics, 26(2), 439–455.

666 Staub-French, S., Russell, A., and Tran, N., 2008. Linear scheduling and 4D visualization.

667 Journal of Computing in Civil Engineering, 22(3), 192–205.

668 Schneider, S.P., 2001. Musculoskeletal injuries in construction: A review of the literature.

669 Applied Occupational and Environmental Hygiene, 16(11), 1056–1064.

670 Seo, J., Han, S., Lee, S., and Kim, H., 2015a. Computer vision techniques for construction safety

671 and health monitoring. Advanced Engineering Informatics, 29(2), 239–251.

672 Seo, J., Starbuck, R., and Han, S., 2015b. Motion data-driven biomechanical analysis during

673 construction tasks on sites. Journal of Computing in Civil Engineering, 29(4), 10 pages.

674 Seo, J., Lee, S., Armstrong, T.J., and Han, S., 2013. Dynamic biomechanical simulation for

675 identifying risk factors for work-related musculoskeletal disorders during construction tasks.

676 In: Proceedings, International Symposium on Automation and Robotics in Construction and

677 Mining (ISARC), 11–15 August 2013 Montréal, Canada. London, UK: International

678 Association for Automation and Robotics in Construction, 1074–1084.

679 Spielholz, P., Silverstein, B., Morgan, M., Checkoway, H., and Kaufman, J., 2001. Comparison

680 of self-report, video observation and direct measurement methods for upper extremity

681 musculoskeletal disorder physical risk factors. Ergonomics, 44(6), 588–613.

682 University of Michigan, Center of Ergonomics, 2012. 3D static strength prediction program.

683 <http://djhurij4nde4r.cloudfront.net/attachments/files/000/000/284/original/Manual_606.pdf

684 ?1406656210> (accessed July 2016)

685 Van der Molen, H.F., Mol, E., Kuijer, P.P.F.M., and Frings-Dresen, M.H.W., 2007. The

686 evaluation of smaller plasterboards on productivity, work demands and workload in

687 construction workers. Applied Ergonomics, 38(5), 681–686.

688 Wang, D., Dai, F., and Ning, X., 2015. Risk assessment of work-related musculoskeletal

689 disorders in construction: State-of-the-art review. Journal of Construction Engineering and

690 Management, 141(6), 15 pages.

