
International Journal of Industrial Engineering, 19(1), 1-13, 2012.

INTERNET USER RESEARCH IN PRODUCT DEVELOPMENT: RAPID


AND LOW COST DATA COLLECTION
A. Shekar and J. McIntyre

School of Engineering and Advanced Technology


Massey University, Auckland
New Zealand

Small to Medium Enterprises (SMEs) face enormous financial risks when developing new products. A key element
of risk minimization is an early emphasis on gathering information about the end users of the product quickly. SMEs
are often overwhelmed by the prospect of expected research costs, lack of expertise, and financial pressures to rush to
market. Too often the more conventional path is chosen, whereby a solution is developed and tested in the market to
“see if it sticks”. Such methodologies are less effective and subject the SME to increased financial risk. This study
demonstrates how SMEs can make use of freely available internet resources to reproduce aspects of more
sophisticated customer research techniques. Internet resources such as YouTube and online forums enable SMEs to
research customers rapidly, and in a cost effective manner. This study examines New Zealand SMEs and presents
two case studies to support the use of modern web-based user research in new product development.

Keywords: product development, user information, web research, New Zealand

(Received 27 October 2010; Accepted in revised form 24 June 2011)

1. INTRODUCTION
Small and Medium Enterprises (SMEs) are a large and vital component of most developed nations’ economies. The
prevalence of such firms is so large that in sectors such as manufacturing, their numbers often dominate the economic
landscape (Larsen and Lewis 2007). Their accrued success contributes substantially to employment, exports, and
Gross Domestic Product (GDP). The sheer quantity of firms and their individual contributions build flexibility and
robustness into a nation’s economy. Governments generally recognize this fact (Massey 2002) and support
innovation in SMEs through funding research and incentive programs.
The ability to launch new products and services is a critical element of success for all companies, large and small.
Launching a new product or service is often the most significant financial risk a firm may face since its own
inception. New product launches are typically characterized by large expenditures associated with research,
production tooling, marketing and promotions. The successful recovery of expenditures and the prospect of
generating profits depend entirely upon the product’s success in the consumer marketplace. The losses incurred from
a failed product can be devastating for the small organisation. In one study of SMEs based in the Netherlands, 40% of
firms were found not to survive their first 5 years in business (Vos, Keizer et al. 1998). Surveys of NZ SMEs indicate
that the risks are well understood; however, NPD is still identified as a weakness within their organisations (McGregor
and Gomes 1999).

1.1 SME Challenges and New Product Development


Innovation poses inherent risks, yet remains an essential activity of businesses both large and small (Boag and
Rinholm 1989). While SMEs are typically described as being more entrepreneurial “risk-takers” than their larger
counterparts, in reality their situation may be more precarious. Small businesses are often more sensitive to the risks
of new product development (NPD) activities due to limited financial resources. Indeed, an unsuccessful product
introduction can spell disaster for the small business.
While structured approaches have been successfully implemented in larger firms, smaller organisations are
less enthusiastic about them and struggle to adopt and make use of them (Enright 2001). The
reasons for this are varied and not well understood. Many SMEs operate without the benefit of academic partnerships
and may simply not be aware of the information available. Others may recognize that structured NPD approaches
generally cater to the specific needs of larger firms and the results may impose unnecessary bureaucracy on the
smaller organisations.
It is generally recognized that smaller firms are distinct in both principle and practice from their larger counterparts.
Successful large firms deal efficiently with multiple project ideas, communications involving large numbers of
participants, and documentation to retain and share corporate knowledge. Smaller firms participating in the NPD
process face different challenges. SMEs typically address smaller numbers of projects, involving fewer participants,
and enjoy opportunities for more frequent face to face communications. Challenges to successful NPD efforts are the
results of operating constraints and the culture found within smaller organisations. A partial summary of the unique
issues faced by SMEs is presented in Table 1.



International Journal of Industrial Engineering, 19(1), 14-25, 2012.

AN INVESTIGATION OF INTERNAL LOGISTICS OF A LEAN BUS


ASSEMBLY SYSTEM VIA SIMULATION: A CASE STUDY
Aric Johnson1, Patrick Balve2, and Nagen Nagarur1
1 Department of Systems Science and Industrial Engineering, Binghamton University,
P.O. Box 6000, Binghamton, NY 13902-6000, USA
2 Production and Logistics Department, Heilbronn University,
Max-Planck-Straße 39, Heilbronn 74081, Germany

Corresponding author: Aric Johnson, aricrjohnson@gmail.com

This study involves the internal logistics of a chosen bus assembly plant that follows a lean assembly process dictated by
takt time production. The assembly system works according to a rigid sequential order of assembly of different models of
buses, called the String of Pearls. The logistics department is responsible for supplying kitted components to assembly
workstations for the right model at the right time. A simulation model was developed to study this assembly system, with
an objective of finding the minimum number of kit carts for multiple production rates and kitting methods. The
implementation of JIT kitting was the ultimate goal in this case. The research focused on a specific assembly plant and
therefore, the numerical results are applicable to the selected plant only. However, some of the trends in the output may be
generalized to any assembly plant of similar type.

Significance: This study illustrates the use of simulation to plan further lean transformation within a major bus assembly
plant. This assembly plant had recently transformed its assembly operations according to lean principles with much
success. The next step was to transform the logistical support to this system, and this was planned via simulation. This
paper makes an original contribution to this area of research, and to the best of the authors’ knowledge such a work has not
been published so far.

Keywords: Bus assembly, kitting, takt time, simulation, internal logistics, JIT

(Received 21 March 2011; Accepted in revised form 12 March 2012)

1. INTRODUCTION
Automotive industries, including bus assemblies, have been forced to cut costs to remain competitive in a global
environment. For customers, price is often an important criterion, and so automotive plants strive to cut costs, while at the
same time struggle to improve their throughput. The industry has mostly adopted lean manufacturing methods as the means
of reducing costs and increasing throughput. Auto plants typically follow an assembly-line type of manufacturing, in which
all the operations are done in stations or cells connected sequentially with a set of operations assigned for each station. This
is because there are a large number of operations that need to be completed to produce a finished automobile; breaking the
operations into stations allows the system to operate more efficiently and at a much faster rate. Most plants also implement
a balanced assembly line of workstations that allows assemblies to flow through the system at a specific, predetermined
rate, termed takt time. This balanced, sequential workstation design promotes a smooth flow throughout the plant.
However, this type of system then inherits a new challenge of physically getting the required parts to the workstations on
time. This problem can be described as a problem of internal logistics between parts storage (warehouse) and the many
workstations. A well-coordinated logistics system is vital since a single workstation that does not receive its required
parts/components on time results in delaying the entire assembly line. An assembly plant operating at a takt time production
rate has little or no slack built into its schedule. Hence, getting the required parts/components to the right workstation at the
right time is critical in this setting.
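
As a point of reference for the takt-time-driven logistics problem described above, the following minimal sketch (not from the paper; the shift pattern and demand figures are assumed) shows how a takt time is computed and why every kit delivery must fit within that interval.

```python
# Minimal illustration (not from the paper): takt time is the pace at which
# finished buses must leave the line to meet demand. Figures are hypothetical.

def takt_time(available_minutes_per_day: float, demand_per_day: float) -> float:
    """Takt time = available production time / customer demand."""
    return available_minutes_per_day / demand_per_day

if __name__ == "__main__":
    available = 2 * 8 * 60 * 0.9   # two 8-hour shifts, 90% of time productive (assumed)
    demand = 6                     # buses required per day (assumed)
    print(f"Takt time: {takt_time(available, demand):.1f} minutes per bus")
    # Every workstation's assigned work content, and every kit delivery,
    # must fit within this interval to keep the line balanced.
```
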
One internal supply strategy would be to stage required parts at the workstations and replenish them as necessary. This is
often not feasible in bus assembly. For one thing, the parts may be of large size and storage of such parts at a workstation
may be prohibitive. In addition, if the line is producing multiple models, storing all the combinations of parts makes it more
complex and tends to become more error prone. Hence, the standard practice under such situations is to let the product flow
through the stations and have the parts/components for an individual assembly be brought to the appropriate workstation at
the exact time they are needed. The majority of the parts/components are stored in a warehouse, and the required set of
International Journal of Industrial Engineering, 19(1), 26-32, 2012.

RESEARCH-BASED ENQUIRY IN PRODUCT DEVELOPMENT


EDUCATION: LESSONS FROM SUPERVISING UNDERGRADUATE
FINAL YEAR PROJECTS

A. Shekar

School of Engineering and Advanced Technology


Massey University, Auckland
New Zealand


This paper presents an interesting perspective on enquiry-based learning by engineering students through a project and
research-based course. It discusses the lessons learned from coordinating and supervising undergraduate research-based
project courses in Product Development engineering, at Massey University in New Zealand. Research is undertaken by
students at the start and throughout the development project in order to understand the background and trends in the
literature and incorporate them in their projects. Further research is undertaken into the product’s technologies, the
problem and motivation behind the development, and the context and user environment.
The multi-disciplinary nature of product development requires students to research widely across
disciplinary borders, and then to integrate the results towards the goals of designing a new product and producing journal-style research
papers. The Product Development process is a research-based decision-making process and one that needs an enquiring
mind and an independent learning approach, as often the problems are open-ended and ill-defined. Both explicit and
tacit knowledge are gained through this action-research methodology of learning. Tacit knowledge is gained through
the hands-on project experience, experimentation, and learning by doing.
Several implications for educators are highlighted, including the need for a greater emphasis on self-learning through
research and hands-on practical experience, the importance of developing student research skills, and the value of
learning from peer interaction.

Keywords: Product development, research-based enquiry, project-based learning.

(Received 1 May 2009; Accepted in revised form 1 June 2010)

1. INTRODUCTION
Engineering design programs are increasingly aware ‘that the project-based approach results in the development of
competencies that are expected by employers’ (DeVere, 2010). One of these competencies is independent research
skills and learning. Several new design-engineering programs have emerged and many see the need for engineers to
demonstrate design and management (business) thinking in addressing product design problems. Most of these
programs build the curriculum by combining courses from business, design and engineering faculties, leaving the
integration to the students. We have found that this integration does not take place well. Students often tend to
compartmentalise papers and do not appreciate the links between them, and lecturers from other departments are sometimes
not aware of how engineers may use the material they cover and so may not provide relevant examples. Project-based
learning is an attempt to address this issue.
A broad definition of project-based learning (PBL) given by Prince and Felder is:
‘Project-based learning begins with an assignment to carry out one or more tasks that lead to the production of a final
product—a design, a model, a device or a computer simulation. The culmination of the project is normally a written
and/or oral report summarizing the procedure used to produce the product and presenting the outcome.’
In practice, many engineering education activities developed on the basis of inductive instructional methods – active
research, inquiry-led learning and problem-based learning focus on a fixed deliverable and therefore fall within this
definition of PBL.
Massey University is currently reorganizing its curriculum to overcome the gap between theory and practice and the
lack of integration between disciplines, and to take a more student-centred approach to learning. Students follow
courses in engineering sciences, physics, mathematics, statistics and the like; however, in tackling practical design
projects, they fail to apply this knowledge to the extent that their design would benefit. The new curriculum proposes to
have more project-based learning and less of the traditional ‘chalk and talk’ teacher centred approach in all of the
majors offered. This approach follows worldwide trends in engineering education and has already been practised with
success within the current product development major, hence it is presented in this paper.



International Journal of Industrial Engineering, 19(1), 33-46, 2012.

A HYBRID BENDERS/GENETIC ALGORITHM


FOR VEHICLE ROUTING AND SCHEDULING PROBLEM
Ming-Che Lai1, Han-Suk Sohn2, Tzu-Liang (Bill) Tseng3, and Dennis L. Bricker4
1 Department of Marketing and Logistics Management, Yu Da University, Miao-Li County 361, Taiwan
2 Dept. of Industrial Engineering, New Mexico State University, Las Cruces, NM 88003, USA
3 Dept. of Industrial Engineering, University of Texas, El Paso, TX 79968, USA
4 Dept. of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242, USA

Corresponding author: Han-Suk Sohn, hsohn@nmsu.edu

This paper presents an optimization model and its application to a classical vehicle routing problem. The proposed model is
exploited effectively by the hybrid Benders/genetic algorithm which is based on the solution framework of Benders’
decomposition algorithm, together with the use of genetic algorithm to effectively reduce the computational difficulty. The
applicability of the hybrid algorithm is demonstrated in a case study of Rockwell Collins’ fleet management plan.
The results demonstrate that the model is a practical and flexible tool in solving realistic fleet management planning
problems.

Keywords: Vehicle Routing, Hybrid Algorithm, Genetic Algorithm, Benders’ Decomposition, Lagrangian Relaxation,
Mixed-integer programming.

(Received 9 June 2011; Accepted in revised form 28 February 2012)

1. INTRODUCTION
The vehicle routing problem (VRP) involves a number of delivery customers to be serviced by a set of identical vehicles at
a single home depot. The objective of the problem is to find a set of delivery routes such that all customers are served
exactly once and the total distance traveled or time consumed by all vehicles is minimized, while at the same time the sum
of the demanded quantities in any routes does not exceed the capacity of the vehicle. The VRP is one of the most
challenging combinatorial optimization problems and it was first introduced by Dantzig and Ramser (1959). Since then, the
VRP has stimulated a large amount of research in the operations research and management science community (Miller,
1995). There are substantial numbers of heuristic solution algorithms proposed in the literature. Early heuristics for this
problem are those of Clarke and Wright (1964), Gillett and Miller (1974), Christofides et al. (1979), Nelson et al. (1985),
and Thompson and Psaraftis (1993). A number of more sophisticated heuristics have been developed by Osman (1993),
Thangiah (1993), Gendreau et al. (1994), Schmitt (1994), Rochat and Taillard (1995), Xu and Kelly (1996), Potvin et al.
(1996), Rego and Roucairol (1996), Golden et al. (1998), Kawamura et al. (1998), Bullnheimer et al. (1998 and 1999),
Barbarosoglu and Ozgur (1999), and Toth and Vigo (2003). As well, exact solution methods have been studied by many
authors. These include branch-and-bound procedures, typically with the basic combinatorial relaxations (Laporte et al.,
1986; Laporte and Nobert, 1987; Desrosiers et al., 1995; Hadjiconstantinou et al., 1995) or Lagrangian relaxation (Fisher,
1994; Miller, 1995; Toth and Vigo, 1997), branch-and-price procedure (Desrochers et al., 1992), and branch-and-cut
procedure (Augerat et al, 1995; Ralphs, 1995; Kopman, 1999; Blasum and Hochstattler, 2000).
Unlike many other mixed-integer linear programming applications, however, Benders’ decomposition has not been
successful in this problem domain because of the difficulty of solving the master problem. In mixed-integer linear
programming problems, where Benders’ algorithm is most often applied, the master problem selects values for the integer
variables (the more difficult decisions) and the subproblem is a linear programming problem which selects values for the
continuous variables (the easier decisions). For the VRP problem, the master problem of Benders’ decomposition is more
amenable to solution by a genetic algorithm (GA) which searches the solution space in parallel fashion. The fitness
function of the GA is, in this case, evaluated quickly and simply by evaluating a set of linear functions. In this short paper,
therefore, a hybrid algorithm is presented in order to overcome the difficulty in implementing the Benders’ decomposition
for the VRP problem. It is based on the solution framework of Benders’ decomposition algorithm, together with the use of
GA to effectively reduce the computational difficulty. The rest of this paper is organized as follows. In section 2 the
classical vehicle routing problem is presented. The application of the hybrid algorithm is described in section 3. In Section
4, a case study on the fleet management planning of Rockwell Collins, Inc. is presented. Some concluding remarks are
presented in Section 5. Finally, Section 6 lists references used in this paper.
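
The master-problem/fitness relationship described above can be sketched as follows. This is a hedged illustration rather than the authors' implementation: the cut coefficients, costs, and GA settings are invented for the example, and a full Benders scheme would regenerate cuts from the linear subproblem at each cycle.

```python
# Hedged sketch of the hybrid idea: a GA searches over the integer master
# variables y, and each candidate's fitness is obtained by evaluating the
# accumulated Benders cuts, i.e. a small set of linear functions of y.
import random

CUTS = [  # each cut: (alpha_k, beta_k) so that subproblem cost >= alpha_k + beta_k . y
    (4.0, [1.0, 0.5, 0.0, 2.0]),
    (2.5, [0.0, 1.5, 1.0, 0.5]),
]
C = [3.0, 2.0, 4.0, 1.0]  # direct cost of the integer decisions (assumed)

def fitness(y):
    """Master objective: c'y plus the tightest lower bound given by the cuts."""
    direct = sum(c * yi for c, yi in zip(C, y))
    bound = max(a + sum(b * yi for b, yi in zip(beta, y)) for a, beta in CUTS)
    return direct + bound

def genetic_search(pop_size=20, generations=50, n=4, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # minimize
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                 # bit-flip mutation
                i = rng.randrange(n)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = genetic_search()
print(best, fitness(best))
```
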



International Journal of Industrial Engineering, 19(1), 47-56, 2012.

A NEW TREND BASED APPROACH FOR FORECASTING OF ELECTRICITY


DEMAND IN KOREA
Byoung Chul Lee, Jinsoo Park, Yun Bae Kim

Department of Systems Management Engineering, Sungkyunkwan University, Suwon, Republic of Korea

Corresponding author: Yun Bae Kim, kimyb@skku.edu

Many forecasting methods for electric power demand have been developed. In Korea, however, these methods do not
perform well. A peculiar seasonality in Korea increases the forecasting error produced by previous methods. Two big
festivals, Chuseok and Seol, also produce forecasting errors. Therefore, a new demand forecasting model is required. In this
paper, we introduce a new model for electric power demand forecasting which is appropriate to Korea. We start the
research using the concept of weekday average. The final goal is to forecast hourly demand for both the long and short term.
We obtain results with an accuracy of over 95%.

Keywords: Demand forecasting, electric power, moving average

(Received 7 April 2010; Accepted in revised form 24 June 2011)

1. INTRODUCTION
There have been many studies related to forecasting electric demand. These studies have contributed to achieving greater
accuracy. Shahidehpour et al. (2002) introduced market operation in electric power systems. Price modeling for electricity
markets was described by Bunn (2004). Kawauchi et al. (2004) developed a forecasting method based on conventional
chaos theory for short term forecasting. Gonzalez-Romera et al. (2007) used neural network theory, Oda et al. (2005)
forecasted demand with regression analysis, and Pezzulli et al. (2006) focused on seasonal forecasting with a Bayesian
hierarchical model.
These attempts, while valuable, are not well suited to Korea because Korea has four distinct seasons, each with its
own features, such as a cycle of three cold days and four warm days in winter. In addition, the Korean demand trend has a
weekly cycle. There are therefore two sources of seasonality, a seasonal factor and a weekly factor, and the previous
methods are not appropriate for this double seasonality.
To examine double seasonality, we analyzed past data to determine properties of Korean electric demand. Using these
properties, we defined a new concept of weekday average (WA) and developed models for forecasting hourly demand of
electric power in Korea.
The organization of this paper is as follows. In Section 2, the concept of WA is used for 24 hours as the first step in
forecasting hourly demand. In Section 3, we deal with the methods of forecasting WA and non-weekday demand, including
holidays and festivals. We apply our model to the actual demand data and show the results in Section 4. We conclude the
research and suggest further studies in Section 5.

2. CONCEPT OF WEEKDAY AVERAGE


We found two special properties related to the hourly demand of electric power in Korea; one is the character of weekdays,
and the other is a connection between weekdays and non-weekdays (a weekday means the days from Tuesday to Friday).
Holidays and festival seasons are regarded as non-weekdays even though they are in a weekday period. The demands
during each weekday are almost similar to one another at the same hour; this is the first property. However, the demands of
Monday and weekends are less than those of weekdays by an invariable ratio; this is the second property. Therefore, our
research starts by developing a method for forecasting the hourly demand of weekdays. We then find the relation between
weekdays and non-weekdays.
Let us define the hourly demand:

D_n^i(h): demand from (h − 1):00 to h:00, for h = 1, ..., 24 and i = 1, ..., 7    ... (1)

where n is the number of weeks from the base week; for example, if the base week is the first week of 2007, then Dec.
31st, 2007 has the value n = 52, and i is the day of the week (1 = Monday, 7 = Sunday).
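
A minimal sketch of the weekday average (WA) concept defined above is given below; the demand data are synthetic and the array layout is an assumption made purely for illustration.

```python
# Minimal sketch of the weekday-average (WA) concept, using synthetic numbers;
# indices follow the definition D_n^i(h) above.
import numpy as np

rng = np.random.default_rng(0)
# demand[n, i, h]: week n (0-based), day i (0=Monday ... 6=Sunday), hour h (0..23)
demand = rng.uniform(40_000, 60_000, size=(4, 7, 24))

def weekday_average(demand, n):
    """WA_n(h): mean hourly demand over Tuesday..Friday (i = 2..5 in the paper's 1-based indexing)."""
    return demand[n, 1:5, :].mean(axis=0)   # rows 1..4 are Tue..Fri with 0-based days

wa = weekday_average(demand, n=0)
print(wa.shape)         # (24,) one value per hour
print(round(wa[9], 1))  # e.g. WA for 09:00-10:00
```
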



International Journal of Industrial Engineering, 19(2), 57-67, 2012.

RELIABILITY EVALUATION OF A MULTISTATE NETWORK UNDER


ROUTING POLICY
Yi-Kuei Lin

Department of Industrial Management


National Taiwan University of Science and Technology
Taipei, Taiwan 106, R.O.C.
Tel: +886-2-27303277, Fax: +886-2-27376344
Corresponding author: Lin, yklin@mail.ntust.edu.tw

A multistate network is a stochastic network composed of multistate arcs, in which each arc has several possible
capacities and may be degraded due to failure, maintenance, etc. Unlike the deterministic case, the minimum transmission
time in a multistate network is not a fixed number. We evaluate the probability that a given amount of data/commodity can
be sent from a source port to a sink port through a pair of minimal paths (MPs) simultaneously under a time constraint.
Such a probability is named the system reliability. An efficient solution procedure is first proposed to calculate it. In order
to enhance the system reliability, the network administrator decides the routing policy in advance to indicate the first and
the second priority pairs of MP. Subsequently, we can evaluate the system reliability under the routing policy. An easy
criterion is then proposed to derive an ideal routing policy with higher system reliability. We can treat the system reliability
as a performance index to measure the transmission ability of a multistate network such as computer, logistics, urban traffic,
telecommunication systems, etc.

Keywords: Multistate network; commodity transmission; system reliability; transmission time; routing policy

(Received 1 March 2010; Accepted in revised form 27 February 2012)

1. INTRODUCTION

For a deterministic network in which each arc has a fixed length attribute, the shortest path problem is to find a path with
minimum total length. When commodities are transmitted from a source to a sink through a flow network, it is desirable to
adopt the shortest path, least cost path, largest capacity path, shortest delay path, or some combination of multiple criteria
(Ahuja, 1998; Bodin et al., 1982; Fredman and Tarjan, 1987; Golden and Magnanti, 1977), which are all variants of the
shortest path problem. From the point of view of quality management and decision making, it is an important task to reduce
the transmission time through a flow network. Hence, a version of the shortest path problem called the quickest path
problem proposed by Chen and Chin (1990) arises. This problem finds a quickest path with minimum transmission time to
send a given amount of data/commodity through the network. In this problem, each arc has capacity and lead time
attributes (Chen and Chin, 1990; Hung and Chen, 1992; Martins and Santos, 1997; Park et al., 2004). More specifically,
the capacity and the lead time are both assumed to be deterministic. Several variants of the quickest path problem were
thereafter proposed: the constrained quickest path problem (Chen and Hung, 1994; Chen and Tang, 1998), the first k quickest
paths problem (Chen, 1993; Chen, 1994; Clímaco et al., 2007; Pascoal et al., 2005), and all-pairs quickest path problem
(Chen and Hung, 1993; Lee and Papadopoulou, 1993).
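
For reference, the deterministic quickest path computation discussed above can be illustrated as follows; the arc data are invented, and the formula used (path lead time plus demand divided by the bottleneck capacity) is the classical one rather than anything specific to this paper.

```python
# Hedged illustration of the deterministic quickest-path idea: for a path P with
# total lead time L(P) and bottleneck capacity c(P), sending d units takes roughly
# L(P) + ceil(d / c(P)) time units. Arc data are made up.
import math

def path_transmission_time(arcs, d):
    """arcs: list of (lead_time, capacity) for each arc on the path; d: demand."""
    lead = sum(l for l, _ in arcs)
    cap = min(c for _, c in arcs)        # the bottleneck arc limits the flow rate
    return lead + math.ceil(d / cap)

path_a = [(2, 5), (1, 4), (3, 6)]        # hypothetical arcs (lead time, capacity)
path_b = [(1, 3), (2, 3)]
d = 20
print(min(path_transmission_time(p, d) for p in (path_a, path_b)))
```
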
However, due to failure, partial failure, maintenance, etc., each arc should be considered as multistate in many real-life
flow networks such as computer, logistics, urban traffic, telecommunication systems, etc. That is, each arc has multiple
possible capacities or states (Jane et al., 1993; Lin et al., 1995; Lin, 2003, 2004, 2007a,b, 2009; Yeh, 2007, 2008). Then the
transmission time through a network is not a fixed number if each arc has the time attribute. Such a network is named a
multistate network throughout this paper. For instance, a logistics system with each node representing the shipping port and
each arc representing the shipping itinerary between two ports is a typical multistate network. The capacity of each arc is
counted in terms of the number of containers, and is stochastic because the containers or transport vehicles (e.g., cargo
airplanes, cargo ships) on each arc may be under maintenance, reserved by other suppliers, or in other conditions.
The purpose of this paper is to design a performance index to measure the transmission ability for a multistate network.
In order to reduce the transmission time, the data/commodity can be transmitted through several minimal paths (MPs)
simultaneously, where an MP is a sequence of arcs without loops. For convenience, we first concentrate on commodity
transmission through two MPs. We mainly evaluate the probability that the multistate network can send d units of
commodity from a source port to a sink port through a pair of MP under the time constraint T. Such a probability is named
the system reliability, which can be treated as a performance index. Under the same time constraint and demand
requirement, the system owns a better transmission ability if it obtains the higher system reliability. In order to boost the
transmission ability, the network administrator decides the routing policy in advance to indicate the first and the second
International Journal of Industrial Engineering, 19(2), 68-79, 2012.

DETERMINING THE CONSTANTS OF RANGE CHART FOR SKEWED


POPULATIONS
Shih-Chou Kao

Graduate School of Operation and Management, Kao Yuan University,


No.1821, Jhongshan Rd., Lujhu Dist., Kaohsiung City 821, Taiwan (R.O.C.).
Corresponding author email: t80132@cc.kyu.edu.tw

The probability of a false alarm (type I risk) in Shewhart control charts based on a normal distribution will increase
as the skewness of a process increases. Moreover, the distribution of the range is itself positively skewed, so monitoring
range values with three-sigma control limits derived from a normality assumption is unreliable. In addition, most studies
employ simulation to compute the type I risks of the range control chart for non–normal processes. To provide an
alternative, this study utilizes the probability density function of the distribution of the range to construct appropriate
control limits for a range control chart for a skewed process. The control limits of the range chart were determined by
setting the type I risk equal to 0.0027 for the standardized Weibull, lognormal and Burr distributions. Furthermore, in
comparisons of type I and type II risks with the weighted variance (WV), skewness correction (SC) and traditional
Shewhart range charts, the proposed range chart is superior for a skewed process. An example of the yield strength of
deformed bar in coil is presented to illustrate these findings. The study utilized the probability density function of the
range distribution and α = 0.0027 probability limits, considering the three distributions (Weibull, lognormal and Burr),
to construct the R control chart. The computed constants of the R control chart are listed in a table that can be
consulted by practitioners. The R chart using the proposed method outperforms the other control charts in terms of
type I and type II risks for a skewed process.

Keywords: Range chart, skewed distribution, normality, type I risk.

(Received 22 March 2010; Accepted in revised form 24 June 2011)

1. INTRODUCTION
The development of control charts became rapid and diverse after W. A. Shewhart proposed a traditional control chart.
Control charts have the superior ability for monitoring a process in manufacturing, and they have been applied
successfully in other areas, such as finance, health care and information.
The Shewhart range (R) control chart is one of the most frequently used control charts since it is easily operated and
interpreted by practitioners. In general, traditional variable control charts, such as an average and a R control charts, are
based on the normality assumption. However, many processes in industry violate this assumption. These skewed
processes involve chemical processes, cutting tool wear processes and lifetime in an accelerated life test (Bai and Choi,
1995). Moreover, the range distribution is a positively–skewed one (Montgomery, 2005). If the traditional control
charts are used to monitor a non–normal process, the probabilities of a type I error (α) in the control charts increase as
the skewness of the process increases (Bai and Choi, 1995; Chang and Bai, 2001).
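
The inflation of the type I risk for a skewed process, and the idea of 0.0027 probability limits taken from the range distribution itself, can be illustrated with a small Monte Carlo sketch. This illustrates the issue only and is not the paper's analytical derivation; the Weibull shape, subgroup size, and normal-theory constants d2 and d3 are chosen for the example.

```python
# Monte Carlo sketch: for subgroups of size n from a skewed Weibull process,
# compare the false-alarm rate of conventional three-sigma R limits with
# alpha = 0.0027 probability limits taken from the simulated range distribution.
import numpy as np

rng = np.random.default_rng(42)
n, reps = 5, 200_000
samples = rng.weibull(1.0, size=(reps, n))        # shape=1 -> strongly skewed (exponential)
ranges = samples.max(axis=1) - samples.min(axis=1)

# Conventional Shewhart R chart constants for n = 5 (normal-theory values)
d2, d3 = 2.326, 0.864
r_bar = ranges.mean()
ucl_3sigma = r_bar + 3 * d3 / d2 * r_bar
lcl_3sigma = max(0.0, r_bar - 3 * d3 / d2 * r_bar)
alpha_3sigma = np.mean((ranges > ucl_3sigma) | (ranges < lcl_3sigma))

# Probability limits: equal 0.00135 tails of the (skewed) range distribution
lcl_p, ucl_p = np.quantile(ranges, [0.00135, 0.99865])

print(f"three-sigma limits false-alarm rate: {alpha_3sigma:.4f} (nominal 0.0027)")
print(f"0.0027 probability limits: ({lcl_p:.3f}, {ucl_p:.3f})")
```
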
Bai and Choi (1995), Chang and Bai (2001) and Montgomery (2005) considered four methods for improving the
capabilities of control charts for monitoring a skewed process. The first method increased the sample sizes on the basis
of the central limit theorem. When the samples are larger, the skewed distribution will become a normal or
approximately normal distribution. However, the method is often expensive due to sampling. The second method is to
assume that the distribution of a process is known and then to derive a suitable control chart from this known
distribution. Ferrell (1958) designed geometric midrange and range control charts for a lognormally distributed process.
Nelson (1979) proposed median, range, scale and location control charts for a Weibull distribution.
The third method is to construct the traditional control chart using approximately normal data that result from
transforming skewed data. Various criteria were proposed to transform exponential data, such as maximum likelihood
and Bayesian methods (Box and Cox, 1964), Kullback–Leibler (K–L) information numbers (Hernandez and Johnson,
1980; Yang and Xie, 2000), measure of symmetry (zero skewness; Nelson, 1994), ease of use (Kittlitz, 1999) and
minimizing the sum of the absolute differences (Kao et al., 2006), to assess transformation efficiency. The shortcoming
of this method is that it is difficult to identify an exact distribution of a process with the second method.
The last method is to construct control charts using heuristic methods with no assumption on the form of the
distribution. Choobineh and Ballard (1987) proposed the WV method to determine the constants of average and R
charts based on the semivariance estimation of Choobineh and Branting (1986). Bai and Choi (1995) considered the
three skewed distributions (Weibull, lognormal and Burr) and determined the constants of average and R charts using
the weighted variance (WV) method by splitting a skewed distribution into two parts at the mean. Chang and Bai (2001)
determined the constants of the average control chart by replacing the variance of the WV method with a standard deviation. Chan
and Cui (2003) proposed the skewness correction (SC) method based on the Cornish–Fisher expansion (Johnson et al.,
International Journal of Industrial Engineering, 19(2), 80-89, 2012.

PERFORMANCE MODELING AND AVAILABILITY ANALYSIS OF SOLE


LASTING UNIT IN SHOE MAKING INDUSTRY: A CASE STUDY
Vikas Modgil1, S.K. Sharma2, Jagtar Singh3
1 Dept of Mechanical Engineering, D.C.R.U.S.T., Murthal, Sonepat, Haryana, India
2 Dept of Mechanical Engineering, N.I.T Kurukshetra, Haryana, India
3 Dept of Mechanical Engineering, S.L.I.E.T Longowal, Sangrur, Punjab, India
Corresponding author: Vikas Modgil, vikasmodgil@yahoo.co.uk

In the present work, performance modelling of the sole lasting unit, a part of the shoe making industry, has been done on
the basis of a Markov birth-death process using a probabilistic approach, with the purpose of computing and improving the
time dependent system availability (TDSA). The Kolmogorov differential equations, based on the mnemonic rule, are
formulated from the performance model and are solved to estimate the availability of the system as a function of time,
month-wise for the whole year, using a sensitive and advanced numerical technique known as the adaptive step-size
Runge-Kutta method. The inputs for the computation of TDSA are the existing failure and repair rates, taken from plant
maintenance history sheets. New repair rates are also devised for the purpose of maximum improvement in availability.
The analysis findings help plant management adopt the best possible maintenance strategies. Performance modelling and
availability analysis of a practical system are conducted in the paper with the purpose of improving its operational
availability. The TDSA is computed with the existing failure and repair rates on a monthly basis for the whole year.
Newly devised repair rates are also proposed, through which one can assure maximum availability of the system with the
existing equipment and machines. It is also shown that knowledge of the TDSA minimizes the chances of sudden failure,
assures the maximum availability of the system, and exposes the critical subsystems which need more attention and due
consideration as far as maintenance is concerned. The improvement in the availability of the system is from 2% to 5% in
most months; however, it increases to 9% in the month of April. Further, the assured increase in availability increases
productivity as well as the balance between demand and supply, so that the manufacturer delivers its product to the
market on time, which in turn increases the profit and reputation of the industry in the market.

Keywords: Performance Modelling; Time Dependent System Availability (TDSA); Runge-Kutta; Sole Lasting;
Kolmogorov Differential Equations; Shoe Making.

(Received 25 September 2011; Accepted in revised form 27 February 2012)

1. INTRODUCTION
With increasing advancement and automation, industrial systems are becoming complex, and maintaining their
failure-free operation is not only costly but also difficult. Maximum availability levels are therefore desirable to reduce the
cost of production and to keep the systems in working order for a long duration. The industrial operating conditions and
the repair facility also play an important role in this regard.
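
As background for the time-dependent availability computation developed later in the paper (Kolmogorov equations integrated with an adaptive Runge-Kutta scheme), the following is a minimal single-unit sketch; the failure and repair rates are assumed, and the actual model in the paper covers multiple subsystems.

```python
# Minimal sketch, not the paper's full multi-subsystem model: time-dependent
# availability of a single unit with assumed constant failure rate lam and
# repair rate mu, from the Kolmogorov equations
#   dP_up/dt = -lam*P_up + mu*P_down,  with P_down = 1 - P_up,
# integrated with an adaptive Runge-Kutta method (RK45).
import numpy as np
from scipy.integrate import solve_ivp

lam, mu = 0.02, 0.5          # per-hour failure and repair rates (assumed)

def kolmogorov(t, p):
    p_up = p[0]
    return [-lam * p_up + mu * (1.0 - p_up)]

sol = solve_ivp(kolmogorov, t_span=(0.0, 720.0), y0=[1.0], method="RK45",
                dense_output=True)                      # roughly one month of operation
availability = sol.sol(np.array([24.0, 168.0, 720.0]))[0]
steady_state = mu / (lam + mu)
print("A(t) at 1 day, 1 week, 1 month:", np.round(availability, 4))
print("long-run availability:", round(steady_state, 4))
```
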
Several attempts have been made by various researchers and authors to find the availability of practical industrial
system using different techniques. Dhillon and Natesan (1983) examined the availability of power system in fluctuating
environment. Singh I.P. (1989) studied the reliability analysis of a complex system having four types of components
with pre-emptive priority repairs. Singh and Dayal (1992) studied the reliability analysis of a repairable system in a
fluctuating environment. Gupta et al. (2005) evaluated the reliability parameters for butter manufacturing system in a
dairy plant considering exponentially distributed failure rates of various components. Solanki et al. (2006) evaluated the
reliability of thermal-hydraulic passive systems using thermal hydraulic code RELAP 5/MOD 3.2(which operate in two
phase natural circulation). Rajpal et al (2006) employed artificial neural network for modelling reliability, availability
and maintainability of a repairable helicopter transport facility. Kumar et al. (2007) developed a simulated availability
model for the CO2 cooling system of a fertilizer plant. Goyal et al. (2009) discuss the steady state availability analysis of
a part of a rubber tube production system under pre-emptive priority repair using the Laplace transform technique. Garg et
al. (2010) computed the availability of crank-case manufacturing in a 2-wheeler automobile industry and of a block board
system under pre-emptive priority discipline.
In this paper a sub-system of the practical plant “Liberty Shoes Limited”, which is a continuous production system, is
taken, and the time dependent system availability of the system is estimated using an advanced and sensitive
numerical technique known as the adaptive step-size Runge-Kutta method. The earlier work carried out by most
research groups does not address this aspect of time dependent availability; it provides only the long run or steady
state availability of the system by taking time to infinity. The Liberty shoe making plant situated in Karnal, Haryana, India
is chosen for study. Numerical results based upon the true data collected from industry are presented to illustrate the



International Journal of Industrial Engineering, 19(2), 90-100, 2012.

SIMULATION MODELING OF OUTBOUND LOGISTICS OF SUPPLY


CHAIN: A CASE STUDY OF TELECOM COMPANY
Arvind Jayant1, S. Wadhwa2, P. Gupta3, S.K. Garg4
1,3 Department of Mechanical Engineering, Sant Longowal Institute of Engg. & Technology, Longowal, Sangrur, Punjab – 148106 (INDIA)
2 Department of Mechanical Engineering, Indian Institute of Technology, Delhi (INDIA)
4 Department of Mechanical Engineering, Delhi Technological University, Delhi-110042
Corresponding author: Arvind Jayant, arvindjayant@rediffmail.com

The present work has been done for a telecom company with a focus on cost and flexibility in dealing effectively with
changing scenarios. In this paper, the major problems faced by the company at the upper end of the supply chain and at the
sales outlets are analyzed, a complete inventory analysis of one of the company's products is done by developing an
inventory model for the company's bound store/distribution center, and an optimal inventory policy is suggested for the
outbound logistics on the basis of simulation analysis. This model is flexible enough to respond to market fluctuations more
efficiently and effectively.
The model is developed in Microsoft EXCEL.

Significance: Increasing competitive pressures and market globalization are forcing firms to develop supply chains that
can quickly respond to customer needs. The inventory model for the company’s bound store/outbound
logistics has been developed and simulated to reduce operating cost and stock-outs and to make the supply chain agile.

Key words: Supply Chain, Outbound Logistics, Information Technology, Simulation, Operating Cost, Inventory.

(Received 4 August 2010; Accepted in revised form 28 February 2012)

1. INTRODUCTION
The basis of global competition has changed. No longer are companies competing against other companies, but rather
supply chains are competing against supply chains. Indeed, the success of a business is now invariably measured neither by
the sophistication of its product nor by the size of the market share. It is usually seen in the light of the ability to sometimes
forcefully and deliberately harness its supply chain to deliver responsively to the customers as and when they demand it.
A flexible supplier-manufacturer relationship is the key enabler in supply chain management; without flexibility on
the vendor side the supply chain cannot respond fast. Therefore, the relationship with the supplier should be flexible enough
to meet changing market needs [2].
In this paper several experiments were carried out on the model to visualize the impact of the various decision
variables on the total cost and then to fix the values of (s) and (S). Graphs showing the impact of these parameters
on the performance of the individual entities and the system were plotted. Based on the system’s performance under different sets
of operating decisions we analyze the effect of the different parameters and the manner in which their settings
affect the performance of others across the chain. The parameters whose impact was studied are the stock level (S) and reorder
level (s); this paper deals with the impact of increases in the stock level and reorder level of the warehouse on overall system
performance [6].
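
A minimal sketch of an (s, S) policy simulation in the spirit of these experiments is shown below; the authors' model was built in Microsoft Excel, and the demand, cost, and lead-time figures used here are assumed purely for illustration.

```python
# Minimal (s, S) policy sketch: reorder up to S whenever the inventory position
# drops to s or below; compare total cost across a few candidate (s, S) pairs.
import random

def simulate_sS(s, S, days=365, seed=7,
                holding=1.0, shortage=20.0, order_cost=500.0, lead_time=3):
    rng = random.Random(seed)
    on_hand, pipeline, cost = S, [], 0.0
    for _ in range(days):
        pipeline = [(d - 1, q) for d, q in pipeline]            # orders in transit age by one day
        arrived = sum(q for d, q in pipeline if d <= 0)
        pipeline = [(d, q) for d, q in pipeline if d > 0]
        on_hand += arrived
        demand = rng.randint(20, 80)                            # uncertain daily demand (assumed)
        shipped = min(on_hand, demand)
        cost += shortage * (demand - shipped) + holding * (on_hand - shipped)
        on_hand -= shipped
        position = on_hand + sum(q for _, q in pipeline)
        if position <= s:                                       # reorder up to S
            pipeline.append((lead_time, S - position))
            cost += order_cost
    return cost

for s, S in [(100, 400), (150, 400), (150, 600)]:
    print(s, S, round(simulate_sS(s, S)))
```
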

2. ABOUT THE PRODUCT


Bharti Teletech is a giant in the manufacturing of all kinds of telephone sets for the Department of Telecommunication,
the open market, and for export. The company's share in this segment, at 35%, is the highest in India. The company
produces seven models of telephone under the brand name Beetal.
§ The company is currently facing the problem of delivering the CORAL and MILL-I models of phones on the scheduled
date. Though the number of shortages is small, any delivery made beyond schedule is considered a lost
opportunity for sale.
§ The Coral is a general model for the open market and its demand is highly uncertain; therefore frequent stock-outs
occur at the bound store and warehouse.
§ The forecasts generated using the 6-month average were not giving appropriate results.
§ The warehouse is not using any inventory policy, and the reorder level of the warehouse was set intuitively.
International Journal of Industrial Engineering, 19(2), 101-115, 2012.

SIMULATION-BASED OPTIMIZATION FOR RESOURCE ALLOCATION AT

THIRD-PARTY LOGISTICS SYSTEMS


Yanchun Pan1, Ming Zhou2, Zhimin Chen3
1,3 College of Management, Shenzhen University, P.R. China
2 Center for Systems Modeling and Simulation, Indiana State University
Corresponding author: Ming Zhou, mzhou@indstate.edu

Allocating resources at third-party logistics systems differs significantly from doing so at traditional private logistics systems.
The resources are considered commodities sold to customers of different types. Total yield suffers when too much is allocated
to lower-rate or price-sensitive customers, but the resource becomes “spoiled” when too much is reserved for full-rate or
time-sensitive customers who do not arrive as expected. Uncertain order characteristics make the optimization of such
decisions very hard, if not impossible. In this paper we propose a simulation-based optimization approach to address these issues.
A genetic algorithm based optimization module is developed to generate and search for good solutions, and a discrete-event
simulation model is created to evaluate the solutions generated. The two modules are integrated to work in evolutionary
cycles to achieve the optimization. The study also compared the GA/simulation model with a more traditional approach,
response surface methodology, via designed experiments. The models were validated through experimental analysis.

Keywords: resource allocation; simulation; genetic algorithm; optimization; third-party logistics

(Received 2 September 2010; Accepted in revised form 1 March 2012)

1. INTRODUCTION

Studies on third-party logistics (TPL) systems have been thriving over the last two decades, as TPL systems gain popularity in
many parts of the world through the flexibility and convenience they provide to improve the quality and efficiency of
logistics services and customer satisfaction (Lambert et al, 1998; Bowersox et al, 2002). Resource or capacity allocation
(e.g. allocation of warehouse space for temporary storage of customer goods) at TPL systems differs significantly from
traditional private logistics system. Unlike private systems, TPL companies use “public warehouses” that are usually more
efficient than private ones through better productivity, shared resources, economy of scale, and transportation (delivery)
consolidation (Ackerman, 1994); and consider the resources to be allocated as commodities sold directly to different
customers repeatedly via services generated based on the resources, such as storing, handling, or transporting goods. Also
such resources are considered “perishable” when they are not sold at or during a period of time, i.e. they cause the loss of
possible revenue that could have been otherwise generated if they were sold (Phillips, 2005).
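
The evolutionary cycle described in the abstract, in which a genetic algorithm proposes allocations and a simulation evaluates them, can be sketched as follows. The "simulation" here is a simple stochastic stand-in for the discrete-event model, and the capacity, rates, and demand distributions are assumed for illustration only.

```python
# Minimal sketch of the GA/simulation cycle: the GA proposes how much warehouse
# space to reserve for full-rate customers, and a stochastic evaluator (a stand-in
# for the discrete-event simulation) scores each candidate allocation.
import random

CAPACITY, FULL_RATE, DISCOUNT_RATE = 1000, 10.0, 6.0
rng = random.Random(3)

def simulate_revenue(reserved, replications=200):
    """Stub 'simulation': expected revenue when `reserved` units are held back
    for full-rate demand and the rest is sold at the discounted rate."""
    total = 0.0
    for _ in range(replications):
        full_demand = rng.gauss(300, 120)            # uncertain full-rate demand (assumed)
        discount_demand = rng.gauss(900, 150)        # plentiful discounted demand (assumed)
        sold_full = max(0.0, min(full_demand, reserved))
        sold_disc = max(0.0, min(discount_demand, CAPACITY - reserved))
        total += FULL_RATE * sold_full + DISCOUNT_RATE * sold_disc
    return total / replications

def ga_optimize(pop_size=30, generations=40):
    pop = [rng.randint(0, CAPACITY) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate_revenue, reverse=True)             # maximize expected revenue
        elite = pop[: pop_size // 3]
        pop = elite + [min(CAPACITY, max(0, rng.choice(elite) + rng.randint(-50, 50)))
                       for _ in range(pop_size - len(elite))]    # mutate around elites
    return max(pop, key=simulate_revenue)

best = ga_optimize()
print("reserved for full-rate customers:", best,
      "expected revenue:", round(simulate_revenue(best)))
```
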
As in airline or hospitality industries, there are mainly two types of customer demands, and accordingly two different
approaches for allocating resources to customer orders. First, many customers prefer to place their orders some time in
advance in order to obtain a discounted rate of service. Once allocated, the chunk of resource is “locked in” and subtracted
(from available stock) for the usage period of the order, which is a time period during which the allocated resource is
consumed to generate service for the order. This type of customer is price-sensitive. The risk of over-allocating resource to
this kind of order is that we may lose opportunities to serve more profitable full-rate customers (or customers willing to
pay higher rates). This is known as the “risk of spill” (Humphreys, 1994; Phillips, 2005). On the other hand, there are
customers who are less price-sensitive, but more time-sensitive, i.e. they place orders often at a time very close to (or at) the
International Journal of Industrial Engineering, 19(3), 117-127, 2012.

TRACKING AND TRACING OF LOGISTICS NETWORKS: PERSPECTIVE


OF REAL-TIME BUSINESS ENVIRONMENT
AHM Shamsuzzoha and Petri T Helo
Department of Production
University of Vaasa, PO BOX 700, FI-65101, Finland

Today’s business environments are full of complexities in terms of managing value-adding supply chain and logistics
networks. In recent years, the development of locating and identifying technologies has contributed to fulfilling the growing
demand for tracking and tracing the logistics and/or transportation chain. The importance of tracking and tracing of
shipments is considered quite high for manufacturing firms with respect to managing logistics networks efficiently and
satisfying high customer demand. This paper presents a theoretical overview of the sophisticated technology-based
methodologies or approaches required for solving the complex tracking and tracing problem in logistics and supply chain
networks. A real-life case example is presented with the aim of demonstrating the tracking technology in terms
of identifying the location and related conditions of the case shipment. The overall outcomes of this research are
summarized, along with directions for future research.

Significance: This work reviews the existing tracking and tracing technologies available in the areas of
logistics and supply chain management. It also demonstrates the methodology for implementing such technologies in real-
life business cases and provides insight into tracking and tracing technology with respect to identifying the location, position and
conditions of the shipped items.

Keywords: Logistics tracking and tracing, IT-based solution, Transportation and Distribution network, Real-time
information flow, Business competition.

(Received 3 June 2011; Accepted in revised form 31 July 2011)

1. INTRODUCTION
Identifying the location and knowing the condition of transported items in a real-time business environment is a
growing concern in today’s business. This is very much expected of manufacturing firms in terms of their
business growth and keeping customers happy. The importance of tracking and tracing of shipments is considered quite
high for manufacturing firms in terms of customer service, and it is essential for managing logistics networks efficiently. Global
industries face problems with both tracking and tracing in their logistics networks, which creates huge coordination
problems across product development sites. These problems lose track of production, delivery and
distribution in the complete logistics chain from source to destination, and are responsible for opportunity costs through
customers’ dissatisfaction. A tracking system helps to identify the position of the shipment and informs the customer well in
advance. Without a tracking system it is almost impossible to locate delivered items, and they are often considered lost or stolen,
which causes business loss. Such a system might fulfill the needs of a project manager to map the production process from
transportation to material management (Helo et al., 2005, Helo, 2006).
Recently evolved technologies support the fundamental needs for tracking and tracing the logistics network. The
tracking technology ensures the real-time status update of the target shipment and provides the detailed information
corresponding to location, conditions of the shipments (vibration, damage, missing, etc). In practice, there are several
tracking systems available through GPS, GTIN (EAN Int., 2001), RFID (ISO/IEC, 2000; Chang, 2011), Barcode etc;
however, not all these systems are fully compatible with industry needs. Most of the available tracking and tracing systems utilize
proprietary tracking numbers defined by the individual companies’ operating systems and are based on an information
architecture in which the tracking information is centralized with the provider of the tracking service. Existing tracking systems
are not able to identify the contents within a box, for example whether the box is open or the contents have been lost or stolen.
In order to tackle such misalignments in the logistics channel, state-of-the-art technologies or tools need to be
developed for a sustainable production process. These tools need to be cost effective and at the same time offer the possibility
of reuse or recycling under any circumstances. Before proceeding towards real-time tracking technology, it is crucial to
analyze its possible causes and effects. Optimal performance measures for the technologies could ensure project success for
any industry.
Tracking technologies in logistics networks have so far seen fairly little implementation in the global technology industry.
Mostly, high-volume global industries have implemented this technology with limited capabilities. In these systems, customer
access to the tracking information is usually confined to tracing the
shipments through manual queries such as using a www-site or telephone call, e-mailing, fax or to engage in developing
International Journal of Industrial Engineering, 19(3), 128-136, 2012.

A MATHEMATICAL PROGRAMMING FOR AN EMPLOYEES CREATIVITY


MATRIX CUBIC SPACE CLUSTERING IN ORGANIZATIONS
Hamed Fazlollahtabar*1, Iraj Mahdavi2, Saber Shiripour2, Mohammad Hassan Yahyanejad3
1 Faculty of Industrial Engineering, Iran University of Science and Technology, Tehran, Iran
2 Department of Industrial Engineering, Mazandaran University of Science and Technology, Babol, Iran
3 Mazandaran Gas Company, Sari, Iran
*Corresponding author’s email: hfazl@iust.ac.ir

We investigate different structural aspects of teams’ network organization and their creativity within a knowledge
development program (KDP). Initially, a pilot group of employees in an organization is selected. This group is evaluated
through creativity parameters using a questionnaire. Considering the questionnaires’ data, a creativity matrix is configured
by a binary scoring. Applying the creativity matrix, clustering is performed via mathematical programming. The pilot group
is divided into several research teams, and research subjects are assigned to them. Finally, an allocated problem is
solved and some new research subjects evolve to be assigned to the next configured teams. This procedure is repeated
dynamically for different time periods.

Keywords: Creativity matrix; Intelligent clustering; Cubic space clustering

(Received 28 September 2011; Accepted in revised form 20 December 2011)

1. INTRODUCTION

In today’s knowledge-intensive environment, Knowledge Development Programs (KDPs) are increasingly employed for
executing innovative efforts (Oxley and Sampson, 2004; Smith and Blanck, 2002). Researchers and practitioners mainly
agree that effective management plays a critical role in the success of such KDPs (Pinto and Prescott, 1988). Unfortunately,
the knowledge and experience base of most managers refer to smaller-scale projects consisting of only a few project teams.
This may be responsible for what Flyvbjerg et al. (2003) call a ‘performance paradox’: ‘‘At the same time as many more
and much larger infrastructure projects are being proposed and built around the world, it is becoming clear that many such
projects have strikingly poor performance records ...”.
KDPs follow a project-management-like approach with the team as the organizational nucleus (e.g., van Engelen
et al., 2001). The information network of these teams defines the opportunities available to them to create new knowledge
(e.g., Uzzi, 1996). As many scholars have argued, networks of organizational linkages are critical to a host of
organizational processes and outcomes (e.g., Baum and Ingram, 1998; Darr et al., 1995; Hansen, 1999; Reagans and
McEvily, 2003; Szulanski, 1996). New knowledge is the result of creative achievements. Creativity, therefore, forms the
foundation for a poor or high degree of performance. The extent to which teams in KDPs produce creative ideas depends not
only on their internal processes and achievements, but also on the work environment in which they operate (e.g., Amabile et
al., 2004; Perry-Smith and Shalley, 2003; Reiter-Palmon and Illies, 2004). Since new knowledge is mainly created when
existing bases of information are disseminated through interaction between teams with varying areas of
expertise, creativity is couched in interaction networks (e.g., Leenders et al., 2003; Hansen, 1999; Ingram and Robert, 2000;
Reagans and Zuckerman, 2001; Tsai, 2001; Uzzi, 1996).
Any organization needs team work among employees for productivity purposes in problem solving. Organizations face
various problems in their determined missions. A useful approach to address these problems is to configure teams
consisting of expert employees. Due to their knowledge and experience of the organization, these teams understand the
organization's problems better than external research groups and thus may solve the problems more effectively.
Hence, the significant decision to be made is configuration of the teams. Creative teams would be able to propose more
practical and beneficial solutions for organization's problems. Since creativity is a qualitative concept, analyzing and
decision making require knowledge management algorithms and methodologies. These methodologies are employed in the
different steps of configuring teams, task assignment to teams, teams' progress assessment and executive solution proposals
for problems.
In the present work, we propose a creativity matrix for analyzing the creativity parameters of a pilot group in an organization.
Then, using an intelligent clustering technique, research teams are configured and research subjects are allocated to them.
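To make the pipeline above concrete, the following minimal sketch builds a binary creativity matrix from Likert-type questionnaire scores and groups employees into teams. The paper performs the clustering via mathematical programming; k-means is used here only as a simple stand-in, and the threshold, matrix size and number of teams are illustrative assumptions.

```python
# Illustrative sketch only: binary creativity matrix + a k-means stand-in for
# the paper's mathematical-programming clustering. All numbers are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(20, 8))         # 20 employees x 8 creativity parameters (Likert 1-5)
creativity_matrix = (scores >= 4).astype(int)     # binary scoring: 1 if the parameter is clearly present

teams = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(creativity_matrix)
for t in range(4):
    print(f"team {t}: employees {np.where(teams == t)[0].tolist()}")
```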



International Journal of Industrial Engineering, 19(3), 137-148, 2012.

ACCEPTANCE OF E-REVERSE AUCTION USE: A TEST OF COMPETING MODELS

Fethi Calisir and Cigdem Altin Gumussoy
Department of Industrial Engineering
Istanbul Technical University

This study aims to understand the factors affecting e-reverse auction usage in companies by comparing three models: the
Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB) and an integrated model (an integration of
TAM and TPB). The comparison of the models answers two important questions: first, whether the explanation rates of
behavioral intention to use and actual use increase when the models are integrated; second, whether TAM, which was
developed specifically to explain the usage of information technologies (IT), is the most powerful model for explaining
e-reverse auction usage. Using LISREL 8.54, data collected from 156 employees working in the procurement departments
of companies in 40 different countries were used to test the models. The results indicate that TPB may be more appropriate
than TAM and the integrated model for explaining behavioral intention to use e-reverse auctions. Further, the explanation
rates of both behavioral intention to use and actual use are not increased by integrating the models. The results also suggest
that behavioral intention to use is explained only by attitude towards use in TAM, and by subjective norms, perceived
behavioral control and attitude towards use in both TPB and the integrated model. Actual use of e-reverse auctions is
directly predicted by behavioral intention to use in all three models. The study concludes with a discussion of the findings,
implications for practitioners and recommendations for possible future research.

Significance: This paper identifies significant factors affecting e-reverse auction usage among buyers working in
the procurement departments of companies by comparing three models: TAM, TPB and the integrated
model. The comparisons explore whether the explanation rates of behavioral intention to use and
actual use increase with the integration of the models, and whether TAM is the most powerful model
for explaining the usage behavior of e-reverse auction users.

Keywords: E-reverse auction, TAM, TPB, Integrated model, Actual use, Model comparison

(Received 7 June 2011; Accepted in revised form 18 September 2011)

1. INTRODUCTION
An e-reverse auction is an online, real-time auction between a buying company and two or more suppliers (Carter et al.,
2004). The e-reverse auction tool was first offered by FreeMarkets in 1999 and has since been adopted progressively more
intensively by firms. Several Fortune Global 2000 companies use e-reverse auctions as a purchasing tool
(Giampietro and Emiliani, 2007). For example, General Electric spends $50-60 billion per year, and those in positions of
responsibility believe that 50-66% of this amount can be auctioned (Hannon, 2001).
Using e-reverse auctions offers many advantages to buyers as well as suppliers. Price reduction is undoubtedly the most
important one: suppliers may have to make larger price reductions to win the auction (Giunipero and Eltantawy, 2004). In
addition to the price advantage, increased buyer productivity, reduced cycle time, simultaneous access to many suppliers,
a more competitive environment, standardization, and transparency in the purchasing process are further advantages of
e-reverse auctions. All these advantages create more opportunities for companies by reducing cost and time, enabling them
to offer higher quality products (Carter et al., 2004; Bartezzaghi and Ronchi, 2003). In 2000, General Electric saved $480
million by using e-reverse auctions on its $6.4 billion expenditure (Hannon, 2001). E-reverse auctions benefit not only
buyers but also suppliers. Suppliers gain access to growing markets with system users all over the world, can compare
their own competitiveness in the market, and can follow up auctions by potential customers on the Internet. They can also
estimate their customers’ needs and market trends by checking the specifications and conditions of e-reverse auctions for
products and services. Thus, suppliers can see both areas for improvement and their own needs for improvement (Emiliani,
2000; Mullane et al., 2001). It is therefore important to explain and understand the factors that affect the use of e-reverse
auctions, as these factors bear on improving the performance of both companies and employees. To our knowledge, the
only study that compares models in the context of e-auctions is Bosnjak et al. (2006). Their study aims to explain English
auction use, which is generally found in business-to-consumer and consumer-to-consumer markets, whereas the current
study concerns e-reverse auction technology, used for the procurement of products or services in business-to-business
markets.
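As a rough illustration of the structural relations the three models test (not the authors' LISREL estimation), the sketch below approximates the TPB paths with ordinary least squares: behavioral intention regressed on attitude, subjective norms and perceived behavioral control, and actual use regressed on intention. Variable names and the toy data are hypothetical.

```python
# OLS approximation of the TPB paths; a sketch with synthetic data, not the
# authors' structural equation model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 156
df = pd.DataFrame({"ATT": rng.normal(size=n), "SN": rng.normal(size=n), "PBC": rng.normal(size=n)})
df["BI"] = 0.5 * df["ATT"] + 0.3 * df["SN"] + 0.2 * df["PBC"] + rng.normal(scale=0.5, size=n)
df["USE"] = 0.7 * df["BI"] + rng.normal(scale=0.5, size=n)

print(smf.ols("BI ~ ATT + SN + PBC", data=df).fit().params)   # intention paths
print(smf.ols("USE ~ BI", data=df).fit().params)              # actual-use path
```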



International Journal of Industrial Engineering, 19(3), 149-160, 2012.

A METHODOLOGY FOR PERFORMANCE MEASUREMENT IN MANUFACTURING COLLABORATION

Jae-Yoon Jung1, JinSung Lee1, Ji-Hwan Jung2, Sang-Kuk Kim1, and Dongmin Shin3
1 Department of Industrial and Management Systems Engineering, Kyung Hee University, Korea
2 Business Innovation Center, LG Display, Korea
3 Department of Industrial and Management Engineering, Hanyang University, Korea

Corresponding author: Dongmin Shin, dmshin@hanyang.ac.kr

Effective performance measures must be developed in order to maintain successful collaboration. This
paper presents a methodology of collaborative performance measurement to evaluate the overall performance of a
collaboration process between multiple manufacturing partners. The partners first define collaborative key performance
indicators (cKPIs), then measure the cKPIs and calculate a synthetic performance from the cKPI values to
evaluate the result of the collaboration case. To measure cKPIs with different scales, we develop a two-folded desirability
function based on the logistic sigmoid function. The proposed methodology provides a quantitative way to measure
collaborative performance in order to effectively manage collaboration among partners, continuously improving
collaboration performance.

Keywords: Manufacturing collaboration, performance measurement, collaborative key performance indicators, two-
folded desirability function, sigmoid function.

(Received 17 May 2011; Accepted in revised form 18 September 2011)

1. INTRODUCTION

One important change in the manufacturing industry is that competition between individual companies has been
extended to competition between the manufacturing networks surrounding the companies (NISA, 2001). This is
because the competitive advantages of modern manufacturing companies are derived from manufacturing collaboration
in virtual enterprise networks such as supply chains (Mun et al., 2009). Most existing performance measures, however,
have been developed to evaluate the performance of internal or outsourcing projects from the perspective of a single
company (Ghalayini et al., 1997; Khadem et al., 2008; Koc, 2011). Moreover, some performance indicators such as
trading costs are oriented to a single company, and cannot be directly applied to measuring the collaboration
performance since such indicators conflict between two partners. As a result, new collaborative performance measures
are needed so that collaboration partners can make arrangements and compromises with each other, reflecting their
common interests.
In this paper, we first introduce the concept of collaborative key performance indicators (cKPIs), which are defined
to measure the collaboration performance of multiple manufacturing partners. cKPIs are calculated by using several
key performance indicators (KPIs) which individual partners can measure. For this research, we referred to the Supply
Chain Operations Reference (SCOR) model (SCC, 2006) to define cKPI for manufacturing collaboration. Since the
SCOR model provides corresponding performance metrics as well as several levels of supply chain process models, it
can be a good reference for defining collaborative performance indicators (Barratt, 2004).
In addition, we developed a two-folded desirability function to reflect the characteristics of performance indicators in
manufacturing collaboration. The desirability function, which is based on the sigmoid function, can reflect multiple
cKPI criteria in service level agreements (SLA). Further, unlike existing desirability functions, the sigmoid based
desirability function can transform different scales of cKPIs into values between 0 and 1 without requiring maximum or
minimum values (Lee and Yum, 2003). The weighted values of two-folded desirability functions for all cKPIs are
summed to determine the synthetic performance of a collaboration, which can be compared with prior performance or
partners’ performance.
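A minimal sketch of the idea follows: a logistic-sigmoid desirability maps each cKPI onto (0, 1) around an agreed target without requiring minimum or maximum bounds, and the weighted desirabilities are summed into a synthetic performance score. The exact two-folded form used in the paper, as well as the cKPIs, targets, slopes and weights below, are assumptions for illustration.

```python
# Sketch of a sigmoid-based desirability and a weighted synthetic performance
# score; the specific cKPIs, targets, slopes and weights are hypothetical.
import math

def desirability(value, target, slope, larger_is_better=True):
    sign = 1.0 if larger_is_better else -1.0
    return 1.0 / (1.0 + math.exp(-sign * slope * (value - target)))

ckpis = {                                   # cKPI: (measured value, target, slope, direction)
    "on_time_delivery_pct": (93.0, 95.0, 0.8, True),
    "order_cycle_days":     (6.5, 5.0, 1.2, False),
}
weights = {"on_time_delivery_pct": 0.6, "order_cycle_days": 0.4}

synthetic = sum(weights[k] * desirability(*ckpis[k]) for k in ckpis)
print(round(synthetic, 3))
```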
This paper is organized as follows. We first introduce the background of our research in Section 2. The framework
of collaborative performance management is presented, along with the concept of cKPI, in Section 3. Subsequently,
how to design the collaborative performance indicators and how to measure the performance indicators of
manufacturing collaboration are described in Section 4 and Section 5, respectively. Finally, Section 6 concludes this
paper.

2. BACKGROUND

2.1 Collaboration in Manufacturing Processes


The manufacturing sector remains a critical backbone of a nation’s economy, even as other industries such as the information
and service sectors rapidly emerge as engines of economic growth in developed countries. In order for manufacturing



International Journal of Industrial Engineering, 19(3), 161-170, 2012.

A FRAMEWORK FOR THE ADOPTION OF RAPID PROTOTYPING FOR SMEs: FROM STRATEGIC TO OPERATIONAL
Ayyaz Ahmad*, Muhammad Ilyas Mazhar and Ian Howard
Department of Mechanical Engineering,
Curtin University of Technology, WA 6102, Australia

*Corresponding author: Ayyaz Ahmad, ayyaz.ahmad@postgrad.curtin.edu.au

Rapidly changing global markets, an unprecedented increase in product flexibility requirements and shorter product life
cycles require more efficient technologies that can help reduce the time to market, which is considered a crucial factor for
survival in today’s highly volatile market conditions. Rapid prototyping technology (RPT) has the potential to achieve
remarkable reductions in product development time. However, its fast pace of development, combined with increasing
complexity and variety, has made the task of RPT selection difficult and challenging, resulting in low diffusion, particularly
at the SME level. This paper systematically presents (i) the issues and challenges behind low RP adoption, (ii) the
importance of SMEs and the challenges they face, to highlight the magnitude of the problem, and (iii) previous work in the
area of technology selection and adoption, and finally offers an adoption framework dedicated to the adoption of RP
technology that considers the manufacturing, operational, technology and cost drivers for a proper fit of the technology
into the business.

Significance: Rapid Prototyping (RP) exhibits unique characteristics and can have potential impact on all business
functions, which demands a methodological approach for the evaluation and adoption of the technology.
The main focus of this study is to propose a framework that facilitates the RP adoption from strategic to
operational level to ensure complete and effective implementation to obtain the desired objectives, with a
special emphasis on SMEs.

Keywords: Rapid prototyping, Technology adoption, SMEs, Technology Selection, Competitiveness

(Received 3 June 2011; Accepted in revised form 18 September 2011)

1. INTRODUCTION
The changes in the global economic scenario have posed considerable threats to many companies, especially SMEs, as they
strive to stay competitive in world markets. This paradigm shift demands more flexibility in product design. These
challenges, combined with increased variety and very short lead times, have a great impact on the ability of small to
medium companies to secure a significant proportion of the markets in which they operate. Conventional approaches and
technologies are struggling to meet business needs. Consequently, manufacturers are searching for more efficient
technologies, such as rapid prototyping, that can help them embrace these challenges. A critical activity for small companies
is decision-making on the selection and adoption of these advanced technologies. The SME’s task becomes more difficult
because of the absence of any formal procedures (Ordoobadi et al., 2001). An advanced technology can be a great
opportunity for a business, but it can also be a threat: choosing a wrong alternative, or over-investing in the right one, can
reduce a company's competitive advantage (Trokkeli and Tuominen, 2002). The changing picture of competition requires
synchronization between the business and new trends, which demands unique and effective solutions. These solutions
should be designed to support SMEs, keeping their specific nature in view, and ought to be simple, comprehensive and very
practical so that SMEs remain an effective part of the global value chain.
To meet these global challenges, the design and manufacturing community is adopting RP technology to remain
efficient as well as competitive. RP technology has enormous potential to shrink the product design and development
timeline. Despite these great advantages, the adoption of RP at the SME level is significantly low. A survey of 262 UK
companies showed that 85% do not use RP. Lack of awareness of what RP technology offers, and of how it can be
successfully linked into business functions, are the key factors holding this sector back from RP adoption. The majority of
the groups who indicate that RP is irrelevant are unaware of the impact it can have on their business (Grenada, 2002). The
situation is even worse in developing countries. Laar highlights the sensitivity of the issue by arguing that many engineers
and R&D people are still unaware of the future implications of this technology. This is a major concern given that technical
departments are ignoring RP/RM when it has already entered world-leading markets and has the potential to completely
change the way business is done (Laar, 2007). Kidds argues that RP



International Journal of Industrial Engineering, 19(3), 171-180, 2012.

AN INTEGRATED UTILISATION, SCHEDULING AND LOT-SIZING ALGORITHM FOR PULL PRODUCTION
Olufemi A.B. Adetunji, Venkata S.S. Yadavalli
Department of Industrial and Systems Engineering,
University of Pretoria, Hatfield, Pretoria 0002, South Africa

We present an algorithm that continuously reduces the batch sizes of products on non-constraining resources in a production
network by utilizing the idle time on such resources. This leads to a reduction in holding cost and an increase in the
frequency of batch release in the production system, which in turn reduces the customer-facing supply lead time. Such a
technique can be valuable in typical pull production systems such as lean manufacturing, theory of constraints or
Constant-Work-in-Process (CONWIP) processes. An example is used to demonstrate a real-life application of the algorithm,
which was found to perform better for system cost minimization than a previous algorithm that uses the production run
length as the criterion for batch reduction.

Keywords: Lot-sizing, Utilization, Setup, Pull production, Scheduling algorithm

(Received 23 May 2011; Accepted in revised form 28 May 2012)

1. INTRODUCTION
Traditionally, a lot size is taken to be the quantity of product contained in a production or purchase batch. This definition
is also congruent with the classical economic order quantity batching model, which basically assumes that the decision of
what quantity to produce is made independently of job scheduling; this assumption is now being relaxed and the concept
redefined. Potts and Wassenhove (1992), for instance, defined batching as deciding whether or not to schedule similar jobs
contiguously, and lot sizing as deciding when and how to split the production of identical items into sub-lots. They noted
that these decisions were traditionally taken as if lot sizing were independent of job scheduling. This is evident from the
majority of the literature, which treats the two subjects separately, giving the impression that scheduling decisions are taken
only after the lot sizes of the various products have been decided. This assumption of independence is usually untrue, as the
decisions are closely intertwined. Potts and Wassenhove also proposed a general model for integrated batching, lot sizing
and scheduling. Drexl and Kimms (1997) noted that lot-sizing and scheduling are two short-term production planning
decisions that must be tied to the medium-term plan, which is the master production schedule of the system. Many models
have since been published addressing integrated batching, lot sizing and scheduling. Potts and Kovalyov (2000) and Webster
and Baker (1995), together with Potts and Wassenhove (1992) and Drexl and Kimms (1997), provide good overviews.
There is also a close relationship between system utilization and other system parameters such as the work-in-process
inventory (WIP), and consequently the system holding cost and profitability. Variability in resource processing times and/or
input arrival patterns has a degrading influence on the WIP level, especially as the system gets close to full utilization. This
is succinctly summarized in Little’s law. The effect of resource utilization on the production plan and the level of WIP
appears not to have been well studied. The few known models incorporating resource utilization into production scheduling
include Rappold and Yoho (2008) and a model proposed in Hopp (2008). The procedure proposed by Hopp is simple and
straightforward to use, and it is this procedure that has been extended, and hopefully improved, in this paper.
Next is a brief review of some current work on integrated lot-sizing. We then briefly review some principles of constraint
management pertinent to our model, especially the emphasis on balancing flow rather than capacities, which creates pockets
of spare capacity (labor and machine), and the breakdown of the total cycle time of manufacturing resources and jobs, which
identifies the locations and quantities of idle capacity in the system; this idle capacity can then be used to improve job
scheduling through reduced customer-facing lead times and smaller lot sizes. The insight derived is useful in other pull
production environments as well, since all pull techniques (including lean and CONWIP) prefer to concentrate on flow and
to buffer input and process variability with spare capacity rather than excess inventory.
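A back-of-the-envelope illustration of the utilization effect summarized by Little's law is sketched below, using the standard VUT-style queueing approximation (in the spirit of Hopp and Spearman) rather than anything specific to this paper; the process time and variability values are invented.

```python
# Little's law and a VUT-style queue-time approximation: WIP and cycle time
# blow up as utilization approaches 1. All parameter values are hypothetical.
te, ca2, ce2 = 1.0, 1.0, 1.0                        # mean process time, squared CVs of arrivals and service
for u in (0.70, 0.85, 0.95, 0.99):
    wq = ((ca2 + ce2) / 2) * (u / (1 - u)) * te     # expected queue time
    ct = wq + te                                    # cycle time
    wip = (u / te) * ct                             # Little's law: WIP = throughput x cycle time
    print(f"utilization={u:.2f}  cycle_time={ct:6.1f}  WIP={wip:6.1f}")
```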

2. INTEGRATED SCHEDULING AND LOT SIZING MODELS


Solving integrated batching, lot sizing and scheduling problems has received more research attention recently. This could
also have been buoyed by the development of many heuristics and techniques for solving difficult combinatorial problems.
Among the recently published work in this area is Toledo et al. (2010), which evaluated different parallel algorithms



International Journal of Industrial Engineering, 19(4), 181-192, 2012.

THE OPTIMAL ORGANIZATION STRUCTURE DESIGN PROBLEM IN MAKE-TO-ORDER ENTERPRISES

Jesús A. Mena
Department of Industrial Engineering,
Monterrey Institute of Technology Campus, Chihuahua, Mexico

This paper addresses the organization structure design problem in a make-to-order (MTO) operating environment. A
mathematical model is presented to aid an operations manager in an MTO environment in selecting a set of potential
managerial layers so as to minimize operation and supervision costs. Given a Work Breakdown Structure (WBS) for a
specific project, solving this model leads to an optimal organization structure design. The proposed model considers the
allocation of tasks to workers, accounting for the complexity and compatibility of each task with respect to the workers, and
the management requirements for planning, execution, training and control in a hierarchical organization. The model
addresses the span-of-control problem, provides a quantitative approach to the organization design problem, and is intended
for application as a design tool in make-to-order industries.

Keywords
Span of control, Organizational Design, Hierarchical Organization, Assignment Problem, Make-to-order

(Received 20 Sept 2011; Accepted in revised form 2 Jan 2012)

1. INTRODUCTION
The span of management is perhaps the most discussed single concept in classical, neo-classical or modern management
theory. Throughout its evolution it has been referred to by various titles such as span of management, span of control, span
of supervision, and span of authority (Van Fleet & Benedian, 1977). Existing research focuses principally on qualitative
methods to analyze this concept, i.e., heuristic rules based on experience and/or intuition. This research develops an
analytical model to determine the number of managerial layers; it is motivated by the need for an evaluation tool for
function-based companies and a design tool for project-based companies.
The challenge of mass customization brings great value to both the customer and the company. For example, building
cars to customer order eliminates the need for companies to hold billions of dollars worth of finished stock. Any company
able to free this capital would improve their competitive position, and be able to reinvest in future product development.
The question for many company executives is how efficient the organizational structure could be. The need for frequent
adjustment to an organizational structure can be found in this type of make-to-order or project-based companies, where
work contents and its organizational structure could vary dramatically over a short period of time.
This paper presents an analytical model for analyzing hierarchical organizations. It considers various factors that affect
the requirement for supervision and formulates them into an analytical model which aims at optimizing the organizational
design. The decisions include the allocation of tasks to workers, considering the complexity and compatibility of each task
with respect to the workers, and the management requirements for planning, execution, training and control in a hierarchical
organization. The model is formulated as a 0-1 mixed integer program. Its objective is to minimize operational cost, defined
as the sum of the supervision costs at each level of the hierarchy and the cost of the workers assigned to tasks. The model
addresses the span of control problem and provides a quantitative approach to the organization
design problem and is intended for applications as a design tool in the make-to-order industries. Each project-based
company may have to frequently readjust its organizational structure, as its capability and capacity shifts over time. It
could also be applied to functionality based companies as an evaluation tool, to assess the optimality of their current
organization structure.
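A toy sketch of a 0-1 model in this spirit is shown below: tasks are assigned to compatible workers, workers are activated as needed, and managers are added so that each supervises at most a fixed span of workers. This is not the paper's full formulation; the costs, capacities, compatibilities and span value are hypothetical.

```python
# Toy 0-1 assignment / span-of-control sketch (not the paper's model).
import pulp

tasks, workers = range(6), range(4)
compatible = {(t, w): (t + w) % 2 == 0 or w == 3 for t in tasks for w in workers}
capacity, span = 2, 2                      # tasks per worker, workers per manager (assumed)
worker_cost, manager_cost = 100, 150       # hypothetical costs

m = pulp.LpProblem("span_of_control", pulp.LpMinimize)
pairs = [(t, w) for t in tasks for w in workers if compatible[(t, w)]]
x = pulp.LpVariable.dicts("assign", pairs, cat="Binary")
u = pulp.LpVariable.dicts("use_worker", workers, cat="Binary")
mgr = pulp.LpVariable("managers", lowBound=0, cat="Integer")

m += worker_cost * pulp.lpSum(u[w] for w in workers) + manager_cost * mgr
for t in tasks:                                            # every task assigned to one compatible worker
    m += pulp.lpSum(x[(t, w)] for w in workers if (t, w) in x) == 1
for w in workers:                                          # worker capacity, only if the worker is activated
    m += pulp.lpSum(x[(t, w)] for t in tasks if (t, w) in x) <= capacity * u[w]
m += pulp.lpSum(u[w] for w in workers) <= span * mgr       # enough managers for the span of control

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("workers used:", sum(int(u[w].value()) for w in workers), "managers:", int(mgr.value()))
```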
Meier and Bohte (Meier & Bohte, 2003) have recently reinvigorated the debate on span of control and the optimal
manager-subordinate relationship. They offer a theory concerning the impacts and determinants of span of control and test
it using data from educational organizations. The findings of Theobald and Nicholson-Crotty (2005) suggest that
manager-subordinate ratios, along with other structural influences on production, deserve considerably more
attention than they have received in modern research on administration.



International Journal of Industrial Engineering, 19(4), 193-203, 2012.

A NON-TRADITIONAL CAPITAL INVESTMENT CRITERIA-BASED METHOD TO OPTIMIZE A PORTFOLIO OF INVESTMENTS

Joana Siqueira de Souza1, Francisco José Kliemann Neto2, Michel José Anzanello3, Tiago Pascoal Filomena4
1 Assistant Professor, Engineering School - Pontifícia Universidade Catolica of Rio Grande do Sul, Av. Ipiranga, 6681 - Partenon - 90619-900, Porto Alegre, RS, Brazil.
2 Associate Professor, Department of Industrial and Transportation Engineering, Federal University of Rio Grande do Sul – PPGEP/UFRGS. Av. Osvaldo Aranha, 99, 90035-190, Porto Alegre, RS, Brazil.
3 Assistant Professor, Department of Industrial and Transportation Engineering, Federal University of Rio Grande do Sul – PPGEP/UFRGS. Av. Osvaldo Aranha, 99, 90035-190, Porto Alegre, RS, Brazil.
4 Assistant Professor, School Business, Federal University of Rio Grande do Sul – Rua Washington Luiz, 855. Centro, 90010-460. Porto Alegre, RS, Brazil.

During capital budgeting, companies need to define a set of projects that bring profitability and perpetuity and that are
directly linked to the strategic objectives. This paper presents a practical model for defining a portfolio of industrial
investments during capital budgeting by making use of traditional methods of investment analysis, such as Net Present
Value (NPV), and by incorporating qualitative attributes into the analysis through the multicriteria analysis method called
Non-Traditional Capital Investment Criteria (Boucher and MacStravic, 1991). Optimization techniques are then used to
integrate economic and qualitative attributes subject to budget restrictions. The proposed model was validated in an
automotive company.

Keywords: project portfolio, capital budgeting, net present value, multicriteria analysis, linear programming, decision-
making.

(Received 31 Aug 2010; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION

The definition of a portfolio of projects in capital budgeting appears as an important issue in investment decisions and
industrial planning (Chou et al. 2001). Decisions are seldom made for an isolated project; in most situations, the decision
maker needs to consider several alternative projects relying on particular variables (Borgonovo and Peccati, 2006)
associated not only with financial resources, but also with factors internal and external to the company (Kooros and
Mcmanis, 1998; Mortensen et al. 2008).
Although a large number of robust approaches related to investment decisions have been suggested in the literature,
simplistic methods for evaluating investments are still widely used, and little structured decision making is applied in
portfolio definition. Many assessment methods use discounted cash flow techniques such as the Internal Rate of Return
(IRR), Net Present Value (NPV) and the Profitability Index (PI) (Cooper et al. 1997).
More sophisticated methods can increase the likelihood of solid investments due to a stronger connection to company's
strategy, leading to a more consistent analysis of opportunities (Verbeeten, 2006). Although many of these methods are
appropriate for investment evaluation, Jansen et al. (2004) state they only enable tactical allocation of capital, and seldom
take qualitative aspects into consideration (e.g. strategic aspects). That is corroborated by Arnold and Hatzopoulos (2000)
who found that many firms invest their capital in non-economic projects (i.e. projects that do not necessarily bring
economic benefits to the company), such as projects driven to workers’ health and safety.
One way to incorporate qualitative aspects on decision-making process for capital investment is the adoption of
multicriteria techniques, also known as Multiple Criteria Decision Making (MCDM) methods. A widespread method is the
MAUT - Multiattribute Utility Theory - which relies on a simple and easy method for ranking the alternatives; see Min
(1994). Another popular method is the Analytical Hierarchy Process (AHP), which hierarchically accommodates both
quantitative and qualitative attributes of complex decisions (Saaty, 1980; Vaidya and Kumar, 2006). Successful
applications of AHP can be found in Fogliatto and Guimarães (2004), Rabbani et al. (2005), Vaidya and Kumar (2006), and
Mendoza et al. (2008).
A drawback of AHP is that it accommodates economic and qualitative aspects in different matrices, and also requires the
comparison of all the alternatives over the same criteria. That is undesired when working with investment projects, since
not all projects impact upon the same criteria. For example, a project to renew a truck fleet may have an impact on workers’
ergonomic condition, while a training project might not impact on that criterion. That led Boucher and MacStravic (1991)
to develop an AHP-based multicriteria method for investment decision: the Non-Traditional Capital Investment Criteria
(NCIC).
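The kind of optimization step this approach feeds into can be sketched as a simple 0-1 knapsack: pick projects that maximize a blended score of normalized NPV and an NCIC-style qualitative utility subject to a budget cap. The projects, costs, scores and weights below are invented for illustration and do not reproduce the paper's model.

```python
# Hedged sketch: 0-1 project selection blending NPV and a qualitative score
# under a budget constraint. All data are hypothetical.
import pulp

projects = {  # name: (cost, NPV, qualitative score in [0, 1])
    "fleet_renewal":  (400, 120, 0.70),
    "training":       (150,  40, 0.90),
    "new_line":       (600, 300, 0.50),
    "safety_upgrade": (200,  10, 0.95),
}
budget, w_npv, w_qual = 900, 0.6, 0.4
max_npv = max(v[1] for v in projects.values())

m = pulp.LpProblem("capital_portfolio", pulp.LpMaximize)
pick = pulp.LpVariable.dicts("pick", list(projects), cat="Binary")
m += pulp.lpSum(pick[p] * (w_npv * projects[p][1] / max_npv + w_qual * projects[p][2]) for p in projects)
m += pulp.lpSum(pick[p] * projects[p][0] for p in projects) <= budget

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected:", [p for p in projects if pick[p].value() == 1])
```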
International Journal of Industrial Engineering, 19(4), 204-212, 2012.

AN ANALYTICAL APPROACH OF SENSITIVITY ANALYSIS FOR EOQ

Hui-Ming Teng1,2, Yufang Chiu1, Ping-Hui Hsu1,3, Hui Ming Wee1*


1 Department of Industrial and Systems Engineering, Chung Yuan Christian University, Chungli, Taiwan
2 Department of Business Administration, Chihlee Institute of Technology, Panchiao, Taipei, Taiwan
3 Department of Business Administration, De Lin Institute of Technology, Tu-Cheng, Taipei, Taiwan
* Corresponding author. E-mail: weehm@cycu.edu.tw

This study develops an analytical sensitivity analysis approach for the traditional economic order quantity (EOQ) model.
The parameters are treated as variables, and a direction for deriving the optimal solution is developed using the gradient
approach. A graph of the optimal solution is provided to demonstrate the sensitivity analysis, and a numerical example
illustrates the theory.

Keywords: Economic order quantity (EOQ); Sensitivity analysis; Gradient; Sub-gradient.

(Received 28 Apr 2010; Accepted in revised form 27 Feb 2012)

1. INTRODUCTION

Research on inventory problems is usually summarized by sensitivity analysis (Koh et al., 2002; Weng and McClurg,
2003; Sarker and Kindi, 2006; Ji et al., 2008; Savsar and Abdulmalek, 2008; Patel et al., 2009; Hsu et al., 2010). The
traditional methodology for investigating parameter sensitivities is to evaluate the target value under varying parameters.
Although the performance of the traditional methodology is adequate, the precision of its graphs is limited, mainly because
it cannot fully express the discrete nature of the parameter variation. Ray and Sahu (1992) provided details of sensitivity
analysis factors in productivity measurement for multi-product manufacturing firms. Borgonovo and Peccati (2007) applied
Sobol’s function and the variance decomposition method to determine the most influential parameters on the model output.
Borgonovo (2010) introduced a new way of defining sensitivity measures that does not require differential equations for
sensitivity analysis.
Lee and Olson presented a nonlinear goal programming algorithm based on the gradient method, utilizing an optimal
step length for chance-constrained goal programming models. Arsham (2007) developed a full-gradient method consisting
of three phases: initialization, push and a final iteration phase. The initialization phase provides an initial tableau which
may not have a full set of basic variables. The push phase uses the full gradient vector of the objective function to obtain a
feasible vertex. The final iteration phase uses a series of pivotal steps based on sub-gradients, which leads to an optimal
solution. In each iteration, the sub-gradient provides the desired direction of motion within the feasible region.
In this study, sensitivity analysis based on the traditional economic order quantity (EOQ) model is discussed. A
numerical example is provided to illustrate the theory.
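To illustrate the flavor of treating the parameters as variables and examining the gradient of the optimal solution, the sketch below differentiates the textbook EOQ formula symbolically; it is not the authors' derivation, and the numerical values are arbitrary.

```python
# Gradient of the classical EOQ Q* = sqrt(2DK/h) with respect to its parameters.
import sympy as sp

D, K, h = sp.symbols("D K h", positive=True)   # demand rate, ordering cost, holding cost per unit-time
Q_opt = sp.sqrt(2 * D * K / h)

grad = [sp.simplify(sp.diff(Q_opt, v)) for v in (D, K, h)]
print(grad)                                                       # symbolic sensitivities of Q*
print([float(g.subs({D: 1200, K: 50, h: 2})) for g in grad])      # evaluated at an arbitrary point
```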



International Journal of Industrial Engineering, 19(5), 213-220, 2012.

PRODUCTION LEAD TIME VARIABILITY SIMULATION – INSIGHTS FROM A CASE STUDY

Gandolf R. Finke1, Mahender Singh2, Prof. Dr. Paul Schönsleben1
1 BWI Center for Industrial Management, ETH Zurich, Kreuzplatz 5, 8032 Zurich, Switzerland
2 Malaysia Institute for Supply Chain Innovation, No. 2A, Persiaran Tebar Layar, Seksyen U8, Bukit Jelutong, Shah Alam, 40150 Selangor, Malaysia

We study the impact of disruptions to operations that can cause deviations in the individual processing time of
a task, resulting in longer than planned production lead time. Quality, availability of capacity and required
material as well as variability in process times are regarded as drivers of disruption. The focus is to study the
impact of variability in the lead time on the overall performance of the production system, instead of the
average lead time. A structural and numerical application of the approach is provided in a case study, and the
practical implications of this research are highlighted. Discrete event simulation is used to study the interactions
and draw insights from the case study. Measures to mitigate lead time variability are discussed and their impact is
analyzed quantitatively.

Keywords: Operations management, Production planning, Simulation, Lead time, Variability, Reliability

(Received 15 Nov 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION

1.1 Motivation
Production lead time is a critical driver of process design in a manufacturing company. The concept of time-based
competition stresses the importance of lead times as a competitive advantage and strategic instrument. Shorter lead times
are not only advisable in terms of meeting customer demand and the ability to adapt but also to minimize cost by reducing
inventories and work in progress. As a result, cycle time reduction efforts have garnered a lot of attention in the literature
and industry initiatives.
Although a lower lead time is a worthwhile endeavor, how it is reduced is the all-important decision. Traditionally, these
decisions involve weighing the benefits of the reduction in the average cycle time with the investment required to achieve
the targeted improvement. Little or no attention is paid to the variability in cycle times, however. We will use the terms
variability and reliability to address the same issue in this paper. Through this research we intend to highlight the need for a
formal consideration of the cycle time reliability when implementing measures for lead time reduction.
Although seemingly simple, understanding the system level impact of individual task variability is not straightforward.
Whereas the averages are additive and thus simple to study, the variability is not. We take a simple example from the
reliability domain to illustrate this point. Consider a system that has 20 components, with each one performing at a high
level of 98% reliability individually. Collectively, assuming independence, the reliability of this system is only about 67%!
It deteriorates further to roughly 55% if we add 10 more components! The key point here is that we need to assess reliability in a
holistic manner as individual task processing time variability tends to amplify as it travels through an interconnected
production sequence. In short, a high level of local reliability does not necessarily imply a high level of global reliability.
A deeper understanding of the true system level reliability will motivate the need for redundancy at strategic locations
throughout the system to improve the overall performance. It may in certain situations be more beneficial to have reliable
delivery with longer average lead time, that is minimum or no lead time deviation, than enforcing a shorter average lead
time that is less reliable. This type of analysis will enhance the selection criterion when multiple investment options to
reduce lead time are possible since reliability has direct and indirect cost consequences.
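The series-reliability illustration above is easy to check numerically, as in the short sketch below.

```python
# Series reliability under independence: 98% per task, multiplied across tasks.
for n in (20, 30):
    print(f"{n} tasks: {0.98 ** n:.3f}")
# 20 tasks -> about 0.67, 30 tasks -> about 0.55: high local reliability does not
# imply high global reliability.
```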

1.2 Classification of disruptions and scope


We classify potential disruptions encountered by a typical manufacturing company into two categories. The first category,
which we call systemic disruptions, covers all factors that affect large portions of a company or the supply chain
simultaneously, for example earthquakes, floods, wars or strikes.
The second category is described as operational disruptions. These include drivers that influence a company’s
performance at the micro scale, i.e., at individual steps in the production sequence: for instance, failed quality tests,
variability in the completion time of single production steps, and production resource breakdowns.



International Journal of Industrial Engineering, 19(5), 221-231, 2012.
 

A COMPUTER SIMULATION MODEL TO REDUCE PATIENT LENGTH OF STAY AND TO IMPROVE RESOURCE UTILIZATION RATE IN AN EMERGENCY DEPARTMENT SERVICE SYSTEM

Muhammet Gul1, Ali Fuat Guneri2
1 Industrial Engineering Department, Faculty of Engineering, Tunceli University, 62000, Tunceli
2 Industrial Engineering Department, Mechanical Faculty, Yıldız Technical University, Yıldız, Beşiktaş, İstanbul, guneri@yildiz.edu.tr
Corresponding author’s e-mail: {Muhammet Gul, muhammetgul@tunceli.edu.tr}

This paper presents a case study of a discrete-event simulation (DES) model of an emergency department (ED) unit in a
regional university hospital in Turkey. The emergency department operations of the hospital were modeled, analyzed and
improved. The goal of the study is to reduce the patients' average length of stay (LOS) and to improve patient throughput
and the utilization of locations and human resources (doctors, nurses, receptionists). Several alternative scenarios were
evaluated in an attempt to determine the optimal staffing level. These alternatives illustrate that substantial improvements
in LOS and throughput can be obtained by minor changes in shift hours and the number of resources. A scenario that
reduces LOS and improves throughput under future changes in patient demand is also presented.

Significance: The key performance indicators (KPIs) used to determine and improve system performance in hospital
emergency departments include the patients' average length of stay (LOS), patient throughput and resource utilization
rates. Alternative scenarios and optimal staffing levels are evaluated within the scope of this study.

Keywords: Emergency departments, healthcare modeling, discrete event simulation, length of stay, Servicemodel

(Received 8 Mar 2012; Accepted in revised form 31 Mar 2012)

1. INTRODUCTION
Emergency departments (EDs), which people visit with a wide range of complaints and where they demand a first medical
response, are of vital importance in healthcare systems. Improvements in healthcare have led to an increase in the number
of available tools and methods. In recent years the utilization of emergency departments in Turkey has increased heavily
because of fast and inexpensive treatment opportunities. Statistics on consultations with healthcare institutions show that
the number of arrivals at emergency departments has increased recently (Arslanhan, 2010).
Decreasing waiting times is a key objective for improving the performance of operations in the healthcare sector. McGuire
(1994) evaluated alternatives to reduce the waiting times of ED patients using MedModel and managed to reduce LOS from
157 minutes to 107 minutes. Kirtland et al. (1995) achieved an improvement of 38 minutes by combining optimal solutions.
The performance measures targeted in ED simulation studies are reducing patient length of stay (LOS), improving patient
throughput, increasing resource utilization rates and controlling costs. Evans et al. (1996) described an Arena simulation
model for the emergency department of a particular hospital in Kentucky, in which the flows of 13 different patient types
were simulated and different feasible schedules for doctors, nurses and technicians were evaluated. The main performance
measure was the average patient length of stay in the emergency department; the model was run for 50 replications and the
patient LOS was found to be 142 minutes. Patvivatsiri et al. (2003) reported a reduction of 45% in patients’ average waiting
times with an effective nurse schedule.
Simulation shows how system performance changes as a function of several factors (Tekkanat, 2007); in EDs, operation
times, entity arrival rates, costs and resource utilization are examples of such factors. Discrete event simulation (DES)
techniques have been used extensively to model the operations of an emergency department and to analyze patient flows
and throughput times (Samaha et al., 2003; Mahapatra et al., 2003; Takakuwa and Shiozaki, 2004). Samaha et al. (2003)
evaluated alternatives for decreasing patient length of stay using Arena simulation software with 24-hour and one-week
data obtained from the ED. Mahapatra et al. (2003) aimed to develop a reliable decision support system (DSS) using the
Emergency Severity Index (ESI) triage method to optimize resource utilization rates. According to three
International Journal of Industrial Engineering, 19(5), 232-240, 2012.

AN EPQ MODEL WITH VARIABLE HOLDING COST


Hesham K. Alfares
Systems Engineering Department, King Fahd University of Petroleum & Minerals,
Dhahran 31261, Saudi Arabia. Email: alfares@kfupm.edu.sa

Instantaneous order replenishment and constant holding cost are two fundamental assumptions of the economic order
quantity (EOQ) model. This paper presents modifications to both of these basic assumptions. First, non-instantaneous order
replenishment is assumed, i.e. a finite production rate of the economic production quantity (EPQ) model is considered.
Second, the holding cost per unit per time period is assumed to vary according to the length of the storage duration. Two
types of holding cost variability with longer storage times are considered: retroactive increase and incremental increase. For
both cases, models are formulated, solution algorithms are developed, and examples are solved.

Keywords: Economic production quantity (EPQ), Variable holding cost, Production-inventory models.

(Received 13 Apr 2011; Accepted in revised form 28 Oct 2011)

1. INTRODUCTION
In the classical economic order quantity (EOQ) model, the replenishment of the order is assumed to be instantaneous, i.e.
the production rate is implicitly assumed infinite. In practice, many orders are manufactured gradually, at a finite rate of
production. Even if the orders are purchased, the procurement and receipt of these orders is seldom instantaneous.
Therefore, economic production/manufacturing quantity (EPQ/EMQ) models are more representative of real life.
Moreover, the assumption of a constant holding cost for the entire duration of storage may not be always realistic. In many
practical situations, such as in the storage of perishable items, longer storage periods require additional specialized
equipment and facilities, resulting in higher holding costs.
This paper presents an EPQ inventory model with a finite production rate and a variable holding cost. In this model, the
holding cost is assumed to be an increasing step function of the storage duration. Two types of time-dependent holding cost
functions are considered: retroactive increase, and incremental increase. Retroactive holding cost increase means that the
holding cost of the last storage period applies to all previous storage periods. Incremental holding cost increase means that
increasingly higher holding costs apply only to later storage periods. For each of these two types, optimal solution
algorithms are developed to minimize the total cost per unit time.
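For reference, the constant-holding-cost EPQ baseline that these variable-holding-cost models generalize can be written as follows (standard textbook notation, assumed here rather than taken from the paper):

```latex
% Classical EPQ baseline: D = demand rate, P = production rate (P > D),
% K = setup cost, h = (constant) holding cost per unit per unit time.
TC(Q) = \frac{DK}{Q} + \frac{hQ}{2}\left(1 - \frac{D}{P}\right),
\qquad
Q^{*} = \sqrt{\frac{2DK}{h\left(1 - \frac{D}{P}\right)}}
```

In the models of this paper, the constant h is replaced by a step function of the storage duration, applied either retroactively or incrementally.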
Several EOQ and EPQ models with variable holding costs proposed in the literature consider holding cost to be a function
of the amount or value of inventory. Only a few EOQ-type models assume the holding cost to vary in relation to the
inventory level. Muhlemann and Valtis-Spanopoulos (1980) revise the classical EOQ formula, assuming the holding cost to
be an increasing function of the average inventory value. Their justification is that the greater the value of inventory, the
higher the cost of financing it. Mao and Xiao (2009) construct an EOQ model for deteriorating items with complete
backlogging, considering the holding cost as a function of the on-hand inventory. A solution procedure is developed, and
the conditions are specified for the existence and uniqueness of the optimal solution when the total holding cost function is
convex. Moon et al. (2008) develop mixed integer programming models and genetic algorithm heuristic solutions to
minimize the maximum EOQ storage space requirement for both finite and infinite time horizons.
Some inventory models have built-in flexibility, allowing the holding cost to be a function of either the inventory level or
storage time. Goh (1994) considers an EOQ-type single-item inventory system with a stock-dependent demand rate and
variable holding cost. Giri and Chaudhuri (1998) construct an EOQ-type inventory model for a perishable product with
stock-dependent demand and variable holding cost. Considering two types of variation of the holding cost per unit, both
Goh (1994) and Giri and Chaudhuri (1998) treat the holding cost either as (i) a non-linear continuous function of the time
in storage, or (ii) a non-linear continuous function of the amount of inventory.
In several EOQ-type models, the holding cost is assumed to be a continuous function of storage time. For a non-linearly
deteriorating item, Weiss (1982) considers the holding cost per unit as a non-linear function of the length of storage
duration. Optimal order quantities are derived for deterministic and stochastic demands, and for both finite and infinite time
horizons. Giri et al. (1996) develop a generalized EOQ model for deteriorating items with shortages, in which both the
demand rate and the holding cost are continuous functions of time. The optimal inventory policy is derived assuming a
finite planning horizon and constant replenishment cycles. Ferguson et al. (2007) apply Weiss (1982) formulas to
approximate optimal order quantities for grocery store perishable goods, using regression to estimate the holding cost curve
parameters.
Alfares (2007) introduces the notion of holding cost variability as a discontinuous step function of storage time, with two
types of holding cost increase. As the storage time extends to the next time period, the new (higher) holding cost can be
International Journal of Industrial Engineering, 19(6), 241-251, 2012.

A MULTI-HIERARCHY GREY RELATIONAL ANALYSIS MODEL FOR NATURAL GAS PIPELINE OPERATION SCHEMES COMPREHENSIVE EVALUATION

Chang Jun Li1, Wen Long Jia2, En Bin Liu2, Xia Wu2
1 Oil and Gas Storage and Transportation Engineering Institute of Southwest Petroleum University
2 Southwest Petroleum University

Given that process requirements must be satisfied, determining the optimum operation scheme of a natural gas pipeline
network is essential to improving the overall efficiency of network operation. Based on the operation parameters of a
natural gas network, a multi-hierarchy comprehensive evaluation index system is constructed, and the weights of each
index are determined with an improved Analytic Hierarchy Process (AHP). This paper presents a multi-hierarchy grey
relational analysis (GRA) method, combining AHP and grey relational analysis, which is suitable for evaluating a
multi-hierarchy index system. An industrial application shows that multi-hierarchy grey relational analysis is effective for
evaluating natural gas pipeline network operation schemes.

Significance: This paper presents a multi-hierarchy grey relational analysis model for natural gas operation schemes
comprehensive evaluation with the combination of AHP and traditional GRA. The method is applied to
the Sebei-Ningxia-Lanzhou gas transmission pipeline successfully.

Keywords: Natural gas pipeline network; Operation schemes; Analytic Hierarchy Process; Grey relational analysis;
Comprehensive evaluation

(Received 27 Jul 2011; Accepted in revised form 2 Jan 2012)

1. INTRODUCTION
   
Gas transmission and distribution pipelines play an important role in the development and utilization of natural gas.
Network operators can formulate many different schemes that satisfy the process requirements. However, the overall goal
of operators is the supply of gas in the right quality and quantity at the right time, with the best economic and social benefit.
Thus, selecting the optimum scheme from many reasonable options so as to improve the economic returns and social
benefits of pipeline operation is a problem deserving study.
The operation scheme of a natural gas pipeline network is closely related to the flow rate, temperature, and pressure at
each node in the network. As it involves many parameters, it is almost impossible to list all the relevant parameters and
determine the relationships among them. Traditional probability theory and mathematical methods are used to solve
uncertainty problems characterized by large sample sizes and abundant data; consequently, they are not suitable for
evaluating network operation schemes. Grey relational analysis, by contrast, was proposed to solve uncertainty problems
with little available data and experience, small sample sizes and incomplete information. Its main principle is embodied in
the grey relational analysis model. This analysis method establishes an overall comparative mechanism, overcomes the
limitation of pair-wise comparison, and avoids conflicts between serialized and qualitative results (Tong and Wang, 2003;
Chi and Hsu, 2005). The method has been widely used in the optimal design and comprehensive evaluation of oil and gas
pipelines (Liang and Zhen, 2004; Zhao, 2007) since Professor Wang (Wang, 1993) introduced it into the optimal design of
natural gas pipelines in 1993. However, the index system of the evaluated objects has so far had only one layer.
This paper first builds the multi-hierarchy comprehensive evaluation index system for a natural gas network and
calculates the index weights. The multi-hierarchy grey relational analysis method is then developed by combining the
calculation method of AHP with the traditional grey relational analysis method. Finally, seven different operation schemes
of a natural gas network are evaluated using the presented method.
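A compact sketch of the single-layer grey relational analysis step that the multi-hierarchy method builds on is given below; the candidate schemes, indices, weights and the distinguishing coefficient rho = 0.5 are illustrative assumptions.

```python
# Single-layer GRA sketch: normalize, compute deviations from the ideal
# (reference) sequence, grey relational coefficients, then a weighted grade.
import numpy as np

data = np.array([            # rows: candidate operation schemes, columns: indices (larger is better)
    [0.82, 0.74, 0.91],
    [0.88, 0.69, 0.86],
    [0.79, 0.81, 0.90],
])
weights, rho = np.array([0.5, 0.3, 0.2]), 0.5

norm = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))   # range normalization
delta = np.abs(norm.max(axis=0) - norm)                                    # deviation from the reference sequence
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())       # grey relational coefficients
grade = xi @ weights                                                       # grey relational grade per scheme
print(grade, "-> best scheme:", int(grade.argmax()))
```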
International Journal of Industrial Engineering, 19(6), 252-263, 2012.

OPTIMAL FLEET SIZE, DELIVERY ROUTES, AND WORKFORCE ASSIGNMENTS FOR THE VEHICLE ROUTING PROBLEM WITH MANUAL MATERIALS HANDLING
Prachya Boonprasurt and Suebsak Nanthavanij
Engineering Management Program
Sirindhorn International Institute of Technology, Thammasat University
Pathumthani 12121, Thailand
Corresponding author’s e-mail: {Suebsak Nanthavanij, suebsak@siit.tu.ac.th}

The vehicle routing problem with manual materials handling (VRPMMH) is introduced. At customer locations, delivery
workers must manually unload goods from the vehicle and take them to the stockroom. The delivery activities require
workers to expend certain amounts of physical energy. In this paper, two models of VRPMMH are developed, namely
VRPMMH models with fixed workforce assignments (FXW) and with flexible workforce assignments (FLW). The
objective of both VRPMMH models is to determine optimal fleet size and delivery routes such that the total cost is
minimized. Additionally, the second model is intended to assign delivery workers to vehicles to minimize the differences
in physical workload.

Significance: The results obtained from the vehicle routing problem with manual materials handling (VRPMMH) can
help goods suppliers to obtain a delivery solution that not only is economical but also safe for delivery
workers. By adding the workload constraint into consideration, the solution will prevent the delivery
workers from performing daily physical work beyond the recommended limit.

Keywords: Vehicle routing problem, workforce assignment, manual materials handling, optimization, ergonomics

(Received 9 May 2010; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Dantzig and Ramser (1959) first introduced the capacitated vehicle routing problem (VRP) several decades ago. Since
then, VRP has been studied extensively by researchers. In the classical capacitated VRP, goods are delivered from a depot
to a set of customers using a set of identical delivery vehicles. Each customer demands a certain quantity of goods and the
delivery vehicles have a limited capacity. Typically, the problem objective is to find delivery routes starting and ending at
the depot that minimize a total travel distance without violating the capacity constraint of the delivery vehicles. In some
problems, the objective might be to determine the minimum number of delivery vehicles to serve all customers.
There are many variants of VRP such as the vehicle routing problem with backhauls (VRPB), the pickup and delivery
problem with time windows (VRPTW), the mixed vehicle routing problem with backhauls (MVRPB), the multiple depot
mixed vehicle routing problem with backhauls (MDMVRPB), the vehicle routing problem with backhauls and time
windows (VRPBTW), the mixed vehicle routing problem with backhauls and time windows (MVRPBTW), and the vehicle
routing problem with simultaneous deliveries and pickups (VRPSDP) (Ropke and Pisinger, 2006). The classical VRP and
its variants are combinatorial optimization problems. Both exact and heuristic methods have been developed to
obtain the problem solution. For example, consider the vehicle routing problem with simultaneous delivery and pickup
(VRPSDP) (Min, 1989). Halse (1992) presented exact and heuristic methods for the problem and Dethloff (2001, 2002)
considered heuristic algorithms. Additionally, simulation and meta-heuristic approaches have also been employed to
investigate the VRP. Park and Hong (2003) evaluated the system performance of the vehicle routing problem under a
stochastic environment using four heuristics. They considered the VRP with time window constraints where traveling time
and service quantity vary. Ting and Huang (2005) used a genetic algorithm with elitism strategy (GAE) to solve the VRP
with time windows. They reported that their GAE is superior to the other three GAs tested in their study in terms of the
total traveling distance.
Virtually no VRP formulations consider ergonomics. Consider the situation in which goods must
be manually moved from the vehicle to an assigned location at the customer point. This situation is not unusual especially
for short-distance deliveries within the city area using small delivery vehicles. An example is the delivery of goods from a
distribution center to convenience stores which are scattered around the city. The delivered supplies are manually unloaded
from the vehicle and then moved to the stockroom. These convenience stores do not usually keep large inventories. In
fact, they rely on receiving supplies from the distribution center on a daily basis. Another example is the delivery of



International Journal of Industrial Engineering, 19(6), 264-277, 2012.
 

A QUANTITATIVE PERFORMANCE EVALUATION MODEL BASED ON A JOB SATISFACTION-PERFORMANCE MATRIX AND APPLICATION IN A MANUFACTURING COMPANY
Adnan Aktepe, Suleyman Ersoz
Department of Industrial Engineering, Kirikkale University, Turkey

In this study, we propose a performance management model based on employee performance evaluations. Employees
are clustered into 4 different groups according to a job satisfaction-performance model and strategic plans are derived
for each group for an effective performance management. The sustainability of this business process improvement
model is managed with a control mechanism as a Plan-Do-Check-Act (PDCA) cycle as a continuous improvement
methodology. The grouping model is developed with a data mining clustering algorithm. First, 4 different performance groups are determined with a two-step k-means clustering approach. The clustering model is then validated with an Artificial Neural Network (ANN) model. The data for this study are collected with a questionnaire composed of 25 questions, the first 13 variables measuring job satisfaction and the last 12 variables measuring performance characteristics, with the employees evaluating themselves. With the help of the developed model, the human resources department is able to track employees' job satisfaction and performance levels, and strategies for the different performance groups are developed. The application of the model is conducted in a manufacturing company located in Istanbul, Turkey.

Keywords: Job Satisfaction-Performance Matrix, K-Means Clustering, Performance Management, Employee Performance Evaluation, Job Satisfaction.

(Received 12 Aug 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION

Fast-developing new technologies and a changing world have made market conditions harshly competitive. Staying competitive in the market, which is essential for organizations to survive, is possible only with the efficient use of resources. While traditional organizations directed their efforts solely at increasing profitability and being financially strong, non-traditional organizations now analyze the input-output interaction of resources to find the reasons for low or high profitability. Today, the factors affecting the financial and non-financial performance of a company are analyzed in detail. Being financially strong at the moment does not guarantee a long-running organization. In order to see the whole picture, organizations have started to change their strategies according to performance management systems. The most widely used performance management systems today are the Deming Prize Model developed in Japan in 1951, the Malcolm Baldrige Quality Award Model developed in the U.S.A. in 1987, the American Productivity Centre Model, the EFQM Excellence Model, the Performance Pyramid developed by Lynch and Cross (1991), the Balanced Scorecard developed by Kaplan and Norton (1992), the Quantum Performance Management Model developed by Hronec (1993), the Performance Prism by Neely and Adams (2001) and Neely et al. (2002), and the Skandia Navigator model.
The very first systematic studies on performance started at the beginning of the 20th century. Taylor (1911), in his book "Principles of Scientific Management", discussed productivity, efficiency and optimization and proposed novel techniques for increasing productivity. He later proposed a performance-based salary system for employees; the idea was intensely criticized at the time, although today many organizations use such a system. Research on employee performance was thus triggered. It was found that ergonomic factors affect performance. Beyond ergonomic factors, Mayo (1933, 1949) and his colleagues showed, through the experiments conducted at Hawthorne, that employee performance is affected much more strongly by behavioral factors. He demonstrated that teamwork, motivation and human relations have a much greater effect on individual performance. There is an abundance of empirical studies on the relationship among job performance, job satisfaction and other factors in the literature (Saari and Judge, 2004; Shahu and Gole, 2008; Pugno and Depedri, 2009). The performance model used in this study, the details of which are given in the next section, groups employees according to both their performance and job satisfaction levels. Here we therefore analyze the relationship between them and present a literature review on the relationships among job satisfaction, performance and other factors. Other factors affecting job
performance and job satisfaction include stress, organizational commitment, employee attitudes, employee morale, etc.
Several authors in the literature studied the effect of job satisfaction and other factors on performance. In Table 1 we
give a list of studies carried out on relations among job satisfaction, performance and other factors. However, there
exists an ongoing debate on the relationship among job satisfaction, performance and other factors. The satisfaction-performance model used in this study enables us to look at it from a different point of view. Without considering the relationships between performance and related factors, in this model employees are grouped according to job satisfaction
and performance. This helps us to develop a new approach to individual performance appraisals. If we summarize the
performance factors addressed in the literature, we see the relationship diagram given in Figure 1.
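As a rough illustration of the grouping step described in the abstract, the sketch below clusters employees into four groups by applying scikit-learn's KMeans to two aggregate scores (the mean of the satisfaction items and the mean of the performance items). The scores and group interpretation are hypothetical, and this plain stand-in does not reproduce the paper's two-step clustering or its ANN validation.

    # Illustrative grouping of employees into 4 satisfaction-performance clusters (hypothetical data).
    import numpy as np
    from sklearn.cluster import KMeans

    # each row: (mean of the 13 satisfaction items, mean of the 12 performance items) on a 1-5 scale
    scores = np.array([
        [4.2, 4.5], [3.9, 4.1], [2.1, 2.4], [1.8, 2.0],
        [4.4, 2.2], [4.1, 2.5], [2.0, 4.3], [2.3, 4.0],
    ])

    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
    for group in range(4):
        members = scores[labels == group]
        print("group", group, "centre:", members.mean(axis=0).round(2), "size:", len(members))

Each cluster centre can then be read against the satisfaction-performance matrix (for example, high satisfaction with low performance versus low satisfaction with high performance) when deriving group-specific strategies.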



International Journal of Industrial Engineering, 19(7), 278-288, 2012.

AN INTRODUCTION TO DISTRIBUTION OPERATIONAL EFFICIENCY


Bernardo Villareal, Fabiola Garza, Imelda Rosas, David Garcia
Department of Engineering, Universidad de Monterrey
Department of Business, Universidad de Monterrey

The Lean Manufacturing approach to waste elimination can be applied in all sorts of operations. In this project it is applied to the improvement of a supply chain in order to achieve high levels of chain efficiency. At the chain level, warehousing and transportation waste is identified only in aggregate form, which makes it difficult to pinpoint within either process. This work provides an introduction to the concept of distribution operational efficiency and proposes a scheme for eliminating waste in a distribution operation. The Operational Effectiveness Index used in TPM is adapted and used as the main performance measure. Availability, performance and quality wastes are identified using Value Stream Mapping. The scheme is exemplified by applying it to the distribution networks of several Mexican companies.

Keywords: Lean warehousing, Lean transportation, distribution waste, operational effectiveness index, supply chain
efficiency.

(Received 8 Sep 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
A key feature of modern business is the idea that competition takes place between supply chains rather than between individual companies (Christopher, 1992); the success or failure of a supply chain is ultimately determined in the marketplace by the end consumer. It is therefore extremely important to deploy the right strategies to compete successfully. Fisher (1997) suggests that supply chains must acquire the capabilities to become efficient or agile according to the type of products marketed (see Figure 1). In particular, an efficient supply chain is suitable for selling functional products. The order-winning factor in this market is cost, with quality, lead time and service level as order qualifiers (Hill, 1993). The main supply chain strategy recommended for becoming efficient is waste elimination (Towill et al., 2002).
The origin of waste elimination is associated with the concept of lean manufacturing. This can be traced back to the 1930s, when Henry Ford revolutionised car manufacturing with the introduction of mass production. The most important
contribution to the development of lean manufacturing techniques since then came from the Japanese automotive firm
Toyota. Its success is based on its renowned Toyota Production System. This system is based on a philosophy of
continuous improvement where the elimination of waste is fundamental. The process of elimination is facilitated by the
definition of seven forms of waste, activities that add cost but no value: production of goods not yet ordered; waiting;
rectification of mistakes; excess processing; excess movement; excess transport; and excess stock.
Jones et al., (1997) have shown that these seven types of waste need to be adapted for the supply chain environment.
Hines and Taylor (2000) propose a methodology extending the lean approach to enable waste elimination throughout the
supply chain and Rother et al., (1999) recommend the use of the value stream map (VSM) and the supply chain mapping
toolkit described by Hines et al., (2000) as fundamental aids for identifying waste.
As lean expands towards supply chain management, the question of its adequate adaptation arises. Transportation and warehousing represent good opportunities for its application and could yield important benefits if it is applied properly. It is well known that both activities are classified as waste. However, when markets are distant, they are certainly necessary activities for attaining competitive customer service levels. According to McKinnon et al. (2003) and Ackermann (2007), most distribution networks carry significant waste and unnecessary costs. For the identification of waste between facilities and installations in a supply chain, Jones et al. (2003) recommend Value Stream Mapping for the extended enterprise. When mapping at the supply chain level, unnecessary inventories and transportation become the important wastes. Unnecessary transportation waste is related to location decisions for the improvement of performance at given points of the supply chain. Therefore, the solutions suggested for its elimination are concerned with the relocation and consolidation of facilities, a change of transportation mode or the implementation of milk runs. In addition to transportation, warehousing is another important part of a distribution network. Value stream mapping at the supply chain level emphasizes the identification of inventory waste. This approach does not consider the elimination of waste in warehousing operations. However, it is important to realize that warehousing can have an important impact on the supply chain cost structure and on the capacity to respond to customer needs. Lean transportation and warehousing are still new areas in full development.
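To make the adapted index mentioned in the abstract concrete, the sketch below computes a distribution operational effectiveness figure as the product of availability, performance and quality ratios, in the spirit of the TPM OEE. The figures and the exact definition of each ratio are hypothetical assumptions for illustration; the paper's own definitions may differ.

    # Hypothetical computation of a distribution OEE = availability x performance x quality.
    planned_time    = 8.0      # hours the delivery vehicle is scheduled to operate
    downtime        = 1.2      # hours lost to loading delays, breakdowns, waiting at docks
    ideal_time      = 5.5      # hours the route would take at standard speeds and handling rates
    actual_time     = planned_time - downtime   # hours actually available for delivery work
    deliveries      = 38       # deliveries completed
    good_deliveries = 36       # deliveries completed on time, complete and undamaged

    availability = actual_time / planned_time
    performance  = ideal_time / actual_time      # how close the run came to the ideal route time
    quality      = good_deliveries / deliveries

    oee = availability * performance * quality
    print(f"availability={availability:.2f} performance={performance:.2f} "
          f"quality={quality:.2f} OEE={oee:.2f}")

Losses revealed by each factor (downtime, slow running, defective deliveries) are exactly the availability, performance and quality wastes that the value stream map is used to locate.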



International Journal of Industrial Engineering, 19(7), 289-296, 2012.

ALTERNATIVE CONSTRUCTIVE HEURISTIC ALGORITHM FOR PERMUTATION FLOW-SHOP SCHEDULING PROBLEM WITH MAKE-SPAN CRITERION
Vladimír Modrák, Pavol Semančo and Peter Knuth
Faculty of Manufacturing Technologies, TUKE, Bayerova, 1, Presov, Slovakia
Corresponding author email: vladimir.modrak@tuke.sk

In this paper, a constructive heuristic algorithm is presented to solve the deterministic flow-shop scheduling problem with the make-span criterion. The algorithm addresses an m-machine, n-job permutation flow shop scheduling problem. The paper is organized so that different scheduling approaches for solving flow shop scheduling problems are benchmarked. In order to benchmark the proposed algorithm, selected heuristic techniques and a genetic algorithm have been used for comparison. The results of the experiments show that the proposed algorithm gives better, or at least comparable, solutions than the benchmarked constructive heuristic techniques. Finally, the average computational times (CPU time in ms) are compared for each problem size.

Keywords: make-span, constructive heuristics, genetic algorithm, CPU time

(Received 13 Mar 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Dispatching rules are one of the most common application areas of heuristic methods used for factory scheduling (Caskey,
2001). The basic types, job shop and flow shop production, face the scheduling problem of finding a feasible sequence of jobs on given machines with the objective of optimizing some specific function. The criterion selected for the purpose of this study, job completion time (make-span), can be defined as the time span from material availability at the first processing operation to completion at the last operation. Johnson (1954) showed that, in a 2-machine flow shop, an optimal sequence can be constructed. It was later determined that the m-machine flow shop scheduling problem (FSSP) is strongly NP-hard for m ≥ 3 (Garey et al., 1976). FSSPs can be divided into two main categories: dynamic and static. Hejazi and Saghafian (2005) characterize the scheduling problem as an effort „to specify the order and timing of the processing of the jobs on machines, with an objective or objectives respecting above-mentioned assumptions“. This paper is concerned with the multi-machine FSSP, which presents a class of Group Shop Scheduling Problems. The criterion of optimality in a flow shop sequencing
problem is usually specified as minimization of make-span. If there are no release times for the jobs then the total
completion time equals the total flow time. Maximum criteria should be used when interest is focused on the whole system
(Mokotoff, 2011). Pan and Chen (2004) studied the re-entrant flow-shop (RFS) with the objective of minimizing the make-
span (Cmax) and average flow time of jobs by proposing optimization models based on integer programming technique and
heuristic procedure. In addition, they treated new dispatching rules to accommodate the reentry feature. In a RFS, all jobs
have the same routing over the machines of the shop and the same sequence is traversed several times to complete the jobs.
Chen et al. (2009) presented a study on a hybrid genetic algorithm for solving the RFS scheduling problem with the aim of improving on the Genetic Algorithm (GA) performance and the heuristic methods proposed by Pan and Chen (2004).
In some cases, specific constraints are assumed when calculating the completion times. For example, such a situation arises in the FSSP when no idle time is allowed at the machines. This constraint creates an important practical situation that arises when expensive machinery is employed (Chakraborty, 2009). The general scheduling problem for a classical flow shop gives rise to (n!)^m possible schedules. With the aim of reducing the number of possible schedules, it is reasonable to assume that all machines process jobs in the same order (Gupta 1975). In the classical flow-shop scheduling problem, queues of jobs are allowed at any of the m machines in the processing sequence, based on the assumption that jobs may wait on or between the machines (Allahverdi et al., 1999, 2008). Moreover, setup times are not considered when calculating the make-span.
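Under the permutation assumption just described (the same job order on all machines, no setup times), the make-span of a given sequence can be computed with the standard completion-time recursion. The short sketch below, on a hypothetical instance, illustrates the objective that the constructive heuristics compared in this paper seek to minimize; it is not the proposed algorithm itself.

    # Make-span of a permutation flow shop: completion on machine i = max(machine-ready, job-ready) + p.
    def makespan(sequence, p):
        """sequence: job order; p[j][i]: processing time of job j on machine i."""
        m = len(p[0])
        completion = [0.0] * m            # completion times on each machine for the previous job
        for job in sequence:
            prev = 0.0                    # completion of this job on the previous machine
            for i in range(m):
                prev = max(completion[i], prev) + p[job][i]
                completion[i] = prev
        return completion[-1]

    # hypothetical 4-job, 3-machine instance
    p = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [3, 3, 2]]
    print(makespan([0, 1, 2, 3], p))      # make-span of the natural job order

Constructive heuristics differ only in how they build the sequence passed to such an evaluation.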
The currently reported approximation algorithms can be categorized into two types: constructive methods and improvement methods. Constructive methods include the slope-index-based heuristics, the CDS heuristic and others. Most improvement approaches are based on modern meta-heuristics such as Simulated Annealing, Tabu Search, the Genetic Algorithm and others. Modern meta-heuristic algorithms can easily be applied to various FSSPs and usually obtain better solutions than constructive methods. However, Kalczynski and Kamburowski (2005) showed that many meta-heuristic
International Journal of Industrial Engineering, 19(7), 297-304, 2012.

MEDIA MIX DECISION SUPPORT FOR SCHOOLS BASED ON ANALYTIC NETWORK PROCESS
Shu-Hsuan Chang1*, Tsung-Chih Wu1, Hwai-En Tseng2, Yu-Jui Su3, and Chen-Chen Ko1
1 Department of Industrial Education and Technology, National Changhua University of Education, No. 2, Shida Rd., Changhua City 500, Taiwan, ROC
2 Department of Industrial Engineering and Management, National Chin-Yi University of Technology, 35, Lane 215, Section 1, Chung-Shan Road, Taiping City, Taichung County 411, Taiwan, ROC
3 Asia-Pacific Institute of Creativity, No. 110, Syuefu Rd., Toufen Township, Miaoli County 351, Taiwan, ROC
*Corresponding author: Shu-Hsuan Chang, shc@cc.ncue.edu.tw

Media Selection is a multi criteria decision making (MCDM) problem. Decision makers with budget constraints should
select media vehicles with the greatest effects on audiences by simultaneously considering multiple and interdependent
evaluation criteria. This work develops a systematic decision support algorithm for media selection. The Analytic Network Process (ANP) is adopted to determine the relative weights of the criteria. An Integer Programming (IP) model is then applied to identify the optimum combination of media within a fixed budget. An empirical example demonstrates the computational process and the effectiveness of the proposed model.

Significance: The decision model aims to develop a systematic decision support hybrid algorithm to solve the best media
mix for student recruiting advertisement with budget constraints by simultaneously considering multiple
and interdependent evaluation criteria. An empirical example of media selection for school C demonstrates the computational process and effectiveness of the proposed model.

Keywords: MCDM, Media Selection, Analytic Network Process (ANP), Integer Programming (IP)

(Received 9 Sep 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Consumers have benefited from the revolutionary growth in the number of TV and radio channels, magazines, newspapers
and outdoor media in recent decades. However, the time devoted to a single medium constantly shrinks, and the complexity
of the media landscape undermines the stability of media habits. As the attention of consumers is spread over more media
categories than ever before, only one conclusion is possible: an effective media strategy must take a multimedia selection
approach (Franz, 2000). The media mix decision, a particular case of the resource allocation problem, is a complex, multi-faceted decision (Dyer, Forman, and Mustafa, 1992). Selecting the best media requires considering not only cost and the number of readers, but also the efficiency with which the medium reaches the target audience. These developments have influenced the media usage habits of target audiences as well as the fit between the product and the characteristics of the medium. The media selection approach is defined as the process whereby the decision maker selects the media vehicles that affect the audience effectively by simultaneously considering multiple and interdependent evaluation criteria, which is a multi-criteria decision making (MCDM) problem (Ignizio, 1976; Dyer, Forman, and Mustafa, 1992). Many factors have increased the complexity of the media selection decision. The criteria are usually interdependent (Gensch, 1973). Moreover, since some criteria are uncertain, qualitative, and subjective, consistent expert opinions are rare (Dyer, Forman and Mustafa, 1992; Calantone, 1981). So far, the literature on media selection problems has assumed that the criteria for evaluating media are independent and has ignored the interactions between the criteria (Ignizio, 1976; Lee, 1972; Dyer, Forman and Mustafa, 1992). Since the process of media selection is so complicated, an effective tool for assessing interdependent criteria is needed. AHP, however, models a decision-making framework that assumes a unidirectional hierarchical relationship among decision levels (Triantaphyllou and Mann, 1995; Meade and Presley, 2002; Shen et al., 2010). Analytic
Network Process (ANP) is an effective tool when elements of the system are interdependent (Saaty, 2001). The ANP is
more accurate in complex situation due to its capability of modeling complexity and the way in which comparisons are
performed (Yang et al., 2010).
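To give a concrete sense of the second, budget-constrained stage of such a hybrid approach, the toy sketch below selects the media combination with the highest total weight that fits a fixed budget, where the weights stand in for ANP-derived priorities. The media, weights, costs and budget are hypothetical, the enumeration is only a stand-in for an IP solver, and this is not the paper's own model.

    # Hypothetical budget-constrained media selection (brute-force stand-in for the IP stage).
    from itertools import combinations

    media = {                 # medium: (ANP-style priority weight, cost in budget units)
        "newspaper":  (0.28, 40),
        "radio":      (0.17, 25),
        "web banner": (0.22, 30),
        "open house": (0.33, 55),
    }
    budget = 90

    best_mix, best_weight = None, -1.0
    for r in range(1, len(media) + 1):
        for mix in combinations(media, r):
            cost = sum(media[m][1] for m in mix)
            weight = sum(media[m][0] for m in mix)
            if cost <= budget and weight > best_weight:
                best_mix, best_weight = mix, weight

    print(best_mix, round(best_weight, 2))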
The ANP has been applied to many areas, including (1) evaluating and selecting alternatives; e.g., ANP has been utilized
to construct a model for selecting an appropriate project (Lee and Kim 2001; Shang et al., 2004; Chang, Yang, and Shen,
2007), a company partner (Chen et al., 2004), and an appropriate product design (Karsak et al., 2002); (2) optimizing a
product mix (Chung et al., 2005) and price allocation (Momoh and Zhu, 2003); (3) constructing models for assessing



International Journal of Industrial Engineering, 19(8), 305-319, 2012.
 

SOLVING CAPACITATED P-MEDIAN PROBLEM BY A NEW STRUCTURE OF NEURAL NETWORK
Hengameh Shamsipoor, Mohammad Ali Sandidzadeh, Masoud Yaghini
School of Railway Engineering, Iran University of Science & Technology, Kermanshah University of Technology, Iran
Corresponding author email: sandidzadeh@iust.ac.ir

One of the most popular and renowned location-allocation problems is Capacitated P-Median Problem (CPMP). In CPMP
locations of p capacitated medians are selected to serve a set of n customers, so that the total distance between customers
and medians is minimized. In this paper we first present a new dynamic assignment method based on an urgency function. We then propose a new formulation for the CPMP, based on two types of decision variables with 2(n + p) linear constraints. Based on the newly presented formulation, we then propose a novel neural network structure that comprises five layers. This neural network is a combination of a two-layered Hopfield neural network, with location and allocation layers, and three other layers that control the Hopfield network. The advantage of the proposed network is that it always provides feasible solutions, and since the constraints are embedded in the neural structure instead of in the energy function, the need for tuning parameters is avoided. According to the computational dynamics of the new neural network, the value of the energy function always decreases or remains constant. The effectiveness and efficiency of this algorithm are analyzed for standard and simulated problems of different sizes. Our results show that the proposed neural network generates excellent-quality, acceptable solutions.

Keywords: Location-allocation, Capacitated p-Median Problem (CPMP), Neural Network, Hopfield Network.

(Received 2 Feb 2010; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
The location-allocation problem has several applications in the areas of telecommunication, transportation and distribution, and it has recently received a great deal of attention from researchers. One of the most well-known location-allocation problems is the capacitated p-median problem. Its aim is to locate p facilities within a given space to serve n demand points at the minimum possible total cost. We illustrate a typical p-median model in Fig. 1. The total cost of the solution presented is the sum of the distances between the demand points and their selected locations, shown by the black lines [1].

Figure 1. Typical output for the p-median problem

The p-median problem is a proven NP-hard problem, so the computation time of exact methods grows very rapidly as the size of the input increases. Consequently, many heuristic methods have been developed to solve this problem.
In this article we try to use neural network techniques to solve the p-median problem in which each facility can serve only a
limited number of demands.
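For reference, the classical CPMP chooses p medians and assigns every customer to exactly one open median without exceeding the median capacities, minimizing the total assignment distance. The sketch below evaluates a given set of candidate medians with a simple greedy capacity-respecting assignment; it is a generic illustration of the problem and not the urgency-function assignment or the neural network proposed in the paper.

    # Greedy capacitated assignment of customers to a given set of medians (illustrative only).
    def greedy_assign(medians, dist, demand, capacity):
        residual = {m: capacity for m in medians}
        total, assignment = 0.0, {}
        # assign customers in decreasing order of demand so large customers are placed first
        for c in sorted(range(len(demand)), key=lambda c: -demand[c]):
            feasible = [m for m in medians if residual[m] >= demand[c]]
            if not feasible:
                return None, None                        # this median set cannot serve all demand
            m = min(feasible, key=lambda m: dist[c][m])  # nearest median with spare capacity
            assignment[c] = m
            residual[m] -= demand[c]
            total += dist[c][m]
        return total, assignment

Any candidate-generation scheme, whether exchange heuristics or a neural approach such as the one studied here, can use an evaluation routine of this kind to compare median sets.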
International Journal of Industrial Engineering, 19(8), 320-329, 2012.

ADOPTING THE HEALTHCARE FAILURE MODE AND EFFECT ANALYSIS TO IMPROVE THE BLOOD TRANSFUSION PROCESSES
Chao-Ton Su1,*, Chia-Jen Chou1, Sheng-Hui Hung2, Pa-Chun Wang2,3,4
1 Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 30013, Taiwan, R.O.C.
2 Quality Management Center, Cathay General Hospital, Taipei 10630, Taiwan, R.O.C.
3 Fu Jen Catholic University School of Medicine, Taipei County 24205, Taiwan, R.O.C.
4 Department of Public Health, China Medical University, Taichung 40402, Taiwan, R.O.C.
*Corresponding author. Email: ctsu@mx.nthu.edu.tw

The aim of this study is to conduct a healthcare failure mode and effects analysis (HFMEA) to evaluate the risky and vulnerable blood transfusion process. By implementing HFMEA, the research hospital plans to develop a safer blood transfusion system that is capable of detecting potentially hazardous events in advance. In this case, eight possible failure modes were identified in total. Considering severity and frequency, seven failure modes were found to have hazard scores higher than 8. Five actions were undertaken to eliminate the potentially risky processes. After the completion of the HFMEA improvement, from the end of July 2008 to December 2009, two adverse events occurred during the blood transfusion processes and the error rate was 0.012%. The HFMEA proved to be feasible and effective in predicting and preventing potentially risky transfusion processes. We have successfully introduced information technology to improve the whole blood transfusion process.

Keywords: healthcare failure mode and effect analysis (HFMEA), blood transfusion, hazard score.

(Received 30 Mar 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION

Reducing medical errors for a given healthcare process is critical to patient safety. Traditionally, risk assessment methods in
healthcare have analyzed adverse events individually. However, risk-evaluated approaches should reflect healthcare
operations, which are usually composed of sequential procedures. In other words, a systematic and process-driven
programming of risk prevention is necessary for every healthcare provider. Many studies have illustrated the necessity of introducing risk analysis methods to prevent medical errors (Bonnabry et al., 2006; Bonan et al., 2009).
Healthcare Failure Mode and Effect Analysis (HFMEA) is a novel technology used to evaluate healthcare processes
proactively. HFMEA was first introduced by the Department of Veterans Affairs (VA) System and developed by the
National Center for Patient Safety (NCPS) in the United States. HFMEA is a hybrid risk evaluation system that combines
the ideas behind Failure Mode and Effect Analysis (FMEA), Hazard Analysis and Critical Control Point (HACCP), and the
VA’s root cause analysis (RCA) program. An interdisciplinary team, process and subprocess flow drawing, identification of
failure mode and its cause, a hazard scoring matrix, and a decision tree to determine system weakness are usually included
in HFMEA. Currently, the HFMEA method is encouraged by the American Society for Healthcare Risk Management for
hospitals in the United States (Gilchrist et al., 2008).
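As a small illustration of the hazard scoring matrix mentioned above, an HFMEA hazard score is commonly computed as the product of a severity rating and a probability rating, and modes whose score reaches the action threshold (8 in this study) are carried forward to the decision tree. The failure modes and ratings in the sketch below are purely hypothetical and are not the eight modes identified by the research hospital.

    # Hypothetical hazard scoring: score = severity (1-4) x probability (1-4); act when score >= 8.
    failure_modes = [
        ("specimen mislabelled at bedside",        4, 2),
        ("wrong blood component issued",           4, 1),
        ("transfusion record not double-checked",  3, 3),
        ("delay in transporting blood unit",       2, 2),
    ]

    THRESHOLD = 8
    for name, severity, probability in failure_modes:
        score = severity * probability
        flag = "ACTION REQUIRED" if score >= THRESHOLD else "monitor"
        print(f"{name:<42} hazard score = {score:>2}  -> {flag}")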
Clinical research has identified blood transfusion as a significantly risky process (Klein, 2001; Rawn, 2008). Errors in blood transfusion result in immediate and long-term negative outcomes, including an increased chance of death, stroke, renal failure, myocardial infarction, and infection, among others. Therefore, reducing the risks of blood transfusion is a major patient safety issue for all hospitals. The blood transfusion process sits at the top of the list for process analysis, since the process affects a large number of patients and the procedure is complex in nature (Burgmeier, 2002). Linden et al. (2002) indicated that blood transfusion is a complicated system involving the hospital blood bank, patient floor, emergency department, operating room, transfusionist, and transporter. A more comprehensive and proactive risk analysis of the blood transfusion process is necessary to improve patient safety.
A series of transfusion-related adverse events that took place in the research hospital urged the Patient Safety Committee to take decisive action to prevent harmful medical errors resulting from transfusion-related processes. An efficient risk prevention method was expected to reduce the number of adverse blood transfusion events at the research hospital. The aim of this study is to conduct an HFMEA to evaluate the risky and vulnerable blood transfusion process. By
International Journal of Industrial Engineering, 19(8), 330-340, 2012.
 

ECODESIGN CASE STUDIES FOR FURNITURE COMPANIES USING THE ANALYTIC HIERARCHY PROCESS
Miriam Borchardt, Miguel A. Sellitto, Giancarlo M. Pereira, Luciana P. Gomes
Vale do Rio dos Sinos University (UNISINOS)
Address: Av. Unisinos, 950 – São Leopoldo – CEP 93200-000 – RS - Brazil
Corresponding author e-mail: miriamb@unisinos.br

The purpose of this paper is to propose a method to assess the degree of the implementation of ecodesign in manufacturing
companies. This method was developed based on a multi-criteria decision support method known as analytic hierarchy
process (AHP). It was applied in three furniture companies. Ecodesign constructs were extracted from the literature related
to environmental practices and weighted according to the AHP method, allowing for a determination of the relative
importance of the constructs for each company. Finally, the team answered a questionnaire for each company to check each
item’s degree of application of these processes. One year later, the method was applied again to the same three companies.
By comparing the assessed relative importance of each ecodesign construct and the degree of its application, it was possible
for us to observe the relation of the priorities of the companies to their eco-conception.

Keywords: ecodesign, design for environment, sustainability, furniture industry, Analytic Hierarchy Process, eco-
conception.

(Received 11 Sep 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
One of the key contributing causes to the environmental degradation that threatens the planet is the increasing production
and consumption of goods and services. Some of the factors that contribute to environmental degradation are (a) the
lifestyle of some societies, (b) the development of emerging countries, (c) the aging of populations in developed countries,
(d) inequalities between the planet’s regions and (e) the increasingly short life cycles of products (Manzini and Vezzolli,
2005).
Environmental considerations, such as ecodesign (or design for (the) environment, DfE), cleaner production, recycling
projects and the development of sustainable products, promote a redesign of techniques for the conceptualization, design
and manufacturing of goods (Byggeth et al., 2007). A balance between the environmental “cost” and the functional
“income” of a production method is essential for achieving sustainable development, a requirement that has resulted in a
situation in which environmental issues must now be merged into “classical” product development processes (Luttropp and
Lagerstedt, 2006; Plouffe et al., 2011).
In this context, we can define ecodesign as a technique for establishing a product project in which the usual project
goals, manufacturing costs and product reliability are considered, along with environmental goals such as the reduction of
environmental risks, reduction in the use of natural resources, increase in recycling and the efficiency in the use of energy
(Fiksel, 1996). Such a technique makes it possible to relate the functions of a product or service to issues in environmental
sustainability, reducing environmental impact and increasing the presence of eco-efficient products, as well as encouraging
technological innovation (Manzini and Vezzoli, 2005; Santolaria et al., 2011).
The environmental practices observed in the literature on ecodesign are chiefly related to the materials, components,
processes and characteristics of products, including the use of energy, storage, distribution, packing and material residuals
(Wimmer et al., 2005; Luttropp and Lagerstedt, 2006; Fiksel, 1996). However, even though these techniques have been
explored in the literature, the environmental practices related to ecodesign have a generic shape and are difficult to fit to
specific product projects and industrial processes (Borchardt et al., 2009).
Authors such as De Mendonça and Baxter (2004) and Goldstein et al. (2011) have worked to develop performance
indicators associated with ecodesign and have related ecodesign principles with environmental management, showing a
positive correlation between the two. However, notably, there is no consensus regarding this topic. Despite the fact that
environmental assessments are commonly found in the literature, no objective method can generate an ecodesign
measurement instrument to evaluate the degree of implementation. Such an instrument would help organizations to
prioritize their efforts in terms of achieving the most significant environmental gains.
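A minimal sketch of the AHP weighting step on which the proposed assessment method relies: given a pairwise comparison matrix over a handful of ecodesign constructs, approximate priority weights can be obtained from the normalized geometric means of the rows. The three constructs and the comparison values below are hypothetical; the paper's construct set, judgments and consistency checks are its own.

    # AHP priorities from a pairwise comparison matrix via normalized row geometric means.
    import math

    # hypothetical 3x3 comparisons among constructs: materials, energy use, end-of-life treatment
    A = [[1.0, 3.0, 5.0],
         [1/3, 1.0, 2.0],
         [1/5, 1/2, 1.0]]

    geo_means = [math.prod(row) ** (1 / len(row)) for row in A]
    weights = [g / sum(geo_means) for g in geo_means]
    print([round(w, 3) for w in weights])   # relative importance of each construct

Comparing such weights with the observed degree of application of each construct is what allows the gap between a company's priorities and its practice to be made visible.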
There is a need for a structural approach in ecodesign that can address environmental concerns in a coherent way.
However, the limits in capabilities and resources available to many companies frequently hamper the development of an



International Journal of Industrial Engineering, 19(9), 341-349, 2012.

APPLYING GENETIC LOCAL SEARCH ALGORITHM TO SOLVE THE JOB-SHOP SCHEDULING PROBLEM
Chuanjun Zhu1, Jing Cao1, Yu Hang2, Chaoyong Zhang2
1 School of Mechanical Engineering, Hubei University of Technology, Wuhan, 430068, P.R. China
2 State Key Laboratory of Digital Manufacturing Equipment & Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, 430074, P.R. China

This paper presents a genetic local search algorithm for the Job-Shop Scheduling problem, and the chromosome
representation of the problem is based on the operation-based representation. In order to reduce the search space, schedules
are constructed using a procedure that generates active schedules. After a schedule is obtained, a local search heuristic
based on the N6 neighborhood structure is applied to improve the solution. In order to avoid the premature convergence of conventional genetic algorithms (GA), an improved precedence operation crossover (IPOX) and a modified generation-alternation scheme are proposed. The approach is tested on a set of standard instances taken from the literature.
The computation results validate the effectiveness of the proposed algorithm.

Keywords: Genetic Algorithms; Local Search Algorithms; Job-Shop Scheduling Problem

(Received 1 Oct 2010; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION

Generally, the Job-Shop scheduling problem can be described as follows: a set of n jobs is to be processed on a set of m machines that are continuously available from time zero onwards, and each job has its own specified processing route. Each job consists of a sequence of operations, and each of the operations uses one of the machines for a fixed duration. The scheduling problem is to find a schedule that optimizes some index by determining the machining sequence of the jobs on every machine. The assumptions are as follows:
(1) The operations of different jobs have no machining sequence constraints between them;
(2) An operation cannot be interrupted once it has begun, and every machine can machine only one job at a time;
(3) Machines do not break down.
The objective of the problem is to find a schedule that minimizes the makespan (Cmax) or optimizes other indices by determining the start time and machining sequence of every job. The Job-Shop Scheduling problem can be denoted as n/m/G/Cmax.
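To make the operation-based representation used in this paper concrete, the sketch below decodes a chromosome (a permutation with repeated job indices, the k-th occurrence of job j meaning its k-th operation) into a semi-active schedule and returns the makespan. The three-job instance is hypothetical, and this is only an illustration of the encoding, not the active-schedule builder, IPOX crossover or N6 local search proposed in the paper.

    # Decode an operation-based chromosome into a semi-active schedule and return the makespan.
    def decode_makespan(chromosome, operations):
        """operations[j] = list of (machine, processing_time) for job j, in technological order."""
        next_op = [0] * len(operations)                 # index of the next unscheduled operation per job
        job_ready = [0.0] * len(operations)             # time each job becomes available
        machine_ready = {}                              # time each machine becomes available
        for job in chromosome:                          # each occurrence = the job's next operation
            machine, p = operations[job][next_op[job]]
            start = max(job_ready[job], machine_ready.get(machine, 0.0))
            finish = start + p
            job_ready[job] = finish
            machine_ready[machine] = finish
            next_op[job] += 1
        return max(job_ready)

    # hypothetical 3-job, 3-machine instance
    ops = [[(0, 3), (1, 2), (2, 2)],
           [(1, 4), (0, 3), (2, 1)],
           [(2, 2), (1, 3), (0, 2)]]
    print(decode_makespan([0, 1, 2, 0, 2, 1, 1, 0, 2], ops))

Because any permutation with the correct number of repetitions decodes to a feasible schedule, crossover and mutation never need repair, which is one reason this representation is popular for GA-based job-shop scheduling.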
The Job-Shop scheduling problem is a well-known NP-hard problem with wide applications in industry. In order to solve this hard problem, Job-Shop scheduling has been studied by a significant number of researchers for several decades, and many theoretical research results have been proposed. The research achievements mainly include heuristic dispatching rules (Panwalkar S, et al, 1977), mathematical programming (Blazewicz J, et al, 1991), simulation-based methods (Kim M, et al, 1994), and Artificial Intelligence (AI)-based methods (Foo S Y, et al, 1994), among others. Heuristic dispatching rules are straightforward and easy to implement, but they achieve only local optimization and moderate results. When mathematical programming methods are used, the computational burden may increase exponentially with the scale of the Job-Shop scheduling problem. Simulation methods can lead to high computational cost without finding the optimal solution. With the development of computer technology, a number of sophisticated optimization methods that simulate features of biological evolution, physical systems and human behavior have been developing rapidly in recent years. Therefore, meta-heuristic methods such as genetic algorithms (GA) (Croce et al., 1995; Ibrahim et al., 2008), neural network methods, simulated annealing (SA) (Van Laarhoven et al., 1992) and tabu search (TS) (Taillard, 1994; Nowicki et al., 1996) have become a research focus for the Job-Shop scheduling problem.
GA were originally developed by Professor J. Holland of the University of Michigan, who in 1975 published a monograph giving a systematic exposition of the basic theory and methods of GA (Holland J H, 1975). GA draw on the survival-of-the-fittest principle of Darwinian natural selection, imitating biological reproduction, mating and gene mutation through selection, crossover and mutation operations, and search for the best chromosome, on which the solution is encoded. GA are general-purpose optimization algorithms: their encoding techniques and genetic operations are comparatively simple, the optimization process imposes no special constraint conditions, and they have the characteristics of implicit parallelism and global search of the solution space, so GA have become widely used for solving the Job-Shop scheduling problem. However, while GA have good global search ability owing to their parallel population-based search, they have poor local search ability and are prone to premature convergence. A Local Search (LS) algorithm is used for local searching, but it is
International Journal of Industrial Engineering, 19(9), 350-358, 2012.
 

BIOBJECTIVE MODEL FOR REDESIGNING SALES TERRITORIES


Juan Gabriel Correa Medina1, Loecelia Guadalupe Ruvalcaba Sánchez1, Elias Olivares-Benitez2, Vittorio Zanella Palacios3
1 Department of Information Systems, Autonomous University of Aguascalientes
2 Metallurgical Engineer, National Polytechnic Institute
3 Department of Computer Engineering, Autonomous University of Puebla State

The design and updating of sales territories are strategic activities driven by several causes, such as mergers and changes in the markets, among others. The new territories must satisfy the planning characteristics defined by each company. In this paper we propose a biobjective mixed integer programming model for redesigning sales territories. The study was motivated by the case of a company that distributes its products throughout Mexico. The model seeks to minimize the total sum of the distances and the variation of the sales volume of each salesman with respect to the current situation. The model is solved using the ε-constraint method to obtain the true efficient set, and a heuristic method to obtain an approximate efficient set. Both efficient sets are compared to determine the quality of the solutions obtained by the heuristic method.

Keywords: biobjective model, sales territory, integer programming, business strategies

(Received 23 Feb 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
The design and constant updating of sales territories are important strategic activities whose intention is to improve the service level to customers through efficient and effective coverage of the markets. The updating of sales territories is required mainly because of mergers of firms and changes in the markets (expansion, contraction).
Sometimes even a small sales territory realignment can have a big impact on sales force productivity. Territory design is therefore a critical and ongoing process that helps maximize sales productivity and revenue. Some of the benefits of sales territory design include: 1) better coverage and customer service, leading to increased productivity and sales revenue; 2) increased sales by prioritizing the accounts with the greatest potential; 3) reduced costs of sales through shorter and cheaper travel times; 4) improved morale, performance and retention of salespeople due to an equitable distribution of accounts and an impartial system for achieving rewards; and 5) competitive advantage through the ability to reach new opportunities faster than the competitors.
Territory design or redesign groups small geographical areas, defined as sales coverage units (SCUs), into larger geographical units known as territories. These territories must satisfy certain planning characteristics determined by the firm's management, considering the assignment of customers, types of products, geographical areas, workload, sales volume and territory dimensions for every salesman, among others.
The sales territory design problem is classified as a districting problem. Typical districting problems include the drawing
of political constituencies, school board boundaries, sales or delivery regions (Bozcaya et al., 2003). Although multiple
exact and heuristic methods have been applied to solve this problem, its generalization is difficult because the goals of
every firm are different. In addition, Pereira-Tavares et al. (2007) mention that when there are multiple criteria, the problem
is considered NP-hard. Puppe and Tasnadi (2008) showed that in discrete districting problems with geographical limitations, the determination of an impartial redistricting turns out to be computationally intractable (NP-complete).
In this paper a biobjective mixed integer programming model is proposed for redesigning sales territories. The work is
structured as follows. In section 2 the problem is described showing its characteristics. Section 3 presents the mixed integer
programming model, the exact method and the heuristic algorithm used to solve it and the comparison metrics. In section 4
the experiments are explained and the results obtained are shown. Section 5 shows the conclusions and future work for this
research.
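To illustrate the ε-constraint idea used to obtain the true efficient set, the toy sketch below takes an explicit list of feasible solutions, each with its two objective values (total distance, total sales-volume variation), and recovers the non-dominated points by repeatedly minimizing the first objective subject to a shrinking bound on the second. The solution list is hypothetical; in the actual model each "solve" call is a mixed integer program, not an enumeration.

    # Toy epsilon-constraint sweep over an explicit list of feasible solutions (illustrative only).
    # Each tuple is (total distance, total sales-volume variation) of one feasible territory design.
    feasible = [(120, 9.0), (130, 6.5), (150, 4.0), (125, 8.0), (160, 3.5), (140, 6.0)]

    def solve(eps):
        """Stand-in for the MIP: minimize distance subject to variation <= eps."""
        candidates = [s for s in feasible if s[1] <= eps]
        return min(candidates) if candidates else None

    efficient, eps = [], float("inf")
    while True:
        best = solve(eps)
        if best is None:
            break
        efficient.append(best)
        eps = best[1] - 1e-9          # force the next point to improve the second objective
    print(efficient)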

2. PROBLEM DEFINITION
The problem analyzed in this paper is motivated by a firm which sells its products throughout Mexico. This problem was analyzed originally by Olivares-Benítez et al. (2009). To control its sales force, the firm has divided the Mexican Republic into regions. In every region, the salesmen have inherited and enlarged their customer portfolios to improve their income without intervention from the firm's management. This absence of control has produced unbalanced territories with regard
International Journal of Industrial Engineering, 19(10), 369-388, 2012.

REVERSE LOGISTICS: PERSPECTIVES, EMPIRICAL STUDIES AND RESEARCH DIRECTIONS

*Arvind Jayant1, P. Gupta2, S.K. Garg3
1,2 Department of Mechanical Engineering, Sant Longowal Institute of Engineering & Technology (Deemed to be University), Longowal, Punjab, India
3 Department of Mechanical Engineering, Delhi Technological University, Delhi-110042
*Corresponding Author E-mail address: arvindjayant@rediffmail.com

Environmental and economic issues have significant impacts on reverse logistics practices in supply chain management and
are thought to form one of the developmental cornerstones of sustainable supply chains. Perusal of the literature shows that
a broad frame of reference for reverse logistics is not adequately developed. Recent, although limited, research has begun to
identify that these sustainable supply chain practices, which include the reverse logistics factors, lead to more integrated
supply chains, which ultimately can lead to improved economic performance. The objectives of this paper are to report and review various perspectives on the design and development of reverse supply chains, planning and control issues, coordination issues, and product remanufacturing and recovery strategies; to understand and appreciate the various mechanisms available for the efficient management of reverse supply chains; and to identify the gaps existing in the literature. Ample opportunities exist for the growth of this field due to its multi-functional and interdisciplinary focus. Reverse logistics is also critical for organizations to consider from both an economic and an environmental perspective. The characteristics of reverse logistics provided here can help researchers and practitioners to advance their work in the future.

Significance: The objective of this study is to encourage and provide researchers with future research directions in the field
of reverse logistics for which only empirical research methods are not appropriate. In addition, the research
directions suggested in the paper address several opportunities and challenges that currently face business
managers & academicians operating in closed loop supply chain management.

Keywords: Reverse supply chain management, Remanufacturing, Recycling, Reverse logistics.

(Received 11 May 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Reverse logistics, which is the management of return flows due to product recovery, goods returns, or overstock, forms a closed-loop supply chain. The success of the closed-loop supply chain depends on the actions of both manufacturers and customers. Manufacturers are now required to produce products that are easy to disassemble, reuse and remanufacture owing to environmental protection laws. On the other hand, the number of customers supporting environmental protection by delivering their used products to collection points is increasing (Lee and Chan, 2009). According to the findings, the total cost spent on reverse logistics is huge. In order to minimize the total reverse logistics cost and achieve a high utilization rate of collection points, selecting appropriate locations for collection points is a critical issue in RSC/reverse logistics. Reverse logistics has received increasing attention from both the academic world and industry in recent years, for a number of reasons. According to the findings of Rogers and Tibben-Lembke (1998), the total logistics cost amounted to $862 billion in 1997, and the total cost spent on reverse logistics was enormous, amounting to approximately $35 billion, around 4% of the total logistics cost in the same year. Concerns about energy saving and green legislation, and the rise of electronic retailing, are increasing. The emergence of eBay also advocates product reuse. Online shoppers typically return items such as paper, aluminum cans, and plastic bottles, whose consumption and return rates are high. Although most companies realize that the total processing cost of returned products is higher than the total manufacturing cost, it has been found that the strategic collection of returned products can lead to repeat purchases and reduce the risk of fluctuations in material demand and cost.
Research on reverse supply chain has been growing since the Sixties (see, for example, Zikmund and Stanton, 1971;
Gilson, 1973; Schary, 1977; Fuller, 1978). Research on strategies and models on RL can be seen in the publications in and
after the Eighties. However, efforts to synthesize the research in an integrated broad-based body of knowledge have been
limited (Pokharel and Mutha, 2009). Most research focuses only on a small area of RL systems, such as network design,
production planning or environmental issues. Fleischmann et al. (1997) studied RL from the perspectives of distribution
planning, inventory control and production planning. Carter and Ellram (1998) focused on the transportation and
International Journal of Industrial Engineering, 19(10), 389-400, 2012.

CONTINUOUS-REVIEW INVENTORY MODELS USING DIFFUSION APPROXIMATION FOR BULK QUEUES
Singha Chiamsiri1, Hui Ming Wee2 and Hsiao Ching Chen3
1 School of Management, Asian Institute of Technology, Klong Luang, Pathumthani 12120, Thailand
2 Industrial & Systems Engineering Department, Chung Yuan Christian University, Chungli 32023, Taiwan, ROC
3 Department of Business Management, Chungyu Institute of Technology, Keelung 20103, Taiwan, ROC
H.M. Wee, e-mail: weehm@cycu.edu.tw

In this paper, two continuous-review inventory control models are developed using steady-state diffusion approximation
method. Accuracy evaluations of the approximate optimal solutions for the inventory control models are reported for
selected “Markovian-like” queues to approximate the steady-state queue size behavior of single-server queues with bulk-
arrival and batch-service. The diffusion approximation method gives a remarkably good performance in approximating the
base stock level one-to-one ordering policy inventory model. The approximation for the order-up-to inventory model with replenishment lot size greater than one is also exceptionally good at selected values of heavy traffic intensity and when the
service time replenishment process distributional characteristic does not differ greatly from the exponential inter-arrival
time of the demands.

Keywords: Inventory; Queueing; Continuous-review policy; Diffusion approximation

(Received 1 Apr 2010; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION

There are many applications of diffusion approximations in population genetics modeling (Bharucha-Reid 1960, Cox and
Miller 1968, and Feller 1966), the optimal control of a stochastic advertising model (Tapiero 1975), storage systems model
and inventory control model (Bather 1966, Harrison and Taylor 1976, and Puterman 1975), and in queuing models
(Kingman 1965, Chiamsiri and Leonard 1981, and Whitt 2004) and queuing networks/systems in computer applications
(Kleinrock 1976).
Diffusion models have been developed in order to mitigate the analytical and the computational complexity of
performance measures and optimal solutions. For example, Chiamsiri and Leonard (1981) developed a diffusion process to
approximate the steady-state queue size behavior of single-server queues with bulk-arrival and batch-service, referred to as
bulk queues. Diffusion approximation solutions for various queue size statistics are developed and evaluated for a number
of special “Markovian-like” bulk queues. The diffusion approximation method provides a robust solution for the queue size
distribution under heavy traffic conditions. Rubio and Wein (1996) identified specific formula for the base stock levels
under a multi-product production-inventory system by exploiting the make-to-stock system and an open queuing network.
Perry et al. (2001) studied the problem of a broker in a dealership market whose buffer content (cash flow) is governed by
stochastic price-dependent demand and supply. Three model variants are considered. In the first model, buyers and sellers
(borrowers and depositors) arrive independently in accordance with price-dependent compound Poisson streams. The
second and the third models are two variants of diffusion approximations. They developed an approach to analyze and
compute the cost function based on the optional sampling theorem. Wein (1992) noted that diffusion models require a
heavy traffic condition to be valid and used the diffusion process to model a multi-product, single-server Make-to-Stock
system.
The diffusion approximation method provides an approximate solution for a general class of queuing models and is particularly valuable when compared with simulation, since both methods provide approximate numerical results. However, the diffusion approximation method requires far less computation time to generate numerical results, especially for queues under heavy traffic conditions.
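As a simple reminder of the heavy-traffic approximations on which such diffusion models rest, the sketch below computes the classical Kingman approximation of the mean waiting time in a GI/G/1 queue and compares it with the exact M/M/1 value; for exponential arrivals and services the two coincide. This is only background for the general idea, not the bulk-queue diffusion model developed in the paper.

    # Kingman's heavy-traffic approximation of the mean wait in a GI/G/1 queue (illustrative).
    def kingman_wait(rho, mean_service, ca2, cs2):
        """rho: traffic intensity; ca2/cs2: squared coefficients of variation of arrivals/services."""
        return (rho / (1.0 - rho)) * ((ca2 + cs2) / 2.0) * mean_service

    rho, mean_service = 0.9, 1.0
    approx = kingman_wait(rho, mean_service, ca2=1.0, cs2=1.0)   # exponential arrivals and services
    exact_mm1 = rho * mean_service / (1.0 - rho)                 # exact M/M/1 mean waiting time
    print(round(approx, 3), round(exact_mm1, 3))                 # both equal 9.0 at this traffic level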
Bather (1966) was the first author to develop a diffusion process model for an inventory control problem. The inventory
control problem considered was assumed to have instantaneous replenishments with a continuous-review (s, S) operating policy. Demand was assumed to follow a Wiener (Gaussian) process, and statistical decision theory was used to obtain
the optimal solution.
A more general diffusion process model for storage system was considered by Puterman (1975). The diffusion process
model was found to be suitable for a storage system with an infinitely divisible commodity such as liquids, e.g., oil, blood,
or whisky. Puterman (1975) also indicated that: “The model might also be used to approximate more lumpy quantities such
as tires, whistles, or people, especially if the numbers are large”. This is because the sequences of stochastic input-output
system processes such as queues, dams, and inventory system often converge to limiting stochastic processes which are else
International Journal of Industrial Engineering, 19(9), 359-368, 2012.

A NURSE SCHEDULING APPROACH BASED ON SET PAIR ANALYSIS


Jianfeng Zhou1, Yuyun Fan2, Huazhi Zeng3
1 Department of Industrial Engineering, School of Mechatronics Engineering, Guangdong University of Technology, Guangzhou, China
2 Responsibility Nurse, Guangzhou Chest Hospital of China
3 Nursing Director, Guangzhou Chest Hospital of China

In practice, multiple sources of uncertainty need to be treated in nurse scheduling. The problem involves multiple
conflicting objectives such as satisfying demand coverage requirements and maximizing nurses’ preferences subject to a
variety of constraints imposed by legal regulations, personnel policies and many other hospital-specific requirements. The
aim of this research is twofold: Firstly, to apply SPA (set pair analysis) theory to the nurse scheduling problem (NSP) to
treat uncertainties and to model and solve the nurse schedule assessment problem. Secondly, to integrate the nurse schedule
assessment model with GA (genetic algorithm) to establish a nurse scheduling approach. A case study of nurse scheduling
in a surgical unit of Guangzhou Chest Hospital in China is presented to validate the approach.

Keywords: nurse scheduling problem; set pair analysis; genetic algorithm

(Received 27 Feb 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION

Nurse scheduling problem (NSP) is a highly constrained scheduling problem which involves generating individual
schedules for nurses over a planning period. Usually, the period is a week or a number of weeks. At the end of a period, the
time table of the next period is to be determined. The nurses need to be assigned to possible shifts in order to meet the
constraints, and to maximize the schedule quality by meeting the nurses’ requests and wishes as much as possible.
Nurse scheduling is an NP-complete problem. It is hard to obtain a high-quality schedule via an automatic approach because various constraints, including legal regulations, management objectives, and nurses' requests, need to be considered. Thus, nurse scheduling is still often solved manually in many hospitals in practice.
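For readers unfamiliar with set pair analysis, the sketch below shows the basic connection-degree computation on which a schedule assessment can be built: the constraints checked for a candidate roster are classified as satisfied (identity), uncertain or partially satisfied (discrepancy), or violated (contrary), giving a connection degree a + b·i + c·j with a + b + c = 1 and j = -1. The constraint counts and the choice of i below are hypothetical, and this generic illustration of SPA is not the assessment model developed in this paper.

    # Generic set pair analysis connection degree mu = a + b*i + c*j  (a + b + c = 1, j = -1).
    def connection_degree(satisfied, uncertain, violated, i=0.0, j=-1.0):
        total = satisfied + uncertain + violated
        a, b, c = satisfied / total, uncertain / total, violated / total
        return a + b * i + c * j, (a, b, c)

    # hypothetical check of 20 scheduling constraints for one candidate roster
    mu, (a, b, c) = connection_degree(satisfied=15, uncertain=3, violated=2, i=0.5)
    print(f"a={a:.2f} b={b:.2f} c={c:.2f} connection degree={mu:.2f}")

A score of this kind can serve as (part of) the fitness value that a genetic algorithm maximizes when searching over candidate rosters.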
A considerable number of relevant studies on the nurse scheduling problem can be found in the literature. The proposed approaches can be divided into three types: mathematical programming approaches, heuristic approaches, and AI (Artificial Intelligence) approaches (Cheang et al., 2003; Burke et al., 2004).
The mathematical programming approaches adopt traditional operational research methods, such as linear programming,
integer programming, and goal programming, to solve the objective optimization problem in nurse scheduling. The
objectives of nurse scheduling involve minimizing the number of nurses, maximizing the satisfaction of nurses' requests, and minimizing costs.
Warner (1976) proposed a nurse scheduling system, which poses the scheduling decision as a large multiple-choice
programming problem whose objective function quantifies preferences of individual nursing personnel concerning length of
work stretch, rotation patterns, and requests for days off. Bartholdi et al. (1980) presented an integer linear programming
model with cyclically structured 0-1 constraint matrix for cyclic scheduling. Bailey et al. (1985) utilized linear
programming for personnel scheduling when alternative work hours are permitted.
Heuristic approaches, especially meta-heuristic approaches, have shown their advantages in solving non-linear and
complex problems. They are generally better suited for generating an acceptable solution in cases where the constraint load
is extremely high and indeed in cases where even feasible solutions are very difficult to find. In recent years, the
meta-heuristic approaches, such as genetic algorithm, simulated annealing algorithm, and ant colony optimization algorithm,
have been adopted to solve nurse scheduling problem. Aickelin et al. (2003) presented a genetic algorithms approach to a
nurse scheduling problem arising at a major UK hospital. The approach used an indirect coding based on permutations of
the nurses, and a heuristic decoder that builds schedules from these permutations. Kawanaka et al. (2001) proposed a
genetic algorithm based method of coding and genetic operations with their constraints for NSP. The exchange of shifts is
done to satisfy the constraints in the coding and after the genetic operations. Thompson (1996) developed a
simulated-annealing heuristic for shift scheduling using employees having limited availability and, by comparing its
performance to that of an efficient optimal integer programming model, demonstrated its effectiveness. Gutjahr et al. (2007)
described the first ant colony optimization (ACO) approach applied to nurse scheduling, analyzing a dynamic regional
problem.
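To make the indirect, permutation-based GA encoding mentioned above concrete, the following minimal Python sketch decodes a permutation of nurses into a one-day schedule with a greedy heuristic. It is only an illustration of the general idea (cf. Aickelin et al.), not the authors' implementation; the shift demands, availabilities and fitness measure are hypothetical placeholders.

```python
import random

# Hypothetical problem data: required coverage per shift and each
# nurse's availability for the day.
DEMAND = {"early": 2, "late": 2, "night": 1}
AVAILABILITY = {
    "N1": {"early", "late"},
    "N2": {"early", "night"},
    "N3": {"late", "night"},
    "N4": {"early", "late", "night"},
    "N5": {"late"},
}

def decode(permutation):
    """Greedy decoder: visit nurses in the order given by the permutation
    and assign each one to the most understaffed shift she can work."""
    remaining = dict(DEMAND)
    schedule = {}
    for nurse in permutation:
        feasible = [s for s in AVAILABILITY[nurse] if remaining[s] > 0]
        if feasible:
            shift = max(feasible, key=lambda s: remaining[s])
            schedule[nurse] = shift
            remaining[shift] -= 1
    uncovered = sum(remaining.values())   # fitness: fewer coverage gaps is better
    return schedule, uncovered

# A GA would evolve the permutations; here we just sample a few at random.
best = min((decode(random.sample(list(AVAILABILITY), len(AVAILABILITY)))
            for _ in range(100)), key=lambda r: r[1])
print(best)
```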
Many results of artificial intelligence research were also used to solve NSP. Petrovic et al. (2003) proposed a new
scheduling technique for capturing rostering experience using case-based reasoning methodology. Examples of previously
International Journal of Industrial Engineering, 19(10), 401-411, 2012.

A FRAMEWORK OF INTEGRATED RECYCLABILITY TOOLS
FOR AUTOMOBILE DESIGN

Novita Sakundarini1, Zahari Taha2,
Raja Ariffin Raja Ghazilla1, Salwa Hanim Abdul Rashid1, Julirose Gonzales1
1 Department of Engineering Design and Manufacture
Center for Product Design and Manufacturing
University of Malaya, 50603 Kuala Lumpur, MALAYSIA
2 Faculty of Mechanical Engineering
University Malaysia Pahang, 26600 Pekan, Pahang, MALAYSIA

Corresponding author: N. Sakundarini, email: novitas73@siswa.um.edu.my

Automobiles are a major transportation choice for societies around the world. In many countries, the automotive
industry is one of the drivers of economic growth, job creation and technological advancement.
Although the automotive industry yields promising returns, managing disposal at the end of a vehicle's
life is challenging. An automobile is a very complex product comprising thousands of
components made from various materials that need to be treated separately. In addition, the short supply of
natural resources provides opportunities to reuse, remanufacture or recycle automotive
components. The End of Life Vehicle (ELV) Directive launched by the European Union mandates that the recyclability
rate of automobiles must reach 85% by 2015. The aim of this legislation is to minimize the impact of end-of-life
vehicles, contributing to the prevention, preservation and improvement of environmental quality and to energy
conservation. Vehicle manufacturers and suppliers are requested to include these aspects at the earlier stages of the
development of new vehicles, in order to facilitate the treatment of vehicles at the time when they reach the
end of their life. Therefore, the automobile industry has to establish voluntary action plans for ELVs, with
numerical targets to improve the ELV recycling rate, reduce automotive shredder residue (ASR) landfill
volume, and reduce lead content. Many innovative approaches to improving recyclability have been
implemented, but more intelligent solutions are still called for that integrate recyclability evaluation into the
product development stage. This paper reviews some of the current innovative approaches used to
improve recyclability and introduces a framework for an integrated recyclability tool to improve product
recyclability throughout the development phase.

Keywords: End of Life Vehicle, disposal, product life cycle, ELV Directive, recyclability.

(Received 2 June 2009; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
The automobile industry meets an essential need of society by supporting ease of mobility. According to the OECD, the total
number of vehicles is expected to increase by 32% from 1997 to 2020 (Kanari et al., 2003). In Europe, approximately 23
million automobiles were produced in 2007, while in Asia there were 30 million units, and the numbers
increase every year (Pomykala et al., 2007). An automobile comprises thousands of parts, of which 74-75% are made
from ferrous and non-ferrous materials and 8-10% from plastics; typically less than 75% of
the vehicle weight is recycled and the rest is not. This situation increases the demand for landfill space, and
little space remains available to treat this disposal.
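To make the mass-based recyclability rate referred to by the ELV Directive concrete, the sketch below computes a simple rate (recyclable mass over total vehicle mass) for a hypothetical bill of materials. The component list, masses and recyclability flags are illustrative assumptions, not data from this paper.

```python
# Hypothetical bill of materials: (component, mass in kg, recyclable?)
BOM = [
    ("body-in-white steel", 350.0, True),
    ("aluminium engine block", 110.0, True),
    ("mixed-plastic dashboard", 45.0, False),
    ("glass", 30.0, True),
    ("tyres", 40.0, True),
    ("shredder residue (foams, sealants)", 25.0, False),
]

def recyclability_rate(bom):
    """Mass-based recyclability: recyclable mass divided by total mass."""
    total = sum(mass for _, mass, _ in bom)
    recyclable = sum(mass for _, mass, ok in bom if ok)
    return recyclable / total

# Compare against the 85% target mandated by the ELV Directive.
print(f"Recyclability rate: {recyclability_rate(BOM):.1%}")
```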
According to Kumar and Putnam (2008), the automotive recycling infrastructure successfully recovers 75% of the
material weight in end-of-life vehicles, mainly through ferrous metal separation. However, the industry faces
significant challenges as automotive manufacturers increase the use of non-ferrous and non-metallic materials. Vehicle
composition has been shifting toward light materials such as aluminium and polymers, which consequently have a higher
impact on the environment. Vehicles affect the environment through their entire life cycle via energy consumption, waste
generation, greenhouse gases, hazardous substance emissions and disposal at the end of their life (Kanari et al., 2003).
To overcome this problem, the European Union has established the EU Directive on end-of-life vehicles, which stipulates
that by 2015 the recyclability rate of automobiles must reach 85%. According to the EU Directive, recyclability means the
potential for recycling of component parts or materials diverted from an end-of-life vehicle. Vehicle manufacturers and their
suppliers are requested to include this aspect at the earlier stages of new vehicle development, in order to facilitate
the treatment of vehicles when they reach their end of life. Many countries now refer to the EU
legislation and try to demonstrate a strategy for fulfilling this requirement by using fewer non-recyclable materials in
their products, accounting for energy usage, limiting waste streams, etc. Additionally, as consumption increases, raw



International Journal of Industrial Engineering, 19(11), 412-427, 2012.

THE OPERATION OF VENDING MACHINE SYSTEMS
WITH STOCK-OUT-BASED, ONE-STAGE ITEM SUBSTITUTION

Yang-Byung Park, Sung-Joon Yoon

Department of Industrial and Management Systems Engineering, College of Engineering,


Kyung Hee University, 1 Seocheon-dong, Giheung-gu, Yongin-si, Gyeonggi-do 446-701, Republic of Korea

Corresponding author’s e-mail: {Yang-Byung Park, ybpark@khu.ac.kr}

The operation of vending machine systems presents a decision-making problem consisting of item allocation to storage
compartments, inventory replenishment, and vehicle routing, all of which have critical effects on system profit. In this
paper, we propose a two-phase solution with an iterative improvement procedure for the operation problem with stock-out-
based, one-stage item substitution in vending machine systems. In the first phase, the item allocation to storage
compartments and the replenishment intervals of vending machines are determined by solving a non-linear integer
mathematical model for each machine. In the second phase, vehicle routes for replenishing vending machine inventories are
determined by applying the savings-based algorithm, which minimizes the sum of transportation and shortage costs. The
accuracy of the solution is improved by iteratively executing the two phases. The optimality of the proposed solution is
evaluated on small test problems. We present an application of the proposed solution to an industry problem and carry out
computational experiments on test problems to evaluate the effectiveness of the stock-out allowance policy with one-stage
item substitution compared to the no-stock-out allowance policy with respect to system profit. The results show the
substantial economic advantage of the stock-out allowance policy. Sensitivity analysis indicates that some input variables
significantly impact the effectiveness of this policy.

Significance: A no-stock-out policy at vending machines may cause excess transportation and inventory costs. Allowing
stock-outs and substitutions for stock-out items might increase the profit of the vending machine system. A
proposed two-phase heuristic generates high quality solutions to the operation problem with stock-out-based,
one-stage item substitution in vending machine systems. The results of the computational experiments with
the proposed heuristic demonstrate a substantial economic advantage of the stock-out allowance policy over the
no-stock-out allowance policy and identify environments favorable to the stock-out allowance policy. The
proposed two-phase solution can be modified easily for application to various retail vending settings under a
vendor-managed inventory scheme.

Keywords: Vending machine system, inventory management, operation problem, item substitution

(Received 1 Jan 2012; Accepted in revised form 7 Oct 2012)

1. INTRODUCTION
Vending machines have become an essential part of daily life in many countries. Their spread is especially important from
an environmental perspective because they enable consumers in remote locations to make purchases without having to
drive long distances. The USA is estimated to have over four million vending machines, with retail sales over $30 billion
annually. Japan's vending machine density is the highest in the world. The number of vending machines in South Korea has
increased over 10% every year in recent years (Korea Vending Machine Manufacturers Association, 2009). Most vending
machines sell beverages, food, snacks, or cigarettes. Recently, they have expanded to include tickets, books, flower pots,
and medical supplies like sterile syringes.
Vending machine management companies manage a network of vending machines in dispersed locations. A company
assigns 100 to 200 vending machines to each business office based on location, and each business office
manages its machines using 10 to 20 vehicles. An example of a vending machine system is depicted in Figure 1. Under a
vendor-managed inventory scheme, the business office is responsible for coordinating item allocation to vending machine
storage compartments, inventory replenishment, and vehicle routing, with the objective of maximizing system profit. These
decisions and management practices are referred to as the operation problem for vending machine systems (OPVMS).
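The savings-based routing idea used in the second phase can be illustrated with a bare-bones Clarke-Wright computation. The depot and machine coordinates below are invented, and the actual algorithm in this paper additionally accounts for shortage costs, capacities and the replenishment intervals, which this sketch omits.

```python
import math
from itertools import combinations

# Hypothetical coordinates: index 0 is the business office (depot),
# 1..4 are vending machines to be replenished on this day.
POINTS = [(0, 0), (4, 1), (5, 3), (1, 6), (6, 6)]

def dist(i, j):
    (x1, y1), (x2, y2) = POINTS[i], POINTS[j]
    return math.hypot(x1 - x2, y1 - y2)

# Clarke-Wright savings: s(i, j) = d(0, i) + d(0, j) - d(i, j), largest first.
savings = sorted(
    ((dist(0, i) + dist(0, j) - dist(i, j), i, j)
     for i, j in combinations(range(1, len(POINTS)), 2)),
    reverse=True,
)

# Start with one round trip per machine, then merge routes in order of savings
# while the two machines sit at mergeable ends of their current routes.
routes = [[i] for i in range(1, len(POINTS))]

def find_route(k):
    return next(r for r in routes if k in r)

for s, i, j in savings:
    ri, rj = find_route(i), find_route(j)
    if ri is rj:
        continue
    if ri[-1] == i and rj[0] == j:
        ri.extend(rj)
        routes.remove(rj)
    elif rj[-1] == j and ri[0] == i:
        rj.extend(ri)
        routes.remove(ri)

print(routes)   # each inner list is one replenishment tour starting and ending at the depot
```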



International Journal of Industrial Engineering, 19(11), 428-443, 2012.

INDUSTRY ENERGY EFFICIENCY ANALYSIS IN NORTHEAST BRAZIL:
PROPOSAL OF METHODOLOGY AND CASE STUDIES

Miguel Otávio B. C. Melo1, Luiz Bueno da Silva2, Sergio Campello3
1,2 Universidade Federal da Paraíba
Cidade Universitária, João Pessoa – PB, Brazil 58051-970
mobcmelo@ct.ufpb.br, bueno@ct.ufpb.br
Tel.: +55 83 32167685; fax: +55 83 32167549
3 Portal Tecnologia
Rua Joao Tude de Melo 77
Recife-PE, Brazil 52060-010
sergio@portaltecnologia.com.br

Energy is of vital importance since it can account for up to one-third of product cost. Energy can also be considered a
strategic input for the establishment of any economic and social development policy. Electricity is the basis of industrial
production, agriculture and the service chain; hence, reducing the cost of this input is vital. Doing so
produces great benefits for the production chain by making companies more competitive, and people benefit because the
final price of products becomes cheaper. The aim of this paper is to present a new methodology for assessing industrial
energy efficiency, to identify points of energy loss and the sectors most affected within the production process, and to
propose mitigation measures.

Keywords: Energy Efficiency; Clean Energy; Industrial Energy Management

(Received 8 Mar 2011; Accepted in revised form 3 Oct 2012)

1. INTRODUCTION

Energy management in industry or commerce should not be limited to meeting demand and taking energy-efficiency
measures; it should also encompass knowledge of energy policies and rules, quality certificates, and environmental and
CO2 certificates (Cullen et al. 2010, Siitonen et al. 2010).
Several industrial sectors have already taken opportunities to improve energy efficiency through thermal systems, efficient
motors, thermally insulated buildings, efficient automated cooling, expert systems, and more efficient compressed air,
chilled water and boiler systems (Laurijssen et al. 2010, Hasanbeigi et al. 2010, Kirschen et al. 2009, Hammond, 2007).
In domestic industries, it is common to apply conventional techniques to the operation of motor systems. This reality
motivates studies in this sector that propose improvements to the production system. Noteworthy measures include the
replacement of conventional induction motors with high-yield motors, and motor drive methods using direct starters,
star-delta starters, soft-starters, and frequency inverters, the latter mainly used in processes that require changing the
motor shaft speed (Panesi, 2006).
Energy is of vital importance since it can account for up to one-third of product cost. Energy can also be considered a
strategic input for the establishment of any economic and social development policy. Electricity is the basis of industrial
production, agriculture and the service chain; hence, reducing the cost of this input is vital. Doing so produces great
benefits for the production chain by making companies more competitive, and people benefit because the final price of
products becomes cheaper.
The aim of this paper is to present a new methodology for assessing industrial energy efficiency, to identify points of
energy loss and the sectors most affected within the production process, and to propose mitigation measures.

2. GENERAL CONSIDERATIONS
From the scope of production chains, energy efficiency is concerned with productivity, which in turn is linked to economic
results and management. The management aspects are those that relate to project deployment and implementation, hiring,
training and retraining of personnel, as well as system evaluation in general (Jochen et al. 2007).
The most important energy-efficiency evaluation factors in economic terms are data consistency, behavior of the
consumers, and incentive for participation as well as implementation of energy-efficiency programs (Vine et al. 2010).
International Journal of Industrial Engineering, 19(11), 444-455, 2012.

PRODUCT PLATFORM SCREENING AT LEGO


Niels Henrik Mortensen1, Thomas Steen Jensen1 and Ole Fiil Nielsen2

Department of Mechanical Engineering1


Technical University of Denmark
Niels Koppels Allé, DTU Bygn. 404
DK-2800 Kgs. Lyngby, Denmark
Email: Niels Henrik Mortensen, nhm@mek.dtu.dk, Ole Fiil Nielsen, ofn@mek.dtu.dk

Product and Marketing Development2


LEGO Group A/S
Hans Jensensvej/Systemvej
DK-7190 Billund, Denmark
Email: Thomas Steen Jensen, thomas.steen.jensen@europe.lego.com

Product platforms offer great benefits to companies developing new products in highly competitive markets. Literature
describes how a single platform can be designed from a technical point of view, but rarely mentions how the process
begins. How do companies identify possible platform candidates, and how do they assess if these candidates have enough
potential to be worth implementing? Danish toy manufacturer LEGO has systematically gone through this process twice.
The first time the results were poor; almost all platform candidates failed. The second time, though, was largely
successful after a few changes had been applied to the initial process layout. This case study shows how companies must
limit the number of simultaneous projects in order to keep focus. Primary stakeholders must be involved from
the very beginning, and short presentations of the platform concepts should be given to them throughout the whole process
to ensure commitment.

Significance: Product platforms offer great benefits to companies developing new products in highly competitive markets.
Literature describes how a single platform can be designed from a technical point of view, but rarely mentions how the
process begins. This paper describes how platform candidates are identified and synchronized with product development.

Keywords: Product platform, Product family, Multi-product development, Product architecture, Platform assessment

(Received 8 Jul 2011; Accepted in revised form 3 Oct 2012)

1. INTRODUCTION
Numerous publications show the benefits of product platforms. Companies use platforms to develop not a single, but
multiple products (i.e. a product family) simultaneously. This may lead to increased sales due to more customized products
as well as decreased costs due to reuse, making product platforms very profitable for product developing companies.
Designing product platforms is not straightforward, though.
How do companies start designing a product platform? Often they start by looking for a suitable platform candidate.
Many good examples of product platforms exist in literature, and companies will often look for similar candidates within
their own company.
But what if no apparent low-hanging fruits are available? How does the company then start designing a product platform?
Or what if the low-hanging fruits are too plentiful? How does the company then choose among these candidates, or can
they all be undertaken simultaneously?
In the literature cases, the case company always starts by having a generic product, which can then be analyzed and
modularized. The problem for most companies, however, is that they have no generic product. Instead, they have a range of
different products with different structures and different functions, and various restrictions like backwards-compatibility,
license-agreements, and existing production equipment prevent the company from changing this fact.
Secondly, how do companies know if their platforms will be beneficial? Can they simply assume that all candidates will
evolve into profitable platforms?
Although cases where platforms fail are very rare in the literature, they are not unheard of in industry. It is only natural that
most companies would not want to share their unsuccessful platform experiences with the rest of the world. Still, many
companies that have finally achieved some degree of success describe the process of getting to this level as a
struggle in which several important platform initiatives failed along the way.



International Journal of Industrial Engineering, 19(12), 456-463, 2012.

MULTI-ASPIRATION GOAL PROGRAMMING FORMULATION


Hossein Karimi, Mehdi Attarpour
Department of Industrial Engineering, K.N. Toosi University of Technology, Tehran, Iran,
Postal Address: Iran, Tehran, 470 Mirdamad Ave. West, 19697, Postal Code: 1969764499.
Corresponding author email: hkarimi@shahed.ac.ir

Goal Programming (GP) is a significant analytical approach devised to solve many real-world problems. In many
marketing or decision management problems, decision makers may face both multi-segment aspiration levels and
multi-choice goal levels. Such problems cannot be solved by current GP techniques such as multi-choice goal
programming or multi-segment goal programming alone. This paper provides a new idea that integrates
multi-segment goal programming and multi-choice goal programming in order to solve multi-aspiration problems.
Moreover, it develops the concepts of these models substantially for real applications; in addition, a real problem is
provided to demonstrate the usefulness of the proposed model. The results of the problem are analyzed and, finally,
conclusions are drawn.

Keywords: Multi-aspiration levels; Multi-segment goal programming; Multi-choice goal programming; Decision
making; Marketing.

(Received 14 Mar 2011; Accepted in revised form 1 Nov 2012)

1. INTRODUCTION
Goal programming is a form of linear programming that considers multiple goals that are often in conflict with each
other. With multiple goals, not all of them can usually be achieved fully. For example, an organization may want to:
(1) maximize profits and increase after-sales services; (2) increase product quality and reduce product cost and (3)
decrease credit sales and increase total sales. GP was originally introduced by Charnes and Cooper (1961). Then, it
was extended by Lee (1972), Ignizio (1985), Li (1996), Tamiz et al. (1998), Romero (2001), Chang (2004, 2007)
and Liao (2009). Goal programming seeks to minimize the deviations among the desired goals and the actual results
according to the assigned priorities. The objective function of a goal programming model is provided in terms of the
deviations from the target goals. The general GP model can be described as follows:
Minimize   $\sum_{i=1}^{n} \lvert f_i(x) - g_i \rvert$                                          (1)

where $f_i(x)$ and $g_i$ are the linear function and the goal of the $i$-th objective, respectively, and $n$ is the number of
goals. The GP model above can be solved with many techniques, such as Lexicographic GP (LGP), Weighted GP (WGP),
and so on. Some of these GP formulations are briefly explained below.
In the WGP model, the achievement function consists of the unwanted deviation variables, and the weight of each one
represents its importance. Ignizio (1976) provided the mathematical formulation of the WGP model, which is as follows:

min   $\sum_{i=1}^{n} (\alpha_i d_i^+ + \beta_i d_i^-)$                                          (2)
subject to
$f_i(x) - d_i^+ + d_i^- = g_i, \quad i = 1, 2, \ldots, n$                                        (3)
$d_i^+, d_i^- \ge 0, \quad i = 1, 2, \ldots, n$                                                  (4)
$x \in F$                                                                                        (5)

where $d_i^+$ and $d_i^-$ are the positive and negative deviations, respectively, between the $i$-th objective and its goal,
and $\alpha_i$ and $\beta_i$ are the positive weights of the deviations.
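As a small illustration of the WGP formulation (2)-(5), the sketch below solves a two-goal example with the PuLP library. The goals, weights and the hard constraint standing in for $x \in F$ are invented for demonstration and are not the case problem studied in this paper.

```python
import pulp

# Decision variables of the underlying plan (x in F).
x1 = pulp.LpVariable("x1", lowBound=0)
x2 = pulp.LpVariable("x2", lowBound=0)

# Two illustrative goals: (f_i(x), g_i, alpha_i, beta_i)
goals = {"profit": (3 * x1 + 2 * x2, 12, 1.0, 2.0),
         "labour": (x1 + 2 * x2, 10, 3.0, 0.0)}

prob = pulp.LpProblem("weighted_goal_programming", pulp.LpMinimize)
dev = {}
for name, (f, g, alpha, beta) in goals.items():
    d_pos = pulp.LpVariable(f"d_pos_{name}", lowBound=0)   # overachievement d_i+
    d_neg = pulp.LpVariable(f"d_neg_{name}", lowBound=0)   # underachievement d_i-
    prob += f - d_pos + d_neg == g                         # goal constraint (3)
    dev[name] = (d_pos, d_neg, alpha, beta)

# Objective (2): weighted sum of the unwanted deviations.
prob += pulp.lpSum(alpha * dp + beta * dn for dp, dn, alpha, beta in dev.values())

# A hard system constraint standing in for x in F (5).
prob += x1 + x2 <= 6

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(x1.value(), x2.value(),
      {k: (dp.value(), dn.value()) for k, (dp, dn, _, _) in dev.items()})
```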
In the LGP model, the achievement function is structured as an ordered vector whose dimension matches Q, the number
of priority levels present in the model, and whose components are the unwanted deviation variables of the goals placed
in the corresponding priority level. The mathematical formulation of an LGP model



International Journal of Industrial Engineering, 19(12), 464-474, 2012.

COMPATIBLE COMPONENT SELECTION UNDER UNCERTAINTY VIA
EXTENDED CONSTRAINT SATISFACTION APPROACH

Duck Young Kim1, Paul Xirouchakis2, Young Jun Son3
1 Ulsan National Institute of Science and Technology, Republic of Korea
2 École Polytechnique Fédérale de Lausanne, Switzerland
3 University of Arizona, United States
Corresponding author email: dykim@unist.ac.kr

This paper deals with compatible component selection problems, where the goal is to find combinations of
components satisfying design constraints given a product structure, component alternatives available in design
catalogue for each subsystem of the product, and a preliminary design constraint. An extended Constraint
Satisfaction Problem (CSP) is introduced to solve component selection problems considering uncertainty in the
values of design variables. To handle a large number of all possible combinations of components, the paper proposes
a systematic filtering procedure and an efficient method to estimate a complex feasible design space to facilitate
selection of component combinations having more feasible solutions. The proposed approach is illustrated and
demonstrated with a robotic vacuum cleaner design example.

Keywords: Component Selection, Configuration, Design Constraint, Constraint Satisfaction Problem, Filtering

(Received 24 Apr 2012; Accepted in revised form 1 Nov 2012)

1. INTRODUCTION AND BACKGROUND


The product design process involves four main phases: (1) product specification, (2) conceptual design, (3)
embodiment design, and (4) detailed design. At each phase, design teams first generate or search for several design
alternatives, and select the best one considering design criteria and constraints. In conceptual design, for instance,
this generation and selection process consists of four main steps (Pahl and Beitz, 1988) (see Figure 1): (1)
decomposition-establish a function structure of a product, (2) definition-search for components to fulfil the sub-
functions and define a preliminary design constraint, (3) filtering-combine components to fulfil the overall function,
select suitable combinations, and firm up into concept variants, and (4) selection-evaluate concept variants against
technical and economic design criteria and select the best one. This divergence and convergence of the search space
in design is intended to allow design teams to have unrestrained creativity by producing many initial component
alternatives for subsystems, as well as to support the filtering and selection processes to find best design alternatives
for a product.
The focus of this paper is on “filtering” in Figure 1. In particular, we consider a constraint-based compatible
component selection problem under uncertainty in the values of design variables, especially in redesign and variant
design environments. This problem is a combinatorial selection problem, where a component satisfying design
constraints is chosen for each subsystem from a pre-defined set (i.e. a design catalogue). It is compounded by multiple
values or continuous spaces of design variables and discrete choices of components. In this work, it is assumed that a
design catalogue (containing component alternatives for the subsystems comprising a product) and a preliminary design
constraint are given as input information (see Table 3). By generalizing the problem characteristics found, we
formulate the constraint-based component selection problem as an extended Constraint Satisfaction Problem
(CSP). Finally, a systematic filtering procedure and an efficient method to estimate a complex feasible design space
are proposed to select component combinations having more feasible solutions.
A product usually consists of a number of subsystems (see Table 1), each of which has its own design
alternatives, namely component alternatives. Table 2 lists the major variables used in this paper. Any
combination of components of all subsystems can be a potential design alternative for the product. The selected
components must be mutually compatible to achieve the overall functionality, where compatibility means that
components are designed to work with others without adjustment (Patel et al., 2003). Therefore, design teams need to
find, from all possible combinations, the compatible combinations of components satisfying the design constraints.
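A brute-force version of this filtering step (without the interval and uncertainty handling the paper adds) can be written as a constraint check over the Cartesian product of the design catalogue. The subsystems, component attributes and the preliminary constraints below are hypothetical, chosen only to show the combinatorial filtering idea.

```python
from itertools import product

# Hypothetical design catalogue: component alternatives per subsystem,
# each with (power draw in W, mass in kg).
CATALOGUE = {
    "motor":   {"M1": (30, 0.9), "M2": (22, 0.7)},
    "battery": {"B1": (0, 1.2), "B2": (0, 0.8)},
    "sensor":  {"S1": (3, 0.1), "S2": (6, 0.2)},
}

def feasible(combo):
    """Preliminary design constraints: total power <= 35 W, total mass <= 1.8 kg."""
    power = sum(CATALOGUE[sub][c][0] for sub, c in combo)
    mass = sum(CATALOGUE[sub][c][1] for sub, c in combo)
    return power <= 35 and mass <= 1.8

subsystems = list(CATALOGUE)
combos = [tuple(zip(subsystems, choice))
          for choice in product(*(CATALOGUE[s] for s in subsystems))]
survivors = [dict(c) for c in combos if feasible(c)]
print(survivors)   # compatible combinations that pass the filtering step
```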



International Journal of Industrial Engineering, 19(12), 476-487, 2012.

MONITORING TURNAROUND TIME USING AN AVERAGE CONTROL
CHART IN THE LABORATORY

Shih-Chou Kao
Graduate School of Operation and Management, Kao Yuan University, Taiwan
Corresponding author email: t80132@cc.kyu.edu.tw

A long turnaround time (TAT) prolongs patient waiting time, increases hospital costs and decreases
service satisfaction. None of the studies on control charts and medical care has applied control charts to monitor
TAT or proposed a probability function for the distribution of the mean. This study proposes a general formula for
the probability function of the distribution of the mean. The control limits of the average chart were determined
according to the type I risk (α) and the standardized Weibull, lognormal and Burr distributions. Furthermore,
compared with weighted variance (WV), skewness correction (SC) and traditional Shewhart control charts using
α = 0.0027, the proposed control chart is superior in terms of the α-risks for a skewed
process. An example of laboratory TAT for a medical center is presented to illustrate these findings.

Significance: This study proposes a control chart for the TAT of the complete blood count (CBC) test in the laboratory of a
medical center. Constants of the average control chart are calculated by fixing the type I risk (α = 0.0027) for
three distributions (Weibull, lognormal and Burr), using the proposed general model for the probability
density function of the distribution of the mean. The average control chart using the proposed method is
superior to other control charts in terms of the type I risks for a skewed process.

Keywords: Average control chart, distribution of the mean, skewed distribution, type I risk, turnaround time.

(Received 23 Nov 2011; Accepted in revised form 3 Oct 2012)

1. INTRODUCTION
Timeliness is one of the most important characteristics of a laboratory test, but its importance has often been
overlooked. The timeliness with which laboratory staffs deliver test results is a manifest parameter of laboratory
service and a general standard by which clinicians and organizations judge laboratory performance (Valenstein,
1996).
The College of American Pathologists’ Q-Probes study in 1990 identified that the turnaround time (TAT) from
phlebotomy to reporting of results is the most important characteristic for laboratory testing and provided TATs for
various laboratory tests (Howanitz et al., 1992). Many studies also reported that poor laboratory performance in
terms of long TAT had a major impact on patient care (Vacek, 2002; Montalescot et al., 2004; Singer et al., 2005).
Until now, essentially all TAT studies have focused on inpatient testing (especially of an emergency nature),
outpatient testing and outfits (Howanitz and Howanitz, 2001; Novis et al. 2002; Steindel and Jones, 2002; Novis,
2004; Howanitz, 2005; Chien et al. 2007; Guss et al, 2008; Singer et al. 2008; Qureshi et al, 2010). Most of these
studies discussed the main factors that significantly affect the TAT, such as day of the week, drawing location,
ordering method and delivery method. Little research has proposed a statistical process control method to monitor the
TAT.
Valenstein and Emancipator (1989) noted that the distribution of TAT data is non-normal. The skewed nature of
TAT data distribution may result in specimens with excessively long TATs (Steindel and Novis, 1999). Hence, if the
traditional control charts based on the normality assumption are used to monitor a non–normal process, the
probabilities of a type I error (α) in the control charts increases as the skewness of the process increases (Bai and
Choi, 1995; Chang and Bai, 2001).
Most studies on statistical process control have used simulation to estimate the α and the related constants
of an average control chart. No previous research has derived a probability function of the distribution of the
mean for a skewed distribution. This study derives a general formula for the probability density function (pdf) of
the distribution of the mean and proposes an average control chart that is both simple to use and more effective for
monitoring out-of-control signals in the TAT process.
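The paper derives the pdf of the sample mean analytically; as a rough numerical stand-in (an assumption, not the authors' formula), the control limits of the average chart for a skewed TAT distribution can be approximated by simulating the distribution of the subgroup mean and taking its α/2 and 1-α/2 quantiles, as sketched below with a hypothetical Weibull TAT model.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.0027          # conventional overall type I risk
n = 5                   # subgroup size of the average chart
shape, scale = 1.5, 40  # hypothetical Weibull TAT model (minutes)

# Simulate the distribution of the subgroup mean for the skewed TAT process.
means = rng.weibull(shape, size=(200_000, n)).mean(axis=1) * scale

lcl, ucl = np.quantile(means, [alpha / 2, 1 - alpha / 2])
print(f"LCL = {lcl:.1f} min, CL = {means.mean():.1f} min, UCL = {ucl:.1f} min")
```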
In the area of statistical process control, α = 0.0027 is a well-known criterion for the design of a control
chart or the comparison of control charts. To monitor a non-normal process, many studies have designed
new control charts by splitting the α-risk equally between the two tails (Castagliola, 2000; Chan and Cui, 2003; Khoo



International Journal of Industrial Engineering, 20(1-2), 2–11, 2013.

A FRAMEWORK FOR SYSTEMATIC DESIGN AND OPERATION OF
CONDITION-BASED MAINTENANCE SYSTEMS: APPLICATION AT A
GERMAN SEA PORT

M. Lewandowski1, B. Scholz-Reiter1
1 BIBA - Bremer Institut für Produktion und Logistik GmbH, Germany
Corresponding author’s email: Marco Lewandowski, lew@biba.uni-bremen.de

Abstract: Ongoing improvement of logistics and intermodal transport leads to high requirements regarding availability of
machine resources like straddle carriers or gantry cranes. Accordingly, efficient maintenance strategies for port equipment
have to be established. The change to condition-based maintenance strategies promises to save resources while enhancing
availability and reliability. This paper introduces a framework of methods and tools that enable the systematic design of
condition-based maintenance systems on the one hand and offers integrated support for operating such systems on the other
hand. The findings are evaluated in a case study at a German seaport that illustrates the use of the system for managing the
process of equipping machines with sensors for condition monitoring as well as for bringing the system into the operation
phase.

Keywords: Maintenance, Maintenance Management, Condition Monitoring, Condition-based Maintenance, Sensor
Application

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
Globally distributed production structures and the corresponding supply networks have to work efficiently. Sea ports and
transhipment terminals, the backbone of Europe’s economy, have to possess lean structures and ensure seamless integration
into the supply chain. The ongoing improvement of logistics and intermodal transport with respect to throughput time and
cost reduction must satisfy future demands for scalable structures in times of economic growth and recession. Accordingly,
this leads to high requirements regarding efficient maintenance strategies for port equipment such as straddle carriers or
gantry cranes.
While cyclic and reactive maintenance actions are still the prevailing practice, the change to condition-based monitoring
of equipment is ongoing. Current research focuses on condition-based concepts for different applications, including the
monitoring of tools, pumps, gearboxes, electrical equipment, etc. (e.g. Al-Habaibeh et al., 2000; Garcia et al., 2006;
García-Escudero et al., 2011). Condition-based maintenance itself promises to make maintenance processes more efficient
(Al-Najjar, 2007; Sandborn et al., 2007), for instance through decentralized decision units in the form of cognitive sensor
applications at crucial components, so that the machine itself is able to trigger a maintenance action by means of automated
control and cooperation (e.g. Scholz-Reiter et al., 2007). Hence, this paper presents the exploration of a fleet management
case at a German seaport in which the specific requirements of the design and operating phases were examined and
transferred to an adapted systematic procedure model and a methodology framework.
The paper is organized as follows: The first chapter introduces the maintenance topic with the specific requirements
regarding port equipment. A state-of-the-art review covers topical work on condition-based maintenance in general and
corresponding endeavours to build a comprehensive framework for such systems. Chapter two presents the methodologies
that are part of a framework that enables and supports the design and operation of condition-based maintenance systems on
top of existing assets. Its application, based on a case study at a German seaport, verifies the applicability in chapter three.
The last chapter presents conclusions on the work done and gives an outlook on further research and work needed to put
such systems into practice.

1.1 Maintenance in General


The term maintenance describes the combination of all technical and administrative actions that have to be carried out to
retain the functioning condition of a technical system or to restore it to a state in which it can perform in the required
manner. To this end, the main aim of maintenance is to secure the preferably continuous availability of machines. Based on
this definition, the holistic view of the maintenance topic is clear. The processes based on the typical maintenance tasks
according to DIN EN 13306, as presented in Table 1, consequently require task-specific know-how.
International Journal of Industrial Engineering, 20(1-2), 12–23, 2013.  

A HYBRID SA ALGORITHM FOR INLAND CONTAINER
TRANSPORTATION

Won-Young Yun1, Wen-Fei Wang2, Byung-Hyun Ha1*
1 Department of Industrial Engineering
Pusan National University
30 Jangjeon-Dong, Geumjeong-Gu, Busan
609-735, South Korea
*Corresponding author’s e-mail: bhha@pusan.ac.kr
2 Department of Logistics Information Technology
Pusan National University
30 Jangjeon-Dong, Geumjeong-Gu
Busan 609-735, South Korea

Abstract: Inland container transportation refers to container movements among customer locations, container terminals,
and inland container depots in a local area. In this paper, we consider the inland transportation problem where containers
are classified into four types according to the destination (inbound or outbound) and the container state (full or empty). In
addition, containers can be delivered not only by truck but also by train when time windows are satisfied. We propose a
graph model to represent and analyze the problem, and develop a mixed-integer programming model based on the graph
model. A hybrid simulated annealing algorithm is proposed to obtain the near-optimal transportation schedule of containers.
The performance of the proposed algorithm is investigated by numerical experiments.
Keywords: Inland container transportation; time windows; intermodal transportation; hybrid simulated annealing (SA)

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION

By inland transportation, containers are transported from their shippers to a terminal and delivered from another terminal to
their receivers. This study deals with the inland container transportation problem by taking the total transportation cost into
account. We consider four types of containers: inbound full, outbound full, inbound empty, and outbound empty ones. The
transportation process depends on the type of a container. For example, outbound freight transportation by a container is
briefly described as follows. First, a truck is assigned to carry an empty container to a customer and the empty container is
unloaded at the customer location. Then, freight is packed into the container and the container becomes a full one. The full
container is loaded onto the truck again and delivered to a terminal directly or a railway station where the container is
transported to a terminal by train subsequently. Finally, the container is transferred to another terminal by vessel, where the
container gets into another inland container transportation system. In addition, we consider multimodal transportation by
truck and train, and further impose the constraint of time windows when a container can be picked up and unloaded at its
origin and destination, respectively. Hence, containers can be delivered either by truck or by truck and train together, as
long as time windows are satisfied.
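To make the simulated annealing component of the hybrid approach concrete, here is a generic SA skeleton applied to a toy sequencing version of the problem (cost equal to the travel distance of a single truck visiting all container jobs). The coordinates, neighbourhood move and cooling parameters are invented; the real algorithm additionally handles the four container types, trains and time windows.

```python
import math
import random

random.seed(1)

# Hypothetical customer/terminal locations visited by one truck, depot at index 0.
POINTS = [(0, 0), (2, 7), (6, 4), (9, 9), (5, 1), (8, 2)]

def tour_cost(order):
    """Length of the closed tour depot -> jobs in 'order' -> depot."""
    route = [0] + list(order) + [0]
    return sum(math.dist(POINTS[a], POINTS[b]) for a, b in zip(route, route[1:]))

current = list(range(1, len(POINTS)))
best = current[:]
temperature = 10.0
while temperature > 1e-3:
    i, j = random.sample(range(len(current)), 2)      # neighbour: swap two jobs
    candidate = current[:]
    candidate[i], candidate[j] = candidate[j], candidate[i]
    delta = tour_cost(candidate) - tour_cost(current)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current = candidate
        if tour_cost(current) < tour_cost(best):
            best = current[:]
    temperature *= 0.995                              # geometric cooling schedule

print(best, round(tour_cost(best), 2))
```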
There are many papers in which various methods are proposed to find optimal or good solutions for the inland container
transportation problem. Wen and Zhou (2007) developed a GA (genetic algorithm) to solve a container vehicle routing
problem in a local area. Jula et al. (2005) formulated truck container transportation problems with time constraints as an
asymmetric multi-traveling salesman problem with time windows (m-TSPTW). They applied a DP/GA (dynamic
programming and genetic algorithm) hybrid algorithm for solving large-size problems. Zhang et al. (2009) addressed a
similar problem: a graph model was built, and a clustering method and a reactive tabu search were proposed and their
performance compared. Liu and He (2007) decomposed a vehicle routing problem into several sub-problems according to
the vehicle-customer assignment structure and applied a tabu search algorithm to each sub-problem.
Intermodal transportation problems with time windows are more difficult to deal with, especially when a container is
related to more than one time window. Some researchers tried to transform and/or to relax the constraints related to time
windows. Lau et al. (2003) considered the vehicle routing problem with time windows under a limited number of vehicles,
and they provided a mathematical model to obtain the upper bound by selecting one of the latest-possible times to return to



 
International Journal of Industrial Engineering, 20(1-2), 24–35, 2013.

A METHOD FOR SIMULATION DESIGN OF REFRIGERATED
WAREHOUSES USING AN ASPECT-ORIENTED MODELING
APPROACH

G.S. Cho1, H.G. Kim2
1 Department of Port Logistics System,
Tongmyong University,
Busan, 608-711, Korea
2 Department of Industrial & Management Engineering,
Dongeui University,
Busan, 614-714, Korea
Corresponding author’s e-mail: GS Cho, gscho@tu.ac.kr

Refrigerated warehouses serve a buffer function in a logistics system to meet the various demands of consumers. Over 50%
of Korean refrigerated warehouses are located in Busan, and Busan has become a strategic region for cold chain industries.
This paper suggests an Aspect-Oriented Modeling Approach (AOMA) for refrigerated warehouses with which system
models can be designed and analyzed by simulation, considering the design conditions.
Significance: The AOMA is an analytic approach that adds Aspect-Oriented Modeling, which addresses crosscutting
concerns, to existing Object-Oriented Modeling. The purpose of this paper is to suggest a simulation
model using the AOMA for refrigerated warehouses. The suggested model can be utilized for
redesigning refrigerated warehouses, with easy reuse, extension and modification.
Keywords: Simulation, Refrigerated Warehouse, System Operation, Aspect-Oriented Modeling Approach.

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION

1.1 Background and Purpose of this Research


The refrigerated warehouse industry has grown as the consumption of fresh foods has increased. The Busan area in Korea
has become a strategic region for the refrigerated warehouse industry. Many refrigerated warehouses have been built since
the 2000s to meet demand, but the industry has been in trouble due to excess facilities, shared stevedoring, lower storage
fees, etc. (Kim et al., 2010). Refrigerated warehouses therefore need to support high value-added services to customers,
but so far their function has been focused only on storing items. Beyond the minimum necessity of considering design
factors such as layouts, facilities and items, operating alternatives are needed at the system level to enhance the operations
and performance of refrigerated warehouses supporting these services. Until now, the main function of refrigerated
warehouses has been restricted to storage, and there has been no systematic approach to solving the above-mentioned
problems.
For complex systems, Object-Oriented Modeling (OOM) has been utilized in other industrial domains
(Venketeswaran and Son, 2004). OOM, along with its applications in computer science, has long been the essential
reference for object-oriented technology, which in turn has evolved to join the mainstream of industrial-strength software
development. OOM is a modeling paradigm mainly used in computer programming; it emphasizes the use of
discrete, reusable code blocks that can stand on their own, take variables, perform a function, and return values. Aspect-
Oriented Modeling (AOM), an extension of OOM, may also contain interfaces to each model because models also
involve method interactions (Lemos et al., 2009). Such modeling techniques help separate out the different concerns
implemented in a software system, especially those that cannot be clearly mapped to isolated units of implementation.
The main idea of AOM is to improve on OOM by modularizing these types of concerns. The terms of AOM can be used
to describe the space of programmatic mechanisms for expressing crosscutting concerns (Kiczales et al., 1997). AOM is
built upon a conceptual framework and is able to denote the space of modeling elements for specifying crosscutting
concerns at a higher level of abstraction (Chavez and Lucena, 2002). Recently, an Aspect-Oriented Modeling Approach
(AOMA) has been suggested and applied to enhance the performance of simulation models based on AOM (France et al.,
2004, Wu et al., 2010). In this paper, we suggest and develop a prototype system which implements the class model
composition behavior specified in the composition metamodel to design the refrigerated warehouse system using the
AOMA.
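Aspect-oriented separation of a crosscutting concern can be illustrated in plain Python with a decorator. This is only a generic aspect-orientation illustration, not the composition metamodel used in this paper: a hypothetical temperature-logging concern is woven around warehouse operations without changing their code.

```python
import functools
import time

def log_temperature(func):
    """Crosscutting 'aspect': record a (hypothetical) chamber temperature check
    around every storage operation, without touching the operation itself."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"[{time.strftime('%H:%M:%S')}] temperature check before {func.__name__}")
        result = func(*args, **kwargs)
        print(f"[{time.strftime('%H:%M:%S')}] temperature check after {func.__name__}")
        return result
    return wrapper

class RefrigeratedWarehouse:
    """Core (object-oriented) model of storage operations."""
    @log_temperature
    def store(self, item):
        print(f"storing {item}")

    @log_temperature
    def retrieve(self, item):
        print(f"retrieving {item}")

RefrigeratedWarehouse().store("frozen tuna")
```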



International Journal of Industrial Engineering, 20(1-2), 36–46, 2013.

A MULTI-PRODUCT DYNAMIC INBOUND ORDERING AND SHIPMENT
SCHEDULING PROBLEM AT A THIRD-PARTY WAREHOUSE

B. S. Kim1, W. S. Lee2
1 Graduate School of Management of Technology
Pukyong National University
Busan 608-737, Korea
2 Department of Systems Management & Engineering
Pukyong National University
Busan 608-737, Korea
Corresponding Author’s Email: iewslee@pknu.ac.kr

Abstract: This paper considers a dynamic inbound ordering and shipment scheduling problem for multiple products
that are transported from a supplier to a warehouse by common freight containers. The following assumptions are made:
(i) each ordering in a period is immediately shipped in the same period, (ii) the total freight cost is proportional to the
number of containers used, and (iii) demand is dynamic and backlogging is not allowed. The objective of this study is
to identify effective algorithms that simultaneously determine inbound ordering lot-sizes and a shipment schedule that
minimize the total cost consisting of ordering cost, inventory holding cost, and freight cost. This problem can be shown
in NP-hard, and this paper presents a heuristic algorithm that exploits the properties of an optimal solution. Also, a
shortest path reformulation model is proposed to obtain a good lower bound. Simulation experiments are presented to
evaluate the performance of proposed procedures.

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION

For the past couple of decades, the reduction of transportation and warehousing costs has been an important issue for
enhancing logistics efficiency and demand visibility in a supply chain. Logistics alliances and specialized Third-Party
Logistics (TPL) providers have been growing to reduce these costs in industry. Over a dynamic planning horizon, the
scheduling of inbound ordering and shipping of products to a TPL warehouse by proper transportation modes at scheduled
times, and the control of dispatching lot sizes, including inventory control toward the customers, have become significantly
important for production and distribution management. Each warehouse purchases multiple products and uses freight
containers as the transportation unit to ship its purchased (or manufactured) products to retailers, which leads to managerial
decision problems including the lot sizes for each product, the container types used, the loading policy in containers, and
the number of containers used. This motivates us to investigate the optimal lot-sizing and shipment scheduling problem,
which is among the managerial decision problems that have arisen in TPL.
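Under the stated assumptions (immediate shipment, freight proportional to the number of containers used, no backlogging), the total cost of a candidate ordering plan can be evaluated as sketched below for a single product. The demands, cost parameters and container capacity are illustrative, and finding the best such plan across multiple products is what the heuristic in this paper addresses.

```python
import math

# Hypothetical single-product instance over 4 periods.
demand = [30, 10, 50, 20]
order_plan = [40, 0, 70, 0]       # candidate lot sizes (must cover demand, no backlog)
ORDER_COST = 100                  # fixed cost per period with a positive order
HOLD_COST = 1.0                   # per unit carried to the next period
FREIGHT = 80                      # cost per container used
CAPACITY = 25                     # units per container

def plan_cost(demand, plan):
    inventory, total = 0, 0.0
    for d, q in zip(demand, plan):
        if q > 0:
            total += ORDER_COST
            total += FREIGHT * math.ceil(q / CAPACITY)   # freight proportional to containers
        inventory += q - d
        assert inventory >= 0, "backlogging is not allowed"
        total += HOLD_COST * inventory                   # end-of-period holding cost
    return total

print(plan_cost(demand, order_plan))
```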
Several articles have attempted to extend the classical Dynamic Lot-Sizing Model (DLSM) incorporating
production-inventory and transportation functions together. Hwang and Sohn (1985) investigated how to
simultaneously determine the transportation mode and order size for a deteriorating product without considering
capacity restrictions on the transportation modes. Lee (1989) considered a DLSM allowing multiple set-up costs
consisting of a fixed charge cost and a freight cost, in which a fixed single container type with limited carrying capacity
is considered and the freight cost is proportional to the number of containers used. Fumero and Vercellis (1999)
proposed an integrated optimization model for production and distribution planning considering such operational
decisions as capacity management, inventory allocation, and vehicle routing. The solution of the integrated
optimization model was obtained using the Lagrangean relaxation technique. Lee et al. (2003) extended the works of
Lee (1989) by considering multiple heterogeneous vehicle types to immediately transport the finished product in the
same period it is produced. It is also assumed that each vehicle has a type-dependent carrying capacity and the unit
freight cost for each vehicle type is dependent on the carrying capacity. Lee et al. (2003) considered a dynamic model
for inventory lot-sizing and outbound shipment scheduling in the third-party warehousing domain. They presented a
polynomial time algorithm for computing the optimal solution. Jaruphongsa et al. (2005) analyzed a dynamic lot-sizing
model in which replenishment orders may be delivered by multiple shipment modes with different lead times and cost
functions. They proposed a polynomial time algorithm based on the dynamic programming approach. However, the
aforementioned works have not considered a multiple product problem.
Emily and Tzur (2005) considered a dynamic model of shipping multiple items by capacitated vehicles. They
presented an algorithm based on a dynamic programming approach. Norden and Velde (2005) dealt with a multiple
product problem of determining transportation lot-sizes in which the transportation cost function has piece-wise linear



International Journal of Industrial Engineering, 20(1-2), 47–59, 2013.  

A STRUCTURAL AND SEMANTIC APPROACH TO SIMILARITY
MEASUREMENT OF LOGISTICS PROCESSES

Bernardo Nugroho Yahya1,3, Hyerim Bae1*, Joonsoo Bae2
1 Department of Industrial Engineering
Pusan National University
30-san Jangjeon-dong, Geumjong-gu, Busan 609-735, South Korea
*Corresponding author’s email: {bernardo;hrbae}@pusan.ac.kr
2 Department of Industrial and Information Systems Engineering
Chonbuk National University
664-14 Deokjin-dong, Jeonju, Jeonbuk 561-756, South Korea
jsbae@chonbuk.ac.kr
3 School of Technology Management
Ulsan National Institute of Science and Technology
UNIST-gil 50, Eonyang-eup, Ulju-gun, Ulsan 689-798, South Korea
bernardo@unist.ac.kr

Abstract: The increased individuation and variety of logistics processes have spurred a strong demand for a new process
customization strategy. Indeed, to satisfy the increasingly specific requirements and demands of customers, organizations
have been developing more competitive and flexible logistics processes. This trend not only has greatly increased the
number of logistics processes in process repositories but also has made it hard to use these processes for business decision
making. Organizations, therefore, have turned to process reusability as a solution. One such strategy employs similarity
measurement as a precautionary measure limiting the occurrence of redundant processes. This paper proposes a structure-
and semantics-based approach to similarity measurement of logistics processes. Semantic information and semantic
similarity of logistics processes are defined based on a logistics ontology available in the supply chain operations reference
(SCOR) model. By combining similarity measurements based on both the structural and the semantic information of
logistics processes, we show that our approach improves on previous approaches in terms of accuracy and quality.
Keywords: Logistics process, SCOR, similarity measurement, business process, logistics ontology

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION

To adapt to dynamic business conditions and achieve a competitive advantage, a logistics organization must implement
customized processes that meet customer requirements and that also further its business objectives. Thus, it could be said
that process customizability is integral to an organization’s competitive advantage. We define customizability as the ability
of the logistics party to apply logistics process objectives to many different business conditions (Lee and Leu (2010)).
Customization of reference processes or templates to reduce the time and effort required to design and deploy processes on
all levels is common practice (Lazovik and Ludwig (2007)). Customization of reference processes usually involves adding,
removing or modifying process elements such as activities, control flow and data flow connectors. However, the existence
of a large number of customized processes can incur process redundancy. For example, many similar processes with only
slight differences in terminology, structure and semantics can exist in maritime supply chains involving the handling of
containers. In such environments, the establishment of joint procedures among several global communities such as the
International Association of Ports and Harbors (IAPH), the International Network of Affiliated Ports (INAP) and the North
American Inland Port Network (NAIPN) can increase the process redundancy in some way.
For example, the three ports of the country of origin, hub and destination belong to the same global communities (Fig.
1). The conceptual processes of container flows at the hub and the destination are the same; however, their operational
processes might differ slightly according to the respective performers, which is to say, the country’s relevant laws, port’s
processing capacities, and other factors. When the ports are in the same communities, they are supposed to have either
similar or standardized processes to handle container flows. The existence of similar or standard processes inspires port
community members to reuse existing processes instead of creating new ones. In this sense, process redundancy encourages
organizations to prioritize process reusability. Process reusability is the ability to develop a process model once and use it
International Journal of Industrial Engineering, 20(1-2), 60–71, 2013.

FUZZY ONTOLOGICAL KNOWLEDGE SYSTEM FOR IMPROVING RFID
RECOGNITION

H. K. Lee1, C. S. Ko2, T. Kim3*, T. Hwang4
1 Research associate, hshklee@naver.com; 2 Professor, csko@ks.ac.kr; 3 Professor, twkim@ks.ac.kr; 4 Professor,
tajhwang@deu.ac.kr
1-3 Department of Industrial & Management Engineering,
Kyungsung University
314-79 Daeyeon-3 dong, Nam-gu,
Busan, Korea
4 Department of Civil Engineering,
Dongeui University
176 Eumkwang-ro, Jin-gu,
Busan, Korea

To remain competitive in business and to be quickly responsive in the warehouse and supply chain, the use of RFID has
been increasing in many industry areas. RFID can identify multiple objects simultaneously as well as identify individual
objects. Some limitations of RFID remain, namely a low recognition rate and a sensitive response depending on the material
type and the ambient environment. Much effort has been made to enhance the recognition rate and to make it more robust;
examples include changes to tag design, antenna angle, search angle, and signal intensity, to name a few.
This paper proposes a fuzzy-logic-based ontological knowledge system for improving the RFID recognition rate and the
variance of recognition. In order to improve performance and reduce variance, the following sub-goals are pursued.
First, an ontology is constructed for the environmental factors to be used as a knowledge base. Second, a fuzzy membership
function is defined using the Forward Link Budget in RFID. Finally, a conceptual knowledge system is proposed and tested
to verify the model in the experimental environment.

Keyword: RFID, Performance, Identification, Ontology, SWRL, Fuzzy

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
Radio frequency identification (RFID) technology allows remote identification of objects using radio signals, without the need for line-of-sight or manual positioning of each item. With the rapid development of RFID technology and its applications, we expect a brighter future in object identification and control. The major advantage of RFID technology over the barcode is that an RFID system allows detection of multiple items simultaneously as they pass through a reader field. Additionally, each physical object has its own unique ID (even two products of the same type have two different IDs), making it possible to precisely track and monitor the position of each individually labeled product piece.
There is no doubt that RFID technology has paid off in some areas, but its industry-wide effect has been smaller than was expected earlier. One of the limitations is the recognition rate of RFID. Many environmental factors affect RFID performance, including the material type, packaging type, tag type, reader type and tag location, to name a few. As the variables affecting RFID performance cannot be predicted in advance and change according to the domain, there is a high demand for a reusable and robust knowledge system.
An ontology is a formal representation of knowledge as a set of concepts within a domain and the relationships between those concepts. It is used to reason about the properties of that domain and may be used to describe the domain. An ontology provides a shared vocabulary that can be used to model a domain, that is, the types of objects and/or concepts that exist and their properties and relations. The focus of the ontology here lies on the representation of the RFID domain, and the ontology can act as a model for exploring various aspects of that domain. Since part of the ontology deals with the classification of RFID applications and requirements, it can also be used to support decisions on the suitability of particular RFID tags for different applications.
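To make the later fuzzy-membership step concrete, the sketch below is a generic illustration rather than the paper's actual membership functions or ontology: it estimates the power arriving at a passive tag from a simplified free-space forward link budget and maps that value to triangular memberships for "weak", "marginal" and "strong" recognition conditions. All parameter values are hypothetical.

```python
import math

def tag_received_power_dbm(reader_eirp_dbm, tag_gain_dbi, freq_mhz, distance_m):
    """Simplified free-space forward link budget: EIRP + tag gain - path loss."""
    path_loss_db = 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55
    return reader_eirp_dbm + tag_gain_dbi - path_loss_db

def triangular(x, a, b, c):
    """Triangular fuzzy membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical environment: 36 dBm EIRP reader, 2 dBi tag antenna, 915 MHz, 3 m range.
p_tag = tag_received_power_dbm(36.0, 2.0, 915.0, 3.0)
memberships = {
    "weak":     triangular(p_tag, -40.0, -30.0, -20.0),
    "marginal": triangular(p_tag, -25.0, -18.0, -11.0),
    "strong":   triangular(p_tag, -15.0,  -5.0,   5.0),
}
print(f"received power at tag: {p_tag:.1f} dBm, memberships: {memberships}")
```

In a knowledge system of the kind outlined above, such membership degrees would be attached to the environmental factors stored in the ontology rather than computed from a fixed free-space formula.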
Previous research on RFID covers RFID devices, middleware, agents, ontologies and industrial applications. Pitzek (2010) focused on ontologies as the representation of the domain for informational purposes, i.e., as a conceptual domain model, and put them into context with other domains, such as communication-capable devices and automatic identification
International Journal of Industrial Engineering, 20(1-2), 72–83, 2013.

COLLABORATION BASED RECONFIGURATION OF PACKAGE SERVICE NETWORK WITH MULTIPLE CONSOLIDATION TERMINALS

C. S. Ko1, K. H. Chung2, F. N. Ferdinand3, H. J. Ko4


1 Department of Industrial & Management Engineering, Kyungsung University, 309, Suyeong-ro, Nam-gu, Busan, 608-736, Korea; e-mail: csko@ks.ac.kr
2 Department of Management Information Systems, Kyungsung University, 309, Suyeong-ro, Nam-gu, Busan, 608-736, Korea; e-mail: khchung@ks.ac.kr
3 Department of Industrial Engineering, Pusan National University, Busandaehak-ro, Geumjeong-gu, Busan, 609-935, Korea; e-mail: csko@ks.ac.kr
4 Department of Logistics, Kunsan National University, 558 Daehangno, Gunsan, Jeonbuk, 573-701, Korea
Corresponding author’s e-mail: hjko@kunsan.ac.kr

Abstract: Competition in the Korean package delivery market is severe because a large number of companies have entered it. A package delivery company in Korea generally owns and operates a number of service centers and consolidation terminals to provide a high level of customer service. However, some service centers cannot generate profits due to low volumes and instead act as cost-raising facilities. This challenge can be overcome through a collaboration strategy that improves competitiveness. In this regard, this study suggests an approach to the reconfiguration of package service networks under a collaboration strategy. We propose a multi-objective nonlinear integer programming model and a genetic algorithm-based solution procedure that allow the participating companies to maximize their profits. An illustrative numerical example from Korea is presented to demonstrate the practicality and efficiency of the proposed model.
Keywords: network reconfiguration, express package delivery, cutoff time, strategic partnership

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
The market for express package deliveries in Korea has expanded rapidly with the growth of TV home shopping and internet commerce. Accordingly, domestic express companies of various sizes have been established, and foreign companies have also entered the Korean express market. As a result of this surplus of express companies, they struggle to remain competitive at reasonable prices while providing an appropriate level of customer satisfaction. In this regard, a collaboration or partnership strategy is a possible option for overcoming such difficulties. Collaboration or partnership is becoming a popular competitive strategy in all business sectors; well-known examples can be seen in air transportation, such as SkyTeam, Star Alliance, and Oneworld, as well as in sea transportation, such as CKYH-The Green Alliance and the Grand Alliance. In addition, supply chain management regards collaboration as a critical factor for its successful implementation.
In Korea, an express company generally operates its own service network, which consists of customer zones, service centers, and consolidation terminals. Customer zones are geographical districts in which customers either ship or receive packages, and each is typically covered by a service center. A service center receives customer shipment requests and picks up parcels from its customer zones; the packages then wait until the cutoff time for transshipment in bulk to a consolidation terminal. In this way, the service center acts as a temporary storage facility connecting customers to a
International Journal of Industrial Engineering, 20(1-2), 84–98, 2013.

COMPARISON OF ALTERNATIVE SHIP-TO-YARD VEHICLES WITH THE CONSIDERATION OF THE BATCH PROCESS OF QUAY CRANES

S. H. Choi1, S. H. Won2, C. Lee3


1 Port Management/Operation & Technology Department, Korea Maritime Institute, 1652, Sangam-dong, Mapo-gu, Seoul, South Korea
2 Department of Logistics, Kunsan National University, 558 Daehangno, Gunsan, Jeonbuk, South Korea
3 School of Industrial Management Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul, South Korea
Corresponding author’s email: Seung Hwan Won, shwon@kunsan.ac.kr

Container terminals around the world compete fiercely to increase their throughput and to accommodate new mega vessels. In order to increase port throughput drastically, new quay cranes capable of batch processing are being introduced. A tandem-lift spreader mounted on a quay crane, which can handle one to four containers simultaneously, has recently been developed. Such an increase in the handling capacity of quay cranes requires a significant increase in the transportation capacity of ship-to-yard vehicles as well. The objective of this study is to compare the performances of three alternative configurations of ship-to-yard vehicles in a conventional container terminal environment. We assume that the yard storage for containers is horizontally configured and that the quay cranes are equipped with tandem-lift spreaders. A discrete event simulation model of a container terminal is developed and validated. We compare the performances of the three alternatives under different cargo workloads and profiles, represented by different annual container handling volumes and different ratios of tandem-mode operations, respectively. The results show that the performances of the alternative vehicle types depend largely on the workload requirement and profile.
Keywords: ship-to-yard vehicle; simulation; container terminal; quay crane; tandem-lift spreader

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
As the volume of trade between countries has increased, rapid changes have occurred in the logistics environment surrounding ports. World container traffic in 2008 was 540 million TEU, 2.3 times the 230 million TEU handled in 2000, and it is forecast to grow at an annual average rate of around 9% through 2013. In response, the marine transportation industry has seen mega-carriers emerge through mergers and acquisitions between shipping lines seeking to expand their market dominance, and these carriers continue to make enormous investments in securing mega ships of over 10,000 TEU in order to strengthen their competitiveness in shipping cost.
In line with these changes in the shipping environment, large ports around the world are competing fiercely to become continental hub ports that attract mega fleets, and this is leading to a trend of strengthening port competitiveness through the acquisition and operation of efficient port facilities. In other words, the world's leading ports, such as Singapore, Shanghai, Hong Kong, Shenzhen, Busan, Rotterdam, and Hamburg, are not only developing large-sized terminals but also investing in highly productive handling equipment to make port operations more efficient.
The handling equipment in a port generally consists of quay cranes (QCs), ship-to-yard vehicles (terminal trucks or automated guided vehicles), and yard cranes (YCs). Of these, QCs and ship-to-yard vehicles are the most closely related to ships and are the most important factors determining ship turnaround time in a port.
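As a toy illustration of how these two pieces of equipment interact, the sketch below uses the simpy discrete-event library to couple one quay crane, lifting single or tandem batches, to a small fleet of ship-to-yard vehicles. The cycle times, fleet size and tandem ratio are hypothetical, and this is not the validated simulation model described in this paper.

```python
import random
import simpy

# Toy discrete-event sketch (hypothetical parameters, not the paper's validated model):
# one quay crane lifts batches of 1 or 4 containers (tandem-lift) and must hand each
# batch to a free ship-to-yard vehicle; the fleet size drives the discharge time.
random.seed(0)
TOTAL, FLEET, TANDEM_RATIO = 400, 4, 0.5   # containers, vehicles, share of 4-lifts

def round_trip(env, vehicles, req):
    yield env.timeout(random.uniform(6, 10))   # travel to yard, handling, return (min)
    vehicles.release(req)                      # vehicle becomes available again

def quay_crane(env, vehicles, done):
    remaining = TOTAL
    while remaining > 0:
        batch = min(4 if random.random() < TANDEM_RATIO else 1, remaining)
        yield env.timeout(random.uniform(1.5, 2.5))   # crane lift cycle (min)
        req = vehicles.request()
        yield req                                     # crane waits for a free vehicle
        env.process(round_trip(env, vehicles, req))   # vehicle hauls the batch away
        remaining -= batch
    done.succeed(env.now)

env = simpy.Environment()
vehicles = simpy.Resource(env, capacity=FLEET)
done = env.event()
env.process(quay_crane(env, vehicles, done))
env.run(until=done)
print(f"quay crane finished discharging {TOTAL} containers after {done.value:.0f} minutes")
```

Rerunning such a sketch with different fleet sizes and tandem ratios mimics, in a very simplified way, the kind of workload-and-profile comparison the study performs.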
Berthing a mega ship of over 10,000 TEU requires sufficient water depth, adequate QC specifications, and high terminal productivity. Despite the increasing size of ships, shipping lines still expect the same service times as in the past; ports unable to meet such customer requirements may therefore see their customers desert them.



International Journal of Industrial Engineering, 20(1-2), 99–113, 2013.

A HIERARCHICAL APPROACH TO VEHICLE ROUTING AND SCHEDULING WITH SEQUENTIAL SERVICES USING THE GENETIC ALGORITHM

K. C. Kim1, J. U. Sun2, S. W. Lee3


1 School of Mechanical, Industrial & Manufacturing Engineering, Oregon State University, USA
2 School of Industrial & Management Engineering, Hankuk University of Foreign Studies, Korea
3 Department of Industrial Engineering, Pusan National University, Korea
Corresponding author’s email: slee7@pusan.ac.kr

Abstract: To survive in today's competitive market, material handling activities need to be planned carefully to satisfy businesses' and customers' demands. Vehicle routing and scheduling problems have been studied extensively for various industries with special needs. In this paper, a vehicle routing problem reflecting the unique characteristics of the electronics industry is considered. A mixed-integer nonlinear programming (MINP) model is presented to minimize the traveling time of delivery and installation vehicles. A hierarchical approach using the genetic algorithm is proposed and implemented to solve problems of various sizes. The computational results show the effectiveness and the efficiency of the proposed hierarchical approach. A performance comparison between the MINP approach and the hierarchical approach is also presented.
Keywords: Vehicle Routing Problem, Delivery and Installation, Synchronization of Vehicles, Genetic Algorithm,
Electronics Industry

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION

To survive in this competitive business environment, a company must have ways to handle its various materials cost-effectively. In manufacturing industries, material handling activities for raw materials and work-in-process are as important as those for final products. To ensure that material handling activities satisfy businesses' and customers' demands effectively, vehicle routing and scheduling problems have been studied and implemented extensively for various industries with special needs (Golden and Wasil, 1987; List and Mirchandani, 1991; Chien and Spasovic, 2002; Zografos and Androutsopoulos, 2004; Ripplinger, 2005; Prive et al., 2006; Claassen and Hendricks, 2007; Ji, 2007). In this paper, we present a variant of the vehicle routing problem (VRP) that arises in the electronics industry, whose unique material handling needs reflect how its distribution paradigm has shifted from that of the past.
Recently, the electronics industry has experienced rapid changes in its post-sales services, i.e., delivery and installation. In the past, local stores were individually responsible for delivery and installation services. However, due to the growing volume of direct orders from customers and the increasing complexity of advanced electronics products, electronics manufacturers are increasingly required to deliver their goods directly to customers and to provide professional on-site installation. Sales of electronics via e-commerce, large discount stores, general merchandise stores, department stores, and other channels are increasing very rapidly. In addition, electronics manufacturers make intensive efforts to increase sales through professional electronics franchises such as Staples and OfficeMax, which do not provide such delivery and installation services. These trends shift the responsibility for delivery and installation onto electronics manufacturers, and the number of direct deliveries from electronics manufacturers to customers is increasing at an explosive pace.
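The genetic-algorithm machinery commonly used for such routing problems can be sketched generically. The snippet below is a deliberately simplified single-route (sequencing) example with random coordinates; it is not the paper's hierarchical MINP/GA method for synchronized delivery and installation vehicles.

```python
import random

# Generic GA sketch for a single-vehicle routing (sequencing) subproblem:
# permutation encoding, order crossover, swap mutation, elitist replacement.
random.seed(1)
n = 12
coords = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(n)]

def route_time(perm):
    """Total travel time: depot -> customers in 'perm' -> depot (unit speed)."""
    pts = [(0.0, 0.0)] + [coords[i] for i in perm] + [(0.0, 0.0)]
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

pop = [random.sample(range(n), n) for _ in range(60)]
for _ in range(200):
    pop.sort(key=route_time)
    nxt = pop[:10]                                  # keep the elite routes
    while len(nxt) < len(pop):
        p1, p2 = random.sample(pop[:30], 2)         # parents from the better half
        c = order_crossover(p1, p2)
        if random.random() < 0.2:                   # swap mutation
            i, j = random.sample(range(n), 2)
            c[i], c[j] = c[j], c[i]
        nxt.append(c)
    pop = nxt
print("best route time:", round(route_time(min(pop, key=route_time)), 1))
```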
Another unique characteristic of the electronics industry lies in its installation services. Some products, such as air conditioners, required professional installation even in the past. Many newly emerging products require not



International Journal of Industrial Engineering, 20(1-2), 114–125, 2013.

THE PROBLEM OF COLLABORATION IN MANUFACTURED GOODS EXPORTATION THROUGH AUTONOMOUS AGENTS AND SYSTEM DYNAMICS THEORIES
V. M. D. Silva1, A. G. Novaes2, B. Scholz-Reiter3, J. Piotrowski3
1 Department of Production Engineering, Federal Technological University of Paraná, Ponta Grossa, PR, 84016-210, Brazil; Corresponding author’s e-mail: vaninasilva@utfpr.edu.br
2 Federal University of Santa Catarina, Florianopolis, SC, 88040-900, Brazil; e-mail: novaes@deps.ufsc.br
3 Bremen Institute for Production and Logistics (BIBA), University of Bremen, Hochschulring 20, 28359 Bremen, Germany; e-mail: bsr@biba.uni-bremen.de, pio@biba.uni-bremen.de

Abstract: Along export chains, transportation represents an important cost that directly impacts the efficiency of the whole chain. Experiments with Collaborative Transportation Management (CTM) show satisfactory results in terms of reduced delivery times, increased productivity of transportation resources, and economies of scale. In this context, this preliminary study presents a real Brazilian problem concerning the export of manufactured products by maritime transportation and introduces the concept of CTM as a tool to help companies in their decision making. It identifies the major parameters that could support the maritime logistics of manufactured products, and the Autonomous Agents and System Dynamics theories are described as possible methods to model and analyze this logistics problem. This preliminary study is intended to awaken readers' interest in these emerging concepts as applied to an important problem, contributing to cost reduction in the export chain.
Keywords: Collaborative transportation management, Manufactured exporters, Maritime shippers, Autonomous agents,
Decision-making, System Dynamics

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
In Brazil, foreign trade has not been used as a proactive factor in development strategy because, historically, negotiations between the different participants of the export chain have been marked by conflict. Each link tends to minimize its individual costs, which normally does not converge to the global optimum of the supply chain. Therefore, companies are being obliged to re-analyze their procedures, to use reengineering techniques, and to redefine the relations and models of their supply chains in order to reduce costs, increase efficiency and gain competitive advantage.
To reduce such problems, the concept of CTM has recently emerged within the broader concept of collaborative logistics. It has spread since the year 2000 through the Collaborative Planning, Forecasting and Replenishment (CPFR) approach, and CTM has been defined by experts as a helpful tool for reducing transaction costs and risks, enhancing service performance and capacity, and achieving a more dynamic supply chain (Silva et al., 2009).
As Brazilian exporting companies look for higher competitiveness, they should stop acting individually and start acting collaboratively. This requires detailed sharing of data and information by the agents of the logistics chain in order to build a solid partnership. Each member of this chain is understood as an agent; in the maritime logistics chain these are the producing company, road transporters, shipowners and maritime shippers.
After bibliographic studies and contact with entrepreneurs in this area, it was verified that there is little scientific work exploring this subject across manufacturing industries, freight contractors and maritime shippers in a way that contributes to exportation. Therefore, this study, which is part of a Ph.D. thesis in development, briefly presents an overview of Brazilian exportation and the operation of the manufactured-goods export chain using maritime shippers, the
International Journal of Industrial Engineering, 20(1-2), 126–140, 2013.  

EVOLUTION OF INTER-FIRM RELATIONSHIPS: A STUDY OF SUPPLIER-LOGISTICAL SERVICES PROVIDER-CUSTOMER TRIADS
P. Childerhouse1, W. Luo1, C. Basnet1, H. J. Ahn2, H. Lee3, G. Vossen4
1 Department of Management Systems, Waikato Management School, University of Waikato, Hamilton 3216, New Zealand
2 College of Business Administration, Hongik University, Seoul, Korea
3 Brunel Business School, Brunel University, Uxbridge, Middlesex, UK
4 School of Business Administration and Economics, University of Muenster, 48149 Muenster, Germany
Corresponding author’s email: Paul Childerhouse, pchilder@gmail.com

The concept of supply chain management has evolved from focussing initially on functional co-ordination within an
organisation, then to external dyadic integration with suppliers and customers and more recently towards a holistic
network perspective. The focus of the research described in this paper is to explore how and why relationships within
supply chain networks change over time. Since a triad is the simplest meaningful sub-set of a network, we use triads as
the unit of analysis in our research. In particular, we consider triads consisting of a supplier, their customer, and the
associated logistics services provider. An evolutionary triadic model with eight relational states is proposed and the
evolutionary paths between the states hypothesised, based on balance theory. The fundamental role of logistical service
providers is examined within these alternative triadic states with a specific focus on the relationships between the actors
in the triad. Empirical evidence is collected from three very different triads and cross-referenced with our proposed
model. How the interactions and relationships change over time is the central focus of the case studies and the
conceptual model. Our findings indicate that some networks are more stable than others and depending on their
position in a triad some actors can gain power over their business partners. Further, those organisations that act as
information conduits seem to have greater capacity to influence their standing in a supply chain network.

Significance: We make a conceptual contribution to supply network theory, as well as reporting an empirical investigation of the theory.

Keywords: Supply networks, Inter-firm relationships, Triads, Balance theory, Logistical service providers.

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION

This paper investigates inter-firm relationships from a social network perspective. In particular, we examine the
relationship dynamics of a network of inter-connected firms with shared end consumers. The social network
perspective has gained significant momentum in the management literature (Wang and Wei, 2007). In this paper, we
use the psychological concept of balance theory (Simmel, 1950; Heider, 1958) to make sense of the dynamic inter-
relationships in a supply chain network. The most important dimensions of change in business networks that will be
focussed upon concern the development of activity links, resource ties, and actor relationship bonds (Gadde and
Hakansson, 2001).
A triad is the smallest meaningful sub-set of a network (Madhavan, Gnyawali and He, 2004) and as such will be
used as the unit of analysis throughout this paper. Figure 1 is a simplistic representation of the multi-layered complex
business interactions that make up supply chain networks. The actors are represented by nodes (circles) and the
connections between them as links. A triadic sub-set of the entire network is illustrated as the grey shaded area in
Figure 1. Three actors, ‘A,’ ‘B,’ and ‘C,’ are highlighted along with their three links: ‘A’ with ‘B,’ ‘A’ with ‘C,’ and ‘B’ with ‘C.’ Each actor also has a potential mediating role in the relationship between the other two, as indicated by the dashed
arrow from actor ‘A’ to the link between ‘B’ and ‘C.’ Thus, we contend that a representative sub-set of a network can
be investigated via triads. This cannot be said for dyads, which overly simplify the social complexities of real world
business interactions.
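As a minimal illustration of the balance-theory lens applied to such a triad (a generic textbook formulation, not the eight-state model developed later in the paper), a signed triad can be checked for structural balance from the sign of the product of its three links:

```python
def is_balanced(ab: int, ac: int, bc: int) -> bool:
    """Structural balance of a triad: each link is +1 (positive relationship) or
    -1 (strained/adversarial); the triad is balanced when the product of the
    three link signs is positive."""
    return ab * ac * bc > 0

# Supplier (A), logistics provider (B) and customer (C), all on good terms: balanced.
print(is_balanced(+1, +1, +1))   # True
# One strained link (e.g. supplier-customer) makes the triad unbalanced.
print(is_balanced(+1, -1, +1))   # False
```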
International Journal of Industrial Engineering, 20(1-2), 141–152, 2013.  

OPERATION PLANNING FOR MARITIME EMPTY CONTAINER REPOSITIONING
Y. Long1, L. H. Lee2, E. P. Chew3, Y. Luo4, J. Shao5, A. Senguta6, S. M. L. Chua7
1,2,3,4,5 Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260
6,7 Neptune Orient Lines Ltd., Singapore 119962
1 Corresponding author’s email: g0701018@nus.edu.sg; 2 iseleelh@nus.edu.sg; 3 isecep@nus.edu.sg; 4 g0701019@nus.edu.sg; 5 Shaojijun@nus.edu.sg; 6 Arpan_Sengupta@apl.com; 7 Selina_M_L_Chua@apl.com

Abstract: One of the challenges that liner operators face today is to manage empty containers effectively in order to meet demands and to reduce inefficiency. In this study, we develop a decision support tool to help the liner operator manage maritime empty container repositioning efficiently. This tool considers the actual operations and constraints of the problems faced by the liner operator and uses mathematical programming approaches to solve them. We present a case study that considers 49 ports and 44 services. We also compare our proposed model with a simple rule that attempts to mimic the actual operation of a shipping liner. The numerical results show that the proposed model is promising. Moreover, our model is able to identify potential transshipment hubs for intra-Asia empty container transportation.
Keywords: Empty container repositioning, Optimization, Network, Transshipment hub, Decision support tool

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
Since the 1970s, containerization has become increasingly popular in global freight transportation activities, especially on international trade routes. Containerization helps to improve port handling efficiency, reduce handling costs, and increase trade flows. To increase the utilization of containers, containers should be reloaded with new cargoes as soon as possible after reaching their destinations. However, this is not always possible due to the trade imbalance between different regions of the world, which has resulted in ocean liners holding large inventories of empty containers and thereby increased operating costs. Generally, export-dominated ports need a large number of empty containers, while import-dominated ports hold a large number of surplus empty containers. In this imbalanced situation, a profitable movement of a laden container usually generates an unprofitable empty container movement. The main challenge is deciding when and how many empty containers should be moved from the import-dominated ports to the export-dominated ports to meet customer demand while reducing the operational cost.
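A drastically reduced, single-period version of that decision can be written as a small transportation linear program. The sketch below uses scipy.optimize.linprog with hypothetical surplus, deficit and cost figures; it only shows the shape of the decision and is not the network model developed in this study.

```python
import numpy as np
from scipy.optimize import linprog

# Toy single-period repositioning sketch (hypothetical data): move empty containers
# from surplus (import-dominated) ports to deficit (export-dominated) ports at
# minimum transport cost.
surplus = {"P1": 300, "P2": 150}          # spare empties available
deficit = {"P3": 200, "P4": 180}          # empties required
cost = {("P1", "P3"): 4, ("P1", "P4"): 6, # unit shipping cost per TEU
        ("P2", "P3"): 5, ("P2", "P4"): 3}

arcs = list(cost)
c = np.array([cost[a] for a in arcs], dtype=float)

# Supply constraints: shipments out of each surplus port <= its surplus.
A_ub = np.array([[1.0 if a[0] == s else 0.0 for a in arcs] for s in surplus])
b_ub = np.array(list(surplus.values()), dtype=float)

# Demand constraints: shipments into each deficit port == its requirement.
A_eq = np.array([[1.0 if a[1] == d else 0.0 for a in arcs] for d in deficit])
b_eq = np.array(list(deficit.values()), dtype=float)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
for a, x in zip(arcs, res.x):
    if x > 1e-6:
        print(f"reposition {x:.0f} empty TEU on arc {a[0]} -> {a[1]}")
```

A realistic formulation, as discussed next, must additionally track vessel services, sailing schedules and time periods rather than a single static snapshot.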
There are a large number of studies on the empty container repositioning problem. One stream uses inventory-based control mechanisms for empty container management (e.g., Li et al., 2004; Song, 2007; Song and Earl, 2008). Another stream applies dynamic network programming methods to the container management problem (e.g., Lai et al., 1995; Shen and Khoong, 1995; Shitani et al., 2007; Liu et al., 2007; Erera et al., 2009). Some studies in this stream focus on inland container flows (e.g., Crainic et al., 1993; Jula et al., 2006), while others address maritime transportation (e.g., Francesco et al., 2009). In addition, there are studies developing intermodal models that consider both inland and maritime transportation (e.g., Choong et al., 2002; Erera et al., 2005; Olive et al., 2005). The general maritime network model for empty container repositioning was proposed by Cheung and Chen (1998). They developed a time-space network model, and their study paves the way for maritime empty container repositioning network modeling. To apply general networking techniques to the shipping industry, researchers have tended in the last decade to consider actual services and real-scale networks. An actual service schedule is considered in Lam et al. (2007), who develop an approximate dynamic programming approach for deriving operational strategies for the relocation of empty containers. Although the actual service schedule is considered in their study, the proposed approximate dynamic programming is limited to small-scale problems. One paper that
International Journal of Industrial Engineering, 20(1-2), 153–162, 2013.

OPTIMAL PRICING AND GUARANTEED LEAD TIME WITH LATENESS PENALTIES
K. S. Hong1, C. Lee2
1,2 Division of Industrial Management Engineering, Korea University, Anamdong 5-ga, Seongbuk-gu, 136-713, Seoul, Republic of Korea
1 justlikewind@korea.ac.kr
2 Corresponding author’s e-mail: leecu@korea.ac.kr

This paper studies the price and guaranteed lead time decision of a supplier that offers a fixed guaranteed lead time for a
product. If the supplier is not able to meet the guaranteed lead time, the supplier must pay a lateness penalty to customers.
Thus, the expected demand is a function of the price, guaranteed lead time and lateness penalty. We first develop a
mathematical model for a given supply capacity to determine the optimal price, guaranteed lead time and lateness penalty to
maximize the total profit. We then consider the case where it is also possible for the supplier to increase capacity and
compute the optimal capacity.
Keywords: Time-based competition, Guaranteed lead time, Pricing, Lateness penalty decision, Price and time sensitive market

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
Increased competition has forced service providers and manufacturers to introduce new products into the market, and time has evolved into a competitive paradigm (Blackburn 1991, Hum and Sim 1996). As time has become a key to business success, lead time reduction has emerged as a key competitive edge in services and manufacturing (Van Beek and Van Putten 1987, Suri 1998, Hopp and Spearman 2000, White et al. 2009). This new competitive paradigm is termed time-based competition.
Suppliers exploit customers' sensitivity to time to increase prices in return for shorter lead times. For instance, amazon.com charges more than double the standard shipping cost to guarantee delivery in two days, while its normal delivery time may be as long as a week (Ray and Jewkes, 2004). Likewise, suppliers differentiate their products based on lead time in order to maximize their revenue (Boyaci and Ray, 2003). In this case, lead time reduction provides suppliers with new opportunities. Additionally, in today's global economy, suppliers increasingly depend on fast response times as an important source of sustainable competitive advantage. As a result, one needs to consider the influence of lead time on demand.
This paper considers a supplier that uses a guaranteed lead time to attract customers and supplies a product in a price- and time-sensitive market. Time-based competition was first studied by Stalk and Hout (1990), who addressed the effect of time on strategic competitiveness. Hill and Khosla (1992) developed an optimization model to calculate the optimal lead time reduction and compared the costs and benefits of lead time reduction. Palaka et al. (1998), So and Song (1998) and Ray and Jewkes (2004) assumed that demand is sensitive to both the price and the guaranteed lead time, and investigated the optimal pricing and guaranteed lead time decisions. Palaka et al. (1998) employed an M/M/1 queueing model and developed a mathematical model to determine the optimal guaranteed lead time, capacity utilization and price with a linear demand function. So and Song (1998) extended Palaka et al.'s (1998) results to a log-linear (Cobb-Douglas) demand function and analyzed the impact of using delivery time guarantees as a competitive strategy in service industries. Ray and Jewkes (2004) assumed that the mean demand rate is a function of price and guaranteed lead time, and that the price is determined by the length of the lead time, and developed an optimization model to determine the optimal guaranteed lead time. They also extended their results by incorporating economies of scale, where the unit operating cost is a decreasing function of the mean demand rate.
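A compact numerical sketch of this class of models, in the spirit of the M/M/1 settings cited above (Palaka et al., 1998; Ray and Jewkes, 2004) and with hypothetical parameter values rather than anything taken from this paper, searches a grid of prices and guaranteed lead times for the most profitable combination:

```python
import math

# Hypothetical parameters (not from the paper): linear demand in price and lead time,
# exponential sojourn times as in the cited M/M/1 models.
a, b, c = 100.0, 4.0, 6.0                 # demand rate = a - b*price - c*lead_time
mu, unit_cost, penalty = 60.0, 5.0, 2.0   # service capacity, unit cost, penalty per late unit

best = None
for price in (6.0 + 0.05 * i for i in range(280)):            # prices 6.00 .. 19.95
    for lead_time in (0.05 * j for j in range(1, 40)):        # lead times 0.05 .. 1.95
        lam = a - b * price - c * lead_time                   # expected demand rate
        if lam <= 0 or lam >= mu:
            continue                                          # no demand or unstable queue
        p_late = math.exp(-(mu - lam) * lead_time)            # P(sojourn > lead_time) in M/M/1
        profit = lam * (price - unit_cost) - penalty * lam * p_late
        if best is None or profit > best[0]:
            best = (profit, price, lead_time, lam)

print("profit %.1f at price %.2f, guaranteed lead time %.2f, demand rate %.1f" % best)
```

The lateness penalty here is a fixed input; the contribution of the present paper is, in contrast, to treat that penalty itself as a decision variable alongside price and guaranteed lead time.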
So (2000), Tsay and Agarwal (2000), Pekgun et al. (2006) and Allon and Federgruen (2008) also developed mathematical models to determine the optimal price and the optimal guaranteed lead time in a competitive setting where suppliers selling a product compete on price and lead time. However, their models do not consider the lateness penalty decision. When suppliers employ a guaranteed lead time strategy, they may have to pay the lateness penalty to customers
International Journal of Industrial Engineering, 20(1-2), 163–175, 2013.

OPTIMAL CONFIGURATION OF STORAGE SYSTEMS FOR MIXED PYRAMID STACKING
D. W. Jang1, K. H. Kim2
1 Port Research Division, Port Management/Operation & Technology Department, Korea Maritime Institute, Seoul, Korea; Email: dwjang@kmi.re.kr
2 Department of Industrial Engineering, Pusan National University, Busan, Korea; Corresponding author’s email: kapkim@pusan.ac.kr

Abstract: Pyramid stacking is a type of block stacking method for cylindrical unit loads such as drums, coils, paper rolls,
and so on. This study addresses how to determine the optimum configuration of a storage system for mixed pyramid
stacking of multi-group unit loads. It is assumed that multiple groups of unit loads, with retrieval rates and durations of stay that differ from one another, are stored in the same storage system. The configuration of a storage system is specified
by the number of bays, the assignment of groups to each bay, and the height and width of each bay. A cost model
considering the handling cost and the space cost is proposed. Numerical experiments are provided to illustrate the
procedures for the optimization in this study.

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
Pyramid stacking is a storage method in which cylindrical unit loads are stacked on the floor as shown in Figure 1. It offers high space utilization but also incurs a high re-handling cost. The pyramid-stacking bay in Figure 1 consists of 3 tiers with 4 rows at the bottom, resulting in 9 unit loads in total; there are 4, 3 and 2 unit loads in the successive tiers from the bottom. When a retrieval order is issued for a unit load at a low tier, one or more relocations must be performed before the target unit load is retrieved. Such relocations are a major source of inefficiency in handling activities in pyramid stacking systems.
Figure 2 shows the total number of handlings required to retrieve each unit load in the pyramid stacking system of Figure 1. Here, k is the index of the tier from the top and l is the index of the position in each tier from the left-hand side. The value s represents the total number of handlings for retrieving a target unit load from the corresponding position. For the unit load at (3,2), retrieval requires 4 relocations of the unit loads at (1,1), (1,2), (2,1) and (2,2), and thus the total number of handlings becomes 5.
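The handling counts in Figure 2 can be reproduced with a few lines of code. The sketch below is a generic illustration consistent with the example above (it is not the cost model developed later in the paper): starting from a target position, it collects every unit load resting above it and counts the relocations plus the final retrieval lift.

```python
def handlings_to_retrieve(k, l, num_tiers, bottom_width):
    """Total handlings (relocations plus the retrieval lift) needed to pick the
    unit load at tier k (k = 1 is the top tier), position l from the left."""
    def width(t):                          # number of unit loads in tier t
        return bottom_width - (num_tiers - t)
    above, frontier = set(), {(k, l)}
    while frontier:
        nxt = set()
        for t, p in frontier:
            for q in (p - 1, p):           # slots resting directly on (t, p)
                if t > 1 and 1 <= q <= width(t - 1):
                    nxt.add((t - 1, q))
        above |= nxt
        frontier = nxt
    return len(above) + 1                  # relocations plus the final retrieval

# Example from the text: a 3-tier bay with 4 loads at the bottom; retrieving the
# load at (3, 2) needs 4 relocations, i.e. 5 handlings in total.
print(handlings_to_retrieve(3, 2, num_tiers=3, bottom_width=4))   # -> 5
```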
For a given number of unit loads in a bay, when the number of unit loads at the lowest tier decreases, the number of tiers in the pyramid stacking system must increase, which results in an increase in the expected number of relocations per retrieval. However, the height of a pyramid stacking bay cannot exceed the number of unit loads at the lowest tier, because the number of unit loads per tier decreases by one as the tier goes up. When the number of unit loads at the lowest tier increases, the space required for the bay increases. Park and Kim (2010) attempted to estimate the number of re-handles for a given number of unit loads at the lowest tier and a given number of tiers in a bay when all the unit loads are heterogeneous, meaning that all the unit loads in the bay are different from each other and a retrieval order is issued for a specific unit load in the bay. This study extends Park and Kim (2010) to the case where multiple unit loads in the bay are of the same type and thus a retrieval order is issued for any unit load of that type. Figure 3-(a) illustrates the case where all the unit loads in a bay are of different types, while Figure 3-(b) illustrates the case where there are three unit loads in each of three types.
Many researchers have analyzed the re-handling operation. Watanabe (1991) analyzed the handling activities in
container yards and proposed a simple index, termed the “index of accessibility”, to express the accessibility of a stack
considering the number of relocations required to lift a container. Castilho and Daganzo (1993) analyzed the handling
activities in inbound container yards. Based on a simulation study, they proposed a formula for estimating the number of
relocations for the random retrieval of a container. Kim (1997) proposed a formula for estimating the number of relocations
for a random retrieval of an inbound container from a bay. Kim and Kim (1999) analyzed the handling activities for
relocations in inbound container yards and used the result for determining the number of devices and the amount of space
International Journal of Industrial Engineering, 20(1-2), 176–187, 2013.

PLANNING FOR SELECTIVE REMARSHALING IN AN AUTOMATED CONTAINER TERMINAL USING COEVOLUTIONARY ALGORITHMS
K. Park1, T. Park1, K. R. Ryu1
1 Department of Computer Engineering, Pusan National University, Busan, Korea
Corresponding author’s email: krryu@pusan.ac.kr

Abstract: Remarshaling in a container terminal refers to the task of rearranging containers stored in the stacking yard to
improve the efficiency of subsequent loading onto a vessel. When the time allowed for such preparatory work is limited,
only a selected subset of containers can be rearranged. This paper proposes a cooperative co-evolutionary algorithm (CCEA)
that decomposes the planning problem into three subproblems of selecting containers, determining target locations, and
finding a moving order, and conducts a cooperative parallel search to find a good solution for each subproblem. To cope
with the uncertainty of crane operation in real terminals, the proposed method iteratively replans at regular intervals to minimize the gap between the plan and the execution. For an efficient search under the real-time constraint of iterative replanning,
our CCEA reuses the final populations of the previous iteration instead of restarting from scratch.

Significance: This paper deals with an optimization problem having three constituent subproblems that are not independent of each other. Instead of solving the subproblems in turn and/or heuristically, which sacrifices solution quality for efficiency, we employ a CCEA to conduct a cooperative parallel search to find a good solution efficiently. For applications to the real world, issues such as real-time constraints and uncertainty are also addressed.
Keywords: Automated container terminal, remarshaling, container selection, iterative replanning, cooperative co-
evolutionary algorithm

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION

The productivity of a container terminal is critically dependent on the vessel dwell time, which is mainly determined by how efficiently the export containers are loaded onto the vessels. The efficiency of the loading operation depends on how the containers are stacked in the stacking yard where they are temporarily stored. The export containers should be loaded in a predetermined sequence that takes account of the weight balance of the vessel and the convenience of operations at the intermediate and final destination ports. If a container to be fetched next is stored under some other containers, additional operations are required for the yard crane to relocate the containers above it. This rehandling is the major source of inefficiency in loading and causes delays at the quayside. The loading operation is also delayed if a yard crane needs to travel a long distance to fetch a container for loading. The delay of loading caused by rehandling or long travelling can be avoided if the export containers are arranged in an ideal configuration respecting the loading sequence. There have been many studies on deciding ideal stacking positions for export containers coming into the yard (Kim et al., 2000, Duinkerken et al., 2001, Dekker et al., 2006, Yang et al., 2006, Park et al., 2010a, and Park et al., 2010c). However, appropriate stacking of incoming containers is difficult because most of the containers are carried into the terminal before the loading plan is made available. Remarshaling refers to the preparatory task of rearranging the containers during the idle times of yard cranes to avoid rehandling and long travelling at the time of loading. In real container terminals, however, not all the export containers can usually be remarshaled because the crane idle time is not long enough and the loading plan is fixed only a few hours before the loading operation starts.
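The rehandling burden that remarshaling tries to eliminate can be made concrete with a tiny sketch (hypothetical stacks and loading sequence, not the paper's algorithm): each container sitting on top of the container that must be loaded next has to be relocated to another stack first.

```python
# Minimal sketch (hypothetical data): count rehandles when containers are loaded
# in a fixed sequence from yard stacks; a blocking container on top of the target
# is relocated to the currently shortest other stack.
def count_rehandles(stacks, loading_sequence):
    stacks = [list(s) for s in stacks]          # each stack listed bottom -> top
    rehandles = 0
    for target in loading_sequence:
        src = next(i for i, s in enumerate(stacks) if target in s)
        while stacks[src][-1] != target:        # relocate whatever sits above it
            blocker = stacks[src].pop()
            dest = min((i for i in range(len(stacks)) if i != src),
                       key=lambda i: len(stacks[i]))
            stacks[dest].append(blocker)
            rehandles += 1
        stacks[src].pop()                       # lift the target onto the vessel
    return rehandles

# Containers 1..6 must be loaded in increasing order from two badly ordered stacks.
print(count_rehandles([[3, 1, 5], [6, 2, 4]], [1, 2, 3, 4, 5, 6]))   # -> 6
```

Remarshaling amounts to rearranging such stacks in advance so that this count approaches zero when loading starts.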
In this paper, we propose a cooperative coevolutionary algorithm (CCEA) that can derive a remarshaling plan for a selected subset of the export containers under time constraint. The idea of CCEA is to efficiently search for a solution in a reduced search space by decomposing a given problem into subproblems (Potter et al., 2000). In CCEAs, there is a population of candidate solutions for each subproblem, and these populations evolve cooperatively via mutual information exchanges. Park et al. (2009) developed a planning method for remarshaling all the export containers using a CCEA assuming no time constraint. In their CCEA, the problem of remarshaling is decomposed into two subproblems: one for determining the target slots to which the containers are relocated and the other for determining the order of moving the containers. Another work by Park et al. (2010b) paid attention to the problem of selective remarshaling and proposed a genetic algorithm
International Journal of Industrial Engineering, 20(1-2), 188–210, 2013.

SEASONAL SUPPLY CHAIN AND THE BULLWHIP EFFECT


D. W. Cho1, Y. H. Lee2
1 Department of Industrial and Management Engineering, Hanyang University, Ansan, Gyeonggi-Do, 426-791, South Korea; e-mail: dwcmjcho@hanyang.ac.kr
2* Department of Industrial and Management Engineering, Hanyang University, Ansan, Gyeonggi-Do, 426-791, South Korea; Corresponding author’s e-mail: yhlee@hanyang.ac.kr

Abstract: In this study, we quantify the bullwhip effect in a seasonal two-echelon supply chain with stochastic lead time. The bullwhip effect is the phenomenon of demand variability amplification as one moves away from the customer towards the supplier in a supply chain, and this amplification poses very severe problems for a supply chain. The retailer faces external demand for a single product from end customers, where the underlying demand process is a seasonal autoregressive moving average, SARMA(1,0)×(0,1)s, process. The retailer employs a base-stock periodic review policy to replenish its inventory from the upstream party every period, using the minimum mean-square error forecasting technique. We investigate which parameters influence the bullwhip effect and how strongly each parameter affects it. In addition, we investigate the respective effects of the seasonal period, the lead time, the seasonal moving average coefficient, and the autoregressive coefficient on the bullwhip effect in a seasonal supply chain.
Keywords: Supply chain management, Bullwhip effect, Seasonal autoregressive moving average process, Stochastic
lead time

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
Seasonal supply chains are affected by seasonal behavior that impacts material and information flows both in and between facilities, including vendors, manufacturing and assembly plants, and distribution centers. Seasonal patterns of demand, which exist when time series data fluctuate according to some seasonal factor, are a common occurrence in many supply chains. They may intensify the bullwhip effect, which causes severe problems in supply chains. Seasonal peaks of demand may increase the amplification of demand variability across the supply chain. In the long run, this may lead to a reduction in supply chain profitability, the difference between the revenue generated from the final customer and the total cost across the supply chain.
A basic approach to maintaining supply chain profitability is for each independent entity of a supply chain to maintain stable inventory levels so as to fulfill customer requests at minimum cost. However, the main barrier, among both internal and external barriers to achieving this objective, is recognized to be the bullwhip effect. This effect is the phenomenon of increasing amplification of order variability within a supply chain the further one moves upstream. The amplification includes demand distortion, described as a phenomenon whereby orders to the supplier tend to have a larger variance than sales to the buyer. The occurrence of the bullwhip effect in a supply chain poses severe problems such as lost revenues, inaccurate demand forecasts, low capacity utilization, missed production schedules, ineffective transportation, excessive inventory investments, and poor customer service (Lee et al., 1997a, b).
Forrester (1969) provides early evidence of the bullwhip effect, and Sterman (1989) exhibits the same phenomenon through an experiment known as the beer game. In addition, Lee et al. (1997a, b) identify five main sources that may lead to the bullwhip effect: demand signal processing, non-zero lead time, order batching, the rationing game under shortage, and price fluctuations and promotions. They argue that eliminating these main causes may significantly reduce the bullwhip effect. Specifically, the demand process, lead times, inventory policies, supply shortages and forecasting techniques have a significant influence on the bullwhip effect. Among these, the forecasting technique, the inventory policy and, to some extent, the replenishment lead time are controllable by supply chain members and hence can be chosen so as to mitigate the bullwhip effect. However, the demand process, whether seasonal or not, is uncontrollable because it is an external parameter arising at the customer, and it is reasonable for supply chain members to respond suitably to the demand process they face. Changing demand trends have a significant influence on supply chain performance measures (Byrne and Heavey, 2006). Therefore, it is important to understand the impact of the seasonal demand process on the bullwhip effect in a seasonal supply chain.
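The amplification itself is easy to reproduce in a few lines. The simulation below is a deliberately simplified illustration, using AR(1) demand, an order-up-to policy and a moving-average forecast rather than the SARMA demand and MMSE forecasting analyzed in this paper; it estimates the ratio of order variance to demand variance.

```python
import numpy as np

# Simplified illustration (AR(1) demand + moving-average forecast, not the paper's
# SARMA/MMSE setting): estimate the bullwhip ratio Var(orders) / Var(demand) for a
# retailer following a periodic-review order-up-to policy.
rng = np.random.default_rng(0)
phi, mu, sigma, n = 0.7, 100.0, 10.0, 200_000   # AR(1) parameters and horizon
L, p = 2, 4                                     # replenishment lead time, forecast window

demand = np.empty(n)
demand[0] = mu
for t in range(1, n):
    demand[t] = mu + phi * (demand[t - 1] - mu) + rng.normal(0.0, sigma)

orders = np.zeros(n)
prev_target = None
for t in range(p, n):
    forecast = demand[t - p:t].mean()            # per-period demand forecast
    target = (L + 1) * forecast                  # order-up-to level (safety stock omitted)
    orders[t] = demand[t] if prev_target is None else demand[t] + (target - prev_target)
    prev_target = target

ratio = orders[p + 1:].var() / demand[p + 1:].var()
print(f"bullwhip ratio Var(orders)/Var(demand) ~ {ratio:.2f}")
```

Adding a seasonal component to the demand process and replacing the moving average with the minimum mean-square error forecast would bring the sketch closer to the setting studied here.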
There have been many studies of the bullwhip effect covering the demand process, forecasting techniques, lead times and ordering policies. Alwan et al. (2003) studied the bullwhip effect under an order-up-to policy by applying the mean squared error optimal forecasting method to an AR(1) demand process, and investigated the stochastic nature of the ordering process for an incoming ARMA(1,1) process using the same inventory policy and forecasting technique. Chen et al. (2000a, 2000b), Luong (2007), and Luong and Phien (2007) studied the bullwhip effect resulting from an order-up-to policy
International Journal of Industrial Engineering, 20(1-2), 211–224, 2013.

SCHEDULING ALGORITHMS FOR MOBILE HARBOR: AN EXTENDED M-PARALLEL MACHINE PROBLEM
I. Sung1, H. Nam1, T. Lee1
1 Korea Advanced Institute of Science and Technology, Republic of Korea
Corresponding author’s email: taesik.lee@kaist.edu

Abstract: Mobile Harbor is a movable floating structure with container loading/unloading equipment on board. Mobile
Harbor is equivalent to a berth with a quay crane in a conventional port, except that it works with a container ship
anchoring on the open sea. A Mobile Harbor-based system typically deploys a fleet of Mobile Harbor units to handle a
large number of containers, and operations scheduling for the fleet is essential to the productivity of the system. In this
paper, a method to compute scheduling solutions for a Mobile Harbor fleet is proposed. Jobs are assigned to Mobile Harbor
units, and their operations sequence is determined, with an objective of minimizing the sum of completion times of all
container ships. This problem is formulated as a mixed integer programming (MIP) problem, which is modified from an m-
parallel machine problem. A heuristic approach using Genetic Algorithm is developed to obtain a near optimal solution
with reduced computation time.

(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
In today's global economy, demand for maritime container transportation has been steadily increasing. This, in turn, has stimulated the adoption of very large container ships of over 8,000 TEU1 capacity, in an effort to reduce transportation costs. With the introduction of such large container ships, container terminals now face challenges in dramatically improving their service capability in order to serve container ships efficiently. The challenges include providing sufficient water depth at berths and in sea routes, and improving container handling productivity to reduce the port stay time of container ships. Resolving these problems by conventional approaches – expanding existing ports or building new ones – requires massive investment and raises environmental concerns.
Mobile Harbor is a new concept developed by a group of researchers at Korea Advanced Institute of Science and
Technology (KAIST) as an alternative solution to this problem. Mobile Harbor is a container transportation system that can
load/unload containers from a container ship anchoring on the open sea. It can transfer containers from a container ship to a
container terminal, and vice versa. A concept design and dimensional specifications of Mobile Harbor are shown in Figure
1 and Table 1, respectively.

Figure 1. A concept design of Mobile Harbor

An illustrative operational scenario of Mobile Harbor is as follows:

Ÿ A container ship calls at a port, and instead of berthing at a terminal, it anchors at an anchorage, remotely located from
the terminal,

1 TEU stands for twenty-foot equivalent unit.


International Journal of Industrial Engineering, 20(1-2), 225–240, 2013.

SHORT SEA SHIPPING AND RIVER-SEA SHIPPING IN THE MULTI-MODAL TRANSPORT OF CONTAINERS
J. R. Daduna

Berlin School of Economics and Law, Badensche Str. 52, D-10825 Berlin, Germany; e-mail: daduna@hwr-berlin.de

Abstract: The constantly increasing quantitative and qualitative requirements for terrestrial container and Ro/Ro transport cannot be met in the coming years by road and rail freight transport and transportation on inland waterways alone. Suitable solutions therefore have to be found that include other modes of transport, where economic and ecological factors as well as macroeconomic considerations are important. One possible approach is to increase the use of Short Sea Shipping and River-Sea Shipping, which have seen little application so far. In this contribution, the underlying structures are presented and reviewed with respect to their advantages and disadvantages. Potential demand structures are identified and illustrated by various examples. The paper concludes with an analysis and evaluation of these concepts and a summary of the measures necessary for their implementation.

(Received November 30, 2010; Accepted March 15, 2012)

1. POLITICAL FRAMEWORK FOR FREIGHT TRANSPORT

Cargo traffic, both inland and port hinterland transport, is at present carried largely by road freight transport. This situation strongly contradicts transport policy objectives, which are increasingly coming to the fore worldwide and call for a sustainable change of the modal split in favor of rail freight transport and freight transport on inland waterways. Considerations regarding the efficient use of resources and the reduction of mobility-based pollution receive priority here. However, the results of realizing these goals should not be overestimated. The critical question is to what extent a modal shift can actually be achieved under the existing technical and organizational framework and the requirements of operational processes in logistics (see e.g. Daduna 2009). In general, this does not exclude measures to shift road transport to other modes of transport, but, more importantly, the existing potential should be exploited, especially in long-distance haulage.
Targeted governmental measures in various countries, for example the introduction in the Federal Republic of Germany of a road toll (for heavy trucks over 12 tonnes admissible gross vehicle weight on highways), which has caused an (administratively enforced) increase in the cost of road transport, have not shown the desired effect (see e.g. Bulheller 2006; Bühler / Jochem 2008). Only the economical behavior of road transport service providers has led to noticeable ecological effects, for example through the increasing use of vehicles in lower pollutant categories (see e.g. BAG 2009: 19p).
The (often existing and desired) political prioritization of multi-modal road/rail freight transport has not yet led to the expected results regarding a significant change in the modal split, as from the user's perspective process efficiency and adequate service quality are in many cases not provided. In addition, rail transport often faces capacity restrictions regarding the available network infrastructure, as well as (for example within the European Communities (EC)) sometimes significant interoperability problems in cross-border traffic. These concern in particular monitoring and control technology, the energy supply and the legal framework (see e.g. Pachl 2004: 16pp).
Another possibility is the inclusion of inland waterway and maritime navigation in the structures of multi-modal transport, regardless of (process-related) limits. Inland waterway navigation can offer only limited shift potential because of capacity restrictions (relating to the authorized breadth and draught of inland waterway vessels) and the geographical structure of the available network. In maritime navigation, corresponding restrictions concern (possible) access to the ports (often located close to the customer but smaller) and therewith to the hinterland, for example with a (further) increase in ocean-going vessel sizes (see e.g. Imai et al. 2008).
A solution that is increasingly discussed, and already implemented worldwide in various areas, is the concept of Short Sea Shipping (SSS) (also in the context of feeder traffic), whereby a larger number of smaller ports (of local and/or regional importance) is involved in the configuration of transport processes. The focus of attention is on multi-modal transport chains in which (coastal) shipping is efficiently linked primarily with the (classical) terrestrial modes of transport. A specific extension of these considerations results from the integration of River-Sea Shipping (RSS), because not only coastal transport routes are used here, but, with a suitably designed inland waterway network, also access to the
International Journal of Industrial Engineering, 20(3-4), 241-251, 2013.

A PRELIMINARY MATHEMATICAL ANALYSIS FOR UNDERSTANDING TRANSMISSION DYNAMICS OF NOSOCOMIAL INFECTIONS IN A NICU
Taesu Cheong1, and Jennifer L. Grant2
1 Department of Industrial and Systems Engineering, National University of Singapore, Singapore 117576, Singapore
2 Rollins School of Public Health, Department of Health Policy and Management, Emory University, Atlanta, GA 30322, USA

Nosocomial infections (NIs) have been a major concern in hospitals, especially in high-risk populations. Neonates
hospitalized in the intensive care unit (ICU) have a higher risk of acquiring infections during hospitalization than other
ICU populations, which often result in prolonged and more severe illness and possibly death. The corresponding
economic burden is immense, not only for parents and insurance companies, but also for hospitals faced with increased
patient load and resource utilization. In this paper, we attempt to systemically understand the transmission dynamics of
NIs in a neonatal intensive care unit (NICU). For this purpose, we present a mathematical model, perform sensitivity
analysis to evaluate effective intervention strategies, and discuss numerical findings from the analysis.

Keywords: Nosocomial infections, Hospital-acquired infections, Infection control, Mathematical model, Pathogen
spread

1. INTRODUCTION
Nosocomial infection (NI; also known as hospital-acquired infection or HAI) is defined as an infection during hospitalization that was not present or incubating at the time of admission, according to the US Department of Health and Human Services, Centers for Disease Control and Prevention (CDC) (Lopez Sastre et al., 2002). Data from the CDC suggest that 1 in 10 hospitalized patients in the United States acquires an infection each year (Buus-Frank, 2004). This amounts to approximately two million hospitalized patients with NIs and approximately 90,000 deaths resulting from these infections. The associated economic burden is also immense; in fact, approximately USD 6.7 billion is spent annually, primarily on costs associated with increased length of stay.
HAIs often lead to morbidity and mortality for neonates in intensive care. A study by Gastmeier et al. (2007) compared reports of HAI outbreaks in the NICU with those in other intensive care units (ICUs). They found that, of 729 outbreaks in all ICUs, 276 were in NICUs, accounting for 37.9% of all ICU outbreaks. NICU outbreaks involved 5,718 patients, making NICUs the most frequent subgroup in ICU outbreaks.
Critically ill infants cared for in the intensive care environment are among the patient groups most vulnerable to HAIs. Since these babies are underdeveloped and have fragile skin, they have a higher risk of acquiring such infections. The immunologic immaturity of this patient population, the need for prolonged stays, and the heavy use of invasive diagnostic and therapeutic procedures also contribute to higher rates of infection in the NICU than in pediatric and adult ICUs (Mammina et al., 2007). Rates of infection have varied from 6% to 40% of neonatal patients, with the highest rates occurring most often in facilities with larger proportions of very low-birth-weight infants or neonates requiring surgery (Brady, 2005). This group of infants also experiences more severe illness as a result of these infections, mainly because of their profound physiologic instability and the diminished functional capacity of their immune system. Efforts to protect NICU infants from infections must therefore be concomitant. Children's Healthcare of Atlanta (CHOA)1 has three ICUs, of which the NICU has the highest rate of infection. This is a concern for management, since these rates are also higher than the national average.
A systemic approach and mathematical modeling have been increasingly used to understand the transmission
dynamics of infectious diseases in hospitals - particularly, to “test hypotheses of transmission” and “explore
transmission dynamics of pathogens” (Grundmann et al., 2006). In this study, we perform a preliminary mathematical
analysis of the spread of NIs in the CHOA NICU. We then evaluate the effectiveness of different intervention strategies,
including increased hand hygiene, patient screening at admission, and NICU-wide explicit contact precautions against
colonized or infected patients, numerically through sensitivity analysis. We remark that, in the field of industrial engineering, the application of quality control charts to detect infection outbreaks has been the main topic discussed in the literature (e.g., Benneyan, 2008). We also consider facility surface disinfection, including that of medical equipment and devices, as an intervention strategy.
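Purely as an illustration of the kind of model referred to above, the sketch below implements a minimal compartmental patient and health-care-worker (HCW) colonization model with a crude one-way sensitivity analysis on hand hygiene. The compartments, rates, and all parameter values are assumptions made for illustration; they are not taken from the CHOA NICU study or its data.

from scipy.integrate import solve_ivp

# Illustrative patient-HCW colonization model (not the paper's exact formulation).
# P_c: colonized patients, H_c: contaminated health-care workers (HCWs).
def model(t, y, N_p, N_h, beta_ph, beta_hp, mu, gamma, eta):
    P_c, H_c = y
    dP_c = beta_hp * (N_p - P_c) * H_c / N_h - mu * P_c             # acquisition from HCWs minus discharge
    dH_c = beta_ph * (N_h - H_c) * P_c / N_p - (gamma + eta) * H_c  # contamination minus clearance and hand hygiene
    return [dP_c, dH_c]

def prevalence(hand_hygiene_rate):
    sol = solve_ivp(model, [0, 365], [1.0, 0.0],
                    args=(20, 10, 0.8, 0.6, 1 / 10, 1.0, hand_hygiene_rate))
    return sol.y[0, -1] / 20  # colonized fraction of patients at the end of the horizon

# Crude one-way sensitivity analysis on the hand-hygiene clearance rate.
for eta in [0.5, 1.0, 2.0, 4.0]:
    print(f"hand-hygiene clearance rate {eta}/day -> patient prevalence {prevalence(eta):.2f}")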

1 A non-profit pediatric hospital (http://www.choa.org/)
International Journal of Industrial Engineering, 20(3-4), 252-261, 2013.

AUTOMATED METHODOLOGY FOR SCENARIO GENERATION AND ITS


FEASIBILITY TESTING
Sang Chul Park1, Euikoog Ahn1, Yongjin Kwon1
1 Department of Industrial Engineering, Ajou University, Suwon, 443-749, South Korea
Email: yk73@ajou.ac.kr

The main purpose of this study is to devise a novel methodology for automated scenario generation, which
simultaneously checks the feasibility and the correctness of scenarios in terms of event sequence, logical propagation,
and violation of constraints. Modern-day warfare is highly fluid, fast moving, and unpredictable. Such a situation demands fast decision making and rapid deployment of fighting forces. Management of combat assets and utilization of battlefield information therefore become the key factors that decide the outcome of an engagement. In this context, the Korean Armed Forces are building a framework in which commanders can rapidly and efficiently evaluate every conceivable engagement scenario before committing real assets. The methodology is derived from the Conflict Table, event transition probabilities, the DEVS formalism, and the DFS algorithm. The presented example illustrates a one-on-one combat engagement scenario between two submarines, the results of which validate the effectiveness of the proposed methodology.

Keywords: Defense M&S; DEVS; DFS; Automated scenario generation; Conflict Tables; Event transition
probabilities.

1. INTRODUCTION
Defense modeling and simulation (M&S) technology enables countless testing and engagement scenarios to be evaluated without having to commit real assets (Lee et al. 2008). In defense M&S, real objects (e.g., soldiers, trucks, tanks, and defense systems) are modeled as combat entities and embedded into a computer-generated synthetic battlefield. The interaction between the combat entities and the synthetic battlefield is dictated by the rules within the sequence of events, which is basically an engagement scenario. Defense M&S falls into two broad categories: (1) testing of weapon effectiveness; and (2) virtual engagement. The first category is highly favored due to many benefits, including cost savings, less environmental damage, and reduced safety hazards. The second category represents virtual war games or engagements, depending on the size of forces and theaters involved. By examining engagement scenarios, war strategists can formulate the factors important to the conduct of battles and visualize the tactical merits and flaws that are otherwise difficult to identify. One problem, however, is that the scenarios must be composed manually, incurring much time and effort (Yoon, 2004). Due to the complex and unpredictable nature of modern warfare, every possible scenario needs to be evaluated to increase the chance of operational success. Manual composition of engagement scenarios has therefore been a great hindrance to defense M&S.
To cope with the problem, a new method is needed, which is automatic and self-checking. In other words, it
automatically composes scenarios for every possible eventuality, while automatically ascertaining the correctness of the
scenarios. Such a notion is well aligned with the concept of concurrent engineering (CE), which intends to improve
operational efficiencies by simultaneously considering and coordinating disparate activities spanning the entire
development process (Evanczuk, 1990; Priest et al. 2001; Prasad, 1995; Prasad, 1996; Prasad, 1997; Prasad, 1998;
Prasad, 1999). CE is known to successfully reduce the product development cycle time and the same can be true for the
defense M&S development process. In this context, the automated scenario generation is based on the atomic model of
DEVS (Discrete Event System specification) formalism and the DFS (depth first search) algorithm. DEVS provides a
formal framework for specifying discrete event models in a hierarchical and modular manner (DEVS, 2010). It is
represented by the state transition tables. Many defense related studies capitalize on the DEVS formalism (Kim et al.
1997), such as a small scale engagement (Park, 2010), a simulation model for war tactics managers (Son, 2010), and
defense-related spatial models using Cell-DEVS (Wainer et al. 2005). Correctness is checked against the Conflict Table, which represents the possible and impossible pathways for the events. For this study, any discernible activity is referred to as an event, and each event should propagate into the next event on a continuous time scale. The Conflict Table controls the transition of any event from one state to the next. By doing so, an event can unfold only along feasible paths. While the Conflict Table only specifies which event transitions are feasible, the probabilities of one event becoming the next differ considerably when there are many subsequent events to propagate into. Therefore, the event transition probabilities (ETP) can be prescribed by the simulation planners (Figure 1).
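As a rough illustration of the generation step, the following sketch enumerates feasible event sequences by depth-first search over a small, hypothetical Conflict Table and attaches a probability to each scenario from assumed event transition probabilities. The event names, feasible transitions, and probabilities are invented for illustration and are not those of the submarine engagement example.

# Hypothetical Conflict Table: event -> feasible next events (illustrative only).
conflict_table = {
    "detect":   ["approach", "evade"],
    "approach": ["fire", "evade"],
    "evade":    ["detect", "end"],
    "fire":     ["end"],
    "end":      [],
}

# Assumed event transition probabilities (they sum to 1 over each event's feasible successors).
etp = {
    ("detect", "approach"): 0.7, ("detect", "evade"): 0.3,
    ("approach", "fire"): 0.8,   ("approach", "evade"): 0.2,
    ("evade", "detect"): 0.6,    ("evade", "end"): 0.4,
    ("fire", "end"): 1.0,
}

def generate_scenarios(event, sequence, prob, max_len=6):
    """Depth-first enumeration of feasible event sequences together with their probabilities."""
    successors = conflict_table[event]
    if not successors or len(sequence) >= max_len:   # terminal event or length limit reached
        yield sequence, prob
        return
    for nxt in successors:
        yield from generate_scenarios(nxt, sequence + [nxt], prob * etp[(event, nxt)], max_len)

for seq, p in generate_scenarios("detect", ["detect"], 1.0):
    print(" -> ".join(seq), f"(p = {p:.3f})")

In a full implementation, each generated sequence would additionally be checked against the DEVS state transition tables; the length cap above merely keeps the illustrative enumeration finite.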
For war tacticians and military commanders, the result of this study brings about an immediate enhancement in their
International Journal of Industrial Engineering, 20(3-4), 262-272, 2013.

A NEW APPROXIMATION FOR INVENTORY CONTROL SYSTEM WITH


DECISION VARIABLE LEAD-TIME AND STOCHASTIC DEMAND
Serap AKCAN1, Ali KOKANGUL2
Department of Industrial Engineering1
University of Aksaray
68100, Aksaray, Turkey
E-mail: serapakcan@ymail.com
Department of Industrial Engineering2
University of Çukurova
01330, Adana, Turkey
E-mail: kokangul@cu.edu.tr

Demand for any material in a hospital depends on a random arrival rate and a random length of stay in the units. Therefore, the demand for any material shows stochastic characteristics that make the problem of determining the optimal levels of r and Q more difficult. Thus, in this study, a single-item inventory system for healthcare was developed using a continuous review (r, Q) policy. A simulation meta-model was constructed to obtain equations for the average on-hand inventory and the average number of orders per year. The equations were then used to determine the optimal levels of r and Q while minimizing the total cost in an integer non-linear model. The same problem investigated in this study was also solved using the OptQuest optimization software.

Significance: In this study, an applicable new approximation for inventory control system is constructed and this
approximation is examined by presenting a healthcare case study.

Keywords: Healthcare systems; Inventory control; (r, Q) policy; Integer non-linear programming; Simulation meta-
modeling

1. INTRODUCTION
There are a growing number of studies on continuous review inventory systems. The majority of these studies relate to
production applications in which backordering and shortages are allowed. However, there are very few studies concerning the area of healthcare (Sees, 1999). Thus, this study aimed to determine the optimal reorder point (r) and order quantity (Q) that minimize the expected annual total cost under a single-item continuous review (r, Q) policy for a hospital.
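For orientation, the sketch below computes a continuous review (r, Q) policy under normally distributed lead-time demand with the classical iterative EOQ/reorder-point procedure found in standard texts; it is not the simulation meta-model proposed in this study, and all demand and cost figures are arbitrary illustrative values rather than hospital data.

import math
from scipy.stats import norm

# Illustrative inputs: annual demand D, ordering cost K, holding cost h per unit-year,
# shortage penalty p per unit, and lead-time demand ~ Normal(mu_L, sigma_L).
D, K, h, p = 1200.0, 50.0, 4.0, 30.0
mu_L, sigma_L = 100.0, 25.0

def expected_shortage(r):
    """Expected units short per cycle, E[(X - r)+], for normal lead-time demand."""
    z = (r - mu_L) / sigma_L
    return sigma_L * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))

Q = math.sqrt(2.0 * D * K / h)                 # start from the basic EOQ
for _ in range(50):                            # alternate between Q and r until convergence
    ratio = min(Q * h / (p * D), 0.999)        # allowed stockout probability during lead time
    r = mu_L + sigma_L * norm.ppf(1.0 - ratio)
    Q_new = math.sqrt(2.0 * D * (K + p * expected_shortage(r)) / h)
    if abs(Q_new - Q) < 1e-6:
        break
    Q = Q_new

print(f"Q* = {Q:.1f} units, r* = {r:.1f} units")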
Many models have been developed for continuous review (r, Q) policies. Çakanyıldırım et al. (2000) modeled a (Q, r) policy in which the lead-time depends on the lot size. Salameh et al. (2003) considered a continuous inventory model under
permissible delays in payments. In this model, it was assumed that expected demand was constant over time and the
order lead-time was random. Durán et al. (2004) developed a continuous review inventory model to find the optimal
inventory algorithm when there was an expediting option. In their inventory policy, decision variables were integers.
They also discussed the case when the decision variables were real values. Mitra and Chatterjee (2004) modified a
continuous review model for two-stage serial systems first developed by De Bodt and Graves. The model was examined
for fast-moving items. Park (2006) used analytic models in the design of inventory management systems. Chen and Levi
(2006) examined a continuous review model with infinite horizon and single product; pricing and inventory decisions
were made simultaneously and ordering cost included a fixed cost. Mohebbi and Hao (2006) investigated a problem of
random supply interruptions in a continuous review inventory system with compound Poisson demand, Erlang-
distributed lead-times and lost sales. Axsäter (2006) developed a single-echelon inventory model controlled by
continuous review (r, Q) policy in which it was assumed that the lead-time demand was normally distributed and in
which the aim was to minimize holding and ordering costs under a fill rate constraint. Lee and Schwarz (2007) considered
a continuous review (Q, r) inventory system with a single item from an agency perspective, in which the agent’s effort
influences the item’s replenishment lead-time. Their findings revealed that the possible influence of the agent on the
replenishment lead-time could be large, but that a simple linear contract was capable of recapturing most of the cost
penalty of ignoring agency. Hill (2007) investigated continuous review lost-sales inventory models with no fixed order
cost and a Poisson demand process. In addition, Hill et al. (2007) modeled a single-item, two-echelon, continuous
review inventory model. In their model, demands made on the retailers follow a Poisson process and warehouse lead-
time cannot exceed retailer transportation time. Darwish (2008) examined a continuous review model to determine the



International Journal of Industrial Engineering, 20(3-4), 273-281, 2013.

MANPOWER MODELING AND SENSITIVITY ANALYSIS FOR AFGHAN


EDUCATION POLICY
Benjamin Marlin1,2 and Han-Suk Sohn2,*
1 United States Army TRADOC Analysis Center, TRAC-WSMR, White Sands MR, NM 88002, USA
2 Dept. of Industrial Engineering, New Mexico State University, Las Cruces, NM 88003, USA
E-mail addresses: benjamin.marlin@us.army.mil (B. Marlin) and hsohn@nmsu.edu (H. Sohn)

This paper provides a demand-based, balance-of-flow manpower model premised on mathematical programming to provide insight into the potential futures of the Afghan education system. Over the previous three decades, torn by multiple wars and an intolerant governing regime, the education system in Afghanistan has been decimated. Over the past 10 years, Afghanistan and the international community have dedicated a substantial amount of resources to educating the youth of Afghanistan. By forecasting student demand, we are able to determine points of friction in the teacher production policy regarding grade level, gender, and province across a medium-term time horizon. We modify the model to provide sensitivity analysis to inform policies; examples of such policies are accounting for the length of teacher training programs and encouraging inter-provincial teacher moves. By later applying a stochastic optimization model, potential outcomes regarding changes in teacher retention attributed to policy decisions, incentives to teach, or security concerns are highlighted. This model was developed in support of the validation of a large-scale simulation on the same subject.

Keywords: Manpower model, sensitivity analysis, Afghanistan, education policy, mixed integer linear program.

1. BACKGROUND
Over the previous three decades, torn by multiple wars and an intolerant governing regime, the education system in Afghanistan has been decimated. Only in the recent decade has there been a unified effort toward the improvement of education. This emphasis on education has provided benefits, but has also brought unexpected problems. There has been a sevenfold increase in the demand for primary and secondary education, with nearly seven million children
enrolled in school today (Ministry of Education, 2011). Unfortunately, in a country with 27% adult literacy, an ongoing
war upon its soil, an opium trade as a primary gross domestic product, and an inefficient use of international aid,
meeting the increasing demand for education is difficult at best (Sigsgaard, 2009). The Afghanistan Ministry of
Education (MOE) has stated that the future of Afghanistan depends on the capacity of its people to improve their own lives, the well-being of their communities, and the development of the nation. Concurrently, the United Nations (UN) has
supported a tremendous amount of research stating that primary and secondary education is directly linked to the ability
of a people to better their lives and their community (Dickson, 2010). This has resulted in the UN charter for universal
primary education and improved secondary education by 2015.
As of 2012, there are 56 primary donors who have donated approximately $57 billion U.S. to Afghanistan
(Margesson, 2009). The UN Coalition is dedicated to the security and infrastructure improvement of Afghanistan in
order to ensure Afghan Government success. In 2014, with the anticipated withdrawal of coalition forces and a newly
autonomous Afghan state, the future is uncertain. The purpose of this research is to use mathematical modeling to
demonstrate potential outcomes and points of friction regarding the demand for teachers in Afghanistan given the
substantial forthcoming changes in the country.

2. INTRODUCTION
Teacher management is a critical governance issue in fragile state contexts, and especially those in which the education
system has been destroyed by years of conflict and instability (Kirk, 2008). For this reason, this research focuses on the
capacity for teacher training in Afghanistan as it pertains to the growing demand for education. Although the current
pool of teachers has a mixed training background (73% of teachers have not met the grade 14 graduation requirement (Ayobi, 2010)), the Afghanistan Ministry of Education requires two years of teacher training college (TTC) after a potential teacher has passed the equivalent of 12th grade (Ministry of Education, 2011). Therefore, it is rather important to determine the number of future teachers required to enter the training base each year to support the increasing education demand. Of equal importance is discovering potential weaknesses in training capacity, and where these potential friction points exist. The issues cannot be remedied in the short run; therefore, it is beneficial to use insights gained through modeling to inform policy decisions.
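To make the balance-of-flow idea concrete, the sketch below sets up a deliberately small linear program that tracks a teacher stock over a few years, with TTC graduates as the inflow and attrition as the outflow, and minimizes unmet demand. The demand trajectory, retention rate, capacities, and the use of the PuLP library are all assumptions for illustration; they are unrelated to the actual Afghan data or to the full provincial, gender- and grade-disaggregated model of this paper.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

years = range(2013, 2021)
demand = {y: 150000 + 12000 * (y - 2013) for y in years}   # assumed teacher demand trajectory
retention = 0.90                                           # assumed annual teacher retention rate
initial_teachers = 160000                                  # assumed current teacher stock
ttc_capacity = 25000                                       # assumed annual TTC graduation capacity

prob = LpProblem("teacher_balance_of_flow", LpMinimize)
intake = {y: LpVariable(f"graduates_{y}", lowBound=0, upBound=ttc_capacity) for y in years}
stock = {y: LpVariable(f"teachers_{y}", lowBound=0) for y in years}
shortage = {y: LpVariable(f"shortage_{y}", lowBound=0) for y in years}

prev = initial_teachers
for y in years:
    prob += stock[y] == retention * prev + intake[y]    # balance of flow: retained stock plus new graduates
    prob += shortage[y] >= demand[y] - stock[y]          # unmet demand in year y
    prev = stock[y]

prob += lpSum(shortage[y] for y in years)                # minimize total unmet demand over the horizon
prob.solve(PULP_CBC_CMD(msg=False))

for y in years:
    print(y, f"graduates={intake[y].value():.0f}",
          f"teachers={stock[y].value():.0f}", f"shortage={shortage[y].value():.0f}")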
The technique presented in this paper is based on a network flow integer program which has been successfully applied
International Journal of Industrial Engineering, 20(3-4), 282-289, 2013.

QUICK RELIABILITY ASSESSMENT OF TWO-COMMODITY


TRANSMISSION THROUGH A CAPACITATED-FLOW NETWORK
Yi-Kuei Lin
Department of Industrial Management
National Taiwan University of Science and Technology
Taipei 106, Taiwan, R.O.C.
Tel: +886-2-27303277, Fax: +886-2-27376344
yklin@mail.ntust.edu.tw

Each arc in a capacitated-flow network has a discrete, multiple-valued random capacity. Many studies have evaluated the probability, named the system reliability herein, that the maximum flow from source to sink of a capacitated-flow network is no less than a demand d. Such studies only considered commodities of the same type transmitted throughout the network. Many real-world systems allow commodities of multiple types to be transmitted simultaneously, especially when different types of commodity consume an arc’s capacity differently. For simplicity, this article assesses the system reliability for a two-commodity case as follows. Given the demand (d1,d2), where d1 and d2 are the demands of commodity 1 and 2 at the sink, respectively, an algorithm is proposed to find all lower boundary points for (d1,d2). The system reliability can be computed quickly in terms of such points. The computational complexity of the proposed algorithm is also analyzed.
Keywords: Reliability; two-commodity; capacitated-flow networks; minimal paths

1. INTRODUCTION
A minimal path (MP) is a path whose proper subsets are no longer paths, and a minimal cut (MC) is a cut whose proper subsets are no longer cuts. When the system is binary-state and composed of binary-state components (Endrenyi, 1978; Henley, 1981), the typical method uses MPs or MCs to compute the system reliability, the probability that the source node s connects to the sink node t. When the system is multistate (Aven, 1985; Griffith, 1980; Hudson and Kapur, 1985; Xue, 1985), the system reliability, the probability that the system state is not less than a state d, can be evaluated in terms of d-MPs or d-MCs. Note that a d-MP (not an MP) and a d-MC (not an MC) are both vectors denoting the state of each arc. In the case that the considered multistate system is a single-commodity capacitated-flow network (i.e., flow is considered), the system reliability is the probability that the maximum flow (from s to t) is not less than a demand d. The typical approach to assess such a reliability is to first search for the set of d-MPs (Lin et al., 1995; Lin, 2001, 2003, 2010a-d; Yeh, 1998) or d-MCs (Jane et al., 1993; Lin, 2007, 2010e).
However, in the real world, many capacitated-flow networks allow commodities of multiple types to be transmitted from s to t simultaneously, especially when different types of commodity consume the capacity on an arc differently. A broadband telecommunication network is one such flow network, as several types of services (audio, video, etc.) share the bandwidth (the capacity of an arc) simultaneously. The purpose of this article is to extend the reliability assessment from the single-commodity case to a two-commodity case. The source node s supplies commodities without limit. The demands of commodity 1 and 2 at the sink t are d1 and d2, respectively. An algorithm is first proposed to generate all lower boundary points for (d1,d2), called (d1,d2)-MPs, in terms of MPs. Then the system reliability, the probability that the system satisfies the demand (d1,d2), can be computed in terms of the (d1,d2)-MPs. The remainder of this paper is organized as follows. The two-commodity capacitated-flow model is presented in Section 2. The theory and the algorithm are proposed in Sections 3 and 4, respectively. In Section 5, a numerical example is presented to illustrate the approach and to show how the system reliability can be calculated. The analysis of computational time complexity is given in Section 6.
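To make the role of lower boundary points concrete before the formal model, the sketch below computes a reliability of the form Pr{the capacity vector X is componentwise at least one lower boundary point} by inclusion-exclusion, assuming statistically independent arcs. The three-arc network, capacity distributions, and boundary points are invented for illustration and are not the paper's example.

from itertools import combinations

# Assumed capacity distributions for three arcs: Pr{capacity of arc i equals c}.
cap_dist = [
    {0: 0.05, 1: 0.15, 2: 0.80},
    {0: 0.10, 2: 0.30, 3: 0.60},
    {0: 0.05, 1: 0.25, 2: 0.70},
]

# Assumed lower boundary points for a demand pair (d1, d2): the demand is met
# iff the capacity vector X is componentwise >= at least one of these points.
lower_boundary_points = [(2, 2, 1), (1, 3, 2)]

def prob_at_least(point):
    """Pr{X >= point}, assuming statistically independent arc capacities."""
    p = 1.0
    for dist, c_min in zip(cap_dist, point):
        p *= sum(pr for c, pr in dist.items() if c >= c_min)
    return p

def reliability(points):
    """Inclusion-exclusion over the union of the events {X >= x} for each boundary point x."""
    total = 0.0
    for k in range(1, len(points) + 1):
        for subset in combinations(points, k):
            merged = tuple(max(col) for col in zip(*subset))   # componentwise maximum
            total += (-1) ** (k + 1) * prob_at_least(merged)
    return total

print(f"System reliability: {reliability(lower_boundary_points):.4f}")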

2. TWO-COMMODITY CAPACITATED-FLOW NETWORK

Notation and Nomenclature


G(A, N, C): a capacitated-flow network
A = {ai | 1 ≤ i ≤ n}: the set of arcs
N: the set of nodes
C = (C1, C2, …, Cn): Ci is the maximal capacity of ai
s, t: the unique source node and the unique sink node
wik: (real number) the weight of commodity k (k = 1, 2) on ai; it measures the amount of capacity consumed on ai



International Journal of Industrial Engineering, 20(3-4), 290-299, 2013.

APPLICATIONS OF QUALITY IMPROVEMENT AND ROBUST DESIGN


METHODS TO A PHARMACEUTICAL RESEARCH AND DEVELOPMENT

Byung Rae CHO1, Yongsun CHOI2 and Sangmun SHIN2*
1 Department of Industrial Engineering, Clemson University, Clemson, South Carolina 29634, USA
2 Department of Systems Management & Engineering, Inje University, Gimhae, GyeongNam 621-749, South Korea

Researchers often identify robust design, based on the concept of building quality into products or processes, as one of
the most important systems engineering design concepts for quality improvement and process optimization. Traditional
robust design principles have often been applied to situations in which the quality characteristics of interest are typically
time-insensitive. In pharmaceutical manufacturing processes, time-oriented quality characteristics, such as the
degradation of a drug, are often of interest. As a result, current robust design models for quality improvement which
have been studied in the literature may not be effective in finding robust design solutions. In this paper, we show how robust design concepts can be applied to pharmaceutical production research and development by proposing experimental and optimization models that are able to handle time-oriented characteristics. This is perhaps the first such attempt in the robust design field. An example is given and comparative studies are discussed for model verification.
Keywords: Robust design; mixture experiments; pharmaceutical formulations; censored data; Weibull distribution; maximum likelihood estimation.

1. INTRODUCTION

Continuous quality improvement has become widely recognized by many industries as a critical concept in maintaining
a competitive advantage in the marketplace. It is also recognized that quality improvement activities are efficient and
cost-effective when implemented during the design stage. Based on this awareness, Taguchi (1986) introduced a
systematic method for applying experimental design, which has become known as robust design and is often referred to as robust parameter design. The primary goal of this method is to determine the best design factor settings by
minimizing performance variability and product bias, i.e., the deviation from the target value of a product. Because of
the practicability in reducing the inherent uncertainty associated with system performance, the widespread application
of robust design techniques has resulted in significant improvements in product quality, manufacturability, and
reliability at low cost. Although the main robust design principles have been implemented in a number of different
industrial settings, our literature study indicates that robust design has been rarely addressed in the pharmaceutical
design process.
In the pharmaceutical industry, the development of a new drug is a lengthy process involving laboratory
experiments. When a new drug is discovered, it is important to design an appropriate pharmaceutical dosage or
formulation for the drug so that it can be delivered efficiently to the site of action in the body for the optimal therapeutic
effect on the intended patient population. The Food and Drug Administration (FDA) requires that an appropriate assay
methodology for the active ingredients of the designed formulation be developed and validated before it can be applied
to animal or human subjects. Given this fact, one of the main challenges faced by many researchers during the past
decades is the optimal design of pharmaceutical formulations to identify better approaches to various unmet clinical
needs. Consequently, the pharmaceutical industry’s large investment in the research and development (R&D) of new
drugs provides a great opportunity for research in the areas of experimentation and design of pharmaceutical
formulations. By definition, pharmaceutical formulation studies are mixture problems. These types of problems take
into account the proportions within the mixture, not the amount of the ingredient; thus, the ingredients in such
formulations are inherently dependent upon one another and consequently experimental design methodologies
commonly used in many manufacturing settings may not be effective. Instead, for mixture problems, a special kind of
experimental design, referred to as a mixture experiment, is needed. In mixture experiments, typical factors in question
are the ingredients of a mixture, and the quality characteristic of interest is often based on the proportionality of each of
those ingredients. Hence, the quality of the pharmaceutical product is influenced by such designs when they are applied
in the early stages of drug development.
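Since the keywords point to maximum likelihood estimation of a Weibull model from censored data, a minimal sketch of that estimation step is given below. The lifetimes, the censoring scheme, and the use of a general-purpose optimizer are illustrative assumptions only; the paper's own experimental and optimization models are not reproduced here.

import numpy as np
from scipy.optimize import minimize

# Illustrative right-censored lifetime data (e.g., time until a drug degrades below specification).
times    = np.array([12.0, 18.5, 23.1, 30.0, 30.0, 7.9, 26.4, 30.0])
observed = np.array([1, 1, 1, 0, 0, 1, 1, 0])   # 0 = right-censored at 30 time units

def neg_log_likelihood(params):
    """Negative log-likelihood of a Weibull(shape k, scale lam) model with right censoring."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    z = times / lam
    log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z ** k   # log density for observed failures
    log_surv = -z ** k                                          # log survival for censored observations
    return -np.sum(observed * log_pdf + (1 - observed) * log_surv)

result = minimize(neg_log_likelihood, x0=[1.0, 20.0], method="Nelder-Mead")
k_hat, lam_hat = result.x
print(f"Weibull shape = {k_hat:.2f}, scale = {lam_hat:.2f}")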
In this paper, we propose a new robust design model in the context of pharmaceutical production R&D. The main
contribution of this paper is two-fold. First, traditional experimental design methods have often been applied to situations in
International Journal of Industrial Engineering, 20(3-4), 300-310, 2013.

USING A CLASSIFICATION SCHEMA TO COMPARE BUSINESS-IT


ALIGNMENT APPROACHES

Marne de Vries
University of Pretoria
South Africa
Department of Industrial and Systems Engineering

Enterprise engineering (EE) is a new discipline that emerged from existing disciplines, such as industrial engineering,
systems engineering, information science and organisation science. EE has the objective to design, align and govern the
development of an enterprise in a coherent and consistent way. Within the EE discipline, knowledge about the
alignment of business components with IT components is embedded in numerous business-IT alignment frameworks
and approaches, contributing to a fragmented business-IT alignment knowledge base. This paper presents the Business-
IT Alignment Model (BIAM) as a conceptual solution to the fragmented knowledge base. The BIAM provides a
common frame of reference to compare existing business-IT alignment approaches. The main contribution of this article
is a demonstration of BIAM to compare two business-IT alignment approaches: the foundation for execution approach
and the essence of operation approach.

Significance: To provide enterprise designers/architects with a qualitative analysis tool for understanding and
comparing the intent, scope and implementation means of existing/already-implemented business-IT alignment
approaches.

Keywords: enterprise engineering, enterprise architecture, enterprise ontology, enterprise design, business-IT alignment

1. INTRODUCTION
Enterprise systems of the 21st century are exceedingly complex, and in addition, these systems need to be dynamic to
stay ahead of competition. Information technology opened up new opportunities for enterprises to extend enterprise
boundaries in offering complementary services, entering new business domains and creating networks of collaborating
enterprises. The extended enterprise, however, still needs to comply with corporate governance rules and legislation and needs to be flexible and adaptable to seize new opportunities (Hoogervorst, 2009).
Supporting an overall view of a complex enterprise, enterprise engineering (EE) emerged as a new discipline for
designing, aligning and governing the development of an enterprise. EE consists of three subfields: enterprise ontology,
enterprise governance, and enterprise architecture (Barjis, 2011). One of the potential business benefits of EE is to design and align the entire enterprise (Kappelman et al., 2010). However, a strong theme within enterprise alignment is alignment between business components and IT components, called business-IT alignment. Although various theoretical
approaches and frameworks emerged in literature (Schekkerman, 2004) to facilitate business-IT alignment, a study
performed by OVUM (Blowers, 2012) indicates that 66% of enterprises had developed their own customised
framework, with one third of the participants making use of two or more theoretical frameworks. The expanding number of alignment approaches and frameworks creates difficulties in comparing or extending a current alignment approach with knowledge from the existing business-IT alignment knowledge base. Previous studies circumvented this problem by providing a common reference model, the Business-IT Alignment Model (BIAM) (De Vries, 2010, 2012), for understanding and comparing alignment approaches.
This article applies the BIAM in contextualising two business-IT alignment approaches, the foundation for execution
approach (Ross et al., 2006) and the essence of operation approach (Dietz, 2006). The aim is to enhance the foundation
for execution approach, due to certain method deficiencies of its associated operating model (OM), with another
approach, the essence of operation approach.
The main contribution of the article is to demonstrate how the classification categories of the BIAM are used to
compare two alignment approaches in confirming their compatibility. As demonstrated by the comparison example,
BIAM is useful to enterprise engineering practitioners for contextualising current alignment approaches implemented at
their enterprise, to identify similarities and differences between current approaches and opportunities for extension.
The paper is structured as follows: Section 2 provides background on the topic of business-IT alignment, the Business-IT Alignment Model (BIAM) and two alignment approaches, the foundation for execution approach and the essence of operation approach. Section 3 defines the problem of assessing the feasibility of combining current alignment approaches. Section 4 suggests the use of the BIAM components as comparison categories in contextualising the two alignment approaches, and Section 5 presents the results of the comparison demonstration. Section 6 concludes with opportunities for follow-up research.
International Journal of Industrial Engineering, 20(3-4), 311-318, 2013.

VARIABLE SAMPLE SIZE AND SAMPLING INTERVALS WITH FIXED


TIMES HOTELLING’S T2 CHART
M. H. Lee
School of Engineering, Computing and Science,
Swinburne University of Technology (Sarawak Campus),
93350 Kuching, Sarawak, Malaysia.
Email: mhlee@swinburne.edu.my

The idea of variable sample size and variable sampling interval with sampling at fixed times is extended to the Hotelling’s T2 chart in this study. This chart is called the variable sample size and sampling intervals with fixed times (VSSIFT) Hotelling’s T2 chart, in which samples of size n are always taken at specified, fixed, equally spaced time points, but additional samples larger than n are allowed between these time points whenever there is some indication of a process mean shift. The numerical comparison shows that the VSSIFT Hotelling’s T2 chart and the variable sampling interval and variable sample size (VSSI) Hotelling’s T2 chart are almost equally effective in detecting shifts in the process mean. However, from the administration viewpoint, the VSSIFT chart is considered to be more convenient than the VSSI chart.

Keywords: sampling at fixed times; steady-state average time to signal; variable sample size; Hotelling’s T2 chart; Markov
chain method

1. INTRODUCTION
The usual practice in using control charts is to take samples of fixed size from the process at a fixed sampling interval. Recently, the variable sample size and variable sampling interval (VSSI) Hotelling’s T2 chart has been shown to give substantially faster detection of most process mean shifts than the standard Hotelling’s T2 chart (Aparisi and Haro, 2003). In the design of the VSSI chart, the sample size and the sampling interval are allowed to change based on the chart statistic.
It is reasonable to relax the control by taking the next sample at long sampling interval with small sample size if the current
sampling point is close to the target. On the other hand, it is reasonable to tighten the control by taking the next sample at
short sampling interval with large sample size if the current sampling point is far from the target but still within the control
limit. Thus the actual number of samples taken in any time period will be a random variable, and the time points at which
the samples are taken will be unpredictable. The variability in the sampling intervals may be inconvenient from an
administrative viewpoint and also undesirable for drawing inferences about the process (Reynolds, 1996a; Reynolds,
1996b). To alleviate the disadvantage of unpredictable sampling times, Reynolds (1996a; 1996b) proposed a modification of the variable sampling interval (VSI) idea for the X̄ chart, in which samples are always taken at specified, fixed, equally spaced time points, but additional samples are allowed between these time points whenever there is some indication that the process has shifted from the target. This chart is called the variable sampling interval with sampling at fixed times (VSIFT) control chart. The VSIFT control chart may conform more closely to the natural periods of the process and be more convenient to administer. It seems reasonable to increase the size of such samples to improve the performance of the control chart, since the additional samples are always taken when there is some indication that the process has changed (Costa, 1998). Lin and Chou (2005) extended this idea of sampling at fixed times to the VSSI X̄ chart, and they showed that the VSSI X̄ chart with sampling at fixed times gives almost the same detection ability as the original VSSI X̄ chart. From the practical viewpoint of administration, the variable sample size and sampling intervals with fixed times (VSSIFT) X̄ chart is relatively easy to set up and implement. In this study, the VSSIFT feature is extended to the multivariate chart, namely the Hotelling’s T2 chart.
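As background for the charts compared in this study, the sketch below evaluates the Hotelling's T2 statistic of a sample mean vector against hypothetical in-control parameters and compares it with the chi-square control limit. The numbers only illustrate the signalling rule; they do not reproduce the VSSIFT scheme or the paper's numerical comparison.

import numpy as np
from scipy.stats import chi2

# Hypothetical in-control parameters for p = 2 quality characteristics.
mu0    = np.array([10.0, 5.0])
sigma0 = np.array([[1.0, 0.3],
                   [0.3, 0.5]])
alpha, n = 0.005, 5                            # false-alarm probability and sample size

x_bar = np.array([10.8, 5.4])                  # hypothetical sample mean vector
diff = x_bar - mu0
t2 = n * diff @ np.linalg.inv(sigma0) @ diff   # Hotelling's T2 statistic for this sample
cl = chi2.ppf(1 - alpha, df=len(mu0))          # control limit: upper alpha chi-square percentage point

print(f"T2 = {t2:.2f}, CL = {cl:.2f}, signal = {t2 > cl}")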

2. VSSIFT HOTELLING’S T2 CHART


Consider a process in which p quality characteristics of interest are observed over time for each item, and the distribution of the observations is p-variate normal with mean vector µ0 and covariance matrix Σ0 when the process is in control. Assume that a sample of size n is taken at every sampling point, and let X̄t be the average vector for the tth sample. Then the chart statistic

Tt^2 = n (X̄t − µ0)' Σ0^(−1) (X̄t − µ0)    (1)

is plotted in the Hotelling's T2 chart with control limit CL = χ^2_(p,α), where χ^2_(p,α) is the upper α percentage point of the chi-square distribution with p degrees of freedom. As pointed out by Aparisi (1996), α = 0.005 has been widely employed in
International Journal of Industrial Engineering, 20(5-6), 319-328, 2013

SYSTEM RELIABILITY WITH ROUTING SCHEME FOR A


STOCHASTIC COMPUTER NETWORK UNDER ACCURACY RATE
Yi-Kuei Lin and Cheng-Fu Huang
Department of Industrial Management
National Taiwan University of Science & Technology
Taipei 106, Taiwan, R.O.C.

Under the assumption that the capacity of each branch of the network is deterministic, the quickest path problem is to find a path sending a specified amount of data from the source to the sink such that the transmission time is minimized. However, in many real-life networks such as computer systems, the capacity of each branch is stochastic, with a transmission accuracy rate. Such a network is named a stochastic computer network. Hence, we try to compute the probability that d units of data can be sent through the stochastic computer network within both the time and accuracy rate constraints according to a routing scheme. Such a probability is a performance indicator to provide to managers for improvement. This paper mainly proposes an efficient algorithm to find the minimal capacity vectors meeting such requirements. The system reliability with respect to a routing scheme can then be calculated.

Keywords: Accuracy rate; Time; Quickest path; Routing scheme; Stochastic computer network; System reliability.

1. INTRODUCTION
From the perspectives of network operations, management, and engineering, service level agreements (SLAs) are an
important part of the networking industry. SLAs are used in contracts between network service providers and their
customers. An SLA can be measured by many criteria: for instance, availability, delay, loss, and out-of-order
packets. A basic index is the accuracy rate, which is often used to measure the performance of enterprise networks.
Therefore, from the viewpoint of quality of service (QoS) (Sausen et al., 2010; Wei et al. 2008), maintaining a high
network traffic accuracy rate is essential for enterprises to survive in a competitive environment. Many researchers
have discussed issues related to measuring local area network (LAN) traffic (Amer, 1982; Chlamtac, 1980; Jain and
Routhier, 1986) and previous studies have considered flow accuracy in traffic classification. Such flows are called
elephant flows. Because high packet-rate flows have a great impact on network performance, identifying them
promptly is important in network management and traffic engineering (Mori et al., 2007). A conventional method
for estimating the accuracy rate of large or elephant flows is the use of packet sampling. However, packet sampling
is the main challenge in network or flow measurements. Feldmann et al. (2001) presented a model for traffic
demands to support traffic engineering and performance debugging of large Internet service provider networks.
Choi et al. (2003) used packet sampling to accurately estimate large flows under dynamic traffic conditions. The file
is said to be transmitted correctly only if the file received at the sink is identical to the original file. In fact, data
transfer is done through packet transmission. The network supervisor should monitor the number of error packets to
assess the accuracy rate of the network. However, the previous papers did not involve system reliability when measuring the accuracy rate.
Nowadays, computer technology is becoming more important to modern enterprises. Computer networks are the
major medium for transmitting data/information in most enterprises. As the stability of computer networks strongly
influences the quality of data transmissions from a source to a sink, especially for accurate traffic measurement and
monitoring, the system reliability of the computer network is always of concern for information technology
departments. Many enterprises regard system reliability evaluation or improvement as crucial for network
management, traffic engineering, and security tasks. In general, a computer network is usually modeled as a network
topology with nodes and branches, in which each branch represents a transmission line and each node represents a
transmission device such as a hub, router, or switch. In fact, a transmission line is combined with several physical lines
such as twisted pairs, coaxial cables, or fiber cables. Each physical line may provide a capacity or may fail; this
implies that a transmission line has several states where state c means that c physical lines are operational. Hence, the
capacity of each branch has several values. In other words, the computer network should be multistate due to the
various capacities of each transmission line. Such a network is a typical stochastic flow network (Aven, 1985; Cheng,
1998; Jane et al., 1993; Levitin, 2001; Lin et al., 1995; Lin, 2001, 2003, 2007a, 2007b, 2009a-c; Yeh, 1998, 2004, 2005)
and is called a stochastic computer network (SCN) herein.
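To give a flavour of the quantities involved, the sketch below evaluates the time needed to send d data units along one routing path of a multistate network, using a quickest-path style expression (total lead time plus d divided by the path's bottleneck capacity) and checking it against a time constraint. The small path, its capacity states, and the constraint values are hypothetical assumptions and do not correspond to the algorithm or example developed in this paper.

import math

# Hypothetical routing path: each branch has a lead time and a current capacity state
# (units of data transmitted per unit of time under that state).
path = [
    {"lead_time": 2, "capacity": 4},
    {"lead_time": 1, "capacity": 3},
    {"lead_time": 3, "capacity": 5},
]

def transmission_time(d, path):
    """Quickest-path style time to send d units: total lead time + d over the bottleneck capacity."""
    total_lead = sum(branch["lead_time"] for branch in path)
    bottleneck = min(branch["capacity"] for branch in path)
    if bottleneck == 0:
        return math.inf          # a failed branch blocks the whole path
    return total_lead + math.ceil(d / bottleneck)

d, time_limit = 20, 15
t = transmission_time(d, path)
print(f"time = {t}, within limit = {t <= time_limit}")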
Another important issue is the transmission time through the computer network. From the point of view of quality management and decision making, it is an important task to reduce the transmission time through a computer network. When data are transmitted through the computer network, the path with the shortest delay should be selected to


yklin@mail.ntust.edu.tw



International Journal of Industrial Engineering, 20(5-6), 329-338, 2013.

OBSERVED BENEFITS FROM PRODUCT CONFIGURATION SYSTEMS


Lars Hvam1, Anders Haug2, Niels Henrik Mortensen3, Christian Thuesen4
Department of Management Engineering1
Operations Management
Technical University of Denmark
Building 426, DK-2800 Kgs. Lyngby
Email: lahv@dtu.dk

Department of Entrepreneurship and Relationship Management2


University of Southern Denmark
Engstien 1, DK-6000 Kolding
Email: adg@sam.sdu.dk

Department of Mechanical Engineering3


Product Architecture Group
Technical University of Denmark
Building 426, DK-2800 Kgs. Lyngby
Email: nhmo@mek.dtu.dk

Department of Management Engineering4


Production and Service Management
Technical University of Denmark
Building 426, DK-2800 Kgs. Lyngby
Email: chth@dtu.dk

This article presents a study of the benefits obtained from applying product configuration systems, based on a case study in four industry companies. The impacts are described according to the main objectives stated in the literature for implementing product configuration systems: lead time in the specification processes, on-time delivery of the specifications, resource consumption for making specifications, quality of specifications, optimization of products and services, and other observations.
The purpose of the study is partly to identify specific impacts observed from implementing product configuration systems in industry companies and partly to assess whether the suggested objectives are appropriate for describing the impact of product configuration systems and to identify other possible objectives. The empirical study of the companies also gives an indication of more overall performance indicators being affected by the use of product configuration systems, e.g. increased sales, a decrease in the number of SKUs, improved ability to introduce new products, and cost reductions.

Significance: Product configuration systems are increasingly used in industrial companies as a means for efficient design of customer-tailored products. There are examples of companies that have gained significant benefits from applying product configuration systems. However, companies considering the use of product configuration systems face a challenge in assessing the potential benefits of applying them. This article provides a list of potential benefits based on a case study of four industry companies.

Keywords: Mass Customization, product configuration, engineering processes, performance measurement, complexity management.

1. INTRODUCTION
Customers worldwide require personalised products. One way of obtaining this is to customise the products by use of product configuration systems (Tseng and Piller, 2003; Forza and Salvador, 2007; Hvam et al., 2008). Product configuration systems are increasingly used as a means for efficient design of customer-tailored products, and this has led to significant benefits for industry companies. However, the specific benefits gained from product configuration are difficult to measure. This article discusses how to assess the benefits from the use of product configuration based on a suggested set of measurements and an empirical study of four industry companies.
Several companies have acknowledged the opportunity to apply product configuration systems to support the activities of the product configuration process (see for example www.configurator-database.com). Companies like Dell Computer and American Power Conversion (APC) rely heavily on the performance of their configuration sys-
International Journal of Industrial Engineering, 20(5-6), 339-371, 2013.

Decision Support for the Global Logistics Positioning


Chun-Wei R. Lin a, Sheng-Jie J. Hsu b,c,*
a Department of Industrial Engineering and Management, National Yunlin University of Science and Technology, 123, University Road Section 3, Douliou, Yunlin, Taiwan, 640, R.O.C. E-mail: lincwr@yuntech.edu.tw
b Graduate School of Management, National Yunlin University of Science and Technology, Douliou, Yunlin, Taiwan, 640, R.O.C. E-mail: g9020816@yuntech.edu.tw
c Department of Information Management, Transworld Institute of Technology, Douliou, Yunlin, Taiwan, 640, R.O.C. E-mail: jess@tit.edu.tw

As an enterprise's operations become global, its global logistics system must be coordinated accordingly. It is clear that global logistics (GL) is more complicated than local logistics; however, a generic structure for GL positioning to support decision-making is lacking.
Therefore, this article proposes a Global Logistics Positioning (GLP) framework based on a literature review and practical experience, and builds the variables of this framework into a Decision Support System (DSS) that is useful for GLP decision-making. The DSS can help the decision-maker decide the positions of the operation headquarters, research and development bases, production bases, and distribution bases.
For efficiency, the article proposes a four-phase algorithm, which integrates goal programming, a revised Analytic Hierarchy Process method, Euclidean distance, and a fitness concept, to execute the GLP computation. Finally, a virtual example, the ABC Company, is used to verify the theoretical feasibility of the GLP approach.

Keywords: Global Logistic Management, Global Logistic Positioning, Framework, Decision Support System.

1. INTRODUCTION

Many organizations have a significant and growing presence in resource and/or demand markets outside their
country of origin. Current business conditions blur the distinctions between domestic and international logistics.
Successful enterprises have realized that to survive and prosper they must go beyond the strategies, policies, and
programs of the past and adopt a global view of business, customers, and competition [Stock and Lambert, 2001]. Therefore, an enterprise extends its operations globally and becomes a Multi-National Enterprise (MNE), and its logistics system must match the enterprise strategy to become a global logistics system. Dornier et al. [1998] argued
that geographical boundaries are losing their importance. Companies view their network of worldwide facilities as a
single entity. Implementing worldwide sourcing, establishing production sites on each continent, and selling in
multiple markets all imply the existence of an operations and logistics approach designed with more than national
considerations in mind. Bowersox et al. [1999] argued that the business model of successful global operations is



International Journal of Industrial Engineering, 20(5-6), 372-386, 2013.
 

A CLASSIC AND EFFECTIVE APPROACH TO INVENTORY MANAGEMENT


J. A. López, A. Mendoza, and J. Masini
Department of Industrial Engineering
Universidad Panamericana
Guadalajara, Jalisco 45010, MEXICO
Corresponding author's email: amendoza@up.edu.mx (Abraham Mendoza)

Many organizations base their demand forecasts and replenishment policies only on judgmental or qualitative approaches. This paper presents an application where quantitative demand forecasting methods and classic inventory models are used to achieve a significant inventory cost reduction and improved customer service levels at a company located in Guadalajara, Mexico. The company currently uses a naive method to forecast demand. By proposing the use of the Winters method, forecast accuracy was improved by 41.12%. Additionally, as a result of an ABC analysis of the product under study, a particular component was chosen (it accounts for 70.24% of total sales and 60.06% of total volume) and two inventory policies were studied for that component. The first inventory policy considers the traditional EOQ model, whereas the second one uses a continuous-review (Q,R) policy. The best policy achieves a 43.69% total cost reduction relative to the current inventory policy. This policy translates into several operational benefits for the company, e.g., improved customer demand planning, simplified production and procurement planning, a lower level of uncertainty and a better service level.

Significance: While many organizations base their demand forecast and replenishment decisions only on judgmental or
qualitative approaches, this paper presents an application where forecasting methods and classic inventory models are used
to achieve a significant inventory cost reduction and improved customer service levels at a company located in Guadalajara,
Mexico.

Keywords: Inventory Management, Forecasting Methods.

1. INTRODUCTION

On the one hand, small and medium companies seem to be characterized by the little effort they make to optimize their inventory management systems. They are mainly concerned with satisfying customers’ demand by any means and barely realize the benefits of using scientific models for calculating optimal order quantities and reorder points while minimizing inventory costs (e.g., holding and setup costs) and increasing customer service levels. On the other hand, large companies have developed stricter policies for controlling inventory. However, most of these efforts are not supported by scientific models either.
Many authors have proposed inventory policies based on mathematical models that are easy to implement in practical
situations. For example, Harris introduced the well-known Economic Order Quantity (EOQ) model to calculate optimal
inventory policies for situations in which demand is relatively constant (Harris, 1990). This model has been extended to
include transportation freight rates, production rates, quantity discounts, quality constraints, stochastic environments and
multi-echelon systems. The reader is referred to Silver, Pyke and Peterson (1998), Nahmias (2001), Chopra and Meindl
(2007), Mendoza (2007), and Hillier and Lieberman (2010) for more detailed texts on these extensions. Moreover, the EOQ
has been successfully applied by some companies around the world. For instance, Presto Tools, at Sheffield, UK, obtained
a 54% annual reduction in their inventory levels (Liu and Ridgway, 1995).
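As a reminder of the baseline model referred to above, the following sketch computes the classical EOQ and the implied annual ordering-plus-holding cost; the demand and cost figures are made up for illustration and are unrelated to the company studied in this paper.

import math

# Illustrative parameters: annual demand D (units/year), ordering cost K ($ per order),
# and unit holding cost h ($ per unit per year).
D, K, h = 4800.0, 120.0, 2.5

Q_star = math.sqrt(2.0 * D * K / h)                 # classical EOQ
orders_per_year = D / Q_star
annual_cost = K * orders_per_year + h * Q_star / 2  # ordering plus holding cost at the optimum

print(f"EOQ = {Q_star:.0f} units, {orders_per_year:.1f} orders/year, "
      f"annual ordering+holding cost = ${annual_cost:.2f}")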
Despite the benefits shown in some companies, “in these days of advanced information technology, many companies are
still not taking advantage of fundamental inventory models”, as stated by Piasecki (2001). For example, companies do not
rely on the effectiveness of the EOQ model because of its apparent simplicity. Part of the problem is due to the lack of
thorough knowledge of the model’s assumptions and benefits. Along these lines, Piasecki (2001) stated: “many ERP
packages have built-in calculations for EOQ that work automatically, so the users do not know how it is calculated and
therefore do not understand the data inputs and system set-up that control the output”.
The success of any inventory policy depends on effective customer demand planning (CDP), which begins with accurate forecasts (Krajewski and Ritzman, 2005). At least in small companies, the application of quantitative methods for forecasting, as well as the implementation of replenishment policies through scientific models, is not well known. Many organizations base their demand forecasts and replenishment policies only on judgmental or qualitative approaches.
This paper presents an application where quantitative demand forecasting methods and classic inventory models are used
to achieve a significant reduction in inventory costs. We offer an analysis of the current inventory replenishment policies of
a company located in Guadalajara, Mexico, and propose significant cost improvements. Because of confidentiality issues,
 
International Journal of Industrial Engineering, 20(5-6), 401-411, 2013.

PRACTICAL DECOMPOSITION METHOD FOR T2 HOTELLING


CHART
Manuel R. Piña-Monarrez
Department of Industrial and Manufacturing Engineering
Institute of Engineering and Technology
Universidad Autónoma de Ciudad Juárez
Cd. Juárez Chih. México, C.P. 32310
Phone: (656) 688-4843, Fax: (656) 688-4813
Corresponding author’s e-mail: manuel.pina@uacj.mx

In multivariate process control, the T2 Hotelling chart has been shown to be useful for efficiently detecting a change in a system, but it is not capable of diagnosing the root causes of the change. This is because the MTY decomposition method used presents p! different possible decompositions of the T2 statistic and p·2^(p−1) terms to be estimated for each possible decomposition; so when p is large, the estimation of the terms and their diagnosis becomes too complex. In this article, by considering the inverse of the phase I covariance matrix as the standard one, a practical decomposition method based on the relations of each pair of variables is proposed. In the proposed method, only p·p different terms are estimated, and the decomposition gives each variable's contribution due to its variance and due to its covariance with each of the other (p−1) variables. Since the proposed method is a transformation of the T2 polynomial, the estimated T2 and its corresponding decomposition always hold. A detailed guide for the application of the T2 chart and a numerical application to sets of three and twelve variables are given.

Significance: Since the proposed method lets practitioners determine which variable(s) generate the out-of-control signal, and because it quantifies the proportion of the estimated T2 statistic that is due to the variance and due to the correlation, its application to multivariate process control is useful.

Keywords: Decomposition method, T2 Hotelling chart, Mahalanobis distance, Multivariate control process.

1. INTRODUCTION
Nowadays, manufacturing processes are more complex and products are multifunctional, so they have more than one quality characteristic to be controlled. For these processes, one of the most useful multivariate control charts is the T2 Hotelling chart, which is based on multivariate normal distribution theory (for details see Alvin 2002). When a multivariate control chart signals, it is necessary to identify the variable(s) that cause the out-of-control signal. For this particular purpose, the Minitab 16 software presents a decomposition method based on the MTY method proposed by Mason et al. (1995, 1997, 1999) (for details, in Minitab 16 go to Stat > Control Charts > Multivariate Charts > Tsquared–Generalized variance > Help > see also > methods and formulas > Decomposed T2 statistic). Unfortunately, since the Mahalanobis distance (MD) used in the T2 Hotelling chart is estimated as a nested process (see Section 3.1 and Piña 2011), its individual decomposition and the estimated MD do not hold (see Section 3 for details).
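As numerical background for the discussion of the MTY method, the sketch below illustrates the standard MTY identity for one chosen ordering of the variables: each conditional term equals the difference between T2 values computed on successively larger leading subsets of variables, and the terms sum to the full T2. The three-variable parameters and the observation are hypothetical; the sketch shows the classical MTY idea, not the decomposition method proposed in this paper.

import numpy as np

# Hypothetical phase I parameters (p = 3) and a single phase II observation.
mu    = np.array([5.0, 10.0, 3.0])
sigma = np.array([[1.0, 0.4, 0.2],
                  [0.4, 2.0, 0.5],
                  [0.2, 0.5, 1.5]])
x = np.array([6.2, 12.5, 2.1])

def t2(idx):
    """T2 statistic restricted to the variable subset idx."""
    if not idx:
        return 0.0
    d = x[idx] - mu[idx]
    return float(d @ np.linalg.inv(sigma[np.ix_(idx, idx)]) @ d)

# MTY terms for the ordering 1, 2, 3: each conditional term is a difference of sub-T2
# values on growing leading subsets, and the terms add up to the overall T2.
order = [0, 1, 2]
terms = [t2(order[:j + 1]) - t2(order[:j]) for j in range(len(order))]

print("conditional terms:", [f"{term:.3f}" for term in terms])
print(f"sum of terms = {sum(terms):.3f}, full T2 = {t2(order):.3f}")

Repeating this for every one of the p! orderings yields the many decompositions mentioned above, which is precisely the interpretive burden the proposed pairwise method aims to avoid.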
Other decomposition methods have been proposed. Among them, we find in the literature the methods proposed by Roy (1958), Murphy (1987), Doganaksoy et al. (1991), Hawkins (1991, 1993), Timm (1996) and Runger et al. (1996). Recently, Alvarez et al. (2007) proposed the method called the Original Space Strategy (OSS); unfortunately, as Alvarez et al. mention (p. 192), “in the approach there exist several methods to calculate the used R value, therefore the decision to choose the R value could be very subjective; consequently, certain amount of the available information could be lost by the Projection” (for details see Alvarez et al. (2007)). Li et al. (2008), with the objective of reducing the computational complexity of the MTY method, proposed a method called the causation T2 decomposition method, which integrates the causal relationships revealed by a Bayesian network with the traditional MTY approach; through theoretical analysis and simulation studies, and by comparing their method with the traditional MTY approach, they demonstrated that their proposed method substantially reduces the computational complexity and enhances diagnosability. Mason et al. (2008) presented an interesting analysis of applying the MTY method to data of phase I, in order to use it as the standard in phase II. Alfaro et al. (2009) proposed a boosting approach: by training a classification method with data of phase I and then using the trained method in phase II, they determine the variable which causes the out-of-control signal. In their study, they used data sets of 2, 3, and 4 variables, and found that their method was inconsistent for the 2-variable case, while for the 3-variable case the error was below 5% (for details see Alfaro et al. (2009)). Cedeño et al. (2012), because the MTY approach has p! different but non-independent partitions of the overall T2, proposed a decomposition method based only on the first two unconditional elements of the MTY method. Nevertheless, when the

ISSN 1943-670X © INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING


International Journal of Industrial Engineering, 20(5-6), 412-418, 2013.

COST ASSESSMENT FOR INTEGRATED LOGISTIC SUPPORT ACTIVITIES
Maria Elena Nenni
Department of Industrial Engineering, University of Naples Federico II, Italy

An Integrated Logistic Support (ILS) service has the objective of improving system availability at an optimum life cycle cost. It is usually offered to the customer by the system constructor, who becomes the Contractor Logistic Support (CLS). The aim of this paper is to develop a clear and substantial cost assessment method to support the CLS's budgetary decisions. The assessment concerns the cost element structure for ILS activities and includes an economic analysis to provide details for choosing among competing alternatives. A simple example derived from an industrial application is also provided in order to illustrate the idea.

Significance: Many documents and standards have been produced by the military about ILS, but the focus is always on performance or life cycle cost; the CLS perspective is left completely unattended. Even models from the scientific literature are not useful for supporting CLS decisions, because they seem too far from ILS or too general to be implemented effectively. The lack of specific models has become a general problem because, although the ILS service was originally developed for military purposes, it is now applied in commercial product support and customer service organizations as well. Therefore many CLSs require a deeper and wider-ranging investigation of the topic. The method developed in this paper approaches the problem from the perspective of the CLS and is specifically tailored to the main issues of an ILS service.

Keywords: Logistic Support, maintenance, cost model, lifecycle management, after-sale contract.

1. INTRODUCTION
The Integrated Logistic Support (ILS) aims at ensuring the best system capability at the lowest possible life cycle cost (DOD Directive, 1970). For this purpose the system owner builds a partnership with the Contractor Logistic Support (CLS), who implements the ILS process continuously throughout the life cycle, which is frequently very long (30 or 35 years).
The CLS usually has specific technical skills on the system, but needs to improve decision-making about costs from the early stages (Mortensen et al., 2008). The literature is not really exhaustive. Many documents and standards have been produced about ILS by the military (MIL-STD-1388/1A, 1983; Def-Stan 00-60, 2002; Army Regulation 700-127, 2007), and they do not address the CLS perspective.
Basically the CLS requires appropriate methods to optimize overall costs in the operation phase, which is the longest and most costly (Asiedu and Gu, 1998; Choi, 2009), but approaches from the scientific literature are often inadequate. Many authors have devoted themselves to developing optimization models: Kaufman (1970) provided a first original contribution on the structure of life cycle costs in general; other authors (Lapa et al., 2006; Chen and Trivedi, 2009; Woohyun and Suneung, 2009) have focused more specifically on the costs of the operation phase with the aim of optimizing preventive maintenance policies. Hatch and Badinelli (1999) instead studied how to combine two conflicting components, Life Cycle Cost (LCC) and system availability (A), in a single objective function. All these contributions only partially address the issue, and they fail to consider the problem from the perspective of the CLS. A more fitting paper is by the same author (Nenni, 2013), but it is very recent and takes only a first step on the topic, highlighting the discrepancies between the Life Cycle Management approach and cost management from the perspective of the CLS and proposing a basic cost element structure.
The aim of this paper is to develop a cost assessment method based on a clear and substantial cost element structure. Moreover, the author proposes a simulation in a real case to point out the importance of determining sensitivity to key inputs in order to find the best-value solution among competing alternatives.

2. THE COST MODEL


The CLS needs cost estimates to develop annual budget requests, to evaluate resource requirements at key decision points, and to make investment choices. A specific cost model, really fitting the issues of ILS, is the basis for the estimation. The author proposes a cost model where most of the elements are derived from the DOD Guide (2005), but the link between cost and performance is original, as are some key decision parameters (Nenni, 2013).
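As a purely illustrative companion to this idea (the element names and figures below are hypothetical assumptions, not the cost element structure of the DOD Guide or of Nenni (2013)), a cost element structure can be represented as a small tree whose annual costs are rolled up to obtain the CLS budget:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CostElement:
    """One node of a hypothetical ILS cost element structure."""
    name: str
    annual_cost: float = 0.0                       # direct cost of this element per year
    children: List["CostElement"] = field(default_factory=list)

    def rollup(self) -> float:
        """Total annual cost: own cost plus all child elements."""
        return self.annual_cost + sum(c.rollup() for c in self.children)

# Illustrative structure limited to the two areas considered in this paper
ils = CostElement("ILS service", children=[
    CostElement("Maintenance planning", children=[
        CostElement("Preventive maintenance", 120_000),
        CostElement("Corrective maintenance", 80_000),
    ]),
    CostElement("Supply support", children=[
        CostElement("Spare parts provisioning", 150_000),
        CostElement("Warehousing", 40_000),
    ]),
])
print(ils.rollup())   # annual ILS budget for this scenario: 390000.0
```

Competing alternatives can then be compared by recomputing the rollup after changing the figures that are sensitive to the key decision parameters.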
Before going through the cost model, it is necessary to describe some assumptions. The first one concerns the area in which it runs. In this paper only costs for activities in maintenance planning and supply support have been
ISSN 1943-670X  INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING
International Journal of Industrial Engineering, 20(5-6), 419-428, 2013.

AXIOMATIC DESIGN AS SUPPORT FOR DECISION-MAKING IN A DESIGN FOR MAINTENANCE IMPROVEMENT MODEL: A CASE STUDY
Jorge Pedrozo, Alfonso Aldape, Jaime Sánchez and Manuel Rodríguez
Graduate Studies & Research Division
Juarez Institute of Technology
Ave. Tecnológico 1340, Cd. Juárez, Chih. 32500 México
Corresponding author's e-mail: jpedrozo351@hotmail.com (Jorge Pedrozo)

Decision-making is one of the most critical issues in design models. The design of new maintenance methodologies that improve the levels of reliability, maintainability and availability of equipment has aroused great interest in recent years. Axiomatic Design (AD) is a design theory that provides a framework for decision-making in the design process. The objective of this paper is to present the validation of a new maintenance improvement model as an alternative model for improving the maintenance process.

Significance: The usage of the information axiom as a decision-making tool is examined; this paper presents an example describing how AD was applied to select the best maintenance model in order to meet the maintenance functional requirements.
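For orientation, the information axiom ranks designs that satisfy the independence axiom by their information content, I = log2(system range / common range), i.e. I = -log2(p) where p is the probability of satisfying a functional requirement; the design with the smallest total I is preferred. The short Python sketch below is a generic illustration of this calculation (the candidate names and probabilities are hypothetical, not the case-study data):

```python
import math

def information_content(prob_success_per_fr):
    """Information axiom: I = sum_i -log2(p_i), where p_i is the probability
    that the design satisfies functional requirement i."""
    return sum(-math.log2(p) for p in prob_success_per_fr)

# Hypothetical probabilities of meeting three maintenance FRs
# (availability target, maintainability target, reliability target).
candidates = {
    "Preventive-only model":      [0.80, 0.70, 0.75],
    "Proposed improvement model": [0.95, 0.90, 0.85],
}
for name, probs in candidates.items():
    print(name, round(information_content(probs), 3))
# The candidate with the lowest total information content is selected.
```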

Keywords: Decision Making, Axiomatic Design, Information Axiom, Maintenance, Reliability, Availability

1. INTRODUCTION
Axiomatic Design (AD) theory provides a valuable framework for guiding designers through the decision process to achieve positive results in terms of the final design object (Nordlund and Suh, 1996). Several companies have successfully used the axiomatic design methodology to develop new products, processes and even approaches. AD was born about 20 years ago and was conceived as a systematic model for engineering education and practice (Suh, 1990). It guides designers through the complex process of design; at the same time, it is regarded as one of the most difficult tools to master (Eagan et al., 2001).
Historically, maintenance has evolved over time (Moubray 1997); from the maintenance point of view, we can distinguish the "best practice" approaches applied in each period. For a better understanding of the evolution and development of maintenance from its beginnings until the present day, Moubray distinguishes three generations (see Fig. 1).
First generation: it covers the period up to the end of the Second World War. At that time industries had few machines; they were very simple, easy to repair and normally oversized. Production volumes were low, so downtime was not important. The prevention of equipment failures was not a high priority for management, only reactive or corrective maintenance was applied, and the maintenance policy was run-to-failure.
Second generation: it was born as a result of the war. More complex machinery was introduced, and unproductive time began to concern management, since new demand made the gains lost to downtime apparent; from this arose the idea that equipment failures could and must be prevented, an idea that would take the name of preventive maintenance. In addition, new maintenance control and planning systems started to be implemented, in other words, overhauls at predetermined intervals. This change of strategy made it possible not only to plan maintenance activities, but also to start controlling maintenance performance, costs and production asset availability.
Third generation: it begins in the mid-seventies, when change accelerated as a result of technological advances and new research. Mechanization and automation in industry increased, production volumes grew, downtime gained more importance owing to the cost of lost production, machinery became more complex and our dependence on it increased, quality products and services were demanded together with safety and environmental requirements, and the development of preventive maintenance was consolidated.
In recent years there has been a very important growth of new maintenance concepts and methodologies applied to maintenance management (Duran 2000). Up to the end of the 1990s, the developments reached in the third generation of maintenance included:
• Decision aid tools and new maintenance techniques.
• Design teams giving high relevance to reliability and maintainability.
• An important change in organizational thinking towards participation, teamwork and flexibility.
ISSN 1943-670X © INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING
International Journal of Industrial Engineering, 20(5-6), 429-443, 2013.
 

A STUDY OF KEY FACTORS IN THE INTRODUCTION OF RFID INTO SUPPLY CHAINS THROUGH THE ADAPTIVE STRUCTURATION THEORY
Mei Ying Wu,
Department of Information Management,
Chung-Hua University, 707, Sec.2, WuFu Road, Hsinchu 300.
Chun Wei Ku,
Department of Information Management, Chung-Hua University
Taiwan, Province of China

Since 2003, Radio Frequency Identification System (RFID) technology has gained importance and has been widely applied. Numerous statistics indicate high potential for RFID development in the near future. This study focuses on
the issues derived from RFID technology and explores the impact of its introduction into supply chains. Based on the
framework of the Adaptive Structuration Theory (AST), a questionnaire is designed for collecting research data, and
Structural Equation Modeling (SEM) is adopted in order to identify the relationships among research constructs.
The research findings indicate that technological features, promoters, and group cooperation systems of RFID have
significant effects on the supply chain operation structure and indirectly influence factors of RFID introduction. It is
evident from this study’s results that certain factors of RFID and a good supply chain operation structure have positive
effects on the introduction of RFID into supply chains.

Keywords: Radio Frequency Identification System, Adaptive Structuration Theory, Structural Equation Modeling,
Supply Chain Operation Structure, Introduction of RFID.

1. INTRODUCTION

The objective of this study is to investigate the effects of the introduction of RFID into supply chains and the
interactions between upstream and downstream firms. This objective is similar to that of the Adaptive Structuration
Theory (AST) proposed by DeSanctis and Poole (1994). AST was developed in order to examine the interactions
among groups and organizations using information technology (IT). Thus, based on the AST framework, Structural Equation Modelling (SEM) is adopted in order to analyse the relationships among the research constructs.
This study focuses on firms in a supply chain from upstream to downstream, whose businesses encompass
manufacturing, logistics, warehousing, retailing, and selling. The firms are selected from the list of Top 500
Manufacturers released by Business Weekly; members of the Taiwan Association of Logistics Management; and the
database of publicly-listed manufacturers, logistics firms, and retailers owned by the Department of Commerce in the
Ministry of Economic Affairs. The results are expected to serve as a reference for enterprises that are planning or
preparing for RFID introduction.

2. LITERATURE REVIEW

2.1 Introduction to RFID

RFID was created in an attempt to replace the widely used barcode technology. Thus far, it has garnered much attention
and has been extensively applied. Capable of wirelessly reading a large amount of data of various types, RFID can be
used to create an information system that can easily identify an object and extract its attributes. Table 1 presents the
features of RFID technology that have been mentioned in previous studies.

ISSN 1943-670X © INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING


 
 
International Journal of Industrial Engineering, 20(5-6), 444-452, 2013.

ORDER PROCESS FLOW OF MASS CUSTOMIZATION BASED ON SIMILARITY EVALUATION
Xuanguo XU, Zhongmei LIANG
Department of Economics & Management
Jiangsu University of Science and Technology
Mengxi street No.2
Zhenjiang, China 212003
Email: Xuanguo XU, seawaterxxg@163.com

The main result presented in this paper is an order process flow for mass customization based on similarity evaluation. The order process flow is put forward with a view to meeting customers' specific needs as well as reducing difficulties in subsequent production. As the basis of this order process flow, we suppose that all the orders in the accepted order pool have been confirmed as profitable by the market. A similarity evaluation method is put forward which includes the following steps: determine whether an order is acceptable or not; put profitable orders into the accepted pool; for those that are not profitable, negotiate with the customer to determine which pool they belong to; analyse order similarity with a system clustering method; arrange batch production for orders with high similarity; arrange completely customized production for specific orders; and arrange order insertion for orders that have little similarity but can be inserted into the scheduled plan. At the end of this paper, a case study of a Chinese air conditioning company is presented to illustrate the application of the process flow and the similarity evaluation method.
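The clustering step referred to above can be illustrated with a short Python sketch using hierarchical (agglomerative) clustering, a common reading of a "system clustering method"; the order attribute vectors and the distance threshold below are hypothetical assumptions, not data from the paper:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical accepted-order pool: each row encodes one order's
# customization attributes (e.g. capacity, voltage option, casing type).
orders = np.array([
    [1.0, 0.0, 2.0],
    [1.1, 0.0, 2.0],
    [3.0, 1.0, 0.0],
    [3.2, 1.0, 0.0],
    [0.0, 2.0, 5.0],   # an outlier with little similarity to the others
])

Z = linkage(pdist(orders, metric="euclidean"), method="average")
groups = fcluster(Z, t=1.0, criterion="distance")   # cut the dendrogram at distance 1.0
print(groups)
# Orders sharing a group label are candidates for batch production; singleton
# groups go to fully customized production or to order insertion.
```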

Significance: Order acceptance has mostly been studied in mass production. This paper discusses how to process orders in mass customization after they have been accepted, so as to make more profit.

Keywords: Order process, Mass customization, System clustering, Similarity evaluation

1. INTRODUCTION
Since the 1990s, customer requirements have become increasingly diversified and individualized. Manufacturing enterprises are gradually shifting their production mode from traditional mass production to customization in order to survive in severe competition. In recent years especially, owing to demand diversification, orders have shown increasingly obvious personalized characteristics, and customized production has become very popular in manufacturing. Production orders can be customized as needed to provide customers with personalized products and services. On the other hand, complete customization means high expense, long delivery times, low productivity and low capacity utilization (Shao and Ji, 2008). In this context, mass customization (MC) came into being.
As customers' requirements differ from each other, manufacturing enterprises must analyze customers' requirements and adopt specific procedures according to the different customization requests and customization degrees. Order acceptance is a critical decision-making problem at the interface between customer relationship management and production planning of order-driven manufacturing systems in MC. To solve this problem, the key issue is order selection to obtain maximum profit through capacity management. Over the past decade the strategic importance of order acceptance has been widely recognized in practice as well as in academic research on mass production (MP) and MTO (make-to-order) systems. Some papers have discussed order acceptance decisions when capacity is limited and late delivery is penalized (Slotnick et al., 2007). Other papers use different algorithms to solve order acceptance problems (Rom et al., 2008). Simultaneous order acceptance and scheduling decisions were examined in a single machine environment (Oguz et al., 2010). All these papers study order properties such as release dates, due dates, processing times, setup times and revenues, and offer a trade-off between earnings and cost. This strategy is more suitable for accepting orders separately in inventory-driven or order-driven production (such as make-to-stock and make-to-order), where only profits and capacity need to be considered. Therefore, in make-to-stock (MTS) mode, the problem is to consider earnings and profits as the condition for accepting or rejecting orders, since the subsequent production follows high-volume production to complete the final product as quickly as possible.
Unlike the MTS mode, which holds finished products in stock as a buffer against demand variability, MC production systems must hold production capacity and work-in-process (WIP) inventories and accept only orders of the most profitable type. Generally speaking, from the enterprise's point of view, customers' requests can be divided into three categories: standard parts, simple customization and special customization. Standard parts are the commonly used accessories in the customized product. Simple customization can further be divided into customization based on parameters and customization based on configurations. If customers' needs for customization are beyond the scope of simple customization, such as changing the product's shape dramatically or adding some functions which are not

ISSN 1943-670X ©INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING


International Journal of Industrial Engineering, 20(7-8), 453-467, 2013

MEAN SHIFTS DIAGNOSIS AND IDENTIFICATION IN BIVARIATE PROCESS USING LS-SVM BASED PATTERN RECOGNITION MODEL

Cheng Zhi-Qiang1, Ma Yi-Zhong1, Bu Jing2, Song Hua-Ming1
1 Department of Management Science and Engineering, Nanjing University of Science and Technology, Nanjing Jiangsu, 210094, P.R. China. czqme@163.com, yzma-2004@163.com, songhuaming@sohu.com
2 Automation Institute, Nanjing University of Science and Technology, Nanjing Jiangsu, 210094, P.R. China. bujing30@foxmail.com

This study develops a least squares support vector machine (LS-SVM) based model for bivariate processes to diagnose abnormal patterns of the process mean vector and to help identify the abnormal variable(s) when Shewhart-type multivariate control charts based on Hotelling's T2 are used. On the basis of studying and defining the normal/abnormal patterns of bivariate process mean shifts, an LS-SVM pattern recognizer is constructed in this model to identify the abnormal variable(s). The model in this study can be a strong supplement to Shewhart-type multivariate control charts. Furthermore, the LS-SVM techniques introduced in this research can meet the requirements of process abnormality diagnosis and cause identification under small sample sizes. An industrial case application of the proposed model is provided, and its performance was evaluated by computing the classification accuracy of the LS-SVM pattern recognizer. Results from simulation case studies indicate that the proposed model is a successful method for identifying the abnormal variable(s) of process mean shifts and provides excellent abnormal pattern recognition performance. Although the proposed model is used for the particular application of identifying the abnormal variable(s) of bivariate process mean shifts, the model and methodology can potentially be applied to multivariate SPC in general.
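For readers unfamiliar with LS-SVM, the following minimal NumPy sketch shows one common dual formulation of an LS-SVM classifier (solving a single linear system instead of a quadratic program); the shift-pattern training data below are synthetic assumptions, not the industrial case of this paper:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """One common LS-SVM dual formulation:
    solve [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                       # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)

# Synthetic labels: +1 = "mean shift in variable 1", -1 = "mean shift in variable 2"
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([2.0, 0.0], 0.5, size=(50, 2)),
               rng.normal([0.0, 2.0], 0.5, size=(50, 2))])
y = np.r_[np.ones(50), -np.ones(50)]
b, alpha = lssvm_train(X, y)
print(lssvm_predict(X, b, alpha, np.array([[2.1, 0.1], [0.2, 1.9]])))  # expected: [ 1. -1.]
```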

Key words: multivariate statistical process control; least squares support vector machines; pattern recognition;
quality diagnosis; bivariate process

1. INTRODUCTION

In many industries, complex product manufacturing in particular, statistical process control (SPC) [1] is a widely used quality diagnosis tool, applied to monitor process abnormalities and minimize process variation. According to Shewhart's SPC theory, there are two kinds of process variation: common cause variations and special cause variations. Common cause variations are considered to be induced by the inherent nature of the normal process. Special cause variations are defined as abnormal variations of the process, which are induced by assignable causes. Traditional univariate SPC control charts are the most widely used tools to reveal abnormal variations of the monitored process. Abnormal variations should be identified and signaled as soon as possible so that quality practitioners can eliminate them in time and bring the abnormal process back to its normal state.
In many cases, the manufacturing process of complex products may have more than two correlated quality characteristics, and a suitable method is needed to monitor and identify all these characteristics simultaneously. For the purpose of monitoring a multivariate process, a natural solution is to maintain a univariate chart for each of the process characteristics separately. However, this method could result in a higher false alarm rate when the process characteristics are highly correlated [2] (Loredo, 2002). This situation has brought about the extensive research performed in the field of multivariate quality control since the 1940s, when Hotelling introduced the


Corresponding author of this paper. E-mail address: yzma-2004@163.com, czqme@163.com

ISSN 1943-670X  INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING


International Journal of Industrial Engineering, 20(7-8), 468-486, 2013

PARALLEL KANBAN-CONWIP SYSTEM FOR BATCH PRODUCTION IN ELECTRONICS ASSEMBLY

Mei Yong Chong, Joshua Prakash, Suat Ling Ng, Razlina Ramli, Jeng Feng Chin
Universiti Sains Malaysia,Malaysia
School of Mechanical Engineering, Universiti Sains Malaysia (USM), Engineering Campus

This paper describes a novel pull system based on value stream mapping (VSM) in an electronics assembly plant. Production in the plant can be characterized as multi-stage, high-mix, batch, unbalanced, and asynchronous. The novelty of the system lies in two kanban systems working simultaneously: a standard lot size kanban system for high-demand products (high runners) and a variable lot size constant work-in-process (ConWIP) system for low-demand products (low runners). The pull system is verified through computer simulation and discussions with production personnel. Several benefits are achieved, including level scheduling and a significant reduction in the work-in-process (WIP) level. Production flows are regulated through a pacemaker process by varying the standard lot size and the number of ConWIP kanbans. The available interval time can be utilized for other non-kanban-driven parts and routine maintenance (5S). Only a moderate decline in production output is seen, compared with the target, because of the increase in overall set-up time that accompanies small lot size production.

Keywords: Pull system, kanban system, ConWIP system, batch production

1. INTRODUCTION
The philosophy of lean manufacturing originated from the Toyota Production System (TPS) and was envisioned by Taiichi
Ohno and Eiji Toyoda (Liker, 2004). This practice considers the deployment of resources only for activities that add value
from the perspective of end customers. Other activities that depart from this intention are viewed as wasteful and should be a
target for total elimination. Taiichi Ohno identified seven forms of waste: overproduction, queue, transportation, inventory,
motion, over-processing, and defective product (Heizer and Render, 2008). Ultimately, the production must be a continuous
single flow throughout the shop floor, driven by customer demand.
However, this ultimate objective would take years to realize. Moreover, the service work applied to work-in-process (WIP), even if considered waste, is still needed. The main function of WIP is to decouple parts among machines running at different capacities, set-up times, and failure rates. An excessive amount of WIP prolongs lead time, whereas an insufficient amount of WIP results in the occasional starving and blocking of machines during production (Hopp and Spearman, 2000; Silver et al., 1998). Thus, the pertinent question is how to maintain the minimum amount of WIP in the manufacturing system. One way is to move WIP only when needed, rather than pushing it onto the next machine. This is the essence of the pull system. Specifically, a preceding machine produces parts only after receiving a request from its succeeding machine for the immediate replacement of items removed from the stock. Therefore, the flow of information is in the opposite direction to the material flow (Bonney et al., 1999). Lean practitioners often use a kanban (card) to signal the production (authorization) of the next container of material (Gaury et al., 2000).
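A common back-of-the-envelope way to size such a card loop, given here only as a textbook illustration (the demand, lead time and container figures are hypothetical and not taken from the case study), is N = D x L x (1 + safety) / C:

```python
import math

def kanban_count(demand_per_hour, replenishment_lead_time_h,
                 container_size, safety_factor=0.10):
    """Textbook kanban sizing: N = D * L * (1 + alpha) / C, rounded up."""
    return math.ceil(demand_per_hour * replenishment_lead_time_h
                     * (1.0 + safety_factor) / container_size)

# Hypothetical high-runner part: 120 units/h demand, 1.5 h replenishment
# loop, containers of 40 units, 10% safety allowance.
print(kanban_count(120, 1.5, 40))   # -> 5 circulating kanbans
```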
A review of the literature (Berkley, 1992; Lage Junior and Godinho Filho, 2010) reveals at least 20–30 existing kanban systems, all of which differ in terms of the medium used, lot sizes, and transfer mechanism. Hybrid systems involving multiple types of kanbans have also been established and studied. These developments have led to the belief that future kanban
systems will be increasingly complex. Some systems, such as those by Takahashi and Nakamura (1999), require the aid of
software for real-time adjustments. Other researchers (Markham et al., 2000; Zhang et al., 2005; Moattar Husseini et al.,
2006) have ventured into creating optimum kanban settings using advanced computer techniques, such as artificial
intelligence.
In this paper, we offer a new hybrid system based on a value stream mapping (VSM) exercise. The system is a combination
of two well-known techniques: kanban and ConWIP. To the best of our knowledge, even though the mechanism employed is
simple and naturally fits into the production under study, this system has yet to be proposed elsewhere. The system is also
sufficiently generic to warrant wider applications, especially as the production setting and problems faced in the case study
are not unique.
The paper begins with an introduction to the pull system and its various types. Afterwards, VSM is introduced as the main methodology, and the sequence of its implementation leading to the final value stream map is presented. The description of
the proposed system is then given. Finally, the setup of computer simulation and discussions on the results obtained are
provided.

ISSN 1943-670X  INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING


International Journal of Industrial Engineering, 20(7-8), 487-501, 2013

LEAN INCIPIENCE SPIRAL MODEL FOR SMALL AND MEDIUM ENTERPRISES

Mei Yong Chong, Jeng Feng Chin, Wei Ping Loh


Universiti Sains Malaysia
Malaysia
School of Mechanical Engineering, Universiti Sains Malaysia (USM), Engineering Campus, 14300 Nibong Tebal,
Penang, Malaysia

Small and medium enterprises (SMEs) support a balanced local economy by providing job opportunities and industry diversity. However, weak management practices result in suboptimal operations and productivity in SMEs, and few generic lean models have been conceived with SMEs in mind. In this light, a lean transformation framework for SMEs is conceived. The model is known as the lean incipience spiral model (LISM). It aims to effectively introduce lean concepts and later to facilitate sustainable transformation in SMEs that have limited prior exposure to these concepts. The model builds upon a steady and parsimonious diffusion of lean practices. A progressive implementation is promoted through a spiral life cycle model, where each cycle undergoes four phases. The lean transformation is guided by value stream mapping and a commercial lean assessment tool. Finally, the model was implemented in a suitable case study.

Keywords: Lean manufacturing, lean enterprise, small and medium enterprises, value stream mapping

1. INTRODUCTION
In general, small and medium enterprises (SMEs) are business enterprises operating with minimal resources for a
small market. However, the actual definition tends to vary among countries and is subject to constant revision. In
Malaysia, a manufacturing company with less than 150 employees or less than RM 25 million sales turnover is
categorized as an SME (Small and Medium Industries Development Corporation, 2011).
SMEs are acknowledged as key contributors to the development of the economy, increase in job opportunities, and
general health and welfare of global economies. SMEs provide more than half of the employment and value-added
services in several emerging countries; their impact is bigger in developed countries. Nevertheless, SMEs always
fall behind large enterprises in terms of gross domestic product (GDP). For example, a 2006 report in Kaloo (2010)
showed that although SMEs accounted for 99.2% of the total establishment in Malaysia, with 65.1% of the total
workforce, they only generated 47.9% of the GDP.
Kaloo (2010) reasoned that large enterprises have a size advantage and are able to acquire mass production
technologies to reduce production cost. The required setup costs and relevant technologies may be financially
infeasible or unavailable for SMEs. With thinner profit margins, SMEs are also more vulnerable to financial losses than large enterprises.
SMEs also face fierce competition due to limited marketing channels and small niche market shares (Analoui and
Karami, 2003). Most SMEs heavily rely on general machines that are shared by a high variety of products. To
maximize machine utilization, batch production is adopted with ad-hoc scheduling. Inefficient management
practices further add to the variability of production forms. This entails long production lead times, impeding rapid
response to customer demand.
SMEs are seed beds for future large enterprises; eventually, an SME needs to grow into a large enterprise. The inability of an SME to capitalize on the benefits of knowledge and experience accumulated over long periods of time may be a sign that something is amiss. On this premise, the constant upgrading of operations is vital. Best practices have to be introduced and adopted for SMEs to achieve high performance in operational areas, in line with their stage of expansion. Unfortunately, case studies by Davies and Kochhar (2000) found a predominance of poor practices and a high rate of fire-fighting in SMEs. The mixed implementation of best practices also did not appear to be the result of a structured selection process, and a poor understanding of the relationship between practices and the effects of implementing them was ascertained. With protecting capital investment as the top priority, SMEs largely adopt a conservative, follower mindset that prefers short-term benefits.

ISSN 1943-670X  INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING


International Journal of Industrial Engineering, 20(7-8), 502-514, 2013

DESIGN AND IMPLEMENTATION OF LEAN FACILITY LAYOUT SYSTEM OF A PRODUCTION LINE

Zhenyuan Jia, Xiaohong LU, Wei Wang, Defeng Jia

To address the problem, very common in Chinese manufacturing workshops, that an unreasonable facility layout of a production line directly or indirectly leads to low production efficiency, a lean facility layout system for production lines is designed and developed. By analyzing the factors that influence facility layout, the optimization objectives and constraint conditions of facility layout were summarized. A functional model and a design structure model of the lean layout system are built. Based on in-depth analysis of the mathematical model describing the optimization of the facility layout of a production line, a prototype lean facility layout system is developed. The results of applying the system to a cylinder liner production line show that the designed lean facility layout system can effectively enhance productivity and increase equipment utilization.

Key words: production line, lean, facility layout, model, design

1. INTRODUCTION

Because it is very common in Chinese manufacturing workshops that an unreasonable facility layout of a production line directly or indirectly leads to low production efficiency, research on the facility layout of production lines has always been a key research area of the industrial engineering domain (Sahin and Türkbey, 2009; Zhang et al., 2009; Diego-Mas et al., 2009; Raman et al., 2009). The facility layout form of a production line depends on the type of enterprise and the form of production organization (Khilwani et al., 2008; Amaral, 2008). Facility layout types are divided into technological (process) layout, product layout, fixed-station layout (Suo and Liu, 2007), chain distribution (Sun, 2005), particular layouts combined with the actual situation (Cao et al., 2005), and so on.
Traditional qualitative methods of facility layout mainly include the modeling method, sand table method, drawing method and graphic illustration method, etc. (Qu and Mu, 2007); these methods rely mainly on personal experience and lack a scientific basis. When there are many production units, the relationships between the facilities become more complex and qualitative layout methods are often unable to meet the requirements of the workshop; thus quantitative layout technologies have emerged. The quantitative layout methods mainly include the process flow diagram method, the from-to table method, the work unit relationship method and the SLP method (advanced by Richard Muther), etc. (Zhu and Zhu, 2004). The SLP method provides a layout planning approach that takes the logistics and non-logistics relationship analyses of the production units as the main line of planning, and it is the most typical systematic layout planning method (Muther, 1988). In recent years, with the improvement of computer performance and the development of digital analysis methods, the computer-aided system layout planning (CASLP) method has appeared, applying the computer and its related technologies to the SLP method (Chang and Ma, 2008). The CASLP method not only greatly speeds up the layout planning process, but also provides a simulated display of the layout scheme based on advanced human-machine interaction and computer-aided drawing functions.
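Quantitative evaluation in the spirit of the from-to table method typically scores a candidate layout by the total material handling effort, i.e. the sum over machine pairs of flow multiplied by travel distance. The short Python sketch below is a generic illustration of that score (the flow matrix, coordinates and layouts are hypothetical, not the cylinder liner line of this paper):

```python
import numpy as np

def handling_cost(flow, coords, layout):
    """Total flow * rectilinear distance for a layout, where layout[i]
    gives the location index assigned to machine i."""
    cost = 0.0
    n = len(layout)
    for i in range(n):
        for j in range(n):
            d = (abs(coords[layout[i]][0] - coords[layout[j]][0])
                 + abs(coords[layout[i]][1] - coords[layout[j]][1]))
            cost += flow[i][j] * d
    return cost

# Hypothetical from-to flow matrix (parts/day) for three machines
flow = np.array([[0, 30, 5],
                 [0,  0, 25],
                 [10, 0,  0]])
# Candidate location coordinates (metres) on the shop floor
coords = [(0, 0), (10, 0), (10, 8)]

print(handling_cost(flow, coords, [0, 1, 2]))   # layout A: 770.0
print(handling_cost(flow, coords, [0, 2, 1]))   # layout B (machines 2 and 3 swapped): 890.0
```

The layout with the lower score is preferred; a layout optimization system searches over such assignments under the stated constraints.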

ISSN 1943-670X  INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING


International Journal of Industrial Engineering, 20(7-8), 515-525, 2013

A CRITICAL PATH METHOD APPROACH TO A GREEN PLATFORM SUPPLY VESSEL HULL CONSTRUCTION
Eda TURANa, Mesut GÜNERb
a Department of Naval Architecture and Marine Engineering, Yildiz Technical University, 34349 Besiktas, Istanbul, Turkey. E-mail: edaturan@yildiz.edu.tr Phone: +902123833156 Fax: +902122364165
b Department of Naval Architecture and Marine Engineering, Yildiz Technical University, 34349 Besiktas, Istanbul, Turkey. E-mail: guner@yildiz.edu.tr Phone: +902123832859 Fax: +902122364165

This study develops a critical path method approach for the first Green Platform Supply Vessel hull constructed in Turkey. The vessel was constructed and partly outfitted in a Turkish shipyard and delivered to Norway. The project management of the vessel was conducted using the Critical Path Method (CPM), and the critical paths during the construction and partial outfitting period of this sophisticated vessel are presented. Additionally, the precautions taken to prevent delay of the project are discussed.
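For context, the CPM computation itself is a forward and backward pass over the activity network; the following Python sketch uses a small hypothetical set of hull construction activities and durations (not the actual schedule reported in this paper):

```python
# Standard CPM forward/backward pass on a hypothetical activity network.
activities = {               # name: (duration in days, list of predecessors)
    "keel_laying": (10, []),
    "block_A":     (25, ["keel_laying"]),
    "block_B":     (30, ["keel_laying"]),
    "erection":    (20, ["block_A", "block_B"]),
    "outfitting":  (15, ["erection"]),
}

es, ef = {}, {}                                   # earliest start / finish
for a, (dur, preds) in activities.items():        # insertion order respects precedence here
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + dur

project_end = max(ef.values())
ls, lf = {}, {}                                   # latest start / finish
for a in reversed(list(activities)):
    dur, _ = activities[a]
    succ = [s for s, (_, ps) in activities.items() if a in ps]
    lf[a] = min((ls[s] for s in succ), default=project_end)
    ls[a] = lf[a] - dur

critical = [a for a in activities if es[a] == ls[a]]   # zero total float
print(project_end, critical)   # 75 ['keel_laying', 'block_B', 'erection', 'outfitting']
```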

Keywords: Project Management, Production, Critical Path Method (CPM), Green Vessel, Platform Supply Vessel.

1. INTRODUCTION
A Platform Supply Vessel (PSV) carries various types of cargo, such as chemicals, water, diesel oil, fuel oil, mud, brine, etc., between platforms and ports. She supplies the requirements of the platforms during operations and brings waste back to port.
Platform supply vessels are divided into three groups, small-sized, medium-sized and large-sized, according to their deadweight tonnage: vessels with a capacity of less than 1500 DWT are small-sized, those between 1500 DWT and 4000 DWT are medium-sized, and those above 4000 DWT are large-sized.
The vessel in this paper is a large-sized platform supply vessel with a capacity of 5500 DWT. This type of construction is a first for Turkish shipyards. The vessel is the first and biggest merchant ship using a fuel cell to produce power on board. The length of the vessel is 92.2 meters and the beam is 21 meters. After completion of the hull construction and partial outfitting in a Turkish shipyard, the vessel was delivered to Norway; the remaining work was completed in a Norwegian shipyard. The vessel operates in the North Sea.
The vessel uses not only heavy fuel oil or diesel but also liquefied natural gas engines and a fuel cell; this is what distinguishes her from other merchant vessels. SOx, NOx and CO2 emissions are reduced by the combination of gas engines and the fuel cell on board.
The construction of platform supply vessels is more difficult and complicated than that of the vessels Turkish shipyards are experienced in building, such as chemical carriers and cargo vessels. These vessels are shorter than conventional cargo vessels; however, since their steel weights are greater than those of conventional cargo vessels, shipyards are very keen to build them. Nowadays, for the above reasons, these vessels have also become the most demanded vessels for construction.
Ship production is project-type production; therefore project management is a vital factor in the construction of a vessel. The most common planning type in Turkish shipyards is block planning. In this approach, the ship is divided into blocks of various sizes before construction commences. Blocks are first constructed separately and are then erected on the slipway after completion (Odabasi, 1996). Since the block weights are determined according to the crane capacities of the shipyards, they may vary from one shipyard to another.
There are various processes during a shipbuilding project. In order to complete the ship construction profitably and on time, information, material, workmanship and workflows should be kept under control in a way appropriate to the shipyard (Turan, 2008).
The material flow is also significant for the delivery of vessels on time. Required materials should be available at the needed time and location; delays in material supply may slow down production or even stop it (Acar, 1999).
In the literature, Yang and Chen (2000) performed a study to determine the critical path in an activity network. Lu et al. (2008) deal with resource-constrained critical path analysis in the construction field. Duan and Liao (2010) evaluated an improved ant colony optimization for determining project critical paths. Guerriero and Talarico (2010),
ISSN 1943-670X  INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING
International Journal of Industrial Engineering, 20(9-10), 526-533, 2013

RECENT DIRECTIONS IN PRODUCTION AND OPERATION MANAGEMENT: A SURVEY
Vladimir Modrak1 and Ion Constantin Dima2
1 Technical University of Kosice, Bayerova, Nr. 1, Presov, Slovakia; e-mail: vladimir.modrak@tuke.sk
2 Valahia University of Targoviste, B-dul Regele Carol I, Nr. 2, Targoviste, Romania; e-mail: dima.ioan_constantin@yahoo.com

Although overviews of the overall historical development of a given field of knowledge are useful, this survey treats the modern era of operations management. The work starts by describing the development and current position of operations management in the production sector. Subsequently, the decisive development features of operations management are articulated and analyzed. Finally, the opportunities and challenges of modern operations management for practitioners are discussed.

Keywords: strategic management, operations strategy, organizational change, innovation

1. INTRODUCTION

Operations management (often called production management) may be defined in different ways depending on the angle of view. Since this discipline is a field of management, it focuses on carefully managing processes to produce and distribute products faster, better, and cheaper than competitors. Operations Management (OM) concerns practically all operations within the organization, and the objectives of its activities focus on the efficiency and effectiveness of processes. The modern history of production and operations management began in the 1950s with the extensive development of operations research tools such as waiting line theory, decision theory, mathematical programming, scheduling techniques and other theories. However, the material covered in higher education was quite fragmented, without the umbrella of what is now called production and operations management (POM). Subsequently, the first publications, 'Analysis of Production Management' by Bowman and Fetter (1957) and 'Modern Production Management' by Elwood Buffa (1961), represented an important transition from industrial engineering to operations management. Operations management finally appears to be gaining a position as a respected academic discipline. OM as a discipline went through its own evolution, which has been comprehensively characterized by Chase and Aquilano (1989). Thus, this may be a good time to update the evolution of the field. To achieve this goal, the major publications/citations in this field and their evolving research utility over the decades are identified in this paper.

2. OPERATION MANAGEMENT IN THE CONTEMPORARY ERA

The process of building operations management theory and defining its scope has been treated by a number of authors. As mentioned above, the modern era of POM is closely connected with the history of industrial engineering (IE). The development of the IE discipline has been greatly influenced by the impact of operations research (Turner et al., 1993). Operations research (OR) was originally aimed at solving difficult war-related problems through the use of mathematics and other scientific branches. The diffusion of new mathematical models, statistics and algorithms to aid decision-making had a dramatic impact on industrial engineering development. Major industrial companies established operations research groups to help solve their problems. In the 1960s, expectations of OR were extremely high, and as Luss and Rosenwein (1997) commented, "over the years it often appeared that the mathematics of OR became the goal rather the means to support solving real problems". As a result, OR groups in companies were transferred to traditional organizational units within companies. As a reaction to this disappointment, Corbett and Van Wassenhove (1993) classified OR specialists into three classes: theoreticians; management consultants, who focus on using available methods to solve practical problems; and the "in-between" specialists called operations engineers, who adapt and enhance methods and approaches in order to solve practical problems. The term "operations engineers" was formulated for lack of a better term; accordingly, the group could also be called operations managers, and the field conducting applied research that
ISSN 1943-670X  INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING
International Journal of Industrial Engineering, 20(9-10), 534-547, 2013

A CASE STUDY OF APPLYING FUZZY DEMATEL METHOD TO EVALUATE PERFORMANCE CRITERIA OF EMPLOYMENT SERVICE OUTREACH PROGRAM

Jiunn-I Shieh 1, Hsuan-Kai Chen 2 (Corresponding Author), Hsin-Hung Wu 3


1 Department of Information Science and Applications, Asia University, No. 500, Lioufeng Rd., Wufeng, Taichung County, Taiwan 41354. E-mail: jishieh@yahoo.com.tw
2 Department of Marketing and Logistics Management, Chaoyang University of Technology, No. 168, Jifong E. Rd., Wufeng, Taichung County 41349, Taiwan. E-mail: lisa123@ms18.hinet.net
3 Department of Business Administration, National Changhua University of Education, No. 2 Shida Road, Changhua City, Taiwan 500. E-mail: hhwu@cc.ncue.edu.tw

The economic and financial crisis has led to a deterioration of the employment market in Taiwan. The Bureau of Employment and Vocational Training, Council of Labor Affairs of the Executive Yuan, has been aggressively conducting the Employment Service Outreach Program to resolve this tough issue. Under this program, outreach personnel are recruited, trained, and supervised to perform duties including identifying unemployed persons and providing them with job information, using social resource links to increase employment opportunities, conducting employer forums or workshops for job-seekers, and so on. This study applies the fuzzy decision-making trial and evaluation laboratory (DEMATEL) method not only to evaluate the importance of the criteria but also to construct the causal relationships among the criteria for evaluating outreach personnel. The results show that job-seeking service is the most critical of the three first-tier criteria. In addition, identification of the number of unemployed people and the number of follow-up visits are the two most important causes under the category of job-seeking service when the performance of outreach personnel in the Employment Service Outreach Program is evaluated.
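The crisp core of the DEMATEL calculation mentioned above is summarized in the sketch below (the fuzzy variant aggregates triangular fuzzy judgments and defuzzifies them before this step); the 3x3 direct-influence matrix is hypothetical, not the study's survey data:

```python
import numpy as np

def dematel(direct):
    """Crisp DEMATEL core: normalize the averaged direct-influence matrix,
    compute the total-relation matrix T = N (I - N)^-1, and return the
    prominence (r + c) and relation (r - c) vectors."""
    D = np.asarray(direct, dtype=float)
    N = D / D.sum(axis=1).max()                   # normalize by the largest row sum
    T = N @ np.linalg.inv(np.eye(len(D)) - N)     # total-relation matrix
    r, c = T.sum(axis=1), T.sum(axis=0)
    return T, r + c, r - c

# Hypothetical direct-influence matrix among three first-tier criteria.
direct = [[0, 3, 2],
          [1, 0, 2],
          [1, 1, 0]]
T, prominence, relation = dematel(direct)
print(prominence)   # higher value -> more important criterion
print(relation)     # positive -> net cause; negative -> net effect
```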

Keywords: Employment service outreach program, Outreach personnel, Fuzzy theory, Fuzzy DEMATEL

1. INTRODUCTION

In early 2008, the unemployment rate in Taiwan was 3.80%. Because of the economic and financial crisis, the average unemployment rate in 2009 increased to 5.85%. Further, the highest unemployment rate occurred in August 2009, at 6.13%, representing 672 thousand unemployed persons. As a result, reducing the unemployment rate has become a tough issue faced by the government. In order to ease the negative impact of unemployment, the Bureau of Employment and Vocational Training, Council of Labor Affairs of the Executive Yuan, has been aggressively conducting the Employment Service Outreach Program.
This program, performed by outreach personnel, consists of identifying unemployed persons and then

ISSN 1943-670X  INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING


International Journal of Industrial Engineering, 20(9-10), 548-561, 2013

UTILIZING SIGN LANGUAGE GESTURES FOR GESTURE-BASED INTERACTION: A USABILITY EVALUATION STUDY

Minseok Son 1, Woojin Park* 1, Jaemoon Jung 1, Dongwook Hwang 1 and Jungmin Park 2
1 Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, Korea 151-744
2 Korea Institute of Science and Technology, 5 Hwarang-ro, Seongbuk-gu, Seoul, Korea 136-791
*Woojin Park is the corresponding author of the paper

Utilizing gestures of major sign languages (signs) for gesture-based interaction seems to be an appealing idea as it
has some obvious advantages, including: reduced time and cost for gesture vocabulary design, immediate
accommodation of existing sign language users and supporting universal design and equality by design. However, it
is not well understood whether or not sign language gestures are indeed adequate for gesture-based interaction,
especially in terms of usability. As an initial effort to enhance our understanding of the usability of sign language
gestures, the current study evaluated Korean Sign Language (KSL) gestures employing three usability criteria:
intuitiveness, preference and physical stress. A set of 18 commands for manipulating objects in virtual worlds was
determined. Then, gestures for the commands were designed using two design methods: the sign language method
and the user design method. The sign language method consisted of simply identifying the KSL gestures
corresponding to the commands. The user design method involved having user representatives freely design
gestures for the commands. A group of evaluators evaluated the resulting sign language and user-designed gestures
in intuitiveness and preference through subjective ratings. Physical stresses of the gestures were quantified using an
index developed based on Rapid Upper Limb Assessment. The usability scores of the KSL gestures were compared
with those of the user-designed gestures for relative evaluation. Data analyses indicated that overall, the use of the
KSL gestures cannot be regarded as an excellent design strategy when viewed strictly from a usability standpoint,
and the user-design approach would likely produce more usable gestures than the sign language approach if design
optimization is performed using a large set of user-designed gestures. Based on the study findings, some gesture
vocabulary design strategies utilizing sign language gestures are discussed. The study findings may inform future
gesture vocabulary design efforts.

Keywords: sign language, gesture, gesture-based interaction, gesture vocabulary, usability

1. INTRODUCTION
Gesture-based interaction has been actively researched in the human computer interaction (HCI) community as it
has a potential to improve human-machine interaction (HMI) in various circumstances (Nielsen et al., 2003; Cabral
et al., 2005; Bhuiyan et al., 2009; Wachs et al., 2011; Choi et al., 2012). Compared with other modalities of
interaction, the use of gestures has many distinct advantages: first, gestures are the most basic means of human-to-
human communication along with speech, and thus, may be useful for realizing natural, intuitive and comfortable
interaction (Baudel and Beaudouin-Lafon, 1993). Second, human gestures are rich in expressions and can convey
many different meanings and concepts as can be seen in the existing sign languages’ extensive gesture vocabularies.
Third, gesture-based interaction can be utilized in situations where the use of other interaction methods is
inadequate. For example, covert military operations in battle fields would preclude the use of voice-based or
keyboard and mouse-based interaction. Fourth, the use of touchless gestures would be ideal in environments that
require absolute sanitation, such as operating rooms (Stern et al., 2008a; Wachs et al., 2011). Fifth, gestures may
promote chunking, and therefore, may alleviate cognitive burden during human-computer interaction (Baudel and
Beaudouin-Lafon, 1993; Buxton, 2013). Sixth, gestures can be combined easily with other input modalities, including voice, to enhance ease of use and expressiveness (Buxton, 2013). Finally, the use of the hands (or other body parts) as the input device eliminates the need for intermediate transducers, and thereby may help reduce
physical stresses on the human body (Baudel and Beaudouin-Lafon, 1993).
One of the key research issues related to gesture-based interaction is the design of gestures. Typically, a gesture
design problem is defined as determining the best set of gestures for representing a set of commands necessary for
an application. Such set of gesture-command pairs is often referred to as a gesture vocabulary (GV) (Wachs et al.,
2008; Stern et al., 2008a). Gesture design is important because whether or not gesture-based interaction achieves
naturalness, intuitiveness and comfort largely depends on the qualities of designed gestures.

ISSN 1943-670X © INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING


International Journal of Industrial Engineering, 20(9-10), 562-573, 2013

A STUDY ON PREDICTION MODELING OF KOREA MILITARY AIRCRAFT ACCIDENT OCCURRENCE
Sung Jin Yeoum1, Young Hoon Lee2
1 Department of IIE, Yonsei University
2 Department of IIE, Yonsei University, Korea, Republic of

This research reports an analysis of the causes of accidents and case studies over the last 30 years in order to proactively predict the chances of accident occurrence for the Republic of Korea Air Force (ROKAF). Systematic analytical methods, i.e. artificial neural networks (ANN) and logistic regression, are employed to develop prediction models and to identify the superior technique of the two. Experimentation reveals that the ANN outperforms the logistic regression technique in terms of prediction rate.

Significance: This research proposes accident prediction models that are anticipated to perform effectively in terms of accident prediction and prevention for military aircraft. Moreover, this research also serves the purpose of providing an academic base, data and direction for future research on this specific topic.

Keywords: Prediction Modeling, Accident Prediction Rate, Artificial Neural Network

1. INTRODUCTION

The ROKAF is facing chronic challenge of one or two aircraft accidents per year during commencement of its
scheduled air operations and training exercises. The aforesaid fact inevitably incurs high aircraft cost and results in loss
of precious pilot’s life having detrimental effects in terms of lowering of morale and causing great grief among citizens.
The ROKAF is making best of its effort to address this challenge and has established Air Safety Management Wing in
this regard. Few improvements in a scientific and realistic fashion compared to the existing situation have been reported
but complete accident prevention is yet to be achieved (Byeon et al., 2008, Myung, 2008). An extensive research with
focus on pilot error has been conducted but no research with focus on jet fighter accident variable determination and
consequent accident prevention models is available. The reason behind aforesaid shortcoming is that the data related to
jet fighter accident is restricted, off-limits and not accessible due to security issues.
Due to aforementioned reason, accident prevention models have been developed for nuclear facilities and civilian
aircrafts etc. but no research has been conducted regarding jet fighter accident prediction and prevention. This research
is one of its kinds because it analyzes a total of 38 major jet fighter (F-A, F-B, F-C and F-D types) accidents over the
span of last 30 years (from 1980 to 2010) in an effort to comprehensibly determine all factors and variables affecting
military aircraft accidents. Instead of using traditional qualitative accident prevention variables, a quantitative analysis
is engineered to extract accident prevention data. To increase the credibility of aforesaid data, we have used two data
mining and analysis techniques i.e. logistic regression analysis and ANN.
Causal jet fighter accident factors are included in the proposed accident prevention model as 'applicable variables' along with other factors or variables depicting major accident causes. Individual flight capability, characterized by variables such as pilot age (23 years), flight hours (2,400 hours), and experience as a safety flight leader and squadron leader, is included in the crash prediction model. The literature on theoretical considerations, suitable research methods, safety management, and crash prediction theories in this domain was studied in detail before this research. The collected data were split into two groups, and a basic statistical analysis (t-test) was used to distinguish accident-prone variables from accident-free variables. The Durbin-Watson statistic was used to examine the independence of the variables, and multicollinearity was checked through tolerance limits, variance inflation factors, condition indices, and the degree of dispersion.
Crash prediction models were then built from these data via logistic regression and ANN (using SPSS 18). The models were verified on test data for validation, and the superiority of one model over the other is judged by its prediction rate. A comprehensive literature survey of this research area is presented in Section 2. In Sections 3-5, models for jet fighter accident prediction are developed and validated using test data gathered over the last 30 years. In Sections 6-7, conclusions are drawn along with future research directions and suggested implications.
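As a rough, hedged illustration of the comparison described above, the sketch below fits a logistic regression and a small feed-forward ANN to a synthetic binary accident/no-accident data set and compares their prediction rates on held-out data. All variables, data and settings are invented for illustration; the restricted ROKAF data and the SPSS 18 workflow used in the paper are not reproduced here.

# Minimal sketch on assumed synthetic data: logistic regression vs. a small ANN
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# synthetic stand-in for accident variables (e.g. pilot age, flight hours, ...)
X, y = make_classification(n_samples=400, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_train, y_train)

print("logistic regression prediction rate:", accuracy_score(y_test, logit.predict(X_test)))
print("ANN prediction rate:", accuracy_score(y_test, ann.predict(X_test)))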



International Journal of Industrial Engineering, 20(9-10), 574-588, 2013

A COMBINED APPROACH OF CYCLE TIME ESTIMATION IN MASS CUSTOMIZATION ENTERPRISE

Feng Liang1*, Richard Y K Fung2 and Zhibin Jiang3


1 Dept. of Industrial Engineering, Nankai University, Tianjin 300457, China
2 Department of Manufacturing Engineering & Engineering Management, City University of Hong Kong, Hong Kong, China
3 Dept. of Industrial Engineering, Shanghai Jiao Tong University, Shanghai 200240, China

To enhance customer satisfaction and improve the ability to respond quickly, mass customization is advocated in many manufacturing enterprises. In mass customization enterprises, however, customization demands influence the standard cycle time estimation, which is essential in contract negotiation, capacity planning, and due date assignment. Hence, this paper proposes a combined methodology employing an analytical model and a statistical regression method to facilitate cycle time estimation in the mass customization enterprise. By reasoning over the analytical cost-minimization model, it is deduced that the relationship between the customization degree and the cost coefficient provides an efficient way to estimate the cycle time accurately, and this relationship is described with a statistical regression method. Finally, a case study from a bus manufacturing enterprise illustrates the detailed estimation procedure, and further discussion explicates its significance for practice.

Key words: Cycle Time; Mass Customization; Statistics Regression; Customization Degree; Cost Coefficient

1. INTRODUCTION
One of the essential requirements for reliable due date commitments and a high level of customer service is accurate cycle time estimation. Owing to the lack of fast and accurate cycle time estimation methods in the mass customization enterprise, practitioners often use constant cycle times as the basis for due date assignment and scheduling. However, a constant cycle time is so simplified that due dates and schedules may not be assigned and constructed with acceptable accuracy. In many production systems, this approach results in a high degree of late deliveries, since the mean cycle time is used to determine the delivery date. Therefore, developing a cycle time estimation model for the mass customization enterprise is essential, even though it may be rather complex. Beyond due date setting, accurate cycle time estimates are also needed for better management of shop floor control activities, such as order review/release, evaluation of shop performance, identification of jobs that require expediting, and lead time comparisons. All of these application areas make the cycle time estimation problem as important as other shop floor control activities (Sabuncuoglu and Comlekci, 2002).
In fact, in the mass customization enterprise, the difficulty of cycle time estimation stems not only from the complexity of the manufacturing system, but also from the high customization degree. The actual cycle time may deviate from the theoretical cycle time because of customization demands. For example, in an automobile enterprise, a typical mass customization enterprise, there are 35 important parts of which 14 are optional for customers; as a result, the cycle time varies from 20 to 24 days. If the average cycle time is used as the promised cycle time, the late delivery rate may reach 23%. Hence, to avoid late deliveries, the actual cycle time has to be determined according to the required customization degree.
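As a hedged, minimal illustration of estimating cycle time from the customization degree rather than using a constant average, the sketch below fits a simple linear regression to invented data; the paper's analytical cost-minimization model and cost-coefficient relationship are not reproduced here.

# Illustrative sketch only: regression-based cycle time estimate vs. a constant average
import numpy as np

degree = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])            # assumed share of optional parts selected
cycle_time = np.array([20.1, 20.8, 21.7, 22.5, 23.2, 24.0])  # assumed observed cycle times (days)

b1, b0 = np.polyfit(degree, cycle_time, 1)                   # cycle_time ~ b0 + b1 * degree

constant_estimate = cycle_time.mean()                         # the "constant cycle time" practice
order_degree = 0.9
print(f"constant estimate: {constant_estimate:.1f} days")
print(f"regression estimate for degree {order_degree}: {b0 + b1 * order_degree:.1f} days")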
Under a mass customization environment, however, estimating the cycle time as the basis for meeting due dates on time and utilizing existing capacity efficiently is a more complex problem than in other production systems, as the example below illustrates. According to statistics and analysis of historical production and after-sales service data in a bus manufacturing company, customer satisfaction was measured on six factors, as shown in Figure 1.

* Corresponding author: Dr Feng Liang, Dept. of Industrial Engineering, Nankai University, 23 Hongda Street, Tianjin Economical Development Area, Tianjin 300457, China. Phone: (+86) 22 6622 9204, Fax: (+86) 22 6622 9204, Email: liangfeng79@nankai.edu.cn.
International Journal of Industrial Engineering, 20(11-12), 589-601, 2013

SYSTEM ENGINEERING APPROACH TO BUILD AN INFORMATION SYSTEM FOR EMERGENCY CESAREAN DELIVERIES IN SMALL HOSPITALS
Gyu M Lee
Department of Industrial Engineering, Pusan National University, Republic of Korea

Humans are imperfect beings with limited ability to perceive situations appropriately and to make the right decision quickly. Their perceptions and decisions often derive from personal experience and characteristics. This vulnerability leads to frequent errors and mistakes and aggravates matters in emergency situations of time pressure and confusion. When an emergency cesarean delivery (ECD) is required, immediate and appropriate medical care is vital to the fetus and the mother. However, the number of high-risk-pregnancy obstetricians has been decreasing in recent years, and more medical staff are greatly needed. The American College of Obstetricians and Gynecologists (ACOG) stated in 1989 that hospitals with obstetric services should have the capability to begin an ECD within 30 minutes of the decision. This requirement places intense time pressure on the preparation and surgical teams. A distributed, mobile communication and information system to facilitate ECDs has been developed together with the healthcare staff of Pusan National University Hospital. The developed ECD Facilitator has been demonstrated to the staff at the hospital, and their responses were obtained to assess whether such a system would reduce decision-to-incision intervals (DII) to well below the 30-minute ACOG guideline and reduce the likelihood of human errors that compromise patient safety. This system engineering approach can readily be adapted to other emergency and disaster situations.

1. INTRODUCTION
The operating room (OR) in hospitals is a complex system in which the effective integration of personnel, equipment, and
information is essential to the delivery of high quality health care services. A team of surgeons, nurses, and technicians
with appropriate knowledge and skills must be assembled to perform many complex tasks necessary to properly prepare
for and successfully complete the surgery. Then, they must have the appropriate equipment, supplies, and materials at
hand, and those items must not only be present, but properly placed and correctly configured to be used by the OR team.
Besides the knowledge and skills they bring to the OR, team members require additional information to support their
decisions and guide their actions, including accurate vital data, proper protocols and procedures, and medical reference
information, particularly if they encounter unfamiliar situations or complications in the course of the surgery. All of these
components in the complex OR system must be properly coordinated. The surgery must be carefully planned, personnel
must be in the right places at the right times, activities must be properly synchronized, logistics must be executed
efficiently, and the right information must be available when and where needed.
This coordination is made more difficult by the fact that emergency surgery is time-critical, where the life of the patient
may depend on the hospital’s ability to assemble the OR team, prepare the OR and equipment, and provide the necessary
information to begin a surgical procedure within minutes. Emergency surgeries challenge even the largest, most capable
hospitals, but they are especially challenging for small, rural hospitals that do not have enough personnel and resources.
When a patient needing emergency surgery presents at a small hospital, the medical staff may be at home, on call, and
must be contacted and summoned to the hospital. As the team members begin arriving, they must start preparing the
patient, the OR and equipment for surgery. This is often complicated by the fact that small hospitals have few ORs and it
may not be practical to have one always ready for any specific class of emergency surgical procedure, thus requiring a
more lengthy preparation process. Moreover, small, rural hospitals often lack the information infrastructure needed to
deliver patient data, procedural knowledge, and medical reference information in an effective and timely manner.
The potential chaos and confusion of an emergency surgery in the middle of the night is compounded by the fact that the
medical personnel involved in the case are human, and human beings are fallible. Human beings are limited by nature in their abilities to sense, perceive, and act accurately and quickly, and innate cognitive biases compromise their judgment and decision-making capabilities. These fallibilities combine and interact with the characteristics of the complex system and complicated situation described above to yield delays and errors that may lead to further harm to the emergency patient or even, in some cases, death.
With that principle in mind, we utilize medical knowledge and engineering methods to design efficient, best-practice
processes and to create information and communication systems to facilitate emergency surgeries in small, rural hospitals.
The developed Emergency Cesarean Delivery Facilitator (ECD Facilitator) is a job performance aid to help summon,
International Journal of Industrial Engineering, 20(11-12), 602-613, 2013

EXPLORING BUSINESS MODELS FOR APPLICATION SERVICE PROVIDERS WITH RESOURCE BASED REVIEW

JrJung Lyua, Chia-Hua Changb *,


aDepartment of Industrial and Information Management,
National Cheng Kung University, Tainan 701, Taiwan, ROC
jlyu@mail.ncku.edu.tw
bDepartment of Management and Information Technology,
Southern Taiwan University of Science and Technology,
Tainan City 710, Taiwan, ROC
chiahua@mail.stust.edu.tw

The Application Service Provider (ASP) concept extends traditional information application outsourcing and is currently used by numerous industries for information applications. Although value-added services can be generated with ASPs, failure rates in ASP markets remain high. This research applies Resource Based Theory (RBT) to evaluate the business models of ASPs in order to assess their positions and provide suggestions for development directions. The top ten application service providers among the fifty most significant ASPs were first selected to investigate the global ASP market and service trends; then three of them were explored to illustrate the RBT review. Based on the market review and the empirical investigation, it was found that only a few ASPs can provide integrated service contents that adapt to fit the real demands of customers. ASPs should focus on the perspective of the ecosystem and consider employing strategic alliances in order to provide an integrated solution for their customers and sustain competitive advantage.

Keywords: Application Service Provider, Resource Based Theory, Outsourcing Strategy, Business Model

1. INTRODUCTION

Information technology (IT) has become one of the most critical survival strategies for enterprises wishing to adapt to
rapidly evolving environments as a result of the advent of the network economy. To retain competitive advantage,
enterprises must seek out more efficient ways to utilize available resources. Thus, when internal resources cannot meet
environmental changes, enterprises may turn to outsourcing strategy and ally with their suppliers to better use external
resources. In this way, industries can broaden their services, reduce transaction costs, maintain core competence, and increase profit margins through such combinations of outsourced resources (Cheon et al., 1995).
Since information technology has become a critical resource for business, outsourcing strategy is therefore an option
involving commitment of all or parts of information system (IS) activities, manpower and other IS resources to exterior
suppliers (Adeley et al., 2004). The most critical reason for employing IS outsourcing strategy is to decrease the
inherent risks and compensate for the lack of abilities to develop such strategic applications in-house. Application
service providers (ASPs) have emerged in recent years offering services like traditional outsourcing, which receive
much attention in IS markets. Although the market scale of ASPs shows continued growth, most enterprises do not
realize or are not familiar with the way to outsource through ASPs (Currie and Seltsikas, 2001; Chen and Soliman,
2002). Therefore, the purpose of this research is to develop an evaluation structure for exploring and evaluating ASPs
from a supply side perspective. The consequences could then help ASPs recognize corresponding strategic marketing
directions in the future.



International Journal of Industrial Engineering, 20(11-12), 614-630, 2013

HYBRID FLOW SHOP SCHEDULING PROBLEMS INVOLVING SETUP CONSIDERATIONS: A LITERATURE REVIEW AND ANALYSIS
Márcia de Fátima Morais, Moacir Godinho Filho, Thays Josyane Perassoli Boiko

Affiliation: Federal University of São Carlos


Department of Industrial Engineering
Rodovia Washington Luiz, km 235 - São Carlos - SP - Brazil
email: moacir@dep.ufscar.br

This research is dedicated to the Production Scheduling Problem in a hybrid flow shop with setup times separated
from processing times. The goal is to survey the current literature and analyze the papers that develop methods to solve this problem. In this review, it was possible to identify and analyze 72 papers that have addressed
this issue since 1991. Analyses were performed using the number of papers published over the years, the approach
used in the development of the methods for the solutions, the type of objective function, the performance criterion
adopted, and the additional constraints considered. The analysis results provide some conclusions about the state of
the art in the subject and also enable us to identify suggestions for future research in this area.

Keywords: Production Scheduling, Hybrid Flow Shop, Sequence-Dependent Set-up Time, Sequence-Independent
Set-up Time.

1. INTRODUCTION

In scheduling theory, a multi-stage production process with the property that all of the products must pass through a
number of stages in the same order is classified as a flow shop. In a simple flow shop, each stage consists of a single
machine that handles at most one operation at a time. When it is assumed that, at least in one stage, a number of
machines that operate in parallel are available, this model is known as a hybrid flow shop (Sethanan, 2001).
According to Ruiz and Vázquez-Rodríguez (2010), a hybrid flow shop (HFS) system processes jobs in a series of
production stages, each containing parallel machines, with the aim of optimizing one or more objective functions.
Solving the production scheduling in such an environment is, in most cases, NP-hard.
Many real manufacturing systems are hybrid flow shop systems. The products manufactured in such an
environment can differ in certain optional components; consequently, the processing time on a machine differs from
one product to the next, and the need to prepare one or more machines before beginning a job or after finishing a job
is frequently present. In scheduling theory, the time required to shift from one job to another on a given machine is
defined as the additional production cost or the setup time. The corresponding scheduling problems, which consider
the setup times, have a higher computational complexity (Burtseva, Yaurima and Parra, 2010).
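To make the setting concrete, the hedged sketch below evaluates the makespan of one job permutation in a toy two-stage hybrid flow shop with sequence-dependent (family-based) setup times, then enumerates all permutations. The data, the setup rule and the earliest-available-machine assignment are assumptions for illustration, not a method taken from the reviewed papers.

# Toy hybrid flow shop: 2 stages, parallel machines, sequence-dependent setups (all data assumed)
import itertools

proc = {"J1": (4, 3), "J2": (2, 5), "J3": (3, 2), "J4": (5, 4)}   # processing times at stages 0 and 1
family = {"J1": "A", "J2": "B", "J3": "A", "J4": "B"}             # hypothetical product families
machines_per_stage = (2, 1)                                        # parallel machines at each stage

def setup(prev, job):
    # sequence-dependent setup: switching product family on a machine costs 2 time units
    return 0 if prev is None or family[prev] == family[job] else 2

def makespan(sequence):
    # the same permutation is kept at both stages; each job goes to the earliest-available machine
    ready = {j: 0 for j in sequence}
    for stage, m in enumerate(machines_per_stage):
        free, last = [0] * m, [None] * m
        for job in sequence:
            k = min(range(m), key=lambda i: free[i])
            start = max(free[k], ready[job]) + setup(last[k], job)
            free[k] = start + proc[job][stage]
            last[k] = job
            ready[job] = free[k]
    return max(ready.values())

best = min(itertools.permutations(proc), key=makespan)
print(best, makespan(best))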
An explicit treatment of setup times is required in most applications and is of special interest, as machine setup time is a significant factor for production scheduling in many practical cases. Setup time can easily
consume more than 20% of the available machine capacity if it is not handled well (Pinedo, 2008). Many examples
of scheduling problems that consider separable setup times are given in the literature, including electronics
manufacturing, automobile assembly plants, the packaging industry, the textile industry, steel manufacturing,
airplane engine plants, label sticker manufacturing companies, the semiconductor industry, maritime container
terminals, and the ceramic tile manufacturing sector, as well as in the electronics industry in sections for inserting
components on printed circuit boards (PCB), where this type of problem occurs frequently. Hybrid flow shop
scheduling problems that consider setup times are among the most difficult classes of scheduling problems.
Research in production scheduling began in the 1950s; however, until the mid-1990s, most research considered
setup times to be irrelevant or a minor variation and usually included them in the processing times of jobs and/or batches (Allahverdi, Gupta and Aldowaisan, 1999). Ruiz and Vázquez-Rodríguez (2010) show that studies that address
hybrid flow shop scheduling and that consider separate setup cost or time arose in the early 1990s. Within this
context, the main goal of this research is to perform a literature review on hybrid flow shop scheduling problems
with setup considerations. After the literature review, this paper also presents an analysis of this review, attempting
to find literature gaps and suggestions for future research in this field.



International Journal of Industrial Engineering, 21(1), 1-17, 2014

DEVELOPING A ROBUST PROGRAMMING APPROACH FOR THE RESPONSIVE LOGISTICS NETWORK DESIGN UNDER UNCERTAINTY
Reza Babazadeh, Fariborz Jolai, Jafar Razmi, Mir Saman Pishvaee
Department of Industrial Engineering, College of Engineering, University of Tehran, Tehran, Iran

Operational and disruption risks derived from the environment have forced firms to design responsive supply chain
networks. This paper presents a multi-stage multi-product robust optimization model for responsive supply chain
network design (SCND) under operational and disruption risks. First, a deterministic mixed-integer linear programming
(MILP) model is developed considering different transportation modes, outsourcing, flexibility and cross-docking
options. Then, the robust counterpart of the presented model is developed to deal with the inherent uncertainty of input
parameters. The proposed deterministic and robust models are assessed under both operational and disruption risks.
Computational results show the superiority of the proposed robust model in managing risks with a reasonable increase
in the total costs compared to the deterministic model.

Keywords: Robust Optimization, Responsive Supply Chain Network Design, Operational & Disruption Risks.

1. INTRODUCTION

Facility location is one of the most important decisions in the supply chain network design (SCND) problem and plays a
crucial role in the overall performance of the supply chain. Generally, the SCND problem includes determining the
numbers, locations and capacities of facilities, as well as the amount of shipments between them (Amiri, 2006).
Nowadays, time and cost are the common gauges used to assess supply chain performance, and both are minimized when treated simultaneously. If delivery time is treated as a separate objective, the result is a bi-objective problem in which the time and cost objectives conflict with each other (Pishvaee and Torabi, 2010): quick delivery implies high costs. The time minimization objective, however, can be integrated into the cost objective when it is expressed in monetary terms. Increased environmental changes in competitive markets force manufacturing companies to be more flexible and improve their responsiveness (Gunasekaran and Kobu, 2007). Some measures, such as direct shipments from supply centres to customers, decisions on opening or closing facilities (plants, distribution centres, etc.) for forthcoming seasons (Rajabalipour et al., 2013), and the use of different transportation modes, can improve the flexibility of an SCN. Cross-docking is a logistics function in which products are shipped directly from the origin to the destination,
without being stored in warehouses or distribution centres (Choy et al., 2012). Utilizing cross-dock centres as an
intermediary stage between supply centres and customer zones leads to significant advantages for the manufacturing
and service industries (Bachlaus et al., 2008). In recent decades, some companies, including Wal-Mart, have used cross-docks at different sites to achieve competitive advantages in distribution activities. Although inventory holding is not
attractive, especially in lean production systems, it could play a significant role in dealing with supply and demand
uncertainty (You and Grossmann, 2008). In today's world, the increased diversity of customer needs prevents manufacturing and service industries from making fast changes unless this is done through outsourcing. Outsourcing is performed for many reasons, such as cost savings, focus on the core business, quality improvement, access to knowledge, reduced time to market, enhanced capacity for innovation, and risk management (Kang et al., 2012). Some companies, like Gina Tricot and Zara, that use the outsourcing approach have gained a massive advantage (Choudhury and Holmgren, 2011).
Many previously presented models consider fixed capacities for all facilities, whereas determining facility capacity is often difficult in practice (Wang et al., 2009). Therefore, the capacity level of facilities should be treated as a decision variable in mathematical programming models. Since opening and closing facilities are strategic and time-consuming decisions (Pishvaee et al., 2009), an SCN should be designed so that it can be sustained under operational and disruption risks. Chopra and Sodhi (2004) and Chopra et al. (2005) note that organizations should consider uncertainty in its various forms in supply chain management to deal with its destructive and burdensome effects on the supply chain.
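As a hedged sketch of the deterministic facility-location core that an SCND MILP builds on (and that a robust counterpart would later immunize against uncertainty), the toy model below chooses which plants to open and how to ship to customers. All data, and the use of the PuLP modelling library, are assumptions for illustration only and are not the paper's formulation.

# Toy capacitated facility-location MILP (assumed data, PuLP modelling library)
import pulp

plants, customers = ["P1", "P2"], ["C1", "C2", "C3"]
fixed = {"P1": 100, "P2": 80}                   # fixed opening costs (invented)
cap = {"P1": 60, "P2": 50}                      # plant capacities
demand = {"C1": 30, "C2": 25, "C3": 20}
ship = {("P1", "C1"): 2, ("P1", "C2"): 4, ("P1", "C3"): 5,
        ("P2", "C1"): 3, ("P2", "C2"): 1, ("P2", "C3"): 3}   # unit shipping costs

m = pulp.LpProblem("toy_scnd", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", plants, cat="Binary")      # open/close decisions
x = pulp.LpVariable.dicts("flow", ship, lowBound=0)          # shipment quantities

m += pulp.lpSum(fixed[p] * y[p] for p in plants) + pulp.lpSum(ship[a] * x[a] for a in ship)
for c in customers:
    m += pulp.lpSum(x[(p, c)] for p in plants) >= demand[c]  # satisfy each customer's demand
for p in plants:
    m += pulp.lpSum(x[(p, c)] for c in customers) <= cap[p] * y[p]  # capacity only if plant is open

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({p: y[p].value() for p in plants}, pulp.value(m.objective))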
A review of various sources shows that most works in the SCND area assume that input parameters, such as demands, are deterministic (see Melo et al., 2009; Klibi et al., 2010). Although some studies have considered SCND under uncertain conditions, most of them use stochastic or chance-constrained programming methods (Alonso-Ayuso et al., 2003; Santoso et al., 2005; Listes and Dekker, 2005; Salema et al., 2007). The major drawbacks


International Journal of Industrial Engineering, 21(1), 18-32, 2014

DEVELOPMENT OF A CLOSED-LOOP DIAGNOSIS SYSTEM FOR REFLOW SOLDERING USING NEURAL NETWORKS AND SUPPORT VECTOR REGRESSION
Tsung-Nan Tsai1 and Chiu-Wen Tsai2
1 Department of Logistics Management, Shu-Te University, Kaohsiung, 82445, Taiwan
2 Graduate School of Business and Administration, Shu-Te University, Kaohsiung, 82445, Taiwan
Corresponding author’s e-mail: {Tsung-Nan Tsai, tntsai@stu.edu.tw}

This study presents an industrial application of artificial neural networks (ANN) and support vector regression (SVR) to diagnose and control the reflow soldering process in a closed-loop framework. Reflow soldering is the principal process for the fabrication of a variety of modern computer, communication, and consumer (3C) electronics products. It is important to achieve robust electrical connections without changing the mechanical and electronic characteristics of the components during the reflow soldering process. In this study, a 3^(8-4) experimental design was conducted to collect structured process information. The experimental data were then used to train ANN and SVR models to investigate both the forward and backward relationships between the heating factors and the resultant reflow thermal profile (RTP), so as to develop a closed-loop reflow soldering diagnosis system. The proposed system includes two modules: (1) a forward-flow module used to predict the output elements of the RTP and evaluate its performance based on an ANN and a multi-criteria decision-making (MCDM) criterion; (2) a backward-flow module employed to ascertain the set of heating parameter combinations that best fulfills the production requirements of expected throughput rate, product configuration, and desired solderability. The efficiency and cost-effectiveness of this methodology were evaluated empirically, and the results show promise for improving soldering quality and productivity.

Significance: The proposed closed-loop reflow soldering process diagnosis system can predict the output elements
of a reflow temperature profile according to process inputs. This system is also able to ascertain the set of heating
parameter combinations which best fulfill the production requirements and the desired solderability. The empirical
evaluation demonstrates the efficiency and cost-effectiveness for the improvements of soldering quality and
productivity.

Keywords: SMT, analytic hierarchy process, neural network, reflow soldering, support vector regression

1 INTRODUCTION

A high-speed surface mount technology (SMT) is an important development to fabricate many types of modern 3C
products in the electronics assembly industry. An SMT assembly process consists of three main process steps: the
stencil printing application, component placement, and reflow soldering. Reflow soldering is the principal process
used to melt powder particles in the solder paste and then solidify them to create strong metallurgical joints between
the pads of printed circuited board (PCB) and the surface mounted devices (SMDs) through a reflow oven. The
reflow soldering operation is widely recognized as a key determinant of production yield in PCB assembly (Soto,
1998; Parasad, 2002). A poor understanding of reflow soldering behavior can result in considerable troubleshooting time, soldering defects, and high manufacturing costs.
The required function of a reflow oven is to heat the assembled boards to a predefined temperature at the proper
heating rates for a specific elapsed time. The forced convection reflow oven is the most commonly used heating
source in the SMA since it meets the economic and technical requirements of mass production. A reflow thermal
profile (RTP) is a time-temperature graph used to monitor and control the heating phases and duration, so that the
assembled boards are heated enough to form reliable solder joints without changing the mechanical and electronic
characteristics of the components. An inhomogeneous and inefficient reflow temperature profile may cause various
soldering failures (Illés, 2010), as shown in Figure 1. A typical RTP is comprised of preheating, soaking, reflowing
and cooling phases using a leaded solder paste, as shown in Figure 2. During the preheating phase, the board and
the relevant components are heated quickly from room temperature to about 150 ºC. In the soaking phase, the
temperature continues rising to approximately 180 ºC. At the same time, flux is activated to gradually wet and clean
oxidation from the surfaces of the metal pads and component leads. The solder paste melts and changes into a
liquid solder mass in the reflowing phase. Eventually, during the cooling phase, electrical connections form between
the component leads and the PCB pads. The grey area between the contoured and faint lines shows the acceptable
temperature range that might produce acceptable soldering quality according to the specification provided by the solder paste maker (Itoh, 2010).
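As a rough, hedged illustration of the forward-flow idea (mapping heating factors to an RTP output), the sketch below regresses a synthetic peak temperature on invented oven settings with support vector regression. The factor names, data and model settings are assumptions and do not reproduce the paper's 3^(8-4) experiment or its ANN/SVR configuration.

# Illustrative sketch only: SVR mapping assumed oven settings to a synthetic peak temperature
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
zone_temps = rng.uniform(150, 300, size=(80, 4))      # hypothetical heating-zone set-points (degC)
belt_speed = rng.uniform(0.5, 1.2, size=(80, 1))      # hypothetical conveyor speed (m/min)
X = np.hstack([zone_temps, belt_speed])
peak_temp = 0.6 * zone_temps[:, -1] + 30 / belt_speed[:, 0] + rng.normal(0, 2, 80)  # toy response

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5)).fit(X, peak_temp)
print("predicted peak temperature:", model.predict(X[:1])[0])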



International Journal of Industrial Engineering, 21(1), 33-44, 2014

EFFICIENT DETERMINATION OF HELIPORTS IN THE CITY OF RIO DE JANEIRO FOR THE OLYMPIC GAMES AND WORLD CUP: A FUZZY LOGIC APPROACH
Claudio S. Bissoa, Carlos Patricio Samanezb
a Production Engineering Program, Federal University of Rio de Janeiro (UFRJ), COPPE, Brazil
b Industrial Engineering Department, Pontifical Catholic University of Rio de Janeiro PUC-Rio, Brazil

The purpose of this study was to determine a method of evaluation for the use and adaptation of Helicopter Landing
Zones (HLZs) and their requirements for registered public-use for the Olympic Games and the World Cup. The
proposed method involves two stages. The first stage consists of clustering the data obtained through the Aerial and
Maritime Group/Military Police of the State of Rio de Janeiro (GAM/PMERJ). The second stage uses the weighted
ranking method. The weighted ranking method was applied to a selection of locations using fuzzy logic, linguistic
variables and a direct evaluation of the alternatives. Based upon the selection of four clusters, eight HLZs were obtained
for ranking. The proposed method may be used to integrate the air space that will be used by the defense and state
assistance agencies with the locations of the sporting events to be held in 2014 and 2016.

Significance: In this paper we proposed a model for evaluating the use and adaptation of Helicopter Landing Zones.
This method involves clustering data and the selection of locations using fuzzy logic and a direct evaluation of the
alternatives. The proposed method allowed for a precise ranking of the selected locations (HLZs), contributing to the development of public policies aimed at reforming the local aerial resources.
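As a hedged sketch of the weighted-ranking step with fuzzy linguistic ratings (not the paper's exact model, criteria or HLZ data), the fragment below scores two hypothetical landing zones using triangular fuzzy numbers, criterion weights and centroid defuzzification.

# Illustrative fuzzy weighted ranking of hypothetical HLZs (all names, weights and ratings assumed)
LINGUISTIC = {                 # triangular fuzzy numbers (l, m, u) for linguistic ratings
    "poor": (0, 0, 3), "fair": (2, 5, 8), "good": (7, 10, 10),
}
weights = {"proximity_to_hospitals": 0.5, "signage": 0.2, "illumination": 0.3}

ratings = {
    "HLZ-1": {"proximity_to_hospitals": "good", "signage": "fair", "illumination": "poor"},
    "HLZ-2": {"proximity_to_hospitals": "fair", "signage": "good", "illumination": "good"},
}

def centroid(tfn):
    l, m, u = tfn
    return (l + m + u) / 3.0   # simple centroid defuzzification of a triangular number

scores = {
    hlz: sum(weights[c] * centroid(LINGUISTIC[r]) for c, r in crits.items())
    for hlz, crits in ratings.items()
}
for hlz, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(hlz, round(s, 2))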

Keywords: Fuzzy logic, Site selection, Transport, Public Policy.

1. INTRODUCTION

The city of Rio de Janeiro will host the 2014 World Cup and the 2016 Olympic competitions. Thus, more effective technical mapping is urgently needed to rationalize the use of the aerial resources (helicopters) that belong to the state of Rio de Janeiro. With such mapping, the helicopters will better meet the demands for human health and safety, as well as actively participate in these large sporting events.
The main objective of this study was to determine a method that could be used to justify potential investment
opportunities in registered public-use heliports based on their requirements and their locations relative to points of
public interest. To accomplish this task, Helicopter Landing Zones, or HLZs, were mapped and identified by the Aerial
and Maritime Group (Grupamento Aéreo e Marítimo (GAM)) of the Military Police of the State of Rio de Janeiro
(Polícia Militar do Estado do Rio de Janeiro (PMERJ)).
In the city of Rio de Janeiro, various zones were identified by the GAM as HLZs. Yet, these zones do not have the
appropriate identification, illumination or signage. Thus, these HLZs do not meet the appropriate technical standards
that would define them as zones being appropriate for helicopter landing. Here, several aspects, including the proximity
of the HLZs to hospitals, PMERJ (Military Police of the State of Rio de Janeiro) units, Fire Department (CBMERJ),
Civil Police (PCERJ) and the major sporting competition locations, were used to identify the most relevant HLZs in the
city of Rio de Janeiro (according to these criteria). In addition, this study serves to stimulate the use of the HLZs and
provide subsidies for developing public policies for streamlining the existing aerial resources (helicopters) that belong
to corporations within the state of Rio de Janeiro.
Considering that the city is unlikely to have conventional terrestrial transport capable of handling the numerous tourists and authorities, Rio de Janeiro will face an increased demand for helicopter transport between the different sports facilities, integrated with the local assistance and defense agencies.
Today, Rio de Janeiro faces a challenge that it has never faced before. Investments in various sectors, led by the oil and gas industry, sum, according to the Federation of Industries of the State of Rio de Janeiro (FIRJAN), to $76 billion during the period from 2011 to 2013. This is one of the largest concentrations of investment in the world, given the volume of investments in relation to the small territorial dimension of the state.
Air transport demand brought by those investments, combined with the fact that the city will host the 2014 World Cup
and the 2016 Olympic Games, requires a focused technical mapping that allows streamlining the aerial resources
International Journal of Industrial Engineering, 21(1), 45-51, 2014

AN APPLICATION OF CAPACITATED VEHICLE ROUTING PROBLEM TO REVERSE LOGISTICS OF DISPOSED FOOD WASTE
Hyunsoo Kim1, Jun-Gyu Kang2*, and Wonsob Kim3
1,3 Department of Industrial and Management Engineering, University of Kyonggi, San 94-6, Iui-dong, Yeongtong-gu, Suwon, Gyeonggi-do 443-760, Republic of Korea
2 Department of Industrial and Management Engineering, Sungkyul University, Sungkyul Daehak-Ro 53, Manan-gu, Anyang, Gyeonggi-do 430-742, Republic of Korea
*Corresponding author’s e-mail: Jun-Gyu.KANG@sungkyul.ac.kr

Reverse logistics for food waste, i.e., the transportation of food waste from local collecting areas to designated treatment facilities, produces enormous amounts of greenhouse gas. The Korean government has recently introduced RFID technology in hopes of reducing CO2 emissions. In this study, we evaluated the reduction in the total route distance required for reverse logistics based on the total weight of food waste in each collecting area. We defined the testing environment as a CVRP (capacitated vehicle routing problem) based on actual field data. As our first alternative method, we introduced a Fixed CVRP to improve the current reverse logistics, and we also applied a daily Dynamic CVRP, which considers the daily weight of total food waste at each collecting area in order to determine the optimum routes for reverse logistics. We performed and compared experiments on total routing distance using three different scenarios: current, Fixed CVRP, and daily Dynamic CVRP.

Key words: Reverse logistics, Food waste, CVRP, Sweep method, RFID, Greenhouse gas (CO2)

1. INTRODUCTION
The amount of disposed food waste has been continuously increasing since January 2013, when the Korean government prohibited the dumping of food waste into the sea. This act corresponds to the 1996 Protocol to the London Dumping Convention, intended to stop marine pollution by the dumping of wastes and other matter (Daniel, 2012). Food waste is the largest portion (28.8%) of domestic municipal waste, and the disposed amount has been increasing continuously since 2005: 65 tons/day (2005), 72.1 tons/day (2007), 79.1 tons/day (2009), and 79.8 tons/day (2011) (Seoul Metropolitan Government, 2012).
In order to reduce and properly manage disposed food waste, the Ministry of Environment and the Ministry of
Public Administration and Security started a pilot project in 2011 over the weight-based charging system, under which
the fee charged increases in proportion to the weight of food waste discarded using RFID technology. The system can
charge a monthly fee to an individual based on the total amount of disposed food waste measured via RFID reader
equipped containers. Originally operational in only 10 of 229 local governments, this system has now spread to 129
local governments as of June, 2013. According to the report from the Gimcheon-gu local government in
Gyeongsangbuk-do provincial government, which has already adopted this system, 47% of disposed food waste has
been reduced since 2011 (Korea Environment Corporation. 2012).
With the advent of RFID technology, it has become possible to take full advantage of all of this information (Zhang et al., 2011). Currently, the RFID technology is used only to identify the individual who disposes of food waste and to measure its weight. Unfortunately, the total weight of food waste disposed at each container, which can be obtained from the current RFID system, is not being used by the reverse logistics providers (here called 'collectors') who collect the disposed food waste from the containers. Therefore, fixed routings based on fixed schedules are still applied to the reverse logistics of disposed food waste.
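As a hedged illustration of how the per-container weights from the RFID system could drive route construction (in the spirit of the sweep method named in the keywords, not the paper's exact procedure), the sketch below groups invented collection points by polar angle around the depot and cuts a new route whenever the vehicle capacity would be exceeded. The routing problem itself is formalized next.

# Sweep-style route construction on invented collection data (coordinates, weights, capacity assumed)
import math

depot = (0.0, 0.0)
points = {   # name: (x, y, collected food-waste weight in kg)
    "A": (2, 1, 120), "B": (1, 3, 200), "C": (-2, 2, 150),
    "D": (-1, -2, 180), "E": (3, -1, 90),
}
capacity = 350

ordered = sorted(points, key=lambda p: math.atan2(points[p][1] - depot[1],
                                                  points[p][0] - depot[0]))
routes, current, load = [], [], 0
for p in ordered:
    w = points[p][2]
    if load + w > capacity:        # start a new vehicle route when capacity would be exceeded
        routes.append(current)
        current, load = [], 0
    current.append(p)
    load += w
routes.append(current)
print(routes)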
The problem dealt with in this paper is a Vehicle Routing Problem (VRP hereafter), a combinatorial optimization and integer programming problem of designing optimal delivery or collection routes from one or several depots to a number of geographically scattered cities or customers with a fleet of vehicles (Laporte, 1992). In general, the VRP comprises two combinatorial optimization problems, i.e., the Bin-Packing Problem (BPP) and the Travelling Salesman Problem (TSP): assigning each customer to a vehicle is a BPP, while designing the optimal route for each vehicle and its assigned customers is a TSP (Fenghe and Yaping, 2010). The VRP is thus the intersection of two difficult, NP-hard problems and becomes harder, or even impossible, to solve as the number of customers or vehicles increases (Lin et al., 2014). Since its first proposal by Dantzig and Ramser in 1959, the VRP has played an important role in the fields of transportation, distribution and logistics. Depending on additional practical restrictions, a wide variety of VRPs exists. Along with the traditional variants of the VRP, Capacitated VRP (CVRP), VRP with Time Windows (VRPTW), Multi-depot VRP (MDVRP), and
International Journal of Industrial Engineering, 21(2), 53-65, 2014

DELIVERY MANAGEMENT SYSTEM USING THE CLUSTERING BASED MULTIPLE ANT COLONY ALGORITHM: KOREAN HOME APPLIANCE DELIVERY
Taeho Kim and Hongchul Lee
School of Industrial Management Engineering, Korea University, Seoul, Republic of Korea

This paper deals with the heterogeneous fleet vehicle routing and scheduling problems with time windows (HFVRPTW)
in the area of Korean home appliance delivery. The suppliers of modern home appliance products in Korea not only have
to provide the traditional service of simply delivering goods to customers within the promised time, but they also need to
perform additional services such as installation of the product and explanation of the product's functions. Therefore, reducing delivery costs while improving the quality of service experienced by customers is an important issue for these businesses. In order to meet these two demands, we generated a delivery schedule by using a heuristic clustering-based multiple
ant colony system (MACS) algorithm. In addition, to improve service quality, we set up an expert system composed of a
manager system and an android-based driver system. The system was tested for home appliance delivery in Korea. This
paper is significant in that it constructs an expert system for the entire process of distribution, from the generation of an
actual schedule to management system setup.

Keywords: HFVRPTW, Ant colony algorithm, Home appliance delivery, Android, Information System

1. INTRODUCTION
The physical distribution industry is facing a rapid change in its business environment due to the development of
information and communication technology and the spread of Internet accessibility. Products are ordered both online and
offline. Through online communities, customers can freely share information relating to the entire process of product
purchasing such as product functions, delivery and installation. In particular, products handled in home appliance delivery
in recent years, like Smart TVs, notebooks, air conditioners and refrigerators, have complex functions in contrast to their
predecessors. Hence why it is important to provide installation and demonstration service while guaranteeing accurate
and timely delivery. Such extended services have actually become an important factor for customers in building an image
of a given company. Accordingly, separately from the traditional work of simply delivering a product to a customer,
qualitative improvements of service, like product installation and explanation of product functions, have become an
important part of home appliance delivery in Korea (Kim et al., 2013). From the companies’ point of view, reducing
delivery costs while improving the quality of delivery service experienced by customers is an important problem.
Basically, the problem of satisfying the constraints of delivery time desired by customers while finding the shortest
traveling route for vehicles is known as the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW model
is a representative NP-hard problem (Lenstra and Kan, 1981; Savelsbergh, 1985). There are many studies that have used
metaheuristics to solve this problem (Cordeau et al., 2001; Haghani and Banihashemi, 2002; Sheridan et al., 2013). In this
paper, we used the ant colony system (ACS) among the various metaheuristic methods to generate schedules (Dorigo and
Gambardella, 1997a, 1997b). ACS has the advantage of being able to respond flexibly even when the constraint rules
change. We also utilized a heuristic clustering algorithm in this paper to improve the calculation speed of the local search
part that requires the longest calculation time among the ACS processes (Dondo and Cerdá, 2007).
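A compact, hedged sketch of ant-colony-style tour construction is given below: pseudo-random-proportional edge selection followed by a global pheromone update on the best tour, on a toy distance matrix. The parameters and data are invented, and the clustering step and time-window constraints of the paper's MACS are omitted.

# Toy ACS-style construction and pheromone update (distance matrix and parameters assumed)
import random

dist = [[0, 4, 6, 3],
        [4, 0, 5, 7],
        [6, 5, 0, 2],
        [3, 7, 2, 0]]
n = len(dist)
tau = [[1.0] * n for _ in range(n)]            # pheromone levels
alpha, beta, rho, q0 = 1.0, 2.0, 0.1, 0.9      # assumed ACS-style parameters

def build_tour(rng):
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        i = tour[-1]
        scores = {j: (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta) for j in unvisited}
        if rng.random() < q0:                  # exploitation: pick the best edge
            j = max(scores, key=scores.get)
        else:                                  # biased exploration: roulette wheel
            r, acc = rng.random() * sum(scores.values()), 0.0
            for j, s in scores.items():
                acc += s
                if acc >= r:
                    break
        tour.append(j)
        unvisited.remove(j)
    return tour

def tour_length(t):
    return sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))

rng = random.Random(0)
best = min((build_tour(rng) for _ in range(50)), key=tour_length)
for k in range(n):                             # global pheromone update along the best tour
    i, j = best[k], best[(k + 1) % n]
    tau[i][j] = tau[j][i] = (1 - rho) * tau[i][j] + rho / tour_length(best)
print(best, tour_length(best))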
A delivery management system is required for qualitative delivery service improvement. (Santos et al., 2008; Moon
et al. 2012). We constructed an Android-based delivery management system to flexibly handle such problems as delivery
delays and delivery sequence changes that can occur due to the characteristics of delivery work. With this system,
managers can easily manage various accidents that can occur during deliveries and more effectively monitor the locations
of drivers and manage the delivery progress rate as well as idle drivers.

2. LITERATURE REVIEW
Ever since Dantzig and Ramser (1959) attempted to solve the vehicle routing problem (VRP) by using an LP heuristic,
many researchers have introduced various mathematical models and solutions. Of the VRP types, VRPTW is the VRP
with the customer-demanded time constraint. Since VRPTW is an NP-hard problem, an optimum solution within the
restricted time cannot be found. Studies related to VRPTW have advanced greatly with insertion heuristic research
(Solomon, 1987) as the starting point. Supported by recent advances in computer technology, studies applying
metaheuristic methods such as simulated annealing (Osman,1993; Czech and Czarnas, 2002; Lin et al., 2011), tabu search



International Journal of Industrial Engineering, 21(2), 66-73, 2014

INFLUENCE OF DATA QUANTITY ON ACCURACY OF PREDICTIONS IN MODELING TOOL LIFE BY THE USE OF GENETIC ALGORITHMS
Pavel Kovac, Vladimir Pucovsky, Marin Gostimirovic, Borislav Savkovic, Dragan Rodic
University of Novi Sad, Faculty of Technical Science, Trg Dositeja Obradovica 6, 21000 Novi Sad, Serbia
pkovac@uns.ac.rs, pucovski@uns.ac.rs, maring@una.ac.rs, savkovic@una.ac.rs, rodic@una.ac.rs

It is widely known that genetic algorithms can be used in search and modeling problems. In this paper, their ability to model a function while varying the amount of input data is tested. The function used for this research is a tool life function. This concept is chosen because, by being able to predict tool life, workshops can optimize their production-rate-to-expense ratio. They would also benefit by minimizing the number of experiments necessary to acquire enough input data for modeling the tool life function. Tool life is by nature a problem dependent on multiple factors. By using four factors to obtain an adequate tool life function, realistic complexity is simulated while an acceptable computational time is maintained. As a result, a fairly clear threshold is observed for the quantity of data that must be fed into the optimization model to obtain acceptable accuracy of the output function.

Keywords: Modeling; Genetic Algorithms; Tool Life; Milling; Heuristic Crossover

1. INTRODUCTION
From the early days when artificial intelligence was introduced, there has been a prevailing drive to discover the capabilities that lie within this branch of science. As in every machine-related domain, this one being no exception, there are limits. These limits and boundaries of usage are often expanded, and new purposes are constantly discovered. To achieve this goal, one must be a very good student of the best teacher known to mankind: mother nature. With billions of years of experience, nature is the number one scientist, and we all have the opportunity to learn whatever she has to offer. Mastering the creation of such a variety of living beings is no easy task, and maintaining the delicate balance between species requires time, experience and understanding. No scientist could create something as graceful as the variety of life on Earth by sheer coincidence. There has to be consistency in the process of creating and maintaining this complexity of living beings. The law behind this consistency has prevailed for longer than we can remember and is a simple postulate: only those most adaptable to their environment will survive. By surviving longer than other, less adaptable individuals, every living organism increases its chance to mate with an equally adaptable member of the same species and to create offspring that possess the same, or a higher, level of adaptability to their environment. This law of selection is what enabled the creation of the world we live in. Seeing its effectiveness and understanding the simplicity of this concept, researchers decided to model it. One way of doing so is through genetic algorithms (GA). Since their introduction in the early 1970s, GAs have been a very powerful tool in search and optimization fields. Introduce them to a certain area and, with proper guidance, they will create a population of their own and eventually yield individuals with the highest attributes.
Over time, many scientists have managed to successfully implement GAs as a problem-solving technique. Sovilj et al. (2009) developed a model for predicting tool life in the milling process. Pucovsky et al. (2012) studied the dependence between the ability to model tool life with a genetic algorithm and the type of function. Čuš and Balič (2003) used a GA to optimize cutting parameters in the milling process. A similar procedure for optimizing parameters in turning processes was employed by Srikanth and Kamala (2008), and the optimization of multi-pass turning operations using genetic algorithms for the selection of cutting conditions and cutting tools with tool-wear effect has been successfully reported by Wang and Jahawir (2005). Zhu (2012) managed to implement a genetic algorithm with local search to solve the job shop scheduling problem. Since job shop scheduling is a major area of interest and progress, Wang et al. (2011) succeeded in constructing a genetic algorithm with a new repair operator for the assembly procedure. Ficko et al. (2005) reported positive experiences in using GAs to form a flexible manufacturing system. Regarding tool life in face milling, a statistical approach using the response surface method has been covered by Kadirgama et al. (2008), and Khorasani et al. (2011) used both Taguchi's design of experiments and artificial neural networks for tool life prediction in face milling. Pattanaik and Kumar (2011), using a bi-criterion evolutionary algorithm to identify Pareto optimal solutions, developed a system for product family formation in the area of reconfigurable manufacturing. The knapsack problem is now widely considered a classical example of GA implementation (Ezzaine, 2002).
Taking into consideration the weight and importance of modeling milling tool life with evolutionary algorithms, only a very small number of articles on this subject were found. Moreover, no papers discuss the influence of the quantity of input data on the results of genetic algorithm optimization. Given these two observations, this article is presented as a way to, at least partially, fill the existing gap.
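Below is a hedged, minimal GA sketch in the spirit of the study's theme: fitting the coefficients of an extended Taylor-type tool-life function T = C * v^a * f^b * d^c to synthetic measurements using elitism, heuristic crossover (as named in the keywords) and simple mutation. The data, bounds and GA settings are invented and do not reproduce the paper's experiments.

# Toy GA fitting a tool-life model to synthetic data (all values and settings assumed)
import random

random.seed(1)
true_C, true_a, true_b, true_c = 1.0e6, -2.0, -0.5, -0.3
data = []
for _ in range(30):
    v, f, d = random.uniform(80, 200), random.uniform(0.1, 0.4), random.uniform(1.0, 4.0)
    data.append((v, f, d, true_C * v**true_a * f**true_b * d**true_c * random.uniform(0.95, 1.05)))

bounds = [(1e5, 1e7), (-3.0, -1.0), (-1.0, 0.0), (-1.0, 0.0)]

def error(ind):
    C, a, b, c = ind
    return sum((C * v**a * f**b * d**c - T) ** 2 for v, f, d, T in data)

pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(40)]
for _ in range(200):
    pop.sort(key=error)
    nxt = pop[:4]                                   # elitism: keep the four best individuals
    while len(nxt) < 40:
        p1, p2 = random.sample(pop[:20], 2)
        if error(p1) > error(p2):
            p1, p2 = p2, p1                          # p1 is the fitter parent
        r = random.random()
        child = [x1 + r * (x1 - x2) for x1, x2 in zip(p1, p2)]   # heuristic crossover
        if random.random() < 0.1:                    # mutation: redraw one gene
            k = random.randrange(4)
            child[k] = random.uniform(*bounds[k])
        child = [min(max(x, lo), hi) for x, (lo, hi) in zip(child, bounds)]
        nxt.append(child)
    pop = nxt

print("fitted (C, a, b, c):", [round(x, 3) for x in min(pop, key=error)])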



International Journal of Industrial Engineering, 21(2), 74-85, 2014

PROACTIVE IDENTIFICATION OF SCALABLE PROGRAM ARCHITECTURES: HOW TO ACHIEVE A QUANTUM-LEAP IN TIME-TO-MARKET

Christian Lindschou Hansen & Niels Henrik Mortensen


Department of Mechanical Engineering
Product Architecture Group
The Section of Engineering Design & Product Development
Technical University of Denmark
Building 426
DK-2800 Kgs. Lyngby
Email: chrlh@mek.dtu.dk, nhmo@mek.dtu.dk,

This paper presents the Architecture Framework for the Product Family Master Plan. This framework supports the identification of a program architecture (the way cost-competitive variance is provided for a full range of products) for the product program of product-based companies during the early stages of a product development project. The framework consists of three basic aspects, the market, the product program and production, together with a time aspect captured in the multi-level roadmap. One of its unique features is that these aspects are linked, allowing for an early clarification of critical issues through a structured process. The framework enables companies to identify a program architecture as the basis for improving time-to-market and R&D efficiency for products derived from the architecture. Case studies show that significant reductions in development lead time, of up to 50%, are possible.

Significance: Many companies are front-loading different activities when designing new product programs. This paper
suggests an operational framework for identifying a program architecture during the early development phases, to enable a
significantly improved ability to launch new competitive products with fewer resources.

Keywords: Product architecture, program architecture, product family, platform, time-to-market, scalability

1. INTRODUCTION
Many industrial companies are experiencing significant challenges in maintaining competitiveness. There are many
individual explanations behind these, but some of the common challenges that are often recorded from companies are:
 Need to reduce time-to-market in R&D:
o Shorter product life cycles are increasing the demand for faster renewal of the product program in order to
postpone price drops and maintain competitive offerings (Manohar et al., 2010)
o Loss of market share in highly competitive markets calls for improved launch responsiveness to match and
surpass the offerings of competitors (Chesbrough, 2013)
o Protection of niche markets and their attractive price levels requires continuous multi-launches of
competitive products (Hultink et al., 1997)
 Need for achieving attractive cost and technical performance levels for the entire product program
o Increased competitiveness requires all products to be attractive both cost wise and performance wise
(Mortensen et al., 2010)
o Focusing of engineering resources requires companies to scale solutions to fit across the product program
(by sharing) and prepare them for future product launches (by reuse) (Kester et al., 2013)
o Sales forecasts from global markets are affected by an increasing number of external influences making it
more and more difficult to predict the sales of individual product variants, thus leaving no room for
compromising competitive cost and performance for certain product variants (Panda and Mohanty, 2013)
These externally induced challenges pose a major task for the whole company. Accordingly, many approaches of an organizational, process, tool and competence nature exist to handle them, originating within research across the business, marketing, organization, technology, socio-technical and engineering design sciences. The research presented here originates within engineering design and product development and focuses on the development of a program architecture for a company. Although it originates from the engineering design domain, which is naturally centered in the R&D function of a company, the development of program architectures has relations that stretch far into the marketing, product planning, sourcing, production and supply chain domains, as well as into the company's overall product strategy.
International Journal of Industrial Engineering, 21(2), 86-99, 2014

AN APPROACH TO CONSIDER UNCERTAIN COMPONENTS’ FAILURE RATES IN
SERIES-PARALLEL RELIABILITY SYSTEMS WITH REDUNDANCY ALLOCATION
Ali Ghafarian Salehi Nezhad a,*, Abdolhamid Eshraghniaye Jahromi b, Mohammad Hassan Salmani c, Fereshte Ghasemi d
a M.Sc. Graduate in Industrial Engineering, Sharif University of Technology, Tehran, 14588-89694, Iran
b Associate Professor of Industrial Engineering, Sharif University of Technology, Tehran, 14588-89694, Iran
c Ph.D. Student of Industrial Engineering, Sharif University of Technology, Tehran, 14588-89694, Iran
d M.Sc. Graduate in Industrial Engineering, Amirkabir University of Technology, Tehran, 15875-4413, Iran
* Phone Number: +98 936 337 7547; Fax Number: +98 331 262 4268
Emails: a_ghafarian@ie.sharif.edu, eshragh@sharif.edu, m_salmani@ie.sharif.edu, ghasemi.fereshte@aut.ac.ir

The Redundancy Allocation Problem (RAP) is a combinatorial problem that maximizes system reliability through the discrete selection of available components. The main purpose of this study is to demonstrate the effectiveness of robust optimization in solving RAP. In this study, components' failures are assumed to follow an Erlang density, on which the robust optimization is built. We suppose that failure rates take dynamic values instead of exact, fixed values. Therefore, a new calculation method is presented to handle dynamic failure-rate values in RAP. Another assumption is that each subsystem can adopt either a cold-standby or an active redundancy strategy. Moreover, due to the complexity of RAP, Simulated Annealing (SA) and Ant Colony Optimization (ACO) algorithms are designed to determine the robust system with respect to uncertain parameter values. In order to solve this problem and demonstrate the efficiency of the proposed algorithms, a benchmark problem from the literature is solved and discussed.

Keywords: Reliability Optimization; Robust Optimization; Series-Parallel System; Uncertain Failure Rate; Ant
Colony Optimization; Simulated Annealing.

1. INTRODUCTION
In general, reliability is the ability of a system to perform and maintain its functions in routine circumstances, as well
as hostile or unexpected circumstances. Redundancy Allocation Problem (RAP) is one of the classical problems in
engineering and other sciences to plan the selection of components for a system simultaneously, where these
components can be combined by different strategies. Generally, this problem is defined to maximize the system
reliability such a way that some predetermined constraints such as total weight, total cost, and total volume be
satisfied. The attractiveness of this problem to design an appropriate system will be arisen for different products with
high reliability value. In general, it is possible to categorize the series-parallel systems into three major parts: the
reliability allocation, the redundancy allocation and the reliability and the redundancy allocation. In the reliability
allocation problems, the reliability of the components is determined such that the consumption of a resource under a
reliability constraint is minimized while the redundancy allocation problem generally involves the selection of
components and redundancy levels to maximize the system reliability given various system-level’s constraints [1]. In
fact, we can implement two approaches to improve the reliability of such a system using RAP. The first one is to
increase the reliability of the system components while the second one is using redundant components in various
subsystems in the system [2; 3]. This problem also has four major inputs; λ   λ iz  which represents failure rate for
 i 
component zi in subsystem i , C  Ciz  and W   Wiz  which are cost and weight of component
 i  i
zi for

subsystem i , respectively, and     i t   which is switch reliability in subsystem i at a predetermined time t


. The general structure series-parallel systems is shown in Fig. 1 where i indicates index of each subsystem.
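To make these inputs concrete, the short sketch below (an illustration only: the component data, the exponential failure model and the mission time are our own assumptions rather than the paper's Erlang-based formulation) evaluates the reliability of a series-parallel design with active redundancy, i.e. the quantity that RAP maximizes subject to cost and weight limits.

```python
import math

# Hypothetical data: for each subsystem, the candidate components are given as
# (failure_rate_per_hour, cost, weight); one type is chosen and replicated n times.
SUBSYSTEMS = [
    [(0.0012, 3.0, 2.0), (0.0008, 5.0, 3.0)],   # choices for subsystem 1
    [(0.0020, 2.0, 1.5), (0.0015, 4.0, 2.5)],   # choices for subsystem 2
]

def component_reliability(lam, t):
    """Exponential survival probability at mission time t (assumed failure model)."""
    return math.exp(-lam * t)

def system_reliability(design, t):
    """design[i] = (component_index, redundancy_level) for subsystem i.
    Active redundancy: a subsystem works if at least one copy survives;
    the series system works only if every subsystem works."""
    r_sys = 1.0
    for i, (k, n) in enumerate(design):
        lam, _, _ = SUBSYSTEMS[i][k]
        r_comp = component_reliability(lam, t)
        r_sys *= 1.0 - (1.0 - r_comp) ** n      # parallel (active) block in series
    return r_sys

def design_cost_weight(design):
    """Total cost and weight of a design, used for the resource constraints."""
    cost = sum(SUBSYSTEMS[i][k][1] * n for i, (k, n) in enumerate(design))
    weight = sum(SUBSYSTEMS[i][k][2] * n for i, (k, n) in enumerate(design))
    return cost, weight

if __name__ == "__main__":
    design = [(1, 2), (0, 3)]   # component choice and redundancy level per subsystem
    print("R(t=100):", round(system_reliability(design, 100.0), 4))
    print("cost, weight:", design_cost_weight(design))
```

Cold-standby redundancy and imperfect switch reliability, which the paper also considers, would replace the simple parallel-block formula used here.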

Generally, previous studies have been carried out in a deterministic environment in which the failure rate of each component is constant. Conversely, in the real world the precise failure rate of each component is usually very hard to estimate, and it would be more practical to consider flexible values for this group of parameters. This assumption is all the more relevant when failure rates can be affected by different factors such as labor, machines, environmental conditions, and the way components are used. In this study, it is assumed that no deterministic values are available for the failure rates. In general, the major goal of this study is to solve RAP under uncertain failure-rate values by implementing a robust optimization approach.
The general structure of this paper is as follows. First, a concise and comprehensive literature review of the various studies carried out in recent decades is presented. Afterward, an extensive definition of robustness in RAP is proposed and, according to this definition, an appropriate mathematical model is developed. Following these sections, we present the SA and ACO algorithms in Sections 5 and 6, respectively. Then, the proposed



International Journal of Industrial Engineering, 21(2), 100-116, 2014

A MODEL BASED ON QUEUING THEORY TO DETERMINE THE CAPACITY SUPPORT FOR TWIN-FAB OF WAFER FABRICATION

Ying-Mei Tu a, Chun-Wei Lu b
a. Department of Industrial Management, Chung Hua University, 707, Sec. 2, WuFu Rd., Hsinchu, Taiwan 30012, R.O.C.
b. Ph.D. Program of Technology Management - Industrial Management, Chung Hua University, 707, Sec. 2, WuFu Rd., Hsinchu, Taiwan 30012, R.O.C.
Corresponding author’s e-mail: amytu@chu.edu.tw

The twin-fab concept has been established over the past decade owing to cheaper facility cost, faster installation and more flexible productivity management. Nevertheless, without complete backup policies, the benefits of a twin-fab decrease significantly, particularly in production flexibility and effectiveness. In this work, a control policy for capacity support is established and two control thresholds are developed. The first is the threshold of Work in Process (WIP) amount, which acts as a trigger for backup action. The concept of protective capacity is applied to set this threshold. In order to ensure the effectiveness of WIP transfer between the twin fabs, the threshold of WIP amount difference (WDTH) is set as a control gate. WDTH is designed to maximize the expected saved cycle time. The GI/G/m model is applied to develop equations for the calculation of the expected saved time. Finally, the capacity support policy is validated by a simulation model. The results show that this policy is both feasible and efficient.
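As a rough indication of the kind of calculation a GI/G/m model supports (this sketch uses the well-known Allen-Cunneen approximation with invented arrival and service parameters; it is not the set of equations derived in the paper), the expected queueing delay at a tool group can be estimated as follows:

```python
import math

def erlang_c(m, rho):
    """Probability that an arriving job must wait in an M/M/m queue (Erlang C)."""
    a = m * rho
    summ = sum(a**k / math.factorial(k) for k in range(m))
    last = a**m / (math.factorial(m) * (1.0 - rho))
    return last / (summ + last)

def gi_g_m_wait(lam, mu, m, ca2, cs2):
    """Allen-Cunneen approximation of the mean waiting time in a GI/G/m queue.
    lam: arrival rate, mu: service rate per server, m: number of servers,
    ca2/cs2: squared coefficients of variation of inter-arrival and service times."""
    rho = lam / (m * mu)
    if rho >= 1.0:
        raise ValueError("utilization must be below 1")
    wq_mm_m = erlang_c(m, rho) / (m * mu - lam)   # M/M/m mean waiting time
    return wq_mm_m * (ca2 + cs2) / 2.0            # variability correction

if __name__ == "__main__":
    # Hypothetical tool group: 25 lots/hour arriving, 3 machines at 10 lots/hour each
    print("E[Wq] (hours):", round(gi_g_m_wait(25.0, 10.0, 3, ca2=1.2, cs2=0.8), 3))
```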

Keywords: Twin-fab, Capacity support policy, Protective capacity, Queuing theory

1. INTRODUCTION

Compared with other industries, the manufacturing processes of wafer fabrication are more complicated, involving re-entrant flows, batch processing, and time constraints (Rulkens et al., 1998; Robinson and Giglio, 1999; Tu et al., 2010). In order to maintain high competitiveness, capacity expansion and the upgrade of advanced technology are necessary. However, managers face many difficulties in these situations: market demand changes quickly and equipment costs keep rising. Given this situation, expanding capacity in dynamic environments is risky (Chou et al., 2007).
Over the past decades, many semiconductor manufacturing companies have adopted the twin-fab concept, where two neighboring fabs are installed in the same building and connected to each other through an Automatic Material Handling System (AMHS). The advantages of a twin-fab are as follows.
1. The cost of capacity expansion is reduced by sharing essential facilities, such as gas pumps and systems for recycling polluted water.
2. Because the building and basic facilities are established in the initial stage, the construction time of the second fab is reduced.
3. Since a twin-fab consists of two neighboring fabs, real-time capacity backup can be achieved via the AMHS.



International Journal of Industrial Engineering, 21(3), 117-128, 2014

LOCATION DESIGN FOR EMERGENCY MEDICAL CENTERS BASED ON
CATEGORY OF TREATABLE MEDICAL DISEASES AND CENTER CAPABILITY
Young Dae Ko 1, Byung Duk Song 2, James R. Morrison 2 and Hark Hwang 2,†
1 Deloitte Analytics, Deloitte Anjin LLC, Deloitte Touche Tohmatsu Limited, One IFC, 23, Yoido-dong, Youngdeungpo-gu, Seoul 150-945, Korea
2 Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology, Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea
† Corresponding author’s e-mail: hhwang@kaist.ac.kr

With the proper location and allocation of emergency medical centers, the mortality rate of emergency patients could be reduced by providing the required treatment within an appropriate time. This paper deals with the location design of emergency medical centers in a given region under the closest assignment rule. It is assumed that the capability and capacity to treat various categories of treatable medical diseases are provided for each candidate medical center as a function of possible subsidies provided by the government. It is further assumed that the number of patients arising at each patient group node per unit time is known, along with the categories of their diseases. Additionally, to emphasize the importance of timely treatment, we use the concept of a survival rate that depends on patient transportation time as well as on the category of disease. With the objective of minimizing the total subsidies paid, we select from among the candidate medical centers subject to minimum desired survival rate constraints.
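A minimal sketch of the evaluation logic described above (the node and center names, travel times, subsidies and the survival curve are placeholders, not the paper's calibrated data): a candidate set of centers is feasible if, under the closest assignment rule, every patient group meets the minimum survival rate, and its objective value is the total subsidy paid.

```python
import math

# Hypothetical travel times (minutes) from each patient group node to each candidate center
TRAVEL_TIME = {
    ("node1", "centerA"): 8.0,  ("node1", "centerB"): 15.0,
    ("node2", "centerA"): 20.0, ("node2", "centerB"): 6.0,
}
SUBSIDY = {"centerA": 120.0, "centerB": 90.0}   # subsidy needed to equip each candidate (assumed)

def survival_rate(transport_min, category):
    """Placeholder survival curve: decays with transport time, faster for severe categories."""
    decay = {"severe": 0.08, "moderate": 0.03}[category]
    return math.exp(-decay * transport_min)

def evaluate(selected, demand, min_survival):
    """Closest-assignment check: every node is served by its nearest selected center and
    must meet the category-specific minimum survival rate. Returns (feasible, total subsidy)."""
    for node, category in demand:
        nearest = min(selected, key=lambda c: TRAVEL_TIME[(node, c)])
        if survival_rate(TRAVEL_TIME[(node, nearest)], category) < min_survival:
            return False, None
    return True, sum(SUBSIDY[c] for c in selected)

if __name__ == "__main__":
    demand = [("node1", "severe"), ("node2", "moderate")]
    print(evaluate({"centerA", "centerB"}, demand, min_survival=0.5))
    print(evaluate({"centerA"}, demand, min_survival=0.5))
```

A genetic algorithm, as used in the paper, would search over such candidate subsets for the cheapest feasible one.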

Keywords: Emergency Medical Center, Location Design, Closest Assignment Rule, Genetic Algorithm, Simulation, and
Survival Rate

1. INTRODUCTION
1.1 Background

A medical emergency is an injury or illness that is acute and poses an immediate risk to a person's life or long-term health.
For emergencies starting outside of medical care, two key components of providing proper care are to summon the
emergency medical services and to arrive at an emergency medical center where the necessary medical care is available.
To facilitate this process, each country provides its own national emergency telephone number (e.g., 911 in the USA, 119
in Korea) that connects a caller to the appropriate local emergency service provider. Appropriate transportation, such as an
ambulance, will be dispatched to deliver the emergency patient from the site of the medical emergency to an available
emergency medical center.
In Korea, there are four classes of emergency medical center: regional emergency medical center, specialized care
center, local emergency medical center, and local emergency medical facilities. One regional emergency medical center
can be assigned to each metropolitan city or province based on the distribution of medical infrastructure, demographics
and population. Specialized care centers can be allocated by the Korean Ministry of Health, Welfare and Family Affairs
with the special purpose of treating illnesses caused by poison, trauma and burns. According to Act 30 of the Korean
Emergency Medical Service Law, one local emergency medical center should be operated per 1 million people in
metropolitan cities and major cities. One such center per 0.5 million people is provided in the provinces. The facility to be
designated as such a center should be selected from among the general hospitals in a region based on the accessibility to
local residents and capability to address the needs of emergency patients with serious medical concerns. To retain the
designation as a local emergency medical center, the general hospital should provide more than one specialist in the fields
of internal medicine, surgery, pediatrics, obstetrics and gynecology and anesthesiology. Local emergency medical
facilities may be appointed from among the local hospitals to support the local emergency medical center and to treat less
serious conditions.
A flow chart depicting the Korean emergency medical procedure is provided in Figure 1; it is from the National
Emergency Medical Center of Korea (National Emergency Medical Center, 2013). Initially, the victim(s) or a first
responder calls 119 to request emergency medical service. The Emergency Medical Information Center (EMIC) then
dispatches an ambulance to the scene. When the ambulance arrives at the scene, on-scene treatment is first performed by an emergency medical technician (EMT). The patient(s) are then transported to an emergency medical service (EMS) facility by the ambulance. During transport, information on the patient’s condition may be communicated to the EMIC.



International Journal of Industrial Engineering, 21(3), 129-140, 2014

OPTIMAL JOB SCHEDULING OF A RAIL CRANE IN A RAIL TERMINAL
Vu Anh Duy Nguyen and Won Young Yun
Department of Industrial Engineering,
Pusan National University,
30 Jangeon-Dong, Geumjeong-Gu,
Busan 609-735, South Korea
Corresponding author’s email: wonyun@pusan.ac.kr

This study investigates the job sequencing problem of a rail crane at rail terminals with multiple train lanes. Two kinds
of containers are carried between trains and trucks. Inbound containers are loaded onto trains and outbound containers
are unloaded from trains. We consider the dual-cycle operation of the crane to load and unload containers between trains
and trucks. A branch-and-bound algorithm is used to obtain the optimal solution. A parallel simulated annealing algorithm
is also proposed to obtain near optimal solutions to minimize the makespan in job sequencing problems of large size.
Numerical examples are studied to evaluate the performance of the proposed algorithm. Finally, three different layouts for rail terminals with different temporary storage areas are considered and the performance of the three layouts is compared numerically.

1. INTRODUCTION
Rail transportation is becoming more important in intermodal freight transportation as a means of coping with the rapid changes taking place in global trade. However, the percentage of goods carried by trains within Europe dropped to 16.5% in 2009, from 19.7% in 2000 (Boysen et al. 2012). The main reasons for this decrease are the difficulties in door-to-door transportation and the enormous initial investments involved in the construction of railroad infrastructure. On the other hand, the unit transportation costs decrease as the transportation distance increases. In addition, rail transportation is more environmentally friendly than road transportation.
Inbound and outbound containers are loaded and unloaded by rail cranes (RMGC, RTGC), forklifts and reach stackers at rail stations, so the handling equipment plays an important role in the infrastructure of rail terminals. When trains arrive at the rail station, outbound containers must first be unloaded from the trains, after which inbound containers located in the container yard need to be loaded onto the trains. We consider the job sequencing problem for a rail crane because its performance significantly affects the dwell time of trains at rail terminals and the throughput of the terminals.
In this paper, we deal with the job sequencing problem associated with a rail crane at rail terminals and aim to minimize the makespan of the unloading and loading operations. The dual-cycle operation of a crane is defined as follows: 1) picking up an inbound container from a truck, 2) loading it onto one of the flat wagons of a train, 3) picking up an outbound container from the train, and 4) loading it onto a truck that moves it to the terminal yard.
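The sketch below evaluates the makespan of a given job sequence under this dual-cycle definition (crane speed, handling times and job positions are invented for illustration; the paper's branch-and-bound and parallel simulated annealing algorithms are not reproduced here):

```python
# Toy evaluation of a crane job sequence. Each job is one dual cycle:
# truck -> wagon (inbound), then wagon -> truck (outbound).
# Positions are 1-D coordinates along the track (metres); parameters are hypothetical.

CRANE_SPEED = 2.0     # metres per second (assumed)
HANDLE_TIME = 45.0    # seconds per pick-up or set-down (assumed)

def cycle_time(crane_pos, truck_in, wagon_in, wagon_out, truck_out):
    """Time for one dual cycle starting from crane_pos; returns (time, final position)."""
    t, pos = 0.0, crane_pos
    for target in (truck_in, wagon_in, wagon_out, truck_out):
        t += abs(target - pos) / CRANE_SPEED + HANDLE_TIME
        pos = target
    return t, pos

def makespan(sequence):
    """Sum the dual-cycle times of a job sequence, carrying the crane position forward."""
    total, pos = 0.0, 0.0
    for job in sequence:
        t, pos = cycle_time(pos, *job)
        total += t
    return total

if __name__ == "__main__":
    # jobs: (truck pickup, wagon drop, wagon pickup, truck drop) positions
    jobs = [(10, 40, 55, 12), (80, 60, 30, 85)]
    print("makespan (s):", makespan(jobs))
    print("reversed  (s):", makespan(list(reversed(jobs))))
```

Comparing the two orderings already shows how the job sequence changes the empty-travel distance of the crane and hence the makespan.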
The operational problems in rail stations including layout design, load planning and rail crane scheduling have been
studied in the past. Kozan (1997) considered a heuristic decision rule for the crane split and a dispatching rule to assign
trains to rail tracks. Ballis and Golias (2002) considered the optimal design problem of some main design parameters of
a rail station, including length and utilization of transshipment tracks, train and truck arrival behavior, type and number
of handling equipment, and stacking height in storage areas. Abacoumkin and Ballis(2004) studied a design problem with
a number of user-defined parameters and equipment selections.
Feo and González-Velarde(1995) proposed a branch and bound algorithm and greedy randomized adaptive search
procedure to optimally assign highway trailers to railcar hitches. Bostel and Dejax(1998) studied the process of loading
containers onto trains in a rail-rail transshipment shunting yard. They proposed both optimal and heuristic methods to
solve it. Corry and Kozan (2006) studied the load planning problem on flat wagons, considering a number of uncertain
parameters including dual-cycle operations and mass distribution. Bruns and Knust(2012) studied an optimization
problem to assign containers to wagons in order to minimize the set-up and transportation costs along with the aim of
maximizing the utilization of the train when a rail terminal is developed.
Boysen and Fliedner (2010) determined the disjunctive working area for each rail crane by dynamic programming,
although they did not consider the job sequence of the rail cranes. They employed simple workflow measures to separate
the crane working areas. Jeong and Kim (2011) dealt with the scheduling problem of a rail crane and the parking position problem of trucks in rail stations located at seaport container terminals. In their scheduling problem, a single crane covers each train and moves in one direction along the train. Pap et al. (2012) developed a branch and bound algorithm to optimally determine the crane scheduling arrangement. They focused on the operation of a single crane, which is used to transfer containers directly between the container yard and the train. Guo et al. (2013) dealt with a scheduling problem of loading and unloading containers between a train and yards. They assumed that multiple gantry cranes are used, a safety distance is required and cranes cannot cross one another. However, the article assumed one-dimensional travel (gantry travel) of the cranes and did not consider the dual-cycle operation and the re-handling issues of transferring containers.
International Journal of Industrial Engineering, 21(3), 141-152, 2014

BAYESIAN NETWORK LEARNING FOR PORT-LOGISTICS-PROCESS KNOWLEDGE DISCOVERY
Riska Asriana Sutrisnowati 1, Hyerim Bae 1 and Jaehun Park 2
1 Pusan National University, Republic of Korea
2 Worcester Polytechnic Institute, United States

A Bayesian network is a powerful tool for various analyses (e.g. inference analysis, sensitivity analysis, evidence propagation, etc.); however, it is first necessary to obtain the Bayesian network structure of a given dataset, and this, being an NP-hard problem, is not an easy task. In this work, an enhanced approach is followed in order to learn a Bayesian network from event logs. In the present study, a genetic-algorithm-based method for generating a Bayesian network is developed and compared with a dynamic programming method. We also present the useful knowledge found using our inference method.

Keywords: Bayesian network learning, mutual information, event logs

1 INTRODUCTION
Currently many businesses are supported by information systems that provide insight into what actually happens in
business process execution. This abundant data has been studied mainly in the growing research area of process mining
(Weijters et al., 2006; Goedertier et al., 2007; Gunther and van der Aalst, 2007; Song et al., 2009; van der Aalst, 2011;
De Weerdt et al., 2012;). There are four perspectives on process mining (van der Aalst, 2011): control flow, organizational
flow, time, and data. Current process mining techniques for the most part can accommodate only one of these. A Bayesian
network, however, can handle two perspectives at once (e.g. control flow and data).
In our previous work (Sutrisnowati et al., 2012), we used a dependency graph, retrieved by Heuristic Miner (Weijters
et al., 2006), and decomposed any cycles found into a non-cycle structure. This methodology, though enabling quick
retrieval of the constructed Bayesian network, has drawbacks relating to the fact that its non-cycle structure is dependent
solely on the structure of the dependency graph. In other words, we have to take note of the fact that the structure is
supported only by the successive occurrences between activities and not by the common information shared. To remedy
this shortcoming, we developed a dynamic programming procedure (Sutrisnowati et al., 2013) based on a mutual information score using the Mutual Information Test (MIT) (De Campos, 2006). The data used to calculate the MIT score was originally not in the form of event logs, and, indeed, MIT was not designed for the business process management field. Therefore, the formula was modified to accommodate the problem at hand. However, dynamic programming, while capable of delivering the optimal score, still lacks performance. Therefore, a genetic algorithm, along with a comparison against dynamic programming, is also presented in this paper.
This paper is organized as follows: Section 2 discusses the background; Sections 3 and 4 introduce the proposed method and a case study from a real-world example, respectively; Section 5 offers a discussion, and finally, Section 6 concludes our work.

2 BACKGROUND
2. 1 Process Structure

The dataset used in the present study was in the form of an event log, denoted L. According to Van der Aalst (2011)’s
proposed hierarchical structure of process execution event logs, a process consists of cases, denoted c, and each case
consists of events, denoted e, such that an event is always related to one case. For instance, consider the tuples ⟨A, B, C, D⟩ and ⟨A, C, B, D⟩, each of which represents an event-log case; in the first, an event A is followed by an event B, then an event C and, eventually, an event D. Since a case c in the event logs contains a sequential process execution, we can assume that the data in the event logs is ordered.
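A small sketch of this event-log representation (the log below is invented): each case is an ordered tuple of activities, and counting directly-follows pairs is the kind of raw statistic from which dependency relations and mutual-information scores can subsequently be computed.

```python
from collections import Counter
from itertools import pairwise  # requires Python 3.10+

# Hypothetical event log L: each case is an ordered tuple of activities
LOG = [
    ("A", "B", "C", "D"),
    ("A", "C", "B", "D"),
    ("A", "B", "C", "D"),
]

def directly_follows(log):
    """Count how often activity x is directly followed by activity y across all cases."""
    counts = Counter()
    for case in log:
        counts.update(pairwise(case))
    return counts

if __name__ == "__main__":
    for (x, y), n in sorted(directly_follows(LOG).items()):
        print(f"{x} -> {y}: {n}")
```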
For convenience, we assume that each event in the event log is represented by one random variable X, so that X_A represents the random variable of an event A. Pa~(X_i) and Pa(X_i) denote the set of candidate parent(s) and actual parent(s) of an event in the event logs, respectively. We can say that Pa(X_i) ⊆ Pa~(X_i) always holds, due to the fact that a candidate parent that makes no higher contribution in the iterative calculation of MI_L(X_i, Pa(X_i)) cannot be considered as the actual parent. For example, an event A has an empty set of candidate parents, since event A is the start event, denoted Pa(X_A) = {},
International Journal of Industrial Engineering, 21(3), 153-167, 2014

A HYBRID ELECTROMAGNETISM-LIKE ALGORITHM FOR A MIXED-MODEL ASSEMBLY LINE SEQUENCING PROBLEM
Hong-Sen Yan, Tian-Hua Jiang, and Fu-Li Xiong
MOE Key Laboratory of Measurement and Control of Complex Systems of Engineering, and School of Automation,
Southeast University, Nanjing, China
Corresponding author’s e-mail: {Hong-Sen Yan, hsyan@seu.edu.cn}

With the growth in customer demand diversification, research on mixed-model assembly lines has been given increasing attention in the field of management. Sequencing decisions are crucial for managing mixed-model assembly lines. To improve production efficiency, this study focuses on the product sequencing problem with a skip utility work strategy and sequence-dependent setup times, and establishes its mathematical model, whereby the idle cost, the utility cost and the setup cost are optimized simultaneously. A necessary condition for the skip policy of the system is set, and a lower bound on the utility work cost is given and theoretically proved. Strong NP-hardness of the problem is confirmed. Addressing the main features of the problem, a hybrid EMVNS (electromagnetism-like mechanism-variable neighborhood search) algorithm is developed. To enhance the local search ability of EM, a VNS algorithm is employed and five neighborhood structures are designed. With the aid of the VNS algorithm, a fine neighborhood search around the best individual becomes available, thus improving the solution to a certain extent. Simulation results demonstrate that the algorithm is feasible and valid.
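As a sketch of the local-search ingredient of such a hybrid (the swap and insertion moves below are generic VNS-style neighborhoods, and the cost function is a crude placeholder rather than the paper's idle, utility and setup cost model):

```python
import random

def swap_neighbor(seq):
    """Exchange the products at two random positions."""
    i, j = random.sample(range(len(seq)), 2)
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return s

def insertion_neighbor(seq):
    """Remove one product and reinsert it at another position."""
    s = list(seq)
    item = s.pop(random.randrange(len(s)))
    s.insert(random.randrange(len(s) + 1), item)
    return s

def variable_neighborhood_descent(seq, cost, neighborhoods, tries=200):
    """Cycle through the neighborhoods, restarting from the first whenever an improvement is found."""
    best, best_cost = list(seq), cost(seq)
    k = 0
    while k < len(neighborhoods):
        improved = False
        for _ in range(tries):
            cand = neighborhoods[k](best)
            if cost(cand) < best_cost:
                best, best_cost, improved = cand, cost(cand), True
                break
        k = 0 if improved else k + 1
    return best, best_cost

if __name__ == "__main__":
    # Placeholder cost: number of adjacent identical models (a crude proxy for work overload)
    cost = lambda s: sum(a == b for a, b in zip(s, s[1:]))
    seq = ["M1", "M1", "M2", "M2", "M1", "M3", "M3"]
    print(variable_neighborhood_descent(seq, cost, [swap_neighbor, insertion_neighbor]))
```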

Significance: This paper presents a hybrid EMVNS algorithm to solve the product sequencing problem of a
mixed-model assembly line with skip utility work strategy and sequence-dependent setup times. The simulation results
demonstrate that the proposed method is feasible and valid.

Keywords: Scheduling, Mixed-model assembly line sequencing, Skip utility work strategy, Sequence-dependent setup
times, Hybrid EMVNS algorithm

1. INTRODUCTION
To cope with the diversification of customer demand, mixed-model assembly lines have gained increasing importance in the field of management. A mixed-model assembly line (MMAL) is a type of production line where a variety of product
models similar in product characteristics are assembled. Two important decisions for managing mixed-model assembly
lines are balancing and sequencing. Sequencing is a problem of determining a sequence of the product models, whereby
a major emphasis is placed on maximizing the line utilization. In MMAL, products are transported on the conveyor belt
and operators move along with the belt while working on a product. An operator can work on a product only when it is
within his work zone limited by upstream and downstream boundaries. Whenever multiple labor-intensive models, e.g.,
all having an elaborate option, follow each other in direct succession at a specific station, a work overload situation
occurs, which means that the operator cannot finish work on a product before it leaves his station. Many outstanding
results have been achieved in this field. Okamura and Yamashina (1979) developed a sequencing method for
mixed-model assembly lines to minimize line stoppage. Yano and Rachamadugu (1991) addressed the problem of
sequencing mixed-model assembly lines to minimize work overload. Miltenburg and Goldstein (1991) developed
heuristic approaches to smooth production times by minimizing loading variation. Kim and Cho (2003) studied the
sequencing problem in a mixed-model final assembly line with multiple objectives by using simulated annealing
algorithm. Zhao and Ohno (1994, 1997) proposed a branch-and-bound method for finding an optimal or sub-optimal
sequence of mixed models that minimizes the total conveyor stoppage time. Chutima et al. (2003) applied fuzzy genetic
algorithm to the sequencing problem of mixed-model assembly line with processing time. Simaria and Vilarinho (2004)
presented an iterative genetic algorithm-based procedure for the mixed-model assembly line balancing problem with
parallel workstations to maximize the production rate of the line for a pre-determined number of operators. Akpinar and
Baykasoğlu (2014) proposed a multiple colony hybrid bee algorithm to solve the mixed-model assembly line balancing
problem with setups.
To simultaneously optimize the idle and overload costs, Sarker and Pan (1998) studied the MMAL design problem in the cases of closed and open workstations. Yan et al. (2003) presented three heuristic methods combining tabu search
with quick schedule simulation for optimizing the integrated production planning and scheduling problem on
automobile assembly lines to minimize the idle and setup cost. Moghaddam and Vahed (2006) addressed a
multi-objective mixed assembly line sequencing problem to optimize the costs of utility work, productivity and setup
simultaneously. Tsai (1995) studied a class of assembly line sequencing problem to minimize the utility work and the
risk of line stop simultaneously. Fattahi and Salehi (2009) addressed a mixed-model assembly line sequencing
optimization problem with variable production cycle time to minimize the idle time and utility work costs. Bard et al.
(1994) developed a mathematical model that involved two objective functions in the mixed model assembly line
sequencing (MMALS): minimizing the overall line length and keeping a constant rate of part usage. They combined the
two objectives using a weighted sum and suggested a tabu search algorithm. Mohammadi and Ozbayrak (2006)
International Journal of Industrial Engineering, 21(3), 168-178, 2014

A VARIANT PERSPECTIVE TO PERFORMANCE APPRAISAL SYSTEM: FUZZY C-MEANS ALGORITHM
Coskun Ozkan a, Gulsen Aydin Keskin b,*, Sevinc Ilhan Omurca c
coskun_ozkan@yahoo.com, gaydin@kocaeli.edu.tr, silhan@kocaeli.edu.tr
a Yıldız Technical University, Mechanical Engineering Faculty, Industrial Engineering Department, Istanbul – Turkey, Tel: +90 212 383 2865, Fax: +90 212 383 2866
b Kocaeli University, Engineering Faculty, Industrial Engineering Department, Umuttepe Campus, Kocaeli – Turkey
c Kocaeli University, Engineering Faculty, Computer Engineering Department, Umuttepe Campus, Kocaeli – Turkey

Performance appraisal and the evaluation of employees for rewards are important issues in human resource management. In performance appraisal systems, ranking scales and 360-degree feedback are the most commonly used evaluation methods, in which the evaluator gives a score for each criterion to assess all employees. Ranking scales are relatively simple assessment methods. Although ranking scales allow management to complete the evaluation process in a short time, they have some disadvantages. In addition, although the various performance appraisal methods evaluate employees in different ways, the employees receive scores for each evaluation criterion and their performance is then judged according to the total scores.
In this paper, the fuzzy c-means (FCM) clustering algorithm is applied as a new method to overcome the common disadvantages of the classical appraisal methods and to help managers make better decisions in a fuzzy environment. The FCM algorithm not only selects the most appropriate employee(s), but also clusters them with respect to the evaluation criteria. To explain the FCM method clearly, a performance appraisal problem is discussed and employees are clustered both by the proposed method and by the conventional method. Finally, the results obtained by the current system and by FCM are presented comparatively. This comparison concludes that, in performance appraisal systems, FCM is more flexible and satisfactory compared to the conventional method.
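A compact sketch of the standard fuzzy c-means updates applied to made-up appraisal scores (the criteria, the scores and the number of clusters are assumptions chosen purely for illustration):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Standard FCM: alternate centroid and membership updates until convergence.
    X: (n_samples, n_criteria) score matrix; returns (memberships U, centroids V)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # each employee's memberships sum to 1
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]                 # fuzzily weighted centroids
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)) * (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))
        if np.abs(U_new - U).max() < 1e-6:
            U = U_new
            break
        U = U_new
    return U, V

if __name__ == "__main__":
    # Hypothetical scores of 5 employees on 3 criteria (e.g. quality, teamwork, punctuality)
    scores = np.array([[9, 8, 7], [8, 9, 9], [4, 5, 3], [3, 4, 4], [7, 8, 6]], float)
    U, V = fuzzy_c_means(scores, c=2)
    print(np.round(U, 2))   # degree of membership of each employee in each cluster
```

The soft memberships, rather than a single total score, are what allow an employee to belong partly to more than one performance group.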

Key words: Performance appraisal, fuzzy c – means algorithm, fuzzy clustering, multi criteria decision making,
intelligent analysis.

1. INTRODUCTION
Employee performances such as capability, knowledge, skill, and other abilities are significantly important for the
organizations (Gungor et al., 2009). Hence, accurate personnel evaluation has a significant role in the success of an
organization. Evaluation techniques that allow companies to identify the best employee from the personnel are the key
components of human resource management (Sanyal and Guvenli, 2004). However, this process is complicated owing to human nature. The objective of an evaluation process depends on appraising the differences between employees, and
estimating their future performances. The main goal of a manager is to attain ranked employees who have been
evaluated with regard to some criteria. Therefore, the development of efficient performance appraisal methods has
become a main issue. Some authors define the performance appraisal problem as an unstructured decision problem, that
is, no processes or rules have been defined for making decisions (Canos and Liern, 2008).
Previous research has shown that performance appraisal information is used especially in making decisions
requiring interpersonal comparisons (salary determination, promotion, etc.), decisions requiring personal comparison
(feedback, personal educational need, etc.), decisions orientated to the continuation of the system (target determination,
human force planning, etc.) and documentation. It is clear that in a conventional way, there are methods and tools to do
those tasks (Gürbüz and Albayrak, 2014); however, each traditional method has certain drawbacks. In this paper, fuzzy
c – means (FCM) clustering algorithm is proposed to make a more efficient performance evaluation by removing these
drawbacks.
The proposed method enables managers to group their employees with respect to several criteria. Thus, managers can determine the most appropriate employee(s) in cases of promotion, salary determination, and so on. In addition, in the case of personal educational requirements, the proposed method lets them know which employee(s) need training.
This paper proposes an alternative approach to the performance appraisal system. After a brief review of performance
appraisal in Section 2, FCM algorithm is described in Section 3. A real-life problem is solved both by FCM and the
conventional method to evaluate their performances and the findings are discussed in Section 4. Finally, this paper
concludes with a discussion and a conclusion.



International Journal of Industrial Engineering, 21(4), 179-189 , 2014

AN EARLY WARNING MODEL FOR THE RISK MANAGEMENT OF GLOBAL LOGISTICS SYSTEMS BASED ON PRINCIPAL COMPONENT REGRESSION
Jean Betancourt Herrera 1, Yang-Byung Park 2
Department of Industrial and Management Systems Engineering
College of Engineering, Kyung Hee University
Seocheon-dong, Giheung-gu, Yongin-si, Gyeonggi-do 446-701, Republic of Korea
1 jpierrebetancourt@gmail.com, 2 ybpark@khu.ac.kr (corresponding author)

This paper proposes an early warning model for the risk management of global logistics systems based on principal
component regression (PCR) that predicts a country’s global logistics system risk, identifies risk sources with
probabilities, and suggests ways of risk mitigation. Various quantitative and qualitative global logistics indicators are
utilized for monitoring the global logistics system. The Enabling Trade Index is used to represent the risk level of a
country’s global logistics system. Principal component analysis is applied to identify a small set of global logistics
indicators that account for a large portion of the total variance in the original set. An empirical study is carried out to
validate the predictive ability of PCR using datasets of years 2010 and 2012 published by the World Economic Forum.
Furthermore, the superiority of PCR is evaluated by comparing its performance with that of a neural network with
respect to the correlation coefficient and coincident rate. Finally, a real-life example of the South Korean global
logistics system is presented.
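A minimal principal component regression sketch on synthetic indicator data (the indicators, sample size and response below are synthetic stand-ins, not the World Economic Forum datasets analyzed in the paper):

```python
import numpy as np

def pcr_fit_predict(X_train, y_train, X_test, n_components=3):
    """Principal component regression: project standardized indicators onto the
    leading principal components, then fit ordinary least squares in that space."""
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    Z = (X_train - mu) / sigma
    # principal directions = right singular vectors of the standardized data
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                       # loading matrix (features x components)
    T = Z @ P                                     # component scores
    beta, *_ = np.linalg.lstsq(np.c_[np.ones(len(T)), T], y_train, rcond=None)
    T_test = ((X_test - mu) / sigma) @ P
    return np.c_[np.ones(len(T_test)), T_test] @ beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 10))                 # 60 countries x 10 logistics indicators (synthetic)
    y = X[:, :3] @ np.array([0.5, -0.3, 0.8]) + rng.normal(scale=0.1, size=60)  # synthetic risk index
    pred = pcr_fit_predict(X[:50], y[:50], X[50:], n_components=4)
    print(np.round(np.corrcoef(pred, y[50:])[0, 1], 3))   # out-of-sample correlation
```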

Keywords: early warning model, global logistics system, risk management, principal component regression, neural
network.

1. INTRODUCTION
Global logistics is a collection of moving and storage activities required for trade between countries. In general, global
logistics is much more complicated and difficult to perform than domestic logistics because the goods flow over borders
and thus take a long time to transport. Complex administrative processes are involved, and more than one mode of
transportation is required (Shamsuzzoha and Helo, 2012). The components of a typical global logistics system of a
country are tariff, customs, documentations, transport infrastructure and services, information and communication
services, regulations, and security (Gourdin, 2006).
As global trade continues to expand, the sustainable global logistics system of a country plays a crucial role in
achieving global competitiveness by shortening the logistics process time, reducing the logistics cost, and securing interoperability between different logistics sectors (Yahya et al., 2013). The establishment of a sustainable global logistics system requires a large investment from the government and takes multiple years. If a country cannot
provide traders with a satisfactory global logistics system, it will lose valuable customers and experience a significant
drop in trade. Therefore, it is very important for a country to predict its global logistics system risk in advance, identify
risk sources where improvements are most needed, and investigate effective ways for risk mitigation.
An early warning system is responsible for monitoring the system conditions and determining the issue of a warning
signal in advance through the analysis of various system indicators. Thus, an early warning system is an effective tool
for the operation of a sustainable global logistics system for a country. An early warning system may contribute to
providing relevant government ministries with strong evidence for improving certain areas of the global logistics
system, especially when allocating limited resources or establishing various global logistics policies.
A few researchers have studied the development of risk early warning systems. Fordyce et al. (1992) proposed a
method for monitoring the manufacturing flow of semi-conductor facilities in a logistics management system. Xie et al.
(2009) developed an early warning and control management process for inner logistics risk in small manufacturing
enterprises based on label-card system equipped with RFID, through which an enterprise can monitor the quantity and
quality of work in process in a dynamic manner. Xu et al. (2010) presented the early warning model for food supply
chain risk based on principal component analysis and logistics regression. Li et al. (2010) presented a novel framework
for early warning and proactive control systems in food supply chain networks that combine expert knowledge and data
mining methods. Feng et al. (2008) proposed a simple early warning model with thresholds for four indicators as a
subsystem of the decision support system for price risk management of the vegetable supply chain in China. Xia and
Chen (2011) proposed a decision-making model for optimal selection of risk management methods and tools based on



International Journal of Industrial Engineering, 21(4), 190-208, 2014

AGILE AND FLEXIBLE SUPPLY CHAIN NETWORK DESIGN UNDER UNCERTAINTY
Morteza Abbasi, Reza Hosnavi, Reza Babazadeh
Department of Management and Soft Technologies, Malek Ashtar University of Technology, P.O. Box 1774/15875, Tehran, Iran

The agile supply chain has proved its efficiency and capability in dealing with the disturbances and turbulence of today’s competitive markets. This paper addresses strategic and tactical level decisions in agile supply chain network design (SCND) under interval data uncertainty. In this study, an efficient mixed integer linear programming (MILP) model is developed that considers the key characteristics of an agile supply chain, such as direct shipments, outsourcing, different transportation modes, discounts, alliances (process and information integration) among opened facilities, and the maximum waiting time of customers for deliveries. In addition, in the proposed model the capacities of facilities are treated as decision variables, whereas they are often assumed to be input parameters. Then, the robust counterpart of the presented model is developed, according to recent advances in robust optimization theory, to deal with the inherent uncertainty of the input parameters. Computational results illustrate that the proposed robust optimization model has a high degree of responsiveness in dealing with uncertainty compared with the deterministic model. Therefore, the robust model can be applied as a powerful tool in agile and flexible SCND, which faces different risks in competitive environments.
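To illustrate what a robust counterpart means under interval data, consider a single constraint sum_j a_j x_j <= b in which each coefficient is only known to lie in [a_j - d_j, a_j + d_j]. For non-negative decision variables, the worst case over this box is attained at the upper endpoints, so the deterministic constraint is replaced by sum_j (a_j + d_j) x_j <= b. The toy check below (coefficients and bounds are invented; the paper's full MILP and more refined budget-of-uncertainty counterparts are not reproduced) compares the nominal and worst-case left-hand sides:

```python
def box_robust_lhs(a_nominal, a_dev, x):
    """Worst-case left-hand side of sum_j a_j * x_j over the box
    a_j in [a_nominal_j - a_dev_j, a_nominal_j + a_dev_j], assuming x_j >= 0."""
    return sum((a + d) * xi for a, d, xi in zip(a_nominal, a_dev, x))

if __name__ == "__main__":
    a, dev, b = [2.0, 3.0, 1.5], [0.2, 0.5, 0.1], 40.0   # hypothetical unit costs and deviations
    x = [5.0, 6.0, 4.0]                                   # candidate shipment quantities
    nominal = sum(ai * xi for ai, xi in zip(a, x))
    robust = box_robust_lhs(a, dev, x)
    print("nominal lhs:", nominal, "robust lhs:", robust, "feasible under worst case?", robust <= b)
```

A solution accepted under the robust constraint stays feasible for every realization of the coefficients inside the interval, which is the sense in which the robust model trades some nominal optimality for protection against uncertainty.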

Keywords: Robust optimization, Agile supply chain network design, Flexibility, Outsourcing, Responsiveness.

1. INTRODUCTION
Today, high fluctuations and disturbances in business environments have caused supply chains to seek an effective way to deal with the undesirable uncertainties that affect overall supply chain performance.
Supply chain network design (SCND) decisions, the most important strategic level decisions in supply chain management, are concerned with the complex interrelationships between various tiers, such as suppliers, plants, distribution centers and customer zones, as well as with determining the number, location and capacity of facilities to meet customer needs effectively. Supply chain management integrates the interrelationships between entities by creating alliances, i.e. information-system integration and process integration, between them in order to improve the response to customers in various aspects, such as higher product variety and quality, lower costs and quicker response.
Typically, risks in SCND are classified into two categories: operational or internal risk factors, and disruption or external risk factors. Operational risks are those that occur because of internal factors in the supply chain, such as improper coordination between entities in the various tiers, and include production risk, distribution risk, supply risk and demand risk. In contrast, disruption risks result from external risk factors arising from the interaction between the supply chain and its environment, such as natural disasters, exchange rate fluctuations and terrorist attacks (Singh et al., 2011).
The increasing use of outsourcing, through sub-contracting part of customer demand, as well as the reduction in product life cycles caused by customers' enthusiasm for fashion goods rather than commodities, has increased the uncertainty in competitive environments. Therefore, the supply chain network (SCN) should be designed in a way that can withstand such uncertainties. Chopra and Sodhi (2004) mentioned that organizations should consider uncertainty in its various forms in planning and supply chain management in order to deal with its destructive and burdensome effects on the supply chain network.
One of the vital challenges for organizations in today’s turbulent markets is the need to respond to customer needs of varying volumes and vast variety quickly and efficiently (Amir, 2011). Agility, in its various contexts, is the most popular approach enabling organizations to face unstable and highly volatile customer demands. The most important concepts of agility are described in the next section. It should be mentioned here that agility concepts should be applied in the upstream and downstream relationships of supply chain management, involving supplier selection, logistics, information systems, etc. Since SCND is the most important strategic level decision affecting the overall performance of the supply chain, it is necessary to consider agility concepts such as response to customers within a maximal allowable time, direct shipment, alliances (i.e., information and process integration) between entities in different echelons, discounts to achieve a competitive supply chain, outsourcing and the use of different transportation modes to achieve flexibility, as well as safety stock to improve responsiveness. It is evident that considering agility concepts in SCND plays a significant role in the agility of the overall supply chain. To date, many researchers have tried to describe the most important factors of agile supply chain management conceptually, but this context has been omitted from the mathematical modeling literature, especially in the supply chain network design area.



International Journal of Industrial Engineering, 21(4), 209-230, 2014

SIMULATION OPTIMIZATION OF FACILITY LAYOUT DESIGN PROBLEM WITH SAFETY AND ERGONOMICS FACTORS
Ali Azadeh 1, Bita Moradi
1 Department of Industrial Engineering, College of Engineering, University of Tehran, Tehran, Iran
Corresponding author e-mail address: aazadeh@ut.ac.ir

This paper presents an integrated fuzzy simulation-fuzzy data envelopment analysis (FDEA)-fuzzy analytic hierarchy
process (FAHP) algorithm for optimization of flow shop facility layout design (FSFLD) problem with safety and
ergonomics factors. Almost all FSFLD problems are solved and optimized without safety and ergonomics factors. At
first, safety and ergonomics factors are retrieved from a standard questionnaire. Then, feasible layout alternatives are
generated by a software package. Third, FAHP is used for weighting non-crisp ergonomics and safety factors in
addition to maintainability, accessibility and flexibility (or qualitative) indicators. Fuzzy simulation is subsequently used to incorporate the ambiguity associated with processing times in the flow shop by considering all
generated layout alternatives with uncertain inputs. The outputs of fuzzy simulation or non-crisp operational indicators
are average waiting time in queue, average time in system and average machine utilization. Finally, FDEA is used for
finding the optimum layout alternatives with respect to ergonomics, safety, operational, qualitative and dependent
indicators (distance, adjacency and shape ratio). The integrated algorithm provides a comprehensive analysis on the
FSFLD problems with safety and ergonomics issues. The results have been verified and validated by DEA, principal
component analysis and numerical taxonomy. The unique features of this study are its ability to deal with multiple non-crisp inputs and outputs, including ergonomics and safety factors. It also uses fuzzy mathematical programming for
optimum layout alternatives by considering safety and ergonomics factors as well as other standard indicators.
Moreover, it is a practical tool and may be applied in real cases by considering safety and ergonomics issues within
FSFLD problems.
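As an indication of how a DEA ranking of layout alternatives works (this is the crisp input-oriented CCR model in multiplier form, solved with SciPy's linear programming routine on hypothetical data; the paper's fuzzy DEA extension is not implemented here):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o (multiplier form), solved as an LP.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m_in = X.shape
    s = Y.shape[1]
    # decision variables: output weights u (s of them) followed by input weights v (m_in of them)
    c = np.concatenate([-Y[o], np.zeros(m_in)])                  # maximize u . y_o
    A_ub = np.hstack([Y, -X])                                    # u . y_j - v . x_j <= 0 for every unit j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)    # normalization v . x_o = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m_in))
    return -res.fun

if __name__ == "__main__":
    # Hypothetical layout alternatives: inputs = (avg waiting time, avg time in system),
    # outputs = (machine utilization, ergonomics score)
    X = np.array([[12.0, 30.0], [9.0, 26.0], [15.0, 35.0], [10.0, 28.0]])
    Y = np.array([[0.82, 7.1], [0.88, 6.5], [0.75, 8.0], [0.85, 7.8]])
    for o in range(len(X)):
        print(f"layout {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```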

Keywords: Simulation Optimization; Flow Shop Facility Layout Design; Fuzzy DEA; Safety; Ergonomics

Motivation and Significance: Almost all FSFLD problems are solved and optimized without safety and ergonomics
factors. Moreover, standard factors related to operational and layout dependent issues are only considered in such
problems. There are usually missing data, incomplete data or lack of data with respect to FSFLD problems in general
and safety and ergonomics issues in particular. This means data could not be collected and analyzed by deterministic or
stochastic models and new approaches for tackling such problems are required. This gap motivated the authors to
develop a unique simulation optimization algorithm to handle such gaps in FSFLD problems.
The integrated fuzzy simulation-fuzzy DEA-fuzzy AHP algorithm presents exact solutions to FSFLD problems with safety and ergonomics issues, whereas previous studies present incomplete and inexact alternatives. Also, it
provides a comprehensive analysis on the FSFLD problems with uncertainty by incorporating non-crisp ergonomics and
safety indicators in addition to fuzzy operational, dependent and qualitative indicators. Moreover, it provides complete
and exact rankings of the plant layout alternatives with uncertain and fuzzy inputs. The superiority and effectiveness of
the proposed integrated algorithm is compared with previous DEA-Simulation-AHP, AHP-DEA, AHP-principal
component analysis (PCA), and numerical taxonomy (NT) methodologies through a case study. The unique features of
the proposed integrated algorithm are the ability of dealing with multiple fuzzy inputs and outputs (ergonomics and
safety in addition to operational, qualitative and dependent). It also optimizes layout alternatives through fuzzy DEA.
Third it is a practical approach due to considerations of ergonomics, safety, operational and dependent aspects of the
manufacturing process within FSFLD problems.

1. INTRODUCTION
Facility layout design (FLD) is a critical issue for productivity and profitability when redesigning, expanding, or designing manufacturing systems, e.g. flow shop systems (FSFLD). Zhenyuan et al. (2013) showed that a lean facility layout design can increase productivity efficiency. Also, Niels Henrik Mortensen et al.

                                                                                                                       

 

 
International Journal of Industrial Engineering, 21(4), 231-242, 2014

A CONCURRENT APPROACH FOR FACILITY LAYOUT AND AMHS DESIGN IN SEMICONDUCTOR MANUFACTURING
Dongphyo Hong 1, Yoonho Seo 1,*, Yujie Xiao 2
1 School of Industrial Management Engineering, Korea University, Seoul, Korea
2 Department of Logistic Management, School of Marketing & Logistic Management, Nanjing University of Finance & Economics, Nanjing, People's Republic of China
* Corresponding author: yoonhoseo@korea.ac.kr

This paper presents a concurrent approach to solve the design problem of facility layout and automated material
handling system (AMHS) for semiconductor fabs. The layout is composed of bays which are unequal-area blocks with
equal height but flexible width. In particular, the bay width and locations of a shortcut, bays, and their stockers, which
are major fab design considerations, are concurrently determined in this problem. We developed a mixed integer
programming (MIP) model for this problem to minimize the total travel distance (TTD) based on unidirectional inter-
bay flow and a bidirectional shortcut. To solve large-sized problems, we developed a five-step heuristic algorithm to
exploit and explore the solution space based on the MIP model. The computational results show that the proposed algorithm is able to find optimal solutions for small-sized problems and to solve large-sized problems within an acceptable time.
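A toy calculation of the total travel distance (TTD) objective on a unidirectional loop with one bidirectional shortcut (stocker coordinates, shortcut endpoints and the bay-to-bay flow matrix are hypothetical, and the geometry is simplified relative to the MIP model in the paper):

```python
# Toy TTD evaluation. Stockers sit on a unidirectional loop; a single bidirectional
# shortcut joins two loop positions. All numbers below are assumed for illustration.

LOOP_LEN = 200.0
SHORTCUT = (40.0, 140.0)      # loop positions joined by the shortcut
SHORTCUT_LEN = 30.0           # physical length of the shortcut link

def loop_dist(a, b):
    """Travel distance from a to b respecting the unidirectional loop."""
    return (b - a) % LOOP_LEN

def travel(a, b):
    """Shortest feasible distance: plain loop travel, or loop travel to a shortcut
    entrance, across the (bidirectional) shortcut, then loop travel onward to b."""
    p, q = SHORTCUT
    return min(
        loop_dist(a, b),
        loop_dist(a, p) + SHORTCUT_LEN + loop_dist(q, b),
        loop_dist(a, q) + SHORTCUT_LEN + loop_dist(p, b),
    )

def total_travel_distance(stocker_pos, flow):
    """TTD = sum over bay pairs of flow volume times the distance between their stockers."""
    return sum(f * travel(stocker_pos[i], stocker_pos[j]) for (i, j), f in flow.items())

if __name__ == "__main__":
    stockers = {0: 10.0, 1: 60.0, 2: 120.0, 3: 170.0}          # bay -> stocker position
    flow = {(0, 2): 50, (2, 0): 20, (1, 3): 35, (3, 1): 15}    # lots moved per shift
    print("TTD:", total_travel_distance(stockers, flow))
```

In the concurrent design problem, the stocker positions, bay widths and shortcut location are all decision variables rather than the fixed inputs used in this sketch.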

Keywords: Facility Layout, Bay Layout, AMHS Design, Concurrent Approach, Semiconductor Manufacturing

1. INTRODUCTION
In a 300 mm wafer fabrication facility (fab), a wafer typically travels about 13–17 km and visits about 250 pieces of process equipment during processing (Agrawal and Heragu, 2006). An effective facility layout and material handling system design can
significantly reduce the total travel distance of wafers. As stated by Tompkins et al. (2010), 20–50% of the total
operating expenses within manufacturing are attributed to material handling. An efficient design of facility layout and
material handling can reduce operational cost by at least 10–30%. Therefore, two challenges are presented to a fab
designer: (1) facility layout; (2) automated material handling system (AMHS) design (Montoya-Torres, 2006).
This paper focuses on the fab design comprising a bay layout and AMHS design with a spine configuration, which
usually has a unidirectional flow and bidirectional shortcuts, as presented by Peters and Yang (1997). Here, each bay is
composed of equipment that performs similar processes and forms a rectangular shape. However, they approached the
two sub-problems in a sequential manner, which may result in a locally optimal solution. In this study, a concurrent approach is proposed to find the optimal solution of the two sub-problems simultaneously, as shown in Figure 1.

Figure 1. Layout example using a shortcut
Figure 2. Representation of the problem

The facility layout problem (FLP) is to determine the physical placement of departments within the facility (Kusiak
and Heragu 1987). In semiconductor manufacturing, the layout design usually has a bay structure with a spine
configuration to enhance the utilization of process equipment and to facilitate frequent maintenance (Agrawal and Heragu, 2006).
Peters and Yang (1997) suggested a methodology for an integrated layout and AMHS design which enables spine and
perimeter configurations in a semiconductor fab. Azadeh and Izadbakhsh (2008) presented an analytic hierarchy process
and principal component analysis to solve the FLP. Ho and Liao (2011) proposed a two-row dual-loop bay layout. They
International Journal of Industrial Engineering, 21(5), 243-252, 2014

ASSISTING WHEELCHAIR USERS ON BUS RAMPS: A POTENTIAL CAUSE OF LOW BACK INJURY AMONG BUS DRIVERS
Piyush Bareria 1, Gwanseob Shin 2
1 Department of Industrial and Systems Engineering, State University of New York at Buffalo, Buffalo, New York, USA
Corresponding author’s e-mail: pbareria@buffalo.edu
2 School of Design and Human Engineering, Ulsan National Institute of Science and Technology, Ulsan, Korea

Manual assistance to wheelchair-users while boarding and disembarking a bus may be an important risk factor for
musculoskeletal disorders of bus drivers, but no study has yet assessed biomechanical loads associated with the manual
assist operations. In this study, off-duty bus drivers simulated wheelchair-user assisting operations using forward and
backward strategies for boarding and disembarking ramps. Low-back compression and shear forces, shoulder moments and
percent population capable of generating required shoulder moment were estimated using the University of Michigan
Three-Dimensional Static Strength Prediction Program. L4-L5 compression force ranged from 401.6 N for forward
boarding to 2169.3 N for backward boarding (pulling), and from 2052.4 N for forward disembarking to 434.2 N for
backward disembarking (pushing). The shoulder moments were also consistently higher for the pushing tasks. It is
recommended that bus drivers adopt backward boarding and forward disembarking strategies to reduce the biomechanical
loads on the low back and shoulder.

Keywords: musculoskeletal injury, bus driver, wheelchair pushing/pulling, bus access ramp

(Received on September 9, 2012; Accepted on September 15, 2014)

1. INTRODUCTION

Bureau of Labor Statistics data (BLS, 2009) indicate that among bus drivers (transit and intercity), back injuries and
disorders constitute about 25% of reported cases of nonfatal work-related injuries and illnesses resulting in days away from
work. Data from the same year reports a work-related nonfatal back injury/illness incidence rate (IR) of 12.3 per 10,000 full
time bus drivers, which was greater than that of construction laborers (IR = 10.6). A number of studies have also evaluated
the prevalence of work-related musculoskeletal disorders in the upper body quadrant (neck, upper back, shoulder, elbow,
wrist, etc.) in drivers of different types of vehicles (Greiner & Krause, 2006; Langballe, Innstrand, Hagtvet, Falkum, &
Aasland, 2009; Rugulies & Krause, 2008). The prevention of musculoskeletal injuries of bus drivers and associated
disability has become a major challenge for employers, insurance carriers, and occupational health specialists.
Physical risk factors that have been associated with the high prevalence of work-related musculoskeletal disorders of
drivers include frequent materials handling activities as well as prolonged sitting and exposures to whole body vibration
(WBV) (Magnusson, Pope, Wilder, & Areskoug, 1996; Szeto & Lam, 2007). Specifically, bus drivers in public
transportation may also be exposed to the risks of heavy physical activities from manual assisting of wheelchair users.
Bus drivers of public transit systems are mandated by law to assist a person in a wheelchair to board and disembark buses if needed. Sub-section 161(a) of the Code of Federal Regulations on transportation services for individuals with
disability (49 CFR 37, U.S.) requires “public and private entities providing transportation services maintain in operative
condition those features of facilities and vehicles that are required to make the vehicles and facilities readily accessible to
and usable by individuals with disabilities”. In addition, 49 CFR 37 sub-section 165 (f) states that “where necessary or upon
request, the entity's personnel shall assist individuals with disabilities with the use of securement systems, ramps and lifts.
If it is necessary for the personnel to leave their seats to provide this assistance, they shall do so.” With an estimated 1.6
million non-institutionalized wheelchair users in U.S. of which about 90% are hand-rim propelled or so-called manual
wheelchairs (Kaye, Kang, & LaPlante, 2005), bus drivers are likely to assist wheelchair users during their daily shift which
could involve manual lifting, pushing and/or pulling the occupied wheelchair.
Pushing a wheelchair could cause overexertion and lead to injury since even ramps that comply with the Americans
with Disabilities Act (ADA) standards can be difficult to climb for wheelchair pushers of any strength (Kordosky, Perkins,

 
International Journal of Industrial Engineering, 21(5), 253-270, 2014

OPEN INNOVATION STRATEGIES OF SMARTPHONE MANUFACTURERS: EXTERNAL RESOURCES AND NETWORK POSITIONS
Jiwon Paik1, Hyun Joon Chang2
1,2 Graduate School of Innovation and Technology Management
Korea Advanced Institute of Science and Technology
Daejeon 305-701, South Korea
Corresponding author’s e-mail: jiwon.paik@kaist.ac.kr

A smartphone is not only a product made up of various integrated components, but also a value-added service. As the
smartphone ecosystem has evolved within the value chain of the ICT industry, smartphone manufacturers can benefit from
open innovation, such as by making use of external resources and collaboration networks. However, most studies on
smartphones have focused on aspects of product innovation, such as functional differentiation, usability, and market
penetration rather than on innovation networks. The aim of this study is to examine how the open innovation approaches and
strategic fit of smartphone manufacturers function in delivering innovation outcomes and business performance. This
research examines the relationship between seven smartphone manufacturers and their collaboration partners during a recent
three-year period, by analyzing four specific areas: hardware components, software, content services, and
telecommunications.

Keywords: smartphone, open innovation, external resources, network positions

(Received on September 7, 2012; Accepted on August 7, 2014)

1. INTRODUCTION

Information and communications technology (ICT) firms are now experiencing a new competitive landscape that is
redefining and eroding the boundaries between software, hardware, and services. In 2000, the first device marketed as a
‘smartphone’ was released by Ericsson; it was the first to use an open operating system and to combine the functions of a
mobile phone and a personal digital assistant (PDA). Then in 2007, the advent of the iPhone redefined the smartphone
product category, with the convergence of traditional mobile telephony, Internet services, and personal computing
representing a paradigm shift for this emerging industry (Kenney and Pon 2011).
Smartphones are becoming increasingly popular: smartphone sales to end users accounted for 19 percent of total mobile communications device sales in 2010, a 72.1 percent increase over 2009. In comparison, worldwide mobile device sales to end users increased by 31.8 percent during the same period.a
The smartphone industry is undergoing rapid and seismic change. Within two years, the iPhone went from nothing to
supplying 30% of Apple's total revenue. Indeed, the iPhone has been the best performer in terms of global sales, capturing
more than 14% of the market in 2009, whereas Nokia, once the smartphone industry leader, has seen its market share fall
dramatically. Stephen Elop, the former chief executive officer of Nokia, expressed a sense of crisis in February 2011: “We are
standing on a burning platform.” Figure 1 shows the global market share of eight smartphone manufacturers, and provides an
indication of the fierce competition in this industry.
During the remarkable flourishing of the smartphone industry, most theoretical analysis has strongly emphasized either
the product or the service aspects of smartphones, such as their usability, diffusion, software development, and service
provision (Funk 2006; Kenney and Pon 2011; Eisenmann et al. 2011; Doganova and Renault 2011). In contrast, the
competitive management aspects, such as integration or collaboration, have been relatively neglected.
The purpose of this paper is to analyze the open innovation strategy of smartphone manufacturers who have experienced
sudden performance changes; examples of such open innovation strategies include managing complementary assets and
integrating or collaborating with other companies. This paper examines the impact of utilizing external resources on the
innovation output, performance, and network position of smartphone manufacturers; and also formulates and tests several
hypotheses, by means of theoretical analyses and empirical research.
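As a rough, hypothetical illustration of how network positions of the kind examined here can be quantified, the short Python sketch below computes degree and betweenness centrality for a toy collaboration network; the firm names, edges, and the use of the networkx library are illustrative assumptions, not the study's dataset or method.

import networkx as nx

# Toy collaboration network; nodes and edges are purely illustrative.
G = nx.Graph()
G.add_edges_from([
    ("ManufacturerA", "ChipSupplier1"),
    ("ManufacturerA", "OSVendor1"),
    ("ManufacturerB", "ChipSupplier1"),
    ("ManufacturerB", "ContentProvider1"),
    ("ManufacturerC", "OSVendor1"),
])

degree = nx.degree_centrality(G)            # share of possible direct partners
betweenness = nx.betweenness_centrality(G)  # brokerage position between partners
for firm in ("ManufacturerA", "ManufacturerB", "ManufacturerC"):
    print(firm, round(degree[firm], 3), round(betweenness[firm], 3))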
This paper is organized as follows: the next section provides an overview of the relevant literature, and the hypotheses are
defined in accordance with the theoretical analysis; in the third section, the dataset and methodology are explained; the fourth

a Gartner press release, 2011: Gartner Says Worldwide Mobile Device Sales to End Users Reached 1.6 Billion Units in 2010; Smartphone Sales Grew 72 Percent in 2010.



International Journal of Industrial Engineering, 21(5), 271-283, 2014

ENHANCING PERFORMANCE OF HEALTHCARE FACILITY VIA NOVEL SIMULATION METAMODELING BASED DECISION SUPPORT FRAMEWORK
Farrukh Rasheed1, Young Hoon Lee2
1,2 Department of Information and Industrial Engineering
Yonsei University College of Engineering
50 Yonsei-ro, Seodaemun-gu, Seoul
120-749, Republic of Korea.
Corresponding author’s e-mail: farrukhaccount@gmail.com

A simulation model of patient throughput in a community healthcare center (CHC) located in Seoul, Korea, is developed. The CHC provides primary, secondary and tertiary healthcare (HC) services (diagnostic, illness treatment, health screening, immunization, family planning, ambulatory, pediatric and gynecologic care), along with various other support services, to uninsured, under-insured and low-income patients residing in nearby medically underserved areas. The prime aim of this investigation is to identify the main imperative variables via statistical analysis of a de-identified customer tracking system dataset and expert opinion, and then to gauge their impact on the performance measures of interest using the proposed novel simulation metamodeling based decision support framework. The identified independent variables are resource shortage and stochastic demand pattern, while the performance measures of interest are the average length of stay (LOSa), balking probability (Pb), reneging probability (Pr), overcrowding and resource utilization.
Significance: The methodology presented in this research is unique in the sense that a single meta-model represents a single performance measure; a solution based on one meta-model alone may therefore be sub-optimal and may have a detrimental effect on other crucial performance measures if they are not considered. Hence, it is emphasized that meta-models be developed for all crucial performance measures individually, so that the final solution may qualify as a truly optimal solution.
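To make the one-meta-model-per-measure idea concrete, the sketch below fits a separate least-squares regression metamodel for each performance measure from a small table of simulation runs and then evaluates all of them at one candidate setting; the input variables, run results, and linear model form are placeholder assumptions, not the CHC data or the paper's metamodels.

import numpy as np

# Hypothetical simulation runs: inputs are [resource shortage, demand multiplier].
X = np.array([[0, 1.0], [1, 1.0], [2, 1.0], [0, 1.5], [1, 1.5], [2, 1.5]], float)
X_design = np.column_stack([np.ones(len(X)), X])  # intercept + linear terms

# Hypothetical outputs of the same runs for two performance measures.
responses = {
    "avg_length_of_stay": np.array([42.0, 55.0, 71.0, 58.0, 77.0, 98.0]),
    "balking_probability": np.array([0.02, 0.05, 0.11, 0.06, 0.12, 0.21]),
}

# One regression metamodel per performance measure.
metamodels = {m: np.linalg.lstsq(X_design, y, rcond=None)[0] for m, y in responses.items()}

# Evaluate every metamodel before accepting a candidate setting.
candidate = np.array([1.0, 1, 1.2])  # [intercept, shortage, demand multiplier]
for measure, coef in metamodels.items():
    print(measure, round(float(candidate @ coef), 3))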
Keywords: simulation, regression, performance analysis, healthcare system and application, decision making.

(Received on December 18, 2012; Accepted on September 15, 2014)

1. INTRODUCTION
To survive, today's highly competitive HC sector must be able to adjust to customers' ever-changing requirements. The specific HC installation considered in this research is a CHC located in Seoul, Korea, serving medically underserved areas. A HC facility can only survive by delivering high-quality service at reduced cost while promptly responding to the associated challenges: swift changes in technology, patient load fluctuations, longer patient LOS, sub-optimal resource utilization, unnecessary inter-process delays, inefficient information access and control, compromised patient safety, overcrowding, surges, emergency department (ED) use and misuse, and medication errors (Erik et al. (2010), Mare et al. (1995), Nan et al. (2009), Nathan and Dominik (2009)). In view of the foregoing, the CHC administration was actively looking for ways to improve service quality. When the system under investigation is multifaceted, as in numerous practical situations, mathematical solutions become impractical and simulation is used as a tool for system evaluation. Simulation represents transportation, manufacturing and service systems in a computer program for performing experiments, which enables testing of design changes without disruption to the system being modelled; i.e., the representation mimics the system's pertinent outward characteristics (Wang et al. (2009)).
Many HC experts have used simulation for the analysis of different situations aiming at better service quality and
improved performance. Hoot et al. (2007, 2008, 2009) used real-time simulation to forecast crowding in an ED. Kattan and
Maragoud (2008) used simulation to address problems of an ambulatory care unit in a large cancer center, where operations and resource utilization challenges led to overcrowding, excessive delays, and concerns regarding the safety of critical patients. Santibanez et al. (2009) analyzed the impact of operations, scheduling and resource allocation on patient waiting time, clinic over-time and resource utilization using simulation. Zhu et al. (2009) analyzed appointment scheduling systems in specialist outpatient clinics to determine the optimal number of appointments to be planned under different performance indicators and consult room configurations using simulation. Wang et al. (2009) modelled ED services using ARIS /
ARENA software. Su et al. (2010) used simulation to improve the hospital registration process by re-engineering actual
process. Blasak et al. (2003) used simulation to evaluate hospital operations between emergency department (ED) and
medical treatment unit to suggest improvements. Samaha et al. (2003) proposed a framework to reduce patient LOS using
simulation. Holm and Dahl (2009) used simulation to analyze the effect of replacing nurse triage with physician triage.
Reindl et al. (2009) used simulation to analyze and suggest improvements for the cataract surgery process.


 
International Journal of Industrial Engineering, 21(5), 284-294, 2014
 

ECONOMIC-STATISTICAL DESIGN OF THE MULTIVARIATE SYNTHETIC T2 CHART USING LOSS FUNCTION
Wai Chung Yeong1, Michael Boon Chong Khoo1, Mohammad Shamsuzzaman2, and Philippe Castagliola3
1 School of Mathematical Sciences,
Universiti Sains Malaysia, 11800 Penang, Malaysia
Corresponding author’s e-mail: thomas_yeong@yahoo.com
2 Industrial Engineering and Management Department,
College of Engineering, University of Sharjah,
Sharjah, United Arab Emirates
3 LUNAM Université, Université de Nantes &
IRCCyN UMR CNRS 6597, Nantes, France

This paper proposes the economic-statistical design of the synthetic T2 chart, where the optimal chart parameters are
obtained by minimizing the expected cost function, subject to constraints on the in-control and out-of-control average run
lengths (ARL0 and ARL1). The quality cost is calculated by adopting a multivariate loss function. This paper also
investigates the effects of input parameters, shift sizes and multivariate loss coefficients toward the optimal cost, choice of
chart parameters and ARLs. Interaction effects are identified through factorial design. In addition, the significant parameters of the synthetic T2 chart are compared with those of the Hotelling's T2 and Multivariate Exponentially Weighted Moving Average (MEWMA) charts. Conditions where the synthetic T2 chart shows better economic-statistical
performance than the Hotelling’s T2 and MEWMA charts are identified. The synthetic T2 chart compares favorably with the
other two charts in terms of cost, while showing better ability to detect shifts.

Keywords: economic-statistical design; factorial design; Hotelling’s T2 chart; MEWMA chart; multivariate loss function;
synthetic chart

(Received on April 1, 2014; Accepted on September 25, 2014)

1. INTRODUCTION

Multivariate control charts are used when two or more correlated variables need to be monitored simultaneously. The
Hotelling’s T2 control chart is one of the most popular multivariate control charts used in practice. However, this chart is
not very efficient in detecting small to moderate shifts. To improve the performance of the Hotelling’s T2 chart, Ghute and
Shirke (2008) combined the Hotelling’s T2 chart with the conforming run length (CRL) chart, leading to a multivariate
synthetic T2 chart. The multivariate synthetic T2 chart operates by defining a sample as non-conforming if the T2 statistic is
larger than CL_{T2/S}, the control limit of the T2 sub-chart. Unlike the T2 chart, an out-of-control signal is not immediately generated when the T2 statistic is larger than CL_{T2/S}. An out-of-control signal will only be generated when the number of
conforming samples between two successive non-conforming samples is smaller than or equal to L, the lower control limit
of the CRL sub-chart. Ghute and Shirke (2008) have shown that the multivariate synthetic T2 chart gives better ARL
performance, in comparison to the Hotelling’s T2 chart. Some recent studies on control charts include Zhao (2011), Chen et
al. (2011), Kao (2012a), Kao (2012b), Pina-Monarrez (2013), and many more.
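The run-length logic of the synthetic chart described above can be sketched in a few lines of Python; the T2 values, the control limit cl, and L below are placeholders chosen only to show how a signal is raised when the conforming run length between non-conforming samples is at most L.

def synthetic_chart_signals(t2_values, cl, L):
    """Yield the sample indices at which the synthetic T2 chart signals."""
    last_nonconforming = 0  # the first CRL is counted from the start of monitoring
    for i, t2 in enumerate(t2_values, start=1):
        if t2 > cl:                        # non-conforming on the T2 sub-chart
            crl = i - last_nonconforming   # conforming run length (including this sample)
            last_nonconforming = i
            if crl <= L:                   # CRL sub-chart rule
                yield i

# Placeholder data: signals are expected at samples 2 and 6 with cl = 9.0 and L = 2.
print(list(synthetic_chart_signals([1.2, 9.8, 2.1, 0.7, 10.3, 11.1], cl=9.0, L=2)))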
Duncan (1956) developed an economic design of X̄ control charts for the purpose of selecting optimal control chart
design parameters. This approach was generalized by Lorenzen and Vance (1986), so that it can be adopted on other charts.
The major weakness of the economic design of control charts is that it ignores the statistical performance of the control
charts. Woodall (1986) criticized that, in the economic approach, the Type I error probability is considerably higher than it would usually be in statistical designs, which leads to more false alarms. To improve the poor statistical
performance of the economically designed control chart, Saniga (1989) proposed an economic-statistical design of the
univariate X̄ and R charts. In the economic-statistical design, statistical constraints are incorporated into the economic
model. The economic-statistical design can be viewed as a cost improvement approach to statistical designs, or as a
statistical performance improvement approach to economic designs.

 
 
International Journal of Industrial Engineering, 21(6), 295-303, 2014

THE EFFECT OF WRIST POSTURE AND FOREARM POSITION ON THE CONTROL CAPABILITY OF HAND-GRIP STRENGTH
Kun-Hsi Liao

Department of Product Development and Design


Taiwan Shoufu University
Tainan, Taiwan
Corresponding author’s e-mail: liaokunhr@gmail.com

Economic and industrial developments have yielded an increase in automated workplace operations; consequently,
employees must learn to operate various hand tools and equipment. The hand grip strength exerted by workers during
machinery operation has received increasing attention from engineers and researchers. However, research on the
relationship between hand grip strength and posture—a crucial issue in ergonomics—is scant. Therefore, in this study, the
relationships among wrist posture, forearm position, and hand grip strength were examined among 72 university students.
Three wrist postures and forearm positions of grip span were tested to identify the maximum volitional contraction (MVC)
and hand gripping control (HGC) required for certain tasks. A one-way analysis of variance was conducted using MVC and
HGC as dependent variables, and the optimal wrist posture and forearm position were identified. The findings provide a
reference for task and instrument design and for protecting industrial workers from disease.

Keywords: wrist posture; forearm position; hand gripping control; maximum volitional contraction

(Received on November 19, 2013; Accepted on September 25, 2014)

1. INTRODUCTION

Hand-grip strength is crucial in determining the ability to handle and control an object. Two types of hand-grip strength are
associated with tool operation—maximal grip force and hand-gripping control strength (HGC). Numerous previous studies
have elucidated the factors associated with the design of industrial safety equipment and tools based on hand-grip strength
and maximum volitional contraction (MVC), the maximum force that a human subject can produce in a specific isometric exercise (Hallbeck and McMullin, 1993; Carey and Gallwey, 2002; Kong et al., 2008; Lu et al., 2008; Schlüssel et al., 2008; Liao, 2009; Liao, 2010a, 2010b; Shin, 2012; Boonprasurt and Nanthavanij, 2012; Liao, 2014). Those studies have
shown that hand-grip strength is a critical source of power for operating equipment and tools in the workplace. HGC
represents a controlled force precisely exerted using the palm of the hand (Murase et al., 1996; Hoeger and Hoeger, 2002).
For example, HGC can indicate the force required to cut electrical wire or to tighten a screw. Hoeger and Hoeger (2002)
applied MVC to standardize test results. For example, 70% MVC (MVC-70%), a value equal to seventy percent of the maximum volitional contraction force, is a typical measurement standard. Numerous previous studies have applied HGC to
measure the force exerted during daily tasks, work performance, and tool operation (Mackin, 1984; Murase, 1996; Kuo,
2003). Moreover, it has been proposed that hand-grip strength could predict mortality and the expectancy of being able to
live independently. Hand-grip strength measurement is a simple and economical test that provides practical information
regarding muscle, nerve, bone, or joint disorders. Thus, measuring the HGC required for work tasks can provide a useful
reference for designing new hand tools. Numerous studies have shown that hand-grip strength is moderated by factors such
as age, gender, posture, and grip span (Carey and Gallwey, 2002; Watanabe et al., 2005; Liao, 2009). In specific
circumstances, posture is the most critical factor affecting grip strength; thus, measuring grip strength can provide crucial
knowledge for tool designers.
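As a hypothetical illustration of the %MVC standardization and the one-way analysis of variance used in studies of this kind, the sketch below normalizes grip-control forces by each subject's MVC and compares three postures; all numbers are placeholders, not the present study's measurements.

import numpy as np
from scipy import stats

mvc = np.array([410.0, 395.0, 452.0, 388.0])           # each subject's MVC (N), placeholder
hgc_neutral  = np.array([280.0, 270.0, 310.0, 265.0])  # HGC in a neutral wrist posture (N)
hgc_flexed   = np.array([255.0, 240.0, 288.0, 244.0])
hgc_extended = np.array([262.0, 251.0, 295.0, 250.0])

# Express each control force as a percentage of the subject's own MVC.
pct = [100 * g / mvc for g in (hgc_neutral, hgc_flexed, hgc_extended)]

# One-way ANOVA across the three postures.
f_stat, p_value = stats.f_oneway(*pct)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")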
The American Academy of Orthopedic Surgeons (1965) and Eijckelhof et al. (2013) have identified the following
three types of wrist and forearm posture scaling for observational job analysis: (1) flexion/extension; (2) radial/ulnar
deviation; and (3) pronation/supination (Figure 1), demonstrating joint range of motion (ROM) boundaries of 85° to -95°, 70° to -45°, and 130° to -145°, respectively.
Numerous previous studies have reported various grip strengths based on differing postures (O’driscoll et al., 1992;
Hallbeck and McMullin, 1993; Mogk and Keir, 2003; Shih et al., 2006; Arti et al., 2010; Barut and Demirel, 2012). Kattel
et al. (1996) indicated that shoulder abduction, elbow and wrist flexion, and ulnar deviation significantly affect grip force.
Regarding wrist posture, numerous studies have consistently shown that large deviations from the neutral position weaken
grip force (Kraft and Detels, 1972; Pryce, 1980; Lamoreaux and Hoffer, 1995; Subramanian and Mita, 2009). Carey and
Gallwey (2002) evaluated the effects of wrist posture, pace, and exertion on discomfort. They concluded that extreme



International Journal of Industrial Engineering, 21(6), 304-316, 2014

INTEGRATING PHYSIOLOGICAL AND PSYCHOLOGICAL TECHNIQUES TO MEASURE AND IMPROVE USABILITY: AN EMPIRICAL STUDY ON KINECT APPLYING OF HEALTH MANAGEMENT SPORT
Wei-Ying Cheng1, Po-Hsin Huang1, and Ming-Chuan Chiu1,*
1 Department of Industrial Engineering and Engineering Management
National Tsing Hua University
HsinChu, Taiwan, R.O.C
* Corresponding author's e-mail: mcchiu@ie.nthu.edu.tw

This research aimed to develop an approach for measuring, monitoring and auditing the usability of a motion-related health
management product. Based on an ergonomic perspective and principles, the interactions between test participants and a
motion sports device were studied using physiological data gathered from a heart rate sensor. Based on our literature
review, we customized a psychological usability questionnaire which considered effectiveness, efficiency, satisfaction,
error, learnability, sociability, and mental workload, generating a tool meant to reveal the subjective cognition of product
usability. This research analyzed the objective (physiological) and subjective (psychological) data simultaneously to gain
greater insight about the product users. In addition, heart rate data, mental workload data and the questionnaire data were
synthesized to generate a comprehensive, detailed approach for evaluating usability in order to provide suggestions for
improving the usability of an actual health care product.

Keywords: usability; physiological techniques; questionnaires; health management product

(Received on November 19, 2014; Accepted on October 20, 2014)

1. INTRODUCTION

According to the Directorate-General of Budget, Accounting and Statistics of the R.O.C., the average number of working
hours per worker in Taiwan in 2012 was 2140.8, ranking third in the world. On average, employees in Taiwan work 44.6
hours every week and almost 9 hours every day. This busy status is echoed among workers in Korea, Singapore and Hong
Kong, who are measurably among the busiest people throughout the world. To balance work, family, and quality of life, an
increasing emphasis is being placed on the concept of personal exercise since the lack of exercise has been shown to lead to
common long-range health problems such as high blood pressure, diabetes and hyperlipidemia. Despite this recognition,
many people do not know how often or how long to exercise in order to achieve maximum benefit. In response to this need,
various products have been designed and manufactured to address this problem and to help maintain personal health status.
During such product development, usability has been considered an important design issue; however, there are few
usability evaluation methods that fully fit the assessment of health maintenance products or their improvement, especially for
the infirm and the elderly. Therefore, a method to measure and assess product use and satisfaction is important and
necessary in order to distinguish the usability features of these products and to improve their usability. Thus, the purpose of
this research is to establish an evaluation method which can detect the intention of the customers so as to measure the
usability of the products.

2. LITERATURE REVIEW

New approaches to the ancient study of ergonomics continue to emerge. During the last decade, for instance, Baesler and
Sepulveda (2006) applied a genetic algorithm heuristic and a goal programming model to address ergonomics in a cancer
treatment facility. Jen et al. (2008) conducted a research on a VR-Based Robot Programming and Simulation System that
was ergonomics dominated. Subramanian and Mital (2008) investigated the need to customize work standards for the
disabled. Nanthavanij et al. (2010) made a comparison of the optimal solutions obtained from productivity-based, safety-
based, and safety-productivity workforce scheduling models. The analysis of healthcare issues, processes, and products
continues to increase, influenced by modernized work conditions as well as by evolving government mandates.



International Journal of Industrial Engineering, 21(6), 317-326, 2014

AN OPERATION-LEVEL DYNAMIC CAPACITY SCALABILITY MODEL FOR RECONFIGURABLE MANUFACTURING SYSTEMS
Zhou-Jing Yu1, Jeong-Hoon Shin1, and Dong-Ho Lee1,*
1 Department of Industrial Engineering
Hanyang University
Seoul, Republic of Korea
* Corresponding author's email: leman@hanyang.ac.kr

This study considers the problem of determining the facility requirements for a reconfigurable manufacturing system to
satisfy the fluctuating demand requirements and the minimum allowable system utilization over a given planning horizon.
Unlike the existing capacity planning models for flexible manufacturing systems, the problem considered in this study has both design and operational characteristics, since reconfigurable manufacturing systems have the capability of changing their hardware and software components rapidly in response to market or system changes. To represent the problem
mathematically, a nonlinear integer programming model is suggested for the objective of minimizing the sum of facility
acquisition and configuration change costs, while the throughputs and utilizations are estimated using a closed queuing
network model. Then, due to the problem complexity, we suggest three heuristics: two forward-type and one backward-type algorithms. To compare the performances of the three heuristic algorithms, computational experiments were performed and the results are reported.
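One common way to estimate throughput and utilization in a closed queueing network of single-server stations is exact Mean Value Analysis; the short sketch below, with illustrative service demands rather than the paper's data, shows the recursion involved.

def mva(service_demands, n_jobs):
    """Exact MVA for single-server stations; returns (throughput, utilizations)."""
    queue = [0.0] * len(service_demands)
    throughput = 0.0
    for n in range(1, n_jobs + 1):
        residence = [d * (1.0 + q) for d, q in zip(service_demands, queue)]
        throughput = n / sum(residence)
        queue = [throughput * r for r in residence]
    return throughput, [throughput * d for d in service_demands]

# Three stations with assumed service demands and six parts circulating.
X, U = mva(service_demands=[0.8, 1.2, 0.5], n_jobs=6)
print(round(X, 3), [round(u, 3) for u in U])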

Keywords: reconfigurable manufacturing system; capacity scalability; closed queuing network; heuristics.

(Received on November 28, 2013; Accepted on October 20, 2014)

1. INTRODUCTION

A reconfigurable manufacturing system (RMS), one of the recent manufacturing technologies, is a manufacturing system designed at the outset for rapid changes in its hardware and software components in order to quickly adjust its production capacity and functionality in response to sudden market changes or intrinsic system changes (Koren et al. 1999, Bi et al. 2008). In fact, the RMS is a new manufacturing paradigm that overcomes the limitations of the flexible manufacturing system (FMS), which has had only limited success in that it is expensive due to having more functions than needed, not highly reliable, and subject to obsolescence due to advances in technology and its fixed system software and hardware (Mehrabi et al. 2000). See Koren
et al. (1999), Mehrabi et al. (2000, 2002), ElMaraghy (2006) and Bi et al. (2008) for more details on the characteristics of
RMS.
There are various decision problems in designing and operating RMSs, which can be classified into system-level,
component-level and ramp-up time reduction decisions (Mehrabi et al. 2000). Among them, we focus on system-level
capacity planning, called the capacity scalability problem in the literature. Capacity planning, an important system-level
decision in ordinary manufacturing systems, is especially important in the RMS since it has more expansion flexibility than
FMSs. Here, the expansion flexibility is defined as the capability to expand or contract production capacity. In particular,
the RMS can utilize the expansion flexibility in short-term operation-level because it has the inherent reconfigurability. See
Sethi and Sethi (1990) for more details on the importance of expansion flexibility.
From the pioneering work of Manne (1961), various models have been developed on the capacity expansion problem
in traditional manufacturing systems. See Luss (1982) for an extensive review of the classical capacity expansion problems.
Besides these, there are a number of previous studies on capacity planning or system design in FMSs. For examples, see
Vinod and Solberg (1985), Dallery and Frein (1988), Lee et al. (1991), Rajagopalan (1993), Solot and van Vliet (1994),
Tetzlaff (1994), Lim and Kim (1998) and Chen et al. (2009).
Unlike the classical ones, not many studies have been done on capacity scalability in RMSs since the new paradigm has emerged only recently. One of the earlier studies is that of Son et al. (2001), who suggest a homogeneous paralleling flow line in which paralleling is done to scale and balance the capacity of transfer lines. Deif and ElMaraghy (2006a)
suggest a dynamic programming model, based on the set theory and the regeneration point theorem, to find the optimal
capacity scalability plans that minimize the total cost, and Deif and ElMaraghy (2006b) suggest a control theoretic
approach for the problem that minimizes the delay in capacity scalability, i.e. ramp-up time of new configurations. See Deif
and ElMaraghy (2007a, b) for other extensions. Also, Spicer and Carlo (2007) consider a multi-period problem that
determines the system configurations over a planning horizon and suggest two solution algorithms, an optimal dynamic



International Journal of Industrial Engineering, 21(6), 327-336, 2014

A ROBUST TECHNICAL PLATFORM PLANNING METHOD TO ASSURE COMPETITIVE ADVANTAGE UNDER UNCERTAINTIES
Jr-Yi Chiou1 and Ming-Chuan Chiu1,*

1 Department of Industrial Engineering and Engineering Management
National Tsing-Hua University
101Kuang-Fu Road, Hsinchu
Taiwan 30013, R.O.C.
* Corresponding author's e-mail: mcchiu@ie.nthu.edu.tw

Developing a technology-based product platform (technical platform) that can deliver a variety of products has emerged as
a strategy for obtaining competitive advantage in the global marketplace. Technical platform planning can improve
customer satisfaction by integrating diversified products and technologies. Prior studies have alluded to developing a robust
framework of technical platforms and validated methodologies. We propose a multi-step approach to organize technical
platforms based on corporate strength while incorporating technological uncertainty during platform development. A
case study is presented to demonstrate its advantages, referencing a company developing 3-Dimension Integrated Circuitry
(3D-IC) for the semiconductor industry. We evaluate four alternatives to ensure compliance with market demands. This
study applies assessment attributes for technology, commercial benefits, industrial chain completeness, and risk. Using
Simple Multi-Attribute Rating Technique Extended to Ranking (SMARTER), decision-makers can quickly determine
efficient alternatives in uncertain situations. Finally, a scenario analysis is presented to simulate possible market situations
and provide suggestions to the focal company. Results illustrate the proposed technical platform can enhance companies’
core competencies.

Significance: The proposed method incorporates technical platform planning to help fulfill current and future market
demands. This method can also provide robust solutions for enterprises should untoward events occur. Thus the competitive
advantage of the focal company can be assured in the future.

Keywords: technical platform planning; decision analysis; technology management; fuzzy simple multi-attribute rating
technique extended to ranking (SMARTER); 3-dimension integrated circuit (3D-IC)

(Received on November 28, 2013; Accepted on October 20, 2014)

1. INTRODUCTION

In an effort to achieve customer satisfaction, many companies have adopted product family development and platform-
based methods to improve product variety, to shorten lead times, and to reduce costs. The backbone of a successful product
family is the product platform, which can be generated by adding, removing, or substituting one or more modules to the
platform. The platform may also be scaled in one or more dimensions to target specific market niches. This burgeoning
field of engineering planning has prospered for the past 10 years. However, most of the related research has solely
considered customer-oriented metrics. Other key factors such as core technologies of enterprises and technology trends
under uncertainties can also affect the development of the industry. This recognition is what motivated us to conduct this
research. This paper integrates these elements within technical platform planning. Technical platform planning is
considered in tandem with technology management to achieve efficient solutions so as to maintain and enhance the strength
of the focal company. The proposed methodology can enable companies to incorporate future-bound technology in their
technology roadmap to meet diverse customer needs in the future. It also enables enterprises to concentrate their resources
in the right directions based on scenario analysis. In previous studies, the technology management framework and
assessment methods for uncertain situations have rarely been addressed. Fuzzy SMARTER is a decision analysis method
which can solve problems under uncertainty. Experts work with limited data and linguistic expressions like good or bad to
forecast future trends. In this research, fuzzy analysis was applied to resolve this set of circumstances. SMARTER requires only the rank-order (sequence) information of future products and technologies. Therefore, this study can address the gap and
combine technical platform planning, technology management as well as decision analysis to generate a new planning tool
for enterprises.
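For readers unfamiliar with SMARTER, the sketch below shows the rank-order-centroid (ROC) weighting and weighted scoring that the method is built on, applied to two hypothetical platform alternatives; the attribute ranking and scores are illustrative assumptions rather than the case-study values, and the fuzzy extension used in the paper is not reproduced here.

def roc_weights(n_attributes):
    """Rank-order-centroid weights for attributes ranked 1 (most important) .. n."""
    return [sum(1.0 / i for i in range(k, n_attributes + 1)) / n_attributes
            for k in range(1, n_attributes + 1)]

# Assumed ranking: technology > commercial benefit > industrial-chain completeness > risk.
weights = roc_weights(4)

# Hypothetical 0-1 scores of two platform alternatives on the four attributes.
alternatives = {
    "PlatformA": [0.8, 0.6, 0.7, 0.5],
    "PlatformB": [0.6, 0.9, 0.5, 0.7],
}
for name, scores in alternatives.items():
    print(name, round(sum(w * s for w, s in zip(weights, scores)), 3))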



International Journal of Industrial Engineering, 21(6), 337-347, 2014

UNCERTAINTY ANALYSIS FOR A PERIODIC REPLACEMENT PROBLEM WITH MINIMAL REPAIR: PARAMETRIC BOOTSTRAPPING
Y. Saito1, T. Dohi1,*, and W. Y. Yun2
1 Department of Information Engineering
Hiroshima University
1-4-1 Kagamiyama,
Higashi-Hiroshima, 739-8527 Japan
* Corresponding author's e-mail: saitou@s.rel.hiroshima-u.ac.jp
2 Department of Industrial Engineering
Pusan National University
30 Jangjeon-dong,
Geumjeong-gu,
Pusan, 609-735 Korea

In this paper we consider a statistical estimation problem for the periodic replacement problem with minimal repair, which is one of the most fundamental maintenance models in practice, and propose two parametric bootstrap methods, categorized into a simulation-based approach and a re-sampling-based approach. In particular, we consider two data analysis techniques: direct analysis of the minimal repair data, which obey a non-homogeneous Poisson process, and indirect analysis after transforming the data to a homogeneous Poisson process. Through simulation experiments, we investigate
statistical features of the proposed parametric bootstrap methods. Also, we analyze the real minimal repair data to
demonstrate the proposed methods in practice.

Significance: In practice, we often encounter situations where an optimal preventive maintenance policy should be triggered. However, only a few research results on the statistical estimation problems of the optimal preventive maintenance policy have been reported in the literature. We carry out higher-level statistical estimation of the optimal preventive maintenance time and its associated expected cost, deriving estimators of higher moments of the optimal maintenance policy and its confidence interval; here, the parametric bootstrap methods play a significant role. The proposed approach enables statistical decision making on preventive maintenance planning under uncertainty.

Keywords: statistical estimation; parametric bootstrap method; periodic replacement problem; minimal repair; non-homogeneous Poisson process

(Received on November 29, 2013; Accepted on October 7, 2014)

1. INTRODUCTION

The periodic replacement problem by Barlow and Proschan (1965) is one of the simplest, but most important preventive
maintenance scheduling problems. The extended versions of this model have been studied in various papers (Valdez-Flores
and Feldman 1989, Nakagawa 2005). Boland (1982) gave the optimal periodic replacement time in case where the minimal
repair cost depends on the age of component, and showed necessary and sufficient conditions for the existence of an
optimal periodic replacement time in the case where the failure rate is strictly increasing (IFR). Nakagawa
(1986) proposed generalized periodic replacement policies with minimal repair, in which the preventive maintenance is
scheduled at periodic times. If the number of preventive maintenance actions reaches a pre-specified value, the system is replaced at the next preventive maintenance time. Nakagawa (1986) simultaneously derived both the optimal number of
preventive maintenance and the optimal preventive maintenance time. Recently, Okamura et al. (2014) developed a
dynamic programming algorithm to efficiently obtain the optimal periodic replacement time in the Nakagawa (1986) model.
Sheu (1990) considered a preventive maintenance problem in which the minimal repair cost varies with the number of
minimal repairs and the age of component. Sheu (1991) also proposed another generalized periodic replacement problem
with minimal repair in which the minimal repair cost is assumed to be composed of age-dependent random and
deterministic parts. If the component fails, it is replaced randomly by a new one or repaired minimally. He showed that the
optimal block replacement time can be derived easily in numerical examples.
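A minimal, simulation-based parametric bootstrap of the kind discussed above can be sketched as follows, assuming a power-law non-homogeneous Poisson process for the minimal repair data and a simple grid search for the optimal replacement time; the failure times, cost figures, and grid are placeholder assumptions rather than the paper's data or estimators.

import numpy as np

rng = np.random.default_rng(0)
c_preventive, c_minimal = 10.0, 2.0       # assumed replacement and minimal repair costs
t_obs = 100.0                             # observation period of the failure data
failures = np.array([12.0, 31.0, 47.0, 58.0, 69.0, 81.0, 90.0, 96.0])  # placeholder data

def mle_power_law(times, t_end):
    """Time-truncated MLE of the power-law NHPP parameters (beta, theta)."""
    n = len(times)
    beta = n / np.sum(np.log(t_end / times))
    return beta, t_end / n ** (1.0 / beta)

def optimal_T(beta, theta, grid=np.linspace(1.0, 400.0, 4000)):
    """Grid search for T minimizing the cost rate (c_p + c_m*(T/theta)**beta) / T."""
    cost = (c_preventive + c_minimal * (grid / theta) ** beta) / grid
    return grid[np.argmin(cost)]

beta_hat, theta_hat = mle_power_law(failures, t_obs)
T_hat = optimal_T(beta_hat, theta_hat)

# Parametric bootstrap: simulate NHPP samples from the fitted model, re-estimate,
# and re-optimize to quantify the uncertainty of the optimal replacement time.
boot = []
for _ in range(1000):
    n = rng.poisson((t_obs / theta_hat) ** beta_hat)
    if n < 2:
        continue
    t_sim = np.sort(t_obs * rng.uniform(size=n) ** (1.0 / beta_hat))
    boot.append(optimal_T(*mle_power_law(t_sim, t_obs)))

print(round(T_hat, 1), np.percentile(boot, [2.5, 97.5]).round(1))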



International Journal of Industrial Engineering, 21(6), 348-359, 2014

STRATEGIC OPENNESS IN QUALITY CONTROL: ADJUSTING NPD STRATEGIC ORIENTATION TO OPTIMIZE PRODUCT QUALITY

Dinush Chanaka Wimalachandra1,*, Björn Frank1, Takao Enkawa1


1 Department of Industrial Engineering and Management
Tokyo Institute of Technology
Tokyo, 152-8552
* Corresponding author's e-mail: wimalachandra.d.aa@m.titech.ac.jp

Many firms have shifted to an ‘open innovation’ strategy by integrating external information into new product development
(NPD). This study extends the open innovation paradigm to the area of product quality control practices in NPD. Using
data collected in 10 countries, this study investigates the role of external information acquired through B2B/B2C customer,
competitor, technology, and manufacturing orientation in meeting quality and performance specifications of newly
developed products. It also illuminates the interconnected roles of B2B and B2C customer orientation in meeting these
specifications. Contrary to conventional wisdom, the results show that leveraging a variety of external information sources
(in particular, frequent and informal communication with B2B customers and coordination with the manufacturing
department) indeed helps firms improve internal product quality control practices in NPD. Information on B2C customers
is beneficial in B2B contexts only if effectively integrated by means of B2B affective information management.

Keywords: product quality; B2B customer orientation; B2C customer orientation; manufacturing orientation

(Received on November 29, 2013; Accepted on April 21, 2014)

1. INTRODUCTION

Research has identified product quality as one of the key determinants of NPD performance (Sethi, 2000). Due to growing
competition in most industries, managers thus have come to regard the quality of newly developed products as crucial for
maintaining a competitive edge in the long run (Juran, 2004). Research based on Chesbrough’s (2003) ‘open innovation’
paradigm indicates that firms' openness to their external environment can improve their ability to innovate by enabling them
to leverage outside capabilities and follow changes in the environment (Laursen and Salter, 2006), but it remains unknown
whether such openness might also help firms improve their mostly internally oriented quality management practices.
Hence, our study seeks to verify whether the open innovation paradigm can be extended to the area of product quality
control practices in NPD. Moreover, our study aims to identify the types of external information acquired through NPD
strategies (B2B/B2C customer, competitor, technology, and manufacturing orientation) that best help firms meet quality
and performance specifications of newly developed products in B2B contexts.
Our original claim is that accounting for external information during quality control can help firms to minimize the
reoccurrence of past quality-related problems detected by B2B customers, to minimize manufacturing problems, to improve
the effectiveness of early-stage prototype testing, and to learn from competitors’ best practices in quality control. Hence, we
argue that many firms would profit from greater openness in quality management. Firms in B2B markets may benefit from
integrating external information on B2B customers and on their eco-system, which includes product technology,
manufacturing techniques, and competitor strategies. As information on B2C customers at the end of the supply chain is not
directly related to immediate concerns of internal quality control in B2B contexts, we argue that accounting for this type of
information directly may be problematic. However, firms might learn to leverage such information to improve prototype
testing in collaboration with B2B customers. Hence, even information on B2C customers may be beneficial to firms’
quality control practices in B2B contexts if such information is handled appropriately.
To examine the effectiveness of strategic openness in quality control and thus provide industrial engineers with
actionable knowledge of how to improve quality control practices, our study establishes hypotheses about the influence of
externally oriented NPD strategies on product quality. To test these hypotheses empirically, we collected data from 10
countries (Bangladesh, Cambodia, China, Hong Kong, India, Japan, Sri Lanka, Taiwan, Thailand, and Vietnam) in the
textile and apparel industry, covering firms across the supply chain starting from raw material suppliers via manufacturers
and value-adding firms (printing/dyeing/washing) to buying offices. As our study is based on statistical analyses, confirmed
hypotheses are valid and can be generalized to the entire population of firms from which our firm sample was drawn. Thus,
our study is not simply a case study. Rather, it derives generalizable insights that can be applied across different contexts.



International Journal of Industrial Engineering, 21(6), 360-375, 2014

A LAD-BASED EVOLUTIONARY SOLUTION PROCEDURE FOR BINARY CLASSIFICATION PROBLEMS
Hwang Ho Kim1 and Jin Young Choi1,*
1 Department of Industrial Engineering
Ajou University
206, World cup-ro, Yeongtong-gu,
Suwon-si, Gyeonggi-do, Korea
* Corresponding author's e-mail: choijy@ajou.ac.kr

Logical analysis of data (LAD) is a data analysis methodology used to solve the binary classification problem via
supervised learning based on optimization, combinatorics, and Boolean functions. The LAD framework consists of the
following four steps: data binarization, support set generation, pattern generation, and theory formulation. Patterns that
contain the hidden structural information calculated from the binarized training data play the most important roles in the
theory, which consists of a weighted linear combination of patterns and works as a classifier of new observations. In this
work, we develop an efficient parameterized iterative genetic algorithm (PI-GA) to generate a set of patterns with good
characteristics in terms of degree (simplicity-wise preference) and coverage (evidential preference) of patterns. The
proposed PI-GA can generate simplicity-wise preferred patterns that also have high coverage. We also show the efficiency
and accuracy of the proposed method through a numerical experiment using benchmark machine learning datasets.

Keywords: logical analysis of data; binary classification; pattern generation; genetic algorithm

(Received on November 29, 2013; Accepted on October 20, 2014)

1. INTRODUCTION

Binary classification (Lugosi, 2002) is an issue arising in the field of data mining and machine learning and involves the study of how to classify observations into one of two classes. It has been used in the medical, service,
manufacturing, and various other fields. For example, binary classification methods are used for diagnostic criteria using
information obtained through inspection of patients in medicine (Prather et al., 1997); in the service field, it is used for
credit ratings based on customers’ applications and history (Berry and Linoff, 1997). Binary classification problems with
two data classes, such as defective and non-defective goods in manufacturing, are particularly important when we are looking for
the cause of the defects and trying to increase productivity (Chien et al., 2007).
To solve the binary classification problems, various data mining approaches such as decision trees (J48), support
vector machines (SVMs), and neural networks (NNs) have been proposed and utilized. However, one of the main drawbacks of
these learning methods is the lack of interpretation ability of the results. An NN is generally perceived as a “black box”
(Ahluwalia and Chidambaram, 2008; Yeoum and Lee, 2013), and it is extremely difficult to document how specific
classification decisions are reached. SVMs are also “black box” systems that do not provide insights on the reasons or
explanations about classification (Mitchell, 1997). Thus, these approaches do not exhibit both high accuracy and
explanatory power for binary classification. Meanwhile, the major disadvantage of the decision tree is its computational
complexity. Decision trees examine only a single field at a time so that large decision trees with many branches are
complex and time-consuming (Safavian and Landgrebe, 1991).
The logical analysis of data (LAD; Crama et al., 1998; Boros et al., 1997; Boros et al., 2000; Hammer and Bonates,
2006) proposed recently is a data analysis methodology used to solve the binary classification problem via supervised
learning based on patterns that contain hidden structural information calculated from binarized training data. Therefore,
LAD is an effective methodology that can easily explain the reasons for the classification using patterns. Moreover, LAD
can provide higher classification accuracy than others if the patterns used for the classification represent all characteristics
of data and the number of patterns is sufficient. In many medical application studies, LAD has been applied to classification
problems for diagnosis and prognosis. Such studies have shown that the accuracy of LAD is comparable with the accuracy
of the best methods used in data analysis so far, usually providing similar results to other binary classification methods
(Boros et al., 2000). However, there is still a problem in that classification performance of LAD can vary depending on
certain characteristics of the patterns generated in the LAD framework. Therefore, pattern generation is the most important
issue in the LAD framework, and has been studied in various ways these days.
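The basic pattern notions used throughout the LAD literature (a pattern as a conjunction of binary literals, its degree, and its coverage) can be illustrated with the toy data below; the observations and the example pattern are placeholders, and the sketch does not reproduce the proposed PI-GA.

# Toy binarized observations of the positive and negative classes.
positives = [(1, 0, 1, 1), (1, 1, 1, 0), (1, 0, 1, 0)]
negatives = [(0, 0, 1, 1), (1, 1, 0, 0)]

# A pattern is a conjunction of literals (feature index, required value): "x1 = 1 AND x3 = 1".
pattern = [(0, 1), (2, 1)]

def covers(pattern, observation):
    return all(observation[i] == v for i, v in pattern)

degree = len(pattern)                                                        # simplicity-wise preference
coverage = sum(covers(pattern, obs) for obs in positives) / len(positives)   # evidential preference
is_positive_pattern = not any(covers(pattern, obs) for obs in negatives)
print(degree, coverage, is_positive_pattern)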
The conventional pattern generation methods can mainly be divided into (i) enumeration-based approaches and (ii)
mathematical approaches. First, most of the early studies on pattern generation used enumeration-based techniques (Boros



International Journal of Industrial Engineering, 21(6), 376-383, 2014

A STUDY ON COLLABORATIVE PICK-UP AND DELIVERY ROUTING PROBLEM OF LINE-HAUL VEHICLES IN EXPRESS DELIVERY SERVICES
Friska Natalia Ferdinand1, Young Jin Kim3, Hae Kyung Lee2, and Chang Seong Ko2,*
1 Department of Information System
University of Multimedia Nusantara
Kampus UMN, Scientia Garden, Boulevard Gading Serpong,
Tangerang, Indonesia
2 Department of Industrial and Management Engineering
Kyungsung University
309 Suyeong-ro, Nam-gu, Busan, 608-736
Busan, South Korea
* Corresponding author's e-mail: csko@ks.ac.kr
3 Department of Systems Management and Engineering
Pukyoung National University
45 Yongso-ro, Nam-gu, Busan, 608-737
Busan, South Korea

In the Korean express delivery service market, many companies have been striving to extend their own market share. An
express delivery system is defined as a network of customers, service centers and consolidation terminals. Some companies
operate line-haul vehicles in milk-run types of pick-up and delivery services among consolidation terminals and service
centers with locational disadvantages. The service centers with low sales are kept operating, even if they are unprofitable, to
ensure the quality of service. Recently, a collaborative operation is emerging as an alternative to reduce the operating costs
of these disadvantaged centers. This study considers a collaborative service network with pick-up and delivery visits for line-
haul vehicles for the purpose of maximizing the incremental profits of collaborating companies. The main idea is to operate
only one service center shared by different companies for service centers with low demands and change the visit schedules
accordingly. A genetic algorithm-based heuristic is proposed and assessed through a numerical example.

Keywords: express delivery services; collaborative pick-up and delivery; line-haul vehicle; milk-run; genetic algorithm

(Received on November 29, 2013; Accepted on October 20, 2014)

1. INTRODUCTION

Pick-up and delivery problems (PDPs) are aimed at designing a vehicle route starting and ending at a common depot in
order to satisfy pick-up and delivery requests in each location. In a traditional pick-up and delivery problem, each customer
usually receives a delivery originating from a common depot and sends a pick-up quantity to the same depot. Most of the
express delivery service centers in Korea are directly linked to a consolidation terminal. However, service centers located in
rural areas with low utilization may not be directly linked to a consolidation terminal (Ferdinand et al., 2013). These remote
service centers with low sales are mostly operated, even though unprofitable, in order to ensure the quality of service. There
has thus been a growing need to develop an operational scheme to ensure a higher level of service as well as profitability. It
has been claimed that a collaborative operation among several companies may provide an opportunity to increase
profitability as well as to ensure the quality of service. There exist various types of collaboration in express delivery
services and such an example includes sharing of vehicles, consolidation terminals, and other facilities. This study
considers the collaboration among companies sharing service centers and line-haul vehicles. Visit schedules are also
determined accordingly to enhance profitability of collaborating companies. Service centers located in rural areas with low
utilization may not be profitable, and thus only one company will operate a service center and vehicles in each location
along the route (a so-called ‘monopoly of the service center’). Other companies will use the service center and vehicles at a
predetermined price. All the routes should provide pick-up and delivery services, and all the vehicles should return to the
depot at the end of each route. The objective of this study is to construct a network design for profitable tour problem (PTP)
with collaborative pick-up and delivery visits that maximizes the incremental profit based on the maxmin criterion.
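The maxmin criterion mentioned above can be illustrated with a few lines of Python: a candidate collaboration plan is valued by the smallest incremental profit it gives any participating company, and the plan with the largest such value is preferred. The company names, profits, and plans below are hypothetical placeholders, and the genetic algorithm that searches over plans is not reproduced.

stand_alone_profit = {"CompanyA": 120.0, "CompanyB": 95.0, "CompanyC": 80.0}

# Profits of each company under two candidate sharing plans (hypothetical values).
candidate_plans = {
    "share_center_1": {"CompanyA": 131.0, "CompanyB": 103.0, "CompanyC": 84.0},
    "share_center_2": {"CompanyA": 145.0, "CompanyB": 96.0, "CompanyC": 88.0},
}

def maxmin_value(plan_profit):
    """Smallest incremental profit over all collaborating companies."""
    return min(plan_profit[c] - stand_alone_profit[c] for c in stand_alone_profit)

best = max(candidate_plans, key=lambda p: maxmin_value(candidate_plans[p]))
print(best, maxmin_value(candidate_plans[best]))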



International Journal of Industrial Engineering, 21(6), 384-395, 2014

OPTIMIZATION OF WIND TURBINE PLACEMENT LAYOUT ON NON-FLAT TERRAINS
Tzu-Liang (Bill) Tseng1, Carlos A. Garcia Rosales1, and Yongjin (James) Kwon2,*
1 Department of Industrial, Manufacturing and Systems Engineering
The University of Texas at El Paso
500 W, University Ave.
El Paso, TX 79968, USA
2 Department of Industrial Engineering
Ajou University
Suwon, 443-749, Republic of Korea
* Corresponding author's e-mail: yk73@ajou.ac.kr

In recent years, wind power has become popular due to climate change, greenhouse gas emissions and diminishing fossil fuel reserves. Although
wind turbine technology for electricity is already mature, industry is looking to achieve the best utilization of the wind
energy in order to fulfill the electrical needs for cities at a very affordable cost. In this paper, a method entitled Cluster
Identification Algorithm (CIA) and an optimization approach called a Multi-Objective Genetic Algorithm (MOGA) have been developed. The main objective is to maximize the power and the efficiency, while minimizing the cost determined by the size and quantity of wind turbines installed on non-flat terrains (i.e., terrains with different heights). The fitness functions
evaluate different population sizes and generation numbers to find the best options. Necessary assumptions are made in
terms of wind directions, turbine capacities, and turbine quantities. Furthermore, this study considers how the downstream
decay model from the wind energy theory describes a relationship between the wind turbines positioned ahead and the
subsequent ones. Finally, a model that relates the layout of wind farm with an optimal combination of efficiency, power and
cost is suggested. A case study that addresses the three dimensional terrain optimization problems using the combination of
CIA and MOGA algorithms is presented, which validates the proposed approach. The methodology is expected to help
solve other similar problems that occur in the renewable energy sector.

Keywords: wind turbine; cluster identification algorithm (CIA); multi-objective genetic algorithm (MOGA); optimization
of wind farm layout; wind energy

(Received on November 29, 2013; Accepted on June 10, 2014)

1. INTRODUCTION

Currently, wind energy is receiving considerable attention as an emission-free, low cost alternative to fossil fuel. It has a
wide range of applications such as battery charging, mobile power generation, and auxiliary power sources for ships, houses
and buildings. In terms of a large, grid-connected array of turbines, it is becoming an increasingly important source of
commercial electricity. In this paper, an optimization methodology encompassing Cluster Identification Algorithm (CIA)
and Multi-Objective Genetic Algorithm (MOGA) is developed to optimize the wind farm layout on non-flat terrain. The
optimization of layout is a multi-faceted problem, such that (1) maximizing the efficiency, which can be heavily affected by
the aerodynamic losses; (2) maximizing the wind power generation; and (3) minimizing the cost of installation, which is
affected by the size and quantity of wind turbines. At the same time, other important variables, including different terrain
heights, wind directions, wind speed over a period of one year, and terrain size, are taken into consideration. The terrain is
analyzed with the use of Cluster Identification Algorithm (CIA) because it is possible to determine a cluster of positions.
After that, a subset of positions that is the most suitable can be selected from the total land area. Another important fact is
that the wind turbine capacities and characteristics are not the same. Physical and performance characteristics like the rotor
area and the turbine height should be analyzed simultaneously. Based on the extensive review of closely related literature, it
is difficult to locate the proper methodology for optimal wind turbine placement problems that comprehensively considers
the aforementioned issues [Lei 2006, Kusiak and Zheng 2010, Kusiak and Song 2010], which has been the motivation of
this research. In this context, this paper presents the development of optimization algorithms and the computational results
of the real-world case.
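A commonly used downstream decay (wake) model in wind farm layout studies is the Jensen model, sketched below; whether this exact formulation matches the decay model adopted in the paper is an assumption, and the rotor radius, decay constant, and wind speed are illustrative values.

def jensen_wake_speed(u0, x, rotor_radius, a=1.0 / 3.0, k=0.075):
    """Wind speed at distance x directly behind a turbine (Jensen wake model).

    u0: free-stream wind speed (m/s); a: axial induction factor; k: wake decay constant.
    """
    deficit = 2.0 * a / (1.0 + k * x / rotor_radius) ** 2
    return u0 * (1.0 - deficit)

# Speed recovery behind an assumed 40 m radius turbine in a 10 m/s free stream.
for x in (200.0, 400.0, 800.0):
    print(x, round(jensen_wake_speed(10.0, x, 40.0), 2))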



International Journal of Industrial Engineering, 21(6), 396-407, 2014

AGENT-BASED PRODUCTION SIMULATION OF TFT-LCD FAB DRIVEN BY MATERIAL HANDLING REQUESTS
Moonsoo Shin1, Taebeum Ryu1, and Kwangyeol Ryu2,*
1 Department of Industrial and Management Engineering
Hanbat National University
Daejeon, Korea
2 Department of Industrial Engineering
Pusan National University
Busan, Korea
* Corresponding author's e-mail: kyryu@pusan.ac.kr

Thin film transistor-liquid crystal display (TFT-LCD) fabs are highly capital-intensive. Therefore, to ensure that a fab
remains globally competitive, production must take place at full capacity, with extensive utilization of resources, and must
employ just-in-time principles that require on-time delivery with minimum work-in-process (WIP). However, limited space
and lack of material handling capacity act as constraints that hamper on-time delivery to processing equipment. Therefore,
to build an efficient production management system, a material handling model should be incorporated into the system.
This paper proposes a simulation model applying an agent-based collaboration mechanism for a TFT-LCD fab, which is
driven by material handling requests. Every manufacturing resource, including equipment for processing and material
handling as well as WIP, is represented as an individual agent. The agent simulates operational behaviors of associated
equipment or WIP. This paper also proposes an event graph-based behavior model for the agent.

Keywords: TFT-LCD fab; production management; production simulation; material handling simulation; agent

(Received on December 1, 2013; Accepted on November 30, 2014)

1. INTRODUCTION

The thin film transistor-liquid crystal display (TFT-LCD) industry is naturally capital-intensive, with a typical fab requiring
an investment of a few billion dollars. A cutting-edge TFT-LCD fab contains highly expensive processing equipment
performing complicated manufacturing operations and large material handling equipment connecting this processing
equipment (Chang et al., 2009). Because idle equipment and more work-in-process (WIP) than necessary lead to high
operational costs, production must take place at full capacity, with extensive utilization of resources, and employ just-in-
time principles that require on-time delivery with minimum WIP to ensure that the fab remains globally competitive
(Acharya, 2011). Thus, optimal management of production capacity is critical, and consequently, efficient production
planning and scheduling pose great challenges to the TFT-LCD industry.
Two alternative approaches are usually employed for production planning and scheduling in TFT-LCD fabs (Ko et
al., 2010): 1) optimization and 2) simulation. An optimization approach aims to find an optimal solution, which is
represented as a combination of resources and products within a given time frame, and typically applies linear
programming (LP) methods (Chung et al., 2006, Chung and Jang, 2009, Leu et al., 2010). It is difficult for a mathematical
model to sufficiently reflect dynamic field constraints, and it is challenging (albeit possible) to reformulate the
mathematical model in response to environmental changes. On the other hand, a simulation approach continuously searches
for an optimal solution by adjusting decision variables, such as the step target, equipment arrangement, and dispatching rules,
according to the given processing status (Choi and You, 2006). Thus, a simulation approach is better suited to a dynamic
environment than an optimization approach. However, existing approaches to production simulation for TFT-LCD fabs
implement material handling processes only in a restricted manner; consequently, their predictive power is limited
(Shin et al., 2011).
This paper proposes a material handling request-driven simulation model for production management of a TFT-LCD
fab, aiming to implement dynamic material handling behavior. In particular, an agent-based collaboration mechanism is
applied to production simulation that provides a production manager with the capability of performing “what-if” analysis
on production management problems such as production planning and scheduling. Agent-based approaches have been
widely adopted to ensure collaborative decision-making (Lee et al., 2013). Every manufacturing resource, including process
and material handling equipment as well as WIP, is represented as an individual agent, and material handling request-driven
collaboration among these agents implements dynamic WIP routing. The remainder of this paper is organized as follows.
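As a purely illustrative aside (not the event graph-based behavior model proposed in this paper), the following Python fragment sketches the request-driven collaboration idea: WIP agents post material handling requests, and the transport agent that submits the lowest bid (here, simply the closest vehicle on an assumed one-dimensional layout) claims each request. All names, positions, and the bidding rule are assumptions.

from dataclasses import dataclass

@dataclass
class HandlingRequest:
    lot_id: str
    src: int   # source stocker position (assumed one-dimensional layout)
    dst: int   # destination equipment position

class TransportAgent:
    def __init__(self, name, position):
        self.name = name
        self.position = position

    def bid(self, request):
        # A lower bid means the vehicle is closer to the pickup point.
        return abs(self.position - request.src)

    def execute(self, request):
        print(f"{self.name} moves lot {request.lot_id}: {request.src} -> {request.dst}")
        self.position = request.dst

def dispatch(requests, agents):
    # Each request is claimed by the agent that submits the lowest bid.
    for request in requests:
        winner = min(agents, key=lambda a: a.bid(request))
        winner.execute(request)

dispatch([HandlingRequest("L01", 2, 9), HandlingRequest("L02", 5, 1)],
         [TransportAgent("OHT-1", 0), TransportAgent("OHT-2", 6)])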



International Journal of Industrial Engineering, 21(6), 408-420, 2014

LOWER AND UPPER BOUNDS FOR MILITARY DEPLOYMENT PLANNING CONSIDERING COMBAT

Ivan K. Singgih1 and Gyu M. Lee1,*
1 Department of Industrial Engineering, Pusan National University, Busan, Korea
* Corresponding author’s e-mail: glee@pusan.ac.kr

In the military deployment planning problem (DPP), troops and cargoes are transported from source nodes to destination
nodes while considering various constraints, such as supply availability, demand satisfaction, and the availability of the
required multimodal transportation assets. Enemies may be present at several locations and block the transportation of
troops and cargoes to the destinations. In order to reach the destinations, the troops must engage the enemies in combat,
which causes troop losses. To satisfy the demands, additional troops may therefore need to be transported. The use of
various transportation modes leads to the introduction of subnodes and subarcs in the graphical representation. A mixed
integer programming (MIP) formulation is proposed, which is classified as a fractional programming problem. A solution
method that calculates lower and upper bounds is developed, and the gap between the lower and upper bounds is calculated.
The computational results are provided and analyzed.

Keywords: military deployment planning problem; multimodal transportation

(Received on December 1, 2013; Accepted on September 26, 2014)

1. INTRODUCTION

The military DPP deals with the transportation of troops and cargoes from sources to destinations using transportation
assets to satisfy the demand requirements at the destinations. Transportation assets of a single mode or of multiple modes
can be used. In practice, multimodal transportation assets are needed for geographical locations that cannot be traveled by
transportation assets of a single mode alone. The use of multimodal transportation assets requires the unloading of troops
and cargoes from transportation assets of one mode and their loading onto transportation assets of another mode. Each unit
of troops or cargoes is required to be transported to its destination before a certain due date in order to support military
activities in peace or war situations. Late deliveries are undesirable, so penalties are charged for them. However, enemy
troops may be positioned between some nodes and block the transportation of the troops and cargoes. To transport the
troops and cargoes between nodes where enemies are present, the troops must engage the enemies in combat. Each combat
reduces the size of both the troops and the enemies. The objective function minimizes the costs related to the transportation,
transfer, and inventory of troops and cargoes, the procurement and inventory of transportation assets, the number of troops
lost, and the penalties for late deliveries. Each part of the objective function is associated with a certain weight. Several
constraints, namely the availability of supplies and the flow balance of troops, cargoes, and transportation assets, must be
satisfied. The multimodal transportation assets used to transport the troops and cargoes are shown in Figure 1.
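In schematic form, the weighted objective described above can be written as

\min \; Z = w_{1}\,C_{\text{transport}} + w_{2}\,C_{\text{transfer}} + w_{3}\,C_{\text{inventory}} + w_{4}\,C_{\text{assets}} + w_{5}\,L_{\text{troops}} + w_{6}\,P_{\text{late}},

where each C term denotes a cost component, L_troops the number of troops lost, P_late the penalty for late deliveries, and w_1, ..., w_6 the associated weights. The symbols here are illustrative placeholders rather than the notation of the paper's MIP formulation.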
Some studies on the DPP using multimodal transportation assets have been conducted. A multimodal DPP for military
purposes was formulated by Singgih and Lee (2013), who introduced a graphical representation of subnodes and subarcs,
which are used to express the nodes and arcs while considering the usage of multimodal transportation assets. They
formulated the problem as an MIP, obtained solutions using LINGO, and analyzed the characteristics of the problem using
sensitivity analysis. A large-scale multicommodity, multimodal network flow problem with time windows was solved by
Haghani and Oh (1996); a heuristic that exploits an inherent network structure of the problem with a set of constraints
and an interactive fix-and-run heuristic were proposed to solve a very complex problem in disaster relief management. A new
large-scale analytical model was developed by Akgun and Tansel (2007) and solved using CPLEX; the use of relaxation
and restriction methods enabled the model to find solutions in a shorter time. Studies on multicommodity freight
flows over a multimode network were reviewed by Crainic and Laporte (1997).
The Lanchester combat model is a set of differential equation models that describe the change in force levels during
the combat process (Caldwell et al., 2000). Lanchester differential equation models provide insight into the
dynamics of combat and provide information to address critical operational problems. Proposed by F. W.
Lanchester in 1914, the Lanchester combat models have been used in various studies. Kay (2005) explained and gave examples



International Journal of Industrial Engineering, 21(6), 421-435, 2014

ANALYSIS OF SUPPLY CHAIN NETWORK BY RULES OF ORIGIN IN FTA ENVIRONMENT

Taesang Byun1, Jisoo Oh2, and Bongju Jeong2,*
1 Sales Planning Team, Nexen Tire Inc., Bangbae-dong 796-27, Seoul, Korea
2 Department of Information and Industrial Engineering, Yonsei University, 50 Yonsei-ro 120-749, Seoul, Korea
* Corresponding author’s e-mail: bongju@yonsei.ac.kr

This paper presents a supply chain in a Free Trade Agreement (FTA) environment governed by rules of origin and analyzes
it using profit analysis and supply chain planning. The proposed supply chains follow the rules of origin under the wholly
obtained, substantial transformation, and third-country processing criteria. These supply chains can be used to obtain non-
tariff benefits according to the rules of origin. In order to evaluate the validity of the proposed supply chains, we construct
profit models and show that optimal sales prices can maximize net profits. The profit model encompasses the structure of the
supply chain, which enables decision-makers to make strategic decisions on the evaluation and selection of efficient FTAs.
Using the output of the profit models, global supply chain planning models are built to maximize profit and satisfy customer
needs. A case study of a textile company in Korea is provided to illustrate how the proposed supply chain models work.

Keywords: FTA; rules of origin; supply chain management; profit analysis; supply chain planning

(Received on December 16, 2013; Accepted on October 20, 2014)

1. INTRODUCTION

In the recent global market, Free Trade Agreements (FTAs) have been increasing rapidly to maximize international trade
profits among countries. By joining an FTA, each country expects to explore and acquire new markets for export, promote
industrial restructuring, and improve the relevant systems. Moreover, the trade tariff concession results in the economic
effects of an inflow of overseas capital and technologies. Relaxing tariff barriers and the extensive application of rules of
origin expedite the adoption of FTAs and improve multi-national production environments. This is because, in employing
the rules of origin, different tariff rates are applied and resolved with regard to the boundary of origins. Therefore, there is a
strong motivation for global companies to construct a framework for an FTA supply chain and then take advantage of it. In
this research, we propose supply chain networks according to the rules of origin in the FTA environment. We investigate the
profit structure of a company and find the optimal selling price in the FTA supply chain. Companies can then decide on
overseas production for profit maximization and establish supply chain planning for multinational production activities. In
this paper, we pursue the profit maximization of each company involved in the FTA supply chain and try to simplify it for
further analysis. The case study shows how a Korean textile company can take advantage of the non-tariff concession in the
FTA environment.
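As a schematic illustration of the kind of profit analysis described above (not the profit models constructed in this paper), the following Python sketch compares the profit-maximizing price of an exporter with and without FTA eligibility, using an assumed linear demand curve and an assumed ad-valorem tariff; all parameter values are invented.

def profit(p, a=100.0, b=2.0, c=10.0, tariff=0.08, fta_eligible=False):
    # Assumed linear demand D(p) = a - b*p, unit cost c, and an ad-valorem
    # tariff t per unit that drops to zero when rules of origin are met.
    demand = max(0.0, a - b * p)
    duty = 0.0 if fta_eligible else tariff * p
    return (p - c - duty) * demand

def best_price(**kwargs):
    # Coarse one-dimensional search; a closed form exists for this linear case.
    candidates = [p / 10.0 for p in range(100, 500)]
    return max(candidates, key=lambda p: profit(p, **kwargs))

for eligible in (False, True):
    p_star = best_price(fta_eligible=eligible)
    print(eligible, round(p_star, 1), round(profit(p_star, fta_eligible=eligible), 1))

Even this toy comparison shows how FTA eligibility shifts both the optimal price and the attainable profit, which is the kind of trade-off the proposed profit models quantify for each supply chain configuration.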
Although much of the previous literature addresses various issues of the FTA environment, mostly its economic impacts,
few studies have been performed from the viewpoint of the supply chain network. Not surprisingly, some researchers are
interested in competitiveness gains from FTAs (Weiermair and Supapol (1993), Courchene (2003), and Seyoum (2007)).
Regarding the rules of origin, many researchers have considered the benefit of using them in FTA environments to
maximize the profit of companies (Estevadeordal and Suominen (2004), Bhavish et al. (2007), Scott et al. (2007), and
Drusilla et al. (2008)). In terms of pricing policy, Zhang (2001) formulated a profit maximization model to choose the
location of a delivery center considering customer demand and the selling price of a product, and Manzini et al. (2006)
developed mathematical programming models for the design of a multi-stage distribution system that is flexible and
maximizes profit. Hong and Lee (2013) and Lee (2013) suggested the price and guaranteed lead time of a supplier that
offers a fixed guaranteed lead time for a product. Savaskan et al. (2004) modeled the relationship between producer,
retailer, and third party in a recycling environment and developed a profit model for each member. On the other hand, Zhou
et al. (2007) tried to guarantee the profits of all supply chain members using a profit model that considers order quantity
and selling price.



International Journal of Industrial Engineering, 22(1), 1-10, 2015

APPLICATION OF INTEGRATED SUSTAINABILITY ASSESSMENT: CASE STUDY OF A SCREW DESIGN

Zahari Taha 1, H. A. Salaam2,*, S. Y. Phoon1, T.M.Y.S. Tuan Ya3 and Mohd Razali Mohamad4
1 Faculty of Manufacturing Engineering, Universiti Malaysia Pahang, Pekan, Pahang 26600, Malaysia
2 Faculty of Mechanical Engineering, Universiti Malaysia Pahang, Pekan, Pahang 26600, Malaysia
* Corresponding author’s e-mail: hadisalaam@ump.edu.my
3 Department of Mechanical Engineering, Universiti Teknologi PETRONAS, Bandar Sri Iskandar, Tronoh, Perak 31750, Malaysia
4 Faculty of Manufacturing Engineering, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, Durian Tunggal, 76100, Malaysia

Sustainability can be referred to as meeting the needs of the present generation without compromising the ability of future
generations to meet their own needs. For politicians, it is an attempt to shape society, sustain the economy, and preserve
the environment for future generations. Balancing these three criteria is a difficult task since it involves different output
measurements. The aim of this paper is to present a new approach for evaluating sustainability at the product design stage.
Three criteria are involved in this study: manufacturing costs, carbon emissions released into the air, and ergonomic
assessment. The analytic hierarchy process (AHP) is used to generalize the outputs of the three criteria, which are then
ranked accordingly. The highest score is selected as the best solution. In this paper, a simple screw design is presented as a
case study.

Keywords: sustainable assessment; multi-criteria decision method (MCDM); analytic hierarchy process (AHP); screw.

(Received on November 30, 2013; Accepted on October 20, 2014)

1. INTRODUCTION

The United Nations Department of Economic and Social Affairs/Population Division projected that the world population will
increase from 6.1 billion in the year 2000 to 8.9 billion by the year 2050 (United Nations 2004). With such a large
population, the need for consumer products will increase. Many consumers purchase multi-functional products
according to their individual preferences (Thummala, 2011).
In order to fulfill consumer demand for products, manufacturing companies can consider four (4) ways of doing so. The first
way is to expand their production lines or factory areas. By doing this, they can buy more equipment and hire more
workers to increase their productivity; they can also explore new business by adding new products to the
production line to increase company profits. The second way is to increase the number of workers and machines
without expanding the factory building. By doing this, productivity can be increased at minimal cost compared to
expanding the factory.
The third way is to give the operators the opportunity to work overtime or to change the operation to a 24-hour
production system with two or three shifts. Doing this gives the operators a chance to increase their income for
a better living. Lastly, they can outsource the manufacturing of some components. The difficulty with this lies in ensuring the
exact quality needed by the customers and the capability of the third-party company to deliver those components on time.
Whichever way they choose, they need to consider manufacturing, environmental and social costs.
Theoretically, expanding the factory will increase productivity, but the investment cost is high and it can lead to
serious environmental problems since more and more land must be used. On the other hand, failure to expand can lead to
serious social problems such as poverty, which can in turn lead to criminal activities. Meanwhile, allowing workers to
work overtime gives them less resting time, thus affecting their productivity and health.
To protect the environment for future generations, many countries around the world have introduced more stringent
environmental legislation. As a result, manufacturing companies, especially in Malaysia, are forced to abide by these new



International Journal of Industrial Engineering, 22(1), 11-22, 2015

GLOBAL SEARCH OF GENETIC ALGORITHM ENHANCED BY
MULTI-BASIN DYNAMIC NEIGHBOR SAMPLING
Misuk Kim1 and Gyu-Sik Han2,*
1 Department of Industrial Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 151-742, Republic of Korea
2 Division of Business Administration, Chonbuk National University, 567 Baekje-daero, Deokjin-Gu, Jeonju-si, Jeollabuk-do 561-756, Republic of Korea
* Corresponding author’s e-mail: gshan0815@jbnu.ac.kr

We propose a pioneering enhanced genetic algorithm to find a global optimal solution without derivative information. A new
neighbor sampling method driven by a multi-basin dynamics framework is used to efficiently divert from one existing local
optimum to another. The method investigates the rectangular-box regions constructed by dividing the interval of each axis in
the search domain based on information about the constructed multi-basins, and then finds a better local optimum. This
neighbor sampling and the local search are repeated alternately throughout the entire search domain until no better
neighboring local optima can be found. We improve the quality of solutions by applying a genetic algorithm that uses the
resulting point as an initial population generator. We perform two kinds of simulations, benchmark problems and a financial
application, to verify the effectiveness of our proposed approach, and compare the performance of our proposed method with
that of direct search, genetic algorithm, particle swarm optimization, and multi-starts.

Keywords: genetic algorithm, global optimal solution, multi-basin dynamic neighbor sampling, Heston model

(Received on November 27, 2013; Accepted on September 02, 2014)

1. INTRODUCTION

Many practical scientific, engineering, management, and finance problems can be cast as global optimization problems
[Armstrong (1978), Conn et al. (1997), Cont and Tankov (2004), Goldstein and Gigerenzer (2009), Lee (2005), Modrak
(2012), Shanthi and Sarah (2011)]. From the complexity point of view, global optimization problems belong to the hard
problem class, in the sense that the computational time and cost required to solve them increase exponentially with the input
size of the problem. In spite of these difficulties, various heuristic algorithms have been developed to reduce the
computational time and cost of solving them. The classical smooth methods are optimization techniques that need objective
functions that behave smoothly, because the methods use the gradient, the Hessian, or both types of information.
Mathematically, these methods are well established, and some smooth optimization problems are solved quickly. However,
derivative information is not available in most real-world optimization problems, which are large and complicated, so more
time and cost are required to find solutions. Stochastic heuristics such as genetic algorithms, direct search, simulated
annealing, particle swarm optimization, and clustering methods are other popular methods that have proved to work well for
many problems that are impossible to solve using classical methods [Gilli et al. (2011), Kirkpatrick et al. (1983),
Michaelwicz and Fogel (2004), Törn (1986), Wang et al. (2013)]. The performance of these previous approaches depends on
the problem to which the heuristic algorithm is applied and on the initial estimate used for the optimization. One of the main
drawbacks of these stochastic heuristics is that too much computing time and cost are spent locating a local (or improved
local) optimal solution, rather than a global one.
In this paper, we propose a novel enhanced genetic algorithm that incorporates and extends the basic framework of
Lee (2007), a deterministic methodology for global optimization, to reduce the disadvantages of stochastic heuristics.
The method utilizes multi-basin dynamic neighbor sampling to locate an adjacent local optimum by constructing
rectangular-box regions that approximate multi-basins of convergence in the search space. By alternating this neighbor
sampling and the local search, we try to improve and accelerate the search for better local optima. Then, the resulting point is
used as an ancestor of the initial descendant populations to enhance the global search of the genetic algorithm. We also
compare the performance of conventional heuristic global optimization algorithms with that of our proposed method.
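The alternation of local search and neighbor sampling can be illustrated by the following rough Python sketch on a one-dimensional multimodal function. The simple descent and the fixed boxes here only stand in for the rectangular-box regions described above, so this is an illustration of the idea under assumed settings, not the proposed algorithm; the returned point would then seed the initial population of the genetic algorithm.

import math

def f(x):
    # Assumed multimodal test function (illustration only).
    return math.sin(3 * x) + 0.1 * (x - 2.0) ** 2

def local_search(x, step=1e-3, iters=5000):
    # Crude descent standing in for a proper local optimizer.
    for _ in range(iters):
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            break
    return x

def multi_basin_search(lo=-5.0, hi=5.0, boxes=20):
    width = (hi - lo) / boxes
    x = local_search((lo + hi) / 2.0)
    improved = True
    while improved:
        improved = False
        k = int((x - lo) // width)                 # index of the current box
        for nk in (k - 1, k + 1):                  # probe neighboring boxes
            if 0 <= nk < boxes:
                cand = local_search(lo + (nk + 0.5) * width)
                if f(cand) < f(x):
                    x, improved = cand, True
    return x

print(multi_basin_search())   # point that would seed the GA population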



International Journal of Industrial Engineering, 22(1), 23-34, 2015

THE IMPACT OF RELATIONSHIP STRATEGIES ON SURVIVABILITY OF FIRMS INSIDE SUPPLY NETWORKS

Mohamad Sofitra1,2,*, Katsuhiko Takahashi1 and Katsumi Morikawa1
1 Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima, Japan 739-8527
* Corresponding author’s e-mail: m.sofitra@gmail.com
2 Department of Industrial Engineering, Universitas Tanjungpura, Pontianak, Indonesia

A relationship strategy, through which firms engage with each other, is mainly intended to achieve a firm’s goals. One such
goal is to prolong the firm’s survival in the market. At the supply network (SN) level, the interactions among firms by
means of engagement strategies, i.e., cooperation, defection, competition and co-opetition, are complexly interconnected and
coevolve. Due to their complexity and dynamic nature, investigating the outcomes of the coevolution of
interconnected relationship strategies is a non-trivial task. To overcome these difficulties, this paper proposes a cellular
automata simulation framework and adopts a complex adaptive supply networks perspective to develop a model of the
coevolution of interconnected relationship strategies in a supply network. We aimed to determine how and under what
conditions the survivability of firms inside supply networks is affected by the coevolution of the interconnected relationship
strategies among them. We constructed experiments using business environment scenarios of a SN as factors and
observed how different interaction policies of firms could produce network effects that impact the lifespan of firms. We
found that a co-operation policy coupled with a co-opetition policy, in a business environment that favors co-operation, can
promote the lifespan of nodes at both the individual and the SN level.

Keywords: interconnected relationships strategy; complex adaptive supply network; cellular automata; survivability.

(Received on November 29, 2013; Accepted on October 20, 2014)

1. INTRODUCTION

Each firm situated in any network needs to build relationships with other firms. A relationship strategy, through which firms
engage with each other, is mainly intended to achieve a firm’s goals. One goal of firms is to prolong their survival in the market.
Issues in the buyer-supplier relationship strategy and its impact at the individual or dyad level of firms have been studied for
over two decades (Choi & Wu, 2009). However, at the network level it is recognized that, rather than particular relationship
strategies (e.g., cooperation, defection, competition and co-opetition) existing independently of each other, they are
complexly interconnected (Ritter, 2000). None of the relationships in a network are built or operate independently of the others
(Hakansson & Ford, 2002). A small shift in a particular relationship state in a given network could affect the other
relationships that are directly connected and then in turn affect the other, indirectly connected relationships. This domino
effect can result in either a minor or a major complication at both the individual and the SN level. Moreover, firms and their
relationship strategies are very dynamic, similar to living entities that co-evolve over time (Choi, Dooley, &
Rungtusanatham, 2001; Pathak et al., 2007). Therefore, to further our understanding of the complex nature of a SN, we
must extend our analysis from individual firms or the dyadic level to the network level. At the network level of analysis, we
attempt to determine how individual strategies (i.e., cooperation, defection, competition and co-opetition) interconnect
and coevolve inside the SN and investigate the related emergent network effects.
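As a minimal illustration of the cellular automata framing (far simpler than the model developed in this paper), the following Python sketch places firms on a ring, lets each hold a cooperation or defection strategy, and has each firm imitate its best-performing neighbor every generation. The payoff values are assumed, and the paper's model additionally covers competition and co-opetition and tracks firm survivability.

import random

# Assumed prisoner's-dilemma-style payoffs for (my strategy, neighbor strategy).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def step(strategies):
    n = len(strategies)
    payoff = [PAYOFF[(strategies[i], strategies[(i - 1) % n])] +
              PAYOFF[(strategies[i], strategies[(i + 1) % n])] for i in range(n)]
    nxt = []
    for i in range(n):
        # Each firm copies the strategy of the best-performing firm in its neighborhood.
        best = max([(i - 1) % n, i, (i + 1) % n], key=lambda j: payoff[j])
        nxt.append(strategies[best])
    return nxt

firms = [random.choice("CD") for _ in range(20)]
for _ in range(10):
    firms = step(firms)
print("".join(firms))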
A cooperation relationship between firms is motivated by a common goal (e.g., to solve problems, to improve
products and streamline processes, etc.) (Choi, Wu, Ellram, & Koka, 2002) and/or a resource dependency (Ritter (2000);
Lee & Leu (2010)). This type of relationship builds upon teamwork by sharing information and resources. Conversely, a
defection relationship between firms is provoked by short-term opportunistic behavior (e.g., being lured by better terms of a
contract from other firms) (Nair, Narasimhan, & Choi, 2009).
A competition relationship between firms is based on the logic of economic risks (e.g., appropriation risk, technology
diffusion risk, forward integration by suppliers and/or backward integration by buyers, etc.) that can introduce threats to the
core competence of a firm (Choi et al., 2002). Conversely, co-opetition is a strategy employed by firms that simultaneously
mixes competitive actions with co-operative activities (Gnyawali & Madhavan, 2001). The motivation for engaging in co-



International Journal of Industrial Engineering, 22(1), 35-45, 2015

DRIVERS AND OBSTACLES OF THE REMANUFACTURING INDUSTRY IN CHINA: AN EMPIRICAL STUDY

Yacan Wang1,*, Liang Zhang1, Chunhui Zhang1 and Ananda S Jeeva2
1 Department of Economics, Beijing Jiaotong University, No.3 Shangyuan Residency, Haidian District, Beijing 100044, People’s Republic of China
* Corresponding author’s email: ycwang@bjtu.edu.cn
2 Curtin Business School, Curtin University, Perth, Australia 6845

Remanufacturing is one of the prioritized sectors that push sustainability forward and has been vigorously promoted by
two rounds of experimental programs in China. A survey of 7 Chinese remanufacturing enterprises involving 190
respondents is used to empirically identify the current situation and explore the influential factors of the remanufacturing
industry in China. The results of principal component factor analysis indicate that enterprise strategy factors as well as
policy and technical factors are the major drivers of the remanufacturing industry, with the largest contribution rates of
21.424% and 20.486%, respectively. Policy and economic factors and industry environmental factors are the major barriers,
with the largest contribution rates of 29.361% and 19.690%, respectively. This is the first empirical study to explore the
influencing factors of the remanufacturing industry in China. The results provide a preliminary reference for government and
industry to further develop mechanisms to promote remanufacturing practice in China.

Keywords: remanufacturing industry; drivers; barriers; empirical study; China

(Received on November 29, 2013; Accepted on August 10, 2014)

1. INTRODUCTION

The current challenges in scarce resources and polluted environment in China have spurred the circular economy as a new
key to China’s economic growth (Zhu & Geng, 2009). Remanufacturing, as a pillar of circular economy, is pushed forward
by a series of policies of the Chinese government. In 2005, the State Council issued Several Opinions of the State Council
on Speeding up the Development of Circular Economy, which included remanufacturing as an important component of
circular economy. In 2008, the National Development and Reform Commission (NDRC) launched experimental auto-part
remanufacturing programs in 14 selected firms. In 2009, the Ministry of Industry and Information Technology (MIIT) also
launched the first block of experimental machinery and electronic products remanufacturing programs in 35 selected firms
and industry agglomeration areas. In 2011, NDRC issued Information on Further Improving the Work on Experimental
Remanufacturing Programs, which aimed to further expand the category and coverage of remanufacturing products.
These experimental programs have generated some professional remanufacturing firms. Data from China Association
of Automobile Manufacturing showed that by the end of 2010, China had already built a remanufacturing capacity of 0.63
million pc/set including engines, gear boxes, steering booster, dynamos, etc., and 12 million retreaded tires. However,
remanufacturing is still in an infancy stage in China, encumbered by various obstacles. The Chinese government has not
established an independent and robust legal system specific to the remanufacturing industry (Wang, 2010). Furthermore,
there is no clear direction for the growth of the remanufacturing industry (Zhang et al, 2011).
Most of the studies on remanufacturing in China focus on research and development (R&D) of technology and products.
Extant literature that qualitatively analyzes the drivers and barriers of remanufacturing is limited, and empirical studies are
rare. Although Zhang et al. (2011) propose different development paths based on the features of the resources input in
different phases of automobile remanufacturing development, the current situation and influential factors have not been
tested empirically. Hammond et al. (1998) explored the influential factors of the automobile remanufacturing industry in the
USA by carrying out a series of empirical investigations. Seitz (2007) empirically examined the influencing factors of
remanufacturing by interviewing a number of Original Equipment Manufacturers (OEMs). Nevertheless, owing to the
different development level and overall environment of remanufacturing, the influential factors of the remanufacturing industry



International Journal of Industrial Engineering, 22(1), 46-61, 2015

AN INTEGRATED FRAMEWORK FOR DESIGNING A STRATEGIC GREEN SUPPLY CHAIN WITH AN APPLICATION TO THE AUTOMOTIVE INDUSTRY

S. Maryam Masoumi K1; Salwa Hanim Abdul-Rashid1,*, Ezutah Udoncy Olugu1; Raja Ariffin Raja Ghazilla1
1 Centre for Product Design and Manufacturing (CPDM), Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
*Corresponding author’s e-mail: salwa_hanim@um.edu.my

In today’s global business, several organizations have realized that green supply chain practices provide them with
competitive benefits. In this respect, a strategically oriented view of environmental management is critical to supply chain
managers. Given the importance of this issue, an attempt has been made to develop an integrated framework for
designing a Strategic Green Supply Chain (SGSC). Firstly, by reviewing the literature, a causal relationship model is
developed. This model presents the main factors affecting decisions on prioritizing green strategies and initiatives.
Secondly, based on this model, a decision-making tool using the Analytic Network Process (ANP) is provided. This tool
assists companies in prioritizing the environmental strategies and related initiatives in different operational areas of their supply
chain. Finally, in order to provide part of the data required by this tool, a performance measurement system is developed to
evaluate the strategic environmental performance of the supply chain.

Keywords: strategic green supply chain; green supply chain design; analytical network process; environmental strategy;
environmental performance measurement

(Received on November 30, 2013; Accepted on January 2, 2015)

1. INTRODUCTION

In recent years, increased pressure from various stakeholders, such as regulators, customers, competitors, community groups,
global communities, and non-governmental organizations (NGOs), has motivated companies to initiate environmental
management practices not only at the firm level, but also throughout the entire supply chain (Corbett and Klassen 2006,
Gonzalez-Benito and Gonzalez-Benito 2006). This shift from the implementation of green initiatives at the firm level towards
the whole supply chain requires a broader development of environmental management, from the initial sources of raw
material to the end-user customers in both the forward and reverse supply chain (Linton et al. 2007).
Previous studies have introduced a long list of green initiatives associated with various operational areas of supply
chains (Thierry et al. 1995, Zsidisin and Hendrick 1998, Rao and Holt 2005, Zhu et al. 2005). The highly competitive nature
of the business environment requires companies to carefully consider the outcomes of these green initiatives, focusing on
only those that are strategic to their operational and business performance. In fact, making the wrong choice of green
initiatives can lead to wasted cost and effort, and may even reduce competitive advantages (Porter and Kramer 2006). In this
respect, supply chain managers have to consider only the green supply chain initiatives (GSCIs) that are strategic to their
business performance. In other words, there is a need to make informed decisions in terms of selecting practices that will
potentially deliver better value and competitiveness.
Adopting the concept of strategic social responsibility defined by Porter and Kramer (2006), the term Strategic Green
Supply Chain (SGSC) in this paper refers to a green supply chain (GSC) that strategically selects and manages green
initiatives to generate sustainable competitive advantage when implemented throughout the entire chain. The term ‘strategic’
reflects a proactive approach, as opposed to a responsive approach, taken in initiating GSCIs.
According to the theory of the Natural-Resource-Based View (NRBV) developed by Hart (1995) and Hart et al. (2003),
there are three distinct kinds of green strategy: pollution prevention, product stewardship, and clean technology. Each of
these green strategies has its own drivers, which enable it to provide organizations with a specific competitive advantage.
In attempting to decide which green strategy is more suitable for its business, a firm has to consider several determining
factors.
In this study, an integrated framework is developed to assist organizations in designing a SGSC; it provides them with
a basis for selecting the most suitable green strategy for their supply chain and aligning all of their green initiatives with
the selected strategy. This framework provides insight into the strategic importance of green initiatives for a company
and assists managers in strategically managing their green supply chain improvement programmes. The strategic importance
of green strategies to an enterprise will be determined by evaluating the role of these initiatives in meeting the requirements of



International Journal of Industrial Engineering, 22(1), 62-79, 2015

SELECTING AN OPTIMAL SET OF KEYWORDS FOR SEARCH ENGINE ADVERTISING

Minhoe Hur1, Songwon Han1, Hongtae Kim1, and Sungzoon Cho1,*
1 Department of Industrial Engineering, Seoul National University, Seoul, Korea
*Corresponding author’s e-mail: zoon@snu.ac.kr

Online advertisers who want their website to be shown on web search pages need to bid for relevant keywords. Selecting
such keywords is challenging because advertisers need to find relevant keywords with different click volumes and costs.
Recent works have focused on merely generating a list of words using semantic or statistical methodologies. However, these
previous studies do not guarantee that the keywords will actually be used by customers and will subsequently provide large
traffic volume at lower cost. In this study, we propose a novel approach for generating relevant keywords by combining search
log mining and a proximity-based approach. Subsequently, the optimal set of keywords with a higher click volume and minimal
cost is determined. Experimental results show that our method generates an optimal set of keywords that are not only
accurate, but also attract more click volume at less cost.

Keywords: search engine advertising; ad keywords; query logs; knapsack problem; genetic algorithm

(Received on November 30, 2013; Accepted on December 29, 2014)

1. INTRODUCTION

Search engine advertising is a widely used business model in the online search engine system (Chen et al., 2008, Shih et al.,
2013). In this model, advertisers who want their ads to be displayed in the search results page bid on keywords that are related
to the context of ads (Chen Y. et al., 2008). The ads can be displayed when the corresponding keywords are searched and
their bid prices are higher than the minimum threshold (Chen et al., 2008). It is demonstrated that this business model offers a
much better return on investment for advertisers, because those ads are presented to the target users who consciously made
search queries using relevant keywords (Szymanski et al., 2006). Figure 1 shows an example of search engine advertising,
where advertisements are displayed on the result page returned for a query.
To bid on keywords, advertisers need to choose which keywords should be associated with the ads that will
be displayed (Ravi et al., 2010). In general, three widely known criteria apply to good keywords. First,
advertisers need to select keywords that relate closely to their advertisement, so that many potential customers will
query those keywords to find their products or services (Kim et al., 2012). This is the most important step for reducing the gap
between keywords selected by advertisers and their potential customers (Oritz-Cordova and Jansen, 2012). Secondly, among
relevant keywords, it is desirable to choose keywords that attract a larger volume of clicks toward the advertisements
(Ravi et al., 2010). As keywords have their own click volume in the search engine, selecting them to increase the
number of clicks on the ads as much as possible is one of the critical elements in search engine marketing. Finally, when comparing
a group of keywords that are relevant and popular, identifying and selecting keywords that are cheaper than others is
also desirable in order to implement a more efficient and effective marketing campaign with a limited budget.
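As a toy illustration of the budget-constrained selection step (this paper formulates it as a knapsack problem and solves it with a genetic algorithm), the following Python sketch applies a simple greedy clicks-per-cost rule; all keyword figures below are invented.

keywords = [                        # (keyword, expected clicks, cost) -- invented data
    ("running shoes", 1200, 300.0),
    ("buy sneakers online", 800, 150.0),
    ("marathon trainers", 300, 40.0),
    ("cheap shoes", 950, 260.0),
]

def select_keywords(candidates, budget):
    # Greedy stand-in for the knapsack/GA selection: take keywords in order of
    # clicks per unit cost while the budget allows.
    chosen, spent = [], 0.0
    for kw, clicks, cost in sorted(candidates, key=lambda k: k[1] / k[2], reverse=True):
        if spent + cost <= budget:
            chosen.append(kw)
            spent += cost
    return chosen, spent

print(select_keywords(keywords, budget=500.0))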
However, selecting keywords manually by considering such criteria is a challenging and time-consuming task for
advertisers (Abhishek and Hosanagar, 2007). For one, it is difficult to determine which keywords are relevant to the target
ads. Though advertisers generally have a good understanding of their ads, they want to select keywords that not only
represent their ads well but are also used by potential customers who would ultimately be interested in the products or
services they offer. Moreover, keywords have volatile click volumes and costs-per-click, influenced by user search behavior in
the search engine over time. Therefore, it is not easy to grow the influx of customers to a website while reducing costs at
the same time.
To overcome these problems, many approaches have been proposed, and they can be divided into two categories: (1)
generating related keywords by developing automatic methods, so that advertisers can find suitable keywords more easily,
and (2) selecting an optimal set of keywords to maximize objective values, such as click volume or ad effects, under budget
constraints. Though such efforts work well in their own experiments, they have several limitations when applied widely to
real problems. First, some studies have no guarantee that the keywords would actually be queried by users. Generated
keywords should be familiar not only to advertisers but also to potential customers so



International Journal of Industrial Engineering, 22(1), 80-92, 2015

A STUDY ON THE EFFECT OF IRRADIATION ANGLE OF LIGHT ON DEFECT DETECTION IN VISUAL INSPECTION

Ryosuke Nakajima1,*, Keisuke Shida2, and Toshiyuki Matsumoto1
1 Department of Industrial and Engineering, Aoyama Gakuin University, Kanagawa, Japan
* Corresponding author’s e-mail: d5613005@aoyama.jp
2 Department of Administration Engineering, Keio University, Kanagawa, Japan

This study focuses on the difference in the visibility of defects according to the irradiation angle of the light in visual
inspection using fluorescent light, and also considers the relationship between the irradiation angle and defect detection.
In the experiment, the irradiation angle of the light is considered as the experimental factor. Defects whose visibility
differs according to the irradiation angle of the light are reproduced using a tablet PC, and the effect of inspection
movement on defect detection is evaluated. As a result, it is observed that inspection oversights occur depending on the
irradiation angle of the light. It is also observed that as the angle formed by the line of sight and the inspection surface
deviates from perpendicular, defect detection becomes more difficult. Based on these observations, a new inspection
method is proposed to replace the conventional inspection method.

Keywords: visual inspection, peripheral vision, irradiation angle of light, inspection movement

(Received on November 30, 2013; Accepted on October 20, 2014)

1. INTRODUCTION

In order to prevent defective products from being overlooked, product inspection has been given as much attention as
processing and assembling in the manufacturing industries. There are two types of inspection: functional inspection and
appearance inspection. In functional inspection, the functionality of the products is inspected, whereas in appearance
inspection, small visual defects such as scratches, stains, surface dents and unevenness of the coating color are inspected.
Advancements have been made in functional inspection automation because it is easy to determine whether a product is
working (Hashimoto et al., 2009). On the other hand, in appearance inspection, it is not easy to establish standards to
determine whether a product is defective, because there are many types of defects. In addition, the categorization of a
product as non-defective or defective is affected by the size and depth of the defect. Moreover, some products have recently
become more detailed and smaller, and the type of production has shifted to high-mix, low-volume production. Thus, it is
difficult to develop technologies that can discover small defects and to create algorithms that identify different types of defects
with high precision. Therefore, appearance inspection depends on visual inspection using human senses (Kitagawa, 2001)
(Kubo et al., 2009) (Chang et al., 2009).
It is common in visual inspection to overlook defects on defective products. This problem must be solved in the
manufacturing industries. Generally, visual inspection is performed under a fluorescent light, and the inspectors check for
various defects by irradiating the light on the inspection surface. The defects that are frequently overlooked have common
features, including a difference in the visibility of the defects according to the irradiation angle of the light (Hirose et al.,
2003) (Morita et al., 2013). Furthermore, the irradiation angle of the light that makes a defect visible differs with the
condition and type of defect. Therefore, it is necessary to change the irradiation angle of the light by moving the product in
order to detect various defects.
Moreover, the inspection movements of the inspector should change according to the irradiation angle, since the
visibility of a defect is determined by the virtual angle between the irradiation angle of the light and the inspection surface.
Although it is clear that the light should be installed in the appropriate position, the effect of the relation between the
irradiation angle of the light and the inspection movement on defect detection has not been clarified, and no one has
determined the appropriate position for the light to be installed. Therefore, rather than being installed in a consistent
position, the light is installed at either the upper front or upper rear of the object to be inspected.



International Journal of Industrial Engineering, 22(1), 93-101, 2015

HEURISTIC RULES BASED ON A PROBABILISTIC MODEL AND A GENETIC ALGORITHM FOR RELOCATING INBOUND CONTAINERS WITH UNCERTAIN PICKUP TIMES

Xun Tong1, Youn Ju Woo2, Dong-Won Jang2, Kap Hwan Kim2,*
1 Department of Logistics and Maritime Studies, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
2 Department of Industrial Engineering, Pusan National University, Busan, Korea
*Corresponding author’s e-mail: kapkim@pusan.ac.kr

Because the dwell times of inbound containers are uncertain and trucks request containers in a random order, many
rehandling operations are needed for containers stacked on top of the containers to be picked up. By analyzing dwell time
data for various groups of inbound containers, it is possible to derive a probability distribution for each group. Assuming that
the dwell times of each group of inbound containers follow a specific probability distribution, this paper discusses how to
determine the locations for rehandled inbound containers during the pickup process. The aim of this study was to minimize
the total expected number of rehandling steps for retrieving all the inbound containers from a bay. Two heuristic rules were
suggested: a heuristic rule obtained from a genetic algorithm, and a heuristic rule considering the confirmed and potential
rehandlings based on statistical models. A simulation study was performed to compare the performance of the two heuristic rules.

Keywords: container terminal; relocation; simulation; statistics; storage location

(Received on December 01, 2013; Accepted on August 10, 2014)

1. INTRODUCTION

Efficient operation of container yards is an important issue for the operation of container terminals (Ma and Kim, 2012; Jeong
et al., 2012). One of the major operational inefficiencies in container terminals comes from rehandling operations for inbound
containers. Inbound containers may be picked up after discharging only once the required administrative procedures, including
customs clearance, are finished. The pickup time of a container from a port container terminal is determined by the
corresponding consignee or the shipping liner considering various factors, such as the delivery request for the container from the
consignee, the storage charge for the container in the terminal, and the free-of-charge period. However, from the viewpoint of
the terminal operator, the pickup time of an inbound container is uncertain.
Data on inbound containers were collected from a container terminal in Busan, which has a 1,050 m quay, 11 quay
cranes, 30 rubber-tired gantry cranes (RTGCs), and a total area of 446,250 m2. The terminal handled 260,761 inbound
containers during 2012. The average duration of stay of an inbound container at the terminal was 5.7 days. Figure 1 illustrates
the average dwell times of inbound containers picked up by different groups of trucking companies, which were obtained
from the data. Ryu (1998) reported the results of a time study of various operations by RTGCs. According to that study, the
average cycle time for a pickup operation, which is performed by an RTGC in the yard to transfer an inbound container
to a road truck, was 84 seconds. The average cycle time for a rehandling operation by an RTGC within the same bay was 74
seconds. There are 20~40 bays in a block. An RTGC can access all the bays in a block or even bays in neighboring blocks.
However, an RTGC holding a container does not usually move from one bay to another, and so this study focused on the
rehandling operation within one bay.
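As a simple illustration of how dwell-time distributions can guide relocation (a sketch only, not the heuristic rules developed in this paper), the following Python fragment places a relocated container on the stack that minimizes the chance that a container beneath it is requested first, assuming independent exponential dwell times whose rates would come from a company-group analysis such as the one described above; all rates are illustrative.

def rehandle_probability(reloc_rate, below_rates):
    # P(some container below is picked before the relocated one) for
    # independent exponential dwell times (illustrative assumption).
    total_below = sum(below_rates)
    return total_below / (reloc_rate + total_below) if below_rates else 0.0

def choose_stack(reloc_rate, stacks, max_height=5):
    # Pick the feasible stack with the smallest chance of a future rehandle.
    feasible = [i for i, s in enumerate(stacks) if len(s) < max_height]
    return min(feasible, key=lambda i: rehandle_probability(reloc_rate, stacks[i]))

# Stacks listed as the pickup rates (1 / expected dwell days) of the containers in them.
stacks = [[1 / 3.0, 1 / 7.0], [1 / 10.0, 1 / 9.0, 1 / 8.0], []]
print(choose_stack(reloc_rate=1 / 5.0, stacks=stacks))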
Because of the uncertainty of the dwell time (Kim and Kim, 2010), which is the duration of a container's stay in the yard,
rehandling is a serious problem during the pickup operation of inbound containers. Thus, in studies of container terminals,
it is important to minimize the total expected number of relocations during the pickup process. Instead of assuming that the
pickup order of containers is completely unknown, it is possible to reduce the number of relocations by analyzing some
attributes of the containers and utilizing the results of the analysis. Figure 1 illustrates that the average dwell times of
inbound containers that are picked up by trucks from different groups of companies are significantly different from each
other. This figure shows that, by analyzing data on pickup times, various information which may be useful for reducing the
number of rehandles can be derived. Voyages of vessels, vessel carriers, and shippers may be attributes to be used for the



International Journal of Industrial Engineering, 22(1), 102-116, 2015

ADAPTIVITY OF COMPLEX NETWORK TOPOLOGIES FOR DESIGNING RESILIENT SUPPLY CHAIN NETWORKS

Sonia Irshad Mari1, Young Hae Lee1,*, Muhammad Saad Memon1, Young Soo Park2, Minsun Kim2
1 Department of Industrial and Management Engineering, Hanyang University, Ansan, Gyeonggi-do, Korea
* Corresponding Author’s email: yhlee@hanyang.ac.kr
2 Korea National Industrial Convergence Center, Korea Institute of Industrial Technology, Korea

Supply chain systems are becoming more complex and dynamic as a result of globalization and the development of
information technology. This complexity is characterized by an overwhelming number of relations and their
interdependencies, resulting in highly nonlinear and complex dynamic behaviors. Supply chain networks grow and self-
organize through complex interactions between their structure and function. The complexity of supply chain networks
creates unavoidable difficulty in prediction, making it difficult to manage and control them using a linearized set of
models. The aim of this article is to design a resilient supply chain network from the perspective of complex network
topologies. In this paper, various resilience metrics for supply chains are developed based on complex network theory,
and a resilient supply chain growth algorithm is then developed for designing a resilient supply chain network. An
agent-based simulation analysis is carried out to test the developed model based on the resilience metrics. The results of
the proposed resilient supply chain growth algorithm are compared with major complex network models. The simulation
results show that a supply chain network can be designed based on complex network theory, especially as a scale-free
network. It is also concluded that the proposed model is more suitable than general complex network models for the
design of a resilient supply chain network.

Keywords: supply chain network, resilient supply chain, disruption, complex network, agent-based simulation

(Received on December 1, 2013; Accepted on January 02, 2015)

1. INTRODUCTION

The development of information technology and increasing globalization make supply chain systems more dynamic
and complex. Today’s supply chain represents a complex network of interrelated entities, which includes many
suppliers, manufacturers, retailers, and customers. The concept of considering the supply chain as a supply network has
been suggested by many researchers (Surana et al., 2005). It has also been argued that the concepts of complex systems,
particularly complex networks, should be incorporated into the design and analysis of supply chains (Choi et al., 2001;
Pathak et al., 2007). A supply chain is a complex network with an overwhelming number of interactions and
interdependencies among the different entities, processes, and resources. A supply chain network is highly nonlinear,
shows complex multi-scale behavior, has a structure spanning several scales, and evolves and self-organizes through a
complex interplay of its structure and function. However, the sheer complexity of supply chain networks, with their
inevitable lack of predictability, makes it difficult to manage and control them using the assumptions underlying a
linearized set of models (Surana et al., 2005). The concept of the supply chain as a logistics system has therefore
changed from a “linear structure” to a “complex system” (Wycisk et al., 2008). Thus, this new supply network concept is
more complex than the simple supply chain concept. Supply networks comprise the mess and complexity of
networks, including reverse loops, two-way exchanges, and lateral links. They embody a comprehensive, strategic view
of resource management, acquisition, development, and transformation. Recently, many researchers have worked on
developing resilient supply chain networks (Bhattacharya et al., 2012; Klibi et al., 2012; Kristianto et al., 2014; Zeballosa
et al., 2012).
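As a minimal illustration of the scale-free topologies referred to above, the following Python sketch grows a network by plain preferential attachment (in the spirit of the Barabási–Albert model). The resilient supply chain growth algorithm developed in this paper modifies how new firms choose partners, which is not reproduced here; the sketch only shows how degree-proportional attachment produces hub-dominated networks.

import random
from collections import Counter

def grow_network(n_nodes, m_links=2):
    # Baseline preferential attachment, not the paper's resilient growth algorithm.
    edges = [(0, 1), (1, 2), (2, 0)]              # small seed network
    targets = [u for e in edges for u in e]       # repeated entries encode node degree
    for new in range(3, n_nodes):
        chosen = set()
        while len(chosen) < m_links:
            chosen.add(random.choice(targets))    # degree-proportional partner choice
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

edges = grow_network(200)
degree = Counter(u for e in edges for u in e)
print("max degree:", max(degree.values()), "min degree:", min(degree.values()))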
Generally, supply networks exhibit complex dynamic behaviors and are highly nonlinear. They grow and self-
organize with the help of complex connections between their structure and function. Because of this complexity, it is
very difficult to control and manage a supply network. Due to these complexities, supply networks require robustness
to cope with disruption risk, and they should also be resilient enough to bounce back to their original state after disruptions
(Christopher et al., 2004). Furthermore, the instability of today’s business organizations and changing market
environments requires a supply network to be highly agile, dynamic, re-configurable, adaptive, and scalable, so that it can
effectively and efficiently respond to satisfy demand. Many researchers have investigated supply networks using various
static approaches, such as control theory, programming methods, and queuing theory. For example, Kristianto et al.
(2014) proposed a resilient supply chain model that optimizes inventory and transportation routes. Klibi et al. (2012)



International Journal of Industrial Engineering, 22(1), 117-125, 2015

OPTIMAL MAINTENANCE OPERATIONS USING A RFID-BASED MONITORING SYSTEM

Sangjun Park1, Ki-sung Hong2, Chulung Lee3,*
1 Graduate School of Information Management and Security, Korea University, Seoul, Korea
2 Graduate School of Management of Technology, Korea University, Seoul, Korea
3 School of Industrial Management Engineering and Graduate School of Management of Technology, Korea University, Seoul, Korea
*Corresponding author’s e-mail: leecu@korea.ac.kr

A high-technology manufacturing operation requires extremely low levels of raw material shortage because of the critical
cost of recovering from a manufacturing line down. It is important to determine the replenishment time of a raw material
against any line-down risk. We propose an RFID monitoring and investment decision system in the context of semiconductor
raw material maintenance operations. This paper provides the framework of the RFID monitoring system, a mathematical
model to calculate the optimal replenishment time, and a simulation model for the RFID investment decision under different
risk attitudes, with an aggressive new supply notion of “Make to Consume.” The simulation results show that the frequency of
replenishment and the value of the RFID monitoring system increase as the manufacturer’s risk factor, which reflects the
degree of risk aversion, is reduced.

Keywords: RFID; maintenance operation; value of information; risk-averse attitude

(Received on December 1, 2013; Accepted on January 10, 2015)

1. INTRODUCTION

Improvements in modern information technology have been applied to diverse industries (Emigh 1999). Despite this
progress, most concerns of enterprises still focus on their daily safety stock (SS) operations management. One of the key
purposes of keeping an SS is to have an immediate supply into a manufacturing line to prevent any risk of sales loss or
manufacturing line-down. However, the traditional SS program based on “Make to Stock” (MTS) often faces shortage and
overage issues in practice for various reasons, such as fluctuating orders, incorrectly estimated demand information, and a
lead time that causes a bullwhip effect (Lee et al. 1997, Kelle and Milne 1999). A high level of SS increases the inventory
holding cost, while a low level of SS increases the possibility of a supply shortage and a delivery expedition cost. For this
reason, diverse sophisticated supply chain programs have been introduced to decrease the bullwhip effect and the inventory
level. The Vendor Managed Inventory (VMI) program has been introduced as one such supply chain initiative (Forrester,
1958, Cachon and Zipkin 1999). VMI reduces or even removes the customer SS at a manufacturing site by sharing the
customer’s (manufacturer’s) real-time stock information with the vendor. However, it still relies heavily on the accuracy of
demand forecasts. In particular, the vendor must take on additional supply liabilities and inventory holding costs for a certain
inventory level under a VMI agreement, compared to a traditional Order to Make (OTM) model based on firm orders. This
means that vendors have to keep additional buffer stocks in their warehouses for timely VMI replenishment, in addition to
the stored VMI volume at customer manufacturing sites, considering production and replenishment lead times. Also, the
customer takes on the liability of consuming a certain level of inventory and the risk of keeping dead stock under the VMI
agreement when customers and vendors improperly set the SS quantity with incorrect sales forecasting information. In
particular, high-technology industries, such as the semiconductor industry, are characterized by short product life cycles and
fast market changes. Thus, the overage and consumption liability can be a critical burden and risk for both vendors and
customers.
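To make the safety-stock trade-off described above concrete, the short sketch below computes the textbook MTS buffer
SS = z · σ_d · √L for a few service levels; the demand and lead-time figures are hypothetical illustration values, not data from
the paper.

    from statistics import NormalDist

    def safety_stock(daily_demand_sd, lead_time_days, service_level):
        # Classical MTS buffer: SS = z * sigma_d * sqrt(L), assuming i.i.d. normal daily demand.
        z = NormalDist().inv_cdf(service_level)
        return z * daily_demand_sd * lead_time_days ** 0.5

    for sl in (0.90, 0.95, 0.99):
        ss = safety_stock(daily_demand_sd=120, lead_time_days=7, service_level=sl)
        print(f"cycle service level {sl:.0%}: safety stock ~ {ss:,.0f} units")

Raising the target service level drives the buffer (and the holding cost) up quickly, which is exactly the shortage-versus-overage
tension the MTS discussion points to.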
For this reason, there have been a number of studies focusing on improving supply accuracy. The use of Radio
Frequency Identification (RFID) is a recent systematic approach that has contributed to the significant growth in sharing



International Journal of Industrial Engineering, 22(1), 126-133, 2015

OPTIMAL NUMBER OF WEB SERVER RESOURCES FOR JOB APPLICATIONS

Xufeng Zhao1, Syouji Nakamura2,*, and Toshio Nakagawa3

1 Department of Mechanical and Industrial Engineering
Qatar University
Doha, Qatar
2 Department of Life Management
Kinjo Gakuin University
Nagoya, Japan
*Corresponding author’s e-mail: snakam@kinjo-u.ac.jp
3 Department of Business Administration
Aichi Institute of Technology
Toyota, Japan

The main purpose of this paper is to propose optimization problems that determine how many web servers N should be
provided for net jobs with random process times. We first consider the case in which a single job with random time S is
processed, and then the case in which a number n of jobs with successive times are processed. In practice, the number n may
not be a constant value that can be predefined, so we modify the second model by supposing n to be a random variable. Next,
we introduce shortage and excess costs into the models to consider the costs suffered before and after failures of the server
system. We obtain the total expected cost for each model and optimize it analytically. When the physical server failure time
and the job process time are exponentially distributed, the optimal numbers that minimize the expected costs are computed
numerically.

Keywords: web server; random process; multi-jobs; system failure; shortage cost.

(Received on December 1, 2013; Accepted on September 15, 2014)

1. INTRODUCTION

A web server system is a form of net service in which computers process jobs without regard to the physical constitution
of the computations. This is of great importance in net services due to its efficiency and flexibility. For example, when
demand on a data center increases and its facilities and resources have approached their upper limit, such a web server
system can assign all available computing resources in a flexible manner, so that resources can be shared among multiple
users and accessed by authorized devices through networks.
Queuing theory (Sundarapandian, 2009) is the study of waiting lines and is used for predicting queue lengths and
waiting times. Queuing models have been widely applied in decisions on the resources that should be provided, e.g.,
sequencing jobs that are processed on a single machine (Sarin et al., 1991). However, queuing models involve many
algorithms that are time-consuming given the load on the systems, and in general it is difficult to predict the process times
of jobs exactly (Chen and Nakagawa, 2012, 2013). Further, most models have paid little attention to the failures and
reliability (Nakagawa, 2008; Lin, 2013) of web server systems in operation.
Many studies have addressed the problem of downtime cost after system failure (Nakagawa, 2008), which may be
considered to arise from carelessly scheduled plans. By comparing the failure time of the provided servers with the required
process time, we also pay attention to the case in which process times finish too far in advance of failure times, which wastes
resources because more jobs might have been completed. We therefore introduce shortage and excess costs into the models
by considering the costs suffered both before and after server failures.
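As a rough numerical illustration of this shortage-versus-excess trade-off, the sketch below evaluates a toy expected-cost
function over N and picks the minimizer. The cost structure (exponential completion and failure times, a linear per-server
cost) and all parameter values are assumptions made only for illustration; they are not the paper's model.

    def expected_cost(n, lam_fail=0.02, theta=0.05, c_shortage=500.0, c_server=3.0):
        # Toy model (assumed): with n servers the workload completes at an Exp(n*theta) time,
        # the server system fails at an independent Exp(lam_fail) time, and a shortage cost is
        # paid when failure precedes completion; each extra server adds an excess (holding) cost.
        p_shortage = lam_fail / (lam_fail + n * theta)
        return c_shortage * p_shortage + c_server * n

    costs = {n: expected_cost(n) for n in range(1, 31)}
    n_star = min(costs, key=costs.get)
    print(f"optimal number of servers N = {n_star}, expected cost = {costs[n_star]:.1f}")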
From these viewpoints, this paper proposes optimization problems that determine how many web server resources
should be provided for net job computations with random process times. That is, we suppose that a web server system with
N (N = 1, 2, …) servers is available for random job processes, where N is optimized to minimize the total expected cost



International Journal of Industrial Engineering, 22(1), 134-146, 2015

U-SHAPED ASSEMBLY LINE BALANCING WITH TEMPORARY WORKERS

Koichi Nakade1,*, Akiyasu Ito2 and Syed Mithun Ali2

1 Department of Civil Engineering and Systems Management
Nagoya Institute of Technology
Gokiso-cho, Showa-ku
Nagoya, JAPAN 466-8555
*Corresponding author’s e-mail: nakade@nitech.ac.jp
2 Department of Civil Engineering and Systems Management
Nagoya Institute of Technology
Gokiso-cho, Showa-ku
Nagoya, JAPAN 466-8555

U-shaped assembly lines are useful for the efficient allocation of workers to stations. In assembly lines, temporary workers
are employed to cope with fluctuations in demand. The sets of tasks feasible for temporary workers differ from those of
permanent workers, and the tasks familiar to individual permanent workers also vary. For the U-shaped assembly line
balancing problem under these conditions, the optimal cycle time for a given number of temporary workers and the optimal
number of workers for a given cycle time are derived and compared between U-shaped line balancing and straight line
balancing. We also discuss the optimal allocation for a single U-shaped line and for two U-shaped lines. In several cases, in
particular when high throughputs are required, it is shown numerically that the number of temporary workers in the optimal
allocation for two lines is less than that in the optimal allocation for a single line.

Keywords: u-shaped line; optimal allocation; mathematical formulation; temporary workers; permanent workers

(Received on November 25, 2013; Accepted on February 26, 2015)

1. INTRODUCTION

Assembly line balancing is very important because balancing the workload among workers reduces labor costs and
increases the throughput of finished products. Theory and solution methods for assembly line balancing have therefore
been developed. For example, for mixed models in a straight line, Chutima et al. (2003) have applied a fuzzy genetic
algorithm for minimizing production time, and Tiacci et al. (2006) have presented a genetic algorithm for assembly line
balancing with parallel stations. In Villarreal and Alanis (2011), simulation is used to guide the improvement efforts in the
redesign of a traditional line.
In assembly line balancing, a U-shaped assembly line is effective for allocating workers and tasks to stations, because
more types of allocation are available than in straight lines, and an appropriate arrangement leads to higher throughput.
Baybars (1986) has formulated a U-shaped line as a mixed integer program and proposed a heuristic algorithm for solving
it. Recently, Hazir and Dolgui (2011) have proposed a decomposition algorithm. Chiang et al. (2007) have proposed a
formulation of U-shaped assembly line balancing with multiple lines, and have shown by numerical examples that there are
cases in which multiple lines can operate with fewer stations than a single line.
Temporary workers are sometimes placed in assembly lines, because the system can remain efficient by increasing or
decreasing the number of temporary workers in response to fluctuations in demand. The sets of tasks feasible for temporary
workers differ from those of permanent workers, and the jobs familiar to individual permanent workers may also differ. In
this case, it is important to allocate permanent and temporary workers to stations appropriately by considering their abilities
for different types of tasks. Corominas et al. (2008) have considered straight line balancing with temporary workers, where
the tasks that temporary workers can process are limited and the time temporary workers need to finish their tasks is
assumed to be longer than that of permanent workers. In general, however, the tasks that permanent workers can complete
in standard time differ among those workers, because their skills differ.
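To give a feel for how worker-dependent task times and task eligibility push up the cycle time, the sketch below enumerates
contiguous task-to-station splits for a tiny straight-line relaxation. The task data, the eligibility restriction, and the
simplification to contiguous splits of a fixed sequence are assumptions made for illustration; the paper's model is a U-shaped
integer program solved with Xpress.

    from itertools import combinations

    # Illustrative task data (assumed): processing times differ by worker type, and one task
    # (index 2) can only be performed by a permanent worker.
    task_time = {"perm": [4, 6, 3, 5, 4, 2], "temp": [6, 9, 4, 7, 6, 3]}
    temp_can_do = [True, True, False, True, True, True]
    n_tasks = len(temp_can_do)

    def min_cycle_time(worker_types):
        # Enumerate contiguous splits of the (precedence-feasible) task sequence into stations
        # staffed by the given worker types, and return the smallest achievable cycle time.
        s = len(worker_types)
        best = float("inf")
        for cuts in combinations(range(1, n_tasks), s - 1):
            bounds = (0, *cuts, n_tasks)
            ct, feasible = 0, True
            for st, wt in enumerate(worker_types):
                seg = range(bounds[st], bounds[st + 1])
                if wt == "temp" and not all(temp_can_do[i] for i in seg):
                    feasible = False
                    break
                ct = max(ct, sum(task_time[wt][i] for i in seg))
            if feasible:
                best = min(best, ct)
        return best

    print("3 permanent workers      :", min_cycle_time(("perm", "perm", "perm")))
    print("2 permanent + 1 temporary:", min_cycle_time(("perm", "temp", "perm")))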
In this paper, we consider a U-shaped assembly line balancing problem with a fixed number of distinct permanent
workers and temporary workers under precedence constraints on tasks. The model is formulated as an integer program for
deriving the minimal cycle time for a given number of temporary workers, or the minimal number of temporary workers
under a given cycle time, and an algorithm is proposed to derive the throughput and an optimal allocation of workers and
jobs to stations for all possible numbers of temporary workers. Then we compare the optimal values between U-shaped line
balancing and straight line balancing in numerical examples by using the software Xpress. In addition,



International Journal of Industrial Engineering, 22(1), 147-158, 2015

A GAME THEORETIC APPROACH FOR THE OPTIMAL INVESTMENT DECISIONS OF GREEN INNOVATION
IN A MANUFACTURER-RETAILER SUPPLY CHAIN

Sha Xi1 and Chulung Lee2,*

1 Graduate School of Information Management and Security
Korea University
Seoul, Republic of Korea
2 School of Industrial Management Engineering and Graduate School of Management of Technology
Korea University
Seoul, Republic of Korea
*Corresponding author’s e-mail: leecu@korea.ac.kr

With consumers’ increasing awareness of eco-friendly products, manufacturers and retailers are proactive in investing in
green innovation. This paper analyzes a single-manufacturer, single-retailer supply chain in which both participants are
engaged in green innovation investment. Consumer demand depends on the selling price and the investment level of green
innovation. We consider the effects of consumer environmental awareness, the perception difficulty of green products, and
the degree of goods’ necessity on decision making. According to the relationship between the manufacturer and the retailer,
three non-coordinated game structures (Manufacturer-Stackelberg, Retailer-Stackelberg, and Vertical Nash) and one
coordinated supply chain structure are proposed. The pricing and green innovation investment levels are investigated under
these four supply chain structures. A Retail Fixed Markup (RFM) policy is analyzed for the case where channel members fail
to achieve supply chain coordination, and its effects on supply chain performance are evaluated. We numerically compare
the optimal solutions and profits under the coordinated, Manufacturer-Stackelberg, and Retail Fixed Markup supply chain
structures and provide managerial insights for practitioners.

Keywords: green supply chain management; consumer environmental awareness; product type; game theory

(Received on November 30, 2013; Accepted on February 26, 2015)

1. INTRODUCTION

With the escalating deterioration of the environment over the past decades, Green Supply Chain Management has attracted
increasing attention from entrepreneurs and researchers. Public pressure, such as consumer demand for eco-friendly products,
first set companies thinking about greening. Nowadays, companies proactively invest in green innovation and regard it as a
potential competitive advantage rather than a burden. Porter (1995) explained the fundamentals of greening as a competitive
strategy for business practitioners and reported that green investment may increase resource productivity and save costs.
People are increasingly aware of environmental problems and willing to behave in an eco-friendly way. According to a
report by Cone Communications (2013), 71% of Americans take environmental factors into consideration and 45% of
consumers actively gather environmental information about the products they are interested in. In a meta-analysis of 83
research papers, Tully and Winer (2013) found that more than 60% of consumers are willing to pay a positive premium for
socially responsible products and, on average, those consumers are willing to pay 17.3% more for these products. The
increasing consumer demand for eco-friendly products drives companies to engage in green innovation to differentiate their
products (Amacher et al., 2004; Ibanez and Grolleau, 2008; Borchardt et al., 2012). Land Rover, maker of some of the
world’s most luxurious and stylish 4x4s, has launched the Range Rover Evoque, regarded as the lightest, most fuel-efficient
Range Rover, to meet requirements for lower CO2 emissions and fuel economy. LG has produced a water-efficient washing
machine which saves 50L or more per load and uses less detergent. Meanwhile, retailers have also begun investing in green
innovation. Home Depot, an American home improvement products retailer, conducts business in an environmentally
responsible manner, leading in reducing greenhouse gas emissions and in selecting manufacturers of eco-friendly products.
To explain the properties and functions of eco-friendly products, Home Depot also provides leaflets, product labeling, and
in-store communication, which help consumers understand eco-friendly products.
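To illustrate the kind of Manufacturer-Stackelberg analysis the abstract describes, the sketch below pairs a closed-form
retailer price response with a grid search over the manufacturer's wholesale price and green-investment level. The linear
demand form, the quadratic investment cost, and every parameter value are illustrative assumptions, not the paper's model.

    import numpy as np

    # Hypothetical linear demand with a green-investment effect: q = a - b*p + gamma*theta.
    a, b, gamma = 100.0, 2.0, 8.0      # market size, price sensitivity, environmental awareness
    c, k = 10.0, 30.0                  # unit production cost, quadratic green-investment cost

    def retailer_price(w, theta):
        # Follower's best response: argmax_p (p - w) * (a - b*p + gamma*theta).
        return (a + gamma * theta + b * w) / (2 * b)

    best = None
    for w in np.linspace(c, 45.0, 351):            # leader's wholesale price grid
        for theta in np.linspace(0.0, 10.0, 201):  # leader's green-innovation level grid
            p = retailer_price(w, theta)
            q = max(0.0, a - b * p + gamma * theta)
            profit_m = (w - c) * q - 0.5 * k * theta ** 2
            if best is None or profit_m > best[0]:
                best = (profit_m, w, theta, p, q)

    print("manufacturer profit %.1f at w = %.2f, theta = %.2f (retail price %.2f, demand %.1f)" % best)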
Most companies make green innovation investment decisions without considering their manufacturer’s or retailer’s
decisions. Driven by the requirements of operational efficiency and environmental protection, companies have tried to
improve the performance of the entire supply chain rather than that of a single supply chain member. Beamon (1999) discussed the



International Journal of Industrial Engineering, 22(1), 159-170, 2015

DYNAMIC PRICING WITH CUSTOMER PURCHASE POSTPONEMENT

Kimitoshi Sato

Graduate School of Finance, Accounting & Law
Waseda University
Japan
*Corresponding author’s e-mail: k-sato@aoni.waseda.jp

We consider a dynamic pricing model for a firm that sells perishable products to customers who have the potential to
postpone the purchase decision to reduce their perceived risk. The firm has a competitor in the market and knows that the
competitor adopts a static pricing strategy. We assume that the customer arrivals follow a stochastic differential equation
with delay and establish a continuous-time model so as to maximize the expected profit. When the probability distribution
of the customers’ reservation value is exponential and its parameter is constant in time, a closed-form optimal pricing
policy is obtained. Then, we show the impact of the competitor's pricing policy on the optimal price sample path through a
martingale approach. Moreover, we show that the purchasing postponement reduces the firm’s total expected profit.

Keywords: revenue management; dynamic pricing; stochastic delay equation

(Received on November 28, 2013; Accepted on February 26, 2015)

1. INTRODUCTION

We consider a dynamic pricing policy of a firm that faces the problem of selling a fixed stock of products over a finite
horizon in a competitive market and knows that the competitor adopts a static pricing strategy. Such a situation can be
found everywhere. Examples include high-speed rail versus low-cost carriers, suite versus regular hotel rooms, national
versus store brands, department versus Internet shops, etc. Since the static pricing policy provides a simple and clear price
to customers, some companies (especially firms offering the high-quality product) place importance on this advantage.
In this paper, we investigate how the customer behavior of delayed purchases impacts the pricing strategy of the firm.
Causes of delay in customer decision-making include the difficulty of selecting the product and perceived risk. Some
non-purchasing customers will return to a shop or web site at intervals. Thus, the number of customers arriving at present is
affected by some of the customers who arrived previously. Pricing without considering such behavior may affect the total
revenue of the firm.
Recently, various authors have considered pricing policies with strategic customer behavior in the management science
literature (Levin et al. 2009, Liu and Zhang, 2013). Strategic customer behavior means that customers compare the current
purchasing opportunity to potential future opportunities and decide whether to purchase immediately or to wait. These
papers model customers’ purchase timing so as to maximize their individual consumer surplus. The strategic customers
take future price expectations into account in their purchase decisions.
Unlike previous works, we consider the number of customers who postpone purchases at an aggregate level, rather than at
the individual customer level. The proportion of customers who postpone the purchase depends only on the time of arrival;
in other words, the earlier the arrival, the longer the delay in purchasing the product. To take such customer behavior into
account, we model the problem as a stochastic control problem that is driven by a stochastic differential equation with
delay.
Larssen and Risebro (2003) and Elsanosi et al. (2001) consider applications of the stochastic control problem with delay
to a harvesting problem and to consumption and portfolio optimization problems, respectively. Bauer and Rieder (2005)
provide conditions that enable the stochastic control problem with delay to be reduced to a problem that is easier to solve.
By using these conditions, we show that our problem can be reduced to a model similar to that of Sato and Sawaki (2013),
which does not take the delay into account. Then, we obtain a closed-form optimal pricing policy when the probability
distribution of the reservation value is exponential. Xu and Hopp (2006) apply martingale theory to investigate the trend of
optimal price sample paths in a dynamic pricing model for the exponential demand case. Xu and Hopp (2009) consider
dynamic pricing in continuous time in which the customer arrivals follow a non-homogeneous Poisson process. They show
that the trend of the optimal price increases (decreases) when customers’ willingness-to-pay increases (decreases) in time.
We also apply martingale theory to study how the competitor’s pricing strategy and customers’ delay behavior affect the
optimal price path when customers’ willingness-to-pay is constant in time.
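As a back-of-the-envelope illustration of the aggregate postponement effect noted above (a toy Monte Carlo, not the paper's
stochastic-delay model), the sketch below compares revenue at a fixed price when a growing fraction of willing buyers defer
their decision and only some of them return. All parameters are hypothetical.

    import random

    random.seed(1)

    def expected_revenue(price, postpone_frac, return_prob, n_customers=100_000, wtp_mean=100.0):
        # Each arriving customer draws an exponential willingness-to-pay; willing buyers either
        # purchase now or postpone, and a postponing customer returns only with some probability.
        revenue = 0.0
        for _ in range(n_customers):
            if random.expovariate(1.0 / wtp_mean) < price:
                continue                                   # never willing to buy at this price
            if random.random() < postpone_frac and random.random() >= return_prob:
                continue                                   # postponed and never came back
            revenue += price
        return revenue / n_customers

    for frac in (0.0, 0.2, 0.4):
        print(f"postponement fraction {frac:.1f}: revenue per arrival ~ {expected_revenue(120, frac, 0.6):.1f}")

The larger the postponing fraction, the lower the revenue per arrival, which is consistent with the qualitative finding that
purchase postponement reduces the firm's expected profit.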



International Journal of Industrial Engineering, 22(1), 171-182, 2015

INTEGRATION OF SCENARIO PLANNING AND DECISION TREE ANALYSIS FOR NEW PRODUCT
DEVELOPMENT: A CASE STUDY OF A SMARTPHONE PROJECT IN TAIWAN

Jei-Zheng Wu1, Kuo-Sheng Lin2,*, and Chiao-Ying Wu1

1 Department of Business Administration, Soochow University
56 Kueiyang St., Sec. 1, Taipei 100, Taiwan, R.O.C.
2 Department of Financial Management, National Defense University
70 Zhongyang N. Rd., Sec. 2, Taipei 112, Taiwan, R.O.C.
*Corresponding author’s e-mail: arthurlin112@gmail.com

Although the demand for smartphones has increased rapidly, the R&D and marketing of smartphones have encountered
severe competition in a dynamic environment. Most studies on new product development (NPD) have focused on the
traditional net present value method and real options analysis, which lack the flexibility required to model asymmetric
multistage decisions and flexible uncertain states. The aim of this study was to integrate scenario planning and decision tree
analysis for NPD evaluation. Through such integration, scenarios for modeling uncertainties can be generated
systematically. This study presents a case study of a Taiwanese original equipment manufacturing company for validating
the proposed model. Compared to the performance of realized decisions, the proposed analysis is more robust and
minimizes risk if the R&D resource allocation is appropriate. Two-way sensitivity analysis facilitates balancing the
probability of R&D success with the R&D cost of an R&D project becoming profitable.

Keywords: decision tree analysis; scenario planning; new product development project; influence diagram; discounted cash
flow
(Received on December 1, 2013; Accepted on February 26, 2015)

1. INTRODUCTION

Over the past decade, the mobile phone market has exhibited a substantial increase in demand; sales have increased from a
relatively small number of phones in the 1990s to 140 million today. The integration of communication, entertainment, and
business functions with the availability of simple and fashionable designs has contributed to the increasing use of mobile
communication products. New product development (NPD) projects for mobile phones often encounter resource or
budgetary limitations, resulting in limited choices of project investments. Moreover, NPD involves high risk and
uncertainties. When new product investments are financially evaluated, the most common questions are whether projects
are worth investing in and how all uncertainties can be factored into the evaluation, including the uncertainty in the
temporal variation of the product value after launch.
The net present value (NPV) method, also known as the discounted cash flow (DCF) method, is commonly used for
budgeting capital and evaluating investment in R&D projects. The traditional NPV method involves applying the risk-free
rate and risk-adjusted discount rate for discounting future expected cash flows, including financial benefits and expenditure,
to derive the NPV (Brandão and Dyer 2005). A project is considered investment worthy only if the NPV is positive.
Although the NPV method is simple and intuitive, its applications are limited because of the unrealistic assumptions of (1)
reversible investment and (2) nondeferrable decisions. According to the reversible investment assumption, an investment
can be undone and incurred expenditure can be recovered (Dixit and Pindyck 1995). Furthermore, the nondeferrable
decision assumption requires the R&D investment decision to be made immediately. Because it entails using only one
scenario (the so-called now-or-never scenario) for decision-making, the NPV method evaluates one-stage decisions without
considering contingencies or changes that reflect future uncertainties (Trigeorgis and Mason 1987).
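The worked sketch below makes the contrast concrete: a plain discounted-cash-flow NPV next to a small two-stage decision
tree rolled back from its leaves, where the post-R&D launch decision can still be abandoned. All cash flows, probabilities,
and the simplified timing of the launch cost are hypothetical illustration values, not the case-study data.

    def npv(rate, cash_flows):
        # Discounted cash flow: cash_flows[t] is received at the end of year t.
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    r = 0.10
    rd_cost = -50.0                               # paid now, at the R&D decision
    launch_cost = -120.0                          # paid at the commercialization decision (timing simplified)
    payoff_good = npv(r, [0, 0, 150, 180, 120])   # sales path in a favorable market scenario
    payoff_bad = npv(r, [0, 0, 40, 30, 10])       # sales path in an unfavorable market scenario

    p_rd_success, p_good_market = 0.7, 0.6
    launch_value = launch_cost + p_good_market * payoff_good + (1 - p_good_market) * payoff_bad
    commercialize = max(launch_value, 0.0)        # the firm may abandon after seeing the R&D outcome
    project_value = rd_cost + p_rd_success * commercialize

    print(f"value of launching = {launch_value:.1f}, expected project value = {project_value:.1f}")

The max(·, 0) at the commercialization node is what the one-shot NPV calculation leaves out: the option to stop after the
R&D outcome is observed, which is precisely the kind of multistage flexibility the decision-tree treatment captures.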
In practice, information on the reversibility, uncertainty, and timing of decisions is critical for managers in making
R&D investment decisions at the strategic level (Dixit and Pindyck 1995). An R&D project entails at least four stages: (1)
initialization, (2) outcome, (3) commercialization, and (4) market outcome (Faulkner 1996). In responding to future
uncertainties, managers require flexibility to adjust their actions by using “real options,” such as deferring decisions,
altering the operation scale, abandoning or switching the project, focusing on growth, and engaging in multiple interactions
(Trigeorgis 1993). Real option analysis is complementary to the NPV method, in that the total project value can be
formulated as the sum of the NPV, adjusted option value, and abandonment value (van Putten and MacMillan 2004).
Considering the real option of exercising the right to manage real assets without obligation to proceed with actions when
anticipating uncertainties, R&D project investment is based on a multistage, sequential decision-making process (Ford and
Sobek 2005).
International Journal of Industrial Engineering, 22(1), 183-194, 2015
 

TRADE-INS STRATEGY FOR A DURABLE GOODS FIRM FACING STRATEGIC CONSUMERS

Jen-Ming Chen1,* and Yu-Ting Hsu2

1,2 Department of Industrial Management
National Central University
300 Jhongda Road, Jhongli City, Taoyuan County, Taiwan, 32001
*Corresponding author’s e-mail: jmchen@mgt.ncu.edu.tw

A trade-in rebate from the manufacturer to consumers is a device commonly used by a durable goods firm to price
discriminate between new and replacement buyers. It creates a segmentation effect by offering different prices to different
groups of customers. This study deals with such an effect by considering three trade-in policies facing the firm, i.e., no
trade-ins, trade-ins to replacement consumers with high-quality used goods, and trade-ins to all replacement consumers. The
study determines the optimal pricing and/or trade-in rebate, and examines the strategic choice among the three options facing
the firm. We develop analytic models that incorporate key features of durable goods into the formulation, namely the
deterioration rate and the quality variation of the used goods. Our findings include the following: the strategic choice among
the three options depends critically on these two features and on the price of new goods, and the trade-ins-to-all policy
outperforms the others when the deterioration rate is high and/or the new goods price is high.

Keywords: trade-ins; rebate; deterioration; utility assessment; stationary equilibrium

(Received on December 3, 2013; Accepted on February 26, 2015) 

1. INTRODUCTION

An original equipment manufacturer often faces two distinct types of consumers in the market: replacement buyers and new
buyers. Especially in a durable goods market, replacement purchases represent a significant portion of total sales. In highly
saturated markets such as refrigerators and electric water heaters, replacement purchases account for between 60% and 80%
of annual sales in the United States (Fernandez, 2001). In the automobile industry, approximately half of all new car sales
involve a trade-in (Zhu, Chen, & Dasgupta, 2008; Kim et al., 2011). To increase sales and customers’ purchasing frequency,
the firm usually adopts a price discrimination approach by offering replacement buyers a special trade-in rebate, which refers
to the firm’s decision to accept a used good as partial payment for a new good. Replacement customers thus pay less for new
goods through redeemed rebates. In the cellphone industry, Apple offers replacement customers a trade-in rebate of up to
$345 for an iPhone 4S and up to $356 for an iPhone 5 (www.apple.com). Such a manufacturer-to-consumer rebate stimulates
new goods sales.
This study deals with this prevalent practice in durable goods markets. We propose analytic models for determining the
optimal trade-in rebates facing the durable goods producer, especially when replacement buyers act strategically, that is,
when their replacement decision depends on the quality condition of the goods after a certain period of use. We analyze and
compare three benchmark scenarios: no trade-ins, trade-ins to consumers with high-quality used goods (denoted
trade-ins-to-high), and trade-ins to all consumers with high- and low-quality used goods (denoted trade-ins-to-all). This study
especially focuses on investigating the impacts of the two trade-in policies on the behaviors and actions the buyers may take,
as well as the potential benefit the firm may gain among the three options. Our research findings suggest that the strategic
choice of trade-in policy facing the firm depends critically on the deterioration rate (or durability, in a reversed measure), the
quality variation of the used goods, and the new goods price. We also show that as the deterioration or quality variation
increases, the magnitude of the trade-in rebate increases.
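The toy screening model below (an illustration under assumed functional forms, not the paper's model) shows the basic
trade-off the firm faces: a larger rebate converts more used-good holders into replacement buyers but shrinks the margin on
each sale, so an interior rebate level can dominate both extremes.

    def trade_in_profits(price, unit_cost, salvage, rebates, v_new=1.0):
        # Assumed screening model: a holder of a used good of quality q (uniform on [0, 1]) keeps
        # utility q * v_new, so she replaces when v_new - price + rebate >= q * v_new.
        results = {}
        for r in rebates:
            replacing_share = max(0.0, min(1.0, 1.0 - (price - r) / v_new))
            results[r] = replacing_share * (price - unit_cost - r + salvage)
        return results

    profits = trade_in_profits(price=0.8, unit_cost=0.3, salvage=0.1, rebates=[0.0, 0.1, 0.2, 0.3, 0.4])
    for r, pi in profits.items():
        print(f"rebate {r:.1f}: profit per potential replacement buyer = {pi:.3f}")
    print("best rebate:", max(profits, key=profits.get))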
There are mainly two research streams that deal with trade-in rebates in durable goods markets: (i) models from the
economics and marketing literature and (ii) models from the operations literature. We provide reviews of both streams.
Waldman (2003) identified some critical issues facing durable goods producers, including the durability choice and the
information asymmetry problem. This study is related to the former but does not deal with the latter, which was one of the
major research concerns in Rao, Narasimhan, and John (2009). They showed that trade-in programs mitigate the lemon
problem, or equivalently the information asymmetry problem, in markets with adverse selection, and hence increase the
firm’s profit.



 
 
International Journal of Industrial Engineering, 22(2), 195-212, 2015

Improving In-plant Logistics: A Case Study of a Washing Machine Manufacturing Facility

Cagdas Ucar1 and Tuncay Bayrak2,*

1 Yildiz Technical University
Department of Industrial Engineering
Istanbul, Turkey
2 Department of Business Information Systems
Western New England University
Springfield, MA, 01119, USA
*Corresponding Author’s E-mail: tbayrak@wne.edu

This study presents a case study on the enhancement of in-plant logistics at a discrete manufacturing plant using lean
manufacturing/logistics principles. Two independent application scenarios are presented. In the first application, we
improve the operation of a supermarket (small internal warehouse) from the ergonomics point of view by (1) placing
heavy boxes on waist-level shelves, and (2) applying rolling racks/trolleys to release the physical load on the workers.
In the second application, the logistic processes related to a new supermarket are fundamentally re-designed.
Keywords: in-plant logistics; supermarket; milkrun; ergonomics; fatigue; just-in-time production.

(Received on September 20, 2013; Accepted on September 13, 2014)

1. INTRODUCTION

Logistics activities, whether in a manufacturing or a service business, have become an important business function as they
contribute to the competitiveness of the enterprise. In such a competitive environment, logistics activities are one of the
most important factors for companies in delivering products and services in a timely and competitive manner. In other
words, logistics service quality emerges as an important element of the ability to compete.
Logistics can be divided into in-plant logistics and out-of-plant logistics. In-plant logistics, or internal logistics, covers
the activities between the arrival of raw materials and the full output of the product; out-of-plant logistics, or external
logistics, covers the remaining activities. In recent years, the importance of in-plant logistics has grown because it is
essential for running production smoothly. In-plant logistics implies the co-ordination of activities within the plant. One
would agree that the elements of in-plant logistics need to be integrated with the external logistics. For manufacturers,
managing in-plant logistics is as important as managing external logistics to improve the efficiency of production activities.
Running in-plant logistics well is essential if businesses that have adopted just-in-time and lean manufacturing
philosophies are to continue functioning without problems. This study reports on the experience of redesigning the in-plant
logistics operations of a washing machine manufacturing facility. How to improve in-plant logistics, within the framework
of just-in-time production and lean manufacturing philosophies, is investigated from different perspectives such as
ergonomics, time spent, and distance traveled. Two real-life examples are presented of how in-plant logistics activities can
be improved using milkrun and supermarket approaches. The first application deals with logistics activities in terms of
ergonomics. In the second application, problems with internal logistics activities are identified, and solutions are provided
to minimize the time spent and the distance traveled by the employees.

2. LITERATURE REVIEW

Logistics management can be defined as that “part of supply chain management that plans, implements, and controls
the efficient, effective forward and reverse flow and storage of goods, services and related information between the
point of origin and the point of consumption in order to meet customers' requirements” (CSCMP, 2012). Kample et al.,
(2011) suggest logistics is both a fundamental business activity and the underlying phenomenon that drives most other
business processes.
While in-plant logistics plays a vital role in achieving the ideal balance of process efficiency and labor
productivity, unoptimized in-plant logistics may present a considerable challenge for companies in all sectors of
consumer goods and result in poor operation management, human error, and some other problems. Thus, optimized in-
plant logistics is a prerequisite for the economic operation of the factory. As pointed out by Jiang (2005), in-plant
International Journal of Industrial Engineering, 22(2) , 213-222, 2015

Toll Fraud Detection of VoIP Services via an Ensemble of Novelty Detection Algorithms

Pilsung Kang1, Kyungil Kim2, and Namwook Cho2,*

1 School of Industrial Management Engineering
Korea University
Seoul, Korea
2 Department of Industrial & Information Systems Engineering
Seoul National University of Science and Technology
Seoul, Korea
*Corresponding author’s e-mail: nwcho@seoultech.ac.kr

Communications fraud has been increasing dramatically with the development of communication technologies and the
increasing use of global communications, resulting in substantial losses to the telecommunications industry. Due to the
widespread deployment of voice over internet protocol (VoIP), VoIP fraud has become one of the major concerns of the
communications industry. In this paper, we develop toll fraud detection systems based on an ensemble of novelty detection
algorithms using call detail records (CDRs). Initially, based on actual CDRs collected from a Korean VoIP service provider
for a month, candidate explanatory variables are created using historical fraud patterns. Then, a total of five novelty
detection algorithms are trained for each week to identify toll frauds during the following week. Subsequently, fraud
detection performance improvements are attempted by selecting significant explanatory variables using a genetic algorithm
(GA) and by constructing an ensemble of novelty detection models. Experimental results show that the proposed framework
is practically effective in that most toll frauds can be detected with high recall and precision. It is also found that variable
selection using the GA enables us to build not only more accurate but also more efficient fraud detection models. Finally, an
ensemble of novelty detection models further boosts the fraud detection ability, especially when the fraud rate is relatively
low.

Keywords: toll fraud detection; novelty detection; genetic algorithm (GA); ensemble; VoIP service; call detail records
(CDRs).

(Received on November 29, 2013; Accepted on July 09, 2014)

1. INTRODUCTION

Communications fraud has been increasing dramatically with the development of communication technologies and the
increasing use of global communications, resulting in substantial losses to the telecommunications industry (Kou, 2004).
Moreover, due to the widespread deployment of the Voice over Internet Protocol (VoIP), VoIP fraud has become one of the
major concerns of the communications industry. VoIP is more vulnerable to fraud attacks, so its potential loss is greater than
that of traditional telecommunication technologies. According to a survey conducted by the Communications Fraud Control
Association (CFCA, 2009), global fraud losses in 2009 were estimated to be in the range of $72 - $80 billion (USD), up 34%
from 2005. The top two fraud loss categories, which constitute nearly 50 percent of the total loss, can be considered toll
fraud. Toll fraud is defined as the unauthorized use of one’s telecommunications system by an unauthorized party (Avaya,
2010), which often results in substantial additional charges for telecommunications services. Figure 1 shows a typical toll
fraud pattern. While normal traffic is generated by normal user groups and transmitted through a VoIP service provider and
an internet telephony service provider (ITSP), toll fraud attacks result from the illegal use of unauthorized subscriber
information and/or the compromise of vulnerable telecommunication systems such as PBX and voicemail systems.
In the telecommunications industry, most fraud analysis applications have relied on rule-based systems (Rosset,
1999). In rule-based systems, fraud patterns are pre-defined by a set of multiple conditions, and an alert is raised whenever
any of the rules is met. Rosset et al. (1999) suggested a rule-discovery framework for fraud detection in a traditional
telecommunications environment. Ruiz-Agundez et al. (2010) proposed a fraud detection framework for VoIP services
consisting of a rule engine built over a prior knowledge base. However, because they rely on the knowledge of domain
experts, rule-based approaches can hardly provide effective early warnings and are vulnerable to unknown and abnormal
fraud patterns (Kim, 2013).
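A minimal sketch of the ensemble idea follows, using scikit-learn novelty detectors on synthetic CDR-style features. The
feature set, the four detectors shown, and the majority-vote rule are assumptions for illustration only; the paper trains five
specific algorithms on real CDRs with GA-based variable selection.

    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.ensemble import IsolationForest
    from sklearn.neighbors import LocalOutlierFactor
    from sklearn.covariance import EllipticEnvelope

    rng = np.random.default_rng(0)
    # Synthetic CDR-style features: [calls per day, mean duration (min), intl-call ratio, night-call ratio]
    normal = rng.normal([20, 3.0, 0.05, 0.10], [5, 1.0, 0.02, 0.05], size=(600, 4))
    fraud = rng.normal([180, 15.0, 0.80, 0.70], [30, 3.0, 0.10, 0.10], size=(30, 4))

    X_train = normal[:500]                           # train only on traffic assumed to be normal
    X_test = np.vstack([normal[500:], fraud])
    y_test = np.r_[np.zeros(100), np.ones(30)]       # 1 marks a fraudulent account

    detectors = [OneClassSVM(nu=0.05, gamma="scale"),
                 IsolationForest(random_state=0),
                 LocalOutlierFactor(novelty=True),
                 EllipticEnvelope(random_state=0)]
    votes = np.zeros(len(X_test))
    for det in detectors:
        det.fit(X_train)
        votes += (det.predict(X_test) == -1)         # -1 means "novel" for scikit-learn detectors

    flagged = votes >= len(detectors) / 2            # simple majority vote over the ensemble
    recall = (flagged & (y_test == 1)).sum() / (y_test == 1).sum()
    precision = (flagged & (y_test == 1)).sum() / max(flagged.sum(), 1)
    print(f"toll fraud recall = {recall:.2f}, precision = {precision:.2f}")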



International Journal of Industrial Engineering, 22(2), 223-242, 2015

A Multi Depot Simultaneous Pickup and Delivery Problem with Balanced Allocation of Routes to Drivers

Morteza Koulaeian1, Hany Seidgar1, Morteza Kiani1 and Hamed Fazlollahtabar2,*

1 Department of Industrial Engineering
Mazandaran University of Science and Technology
Babol, Iran
2 Faculty of Management and Technology
Mazandaran University of Science and Technology
Babol, Iran
*Corresponding author’s e-mail: hfazl@iust.ac.ir

In this paper, a new mathematical model is developed for a multi-depot vehicle routing problem with simultaneous pickup
and delivery. A non-homogeneous fleet of vehicles and a number of drivers with different levels of capability are employed
to serve customers with pickup and delivery demands. Driver capability is taken into account to achieve a balanced
distribution of travel among drivers. The objective is to minimize the total cost of routing, the penalties for overworking
drivers, and the fixed costs of drivers’ employment. Due to the problem’s NP-hard nature, two meta-heuristic approaches
based on the Imperialist Competitive Algorithm (ICA) and the Genetic Algorithm (GA) are employed to solve the generated
problems. Parameter tuning is conducted by the Taguchi experimental design method. The obtained results show the high
performance of the proposed ICA in terms of solution quality and computational time.

Keywords: vehicle routing problem (VRP); multi-depot simultaneous pickup and delivery; imperialist competitive
algorithm (ICA).

(Received on January 09, 2014; Accepted on October 19, 2014)

1. INTRODUCTION

The Pickup and Delivery Problem (PDP) is one of the main classes of the Vehicle Routing Problem (VRP), in which a set of
routes is designed to meet customers’ pickup and delivery demands. In the Simultaneous Pickup and Delivery Problem
(SPDP), a fleet of vehicles originating from a distribution center must deliver goods to customers and at the same time
collect their returned items. This problem arises especially in the reverse logistics context, where companies are increasingly
faced with the task of managing the reverse flow of finished goods or raw materials (Subramanian et al., 2010).
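The defining constraint of the SPDP is that the on-board load must respect vehicle capacity at every point of the route while
deliveries leave the vehicle and pickups enter it. The short check below illustrates this; the demand figures are hypothetical,
and the paper's model additionally covers multiple depots, driver balancing, and the cost terms described above.

    def route_load_feasible(route, delivery, pickup, capacity):
        # The vehicle leaves the depot loaded with every delivery on the route, then at each
        # customer it drops that customer's delivery and collects its pickup; the on-board load
        # must never exceed capacity at any point.
        load = sum(delivery[c] for c in route)
        if load > capacity:
            return False
        for c in route:
            load += pickup[c] - delivery[c]
            if load > capacity:
                return False
        return True

    delivery = {1: 4, 2: 6, 3: 3}
    pickup = {1: 5, 2: 2, 3: 7}
    print(route_load_feasible([1, 2, 3], delivery, pickup, capacity=14))  # True
    print(route_load_feasible([3, 1, 2], delivery, pickup, capacity=14))  # False: visiting order matters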
Min (1989) was the first researcher to introduce the vehicle routing problem with simultaneous pickup and delivery
(VRPSPD), minimizing the total travel time of the route while considering the vehicle capacity as the problem constraint.
Dethloff (2001) and Tang and Galvano (2006) then contributed mathematical reformulations. Berbeglia et al. (2007) also
introduced a general framework for modeling static pickup and delivery problems. Jin and Kachitvichyanukul (2009)
generalized the three existing formulations and reformulated the VRPSPD as a direct extension of the basic VRP. In the area
of solution techniques, Moshivio (1998) studied the PDP with divisible demands, in which each customer can be served by
more than one vehicle, and presented greedy constructive algorithms based on tour partitioning. Salhi and Nagy (1999)
proposed four insertion-based heuristics, in which partial routes are constructed for some customers in basic steps and the
remaining customers are then inserted into the existing routes. Dell’Amico et al. (2006) presented an exact method for
solving the VRPSPD based on column generation, dynamic programming, and a branch-and-price algorithm. Bianchessi and
Righini (2007) proposed a number of heuristic algorithms to solve this problem approximately in a small amount of
computing time. Emmanouil et al. (2009) proposed a hybrid solution approach incorporating the rationale of two well-known
meta-heuristics, namely tabu search and guided local search. Mingyong and Erbao (2010) proposed an improved differential
evolution algorithm (IDE) for a general mixed integer programming model of the VRPSPD with time windows.
Wang and Chen (2012) presented a co-evolutionary genetic algorithm with variants of the cheapest insertion method for
this kind of problem. Liu et al. (2013) proposed a genetic algorithm based on a permutation chromosome, a split procedure,
and local search for the VRPSPD in a home health care problem. They also proposed a tabu search method based on route
assignment attributes of patients, an augmented cost function, and route re-optimization. Zhang et al. (2012) developed a
new scatter search and a generic genetic algorithm approach for the stochastic travel-time VRPSPD. Goksal et al. (2013)
proposed a particle swarm optimization algorithm for the VRPSPD in which a local search is performed by a variable
neighborhood descent algorithm. The papers reviewed so far address single-depot problems, but there are also studies
considering the multi-depot vehicle routing problem (MDVRP), in which there is more than one distribution center. Here, some
International Journal of Industrial Engineering, 22(2), 243-251, 2015

A Branch-and-Price Approach for the Team Orienteering Problem with Time Windows

Hyunchul Tae and Byung-In Kim*

Department of Industrial and Management Engineering
Pohang University of Science and Technology (POSTECH)
Pohang, Korea
*Corresponding author’s e-mail: bkim@postech.ac.kr

Given a set of vertices, each of which has its own prize and time window, the team orienteering problem with time windows
(TOPTW) is the problem of finding a set of vehicle routes with the maximum total prize that satisfies the vehicle time limit
and vertex time window constraints. Many heuristic algorithms have been proposed for the TOPTW; to our knowledge,
however, no exact algorithm that can solve this problem optimally has yet been reported. This study proposes an exact
algorithm based on the branch-and-price approach to solve the TOPTW. The algorithm can find optimal solutions for many
TOPTW benchmark instances. We also apply the proposed algorithm to the team orienteering problem (TOP), which is a
version of the TOPTW with the time window constraints relaxed. Unlike for the TOPTW, a couple of exact algorithms exist
for the TOP. The proposed algorithm finds a larger number of optimal solutions to the TOP benchmark instances.

Keywords: team orienteering problem with time windows; branch and price; exact algorithm; column generation

(Received on October 2, 2014; Accepted on February 20, 2015)

1. INTRODUCTION

Given a weighted digraph G = (V, A), where V = {0, 1, …, n, n+1} is a set of vertices and A is a set of arcs between the
vertices, a set of customers C = {1, …, n} may be visited by a set of identical vehicles K = {1, …, m} that depart from the
origin 0 and end at the sink n+1. A vehicle k ∈ K collects a prize p_i by visiting i ∈ C. A vehicle k ∈ K takes travel time
t_{i,j} to traverse (i, j) ∈ A and service time s_i to serve i ∈ C. A vehicle k ∈ K can visit i ∈ C only within its time window
[e_i, l_i] and should wait until e_i if it arrives before e_i. The total working time of each vehicle should be less than or equal
to the time limit T. We assume zero service times and a complete graph for simplicity. The team orienteering problem with
time windows (TOPTW) is the problem of finding a set of vehicle routes with the maximum total prize that satisfies the
vehicle time limit and vertex time window constraints. The TOPTW can be formulated as the set partitioning problem
[TOPTW]. We regard a subset of customers r ⊆ C as a route if the customers in r can be visited by one vehicle. Let Φ be the
set of all possible routes.

[TOPTW]

\max \sum_{r \in \Phi} p_r x_r                                      (1)

subject to

\sum_{r \in \Phi} a_{i,r} x_r \le 1, \quad \forall i \in C          (2)

\sum_{r \in \Phi} x_r \le m                                         (3)

x_r \in \{0, 1\}, \quad \forall r \in \Phi                          (4)

p_r = \sum_{i \in C} a_{i,r} p_i represents the prize of a route r ∈ Φ, where a_{i,r} is 1 if r includes i ∈ C and 0 otherwise. A
binary decision variable x_r is 1 if r ∈ Φ is selected and 0 otherwise. The objective function (1) maximizes the total prize.
Constraints (2) prohibit a customer from being visited more than once. Constraint (3) ensures that at most m vehicles are
used. Constraints (4) restrict x_r to be binary.
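As a toy illustration of the master problem (1)-(4), the brute-force sketch below picks at most m pairwise-disjoint routes with
maximum total prize from a handful of pre-generated candidate routes. The route data are hypothetical, and the paper prices
out routes by column generation within branch-and-price rather than enumerating them as done here.

    from itertools import combinations

    # Hypothetical candidate routes, each already feasible for the time limit and time windows:
    # (set of customers served, total prize collected).
    routes = [({1, 2}, 40), ({2, 3}, 55), ({4}, 20), ({1, 4}, 45), ({3, 5}, 60)]
    m = 2                                             # at most m vehicles, as in constraint (3)

    best_prize, best_combo = 0, ()
    for k in range(m + 1):
        for combo in combinations(range(len(routes)), k):
            visited, prize, feasible = set(), 0, True
            for idx in combo:
                customers, p = routes[idx]
                if visited & customers:               # constraint (2): each customer at most once
                    feasible = False
                    break
                visited |= customers
                prize += p
            if feasible and prize > best_prize:
                best_prize, best_combo = prize, combo

    print("selected routes:", [routes[idx][0] for idx in best_combo], "total prize:", best_prize)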



International Journal of Industrial Engineering, 22(2), 252-266, 2015

An Inhomogeneous Multi-Attribute Decision Making Method and Application to IT/IS Outsourcing Provider Selection

Rui Qiang1 and Debiao Li2

1,2 School of Economics and Management
Fuzhou University
Fuzhou, China
*Corresponding author’s e-mail: dli14@binghamton.edu

Selecting a suitable outsourcing provider is one of the most critical activities in supply chain management. In this paper, a
new fuzzy linear programming method is proposed to select outsourcing providers by formulating the selection as a fuzzy
inhomogeneous multi-attribute decision making (MADM) problem with fuzzy truth degrees and incomplete weight
information. In this method, the decision maker’s preferences are represented as trapezoidal fuzzy numbers (TrFNs), which
are obtained through pair-wise comparisons of alternatives. Based on the fuzzy positive ideal solution (FPIS) and the fuzzy
negative ideal solution (FNIS), fuzzy consistency and inconsistency indices are defined by the relative closeness degrees in
TrFNs. The attribute weights are estimated by solving the proposed fuzzy linear program, and the selection ranking is then
determined by the comprehensive relative closeness degree of each alternative to the FPIS. An industrial IT outsourcing
provider selection example is analyzed to demonstrate the implementation process of this method.

Keywords: outsourcing provider; multi-attribute decision making; production operation; fuzzy linear programming; supply
chain management

(Received on August 08, 2013; Accepted on January 01, 2015)

1. INTRODUCTION

In today’s ever more competitive business environment, outsourcing has become a mainstream practice in global business
operations (Cai et al., 2013). Information systems outsourcing has been modeled as one-period, two-party non-cooperative
games to analyze the outsourcing arrangement, considering a variety of interesting characteristics including duration,
evolving technologies, difficulty of assessment, and vendor fees (Elitzur and Wensley, 1999; Elitzur et al., 2012). Many
organizations also attempt to enhance their competitiveness, reduce costs, increase their focus on internal resources and core
activities, and sustain competitive advantage through information technology/information system (IT/IS) outsourcing (Yang
and Huang, 2010). The selection of a good provider is a difficult task: some providers that meet some selection criteria may
fail on other criteria. Therefore, selecting outsourcing providers can be framed as a multi-attribute decision making (MADM)
problem.
Currently, several integrated decision-making methods have been proposed for solving the problem of selecting
outsourcing providers. Compared to sequential decision making based on one-dimensional rules, integrated decision-making
methods yield more integrative and normative solutions based on multiple attributes (Jansen et al., 2012). For example, Chou
et al. (2006) developed a fuzzy multi-criteria decision model approach to evaluating IT/IS investments. Chen and Wang
(2009) developed the fuzzy Vlsekriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method for the strategic
decision of optimizing partners’ choice in IT/IS outsourcing projects. Combining DEMATEL, ANP, and zero-one goal
programming (ZOGP), Tsai et al. (2010) developed an MCDM approach for the sourcing strategy mix decision in IT projects.
From a policy-maker’s perspective, Tjader et al. (2010) researched offshore outsourcing decision-making. Lin et al. (2010)
proposed a novel hybrid multi-criteria decision-making (MCDM) approach for outsourcing vendor selection, combined with
a case study of a semiconductor company in Taiwan. Chen et al. (2011) presented the fuzzy Preference Ranking Organization
Method for Enrichment Evaluation (fuzzy PROMETHEE) to evaluate four potential suppliers with seven criteria and four
decision makers in a realistic case study. Ho et al. (2012) integrated quality function deployment (QFD), fuzzy set theory, and
the analytic hierarchy process (AHP) to evaluate and select the optimal third-party logistics service providers (3PLs). Fan et
al. (2012) utilized an extended DEMATEL method to identify risk factors of IT outsourcing using interdependent information.
Buyukozkan and Cifci (2012) proposed a novel hybrid MCDM approach based on fuzzy DEMATEL, fuzzy ANP, and the
fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to evaluate green suppliers.
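For readers unfamiliar with the relative-closeness idea such methods build on, the crisp TOPSIS sketch below ranks a few
hypothetical providers by their closeness to ideal and anti-ideal solutions. The paper itself works with trapezoidal fuzzy
numbers and estimates the weights by fuzzy linear programming; here the scores and weights are simply assumed.

    import numpy as np

    # Scores of four hypothetical IT/IS outsourcing providers on three benefit criteria
    # (technical capability, service quality, flexibility); weights are assumed.
    X = np.array([[7.0, 8.0, 6.0],
                  [9.0, 6.5, 7.0],
                  [6.0, 9.0, 8.0],
                  [8.0, 7.0, 9.0]])
    w = np.array([0.5, 0.3, 0.2])

    V = w * X / np.linalg.norm(X, axis=0)            # vector-normalized, weighted decision matrix
    pis, nis = V.max(axis=0), V.min(axis=0)          # positive / negative ideal solutions
    d_pos = np.linalg.norm(V - pis, axis=1)
    d_neg = np.linalg.norm(V - nis, axis=1)
    closeness = d_neg / (d_pos + d_neg)              # relative closeness degree to the ideal

    for i, cd in enumerate(closeness, start=1):
        print(f"provider {i}: relative closeness {cd:.3f}")
    print("ranking (best first):", [int(i) + 1 for i in np.argsort(-closeness)])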



International Journal of Industrial Engineering, 22(2), 267-276, 2015

Multi-Criteria Model For Selection of Collection System in Reverse Logistics: A Case for End of Life Electronic Products

Md Rezaul Hasan Shumon1,*, Shamsuddin Ahmed2

1 Department of Industrial and Production Engineering
Shahjalal University of Science and Technology
Sylhet-3114, Bangladesh
*Corresponding author’s e-mail: shumon330@gmail.com
2 Department of Mechanical Engineering
University of Malaya, Kuala Lumpur, Malaysia

The purpose of this paper is to propose a multi-criteria model for selecting a collection system for end-of-life electronic
products in a reverse supply chain. The proposed model first determines the pertinent criteria for collection system selection
by conducting a questionnaire survey, and then uses the analytic hierarchy process (AHP) rating method to evaluate the
priorities of the criteria and alternatives. Finally, the global weights of the criteria and the evaluation scores of the
alternatives are combined to obtain the final ranking of the collection systems. The analysis demonstrates the relative
importance of the criteria for evaluating the collection methods, and a real application shows the preferred collection
system(s) to be selected. Decision makers can use this newly proposed model to determine the most appropriate collection
system(s) from the available options in the territory under consideration. Furthermore, the criteria weights created in this
model make the decision process more systematic and reduce the considerable effort otherwise needed.

Keywords: reverse logistics; multi-criteria analysis; end-of-life electronic products; analytical hierarchy process;
decision making
(Received on January 3, 2014; Accepted on October 05, 2014)

1. INTRODUCTION
Electronic waste (e-waste) management has gained significant attention from researchers and policy makers around the
world, as the ‘throw-away’ impact of these products is hazardous to the physical environment. Advancing technology and
shortened product life cycles make e-waste one of the fastest growing waste streams, creating significant risks to human
health and the environment (Yeh & Xu, 2013). Use of the reverse supply chain approach is one way of minimizing the
environmental impact of e-waste, referred to as end-of-life (EOL) electronic products (Quariguasi Frota Neto, Walther,
Bloemhof, van Nunen, & Spengler, 2009). A reverse supply chain is a process by which a manufacturer systematically
accepts previously shipped products or parts from the point of consumption for possible reuse, remanufacturing, recycling,
or disposal (Tsai & Hung, 2009). This process provides the advantages of recycling material resources, developing newer
technologies, and creating income-oriented job opportunities (Shumon, 2011).
Initially, the significance of this research was based on a problem confronted in Malaysia, a Southeast Asian country,
where companies and organizations are unsure which system they should use for e-waste collection. However, this problem
is faced by other countries around the world as well. Collection of e-waste is the first activity that triggers the reverse supply
chain as part of product recovery activities. In this regard, several approaches have been applied by different countries, such
as individual manufacturers’ buy-back programs, municipal collection programs, and NGO and government initiatives
(Chung, Lau, & Zhang, 2011; Qu, Zhu, Sarkis, Geng, & Zhong, 2013). It is understandable that no single collection system
can ensure the maximum collection of e-waste, because collection largely depends on the geographical, social, and economic
conditions of the country under consideration. Some systems are well established in developed countries but may not be
economically feasible in developing countries; some systems are economically feasible but not well accepted by
stakeholders. This results in the use of inappropriate methods or systems, which ultimately leads to a lower collection rate
and higher investment or operating costs, and such system(s) cannot meet the financial objectives with respect to the
investment made. Thus, there is a need for a systematic approach to selecting appropriate collection system(s) by identifying
and prioritizing the pertinent criteria and evaluating the trade-offs between strategic, economic, operational, and social
performance aspects. The model presented in this research is a useful decision-making aid for companies and organizations
in any territory to rank and select effective and suitable collection method(s) for their areas of concern.
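The short sketch below shows the core AHP computation such a model relies on: priorities derived from a pairwise
comparison matrix (geometric-mean approximation) and checked for consistency. The criteria and judgment values are
hypothetical, not the survey results reported in the paper.

    import numpy as np

    # Pairwise comparison of three hypothetical selection criteria, e.g. collection rate,
    # operating cost, and stakeholder acceptance (Saaty's 1-9 judgment scale).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    w = np.prod(A, axis=1) ** (1.0 / A.shape[0])      # geometric-mean approximation of priorities
    w = w / w.sum()

    lam_max = float(np.mean((A @ w) / w))             # estimate of the principal eigenvalue
    n = A.shape[0]
    CI = (lam_max - n) / (n - 1)
    CR = CI / 0.58                                    # random index RI = 0.58 for n = 3
    print("criteria weights:", np.round(w, 3), f"| consistency ratio = {CR:.3f}")

A consistency ratio below roughly 0.1 is conventionally taken to mean the judgments are acceptable; the resulting weights
are then combined with the alternatives' rating scores to rank the collection systems.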
Hao, Jinhui, Xuefeng, and Xiaohua (2007) investigated the collection of domestic e-waste in urban China by applying case
study methods. They analyzed the four alternative collection modes that currently exist in Beijing and proposed a few other
modes. The existing modes are door-to-door collection, take-back by related businesses (second-hand market), collection at
recycling spots, and collection for donation; the proposed modes are (i) government to formal recycler, (ii) enterprise to
formal recycler, and (iii) collectors to formal recyclers. The use of multi-criteria decision analysis
International Journal of Industrial Engineering, 22(2), 277- 291, 2015

A Fuzzy Expert System for Supporting Returned Products Strategies

H. Hosseininasab1, M. Dehghanbaghi2,*

1,2 Department of Industrial Engineering
Yazd University
Yazd, Iran
*Corresponding author’s e-mail: dehghanbaghi@yahoo.com

A key strategic consideration in the recovery system of any product is to make proper decisions on reverse manufacturing
alternatives including both recovery and disposal options. The nature of such decisions is complex due to the uncertainty
existing in the quality of the product returns and the lack of information about the product. Consequently, the correct diagnosis of recovery/disposal options for returned products necessitates the development of a comprehensive model considering all technical and non-technical parameters. Although human experts may handle such complex problems with the aid of practical experience, this procedure is time consuming and may lead to imprecise decisions. This study presents
a fuzzy rule-based system to provide a correct decision mechanism for ranking the recovery/disposal strategies by
knowledge acquisition through a simple reverse supply chain with a collection center for each particular returned product.
The proposed system focuses on brown goods, although it may be applied to other similar kinds of products with some changes. To achieve the objective of this study, the proposed model is used to analyze a mobile phone case, yielding coherent results.

Keywords: Fuzzy expert system, Product returns, Return strategies

(Received on January 15, 2014; Accepted on December 22, 2014)

1. INTRODUCTION

In addition to the effects of ever-changing technologies, rapid changes in the natural environment, enforcement by governments and the proven profitability of recovery and reuse activities have influenced the way most companies perform their business, increasing the rate at which returned products are reused. The implementation of extended
producer responsibility in the light of new governmental policies, together with the growing public interest in
environmental issues, will cause Original Equipment Manufacturers (OEMs) to take care of their products after they have
been discarded by the consumer (Krikke et al., 1998). In this regard, product recovery management (PRM), proposed by
Thierry et al. (1995), serves to recover much of the economic and ecological value of products by reducing the quantity of
wastes.
There are four recovery and disposition categories for product returns including reuse/resell, product upgrade,
material recovery and waste management. Each category includes recovery/disposal alternatives. Table 1 presents the
alternatives for each category together with their explanations. Thus, we have 8 different recovery/ disposal activities when
a product is returned back to the chain: reusing, reselling, repairing, remanufacturing, refurbishing, cannibalization,
recycling and disposal. Every returned product or part must pass through one or more of these activities before returning to the secondary market or being disposed of. One of the key strategic issues in product recovery management is to find a proper option for recovery or disposal activities, as each of these activities bears its own costs.
As stated by Behret and Korugan (2009), uncertainties in the quality, quantity and timing of the product return flow make it hard to select the best disposition alternative. Large variations in the quality of returns are a major factor
for uncertainties in the time, cost and rate of the recovery process (Liu et al., 2012). Thus, it seems necessary to provide a
strategic decision model for exploring the detailed quality of returned products before making the recovery decisions.
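As a purely illustrative sketch of how fuzzy rules can map the observed condition of a return to recovery options, the snippet below scores three options from two inputs; the membership functions, rules and option names are assumptions, not the knowledge base developed in this paper.

```python
# Minimal fuzzy-rule sketch (illustrative only; the membership functions and
# rules below are assumptions, not the paper's knowledge base).

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def recovery_scores(quality, remaining_life):
    """Score a returned unit (inputs on a 0-10 scale) for three options."""
    q_low = tri(quality, 0, 0, 6)
    q_high = tri(quality, 4, 10, 10)
    life_long = tri(remaining_life, 4, 10, 10)
    return {
        # Rule 1: high quality AND long remaining life -> reuse/resell
        "reuse_resell": min(q_high, life_long),
        # Rule 2: lower quality but some life left -> remanufacture
        "remanufacture": min(q_low, life_long),
        # Rule 3: low quality -> recycle for material recovery
        "recycle": q_low,
    }

print(recovery_scores(quality=7.5, remaining_life=8.0))
```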
This paper aims at providing a comprehensive expert system by defining the factors that most affect the ranking of the above-mentioned recovery options for product returns. The proposed model analyzes the properties of returned products to find the best recovery option(s) in an accurate way. Although there are numerous studies in fuzzy decision making, as in Chan et al. (2003), Liu et al. (2013), Ozdaban et al. (2010), Tsai (2011) and Olugu et al. (2012), based on our findings there are only a few pieces of research in which expert and fuzzy rule-based decision systems are applied to reverse logistics issues. They are mainly focused on performance measurement, the disassembly process, life cycle and recovery management (Singh et al., 2003; Meimei et al., 2004; Fernandez et al., 2008; Jayant, 2012). There are also few published research studies that provide clear policies for managing and clustering returned products. Thus, we review those studies that are the most relevant to the research we conduct.

International Journal of Industrial Engineering, 22(2), 292-300, 2015

Landfill Location with Expansion Possibilities in Developing Countries

Pablo Manyoma1,*, Juan P. Orejuela1, Patricia Torres2, Luis F. Marmolejo2, and Carlos J. Vidal1

1 School of Industrial Engineering, Universidad del Valle, Santiago de Cali, Colombia
*Corresponding author’s e-mail: pablo.manyoma@correounivalle.edu.co
2 School of Natural Resources and Environment Engineering, Universidad del Valle, Santiago de Cali, Colombia

Municipal Solid Waste Management (MSWM) has become one of the main challenges of urban areas in the world. For
developing countries, this situation is of greater severity due to disordered population growth, rapid industrialization, and
deficiency in regulations, among other factors. One component of MSWM is the final disposal, where landfills are the most
commonly used technologies for this purpose. According to a body of research, landfill location should meet the needs of all stakeholders; thus, we propose a model based on multi-objective programming that considers several decisions, such as which landfills to open, when they should be opened, and, a situation especially common in our countries, the kind of capacity expansion that should be used. We present an example that reflects the conflict between two objectives: cost and environmental
risk. The results show the allocation of each municipality to each landfill and the amount of municipal solid waste to be
sent, among other variables.

Keywords: capacity expansion; landfill location; multi-objective programming; municipal solid waste management;
undesirable facilities.

(Received on January 7, 2014; Accepted on January 25, 2015)

1. INTRODUCTION

Waste has increasingly become a major environmental concern for modern society, due to population growth, the high level
of urbanization, and the mass consumption of different products (Eriksson and Bisaillon, 2011). For this reason, one of the
greatest challenges in urban areas worldwide, especially in developing countries’ cities, is Municipal Solid Waste Management (MSWM). Even if a combination of management techniques is utilized and policies of waste reduction and reuse are applied, sanitary landfills remain necessary for any MSWM system (Moeinaddini et al., 2010).
Particularly in Latin America and the Caribbean countries, waste disposal has become a serious problem and it is
currently a critical concern. Even though some of these countries have a legal framework for waste control, very few
possess the infrastructure and human resources to enforce regulations, especially those related to recycling and disposal. In
these countries, landfills are the main alternative used to dispose of solid waste (Zamorano et al., 2009). In recent years, an important change toward regional solutions for solid waste management has been observed. A growing number of municipalities in the region have formed associations in order to achieve significant economies of scale and better enforcement of regulatory standards (OPS-BID-AIDIS, 2010).
Nowadays, landfills are seen as engineering projects that consider the whole management cycle: planning, design,
operation, control, closure, and post-closure. There is a vital step in the first planning stage: site location. The problem of
identifying the best location must be based on many different criteria. Issues such as political stability, the existing
infrastructure in regions, and the availability of a trained workforce are critical on a ‘macro level’ when making such
decisions. Once a set of feasible regions have been identified for locating a new facility, selecting the ultimate location
takes place on a ‘micro level’ (Gehrlein and Pasic, 2009).
Identifying and selecting a suitable site for a landfill is one of the most demanding tasks. It requires the collection and processing of information relating to environmental, socioeconomic and operational aspects, such as the distance to the site, local environmental conditions, existing patterns of land use, site access, and the potential uses of the landfill after completion, among many other features. That is why the location of landfills is a complex problem (O’Leary and Tchobanoglous, 2002; Geneletti, 2010).
During the past 20 years, many authors around the world have applied different approaches to address the landfill
location problem. Erkut and Moran (1991), Hokkanen and Salminen (1997), and Banias et al. (2010), among others, have

International Journal of Industrial Engineering, 22(2), 301-313, 2015

Establishing a Conceptual Model for Assessing Project Management Maturity in Industrial Companies

Seweryn Spalek

Faculty of Organization and Management
Silesian University of Technology
Gliwice, Poland
Corresponding author’s e-mail address: spalek@polsl.pl

The number of projects undertaken by companies nowadays is significant. Therefore, there is a need to establish processes
in the company supporting and increasing project management efficacy. In order to achieve this, the companies need to
know how good they are at organizational project management, taking into consideration different perspectives. Knowing
their strengths and weaknesses, they are able to improve their activities in challenging areas. In view of the critical
literature review and interviews with chosen companies, the article proposes a conceptual model for assessing project
management maturity in industrial companies. The model is based on four assessment areas. Three of them (human
resources, methods & tools, and environment) represent the traditional approach to maturity measurement, whilst the
fourth, knowledge management, represents a new approach to the topic. The model was tested in over 100 companies in the
machinery industry to verify its practical application and establish valid results of implementation, which have not been
previously explored.

Keywords: project management, model, assessment, maturity, industry, knowledge management.

(Received on November 15, 2011; Accepted on March 16, 2015)

1. INTRODUCTION

The need for models that could be implemented in industry is recognized by authors of publications in different areas of
expertise (Bernardo, Angel, & Eloisa, 2011; Jasemi, Kimiagari, & Memariani, 2011; Kamrani, Adat, & Azimi, 2011;
Metikurke & Shekar, 2011). The importance of new product development from a different perspective was recognized, for
example, by Adams-Bigelow et al. (2006) and measured by Metikurke & Shekar (2011) and Kahn, Barczak, & Moss
(2006). New product development is a laborious endeavour that must be managed properly. Therefore, industrial
companies are interested in having an efficient tool to measure how good they are when it comes to project management.
That assessment must be done in different areas, including the set of best practices as the reference.
Moreover, Kwak (2000) noticed that a company’s project management maturity level influences the key performance indicators of its projects. Furthermore, Spalek (2014a, 2014b), based on his studies of industrial companies, shows that increasing the maturity level potentially reduces the costs and duration of ongoing and new projects.
In fact, industrial companies are managing an increasing number of projects every year (Aubry et al., 2010). Besides
the typical operational representatives in the project-oriented environment like the IT and construction sectors, companies
in other industries have increasingly embraced newer project management methods (Cho & Moon, 2006; Grant &
Pennypacker, 2006; Liu, Ma, & Li, 2004; McBride, Henderson-Sellers, & Zowghi, 2004; C. T. Wang, Wang, Chu, & Chao,
2001). A good example is the machinery sector, which is very focused on the efficient development of new products that
are then used by other industries. The products of the machinery industry are divided into general-purpose machines, heavy-industry machines, and their elements and components, totalling more than 200 products (ISIC, 2008). Companies in the machinery industry are therefore a kind of backbone of the entire economy and are located all over the world. However, the most significant production comes from the EU (European Union), ASEAN+6 (Japan, Korea, Singapore, Indonesia, Malaysia, Philippines, Thailand, China (including Hong Kong), Brunei, Cambodia, Laos, Burma, Vietnam, India, Australia, New Zealand) and NAFTA & UNASUR (Canada, Mexico, USA, Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Guyana, Paraguay, Peru, Surinam, Uruguay, Venezuela) areas (Kimura & Obashi, 2010). The main
customers of products of the machinery industry are companies from the following industries: construction, agriculture,
mining, steelworks, food and textiles.

International Journal of Industrial Engineering, 22(3), 314-329, 2015

A PARETO-BASED PARTICLE SWARM OPTIMIZATION ALGORITHM FOR MULTI-OBJECTIVE LOCATION ROUTING PROBLEM

Jie Liu*, Voratas Kachitvichyanukul

School of Engineering and Technology
Asian Institute of Technology
Pathumthani, Thailand
*Corresponding author’s e-mail: liujie12502@163.com

This paper deals with multi-objective location routing problem. The two conflicting objectives considered are to minimize
total cost and to maximize total customer demand served. The multi-objective particle swarm optimization algorithm is
applied to solve this problem by searching for the Pareto front. A specific solution representation is adopted and two different
decoding methods are designed for multi-objective location routing problem. The test problem instances used to evaluate the
algorithm are modified from previously published test problem instances for single objective location routing problem. The
experimental results demonstrate that the proposed algorithm can effectively provide good Pareto fronts for most test problem instances with both decoding methods, although with different solution quality.

Keywords: location routing problem, multiple objectives, particle swarm optimization, pareto front, non-dominated solution,
movement strategies

(Received on June 6, 2014; Accepted on January 10, 2015)

1. INTRODUCTION

The decision problem to simultaneously determine facility locations and delivery routes is commonly known as location
routing problem (LRP, see Bruns, 1998). In general, there are three decisions in LRP; 1) the selection of a set of facilities
(depot, distribution center, and warehouse); 2) the assignment of customers to depot or warehouse; and 3) the determination
of the set of vehicle schedule and routes. The decisions are made so that some measure for distribution efficiency is optimized.
The applications of LRP appeared in many sectors including retailing, transportation, product distribution, postal service,
disaster relief, and so on.
In the early 1980s, Jacobsen and Madsen (1980) and Laporte and Nobert (1981) recognized LRP as an integration of two problems that are interdependent and interacting. The two sub-problems are the strategic location-allocation problem (LAP) and the operational vehicle routing problem (VRP). An LRP reduces to an LAP when the routes between the depot and customers are straight out-and-back and the shipments are full truckload (FTL) shipments. An LRP with fixed and preselected depots becomes a VRP, where the remaining decision is to form the routes for shipments between the depot and customers. Since both LAP and VRP are NP-hard problems (Cornuejols et al., 1977), LRP is also NP-hard. There are two
main comprehensive reviews on LRP given by Min, Jayaraman and Srivastava (1998) and Nagy and Salhi (2007).
The multi-objective LRP has been considered and has become a popular research topic in recent years because it reflects many decision-making situations that are much closer to real-world problems. Some more recent research papers
include Lin and Kwok (2006), Moghaddam et al. (2010), Rath and Gutjahr (2011), Abounacer et al. (2012), and Nasab et al.
(2013).
This paper extends the single-objective, capacitated LRP model of Prins et al. (2006a, b) to consider two conflicting objectives. The first objective is to minimize the total cost, while the other is to maximize the total customer demand served.
The Pareto based multiple objective particle swarm optimization (MOPSO) by Nguyen and Kachitvichyanukul (2010) is
adopted to search for the non-dominated solutions of the multi-objective LRP. A solution representation is adopted to encode
LRP and two different decoding methods for converting particles into solutions are designed. The proposed algorithm is
tested using the benchmark test problem datasets that are modified from previously published single objective benchmark test
datasets (http://prodhonc.free.fr/Instances/instances_us.htm, Prodhon, 2010) by setting the limit on the number of available
vehicles.
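As a minimal illustration of the Pareto-dominance test that underlies the archiving of non-dominated solutions for these two objectives (minimize total cost, maximize served demand), a conceptual sketch follows; it is not the authors’ MOPSO implementation.

```python
# Sketch of the dominance test used when maintaining a Pareto archive for the
# two objectives (minimize total cost, maximize served demand).

def dominates(a, b):
    """a, b are (total_cost, served_demand). a dominates b if it is no worse
    in both objectives and strictly better in at least one."""
    cost_a, demand_a = a
    cost_b, demand_b = b
    no_worse = cost_a <= cost_b and demand_a >= demand_b
    strictly_better = cost_a < cost_b or demand_a > demand_b
    return no_worse and strictly_better

def pareto_front(solutions):
    """Return the non-dominated subset of a list of (cost, demand) tuples."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

archive = [(1200, 950), (1100, 900), (1300, 980), (1100, 920), (1250, 950)]
print(pareto_front(archive))   # -> [(1200, 950), (1300, 980), (1100, 920)]
```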
There are four contributions in this paper. First, it successfully applied MOPSO to find Pareto fronts for multi-objective
LRP with a solution representation and two newly designed decoding methods. Second, the benchmark test instances for
single objective LRP are modified to include limits on number of vehicles. Third, a set of good non-dominated solutions are

International Journal of Industrial Engineering, 22(3), 330-342, 2015

AN INTEGRATED MODEL FOR CLASSIFYING PROJECTS AND PROJECT MANAGERS AND PROJECT ALLOCATION: A PORTFOLIO MANAGEMENT APPROACH

Elaine Cristina Batista de Oliveira*, Luciana Hazin Alencar and Ana Paula Cabral Seixas Costa

Management Engineering Department
Universidade Federal de Pernambuco
Recife – PE, Brazil
*Corresponding author’s e-mail: elainecjz@gmail.com

This paper puts forward an integrated model to support the process of classifying projects and project managers (PM) using a
multiple criteria decision aid (MCDA) approach and allocating projects according to organizational restrictions through
mathematical programming. The model was formulated after reviewing the literature and was guided by the findings of that review; an MCDA method is used for project and PM classification (first stage) and mathematical programming for project allocation (second stage). A practical application of the proposed model was implemented at a Brazilian electric energy
company. The results demonstrated that it was possible to classify projects and project managers into definable categories,
thereby enabling the process of project allocation to be undertaken more effectively. The project allocation process can be
conducted in a systematic and more efficient way. The proposed model can support an organization by allocating its most
critical projects to its best qualified and experienced professionals.

Keywords: project management; project managers; project portfolio; project allocation; multi-criteria decision aid.

(Received on February 19, 2014; Accepted on January 26, 2015)

1. INTRODUCTION

A system for evaluating projects and project managers is well-suited for decision making in the context of portfolio
management, whether for prioritizing the use of resources, developing management skills or evaluating the performance of
past projects. Project portfolio management (PPM) aims at identifying, prioritizing, authorizing, managing and controlling
projects and programs activities and the risks, resources and priorities associated with these (Project Management Institute,
2008). Portfolio management seeks to ensure that the “resources and changes are prioritized in line with the current
environment, existing changes, resource capacity and capability” (Great Britain, 2010).
Another issue that arises in portfolio and programs planning is allocating projects to project managers by considering a
complete evaluation and ranking of projects and project managers (PM) according to their characteristics, needs and skills
(Meredith & Mantel, 2011; Patanakul, Milosevic, & Anderson, 2007). In this respect, the Project Management Office (PMO)
is the entity that can support the decision process by providing information, analysis and problem-solving techniques (Project
Management Institute, 2013).
The problem addressed in this article initially arose in an energy company that was seeking a solution to their project
allocation process; they had multiple projects to be distributed to multiple managers, with fewer project managers than
projects, thus forcing the managers to simultaneously manage multiple projects, a very common portfolio management
problem. The company wanted this allocation process to account for the managerial needs of the projects as well as the skills and competencies of managers, including their previous experience on similar projects. The organization would also like the model to take into account its different organizational objectives in the evaluation process and the inherent characteristics of its projects. This problem is also identified quite frequently in companies that work with multiple projects and have a limited number of project managers. No studies were found in the existing literature that address this problem.
Thus, this article proposes an integrated model for allocating projects to PMs based on two stages: in the first stage,
projects in different categories are assessed and classified while the performance and skills of PMs are analyzed to categorize
them; in the second stage, the projects are allocated to the PMs once the projects and PMs are sorted.
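A minimal sketch of the second-stage allocation idea follows, assuming hypothetical project and manager categories and a cap on simultaneous projects per manager; the binary-assignment formulation below (written with the PuLP library) is illustrative and is not the mathematical program used in this paper.

```python
# Minimal sketch: assign each project to a manager whose category is at least
# the project's category, limiting the number of projects per manager.
# The data and this PuLP formulation are illustrative assumptions.
import pulp

projects = {"P1": 3, "P2": 2, "P3": 1, "P4": 2}   # project -> criticality category
managers = {"M1": 3, "M2": 2}                     # manager -> qualification category
max_load = 2                                      # simultaneous projects per manager

prob = pulp.LpProblem("project_allocation", pulp.LpMaximize)
x = {(p, m): pulp.LpVariable(f"x_{p}_{m}", cat="Binary")
     for p in projects for m in managers}

# Objective: reward assigning critical projects to highly qualified managers.
prob += pulp.lpSum(projects[p] * managers[m] * x[p, m]
                   for p in projects for m in managers)

for p in projects:                                # each project assigned at most once
    prob += pulp.lpSum(x[p, m] for m in managers) <= 1
for m in managers:                                # manager workload limit
    prob += pulp.lpSum(x[p, m] for p in projects) <= max_load
for p in projects:                                # category compatibility
    for m in managers:
        if managers[m] < projects[p]:
            prob += x[p, m] == 0

prob.solve()
print([(p, m) for (p, m) in x if x[p, m].value() == 1])
```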
Due to the complex nature of the projects, gaining a detailed assessment of their features is a difficult task (Ogunlana,
Siddiqui, Yisa, & Olomolaiye, 2002) and requires the evaluation of multiple criteria; once sets of criteria are established,
alternatives can be evaluated according to the organization´s preferences (Eilat, Golany, & Shtub, 2008; Mendoza, Santiago,
& Ravindran, 2008; Wang & Liang, 2010). The selection of a project manager is also a multiple criteria problem (Hadad,

International Journal of Industrial Engineering, 22(3), 343-353, 2015

A SUBSET SUM APPROACH TO COIL SELECTION FOR SLITTING

Yune T. Han, Soo Y. Chang*

Department of Industrial and Management Engineering
Pohang University of Science and Technology
San 31 Hyoja-Dong, Pohang, Kyungbuk, 790-784, Republic of Korea
*Corresponding author’s e-mail: syc@postech.ac.kr

Optimizing coil slitting operation requires not only the generation of efficient slitting patterns but also the selection of coils to
be slit by each pattern. When the coils to be slit are not identical, the optimal coil selection can be quite a cumbersome task. In
this paper, we consider the problem of selecting coils to be slit by a given particular slitting pattern when the available coils
are not identical. The objective of our problem is to maximize the customer order fulfillment while minimizing the slitting
loss and overproduction. We adopt and modify a dynamic programming scheme for the subset sum problem to develop an
optimal pseudo-polynomial time algorithm for the problem and demonstrate that the algorithm is fast enough for solving
realistic problem instances.
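As a minimal sketch of the dynamic-programming backbone referred to in the abstract, the snippet below is a standard pseudo-polynomial subset-sum recursion that selects coils whose total weight comes as close as possible to a target without exceeding it. The paper’s algorithm additionally balances slitting loss and overproduction; the weights and target below are hypothetical.

```python
# Pseudo-polynomial subset-sum DP: pick a subset of coil weights whose total
# is as close as possible to (without exceeding) the ordered quantity.

def select_coils(coil_weights, target):
    reachable = {0: []}                      # achievable total -> chosen coil indices
    for i, w in enumerate(coil_weights):
        for total, chosen in list(reachable.items()):
            new_total = total + w
            if new_total <= target and new_total not in reachable:
                reachable[new_total] = chosen + [i]
    best = max(reachable)                    # closest achievable total <= target
    return best, reachable[best]

weights = [12, 7, 9, 15, 4]                  # coil weights (e.g., in tonnes)
print(select_coils(weights, target=30))      # -> (28, [0, 1, 2])
```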

Keywords: cutting stock, subset sum, coil slitting, 1.5 dimensional packing;

(Received on June 20, 2014; Accepted on October 29, 2014)

1. INTRODUCTION

Steel plate that is thin enough to be rolled is produced, stored and handled in the form known as the coil. In the rolling process of the steel mill, coils are produced in a limited variety of widths to avoid the productivity loss and setup cost that may be incurred whenever there is a change in the target widths of the coils being produced. Hence, the coils in stock must be slit widthwise in order to fulfill customer orders, which tend to come in a much greater variety of widths than the coils in stock. Figure 1 illustrates
the slitting operation.

Figure 1. Slitting operation

As illustrated in Figure 1, the coils are slit by the multiple knives put in place following a given slitting pattern
formulated for a particular coil width. Once the knives are in place, it is desirable to slit as many coils as possible without
changing the slitting pattern. However, there are conflicting costs and benefits involved in slitting multiple coils by a fixed

International Journal of Industrial Engineering, 22(3), 354-368, 2015

REVERSE LOGISTICS AND SUPPLY CHAINS: A STRUCTURAL EQUATION MODELING INVESTIGATION

Kaveh Khalili-Damghani1, Madjid Tavana2,3,*, and Maryam Najmodin4

1 Department of Industrial Engineering, South Tehran Branch - Islamic Azad University, Tehran, Iran
2 Business Systems and Analytics Department, Lindback Distinguished Chair of Information Systems and Decision Sciences, La Salle University, Philadelphia, PA, USA
*Corresponding author’s e-mail: tavana@lasalle.edu
3 Business Information Systems Department, Faculty of Business Administration and Economics, University of Paderborn, Paderborn, Germany
4 Department of Industrial Engineering, Industrial Management Institute, Tehran, Iran

The process of transforming raw materials into final products and delivering those products to customers, known as supply
chain (SC) management, is becoming increasingly complex. Most SC management research has been concerned with procurement and production. Recently, however, it has become increasingly important to extend SC issues beyond the point of sale to reverse logistics (RL), where the flow of returned products is processed from the customers back to collection centers for repair, remanufacturing or disposal. We propose a conceptual framework and empirically investigate the
relationship between the key factors in RL and SC performance measurement using a series of hypotheses. Structural
equation modeling (SEM) is used to test the hypotheses. The results reveal insightful information about the effects of RL
factors on the SC performance.

Keywords: supply chain performance; reverse logistic; structural equation modeling

(Received on April 13, 2014; Accepted on November 30, 2014)

1. INTRODUCTION

Competition in the manufacturing environment has shifted from simple and uni-directional supply chains (SCs) to
sophisticated and bi-directional SCs and only firms with agile and versatile SCs can sustain an effective competitive edge
(Ohara, 2002; Chan et al., 2003; Li et al., 2006; Lin et al., 2006; Vonderembse et al., 2006). Most of SC management research
has been concerned with procurement and production. However, recently, it has become increasingly important to extend
SC issues beyond the point of sale to reverse logistic (RL) and the product utilization phase (e.g., service, maintenance and
others) and to the end-of-life phase (e.g., product recovery, refurbishing or recycling) (Schultmann et al., 2006).
A forward SC is concerned with the flow of materials, products and information from suppliers through the
production and distribution processes to the final users (Schary, 2001). A RL is the process of planning, implementing and
controlling the efficient, cost-effective flow of raw materials, in process inventory, finished goods and related information
from the point of consumption to the point of origin for the purpose of recapturing or creating value or for proper disposal
(Rogers and Tibben-Lembke, 1999, p. 2). The majority of the SC performance measurement studies in the literature are
devoted to forward logistics performance measurement. However, a comprehensive SC performance management system
should collectively consider the performance of the RL and the performance of the SC in an integrated framework.
In spite of the fact that RL happens frequently for many reasons such as the rise of electronic retailing, the increase
in catalogue purchases, more self-service stores, and a lower tolerance among buyers for imperfection, few companies

International Journal of Industrial Engineering, 22(3), 369-381, 2015

A SYSTEMATIC METHODOLOGY TO DEVELOP BUSINESS MODEL OF A PRODUCT SERVICE SYSTEM

Ming-Chuan Chiu1,*, Ming-Yu Kuo1 and Tsai-Chi Kuo2

1 Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Taiwan, R.O.C.
*Corresponding author’s e-mail: mcchiu@ie.nthu.edu.tw
2 Department of Industrial and Systems Engineering, Chung Yuan Christian University, Chung Li, Taiwan, R.O.C.

Faced with a growing global population and finite environmental resources, sustainability continues expanding as a serious
concern worldwide. In response to this issue, the Product Service System (PSS) was developed as a planning tool. The
purpose of PSS is not only to meet customer requirements but also to provide an economic/environmental/social “triple-win”
for enterprises. It can be a challenge for enterprises and involved stakeholders to construct a reliable PSS business
model—the interdisciplinary framework that covers market, core competence, product, and profit. Most previous studies
have focused on either PSS methodologies or a business model (BM) development process, but few have integrated both
concepts. Moreover, existent BM procedures seldom investigate interactions among interdisciplinary aspects and typically
lack evaluation tools. Therefore, this study presents a combined PSS BM methodology, generating a proper business model
for a company based on internal capability and external environment factors. Further, a Multiple Criteria Decision Making
(MCDM) tool that integrates both Analytic Hierarchy Process (AHP) and Technique for Order Preference by Similarity to
Ideal Solution (TOPSIS) was used to determine the appropriate BM and to present opportunities for the enterprise to extend
its current products or services into new market segments. A case study illustrates how the PSS business model can create
new commercial opportunities in the market. From an economic perspective, PSS can create new values and improve the
level of customer satisfaction by integrating service, product and system, while from the social and environmental
perspectives, products and services can be more efficiently used and hence enhance environmental sustainability.
Consequently, the proposed PSS business model can benefit not only the enterprise but also the environment and society in
general.

Keywords: product service system (PSS); business model (BM); multiple criteria decision making (MCDM); technique for
order preference by similarity to ideal solution (TOPSIS); analytic hierarchy process (AHP)

(Received on November 9, 2014; Accepted on March 19, 2015)

1. INTRODUCTION

In today’s business world, concern for the environment has come increasingly into play as a planning factor. Analysts
proposed the concept of the Product Service System (PSS) in 1999 as a method for not only sustaining our planet but also
retaining economic benefits. While PSS offers multiple benefits for companies, the environment and society, many
enterprises are unable to develop their own PSS business model due to a lack of knowledge or the use of inaccurate
methodology and, as a result, they may have difficulty surviving in a globally competitive market. The challenge when
developing a PSS business model is to concurrently balance economic, societal and environmental factors along with the
interests of all related stakeholders. Most previous studies have focused solely on either PSS design methodologies or on
business models. Only a few have jointly bridged the two issues. Methods that simultaneously considered an interaction of
business model elements and an evaluation of candidate models have remained absent. Therefore, the goal of this study was
to propose a methodology that could help generate efficient and effective new PSS business models. The proposed method
incorporates use of a MCDM tool that will enable a manufacturing company to transform its original product using a new
service-based pattern and prosper in the competitive global environment, thus generating a triple-win business solution for
the economy, society, and the environment. The paper is organized as follows. Chapter 2 reviews the related literature. Chapter 3 illustrates the proposed methodology. The case study is demonstrated in Chapter 4. Conclusions and potential research issues for future study are given in Chapter 5.
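As a minimal sketch of the TOPSIS step of the proposed MCDM tool, the snippet below ranks candidate business models from a decision matrix; the matrix and weights are hypothetical, and in the methodology the weights would come from AHP.

```python
# Minimal TOPSIS sketch with hypothetical data: rows are candidate PSS business
# models, columns are criteria; weights would in practice be derived via AHP.
import numpy as np

decision = np.array([[7.0, 5.0, 8.0],      # candidate business model A
                     [6.0, 8.0, 6.0],      # candidate business model B
                     [9.0, 4.0, 5.0]])     # candidate business model C
weights = np.array([0.5, 0.3, 0.2])        # assumed AHP-derived weights
benefit = np.array([True, True, True])     # all criteria treated as benefit-type here

norm = decision / np.linalg.norm(decision, axis=0)       # vector-normalize columns
v = norm * weights                                       # weighted normalized matrix
ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # positive ideal solution
anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # negative ideal solution

d_pos = np.linalg.norm(v - ideal, axis=1)
d_neg = np.linalg.norm(v - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)                      # higher = better

print(np.argsort(-closeness))                            # ranking of candidates
```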

International Journal of Industrial Engineering, 22(3), 382-398, 2015

MODELING AND SIMULATION FOR ANALYSIS AND IMPROVEMENT OF A SOCK MANUFACTURING SYSTEM IN A MICRO-ENTERPRISE

Santiago-Omar Caballero-Morales

Postgraduate Division
Technological University of the Mixteca
Road to Acatlima, Km. 2.5, Huajuapan de Leon, Oaxaca, 69000, Mexico
Corresponding author’s e-mail: scaballero@mixteco.utm.mx

The productivity of the national sock manufacturing industry in Mexico has been affected by the introduction of large foreign
manufacturers. Nevertheless there is a small sector formed by micro and small enterprises which is competing for the national
market. Commonly these enterprises are informally planned and managed as the entrepreneurs have very basic knowledge of
accounting, manufacturing-production practices and local market behavior. In this situation the use of computer-assisted
modeling and simulation can be a cost-effective tool to support their planning and managerial decisions. The main objective
of this research is to provide a methodology to analyze a standard sock manufacturing system and support the decision
making process to improve its performance by means of modeling and simulation. The methodology to build the model of the
manufacturing system consists of statistical representations of processing times, arrival description of raw material,
identification of resources and dependencies. The model is statistically validated and simulation is performed for analysis of
resource utilization and estimation of profits. This analysis led to designing an alternative model which presented a significant
increase in production and higher profits. It is expected that this work can be used as a reference to guide academic
practitioners and entrepreneurs to analyze and improve managerial decisions for similar manufacturing processes in
developing economies.
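A minimal discrete-event sketch (using the SimPy library) of the kind of model described follows: stochastic arrivals of raw-material batches compete for knitting machines. All station names, distributions and parameter values are illustrative assumptions, not data from the case study.

```python
# Minimal discrete-event sketch in SimPy: batches of raw material arrive, queue
# for a knitting machine, and are processed with stochastic times.
import random
import simpy

random.seed(42)
produced = 0

def batch_arrivals(env, machine):
    while True:
        yield env.timeout(random.expovariate(1 / 30))    # mean 30 min between batches
        env.process(knit(env, machine))

def knit(env, machine):
    global produced
    with machine.request() as req:                       # wait for a free machine
        yield req
        yield env.timeout(random.triangular(20, 40, 25)) # knitting time (min)
        produced += 1

env = simpy.Environment()
machine = simpy.Resource(env, capacity=2)                # two knitting machines
env.process(batch_arrivals(env, machine))
env.run(until=8 * 60)                                    # one 8-hour shift
print(f"batches completed: {produced}")
```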

Keywords: modeling of sock manufacturing system; validation and simulation of manufacturing systems; mexican
micro-enterprises; economic analysis of a manufacturing system

(Received on March 14, 2014; Accepted on December 22, 2014)

1. INTRODUCTION

In Mexico, 95.5% of economic entities, or enterprises, are micro enterprises (0-10 workers), which employ 45.7% of the national workforce and contribute approximately 15% of the overall Gross Domestic Product (GDP) (Lozano-Yécora et al., 2013). These enterprises, although large in number, have been reported to have short lives: only two out of ten micro enterprises are still operating one year after their creation, and their average lifespan is two years (Lozano-Yécora et al., 2013; Salas et al., 2012; García-Reza and Cristóbal-Vázquez, 2007; Cabello et al., 2007; Martinez-Kasten, 2005).
Among the causes of failure for micro enterprises are the following:

(a) There is no habit of reinvesting in technology to improve production (Fierro, 2006);


(b) most enterprises are based on self-employment, thus managerial skills are uncertain because the entrepreneurs can
only gradually learn about the enterprise's true costs by opening and operating the business (García-Reza and
Cristóbal-Vázquez, 2007; Fajnzylber et al., 2006);
(c) forecasting and operation manuals are not envisioned during the creation of the enterprise (Navarrete-Báez et al.,
2011);
(d) insufficient economic resources to hire specialized workers or to train current ones, update technology or improve infrastructure (Navarrete-Báez et al., 2011);
(e) most of these enterprises are funded as family businesses where the economic resources come from personal savings
(Heino and Pagán, 2001);
(f) The entrepreneurs are mostly self-employers with little or no employment history in the formal economy which
restricts their opportunities to obtain credits from the banking sector (Carrillo, 2009).

The textile industry, which is a significant manufacturing subsector and is mainly composed of micro and small enterprises, has faced growing competition in recent years from Asian countries (Musik-Asali, 2010). Negative aspects that cannot be controlled by these enterprises, such as illegal imports of textile goods, non-registered trading, undervaluation of

International Journal of Industrial Engineering, 22(4), 399-411, 2015

EVALUATING THE LOCATION OF REGIONAL RETURN CENTERS IN REVERSE LOGISTICS THROUGH INTEGRATION OF GIS, AHP AND INTEGER PROGRAMMING

A. Zafer Acar1, İsmail Önden2,* and Karahan Kara2

1 Department of International Logistics and Transportation, Piri Reis University, Istanbul, Turkey
2 Department of International Logistics, Okan University, Istanbul, Turkey
*Corresponding author’s e-mail: ismailonden@gmail.com

Reverse logistics network problems are accepted as a sophisticated research area in the existing literature due to the difficulty of predicting material flow in the network and the conflicting objectives of minimizing total cost and energy consumption while maximizing customer satisfaction and pollution control. There are different approaches to dealing with the difficulties of the problem, yet these approaches overlook the geographical aspect of the issue. In this paper, a
reversed network of a governmental organization is evaluated. A methodology is proposed that integrates Analytic Hierarchy
Process, Geographic Information Systems and integer programming to determine the locations of the return centers and to
deal with the complicated structure. Population, airport locations, maritime facilities, railroad lines and highway lines are
accepted as the decision criteria. Based on these decision criteria, suitability levels of the candidate cities are calculated. A
closed-loop supply chain is taken into consideration and reverse functions of the chain are evaluated with the proposed
methodology. Finally, the locations of regional return centers are determined.

Keywords: reverse logistics, geographic information systems (GIS), spatial analysis, integer programming, analytic
hierarchy process (AHP), location analysis

(Received on April 16, 2013; Accepted on June 21, 2015)

1. INTRODUCTION

Reverse logistics is a sophisticated research area in the supply chain management and network approaches. Legislative
initiatives, increasing awareness from consumers and companies’ perception of new business opportunities (Salema et al.,
2005) increase the importance of this new research topic. Also, reverse logistics reflects a new approach to supply chain.
Today, supply chains no longer finish at the point where products reach the customers, but now include returns as well.
Returned products can range from disposed products to be recycled to products that are sent back due to customers’ dissatisfaction. On the other hand, governmental regulations, social factors and economic concerns make reverse logistics an even more interesting area for researchers. These concerns increase the interest of government and private organizations and strengthen the role of these organizations (Cruz-Rivera et al., 2009). Moreover, increasing competitiveness, after-sales support and sales returns, and green marketing techniques can force businesses to focus on reverse operations.
Owing to its economic and environmental benefits, the topic has also attracted organizations such as CLM (Council of
Logistics Management) and RLEC (Reverse Logistics Executive Council). This growing interest in the topic has resulted in
many scientific articles. Even though the concept of reverse logistics has evolved over time, authors converged on the
proposal of Rogers and Tibben-Lembke (2001). This most cited definition of the topic defines reverse logistics as “the
process of planning, implementing, and controlling the efficient, cost-effective flow of materials, in-process inventory,
finished goods, and related information from the point of consumption to the point of origin for the purpose of recapturing
value or proper disposal”. Similarly RLEC argued that reverse logistics is “the process of planning, implementing and
controlling backward flows of raw materials, in process inventory, packaging and finished goods, from a manufacturing,
distribution or use point, to a point of recovery or point of proper disposal”. In other words, it entails moving goods from their
place of use, back to their place of manufacturing for re-processing, re-filling, repairs or recycling / waste disposal (Deloitte,
2014).

International Journal of Industrial Engineering, 22(4), 412-425, 2015

A MULTI-OBJECTIVE APPROACH TO PLANNING IN EMERGENCY LOGISTICS NETWORK DESIGN

Jae-Dong Hong1,*, Ki-Young Jeong2, and Yuanchang Xie3

1 Industrial Engineering, South Carolina State University, Orangeburg, SC 29117, USA
*Corresponding author’s e-mail: jhong@scsu.edu
2 Engineering Management, University of Houston at Clear Lake, Houston, TX, USA
3 Civil and Environmental Engineering, University of Massachusetts Lowell, Lowell, MA, USA

This paper considers simultaneous strategic and operational planning for the emergency logistics network (ELN) design.
Emergency events make it critical to distribute humanitarian supplies to the affected areas through emergency response facilities (ERFs) in a timely and efficient manner for rapid recovery. However, an emergency could prevent an ERF from providing the expected service, since the facility itself can be damaged or destroyed by such an event. Thus, it is important to plan a cost/distance-effective ELN that is also reliable (less likely to be disrupted at the strategic level) and robust (more likely to work well at the operational level). We adopt a multi-objective decision approach to designing such an ELN. We present formulations for planning and operating the ELN to simultaneously determine the locations of ERFs and to assign the possible
disaster areas to ERFs. A case study is conducted to demonstrate our models’ capability under the risk of facility disruptions.

Keywords: emergency response, facility location, multi-objective approach, facility disruptions

(Received on June 20, 2013; Accepted on June 20, 2015)

1. INTRODUCTION

An emergency logistics network (ELN) design has become an important strategic and operational decision, due to the major
damage inflicted by several weather-related events, such as Hurricane Katrina in 2005 which was one of the deadliest
hurricanes in the United States. The 2011 Tohoku Japan Tsunami left more than 15,000 dead and has become the world’s
most expensive natural disaster on record. In 2012, Hurricane Isaac’s slow and rainy march through Louisiana caused as much as $2.0 billion in insured losses, leaving extensive flood and wind damage in several states. In 2013, a two-mile-wide tornado in a suburb of Oklahoma City killed more than 50 people and destroyed entire tracts of homes. In 2011, one of the deadliest U.S. tornadoes killed 161 people in Joplin, Missouri. These weather-related emergencies have again brought issues of natural disaster planning to the fore. Indeed, after emergencies, it is critical to distribute humanitarian supplies to the affected areas through emergency response facilities (ERFs) in a timely and efficient manner for rapid recovery.
The distribution of emergency supplies from ERFs to the disaster areas must be done via an emergency logistics network
(ELN). The emergency response facilities considered in this paper include (i) distribution warehouses (DWHs), where
emergency relief goods are stored, (ii) intermediate response facilities termed Commodity Distribution Point (CDP) or Break
of Bulk (BOB) point, where people can more effectively gain access to relief goods, and (iii) neighborhood locations in need
of relief goods.
ELN design problem can be divided into two levels: strategic and operational. The primary objective of the strategic
level is to determine the most cost-efficient locations of DWHs and CDPs, distribution of emergency supplies throughout the
ELN, and assignment of neighborhood locations to CDPs and CDPs to DWHs. In fact, some experts insist that 80% of the
costs are locked in with the location of ERFs and determination of distribution of relief items (Watson, Lewis, Cacioppi and
Jayaraman, 2013). Thus determining these locations is a critical area in the design of an effective ELN. However, traditional
cost-based facility location models, such as set-covering models, p-median models, p-center models, and fixed charge facility

International Journal of Industrial Engineering, 22(4), 426-437, 2015

HEALTHCARE PERFORMANCE MEASUREMENT: IDENTIFICATION OF METRICS FOR THE LEARNING AND GROWTH BALANCED SCORECARD PERSPECTIVE

Samin Emami1,*, Toni L. Doolen2

1 Mondelēz International Inc., 100 NE Columbia Blvd., Portland, OR 97211, U.S.A.
*Corresponding author’s e-mail: samin.emami@gmail.com
2 School of Mechanical, Industrial, and Manufacturing Engineering, Oregon State University, Corvallis, OR 97330, U.S.A.

While there is substantial literature devoted to measuring the performance of hospitals and clinics in terms of indicators such
as health outcomes and finances, it is vital for hospitals and clinics to also develop a set of forward-looking metrics at the
operational level that drive all aspects of performance. The purpose of this research is to identify and prioritize a set of metrics
within one perspective of the Balanced Scorecard framework, called learning and growth, which aims at sustaining innovation,
change, and continuous improvement. Using Analytic Hierarchy Process, the data provided by medical managers was
analyzed to determine the most important learning and growth categories and metrics within each category. The results
showed that “human capital” metrics have the most significant impact on the performance of the participating hospitals/clinics.
The results also provide practitioners with metrics spanning each of the four performance categories identified for the learning and growth perspective.

Key words: performance measurement, healthcare, balanced scorecard, learning and growth, AHP

(Received on November 24, 2013; Accepted on June 21, 2015)

1. INTRODUCTION

Healthcare is one of the fastest growing areas of the economy in most developed countries (Purbey, Mukherjee, & Bhar,
2006). Healthcare organizations are expected to deliver high quality service at reduced costs while dealing with swift
changes in technology, patient load fluctuations, inefficient information access and control, and inter-process delays
(Rasheed & Lee, 2014). Articles and reports on the implementation of various performance measurement frameworks in
healthcare are being published at an increasing rate (Azizi, Behzadian, & Afshari, 2012). Many authors have commented on
those aspects of a healthcare organization that need to be monitored on a regular basis. According to Kollberg and Elg
(2010), healthcare organizations are often described as professional organizations in which the medical profession has a
primary influence on healthcare. Therefore, healthcare organizations rely heavily on traditional forms of control, which
makes measurement of operational drivers of performance difficult to capture. Hospitals and clinics around the world have
mostly used performance metrics to measure indicators related to health outcomes and finances. Although it is important to
monitor health and financial outcomes, Longenecker and Fink (2001) suggest that without integrating ongoing operational
performance measurement and feedback into lower levels of healthcare organizations, performance improvement plans will
not be implemented properly. Moreover, organizations without operational metrics tend to experience higher employee
dissatisfaction and employee turnover.
In recent years, some hospitals and clinics have started using a management tool called the Balanced Scorecard (BSC) to
monitor operational metrics in the organization along with health outcome metrics. BSC is recognized as an important
management tool for 21st century companies (Steele, 2001). BSC is both a performance framework and a management
methodology. BSC was developed by Robert Kaplan and David Norton after an extensive research project in 1990. Kaplan
and Norton believed that traditional performance measurement systems that focused primarily on financial measurements
actually hindered organizational growth and success. The conclusions were that, contrary to popular practice, organizations
should not be managed solely based on “bottom line” results (Kaplan & Norton, 1992). BSC was initially used in the
private and profit sectors. In the late 1990s, non-profit organizations, including healthcare and educational organizations,
began considering BSC as an applicable management tool (Azizi et al., 2012). BSC typically includes organizational

International Journal of Industrial Engineering, 22(4), 438-453, 2015

RELIABILITY OPTIMIZATION OF A SERIES-PARALLEL K-OUT-OF-N SYSTEM WITH FAILURE RATE DEPENDS ON WORKING COMPONENTS OF SYSTEM

Mani Sharifi1,*, Ghasem Cheragh1, Kamran Dashti Maljaii1, Arash Zaretalab2 and Amir Vahid Fakre Daei3

1 Faculty of Industrial & Mechanical Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
*Corresponding author’s e-mail: M.Sharifi@Qiau.ac.ir
2 Department of Industrial Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran
3 Faculty of Management, U.A.E. Branch, Islamic Azad University, Dubai, U.A.E.

This paper presents a mathematical model for a redundancy allocation problem (RAP) with k-out-of-n subsystems in which the failure rate depends on the number of working components in the system; that is, the failure rate of the remaining components increases when a component fails. Each subsystem may use either an active or a cold-standby redundancy strategy, which is considered a decision variable. Thus, the proposed model and solution methods select the best redundancy strategy (active or cold-standby), the component type, and the level of redundancy for each subsystem. The objective function is to maximize the system reliability under cost and weight constraints. Since the RAP is NP-hard, four meta-heuristic algorithms, namely a genetic algorithm (GA), a memetic algorithm (MA), simulated annealing and particle swarm optimization, are proposed. The results show that the MA performs better than the other algorithms. Finally, a statistical test is applied to determine whether there is any significant difference between the results of the four algorithms.

Keywords: reliability, redundancy allocation problem, k-out-of-n systems, Meta-heuristic algorithm, response surface
methodology.

(Received on January 5, 2013; Accepted on February 20, 2015)

1. INTRODUCTION

Reliability is one of the most important concerns in system design. Many studies have been conducted in this area, and many solutions have been proposed to increase system reliability, such as redundancy allocation and improvement of component failure rates. Fyffe et al. (1968) presented a new method to improve system reliability called the RAP. This problem aims to improve system reliability by adding redundant components to each subsystem under constraints, e.g. cost, weight, and volume. The problem is non-linear, and Chern (1972) proved that it is NP-hard.
The failure rate of components is one of the most important factors affecting system reliability. In the classic RAP literature, component failure rates are considered as:
1. Constant failure rate (CFR), which corresponds to an exponential probability distribution function (pdf) for component life.
2. Time-dependent failure rate, which is either an increasing failure rate (IFR) or a decreasing failure rate (DFR). These failure rates correspond to Gamma, Weibull and other pdfs for component life.
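For the CFR case with active redundancy, the reliability of a k-out-of-n:G subsystem of identical, independent components follows the standard binomial expression; the short sketch below evaluates it for illustration only and does not model the dependent failure rate considered in this paper.

```python
# Reliability of a k-out-of-n:G subsystem of identical, independent CFR
# components with active redundancy (standard binomial expression).
import math

def component_reliability(lam, t):
    """R(t) = exp(-lambda * t) for a constant-failure-rate component."""
    return math.exp(-lam * t)

def k_out_of_n_reliability(k, n, r):
    """Probability that at least k of n identical components survive."""
    return sum(math.comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r = component_reliability(lam=0.002, t=100)     # per-component reliability ~ 0.819
print(k_out_of_n_reliability(k=2, n=3, r=r))    # 2-out-of-3 subsystem
```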
Among CFR studies, Misra and Sharma (1991) used a combination of direct search and random search methods for a series-parallel system with k-out-of-n subsystems and mixed redundancies. Pham (1992) considered a system consisting of a single k-out-of-n subsystem with an active redundancy strategy for minimizing system cost. Pham and Malon (1994) presented this model considering multiple failure modes for components; in this problem the objective function is to minimize system cost in order to find the optimal number of subsystem components. Ida et al. (1994) were the first to use a genetic algorithm (GA) to solve the RAP. Painton and Campbell (1995) worked on a series-parallel RAP under risk and solved the presented
model using GA. Coit and Smith (1995) presented a review of the different optimization techniques named dynamic
programming (DP), integer programming (IP), mixed integer and nonlinear programming (MINLP) and presented a GA for

International Journal of Industrial Engineering, 22(4), 454-466, 2015

A BUSINESS PROCESS SIMULATION FRAMEWORK INCORPORATING THE EFFECTS OF ORGANIZATIONAL STRUCTURE

Jinyoun Lee1, Sanghyun Sung2, Minseok Song3, and Injun Choi1,*

1 Department of Industrial and Management Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, 790-784 South Korea
*Corresponding author’s e-mail: injun@postech.ac.kr
2 Graduate School of Technology and Innovation Management, Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, 790-784 South Korea
3 School of Business Administration, Ulsan National Institute of Science and Technology (UNIST), Ulsan, 689-798 South Korea

Organizations constantly change their business processes and/or organizational structure to innovate and adapt to the rapidly
changing environment. Business process simulation is one of the most popular methodologies for more effectively predicting
the effects of process and organizational redesign. Most existing approaches, however, consider only business processes and
not organizational structures that can significantly affect business process performance. This study presents a framework for
incorporating the effects of organizational structure into business process simulation. Further, it demonstrates how to use and
analyze the proposed model. Finally, a case study of the Korean prosecutor’s office is presented to illustrate the importance
and feasibility of the proposed approach, which will enable a more precise prediction of the changes caused by process and
organizational redesign.

Keywords: business process simulation, business process reengineering, business process analysis, organizational structure,
organizational structure redesign

(Received on November 6, 2014; Accepted on June 18, 2015)

1. INTRODUCTION

Today’s organizations change or redesign their business processes and/or organizational structures more frequently than ever
before to innovate and adapt to the rapidly changing environment. Many organizations frequently make use of simulation
techniques to more effectively predict the effects of process redesign (Greasley, 2003; Gregoriades and Sutcliffe, 2008; Barjis
and Verbraeck, 2010; van der Aalst, 2010). In many Business process reengineering (BPR) and process innovation (PI)
projects, business analysts simulate redesigned processes to validate the processes and identify possible problems caused by
the changes in processes (Giaglis et al., 1999; Chen and Tsai, 2008; Gregoriades and Sutcliffe, 2008; Barjis and Verbraeck,
2010). Most existing approaches, however, consider only business processes and not organizational structures that can
significantly affect business process performance (Chen and Tsai, 2008; Barjis and Verbraeck, 2010; Hearn and Choi, 2013).
To more appropriately analyze and simulate business processes, the effects of organizational structure should be
considered. Lee et al. (2014) proposed a basic approach to incorporating the effect of organizational structure into the business process simulation model from two perspectives: departmentalization and centralization. That work focused on how the characteristics of organizational structure affect transfers of work between tasks in business processes. However, no case study has validated the proposed model or demonstrated how its results are used.
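To make the idea concrete, the following Python fragment is a minimal, hypothetical sketch (not the model of Lee et al. 2014 or of this paper): hand-over delays between consecutive tasks are drawn from different distributions depending on whether the two performers sit in the same department, so changing the departmentalization changes the simulated cycle time.

# Minimal sketch: a toy business process simulation in which the transfer delay
# between two consecutive tasks depends on the performers' departments (assumed data).
import random

DEPARTMENT = {"prosecutor": "A", "clerk": "A", "investigator": "B"}   # assumed org chart
PROCESS = [("register", "clerk"), ("investigate", "investigator"), ("decide", "prosecutor")]

def transfer_delay(prev_role, next_role):
    # same-department hand-overs are assumed faster than cross-department ones
    if prev_role is None:
        return 0.0
    same = DEPARTMENT[prev_role] == DEPARTMENT[next_role]
    return random.expovariate(1.0 / (1.0 if same else 4.0))

def simulate_case():
    clock, prev_role = 0.0, None
    for task, role in PROCESS:
        clock += transfer_delay(prev_role, role)     # organizational effect
        clock += random.expovariate(1.0 / 2.0)       # task service time (assumed)
        prev_role = role
    return clock

cycle_times = [simulate_case() for _ in range(1000)]
print(sum(cycle_times) / len(cycle_times))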
To address this gap in the previous research, this study proposes a framework not only for how to incorporate the effects
of organizational structure into business process simulation, but also for how to analyze and use the results. Further, a case
study is presented to illustrate the importance and feasibility of the proposed approach, which will enable a more precise
prediction of the changes caused by process and organizational redesign.
The remainder of the paper is organized as follows. Section 2 discusses related research. Section 3 describes an
approach for deriving a process simulation model that incorporates the effects of organizational structure and proposes how to
analyze the proposed model. Section 4 presents a case study. Section 5 presents the conclusions of the study.



International Journal of Industrial Engineering, 22(4), 467-479, 2015

EFFICIENT SYNTACTIC PROCESS DIFFERENCE DETECTION AND ITS APPLICATION TO PROCESS SIMILARITY SEARCH
Keqiang Liu1,2, Zhiqiang Yan2,*, Yuquan Wang3, Lijie Wen3, and Jianmin Wang3

1 School of Computer Science and Technology
Beijing Institute of Technology
Beijing, China

2 Information School
Capital University of Economics and Business
Beijing, China
* Corresponding author’s e-mail: zhiqiang.yan.1983@gmail.com

3 School of Software
Tsinghua University
Beijing, China

Nowadays, business process management plays an important role in the management of organizations. More and more
organizations describe their operations as business processes. It is common for organizations to have collections of thousands
of business process models. The same process is usually modeled differently due to the different rules or habits of different
organizations and departments. Even in the subsidiaries of the same organization, process models vary from each other,
because these process models are redesigned from time to time to continuously enhance the efficiency of management and
operations. Therefore, techniques are required to analyze differences between similar process models. Current techniques can detect the operations required to transform one process model into another. However, these operations are defined on individual activities and their syntactic meaning is limited. In this paper, we define differences based on workflow patterns and propose a technique to detect these differences efficiently. In addition, we propose a metric that computes process similarity based on the detected syntactic differences. To the best of our knowledge, this is the first technique that returns a list of syntactic differences while computing a similarity score between two process models. Experiments show that these differences indeed exist in real-life process models and are useful for analyzing them; the experiments also show that the similarity metric based on the detected differences performs well in similarity search, with an average precision score of 0.8.

Keywords: business process model, syntactic difference, process similarity, process feature

(Received on November 7, 2014; Accepted on June 18, 2015)

1. INTRODUCTION

Recently, organizations tend to enhance their management efficiency with the technology of business process management
(BPM). More and more business processes are described as process models to facilitate the implementation of BPM.
Therefore, it is common to see thousands of process models in an organization or even in a department. For example, the
information department of China Mobile Communication Corporation (CMCC) maintains more than 8,000 processes in its
BPM systems (Gao et al. 2013). To manage such a large number of process models efficiently and automatically, business
process model repositories are required (La Rosa et al. 2011, Yan et al. 2010, 2012). These repositories provide techniques such as detecting differences between process models (Dijkman et al. 2008), process similarity search (Yan et al. 2012), and process querying (Yan et al. 2012).
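To illustrate what a list of syntactic differences and a derived similarity score can look like, the following Python fragment is a deliberately simplified sketch (an activity- and edge-level set comparison, not the workflow-pattern differences defined in this paper); the model contents are assumed.

# Minimal sketch: represent each process model as a set of activities and
# directly-follows edges, list the syntactic differences, and derive a crude
# Jaccard-style similarity score from them.
def differences(model_a, model_b):
    acts_a, edges_a = model_a
    acts_b, edges_b = model_b
    return {
        "activities_only_in_a": acts_a - acts_b,
        "activities_only_in_b": acts_b - acts_a,
        "edges_only_in_a": edges_a - edges_b,
        "edges_only_in_b": edges_b - edges_a,
    }

def similarity(model_a, model_b):
    # ratio of shared activities and edges to all activities and edges
    items_a = model_a[0] | model_a[1]
    items_b = model_b[0] | model_b[1]
    return len(items_a & items_b) / len(items_a | items_b)

m1 = ({"receive", "check", "approve"}, {("receive", "check"), ("check", "approve")})
m2 = ({"receive", "check", "reject"}, {("receive", "check"), ("check", "reject")})
print(differences(m1, m2))
print(round(similarity(m1, m2), 2))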
This paper focuses on detecting (syntactic) differences between business process models. Difference detection can be applied, for example in the case of a merger between the BPM systems of two or more organizations, to detect differences between two process models describing the same operation in different organizations. For example, originally each of the 34 subsidiaries of CMCC built its own BPM system to maintain its business processes, but later the headquarters decided to build a unified one to maintain the business process models of all subsidiaries (Gao et al. 2013). Since these subsidiaries have almost the same business processes, their process models are similar with some variations. For each business process, the headquarters would also like to have one model that works for the different scenarios of all subsidiaries instead of maintaining different models for different scenarios. Therefore, a technique is required that can detect the differences between



International Journal of Industrial Engineering, 22(4), 480-493, 2015

A SYSTEMATIC METHODOLOGY FOR OUTPATIENT PROCESS ANALYSIS BASED ON PROCESS MINING
Minsu Cho1, Minseok Song1,*, and Sooyoung Yoo2

1 Department of Management Engineering
Ulsan National Institute of Science and Technology
Ulsan, Republic of Korea
* Corresponding author’s e-mail: msong@unist.ac.kr

2 Seoul National University Bundang Hospital
Seongnam, Republic of Korea

The healthcare industry is competitive due to the increase in demand for medical services caused by population aging and improved standards of living. In line with this trend, several studies have investigated how clinical processes can be improved, for example by decreasing waiting times for consultation or optimizing reservation systems. To improve clinical processes in hospitals, understanding the current situation and identifying problems are critical. In this paper, a method to analyze outpatient processes based on process mining is suggested. Process mining aims at extracting process-related knowledge from event logs. The proposed methodology consists of data integration, data exploration, data analysis, and discussion steps. In the data analysis step, process discovery and matching rate analysis, process pattern analysis, and what-if analysis based on performance analysis are conducted by applying several process mining techniques. To validate the proposed method, a case study is conducted with a tertiary general university hospital in Korea.

Keywords: process mining; healthcare; outpatient process analysis; case study

(Received on November 9, 2014; Accepted on June 18, 2015)

1. INTRODUCTION

1.1 Background

The healthcare environment has become an important issue due to the increase in demand for medical services caused by population aging and improved standards of living. For this reason, not only high-quality consultation but also optimal clinical processes should be provided to patients (Anyanwu et al. 2003). While all patient-related information was written by hand on charts in the past, hospitals nowadays use several information systems to record patient information, such as PMS (Practice Management System), EMR (Electronic Medical Record), CPOE (Computerized Physician Order Entry), and PACS (Picture Archiving and Communication System) (Kagadis and Langer 2011). The information stored in these systems can be used as a source for data analysis in order to understand and improve clinical processes. By analyzing the data, hospitals can manage patients and provide better services at a lower cost (Mans et al. 2008). However, most hospitals have conducted only statistical analysis to understand their current situation, without any further advancement.
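As a small, hypothetical illustration of the kind of information process mining extracts from such systems (not the methodology proposed in this paper), the following Python fragment builds a directly-follows graph from a toy outpatient event log, the basic process-discovery step that process mining tools automate.

# Minimal sketch: compute directly-follows relations and their frequencies
# from an assumed event log of (case id, activity, timestamp in minutes).
from collections import Counter, defaultdict

LOG = [
    (1, "registration", 0), (1, "consultation", 20), (1, "payment", 35),
    (2, "registration", 5), (2, "blood test", 25), (2, "consultation", 60), (2, "payment", 70),
]

def directly_follows(log):
    by_case = defaultdict(list)
    for case, act, ts in sorted(log, key=lambda e: (e[0], e[2])):
        by_case[case].append(act)
    edges = Counter()
    for trace in by_case.values():
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return edges

for (a, b), freq in directly_follows(LOG).items():
    print(f"{a} -> {b}: {freq}")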
With regard to outpatient processes, many studies have been conducted. Numerous approaches have been applied to outpatient process analysis, such as modeling, discrete-event simulation, and statistical testing. In terms of purpose, these papers have aimed to improve outpatient processes by reducing waiting or idle time (Lindley 1952; Jansson 1966) or by designing optimized reservation systems (Fries and Marathe 1981). However, these works were not data-driven studies using several hospital information systems, nor did they propose a systematic analysis method. In addition, they tended to focus on a specific task and were not able to analyze overall processes at a macro level. To overcome these limitations, research using process mining techniques has been suggested from the business process perspective in healthcare.
Process mining aims at extracting process-related knowledge from event logs recorded in information systems. Process mining has been applied to various industries such as manufacturing, medical IT devices, and call centers (van der Aalst et al. 2007; van der Aalst 2011; Lee et al. 2012). It has also been applied to the healthcare environment. However, most of these works focused on a specific technique and had limitations in overall process analysis. Since the processes in a hospital are unstructured and complex to analyze (Mans et al. 2008; van der Aalst 2011), a systematic methodology is required. In this paper, we suggest a systematic methodology from data integration and exploration to data



International Journal of Industrial Engineering, 22(4), 494-508, 2015

A MODEL-CHECKING BASED APPROACH TO ROBUSTNESS ANALYSIS OF PROCEDURES UNDER HUMAN-MADE FAULTS
Naoyuki Nagatou1,2,*, Takuo Watanabe1

1 Department of Computer Science
Graduate School of Information Science and Engineering
Tokyo Institute of Technology
Meguro-ku, Tokyo, 152-8552, Japan.

2 PRESYSTEMS Inc.
1461 Kamimuzata
Togane, Chiba, 283-0011, Japan.
* Corresponding author’s e-mail: nagatou@presystems.xyz

A model-checking approach to analyze the robustness of procedures that suffer from human-made faults is proposed. Many
procedures executed by humans incorporate fault detection and recovery tasks to recover from human-made faults. Examining
whether such recovery tasks work as expected is crucial to preserving the trust and reliability required in safety-critical domains. To achieve this, we employ a fault-injection method that injects a set of human-made faults into a fault-free model of a given procedure. The fault set is selected according to Swain's discrete action classification. The proposed approach uses a model checker to find paths to error states within the model; the model and its properties are formalized via the calculus of communicating systems (CCS) and linear temporal logic (LTL). The effectiveness of the proposed method is demonstrated by investigating
the recoverability of a real-world procedure.

Keywords: human-made fault, model checking, dependability, robustness, linear temporal logic, process algebra.
(Received on November 7, 2014; Accepted on June 18, 2015)

1. INTRODUCTION

Humans follow procedures in many settings such as industrial plants, aircraft, and hospitals. Procedures in safety-critical domains typically require high levels of dependability, which is ensured by the skills and knowledge of domain experts and, because domain experts may make mistakes, by the inherent robustness of the procedures themselves. In this paper, we present a formal approach for analyzing the robustness of procedures with respect to human-made faults and the logical characteristics of robust procedures.
A Hazard and Operability (HAZOP) study (IEC 2001) is a structured and systematic examination of a planned process
to identify and evaluate problems that may represent risks to people or equipment. HAZOP was initially developed to analyze
chemical plants, and it was later extended to study procedures, humans, and software. The HAZOP examination is based on
guide words such as NO OR NOT, MORE, LESS, AS WELL AS, PART OF, REVERSE, and OTHER THAN which are
used to identify deviations from the intended design. A HAZOP study begins by defining the scope and objectives of a given system. The system is divided into parts, and parameters of the parts are defined. Each guide word is applied to each of the parts and parameters. The results are recorded on HAZOP worksheets, which are matrices of parts and parameters against guide words.
If systems and their objectives are written in a formal description such as a Petri net or a formal modeling language used in model checking, then formal analysis becomes available and helpful for evaluating the consequences of HAZOP studies. A workflow net (Aalst 1997) has two special places, i.e., input and output places that represent the single beginning and end of a procedure. Moreover, automatic analysis is also available, e.g., using the labeled transition system analyzer (LTSA) (Karamanolis et al. 2000).
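As a toy illustration of the underlying idea (the paper formalizes models in CCS and properties in LTL; the Python fragment below is only an assumed sketch), injecting a human-made fault into a small procedure model can open a path to an error state that a reachability search, and hence a model checker, will find.

# Minimal sketch: inject an omission fault into a small procedure model and
# search for a path to an error state; all states and transitions are assumed.
from collections import deque

TRANSITIONS = {                                   # fault-free procedure
    "start": [("check_valve", "checked")],
    "checked": [("open_valve", "running")],
    "running": [("shutdown", "done")],
}
ERROR_STATES = {"error"}

def inject_omission(transitions):
    # omission fault: the operator may skip the check and open the valve directly
    faulty = {s: list(t) for s, t in transitions.items()}
    faulty["start"].append(("open_valve_unchecked", "error"))
    return faulty

def path_to_error(transitions, start="start"):
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state in ERROR_STATES:
            return path
        for action, nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None

print(path_to_error(TRANSITIONS))                    # None: fault-free model is safe
print(path_to_error(inject_omission(TRANSITIONS)))   # counterexample path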
Fields (2001) applied model checking to the analysis of faults in human–computer interaction design. A model written in the Murphi language (Formal Verification Group, University of Utah) describes the interactions between a device and user tasks in a usage model. Combining a device model and a usage model of user tasks represents a situation wherein a user operates a device. User performance that deviates from the design intent is injected into the combined model on the basis of defined deviations; in this case, deviations are coded as transition rules. The analysis of erroneous actions is then performed on the injected model against properties expressing hazard states or goal conditions. Another application of model checking is Bolton and Bass’



International Journal of Industrial Engineering, 22(5), 509-523, 2015

STACK PRE-MARSHALLING PROBLEM: A HEURISTIC-GUIDED BRANCH-AND-BOUND ALGORITHM
Ruiyou Zhang1,*, Zhong-Zhong Jiang2,3, and Won Young Yun4

1 College of Information Science and Engineering
Northeastern University
Shenyang, China
* Corresponding author’s e-mail: zhangruiyou@ise.neu.edu.cn

2 Department of Information Management and Decision Sciences
School of Business Administration, Northeastern University
Shenyang, China

3 Institute of Behavioral and Service Operations Management
Northeastern University
Shenyang, China

4 Department of Industrial Engineering
Pusan National University
Busan, Korea

The stack pre-marshalling (SPM) problem is a complex combinatorial optimization problem which arises mainly within the
field of container logistics. This paper presents a heuristic-guided branch-and-bound (HGB&B) algorithm that can be used to solve the SPM problem effectively. The HGB&B algorithm has a guiding heuristic that cuts ‘valueless’ branches, i.e., branches that can definitely not lead to the optimal solution, before their bounds are calculated, which greatly improves search efficiency. Additionally, two heuristics are designed to generate initial feasible solutions of the problem. Experiments indicate that the running time of the algorithm is acceptable when the product of the number of stacks and the maximum stack height is within about 35. The HGB&B algorithm is faster than the existing exact algorithm and more efficient than a number of sub-optimal algorithms. Therefore, it is applicable and valuable in most real-life scenarios, including container logistics, reducing the time needed to solve the problem to optimality.

Keywords: branch and bound; combinatorial optimization; container pre-marshalling; logistics; stack pre-marshalling
problem

(Received on April 29, 2014; Accepted on June 21, 2015)

1. INTRODUCTION

As a combinatorial optimization problem, the stack pre-marshalling (SPM) problem was initially identified within the management of container logistics. Therefore, it is also known as the container pre-marshalling problem in the literature (Lee and Hsu, 2007). In the last decade there has been considerable growth in container transportation, which has led to a need for further
optimization of these systems worldwide (Boysen et al., 2013; Carlo et al., 2014; Lee and Chao, 2009; Lehnfeld and Knust,
2014; Sibbesen, 2008; Stahlbock and Voß, 2008; Zhang et al., 2014). Container terminals play a critical role in determining
the efficiency of container logistics (Vis and Roodbergen, 2009). Cranes and trucks move containers between yards and
vessels at terminals. Here, the handling of outbound containers is a typical example to illustrate the SPM problem. The
containers to be loaded onto a vessel are divided into groups according to their weights, destinations, and so on (Forster and
Bortfeldt, 2012). The groups of containers with a higher priority should be loaded before those with a lower priority. However, the containers with a higher priority are not always on top of those with a lower priority, for various reasons such as inaccurate logistics information. In order to improve operational efficiency and decrease the turnaround time of vessels, containers need to be pre-marshalled in advance so that no container with a higher priority is blocked by one with a lower priority. Usually, containers are pre-marshalled only within a bay, that is, a row of stacks, because inter-bay container pre-marshalling is very time-consuming (Forster and Bortfeldt, 2012).
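As a small illustration of the quantities involved (an assumed sketch, not the HGB&B algorithm), the following Python fragment counts the mis-overlaid containers in a bay; the count is zero exactly when no higher-priority container is blocked, and it is a simple lower bound on the number of moves a pre-marshalling plan still needs.

# Minimal sketch: a bay is a list of stacks, each listed bottom to top; a smaller
# priority number means the container must leave the bay earlier (assumed data).
def blocking_containers(bay):
    count = 0
    for stack in bay:
        for i, prio in enumerate(stack):
            # this container blocks a higher-priority container placed below it
            if any(below < prio for below in stack[:i]):
                count += 1
    return count

bay_before = [[2, 1, 3], [1], []]           # assumed bay: 3 stacks, max height 3
bay_after = [[2, 1], [1], [3]]              # after relocating the blocking container
print(blocking_containers(bay_before))      # 1
print(blocking_containers(bay_after))       # 0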



International Journal of Industrial Engineering, 22(5), 524-548, 2015

A MIXED INTEGER-PROGRAMMING MODEL FOR PERIODIC ROUTING OF SPECIAL VESSELS IN OFFSHORE OIL INDUSTRY
Abdolhamid Eshraghniaye Jahromi* and Roohollah Ighani Yazdeli

Department of Industrial Engineering
Sharif University of Technology
Tehran, Iran
* Corresponding author’s e-mail: eshragh@sharif.edu

In order to maintain the production of offshore oil wells, the National Iranian Oil Company periodically services facilities and equipment located on oil wellheads with mobile wellhead servants. It also gives technical support to oil wellheads and mobile wellhead servants with supporter vessels. Due to supply limitations, there are not enough special vessels (namely supporter vessels and mobile wellhead servants) compared to the number of oil wellheads. The failure of special vessels to service oil wellheads, or of supporter vessels to technically support mobile wellhead servants, according to a predetermined plan will lead to considerable losses in the production performance of oil wells and hence higher costs. We propose a mixed integer programming model and a heuristic algorithm to find the best plan for servicing oil wellheads considering travel and shortage costs. Computational results on both simulated and real-life instances are reported, and the performance of the proposed algorithm is evaluated.

Keywords: Oil wellheads; Periodic routing; Mobile wellhead servants; Supporter vessels; Heuristic algorithm.

(Received on May 25, 2014; Accepted on August 1, 2015)

1. INTRODUCTION

In order to maintain the production of offshore oil wells, the National Iranian Oil Company, one of the major oil and gas
producing companies operating in the Persian Gulf, services facilities and equipment located on oil wellhead platforms
(henceforth called wellheads) with two types of special vessels equipped with service tools. The first type of special vessel
is called a ‘mobile wellhead servant’ (MWS). The second type is called a ‘supporter vessel’ (SV) and is charged with
giving technical support to wellheads and also MWSs. Due to limitations in supplying service tools, there are not enough
special vessels compared to the number of wellheads. In fact, the failure of MWSs to service wellheads, or of SVs to technically support wellheads and MWSs, according to a predetermined plan will lead to considerable losses in the production performance of oil wells and hence higher costs.
The periodic routing problem of MWSs and SVs consists of finding the best routing plan across all periods of a given horizon in order to serve all wellheads demanding services and technical support and all MWSs demanding technical support. In this problem, the objective is to minimize the travel and shortage costs of MWSs and SVs. This paper is organized as follows: Section 2 reviews the relevant literature. Section 3 describes the problem and develops a mathematical formulation. A heuristic algorithm to solve the problem is proposed in Section 4. Section 5 shows the computational results of the proposed algorithm, and finally Section 6 presents conclusions and suggestions for future research.
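As a minimal, hypothetical illustration of the objective just described (not the MIP model or heuristic developed in this paper), the following Python fragment evaluates a candidate periodic plan by its travel cost plus shortage penalties for demanded wellheads left unserved; all coordinates, demands, and costs are assumed.

# Minimal sketch: travel-plus-shortage cost of a candidate periodic plan for a
# single MWS; every figure below is illustrative.
import math

WELLHEADS = {"W1": (0, 10), "W2": (8, 3), "W3": (15, 12)}   # assumed coordinates
DEPOT = (0, 0)
SHORTAGE_COST = 50.0          # assumed penalty per unvisited demanded wellhead
COST_PER_UNIT_DISTANCE = 1.0

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_cost(route):
    # depot -> wellheads in order -> depot
    stops = [DEPOT] + [WELLHEADS[w] for w in route] + [DEPOT]
    return COST_PER_UNIT_DISTANCE * sum(dist(a, b) for a, b in zip(stops, stops[1:]))

def plan_cost(plan, demands):
    # plan: period -> route of the vessel; demands: period -> wellheads to serve
    total = 0.0
    for period, required in demands.items():
        route = plan.get(period, [])
        total += route_cost(route)
        total += SHORTAGE_COST * len(set(required) - set(route))
    return total

demands = {1: ["W1", "W2"], 2: ["W2", "W3"]}
plan = {1: ["W1", "W2"], 2: ["W3"]}          # W2 is left unserved in period 2
print(round(plan_cost(plan, demands), 1))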

2. LITERATURE REVIEW

Because MWSs and SVs are used as vehicles to service wellheads as nodes and the problem data stretches over several
periods, the problem considered in this paper is closely related to the periodic vehicle routing problem (PVRP). The PVRP
is a generalization of the vehicle routing problem (VRP). The objective of PVRP is to determine routes for a fleet of
vehicles over several periods. Like the VRP, the vehicles’ tours start and end at a given depot. The aim of many previous
PVRP papers and also the aim of this paper is to model and solve real life instances by considering novel applicable and
practical features.
An extensive description of the PVRP can be found in a survey by Francis et al. (2008), where the evolution of the
problem in the literature and its solution methods are presented. The PVRP was initially introduced by Beltrami and Bodin (1974) for modeling the municipal waste collection problem. Butler et al. (1997) studied Irish dairy farm collection by vehicle in the county of Dublin, considering that some farms needed every-day pickup



International Journal of Industrial Engineering, 22(5), 549-574, 2015

METHODOLOGY FOR SELECTION OF OPTIMAL PORTFOLIO IN MAINTENANCE DEPARTMENTS
María Carmen Carnero

Technical School of Industrial Engineering
University of Castilla-La Mancha
Ciudad Real, Spain
Corresponding author’s e-mail: carmen.carnero@uclm.es

This paper presents a methodology for the selection of a project portfolio in maintenance departments. Although project
portfolio selection techniques have been widely analysed in the literature, their use has traditionally been rare in the
maintenance environment. Therefore, in this paper a multi-criteria audit has been designed to evaluate the state of a
maintenance department. The multi-criteria audit and the Taguchi loss function are combined to estimate the external
benefits of each project. Additionally, a multi-criteria additive model that uses two types of benefits, internal and external,
is applied to estimate the benefit-cost ratio. The resulting efficient frontier allows the optimum portfolio to be selected.
Organisations can use this methodology to determine the current state of their maintenance department, locate its
weaknesses, design projects to overcome these deficiencies, and monitor the improvements obtained after introducing the
projects. The methodology has been applied to a healthcare organisation.

Keywords: project portfolio selection; maintenance audit; multi-criteria model; health-care organisation; taguchi loss
function

(Received on November 8, 2013; Accepted on June 21, 2015)

1. INTRODUCTION

Over the last few years machines and devices in organisations have been dramatically increasing in both number and
complexity (Alsyouf, 2009); therefore maintenance services are now critical for improving the availability and safety of
equipment and facilities, the quality of products and services, cost reductions (Wang et al., 2007) and for on-time delivery
and environmental requirements (Alsyouf, 2007). For all these reasons, maintenance managers need to monitor the
performance of the maintenance department to be in a position to prevent or limit deficiencies. However, evaluating
maintenance efficiency has traditionally been difficult (Waeyenbergh and Pintelon, 2002). Performance evaluation systems
make use of indicators (e.g., Arts et al., 1998; Martorell et al., 2002; Van Horenbeek and Pintelon, 2014; Parida et al.,
2015) or audits (e.g., Raouf, 1994; Dwight, 1999; Karapetrovic and Willborn, 2000; Al-Muhaisen and Santarisi, 2002;
Carnero and Delgado, 2008; Macián et al. 2010; Bana e Costa et al., 2012).
Once the organisation’s maintenance is controlled, managers need to carry out corrective actions or introduce projects
to tackle the deficiencies detected. Ideally, they should do this in a process of continuous improvement.
Project portfolio selection is a strategic decision (Liesio et al., 2007). In this selection process the decision-maker must
allocate a limited quantity of resources to a set of competing projects (Medaglia et al., 2007). Thus the decision-making
process is complex, with a large number of stages, decision-making groups, and conflicting objectives, and a high risk and
uncertainty (Ghasemzadeh and Archer, 2000). This decision-making can be more easily analysed by multi-criteria
techniques as they can include multiple aspects (technical, economic, political, social, environmental, etc.) which can be
assessed quantitatively or qualitatively and which regularly conflict, allowing acceptable compromise solutions to be
reached (Munda et al., 1994); furthermore, they are ideal for including the opinions of different stakeholders, relationships
between fields, large quantities of data, and all the characteristics found in real problems (Munda, 2005). These techniques
simplify the complexity of problems, leading to public acceptance of the solutions found (Huang et al., 2011). This is
essential when the investment necessary to introduce the project must be justified. Therefore, these techniques can help the
organisation to make its selection process more transparent (Loch et al., 2001), unlike the traditional selection process based solely on the managers’ experience. Multi-criteria techniques, however, can also be useful in a field of organisational
innovation such as the project management office (PMO) where activities related to project management are centred
(Aubry et al., 2011) and which is recognised for its ability to improve the organisational performance of the company in
different areas (Spalek, 2013).
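As a small, hypothetical sketch of how these ingredients can fit together (not the audit or model developed in this paper), the following Python fragment uses a quadratic Taguchi loss on audit scores to estimate the external benefit of each project, combines internal and external benefits in a simple additive model, and ranks projects by benefit-cost ratio under a budget; all weights, targets, and project data are assumed.

# Minimal sketch: Taguchi loss avoided + additive multi-criteria value + greedy
# benefit-cost selection under a budget (illustrative data only).
def taguchi_loss(score, target=10.0, k=1.0):
    # quadratic loss grows as the audit score falls short of its target
    return k * (target - score) ** 2

PROJECTS = {
    "CMMS upgrade":  {"cost": 30, "internal": 6, "before": 4, "after": 8},
    "Vibration kit": {"cost": 20, "internal": 5, "before": 5, "after": 7},
    "Training plan": {"cost": 10, "internal": 4, "before": 6, "after": 7},
}
WEIGHTS = {"internal": 0.4, "external": 0.6}     # assumed additive-model weights
BUDGET = 40

def benefit(p):
    external = taguchi_loss(p["before"]) - taguchi_loss(p["after"])   # loss avoided
    return WEIGHTS["internal"] * p["internal"] + WEIGHTS["external"] * external

ranked = sorted(PROJECTS.items(), key=lambda kv: benefit(kv[1]) / kv[1]["cost"], reverse=True)
chosen, spent = [], 0
for name, p in ranked:
    if spent + p["cost"] <= BUDGET:
        chosen.append(name)
        spent += p["cost"]
print(chosen, spent)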



International Journal of Industrial Engineering, 22(5), 575-600, 2015

MODELING AND INTEGRATION OF PLANNING, SCHEDULING, AND EQUIPMENT CONFIGURATION IN SEMICONDUCTOR MANUFACTURING
PART I. REVIEW OF SUCCESSES AND OPPORTUNITIES
Ken Fordyce1,*, R. John Milne2, Chi-Tai Wang3, and Horst Zisgen4

1 Arkieva Supply Chain Solutions
Wilmington, DE, USA
* Corresponding author’s email: kfordyce@arkieva.com

2 School of Business
Clarkson University
Potsdam, NY, USA

3 National Central University
Taoyuan City, Taiwan

4 IBM Software Group
Mainz, Germany

Managing the supply chain of a semiconductor based package goods enterprise—including planning, scheduling, and equipment
configurations—is a complicated undertaking, particularly in a manner that is responsive to changes throughout the demand
supply network. Typically, management responds to the complexity and scope by partitioning responsibility, which narrows the focus of most of the groups in an organization—even though the myriad decisions are tightly integrated. Improving system
responsiveness is best addressed by an advanced industrial engineering (AIE) team that is typically the only group with the
ability to see the forest and the trees. These teams integrate information and decision technology (analytics) into an application
which improves some aspect of planning, scheduling, and equipment configuration. This paper illustrates the need for AIE teams
to serve as agents of change, touches on three success stories, highlights the sporadic progress and incubation process in applying
analytics to support responsiveness, where forward progress by early adopters is often followed by stagnation or reversal as
subsequent adopters require a natural incubation period. This paper and its companion paper (Part II. Fab Capability Assessment)
identify modeling challenges and opportunities within these critical components of responsiveness: semiconductor fabrication
facility/factory capability assessment, moderate length process time windows, moving beyond opportunistic scheduling, and plan
repairs to modify unacceptable results. Although aspects of this paper have the feel of a review paper, this paper is different in
nature—a view from the trenches which draws from the collective clinical experience of a team of agents of change within the
IBM Microelectronics Division (MD) from 1978 to 2012. During much of this period, MD was by itself a Fortune 100-size firm with a diverse set of products and manufacturing facilities around the world. During this time frame, the team developed and institutionalized applications to support responsiveness within IBM and for IBM clients, while staying aware of what others were doing in the literature and industry. The paper provides insights from the trenches to shed light on the past but more
importantly to identify opportunities for improvement and the critical role of advanced industrial engineers as agents of change
to meet these challenges.

Keywords: demand supply network, tool capacity planning, hierarchical production control, systems integration, process time
windows, semiconductor manufacturing
(Received on October 28, 2014; Accepted on May 24, 2015)

1. INTRODUCTION

Little (1992) observes: “Manufacturing systems are characterized by large, interactive complexes of people and equipment in
specific spatial and organizational structures. Because we often know the sub units already, the special challenge and opportunity
is to understand interactions and system effects. There are certainly patterns and regularity here. It seems likely that researchers
will find useful empirical models of many phenomena in these systems. Such models may not often have the cleanliness and
precision of Newton's laws, but they can generate important knowledge for designers and managers to use in problem solving.”
Nick Donofrio (Lyon et al., 2001), then IBM Senior Vice President, Technology & Manufacturing (now retired) notes
in his Franz Edelman Finalist Award video, “The ability to simultaneously respond to customers’ needs and emerging business



International Journal of Industrial Engineering, 22(5), 601-617, 2015

MODELING AND INTEGRATION OF PLANNING, SCHEDULING, AND EQUIPMENT CONFIGURATION IN SEMICONDUCTOR MANUFACTURING
PART II. FAB CAPABILITY ASSESSMENT
Ken Fordyce1,*, R. John Milne2, Chi-Tai Wang3, Horst Zisgen4

1 Arkieva Supply Chain Solutions
Wilmington, DE, USA
* Corresponding author’s email: kfordyce@arkieva.com

2 School of Business
Clarkson University
Potsdam, NY, USA

3 National Central University
Taoyuan City, Taiwan

4 IBM Software Group
Mainz, Germany

Managing the supply chain of a semiconductor based package goods enterprise—including planning, scheduling,
dispatching, and equipment configurations—is a complicated undertaking, particularly in a manner that is responsive to
changes throughout the demand supply network. In a companion paper (Part I. Review of Successes and Opportunities), we
illustrate the need for industrial engineering teams to serve as agents of change, review prior successes, and highlight issues
and opportunities for improvement in this highly integrated decision space. In this Part II paper, we identify modeling
challenges and opportunities within a critical component of responsiveness: semiconductor fabrication facility/factory
(FAB) capability assessment (FCA). This involves estimating the output of the FAB given the current conditions of the FAB (work in process, planned wafer starts, equipment availability and configuration, and other factors). A twin objective of FCA is to determine the actions the FAB should take (how to change the current conditions) to achieve specified output targets.

Keywords: tool capacity planning, waiting time, hierarchical production control, systems integration, semiconductor
manufacturing
(Received on May 10, 2015; Accepted on May 24, 2015)

1. INTRODUCTION

The global challenge that torments semiconductor fabrication/factory (FAB) management, keeps planners working late, and
creates frustration with occasional glory for modeling professionals is FAB Capability Assessment (FCA). This challenge
takes the form of two questions. 1) Given a set of conditions, what is the outcome and impact? and 2) Given desired
outcomes, what conditions (if any) will generate these outcomes? Outcomes are commonly expressed as the quantity of
good wafers (from WIP and new starts) that will exit the FAB by part number and by exit date (or time period). Cycle time
is a secondary outcome measure. Impact is the workload placed on FAB assets (tools, manpower, consumables, etc.).
Conditions are any aspects of the FAB production and finance environment that can be changed (at least in theory) such as:
the starts profile, adding capacity, different tool (equipment) deployments, limits on chemicals to control costs, expedite
guidelines, cost accounting, dispatch scheduling logic, manufacturing engineering requirements, changing production
process(s), and altering tooling characteristics. The corollary challenge is a model that summarizes FAB capabilities to
support the interplay between organizations in the demand supply network. With FCA, the primary attention is on the end
products and services of the FAB.
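As a minimal, hypothetical illustration of the first FCA question (an assumed deterministic, spreadsheet-style sketch, not one of the models reviewed in the companion paper), the following Python fragment converts planned wafer starts into tool-group workloads and flags the bottleneck.

# Minimal sketch: compare required versus available tool-hours per tool group for
# one period; starts, routings, tool counts, and availabilities are illustrative.
STARTS = {"logic": 1000, "memory": 600}            # wafers planned for the period

HOURS_PER_WAFER = {                                # assumed tool-hours per wafer
    "litho":   {"logic": 0.35, "memory": 0.30},
    "etch":    {"logic": 0.20, "memory": 0.25},
    "implant": {"logic": 0.10, "memory": 0.08},
}
# tools per group, hours in the period, and availability (assumed)
CAPACITY = {"litho": (3, 168, 0.85), "etch": (4, 168, 0.80), "implant": (2, 168, 0.90)}

def workload_report():
    report = {}
    for group, per_wafer in HOURS_PER_WAFER.items():
        required = sum(per_wafer[p] * STARTS[p] for p in STARTS)
        tools, hours, avail = CAPACITY[group]
        available = tools * hours * avail
        report[group] = (required, available, required / available)
    return report

for group, (req, avail, load) in sorted(workload_report().items(), key=lambda kv: -kv[1][2]):
    flag = "  <-- bottleneck" if load > 1.0 else ""
    print(f"{group:8s} required {req:6.1f} h, available {avail:6.1f} h, load {load:4.0%}{flag}")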
The companion paper (Part I. Review of Successes and Opportunities, Fordyce et al., 2015) provides a literature
review on modeling approaches for FCA: deterministic spreadsheet based models, heuristics, historical allocations,
optimization, queueing equations/networks, discrete event simulation, column generation, clearing functions, fuzzy demand,
and hybrid scheduling. The first question from people in the trenches is, “When is it best to use which method?” The underpinning of the complexity is twofold: (a) as John Fowler observed in his 1992 presentation at Loon



International Journal of Industrial Engineering, 22(5), 618-630, 2015

OVERALL RESOURCE EFFECTIVENESS (ORE) INDICES FOR TOTAL RESOURCE MANAGEMENT AND CASE STUDIES
Chen-Fu Chien1,*, Pei-Chun Chu1, and Lizhong Zhao1,2

1 Department of Industrial Engineering and Engineering Management,
National Tsing Hua University, Hsinchu 30013, Taiwan
* Corresponding author’s e-mail: cfchien@mx.nthu.edu.tw

2 Department of Industrial Engineering, Harbin Institute of Technology, Harbin, China

High-tech industries are capital-intensive, in which capital effectiveness and productivity are critical for maintaining
competitive advantages. Most of the existing studies focused on demand forecast, capacity planning, order allocation, and
capacity management to enhance capital effectiveness. Few approaches have been developed to address productivity and resource management, although one of the critical roles of industrial engineers in practice is to enhance productivity and resource utilization effectiveness. This study proposes a set of novel indices for Overall Resource Effectiveness (ORE) to drive various improvement directions for total resource management. A number of case studies are reviewed for illustration, and the proposed methodology is extended to medical instruments for cross validation. The results show the practical viability of the proposed ORE indices in driving collaborative efforts to enhance total productivity and overall resource effectiveness. This paper concludes with discussions on the value propositions of the proposed ORE indices and future
research directions.

Keywords: overall resource effectiveness (ORE); overall equipment effectiveness (OEE); overall wafer effectiveness (OWE);
overall usage effectiveness (OUE); productivity; total resource management

(Received on February 13, 2015; Accepted on July 16, 2015)

1. INTRODUCTION

The semiconductor industry is one of the most complicated industries in which productivity enhancement, yield
enhancement, continual cost reduction, fast ramp-up, on-time delivery, and cycle time reduction are the important ways for
operational excellence (Chien and Wu, 2003; Wu and Chien, 2008). Driven by Moore’s Law (Moore, 1965) that the
number of transistors fabricated in the same size area will be doubled every 12 to 24 months to provide more capability at
equal or less cost, the semiconductor industry has strived for continuous technology migration via capital investments and
cost reduction to maintain competiveness. The roles and responsibilities of industrial engineers are facing challenges in
light of the changes of industry structures as well as the evolutionary information technologies for supporting business
analytics, optimization, and decision making. In particular, high-tech industries such as semiconductor, solar cell, and
TFT-LCD manufacturing are capital-intensive, in which capital effectiveness and productivity are critical for reducing the
costs and maintaining competitive advantages. Most of the existing studies focused on demand forecast, capacity planning,
order allocation, and capacity management to enhance capital effectiveness. Few approaches are developed to address
productivity and resource management, while one of the critical roles for industrial engineers is to enhance productivity and
resource utilization effectiveness in practice.
To fill the gaps, this study aims to propose a set of novel indices for Overall Resource Effectiveness (ORE) that can
drive specific improvement directions for total resource management. In particular, a number of case studies are reviewed
for illustration, while the proposed methodology is extended for enhancing the effectiveness of capital-intensive medical
instruments for cross validation. The results show that the proposed indices can be employed in the semiconductor industry and other industries to drive collaborative efforts to enhance total productivity and overall resource effectiveness.
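For reference, the baseline that ORE generalizes is the standard overall equipment effectiveness (OEE) calculation; the following Python fragment shows that arithmetic on assumed shift data (it is not one of the proposed ORE indices).

# Minimal sketch: OEE = availability x performance x quality, with illustrative data.
def oee(planned_time, downtime, ideal_cycle_time, total_units, good_units):
    operating_time = planned_time - downtime
    availability = operating_time / planned_time
    performance = (ideal_cycle_time * total_units) / operating_time
    quality = good_units / total_units
    return availability * performance * quality, (availability, performance, quality)

score, (a, p, q) = oee(planned_time=480, downtime=60,      # minutes in a shift
                       ideal_cycle_time=0.8,               # minutes per unit
                       total_units=450, good_units=430)
print(f"availability={a:.2f}, performance={p:.2f}, quality={q:.2f}, OEE={score:.2%}")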
This paper is organized as follows: Section 2 reviews the fundamentals of enhancing overall equipment effectiveness. Section 3 proposes a conceptual framework for enhancing overall resource effectiveness and illustrates its usage for various resources such as semiconductor wafers and materials. Section 4 describes a case study in a medical center for cross validation. Section 5 concludes this study with discussions on the value propositions of the proposed ORE indices and future research directions.



International Journal of Industrial Engineering, 22(5), 631-644, 2015

UNISON DECISION ANALYSIS FRAMEWORK FOR WORKFORCE PLANNING FOR SEMICONDUCTOR FABS AND AN EMPIRICAL STUDY
Yun-Hsuan Lin, Chen-Fu Chien*, Chih-Min Yu

Department of Industrial Engineering and Engineering Management,
National Tsing Hua University, Hsinchu 30013, Taiwan
* Corresponding author’s e-mail: cfchien@mx.nthu.edu.tw

With the increase in the scale of semiconductor manufacturing, the number of knowledge-workers has also increased
tremendously. The cost of automation and manpower is increasing annually, and engineers and technical operators are
playing an increasingly crucial role in factories. The optimal workforce plan for manufacturing and the improvement of
productivity have become key topics. In semiconductor manufacturing, numerous factors affect the workforce plan for
manufacturing. The problem of determining the actual manpower demand, given the different preference structures of various
decision makers, is difficult to solve. This study employed the UNISON decision analysis framework for constructing a
workforce planning decision model for semiconductor manufacturing. We also held discussions with domain experts to
identify key performance indices for human capital management. An empirical study was conducted in a semiconductor
company, and the results showed that the proposed framework could assist the company in developing an operation
workforce planning model and an associated management mechanism for improving the decision quality and decision
rationality. Thus, the company could enhance human capital and productivity to maintain corporate competitiveness.

Keywords: human capital, decision analysis, UNISON decision analysis framework, semiconductor, workforce planning

(Received on March 12, 2015; Accepted on June 17, 2015)

1. INTRODUCTION

Driven by Moore’s Law (Moore, 1965) that the number of transistors fabricated in the same size area will be doubled every 12
to 24 months to provide more capability at equal or less cost, the semiconductor industry has strived for continuous
technology migration via capital investments and cost reduction to maintain competiveness. With continuous technology
migrations, the semiconductor industry is capital and knowledge intensive. Semiconductor industry is one of the most
complicated industries in which productivity enhancement, yield enhancement, continual cost reduction, fast ramp-up,
on-time delivery, and cycle time reduction are the important ways for operational excellence (Chien and Wu, 2003;
Leachman et al., 2007; Wu and Chien, 2008; Wu, 2013). As the range of applications of semiconductor components has
increased, the life cycle of products has become shorter. Knowledge workers, including engineers and technical staff, are increasingly important assets for modern semiconductor companies to maintain competitive advantages, since they operate highly automated and intelligent manufacturing facilities. Most studies on human productivity enhancement have focused on increasing throughput and overall equipment effectiveness (Chien and Hsu, 2006; Chien et al., 2007). Little research has been conducted on enhancing workforce planning and staff productivity.
This study aims to construct a workforce planning decision model and the associated management mechanism for
reasonable workforce planning and people productivity planning for increasing operational efficiency and enhancing the
competitiveness of companies. This research focuses on manufacturing manpower by considering direct labor (DL) and indirect labor (IDL). The results show the practical viability of the proposed approach. Indeed, the proposed approach has been implemented in real settings.
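As a minimal, hypothetical illustration of one ingredient of such a model (not the UNISON framework itself), the following Python fragment estimates direct-labor headcount per shift from tool counts, assumed machine-to-operator ratios, and an allowance for absence and training.

# Minimal sketch: first-cut direct-labor demand per shift; all figures are illustrative.
import math

TOOL_GROUPS = {"litho": 30, "etch": 40, "diffusion": 25}         # tools installed
MACHINES_PER_OPERATOR = {"litho": 4, "etch": 6, "diffusion": 5}  # assumed ratios
ABSENCE_ALLOWANCE = 0.12        # fraction of headcount covering leave and training

def direct_labor_per_shift():
    headcount = {}
    for group, tools in TOOL_GROUPS.items():
        base = math.ceil(tools / MACHINES_PER_OPERATOR[group])
        headcount[group] = math.ceil(base * (1 + ABSENCE_ALLOWANCE))
    return headcount

demand = direct_labor_per_shift()
print(demand, "total per shift:", sum(demand.values()))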
The remainder of this paper is organized as follows: Section 2 reviews related studies to construct the theoretical foundation. Section 3 introduces the proposed framework for workforce planning based on the UNISON decision framework. Section 4 describes an empirical study conducted in a leading semiconductor manufacturing company in Taiwan for validation. Section 5 concludes this study with discussions of contributions and future research directions.

2. LITERATURE REVIEW

Workforce planning involves matching the supply of and demand for employees from a strategic level to an operational level.
Strategic workforce planning involves the determination of the workforce size over a long period of time (Koutsopoulos et al.,
1987). Huselid (1995) supported predictions indicating that the impact of high-performance work practices on a firm’s



International Journal of Industrial Engineering, 22(5), 645-660, 2015

RANDOM DISTURBANCE REASONING MODEL OF DECISION-MAKING SYSTEM AND ITS ANTI-INTERFERENCE CAPABILITY
Liping Fu1, Meng Shen1, Hai Yanyu1 and Man Xu2,*

1 College of Management and Economics
Tianjin University
Tianjin 300072, China

2 Department of Industrial Engineering
Nankai University
Tianjin 300457, China
* Corresponding author’s e-mail: twinklexu@163.com

To address reasoning robustness issues caused by external interference in decision-making systems, a random disturbance reasoning model (RDRM) is proposed. Based on the concept of  divergence, the statistical correlations and dissimilarities between input variables and disturbances are derived, and their overlapping effect is revealed through a derived stability criterion for the RDRM. The  divergence monotonicity of the convex interference is proved using convex combination principles of the empirical knowledge. A CBR/RBR fusion reasoning solution of the RDRM is obtained with dynamically adapted parameters and a confidence level based on robust thresholds. To reduce the disturbance magnitude estimated from the system gain, disturbance reduction strategies are established using the covariance matrix of specific input information, enhancing the anti-interference capability of the RDRM. A human factors dataset from a simulated manufacturing platform, subject to two kinds of external interference, is used to execute the reasoning process of remote decision-making. The anti-interference capability of the RDRM is verified with ROC curves.

Keywords: decision robustness; CBR/RBR fusion reasoning; anti-interference; human factors analysis

(Received on March 16, 2015; Accepted on July 20, 2015)

1. INTRODUCTION

Research on decision robustness is a primary discipline of complex system management and control, focusing on how to strengthen anti-interference capability. Anti-interference analysis plays an important role in a wide variety of fields, including the impact of interference factors such as vibration and noise in manufacturing systems (Almannai et al., 2008; Battini et al., 2011) on the bodies and cognitive abilities of operating personnel; the health of astronauts harmed by harmful gases and radiation in the confined environment of a space station or aircraft (Tvaryanas et al., 2006); physical and psychological pressure or occupational health risks (Zhang et al., 2006); supply network configuration (Lee et al., 2010); and accident prevention in the design of complex systems.
In the literature, two categories of research have been pursued to build the anti-interference capability of complex systems: the deterministic framework and the stochastic framework. The deterministic framework has been adopted for representing and reasoning about the dynamic state of input and output information (Heckerman et al., 1995), while the stochastic framework has been used to identify interference factors and their dynamic characteristics. Models based on the deterministic framework show large errors between the reasoning solution and the real behavior of complex decision-making under certain states (such as the worst case discussed by De Vries and Van den Hof, 1995), whereas the stochastic framework handles the anti-interference problem well. To identify interference factors, Pan et al. (2007) developed a random feature selection method with Fisher discriminant analysis for the error bound in case-based reasoning, solving the problem of redundant features in case bases with noisy data and avoiding the selection of cases that belong to a single region. Melek et al. (2005) presented a fuzzy noise-rejection data partitioning method to separate random outlier data, avoiding and eliminating the impact of interference on the reasoning solution. From the perspective of the dynamic character of interference, Liu et al. (2008) built a two-stage adaptive perturbation analysis model that reconstructs the original probability distribution of data influenced by the interference, using an interval to adjust the decision-making information and the reasoned confidence level of the perturbation analysis model; Zhou et al. (2010) presented a belief construct framework model to measure the interference factors and completed dynamic decision reasoning by maximizing the expected likelihood function of the conditional probability density. With respect to anti-interference capability, Martins et al. (2007) proposed a mutual information based method to measure the



International Journal of Industrial Engineering, 22(5), 661-682, 2015

DISCRETE-EVENT SIMULATION FOR SEMICONDUCTOR WAFER FABRICATION FACILITIES: A TUTORIAL
John W. Fowler1, Lars Mönch2,*, Thomas Ponsignon3

1 Department of Supply Chain Management
Arizona State University
Tempe, AZ 85287-4706, USA

2 Chair of Enterprise-wide Software Systems
Department of Mathematics and Computer Science
University of Hagen
58097 Hagen, Germany
* Corresponding author’s e-mail: lars.moench@fernuni-hagen.de

3 Infineon Technologies AG
85579 Neubiberg, Germany

Discrete-event simulation is a well-established and rather successful method in some semiconductor companies, while other
companies do not use simulation at all. Simulation is used for performance assessment and decision-making. This paper
focuses on the methodological and practical issues that have to be addressed to build, use, and maintain simulation models for
a semiconductor wafer fabrication facility (wafer fab). We describe and discuss the main steps of a simulation study in this
domain. We seek to highlight the main issues and present alternative ways to address them. Common pitfalls in using
discrete-event simulation in semiconductor manufacturing and how they can be avoided are also discussed.

Keywords: discrete-event simulation; semiconductor manufacturing; modeling issues; tutorial

(Received on May 28, 2015; Accepted on October 16, 2015)

1. INTRODUCTION

Semiconductor manufacturing deals with producing integrated circuits on silicon wafers. Over the last 55 years it has progressed from scientific research to a mature industry. Today, 200,000 and 250,000 people are employed in the semiconductor industries of Europe and the U.S., respectively. The industry supports more than one million additional indirect jobs in Europe and America. European and U.S. semiconductor companies generated $33 and $146 billion in
sales in 2012, respectively. Semiconductors make the global trillion dollar electronics industry possible (cf. European
Semiconductor Industry Association 2015 and Semiconductor Industry Association 2015). According to the Semiconductor
Industry Association, $34 billion were invested in research and development in 2013 by the U.S. semiconductor industry.
Modern wafer fabs are among the most complex manufacturing systems that exist today (cf. Mönch et al. 2013). The size
and complexity of the related supply chains suggest that simple, intuitive, manual techniques are unlikely to perform well.
While there is a considerable analytical culture in the semiconductor industry due to its science and technology roots, in the
beginning the industry was mainly driven by device design considerations and yield management. Manufacturing and supply
chain management was not viewed as a source of competitive advantage in the beginning (cf. Chien et al. 2011). Because of
the fierce competition, model-based decision making has become more and more important. Among the different methods from Industrial Engineering, Computer Science, and Operations Research, simulation has been notably successful. The first scientific
paper in this field we are aware of, namely (Dayhoff and Atherton 1987), deals with a simulation model of a wafer fab. Using
simulation in semiconductor manufacturing was the object of intensive scientific discussions (cf. Fowler et al. 1998). At the
same time, using discrete-event simulation in wafer fabs is not as straightforward as one might think at first glance (cf. Fowler and Rose 2004 and Fischbein and Yellig 2011 for a discussion of the difficulties of simulation modeling for manufacturing
companies). Simulation as a technology has a number of inherent difficulties and limitations that have to be taken into
account when considering its use. Therefore, the main goal of the present paper is to present some of our knowledge and
experience in designing, developing, and deploying simulation models in a tutorial-type manner. This includes a discussion of
lessons learned during the application of discrete-event simulation in wafer fabs.
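As a small, hypothetical illustration of what such a model looks like in practice (not the authors' tutorial code; all arrival and process times are assumed), the following Python sketch uses the open-source SimPy package to simulate one tool group with a few parallel machines and exponentially distributed lot interarrival and process times, and reports the average queueing delay.

# Minimal sketch: a single tool group serving arriving lots, built with SimPy.
import random
import simpy

RANDOM_SEED, SIM_TIME = 42, 7 * 24 * 60           # one simulated week, in minutes
MEAN_INTERARRIVAL, MEAN_PROCESS, MACHINES = 55.0, 150.0, 3

def lot(env, name, tool, waits):
    arrival = env.now
    with tool.request() as req:                   # queue for a free machine
        yield req
        waits.append(env.now - arrival)
        yield env.timeout(random.expovariate(1.0 / MEAN_PROCESS))

def source(env, tool, waits):
    i = 0
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_INTERARRIVAL))
        i += 1
        env.process(lot(env, f"lot-{i}", tool, waits))

random.seed(RANDOM_SEED)
env = simpy.Environment()
tool_group = simpy.Resource(env, capacity=MACHINES)
waiting_times = []
env.process(source(env, tool_group, waiting_times))
env.run(until=SIM_TIME)
print(f"{len(waiting_times)} lots processed, mean wait {sum(waiting_times)/len(waiting_times):.1f} min")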



International Journal of Industrial Engineering, 22(6), 683-704, 2015

AN ECONOMIC ORDER QUANTITY MODEL FOR LOTS CONTAINING DEFECTIVE ITEMS WITH REWORK OPTION
Harun Öztürk1, Abdullah Eroglu1 and Gyu M. Lee2,*

1 Department of Business Administration
Suleyman Demirel University
Isparta, Turkey

2 Department of Industrial Engineering
Pusan National University
Busan, Korea
* Corresponding author’s e-mail: glee@pnu.edu

In a manufacturing system, it is unavoidable that defective items are produced. A portion of these defective items are reworkable, and reworking them often leads to extra profit. In order to determine these effects on the total profit, an economic order quantity model is proposed for lots containing defective items, with shortages allowed, when a rework option is available. The proposed model determines the economic order quantity and backorder quantity for a single item in an ordering system. It is assumed that a 100% inspection process is performed to separate good and defective items in each ordered lot. The defective items consist of imperfect-quality, scrap, and reworkable items. Rework can be applied only to reworkable items, turning them into good or scrap items. A numerical example is provided to show the effects of the rework option in the proposed model. In order to determine whether the cost of rework is justifiable, the proposed model is compared with one that includes only imperfect-quality and scrap items with shortages. Finally, a sensitivity analysis on some parameters of the proposed model is carried out.

Keywords: inventory model; economic order quantity; defective items; rework; rework cost; shortages

(Received on November 8, 2014; Accepted on December 12, 2015)

1. INTRODUCTION

The classical inventory model, namely the economic order quantity (EOQ) model, was introduced by Harris (1913). Later, the economic production quantity (EPQ) model was proposed by Taft (1918). The basic assumption of these models is that 100% of the items ordered or produced are of perfect quality. However, decision makers have come to realize that this assumption is not valid in most production systems. Consequently, many companies need to develop new inventory models to control ordering and production. A large number of inventory models have been proposed by various researchers to show the impracticability of the assumption that all items are perfect. Porteus (1986) assumed that the production process can go out of control with a given probability, after which it produces defective items, and introduced options for investing in quality improvement and setup cost reduction. Later, Rosenblatt and
Lee (1986) considered an economic production quantity (EPQ) model in which production system contains some defective
items. The basic assumption of their model is that production system produces only good items from the start until a certain
time point, which is a random variable. Then, the system becomes out-of-control and starts to produce defective items at a
certain percentage until the end of the production period. They also assumed that the defective items are reworkable for a
certain cost through the same production process. This model was extended by Kim and Hong (1999) to the case where the duration until the system goes out of control is arbitrarily distributed. Chung and Hou (2003) extended the work of Kim and Hong by assuming that shortages are allowed.
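For reference, the classical EOQ baseline that these extensions build on can be stated in a few lines of Python; the demand, ordering cost, and holding cost in the example are assumed, and this is not the proposed rework model.

# Minimal sketch: Harris's EOQ formula Q* = sqrt(2*D*K/h) and its relevant cost.
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit_year):
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit_year)

def total_relevant_cost(q, annual_demand, order_cost, holding_cost_per_unit_year):
    # ordering cost plus cycle-stock holding cost for order quantity q
    return annual_demand / q * order_cost + q / 2 * holding_cost_per_unit_year

D, K, h = 12000, 100.0, 2.5          # assumed units/year, $/order, $/unit/year
q_star = eoq(D, K, h)
print(round(q_star), round(total_relevant_cost(q_star, D, K, h), 2))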
Salameh and Jaber (2000) proposed an EOQ model, where each lot contains defective items, and the proportion of
defective items is a uniformly-distributed random variable. It is assumed that each ordered lot is subject to 100% inspection
process and defective items are sold as a single batch at a discounted price after the inspection process. It is also assumed
that shortages are not allowed. Goyal and Cardenas-Barron (2002) considered a simple practical approach to determine the
optimal order quantity in Salameh and Jaber’s model. Papachristos and Konstantaras (2006) proposed an alternative
solution to Salameh and Jaber’s model. They clarified the sufficient conditions for shortages. Konstantaras et al. (2007)
proposed a joint lot sizing and inspection inventory model when each ordered lot contains a random proportion of defective



International Journal of Industrial Engineering, 22(6), 705-716, 2015

DESIGN OF AN EXTENDED NONPARAMETRIC EWMA SIGN CHART

Chen-Fang Tsai1, Shin-Li Lu1,* and Chi-Jui Huang2

1 Department of Industrial Management and Enterprise Information, Aletheia University, New Taipei City 251, Taiwan
2 Department of International Trade, Jinwen University of Science and Technology, New Taipei City 231, Taiwan
* Corresponding author's e-mail: shinlilu@mail.au.edu.tw

Selecting a suitable control chart is essential to effectively monitoring process shifts. In particular, a nonparametric control
chart is recommended over a traditional control chart when the distribution of the quality characteristics of a process is unknown. In this study,
we propose a new algorithm that extends the nonparametric exponentially weighted moving average (EWMA) sign chart to
a double EWMA (DEWMA) sign chart to improve detection ability for small process shifts. Simulation studies show that
the nonparametric DEWMA sign chart performs better than the EWMA sign chart in detecting small process shifts, but that
they perform similarly when detecting large shifts. A real-life example of service times from a service system of a bank
branch in Taiwan is used to illustrate the proposed novel nonparametric DEWMA sign chart.

Keywords: nonparametric control chart; EWMA sign chart; DEWMA sign chart; detection ability

(Received on October 25, 2013; Accepted on September 10, 2015)

1. INTRODUCTION

The exponentially weighted moving average (EWMA) control chart was first introduced by Roberts in 1959, and has been
widely used in statistical process control (SPC) ever since. One major advantage of the EWMA control chart is that it
detects small process shifts more quickly than the traditional Shewhart control chart. As a rule, most control charts assume
production processes follow a normal or specified probability distribution. However, in reality there is often limited or no
information about the underlying process distribution. Hence, applying a nonparametric approach to establish control charts
seems a reasonable alternative when the distribution of process observations is non-normal or unknown.
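As a quick reminder of the underlying statistic, the generic EWMA recursion applied to a plotted statistic Y_t (for a sign chart, typically a count of observations above the target) takes the standard form below; the specific chart constants used by the authors are not reproduced here.

% Generic EWMA recursion: Y_t is the monitored statistic (e.g., a sign statistic),
% 0 < lambda <= 1 is the smoothing constant, and Z_0 is set to the in-control mean of Y_t.
\[
Z_t = \lambda Y_t + (1-\lambda) Z_{t-1}, \qquad t = 1, 2, \ldots
\]
% A DEWMA chart applies the same exponential smoothing a second time to Z_t.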
Several studies have demonstrated the increasing role of nonparametric methods in control chart applications.
Some nonparametric control charts are designed to monitor process means. For example, Bakir (2004) presented a
distribution-free (nonparametric) Shewhart control chart based on the Wilcoxon signed-rank statistic for monitoring a
process center. The proposed chart is more efficient than the traditional Shewhart X-bar chart under heavy-tailed
distributions, but is less efficient under light-tailed distributions. Bakir (2006) proposed the nonparametric Shewhart,
exponentially weighted moving average (EWMA), and cumulative sum (CUSUM) control charts for monitoring an
unspecified in-control target process center. The proposed charts are more efficient than the traditional normal control
charts with a moderate or heavy-tailed underlying distribution. Chakraborti and Eryilmaz (2007) considered a runs rule on
Shewhart-type nonparametric signed-rank charts. This chart offers attractive false alarm rates, and is efficient under a
variety of distributions. Chakraborti and Van de Wiel (2008) developed a Mann-Whitney statistic-based control chart and
showed that it offers considerably improved performance over the traditional Shewhart X-bar chart.
With respect to monitoring process variability, Das (2008) studied the efficiency of nonparametric control charts using
a two-sample variability study. Das and Bhattacharya (2008) developed a nonparametric control chart based on Conover’s
squared rank test for controlling variability. They were able to show that the chart was more efficient than the Shewhart S²
chart in detecting process variability. Khilare and Shirke (2010) proposed a nonparametric synthetic control chart using sign
statistics for monitoring location parameters. Later, Shirke and Khilare (2012) presented nonparametric synthetic control
charts for process variation. They compared their statistical performance with the Shewhart sign and S² charts and showed
that the proposed charts were better equipped to identify out-of-control signals.
In the perspective of nonparametric control charts, detecting small changes in process proportion is an important area
of study in SPC. Yang and Cheng (2011) proposed a nonparametric CUSUM mean chart based on process proportion to
monitor the possible small mean shifts in the process. Yang et al. (2011) established the nonparametric EWMA sign chart
and showed that it was suitable and efficient for monitoring small shifts of a process target when the underlying distribution



International Journal of Industrial Engineering, 22(6), 717-728, 2015

A HYBRID HEURISTIC ALGORITHM FOR INTEGRATED PROBLEM OF


MACHINE SCHEDULING AND UNIDIRECTIONAL FLOW PATH DESIGN
Yan Zheng1, Yujie Xiao2 and Yoonho Seo3,*

1 College of Automobile and Traffic Engineering, Nanjing Forestry University, Nanjing, China
2 School of Marketing and Logistic Management, Nanjing University of Finance and Economics, Nanjing, China
3 Department of Industrial and Management Engineering, Korea University, Seoul, Korea
* Corresponding author's e-mail: yoonhoseo@korea.ac.kr

During the past few decades, the unidirectional flow path design (UFD) and machine scheduling (MS) problems have been well
studied separately. However, considering only UFD or MS cannot guarantee a globally optimal solution for the whole
production system, because UFD and MS are two correlated issues in real production situations. This paper proposes
a new integrated model, called iUFD/MS, with the objective of minimizing makespan. In iUFD/MS, the UFD and MS problems
are considered simultaneously. Due to the high complexity of iUFD/MS, a hybrid heuristic algorithm based on particle
swarm optimization is developed to obtain an optimal or near-optimal solution within a reasonable time. To validate our
integrated model, a set of experiments is solved by applying the proposed solution method and the traditional method,
respectively. The results show that our integrated model reduces makespan by 8.9% on average.

Keywords: integrated problem; flow path design; machine scheduling; flexible process plans; particle swarm optimization

(Received on June 17, 2014; Accepted on June 21, 2015)

1. INTRODUCTION

With a given production task, the manufacturing planning can be established by considering two subtasks: material
transferring and material processing. These two manufacturing functions affect each other in terms of production
performance (Deroussi et al., 2008). This indicates that global optimization cannot be achieved unless these two
functions are dealt with simultaneously by considering the interaction between them. This paper involves two well-known
problems: the unidirectional flow path design (UFD) problem, which belongs to the class of material transferring, and the
machine scheduling (MS) problem, which belongs to the field of material processing.
The UFD problem is concerned with path design for the automated guided vehicle (AGV), a widely used piece of
material handling equipment for transferring materials between machines. The performance of an AGV
system is affected by many factors, among which the guide path layout is of great importance, since an unsuitable design may
cause traffic problems such as delay, congestion, blocking or conflict (Vis, 2006). The guide path layout problem is to
ensure that AGVs can move from one place to another along a proper course with regard to the performance criterion such
as the minimal total travel distance or minimal total travel time. UFD is one of the guide path layout design problems with
an additional constraint that each aisle has only one direction for AGVs to travel. Assigning the direction to each aisle is the
main issue to be solved in UFD. Traditionally, UFD problem is modelled as an undirected AGV flow path network and
dealt with on the basis of the material flow requirements (MFR) between two machines (Seo and Egbelu, 1995). UFD
problem has been studied by many researchers since it was first introduced by Gaskins and Tanchoco (1987). Various
mathematical models and heuristic algorithms were developed to efficiently solve UFD (Kaspi et al., 2002; Ko and Egbelu,
2003; Seo et al., 2007; Guan et al., 2011).
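To make the evaluation criterion concrete, the sketch below (illustrative only; the layout, flow data and direction assignment are invented for the example and are not taken from the paper) scores a candidate unidirectional design by the total material-flow-weighted shortest-path distance over the resulting directed network.

from itertools import product

INF = float("inf")

def flow_weighted_distance(nodes, directed_arcs, flow):
    """Score a unidirectional flow path design.

    nodes        : list of node ids (pick-up/drop-off points of machines)
    directed_arcs: dict {(i, j): length} after a direction has been assigned to each aisle
    flow         : dict {(i, j): material flow requirement between machines i and j}
    Returns total flow * shortest-path distance, or inf if a required pair is unreachable.
    """
    # All-pairs shortest paths (Floyd-Warshall) on the directed network.
    dist = {(i, j): (0 if i == j else directed_arcs.get((i, j), INF))
            for i, j in product(nodes, repeat=2)}
    for k, i, j in product(nodes, repeat=3):
        if dist[i, k] + dist[k, j] < dist[i, j]:
            dist[i, j] = dist[i, k] + dist[k, j]

    total = 0.0
    for (i, j), f in flow.items():
        if dist[i, j] == INF:        # infeasible design: a required trip is impossible
            return INF
        total += f * dist[i, j]
    return total

# Tiny hypothetical example: four nodes connected in one directed loop.
nodes = ["A", "B", "C", "D"]
directed_arcs = {("A", "B"): 10, ("B", "D"): 10, ("D", "C"): 10, ("C", "A"): 10}
flow = {("A", "D"): 5, ("D", "A"): 3}
print(flow_weighted_distance(nodes, directed_arcs, flow))  # 5*20 + 3*20 = 160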
MS problem is to find the optimal order in which jobs are processed on each machine with respect to certain
performance measures. According to the shop environment, jobs may have flexible process plans, such as the operation
flexibility, sequencing flexibility and machine flexibility (Lv and Qiao, 2013). Machine flexibility relates to the possibility



International Journal of Industrial Engineering, 22(6), 729-752, 2015

PERMUTATION OF FUZZY AHP AND AHP METHODS TO PRIORITIZING


THE ALTERNATIVES OF SUPPLY CHAIN PERFORMANCE SYSTEM
Mohit Tyagi*, Pradeep Kumar and Dinesh Kumar

Department of Mechanical and Industrial Engineering, Indian Institute of Technology, Roorkee, India
* Corresponding author's e-mail: mohitmied@gmail.com

The objective of this research was to identify the most preferable alternative for improving the supply chain performance (SCP)
of automobile industries located in the National Capital Region (NCR) of India. To meet this objective, a performance-based
model has been developed by identifying the necessary measures. The extent analysis version of the fuzzy analytic hierarchy
process (FAHP) has been applied to analyze the model and evaluate the performance of the considered alternatives, namely
suppliers, 3PL providers, web-based technologies and advanced manufacturing technologies. To make the findings more
robust, the analytic hierarchy process (AHP) approach has also been applied using defuzzified inputs, and the
comparison of results is discussed. The findings suggest that the supplier alternative is the most important among all considered
alternatives and plays a critical role in improving the SCP system.

Keywords: supply chain management, performance measurement, fuzzy set theory, fuzzy analytic hierarchy process,
analytic hierarchy process

(Received on January 28, 2014; Accepted on September 8, 2015)

1. INTRODUCTION AND BACKGROUND

In today's competitive environment, most companies face various fluctuations, especially in customer demands,
technological advancement and market orientation. Due to these fluctuations, companies are trying to implement effective
and efficient supply chain management (SCM) systems. Without a proper understanding of SCM, it is not possible to build a
seamless supply chain. A suitable SCM system integrates business processes, reduces system-wide costs and
maximizes supply chain outcomes by providing the required service level among the supply chain stages (Simchi-Levi et al.,
2000; Bowersox et al., 2010). To build a seamless supply chain, a company should work toward improving its
supply chain performance measurement (PM) system as a whole, because it acts as an adhesive that holds the complex value-creating system
together and plays an important role in strategy formulation and implementation (Handfield and Nichols, 1999). Waggoner et al.
(1999) claimed that adequate performance measurement during business processes identifies the areas that need
attention and also improves communication among supply chain partners by providing motivational traits.
In the previous studies, various performance based frameworks have been developed with the consideration of financial
and non-financial measures. Out of them some important are as follows: Beamon (1999) proposed a framework for
combining cost and other criteria, such as customer service and responsiveness to the environment for SCP measurement.
Dreyer (2000) proposed a framework to develop a successful SCP measurement system. According to Kennerley and Neely
(2002) the aim of performance based frameworks should be to identify the number of key characteristics that helps an
organization in improving the overall performance by using an appropriate set of criteria. Chen and Paulraj (2004) presented
a framework of SCM to identify key inter-firm indicators to measure performance using the collaborative strategic
management theory. Bhagwat and Sharma (2007) proposed a framework for SCM evaluation with the consideration of
different performance metrics by using balanced score card (BSC) approach. Qureshi et al. (2008) developed an interpretive
structural modeling (ISM) based framework to analyze the key criteria of 3PLs providers in order to boost the effectiveness of
SCM. Thakkar et al. (2009) proposed an integrated SCP measurement framework for small and medium scale enterprises
(SMEs) using a set of qualitative and quantitative insights. Cho et al. (2012) developed a framework for service SCP
measurement and analyzed it by using extent FAHP. Ibrahim and Ogunyemi (2012) developed a performance-based framework
to examine the impact of supply chain linkages and information sharing on supply chain performance of an organization.
Tyagi et al. (2014) developed a framework to evaluate the performance of information technology (IT) enabled supply chain.
Tyagi et al. (2014) developed and analyzed the e-SCM based model by using a hybrid AHP-TOPSIS approach.
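To make the AHP side of the comparison concrete, the short sketch below (an illustrative, generic computation with invented numbers, not the authors' data) derives crisp priority weights from a pairwise comparison matrix using the common geometric-mean approximation; in the paper itself the fuzzy judgments are handled with extent analysis before any such crisp step.

import math

def ahp_weights(matrix):
    """Priority weights from a pairwise comparison matrix (geometric-mean method)."""
    n = len(matrix)
    # Geometric mean of each row, then normalize so the weights sum to 1.
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 4x4 comparison of alternatives (suppliers, 3PLs, web-based tech, AMT).
pairwise = [
    [1,   3,   5,   4],
    [1/3, 1,   2,   2],
    [1/5, 1/2, 1,   1],
    [1/4, 1/2, 1,   1],
]
print([round(w, 3) for w in ahp_weights(pairwise)])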
During these days, the Indian automobile sector is pulling attention of automobile manufacturers like Hyundai,
Maruti-Suzuki, Toyota, Nissan, Mahindra and Mahindra etc. to setting up their manufacturing base in India. From a survey



International Journal of Industrial Engineering, 22(6), 753-768, 2015

MODULARITY AND VARIETY SPINOFFS: A SUPPLY CHAIN PLANNING


PERSPECTIVE
Khaled Medini

Institut Fayol, EVS UMR 5600


Ecole Des Mines de Saint-Etienne
42000 Saint-Etienne, France
Corresponding author’s e-mail: Khaled.medini@emse.fr

Rising customer demand for customized products induces increasing complexity on the side of manufacturing firms and their
supply chains. Such complexity goes hand in hand with increasing costs and lead times in the customer order fulfilment
process. Product modularity is a common way to cope with diversified demands while reducing costs and lead times. This
paper proposes a linear programming model for assembly supply chain planning considering product variety and supply chain
operations costs. The model is used to analyse the impact of modularity on the economic performance of a fictive assembly
firm in the electronic equipment industry, which is inspired by a real case. Several insights are drawn from the illustrative
example. For instance, the model proves very useful in identifying trade-offs between customer demand for variety and the
consequent costs incurred by the enterprise.

Keywords: product variety; product modularity; linear programming; supply chain planning; assembly

(Received on February 27, 2015; Accepted on November 29, 2015)

1. INTRODUCTION

Fierce market competition and increasing diversity in customer demands create a necessity for more customer-centred
strategies that enable the fulfilment of specific demands at reasonable costs. Increasing product variety is one way to achieve
these goals, as it provides more attractive market offerings (Salvador et al., 2002; Blecker et al., 2006; ElMaraghy et al., 2013;
MacCarthy, 2013). Product variety can be defined as the diversity of the products that a production system provides to the
marketplace (Ulrich, 1995). Du et al. (2001) distinguish technical variety, which refers to manufacturability, from functional
variety, which is related to customer satisfaction. Blecker et al. (2006) refer to these types of variety as internal and external
varieties. “While internal variety refers to the variety of components, modules, products, etc. external variety relates to the
product variations that are perceived by customers”. One of the challenges facing manufacturing firms is that of finding a
suitable process design for coping with a wide variety of products and accommodating a high degree of customer involvement
in product specification. The aim is to deliver increased product variety at reasonable cost (Duray et al., 2000; Jiao et al., 2004;
Kamrani et al., 2011; MacCarthy, 2013; Medini, 2014; Medini et al., 2014; Medini and Boucher, 2015).
To reap the potential benefits of variety, many firms endeavour to establish economies of scope. Unlike economies of
scale, which concern a given product or service, economies of scope aim to decrease the average costs among a set of
products or services. One way to achieve such economies is to increase the commonality of the resources used among
different product variants. Modular product family design is a practical means of achieving cost reduction through increasing
commonality (Salvador et al., 2002; Zhang et al., 2006; Medini et al., 2015). A product family is defined by Meyer and
Lehnerd (1997) as “a set of similar products that are derived from a common platform and yet possess specific
features/functionalities to meet particular customer requirements”. A product family design accounts for variant product
modules and components to reduce costs and maximize customer satisfaction (Liu et al., 2009; Chui and Okudan, 2012). As
such, the product family enables the optimization of internal complexity and external variety (Ishii et al., 1995; Tseng et al.,
1996; Fixson, 2005; Jiao et al., 2007; Nepal et al., 2008). Jiao et al. (2004) discuss concurrent enterprising for mass
customization, which aims to align customers, products, processes and logistics to deliver increased product variety at
reasonable cost. In this sense, Fixson (2005) considers that many decisions regarding the three domains of product, process
and supply chain depend on product characteristics such as the number and complexity of components, component
commonality, and product modularity. Feng et al. (2013) highlight the positive impact of the modular approach on supply
chain efficiency, particularly the manufacturers’ performance. Such an approach fosters the postponement strategy and
enables a shift in the lead time to the suppliers and a reduction of the inventories held by the manufacturers. In this vein,
authors such as Paralikas et al. (2011) argue that production systems that provide modular products are more cost efficient
than those that provide integral products.
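The following is a deliberately generic sketch of the kind of linear program described above (the notation and constraints are illustrative assumptions, not the author's exact formulation): it chooses assembly and module procurement quantities for product variants so as to minimize supply chain operations costs while meeting variant demand.

% Illustrative assembly planning LP (generic form, not the paper's exact model).
% x_{v,t} : quantity of variant v assembled in period t
% y_{m,t} : quantity of module m procured in period t
% I_{m,t} : inventory of module m at the end of period t
% a_{m,v} : modules m required per unit of variant v; d_{v,t} : demand; c, p, h : unit costs
\begin{align}
\min \; & \sum_{t}\Big(\sum_{v} c_{v}\, x_{v,t} + \sum_{m} \big(p_{m}\, y_{m,t} + h_{m}\, I_{m,t}\big)\Big) \\
\text{s.t.} \; & x_{v,t} \ge d_{v,t} && \forall v, t \quad \text{(no finished-goods inventory, a simplification)} \\
& I_{m,t} = I_{m,t-1} + y_{m,t} - \sum_{v} a_{m,v}\, x_{v,t} && \forall m, t \\
& x_{v,t},\, y_{m,t},\, I_{m,t} \ge 0 && \forall v, m, t
\end{align}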



International Journal of Industrial Engineering, 23(1), 1-12, 2016

CANDIDATE ORDER BASED GENETIC ALGORITHM (COGA) FOR


CONSTRAINED SEQUENCING PROBLEMS
Jun-Woo Kim

Department of Industrial and Management Systems Engineering, Dong-A University


Busan, Korea
Corresponding author’s e-mail: kjunwoo@dau.ac.kr

This paper aims to introduce a novel genetic algorithm, candidate order-based genetic algorithm (COGA), for solving
constrained sequencing problems. Based on the topological sort scheme, COGA can maintain the feasibilities of individual
solutions without violating the given constraints such as precedence between items. During the search procedure, COGA
constructs an individual solution by appending one item at a time, which is chosen among candidate items that make the
current solution feasible. Moreover, position listing representation and candidate order-based genetic operators enable COGA
to search the optimal solutions effectively. For illustration, COGA is applied to solve the single-machine job sequencing
problem and the traveling salesman problem in this paper, and the experimental results revealed that COGA can be a very
useful approach for solving constrained sequencing problems, considering its competitive search ability and ease of
implementation.

Keywords: genetic algorithm; sequencing problem; candidate order; topological sort; precedence constraint

(Received on February 1, 2014; Accepted on January 26, 2016)

1. INTRODUCTION

Combinatorial optimization problems whose solutions are constructed by ordering discrete items are called sequencing
problems, and the objective of a sequencing problem is to find a sequence of the items that optimizes its
objective function while satisfying the underlying constraints, if any exist (Poon and Carter, 1995; Yun et al., 2013). Over
the last decades, sequencing problems have been given much attention by researchers owing to their wide applicability to
industrial scheduling and planning problems (Merten and Muller, 1972; Gen et al., 2009). The optimal solution of a
sequencing problem can be obtained by applying exact methods such as mathematical programming; however, such
methods fail to solve the problem in a practical time frame as the problem size grows (Reeves, 1995). Instead, various
approximate methods such as heuristics, meta-heuristics, and artificial intelligence approaches have been proposed to
obtain good solutions in a practical time frame (Moon et al., 2002).
Genetic algorithm (GA) is one of the meta-heuristic search methods that provides significant advantages over
conventional methods in solving combinatorial optimization problems and has been successfully applied to a variety of
sequencing problems (Altiparmak et al., 2009). However, two important issues can arise in applying GA for solving
sequencing problems: (i) as traditional genetic operators can produce infeasible solutions, the encoding scheme and the
genetic operators must be carefully designed to maintain a population of feasible solutions (Moon et al., 2002; Ahmed,
2011); (ii) constraints, such as precedence between items, significantly increase the complexity of sequencing problems,
and solving a constrained problem is a challenging task. Moreover, many previous GAs for constrained problems are
problem dependent or constraint specific (Lenstra and Kan, 1978; Kowalczyk, 1997). To address these issues, this paper
aims to introduce a novel GA called candidate order-based genetic algorithm (COGA) for solving a wide range of
constrained sequencing problems.
The operations of COGA are mainly characterized by the candidate order-based genetic operator (COGO) and
position listing representation used to encode a solution into a chromosome. The role of genetic operators such as crossover
and mutation is to create new chromosomes from the existing ones, e.g., parents (Holland, 1975). Based on the concept of
topological sort (TS) (Kozen, 1992), which constructs a feasible sequence of vertices in a directed graph, COGO constructs
an offspring by appending a single item at a time. To guarantee the feasibility of the offspring, COGO chooses the item to
be appended from among the candidates, which are items whose assignment makes the current solution feasible. The main
idea behind COGA is that the objectives of conventional crossover and mutation can be achieved by choosing an
appropriate item from among the candidates, and COGA compares the positions of the candidates in the parents for this
purpose. In addition, position listing representation is appropriate for efficiently comparing the positions of the items.
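The feasibility-preserving construction can be illustrated with a short sketch (a simplified, hypothetical rendering of the candidate idea, not the full COGO operator with parent-position comparisons): at each step only items whose predecessors are already sequenced are candidates, so any choice among them keeps the partial sequence feasible.

import random

def feasible_sequence(items, precedence, choose=random.choice):
    """Build a precedence-feasible sequence by repeatedly picking one candidate item.

    items      : iterable of item ids
    precedence : set of (a, b) pairs meaning 'a must appear before b'
    choose     : selection rule among candidates (COGA would use parent information here)
    """
    remaining = set(items)
    sequence = []
    placed = set()
    while remaining:
        # Candidates: items whose predecessors have all been placed already.
        candidates = [i for i in remaining
                      if all(a in placed for (a, b) in precedence if b == i)]
        if not candidates:
            raise ValueError("precedence constraints contain a cycle")
        item = choose(candidates)          # genetic operators only differ in this choice
        sequence.append(item)
        placed.add(item)
        remaining.remove(item)
    return sequence

# Hypothetical 5-job example with two precedence constraints.
print(feasible_sequence([1, 2, 3, 4, 5], {(1, 3), (2, 5)}))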



International Journal of Industrial Engineering, 23(1), 13-25, 2016

AN EFFECTIVE RANK BASED ANT SYSTEM ALGORITHM FOR SOLVING


THE BALANCED VEHICLE ROUTING PROBLEM
Majid Yousefikhoshbakht, Farzad Didehvar*, Farhad Rahmati

Department of Mathematics and Computer Science, Amirkabir University of Technology, Tehran, Iran
* Corresponding author's e-mail: didehvar@aut.ac.ir

The vehicle routing problem (VRP) is the problem of designing optimal delivery routes from a given depot in order to satisfy
customer demand with a homogeneous fleet of vehicles. A considerable part of drivers' earnings is related
to their traveled distance; therefore, balancing the routes based on the vehicles' traveled distance is important for drivers'
satisfaction. This paper presents a variant balanced on the vehicles' traveled routes, called the balanced vehicle routing problem
(BVRP), and proposes an integer linear programming model for it. Because this problem is
NP-hard, an effective rank-based ant system (ERAS) algorithm is also proposed. In addition, a number of
test problems involving 10 to 199 customers have been solved to show the efficiency of the proposed ERAS.
The computational results show that the proposed algorithm obtains better results than the classical rank-based ant
system (RAS) and an exact algorithm for the BVRP within a comparatively shorter time.

Keywords: balanced vehicle routing problem; meta-heuristic; rank based ant system; NP-hard

(Received on September 2, 2013; Accepted on May 14, 2015)

1. INTRODUCTION

The Capacitated Vehicle Routing Problem (CVRP) is one of the most important problems arising in designing the efficient
logistics networks. These networks are designed to facilitate the transfer of goods from distribution centers such as
warehouses or factories to a set of geographically dispersed customers. In addition, the VRP is an important problem in
Operations Research, from both practical and theoretical points of view, in which a network of customers needs to be serviced
from one or more depots. In this problem, all the customers correspond to deliveries, the
demands are deterministic, the vehicles are identical and are based at a single central depot, the travel cost between each
pair of customer locations is the same in both directions, i.e., the resulting cost matrix is symmetric, whereas in some
applications, as the distribution in urban areas with one-way directions imposed on the roads, the cost matrix is asymmetric.
Only the capacity restrictions for the vehicles are imposed, and the objective is to minimize the total cost (i.e., the number
of routes, their length or travel time) needed to serve all customers. The CVRP problem involves routing a fleet of vehicles,
each visiting a set of customers such that every customer is visited exactly once and exactly by one vehicle, with the
objective of minimizing the total distance traveled by all vehicles. Furthermore, if the total demand of all customers
assigned to the same vehicle does not exceed the capacity limit in the CVRP, then the route remains feasible
no matter what the visiting sequence is.
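To illustrate what balancing the routes means in practice, the snippet below (illustrative only; the coordinates and the max-minus-min balance measure are assumptions for the example, not necessarily the exact objective used in the paper) computes each vehicle's traveled distance and a simple imbalance indicator.

import math

def route_length(depot, route, coords):
    """Total distance of depot -> customers in order -> depot (Euclidean)."""
    stops = [depot] + route + [depot]
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(stops, stops[1:]))

def imbalance(routes, depot, coords):
    """Spread between the longest and shortest vehicle route (one possible balance measure)."""
    lengths = [route_length(depot, r, coords) for r in routes]
    return max(lengths) - min(lengths), lengths

# Hypothetical instance: depot 0 and customers 1..4 with planar coordinates.
coords = {0: (0, 0), 1: (2, 1), 2: (2, -1), 3: (-3, 0), 4: (-3, 2)}
routes = [[1, 2], [3, 4]]          # one route per vehicle
print(imbalance(routes, 0, coords))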
The CVRP has been extensively studied since the early sixties and many new exact, heuristic and metaheuristic
approaches were presented in the past years. The VRP was first defined by Dantzig and Ramser more than 50 years ago
(Dantzig and Ramser, 1959) and is considered one of the most well-known combinatorial optimization tasks.
Different approaches for solving the CVRP have been explored during the past decades (Golden et al., 2008). These
approaches range from the use of exact optimization methods for solving small-size problems with relatively simple
constraints to the use of approximation algorithms that provide near-optimal solutions for medium and large-size problems
with more complex constraints. Initially, in the late 1960s, several exact algorithms such as dynamic programming
relaxation approaches (Hadjiconstantinou et al., 1995) as well as branch and cut methods (Naddef et al., 2002) were
developed for very small numbers of variables and constraints. The largest problems that can be consistently solved by
the most effective exact algorithms proposed so far contain up to about 50 customers, although one method has solved a 100-
customer problem (Golden et al., 1998), whereas larger instances may be solved only in particular cases. So instances with
hundreds of customers, as those arising in practical applications, may only be tackled by heuristic methods. Recently,



International Journal of Industrial Engineering, 23(1), 26-48, 2016

OPTIMISATION AND CONSTRAINT BASED HEURISTIC METHODS FOR


ADVANCED PLANNING AND SCHEDULING SYSTEMS
Cemalettin Ozturk1, M. Arslan Ornek2,*

1 Insight Centre for Data Analytics, University College Cork, Cork, Ireland
2 Department of Industrial Engineering, Izmir University of Economics, Izmir, Turkey
* Corresponding author's e-mail: cemalettin.ozturk@insight-centre.org

Manufacturing Resources Planning (MRPII) systems are unable to prevent capacity problems occurring on the shop floor
because of the fixed lead time and backward scheduling logic. For this reason, a new breed of concepts called APS (Advanced
Planning and Scheduling) systems emerged which include finite capacity planning at the shop floor level through constraint
based planning. In this paper, we present a Constraint Programming (CP) model to show how optimization models could be
used in this context. We also present a two phase heuristic to solve this complicated APS problem. While jobs are assigned
to the best eligible machines to smooth the workload on the machines in the first phase, a constraint based scheduling heuristic
schedules jobs once they are assigned to eligible machines in the second phase. We provide numerical tests and discuss the
results for both the model and the heuristic. The concluding remarks and suggestions for future research are stated in the final
section of the paper.

Keywords: constraint programming; advanced planning and scheduling; heuristics

(Received on January 12, 2015; Accepted on March 7, 2016)

1. BACKGROUND AND MOTIVATION

Popular and widely used production planning systems, such as Manufacturing Resources Planning (MRPII), aim to
have the right part, at the right place, at the right time and in the right quantity at minimum cost. Unfortunately, the
fundamental reasoning of those systems is flawed: production scheduling of parts, components,
subassemblies and end items is based on fixed lead times with infinite capacity and backward scheduling logic
(Orlicky, 1976).
For this reason, the Material Requirements Planning (MRP) function of MRPII systems cannot provide capacity-feasible
production plans, and this unavoidably leads to serious problems on the shop floor, such as varying workloads, changing
bottlenecks, high Work-in-Process (WIP) levels, lower machine utilization, reduced throughput and late deliveries, which cannot be
resolved easily in the short term. That is, MRPII is unable to prevent capacity problems occurring on the shop floor. Hence,
this leads to the conclusion that capacity problems must be solved and prevented at the higher levels (see Ornek & Cengiz,
2006, and Öztürk & Ornek, 2010, 2012). Unquestionably, MRP and production scheduling are closely related, and they
should be integrated to generate realistic production schedules for the shop floor, which leads to the problem of
Advanced Planning and Scheduling (APS) (Chen and Ji, 2007, Jensen et al., 2011).
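As a rough illustration of the first, workload-smoothing phase described in the abstract (the data and the least-loaded rule are hypothetical simplifications; the paper's heuristic also handles eligibility and a constraint-based scheduling phase), the sketch below assigns each job to the least-loaded machine among those eligible for it.

def smooth_workload(jobs, eligible, proc_time, machines):
    """Phase-1 style assignment: each job goes to its least-loaded eligible machine.

    jobs      : list of job ids
    eligible  : dict job -> set of machines that can process it
    proc_time : dict (job, machine) -> processing time
    machines  : list of machine ids
    """
    load = {m: 0 for m in machines}
    assignment = {}
    # Assigning longer jobs first tends to give a smoother final workload.
    for job in sorted(jobs, key=lambda j: -max(proc_time[j, m] for m in eligible[j])):
        best = min(eligible[job], key=lambda m: load[m] + proc_time[job, m])
        assignment[job] = best
        load[best] += proc_time[job, best]
    return assignment, load

# Hypothetical instance with three jobs and two machines.
jobs = ["J1", "J2", "J3"]
machines = ["M1", "M2"]
eligible = {"J1": {"M1", "M2"}, "J2": {"M1"}, "J3": {"M1", "M2"}}
proc_time = {("J1", "M1"): 4, ("J1", "M2"): 5, ("J2", "M1"): 3,
             ("J3", "M1"): 6, ("J3", "M2"): 6}
print(smooth_workload(jobs, eligible=eligible, proc_time=proc_time, machines=machines))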
In the 1990s, a new breed of concepts called APS systems emerged. APS systems are equipped with a range of
capabilities, including finite capacity planning at the floor level through constraint based planning as well as the latest
applications of advanced logic for Supply Chain Management (SCM) (Turbide, 1998, Hvolby and Jensen, 2010). Recent
APS systems tend to take a holistic and collaborative approach to provide global optimization (Moon et al. 2002).
Since the whole problem of planning and scheduling is rather complicated involving many elements and factors, it is
not practical to solve the problem at one time. For this reason, APS has a hierarchical planning framework that combines
MRP with Capacity Requirements Planning (CRP) to allow feasible production plans to be created. Hence, a complete APS
system has four major modules (Kung & Chern, 2009, Stadler & Kilger, 2005): (i) strategic planning, (ii) demand planning,
(iii) master planning and (iv) factory planning. Factory planning (FP) schedules customer requirements and dispatches
manufacturing orders to the shop floor according to the master plan. In recent years, APS systems have become decision
support tools including several capabilities, from finite capacity scheduling to constraint based planning (David et al.,



International Journal of Industrial Engineering, 23(1), 49-67, 2016

TEMPO RATING APPROACH USING FUZZY RULE BASED SYSTEM AND


WESTINGHOUSE METHOD FOR THE ASSESSMENT OF NORMAL TIME
Emre Cevikcan1, Huseyin Selcuk Kilic2,*

1 Department of Industrial Engineering, Istanbul Technical University, Istanbul, Turkey
2 Department of Industrial Engineering, Marmara University, Istanbul, Turkey
* Corresponding author: Huseyin Selcuk Kilic, hskilic@hotmail.com

Tempo rating which aims to obtain the accurate tempo for the workers is an important process for time study applications. It
is possible to reach the normal time after finding precise tempo values. The Westinghouse method, based on the factors of
skill, effort, environmental conditions and consistency, is one of the well-known tempo
rating approaches. However, crisp numerical values with linguistic terms are used for evaluating each factor in the Westinghouse
method. In this study, a fuzzy rule-based methodology is proposed to handle the vagueness of the decision-making process
of tempo rating. The developed methodology includes fuzzy rules for determining the evaluation score of each factor. The
proposed rating system is applied in a bus-bar production system so as to demonstrate its validity. The proposed rating system
yields a high level of conciseness with respect to a predetermined motion time system.

Significance: Since time study activities are performed across all industries and sectors, tempo rating, which is
critical for finding the normal time of tasks, is very important and has a wide range of industrial applications. In this
study, a new tempo rating methodology based on a fuzzy rule-based system is presented. The proposed methodology is
easy to use and produces accurate values.

Keywords: time study; tempo (performance) rating; westinghouse method; fuzzy rule based system.

(Received on March 9, 2015; Accepted on March 2, 2016)

1. INTRODUCTION

Time study is defined by the Industrial Engineering Terminology Standard (IIE, 1982) as "a work measurement technique
consisting of careful time measurement of the task with a time measuring instrument, adjusted for any observed variance
from normal effort or pace and to allow adequate time for such items as foreign elements, unavoidable or machine delays,
rest to overcome fatigue, and personal needs". It is one of the two main parts constituting work study, the other being
method study.
In “The Principles of Scientific Management” by Frederick Taylor (2003), the effects and the importance
of time study are discussed in a scientific manner (Tikhomirov, 2011). Time study is applied where there are short or long
repetitive work cycles (Salvendy, 2001) and for accomplishing time study, work analysis, methods standardization and time
study operations must be performed, respectively (Karanjkar, 2008).
Performance rating is regarded as the most important step in the work measurement process. Since it depends on the
experience, training, and judgment of the time-study men, it is highly open to criticisms (Niebel, 1976). Moreover, Barnes
(1980) states that it is the most difficult part of the time study and defines it as the judgment of comparison of the operator’s
performance with respect to a normal performance which depends on the consideration of the observer.
Mainly six methods are utilized for performance rating, namely, skill and effort rating, Westinghouse system of rating,
synthetic rating, objective rating, physiological evaluation of performance level and performance rating (Barnes, 1980). The
rating techniques can also be classified as speed rating, effort rating, pace rating, objective rating, leveling and synthetic
levelling (Polk, 1984). Detailed information will be provided about these systems in the following sections.
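As background, the Westinghouse (leveling) system adjusts the observed time by the sum of four factor allowances; the sketch below uses illustrative allowance values (the actual tables in Barnes (1980) list specific grades, e.g. skill B1 = +0.11), not values from this paper.

def westinghouse_normal_time(observed_time, skill, effort, conditions, consistency):
    """Normal time via Westinghouse leveling.

    Each argument after observed_time is the allowance read from the
    Westinghouse tables (positive for above-normal, negative for below-normal).
    """
    rating_factor = 1.0 + skill + effort + conditions + consistency
    return observed_time * rating_factor

# Example with illustrative allowances: skill +0.11, effort +0.05,
# conditions -0.03, consistency +0.01 -> rating factor 1.14.
print(westinghouse_normal_time(0.50, 0.11, 0.05, -0.03, 0.01))  # 0.57 minutes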
Within this study, a new performance rating methodology is proposed based on Westinghouse method. Westinghouse
method includes four factors; skill, effort, conditions and consistency (Barnes, 1980). The worker is evaluated according to
the scale and performance rate is obtained. Utilization frequency in the real applications, comprehensive systematic, the



International Journal of Industrial Engineering, 23(1), 68-82, 2016

ANALYSIS OF RELATIONSHIP BETWEEN BRAND PERSONALITY AND


CUSTOMER SATISFACTION ON A VEHICLE EXHAUST SOUND
Ilsun Rhiu1, Sanghyun Kwon2, Myung Hwan Yun2,* and Dong Chul Park3

1 Institute for Industrial Systems Innovation, Seoul National University, Seoul, Korea
2 Department of Industrial Engineering, Seoul National University, Seoul, Korea
3 R & D Division, Hyundai Motor Company, Hwaseong, Gyeonggi, Korea
* Corresponding author's e-mail: mhy@snu.ac.kr

This study aims to understand the brand personality of vehicle exhaust sounds and how these elements affect the customer
satisfaction in regards to exhaust sounds. Thus, a research model for exploring the relationship between brand personality
and customer satisfaction was developed. The study was conducted on eight vehicles’ exhaust sounds, which were engine
acceleration sounds as the vehicle accelerated from 0 to 100 km/h. The evaluation was conducted
with 40 participants, each with a minimum of ten years’ driving experience and no hearing impairment.
The findings partly support the research model and confirm that brand personality dimensions are influencing factors in
satisfaction for the vehicle exhaust sound. ‘Sophistication’ and ‘Confidence’ are more important for customer satisfaction of
V6 and V8 cylinder exhaust sounds than the other brand personality scales. The research advances the understanding of the
effect of brand personalities on customer satisfaction for vehicle exhaust sounds.

Keywords: brand identity; brand personality; customer satisfaction; vehicle engine sound; affective engineering

(Received on December 16, 2014; Accepted on January 26, 2016)

1. INTRODUCTION

Traditionally, the product development process was highly focused on adding new functions or improving existing ones.
Over time, product design became just as important as function. Nowadays, on top of technical performance and visual
design, the affective qualities and user experience of products are becoming crucial factors in customers’ buying decisions.
Therefore, firms that once focused on developing and improving product functions are now attaching great importance to product
design and consumers’ affective responses to their products. It is difficult to capture consumers’ hearts without satisfying
their expectations for high affective quality and good user experience (Hassenzahl, 2008). High affective quality and good
user experience come not only from new functions and designs, but also from the affective experience during use and from the brand
personality.
The brand identity of a product contributes greatly to increasing customer loyalty and satisfaction with the product and the
brand. This means that unifying a brand identity by building a good brand personality can lead to positive customer
attitudes. Also, the formation of positive customer attitudes about the product plays an important role in successful product sales (Jo
et al., 2007). It is important to establish a powerful brand identity through effective differentiation strategies for market
success. Making a brand personality could be an appropriate way to establish a powerful brand identity (Aaker, 1991;
Aaker, 1997; Belk, 1989; Kim & Ahn, 2000; Kleine III et al., 1993; Malhotra, 1988). Also, the idea of brand personality
demonstrates that brand personality is an integral part of brand development and brand strategy (Freling & Forbes, 2005). A
number of efforts have been conducted to improve the brand identity of many products. In the automotive industry, firms
like BMW and Hyundai have been pursuing ‘family look’, presenting a common brand design line (Park & Lee, 2007).
This shows that it is becoming more important to understand brand personalities and unify a brand identity in product
development. Brand positioning and brand personality is the core of the brand identity. The reason why customers like a
particular brand and purchase that brand repeatedly is that they are fond of the brand personality of that brand (Ahn et al.,
2008).
International Journal of Industrial Engineering, 23(1), 83-93, 2016

A COMBINED HEURISTIC APPROACH TO THE NONLINEAR FIXED-


CHARGE CAPACITATED NETWORK DESIGN PROBLEM
Woo-Sik Yoo1, Jae-Gon Kim1, and Chong-Man Kim2,*

1 Department of Industrial and Management Engineering, Incheon National University, Incheon, Korea
2 Department of Industrial and Management Engineering, Myongji University, Yongin, Gyeonggi-do, Korea
* Corresponding author's e-mail: chongman@mju.ac.kr

The nonlinear fixed-charge capacitated network design problem (NFCNDP) is notoriously difficult and there is limited
literature addressing this problem although the (linear) fixed-charge capacitated network design problem has been extensively
studied. In this article, we propose a new heuristic approach to the NFCNDP. We develop a two-phase heuristic algorithm in
which a constructive method is used to obtain an initial solution and the solution is iteratively improved using an improvement
method. To overcome the myopia of the heuristic algorithm, we combine it with simulated annealing. Computational
experiments are performed on benchmark test sets with convex and concave cost functions. Test results show that the
suggested algorithm performs much better than commercial optimization solvers.

Keywords: nonlinear fixed-charge capacitated network design; heuristic; simulated annealing

(Received on April 29, 2015; Accepted on February 3, 2016)

1. INTRODUCTION

The fixed-charge capacitated network design problem (FCNDP) is a well-known NP hard problem of routing multiple
commodities from origins to destinations along capacitated arcs at a minimum cost in a directed network (Holmberg and
Yuan, 2000). The FCNDP has a wide range of real-life applications such as construction of new streets in traffic networks,
construction of communication links in telecommunication networks and construction of power lines in energy distribution
networks, etc. The FCNDP has been extensively studied. For survey on the FCNDP, see Kennington (1978), Foulds (1981),
Magnanti and Wong (1984), Minoux (1989) and Crainic (2000). The FCNDP has linear flow costs in its objective function
to be minimized: the sum of the total flow cost and the total fixed cost. In many real cases, however, the flow-cost function
is nonlinear. Whenever congestion phenomena are present, the flow-cost functions that are employed to reflect such
situations are nonlinear; in most applications they are convex (Hu, 1966; Orlin, 1984; Goffin et al., 1996; Work and Bayen,
2008). On the other hand, whenever economies of scale phenomena, discounts or set-up costs are present, the flow-cost
functions are naturally concave (Zangwill, 1968; Lozovanu, 1983; Florian, 1986; Guisewite and Pardalos, 1990; Holmqvist
et al., 1998; Horst and Thoai, 1998). Thus, the nonlinear fixed-charge capacitated network design problem (NFCNDP) is
more realistic than the FCNDP in many cases.
Despite its various industry applications, there are few studies on the NFCNDP probably due to its complexity.
Croxton et al. (2007) study the NFCNDP with nonconvex piecewise linear flow costs. They describe structural results for
various formulations of the problem and present the results of extensive computational experiments carried out on these
formulations. Bektas et al. (2010) consider the NFCNDP where capacity constraints can be violated at the expense of
additional costs and propose Lagrangean-based decomposition algorithms to solve the problem. Nonlinear network flow
problems have relevance to the NFCNDP although fixed costs for using arcs are not considered in them. For the single-
source uncapacitated network flow problem with general concave costs, Fontes et al. (2006) and Fontes and Goncalves
(2007), respectively, present dynamic programming and genetic algorithms. Goffin et al. (1996) deal with the network flow
problems with convex flow costs and propose the analytic center cutting plane method to solve it. Larsson et al. (2008)
suggest a convergent Lagrangean heuristic for the single commodity convex network flow problem. Gürel (2011) suggests
a conic quadratic formulation for the multicommodity nonlinear network flow problem with convex congestion costs and
discrete capacity expansion.
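The abstract combines a constructive/improvement heuristic with simulated annealing to escape local optima; the generic acceptance rule at the heart of simulated annealing can be sketched as below (a textbook skeleton with placeholder neighbour and cost functions, not the authors' implementation for the NFCNDP).

import math
import random

def simulated_annealing(initial, cost, neighbour, t0=100.0, alpha=0.95, iters=2000):
    """Textbook SA skeleton: accept worse moves with probability exp(-delta / T)."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    temperature = t0
    for _ in range(iters):
        candidate = neighbour(current)
        delta = cost(candidate) - current_cost
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current, current_cost = candidate, cost(candidate)
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        temperature *= alpha            # geometric cooling schedule
    return best, best_cost

# Toy usage: minimize a 1-D function by perturbing x (placeholder for a network design move).
print(simulated_annealing(5.0, cost=lambda x: (x - 2) ** 2,
                          neighbour=lambda x: x + random.uniform(-1, 1)))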



International Journal of Industrial Engineering, 23 (2), 94-107, 2016

TAKT TIME AS A LEVER TO INTRODUCE LEAN PRODUCTION IN MIXED


ENGINEER-TO-ORDER/MAKE-TO-ORDER MACHINE TOOL
MANUFACTURING COMPANIES
Ricondo Iriondo, Itziar1, Serrano Lasa, Ibon1,*, De Castro Vila, Rodolfo2

1 Department of Industrial Engineering, IK4-IDEKO Technology Centre, Elgoibar, Spain
2 Business Organization, Management and Product Design Department, University of Girona, Girona, Spain
* Corresponding author's e-mail: iserrano@ideko.es

The main purpose of this paper is to show how takt time and flow concepts can act as a lever to introduce lean production
into the Small and Medium Enterprises (SME) machine tool manufacturing sector; a sector characterized by High Variety
and Low Volume (HVLV) product configurations and mixed Engineer-To-Order (ETO)/Make-To-Order (MTO)
production systems. A pilot implementation in such a company confirms that the takt time approach can not only be
applied to a significant part of the manufacturing process, but also implies an increase in manufacturing efficiency. In
terms of manufacturing efficiency, the key outcomes are an increase in the reliability of lead time, an increase in
productivity, a simplification of management functions, and the introduction of a continuous improvement culture. The
introduction of a takt time based or paced production system has been supported by the Value Stream Mapping technique
(VSM) specially adapted to such kinds of manufacturing systems.

Keywords: lean production; takt time; VSM; ETO; MTO.

(Received on May 16, 2014; Accepted on March 23, 2016)

1. INTRODUCTION

The importance of manufacturing to retain and sustain competitiveness is already well known (Pisano and Shih, 2009; Mac
Carthy, 2013). Among other business and manufacturing objectives, manufacturing efficiency is paramount in order to
compete in today’s turbulent market.
Lean production is a proven approach in achieving manufacturing efficiency. Originally developed in the automotive
sector, it has been widely implemented in repetitive manufacturing sectors and there are also some reported cases in
project-based and non-repetitive based industries (Storch and Lim, 1999;Crute, 2003; Spicer, 2005; Salem et al., 2006;
Portioli and Tantardini, 2012; Matt, 2014; Thomassen et al. , 2015).
This paper seeks to introduce the lean approach in the Engineer-To-Order (ETO)/Make-To-Order (MTO) Small and
Medium Enterprise (SME) machine tool industry, characterised by project-based, fixed-layout and manual assembly. These
companies represent approximately 80% of the European machine tool manufacturers (CECIMO, 2016).
Some typical characteristics of the assembled equipment are large size, a high level of engineering and
customisation (a combination of a variety of features, including customized ones), and high value and low stock inventory.
These manufacturers have traditionally worked in a project-based manufacturing environment, arguing that the level of
product customisation does not allow for manufacturing and assembly process standardisation. However, the authors
believe that such diversity does not rule out other production and order fulfilment approaches. In fact,
other project-based industries have adopted lean production and have, in one way or another, disproved the erroneous but
underlying assumption that it is impossible to introduce this system into the ETO/MTO manufacturing environment. As
such, the hypothesis that a lean production system is indeed possible encouraged the authors to pursue this case
study. The validity of this hypothesis, based on the rhythmic takt time concept, is demonstrated through the
introduction of a paced production system in a considerable number of the production and assembly stages.
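For readers new to the concept, takt time is conventionally defined as the available production time divided by the customer demand in that period; the illustrative calculation below uses invented numbers, not data from the case company.

% Standard takt time definition with an illustrative calculation.
\[
\text{takt time} = \frac{\text{available working time per period}}{\text{customer demand per period}}
\qquad\text{e.g.}\qquad
\frac{40\ \text{h/week}}{4\ \text{machines/week}} = 10\ \text{h per machine}
\]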



International Journal of Industrial Engineering, 23(2), 108 –118, 2016

A DEMAND FORECAST METHOD FOR THE FINAL ORDERING PROBLEM


OF SERVICE PARTS
Yon-Chun Chou1,2,*, Yujang Scott Hsu2 and Shin-Yang Lu1

1 Institute of Industrial Engineering, National Taiwan University, Taipei, Taiwan
2 Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan
* Corresponding author's e-mail: ychou@ntu.edu.tw

Demand forecasting for service parts at the end-of-life phase of durable goods is plagued by inadequate demand data, changing
purchasing behavior, and a lack of reliability information. As the amount of sales data for each part is very limited, conventional
forecasting methods are not applicable. This paper presents an empirical study on developing a forecast method based on installed
base information. Several archetypes of demand trend are first identified, and regular regression is shown to be inadequate for
predicting future demand. Then, by applying the installed base approach, the interrelated effects of data trend, data quantity
and data recency are unraveled. This knowledge enables a new forecast method to be developed based on two tests of data
trend. It is found that for parts with an upward trend it is better to use more data and apply linear regression, but for parts
without a trend it is better to use fewer but more recent data with a constant regression function. The proposed method is
validated with multiple automobile and notebook computer series and is shown to outperform a current method by large
margins in forecast error.

Keywords: service part inventory, end-of-life part inventory, installed-base forecast, final order, data recency

(Received on September 22, 2014; Accepted on June 22, 2016)

1. INTRODUCTION

After-sales service is important to manufacturers of durable goods such as automobiles and computers. After a product’s
sale is discontinued, spare parts must be provided for an extended period of time to satisfy warranty requirements and
customer services. For example, the sales of an automobile model might last for 5 years, but its service parts must be
provided for at least 15 more years. The availability of service parts is critical to upholding after-sales service quality. In
addition, the sales of service parts generate a steady stream of revenue at high profit margins which is more immune to
economic cycle than new product sales.
Inventory management problems related to this period of the product life cycle are typically called end-of-life (EOL)
part inventory problems. They have several characteristics that are distinct from inventory management of finished goods or parts
that can be replenished. First, the fundamental process of spare parts demand undergoes significant changes at the EOL
phase. Automobile customers’ decision on service parts is affected by many factors such as cost, quality, safety concern,
age of vehicle, etc. Customers’ purchasing preference changes dramatically after the warranty expires and when generic
brands appear in the market as substitutes. The demand of service parts is highly erratic, and it becomes more unpredictable
as time goes on. The second characteristic is the unavailability of reliability and usage data. In the global supply chain era,
many parts are procured from outside suppliers, rather than manufactured in house. A product, such as an automobile,
might contain thousands of parts. For many parts, the actual reliability information can only be ascertained by monitoring
field use. The common practice of multiple sourcing and the speed of product/part redesigns also make it more difficult to
monitor part reliability. Finally, service parts sales tend to be noisy. Service parts are sold to multiple sources: distributors,
retailers, and individual end users. Part demand can be stimulated by special events, such as quality re-calls, and one-time
bulk procurement, such as motherboard upgrades for all computers in a game parlor.
This paper addresses the final order problem which occurs toward the latter part of the EOL phase. Suppliers of
outsourced parts, faced with dwindling demand, will subsequently set target dates for discontinuing part production. A
supplier might give the equipment manufacturer a notice for placing a final order. After the final order, production tooling
is put away, raw materials are not stocked, and it is costly to restart a new production batch. This is called the final order
(FO) or last order problem in the literature (Bradley and Guerrero, 2009; Van Kooten and Tan, 2009).
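The decision rule summarized in the abstract (trend detected: use more history and linear regression; no trend: use only recent data and a constant fit) can be sketched as below; the slope-based trend test and the window lengths are illustrative assumptions, since the paper's two specific trend tests are not reproduced here.

import numpy as np

def final_order_forecast(demand, horizon=1, recent=6, slope_threshold=0.5):
    """Forecast EOL part demand following a simple trend-or-not rule.

    demand          : historical demand per period (oldest first)
    horizon         : number of periods ahead to forecast
    recent          : how many recent periods to use when no trend is detected
    slope_threshold : illustrative cut-off for calling the series 'trended'
    """
    y = np.asarray(demand, dtype=float)
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)          # least-squares line over all data
    if slope > slope_threshold:                     # upward trend: extrapolate the line
        future_t = np.arange(len(y), len(y) + horizon)
        return slope * future_t + intercept
    # No clear trend: constant regression (mean) over the most recent periods only.
    return np.full(horizon, y[-recent:].mean())

history = [12, 11, 13, 12, 14, 13, 12, 11]          # hypothetical monthly part demand
print(final_order_forecast(history, horizon=3))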



International Journal of Industrial Engineering, 23(2), 119-136, 2016

A PRACTICAL MULTIPLE-TOOL-SET APPROACH FOR INCREASING


AGILE RESPONSE IN OVERHAUL PRODUCTION WITH LIMITED
RESOURCE REQUIREMENT VISIBILITY

Han-Hsin Chou1, Chi-Tai Wang2,*, Ru-Shuo Sheu1

1 Department of Travel Management, Hsing-Wu University, New Taipei City, Taiwan
2 Graduate Institute of Industrial Management, National Central University, Jhongli, Taiwan
* Corresponding author's e-mail: ctwang@mgt.ncu.edu.tw

Agility has become a manufacturing paradigm of the 21st century, and numerous firms have adopted it as a competitive
strategy. Not surprisingly, there is a comprehensive body of research on this subject. However, little research has
been done on one-of-a-kind manufacturing, where resource requirements are not available until well into the manufacturing
process. In addition, there are other uncertain factors, such as changing production lead times, making the environment
dynamic and hard to predict. To address this issue, we conduct an action research study to enable an agile response to the
factory dynamics of the subject environment. Our solution is a novel integration of four widely used technologies: radio
frequency identification (RFID), simulation, bottleneck identification methods and the theory of constraints (TOC). Using
numerical industry data, we illustrate how this multiple-tool-set approach is effective in achieving agility in a real one-of-a-
kind manufacturing facility. With minor modifications, our solution can also be used in other manufacturing environments.

Keywords: agility; theory of constraints (TOC); radio frequency identification (RFID); simulation; bottleneck identification

(Received on November 30, 2014; Accepted on March 2, 2016)

1. THE AGILITY PARADIGM

The need for firms to act responsively in the rapidly changing global market has led to the arrival of a major manufacturing
paradigm called agility (Yusuf et al. 1999). Adopted by firms all over the world to replace the out-dated mass production
approach, agility has been a critical competitive strategy for firms to survive in the present global economies, characterised
by features including accelerated technological innovation, the increasing importance of information and continuous
improvement, a shrinking time to market, and increasingly demanding customers. To provide guidance for the development
of an agile manufacturing system, Gunasekaran (1999, p. 100) proposes a conceptual model based on four key dimensions
of the manufacturing enterprise: strategies, systems, technologies and people, with each containing several choices to be
considered as part of the agility endeavour. Similarly, Brown and Bessant (2003) investigate strategies, processes, linkages
(primarily connections with customers and suppliers) and people as the four key dimensions in achieving agility.

Nomenclature
AR: action research
ARM: agile response mechanism
BNID: bottleneck identification
CCR: capacity constraint resources
CI: confidence interval
ERP: enterprise resource planning
FFS: five focusing steps
KPIs: key performance indicators
MES: manufacturing execution system
PDA: personal digital assistant
RCM: reliability centred maintenance
RFID: radio frequency identification
SEWS: simulation-based early warning system
TOC: theory of constraints
UHF: ultra-high frequency
WIP: work-in-process

The preceding discussions indicate that firms must be aware of emerging technologies at all times, so they can take the
opportunity whenever appropriate to adopt a suitable technology to achieve or sustain agility. In recent years, a significant



International Journal of Industrial Engineering, 23(2), 137- 154, 2016

PLANNING THE FUTURE OF EMERGENCY DEPARTMENTS: FORECASTING ED PATIENT ARRIVALS BY USING REGRESSION AND NEURAL NETWORK MODELS

Muhammet Gul* and Ali Fuat Guneri

Department of Industrial Engineering, Yildiz Technical University, 34349, Istanbul, Turkey
* Corresponding author’s e-mail: mgul@yildiz.edu.tr

Emergency departments (EDs) face higher numbers of patient arrivals than other hospital departments because they
provide non-stop service. Patient arrivals at these departments rarely follow a steady state. Predicting this
uncertainty contributes to the future planning of these departments. Therefore, forecasting patient arrivals at emergency
departments is crucial for making short- and long-term plans for physical capacity requirements, staffing, budgeting, and
staff schedules. In this paper, variations in annual, monthly, and daily ED arrivals are analyzed using regression
and neural network models with data collected from a public hospital ED in Istanbul. The results show that ANN-
based models have higher accuracy and lower absolute error in forecasting ED patient
arrivals over the long and medium terms. The paper also aims to provide ED management and medical staff with a useful guide
for future planning of their emergency departments in light of accurate forecasts.

Keywords: emergency department, ED patient arrivals, forecasting, regression, artificial neural networks

(Received on January 11, 2015; Accepted on March 9, 2016)

1. INTRODUCTION

Patient arrivals at emergency departments have recently been increasing in many countries as well as in Turkey. According
to the statistics by Turkish Ministry of Health, the rate of admissions to emergency departments out of total admissions has
been growing in recent years (Gul and Guneri, 2012). This situation stems from inappropriate admissions to emergency
departments and from their use as medical centers and outpatient clinics. Thus, planning ED resources
to meet the increasing daily demand becomes vital from the perspective of ED managers. Efficient planning is only
possible by forecasting demand. Forecasts of demand can influence planning and guide the allocation of human and
physical resources to facilitate patient flow. Efficient patient flow has the potential to increase capacity of the existing
system, minimize patient care delays and improve overall quality of health care (Jones et al., 2008; Gul and Guneri, 2015a;
Gul and Guneri, 2015b).
Several studies were conducted to forecast annual, monthly and daily patient volume (arrivals) at emergency
departments. Cote et al. (2013) presented a tutorial for ED directors on forecasting patient arrivals. They focused on
regression-based forecasting models for strategic, tactical, and operational planning of emergency departments. Jones et
al. (2008) explored and evaluated the use of several statistical forecasting methods to predict daily ED patient volumes at
three diverse hospital EDs and compared their accuracy to that of a previously proposed forecasting method. Morzuch and
Allen (2006) studied forecasting the hourly distribution of the arrival process at a particular hospital ED. Sun et al. (2009)
used the autoregressive integrated moving average (ARIMA) method to forecast daily attendances at an emergency
department; the analysis was found useful for staff rostering and resource planning in the department. Jones et al. (2009)
studied the temporal relationships between the demands for key resources in the
emergency department and the inpatient hospital and developed multivariate forecasting models. Wargon et al. (2010)
investigated whether mathematical models using calendar variables could identify the determinants of ED census over time
in geographically close EDs and assessed the performance of long term forecasts. Batal et al. (2001) developed a prediction
equation for the census of patients in an urgent care clinic. McCarthy et al. (2008) developed a methodology for predicting
the demand for ED services by characterizing ED patient arrivals. Champion et al. (2007) forecasted the number of patients
who would visit the emergency department of a hospital in regional Victoria every month. They used exponential
smoothing and the Box–Jenkins method as forecasting methods. Kadri et al. (2014) applied the ARIMA method to forecast
daily arrivals, for two patient categories and in total, at a French hospital ED using one year of data.
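As a minimal illustration of the kind of calendar-variable regression such forecasting studies typically start from (a generic sketch on synthetic counts, not the authors' actual models or data), the following Python snippet fits daily arrivals on a linear trend and day-of-week dummies:

import numpy as np

# Synthetic daily ED arrival counts: trend plus day-of-week effects plus noise (illustrative only)
rng = np.random.default_rng(0)
days = np.arange(364)
dow = days % 7
arrivals = 200 + 0.05 * days + 15 * (dow == 0) - 10 * (dow == 5) + rng.normal(0, 8, days.size)

# Design matrix: intercept, trend, and six day-of-week dummies (day 0 as baseline)
X = np.column_stack([np.ones_like(days, dtype=float), days.astype(float)] +
                    [(dow == d).astype(float) for d in range(1, 7)])
beta, *_ = np.linalg.lstsq(X, arrivals, rcond=None)

fitted = X @ beta
mae = np.mean(np.abs(arrivals - fitted))
print("Mean absolute error of the calendar regression:", round(mae, 2))

An ANN-based alternative of the kind compared in the paper would replace this linear design with a small neural network trained on the same calendar features.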



International Journal of Industrial Engineering, 23(3), 155-165, 2016

OPTIMAL ORDERING AND PRICING ON CLEARANCE GOODS


Koichi Nakade*, Ken Ikeuchi

Department of Civil Engineering and Systems Management, Nagoya Institute of Technology, Nagoya, Japan
* Corresponding author’s e-mail: nakade@nitech.ac.jp

In a food or convenience store, items such as meals and fresh foods must be sold within a few days. In this case, some old
items are sold as clearance goods at a discounted price. In this study, a basic model with a fixed order quantity is
investigated in which new and clearance goods are sold at the same time. First, the optimal ordering model is formulated when
the discount price and the selection probabilities are given. Next, the optimal ordering and pricing model is formulated for the case
in which the retailer can select the discounted retail price of clearance goods, and the demand volume and the selection
probabilities for new and clearance goods depend on that price. The optimal order quantity and the optimal discount
price are discussed through numerical experiments.

Keywords: order, markov chain, clearance, stochastic demand, logit model

(Received on January 20, 2016; Accepted on August 22, 2016)

1. INTRODUCTION

Inventory and order control is very important for retailers. When they hold large inventories, holding costs are
incurred, and when inventories are small, lost sales occur. In addition, the retail price of items affects the number
of goods sold. Thus, ordering and pricing policies have been developed in the literature over the past two decades. For example,
Petruzzi and Dada (1999) discussed optimal stocking and pricing problems in a single period, as in newsvendor problems,
and in multiple periods by formulating a Markov decision process over a finite horizon. Li and Huh (2011) considered the
problem of multiple products with the nested logit model and proved the concavity of the profit function in market shares,
which are related to retail prices.
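Since the selection between new and clearance goods in this paper is driven by a logit model, a minimal multinomial-logit sketch may help fix ideas; the utility form, parameter values, and prices below are assumptions for illustration, not the paper's model:

import math

# Illustrative logit selection probabilities for a consumer choosing between a new item,
# a clearance (old) item, and not buying; quality_gap and price_sens are assumed parameters.
def selection_probabilities(p_new, p_old, quality_gap=1.0, price_sens=2.0):
    u_new  = -price_sens * p_new                 # utility of the new item
    u_old  = -price_sens * p_old - quality_gap   # old item is assumed less attractive
    u_none = 0.0                                 # utility of not buying
    w = [math.exp(u) for u in (u_new, u_old, u_none)]
    total = sum(w)
    return {"new": w[0] / total, "old": w[1] / total, "none": w[2] / total}

# A deeper discount on the clearance item raises its selection probability.
print(selection_probabilities(p_new=1.0, p_old=0.9))
print(selection_probabilities(p_new=1.0, p_old=0.5))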
For products like foods or clothes, the selling lifetime is limited. Gallego and van Ryzin (1994)
considered a dynamic pricing model with a fixed number of products and a fixed, limited selling period. Optimal pricing is
derived for each pair of remaining period and number of unsold items in inventory. Caro and Gallien (2012) considered a
pricing model for a fast-fashion retailer, in which demand depends highly on fashion trends, and they proposed a
forecasting and price optimization model with Zara. Recently, strategic customers have also been discussed: a strategic
customer decides whether to buy an item at the higher price on the first day or at the lower price on the second day,
knowing both prices. Two-day problems with strategic customers are considered in Zhang and Cooper (2008), which derives
the two optimal prices that maximize revenue over two periods under price-dependent demand. The effect of rationing on
the second day is also discussed.
Some products, like foods, deteriorate as time passes after they go on sale. Panda, Saha and Basu (2008)
assumed that the item in inventory deteriorates and that demand depends on the stock level, and the optimal discount prices in two stages
are derived and discussed theoretically and numerically. In Li, Lim and Rodrigues (2009) a multi-period discount reward
optimization model with perishable products and limited life time is considered. It is assumed that old products are sold
earlier than the new products, and the optimal price and order policy is discussed by a Markov decision process. Chen,
Pang and Pan (2014) discussed a pricing and inventory problem for perishable products over a finite horizon. They
formulate it as a Markov decision process and discuss monotonicity properties of optimal policies.
In most of the literature, a finite-period model for one deteriorating item during its lifetime is considered, or, when both
new and old items are sold, the order in which items are taken from inventory is predetermined; for example, it is assumed
that old items are consumed first and new items are sold only after all old items are sold. In supermarkets or
convenience stores, old and new items are sold on the same day, and old items are sold as clearance goods at a discounted
price, because they are sometimes less attractive to consumers than new items, and new items that arrive at the
retailer today are displayed on the shelf together with the old ones. Therefore, the clearance price of old items
affects the sales amounts of both old and new items each day. If the clearance price is almost the same as that of new items, then
the fraction of consumers who select the clearance good is small, which leads to more discarded items, whereas if the price



International Journal of Industrial Engineering, 23(3), 166-173, 2016

SUSTAINABLE COLLABORATION MODEL WITH MONOPOLY OF SERVICE CENTERS IN EXPRESS DELIVERY SERVICES BASED ON SHAPLEY VALUE ALLOCATION

Ki Ho Chung1, Seung Yoon Ko2, Chul Ung Lee2 and Chang Seong Ko3,*

1 Department of Business Administration, Kyungsung University, Busan, Korea
2 Department of Industrial and Management Engineering, Korea University, Seoul, Korea
3 Department of Industrial and Management Engineering, Kyungsung University, Busan, Korea
* Corresponding author’s e-mail: csko@ks.ac.kr

Delivery volumes in the Korean express service market are constantly increasing, in spite of the long-term downturn and
slow growth in most other industries. This saturated market, which a large number of companies are entering, leads to severe
competition among small and medium-sized express delivery companies. Economies of scale
created through collaboration reduce operational costs. Such cooperation among service centers may also increase
the net profit of the participating companies. This study proposes a sustainable collaboration model that determines which
service centers remain open or are closed when regions with low demand are merged in express delivery services. In addition,
in the process of forming coalitions in express delivery services, the Shapley value is applied to give a fair allocation to each
company based on its marginal contribution. An example problem is given to verify the appropriateness of the suggested
collaboration model.

Keywords: express delivery services, sustainable collaboration, monopoly of service centers, shapley value

(Received on January 25, 2016; Accepted on October 18, 2016)

1. INTRODUCTION

According to a recent Korean bulletin (Bae 2016), total sales in the Korean express delivery market in 2015 increased
by 9.26% compared to 2014. This figure indicates that the economically active population in Korea used express
delivery services 67.9 times per person annually. The delivery volume is estimated at around 1.8 billion parcels, an increase of
11.87%. Since 2010, the express delivery market, which had previously grown at double-digit rates every year,
had stabilized at growth rates below 10%. Nevertheless, the market entered a new phase by
recording a double-digit growth rate last year. However, express delivery companies still suffer from a decrease in the
average unit price of delivery service due to excessive competition in the express delivery market, with prices declining
by 2.33% compared with the previous year. Collaboration creates economies of scale, which lead to reduced
operational costs. In addition, through such efficient cooperation among service centers, participating companies can expect
increased net profit in a win-win situation (Chung et al. 2009). This study suggests a sustainable collaboration model
to increase the competitiveness of every participating company through monopoly of under-utilized service centers with low
demand, along with impartial allocation of alliance profits. The critical issue that arises in applying collaboration to the
real-world problem is the allocation of the coalition profit to each participating company. It is essential for the members of the
coalition to share their payoff fairly to sustain long-term collaboration. For this purpose, Shapley value allocation is utilized
as a systematic methodology for distributing profit fairly according to each company's marginal contribution (Shapley 1953).
A multi-objective nonlinear integer programming model is proposed under the assumption that multiple service centers may
survive in each merging region. However, an example problem dealing only with the survival of a single
service center is solved, since a solution methodology for the general case has not been developed. The example focuses on
the point that the Shapley value is applicable to solving the multi-objective problem.
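For readers unfamiliar with the allocation rule, the following minimal sketch computes Shapley values for a hypothetical three-company coalition game; the characteristic-function values are invented for illustration and are not taken from the paper's example problem:

from itertools import combinations
from math import factorial

# Hypothetical coalition profits v(S) for companies A, B, C (illustrative values only)
v = {
    frozenset(): 0,
    frozenset("A"): 10, frozenset("B"): 12, frozenset("C"): 8,
    frozenset("AB"): 30, frozenset("AC"): 24, frozenset("BC"): 27,
    frozenset("ABC"): 45,
}
players = ["A", "B", "C"]
n = len(players)

def shapley(i):
    """Weighted average of player i's marginal contributions over all coalitions."""
    total = 0.0
    others = [p for p in players if p != i]
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            S = frozenset(S)
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (v[S | {i}] - v[S])
    return total

for p in players:
    print(p, round(shapley(p), 2))

Each company's allocation is its marginal contribution averaged over all possible join orders, which is what makes the sharing rule perceived as fair and sustains the coalition.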



International Journal of Industrial Engineering, 23(3), 174-182, 2016

MARKOV NETWORK MODEL WITH UNRELIABLE EDGES


Gilseung Ahn and Sun Hur*

Department of Industrial & Management Engineering, Hanyang University, Ansan, Korea
* Corresponding author’s e-mail: hursun@hanyang.ac.kr

Network topologies representing the relationships among nodes in a supply chain network should be dynamic in time, partly
because the relationships are unreliable. Existing network analysis methods such as the Markov network, however, do not
consider the time-dependency of the unreliable edges, and therefore cannot capture the dynamics of networks
precisely. In order to consider unreliable edges in Markov network analysis, we suggest a Markov Network with Time-
Varying Edge algorithm in this paper, where a discrete time Markov chain is employed to express the time-dependency of
the edges. The algorithm consists of: finding the unreliable edges of the maximal cliques; developing a discrete time Markov chain
model for each unreliable edge composing any maximal clique, and combining them; and analyzing the Markov network.
We explain how to calculate the transient probabilities of an observation and the limiting probability with this algorithm, and a
numerical application to a supply chain network is provided.

Keywords: Markov network; unreliable edge; discrete time markov chain; maximal clique detection, supply chain network

(Received on January 27, 2016; Accepted on October 17, 2016)

1. INTRODUCTION

In reality, edges expressing the relationships between nodes (e.g., suppliers, manufacturers, retailers, and customers) in a
supply chain network change frequently over time. For instance, a relationship between suppliers is formed when they
sign a contract and ends when the contract runs out. This implies that the weight or strength of the relationship represented by an edge should be a function
of time. The weight of an edge might be binary, representing whether a relation between two nodes, e.g., a supplier and a
retailer, exists. We call this kind of edge a time-varying edge or unreliable edge. Unreliable edges can
describe various supply network situations successfully. As an example, collaboration and information sharing highly
impact small and medium-sized enterprises when they adopt open inter-organizational systems in a supply chain network
(Chong and Bai, 2014). As another example, information sharing about customers impacts the optimal sales strategy
for an integrated supply chain (Saito and Kusukawa, 2015). Two enterprises may be so cooperative that they communicate,
collaborate and/or share information with each other, but after some time they may become competitors, close their
relationship, and stop communicating altogether. In this way, the relationship between any two entities can change over time. As
another example, buyer-supplier relationships, which may contribute to increasing profit, can be classified into short-term
and long-term relationships (Venugopalan et al., 2014). Both can be expressed as time-varying edges, but they
differ in the time until the relationship is dissolved.
Existing network analysis methods, however, including the Markov network, do not consider the time-dependency of
edges and thus cannot reflect the dynamics of networks or express real-world networks precisely. Even though some
researchers have noticed that dynamic changes of networks arise in multiple application domains such as transportation and
information networks (Bogdanov et al., 2011), their research has dealt only with link prediction (Sarkar et al., 2012),
community discovery (Lin et al., 2008), evolution models (Yin et al., 2016), and detection of to-be-popular nodes (Gunduz
and Yuksel, 2016) in dynamic networks, and did not develop general methodologies considering dynamic changes of a
network topology. To the best of our knowledge, no previous research on network analysis has taken the time-
dependency of edges in a supply chain network into consideration, despite the importance of relationships in
a supply chain network. For instance, relationships between suppliers and product configuration are important to understand
the role of product flow in the supply chain (Marufuzzaman and Deif, 2010). In addition, price is highly dependent on
coordination between a manufacturer and a retailer (Nagarur and Iaprasert, 2009).
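As a minimal sketch of the building block such a model rests on, a single unreliable edge can be represented as a two-state discrete time Markov chain whose transient and limiting probabilities are computed below; the transition probabilities are illustrative assumptions, not values from the paper:

import numpy as np

# Two-state DTMC for one unreliable edge: state 0 = edge absent, state 1 = edge present.
# Rows are current states, columns are next states; values are illustrative only.
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])

pi0 = np.array([1.0, 0.0])          # edge initially absent

# Transient distribution after n steps: pi_n = pi_0 P^n
pi_5 = pi0 @ np.linalg.matrix_power(P, 5)

# Limiting (stationary) distribution: left eigenvector of P for eigenvalue 1, normalized
eigvals, eigvecs = np.linalg.eig(P.T)
stat = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stat = stat / stat.sum()

print("P(edge present after 5 steps):", pi_5[1])
print("Limiting P(edge present):", stat[1])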
In order to accommodate the dynamically varying real-world situations among supply chain entities, methods
considering the concept of a time-varying network should be developed. In this paper, we suggest a Markov Network with
Time-Varying Edge (MN-TVE) algorithm to analyze Markov networks with time-varying edges, which can give a general
solution for dynamic undirected networks. We construct a discrete time Markov chain (DTMC) model for each time-
International Journal of Industrial Engineering, 23(3), 183-194, 2016

A MULTI-LEG LOAD-PLANNING ALGORITHM FOR A HIGH-SPEED FREIGHT TRAIN

Dong-jin Noh1, Byung-In Kim1,*, Hyunbo Cho1, Jun-Seo Park2

1 Department of Industrial and Management Engineering, Pohang University of Science and Technology, Pohang, Korea
* Corresponding author’s e-mail: bkim@postech.ac.kr
2 Korea Railroad Research Institute

This study considers a load-planning problem for a high-speed train called Cargo Transit eXpress (CTX). In the problem,
CTX visits a sequence of stations to load and unload unit load devices (ULDs). The train has a given number of wagons with
available positions for ULDs, which have respective weights, as well as origin and destination stations. The weight balance
among the wagons and within each wagon is crucial for safety because the train moves fast. Furthermore, given that it
traverses multiple stations, the weight balance should be maintained among all legs between stations. The problem is how to
assign ULDs to specific positions in the wagons considering load balancing at multiple stations. This problem is an NP-hard
multi-leg load-planning problem, which has not been studied extensively. An MIP model and a heuristic algorithm are
developed to solve the problem. Computational results show the effectiveness of the proposed algorithm.

Keywords: multi-leg load-planning problem; k-partition; heuristic algorithm; balancing problem

(Received on January 31, 2016; Accepted on October 17, 2016)

1. INTRODUCTION

Given the current development of intermodal terminals, trains have a significant role in connecting roads with marine or air
transportation systems. The main function of terminals is to transport cargo and freight in the form of containerized goods.
As the train speed increases, numerous issues have to be considered, including the weight balance of the freight loaded in
the train.
Korea Railroad Research Institute is working on a project related to the development of Cargo Transit eXpress (CTX),
a high-speed cargo train that consists of a series of identical wagons. The purpose of CTX is to transport a unit load device
(ULD), which is a small container that carries various types of freight, from a given point of origin to a destination. On the
platform of a station, a conveyor belt with rollers is placed in a line so that the ULDs to be loaded are stationed on it. The
ULDs placed on the conveyor belt should be lined up in a preassigned order to reduce the time spent on loading and
unloading ULDs. ULDs have identical sizes but different weights and loading and unloading stations.
Given that CTX runs at a very high speed of up to 300 km/h, the most important consideration is the weight balance of
the loaded ULDs. If the weight balance of a train is not considered, then serious accidents or railroad deterioration may
occur. This consideration is called “load balancing.” The load balancing of CTX includes not only the weight balance
between wagons of the same train but also the balance between the front and rear parts and between the left and right parts
of each wagon. This condition means that the balance among the four quadrants of each wagon should be maintained.
Ensuring load balancing at multiple stations causes difficulty in solving the problem because each ULD has a different
loading and unloading station, and the plan for loading should be devised before the train leaves the first station. The
problem is a multi-leg load-planning problem (MLLPP), in which a leg is between consecutive stations. The present study
proposes an algorithm to balance the weight of the ULDs at all legs and assign the ULDs to particular positions in the
wagon.
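A minimal sketch of the quadrant-balance check implied by this description is given below; the tolerance value and the quadrant weights are assumptions for illustration, not constraints from the paper's MIP model:

# Quadrant-balance check for one wagon on one leg.
# Weights are for the four quadrants: front-left, front-right, rear-left, rear-right (in kg).
def quadrant_imbalance(fl, fr, rl, rr):
    front_rear = abs((fl + fr) - (rl + rr))   # front/rear weight difference
    left_right = abs((fl + rl) - (fr + rr))   # left/right weight difference
    return front_rear, left_right

def is_balanced(weights, tolerance=500.0):    # tolerance in kg, an assumed value
    front_rear, left_right = quadrant_imbalance(*weights)
    return front_rear <= tolerance and left_right <= tolerance

print(quadrant_imbalance(3200, 3100, 2900, 3000))   # (400, 0): front-heavy, still within tolerance
print(is_balanced((3200, 3100, 2900, 3000)))

In the multi-leg setting, such a check would have to hold for every wagon on every leg, since the set of ULDs on board changes as the train loads and unloads at each station.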
The rest of this paper is organized as follows. Section 2 introduces related studies on load-planning problems. Section
3 describes a MLLPP for CTX and its mathematical model. Sections 4 and 5 present the proposed algorithm and
experimental results, respectively. Section 6 provides the conclusions.



International Journal of Industrial Engineering, 23(3), 195-206, 2016

A CONSTRUCTIVE HEURISTIC FOR THE CONTAINER RELOCATION PROBLEM

Kun-Chih Wu1 and Ching-Jung Ting1,2,*

1 Department of Industrial Engineering and Management, Yuan Ze University, Taoyuan, Taiwan
2 Innovation Center for Big Data and Digital Convergence, Yuan Ze University, Taoyuan, Taiwan
* Corresponding author’s e-mail: ietingcj@saturn.yzu.edu.tw

Container terminals face the challenge of handling a growing number of containers to keep pace with the expansion of the globalized
economy. Reducing container relocations is an important operational issue at ports, because a relocation is a non-
value-added, time-consuming, and wasteful movement. In this study, a container relocation problem (CRP) is
considered to obtain the minimum number of relocations needed to retrieve a set of containers stored in a bay. Specifically, this
paper addresses the unrestricted CRP, in which relocations are not limited to the containers above the target container.
We propose a novel constructive heuristic to quickly generate a good-quality solution to the container relocation problem.
Comprehensive numerical experiments on a total of 12,500 unrestricted instances from the literature are conducted and compared. The
computational results show that, compared to an existing construction heuristic, our heuristic reduces the number of
relocations by 3.55% on average.

Keywords: container relocation problem; block relocation; unrestricted; constructive heuristic; container terminal

(Received on January 30, 2016; Accepted on October 17, 2016)

1. INTRODUCTION

Container terminals play an important role in coordinating flows between the sea side and the land side. Due to the dramatic
expansion of container trade worldwide, container terminals face a severe challenge in handling the rapidly increasing
volume of containers. In order to reduce the berthing time of vessels, handling stored containers quickly becomes
a major concern for port authorities. The sooner the terminal handles the containers, the more vessels can be served and
the higher the productivity that can be achieved. In general, containers are stacked up at container yards to increase the
utilization of the available space. The stacking height of the containers varies according to the handling capacity of the
container terminal and may be up to six tiers at some terminals due to limited space. Thus, in terms of spatial
density, more activities and equipment, such as trucks and cranes, may be involved. In this regard, improving
operational productivity and flexibility is a major issue for terminals with high storage density.
The relocation of containers is an important factor that impairs the productivity and increases the labor costs of
container terminals. Ordinarily, two types of movements are applied to the containers stored at a container yard. When a
required container (the target container) is at the top of a stack, the retrieval operation can be performed immediately to
remove the container from the yard. Otherwise, an additional operation cost is incurred by the relocation of the containers above
the target container. Since the relocation of containers is a non-value-added operation, reducing the number of
relocations is one of the most important issues at container terminals.
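To make the relocation count concrete, the following sketch retrieves containers from a small bay in priority order and counts the forced relocations under a simple "move blockers to the shortest other stack" rule; this baseline rule is for illustration only and is not the constructive heuristic proposed in the paper:

# Count relocations in a bay when containers are retrieved in order 1, 2, 3, ...
# Each stack is listed bottom -> top; numbers are retrieval priorities (1 retrieved first).
def count_relocations(bay):
    bay = [stack[:] for stack in bay]
    target = 1
    total = len(sum(bay, []))
    relocations = 0
    while target <= total:
        s = next(i for i, st in enumerate(bay) if target in st)
        while bay[s][-1] != target:              # relocate blocking containers
            blocker = bay[s].pop()
            dest = min((i for i in range(len(bay)) if i != s), key=lambda i: len(bay[i]))
            bay[dest].append(blocker)
            relocations += 1
        bay[s].pop()                              # retrieve the target container
        target += 1
    return relocations

print(count_relocations([[3, 1], [2, 4], []]))    # 1 relocation for this small example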
Container relocation is an almost unavoidable operation at container terminals. In an ideal storage plan, containers
are placed according to their retrieval order, and thus relocation is not necessary. However, when containers are placed
in the container yard, the information about their retrieval times is usually unclear and incomplete. Export
containers, for example, are delivered into the terminal by external trucks and usually stored at the container yard for
several days before the loading plan for a ship is available. The loading sequence of outbound containers is based on the
stowage plan, which determines the storage positions of containers in the ship according to their destination, weight, etc. The
stowage plan is known in advance; however, since the containers arrive at the terminal in a random manner, they might
not be placed in the yard according to their retrieval sequence. On the other hand, the exact loading sequence cannot be
confirmed completely, because container information may be unavailable or biased at the beginning. In practice,



International Journal of Industrial Engineering, 23(4), 207-215, 2016

A GATEWAY-CENTERED WORKFLOW ROLLBACK DECISION MODEL TOWARD AUTONOMOUS WORKFLOW PROCESS RECOVERY

Hyun Ahn and Kwanghoon Pio Kim*

Department of Computer Science, Kyonggi University, Gyeonggi-do, Republic of Korea
* Corresponding author’s e-mail: kwang@kgu.ac.kr

In enacting a workflow process model, it is very important to control and trace each instance's execution as well as to keep it
recoverable. In particular, the recoverability issue implies that the underlying workflow management system must not only
provide automatic error-detection functionality for runtime exceptions but also be equipped with various autonomous recovery
mechanisms to deal with the detected exceptional and risky situations. As a theoretical approach to resolving the autonomous
workflow recovery issue, this paper formalizes a rollback-point decision tree structure based upon the gateway-activities
of a corresponding workflow process model, which is named the gateway-centered workflow rollback decision model. We
believe that the proposed model is a pioneering contribution toward improving and advancing the recovery capability of
systems enacting workflow process models.

Keywords: information control net; workflow model; control dependency; recoverability; autonomous workflow process
recovery; rollback-points sequence

(Received on January 5, 2016; Accepted on November 16, 2016)

1. INTRODUCTION

In this paper1, we focus on the safeness (or recoverability) issue (Ma, J. et al., 2011) (Kim, K. et al., 2007) (Park, M. et al.,
2015) (Grefen, P., 2002) (Grefen et al., 2006) in enacting workflow process models through a workflow management
system. Guaranteeing safeness of workflow enactment services implies keeping consistency between the virtual status of a
workflow instance in the runtime system and the physical status of the corresponding process in the real business world.
For the sake of safeness, it is very important for the system to be supported by autonomous error-detection and self-
recovery mechanisms for resolving unexpected exceptional and risky situations (Ghadge et al., 2013) (Brzeziński et al.,
2012). More specifically, the paper conceives a novel concept of rollback-point ancestries that the workflow management
system can use to automatically recover and resume error-involved workflow instances from exceptional and risky
situations. The proposed concept is a theoretical approach to be reified as an autonomous recovery mechanism that supports
determining a proper point for rollback operations. The rollback operation needs to point to a specific activity (either a
gateway-activity or a task-activity) out of the previously committed activities, and rolls back to the determined activity in a
workflow instance confronting an inconsistency situation. The core part of the theoretical approach is a rollback decision
tree structure that produces a set of possible rollback-point sequences from a workflow process model, and in this paper we
name it the 'workflow rollback decision model.'
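One minimal, illustrative reading of a gateway-centered rollback-point sequence (our own simplification, not the paper's formal decision-tree model) is to list the committed gateway-activities of an instance from the most recent one backwards; the activity names below are hypothetical:

# Candidate rollback points for one instance: committed gateway-activities, most recent first.
def gateway_rollback_sequence(committed_trace, gateway_activities):
    return [a for a in reversed(committed_trace) if a in gateway_activities]

trace = ["start", "g_split", "task_A", "task_B", "g_join", "task_C"]
gateways = {"g_split", "g_join"}
print(gateway_rollback_sequence(trace, gateways))   # ['g_join', 'g_split']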
In principle, there are two types of rollback decision models: the gateway-centered rollback decision model and the
task-centered rollback decision model. A rollback decision model basically specifies a series of rollback sequences as
workflow instance recovery information, which can be used to implement an autonomous self-recovery mechanism that
resumes error-involved workflow instances from exceptional and risky situations. Note that a workflow process
model describes a temporal precedence (control flow) of gateway-activities and task-activities, and their related events, such
as initiating, terminating, timing, and so on. Accordingly, the rollback-point sequences of a workflow process model can be classified
into a gateway-centered type2. In this paper, we try to extensively refurbish the gateway-centered type as the name of the

1 This paper is fully extended from the conference paper (Park, M. et al., 2015) published in the proceedings of AP-BPM 2015, the Asia-Pacific conference on Business Process Management held in Busan, South Korea.
2 Note that we first formalized a gateway-centered type under the name of the gateway-centered workflow rollback-points ancestry model published in the conference proceedings (Park, M. et al., 2015).



International Journal of Industrial Engineering, 23(4), 215-234, 2016

CLOUD WORKFLOW MODELING BASED ON EXTENDED PROCLETS FRAMEWORK

Hua Huang1,2, Rong Peng1,* and Zaiwen Feng1

1 School of Computer, Wuhan University, Wuhan, China
* Corresponding author’s e-mail: rongpeng@whu.edu.cn
2 School of Information Engineering, Jingdezhen Ceramic Institute, Jingdezhen, China

Promoting the reuse of common process fragments, accelerating business process customization for multiple
tenants, and modeling interactions between different tenants' business processes while protecting their privacy are the
main research focuses of existing cloud workflow modeling approaches. This paper proposes a cloud workflow modeling
approach based on an extended proclets framework, which has the following advantages: 1) it provides a hierarchical
management mechanism to isolate the private data of each tenant’s business process; and 2) it adopts a two-level-channel
transmission protocol to explicitly model interactions between different tenants' business processes. To precisely describe
the modeling approach, the formal definition and construction procedure of the cloud workflow model based on
extended proclets are presented. Finally, a case study is carried out to illustrate the approach's modeling capability and feasibility; to
evaluate its effect in application, the operation of a cloud workflow system developed with the
proposed modeling approach is evaluated through three indexes: communication error rate, data leakage rate, and user
satisfaction.

Keywords: cloud workflow model; proclets; interaction between multi-tenant business processes; process fragment reuse

(Received on January 10, 2016; Accepted on November 16, 2016)

1. INTRODUCTION
A cloud workflow system is a workflow management system deployed in a cloud computing environment. It can be widely used
as platform software (or middleware services) to facilitate the usage of cloud services. In a cloud workflow system, a good
reuse mechanism is extremely important, as it can improve the user friendliness and usability of the system. Meanwhile,
security is also a key feature, as tenants' business processes and data all run on the system. Thus, tenants
ask the cloud platform to guarantee that they can design, configure, and run their business processes with three levels of
isolation: data isolation, performance isolation, and execution isolation (Chen et al., 2009). Interactions between the business
processes of different kinds of tenants must be supported at the same time, because tenants with different roles (tenant types)
need to cooperate with each other to fulfil their business goals. Therefore, a cloud workflow modeling approach that can
meet the demands for good reusability, privacy protection, and convenient interaction is important to the successful development
and stable operation of cloud workflow systems.
For instance, there are at least three types of enterprise tenants in the Cloud Distribution Resource Planning System
(denoted as CDRP), i.e. supplier (ceramic product supplier), online distributor (selling suppliers’ ceramic products only in
online store), physical distributor (selling suppliers’ ceramic products in both online store and physical store). Although they
share some common processes, e.g., registration process and shop creation process, each tenant has its personalized process,
e.g., the process of supplier’s distribution channel construction and the process of distributor’s procurement channel
construction. In order to accelerate business process customization for suppliers and distributors, the common
processes should be easy to reuse. Meanwhile, to conduct the product distribution business, interactions between the
processes of suppliers and distributors should be achieved without disclosing the private data in these processes.
In general, interactions between cross-enterprise collaborative processes are implicitly implemented in customized
applications through hard-coded business process definition languages, which makes it very difficult to update and maintain
the corresponding system. Therefore, it is necessary to explicitly model interaction between different tenants' business
processes in cloud workflow, e.g., the interaction between different stakeholder roles in CDRP. Unfortunately, another



International Journal of Industrial Engineering, 23(4), 235-252, 2016

A MULTI-TENANT EXTENSION OF TRADITIONAL BUSINESS PROCESS MANAGEMENT SYSTEMS TO SUPPORT BPAAS1

Dongjin Yu1,*, Jiaojiao Wang1, Jianwen Su2 and Binbin Huang1

1 School of Computer Science and Technology, Hangzhou Dianzi University
* Corresponding author’s e-mail: yudj@hdu.edu.cn
2 Department of Computer Science, UC Santa Barbara, USA

BPaaS, or Business Process as a Service, is an advanced model of SaaS in which the Business Process Management System
is deployed as a shared, centrally-hosted service without the need for users to deploy and maintain additional on-premise IT
infrastructure. BPaaS leverages economies of scale and isolates the process management from domain business, by serving
a large number of tenants. In this paper, we present an architectural design of a BPaaS system and its two implementations,
called jBPM4S and ActivitiEx respectively. These extensions of two well-known open-source Business Process
Management Systems, i.e., jBPM and Activiti, provide generic and unified process management services, decoupled
from specific business operations, that are invoked on demand by tenants. To accomplish this, the extensions cleanly separate the
business data from process execution to isolate tenants. An extensive case study is presented to demonstrate the usability
and efficiency of the extensions, and their differences from the traditional BPMS approach.

Keywords: business process management; business process as a service; jBPM; activiti; multi-tenant

(Received on January 14, 2016; Accepted on November 16, 2016)

1. INTRODUCTION

Business process management (BPM) focuses on improving corporate performance by managing and optimizing
enterprise’s business processes. It enables flexible and individualistic composition and execution of services as opposed to
hard-coded workflows in off-the-shelf software (Schulte et al., 2015). In a traditional architecture, a BPM system
responsible for coordinating and monitoring running instances of business processes is usually a part of enterprise
application systems. However, to design, implement, and maintain a BPM system is not always a straightforward task (Lee
et al., 2015). Purchasing a BPM system is an expensive investment for some enterprises. In addition, scalability can also be
a concern for enterprises that use BPM systems, since a process engine is only able to coordinate a limited number of
business process instances simultaneously (Duipmans et al., 2014; Nicolae et al., 2015).
Cloud computing has changed how computing, storage and software services are provisioned. It gives the user
opportunities of using computing resources in a pay-per-use manner, and of perceiving these resources as unlimited (Bibi et
al., 2012; Cusumano et al., 2010). As we all know, cloud providers offer three major types of system services: software as a
service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). In SaaS, consumers pay for a software
subscription and move all or part of their data and code on remote servers (Sengupta et al., 2011; Filho et al., 2013).
Business Process as a Service, or BPaaS, is a new, emerging paradigm of SaaS in which business processes are deployed as a
hosted service and accessed over the Internet without the need for users to deploy and maintain additional
on-premise IT infrastructure (Sun et al., 2014; Wang et al., 2013; Euting et al., 2014; Duipmans et al., 2014). From a
BPaaS vendor’s perspective, the benefits of BPaaS arise from leveraging economies of scale and separating the process
management from domain business, by serving a large number of customers (“multiple tenants”) through a shared,
centrally-hosted software service of business processes.
In this paper, we present the design and implementation of two prototypes of BPaaS that are extensions based on
jBPM and Activiti respectively, two world-leading open source projects for BPM systems. The prototypes, called jBPM4S
and ActivitiEx respectively, provide the services of business management over the Internet for different applications. With

1 This paper is the extension of our previous one published on APBPM 2015.



International Journal of Industrial Engineering, 23(4), 253-269, 2016

DISCOVERY OF GATEKEEPERS ON INFORMATION DIFFUSION FLOWS USING PROCESS MINING

Berny Carrera, JinSung Lee*, and Jae-Yoon Jung

Department of Industrial and Management Systems Engineering, Kyung Hee University, Yongin, Korea
*Corresponding author’s e-mail: jinsl127@khu.ac.kr

Online social network services (SNS) such as Twitter and Facebook are currently representative means of disseminating
information on the Web. It is therefore crucial to internet marketing to understand the dynamics of information diffusion in
online social networks. To this end, we present a probabilistic approach to process discovery, based on an extended hidden
Markov model, considering the log data extracted from online SNS. Specifically, we first group users based on their
interactions using SNS log data and three community detection algorithms. The process discovery algorithm is an extension
of the hidden Markov model and is applied to the user communities to reflect probabilistic dissemination among
them. We illustrate the proposed method with real SNS data gathered from a Facebook fan page. We expect that our
method can promote comprehension of the information dynamics in online social networks by visualizing probabilistic
information diffusion through user groups.

Keywords: social media analytics; information diffusion; process mining; hidden markov model

(Received on January 15, 2016; Accepted on November 20, 2016)

1. INTRODUCTION

Online social network services (SNS) are currently effective means for spreading new information and marketing content
around the world. Online SNS such as Facebook and Twitter have hundreds of millions of users, and each user is connected
to friends, family, and co-workers. Many people commonly communicate their opinion or news through online SNS (Kwon
& Wen, 2010). As SNS have become a major means of online communication, various studies on online social network
analysis have been made (Kim & Yoneki, 2012; Guille et al., 2014).
Nevertheless, studies on online social networks remain insufficient to analyze information flow in SNS and predict
user behavior. Understanding how information spreads in a social network is valuable because it facilitates important tasks,
such as analyzing communication between communities (Zafarani et al. 2014), developing a marketing strategy
(Domingos, 2005), and understanding how and why misinformation spreads (Budak, 2011). Interactions and information
propagation among users might influence purchase decisions and hot news propagation. Therefore, it is important to
understand how information spreads and how users interact with one another.
In this work, we build on social network analysis, which shows the relationships of people in society, to represent and
visualize users’ behavior in social media. One can understand the structure of social relationships, such as community
distinctions, relationship patterns, and diffusion, by analyzing such a diagram (Scott, 2012). Although such social network
analytics provide good methods for understanding information diffusion on SNS, they cannot effectively illustrate the
sequence and path of information propagation inside a social network.
In this research, we propose a process discovery methodology based on the hidden Markov model (HMM) to
understand information diffusion by discovering a process map from SNS data. In general, HMM is an effective technique
that stochastically estimates the parameters of hidden states from observations, and it is used in areas such as speech
recognition, data recognition, and bioinformatics (Duda et al., 2001). On the other hand, process mining techniques extract
knowledge from event logs in enterprise information systems. In this paper, we present a new process mining algorithm that
we call the Hidden Markov Model for Information Diffusion (HMMID). It extends the typical HMM to discover the
information diffusion process. In particular, it clusters users into multiple user communities to abstract many users in a
social network and increase the comprehensibility of information flow among user groups. For the purpose of user
grouping, we adopt three kinds of community detection algorithms, Markov clustering, Girvan Newman, and Louvain
modularity, to elucidate the information diffusion process.
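For orientation, the sketch below runs the standard HMM forward pass on a toy observation sequence; it shows only the generic machinery that HMMID extends, with made-up transition, emission, and initial probabilities:

import numpy as np

# Generic HMM forward algorithm (not the HMMID extension for user communities)
A  = np.array([[0.7, 0.3],     # hidden-state transition matrix (2 states, assumed)
               [0.4, 0.6]])
B  = np.array([[0.8, 0.2],     # emission probabilities: P(observation | hidden state)
               [0.3, 0.7]])
pi = np.array([0.6, 0.4])      # initial state distribution

obs = [0, 1, 1]                # observed interaction types (illustrative)

alpha = pi * B[:, obs[0]]      # forward variables at time 0
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]   # recursion: propagate and weight by emission

print("P(observation sequence):", alpha.sum())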



International Journal of Industrial Engineering, 23(4), 270-282, 2016

A DYNAMIC AND HUMAN-CENTRIC RESOURCE ALLOCATION FOR MANAGING BUSINESS PROCESS EXECUTION

Arif Wibisono1, Amna Shifia Nisafani1,*, Hyerim Bae2, You-Jin Park3

1 Department of Information Systems, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
* Corresponding author’s e-mail: amna@is.its.ac.id
2 Department of Industrial Engineering, Pusan National University, Busan, Korea
3 School of Business Administration, Chung-Ang University, Seoul, Korea

Generally, resource allocation is essential to efficient operational execution. More specifically, resource allocation for
semi-automatic business processes can be more complicated due to human involvement, since human
performance fluctuates over time. Hence, upfront, static resource allocation might be suboptimal for dealing with human
dynamics. For this reason, this research suggests a dynamic and human-centric resource allocation approach to organize human-type
resources in semi-automatic business processes. Here, we use a Bayesian approach to predict resources' performance
from a historical data set. As a result, we can construct a dynamic priority rule to assign a job to the resource with
the highest probability of working faster. Finally, we demonstrate that our approach outperforms other priority rules, namely Random,
Lowest Idle, Highest Idle, Order, and a previously developed Bayesian Selection Rule, in terms of total completion time and
waiting time.

Keywords: dynamic resource allocation, machine learning, dynamic dispatching rule, dynamic priority rule, naïve bayes

(Received on January 15, 2016; Accepted on November 16, 2016)

1. INTRODUCTION

The advancement of information technology has prompted many companies to adopt information technology products to support
their daily operations, such as software for managing customers (e.g., customer relationship management), suppliers
(e.g., supplier management systems), and inventory and production (e.g., enterprise resource planning). One of the
information technology products that many companies bring into their business ecosystem is the business process management
system (BPMS). A BPMS is software to plan, execute, control, monitor, and evaluate business processes (BPs) within companies
(Wibisono et al., 2015). In order to improve the efficiency of business processes, researchers have investigated some
scheduling concepts for BP by organizing resources during BP execution (Bae, Lee and Moon, 2014) (Eder et al., 2003)
(Huang, Lu and Duan, 2012) (Huang, van der Aalst and Lua, 2011) (Rhee, Bae and Kim, 2004) (Wu et al., 2009) (Zhao and
Stohr, 1999) (Yahya et al., 2011) (Nisafani et al., 2014).
Generally, resources in a business process can be divided into human resources and facilities, for example, machines,
vehicles, storage, etc. While facility-type resources are used mostly in manufacturing-related processes,
human-type resources are predominantly found in organization-related processes such as quote-to-order,
procure-to-pay, order-to-cash, application-to-approval, and issue-to-resolution (Dumas et al., 2013). For example, Figure 1
shows an order-to-cash related process, which involves human-type resources and is common in many
wholesaling companies.
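A minimal sketch of the kind of Bayesian selection rule described in the abstract is shown below: it estimates P(fast | resource, hour) from a fabricated history with Laplace smoothing and assigns the next job to the resource most likely to be fast; the data and the single "hour of day" feature are illustrative assumptions, not the paper's model:

from collections import defaultdict

# Fabricated history of (resource, hour_of_day, fast_or_slow) observations
history = [("r1", 9, 1), ("r1", 9, 1), ("r1", 14, 0),
           ("r2", 9, 0), ("r2", 14, 1), ("r2", 14, 1)]

counts = defaultdict(lambda: [0, 0])           # (resource, hour) -> [slow count, fast count]
for resource, hour, fast in history:
    counts[(resource, hour)][fast] += 1

def p_fast(resource, hour):
    slow, fast = counts[(resource, hour)]
    return (fast + 1) / (slow + fast + 2)      # Laplace-smoothed estimate of P(fast)

def assign(resources, hour):
    return max(resources, key=lambda r: p_fast(r, hour))

print(assign(["r1", "r2"], 9))    # r1 is historically faster at 9 o'clock
print(assign(["r1", "r2"], 14))   # r2 is historically faster at 14 o'clock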
Compared to facility-type resources, managing human-type resources is more challenging for the following two reasons.
First, human-type resources have lower performance consistency than facility-type resources. For instance, an operator
might perform consistently in his/her first three hours, and then his/her speed gradually decreases. After lunch, the
operator might speed up again, but somewhat more slowly than in the morning. As time progresses, the speed
becomes constant from 2 PM until the end of the day (Wibisono et al., 2015). Thus, it is infeasible to expect constant
performance from a human-type resource over a long period. Second, generally, even though researchers acknowledge

International Journal of Industrial Engineering, 23(5), 283-293, 2016

OPTIMIZING MULTIPLE RESPONSE VARIABLES OF CHEMICAL AND MECHANICAL PLANARIZATION PROCESS FOR SEMICONDUCTOR FABRICATION USING A CLUSTERING METHOD

Jaehyun Park1 and Dong-Hee Lee2

1 Department of Industrial and Management Engineering, Incheon National University, Incheon, Korea
2 College of Interdisciplinary Industrial Studies, Hanyang University, Seoul, Korea
* Corresponding author’s email: dh@hanyang.ac.kr

Semiconductors are fabricated through unit processes including photolithography, etching, diffusion, ion implantation,
deposition, and planarization processes. Chemical mechanical planarization (CMP), which is essential in advanced
semiconductor manufacturing, aims to achieve high planarity across a wafer surface. Selectivity and roughness are the main
response variables of the CMP process. Since the response variables are often in conflict, it is important to obtain a satisfactory
compromise solution by reflecting the CMP process engineer’s preference information. We present a case study
in which such a satisfactory compromise solution is obtained. The recently developed posterior preference articulation approach
to multi-response surface optimization is employed for this purpose. The response variables of the CMP process
are shown to perform better at the obtained setting than at the existing setting of the process variables.

Keywords: process optimization; chemical and mechanical planarization; multi-response surface optimization

(Received on December 14, 2015; Accepted on November 15, 2016)

1. INTRODUCTION

Semiconductors are fabricated through unit processes including photolithography, etching, diffusion, ion implantation,
deposition, and planarization processes. Chemical and Mechanical Planarization (CMP), which is essential in advanced
semiconductor manufacturing, aims to achieve high planarity across the wafer surface (Oliver, 2004, Steigerwald et al.,
2008).
Figure 1 illustrates the CMP process. It typically consists of a polishing pad, a wafer to be polished, and CMP slurry.
The polishing pad is attached to a plate, and a wafer carrier holds the wafer against the polishing pad. Both the wafer and
the polishing pad rotate in the same direction at different speeds while slurry is injected onto the polishing pad
during the rotation. Abrasive particles in the slurry cause chemical interaction and mechanical friction between the
polishing pad and the wafer surface, which results in the removal of material from the wafer surface.

Figure 1. Schematic of CMP Process (May and Spanos, 2006)



International Journal of Industrial Engineering, 23(5), 294-301, 2016

IDENTIFICATION OF RISK DEVICES USING INDEPENDENT COMPONENT ANALYSIS FOR SEMICONDUCTOR MEASUREMENT DATA

Anja Zernig1,*, Olivia Bluder1, Juergen Pilz2, Andre Kaestner3, and Alban Krauth1

1 KAI – Kompetenzzentrum Automobil- und Industrieelektronik GmbH, Villach, Austria
* Corresponding author’s e-mail: anja.zernig@k-ai.at
2 Department of Statistics, Alpen-Adria University of Klagenfurt, Klagenfurt, Austria
3 Infineon Technologies Austria AG, Villach, Austria

Semiconductor devices must fulfill the highest quality standards since they are used in safety-relevant applications. Bad devices,
namely devices which are not fully functional after production, are scrapped immediately and not delivered to the
customer. Unfortunately, among the remaining ones there are still devices with increased risk, prone to infant mortality. To
minimize the chance of delivering such risk devices, statistical screening methods are applied to Front-End data to detect
suspicious devices, represented as statistical outliers. Depending on the technology, different measurements are suitable for
screening. Nevertheless, it is assumed that the measurements contain not only signal noise but also a hidden signal identifying
risk devices. Therefore, this paper proposes using Independent Component Analysis as a data transformation to separate
informative from non-informative content in the electrical measurements.

Keywords: screening; independent component analysis; outlier detection; semiconductors

(Received on January 27, 2016; Accepted on November 15, 2016)

1. INTRODUCTION

In the semiconductor industry, the demand for reliable semiconductor devices is of paramount importance. Functionality tests are
applied to each single device to check if the device fulfills the requirements. For instance, devices with contact failures or any
other physical defects can be detected with special functionality tests. Besides these tests, parameters like leakage currents or
voltages are measured. The values have to stay within pre-defined specification limits. For devices used in safety-relevant
applications, even tighter, statistically derived limits, so-called Part Average Testing (PAT) limits, are calculated. Devices
outside these limits are scrapped as well, because from a statistical point of view they are outliers and are therefore expected
to carry a higher risk of failing early than devices from the main part of the distribution.
Typically, depending on the investigated technology, reliability-relevant measurements are used for PAT.
Unfortunately, the raw measurements are often superimpositions of signals that are not all relevant to the failure risk. This
interference may reduce the effectiveness of screening. That is why a suitable transformation of the raw data is needed to extract
or enhance the information that is relevant for identifying devices at risk. A suitable transformation for this aim is Independent
Component Analysis (ICA), a method related to Blind Source Separation techniques. Thereafter, the transformed data are
screened for outliers with PAT. The performance of screening with and without preceding ICA is then visualized and
compared via Operating Curves (OC).
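As a rough illustration of this pipeline (synthetic data, generic FastICA, and a simple robust PAT-like limit; not the authors' measurements or exact limit definition), the sketch below un-mixes two measurement channels and flags outlying devices on the independent components:

import numpy as np
from sklearn.decomposition import FastICA

# Synthetic electrical measurements: noise mixed with a sparse hidden "risk" signal
rng = np.random.default_rng(1)
n_devices = 1000
hidden_risk = rng.laplace(0, 0.1, n_devices)
noise = rng.normal(0, 1.0, (n_devices, 2))
measurements = np.column_stack([noise[:, 0] + 0.5 * hidden_risk,
                                noise[:, 1] - 0.8 * hidden_risk])

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(measurements)

def pat_outliers(x, k=6.0):
    center = np.median(x)
    scale = 1.4826 * np.median(np.abs(x - center))   # robust sigma estimate
    return np.abs(x - center) > k * scale            # PAT-like limit: median +/- k * sigma

flagged = np.any([pat_outliers(components[:, j]) for j in range(components.shape[1])], axis=0)
print("Devices flagged as risk devices:", int(flagged.sum()))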
The paper is structured in the following way: In Section 2 the general meaning of device screening and two
well-established methods are outlined. In Section 3 Independent Component Analysis is introduced. Afterwards, in Section 4,
the theory is applied to semiconductor data, where single steps are outlined. Finally, the work in this paper is summarized,
conclusions are drawn and further steps are outlined.

2. SCREENING OF RISK DEVICES

The failure rate of semiconductor devices is known to follow a bathtub curve (Zernig et al., 2014, Wilkins, 2002) with a high
“infant mortality” in the early lifetime (see Figure 1). Devices failing during the early lifetime have been fully functional after



International Journal of Industrial Engineering, 23(5), 302-317, 2016

OPTIMIZATION OF PREVENTIVE MAINTENANCE PLANS IN G/G/M QUEUEING NETWORKS AND NUMERICAL STUDY WITH MODELS BASED ON SEMICONDUCTOR WAFER FABS

Jinho Shin1, James R. Morrison1,*, and Adar Kalir2

1 Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea
* Corresponding author’s e-mail: james.morrison@kaist.edu
2 Fab/Sort Manufacturing Division, Intel Corporation, Qiriat-Gat, Israel

Preventive maintenance planning is important to avoid unanticipated equipment breakdowns and ensure smooth production.
Previous work seeking to consider mean cycle time in preventive maintenance planning focused on individual toolsets. We
develop a method to jointly determine preventive maintenance plans for a network of G/G/m queues with the objective of
minimizing the system’s mean cycle time. Our contribution is the integration of two methods that have been developed
previously: approximations for the mean cycle time in networks of G/G/m queues and preventive maintenance plan
optimization for a G/G/m toolset in isolation. Preventive maintenance events are considered as non-preemptive high priority
customers in the network model. We conduct numerical experiments via simulation to assess the performance of the
optimization approach. We use network models inspired by publicly available semiconductor industry datasets. The study
suggests that cycle time improvements may be possible via fabricator-level preventive maintenance plan optimization.

Keywords: Preventive maintenance, Queueing network, Semiconductor fabricator optimization, Sensitivity analysis

(Received on January 28, 2016; Accepted on June 20, 2016)

1. INTRODUCTION

To reduce the occurrence of unplanned tool failures, manufacturing systems often rely on preventive maintenance (PM). PM
inserts intentional tool downtime into the tool’s schedule for the service and care of the tool. PMs reduce unplanned tool
failures, improve overall equipment availability and increase production reliability. Much effort has been devoted over the
years to determine the frequency at which a PM should be conducted for a given part; cf. (Cho and Parlar, 1991) and (Dekker,
1996). As a tool consists of many parts, PM activities that occur at similar frequencies are typically grouped together. For
example, all PM activities that should occur once per month are often conducted together when the tool is removed from
production on the first of the month. However, it would be possible to conduct half of these monthly activities on the first of
the month and the other half two weeks into the month. We call such a schedule of activities a PM plan. Focusing on single
toolsets, the optimization of PM plans to minimize mean cycle time has recently been considered in (Kalir, 2013) and
(Morrison et al., 2014). Our goal is to develop a method for PM planning that can be used to simultaneously determine the PM
plans for multiple toolsets operating in an interconnected network. Our intended application area is semiconductor
manufacturing, in which toolsets are connected via complex reentrant process flows.
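For context, such network models typically build on closed-form approximations of the mean cycle time of a single G/G/m toolset; the sketch below shows a Sakasegawa-style approximation with purely illustrative parameter values (it is background, not the optimization method developed in this paper).

```python
# Sketch of a standard G/G/m mean cycle time approximation (Sakasegawa-style).
# Parameter values are illustrative; the formula requires utilization u < 1.
import math

def ggm_cycle_time(ca2, cs2, te, m, rate):
    """Approximate mean cycle time (queueing + processing) for a G/G/m toolset.
    ca2, cs2: squared coefficients of variation of interarrival/service times
    te: mean effective process time, m: number of tools, rate: arrival rate."""
    u = rate * te / m                                   # toolset utilization
    wq = ((ca2 + cs2) / 2.0) * (u ** (math.sqrt(2 * (m + 1)) - 1) / (m * (1 - u))) * te
    return wq + te

print(ggm_cycle_time(ca2=1.0, cs2=1.5, te=2.0, m=3, rate=1.2))
```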
Construction costs for modern semiconductor wafer fabricators (fabs) are around US$5 billion (Global Semiconductor
Forum, 2011). Individual tools cost US$1M – US$100M (Lapedus, 2010). Fabs host what is arguably the most complex of
manufacturing processes. Fabs and the wafer manufacturing process are characterized by hundreds of tools, complex
reentrant process flows, tight process specifications, hundreds of steps per product, significant levels of automation and many
other complexities. On account of such costs and complexities, operating decisions for fabs must be carefully considered. PM
planning optimization can support fab efforts towards efficiency.
Once the PM frequency has been determined for each part (the service frequency), the PM plan should be determined. In
(Kalir, 2013), a nonlinear optimization model was formulated that seeks to adjust the PM plans to minimize the mean cycle
time for a G/G/m queue. They relied on an approximation formula for the mean cycle time in G/G/m queues and considered
PM activities that all have the same service frequency (e.g., all activities are monthly activities). In (Morrison et al., 2014),
those concepts were extended to allow for the simultaneous consideration of PM activities with different service frequencies
International Journal of Industrial Engineering, 23(5), 318-331, 2016

ANALYZING TFT-LCD ARRAY BIG DATA FOR YIELD ENHANCEMENT AND AN EMPIRICAL STUDY OF TFT-LCD MANUFACTURING IN TAIWAN

Pei-Chun Chu, Chen-Fu Chien*, and Chia-Cheng Chen

Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan
*Corresponding author’s e-mail: cfchien@mx.nthu.edu.tw

The flat panel display industry has invested considerable resources in constructing large-size panels, which has rendered the process
more complex and resulted in various defects and low yield. Engineers rely on their domain knowledge or rules of thumb for
troubleshooting; however, limited domain knowledge, insufficient experience, faulty generalization, and bounded rationality
lead to ineffective judgment. The objective of this study was to develop a framework for data mining and knowledge
discovery from a database; the Kruskal–Wallis test and a decision tree were used to investigate a large amount of thin film
transistor-liquid crystal display (TFT-LCD) manufacturing data and determine the possible causes of faults and
manufacturing process variations. An empirical study was conducted at a TFT-LCD company in Taiwan, and the results
demonstrated the practical viability of the framework.

Keywords: data mining; decision tree; kruskal–wallis test; yield enhancement; TFT-LCD; mura; manufacturing intelligence

(Received on April 14, 2016; Accepted on July 6, 2016)

1. INTRODUCTION

The flat panel display industry's advance into the large-size TV panel market has exacerbated the over-expansion of the
large-size panel market and intensified competition in the flat panel display market (Mathews, 2005). In particular, Taiwan's
production of large-sized thin film transistor-liquid crystal display (TFT-LCD) panels is second only to South Korea's in
worldwide market share, and it faces external threats such as the red supply chain arising from the vertical integration of
Chinese manufacturing companies and the free trade agreement between China and South Korea. Survival has therefore
become more arduous for Taiwanese TFT-LCD manufacturers.
Quality is a crucial factor for sustaining long-term competitiveness in the TFT-LCD industry (Hsu et al., 2010), in which
large-size panel manufacturing imposes more stringent requirements on process stability. When yield issues arise,
engineers must identify the root causes of process-related problems as soon as possible to reduce losses. Most engineers
rely on their domain knowledge and experience to identify abnormalities but overlook critical information hidden in the
large volume of accumulated fabrication process data. As a result, limited domain knowledge, insufficient experience, faulty generalization,
and bounded rationality lead to ineffective judgment.
During a TFT-LCD fabrication process, a large amount of process data is automatically or semi-automatically
recorded and accumulated in an engineering database. Data mining and big data analytics techniques have been developed
to extract potentially useful information and data patterns from big data to support manufacturing intelligence and business
analytics (Chien and Chuang, 2014). In particular, data mining techniques effectively convert plethoric quantities of
complex engineering data into valuable information and knowledge for process improvements and yield enhancement
(Chien et al., 2007). Engineers use the extracted information and knowledge in the manufacturing process as a reference
and basis for advanced investigation of the root causes of the defects (Hsieh and Lu, 2008).
This paper proposes a data mining framework for extracting manufacturing knowledge through process monitoring
and defect diagnosis to remove assignable causes and thus improve the yield. In particular, the Kruskal–Wallis (K–W) test
(Kruskal and Wallis, 1952) and decision tree methodology were applied to analyze abnormal stations and parameters in
TFT-LCD manufacturing. An empirical study was conducted using the real data from a fabrication plant to validate the
proposed framework. The results showed that the framework can reduce the time required for defect diagnosis, narrow its
scope, and derive specific decision rules effectively, thus demonstrating its practical viability.
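As a rough illustration of the two analysis steps named above, the sketch below runs a Kruskal–Wallis test per process parameter and then fits a decision tree on the retained parameters; the file name, column names, and significance level are assumptions, not details of the empirical study.

```python
# Illustrative sketch: Kruskal-Wallis screening of process parameters against
# a defect flag, followed by a decision tree on the retained parameters.
# The data file and column names are hypothetical; all non-label columns are
# assumed to be numeric process parameters.
import pandas as pd
from scipy.stats import kruskal
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("array_process_data.csv")      # hypothetical TFT-LCD data set
label = df["defect_flag"]                        # 1 = defective panel, 0 = good

# Step 1: Kruskal-Wallis test of each parameter, grouped by the defect flag.
suspects = []
for col in df.columns.drop("defect_flag"):
    groups = [df.loc[label == g, col] for g in label.unique()]
    stat, p = kruskal(*groups)
    if p < 0.05:
        suspects.append(col)

# Step 2: decision tree on the suspect parameters to derive interpretable rules.
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(df[suspects], label)
```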



International Journal of Industrial Engineering, 23(5), 332-348, 2016

AN EFFECTIVE THRESHOLD BASED MEASUREMENT TECHNIQUE FOR FALL DETECTION USING SMART DEVICES

Sunder Ali Khowaja, Feri Setiawan, Aria Ghora Prabono, Bernardo Nugroho Yahya* and Seok-Lyong Lee*

Department of Industrial and Management Engineering, Hankuk University of Foreign Studies Global Campus, Yongin-si, Korea
*Corresponding author’s e-mail: bernardo@hufs.ac.kr, sllee@hufs.ac.kr

Falls are among the most critical events for human workers in real-world scenarios and require a timely response from
the emergency team. Although many fall detection devices have been proposed, complex sensor arrangements and response
time remain challenges for automatic detection, particularly in an industrial environment. This paper proposes an
effective fall detection algorithm using a threshold-based measurement approach that consists of two stages. The first stage
optimizes the thresholds from the wearable sensor data and needs to run only once for a specific device. The
second stage detects falls using the inertial units and orientation sensor of smart devices. The
proposed algorithms take into account accelerometer and gyroscope sensors for fall detection and an orientation
sensor to validate the detected fall. The wearable sensors used in this study are very common and thus do not require any
special arrangement to wear. 30% of the fall simulation data were used to acquire the optimized thresholds, whereas 70%
were used to test the proposed algorithm with the optimized thresholds. The experimental results show a better trade-off in
terms of sensitivity, specificity, and detection time compared with existing studies. This study also examines the fall
detection algorithm experimentally by changing the placement of the sensors to three different locations, indicating
the efficacy of the proposed algorithm and its ability to adapt to different smart devices.

Keywords: fall detection, wearable sensors, human workers, optimized thresholds, inertial sensors

(Received on August 19, 2016; Accepted on November 15, 2016)

1. INTRODUCTION

Falls are widely considered a major problem for humans, resulting in reduced life expectancy, increased demand for
health care services, and more injuries (Rubenstein, 2006). A study shows that more than a quarter of overall
fatalities in the US occur in industrial environments (Zhang et al., 2015), and among these fatalities, falling is considered a
major safety risk according to the US Bureau of Labor Statistics (2014). Therefore, a fall detection plan needs to
be implemented and maintained in every industry (Sulankivi et al., 2010). One of the main objectives of industries in
developed countries, such as pharmaceuticals, chemicals, semiconductors, manufacturing, power, and food processing,
is to reduce insurance, failure, and maintenance costs (Hayes & Capretz, 2014). Maintenance cost is outside our scope because
it is purely related to performance, but insurance costs are tied to the health risks of workers, and most of
these risks are associated with falls in industrial environments. Beyond the economic benefits, fall detection can
help mitigate the psychological impact caused by a fall. According to the studies conducted by Wild et al. (1981) and
Cumming et al. (2000), a prolonged lying period, i.e., the time the person remains unconscious after the fall without any
medical attention, not only reduces an individual's self-confidence but also keeps them in continuous fear of falling.
Therefore, a fall detection algorithm should be designed with the least possible detection time so that the emergency team can be notified
in time to provide medical attention.
In a particular industry such as semiconductors, falls are reported as the second most frequent work-related incident, accounting for 13.6%
of total cases (Lee et al., 2010). Previous work also reported that falls occur onto walkways or working
surfaces (McCurdy & Lassiter, 1989). An ergonomic assessment of the semiconductor
manufacturing industry mentioned that “the workers removing the core with such precarious footing were exposed to the
hazards of being crushed by a falling heater core and falling into sharp sheet metal protective shrouding or possibly contacting
electrical lines” (Alexander & Rabourn, 2001). Consequences of falls in the semiconductor industry include loss of
memory, burns, loss of vision, nerve/organ damage, and even death (Bolmen, 1997). Thus, it is a social responsibility to
develop an approach for detecting falls without delay so that medical assistance and quick support can be provided to the
injured person.
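A minimal sketch of the kind of two-stage threshold check outlined in the abstract, assuming a window of accelerometer and gyroscope samples and placeholder thresholds (the optimized values reported in the paper are not reproduced here):

```python
# Minimal sketch of a threshold-based fall check from accelerometer and
# gyroscope magnitudes, validated by an orientation change. Thresholds and the
# (n, 3) sample-window layout are illustrative assumptions.
import numpy as np

ACC_THRESHOLD = 2.5    # g, placeholder impact threshold
GYRO_THRESHOLD = 3.0   # rad/s, placeholder rotation threshold
ORIENT_CHANGE = 60.0   # degrees, placeholder orientation validation threshold

def detect_fall(acc_xyz, gyro_xyz, pitch_before, pitch_after):
    """Return True if the sample window (arrays of shape (n, 3)) looks like a fall."""
    acc_mag = np.linalg.norm(acc_xyz, axis=1).max()
    gyro_mag = np.linalg.norm(gyro_xyz, axis=1).max()
    impact = acc_mag > ACC_THRESHOLD and gyro_mag > GYRO_THRESHOLD
    orientation_changed = abs(pitch_after - pitch_before) > ORIENT_CHANGE
    return impact and orientation_changed
```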



International Journal of Industrial Engineering, 23(5), 349-371, 2016

PRODUCTION CONTROL UNDER PROCESS QUEUE TIME CONSTRAINTS IN SYSTEMS WITH A COMMON DOWNSTREAM WORKSTATION

Yu-Ting Chen1, Cheng-Hung Wu1,*, Yin-Jing Tien2, Cheng-Juei Yu2

1 Institute of Industrial Engineering, National Taiwan University, Taipei, Taiwan
*Corresponding author’s e-mail: wuchn@ntu.edu.tw
2 Data Analytics Technology & Applications Research Institute, Institute for Information Industry, Taipei, Taiwan

This research develops a dynamic production control method for two-product manufacturing systems under process queue
time (PQT) constraints, in which two different products are processed at two upstream workstations and share a common
downstream workstation. PQT constraints are assumed before the common downstream workstation: the waiting time of
jobs in the downstream queue is constrained by predefined upper limits.
When a work-in-process (WIP) waits in the downstream queue longer than the predefined upper limit, the WIP may be
scrapped or the yield quality may deteriorate seriously. When machines are unreliable, random machine failures cause high
waiting time variance and significantly increase the risk of violating PQT constraints. Therefore, for production systems with
PQT constraints, a robust dynamic scheduling method with real time machine reliability considerations is critical.
In this research, a Markov decision process (MDP) model is developed to explicitly consider real time machine reliability
status and real time WIP distribution. The objective is to find optimal dynamic admission control policies for the upstream
workstations and optimal priority control policies for the downstream common workstation. The MDP model aims to
minimize expected long-run average production costs, which are the sum of waiting costs and scrap costs. To minimize total
costs, a good control model should balance the need for low scrap rates and low waiting costs. In our numerical study, the
robustness of the proposed control method is shown by discrete event simulation. Compared with other control methods in
literature, the proposed method reduces production costs by at least 27.6% on average.

Keywords: production control, equipment health, queue time constraints, common machines, dynamic programming

(Received on March 05, 2016; Accepted on January 5, 2017)

1. INTRODUCTION

This research studies control problems in two-stage production systems with process queue time (PQT) constraints. PQT
constraints are time window constraints that are widely adopted in manufacturing and service industries for quality assurance
purposes. In semiconductor manufacturing, under a PQT constraint, waiting times between two consecutive production steps
are constrained by a predefined upper limit. If the waiting time of a work-in-process (WIP) exceeds the predefined upper
limit, the quality of the WIP may decrease or the WIP may be scrapped.
Real-time equipment health is explicitly considered in this study because machine reliability problems are the major
cause of PQT constraint violations. When machine failures occur, WIPs are forced to wait in front of the affected
workstations. When machines are unreliable, random machine failures cause high waiting time variance and consequently
result in a high risk of violating PQT constraints.
The objective of this study is to develop a production control model for manufacturing systems with common machines
under PQT constraints. A common machine is a machine that can perform more than one type of task; in our research,
the common machine is located at a downstream workstation. Scheduling a common machine is daunting because it
affects the production efficiency of several related products. In re-entrant semiconductor manufacturing systems, a
tool group is often used to handle multiple operations, and when multiple operations share the same tool group, the tools
become common machines for those operations. With the PQT constraints and the common downstream workstation involved,
the complexity of the scheduling problem increases significantly.
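As a toy illustration of the dynamic control idea (not the paper's MDP formulation), the sketch below runs value iteration for a single release decision in front of a PQT-constrained queue served by an unreliable tool; the state space, dynamics, and cost figures are deliberately simplified placeholders.

```python
# Toy value-iteration sketch of an admission decision: release a job to a
# PQT-constrained downstream queue or hold it upstream. The state, dynamics,
# and costs are simplified placeholders (a flat "hold" penalty stands in for
# upstream waiting), not the paper's MDP formulation.
import numpy as np

Q_MAX, LIMIT = 10, 6            # queue capacity and PQT-safe queue length
P_FAIL = 0.2                    # probability the downstream tool is down this period
WAIT_COST, SCRAP_COST, HOLD_COST = 1.0, 50.0, 5.0
GAMMA = 0.95                    # discount factor

def stage_cost(q, release):
    wip = min(Q_MAX, q + release)
    return (WAIT_COST * wip
            + SCRAP_COST * max(0, wip - LIMIT)     # PQT violations scrapped
            + HOLD_COST * (1 - release))           # penalty for holding upstream

def q_value(q, release, V):
    wip = min(Q_MAX, q + release)
    served = max(0, wip - 1)                       # tool up: one job completed
    return stage_cost(q, release) + GAMMA * (P_FAIL * V[wip] + (1 - P_FAIL) * V[served])

V = np.zeros(Q_MAX + 1)
for _ in range(500):                               # value iteration
    V = np.array([min(q_value(q, a, V) for a in (0, 1)) for q in range(Q_MAX + 1)])

policy = [min((0, 1), key=lambda a: q_value(q, a, V)) for q in range(Q_MAX + 1)]
print("release decision by downstream queue length:", policy)
```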



International Journal of Industrial Engineering, 23(5), 372-381, 2016

ERROR-SMOOTHING EXPONENTIALLY WEIGHTED MOVING AVERAGE FOR IMPROVING CRITICAL DIMENSION PERFORMANCE IN PHOTOLITHOGRAPHY PROCESS

Chia-Yu Hsu1 and Jei-Zheng Wu2,*

1 Department of Information Management, Yuan Ze University, Taoyuan, Taiwan
2 Department of Business Administration, Soochow University, Taipei, Taiwan
*Corresponding author’s e-mail: jzwu@scu.edu.tw

The increasingly stringent tolerance of linewidths is a result of shrinking feature size of integrated circuits, and thus the
manufacturing process in wafer fabrication should be accurately controlled to maintain process yields. Critical dimension
(CD) is defined as the minimum width of a photoresist line or space printed on an exposure pattern by a stepper or scanner in
photolithography. The CD is measured using metrology equipment and is compensated by modifying the corresponding
equipment setup parameters. A feedback message is then sent to the next wafer for pre-adjustment and a feedforward message
is sent to the previous wafer for post-adjustment. This study proposes a manufacturing intelligence framework to
improve CD performance in the photolithography process. The input recipe for the next run is updated based on recently
measured process data through a modified controller called the error-smoothing exponentially weighted moving average
(E-EWMA); both process and information flows are considered. A case study of run-to-run process control is conducted
to compensate for process variation and demonstrate the proposed framework. The results demonstrate that the proposed
E-EWMA outperforms the conventional EWMA used in the company.

Keywords: adaptive process control; manufacturing intelligence; yield enhancement; semiconductor manufacturing

(Received on May 15, 2016; Accepted on January 5, 2017)

1. INTRODUCTION

High-tech companies face global competition. Thus, the extraction of manufacturing intelligence (MI) from
manufacturing-related databases is an effective approach to improve decision quality in these companies (Chien et al., 2010;
Kuo et al., 2010). Specifically, MI can be used to facilitate data value development, manufacturing knowledge discovery, and
competitive advantage preservation.
Photolithography has become a critical process for wafer fabrication because of the shrinking feature size of integrated
circuits (ICs). Photolithography typically involves seven steps, namely, priming, spin coating, soft bake (SB), exposure,
post-exposure bake (PEB), development, and hard bake (HB). Following the performance of these processes, metrology is
used to evaluate whether or not the critical dimension (CD) is within the tolerance. The CD is defined as the minimum width
of the photoresist line or space printed on an exposure pattern by a stepper or scanner. Technological advancements have led
to increasingly stringent tolerance of linewidths. Therefore, it is necessary to control CD variation within the tolerance to
maintain process yields. Particularly, the CD can be compensated by adjusting the corresponding equipment setup parameters
including the energy dose, focus, PEB, photoresist, SB, and HB (Lachman-Shalem et al., 2002, Chemali et al., 2004,
Grosman et al., 2005).
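Such compensation is typically closed in a run-to-run loop, discussed next. As a point of reference, the sketch below shows a conventional single-EWMA run-to-run controller of the kind the proposed E-EWMA modifies; the target, process gain, and smoothing weight are illustrative assumptions.

```python
# Minimal sketch of a conventional single-EWMA run-to-run controller for CD
# compensation. The process model y = a + BETA * u, the target, and lambda are
# illustrative assumptions, not the company's actual recipe parameters.
TARGET = 90.0        # nm, desired critical dimension
BETA = -0.8          # assumed process gain: nm of CD change per unit of dose adjustment
LAM = 0.3            # EWMA smoothing weight

def ewma_r2r(measurements, u0=0.0, a0=TARGET):
    """Yield the recipe adjustment u for each run given measured CD values."""
    a, u = a0, u0
    for y in measurements:
        a = LAM * (y - BETA * u) + (1 - LAM) * a   # update disturbance estimate
        u = (TARGET - a) / BETA                    # recipe for the next run
        yield u

for u in ewma_r2r([91.2, 90.5, 92.0, 89.7]):
    print(round(u, 3))
```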
In order to compensate for process variations, run-to-run (R2R) control techniques are widely used in semiconductor
manufacturing processes such as chemical vapor deposition (Ingolfsson and Sachs, 1993), gate etching (Butler and Stefani,
1994), epitaxy deposition (Sachs et al., 1995), critical dimension controls (Jedidi et al. 2011), chemical mechanical polishing
(Boning et al., 1996, Smith and Boning, 1997, Del Castillo and Yeh, 1998, Chen and Guo, 2001; Chen and Chung, 2010),
metal sputter deposition (Smith and Boning, 1998), overlay (Bode et al, 2004, Chien and Hsu, 2011; Chien et al., 2014), and
shallow trench isolation (Wang et al., 2005). Moyne et al. (2001) defined R2R control as “a form of discrete process and
machine control in which the product recipe with respect to a particular machine process is modified ex situ, i.e., between



International Journal of Industrial Engineering, 23(6), 382-398, 2016

CONTROL CHART APPLICATION TO MONITORING MICROFINANCE INSTITUTION'S MISSION CHANGE FOR WOMEN BORROWERS UNDER ADVERSE ECONOMY

So Young Sohn*, Eun Jeong Ji, and Eun Jin Han

Department of Information and Industrial Engineering, Yonsei University, Seoul, Korea
*Corresponding author’s e-mail: sohns@yonsei.ac.kr

One of the main missions of Microfinance Institutions (MFIs) is to improve the status of under-privileged people. However, an MFI's
mission drift or enhancement can occur under adverse economic conditions. In this paper, we investigate the direction of
changes in supporting women borrowers, who make up the majority of the poor, by using the lending patterns of Latin
American and Caribbean MFIs during the recent global financial crisis. A P control chart is applied to detect assignable changes
in the lending rate to women across different MFIs. To identify the characteristics related to the outlying MFIs identified
by the P control chart, we perform a logistic regression analysis. Our results can contribute to understanding MFIs'
mission change in terms of supporting women borrowers and to identifying the related characteristics of such MFIs.

Keywords: microfinance Institutions; control chart; logistic regression; women borrowers; Latin America and the Caribbean,
mission drift.

(Received on September 15, 2014; Accepted on February 21, 2016)

1. INTRODUCTION

Poor households seek capital to start their small business, but their lack of collateral restricts access to loans (Johnston and
Morduch, 2008). Low-income people have a variety of financial service needs, but they have difficulty obtaining formal
financial services (Brau and Woller, 2004). Under these circumstances, Microfinance Institutions (MFIs) support informal
activities and often provide a low interest rate to their users. Microfinance programs grew rapidly in the 1990s, and
this enthusiasm for microfinance brought about an increase in the number of MFIs in the developing world (McIntosh and
Wydick, 2005). Thousands of MFIs provide financial services to an estimated 100-200 million of the world’s poor (Brau and
Woller, 2004). Microcredit programs have generated positive results for financial services that can help improve the
well-being of the poor including women (Vonderlack and Schreiner, 2002; Haile et al., 2012).
In view of the fact that women are more likely to spend assets in ways that benefit the total household than men (Ngo
and Wahhaj, 2012), they are served by MFIs that provide access to credit for business investments (Cheston and Kuhn, 2002).
According to the World Bank's gender statistics database, the worldwide labor force participation rate for females was 51.7% in
2010. In particular, the female labor participation rate in Latin America and the Caribbean grew rapidly from 36% in 1980 to
52% in 2010. Cheston and Kuhn (2002) showed that "reinforcing women’s economic contribution to their families and
communities plays an important role in empowering them". Due to women’s social advancement, the importance of
microfinance programs to women is increasing.
Moreover, despite daily hardships, most women have excellent repayment records (Cheston and Kuhn, 2002).
D'espallier et al. (2011) found that "women are generally better in terms of credit risks in microfinance than men". The
authors confirmed that "a higher percentage of women borrowers in MFIs are associated with lower portfolio risk, fewer
write-offs, and fewer provisions". Therefore, during a rapid economic deterioration such as the 2008 global financial crisis, one
might expect MFIs to provide more loans to women customers, who have relatively lower credit risk than men,
as a form of risk management.
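As a rough illustration of how the P control chart mentioned in the abstract can flag such a shift, the sketch below computes 3-sigma limits for the proportion of women borrowers per MFI; the rates and subgroup sizes are made-up placeholders, not data from the study.

```python
# Sketch of P-chart limits for the share of women borrowers per MFI.
# p_bar and the per-MFI figures are illustrative placeholders.
import math

def p_chart_limits(p_bar, n):
    """3-sigma control limits for a proportion with subgroup size n."""
    half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - half_width), min(1.0, p_bar + half_width)

p_bar = 0.62                       # assumed overall women-borrower rate
for mfi, p, n in [("MFI-A", 0.55, 400), ("MFI-B", 0.71, 250)]:
    lcl, ucl = p_chart_limits(p_bar, n)
    flag = "out of control" if (p < lcl or p > ucl) else "in control"
    print(mfi, flag)
```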
On the other hand, MFIs may move away from the social mission of poverty alleviation towards an emphasis on their
financial sustainability, because many MFIs still depend on donations to cope with high transaction and
information costs (Hermes and Lensink, 2007). MFIs may shift their focus to wealthier borrowers who can take larger loans in
order to increase the average loan size as a cost-effective way of operating (Kar, 2010). In addition, "since commercialization is
being accompanied by larger loan sizes and less focus on women, recent commercialization trends in microfinance are bad for



International Journal of Industrial Engineering, 23(6), 399-411, 2016

OPTIMUM MULTI-PERIOD, MULTI-PLANT, AND MULTI-SUPPLIER PRODUCTION PLANNING FOR MULTI-GRADE PETROCHEMICALS

Mehdi Mrad1,*, Hesham K. Alfares2

1 Department of Industrial Engineering, King Saud University, Riyadh, Saudi Arabia
*Corresponding author’s e-mail: mmrad@ksu.edu.sa
2 Systems Engineering Department, King Fahd University of Petroleum & Minerals, Dhahran, Saudi Arabia

A multi-period production and inventory control problem for a multi-grade, multi-supplier petrochemical product is
formulated as a Mixed Integer Linear Program (MILP) and then optimally solved. Raw materials are available from several
suppliers, and several plants (chemical reactors) are used for making the petrochemical product. Several grades of the
petrochemical product can be produced by changing the conditions inside each reactor. During transitions from one grade to
another, certain amounts of off-spec material are produced. The quantity of off-spec production is sequence dependent, i.e. it
depends on the two grades between which the transition takes place. The objective is to maximize the total profit, which is
equal to the sale revenue of all regular grades and off-spec materials, minus the raw material costs and inventory holding
costs.

Keywords: petrochemical production, optimization, multi-grade petrochemicals, production and inventory control,
graphical model, mixed integer programming

(Received on March 19, 2015; Accepted on December 10, 2016)

1. INTRODUCTION

The petrochemical industry has a huge global market, with an estimated value of $609.30 billion in 2012 (Visiongain, 2012).
Overall global demand for petrochemicals is growing rapidly at an annual pace of 4.4%, and global demand for basic
chemicals and plastics alone is expected to reach one billion metric tons in 2020 (Eramo, 2012). Global competition is also
increasing due to significant growth in additional production capacity, especially in the Middle East and Asia. There is
increasing pressure to meet the growing demand while keeping costs under control in the face of greater competition.
Consequently, petrochemical companies are trying to achieve greater efficiency through the optimum use of limited
production resources, and optimization models and algorithms are becoming more important and applicable in the
petrochemical industry.
The production of petrochemical products requires complex processes, involving many interacting and dynamically
changing chemical and physical variables. For many petrochemical products, such as polyethylene (PE) and polypropylene
(PP), altering the conditions in the chemical reactor during the production process, such as temperature, pressure, and raw
material inputs, allows different grades of the same petrochemical product to be produced. Shifting from one grade to another
requires a gradual change of the conditions inside the reactor. During the transition time between two grades, the material
produced does not conform to the specification of either grade, and hence this material is considered as lower-value off-spec
production. The quantity of off-spec material produced while shifting between two grades depends on the initial grade and
the final grade that the shift is made between. Given the sequence-dependent quantity of off-spec production, production
planning has to determine not only the quantity of each grade, but also the sequence of producing these grades.
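As a small illustration of why the sequence matters, the sketch below totals the off-spec quantity of a candidate grade sequence from a transition matrix; the grades and transition losses are invented placeholders, not figures from the model.

```python
# Illustrative sketch of the sequence-dependent off-spec calculation that the
# planning model must account for: the off-spec tonnage of a sequence is the
# sum of transition losses between consecutive grades. Values are placeholders.
OFFSPEC = {                      # tonnes of off-spec per transition (assumed)
    ("A", "B"): 12, ("B", "A"): 15,
    ("A", "C"): 20, ("C", "A"): 18,
    ("B", "C"): 9,  ("C", "B"): 11,
}

def total_offspec(sequence):
    return sum(OFFSPEC[pair] for pair in zip(sequence, sequence[1:]))

print(total_offspec(["A", "B", "C"]))   # 12 + 9 = 21
print(total_offspec(["A", "C", "B"]))   # 20 + 11 = 31
```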
In order to produce the different petrochemical grades, several raw materials need to be purchased from multiple
suppliers. Therefore, production planning has to determine the amount of each raw material to buy from each available
supplier during each time period. To satisfy fluctuating demands over multiple time periods, it might be necessary to produce
some of the grades earlier and to store them in the inventory. Hence, another task for production planning is to determine the



International Journal of Industrial Engineering, 23(6), 412-430, 2016

THE COST-ORIENTED STOCHASTIC ASSEMBLY LINE BALANCING PROBLEM: A CHANCE CONSTRAINED PROGRAMMING APPROACH

Ahad Foroughi1,*, Hadi Gökçen2, and Lorenzo Tiacci3

1 Department of Industrial Engineering, Ondokuz Mayıs University, Samsun, Turkey
Corresponding author’s e-mail: ahad.foroughi@omu.edu.tr
2 Department of Industrial Engineering, Gazi University, Ankara, Turkey
3 Department of Industrial Engineering, Perugia University, Perugia, Italy

In recent years, the increase in industry competition has caused the costs of production to become a critical success factor.
A considerable proportion of manufacturing activities and costs is devoted to the assembly of products. Therefore, in
successful assembly line production systems, a reduction in production costs is a necessity. From an economic point of
view, cost-oriented objectives for assembly line balancing should be considered. This issue is addressed as a cost-oriented
assembly line balancing problem, in order to fill a gap in the existing literature on its stochastic version. A
chance-constrained programming model is proposed in this paper; to linearize
the model, two methods have been adopted from Ağpak and Gökçen (2007): an approximation method and an exact one.
The resulting models have been tested on different problems in order to assess their effectiveness. A comparative study
has also been conducted for the cost- and time-oriented versions of the problem.

Keywords: cost-oriented assembly line balancing, stochastic task time, chance constraint programming

(Received on June 24, 2015; Accepted on November 18, 2016)

1. INTRODUCTION

An assembly line is a sequence of workstations connected together by a material handling system. It is used to assemble
components into a final product. Assembly line balancing (ALB), as one of the classic industrial engineering problems,
has received considerable attention in the literature since its formulation by Bryton (1954). The problem is to assign
feasible work elements to each work station so that all precedence relations and time constraints are satisfied and an
objective function is optimized.
The main constraints of an ALB problem are as follows (Ozcan et al., 2010): (i) each task must be assigned to
exactly one station (the assignment constraint); (ii) all precedent relationships among tasks must be satisfied (the
precedence constraint); and (iii) the total task time of all the tasks assigned to a station cannot exceed the cycle time (the
cycle time constraint).
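To make constraint (iii) concrete in the stochastic setting studied here, the sketch below checks the usual deterministic equivalent of a chance-constrained cycle time condition for a single station, assuming independent, normally distributed task times; the task time figures are placeholders, and this only illustrates the general form rather than the paper's exact model.

```python
# Sketch of the deterministic equivalent of a chance-constrained cycle time
# check for one station, assuming independent, normally distributed task times.
# The means, variances, and confidence level are placeholder values.
from math import sqrt
from scipy.stats import norm

def station_feasible(mu, var, cycle_time, alpha=0.05):
    """Is P(sum of the station's task times <= cycle_time) >= 1 - alpha?"""
    z = norm.ppf(1 - alpha)
    return sum(mu) + z * sqrt(sum(var)) <= cycle_time

print(station_feasible(mu=[4.0, 3.5, 2.0], var=[0.4, 0.3, 0.2], cycle_time=11.0))
```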
The majority of studies have been concerned with improving the accuracy and efficiency of algorithms and
procedures for balancing assembly lines, and the major goal pursued in solving the ALB problems is to minimize the
number of stations (line length) for a fixed cycle time or to minimize the cycle time for a given number of stations. These
types of problems are called “time-oriented” assembly line balancing problems (Amen, 2000a); however, in today’s
competitive world, reducing the cost of production is a major priority for manufacturing enterprises. Assembly is one
of the most important production stages in the manufacturing industry, and the vast majority of the workers
employed in industrial sectors such as the automotive and electronics industries work on assembly lines. About 40–60%
of the total cost of many industrial products is related to assembly. Therefore, a large decline in production costs
in the assembly stage has a great impact on the final costs of manufacturing companies’ products as well as on the
companies’ efficiency. In practice, the line balancing problem arises in two different situations: first,
during the installation of a new line, and second, when a line needs to be rebalanced. Rebalancing is essential when a
change in products is necessary or when mixed models are being assembled in the line. In this case, the objective is the
best possible use of the available resources to minimize the total costs per product unit rather than to minimize the cycle



International Journal of Industrial Engineering, 23(6), 431-444, 2016

AIRPORT GATE ASSIGNMENT FOR IMPROVING TERMINALS’ INTERNAL GATE EFFICIENCY

Jaehwan Lee1, Hyeonu Im1, Ki Hong Kim1, Sha Xi2, and Chulung Lee3,*

1 Department of Industrial Management Engineering, Korea University, Seoul, Korea
2 School of Economics and Management, Tianjin Chengjian University, Tianjin, China
3 Division of Industrial Management Engineering, Korea University, Seoul, Korea
*Corresponding author’s e-mail: leecu@korea.ac.kr

This paper considers airport gate assignment (AGA) to evenly distribute passengers to airport internal gates (landside gates),
where critical processes such as security checks, immigration, and customs are performed. The paper focuses on AGA and the
efficiency of landside gates. The AGA determines departing passengers' entering gates and arriving passengers' initial locations
in an airport. The initial location may determine a passenger's landside gate selection, depending on walking distance, which
can cause congestion at some landside gates and idling at others. To remove this inefficiency, we adapt and modify an AGA model using
quadratic mixed integer programming so that gates are assigned with balanced passenger flow. We then performed a
simulation experiment to verify the effect of the proposed model on internal gate efficiency. The result shows that the
proposed model reduces the passenger processing time.

Keywords: airport; gate assignment; line balancing; utilization; mixed integer programming

(Received on November 30, 2013; Accepted on December 10, 2016)

1. INTRODUCTION

After deregulation, airlines employed a hub-and-spoke system to meet the significantly growing market demand. A
hub-and-spoke system does indeed improve airport capacity utilization and increase carriers' frequencies between origins and
destinations at lower costs. Such a transformation, however, increases congestion in landside operations, since the
non-expandable facilities cannot accommodate the increased number of passengers.
At a major hub airport, such rapid growth could lead to airport congestion, particularly at the landside gates. In any
peak time (bank), a large number of flights arrives at the airport, and passengers transfer to other flights, complete
their arrival procedures, and start boarding within a short period of time. A great number of passengers simultaneously landing,
transferring, and boarding challenges the airport operation system, especially the landside gate processes. Thus, even
without the expansion of airport facilities, the landside process must be made more efficient than what general airport gate
assignment (AGA) achieves.
Furthermore, some major airports, designed and equipped with sufficient facilities, still fail to handle passenger flows in an
efficient manner. Figure 1, for example, shows an unbalanced workload across landside gates. Managers worry most about
handling passengers during peak times and try to reduce the processing time for security checks, customs,
immigration, and so on. To solve the problem in an integrated manner, however, the unbalanced usage of landside gates must be
equalized, because when the usage of landside gates is balanced, the throughput time of passengers can
be reduced. This paper considers an AGA model to equalize passengers' biased usage of landside gates and verifies that such a
model can improve both landside gate efficiency and passenger throughput time. To address the problem of balancing
usage of landside gates with AGA, we reviewed the previous literature related to improving airport operations efficiency.
AGA for airport operations can be divided into two segments: the first is airside oriented, and the other is
landside oriented. Both classifications have attracted great interest from many researchers (Ding et al. 2004).
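As a toy illustration of the balancing objective behind the proposed model (which itself is a quadratic mixed integer program), the greedy sketch below assigns flights to gates so that the passenger load per gate stays even; the flight names and passenger counts are invented.

```python
# Toy load-balancing sketch: assign flights (with passenger counts) to gates so
# that the load per gate stays even. This greedy heuristic only illustrates the
# balancing objective; the paper's model is a quadratic mixed integer program.
import heapq

flights = {"KE081": 320, "OZ204": 280, "7C110": 180, "KE123": 150, "TW301": 90}
gates = ["G1", "G2", "G3"]

load = [(0, g) for g in gates]          # (current passengers, gate) min-heap
heapq.heapify(load)
assignment = {}
for flight, pax in sorted(flights.items(), key=lambda kv: -kv[1]):
    current, gate = heapq.heappop(load) # gate with the lightest load so far
    assignment[flight] = gate
    heapq.heappush(load, (current + pax, gate))

print(assignment)
```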



International Journal of Industrial Engineering, 23(6), 445-448, 2016

AGENT-BASED SIMULATION OF EMERGENCY EVACUATION FOR NUCLEAR PLANT DISASTER

Kyoungseok Na1, Gyu M. Lee2,*

1 School of Information and Computer Engineering, Pusan National University, Busan, Korea
2 School of Industrial Engineering, Pusan National University, Busan, Korea
*Corresponding author’s e-mail: glee@pusan.ac.kr

Effective and efficient emergency evacuation planning is critical to the safety and welfare of residents in
the case of natural or man-made disasters. The recent nuclear reactor accident at Fukushima caused the residents of Gijang to be
concerned about the safety of their local nuclear plants. Nuclear reactor accidents are very rare but may cause
serious consequences that last for a significant period. We conducted agent-based simulation studies of emergency
evacuation of the Gijang area in the case of a nuclear plant disaster. The developed simulation tool can be used to validate various
evacuation strategies and support well-designed preparedness.

Keywords: emergency evacuation, nuclear plant disaster, simulation, system architecture

(Received on July 28, 2014; Accepted on October 17, 2016)

1. INTRODUCTION

Kori Nuclear Site at Gijang currently operates 6 nuclear plants and is building the third and fourth New-Kori plants. In
addition, Nuclear Medical & Science Complex and new research reactors are being built at Gijang. The recent nuclear reactor
accident at Fukushima caused the residents at Gijang to be concerned about the safety of their local nuclear plants.
Nuclear reactor accidents are very rare, but they may cause serious consequences that last for a significant period.
In particular, Gijang is located next to Busan, where a large population is concentrated. Hence, the importance of an effective and
efficient emergency evacuation plan, in addition to well-designed nuclear disaster prevention measures, cannot be overemphasized.
The central government has studied the safety issues of nuclear plants and developed prevention systems. However,
the local government also needs its own emergency evacuation plan for its residents, considering geographical and local
factors such as the various modes of transportation and the transportation and shelter systems.
A nuclear power plant is designed and built to safely withstand various natural and other severe events and is staffed
by highly trained operators. An emergency plan must specify response capabilities and preplanned strategies that would be
used in the event of a severe accident. An effective emergency response is the result of mutually supportive planning and
preparedness among several entities: local and central agencies; and private and nonprofit groups that provide emergency
services.
The U.S. Nuclear Regulatory Commission and the U.S. Environmental Protection Agency jointly concluded that the
most significant impacts of a nuclear energy facility accident would be experienced in the immediate vicinity.
At greater distance from the facility—beyond a 10-mile radius—the principal health concern in the event of an accident
would be consumption of contaminated water, milk, or food. They recommended two planning zones: (1) a 10-mile emergency
planning zone (EPZ) to protect communities near the facility from radiation exposure in the event of an accident, and (2) a
50-mile zone within which food products, livestock, and water would be monitored to protect the public from radiological
exposure through consumption of contaminated foods [1]. Within the 10-mile EPZ, the immediate protective actions for the
public would include instructions for sheltering in place and evacuation. This study focuses on this emergency planning
of sheltering and evacuation in the Gijang area.
As part of emergency preparedness for the nuclear power plants, the Gijang government developed a plan and procedures for
disastrous events, but their effectiveness has not been proven and no quantitative validation has been performed before. Hence,



International Journal of Industrial Engineering, 24(1), 1- 11, 2017

A MODEL FOR DETERMINING OPTIMAL BATCH SIZES OF MULTI-FEATURED PRODUCTS WITH RANDOM PROCESSING ACCURACIES UNDER QUALITY AND COST CONSTRAINTS

Dongmin Shin, Jeong-Yeon Kim, Geun-Ho Cho, and Sun Hur*

Department of Industrial & Management Engineering, Hanyang University, Ansan, Korea
*Corresponding author’s e-mail: hursun@hanyang.ac.kr

Determining optimal batch sizes in a production system has been a primary focus in the manufacturing sector. From the
volume of research, it is well known that the batch size of a manufacturing process is significantly affected by several cost
factors such as setup cost, order cost, and defective cost. In this paper, a model for determining the batch size for a
multi-featured products production system is developed with consideration of quality and cost constraints under
stochastically changing process accuracies. In determining the batch size of products, we consider tool capability and
feature-based set-up accuracy. A mathematical structure of the model is presented and its applicability is demonstrated
through illustrative examples.

Keywords: batch size; cost-quality trade-off; multi-featured product; probabilistic process

(Received on January 05, 2015; Accepted on March 22, 2017)

1. INTRODUCTION

In manufacturing systems, determining batch sizes has received much attention from economic and practical perspectives.
Manufacturing cost per unit product and quality of products within a batch are significantly influenced by the size of a batch.
In particular, when a product consists of several features, it is necessary to take the quality of each individual feature into
special consideration to assure the required quality level and manufacturing cost.
In a manufacturing system producing multi-featured products, defectives arise from several sources, including process
variability and product variability. The former refers to the variability that stems from machines, tools, materials, and fixtures,
to name a few. The latter can usually be characterized by the extent to which produced parts meet the requirement of design
specifications. As such, process variability and product variability have a significant impact on the quality of final products.
In addition to the quality of products which can usually be measured by the proportion of defectives to the final products,
manufacturing cost also needs to be taken into account in determining batch sizes. It is fundamentally associated with several
factors such as setup cost, ordering cost, and defective cost. Depending on different batch sizes, the manufacturing cost per
unit product varies. In particular, all features are subject to variability incurred by setups or machining
processes, and the inherent variability of a manufacturing process is inevitably a source of defective products. As a result,
more features bring more variability, making processes more vulnerable to defective operations. Considering
that even a small number of defective features in a sophisticated product can incur rework or scrap costs, we argue that the defective cost
will be higher when a product contains more features.
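A small numeric illustration of this argument, under the simplifying assumption that features fail independently with a common per-feature yield (a placeholder value, not from the paper):

```python
# With each feature subject to its own setup/tool variability, the chance that
# a unit has every feature within specification falls quickly as the number of
# features grows. The per-feature yield is an assumed placeholder.
per_feature_yield = 0.99

for n_features in (1, 5, 10, 20):
    unit_yield = per_feature_yield ** n_features
    print(f"{n_features:2d} features -> expected good-unit rate {unit_yield:.3f}")
```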
This paper presents a mathematical model for determining batch size under quality and cost constraints for a
multi-featured product production system. It introduces two types of variability, arising from tool and setup accuracy, at the
same time. Our research is unique in that it assumes the variability changes as the manufacturing process runs, reflecting
the fact that the processing accuracy of machining operations inevitably varies through time. Indeed, tool and setup
accuracies vary as the machining process goes on because the products are exposed to vibration from both the fixture and the
tool.
As far as our survey is concerned, no previous work has taken the time-varying characteristics of accuracy into
consideration in its models. In this regard, the model presented in this paper can be useful in determining batch sizes in
response to dynamically changing process variability. In contrast to the traditional batch size determination approach, in which a
batch size is predetermined at the beginning of a process, the proposed method provides a responsive batch size determination
technique that takes process capability and manufacturing cost into consideration. In determining batch sizes of products,



International Journal of Industrial Engineering, 24(1), 12-31, 2017

A NEW METHODOLOGY BASED ON MULTISTAGE STOCHASTIC PROGRAMMING FOR QUALITY CHAIN DESIGN PROBLEM

Taha-Hossein Hejazi1,*, Mirmehdi Seyyed-Esfahani2, Jiju Antony3

1 Department of Industrial Engineering, Amirkabir University of Technology (Tehran Polytechnic), Garmsar, Iran
*Corresponding author’s e-mail: t.h.hejazi@aut.ac.ir
2 Department of Industrial Engineering & Management Systems, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran
3 Department of Business Management, Heriot-Watt University, Edinburgh, UK

In multi-stage manufacturing/service systems, the quality of final products depends on several decision variables and design
factors from several stages of operations, as well as environmental and operational nuisance factors. Typically, the effects of
factors in one stage might remain significant at subsequent stages. Due to the high degree of interdependency among the
variables within and between stages, multistage quality control methods have recently attracted special attention. Multi-response
optimization is a well-grounded method for offline quality design that can consider several inputs and outputs. This study
introduces a new multiresponse surface methodology by proposing two different modeling approaches for quality
optimization in multistage systems with multiple response variables. Several stochastic parameters, including response
surfaces and covariates, are considered in the proposed models. In order to cope with the uncertainty, multistage stochastic
programming is applied with a scenario generation algorithm based on the Nataf transformation for correlated parameters. Also,
a comprehensive numerical analysis is performed to give more insight into the application of the proposed approach.

Keywords: design of experiments, multiresponse optimization, multistage stochastic programming, robust design, quality
chain design.

(Received on February 19, 2015; Accepted on April 10, 2017)

1. INTRODUCTION

Quality control and improvement techniques can be classified into offline and online methods. Offline quality control
methods are related to technology developments, and online methods try to control and manage daily activities. Moreover,
offline quality control consists of system design, parameter design, and tolerance design (Taguchi, Chowdhury, Wu, Taguchi,
& Yano, 2005). Several methods and approaches have been developed to study this kind of problem in quality control.
Among them, Design of Experiments (DOE) with Response Surface Methodology (RSM) plays an important role, especially
in the parameter design problem. DOE aims to identify factors and extract their effect on the response set with fewer
experiments. One of the widely applied methods associated with DOE is RSM, which systematically finds the
empirical model between the input and output sets and then optimizes the response surfaces represented by the
model with respect to the factor settings. For this purpose, three steps are performed: 1) design of experiments, 2) estimation
of a proper relationship model, and 3) optimization of the estimated response surfaces (Myers, Montgomery, &
Anderson-Cook, 2011).
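As a compact, self-contained illustration of these three steps (not the multistage, stochastic formulation developed later in the paper), the sketch below designs a small two-factor grid, fits a second-order response surface, and optimizes it over the experimental region; the simulated process is a placeholder.

```python
# Sketch of the three RSM steps: a small two-factor design, a fitted
# second-order surface, and a grid search over the fitted model. The simulated
# "process" is a placeholder, not data from the numerical analysis.
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = x1.ravel(), x2.ravel()
y = 10 - 2 * (x1 - 0.3) ** 2 - 3 * (x2 + 0.2) ** 2 + rng.normal(0, 0.1, x1.size)

# Fit a full second-order model y ~ 1 + x1 + x2 + x1^2 + x2^2 + x1*x2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Optimize the fitted surface over the experimental region by grid search.
g1, g2 = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                     g1.ravel()**2, g2.ravel()**2, g1.ravel() * g2.ravel()])
best = np.argmax(G @ beta)
print("estimated optimum factor setting:", g1.ravel()[best], g2.ravel()[best])
```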
Nowadays, competitive business environments require high-quality products with more flexible supply chains.
Production and manufacturing systems consist of processes in which some operational factors affect intermediate and final
characteristics of the products (see Figure 1). Today’s manufacturing and service systems can be defined and treated as
multistage systems. The main target of this definition is to find an effective set of factors and consequently, to achieve the
most desirable design producing acceptable outputs.
There are also several characteristics in multistage systems and production chains that motivate the researchers to
propose specific methods and solutions for quality improvements in this context. Some major aspects regarding quality issues
in such systems are expressed as follows (Taha Hossein Hejazi, Seyyed-Esfahani, & Mahootchi, 2013, 2014; Y. Li & Tsung,



International Journal of Industrial Engineering, 24(1), 32-43, 2017

ON THE INTERVAL SELECTION AND INTERPRETATION OF THE PROBABILISTIC AHP FOR UNCERTAIN DECISION MAKING

Minchul Shin1, Jeonghoon Mo1,*, Kyungsik Lee2, and Cheol Lee3

1 Department of Information & Industrial Engineering, Yonsei University, Seoul, Korea
*Corresponding author’s e-mail: j.mo@yonsei.ac.kr
2 Department of Industrial Engineering, Seoul National University, Seoul, Korea
3 Almaden Research, Seoul, Korea

The Analytic Hierarchy Process (AHP) method, which is popularly used in multi-criteria decision making, can have a
sensitivity issue when multiple participants have different opinions on the importance of criteria. The ranking of alternatives
can be reversed with small changes in the weight of a criterion. We propose a new ranging method that can be used with
probabilistic AHP to solve the sensitivity problem of the AHP. The equivalence concept is introduced to interpret the
probabilistic output better. The usefulness of the proposed method is demonstrated through simulation.

Keywords: AHP, probability, sensitivity analysis

(Received on October 21, 2015; Accepted on April 11, 2017)

1. INTRODUCTION

The Analytic Hierarchy Process (AHP) has been one of the most popularly used methods in multi-criteria decision making
(MCMD) since its proposal by Saaty (1983) due to its simplicity and easy applicability. Its application areas include the fields
of sociology, politics, engineering, government, industry and more (Vaidya et al., 2006; Borchardt et al., 2012; Thummala,
2011). To evaluate decision alternatives, the AHP follows simple, straightforward steps: 1) it first determines the evaluation
criteria and their weights; 2) assesses the alternatives for the criteria; and 3) ranks the alternatives based on the weighted sum
over the criteria.
Though it is very popular, AHP also has limitations noted in the literature. Rosenbloom (1996) mentioned that “it also
has a problem of uncertainty associated with subjective and multiple assessments.” Even the proponent, Saaty, himself was
aware of the uncertainty issue (Saaty et al., 1987) and tried to address it in his work. One important source of uncertainty is
the so-called “pairwise comparison matrix,” whose (i, j) element is the ratio of the weights i and j. When evaluators
have different opinions on the values of the matrix, the uncertainty can be transferred to the final ranking.
Many researchers have addressed the uncertainty issue. Some have tried to obtain good estimates of the pairwise
comparison matrix values. Vargas (1982) proposed a method to estimate the true values of weights after treating the values in
the pairwise comparison matrix as random variables. Zahedi (1986) studied the statistical accuracy of six different AHP
estimates under gamma, lognormal, and uniform distributions and claimed that the mean transformation method was the most
robust. Arbel et al. (1990) studied the effect of interval judgments on the AHP and found that there exist relatively optimal
point estimators that best categorize the intervals. Forman et al. (1998) studied how to aggregate individual judgments and
compared different aggregation methodologies.
Zahir (1991) proposed a numerical method to incorporate interval uncertainties by providing score intervals instead
of score points for each alternative. Moreno-Jimenez et al. (1993) studied the rankings provided by an interval pairwise
comparison matrix using optimization and simulation. Rosenbloom (1996) treated pairwise comparison input data as random
variables and interpreted the final score probabilistically in order to provide additional information for decision making.
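A small Monte Carlo sketch in the spirit of these probabilistic treatments: criterion weights are drawn from intervals, and the share of draws in which each alternative ranks first is reported; the intervals and alternative scores are illustrative assumptions, not the method proposed in this paper.

```python
# Monte Carlo sketch: sample criterion weights from intervals, renormalize, and
# estimate the probability that each alternative ranks first. All numbers are
# illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
weight_intervals = np.array([[0.2, 0.4], [0.3, 0.5], [0.2, 0.3]])  # per criterion
scores = np.array([[0.7, 0.5, 0.6],     # alternative A on the three criteria
                   [0.6, 0.6, 0.7],     # alternative B
                   [0.5, 0.7, 0.5]])    # alternative C

wins = np.zeros(len(scores))
for _ in range(10_000):
    w = rng.uniform(weight_intervals[:, 0], weight_intervals[:, 1])
    w /= w.sum()                                   # renormalize the draw
    wins[np.argmax(scores @ w)] += 1
print("P(alternative ranks first):", wins / wins.sum())
```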
Others relied upon the Monte Carlo simulation to provide the probability of winning. Hauser et al. (1996) proposed a
method that utilizes randomly generated pairwise matrices to evaluate the probability that an alternative is better than the
others. Levary et al. (1998) considered two types of uncertainties, one from pairwise comparison and the other from scenarios,
and proposed a simulation-based methodology. Yeh et al. (2001) studied a method to obtain a consensus of the comparison
International Journal of Industrial Engineering, 24(1), 44 - 59, 2017

A CONTINUOUS REVIEW INVENTORY MODEL WITH COLLECTING USED PRODUCTS CONSIDERATION

Cheng-Kang Chen*, M. Akmalul ʹUlya

Department of Information Management, National Taiwan University of Science and Technology, Taipei, Taiwan
*Corresponding author’s e-mail: ckchen@cs.ntust.edu.tw

Recently, owing to growing environmental concerns, the activity of collecting used products for remanufacturing or
recycling has attracted a great deal of attention from academic and industrial communities. In this paper, we extend the
classical continuous review inventory system to consider the activities of collecting used products. In the basic model,
assuming that the lead time demand follows a normal distribution, we seek to minimize the total cost per unit time by determining
the order quantity, the reorder point, and the return rate. We also investigate the case in which the distribution of lead time demand
is unknown and only its mean and variance are known. An iterative search algorithm is developed to find
the optimal solution. Several numerical examples are provided to illustrate the features of our proposed models.

Keywords: inventory, return rate, distribution free

(Received on July 14, 2015; Accepted on March 21, 2017)

1. INTRODUCTION
The importance of environmental performance has attracted great attention from academic and industrial communities.
Legislation has been introduced to encourage environmental awareness. For instance, Lindhqvist (2000) introduced Extended
Producer Responsibility (EPR) as an environmental protection strategy in 1990; EPR can take the form of reuse, take-back,
or recycling. Moreover, Xerox and Kodak have adopted the activities of collecting used products to deal with waste disposal
problems while gaining profit by reducing manufacturing costs. This paper investigates the classical continuous review
inventory system with the activities of collecting used products taken into consideration.
A continuous review inventory system is widely used by researchers to determine the optimal reorder point
and replenishment quantity. For instance, Moon & Choi (1998) improve the continuous review inventory model by optimizing
both the replenishment quantity and the reorder point. Sana & Goyal (2015) utilize a continuous review inventory model
with a mixture of lead time-dependent lost sales and back orders to optimize the lead time, reorder point, and replenishment
quantity. In addition, Sazvar, Baboli, & Akbari Jokar (2013) utilize a continuous review inventory system to develop an
inventory model for perishable products under stochastic lead time. Our proposed model is new because the activity of
collecting used products in a continuous review inventory system has not previously been introduced in the literature.
Information about the distribution of lead time demand is often limited. Researchers deal with this problem by
developing distribution-free models. For example, Gallego & Moon (1993) prove the optimality of Scarf’s ordering rule
for the newsboy problem when only the mean and the variance of demand are known. Moreover, the work of Gallego
& Moon (1993) has been utilized by Tajbakhsh (2010) and Kumar & Goswami (2015).
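
For reference, the distribution-free bound at the heart of this line of work is short enough to state directly: when only the mean mu and standard deviation sigma of lead time demand are known, the expected shortage at a reorder point r is at most (sqrt(sigma^2 + (r - mu)^2) - (r - mu)) / 2 (Gallego & Moon, 1993). The following minimal sketch (scipy assumed; the numbers are hypothetical) compares this worst-case bound with the value obtained under a normal distribution:

import math
from scipy.stats import norm

def expected_shortage_normal(mu, sigma, r):
    """E[(D - r)^+] when lead-time demand D ~ Normal(mu, sigma^2)."""
    z = (r - mu) / sigma
    return sigma * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))

def expected_shortage_bound(mu, sigma, r):
    """Distribution-free upper bound on E[(D - r)^+] when only the mean and
    variance of lead-time demand are known."""
    return 0.5 * (math.sqrt(sigma**2 + (r - mu)**2) - (r - mu))

mu, sigma, r = 100.0, 20.0, 130.0
print(expected_shortage_normal(mu, sigma, r))  # shortage under normality
print(expected_shortage_bound(mu, sigma, r))   # worst-case bound, always at least as large
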
Remanufacturing has been shown to be beneficial (Ang, Song, Wang, & Zhang, 2013; Kerr & Ryan, 2001). For instance,
manufacturing costs in a green remanufacturing program can be reduced by 40%-65% through the reuse of parts and materials
(Ginsburg, 2001). Eastman Kodak Company reuses approximately 76% of the weight of a disposed camera. Moreover, used
product collection activities have become an essential part of remanufacturing. Some researchers model the reverse channel
design decision to deal with used product collection activities. Nevertheless, researchers have not yet examined the
activities of collecting used products in a continuous review inventory system.
To investigate the effects of used product collection activities in a continuous review inventory system, we develop a
continuous review inventory model that takes the activities of collecting used products into consideration. In the basic model, the
distribution of demand during the lead time is assumed to be normal. An iterative algorithm is developed
to find the optimal solutions for the order quantity, safety factor, and return rate. Furthermore, the basic model is relaxed to
the case in which only the mean and variance of lead time demand are known.
International Journal of Industrial Engineering, 24(1), 60-80, 2017

JOINT REPLENISHMENT AND JOINT DELIVERY OF MULTIPLE ITEMS

Muhammad Shafiq1,2,* and Huynh Trung Luong1

1 Industrial Systems Engineering, Asian Institute of Technology
Pathum Thani, Thailand
*Corresponding author’s e-mail: shafiqaatir1@gmail.com
2 Department of Industrial Engineering, University of Engineering and Technology Taxila
Punjab, Pakistan
This paper discusses integer-multiple joint replenishment and joint delivery of multiple items in a supply chain comprising
multiple suppliers, multiple warehouses, and multiple retailers. In contrast to existing integer-multiple coordination
mechanisms, the proposed coordination mechanism allows different replenishment and delivery cycle times for the firms
located at the same level of the supply chain. To do this, the basic replenishment cycle time of each warehouse is set to be an
integer multiple of the basic delivery cycle time of the warehouse. This synchronization between the basic replenishment
cycle time and the basic delivery cycle time of the warehouse helps in coordinating replenishments and deliveries. The inclusion
of joint replenishment along with joint delivery from the warehouse minimizes transportation and inventory holding costs
without affecting the service level. Numerical experiments have been conducted to show the practical implementation and
implications of the proposed integer-multiple joint replenishment and joint delivery coordination mechanism. Furthermore,
the proposed mechanism is compared with (1) integer-multiple coordination with joint replenishment but without joint
delivery, (2) integer-multiple coordination without joint replenishment and joint delivery, and (3) integer powers-of-two
multiple joint coordination mechanisms at various levels of the input parameters. From
the numerical analysis, it is evident that the proposed integer-multiple joint replenishment and joint delivery coordination
mechanism outperforms the existing integer-multiple and integer powers-of-two multiple coordination mechanisms.
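
The synchronization idea can be pictured with a toy, EOQ-style calculation (a sketch under simplified assumptions, not the cost model developed in the paper): given a basic delivery cycle T_d, the warehouse's replenishment cycle is restricted to k * T_d for an integer k, and k is chosen to minimize an approximate cost per unit time; all parameter values below are hypothetical.

def cost_per_unit_time(k, T_d, demand_rate, order_cost, delivery_cost, holding_cost):
    T_r = k * T_d                                     # replenishment cycle = integer multiple of delivery cycle
    ordering = order_cost / T_r                       # fixed replenishment cost spread over the cycle
    delivery = delivery_cost / T_d                    # joint delivery cost incurred every basic delivery cycle
    holding = 0.5 * holding_cost * demand_rate * T_r  # average cycle stock held at the warehouse
    return ordering + delivery + holding

T_d, d, A, F, h = 0.02, 1200.0, 150.0, 20.0, 4.0      # hypothetical parameters (years, units/year, $)
best_k = min(range(1, 51), key=lambda k: cost_per_unit_time(k, T_d, d, A, F, h))
print(best_k, cost_per_unit_time(best_k, T_d, d, A, F, h))
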

Keywords: supply chain management; inventory and logistics; joint replenishment and joint delivery; integer-multiple
coordination mechanisms; integer powers-of-two multiple coordination

(Received on September 22, 2015; Accepted on January 07, 2017)
1. INTRODUCTION
Supply chain management refers to the management of resources and flow of information among the members of the supply
chain (Minner, 2003). Coordination among the stakeholders plays an important role in supply chain performance because
the members of a coordinated/integrated supply chain tend to optimize overall profit rather than maximize
individual profits. The literature on supply chain coordination highlights the fact that the performance of a decentralized
supply chain depends largely on the level of cooperation, coordination, trust, and information sharing. In contrast to a
centralized supply chain, the members of a decentralized supply chain do not share private information without a proper coordination
agreement. This is because the individual objectives of the members are often conflicting, and sharing private
information may lead to a decrease in individual profit. In contrast, the members of a centralized/integrated supply chain share
the required information with a centralized decision maker who is responsible for optimizing the overall performance of the
supply chain.
The joint replenishment of multiple items has received considerable attention from researchers in the supply chain coordination
literature (Porras and Dekker, 2008). The joint replenishment of multiple items helps in reducing the costs of ordering,
transportation, and inventory holding (Cha et al., 2008). The frequency of replenishments and shipment size are two key
factors having a significant impact on inventory holding and transportation costs (Chan et al., 2006). The main objective of
the inventory models presented in the literature is to minimize the operational cost of the supply chain without compromising
the service level (Khouja and Goyal, 2008). Typical joint replenishment inventory models considered the replenishment of

International Journal of Industrial Engineering, 24(1), 81-122, 2017

DETERMINATION OF MATERIAL HANDLING EQUIPMENT FOR LEAN IN-PLANT LOGISTICS USING FUZZY ANALYTICAL NETWORK PROCESS CONSIDERING RISK ATTITUDES OF THE EXPERTS

Omer Faruk Yilmaz*, Basar Oztaysi, Mehmet Bulent Durmusoglu and Sultan Ceren Oner

Department of Industrial Engineering, Istanbul Technical University
Istanbul, Turkey
*Corresponding author’s e-mail: ofyilmaz@itu.edu.tr
Using the right material handling equipment (MHE) has a substantial effect on manufacturing costs, even when the manufacturing
environment is lean. In this paper, the material handling equipment selection problem is investigated for line feeding
systems such as kitting and milk-run systems. Each system has three equipment alternatives, and the main problem is the
difficulty of selecting the appropriate one, since the two systems are integrated with each other, giving a total of nine
possible combinations. In order to select the most appropriate combination, the alternative equipment is evaluated separately at
first, and the resulting priorities are then aggregated to find the best alternative. In the equipment selection decision model, there
are four main criteria and seventeen sub-criteria. Since dependencies appear within the criteria, the Fuzzy Analytic Network
Process (FANP) technique is preferred. In addition, the risk attitudes of the experts are incorporated into the model using
linguistic terms in order to reach accurate results. A numerical application inspired by a real-world case in an
electronic device assembly plant located in Istanbul is implemented. The results of the numerical application demonstrate that
a Tugger Automated Guided Vehicle (AGV) should be used for the milk-run system and a belt conveyor should be acquired for
the kitting system.
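
How the separately evaluated priorities might be combined over the nine combinations can be sketched as follows (the priority numbers and the multiplicative aggregation rule are assumptions chosen purely for illustration, not the FANP results reported in the paper):

# Hypothetical priorities obtained separately for each line feeding system;
# the nine equipment combinations are then ranked by the product of priorities.
milk_run = {"Tugger AGV": 0.52, "Forklift": 0.28, "Hand truck": 0.20}
kitting  = {"Belt conveyor": 0.47, "Roller conveyor": 0.33, "Manual cart": 0.20}

combos = {(m, k): pm * pk for m, pm in milk_run.items() for k, pk in kitting.items()}
best = max(combos, key=combos.get)
print(best, round(combos[best], 3))
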

Keywords: In-plant milk-run, kitting system, material handling equipment, fuzzy, ANP, risk

(Received on February 08, 2016; Accepted on February 08, 2017)

1. INTRODUCTION
In recent years, the principles of the Toyota Production System (TPS) have been widely accepted and adopted by a variety
of different sectors. The extensive application of TPS principles has led to lean manufacturing (Sullivan et al., 2002). Lean
manufacturing enables us to identify and eliminate waste by continuously improving every aspect of manufacturing processes.
Removing material handling wastes in a lean environment makes small batch manufacturing possible, and small batch
manufacturing tends to increase the frequency of transports. Hiregoudar & Reddy (2007) stated that material handling
accounts for 25% of the workers, 55% of the factory area and 87% of the production time. Therefore, lean in-plant logistics
applications and the design of the material handling system (MHS) are crucial in terms of material handling waste reduction.
A well-designed and well-applied MHS can decrease costs by between 10% and 30% (Drira et al., 2007).
From the literature, lean in-plant logistics can be defined as logistics applications (e.g., milk-run) in lean
manufacturing environments, as stated in the studies of Kilic et al. (2012) and Baudin (2004). Within the in-plant logistics
environment, the elimination of waste (e.g., unnecessary motion, waiting and inventory) is one of the key issues for lean
in-plant logistics managers. In a lean production system with smooth manufacturing, an effective design of the MHS
can reduce operating and other related costs. According to Green et al. (2010), “Productivity and the incidence rate of
injuries, specifically lost time injuries, can also be improved by positive changes to MHS.” Also, a reliable and flexible MHS
is essential to sustain the just-in-time (JIT) pattern in a lean manufacturing environment. Therefore, lean in-plant logistics
applications and the determination of proper equipment are significant activities and elements of the MHS for companies.
In lean manufacturing, milk-run systems for inbound and outbound logistics can generally be defined as periodic
routes that perform both distribution and collection. By using small unit loads in lean manufacturing areas, as seen in the
studies of Hanson & Finnsgard (2014) and Baudin (2004), the milk-run approach can be adapted to in-plant use for delivering a
large number of unit loads to different locations. According to Domingo et al. (2007), “To get the correct replacement of
material it is necessary to use the milk-run, and efficient and effective standardized routing must be determined.” In this study,
in-plant milk-run systems are investigated because the handling equipment begins from the raw material warehouse and visits

International Journal of Industrial Engineering, 24(2), 123-133, 2017

ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES

Da Teng, Xiao Song*, Guanghong Gong, Liang Han

School of Automation, Beihang University
Beijing, China
*Corresponding author’s e-mail: songxiao@buaa.edu.cn
Deep neural networks have achieved state-of-the-art performance in many artificial intelligence areas, such as object
recognition, speech recognition, and machine translation. While deep neural networks have high expressive capacity, they
are prone to overfitting due to the high dimensionality of the networks. In recent applications, deep neural networks have
been found to be unstable under adversarial perturbations, which are small but can greatly increase the network’s prediction error. This
paper proposes a novel training algorithm to improve the robustness of neural networks against adversarial examples.

Keywords: machine learning; deep learning; neural networks; adversarial examples

(Received on January 25, 2016; Accepted on June 1, 2017)

1. INTRODUCTION
Deep neural networks have achieved state-of-the-art performance in many artificial intelligence areas, such as object
recognition (Krizhevsky, Sutskever, & Hinton, 2012), speech recognition (Graves, Mohamed, & Hinton, 2013) and machine
translation (Sutskever, Vinyals, & Le, 2014). They can also be used to improve the intelligence of the agents in simulation
applications, such as command and control system simulations (Song, Zhang, & Qian, 2013) (Song, Shi, Tan, & Ma, 2015)
(Song, Zhang, & Shi, 2014) and air combat simulations (Ma, Ma, & Song, 2014).
Different from traditional machine learning algorithms where features are designed manually, deep learning algorithms
automatically extract features from the training data. The features in deeper layers are more abstract and can provide more
useful information to the classifier. The performance of machine learning algorithms depends heavily on data representation
(Bengio, & Vincent, 2013), and robustness is one of the most important factors of good representation. Denoising
auto-encoders (Vincent et al., 2008) make the learned representations robust against the partial corruption of the input by
adding random noises to the input data. Hinton et al. (2012) proposed a “dropout” training algorithm to reduce overfitting of
the neural network models by randomly omitting half of the feature detectors, which makes the neural networks more robust
to new inputs.
Although deep learning algorithms have made advances in many artificial intelligence areas, recent research has
found that deep neural networks can be easily fooled by well-designed, small perturbations called adversarial perturbations,
such that the networks output incorrect answers with high confidence (Szegedy et al., 2013). Different from random
perturbations, the adversarial perturbations are intentionally worst-case perturbations that maximize the networks’ prediction
errors. Humans can easily recognize these adversarial examples, but they can be misclassified by neural networks. Szegedy et
al. (2013) showed that the smoothness assumption does not hold for deep neural networks, and small engineered
perturbations to the inputs can cause the outputs to change greatly, leading to incorrect predictions. Their paper also stated the
properties of the adversarial examples: (a) Adversarial examples can be found by box-constrained L-BFGS. (b) Adversarial
examples generalize across models and training sets. A large fraction of adversarial examples found on one model will be
misclassified by other models with different structures and/or trained on different training sets. (c) Training on adversarial
examples can improve the robustness of the trained models.
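
As a concrete point of reference for the training approaches discussed below, the following is a minimal PyTorch sketch of the fast gradient sign method of Goodfellow et al. (2014) for generating such worst-case perturbations; the epsilon value and the [0, 1] input range are placeholder assumptions.

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.01):
    """x_adv = x + eps * sign(grad_x J(theta, x, y)): a one-step perturbation
    in the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep inputs in a valid range

# Adversarial training then mixes clean and adversarial losses, e.g.
# loss = 0.5 * F.cross_entropy(model(x), y) \
#      + 0.5 * F.cross_entropy(model(fgsm_example(model, x, y)), y)
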
Goodfellow et al. (2014) gave a linear explanation of adversarial examples. They explained the effects of adversarial
examples by the linearity of the model and the high dimensionality of the inputs. Due to the high computational consumption
of L-BFGS, it is not applicable to large-scale neural networks for adversarial training. Goodfellow et al. (2014) proposed the
fast gradient sign method of generating adversarial examples based on the linear explanation of adversarial examples, which
only requires the computation of the gradient. Gu et al. (2014) proposed the Deep Contractive Network (DCN), which is a
generalization of the contractive auto-encoder (CAE) applied to deep feed-forward neural networks. A layer-wise contractive

International Journal of Industrial Engineering, 24(2), 134-145, 2017

A SIMULATION-LANGUAGE-COMPILER-BASED MODELING AND SIMULATION FRAMEWORK

Xiao Song1,*, Hang Ji1, Wenjie Tang2, Xing Zhang3, Zhen Xiao3

1 School of Automation, Beihang University
Beijing, China
*Corresponding author’s e-mail: songxiao@buaa.edu.cn
2 National University of Defense Technology
Changsha, China
3 Beijing Institute of Nearspace Vehicle’s Systems Engineering
Beijing, China

It is important yet difficult to develop a multifunction simulation framework that provides simulation language specifications,
a compiler transforming language scripts into simulation components, a simulation engine, etc. To provide simulationists with a
useful multifunction framework, we develop a simulation-language-compiler-based modeling and simulation framework, i.e.,
a simulation tool called SimRunner. This tool is capable of building simulation-language-based models, compiling the language
into simulation models, establishing interactive network communications with distributed simulation engines, and controlling
and supervising simulation conditions. An example case of anti-air simulation is implemented with this framework, which is
validated to provide useful functions and to remain stable while running distributed models.

Keywords: simulation framework; simulation language; compiler

(Received on January 26, 2016; Accepted on June 2, 2017)

1. INTRODUCTION
For simulationists, it is often difficult to build a multifunction framework that integrates services that include using simulation
language to build models, compiling models, communicating and sending models to engines, loading model instances,
invoking and running simulation engines with model instances, and displaying simulation process (Qiu et al., 2009). For the
last two decades, many works have focused on designing simulation frameworks. For example, Pisla (2008) put forward a
distributed control interface with a modular, flexible, and configurable structure to control simulators; Kim (2011) proposed
a layered hardware-in-the-loop architecture to implement interactive simulation control; Fei (2013) provided a universal
simulation control service to improve the poor universality and flexibility of the multidisciplinary distributed simulation
system. However, all of these works employed existing modeling languages for constructing systems, which makes the systems
inflexible and platform-sensitive. Meanwhile, Wang (2009) paid more attention to simulation languages and proposed a kind
of virtual language based on virtual reality to drive manufacturing process simulation. Horuk (2014) presented a Web UI for
deploying portable scientific simulation applications. Pirri (1999) developed an Internet-distributed-applications container. All
of these applications integrated a modeling language, user interface, and distributed communication, but their simulation
engines do not have the ability to support dynamic variable structures or efficient parallel and multithreaded execution. Moreover, Jaber
(2011) implemented simulation partitioning for increased parallel performance in an object-oriented simulation environment;
Adelantado (2001) provided a multiresolution management service to run an air–ground combat simulation application. Wu (2015)
proposed a real-time load-balancing scheduling algorithm for distributed simulation engines. All these works are useful, but
few have addressed the development of a multifunction simulation framework that includes a simulation language compiler
and its running engine.
To provide simulationists with a useful multifunction framework, a simulation-language-compiler-based framework
was designed and implemented based on our work (Song et al., 2010). Figure 1 depicts the schematic process. A kind of
simulation language is proposed to build the models first. It has a set of grammar and semantic rules, which regulate the
simulation parameters and running logics. Then the models should be compiled. The compiler is developed using the

International Journal of Industrial Engineering, 24(2), 146-161, 2017

PARTICLE SWARM OPTIMIZATION BASED INVENTORY POLICY FOR WHOLESALE BUSINESS IN DEVS-BASED MEDICINE SUPPLY CHAIN

Jaekwon Kim, Jongsik Lee*

Department of Computer Engineering, Inha University
Incheon, Korea
*Corresponding author’s e-mail: jslee@inha.ac.kr

The inventory policy of medicine wholesalers is a very important issue because the medicine supply chain is directly related to
human lives. Currently, inventory policies are widely used for managing medicine stock. However, effective management of
medicine inventory is difficult owing to the characteristics of medicines. This study proposes a PSO-MMIC (Particle Swarm
Optimization based on Min-Max Inventory Control) inventory policy for wholesale dealers in the medicine supply chain.
A DEVS (Discrete Event System Specification)-based model was built to implement a virtual medicine supply chain, and the
designed model was simulated to measure the performance of management policies. The proposed PSO-MMIC determines
the optimal volume of products for a specific lead time by using the current stock. Simulation results demonstrated that PSO-
MMIC is an efficient method for inventory management in the medicine supply chain for wholesalers.

Keywords: medicine supply chain; wholesaler management; inventory policy; DEVS; PSO-MMIC

(Received on February 16, 2016; Accepted on June 2, 2017)

1. INTRODUCTION
Supply chain inventory policy has been focused on the reduction of product costs. However, market trends change rapidly
under the influence of various customer requirements (Cakravastia et al., 2002). High stock quantities increase the distribution
level and inventory cost, and vice versa. Efficient supply chain operation can be achieved by maintaining an optimal balance
between distribution level and inventory cost (Thomas and Griffin, 1996). Several studies that are based on the client service
level have proposed an integrated model and optimized structure for the supply chain (Korpela et al., 2001). A medicine
supply chain is directly linked to human life. Hospitals, pharmacies, and distributors must maintain an adequate stock of
medicines (Fahy et al., 2006). However, it is difficult to manage the product inventory of medicine due to characteristics such
as price, quality, and expiry dates. A typical policy requests products for adjacent hospitals or pharmacies according to the
required stock (Lee et al., 2008).
Inventory policy is a critical issue in medicine supply chains. Medicine wholesalers are trying to achieve quantity and
cost savings by managing the supply chain. However, there are sharp fluctuations in medicine orders because a wholesaler
simultaneously deals with several hospitals and pharmacies. Furthermore, because many medicines have a direct effect on
the patient’s life, an exact supply is required (Byron, 2002). Therefore, medicine wholesalers need to resort to modeling and
simulations (Lee et al., 2015) to establish the most appropriate inventory policy for medicine supply chains (Kelle et al.,
2012). There are several methods for inventory management—FOQ (Fixed Order Quantity), Economic Order Quantity, POQ
(Periodic Order Quantity), min-max (s, S), (R, S), and so on. Not all inventory policy methods are suitable for medicine
supply chains. In order to establish the most appropriate policy, it is necessary to assess whether the inventory management
method conforms to the characteristics of the medicines. An inventory policy is a trade-off between demand and order.
Because of this, designing the inventory policy of a medicine supply chain requires much experience. Consequently, a
medicine supply chain requires an optimal inventory policy to reduce the difference between demand and order. The method
of particle swarm optimization (PSO) can be applied to handle non-linear data and supports global optimization. A medicine
inventory that uses PSO to solve the trade-off in the inventory policy increases a wholesaler’s net profit (Taleizadeh et al.,
2010) (Park and Kyung, 2014).
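
A minimal sketch of the idea (not the authors' PSO-MMIC or their DEVS model): plain particle swarm optimization searching the reorder point s and order-up-to level S of a min-max policy against a toy demand simulation; demand, lead time, and cost figures are all hypothetical.

import random

random.seed(0)

def simulate_cost(s, S, periods=200, lead_time=2,
                  hold=0.1, shortage=2.0, order_cost=20.0):
    """Average cost per period of a min-max (s, S) policy under random demand
    with lost sales; all figures are illustrative only."""
    on_hand, pipeline, total = 50.0, [], 0.0
    for _ in range(periods):
        pipeline = [(q, t - 1) for q, t in pipeline]       # age outstanding orders
        on_hand += sum(q for q, t in pipeline if t <= 0)   # receive arrived orders
        pipeline = [(q, t) for q, t in pipeline if t > 0]
        d = max(0.0, random.gauss(20, 5))                  # period demand
        lost = max(d - on_hand, 0.0)
        on_hand = max(on_hand - d, 0.0)
        position = on_hand + sum(q for q, _ in pipeline)
        if position <= s:                                  # min-max review: order up to S
            pipeline.append((S - position, lead_time))
            total += order_cost
        total += hold * on_hand + shortage * lost
    return total / periods

def pso(n_particles=15, iters=40):
    """Plain PSO over (s, S) with the usual inertia / cognitive / social terms."""
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[random.uniform(10, 80), random.uniform(80, 200)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [simulate_cost(*p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i][1] = max(pos[i][1], pos[i][0] + 1)      # keep S above s
            val = simulate_cost(*pos[i])                   # noisy fitness; common random
            if val < pbest_val[i]:                         # numbers would reduce the noise
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

print(pso())
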
This paper proposes a PSO-MMIC (Particle Swarm Optimization based on Min-Max Inventory Control) inventory
policy for medicine supply chains by using a DEVS (Discrete Event System Specification) (Zeigler et al., 1997) simulation
environment. PSO-MMIC derives optimal orders at a specific lead time “s (reorder point)” by using PSO. A virtual medicine

International Journal of Industrial Engineering, 24(2), 162-170, 2017

DESIGN AND IMPLEMENTATION OF AN ANALYSIS TOOL FOR UNSTRUCTURED SIMULATION OUTPUTS

Kangsun Lee1,*, Sungwoo Hwangbo1 and Sungmin Kim2

1 Department of Computer Engineering, MyongJi University
Seoul, Korea
*Corresponding author’s e-mail: ksl@mju.ac.kr
2 GuRum Networks Co. Ltd., KonKuk University
Seoul, Korea

As simulations have become widely used in the analysis of complex systems, more engineers are experiencing difficulties in
managing massive amounts of unstructured datasets. While conventional database management systems have facilitated the
simulation analysis process by storing datasets in time-tagged databases and querying the databases to answer analysis
questions, their use has been partly limited due to the following two reasons: 1) insufficient mechanisms to handle
unstructured datasets; and 2) tight coupling to specific platforms and devices in delivering the analysis results. In this paper,
we present R-Logger (Responsive-Logger), a plugin software to aid in simulation analysis tasks for unstructured simulation
logs. Once plugged into a simulator, R-Logger is able to 1) filter out the desired datasets from the unstructured simulation logs;
2) store them in the server-side database; 3) analyze them by utilizing built-in queries, and 4) deliver the analysis results over
the Web. R-Logger is able to respond to the user’s environment and dynamically change the layout of the analysis results to fit
into various devices, including desktops, tablets, and mobile phones.
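
The filtering step can be pictured with a small sketch (the log layout, field names, and database call are assumptions made for illustration, not R-Logger's actual formats):

import re

# Pull the fields of interest out of unstructured simulation log lines with a
# regular expression, producing structured records that could then be stored in
# a document database (e.g., a MongoDB collection) and queried for analysis.
LOG_PATTERN = re.compile(
    r"t=(?P<time>\d+(\.\d+)?)\s+entity=(?P<entity>\w+)\s+(?P<message>.+)"
)  # hypothetical log layout

raw_logs = [
    "t=0.5 entity=missile01 position updated to (10.2, 3.4)",
    "loader initialized",                       # no match: filtered out
    "t=1.0 entity=missile01 target acquired",
]

records = []
for line in raw_logs:
    m = LOG_PATTERN.match(line)
    if m:                                       # keep only the desired datasets
        rec = m.groupdict()
        rec["time"] = float(rec["time"])
        records.append(rec)

print(records)
# With a running MongoDB server, the same records could be inserted with pymongo:
# MongoClient().simdb.logs.insert_many(records)
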

Keywords: Simulation Output Analysis, Distributed Data Management and Processing, Plugins, Unstructured Data Analysis

(Received on March 28, 2016; Accepted on June 2, 2017)

1. INTRODUCTION
M&S (Modeling and Simulation) is the use of models, including emulators, prototypes, and simulators, to develop datasets as
a basis for making managerial or technical decisions (DoD 2011). M&S has been widely used to provide an operationally
valid environment for analyzing system concepts, designing the system, assessing the merits of alternatives, and making
decisions about the system, provided it meets the requirements. As simulations are becoming more popular in the analysis of
large-scale systems with massive inputs and outputs, more engineers are experiencing difficulties in interacting with and
understanding the simulation datasets. The need to collect, store, and manage simulation datasets has become
pressing for various simulation applications in order to analyze the datasets. Many previous studies have proposed
using time-tagged databases and database management systems, such as Oracle (Oracle 2013), DB2 (IBM 2015), and Teradata
(Teradata 2015), as aids to the simulation analysis tasks. These time-tagged databases and database management systems
have facilitated the simulation analysis process by storing massive simulation outputs in a time-tagged database and using
built-in queries to perform the analysis tasks in various application domains, such as energy, teletraffic, science and
engineering (Leobman et. al. 2009, Lee et. al. 2014, Jiao et. al. 2012).
While conventional databases perform well in processing simulation datasets, they still require that the data should
conform to a well-defined schema. Unfortunately, many simulators may not produce simulation outputs in a well-structured
format. For example, due to security reasons, legacy missile simulators tend to be available only in a binary format, dumping
a large amount of unstructured simulation logs. Since we cannot access the implementation details of the black-box legacy
missile simulators, we may need to go through expensive pre-processing steps in order to transform the unstructured
simulation logs into well-structured datasets before storing them in conventional database systems. On the other hand, some
of the recently proposed solutions, such as Hadoop (Hadoop 2015), MapReduce (MapReduce 2015), and MongoDB
(Chodorow 2013), permit data to be in any format or even to have no structure at all. These solutions can be applied to
conduct output analysis tasks for unstructured simulation logs without going through additional preprocessing steps. In the
present paper, we introduce R-Logger (Responsive Logger), a simulation output analysis tool for unstructured simulation logs.
R-Logger comprises three nodes: a Simulation node, a Server node, and a Client node. The simulation node of R-Logger reads the

International Journal of Industrial Engineering, 24(2), 171-181, 2017

SIMULATION STUDY ON THE LOST SALES INVENTORY SYSTEMS WITH ATTACHED SERVICE QUEUE

Jinsoo Park1, Jung Woo Baek2,*, Yun Bae Kim3

1 Department of Industrial Engineering, Yong In University
Yongin-si, Republic of Korea
2 Department of Industrial Engineering, Chosun University
Gwangju, Republic of Korea
*Corresponding author’s e-mail: jwbaek@chosun.ac.kr
3 Department of Industrial Engineering, Sungkyunkwan University
Seoul, Republic of Korea

In this study, we examine a queueing system with an attached inventory. Customers arrive in the system according to a
general arrival process, and a single server serves the customers in order. The service times are assumed to be independent
and identically distributed random variables. At the service completion epoch, a customer departs the system with exactly one
item from the attached inventory storage. If the inventory level drops to zero, the service is paused, and the remaining
customers in the queue wait in the system until the inventory is replenished. Potential sales to customers who arrive during
this stockout period are lost. The inventory is usually managed by one of the popular inventory control policies, such as the (r,
Q) and (s, S) policies. We design simulation experiments under various distributions of inter-arrival time, service time,
and lead time with several parameter settings to evaluate the system performance.
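
A minimal sketch of the system being simulated is given below. Exponential inter-arrival, service, and lead times and the parameter values are assumed here purely for illustration; the experiments in the paper cover general distributions.

import random

random.seed(1)

def simulate(r, Q, lam=1.0, mu=1.5, lead=2.0, horizon=10_000.0):
    """Event-driven sketch of a single-server queue with an attached (r, Q)
    inventory: each service completion consumes one item; when the inventory
    is empty the server pauses, and customers arriving during the stockout
    period are lost."""
    inv, queue, t = Q, 0, 0.0
    t_arr = random.expovariate(lam)
    t_dep = float("inf")            # next service completion
    t_rep = float("inf")            # next replenishment arrival
    served = lost = 0
    while t < horizon:
        t = min(t_arr, t_dep, t_rep)
        if t == t_arr:                              # customer arrival
            if inv > 0:
                queue += 1
                if queue == 1:                      # server was idle: start service
                    t_dep = t + random.expovariate(mu)
            else:
                lost += 1                           # stockout: the sale is lost
            t_arr = t + random.expovariate(lam)
        elif t == t_dep:                            # service completion uses one item
            queue -= 1
            inv -= 1
            served += 1
            if inv <= r and t_rep == float("inf"):  # place a replenishment order
                t_rep = t + random.expovariate(1.0 / lead)
            t_dep = t + random.expovariate(mu) if queue > 0 and inv > 0 else float("inf")
        else:                                       # replenishment arrives
            inv += Q
            t_rep = float("inf")
            if queue > 0 and t_dep == float("inf"): # resume the paused server
                t_dep = t + random.expovariate(mu)
        # (holding and waiting statistics could be accumulated here)
    return served, lost

print(simulate(r=3, Q=10))
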

Keywords: queueing systems; inventory model; simulation study; performance evaluation; cost analysis

(Received on March 3, 2016; Accepted on June 2, 2017)

1. INTRODUCTION
In this study, we examine a queueing system integrated with an inventory system, where each customer consumes an inventory
item when served by the server. The queueing system with an attached inventory originated from the assembly-like queue
(Harrison, 1973; Lipper and Sengupta, 1986; Bozer and McGinnis, 1992; Bryznér and Johansson, 1995), where two parts are
simultaneously combined with a positive processing time to produce a "complete product." In terms of inventory management,
the assembly-like queue can be applied to a service-inventory system where a customer meets his or her
demand only after some positive service time, e.g., a retail market where customers spend time paying for the item they want
to purchase. Therefore, queueing models with an attached inventory can be efficiently applied to address the trade-off
between inventory cost and manufacturing (or service) capacity.
Schwarz and Daduna (2006) and Schwarz et al. (2006) conducted extensive studies on the queueing models with an
attached inventory. They assumed that the service queue is the M/M/1 and the inventory system is managed by well-known
inventory control policies. Under the assumption of exponentially distributed lead times, they derived the stationary joint
probability of the on-hand inventory level and the queue length in explicit product-form. Following Saffari et al. (2011), an
extended model was analyzed by Saffari et al. (2013). In the extended model, generally distributed lead times are assumed.
They derived the stationary joint distribution of the queue length and the inventory level and proposed a cost model to
minimize the average system cost. Studies on a production-inventory system with attached service queue and lost sales were
also carried out (He et al., 2002; Krishnamoorthy and Viswanath, 2013; Baek and Moon, 2014; Baek and Moon, 2016).
These studies analytically derived the performance measures under the assumption of an M/M/1 or a Markovian service
queue.
Even though the previous studies presented various analytical solutions (Archibald, 1981; Beckmann and Srinivasan,
1987; Schwarz and Daduna, 2006; Schwarz et al., 2006; Saffari et al., 2011; Lawrence et al., 2013; Saffari et al., 2013), they

International Journal of Industrial Engineering, 24(2), 182-193, 2017

ENHANCED SEMI-AUTOMATED BEHAVIORS FOR BUILDING CLEARING OPERATION IN KOREAN BATTLEFIELD ENVIRONMENT

Seung Keun Yoo, Doo-Kwon Baik*

Department of Computer and Radio Communications Engineering, Korea University
Seoul, Korea
*Corresponding author’s e-mail: baikdk@korea.ac.kr

Global urbanization has increased the importance of military operations in urban terrain (MOUT) in recent wars. However,
the defense modeling and simulation community in Korea has faced challenges in applying the characteristics of urban terrain
and MOUT in the analysis of contemporary wars. To overcome these issues, we enhanced the semi-automated behaviors in
the One Semi-Automated Forces (OneSAF) simulation system for the building clearing operation, which is one of the most
important operations in MOUT. Existing OneSAF behaviors have limitations when applying the actual actions of each
combatant in the simulation. However, the enhanced behaviors can make the simulation more realistic. Through simulation
experiments, we proved that these behaviors yield reliable simulation results and contribute to reasonable and effective
MOUT simulations of the Korean battlefield environment.

Keywords: Semi-automated behavior; building clearing operation (BCO); military operations in urban terrain (MOUT); One
Semi-Automated Forces (OneSAF); Korean battlefield environment

(Received on March 08, 2016; Accepted on June 2, 2017)

1. INTRODUCTION
Global urbanization is increasing the importance of urban areas in terms of population and size (United Nations, 2015). In
Korea, the growth rate of urban areas is 16.58% (size) and 91.66% (population) (Ministry of Land, Infrastructure, and
Transport 2014). Since 2000, wars such as the Afghanistan War (2001) and Iraq War (2003) have shown that the major
battlefields are core urban areas where large-scale populations and essential national infrastructures are located (DiMarco,
2012). These cases demonstrate that we must study warfare in urban areas and develop new kinds of operations that suit these
environments in order to achieve victory in contemporary wars.
Military operations in urban terrain (MOUT) are defined as “military actions planned and conducted on a terrain
complex where man-made construction affects the tactical options available to commanders” (United States Army, 1979).
This covers all types of operations that involve engagement by military forces in urban areas. The core of MOUT, building
clearing operation (BCO), refers to the process of searching for and clearing out enemies hiding in buildings. This process
focuses on the characteristics of urban areas, such as complex artificial infrastructures, existence of non-military personnel,
and limited multi-dimensional space, as well as the actions of each combatant participating in the operation. The United States
(US) Army has recognized the importance of MOUT and has developed the One Semi-Automated Forces (OneSAF)
simulation system to assess the effectiveness of MOUT in a simulated battlefield. Although OneSAF is unable to sufficiently
highlight the strength of semi-automated forces for MOUT because of a lack of realism and judgment ability (Yoo, 2015), it
performs many functions representing the various characteristics of MOUT and has many semi-automated behaviors that can
be used in MOUT simulations.
Despite having a good modeling and simulation (M&S) operating system, Korean military forces have focused mainly
on large-scale engagement in an open field. They consider the urban terrain as merely one of several terrain types (such as a forest or
mountainous region). Military officers have not yet made specific operational plans for MOUT in war, and many modelers
and analysts have yet to analyze the effectiveness of MOUT in the Korean battlefield environment. Furthermore, they do not have
a useful tool for MOUT simulation, except the OneSAF procured from the US Army.
Recent wars have shown that MOUT is essential in contemporary warfare (United States Marine Corps, 1998). Thus, we
must develop a useful simulation tool to model and simulate detailed operations in urban areas and analyze the effectiveness
of MOUT. To achieve this objective, we must first enhance OneSAF’s capability for MOUT simulations. In this paper, we
present enhanced semi-automated behaviors of BCO for use in MOUT simulations. We then demonstrate the benefits of these
enhanced behaviors through well-designed simulation experiments. The results of this study may be applied to conduct more
effective and accurate simulations of MOUT.

International Journal of Industrial Engineering, 24(2), 194-206, 2017

EFFECT OF PART TRANSFER POLICIES IN TWO TYPES OF LAYOUTS IN AUTOMOTIVE BODY SHOPS

Dug Hee Moon1,*, Ye Seul Nam2, Ha Seok Kim3, Yang Woo Shin4

1 School of Industrial Engineering and Naval Architecture, Changwon National University
Changwon, Korea
*Corresponding author’s e-mail: dhmoon@changwon.ac.kr
2 Department of Eco-friendly Marine Plant FEED Engineering, Changwon National University
Changwon, Korea
3 Department of Industrial and Systems Engineering, Changwon National University
Changwon, Korea
4 Department of Statistics, Changwon National University
Changwon, Korea

A body shop in an automotive factory consists of many sub-lines which are highly automated transfer lines, and the sub-lines
are merged in many assembly operations. To design a body shop, the layout concepts based on welding technologies and the
material handling systems should be considered and optimized. Two types of layout concepts are widely used, and they are
known as the layered build layout and the modular build layout. There are also two part transfer policies in a transfer line with
unreliable machines and no buffer. The first transfer policy is the asynchronous transfer which allows a part to move to the
next operation independently. The second policy is the synchronous transfer which allows parts to move to the next operation
whenever all operations in a sub-line are completed. In this study, we compare the two transfer policies in two different
layouts having many transfer lines and assembly operations in automotive body shops by means of a simulation study.
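
The difference between the two policies can be pictured with a toy model; this is only an illustration under simplified assumptions (zero buffers, blocking after service, failures folded into random repair delays), not the authors' simulation model.

import random

random.seed(2)

def service_times(n_jobs, n_stations, mean=1.0, fail_p=0.05, repair=5.0):
    """Hypothetical effective processing times: a base time plus an occasional
    repair delay to mimic unreliable stations."""
    return [[random.expovariate(1.0 / mean) + (repair if random.random() < fail_p else 0.0)
             for _ in range(n_stations)] for _ in range(n_jobs)]

def asynchronous_makespan(S):
    """Zero-buffer serial line, blocking after service: a part moves on as soon
    as the next station is free (each part moves independently)."""
    n_jobs, n_stations = len(S), len(S[0])
    D = [[0.0] * (n_stations + 1) for _ in range(n_jobs + 1)]  # departure times
    for k in range(1, n_jobs + 1):
        for i in range(1, n_stations + 1):
            start = max(D[k][i - 1], D[k - 1][i])
            finish = start + S[k - 1][i - 1]
            block_until = D[k - 1][i + 1] if i < n_stations else 0.0
            D[k][i] = max(finish, block_until)
    return D[n_jobs][n_stations]

def synchronous_makespan(S):
    """All stations index together: every transfer waits until the slowest
    station in the sub-line has finished."""
    n_jobs, n_stations = len(S), len(S[0])
    t = 0.0
    for step in range(n_jobs + n_stations - 1):
        # station i works on job (step - i) during this indexing step, if it exists
        in_progress = [S[k][i] for i in range(n_stations)
                       for k in [step - i] if 0 <= k < n_jobs]
        t += max(in_progress)
    return t

S = service_times(n_jobs=500, n_stations=5)
print(asynchronous_makespan(S), synchronous_makespan(S))
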

Keywords: automotive body shop; layout; transfer policy; assembly; transfer line; simulation

(Received on March 10, 2016; Accepted on June 2, 2017)

1. INTRODUCTION
The body shop of an automotive factory is a typical manufacturing system which combines many flow lines and assembly
operations. Generally, the body shop is divided into 15~20 sub-assembly lines (Moon et al., 2006). Each sub-line consists of
many welding operations, and they are divided into many serial stations to which more than one welding operation is
assigned. In each sub-line, there is either no buffer or only a few buffers allowed between two successive stations because
of the space constraint. In contrast, the decoupled sub-lines are connected by a power-and-free conveyor or by an
electric monorail system (EMS). The functions of the conveyor (or EMS) are to transfer a part to the next sub-line, and to
prepare buffer space for unexpected blocking and starving (idling) caused by the two consecutive sub-lines.
The layout structure of a manufacturing system is determined by the consideration of various factors, and the
engineering method is the basic consideration. In automotive body shops, two types of welding methods are commonly used;
they are called the modular build method and the layered build method, respectively, as shown in Fig. 1 (see Kim et al.,
2015). In the layered build method, some parts such as the inner panel and the outer panel of the automobile’s side body are
assembled one by one in sub-lines (see Figure 1(a)). However, in the modular build method, those parts (e.g., inner panel and
outer panel) are pre-assembled in a sub-line, and the sub-assembled parts are welded to the main body in another sub-line (see
Figure 1(b)). It is not easy to conclude which method is better because both layouts have merits and demerits. Some say that
the strength of a welded body is higher when it is assembled by the layered build method, and that it also has the merit of better
accessibility for robot guns. However, the layered build method has the weakness of overloading the main body sub-line,
and thus the length of the longest line is increased. It is known that many Japanese and Korean automotive companies utilize the
modular build method, but GM (General Motors) has switched from the modular build method to the layered build method
(Kim et al., 2015).
International Journal of Industrial Engineering, 24(2), 207-219, 2017

INTERACTIVE VISUALIZATION OF LARGE-SCALE 3D SCATTERED DATA FROM A TSUNAMI SIMULATION

Kun Zhao1,*, Naohisa Sakamoto2, Koji Koyamada3, Satoshi Tanaka4, Kohei Murotani5, Seiichi Koshizuka5

1 IBM Research Tokyo
Tokyo, Japan
*Corresponding author’s e-mail: zk.kyoto@gmail.com
2 School of System Informatics, Kobe University
Kobe, Japan
3 Academic Center for Computing Media Studies, Kyoto University
Kyoto, Japan
4 School of Information Science and Engineering, Ritsumeikan University
Kyoto, Japan
5 School of Engineering, The University of Tokyo
Tokyo, Japan

Recently, more and more scientific simulation fields have generated large-scale three-dimensional (3D) scattered data. To
observe the internal information, an interactive visualization technique for extracting the internal information of such data is
always needed. However, due to the highly complex distribution and structure of the data, interactive visualization of such
data is very challenging. Moreover, the simulation result is also time varying, which makes the visualization more difficult.
In this work, we focus on large-scale 3D scattered data, which are generated from a tsunami (tidal wave) simulation. We
visualize the large-scale 3D scattered data by developing an interactive particle-based rendering (IPBR) method. A main
feature of the proposed IPBR method is that it implements a semi-transparent effect using the ensemble average of particles
so that the volume rendering effect, used to observe the internal information of the 3D scattered data, can be realized.
Moreover, the real-time adjustment of the transfer function is also implemented by changing the particle radius and particle
color. We calculate the density of the 3D point data and adjust the point radius by using a resizing function. Furthermore, we
also develop a time-varying rendering mechanism to render the temporal behaviors of the time-varying data. After application
of the proposed technique to tsunami simulation data, the experimental results show that a detailed analysis of the spatial and
temporal features of a tsunami can be achieved.
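
The density-based radius adjustment can be sketched as follows; the k-nearest-neighbour density estimate and the cube-root resizing rule are assumed forms chosen for illustration, not necessarily the exact functions used in the paper (numpy and scipy assumed).

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((100_000, 3))          # stand-in for one time step of tsunami particles

k = 8
tree = cKDTree(points)
dist, _ = tree.query(points, k=k + 1)      # first neighbour is the point itself
r_k = dist[:, -1]                          # distance to the k-th neighbour
density = k / ((4.0 / 3.0) * np.pi * r_k**3)

base_radius = 0.01
# shrink rendered particles where points are dense so the semi-transparent
# ensemble stays balanced (assumed form of the resizing function)
radius = base_radius * (density.mean() / density) ** (1.0 / 3.0)
print(radius.min(), radius.max())
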

Keywords: tsunami simulation; 3D scattered data; interactive visualization; particle-based rendering; 3D point density

(Received on March 10, 2016; Accepted on June 2, 2017)

1. INTRODUCTION
With the development of science and technology, large-scale simulations of many physical phenomena have been performed.
Among these simulations, especially in the computational fluid dynamics (CFD) simulation field, more and more 3D scattered
data have been used as the simulation primitive. In our case, we focus on a tsunami simulation, which generates 3D scattered
data. Tsunami simulations are very important for disaster prevention, policymaking, etc. The tsunami data are generated
using the Moving Particle Semi-Implicit (MPS) method (Koshizuka et al., 1996) to simulate the details of the tsunami
(Murotani et al., 2014) that occurred in Ishinomaki Bay of Miyagi prefecture in Japan (Figure 1). This simulation results
in high-resolution three-dimensional (3D) point data with a large data size and a highly complex distribution. The number
of 3D points reaches approximately 5 million for each time step.
To analyze the simulation result, an efficient visualization method is also needed. Due to the complex 3D distribution
of the tsunami data, visualization must be performed three-dimensionally. Moreover, observation of the inner part is also

International Journal of Industrial Engineering, 24(2), 220-228, 2017

EFFICIENT USE OF V&V TECHNIQUES FOR QUALITY AND CREDIBILITY ASSURANCE OF COMPLEX MODELING AND SIMULATION (M&S) APPLICATIONS

Axel Lehmann*, Zhongshi Wang

Institut für Technik Intelligenter Systeme (ITIS), Universität der Bundeswehr München
Neubiberg, Germany
*Corresponding author’s e-mail: axel.lehmann@unibw.de

Due to the increasing complexity and variety of M&S methods, tools, and applications, quality control of model design and
simulation development (M&S) as well as assurance of the credibility and utility of goal-oriented M&S applications are
becoming more and more challenging. Facing these perspectives and demands, standards, guidelines, and techniques for
verification, validation, and accreditation (VV&A) of M&S have been developed over past decades. Most of these VV&A
processes, as well as related V&V techniques, are directed to typical M&S application scenarios and their respective
acceptability criteria. However, concrete methods for risk-cost-benefit calculations as the basis for tailoring VV&A
activities according to specific project requirements are almost unknown. Besides a brief summary of the state of the art, this
paper focuses on experiences in applying existing guidelines and standards for VV&A to M&S quality assessment, as well as
on challenges arising from the increasing complexity and diversity of M&S applications.

Keywords: Quality and Credibility Assurance; Modeling and Simulation (M&S); Verification and Validation (V&V); M&S
Tailoring; V&V Techniques; Techniques Selection

(Received on March 11, 2016; Accepted on June 2, 2017)

1. INTRODUCTION: DEMANDS FOR QUALITY AND CREDIBILITY ASSURANCE
The rapid evolution of modeling and simulation (M&S) technologies, as well as the increasing variety of M&S applications, is
driven by various factors. On one side, these developments are enabled and pushed by the rapidly increasing performance and
ubiquity of information and communication (ICT) technologies. On the other side, these developments are pulled by
requirements to better and faster analyze, evaluate or train dynamic behavior of increasingly complex systems and processes
of real and virtual worlds.
As a consequence, as more and more evaluations and decisions are at least supported by the use of M&S technologies,
quality of M&S tools and credibility of its applications and results (data) are becoming critical success factors. Various
examples in the past demonstrate that missing quality and credibility assurance of M&S technologies and their applications
are raising use risks with respect to safety and security, or can indirectly cause economic or social damages.
Well known methods exist for quality and credibility assurance, like various kinds of testing, verification, and validation
(V&V) techniques. However, in the context of increasing complexity of systems and processes to be modeled, as well as of
increasingly complex models, simulations and data being applied, deficiencies of models, of their implementations as
executable simulations and data are becoming key challenges for quality and credibility assurance (Wang 2011). For example,
in highly parameterized or stochastic models, not all parameter values or distributions are precisely known; rather, they are
uncertain or vague. Therefore, uncertain quantities and functionalities can significantly influence the credibility and utility of
M&S and its application results. As an example, data for human behavior representation are difficult to obtain and to evaluate.
While one would consider certain data as “realistic” or “typical”, others might disagree. Quite often the context of measured
real data is not well documented but used as input for simulation experiments. Some quantitative measurement methods of
system complexity are discussed in Song et al., (2013).
In addition, another category of deficiencies can result from the incomplete specification of M&S goals and constraints,
from erroneous or inadequate model design, faulty simulation implementation or experimentation. In summary, various
sources of M&S deficiencies have to be considered, and new challenges arise to assure - at least - a certain level of M&S
quality, credibility, and utility. This paper intends to focus on these challenges by a brief summary of proposed V&V concepts
and case study experiences.
International Journal of Industrial Engineering, 24(2), 221-231, 2017

CLASSIFICATION OF AGE USING THE WRINKLE DENSITIES OF FACE IMAGES WITH A GENETIC ALGORITHM AND SUPPORT VECTOR MACHINE

Dong-Woo Lee1, Syng-Yup Ohn2,*, Jong-Whoa Na1, Chan Heo2

1 School of Electronics and Information Engineering, Korea Aerospace University
Goyang-si, Korea
2 Department of Software Engineering, Korea Aerospace University
Goyang-si, Korea
*Corresponding author’s e-mail: syohn@kau.ac.kr

This paper proposes a new technique to estimate age with a support vector machine (SVM) based on the wrinkle densities of
face images. Wrinkle density is defined as the ratio of the wrinkle area to the area of a section of facial skin. In the new method, faces
are segmented into eight wrinkle sections based on face geometry. Wrinkle densities are then calculated for each of the
wrinkle sections. Next, an age classification model is created using a support vector machine algorithm. The model estimates
the age of the face by classifying the face into one of three age classes. The set of features – i.e., the densities of the eight
wrinkle sections – and the SVM kernel parameters used in the classification model are optimized with a genetic algorithm
(GA) to maximize estimation accuracy. The proposed technique is tested using a face database at Korea Aerospace University
(KAU). As a classification model, it shows superior performance over the artificial neural network (ANN) and naïve Bayesian
methods. Simulation results are also provided, including a comparison between the SVM-with-GA model and the other models.

Keywords: age estimation, wrinkle density, support vector machine, genetic algorithm

(Received on March 14, 2016; Accepted on June 2, 2017)

1. INTRODUCTION
Age estimation techniques based on face images are widely utilized in marketing and advertising to analyze the disposition
of customers. Accurate estimates can contribute to increases in sales since appropriate products can be recommended and
advertisements can be customized based on estimated ages. Recently, smart digital signages equipped with video cameras
and capable of processing video streams from the cameras have been deployed in public areas and stores. These signages
show advertisements to customers walking by and can also take orders. Age estimation effectively increases the efficiency
of the advertisements shown on digital signages. The age estimation methods used for this purpose need to be very time
efficient. People tend to walk past signages relatively quickly, so processing speeds need to be sufficiently fast to estimate
ages in real time and draw the attention of potential customers (www.digitalsignagetoday.com), (www.nist.gov).
In this paper, a new technique is proposed to detect human faces in images and provide age estimations based on wrinkle
density in real time. In the proposed algorithm, the wrinkle densities from eight wrinkle areas on the detected faces are
computed, and the age estimation is obtained from the decision model created by a support vector machine (SVM) method
(Cristianini et al., 2000), (Vapnik et al., 1996). The wrinkle densities are used as inputs for this method. Furthermore, the
decision model is optimized via a genetic algorithm (GA) (Michalewicz, 1992), (Goldberg, 1989) to select the feature sets
and parameters maximizing the estimation accuracy. SVMs are linear classifiers with an iterative training process. In the
iterative training process, the hyperplanes separating different classes that produce the greatest margins between the feature
points belonging to those different classes are found. The GA, an optimization technique that simulates evolutionary processes,
searches for the set of features and parameters to be included in the decision model.
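
A minimal sketch of this kind of GA-plus-SVM search is given below; synthetic stand-in data, simple GA operators, and scikit-learn are assumed, and it is not the authors' implementation. A chromosome encodes which of the eight wrinkle-density features to keep plus the RBF-SVM parameters C and gamma, and its fitness is the cross-validated accuracy over three age classes.

import random
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((300, 8))                       # stand-in for 8 wrinkle densities
y = rng.integers(0, 3, 300)                    # stand-in for 3 age classes

def fitness(chrom):
    mask, log_c, log_g = chrom[:8], chrom[8], chrom[9]
    cols = [i for i in range(8) if mask[i] > 0.5]
    if not cols:
        return 0.0
    clf = SVC(kernel="rbf", C=10.0 ** log_c, gamma=10.0 ** log_g)
    return cross_val_score(clf, X[:, cols], y, cv=3).mean()

def random_chrom():
    return [random.random() for _ in range(8)] + [random.uniform(-1, 3), random.uniform(-3, 1)]

def crossover(a, b):
    cut = random.randrange(1, 9)
    return a[:cut] + b[cut:]

def mutate(c, rate=0.1):
    return [g + random.gauss(0, 0.2) if random.random() < rate else g for g in c]

pop = [random_chrom() for _ in range(20)]
for _ in range(10):                            # a few generations suffice to illustrate
    parents = sorted(pop, key=fitness, reverse=True)[:10]
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(10)]

best = max(pop, key=fitness)
print([i for i in range(8) if best[i] > 0.5], fitness(best))
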
This paper consists of five sections. Section 2 briefly introduces the age estimation methods and previous research
efforts. Section 3 explains the methods used to localize the eight wrinkle sections and compute the wrinkle densities from
each section. Section 4 presents the new algorithm to derive and optimize the model to estimate age from wrinkle densities
based on GA and SVM techniques. Section 5 reveals the performance of the technique from a test using an image database

International Journal of Industrial Engineering, 24(2), 232-244, 2017

A SCHEDULE OF CLEANING PROCESSES FOR A SINGLE-ARMED CLUSTER TOOL

Seung-Min Noh1, Ja-Hee Kim1, and Seong-Yong Jang2

1 Graduate School of Public Policy and Information Technology, Seoul National University of Science and Technology
Seoul, Korea
Corresponding author’s e-mail: jahee@seoultech.ac.kr
2 Department of Industrial and Information Systems Engineering, Seoul National University of Science and Technology
Seoul, Korea

A conventional robot sequence of a single-armed cluster tool pursues a backward strategy for efficiency; the strategy is
optimal when identical work cycles are repeated. However, we cannot guarantee the strategy is optimal under some abnormal
situations, such as breakdowns or during auto-cleaning (periodic cleaning for preventive maintenance). In this paper, we
focus on undesirable but predictable auto-cleaning, which cleans the chambers of a cluster tool each time a predefined number
of wafers has been processed to prevent deterioration of wafer quality caused by residual chemicals. For auto-cleaning, we
suggest alternative cleaning frequencies of a single-armed cluster tool that periodically clean the inside of process modules
based on the Theory of Constraints. Additionally, we derive its heuristic robot scheduling from Petri-net models, because
finding the optimal robot sequence in real time is impossible. Finally, we examine the effectiveness of our suggestions by
simulation.

Keywords: cluster tool; preventive maintenance; theory of constraints; Petri-net; simulation

(Received on March 03, 2016; Accepted on June 2, 2017)

1. INTRODUCTION
The proliferation of Internet of Things (IoT) has rapidly advanced the semiconductor manufacturing industry, which produces
integrated circuit (IC) chips, the core component of the IoT (Meulen & Rivera, 2015). The semiconductor manufacturing
industry fabricates microcircuits on a silicon wafer, a thin circular slice of semiconductor material. As the number of
components per chip rises and the radius of the wafer increases, there is a growing need for cleaner environments and more
automated equipment in the manufacturing process. Therefore, single-wafer integrated equipment, such as cluster tools, is used for many semiconductor fabrication processes, including chemical vapor deposition (CVD), etching, lithography, and
so on. Cluster tools can have various architectures and configurations that allow each wafer to undergo different processes,
unlike batch production systems. However, consecutive wafers tend to have the same processes for reasons of efficiency (Lee,
2008).
The radial cluster tool, the most typical kind, consists of several process modules (PMs), a robot, a cooler, and load
locks, which take charge of the input and output of cassettes filled with wafers. Because the wafers in the cassette usually
undergo the same process in the cluster tool, it is widely known that the optimal schedule involves repeating identical patterns
for each cycle (Kim et al., 2014). We refer to this repeated schedule pattern as steady. An optimal schedule depends on its
architecture, such as the number of robot arms and the recipe that defines how to process wafers in the cassette. If the robot
has a single arm, the optimal robot sequence is a backward strategy (Kumar et al., 2005); that is, the robot transfers the wafer
in the reverse of the wafer flow (Venkatesh et al., 1997). However, a better schedule is necessary if a cluster tool does not
satisfy the condition that all PMs and the cooler are available and already have their wafers undergoing the same process
(Ramírez-Hernández et al., 2010). For noncyclic scheduling of cluster tools, conventional systems have usually used dispatching rules for robot schedules (Fordyce et al., 2015; Fowler et al., 2015). However, process engineers have expressed dissatisfaction with dispatching rules because they have difficulty predicting the next step of the tool (Kim et al.,
2012). Therefore, researchers have suggested predictable robot scheduling algorithms that can handle the transient period
(Hong and Kim, 2015; Kim et al., 2012), preventive maintenance (Kim et al., 2013), temporal variations (Kim et al. 2014),
and module failure (Qiao et al., 2014).
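For readers unfamiliar with the backward strategy, the sketch below steps a single-armed tool through repeated backward cycles for an assumed three-module flow LL -> PM1 -> PM2 -> PM3 -> LL; the process and transfer times are made-up values, and the model deliberately ignores the Petri-net details used in the paper.

# Minimal sketch of a backward robot sequence on a single-armed cluster tool (assumed times).
PROCESS_TIME = {"PM1": 60.0, "PM2": 90.0, "PM3": 75.0}   # seconds per wafer (assumption)
MOVE = 3.0                                                # robot move/load/unload time (assumption)

def backward_makespan(n_wafers=10):
    """Completion time of n_wafers when the robot serves modules in reverse wafer-flow order."""
    clock = 0.0
    ready_at = {pm: 0.0 for pm in PROCESS_TIME}   # time the wafer in each PM finishes
    done = 0                                       # simplification: tool starts full, all wafers ready at t=0
    while True:
        # backward order: empty the last module first, then refill the upstream modules
        for pm in ("PM3", "PM2", "PM1"):
            clock = max(clock, ready_at[pm]) + MOVE          # wait for pm, then unload it
            if pm == "PM3":
                done += 1                                     # finished wafer goes to the load lock
                if done == n_wafers:
                    return clock
            else:
                downstream = "PM3" if pm == "PM2" else "PM2"
                clock += MOVE                                 # load the wafer into the downstream PM
                ready_at[downstream] = clock + PROCESS_TIME[downstream]
        clock += MOVE                                         # load a new wafer into PM1
        ready_at["PM1"] = clock + PROCESS_TIME["PM1"]

print("makespan for 10 wafers:", backward_makespan(10))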
In this paper, we consider the case in which periodic preventive maintenance, also called auto-cleaning, causes some of the PMs to be unavailable. Because auto-cleaning is the most frequent of the situations in which the backward strategy is not
International Journal of Industrial Engineering, 24(2), 245-254, 2017

A MODIFIED BOOTSTRAP METHOD FOR INTERMITTENT DEMAND FORECASTING FOR RARE SPARE PARTS

Gisun Jung1, Jinsoo Park2, Yohan Kim3 and Yun Bae Kim3,*
1 Department of Industrial Engineering
Sungkyunkwan University
Suwon, Korea
2 Department of Management Information System
Yongin University
Yongin, Korea
3 Department of Systems Management Engineering
Sungkyunkwan University
Suwon, Korea
* Corresponding author’s e-mail: kimyb@skku.edu

Effective inventory management requires timely and accurate forecasting of demand for parts or items. In many real-world scenarios, however, the demand for rare, high-cost spare parts is scarce and erratic, making it highly challenging to produce a reliable forecast for intermittent demand. Past studies offered two approaches to such intermittent demand forecasting: a traditional approach that estimates demand parametrically, and a non-parametric approach that estimates the distribution of demand. The bootstrap method is considered one of the key non-parametric methods available. Despite its usefulness, however, the application of conventional bootstrap methods to intermittent demand forecasting does not take into account any dependence structures existing in lead-time demand, leading to inaccurate forecasts. In this paper, we suggest a new bootstrap method that takes into consideration the unique characteristics of intermittent demand to improve forecasting performance. We conclude by demonstrating the applicability of the suggested method through the results of a simple case experiment.

Keywords: intermittent demand, non-parametric forecasting, bootstrap, inventory management

(Received on March 10, 2016; Accepted on June 2, 2017)

1. INTRODUCTION

Accurate demand forecasting is fundamental to inventory management and supply chain management in various industries. In general, demand series do not contain zero values, that is, periods with no demand. In several industries, however, demand occurs at infrequent, irregular, and often unpredictable intervals (Buffa and Miller, 2004). These include the automobile, train, aircraft, and military equipment industries.
In this regard, demand can be classified into three types by the proportion of periods with zero demand. First, the general type of demand that does not have zero values is said to be non-intermittent. If demand shows sporadic periods of zero demand (under 20%), it is said to be partially intermittent. If demand has numerous periods of zero demand, it is said to be intermittent. In other words, intermittent demand is a specific type of demand with a large proportion of zero values. In intermittent demand series, non-zero values are stochastic in nature (there is no rule governing when the non-zero values occur) and typically have relatively large variances. Because of these characteristics, intermittent demand is often called slow-moving or lumpy demand and is difficult to forecast. Traditional forecasting methods, such as exponential smoothing or the moving average, are designed for general demand that has no zero values. Thus, traditional methods do not perform well for intermittent demand.
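A minimal sketch of this classification by the share of zero-demand periods is given below; the demand series is made up, and the 20% boundary used for "numerous" zero periods is an assumption based on the wording above.

# Classify a demand series by its share of zero-demand periods (illustrative sketch).
def classify_demand(series):
    zero_share = sum(1 for d in series if d == 0) / len(series)
    if zero_share == 0:
        return "non-intermittent"
    if zero_share < 0.20:
        return "partially intermittent"
    return "intermittent"           # "numerous" zero periods, assumed to mean 20% or more

spare_parts = [0, 0, 3, 0, 0, 0, 7, 0, 1, 0, 0, 0]   # made-up monthly demand
print(classify_demand(spare_parts))                   # -> "intermittent"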
Figure 1 shows the (a) time plot and (b) histogram of an intermittent demand example. As shown in Figure 1(a), the
variance of non-zero values is quite large. This makes forecasting intermittent demand more difficult. In addition, the series
of sums of total demand during the next lead time (called lead time demand) can have zero values because of the large



International Journal of Industrial Engineering, 24(3), 255- 271, 2017

A UTILITY SCALE DEVELOPMENT FOR GESTURE INTERFACES: APPLIED TO VISUAL DISPLAY PRODUCTS

Eunjung Choi1, Taebeum Ryu2,*, Bora Kang1, and Min K Chung1
1 Department of Industrial and Management Engineering, Pohang University of Science and Technology
Pohang, South Korea
2 Department of Industrial and Management Engineering, Hanbat National University
Daejeon, South Korea
* Corresponding author’s e-mail: tbryu75@gmail.com

In the case of new technology, user acceptance is significantly affected by its utility and usability. However, previous studies regarding gesture interfaces have focused on usability while neglecting utility. In this study, an attempt was made to develop a gesture interface utility scale and to analyze the relationship between utility and user acceptance. We collected 12 utility items related to a gesture interface from previous studies and surveyed the extent of gesture interface utility for visual display products while participants used them. The utility scale was developed using exploratory factor analysis and was revised and verified using confirmatory factor analysis. The relationship between the gesture utility of the visual display products and user acceptance was analyzed using a structural equation model. As a result, the gesture utility scale comprised three factors: contact-freeness, naturalness, and expressiveness. Among these factors, contact-freeness and expressiveness were observed to significantly influence the user acceptance of the display products.

Keywords: gesture interface, user acceptance, usefulness, utility, scale development

(Received on December 01, 2014; Accepted on September 1, 2017)

1. INTRODUCTION

Nowadays, users prefer products that are easy to use and efficient. To meet these user preferences, several researchers have
attempted to develop various alternatives for future interfaces that will allow direct and intuitive control through visual
recognition, and voice and gesture commands. Several products have adapted interfaces that make use of simple gestures
involving physical movements of the fingers, hands, arms, head, and face (Hummels and Stappers, 1998; Mitra and
Archarya, 2007) owing to their natural, direct, and immediate access (Urakami, 2012). From simple 2D hand gestures, such
as a tap, drag, drop, and flip, that are adapted to products having a touch screen, the gestures are extended to more
complicated multi-touch-based gestures such as zoom in, zoom out, and rotate. More recently, products, such as home
appliances and pet robots, that enable remote control through 3D hand gestures are being developed (Bhuiyan and Piking,
2009; Guo and Sharlin, 2008; Kühnel et al., 2011). The gesture interface is expected to be applied to several systems as a
direct and intuitive interface in the near future (Bhuiyan and Picking, 2009; Saffer, 2008; Shan, 2010; Wachs et al., 2011).
New technologies such as a gesture interface must take into consideration user acceptance, which determines whether
the users are likely to use the products, in order to find their niche in the existing market (Anthony et al., 2008; Davis, 1993;
Dillon and Morris, 1996; Gould et al., 1991; Kaasinen, 2005; McCarroll, 1991; Nickerson, 1981). Moreover, it is very
important to determine the factors that may influence user acceptance at the beginning of the product development stages
(Davis, 1993; Tricot, 2004). The technology acceptance model (TAM) is a widely used method in the information
technology (IT) field that allows developers to predict the user acceptance of a new product (Veiga et al., 2001). This
model takes into consideration perceived usefulness (PU), which measures the usefulness of a product, and the perceived
ease of use (PEU), which measures the ease of use of the product, as the two main factors that affect the user acceptance
(Davis, 1989). In addition, a short multiple-item questionnaire has also been designed to evaluate these factors. TAM is
widely used in various fields such as mobile communication technologies and IT educational service fields (Davis, 1989;
Fenech, 1998; Ha et al., 2007; Horton et al., 2001; Kuo and Yen, 2009; Maneesoonthorn and Fortin, 2006; Nysveen et al.,
2005). However, the questionnaire designed for TAM only measures the PU and PEU for a general IT system, and it is



International Journal of Industrial Engineering, 24(3), 272-283, 2017

UNIFIED LOGICAL MODEL TO IDENTIFY FAULTS IN A PLC CONTROLLED MANUFACTURING SYSTEM

Arup Ghosh1, Shiming Qin1, Gi-Nam Wang1, Jooyeoun Lee1, and Hee-Young Jang2
1 Department of Industrial Engineering, Ajou University
Suwon, Korea
Corresponding author’s e-mail: arupghosh22.3.89@gmail.com
2 Korea National Industrial Convergence Center, Korea Institute of Industrial Technology
Ansan, Korea

In this paper, we present a novel approach that can identify operational faults associated with the control process by using log data records of the PLC program signals. The proposed approach automatically creates a logical model of the PLC control process, called the unified logical model, and then utilizes that model to detect faults. The unified logical model includes the
signal-state logical models of the devices and the relational logical models of the device groups. The signal-state logical
model is designed to depict the device behavior, and the relational logical model is designed to describe the relationship
between the dependent devices. The proposed approach automatically generates the unified logical model from the PLC
signal log data records and employs a hash table based model indexing and fault searching scheme to identify the faults in the
manufacturing system. Experimental results show that the proposed model can be utilized to detect the faults effectively.

Keywords: fault detection; control process modeling; log data analysis; manufacturing; PLC

(Received on September 27, 2014; Accepted on September 24, 2017)

1. INTRODUCTION

Nowadays, most manufacturing systems are controlled by PLCs, mainly because of their adaptability, modularity, robustness, and low cost (Basile et al., 2013; Hu et al., 1999a; Hu et al., 1999b; Qin and Wang, 2012). Operational faults associated with PLC control processes occur most often (about 70%) among all kinds of faults (Hu et al., 1999b; Hu et al., 2003). It is very difficult to identify such faults (and their detailed effects) in a PLC controlled manufacturing system, because the PLC has a very inflexible programming system and neither the PLC device nor its control system contains any inherent module for this task (Qin and Wang, 2012) [in this work, the term “fault” refers to the operational fault]. Great damage to the system can result if faults propagate through it. Moreover, it becomes harder to identify the root cause of a fault once it has propagated through the system; hence, real-time identification of faults is necessary for the manufacturing industry. Usually, operation engineers who were not originally involved in the PLC programming task find it difficult to repair or analyze the fault sources because of their lack of knowledge about the control logic and the fault itself. This issue can be addressed by providing the engineers with an easy graphical representation of the control process characteristics.
There is an increasing need for an approach that can solve the problems mentioned above, and that is what we address in this work. Our proposed approach uses the log data records of PLC program signals to satisfy those needs. Rapid advances in communication technology (between the PLC I/O board and the computer) have dramatically improved the timeliness and accuracy of PLC signal log data records. These days, data loggers are able to produce highly accurate log data records of the PLC program signals in real time, which was infeasible even a few years ago (see, for instance: Zaharia et al., 2011; Kepware Data Logger Manual, 2017; Mitsubishi Data Logger Manual, 2010). Thereby, the log data records of PLC program signals have become a useful source of information for detecting faults in real time. Our proposed approach exploits this opportunity efficiently.
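One simple way to picture how signal logs can drive fault detection is sketched below: a hash table of signal-state transitions observed during normal operation is learned per device, and any transition in a new log that was never observed is flagged. This is only an interpretation of the general idea; the log format, device names, and flagging rule are assumptions, not the paper's unified logical model.

# Hash-table-based transition model learned from PLC signal logs (illustrative sketch).
# Each log record is assumed to be (timestamp, device, signal_state).
from collections import defaultdict

def learn_model(normal_log):
    """Map each device to the set of (previous_state, next_state) pairs seen in normal runs."""
    model, last = defaultdict(set), {}
    for _, device, state in normal_log:
        if device in last:
            model[device].add((last[device], state))
        last[device] = state
    return model

def find_faults(model, test_log):
    """Report transitions that never occurred during normal operation."""
    faults, last = [], {}
    for ts, device, state in test_log:
        if device in last and (last[device], state) not in model[device]:
            faults.append((ts, device, last[device], state))
        last[device] = state
    return faults

normal = [(0, "clamp", "OPEN"), (1, "clamp", "CLOSED"), (2, "clamp", "OPEN"),
          (3, "clamp", "CLOSED"), (4, "clamp", "OPEN")]
test = [(0, "clamp", "OPEN"), (1, "clamp", "OPEN"), (2, "clamp", "CLOSED")]
print(find_faults(learn_model(normal), test))   # OPEN -> OPEN was never observed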

2. BACKGROUND STUDY

The literature related to this area is primarily focused on the verification of control logic or PLC programs (Park et al., 2009;
Rausch and Krogh, 1998; Moon, 1994; Park et al., 2008). As this subarea has matured, the focus has shifted to fault detection. This shift is due to the realization that the control process may not be working correctly even though the
control logic or PLC programs are correct. Unfortunately, there is a substantial lack of literature in this particular field. In past



International Journal of Industrial Engineering, 24(3), 284-294, 2017

MULTI-OBJECTIVE ERGONOMIC WORKFORCE SCHEDULING UNDER COMPLEX WORKER AND TASK CONSTRAINTS

Tanuchporn Wongwien and Suebsak Nanthavanij*
Engineering Management Program, Thammasat University
Pathum Thani, Thailand
* Corresponding author’s e-mail: suebsak@siit.tu.ac.th

When workers are assigned to perform a set of tasks with exposure to some safety/ergonomics hazard, they can be rotated
among those tasks periodically within a workday so that their daily hazard exposures do not exceed a permissible exposure
limit. Assigning the right workers to the right tasks in each period can help to increase the work system productivity. It is
necessary to consider worker limitations and job requirements when developing an optimal set of work schedules for all
workers. A multi-objective ergonomic workforce scheduling problem (MO-WSP), with combined cost and productivity
considerations, is described. Three problem objectives are considered: (1) minimizing the number of utilized workers, (2)
maximizing the total worker-task fit score, and (3) minimizing the total worker-task changeover. A preemptive optimization
approach and a heuristic procedure are employed to generate safe daily rotating work schedules for workers.

Keywords: workforce scheduling; safety; hazard exposure reduction; job rotation; multi-objective optimization

(Received on September 03, 2014; Accepted on April 22, 2017)

1. INTRODUCTION

The workforce scheduling problem (WSP) involves assigning a set of workers/employees to perform a set of tasks over a
given time period. The scheduling of airline crew and bus/train drivers in the transportation industry is one example of the
WSP that has received considerable attention from researchers (Qi et al., 2004; Kwan, 2004). Although the WSP basically aims to develop a feasible worker-task timetable under a number of complex restrictions, its objective depends on the key problem of each application. When only one problem objective is of interest, the WSP can be constructed to minimize either
the number of workers (Lagodimos and Leopoulos, 2000) or the total cost (Fowler et al., 2008) to accomplish certain tasks,
or to maximize the productivity performance (Chu, 2007). Nevertheless, the single-objective WSP is found to be insufficient
to deal with most practical problems which require simultaneous consideration of two or more objectives. The nurse
scheduling problem is a good example of having multiple conflicting objectives. Apart from satisfying demand coverage
requirements, schedulers have to maximize nurses’ preferences with respect to a variety of constraints imposed by legal
regulations, personnel policies, and many other hospital-specific requirements (Zhou et al., 2012).
When multiple objectives are considered, the solution can be obtained using an optimization, heuristic, or meta-heuristic
approach. For the optimization approach, efficient solution techniques include preemptive goal programming (Sayin and
Karabati, 2007) and non-preemptive goal programming (Gomar et al., 2002; Topaloglu, 2006). Mathirajan and Ramanathan
(2007) used both techniques in their tour scheduling research and compared the obtained solutions. Castillo et al. (2009)
developed a heuristic method to solve their workforce scheduling problem with two objectives, i.e., minimizing the cost and
maximizing the service level.
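The preemptive (lexicographic) idea referred to above can be sketched on a toy assignment problem: the first-priority objective is optimized, its optimum is fixed as a constraint, and the next objective is then optimized within that restricted set. The workers, tasks, fit scores, and capacity limit below are made up for illustration and are not from any of the cited studies.

# Lexicographic ("preemptive") two-objective assignment sketch:
# first minimize the number of workers used, then maximize total worker-task fit.
from itertools import product

FIT = {("W1", "T1"): 5, ("W1", "T2"): 2, ("W1", "T3"): 4,
       ("W2", "T1"): 3, ("W2", "T2"): 5, ("W2", "T3"): 1,
       ("W3", "T1"): 4, ("W3", "T2"): 4, ("W3", "T3"): 5}   # assumed fit scores
WORKERS, TASKS = ["W1", "W2", "W3"], ["T1", "T2", "T3"]
MAX_TASKS_PER_WORKER = 2                                     # simple capacity constraint

def feasible(assignment):
    load = {w: 0 for w in WORKERS}
    for w in assignment:
        load[w] += 1
    return all(v <= MAX_TASKS_PER_WORKER for v in load.values())

candidates = [dict(zip(TASKS, a)) for a in product(WORKERS, repeat=len(TASKS))
              if feasible(a)]

# Priority 1: fewest distinct workers; its optimum becomes a hard constraint.
min_workers = min(len(set(c.values())) for c in candidates)
shortlist = [c for c in candidates if len(set(c.values())) == min_workers]

# Priority 2: maximize total fit score within the shortlist.
best = max(shortlist, key=lambda c: sum(FIT[(w, t)] for t, w in c.items()))
print("workers used:", min_workers, "assignment:", best)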
Workforce scheduling that includes job rotation is not new. According to Baker (1976) and Ernst et al. (2004), shift
scheduling or day-off scheduling is a common problem in the healthcare service system. Many researchers included the
concept of job rotation in their studies. Musliu et al. (2002) constructed a framework that includes four main steps with a
backtracking algorithm for rotating workforce schedules. Alfares (2003) developed an integer programming model and a
two-stage solution method for the flexible 4-day workweek scheduling problem with weekend and work frequency constraints.
For large-sized WSPs, Musliu (2003) and Mora and Musliu (2004) applied methods based on a heuristic, genetic algorithm
(GA), and tabu search to obtain rotating work schedules. Nevertheless, very few researchers considered workplace safety or
ergonomics when they developed work schedules.
Nanthavanij and Yenradee (1999) proposed a quantitative approach to job rotation by developing a mathematical model
for the problem with equal numbers of workers and tasks. They also investigated the effect of work period length on noise
hazard reduction (Nanthavanij and Yenradee, 2000a). Later, they developed a mathematical model to determine the minimum
number of workers for job rotation (Nanthavanij and Yenradee, 2000b). For complex safety-based job rotation problems, a



International Journal of Industrial Engineering, 24(3), 295-305, 2017

COMPARISON BETWEEN CONDITION-BASED, AGE-BASED AND FAILURE-BASED MAINTENANCE POLICIES IN PARALLEL AND SERIES CONFIGURATIONS: A SIMULATION ANALYSIS

M. Sheikhalishahi*, H. Heidaryan-Baygy, S. Abdolhossein Zadeh, A. Azadeh
School of Industrial and Systems Engineering, College of Engineering, University of Tehran
Tehran, Iran
* Corresponding author’s e-mail: m.alishahi@ut.ac.ir

This paper investigates the performance of condition-based maintenance (CBM), age-based maintenance (ABM) and failure-
based maintenance (FBM) policies in series and parallel multi-resource systems. Condition-based maintenance is regarded as a significant maintenance policy that makes use of the component's operating condition to predict failure occurrence. The advantage of implementing CBM is that the component is preventively maintained only when necessary; as a result, the organization benefits from saved resources and improved system availability. Since implementing CBM is impossible in some situations, ABM and FBM are considered as two alternatives to CBM. In this paper, these three policies are compared with each other using simulation. To evaluate and analyze the three maintenance policies in parallel and series configurations, three performance indicators, namely efficiency (E), total maintenance cost (TMC), and average queue length (AQ), are introduced for all scenarios. An illustrative case study is used to investigate the performance of these policies under various conditions. The results demonstrate that, in general, CBM performs better than ABM, which in turn shows better results than FBM. This is the first study that compares CBM, ABM, and FBM policies in series and parallel configurations while considering efficiency, maintenance cost, and queue length as criteria.

Keywords: Condition-based maintenance (CBM), Age-based maintenance (ABM), Failure-based maintenance (FBM),
ARENA

(Received on December 5, 2014; Accepted on September 24, 2017)

1. INTRODUCTION
Many manufacturing procedures and processes, as well as the equipment operating in industries, suffer from growing wear
and aging. Components and equipment are consequently subject to random failures stemming from this degradation. The
deterioration of performance could be disastrous for some components, such as those used in aircraft systems, medical equipment, and network servers. For instance, in an airplane, vital flight components like actuators, valves, pumps, and engine control electronics must function appropriately for a safe flight (Keller et al., 2006). Implementing continuous and regular maintenance of the electronic components (systems and sub-systems) can ensure their consistent function and extend the useful life of the equipment.
There exist various techniques for maintenance activities. According to Tsang (1995), the two main types of maintenance activities are preventive maintenance (PM) and corrective maintenance (CM). Preventive maintenance aims at precluding failures and is implemented at prearranged time intervals. Put simply, PM seeks to repair or replace components before a failure occurs, whereas corrective maintenance is a reactive activity whose task is to repair or replace the failed parts of a system. In both cases, the equipment stays out of service until the maintenance activity is carried out completely.
When it is possible to monitor the system’s status - continuously for operating systems and by test and inspections for
stand-by safety systems - a condition-based maintenance (CBM) technique can be conducted. In CBM, the decisions for
maintenance activities are made based on the observed condition of the component. Taking advantage of this, CBM tries to
prevent unexpected downtime as well as unnecessary maintenance activities. This technique has high potential in systems such as nuclear power stations, aerospace components and offshore installations that operate under demanding circumstances which can endanger their reliability and functionality (Marseguerra et al., 2002). Rao (1996) argues that condition-based maintenance has been shown to minimize the cost of maintenance, improve operational safety and decrease the number and severity of in-service system failures. Niu et al. (2010) presented a novel CBM system that makes
use of reliability-centered maintenance (RCM) mechanism to minimize the maintenance costs and exploits a data fusion
technique for improving condition monitoring, health evaluation and prognostics.
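As a rough illustration of the kind of policy comparison the paper performs by simulation (though not its ARENA model or its series/parallel configurations), the sketch below compares failure-based and age-based maintenance costs for a single component; the Weibull lifetime parameters, cost values, and replacement interval are all assumptions.

# Monte-Carlo comparison of failure-based (FBM) vs. age-based (ABM) maintenance
# for one component over a fixed horizon (all parameters are assumed values).
import random

SHAPE, SCALE = 2.0, 100.0          # Weibull lifetime parameters (wear-out behaviour)
C_FAIL, C_PREV = 10.0, 2.0         # cost of a corrective vs. a preventive replacement
HORIZON = 10_000.0

def weibull_life():
    return SCALE * random.weibullvariate(1.0, SHAPE)

def total_cost(abm_interval=None):
    """Total maintenance cost; abm_interval=None means pure failure-based maintenance."""
    t, cost = 0.0, 0.0
    while t < HORIZON:
        life = weibull_life()
        if abm_interval is not None and life > abm_interval:
            t += abm_interval                  # replaced preventively before failing
            cost += C_PREV
        else:
            t += life                          # component failed in service
            cost += C_FAIL
    return cost

random.seed(1)
runs = 200
fbm = sum(total_cost(None) for _ in range(runs)) / runs
abm = sum(total_cost(70.0) for _ in range(runs)) / runs
print(f"average cost  FBM: {fbm:.0f}   ABM(interval=70): {abm:.0f}")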



International Journal of Industrial Engineering, 24(3), 306- 327, 2017

PRODUCT PRICE FORECASTING BASED ON CORRELATIVE PRICE NET AND NEURAL NETWORKS

Hong-Sen Yan1,*, Nan-Yun Jiang1,2, Wen-Wu Shi1, Xian-Gang Meng1,3, and Tian-Hua Jiang1
1 Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education and School of Automation, Southeast University
Nanjing, China
* Corresponding author’s e-mail: hsyan@seu.edu.cn
2 Department of Economics and Management, Nanjing Technology University
Nanjing, China
3 School of Automation, Tianjin University of Technology
Tianjin, China

Manufacturing enterprises need price forecasting in order to attain maximum profit and guide production. However, it is difficult to trace the changing rules of historical data since product prices may fluctuate abruptly. Therefore, a price forecasting method based on a correlative price net and neural networks is proposed in this paper. Firstly, by analyzing the main factors behind product price changes, a model of the correlative price net (CPN), which connects many products whose prices affect each other, is set up. Then, a theoretical proof that the whole CPN can be substituted by its correlative price sub-net for price forecasting is provided. Based on the correlative price sub-net, a set of BP neural networks is introduced to build a new network, named the price forecasting net (PFN), which can reflect the factors of price fluctuation in the correlative price sub-net. Finally, since changes in the factors of the correlative price sub-net relate to the resulting variation in product price, the product price can be forecast from those factor changes by means of the PFN. The theoretical analysis and simulation experiments show that the proposed method adapts well to product price forecasting, especially when prices change abruptly.

Keywords: price forecasting, self-learning, correlative price net, neural networks

(Received on July 11, 2015; Accepted on May 13, 2017)

1. INTRODUCTION

Price forecasting analyzes the various factors that constitute and influence price movement, on the basis of the general laws of price movement, and estimates the price of a future period from historical and current performance, prices, and market supply and demand. Product prices change constantly and influence the overall profit of the enterprise. Therefore, manufacturing enterprises need to forecast the variation of product prices, estimate the changing trend of product profit, and determine which products should be manufactured or terminated on the basis of market information. Accordingly, price forecasting has been one of the hotspot issues that many scholars focus on.
Earlier forecasting methods, such as linear and nonlinear regression (Choubin et al., 2016; Kao et al., 2013), the moving average (Barrow, 2016), and exponential smoothing (Tratar et al., 2016), seek the rules of change through mathematical analysis and then predict future data. In recent years, with the advent of new data processing techniques, new forecasting methods have emerged, such as the wavelet transform, support vector machines, clustering algorithms, artificial neural networks (ANNs), and so on.
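As a small generic illustration of ANN-based price forecasting (not the correlative-price-net model proposed in this paper), the sketch below trains a back-propagation network on lagged prices of a product and of one assumed correlated product; all data are synthetic.

# Back-propagation network forecasting a product price from its own lags and the lags
# of one correlated product (synthetic data; not the paper's CPN/PFN model).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, LAGS = 300, 3
correlated = 50 + np.cumsum(rng.normal(0, 1, n))          # price of a related product
price = 0.6 * correlated + 20 + rng.normal(0, 1, n)       # target product price

X = np.column_stack([price[i:n - LAGS + i] for i in range(LAGS)] +
                    [correlated[i:n - LAGS + i] for i in range(LAGS)])
y = price[LAGS:]                                           # next-period price to predict

split = int(0.8 * len(y))
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test MAE:", round(np.mean(np.abs(pred - y[split:])), 3))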
The wavelet transform is a relatively new data processing tool and, owing to its strong processing capacity and distinct characteristics, is often applied to time series forecasting (Tascikaraoglu et al., 2016). The support vector machine is a good classification tool based on statistical learning theory; due to its good performance on small training samples, it has been introduced for forecasting in cases with few sample data (Zhang et al., 2013; Kaya et al., 2012; Çaydaş and Ekici, 2012; Benkedjouh et al., 2013). The clustering algorithm is another commonly used tool for time series forecasting; it groups sets of data by their


International Journal of Industrial Engineering, 24(3), 328- 339, 2017

A DATA-DRIVEN TEXT SIMILARITY MEASURE BASED ON CLASSIFICATION ALGORITHMS

Su Gon Cho, Seoung Bum Kim*
School of Industrial Management Engineering, Korea University
Seoul, Korea
* Corresponding author’s e-mail: sbkim1@korea.ac.kr

Measuring text similarity is fundamental to various text mining applications. This paper proposes a new method based on classification algorithms for measuring the similarity between two texts. Specifically, a sentence-term matrix that describes the frequency of terms occurring in a collection of sentences is created, and the classification accuracy between two texts is measured. Our idea is based on the fact that similar texts are difficult to distinguish from each other, which should lead to a low classification accuracy between similar texts. Through comparative experiments against several widely used text similarity measures, analysis results with real data from the Machine Learning Repository at the University of California, Irvine demonstrate that the proposed method outperforms the other existing similarity measures across the entire range of term selection filters.

Keywords: classification, sentence-term matrix, text similarity measure, text mining

(Received on September 16, 2015; Accepted on April 04, 2017)

1. INTRODUCTION

The rapid growth of the World Wide Web and online information services has generated and made accessible a huge number of text documents. Although these text documents are meant to be useful, their sheer volume and extensive availability have compromised their usefulness. Most people do not have the time to read entire texts to make critical decisions
based on the information accessible to them. Therefore, ways to determine the key concepts or main themes of texts have
recently become the focus of considerable research (Huang, 2008; Lee, Wang, et al., 2015; Ur-Rahman et al., 2012). In an
effort to examine the relationships between texts, researchers have attempted to identify meaningful patterns and trends and
then extract information from large volumes of text data.
Text mining refers to the process of automatically extracting and discovering useful information in unstructured texts
(Alguliev et al., 2013; Chakraborty et al., 2013; Gupta et al., 2009). Text mining has been applied to a variety of research
areas. These include text classification, text clustering, opinion mining, and information extraction and retrieval. In all these
areas, measuring the degree of textual similarity is essential to identifying the semantic relationships among texts.
Measuring text similarity is a fundamental process that numerically evaluates the semantic similarity of two texts
(Huang et al., 2012; Yahya et al., 2013). Text similarity measurements have been used in various text mining applications,
including text classification (Chen et al., 2011; Lee et al., 2012) and clustering (Jun et al., 2014; Shi et al., 2013), information
retrieval (Pai et al., 2013; Tan et al., 2003), and word-sense disambiguation (Agirre et al., 2014; Schütze, 1998). Recently,
text similarity measurements have been used in extractive summarization (Aliguliyev, 2009; Kim et al., 2013; Lin et al.,
2003), automatic evaluation of machine translations (Denkowski et al., 2011; Papineni et al., 2002), text coherence testing
(Crossley et al., 2011; Lapata et al., 2005), search engine advertising (Hur et al., 2015; Ortiz-Cordova et al., 2012), and the
investigation of similar geographic features in geoinformatics (Janowicz et al., 2014; Lee et al., 2012).
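The classification-based idea summarized in the abstract above can be sketched as follows: sentences from two texts are labeled by source, a classifier is cross-validated on the sentence-term matrix, and accuracy near chance level is read as high similarity. The example texts, the choice of a naive Bayes classifier, and the accuracy-to-similarity mapping are illustrative assumptions, not the authors' exact procedure.

# Classification-accuracy-based similarity between two texts (illustrative sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score
import numpy as np

text_a = ["the robot loads the wafer", "the chamber is cleaned after processing",
          "the wafer moves to the next module", "processing time depends on the recipe"]
text_b = ["a wafer is loaded by the robot", "after processing the chamber gets cleaned",
          "each wafer proceeds to the following module", "the recipe sets the processing time"]

sentences = text_a + text_b
labels = np.array([0] * len(text_a) + [1] * len(text_b))   # which text each sentence came from

X = CountVectorizer().fit_transform(sentences)             # sentence-term matrix
accuracy = cross_val_score(MultinomialNB(), X, labels, cv=4).mean()

# Accuracy near 0.5 (chance level) means the texts are hard to tell apart, i.e. similar;
# the linear rescaling below is one simple way to map accuracy onto a 0-1 similarity score.
similarity = 1.0 - max(0.0, (accuracy - 0.5) / 0.5)
print(f"classification accuracy: {accuracy:.2f}  ->  similarity: {similarity:.2f}")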
In general, two types of similarity measurements exist: knowledge-based and corpus-based (Aker et al., 2015; Mihalcea
et al., 2006). The goal of knowledge-based measurement is to quantify the degree of similarity based on information from
semantic networks that use structured vocabularies to create highly formal knowledge representations (e.g., WordNet
hierarchy). Various measurements have been proposed for knowledge-based areas, such as the path similarity measurement and the similarity measurements of Leacock and Chodorow; Wu and Palmer; Lesk; Resnik; Lin; and Jiang and Conrath (for details, see Gomaa et al., 2013; Rada et al., 1989). All these measurements have been successfully applied to text mining
tasks. To further improve the performance of knowledge-based measurement methods, various morphological analyses have
been used, such as stemming, stop-word removal, part-of-speech tagging, and longest subsequence matching (Salton et al.,
1997). However, it is difficult to obtain a reliable performance measure because knowledge databases have been assembled

