centers, which predominantly receive calls rather than initiate them, face the classical planning problems of forecasting and scheduling under uncertainty, with a number of industry-specific complications [Mehrotra 1997]. Well-known cases include L. L. Bean’s phone-order business [Andrews and Parsons 1989], the IRS’s toll-free taxpayer information system [Harris, Hoffman, and Saunders 1987], and AT&T’s operational design studies [Brigandi et al. 1994].

In their planning, most call centers target a specific service level, defined as the percentage of callers who wait on hold for less than a particular period of time. For example, a sales call center may aim to have 80 percent of callers wait for less than 20 seconds. A related measure often used to assess call center performance is the average time customers wait on hold, or average speed to answer (ASA).

Another key performance measure is the abandonment rate, defined as the percentage of callers who hang up while on hold before talking to an agent. It is both intuitive and well known throughout the call-center industry that customer abandonment rates and customer waiting times are highly correlated. High abandonment rates are bad for several reasons, most notably because a customer who hangs up is typically an unsatisfied customer and is much less likely to view the company favorably [Anton 1996]. Many analysts (for example, Andrews and Parsons [1993] and Grassmann [1988]) have directly modeled an economic value associated with customer waiting time and abandonment. Furthermore, such callers will with some probability call back and thereby help sustain the load on the system [Hoffman and Harris 1986]. Thus, call center managers must balance service level (a chief driver of customer satisfaction) with the number of agents deployed to answer the phone (comprising 60 to 80 percent of the cost of operating a call center).

A Specific Call Center Problem

We conducted our analysis for the managers of a technical support call center of a major software company. At the time, September 1995, it was the only company in its industry that provided free technical support over the phone to its software customers. Agents often provided a combination of software solutions and business advice to customers, and the customers perceived the service provided to be valuable. The software company’s customer base and sales volume were growing rapidly.

Unfortunately, its call center had very poor service levels, with 80 percent of calls waiting five to 10 minutes, well above the firm’s three-minute target. Predictably, abandonment rates were high, approaching 40 percent on some days. To senior management, this situation was intolerable.

At first glance, the solution seemed obvious: simply add more agents on the phones, reducing waiting times and agent utilization. However, because this would increase the percentage of total revenues dedicated to free telephone technical support, it would adversely affect the bottom line.
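The three metrics discussed above can be computed directly from call records. The sketch below is purely illustrative: the call data and the 20-second threshold are made up, and the choice of answered calls as the service-level denominator is one common convention (some centers include abandoned calls in the denominator).

```python
# Hypothetical call records: (wait_seconds, abandoned?)
calls = [
    (5, False), (45, False), (12, False), (90, True),
    (18, False), (200, True), (8, False), (25, False),
]

answered = [w for w, ab in calls if not ab]
abandoned = [w for w, ab in calls if ab]

# Service level: share of answered calls that waited under the target
# (denominator convention varies by call center).
service_level = 100 * sum(w < 20 for w in answered) / len(answered)

# ASA: average speed to answer, among calls that reached an agent.
asa = sum(answered) / len(answered)

# Abandonment rate: abandoned calls as a share of all calls offered.
abandon_rate = 100 * len(abandoned) / len(calls)

print(service_level, asa, abandon_rate)
```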
May–June 2001 89
SALTZMAN, MEHROTRA
Figure 2: We analyzed the call center using an Arena simulation model. The figure shows the
entire model as it appears on the screen in the Arena environment. Five data modules at the
top define essential run characteristics of the model, variables, expressions, and performance
measures about which statistics will be gathered, such as the number of callers of each type
who abandon the queue and who reach an agent. The 3 × 4 table continually updates the val-
ues of these performance measures. Calls are animated in the Call Center area where they can
be seen waiting in line, occasionally abandoning, and being served. The modules at the bottom
of the figure frequently check each call on hold to see if it is ready to abandon the queue.
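The dynamics the caption describes (random arrivals, a hold queue with abandonment, parallel agents) can be sketched outside Arena. Below is a minimal pure-Python event-driven version; only the 0.15-minute mean interarrival time comes from the article. The number of agents, the handle-time and patience distributions, and the rapid-caller share are assumptions for illustration, and unlike the Arena model, which polls the queue every six simulated seconds, this sketch checks each caller's patience deadline when an agent frees up.

```python
import heapq
import random

random.seed(42)

S = 84                 # number of agents (assumed)
MEAN_IA = 0.15         # mean interarrival time in minutes (from the article)
MEAN_HANDLE = 12.0     # mean handle time in minutes (assumed exponential)
MEAN_PATIENCE = 8.0    # mean time before a caller abandons (assumed)
P_RAPID = 0.20         # fraction of rapid (priority) callers (assumed)
HORIZON = 720.0        # 12-hour terminating day, in minutes

busy = 0               # agents currently on calls
queue = []             # heap of waiting calls: (priority, arrival_time, deadline)
events = [(random.expovariate(1 / MEAN_IA), "arrival")]
served = abandoned = 0
waits = []

def start_service(now):
    """Seize an agent and schedule the service completion."""
    global busy
    busy += 1
    heapq.heappush(events, (now + random.expovariate(1 / MEAN_HANDLE), "done"))

while events:
    now, kind = heapq.heappop(events)
    if kind == "arrival":
        if now > HORIZON:
            continue                   # stop taking calls after the day ends
        heapq.heappush(events, (now + random.expovariate(1 / MEAN_IA), "arrival"))
        prio = 0 if random.random() < P_RAPID else 1   # rapid callers sort first
        if busy < S:
            waits.append(0.0)
            served += 1
            start_service(now)
        else:
            deadline = now + random.expovariate(1 / MEAN_PATIENCE)
            heapq.heappush(queue, (prio, now, deadline))
    else:                              # a service just completed
        busy -= 1
        while queue:                   # next caller still willing to wait
            prio, arrived, deadline = heapq.heappop(queue)
            if deadline < now:
                abandoned += 1         # hung up before reaching an agent
            else:
                waits.append(now - arrived)
                served += 1
                start_service(now)
                break

# Callers still queued at the end of the day are simply dropped here;
# the article instead counted end-of-day in-progress calls as served.
print(f"served {served}, abandoned {abandoned}, "
      f"mean wait {sum(waits) / len(waits):.2f} min")
```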
also plots the average agent utilization, expressed as a percentage of the total number of servers.

The main flow and animation of calls occurs in the call center area of the model. Calls enter the Arrive module, with the time between arrivals being exponentially distributed, and they are immediately assigned random values for three key attributes: handle time, abandonment time, and customer class. During execution, rapid calls appear on screen as small green telephones moving through the center while regular calls appear as blue telephones.

After arrival, a call is either put through to an available agent or put on hold to wait in queue until an agent is assigned to handle it. Rapid customers have priority over regular customers and, if they have to wait at all, appear at the head of the queue.

The call center’s phone system had essentially unlimited capacity to keep calls on hold, so we specified no queue-length limit in the model. The client’s telecommunications department stressed that the call center had an exceptionally large trunk-line capacity because of the firm’s existing long-term contracts with its service provider.

During their wait in queue, some calls may abandon the system while the rest eventually reach the Server module. There, each call is allocated to one of the S agents, its waiting time on hold is recorded (Assign and Tally), a counter for the number served is increased by one (Count), and the call is delayed for the duration of its handle time. After service is completed, the agent is released and the call departs from the system.

Once per six simulated seconds, the model creates a logical entity to examine all of the calls on hold at that particular instant. If the Search module finds a call that has waited longer than its abandonment time, the Remove module pulls the call from the call center queue and moves it to the Depart module labeled AbandStats. Checking the queue for abandonment more often than 10 times per minute would probably be more accurate but lead to considerably longer run times (which we had to avoid).

The two modules in the mean-interarrival-time-schedule part of the model allow the arrival rate to vary by time of day. Periodically, the model creates a logical entity (Arrive) to update the value of the mean-interarrival-time variable MeanIATime based on the values contained in the MeanIAT vector. However, because we held MeanIATime constant throughout the day at 0.15 minutes to simplify the analysis, we did not take advantage of this feature.

Failing to offer its suffering customers an alternative for technical support would have been extremely damaging.

Verification and Validation

Because we had so little time, we verified and validated the model quickly and informally. To verify that the model was working as intended, we relied on our experience with building and testing simulation models [Mehrotra, Profozich, and Bapat 1997; Saltzman 1997], and on Arena’s many helpful features. For example, Arena has a completely graphical user interface,
many automated bookkeeping features that greatly reduce the likelihood of programming error, and interactive debugging capabilities that allow the user to stop execution and examine the values of any variable or caller attribute.

To validate that the model’s operation and its output represented the real system reasonably well, we relied on the second author’s experience as a call center consultant and intimate knowledge of the client’s operations. Based on the model’s animation and its average output over many scenarios, he made a preliminary judgment that the model was valid.

Subsequently, the model passed the most important test: the call center managers embraced it. They compared the queue lengths, waiting-time statistics, and abandonment rates in the base case to those in the ACD reports for specific time periods and found the simulated values largely consistent with what they saw in the call center. Once they were comfortable with the base case, they were eager to understand the impact of different adoption levels for the rapid program with various staffing configurations.

Experimentation and Results

The call center is a terminating system that begins each morning empty of calls and ends hours later when agents go home after serving their last calls. For simplicity, we took each replication of the model to be exactly 12 hours, so even though the calls in the system at the end of the day were not served to completion, we counted them as served.

We defined scenarios as specific combinations of S, the number of agents, and P1, the percentage of rapid callers. Testing six values of S and six values of P1 led to a total of 36 scenarios. Since we had to run and analyze many scenarios in just a few days, only 10 replications were executed for each scenario. (We made later runs with 50 replications for several scenarios and obtained results that differed from those for 10 replications by less than five percent.)

Arena’s output analyzer calculated summary statistics across the 10 independent replications [Kelton, Sadowski, and Sadowski 1998]. We report here only the means across these replications for each scenario. Although Arena generates confidence intervals for the mean, we did not report these to the client for fear of complicating the presentation of results (Table 1).

Under the circumstances we tested, rapid customers waited very little (less than half a minute on average among those who did not abandon) because of their priority over regular customers. Consequently, few rapid callers (3.3 to 4.2 percent) abandoned the system. The service level provided to rapid callers was above 95 percent in all but a few cases, achieved at the expense of a much lower level of service for regular callers.

Regular customers waited between 3.35 minutes in the best case (P1 = 0%, S = 90) and 18.5 minutes in the worst case (P1 = 50%, S = 80). When rapid callers made up 20 percent of the callers (P1 = 20%), the average waiting time for regular calls could be improved by 57.5 percent, from 10 to 4.25 minutes, by increasing the number of agents from 80 to 90. This would also reduce the abandonment rate of regular customers by 51 percent, from 21.5 to 10.5 percent.
                                      Number of agents, S
P1   Performance measure              80     82     84     86     88     90
50%  Ave. queue time (min.)—rapid     0.43   0.40   0.38   0.36   0.33   0.30
     Ave. queue time (min.)—regular  18.50  14.30  11.50   9.47   7.68   6.12
     Abandonment rate (%)—rapid       4.2    4.1    4.1    3.8    3.8    3.5
     Abandonment rate (%)—regular    32.0   28.4   24.6   21.0   17.6   14.5
     Service level (%)—rapid         89.5   91.2   92.3   92.9   94.0   95.1
     Service level (%)—regular       16.8   22.7   30.8   40.1   52.2   64.1
40%  Ave. queue time (min.)—rapid     0.33   0.32   0.31   0.29   0.27   0.25
     Ave. queue time (min.)—regular  13.10  11.30   9.55   7.99   6.62   5.32
     Abandonment rate (%)—rapid       4.1    4.0    3.9    3.8    3.7    3.5
     Abandonment rate (%)—regular    27.3   24.2   21.0   18.0   15.3   12.6
     Service level (%)—rapid         94.6   94.9   95.5   95.9   96.6   97.2
     Service level (%)—regular       21.0   28.5   38.0   48.3   60.7   72.1
30%  Ave. queue time (min.)—rapid     0.27   0.27   0.25   0.24   0.22   0.21
     Ave. queue time (min.)—regular  11.20   9.69   8.35   7.06   5.87   4.66
     Abandonment rate (%)—rapid       4.0    4.0    3.8    3.8    3.7    3.5
     Abandonment rate (%)—regular    23.8   21.0   18.4   16.0   13.7   11.2
     Service level (%)—rapid         96.9   97.1   97.9   98.1   98.5   98.4
     Service level (%)—regular       24.3   33.5   44.4   56.6   68.5   80.9
20%  Ave. queue time (min.)—rapid     0.23   0.23   0.21   0.21   0.20   0.18
     Ave. queue time (min.)—regular  10.00   8.78   7.61   6.50   5.30   4.25
     Abandonment rate (%)—rapid       3.9    4.0    3.9    3.7    3.8    3.3
     Abandonment rate (%)—regular    21.5   19.0   16.8   14.7   12.5   10.5
     Service level (%)—rapid         98.3   98.7   98.9   99.0   98.9   99.2
     Service level (%)—regular       28.9   40.1   51.1   62.9   76.0   86.3
10%  Ave. queue time (min.)—rapid     0.21   0.19   0.19   0.18   0.17   0.16
     Ave. queue time (min.)—regular   8.97   7.86   6.82   5.87   4.70   3.74
     Abandonment rate (%)—rapid       3.9    4.1    4.0    3.8    3.8    3.6
     Abandonment rate (%)—regular    19.3   17.2   15.3   13.4   11.4   9.5
     Service level (%)—rapid         99.1   99.3   99.4   99.3   99.6   99.5
     Service level (%)—regular       35.4   46.8   60.0   72.3   84.3   92.4
0%   Ave. queue time (min.)—regular   8.19   7.26   6.31   5.28   4.25   3.35
     Abandonment rate (%)—regular    17.8   16.0   14.2   12.4   10.5   8.8
     Service level (%)—regular       42.4   54.1   68.2   80.4   90.7   96.4
Table 1: We ran the model for 36 scenarios. Entries in the table represent average values across the 10 replications run per scenario. An abandonment rate for each class of callers was derived from the number of abandoned and served calls, expressed as a percentage, that is, Abandonment Rate = 100 × Abandoned/(Served + Abandoned). Service level for rapid callers is the percentage of calls answered within one minute; for regular callers it is the percentage of calls answered within eight minutes.
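Reading Figure 3 amounts to a lookup in Table 1: for a given rapid-caller percentage, find the smallest simulated staff level whose regular-caller average wait meets the eight-minute target. The data below are copied from the regular-caller rows of Table 1; only the lookup logic is new.

```python
# Average queue times (minutes) for regular callers, from Table 1,
# for staff levels S = 80, 82, ..., 90 and rapid-caller percentage P1.
AGENTS = [80, 82, 84, 86, 88, 90]
REGULAR_WAIT = {
    50: [18.50, 14.30, 11.50, 9.47, 7.68, 6.12],
    40: [13.10, 11.30, 9.55, 7.99, 6.62, 5.32],
    30: [11.20, 9.69, 8.35, 7.06, 5.87, 4.66],
    20: [10.00, 8.78, 7.61, 6.50, 5.30, 4.25],
    10: [8.97, 7.86, 6.82, 5.87, 4.70, 3.74],
    0:  [8.19, 7.26, 6.31, 5.28, 4.25, 3.35],
}

def min_agents(p1, target=8.0):
    """Smallest tested staff level whose regular-caller wait meets target."""
    for s, wait in zip(AGENTS, REGULAR_WAIT[p1]):
        if wait <= target:
            return s
    return None   # no tested staff level meets the target

for p1 in (20, 10):
    print(p1, min_agents(p1))
```

The lookup reproduces the staffing conclusions quoted in the text: 84 agents at P1 = 20% and 82 agents at P1 = 10%.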
Another key strategic question was what level of agent staffing would keep average waiting times reasonable for regular calls, that is, at or below eight minutes, while providing superior service for rapid callers (Figure 3). The model showed, for example, that if rapid callers made up 20 percent of the callers, 84 agents would be needed. In this scenario, rapid callers would have a very high service level of about 99 percent. Alternatively, if just 10 percent of callers were rapid customers, only 82 agents would be needed to achieve the eight-minute target average for regular customers.

Call center managers could use the graph to determine staffing levels if they changed the target average wait to another value, such as six minutes, or if more customers participated in the rapid program (Figure 3).

Figure 3: The lines represent (from top to bottom) 50, 40, 30, 20, 10, and zero percent rapid customer calls. For each percentage, we can see how the average wait in queue for regular callers decreases as the number of agents increases. The figure can be used to determine the number of agents required to keep average waiting times for regular calls at or below the eight-minute target, while providing superior service for rapid callers. For example, if rapid callers made up 20 percent of the callers, 84 agents would be needed.

Implementation

Once we had built an initial version of the model, we presented our preliminary results to a management team that had no previous experience with simulation. We explained the underlying concept of a simulation model as a laboratory for looking at different call center configurations and examining the impact of design changes on key performance measures.

Based on both the animation and the preliminary results, the managers were excited about the answers our model could provide. They asked us to quickly run some additional scenarios, which we analyzed and then reviewed with them a few days later.

The results of the simulation analysis gave the managers a good understanding of the impact on system performance of changes in the number of customers purchasing the rapid program (which they could not control) and in the number of agents (which they could control). In particular, the average queue time results made it clear that the one-minute guarantee to rapid customers would be fairly easy to achieve, even if the percentage of callers in the rapid plan became quite high, which helped drive the revenue projections for the program.

They were surprised at the dramatic impact adding agents would have in cutting the waiting time of regular customers. In our presentation, we gave the managers some general guidelines about how to make adjustments to staffing based on the proportion of customers who purchased the program. Most important, the results
of our analysis gave managers confidence that they could successfully implement the rapid program.

We conducted our analysis and presented the results during one week in September 1995. Within a few days, the managers decided to introduce the rapid program. The firm did a lot of internal work very quickly to prepare for the start of the program. The company trained agents to handle rapid calls and changed call center desktop information systems (changing the user interface, modifying the back-end database, and integrating it with the credit-checking and billing systems). Finally, it altered the call center’s ACD logic to establish different call routing and priorities for rapid customers.

By November, the company had finished its preparations. Launched with a major direct-mail campaign, the program was eventually adopted by more than 10 percent of customers and generated nearly $2 million of incremental revenues within nine months. Because of the success of this program, the company initiated a much more comprehensive fee-based technical-support service at the start of its next fiscal year.

Our simulation model made a major contribution to the launch of this program and the generation of this revenue stream. The results of our analysis played a key role in the company’s decision to launch the program; in particular, they validated the feasibility of the “one minute or free” guarantee to rapid customers, which was an important part of marketing the program.

Our analysis had another, more subtle effect on the rapid program. The decision to launch the rapid program was inherently difficult and contentious. Debate about the program’s merits and risks could have caused weeks of delay. This delay would have been very expensive in terms of rapid-program revenues, because the call center’s busiest months were December, January, and February. Our simulation analysis, however, provided a diverse group of decision makers (from the call center, finance, and marketing) with a common empirical understanding of the potential impact of the rapid program on customer service, which in turn facilitated a much faster decision.

Lessons for Simulation Practitioners

This case study highlights several themes from which simulation practitioners can benefit (see Chapter 3 of Profozich [1998] for further discussion). This application of simulation is an excellent example of embedding a mathematical analysis in the midst of a larger decision-making process. In our simulation model, we did not try to determine which service program the call center should offer to customers because many business factors (including marketing, financial, and technical issues) influenced the definition of the rapid program. However, the results of our analysis enabled managers to evaluate the proposed solution empirically under different conditions.

The managers saw our simulation analysis as a vehicle for mitigating risk. They did not have to hold their breaths while they experimented on live customers with real revenues on the line. Instead they relied on the simulation for insights into what would happen when they changed the configuration of the call center. In
for call centers have appeared on the market, including the Call$im package [Systems Modeling Corporation 1997] that is built on top of Arena. They enable the rapid development of call center simulation models. Call$im, for example, is used by over 70 call centers around the world for planning and analysis studies.

Released initially in 1997, Call$im has a user interface built around call center terminology and includes such features as out-of-the-box integration with Visual Basic, animation, and customized output to spreadsheets. All of this shields users from the underlying programming, greatly simplifying the process of building call center models, importing data, conducting experiments, and analyzing results.

Would we have used Call$im for this analysis if it had been available then? Yes. Call$im’s industry-specific objects—with such module names as Calls, Agents, and Schedules—contain detailed logic that mirrors the way call centers operate and default values that call center personnel find intuitive and reasonable. In addition, each module includes a variety of parameters that provide flexibility in model building and enable realistic representation of nearly all call centers. Automatically generated output statistics are also defined in call center terms, such as service level, abandonment rates, and agent utilization. A knowledgeable Call$im user could create and test the model shown in Figure 2 in less than one hour, using just six or seven modules for input data, model logic, and output creation. This package would have allowed us to add more model detail, to spend more time on analysis, and to test different scenarios for adoption of this new initiative.

Call$im is a far superior tool to Arena and its peers for creating call center models, but they in turn are far better tools for building simulation models than the general-purpose programming languages that were once the state of the art. Arena’s availability and flexibility were crucial to our successful analysis.

As call centers proliferate—there are an estimated 60,000 in the United States alone—and with simulation’s potential to help managers configure and plan their operations, we expect the use of simulation will grow substantially in this industry.

“The simulation analysis conducted by the Onward—SFSU team helped us get a much better understanding of the contingencies that we were likely to face. In particular, by quantifying the impact of Rapid customers and staffing/skilling decisions on service levels and costs, the simulation analysis gave us the confidence to launch the program. The results also gave us an excellent idea of how best to staff the different queues and how to adapt our staffing as we learned more about the customer population’s acceptance of the Rapid program.

“I am proud to report that this program was a great success for our business, grossing nearly $2 million in a little bit less than nine months. The work done by the Onward—SFSU team played a big part in making it happen successfully.”

References

Andrews, B. H. and Parsons, H. L. 1989, “L. L. Bean chooses a telephone agent scheduling system,” Interfaces, Vol. 19, No. 6, pp. 1–9.
Andrews, B. H. and Parsons, H. L. 1993, “Establishing telephone-agent staffing levels through economic optimization,” Interfaces, Vol. 23, No. 2, pp. 15–20.
Anton, J. 1996, Customer Relationship Management, Prentice Hall, Upper Saddle River, New Jersey.
Brigandi, A. J.; Dargon, D. R.; Sheehan, M. J.; and Spencer, T. 1994, “AT&T’s call processing simulator (CAPS) operational design for inbound call centers,” Interfaces, Vol. 24, No. 1, pp. 6–28.
Dawson, K. 1996, The Call Center Handbook, Flatiron Publishing, New York.
Grassmann, W. K. 1988, “Finding the right number of servers in real-world queuing systems,” Interfaces, Vol. 18, No. 2, pp. 94–104.
Green, L. and Kolesar, P. 1991, “The pointwise stationary approximation for queues with nonstationary arrivals,” Management Science, Vol. 37, No. 1, pp. 84–97.
Harris, C. M.; Hoffman, K. L.; and Saunders, P. B. 1987, “Modeling the IRS taxpayer information system,” Operations Research, Vol. 35, No. 4, pp. 504–523.
Hoffman, K. L. and Harris, C. M. 1986, “Estimation of a caller retrial rate for a telephone information system,” European Journal of Operational Research, Vol. 27, No. 2, pp. 39–50.
Kelton, W. D.; Sadowski, R. P.; and Sadowski, D. A. 1998, Simulation with Arena, McGraw-Hill, New York.
Mehrotra, V. 1997, “Ringing up big business,” OR/MS Today, Vol. 24, No. 4, pp. 18–24.
Mehrotra, V.; Profozich, D.; and Bapat, V. 1997, “Simulation: The best way to design your call center,” Telemarketing and Call Center Solutions, Vol. 16, No. 5, pp. 28–29, 128–129.
Mehrotra, V. 1999, “The call center workforce management cycle,” Proceedings of the 1999 Call Center Campus, Purdue University Center for Customer-Driven Quality, Vol. 27, pp. 1–21.
Pegden, C. D.; Shannon, R. E.; and Sadowski, R. P. 1995, Introduction to Simulation Using SIMAN, second edition, McGraw-Hill, New York.
Profozich, D. 1998, Managing Change with Business Process Simulation, Prentice Hall, Upper Saddle River, New Jersey.
Saltzman, R. M. 1997, “An animated simulation model for analyzing on-street parking issues,” Simulation, Vol. 69, No. 2, pp. 79–90.
Systems Modeling Corporation 1997, Call$im Template User’s Guide, Systems Modeling Corp., Sewickley, Pennsylvania. (www.sm.com)
Whitt, W. 1999, “Predicting queueing delays,” Management Science, Vol. 45, No. 6, pp. 870–888.