Kettinger, W. J., and Lee, C. C. Source: MIS Quarterly, Vol. 29, No. 4 (Dec., 2005), pp. 607-623. Published by: Management Information Systems Research Center, University of Minnesota. Stable URL: http://www.jstor.org/stable/25148702
RESEARCH NOTE

Zones of Tolerance: Alternative Scales for Measuring Information Systems Service Quality¹

By: William J. Kettinger
Moore School of Business
University of South Carolina
Columbia, SC 29208
U.S.A.
bill@sc.edu

Choong C. Lee
Graduate School of Information
Yonsei University
Seoul
KOREA
cclee@yonsei.ac.kr

Abstract

The expectation norm of Information Systems SERVQUAL has been challenged on both conceptual and empirical grounds, drawing into question the instrument's practical value. To address the criticism that the original IS SERVQUAL's expectation measure is ambiguous, we test a new set of measurement scales that posits that service expectations exist at two levels that IS customers use as a basis to assess IS service quality: (1) desired service: the level of IS service desired, and (2) adequate service: the minimum level of IS service customers are willing to accept. Separating these two levels is a "zone of tolerance" (ZOT) that represents the range of IS service performance a customer would consider satisfactory. In other words, IS customer expectations are characterized by a range of service levels, rather than a single expectation point. This research note adapts the ZOT and its generic operational definition from marketing to the IS field, assessing its psychometric properties. Our findings conclude that the instrument shows validity of a four-dimension IS ZOT SERVQUAL for desired, adequate, and perceived service quality levels, identifying 18 commonly applicable question items. This measure addresses past criticism while offering a practical diagnostic tool.

Keywords: IS service quality, zones of tolerance, SERVQUAL, IS management, evaluation, user expectations, information services function
Introduction
¹V. Sambamurthy was the accepting senior editor for this paper. Christopher L. Carr and Richard T. Watson served as reviewers.
Over the past decade, SERVQUAL has garnered considerable scholarly and managerial attention as a diagnostic tool for uncovering areas of information systems service quality strengths and weaknesses. Praised for its relevance (e.g., Jiang, Klein, and Carr 2002; Jiang, Klein, and Crampton 2000; Kettinger and Lee 1994, 1997; Pitt et al. 1995, 1997; Watson et al. 1998), it has often been criticized on conceptual and psychometric grounds (Lee and Kettinger 1996; Kohlmeyer and Blanton 2000; Van Dyke et al. 1997, 1999). A primary area of criticism concerns SERVQUAL's reliance on gap scores, which are derived by calculating the difference between IS users' perceived levels of service and their expectations for service. Critics both in marketing (e.g., Brown et al. 1993; Cronin and Taylor 1992, 1994; Teas 1993, 1994) and in IS (e.g., Kettinger and Lee 1997; Van Dyke et al. 1997, 1999) point to conceptual and empirical difficulties with the original SERVQUAL instrument and have suggested that alternatives to the original "gap scored" IS-adapted SERVQUAL be explored.

In their 1997 article, Kettinger and Lee² called for the further study of an alternative instrument adapted from marketing referred to as the "zones of tolerance" (ZOT) service quality measure. This zones of tolerance measure is conceptualized to overcome one of the most significant points of criticism with the original SERVQUAL instrument; namely, the need for a more parsimonious conceptualization of service quality expectations, while retaining the practical diagnostic power of understanding service expectation levels. This research note tests the psychometric properties of an IS ZOT service quality instrument. Structured as a context-only extension (Berthon et al. 2002), this study finds that the IS zones of tolerance service quality measure offers significant promise as a diagnostic tool for IS managers and as an alternative to overcome problems with the existing IS SERVQUAL instrument.

Research Challenges with SERVQUAL's Expectation Norm

Consumer satisfaction research can be broadly characterized as a post-use evaluation of product or service quality given pre-use expectations. SERVQUAL was developed to measure service quality. In developing their SERVQUAL instrument, the intent of Parasuraman, Zeithaml, and Berry (hereafter PZB) was to derive a service quality measure that transcended multiple measurement contexts. Over the years, SERVQUAL has been adapted to the measurement of many service delivery contexts, including IS service delivery. With its widespread application, studies emerged in marketing questioning the conceptual and empirical integrity of the SERVQUAL instrument. These articles focused primarily on two major concerns related to SERVQUAL's use of difference or gap scores. Researchers such as Cronin and Taylor (1992, 1994) questioned the predictive superiority of SERVQUAL's gap measure (perceived service quality minus expected service quality) over a performance-only service quality score (SERVPERF); in essence, questioning the need to calculate the expectations score at all. Other scholars such as Teas (1993, 1994) questioned the conceptual integrity of SERVQUAL's expectation measure, stating that it suffered from differing interpretations. Considerable debate and subsequent study have occurred both in marketing and in IS concerning these challenges to the expectation measure of the original SERVQUAL conceptualization.
²As the work on which the SERVQUAL research focuses originated from that of the marketing scholars A. Parasuraman (P), Valarie A. Zeithaml (Z), and Leonard Berry (B), and because we will make frequent reference to their various joint publications, we will refer to them as PZB, PBZ, ZBP, BPZ, or ZPB, as the case may be. As we will also make frequent reference to the authors Leyland Pitt, Richard Watson, and C. Bruce Kavan, we will refer to them as PWK; likewise, William J. Kettinger and Choong C. Lee will be referred to as K&L.
Cronin and Taylor (1992) found that a performance-only measure produced higher adjusted R² values when compared to SERVQUAL's gap scores for each of the five dimensions (e.g., reliability, assurance, responsiveness, empathy, and tangibles). The superior predictive power of the performance-only scores was further confirmed by Babakus and Boller (1992), Cronin and Taylor (1992), Boulding et al. (1993), and PZB (1994b). These same results were later demonstrated with an IS-adapted SERVPERF measure by Lee and Kettinger in 1996, by PWK in 1997, and by Van Dyke et al. in 1999. Given these findings, some researchers in both marketing and information systems have argued for a single-item comparative measure of perception and expectation (Cronin 1992, 1994; Peter et al. 1993; Van Dyke et al. 1997).

While conceding the improved predictive power of the perceived-performance-only instrument, advocates of the original gap-scored SERVQUAL measure (e.g., PZB 1994b; PWK 1997) argue the value of difference scores on both practical and theoretical grounds. PZB (1994b, p. 116) state that "executives in companies that have switched to a disconfirmation-based measurement approach tell us that the information generated by this approach [has] greater diagnostic value" than measurements of perceived service quality. In the IS context, PWK (1997) argue that the richer information contained in IS SERVQUAL's disconfirmation-based measurements provides IS managers with diagnostic power that typically outweighs the statistical and convenience benefits derived from the use of IS SERVPERF. K&L (1997), while agreeing with PZB (1994b) and PWK (1997) that difference scores offer more meaningful prescriptive insights, take a slightly different tack, stating that the conceptual problems identified with the original SERVQUAL expectation measure push for exploration of alternative configurations of an expectation-based IS SERVQUAL measure.

Moreover, there is an important distinction between the two main standards that represent expectations in the confirmation/disconfirmation literature. One standard represents the expectation as a prediction of future events (Churchill and Suprenant 1982; Miller 1977), defined as an objective calculation of the probability of performance. The other standard is a normative expectation of future events (Miller 1977; PZB 1988; Swan and Trawick 1980; ZBP 1993), operationalized either as a desired or an ideal expectation. Although these two standards use different expectation measures, expectations and perceptions are treated as linked via the disconfirmation-of-expectation paradigm (Oliver 1980), which states that the higher the expectation in relation to the actual performance, the greater the degree of disconfirmation and the lower the satisfaction. Critics of the normative standard, such as Teas (1993) in marketing and Van Dyke et al. (1997, 1999) in IS, argue that SERVQUAL's expectation measure suffers from multiple interpretations depending on whether a customer bases his or her assessment on a prediction of what will occur in the next IS service encounter or on what ideally should occur.

Recognizing a need for improvement, ZBP (1993, p. 3) acknowledged that their original "definition of expectations was broad...and did not stipulate the norms of expectations used by customers in assessing service quality." Citing empirical support (e.g., Tse and Wilton 1988) for both a predicted and an ideal expectation standard, ZBP point
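The gap-score versus performance-only debate above reduces to a simple computational difference. The sketch below is illustrative only and is not from the paper: the dimension names mirror the four-factor solution reported later in this note, and the ratings are invented values on the instrument's 9-point scale.

```python
# Hypothetical single-respondent ratings on a 1-9 scale; the dimension
# names mirror the paper's four-factor solution, but the numbers are invented.
perceived = {"reliability": 6, "responsiveness": 5, "rapport": 8, "tangibles": 7}
expected  = {"reliability": 8, "responsiveness": 7, "rapport": 7, "tangibles": 6}

# SERVQUAL-style gap score: perceived minus expected, per dimension.
# Negative gaps flag dimensions falling short of expectations.
gap_scores = {dim: perceived[dim] - expected[dim] for dim in perceived}

# SERVPERF-style score: perceptions only, dropping expectations entirely.
servperf_scores = dict(perceived)

print(gap_scores)       # {'reliability': -2, 'responsiveness': -2, 'rapport': 1, 'tangibles': 1}
print(servperf_scores)
```

On the perception-only score, rapport looks strongest and nothing is flagged; only the gap score shows reliability and responsiveness falling short of expectations, which is the extra diagnostic information the gap-score advocates cite.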
to customer assessments of service as complex, involving simultaneous interactions that include more than one expectation comparison. Similarly, other confirmation/disconfirmation researchers (e.g., Oliver et al. 1997) indicate that a range of satisfaction exists beyond minimum expectations that may move even beyond desired levels of expectations into a range of service surprise sometimes termed delight. Based on this rethinking of expectations, ZBP offered a reconceptualized model of customer service expectations (see Figure 1). Their revised service quality expectation comparison norm delineates two types of expectations. First, a normative expectation was termed desired expectations (e.g., Spreng and MacKoy 1996; Swan and Trawick 1980), defined as the level of service the customer wanted to be performed. Second, a minimum tolerable expectation (Miller 1977) was defined as the lowest level of performance acceptable to a customer, incorporating the influence of predicted service and situational factors. Further clarifying their conceptualization, PZB (1994b, p. 112) recommended "two different comparison norms for service quality assessment: desired service (the level of service a customer believes can and should be delivered) and adequate service (the minimum level of service the customer finds acceptable)." Separating these two levels is a zone of tolerance that represents the range of service performance a customer would consider satisfactory. In other words, customer service expectations are characterized by a range of levels (between desired and adequate service), rather than a single point. Even though the zones of tolerance (ZOT) SERVQUAL instrumentation had not been empirically validated, it has subsequently been applied in practice to numerous services contexts, such as assessing the student service quality of a business school (Caruana et al. 2000) and the service quality of research university libraries (e.g., Blixrud 2002; Cook et al. 2003).

The previous discussion leads us to the following questions concerning the validity of the ZOT SERVQUAL concept in the IS setting: Is an IS-adapted ZOT SERVQUAL psychometrically sound? More specifically, do the two expectation levels of IS service quality (desired and adequate) and the perceived IS service level possess the same dimensions of SERVQUAL (including common items)? To address these questions, we adapted PZB's (1994a) ZOT SERVQUAL concept to the IS context, whereby the IS-adapted measures were subjected to reliability and validity testing.

Research Methods and Analysis

A preferred method to cross-validate an instrument's dimensionality is to examine the factor structure of one sample within the factor structure of a second sample, commonly referred to as the holdout sample (Chin and Todd 1995). Since the objective of this study requires a refining process to obtain a common set of validated items and factors for all three levels of IS service quality (desired, adequate, and perceived), we followed Chin and Todd's approach and examined the service quality of two different sample groups: a university IS services sample and an industrial IS services sample. The first sample was used to test the factor structure of IS ZOT SERVQUAL using an exploratory factor analysis, and the second sample was used as a holdout sample to cross-validate the dimensionality derived from the first sample using a confirmatory factor analysis.

Using the three-column ZOT format proposed by PZB (1994a) and the IS-adapted items of K&L (1994), the IS ZOT SERVQUAL instrument was pretested through a series of interviews with IS graduate students and IS professionals. Based on the results of pretesting, additional wording adjustments were made in the instruction section; for example, the original word "adequate" service in PZB's scale was changed to "minimum" service in order to clearly differentiate the desired and the minimally adequate service levels. It should also be noted that while researchers (e.g., K&L 1994; PZB 1991) have questioned the strength of the Tangibles dimension, the Tangibles dimension was
[Figure 1 diagrams the dual expectation standards: expected service is split into desired service and adequate service, separated by the zone of tolerance, and compared against perceived service.]

Figure 1. IS Desired and Adequate Dual Expectation Standards: Comparison (Adapted from Parasuraman, A., Zeithaml, V. A., and Berry, L. L., "The Nature and Determinants of Customer Expectations of Service," Journal of the Academy of Marketing Science (21:1), 1993, pp. 1-12)
retained in this study given the fact that the original PZB ZOT (1994a) instrument included Tangibles and that this dimension has been included in subsequent ZOT operationalizations in other organizational contexts (e.g., Caruana et al. 2000).

After pretesting and refining the instrument, two samples were chosen for the cross-validation: an initial sample from the university setting and a holdout sample from the industry setting. Two U.S. universities formed the initial sample for testing of the IS-adapted ZOT SERVQUAL. Anonymous, self-administered questionnaires containing items from the IS-adapted ZOT SERVQUAL (sample 1) instrument were distributed to approximately 560 upper-level undergraduate and graduate students in several MIS and management science courses at the two universities. Total sample size was 250, with response rates averaging about 45 percent at both organizations. Such a student sample has been used in past research as a general measure of service quality (see Boulding et al. 1993; Ford et al. 1999; Rigotti and Pitt 1992) and in the IS service quality context (K&L 1994; Kettinger et al. 1995), showing high consistency with measures of reliability and validity. For example, Jiang, Klein, and Crampton (2000) revalidated the original Kettinger and Lee findings in the industrial context with similar results. In addition, there is ongoing support for ZOT SERVQUAL's continued use in educational institutions (Caruana et al. 2000; Cook et al. 2003). As a holdout sample, another set of data (sample 2) was collected from four large companies in Asia (two banks, one telecommunications firm, and one IS consulting firm). Questionnaires were distributed to a total of 500 employees; 188 were returned, for a response rate of 37.6 percent.
In both the university and industry settings, users had access to the full array of IS services (e.g., network access and IDs, application software access on their PCs and shared servers, Web intranet accounts, e-mail accounts, help desk, consulting support, training both online and tutorial, network dial-in, Internet access, laptop hook-up, online access to records and accounts, etc.). All users had at least 1 year of experience with the provided computing and network services, ensuring a basic level of computer and network access competence. In addition, all users had at least a minimum level of face-to-face interaction with information services function (ISF) staff at each sample site, whereby they established their network, e-mail, and software user authorizations; many had also taken advantage of help desk and training services. In general, the users from each sample can be described as motivated by either class or work responsibilities to actively make use of the IS resources and to avail themselves of IS support.
Our objective was to determine whether there is a validated, common set of factors and items among the three levels of service quality. Therefore, a dual design of statistical approaches (exploratory factor analysis and confirmatory factor analysis) was applied to the two different samples, respectively and sequentially. The perceived service scale was selected as the calibration scale for the exploratory factor analysis of sample 1 because, as discussed previously, it is the one SERVQUAL indicator whose format has not been the subject of debate throughout the long history of criticism concerning SERVQUAL's expectation measurement.

Results

To use the ZOT method, its three IS service quality levels must share the same constructs and corresponding items. This requires a test to determine whether the dimensions of IS ZOT SERVQUAL are the same for all three levels. This study examined whether the common constructs of perceived service, desired service, and minimum service expectations captured equivalent dimensions with equivalent question items and mapped them into a diagnostic method using all three levels of IS service quality.

Given the extent of revision to the IS ZOT SERVQUAL to bring it into the IS context, an exploratory factor analysis³ of the items of the original five service quality dimensions on the perceived service level was completed on the sample 1 data. Using commonly accepted factor selection criteria, as specified in Table 1, four constructs with 18 items were derived. Three original SERVQUAL constructs emerged from the exploratory factor analysis (tangibles, reliability, and responsiveness). However, two of the original dimensions, empathy and assurance, merged into one dimension. Based on a review of the retained items and the seeming similarity of the constructs when applied in the IS context, the new merged construct was named rapport because the construct items focus on an IS service provider's ability to convey a rapport of knowledgeable, caring, and courteous support. Past researchers using ZOT SERVQUAL in different service contexts have also experienced such a merging of the original five-factor structure (e.g., Caruana et al. 2000).

³Principal component analysis was applied to the sample 1 data since we wanted to derive the minimum number of factors that account for the maximum portion of the total variance in an exploratory manner.

⁴Like most continuous data collected from a questionnaire, our raw data possess a non-normally shaped distribution. Aware of the risk of assuming the data to be multivariate normal, a "sandwich" covariance parameter estimator (Satorra and Bentler 1990) was implemented to correct potential bias in standard errors. EQS's robust maximum likelihood option provides these estimators.

The derived factor structure and items from the exploratory factor analysis on the perceived scale (refer to Table 1) were then subjected to confirmatory factor analysis (CFA) and reliability testing using the holdout sample for the three different IS service quality levels. Covariance matrices, Mardia's non-normality coefficient, and descriptive statistics for the three sets of service levels of the holdout sample are reported in Appendix A. To overcome the limitation of the maximum likelihood method regarding multivariate non-normality⁴, the parameters were estimated using the maximum likelihood [ML, Robust] method, which in the first attempt with the holdout sample resulted in good fits of the models for each of the three different levels, as shown in the composite fit indices of Table 2. In confirming the validity of the IS ZOT SERVQUAL at the three levels, the guidelines suggested by Anderson and Gerbing (1988) were fol
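Footnote 3's rationale, principal component analysis to derive the fewest factors accounting for the most variance, can be illustrated with synthetic data. This is a generic sketch, not the study's analysis: the data are randomly generated with two built-in latent factors, and the oblique rotation used in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic responses: 100 respondents x 6 items, built from two latent factors.
factors = rng.normal(size=(100, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.0], [0.7, 0.0],
                     [0.0, 0.9], [0.0, 0.8], [0.0, 0.7]])
responses = factors @ loadings.T + 0.3 * rng.normal(size=(100, 6))

# Principal components via eigen-decomposition of the item correlation matrix.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# A common selection criterion: retain components with eigenvalue > 1,
# i.e., components explaining more variance than any single standardized item.
n_retained = int((eigenvalues > 1).sum())
print(n_retained)  # recovers the 2 factors built into the synthetic data
```

The eigenvalue-greater-than-one rule is only one of the "commonly accepted factor selection criteria" a study like this would combine with loading thresholds (see the note to Table 1).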
Table 1. Exploratory Factor Analysis*: Perceived Service Level for Sample 1

Variable | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5 | Communality
P1   |  0.104 |  0.813 |  0.037 | -0.104 | -0.035 | 0.685
P2   |  0.283 |  0.626 | -0.052 |  0.132 | -0.028 | 0.493
P3   |  0.034 |  0.759 | -0.087 |  0.098 |  0.076 | 0.600
P4   |  0.008 |  0.771 | -0.001 |  0.011 |  0.065 | 0.598
P5   | -0.036 |  0.745 |  0.086 | -0.068 |  0.116 | 0.581
P7   |  0.044 |  0.560 |  0.171 |  0.322 | -0.040 | 0.461
P8   |  0.260 |  0.056 |  0.092 |  0.592 |  0.081 | 0.437
P9   |  0.153 |  0.239 |  0.066 |  0.575 |  0.038 | 0.417
P11  |  0.594 | -0.080 |  0.043 |  0.265 |  0.137 | 0.450
P12  |  0.780 |  0.099 | -0.024 | -0.064 | -0.008 | 0.623
P13  |  0.759 |  0.165 |  0.065 | -0.190 | -0.007 | 0.644
P14  |  0.663 |  0.001 | -0.013 |  0.145 |  0.070 | 0.455
P15  |  0.732 | -0.083 |  0.074 |  0.168 |  0.095 | 0.586
P16  |  0.680 |  0.046 |  0.174 |  0.102 | -0.006 | 0.505
P17  |  0.777 | -0.017 |  0.069 |   —    |  0.037 | 0.612

[Rows for the struck-out (dropped) items P6, P10, P18, and P19 and for the tangibles items P20-P22 are not recoverable from this scan; one Factor 4 loading above is likewise missing.]

*Principal components analysis, oblique rotation. Selection criteria: factor loading > 0.5, no multiple loadings, no single-item loadings; items not meeting these criteria were dropped, as indicated by the struck-out lines.
Table 2. Composite Fit Indices of the CFA Models on the Holdout Sample

Fit Index | Recommended Value* | [Perceived or Desired] | Minimum
Satorra-Bentler scaled chi-square/d.f.   | < 3.0  | 1.947 | 1.521
Bollen incremental fit index (IFI)       | > 0.90 | 0.932 | 0.965
LISREL AGFI                              | > 0.90 | 0.796 | 0.817
Root mean squared residual (RMSR)        | < 1.0  | 0.120 | 0.107
Comparative fit index (CFI)              | > 0.90 | 0.918 | 0.928
Robust CFI                               | > 0.90 | 0.931 | 0.964

*Recommended values per Segars and Grover (1993). [One of the three service-level columns, and the header identifying the first value column as perceived or desired, are not recoverable from this scan.]
lowed. Significant factor loading coefficients and the satisfactory fits of the three CFA models confirm the convergent validity of the four IS service quality dimensions. Next, formal tests of discriminant validity were performed (Bagozzi and Phillips 1982). The chi-square differences between all possible constrained models (each correlation between the four dimensions was subsequently constrained to 1.0) and the final model were tested and showed significant chi-square differences, indicating discriminant validity at all three levels. Reliability tests for the final derived four dimensions with 18 items were conducted with a Cronbach alpha test, resulting in acceptable levels of reliability for all dimensions at the three levels (refer to Table 3). In sum, a total of 18 items loaded onto four dimensions at all three IS ZOT levels, indicating strong support for construct validity of the measures as well as demonstrating a common structure of four dimensions and items among the three levels of IS service quality. These results provide statistical legitimacy for the use of IS ZOT SERVQUAL in the IS setting. The final version of the items retained is displayed in Table 4.

Discussion and Implications for Research

This study introduced and validated the ZOT concept in the IS setting using a dual measure of IS service quality expectations. The findings represent an important step toward addressing past concerns with the original IS SERVQUAL's expectation measure and gap-scoring. As will be discussed later in this section, the new IS ZOT SERVQUAL instrument has strong practical potential as a diagnostic tool through which managers can quickly visualize their current IS service quality situation and design corrective actions. However, additional applications are needed to further establish the instrument's external validity and reliability. For example, researchers might examine the meaning of exceeding desired service levels. Some literature suggests that this offers the service provider a level of benefits by bringing customers into surprised satisfaction sometimes called customer delight (Oliver et al. 1997), while a different stream of literature reminds us that there is a cost of quality and one must be mindful to make sure that IS service quality has a bottom-line impact on the firm. In this regard, future research should investigate the relationships of IS ZOT service quality to performance.

Future researchers also should attempt to further distinguish the responsiveness dimension of IS ZOT SERVQUAL from the reliability dimension. In this study, two of the original IS SERVQUAL responsiveness items loaded more closely with the reliability factor than with the responsiveness dimension, leaving the derived responsiveness construct with only two items. The authors recognize that such a two-item construct has potential validity problems. Future researchers might consider improving the responsiveness measure around the concept of anticipated preparedness to perform a service, which can be inferred from the two retained responsiveness and readiness items (i.e., willingness to help... to respond...).

The survey length of the IS ZOT SERVQUAL adds some complexity when compared to a single-point (perception-only) measure. Researchers need to determine the relative diagnostic value of the two measures. In cases where brevity, cost, or predictive validity concerns demand, the seemingly less clinical perception-only (SERVPERF) measure might be a better option. Learning how and when managers use the IS ZOT SERVQUAL as a diagnostic tool to shape their service delivery strategies needs to be investigated both qualitatively and quantitatively in multiple industry contexts before the relative value of this measure is fully understood. In terms of practice, employing the IS ZOT in a periodic IS service quality management program is important for two reasons.
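The reliability statistic reported in Table 3 is Cronbach's alpha. A minimal implementation of the standard formula, applied to hypothetical ratings for a three-item construct (the data are invented, not the study's), looks like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-9 ratings from five respondents on a three-item construct.
ratings = np.array([
    [7, 8, 7],
    [5, 5, 6],
    [9, 8, 9],
    [4, 5, 4],
    [6, 7, 6],
])
print(round(cronbach_alpha(ratings), 2))  # 0.96: well above the usual 0.7 threshold
```

Alphas in the 0.73-0.92 range, as in Table 3, are conventionally read as acceptable to excellent internal consistency.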
Table 3. Revised IS ZOT SERVQUAL Constructs and Reliabilities of the Holdout Sample

Constructs     | Original Items Retained**        | Reliabilities* (Perceived / Desired / Minimum)
Reliability    | 1, 2, 3, 4, 5, 7                 |
Responsiveness | 8, 9                             |
Rapport        | 11, 12, 13, 14, 15, 16, 17       |
Tangibles      | 20, 21, 22                       |

[The individual Cronbach alpha values are scrambled in this scan; the reported values range from 0.73 to 0.92 across the four constructs and three service levels.]

*Cronbach alpha values for the Perceived, Desired, and Minimum service levels.
**Refer to Table 4 for retained item descriptions.
Table 4. Constructs Before and After, Question Items and Format

Response format (each item is rated on three 9-point Low-to-High scales, "When it comes to..."):
- My Minimum Service Level is: the expected minimum level of service performance you consider adequate.
- My Desired Service Level is: the level of service performance you desire.
- My Perception of the [Organization's Computer Unit's Name] Service Performance is:

Item stubs recoverable from the scan, with original and final construct assignments:
- Items 1-5, 7 (Reliability → Reliability): including ...technology and systems...; ...keeping customers informed about when service will be made...
- Items 8, 9 (Responsiveness → Responsiveness): ...willingness to help...; ...to respond...
- Item 11 (Assurance → Rapport): ...instill confidence...
- Item 12 (Assurance → Rapport): ...feel safe in computer transactions...
- Item 13 (Assurance → Rapport): ...are consistently courteous...; ...have the knowledge to answer customers' questions...
- Item 14 (Empathy → Rapport): ...giving customers individual attention...
- Item 15 (Empathy → Rapport): ...IS employees who deal with customers in a caring fashion...
- Item 16 (Empathy → Rapport): ...having the customer's best interest at heart...
- Item 17 (Empathy → Rapport): ...IS employees who understand the needs of customers...
- Item 18 (Empathy → Dropped): ...convenient business hours...
- Item 19 (Tangibles → Dropped): ...up-to-date technology...
- Items 20-22 (Tangibles → Tangibles): including ...facilities...; ...training, videos...

[The complete item wordings and the full original-to-final construct mapping, including the other dropped items, are only partially legible in this scan.]
First, as a diagnostic tool, ithas the potential to measure changes in IS service quality relative to can customers'expectations over time. Second, it be a basis for corrective actions leading to strategies to manage minimum service level expectations, to improveperceived service levels, or to allocate IS resources to specific IS customer need. segments based on an identified
ZOT customer service expectations are charac
terized by tolerance bands. These tolerance bands, representing the difference between desired service and the level of service considered minimally adequate, can differ in size. Over time these bandwidths may expand, contract, or move up or down as expectations change. Variations can also exist across different customers' tolerance zones. Some customers may have a small zone of tolerance, which may require an IS provider to deliver a consistent level of service within a narrow band, whereas other customers may tolerate a greater range of service quality. The potential for IS managers to learn to identify tolerance bands and manage expectations to improve IS service strategies offers great promise over the single SERVPERF measure.

To illustrate this potential, we examine the actual ZOT results from our Sample 1 (two universities). As Figure 2 demonstrates, if University 1 relied solely on the perceived service measure, it might mistakenly identify the rapport dimension as a more problematic area than reliability, since the single perception-only indicator shows a lower score for rapport than for reliability. However, by incorporating the ZOT band, one can visually pinpoint the area of deficiency; namely, the reliability dimension, where the perceived performance pointer falls furthest outside the ZOT band.

Three criteria help provide the basis for diagnosis and judgments concerning IS service quality deficiencies and service quality management:

(1) Is the perceived service quality pointer outside and below the ZOT? If so, how great is the distance from the ZOT (adequate service level) to the perceived service quality pointer?

(2) What is the relative position of the perceived service pointers within each ZOT band? Are the perceived service pointers closer to the desired expectation level or to the minimum level?

(3) If all the perceived service pointers are within their respective zones, what is the comparative size and relative positioning of the ZOT bands? These criteria can be examined to determine whether expectation management could extend the band and possibly lower expectation levels.

For example, in University 2 the responsiveness dimension should be picked as the second most troubled IS service quality dimension, given the relative positioning of the pointers within the ZOT. Further diagnosis of this targeted responsiveness dimension for University 2 might be obtained by comparing different customer user segmentations (refer to Figure 3). In this illustration, the responsiveness perceived by different customer groups is compared. The Grad Student Class 1 customer segment shows the most serious deficiency despite a relatively large ZOT band. It is possible that IS service delivery faults have occurred with this group, placing their current perceptions of the responsiveness of the information systems function (ISF) far below minimum service quality levels. This group might be prime for a special service recovery activity by the IS provider to win back confidence that it is responsive to this group's IS needs. Looking at Grad Class 2, it is observed that while the band is smaller and the single pointer higher, these graduate students are also unsatisfied with the current level of service, given the distance between the perceived service pointer and the ZOT.

Since the perceived service level pointers for both undergraduate student groups are within their respective ZOT bands, this might point the ISF to offer more specialized services targeted to graduate students. Such a segmentation strategy could help tailor services to the needs of particular customer segments.
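The three diagnostic criteria above amount to a small computation on each dimension's desired, minimum, and perceived scores. The following is a minimal illustrative sketch only; the function name, the rating scale, and the sample scores are assumptions for exposition, not the paper's instrument or data:

```python
# Illustrative sketch of the three ZOT diagnostic criteria (hypothetical
# function and scores; scale values are assumed, not the paper's data).

def diagnose_zot(desired: float, minimum: float, perceived: float) -> dict:
    """Apply the three ZOT diagnostic criteria to one service quality dimension."""
    # Is the perceived pointer inside the zone of tolerance?
    in_zone = minimum <= perceived <= desired
    # Criterion 1: distance below the adequate (minimum) level, if any.
    shortfall = max(0.0, minimum - perceived)
    # Criterion 2: relative position within the band (0 = minimum, 1 = desired).
    position = (perceived - minimum) / (desired - minimum) if desired > minimum else None
    # Criterion 3: band width, for comparing tolerance zones across
    # dimensions or customer segments.
    band_width = desired - minimum
    return {"in_zone": in_zone, "shortfall": shortfall,
            "position": position, "band_width": band_width}

# Hypothetical figures: a reliability pointer that falls below the ZOT.
result = diagnose_zot(desired=8.0, minimum=5.5, perceived=4.5)
print(result["in_zone"], result["shortfall"])  # prints: False 1.0
```

A dimension flagged by criterion 1 (a positive shortfall) would be the first candidate for service recovery; among in-zone dimensions, a low relative position or a narrow band would direct attention next, mirroring the University 2 responsiveness example.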
[Figure 2. Assessments of the Service Quality of Two University IS Departments Using the Zones of Tolerance Approach. (Chart not reproduced: for each dimension, boxes show the zone of tolerance between desired and minimum acceptable service quality levels, with a pointer marking perceived service, for University 1 and University 2.)]

[Figure 3. Dimensional Assessment Comparing Customer Segments' Zones of Tolerance. (Chart not reproduced: boxes show the zone of tolerance of multiple customer segments for the responsiveness dimension, with a pointer marking each segment's perceived service.)]

Benchmarking comparisons are also possible to provide relative service quality levels. For example, going back to Figure 2 and comparing these two universities, University 2 shows relatively better IS service quality than University 1, given its larger ZOT bands and with three of the four perceived service pointers inside the ZOT dimensional bands.
Longitudinal study should be carried out to better learn the efficacy of managerial interventions (as discussed above) to manipulate the ZOT's minimum expectation levels. To gain this insight, we need to better understand the antecedents of expectation levels and their possible managerial implications. A customer's level of minimum service is influenced by at least four factors (Berry and Parasuraman 1997; ZBP 1993). First, "transitory service intensifiers" are temporary, usually short-term, individual factors that lead customers to a heightened sensitivity to service. Second, perceived service alternatives are customers' perceptions of the degree to which they can obtain better service through providers other than the focal IS service provider. A third factor is the customer's self-perceived service role, defined as the customer's perceptions of the degree to which they themselves influence the level of service they receive. Fourth, levels of minimum service adequacy are influenced by situational factors, defined as service-performance contingencies that customers see as beyond the service provider's control. These factors provide a valuable starting point for future study. IS managers should note that these four factors share a common implication for expectation management strategies and customer communication. Specifically, IS providers must convince customers of the benefits of using IS services, inform them of their roles and limits in using IS services, and clarify the line of responsibility in the case of IS service problems.

Finally, this article focuses on the service quality levels of internal IS service providers, who seemingly would desire to have IS customers with large ZOT bands. From the point of view of an external IS service provider, such as an IS outsourcing vendor, a different implication might emerge. Namely, if external customers have relatively large zones of tolerance, establishing customer loyalty based on distinguished service delivery that substantially exceeds minimum levels may be more difficult. This raises the question: would superior IS service vendors be better off attempting to narrow IS customers' tolerance zones by striving to move minimum service levels up, to reduce the competitive appeal of mediocre IS providers? As this question suggests, the application of the IS ZOT SERVQUAL and its associated expectation management schemes is flexible across different service contexts and begins to offer ISF providers a more exact tool to shape service quality strategies.

Acknowledgments

The authors would like to acknowledge the excellent direction of the senior editor and the important insights and contributions of the reviewers.

References

Anderson, J. C., and Gerbing, D. W. "Structural Equation Modeling in Practice: A Review and Recommended Two-Step Approach," Psychological Bulletin (103:3), 1988, pp. 411-423.

Babakus, E., and Boller, G. W. "An Empirical Assessment of the SERVQUAL Scale," Journal of Business Research (24:2), 1992, pp. 253-268.

Bagozzi, R. P., and Phillips, L. W. "Representing and Testing Organizational Theories: A Holistic Construal," Administrative Science Quarterly (27:3), 1982, pp. 459-489.

Berry, L. L., and Parasuraman, A. "Listening to the Customer: The Concept of a Service-Quality Information System," Sloan Management Review (38:3), 1997, pp. 65-78.

Berthon, P., Pitt, L., Ewing, M., and Carr, C. L. "Potential Research Space in MIS: A Framework for Envisioning and Evaluating Research Replication, Extension, and Generation," Information Systems Research (13:4), 2002, pp. 416-427.

Blixrud, J. C. "Evaluating Library Service Quality: Use of LibQUAL+™," IATUL Proceedings (New Series), Volume 12, Partnerships, Consortia and 21st Century Library Service, Kansas City, MO, June 2-6, 2002.
Boulding, W., Kalra, A., Staelin, R., and Zeithaml, V. A. "A Dynamic Process Model of Service Quality: From Expectations to Behavioral Intentions," Journal of Marketing Research (30:1), 1993, pp. 7-27.

Brown, T. J., Churchill, G. A., Jr., and Peter, J. P. "Research Note: Improving the Measurement of Service Quality," Journal of Retailing (69:1), 1993, pp. 127-139.

Caruana, A., Ewing, M. T., and Ramaseshan, B. "Assessment of the Three-Column Format SERVQUAL: An Experimental Approach," Journal of Business Research (49:1), 2000, pp. 57-65.

Chin, W. W., and Todd, P. A. "On the Use, Usefulness, and Ease of Use of Structural Equation Modeling in MIS Research: A Note of Caution," MIS Quarterly (19:2), 1995, pp. 237-246.

Churchill, G. A., Jr., and Surprenant, C. "An Investigation into the Determinants of Customer Satisfaction," Journal of Marketing Research (19), November 1982, pp. 491-504.

Cook, C., Heath, F., and Thompson, B. "Zones of Tolerance in the Perceptions of Library Service Quality: A 'LibQUAL+' Study," Libraries and the Academy (3:1), 2003, pp. 113-123.

Cronin, J. J., and Taylor, S. A. "Measuring Service Quality: A Reexamination and Extension," Journal of Marketing (56:3), 1992, pp. 55-68.

Cronin, J. J., and Taylor, S. A. "SERVPERF Versus SERVQUAL: Reconciling Performance-Based and Perceptions-Minus-Expectations Measurements of Service Quality," Journal of Marketing (58:1), 1994, pp. 125-131.

Jiang, J. J., Klein, G., and Carr, C. L. "Measuring Information Systems Service Quality: SERVQUAL from the Other Side," MIS Quarterly (26:2), 2002, pp. 145-166.

Jiang, J. J., Klein, G., and Crampton, S. "A Note on SERVQUAL Reliability and Validity in Information System Service Quality Measurement," Decision Sciences (31:3), 2000, pp. 725-745.

Joseph, M., and Joseph, B. "Importance-Performance Analysis as a Strategic Tool for Service Marketers: The Case of Service Quality Perceptions of Business Students in New Zealand and the USA," Journal of Services Marketing (13:2), 1999, pp. 171-186.

Kettinger, W. J., and Lee, C. C. "Perceived Service Quality and User Satisfaction with the Information Services Function," Decision Sciences (25:5/6), 1994, pp. 737-766.

Kettinger, W. J., and Lee, C. C. "Pragmatic Perspectives on the Measurement of Information Systems Service Quality," MIS Quarterly (21:2), 1997, pp. 223-240.

Kettinger, W. J., Lee, C. C., and Lee, S. "Global Measures of Information Service Quality: A Cross-National Study," Decision Sciences (26:5), 1995, pp. 569-588.

Kohlmeyer, J. M., and Blanton, J. E. "Improving IS Service Quality," Journal of Information Technology Theory and Application (2:1), 2000, pp. 1-10 (available online at www.jitta.org).

Lee, C. C., and Kettinger, W. J. "A Test of the Psychometric Properties of the IS Adapted SERVQUAL Measure," paper presented at the 1996 National INFORMS Meeting, Atlanta, Georgia, November 4, 1996.

Miller, J. A. "Studying Satisfaction, Modifying Models, Eliciting Expectations, Posing Problems, and Making Meaningful Measurements," in Conceptualization and Measurement of Consumer Satisfaction and Dissatisfaction, K. Hunt (Ed.), Report No. 77-103, Marketing Science Institute, Cambridge, MA, 1977, pp. 72-91.

Oliver, R. L. "A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions," Journal of Marketing Research (17), November 1980, pp. 460-469.

Oliver, R. L., Rust, R. T., and Varki, S. "Customer Delight: Foundations, Findings and Managerial Insight," Journal of Retailing (73:3), 1997, pp. 311-336.

Parasuraman, A., Berry, L. L., and Zeithaml, V. A. "Research Note: More on Improving Service Quality Measurement," Journal of Retailing (69:1), 1993, pp. 140-147.

Parasuraman, A., Zeithaml, V. A., and Berry, L. L. "Alternative Scales for Measuring Service Quality: A Comparative Assessment Based on Psychometric and Diagnostic Criteria," Journal of Retailing (70:3), 1994a, pp. 201-229.

Parasuraman, A., Zeithaml, V. A., and Berry, L. L. "A Conceptual Model of Service Quality and Its Implications for Future Research," Journal of Marketing (49), Fall 1985, pp. 41-50.

Parasuraman, A., Zeithaml, V. A., and Berry, L. L. "Refinement and Reassessment of the SERVQUAL Scale," Journal of Retailing (67:4), 1991, pp. 420-450.
Parasuraman, A., Zeithaml, V. A., and Berry, L. L. "Reassessment of Expectations as a Comparison Standard in Measuring Service Quality: Implications for Further Research," Journal of Marketing (58), January 1994b, pp. 111-124.

Parasuraman, A., Zeithaml, V. A., and Berry, L. L. "SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality," Journal of Retailing (64:1), 1988, pp. 12-40.

Peter, J. P., Churchill, G. A., Jr., and Brown, T. J. "Caution in the Use of Difference Scores in Consumer Research," Journal of Consumer Research (19), March 1993, pp. 655-662.

Pitt, L. F., Watson, R. T., and Kavan, C. B. "Measuring Information Systems Service Quality: Concerns for a Complete Canvas," MIS Quarterly (21:2), 1997, pp. 209-221.

Pitt, L. F., Watson, R. T., and Kavan, C. B. "Service Quality: A Measure of Information Systems Effectiveness," MIS Quarterly (19:2), 1995, pp. 173-187.

Rigotti, S., and Pitt, L. F. "SERVQUAL as a Measuring Instrument for Service Provider Gaps in Business Schools," Marketing Research News (15:3), 1992, pp. 9-17.

Satorra, A., and Bentler, P. M. "Model Conditions for Asymptotic Robustness in the Analysis of Linear Relations," Computational Statistics and Data Analysis (10), 1990, pp. 235-249.

Segars, A., and Grover, V. "Re-examining Perceived Ease of Use and Usefulness: A Confirmatory Factor Analysis," MIS Quarterly (17:4), 1993, pp. 517-527.

Spreng, R. A., and MacKoy, R. D. "An Empirical Examination of a Model of Perceived Service Quality and Satisfaction," Journal of Retailing (72:2), 1996, pp. 201-214.

Swan, J., and Trawick, F. "Satisfaction Related to Predictive vs. Desired Expectation," in Refining Concepts and Measures of Consumer Satisfaction and Complaining Behavior, H. K. Hunt and R. L. Day (Eds.), Indiana University Press, Bloomington, IN, 1980, pp. 7-12.

Teas, R. K. "Expectations, Performance Evaluation, and Consumers' Perceptions of Quality," Journal of Marketing (57:4), 1993, pp. 18-34.

Teas, R. K. "Expectations as a Comparison Standard in Measuring Service Quality: An Assessment of a Reassessment," Journal of Marketing (58:1), 1994, pp. 132-139.

Tse, D. K., and Wilton, P. C. "Models of Consumer Satisfaction Formation: An Extension," Journal of Marketing Research (25), May 1988, pp. 204-212.

Van Dyke, T. P., Kappelman, L. A., and Prybutok, V. R. "Caution on the Use of SERVQUAL Measures to Assess the Quality of Information Systems Services," Decision Sciences (30:3), 1999, pp. 877-891.

Van Dyke, T. P., Kappelman, L. A., and Prybutok, V. R. "Measuring Information Systems Service Quality: Concerns on the Use of the SERVQUAL Questionnaire," MIS Quarterly (21:2), 1997, pp. 195-208.

Watson, R. T., Pitt, L. F., and Kavan, C. B. "Measuring Information Systems Service Quality: Lessons from Two Longitudinal Case Studies," MIS Quarterly (22:1), 1998, pp. 61-79.

Zeithaml, V., Berry, L. L., and Parasuraman, A. "The Nature and Determinants of Customer Expectations of Service Quality," Journal of the Academy of Marketing Science (21:1), 1993, pp. 1-12.

About the Authors

William J. Kettinger is an associate professor of Information Systems at the Moore School of Business, University of South Carolina. He teaches and researches in the areas of strategic information management, IS process change, and business performance measurement. He consults domestically and abroad on these topics and has published numerous books and research articles, including five previous articles in MIS Quarterly.

Choong C. Lee is Associate Dean of the Graduate School of Information at Yonsei University, Seoul, Korea. He has a Ph.D. degree from the University of South Carolina and served as an associate professor at the School of Business, Salisbury University. Actively involved with research and consulting projects in IS performance measurement, he also works as a Senior Researcher/Consultant for enterpriselQ in Lausanne, Switzerland. His past research results have been published in MIS Quarterly, Decision Sciences, Journal of MIS, Communications of the ACM, and Information and Management.
"" I P1 I P2 P3 I P4 I P5 P7 I P8 P9 I P13 P11 P12 I P14 I P15 I P16 I P17 I P20 I P21 I P22
-0.3016
-0.5330
-0.320
-0.1441
952
1.027
0.833 Covariance Matrix, Univariate, Multivariate Statistics the and Holdout Sample of
1.066
^ Item 5.5699 5.6183 5.6398 5.7634 5.7849 5.4462 4.8280 5.2742 5.5645 5.7312 4.9570 5.8656 5.4409 5.3656 5.2258 5.1237 5.1613 ?' ^ P22 0.778 1.035 0.950 0.979 0.808 1.036 1.417 0.950 1.233 1.206 1.049 1.245 1.096 1.168 1.265 1.353 1.693 2.590 ? 0.2247 -0.1559-0.3734 -0.0344 0 | Kurtosis -0.2422 -0.1167 -0.140 0.1727 -0.6730 -0.3227 -0.3919 -0.2505 -0.2413 0.0107 -0.0679 -0.3093 -0.2482 -0.2052 0. ?8 1.3831 1.2553 1.3653 1.6137 1.3969 1.6741 1.4369 1.3782 1.4734 1.4529 1.4725 1.4316 1.4101 1.5416 1.3708 1.5149 1.6094 1.4485 Standard g P21 0.735 0.815 0.845 0.965 0.756 1.009 1.454 1.078 1.212 1.335 1.173 1.313 1015 1.106 1.187 1.448 2.098_| -0.5108 -0.2403 0.0879 1.509 1.180 1.133 1.261 ^0.623 0.859 P17 0.889 0.914 0.814 1.047 1.024 1.496 1.503 1.405 1.228 1.418 1.344 1.497 2.168_? 1.149 | P16 0.585 0.746 0.949 0.968 0.798 1.009 1.490 0.953 1.484 1.385 1.239 1.400 1.422 2.049_?> gP15 0.699 0.877 0.668 0.818 0.852 1.988_<? 0.943 1.390 0.916 1.346 1.360 1.496 1.146
0.729
0.844
1.006
0.623
P13 0.893 0.997 1.163 1.162 0.918 1.204 1.174 1.063 1.456 1.374 1.879_^ ^ 1.803 1.073 P12 0.819 0.891 0.957 1.017 1.002 0.996 1.354 1.058 1.720 2.295_ 0.720 0.988 0.946 2.604_ P11 0.982 0.891 1.417 1.546 1.206
1.174
1.338
? Multivariate index for 22 Mardia's items: normality all Coefficient 67.04; Estimate Normalized 17.04 ^ =
_P9_ 0.637 0.840 0.953 0.903 1.049 1.147 1.539 1.951_ _P8_ 0.893 1.145 1.186 0.979 1.283 1.526 2.803_ (n Table Service A1 Perceived Level 188) = _P7_ 0.793 1.112 1.059 1.144 0.956 2.065_ _P5_ 0.945 1.026 1.090 1.154 1.899_ JP4_ 0.936 1.185 1.251 1.864_ 0.979 _P3_ 1.082 2.171_
Appendix A
This content downloaded from 202.43.95.117 on Fri, 18 Oct 2013 21:58:41 PM All use subject to JSTOR Terms and Conditions
? 2.
3I D1 I D2 I D3 I D4 I D5 I D7 I D8 I D9 D11 ID12 D13 II D14 I D15 I D16 I D17 I D20 I D21 I D22 ?F
Skew-0.8557 -0.8783 -0.6528 -0.8282 -0.9766 -0.7607 -0.6789 -0.5114 -0.7630 -1.0590 -0.6253 -0.8896 -0.5303 -0.7070 -0.9981 -0.8271 -0.4945-0.7050 D22 0.737 0.723 0.613 0.648 0.680 0.690 0.849 0.638 0.829 0.922 0.830 1.102 0.922 0.977 1.116 1.091 1.127 2.029 Item 7.6596 7.5213 7.6064 7.6383 7.7500 7.7074 7.1915 7.2766 7.4415 7.6223 7.5798 6.7606 7.3298 7.3723 7.3138 7.117 7.1915 7.0904 1.2387 1.1659 1.2299 1.2821 1.2300 1.2860 1.3780 1.4184 1.1696 1.3044 1.2346 1.1544 1.3202 1.4245 1.3982 1.2972 1.4339 Standard 0.632 0.616 0.760 0.668 0.652 0.538 0.819 0.797 D21 0.647 1.009 0.896 0.947 1.089 0.976 1.070 1.899_ 1.037 1<urtosis~" 0.1891 0.1439 ^02654 -0.0854 -0.239? "0.2242 -.0017 0.9565 0.1937 ^03235 -0.4185 -0.0371 0.1186 0.3728" 0.7528 "a94l7-0.5125 0.4884 D20 0.749 0.829 0.852 0.688 0.771 0.760 0.866 0.792 0.950 0.994 0.721 1.059 1.043 1.108 1.168 1.683_ 0.888 0.937 1.033 0.948 0.953 1.100 1.009 1.073 1.268 D17 0.903 1.376 1.113 2.056_ 1.358 1.372 D16 0.849 0.987 0.91 0.831 0.949 0.853 1.105 0.955 1.129 1.157 0.772 1.164 1.023 1.743_ 0.739 0.943 0.806 0.992 0.937 0.848 0.876 1.304 0.947 1.339 1.084 D15 1.955_
D14 0.699 0.821 0.862 0.769 0.790 0.876 1.169 0.853 1.074 1.080 0.803 2.013_ 0.669 0.839 0.584 0.911 0.823 0.830 0.860 0.625 0.797 1.368_ 0.921 D13 for Coefficient items: 22 Estimate Normalized 124.61; Multivariate Mardia's index all normality 31.83 =
?D12 0.828 0.914 0.781 0.970 0.878 0.932 1.030 0.859 1.082 1.702_ ? 0.686 0.801 0.870 0.786 0.811 0.738 0.963 D11 0.873 1.574_? g
JD9_
0.640
0.791
0.714
0.989 I ft 0.980 0.987 0.766 0.973 0.862 1.011 1.899_ _D8_ 0.849 <? 07 0.707 0.827 0.799 1.011 0.953 1.513_ 35 1.064 0.955 ? D4 0.853 0.860 0.958 1.644_ |
0.771
0.952
1.335_
0.727
1.654_
g?
Jg g> 1.513_ 0.901 0.742 _D3_ ^ JD2_ 1.534_ ^ | D1 1.359_ 0.895 Means 0/5 I-1 <D Deviation ness I
$K
This content downloaded from 202.43.95.117 on Fri, 18 Oct 2013 21:58:41 PM All use subject to JSTOR Terms and Conditions
w r
I M1 I M2 I M3 I M4 I M5 I M7 I M8 I M9 M11 IM13 M12 II M14 I M15 I M16 I M17 I M20 I M21 I M22
-0.1932
-0.0610
-0.4778
"0.2850
-0.1758
? Item 5.7394 5.6436 5.8298 5.9894 5.8511 5.2553 5.4043 5.6383 5.7979 5.7128 5.0244 5.5691 5.6220 5.5372 5.3032 5.2766 5.2660 ^ g ^ M22 0.720 0.727 1.079 0.810 0.955 0.976 1.092 1.063 1.321 1.065 1.253 1.512 1.147 1.077 1.284 1.325 1.514 2.239 |5.6862 Skew0.6770 0.2000 -0.0053 -0.1479 0.0065 -0.1689 0.1469 -0.0192 0.0648 -0.0430 0.2438 -0.3867 0.1478 0.2679 -0.1943 0.0038 0.1034 0.0198 ?, -0.2443-0.1038 0.649 0.580 0.796 0.901 M21 0.687 0.710 0.865 1.041 0.939 0.963 1.944_* 1.133 1.125 1.201 1.059 1.069 1.140 | ?1.1711 1.2713 1.3770 1.3347 1.4131 1.3107 1.4839 1.4401 1.3602 1.3944 1.3478 1.4750 1.3132 1.2804 1.4963 1.3714 1.5212 Standard 0.0836 0.1200
0.0503"-
? M20 0.550 0.673 0.793 0.678 0.624 0.805 1.045 0.886 1.233 1.072 0.970 1.163 1.030 1.021 1.136 1.881_2> ? 0.656 M17 0.665 0.819 0.882 0.867 1.231 1.054 2.314_f 1.081 1.187 1.500 1.516 1.455 1.473 1.339
? M16 0.601 0.754 0.788 0.830 0.739 0.966 1.148 0.962 1.171 1.198 1.076 1.078 1.119 1.816_J M15 0.550 0.848 0.787 0.830 0.930 0.664 0.934 1.041 1.151 1.138 1.271 1.325 1.851_
M14 0.442 0.563 0.929 0.566 0.802 1.378 0.865 1.101 1.186 1.016 1.136 2.176_ 0.674 0.673 M13 0.849 1.015 0.885 0.919 1.058 1.043 1.318 1.134 1.725_ Multivariate for items: Coefficient Normalized index 22 122.03; Mardia's Estimate o all normality 31.18 = ^
M12 0.680 0.738 0.740 0.794 0.902 0.884 0.993 0.863 1.397 1.905_ 0.602 0.638 M11 0.956 0.846 0.793 1.013 1.128 1.997_ 1.051 M9 0.545 0.667 0.947 0.887 0.946 1.044 1.105 1.718_ 0.535 M8 0.709 0.921 0.926 2.202_ 1.118 0.821 M7 0.643 0.822 0.936 1.023 0.801 2.074_ (n [Table Minimum A3. Service Level 188) = 0.708 M5 0.890 0.913 0.980 1.850_ M4 1.046 0.770 1.099 1.896_ M3 0.866 1.781_ 0.875 0.789 M2 1.616_ M1 1.372_
Jg Means_ | ^ ness cd
2 Deviation ~>
? -i
10 >
This content downloaded from 202.43.95.117 on Fri, 18 Oct 2013 21:58:41 PM All use subject to JSTOR Terms and Conditions
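The Mardia's coefficients reported in the appendix tables summarize multivariate kurtosis as a check on the multivariate normality assumption behind the structural equation analyses. As a hedged illustration only, the sketch below shows how such a normalized estimate is typically computed; the function name and the tiny two-variable data set are invented for exposition and are unrelated to the paper's 22-item holdout sample:

```python
# Illustrative pure-Python computation of Mardia's multivariate kurtosis
# and its normalized estimate (hypothetical data; bivariate case only).
import math

def mardia_kurtosis(data):
    """Return (b2p, normalized) for rows of p-variate observations (p = 2 here)."""
    n, p = len(data), len(data[0])
    means = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, means)] for row in data]
    # ML (divide-by-n) covariance matrix, then its inverse (2x2 closed form).
    s = [[sum(r[i] * r[j] for r in centered) / n for j in range(p)] for i in range(p)]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    # b2p = mean of squared Mahalanobis distances of each observation.
    b2p = sum(
        (sum(r[i] * inv[i][j] * r[j] for i in range(p) for j in range(p))) ** 2
        for r in centered
    ) / n
    # Normalized estimate: departure from the normal-theory expectation p(p+2).
    normalized = (b2p - p * (p + 2)) / math.sqrt(8 * p * (p + 2) / n)
    return b2p, normalized
```

Large positive normalized estimates (such as those in Tables A2 and A3) indicate heavier-than-normal multivariate tails, which is why robust estimation corrections (e.g., Satorra and Bentler 1990) are relevant to analyses of such data.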