
Measuring Service Quality: SERVQUAL vs. SERVPERF Scales
Sanjay K Jain and Garima Gupta


Executive Summary

KEY WORDS: Service Quality, Measurement of Service Quality, Service Quality Scale, Scale Validity and Reliability, Diagnostic Ability of Scale

Quality has come to be recognized as a strategic tool for attaining operational efficiency and improved business performance. This is true for both the goods and the services sectors. The problem with managing service quality in service firms, however, is that quality is not easily identifiable and measurable due to the inherent characteristics of services that make them different from goods. Various definitions of service quality have been proposed in the past and, based on these definitions, different scales for its measurement have been put forward. SERVQUAL and SERVPERF constitute the two major service quality measurement scales, yet consensus continues to elude researchers as to which one is superior. An ideal service quality scale is one that is not only psychometrically sound but also diagnostically robust enough to provide insights to managers for corrective action in the event of quality shortfalls. Empirical studies evaluating the validity, reliability, and methodological soundness of service quality scales clearly point to the superiority of the SERVPERF scale. The diagnostic ability of the scales, however, has not been explicitly examined and empirically verified in the past. The present study aims at filling this void in the service quality literature. It assesses the diagnostic power of the two service quality scales. The validity and methodological soundness of these scales have also been probed in the Indian context, an aspect which has so far remained neglected due to the preoccupation of past studies with service industries in the developed world. Using data collected through a survey of consumers of fast food restaurants in Delhi, the study finds the SERVPERF scale to provide a more convergent- and discriminant-valid explanation of the service quality construct. The scale, however, is found deficient in its diagnostic power. It is the SERVQUAL scale which outperforms the SERVPERF scale by virtue of its higher diagnostic power to pinpoint areas for managerial intervention in the event of service quality shortfalls.
The major managerial implications of the study are: Because of its psychometric soundness and greater instrument parsimoniousness, one should employ the SERVPERF scale for assessing the overall service quality of a firm. The SERVPERF scale should also be the preferred research instrument when one is interested in undertaking service quality comparisons across service industries. On the other hand, when the research objective is to identify areas of service quality shortfall for possible managerial intervention, the SERVQUAL scale needs to be preferred because of its superior diagnostic power. One serious problem with the SERVQUAL scale, however, is that it entails a gigantic data collection task: employing a lengthy questionnaire, one is required to collect data about consumers' expectations as well as their perceptions of a firm's performance on each of the 22 service quality scale attributes. The addition of importance weights can further add to the diagnostic power of the SERVQUAL scale, but the choice needs to be weighed against the additional task of data collection; collecting importance scores for each of the 22 service attributes is indeed a major deterrent. Alternative, less tedious approaches, discussed towards the end of the paper, can however be employed by researchers to lighten the data collection task.

VIKALPA VOLUME 29 NO 2 APRIL - JUNE 2004


Quality has come to be recognized as a strategic tool for attaining operational efficiency and improved business performance (Anderson and Zeithaml, 1984; Babakus and Boller, 1992; Garvin, 1983; Phillips, Chang and Buzzell, 1983). This is true for the services sector too. Several authors have discussed the unique importance of quality to service firms (e.g., Normann, 1984; Shaw, 1978) and have demonstrated its positive relationship with profits, increased market share, return on investment, customer satisfaction, and future purchase intentions (Anderson, Fornell and Lehmann, 1994; Boulding et al., 1993; Buzzell and Gale, 1987; Rust and Oliver, 1994). One obvious conclusion of these studies is that firms with superior quality products outperform those marketing inferior quality products. Notwithstanding the recognized importance of service quality, there have been methodological issues and application problems with regard to its operationalization. Quality in the context of service industries has been conceptualized differently and, based on these different conceptualizations, alternative scales have been proposed for service quality measurement (see, for instance, Brady, Cronin and Brand, 2002; Cronin and Taylor, 1992, 1994; Dabholkar, Shepherd and Thorpe, 2000; Parasuraman, Zeithaml and Berry, 1985, 1988). Despite considerable work undertaken in the area, there is no consensus yet as to which of the measurement scales is robust enough for measuring and comparing service quality. One major problem with past studies has been their preoccupation with assessing the psychometric and methodological soundness of service scales, and that too in the context of service industries in the developed countries. Virtually no empirical efforts have been made to evaluate the diagnostic ability of the scales in providing managerial insights for corrective action in the event of quality shortfalls.
Furthermore, little work has been done to examine the applicability of these scales to the service industries in developing countries. This paper, therefore, is an attempt to fill this existing void in the services quality literature. Based on a survey of consumers of fast food restaurants in Delhi, this paper assesses the diagnostic usefulness as well as the psychometric and methodological soundness of the two widely advocated service quality scales, viz., SERVQUAL and SERVPERF.

SERVICE QUALITY: CONCEPTUALIZATION AND OPERATIONALIZATION


Quality has been defined differently by different authors. Some prominent definitions include "conformance to requirements" (Crosby, 1984), "fitness for use" (Juran, 1988), or "one that satisfies the customer" (Eiglier and Langeard, 1987). As per the Japanese production philosophy, quality implies zero defects in the firm's offerings. Though initial efforts in defining and measuring service quality emanated largely from the goods sector, a solid foundation for research work in the area was laid down in the mid-eighties by Parasuraman, Zeithaml and Berry (1985). They were amongst the earliest researchers to emphatically point out that the concept of quality prevalent in the goods sector is not extendable to the services sector. Being inherently and essentially intangible, heterogeneous, perishable, and entailing simultaneity and inseparability of production and consumption, services require a distinct framework for quality explication and measurement. As against the goods sector, where tangible cues exist to enable consumers to evaluate product quality, quality in the service context is explicated in terms of parameters that largely fall under the domain of experience and credence properties and are as such difficult to measure and evaluate (Parasuraman, Zeithaml and Berry, 1985; Zeithaml and Bitner, 2001). One major contribution of Parasuraman, Zeithaml and Berry (1988) was to provide a terse definition of service quality. They defined service quality as "a global judgment, or attitude, relating to the superiority of the service," and explicated it as involving evaluations of both the outcome (i.e., what the customer actually receives from the service) and the process of the service act (i.e., the manner in which the service is delivered).
In line with the propositions put forward by Gronroos (1982) and Smith and Houston (1982), Parasuraman, Zeithaml and Berry (1985, 1988) posited and operationalized service quality as a difference between consumer expectations of what they want and their perceptions of what they get. Based on this conceptualization and operationalization, they proposed a service quality measurement scale called SERVQUAL. The SERVQUAL scale constitutes an important landmark in the service quality literature and has been extensively applied in different service settings.


Over time, a few variants of the scale have also been proposed. The SERVPERF scale, put forward by Cronin and Taylor (1992) in the early nineties, is one such variant. Numerous studies have been undertaken to assess the relative superiority of the two scales, but consensus continues to elude as to which is the better scale. The following two sections provide an overview of the operationalization and methodological issues concerning these two scales.

SERVQUAL Scale
The foundation for the SERVQUAL scale is the gap model proposed by Parasuraman, Zeithaml and Berry (1985, 1988). With roots in the disconfirmation paradigm,1 the gap model maintains that satisfaction is related to the size and direction of disconfirmation of a person's experience vis-à-vis his/her initial expectations (Churchill and Surprenant, 1982; Parasuraman, Zeithaml and Berry, 1985; Smith and Houston, 1982). As a gap or difference between customer expectations and perceptions, service quality is viewed as lying along a continuum ranging from ideal quality to totally unacceptable quality, with some point along the continuum representing satisfactory quality. Parasuraman, Zeithaml and Berry (1988) held that when perceived or experienced service is less than expected service, it implies less than satisfactory service quality; but when perceived service exceeds expected service, the obvious inference is that service quality is more than satisfactory. They posited that while a negative discrepancy between perceptions and expectations (a "performance gap," as they call it) causes dissatisfaction, a positive discrepancy leads to consumer delight. Based on their empirical work, they identified a set of 22 variables/items tapping five different dimensions of the service quality construct.2 Since they operationalized service quality as a gap between customers' expectations and their perceptions of performance on these variables, their service quality measurement scale comprises a total of 44 items (22 for expectations and 22 for perceptions). Customers' responses to their expectations and perceptions are obtained on a 7-point Likert scale and are compared to arrive at (P-E) gap scores. The higher (more positive) the perception-minus-expectation score, the higher the perceived level of service quality. In equation form, their operationalization of service quality can be expressed as follows:

SQ_i = Σ_{j=1}^{k} (P_ij - E_ij)                (1)

where:
SQ_i = perceived service quality of individual i
k = number of service attributes/items
P_ij = perception of individual i with respect to the performance of a service firm on attribute j
E_ij = service quality expectation for attribute j that is the relevant norm for individual i

The importance of Parasuraman, Zeithaml and Berry's (1988) scale is evident from its application in a number of empirical studies across varied service settings (Brown and Swartz, 1989; Carman, 1990; Kassim and Bojei, 2002; Lewis, 1987, 1991; Pitt, Oosthuizen and Morris, 1992; Witkowski and Wolfinbarger, 2002; Young, Cunningham and Lee, 1994). Despite its extensive application, the SERVQUAL scale has been criticized on various conceptual and operational grounds. Some major objections against the scale relate to the use of (P-E) gap scores, the length of the questionnaire, the predictive power of the instrument, and the validity of the five-dimension structure (e.g., Babakus and Boller, 1992; Cronin and Taylor, 1992; Dabholkar, Shepherd and Thorpe, 2000; Teas, 1993, 1994). Since this paper does not purport to examine the dimensionality issue, we confine ourselves to a discussion of the first three problem areas. Several issues have been raised with regard to the use of (P-E) gap scores, i.e., the disconfirmation model. Most studies have found a poor fit between service quality as measured through Parasuraman, Zeithaml and Berry's (1988) scale and overall service quality measured directly through a single-item scale (e.g., Babakus and Boller, 1992; Babakus and Mangold, 1989; Carman, 1990; Finn and Lamb, 1991; Spreng and Singh, 1993). Though the use of gap scores is intuitively appealing and conceptually sensible, the ability of these scores to provide information beyond that already contained in the perception component of the service quality scale is in doubt (Babakus and Boller, 1992; Iacobucci, Grayson and Ostrom, 1994).
Pointing to conceptual, theoretical, and measurement problems associated with the disconfirmation model, Teas (1993, 1994) observed that a (P-E) gap of magnitude -1 can be produced in six ways (P=1, E=2; P=2, E=3; P=3, E=4; P=4, E=5; P=5, E=6; and P=6, E=7), and these tied gaps cannot be construed as implying equal perceived service quality shortfalls. In a similar vein, the empirical study by Peter, Churchill and Brown (1993) found difference scores to be beset with psychometric problems and therefore cautioned against the use of (P-E) scores. The validity of the (P-E) measurement framework has also come under attack due to problems with the conceptualization and measurement of the expectation component of the SERVQUAL scale. While perception (P) is definable and measurable in a straightforward manner as the consumer's belief about the service as experienced, expectation (E) is subject to multiple interpretations and as such has been operationalized differently by different authors/researchers (e.g., Babakus and Inhofe, 1991; Brown and Swartz, 1989; Dabholkar et al., 2000; Gronroos, 1990; Teas, 1993, 1994). Initially, Parasuraman, Zeithaml and Berry (1985, 1988) defined expectation, close on the lines of Miller (1977), as desires or wants of consumers, i.e., what they feel a service provider should offer rather than would offer. This conceptualization was based on the reasoning that the term expectation has been used differently in the service quality literature than in the customer satisfaction literature, where it is defined as a prediction of future events, i.e., what customers feel a service provider would offer. Parasuraman, Berry and Zeithaml (1990) labelled this "should be" expectation as normative expectation, and posited it as being similar to ideal expectation (Zeithaml and Parasuraman, 1991). Later, realizing the problem with this interpretation, they themselves proposed a revised expectation (E*) measure, i.e., what the customer would expect from an excellent service (Parasuraman, Zeithaml and Berry, 1994). It is because of the vagueness of the expectation concept that researchers like Babakus and Boller (1992), Bolton and Drew (1991a), Brown, Churchill and Peter (1993), and Carman (1990) stressed the need for developing a methodologically more precise scale.
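The gap-score computation in equation (1), and Teas's tied-gaps objection discussed above, can be illustrated with a minimal Python sketch (all ratings are invented, and a shortened item battery stands in for the full 22-item instrument):

```python
# Hypothetical illustration of the SERVQUAL gap score (equation 1)
# and of Teas's (1993) "tied gaps" objection. All ratings are invented.

def servqual_score(perceptions, expectations):
    """Unweighted SERVQUAL score: sum of (P_ij - E_ij) over the k items."""
    assert len(perceptions) == len(expectations)
    return sum(p - e for p, e in zip(perceptions, expectations))

# One respondent's ratings on a shortened, hypothetical item battery,
# each on a 7-point scale (the actual instrument uses 22 items).
P = [5, 6, 4, 7, 5]
E = [6, 6, 5, 6, 7]
print(servqual_score(P, E))  # -> -3: perceived quality falls short of expectations

# Teas's point: six different (P, E) pairs all produce a gap of -1,
# yet they can hardly represent identical quality shortfalls.
tied = [(p, p + 1) for p in range(1, 7)]
print(tied)  # -> [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7)]
```

A negative total signals overall dissatisfaction under the gap model, but, as the tied pairs show, identical gap totals can arise from very different rating levels.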
The SERVPERF scale developed by Cronin and Taylor (1992) is one of the important variants of the SERVQUAL scale. Being based on the perception component alone, it has been conceptually and methodologically posited as a better scale than the SERVQUAL scale, which has its origin in the disconfirmation paradigm.

SERVPERF Scale
Cronin and Taylor (1992) were amongst the researchers who levelled the strongest attack on the SERVQUAL scale. They questioned the conceptual basis of the SERVQUAL scale and found it confounded with service satisfaction. They, therefore, opined that the expectation (E) component of SERVQUAL be discarded and the performance (P) component alone be used instead. They proposed what is referred to as the SERVPERF scale. Besides theoretical arguments, Cronin and Taylor (1992) provided empirical evidence across four industries (namely banks, pest control, dry cleaning, and fast food) to corroborate the superiority of their performance-only instrument over the disconfirmation-based SERVQUAL scale. Being a variant of the SERVQUAL scale and containing the perceived performance component alone, the performance-only scale comprises only 22 items. A higher perceived performance implies higher service quality. In equation form, it can be expressed as:

SQ_i = Σ_{j=1}^{k} P_ij                (2)

where:
SQ_i = perceived service quality of individual i
k = number of attributes/items
P_ij = perception of individual i with respect to the performance of a service firm on attribute j

Methodologically, the SERVPERF scale represents a marked improvement over the SERVQUAL scale. Not only is the scale more efficient in reducing the number of items to be measured by 50 per cent, it has also been found empirically superior to the SERVQUAL scale in explaining greater variance in overall service quality measured through a single-item scale. This explains the considerable support that has emerged over time in favour of the SERVPERF scale (Babakus and Boller, 1992; Bolton and Drew, 1991b; Boulding et al., 1993; Churchill and Surprenant, 1982; Gotlieb, Grewal and Brown, 1994; Hartline and Ferrell, 1996; Mazis, Antola and Klippel, 1975; Woodruff, Cadotte and Jenkins, 1983). Though still lagging behind the SERVQUAL scale in application, researchers have increasingly started making use of the performance-only measure of service quality (Andaleeb and Basu, 1994; Babakus and Boller, 1992; Boulding et al., 1993; Brady et al., 2002; Cronin et al., 2000; Cronin and Taylor, 1992, 1994). Also, when applied in conjunction with the SERVQUAL scale, the SERVPERF measure has outperformed the SERVQUAL scale (Babakus and Boller, 1992; Brady, Cronin and Brand, 2002; Cronin and Taylor, 1992; Dabholkar et al., 2000). Seeing its superiority, even Zeithaml (one of the founders of the SERVQUAL scale) in a recent study observed that "our results are incompatible with both the one-dimensional view of expectations and the gap formation for service quality. Instead, we find that perceived quality is directly influenced only by perceptions (of performance)" (Boulding et al., 1993). This admission cogently testifies to the superiority of the SERVPERF scale.
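A minimal Python illustration of the performance-only computation in equation (2), again with invented ratings on a shortened battery (the actual instrument uses 22 items):

```python
# Hypothetical illustration of the SERVPERF score (equation 2):
# service quality as the sum of perception ratings alone.

def servperf_score(perceptions):
    """Unweighted SERVPERF score: sum of P_ij over the k items."""
    return sum(perceptions)

P = [5, 6, 4, 7, 5]       # invented 5-item battery (real scale: 22 items)
print(servperf_score(P))  # -> 27
```

Unlike SERVQUAL, no expectation ratings are collected, which halves the questionnaire from 44 items to 22.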

Service Quality Measurement: Unweighted and Weighted Paradigms


The significance of the various quality attributes used in service quality scales can differ considerably across types of services and service customers. Security, for instance, might be a prime determinant of quality for bank customers but may not mean much to customers of a beauty parlour. Since service quality attributes are not expected to be equally important across service industries, it has been suggested that importance weights be included in service quality measurement scales (Cronin and Taylor, 1992; Parasuraman, Zeithaml and Berry, 1985, 1988; Parasuraman, Berry and Zeithaml, 1991; Zeithaml, Parasuraman and Berry, 1990). While the unweighted measures of the SERVQUAL and the SERVPERF scales have been described above vide equations (1) and (2), the weighted versions of the two scales as proposed by Cronin and Taylor (1992) are as follows:

SQ_i = Σ_{j=1}^{k} I_ij (P_ij - E_ij)                (3)

SQ_i = Σ_{j=1}^{k} I_ij P_ij                (4)

where:
I_ij = the importance weight of attribute j for individual i

Though, on theoretical grounds, the addition of weights makes sense (Bolton and Drew, 1991a), not much improvement in the measurement potency of either scale has been reported after the inclusion of importance weights. Between the weighted versions of the two scales, the weighted SERVPERF scale has been theoretically posited to be superior to the weighted SERVQUAL scale (Bolton and Drew, 1991a).

As pointed out earlier, one major problem with the past studies has been their preoccupation with assessing the psychometric and methodological soundness of the two scales. The diagnostic ability of the scales has not been explicitly examined and empirically investigated. The psychometric and methodological aspects of a scale are no doubt important considerations, but one cannot overlook the assessment of the diagnostic power of the scales. From the strategy formulation point of view, it is rather the diagnostic ability of a scale that can help managers ascertain where quality shortfalls prevail and what can possibly be done to close the gaps.

METHODOLOGY
The present study attempts a comparative assessment of the SERVQUAL and the SERVPERF scales in the Indian context in terms of their validity, ability to explain variance in overall service quality, power to distinguish among service objects/firms, parsimony in data collection, and, more importantly, their diagnostic ability to provide insights for managerial interventions in case of quality shortfalls. Data for making comparisons among the unweighted and weighted versions of the two scales were collected through a survey of consumers of fast food restaurants in Delhi. Fast food restaurants were chosen for their growing familiarity and popularity with the respondents under study. Another reason was that fast food restaurant services fall midway on the pure goods-pure service continuum (Kotler, 2003); the extremes are seldom found in most service businesses. To ensure greater generalizability of the service quality scales, it was considered desirable to select a service offering comprising both a goods component (i.e., food) and a service component (i.e., preparation and delivery of food). Eight fast food restaurants (Nirula's, Wimpy, Domino's, McDonald's, Pizza Hut, Haldiram, Bikanerwala, and Rameshwar), rated in the pilot survey as the more familiar and patronized restaurants in different parts of Delhi, were selected. Using the personal survey method, 300 students and lecturers of different colleges and departments of the University of Delhi spread all over the city were approached. The field work was done during December 2001-March 2002. After repeated follow-ups, only 200 duly filled-in questionnaires could be collected, constituting a 67 per cent response rate. The sample was deliberately restricted to students and lecturers of Delhi University and was equally divided between the two groups. The idea underlying the selection of these two categories of respondents was their easy accessibility.

Quota sampling was employed for selecting respondents from these two groups. Each respondent was asked to give information about two restaurants: one most frequently visited and one least frequently visited. At the analysis stage, the collected data were pooled together, constituting a total of 400 responses. Parasuraman, Zeithaml and Berry's (1988) 22-item SERVQUAL instrument was employed for collecting data on the respondents' expectations, perceptions, and importance weights of the various service attributes. Wherever required, slight modifications in the wording of scale items were made to make the questionnaire understandable to the surveyed respondents. Some of the items were negatively worded to avoid the problem of routine ticking of items by the respondents. In addition to the above-mentioned 66 scale items (22 each for expectations, perceptions, and importance ratings), the questionnaire included items relating to overall quality, overall satisfaction, and behavioural intentions of the consumers. These items were included to assess the validity of the multi-item service quality scales used at our end. Single-item direct measures of overall service quality ("overall quality of these restaurants is excellent") and overall satisfaction ("overall, I feel satisfied with the services provided") were used. Cronin and Taylor (1992) used similar measures for assessing the validity of multi-item service quality scales. Behavioural intentions were measured with the help of a 3-item scale as suggested by Zeithaml and Parasuraman (1996) and later used by Brady and Robertson (2001) and Brady, Cronin and Brand (2002).3 Excepting the importance weights and the behavioural intention items, responses to all scale items were obtained on a 5-point Likert scale ranging from 5 for "strongly agree" to 1 for "strongly disagree." A 4-point scale anchored on 4 for "very important" and 1 for "not important" was used for measuring the importance weight of each item. Responses to behavioural intention items were obtained on a 5-point scale ranging from 1 for "very low" to 5 for "very high."

FINDINGS AND DISCUSSION


Validity of Alternative Measurement Scales
As suggested by Churchill (1979), the convergent and discriminant validity of the four measurement scales was assessed by computing correlation coefficients for different pairs of scales. The results are summarized in Table 1. The presence of high correlations between alternative measures of service quality points to the convergent validity of all four scales. The SERVPERF scale, however, is found to have stronger correlations with the other similar measures, viz., the SERVQUAL and the importance-weighted service quality measures. A higher correlation between two different measures of the same variable than between the measure of a variable and another variable implies the presence of discriminant validity (Churchill, 1979) in respect of all four multi-item service quality scales. Once again, it is the SERVPERF scale that possesses the highest discriminant validity. SERVPERF is thus found to provide a more convergent- as well as discriminant-valid explanation of service quality.
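Using the coefficients reported in Table 1, the convergent-validity comparison reduces to a simple check. The sketch below transcribes the correlations of each multi-item scale with the two single-item criterion measures:

```python
# Correlations of each multi-item scale with the single-item criterion
# measures, as transcribed from Table 1.
with_overall_quality = {"SERVQUAL": 0.416, "SERVPERF": 0.544,
                        "weighted SERVQUAL": 0.399, "weighted SERVPERF": 0.531}
with_satisfaction = {"SERVQUAL": 0.420, "SERVPERF": 0.557,
                     "weighted SERVQUAL": 0.425, "weighted SERVPERF": 0.554}

# SERVPERF shows the strongest association with both criterion measures,
# which is the basis for the validity conclusion drawn in the text.
best_q = max(with_overall_quality, key=with_overall_quality.get)
best_s = max(with_satisfaction, key=with_satisfaction.get)
print(best_q, best_s)  # -> SERVPERF SERVPERF
```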

Table 1: Alternate Service Quality Scales and Other Measures: Correlation Coefficients

                            SERVQUAL  SERVPERF  Wtd SERVQUAL  Wtd SERVPERF  Overall  Overall       Behavioural
                            (P-E)     (P)       I(P-E)        I(P)          Quality  Satisfaction  Intentions
SERVQUAL (P-E)              1.000
SERVPERF (P)                 .735     1.000
Weighted SERVQUAL I(P-E)     .995      .767     1.000
Weighted SERVPERF I(P)       .759      .993      .772         1.000
Overall service quality      .416      .544      .399          .531         1.000
Overall satisfaction         .420      .557      .425          .554          .724    1.000
Behavioural intentions       .293      .440      .308          .459          .570     .528         1.000

Explanatory Power of Alternative Measurement Scales

The ability of a scale to explain variation in overall service quality (measured directly through a single-item scale) was assessed by regressing respondents' perceptions of overall service quality on the corresponding multi-item service quality scale. The adjusted R2 values reported in Table 2 clearly point to the superiority of the SERVPERF scale, which explains a greater proportion of variance (0.294) in overall service quality than the other scales. The addition of importance weights does not enhance the explanatory power of either the SERVPERF or the SERVQUAL scale. These results conform with those of Cronin and Taylor (1992), who also found that adding importance weights did not improve the predictive ability of either scale.

Table 2: Explanatory Power of Alternative Service Scales: Regression Results

Measurement Scale (Independent Variable)    R2      Adjusted R2
SERVQUAL (P-E)                              .173    .171
SERVPERF (P)                                .296    .294
Weighted SERVQUAL I(P-E)                    .159    .156
Weighted SERVPERF I(P)                      .282    .280

Note: Dependent variable = Overall service quality.

Parsimony in Data Collection

Often, ease of data collection is a major consideration governing the choice of measurement scales for studies in the business context. Examined from this perspective, the unweighted performance-only scale turns out to be the best choice, as it requires much less informational input than the other scales. While the SERVQUAL and the weighted service quality scales (both SERVQUAL and SERVPERF) require data on customer perceptions as well as customer expectations and/or importance ratings, the performance-only measure requires data on customers' perceptions alone, thus considerably easing the data collection task. The number of items for which data are required is only 22 for the SERVPERF scale, against 44 and 66 for the SERVQUAL and the weighted SERVQUAL scales respectively (Table 4). Besides making the questionnaire lengthy and compounding data editing and coding tasks, the requirement of additional data can take its toll on the response rate too. This study is a case in point: seeing a lengthy questionnaire, many respondents hesitated to fill it up and returned it on the spot.
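As a cross-check on Table 2, the adjusted R2 values follow from the standard degrees-of-freedom correction. The sketch below assumes n = 400 pooled responses and a single predictor (k = 1), matching the study's bivariate regressions; third-decimal differences for the weighted scales presumably stem from rounding of the reported R2 values:

```python
# Adjusted R-squared: 1 - (1 - R^2) * (n - 1) / (n - k - 1).
# Assumes n = 400 pooled responses and k = 1 predictor, matching the
# single-scale regressions reported in Table 2.

def adjusted_r2(r2, n, k):
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(round(adjusted_r2(0.296, 400, 1), 3))  # SERVPERF: -> 0.294
print(round(adjusted_r2(0.173, 400, 1), 3))  # SERVQUAL: -> 0.171
```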

Discriminatory Power of Alternative Measurement Scales


One basic use of a service quality scale is to gain insight as to where a particular service firm stands vis--vis others in the market. The scale that can best differentiate among service firms obviously represents a better choice. Mean quality scores for each restaurant were computed and compared with the help of ANOVA technique to delve into the discriminatory power of alternative measurement scales. The results presented in Table 3 show significant differences (p < .000) existing among mean service quality scores for each of the alternate scales. The results are quite in line with those obtained by using single-item measures of service quality. The results thus establish the ability of all the four scales to be able to discriminate among the objects (i.e., restaurants), and as such imply that any one of the scales can be used for making quality comparisons across service firms.

Diagnostic Ability of Scales in Providing Insights for Managerial Intervention and Strategy Formulation
A major reason underlying the use of a multi-item scale vis--vis its single-item counterpart is its ability to provide information about the attributes where a given firm is deficient in providing service quality and thus needs to evolve strategies to remove such quality shortfalls with a view to enhance customer satisfaction in future. When judged from this perspective, all the four service quality scales, being multi-item scales, appear capable of performing the task. But, unfortunately, the scales

Table 3: Discriminatory Power of Alternate Scales ANOVA Results


Restaurant Nirulas Wimpys Dominos McDonalds Pizza Hut Haldiram Bikanerwala Rameshwar Overall mean F-value (significance level) SERVPERF (P) 3.63 3.41 3.40 3.72 3.64 3.55 3.38 3.19 3.55 6.60 (p <.000) Weighted SERVPERF I (P) 3.67 3.44 3.50 3.78 3.72 3.51 3.41 3.30 3.61 5.31 (p <.000) SERVQUAL (P-E) -0.28 -0.64 -0.45 -0.21 -0.24 -0.40 -0.57 -0.58 -0.37 4.25 (p <.000) Weighted SERVQUAL I (P-E) -0.31 -0.58 -0.41 -0.20 -0.29 -0.50 -0.62 -0.58 -0.38 3.40 (p <.002) Overall Service Quality 4.04 3.46 3.52 4.23 4.00 3.72 3.65 3.19 3.86 6.77 (p<.000)

VIKALPA VOLUME 29 NO 2 APRIL - JUNE 2004

31
31

Table 4: Number of Items Contained in Service Quality Measurement Scales


Scale SERVQUAL (P-E) SERVPERF (P) Weighted SERVQUAL I(P-E) Weighted SERVPERF I(P) Number of Items 44 22 66 44

differ considerably in terms of the areas identified for improvement as well as the order in which the identified areas need to be taken up for quality improvement. This differential diagnostic power of the four scales can be probed by taking up four typical service attributes, namely, use of up-to-date equipment and technology, prompt response, accuracy of records, and convenience of operating hours, tapped in the study vide scale items 1, 11, 13, and 22 respectively. The performance of a restaurant (name disguised) on these four scale items is reported in Table 5.

An analysis of Table 5 reveals the following findings. When measured with the help of the performance-only (i.e., SERVPERF) scale, the scores in column 3 show that the restaurant is providing quality in respect of service items 1, 13, and 22; the mean scores in the range of 3.31 to 3.97 for these items point to this inference. The consumers appear indifferent to the provision of service quality in respect of item 11. However, when compared with the maximum attainable value of 5 on a 5-point scale, the restaurant under consideration appears deficient in all four service areas (column 5), implying managerial intervention in all of them. In the event of time and resource constraints, however, the management needs to prioritize the quality-deficient areas. This can be done in two ways: either on the basis of the magnitude of the performance scores (scores lower in magnitude pointing to higher priority for intervention) or on the basis of the magnitude of the implied gap between perceived performance (P) and the maximally attainable score of 5 (with larger gaps implying more immediate intervention). Judged either way, the service areas in descending order of intervention urgency are 11, 22, 13, and 1 (see columns 3 and 5). The management can pick one or a few areas for intervention depending upon the time and financial resources at its disposal. If importance scores are also taken into account, as in the weighted SERVPERF scale, the order of priority changes to 11, 13, 22, and 1.

In the case of the SERVQUAL scale, which requires comparison of customers' perceptions of service performance (P) with their expectations (E), areas with zero or positive gaps imply either customer satisfaction or delight with the service provision and as such do not call for any managerial intervention. But in the areas where gaps are negative, the management needs to act urgently to improve quality. Viewed from this perspective, only three service areas, namely, 13, 11, and 1, have negative gaps and call for managerial intervention, in that order as determined by the magnitude of the gap scores shown in column 9 of Table 5. Taking the importance scores into account as well, as in the weighted SERVQUAL scale, the order of the priority areas changes to 11, 13, and 1 (see column 10).

We thus find that though all four multi-item scales possess the diagnostic power to suggest areas for managerial action, they differ considerably in the areas suggested as well as in the order in which actions in the identified areas are called for. The moot

Table 5: Areas Suggested for Quality Improvement by Alternate Service Quality Scales
| Scale Item (1) | Item Description (2) | Performance (P) or SERVPERF Score (3) | Maximum Score (M) (4) | Gap (P-M) (5) | Importance Score (I) (6) | I(P) or Weighted SERVPERF Score (7) | Expectation Score (E) (8) | Gap (P-E) or SERVQUAL Score (9) | I(P-E) or Weighted SERVQUAL (10) |
| 1 | Use of up-to-date equipment and technology | 3.97 | 5.00 | -1.03 | 4.28 | 16.99 | 4.37 | -0.40 | -1.71 |
| 11 | Prompt response | 3.08 | 5.00 | -1.92 | 4.09 | 12.60 | 3.57 | -0.49 | -2.00 |
| 13 | Accuracy of records | 3.51 | 5.00 | -1.49 | 3.67 | 12.88 | 4.05 | -0.54 | -1.98 |
| 22 | Operating hours convenient to all | 3.31 | 5.00 | -1.69 | 4.05 | 13.37 | 3.04 | 0.27 | 1.09 |
| Action areas in order of priority | | 11, 22, 13, 1 | | 11, 22, 13, 1 | | 11, 13, 22, 1 | | 13, 11, 1 | 11, 13, 1 |

Note: Customer expectations, perceptions, and importance scores for each service quality item were measured on a 5-point Likert scale ranging from 5 for "strongly agree" to 1 for "strongly disagree."
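The four priority orderings reported in Table 5 follow mechanically from the item-level scores. The sketch below encodes the four decision rules using the perception, expectation, and importance scores from Table 5; under the SERVQUAL-based rules, item 22 drops out because its (P-E) gap works out positive:

```python
# Priority ordering of service areas under each scale, using the four
# illustrative items from Table 5 (P = perception, E = expectation,
# I = importance, all on 5-point scales).
items = {
    1:  {"P": 3.97, "E": 4.37, "I": 4.28},   # up-to-date equipment
    11: {"P": 3.08, "E": 3.57, "I": 4.09},   # prompt response
    13: {"P": 3.51, "E": 4.05, "I": 3.67},   # accuracy of records
    22: {"P": 3.31, "E": 3.04, "I": 4.05},   # convenient operating hours
}

def servperf_order(items):
    # lower perceived performance -> higher priority for intervention
    return sorted(items, key=lambda k: items[k]["P"])

def weighted_servperf_order(items):
    return sorted(items, key=lambda k: items[k]["I"] * items[k]["P"])

def servqual_order(items):
    # only negative (P - E) gaps call for intervention; largest shortfall first
    gaps = {k: v["P"] - v["E"] for k, v in items.items()}
    return sorted((k for k, g in gaps.items() if g < 0), key=lambda k: gaps[k])

def weighted_servqual_order(items):
    gaps = {k: v["I"] * (v["P"] - v["E"]) for k, v in items.items()}
    return sorted((k for k, g in gaps.items() if g < 0), key=lambda k: gaps[k])

print(servperf_order(items))           # [11, 22, 13, 1]
print(weighted_servperf_order(items))  # [11, 13, 22, 1]
print(servqual_order(items))           # [13, 11, 1]
print(weighted_servqual_order(items))  # [11, 13, 1]
```

The divergent orderings show concretely how the implicit reference point (the scale maximum versus the customer's expectation) drives the diagnosis.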


point, therefore, is to determine which scale provides a more pragmatic and managerially useful diagnosis.

A closer perusal of the data in Table 5 shows that the problem of different areas and different orderings suggested by the four scales arises basically from the different reference points used, explicitly or implicitly, for computing the service quality shortfalls. While it is the maximally attainable score of 5 on a 5-point scale that presumably serves as the reference point in the case of the SERVPERF scale, it is the customer expectation for each of the service areas that acts as the yardstick under the SERVQUAL scale. Ideally speaking, the management should strive to attain the maximally attainable performance level (a score of 5 in the case of a 5-point scale) in all those service areas where the performance level is less than 5. This is exactly what the SERVPERF scale-based analysis purports to do. However, this is tenable only when there are no time and resource constraints and it can be assumed that all areas are equally important to customers and that customers want the maximum possible quality level in respect of each of the service attributes. But in a situation where the management works under resource constraints (as is usually the case) and consumers do not attach equal importance to every attribute, the management needs to identify the areas which are more critical from the consumers' point of view and call for immediate attention. This is exactly what the SERVQUAL scale does by pointing to areas where the firm's performance is below the customers' expectations. Between the two scales, therefore, the SERVQUAL scale stands to provide a more pragmatic diagnosis of the service quality provision than the SERVPERF scale.4

So long as perceived performance equals or exceeds customer expectations for a service attribute, the SERVQUAL scale does not point to managerial intervention even though the performance level in respect of that attribute falls short of the maximally attainable service quality score. Service area 22 is a case in point. As per the SERVPERF scale, this too is a fitting area for managerial intervention because the perceived performance level in respect of this attribute is far less than the maximally attainable value of 5. This, however, is not the case with the SERVQUAL scale. Since the customers' perceptions of the restaurant's performance are above their expectation level, there is no ostensible justification for further trying to improve performance in this area.

The customers are already getting more than they expected; any attempt to further improve performance in this area might drain the restaurant owner of the resources needed for improvement in other, more critical areas. Any such effort, moreover, is unlikely to add to the customers' delight, as the customers themselves are not desirous of having more of this service attribute, as revealed by their mean expectation score, which is much lower than the maximally attainable score of 5.

If importance scores are also taken into consideration, the weighted versions of both the scales provide more useful insights than their unweighted counterparts. Be it the SERVQUAL or the SERVPERF scale, the inclusion of weights does represent an improvement over the unweighted measures. By incorporating the customers' perceptions of the importance of different service attributes into the analysis, the weighted service quality scales are able to direct managerial attention more precisely to deficient areas which are more critical from the customers' viewpoint and as such need to be urgently attended to. It may, furthermore, be observed that between the weighted versions of the SERVPERF and the SERVQUAL scales, the weighted SERVQUAL scale is clearly superior in its diagnostic power. This scale takes into account not only the magnitude of customer-defined service quality gaps but also the importance weights that customers assign to different service attributes, thus pointing to the service quality shortfalls that are crucial to a firm's success in the market and deserve immediate managerial intervention.

CONCLUSIONS, IMPLICATIONS, AND DIRECTIONS FOR FUTURE RESEARCH


A highly contentious issue examined in this paper relates to the operationalization of the service quality construct. A review of the extant literature points to SERVQUAL and SERVPERF as the two most widely advocated and applied service quality scales. Notwithstanding the number of studies undertaken in the field, it is not yet clear which of the two scales is a better measure of service quality. Since the focus of past studies has been on assessing the psychometric and methodological soundness of the service quality scales alone, and that too in the context of the developed world, this study represents a pioneering effort towards evaluating the methodological soundness as well as the diagnostic power of the two scales


in the context of a developing country, India. A survey of consumers of fast food restaurants in Delhi was carried out to gather the necessary information. The unweighted as well as the weighted versions of the SERVQUAL and SERVPERF scales were comparatively assessed in terms of their convergent and discriminant validity, ability to explain variation in overall service quality, ease of data collection, capacity to distinguish restaurants on the quality dimension, and diagnostic capability of providing directions for managerial intervention in the event of service quality shortfalls.

So far as the assessment of the various scales on the first three parameters is concerned, the unweighted performance-only measure (i.e., the SERVPERF scale) emerges as the better choice. It is found capable of providing a more convergent- and discriminant-valid explanation of the service quality construct. It also turns out to be the most parsimonious measure of service quality and is capable of explaining a greater proportion of the variance present in overall service quality measured through a single-item scale. The addition of importance weights, however, does not improve the validity and explanatory power of the unweighted SERVQUAL and SERVPERF scales. These findings are in conformity with those of earlier studies recommending the use of unweighted perception-only scores (e.g., Bolton and Drew, 1991b; Boulding et al., 1993; Churchill and Surprenant, 1982; Cronin, Brady and Hult, 2000; Cronin and Taylor, 1992).

When examined from the point of view of their power to discriminate among objects (i.e., restaurants in the present case), all four scales stand at par. But in terms of diagnostic ability, it is the SERVQUAL scale that emerges as the clear winner. The SERVPERF scale, notwithstanding its superiority in other respects, turns out to be a poor choice.
For, being based on an implied comparison with the maximally attainable scores, it suggests intervention even in areas where the firm's performance is already up to customers' expectations. The incorporation of expectation scores provides richer information than that provided by perception-only scores, thus adding to the diagnostic power of the service quality scale. Even the developers of the performance-only scale were cognizant of this fact and did not suggest that it is unnecessary to measure customer expectations in service quality research (Cronin and Taylor, 1992). From a diagnostic perspective, therefore, the (P-E) scale

constitutes a better choice. Since it entails a direct comparison of performance perceptions with customer expectations, it provides a more pragmatic diagnosis of service quality shortfalls. Especially in the event of time and resource constraints, the SERVQUAL scale is able to direct managerial attention to service areas which are critically deficient from the customers' viewpoint and require immediate attention. No doubt, the SERVQUAL scale entails greater data collection work, but this can be eased by employing direct rather than computed expectation-disconfirmation measures. This can be done by asking customers to report directly the extent to which they feel a given firm has performed in comparison with their expectations in respect of each service attribute, rather than asking them to report their perception and expectation scores separately, as is required under the SERVQUAL scale (for a further discussion of this aspect, see Dabholkar, Shepherd and Thorpe, 2000). The addition of importance weights further adds to the diagnostic power of the SERVQUAL scale. Though the inclusion of weights improves the diagnostic ability of even the SERVPERF scale, that scale continues to suffer from its generic weakness of directing managerial attention to service areas which are not at all deficient in the customers' perception.

In overall terms, we thus find that while the SERVPERF scale provides a more convergent- and discriminant-valid explanation of the service quality construct, possesses greater power to explain variations in overall service quality scores, and is also a more parsimonious data collection instrument, it is the SERVQUAL scale which possesses superior diagnostic power to pinpoint areas for managerial intervention.
The obvious managerial implication emanating from the study findings is that when one is interested simply in assessing the overall service quality of a firm or in making quality comparisons across service industries, one can employ the SERVPERF scale because of its psychometric soundness and parsimoniousness as an instrument. However, when one is interested in identifying the areas of a firm's service quality shortfalls for managerial intervention, one should prefer the SERVQUAL scale because of its superior diagnostic power. No doubt, the weighted SERVQUAL scale is the most appropriate alternative from the point of view of diagnostic ability, yet a final decision in this respect needs to be weighed against the gigantic task of information collection. Following Cronin and Taylor's (1992) approach, one needs to collect information on importance weights for all the 22 scale items, thus considerably increasing the length of the survey instrument.

However, alternative approaches exist that can be employed to overcome this problem. One possible alternative is to collect information about importance weights at the service-dimension level rather than for each individual service item. This can be accomplished by first conducting a pilot survey of respondents using the 44 SERVQUAL scale items and then performing a factor analysis on the collected data to identify the service dimensions. Once the service dimensions are identified, a final survey of all the sample respondents can be carried out seeking information in respect of the 44 scale items as well as the importance weights for each of the service quality dimensions identified at the pilot survey stage. The addition of one more question seeking importance information will only slightly increase the questionnaire size. The importance information so gathered can then be used for prioritizing the quality-deficient service areas for managerial intervention. Alternatively, one can employ the approach adopted by Parasuraman, Zeithaml and Berry (1988). Instead of directly collecting importance information from respondents, they derived importance weights by regressing overall quality perception scores on the SERVQUAL scores for each of the dimensions identified through a factor analysis of the data collected vide the 44 scale items. Irrespective of the approach used, the data collection task will be much simpler than that required under the approach employed by Cronin and Taylor (1992) for gathering data for the weighted SERVQUAL scale.

Though the study brings to the fore interesting findings, it will not be out of place to mention here some of its limitations.
A single service setting with a few restaurants under investigation and a small database of only 400 observations preclude much of the generalizability of the study findings. Studies of a similar kind with larger sample sizes need to be replicated in different service industries in different countries, especially in the developing ones, to ascertain the applicability and superiority of the alternate service quality scales. Dimensionality, though an important consideration from the point of view of both validity and reliability assessment, has not been investigated in this paper due to space limitations. It is nonetheless an important issue in itself and needs to be thoroughly examined before coming to a final judgment about the superiority of the service quality scales. It is quite possible that the conclusions of the present study might change if the dimensionality angle were incorporated into the analysis. Future studies may delve into this aspect.

One final caveat relates to the limited power of both the unweighted and the weighted versions of the SERVQUAL and SERVPERF scales to explain the variation present in overall service quality scores assessed through the use of a single-item scale. This casts doubt on the applicability of multi-item service quality scales, as propounded and tested in the developed countries, to the service industries of a developing country like India. Though regressing overall service quality scores on service quality dimensions might somewhat improve the explanatory power of these scales, we do not expect any appreciable improvement in the results. The poor explanatory power of the scales in the present study might have arisen due to methodological considerations, such as the use of a smaller sample or of a 5-point rather than the 7-point Likert scale employed by the developers of the service quality scales in their studies, or else, as is more likely the case, due to the inappropriateness of the items contained in the service quality scales under investigation in the context of developing countries.
Both these aspects need to be thoroughly examined in future research so as to arrive at a psychometrically as well as managerially more useful service quality scale for use in the service industries of the developing countries.
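The regression-based derivation of importance weights discussed above (regressing overall quality perceptions on dimension-level SERVQUAL gap scores) can be sketched as follows. The data here are synthetic and the five-dimension structure is assumed purely for illustration:

```python
# Sketch of regression-derived importance weights: regress a single-item
# overall quality rating on dimension-level (P - E) gap scores.
# All data below are synthetic, generated only to illustrate the mechanics.
import numpy as np

rng = np.random.default_rng(0)
n_customers, n_dims = 200, 5

gaps = rng.normal(-0.4, 0.5, size=(n_customers, n_dims))  # per-dimension P - E
true_w = np.array([0.6, 0.9, 0.3, 0.5, 0.2])              # "true" importances
overall = 3.5 + gaps @ true_w + rng.normal(0, 0.2, n_customers)

X = np.column_stack([np.ones(n_customers), gaps])   # intercept + 5 dimensions
coef, *_ = np.linalg.lstsq(X, overall, rcond=None)  # ordinary least squares
weights = coef[1:]   # estimated importance weight of each dimension

print(np.round(weights, 2))
```

Dimensions with larger regression coefficients are the ones whose gap scores move overall quality most, so they receive priority without asking respondents a single extra importance question.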

ENDNOTES
1. Customer satisfaction with services or perception of service quality can be viewed as confirmation or disconfirmation of customer expectations of a service offer. The proponents of the gap model have based their research on the disconfirmation paradigm, which maintains that satisfaction is related to the size and direction of the disconfirmation experience, where disconfirmation is related to the person's initial expectations. For further discussion, see Churchill and Surprenant, 1982 and Parasuraman, Zeithaml and Berry, 1985.
2. A factor analysis of the 22 scale items led Parasuraman, Zeithaml and Berry (1988) to conclude that consumers use five dimensions for evaluating service quality: tangibility, reliability, responsiveness, assurance, and empathy.
3. The scale items used in this connection were: "The probability that I will use their facilities again," "The likelihood that I would recommend the restaurant to a friend," and "If I had to eat in a fast food restaurant again, the chance that I would make the same choice."
4. Even though a high correlation (r = 0.747) existed between the (P-M) and (P-E) gap scores, the former cannot be used as a substitute for the latter as, on a case-by-case basis, it can point to initiating actions even in areas which do not need any managerial intervention on the basis of (P-E) scores.

REFERENCES
Andaleeb, S S and Basu, A K (1994). "Technical Complexity and Consumer Knowledge as Moderators of Service Quality Evaluation in the Automobile Service Industry," Journal of Retailing, 70(4), 367-81.
Anderson, E W, Fornell, C and Lehmann, D R (1994). "Customer Satisfaction, Market Share and Profitability: Findings from Sweden," Journal of Marketing, 58(3), 53-66.
Anderson, C and Zeithaml, C P (1984). "Stage of the Product Life Cycle, Business Strategy, and Business Performance," Academy of Management Journal, 27(March), 5-24.
Babakus, E and Boller, G W (1992). "An Empirical Assessment of the SERVQUAL Scale," Journal of Business Research, 24(3), 253-68.
Babakus, E and Mangold, W G (1989). "Adapting the SERVQUAL Scale to Hospital Services: An Empirical Investigation," Health Services Research, 26(6), 767-80.
Babakus, E and Inhofe, M (1991). "The Role of Expectations and Attribute Importance in the Measurement of Service Quality," in Gilly, M C (ed.), Proceedings of the Summer Educators' Conference, Chicago, IL: American Marketing Association, 142-44.
Bolton, R N and Drew, J H (1991a). "A Multistage Model of Customers' Assessments of Service Quality and Value," Journal of Consumer Research, 17(March), 375-85.
Bolton, R N and Drew, J H (1991b). "A Longitudinal Analysis of the Impact of Service Changes on Customer Attitudes," Journal of Marketing, 55(January), 1-9.
Boulding, W, Kalra, A, Staelin, R and Zeithaml, V A (1993). "A Dynamic Process Model of Service Quality: From Expectations to Behavioral Intentions," Journal of Marketing Research, 30(February), 7-27.
Brady, M K and Robertson, C J (2001). "Searching for a Consensus on the Antecedent Role of Service Quality and Satisfaction: An Exploratory Cross-National Study," Journal of Business Research, 51(1), 53-60.
Brady, M K, Cronin, J and Brand, R R (2002). "Performance-Only Measurement of Service Quality: A Replication and Extension," Journal of Business Research, 55(1), 17-31.
Brown, T J, Churchill, G A and Peter, J P (1993). "Improving the Measurement of Service Quality," Journal of Retailing, 69(1), 127-39.
Brown, S W and Swartz, T A (1989). "A Gap Analysis of Professional Service Quality," Journal of Marketing, 53(April), 92-98.
Buzzell, R D and Gale, B T (1987). The PIMS Principles, New York: The Free Press.
Carman, J M (1990). "Consumer Perceptions of Service Quality: An Assessment of the SERVQUAL Dimensions," Journal of Retailing, 66(1), 33-55.
Churchill, G A (1979). "A Paradigm for Developing Better Measures of Marketing Constructs," Journal of Marketing Research, 16(February), 64-73.
Churchill, G A and Surprenant, C (1982). "An Investigation into the Determinants of Customer Satisfaction," Journal of Marketing Research, 19(November), 491-504.
Cronin, J and Taylor, S A (1992). "Measuring Service Quality: A Reexamination and Extension," Journal of Marketing, 56(July), 55-68.
Cronin, J and Taylor, S A (1994). "SERVPERF versus SERVQUAL: Reconciling Performance-Based and Perceptions-Minus-Expectations Measurement of Service Quality," Journal of Marketing, 58(January), 125-31.
Cronin, J, Brady, M K and Hult, T M (2000). "Assessing the Effects of Quality, Value and Customer Satisfaction on Consumer Behavioral Intentions in Service Environments," Journal of Retailing, 76(2), 193-218.
Crosby, P B (1984). Paper presented to the Bureau de Commerce, Montreal, Canada (unpublished), November.
Dabholkar, P A, Shepherd, D C and Thorpe, D I (2000). "A Comprehensive Framework for Service Quality: An Investigation of Critical, Conceptual and Measurement Issues through a Longitudinal Study," Journal of Retailing, 76(2), 139-73.
Eiglier, P and Langeard, E (1987). Servuction: Le Marketing des Services, Paris: McGraw-Hill.
Finn, D W and Lamb, C W (1991). "An Evaluation of the SERVQUAL Scale in a Retailing Setting," in Holman, R and Solomon, M R (eds.), Advances in Consumer Research, Provo, UT: Association for Consumer Research, 480-93.
Garvin, D A (1983). "Quality on the Line," Harvard Business Review, 61(September-October), 65-73.
Gotlieb, J B, Grewal, D and Brown, S W (1994). "Consumer Satisfaction and Perceived Quality: Complementary or Divergent Constructs?" Journal of Applied Psychology, 79(6), 875-85.
Gronroos, C (1982). Strategic Management and Marketing in the Service Sector, Finland: Swedish School of Economics and Business Administration.
Gronroos, C (1990). Service Management and Marketing: Managing the Moments of Truth in Service Competition, Lexington, MA: Lexington Books.
Hartline, M D and Ferrell, O C (1996). "The Management of Customer Contact Service Employees: An Empirical Investigation," Journal of Marketing, 60(October), 52-70.
Iacobucci, D, Grayson, K A and Ostrom, A L (1994). "The Calculus of Service Quality and Customer Satisfaction: Theoretical and Empirical Differentiation and Integration," in Swartz, T A, Bowen, D H and Brown, S W (eds.), Advances in Services Marketing and Management, Greenwich, CT: JAI Press, 1-67.
Juran, J M (1988). Juran on Planning for Quality, New York: The Free Press.
Kassim, N M and Bojei, J (2002). "Service Quality: Gaps in the Telemarketing Industry," Journal of Business Research, 55(11), 845-52.
Kotler, P (2003). Marketing Management, New Delhi: Prentice Hall of India.
Lewis, R C (1987). "The Measurement of Gaps in the Quality of Hotel Service," International Journal of Hospitality Management, 6(2), 83-88.
Lewis, B (1991). "Service Quality: An International Comparison of Bank Customers' Expectations and Perceptions," Journal of Marketing Management, 7(1), 47-62.
Mazis, M B, Ahtola, O T and Klippel, R E (1975). "A Comparison of Four Multi-Attribute Models in the Prediction of Consumer Attitudes," Journal of Consumer Research, 2(June), 38-52.
Miller, J A (1977). "Exploring Satisfaction, Modifying Modes, Eliciting Expectations, Posing Problems, and Making Meaningful Measurements," in Hunt, K (ed.), Conceptualization and Measurement of Consumer Satisfaction and Dissatisfaction, Cambridge, MA: Marketing Science Institute, 72-91.
Normann, R (1984). Service Management, New York: Wiley.
Parasuraman, A, Berry, L L and Zeithaml, V A (1990). "Guidelines for Conducting Service Quality Research," Marketing Research, 2(4), 34-44.
Parasuraman, A, Berry, L L and Zeithaml, V A (1991). "Refinement and Reassessment of the SERVQUAL Scale," Journal of Retailing, 67(4), 420-50.
Parasuraman, A, Zeithaml, V A and Berry, L L (1985). "A Conceptual Model of Service Quality and Its Implications for Future Research," Journal of Marketing, 49(Fall), 41-50.
Parasuraman, A, Zeithaml, V A and Berry, L L (1988). "SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality," Journal of Retailing, 64(1), 12-40.
Parasuraman, A, Zeithaml, V A and Berry, L L (1994). "Reassessment of Expectations as a Comparison Standard in Measuring Service Quality: Implications for Further Research," Journal of Marketing, 58(January), 111-24.
Peter, J P, Churchill, G A and Brown, T J (1993). "Caution in the Use of Difference Scores in Consumer Research," Journal of Consumer Research, 19(March), 655-62.
Phillips, L W, Chang, D R and Buzzell, R D (1983). "Product Quality, Cost Position and Business Performance: A Test of Some Key Hypotheses," Journal of Marketing, 47(Spring), 26-43.
Pitt, L F, Oosthuizen, P and Morris, M H (1992). "Service Quality in a High Tech Industrial Market: An Application of SERVQUAL," Chicago: American Management Association.
Rust, R T and Oliver, R L (1994). Service Quality: New Directions in Theory and Practice, New York: Sage Publications.
Shaw, J (1978). The Quality-Productivity Connection, New York: Van Nostrand.
Smith, R A and Houston, M J (1982). "Script-Based Evaluations of Satisfaction with Services," in Berry, L, Shostack, G and Upah, G (eds.), Emerging Perspectives on Services Marketing, Chicago: American Marketing Association, 59-62.
Spreng, R A and Singh, A K (1993). "An Empirical Assessment of the SERVQUAL Scale and the Relationship between Service Quality and Satisfaction," in Cravens, D W and Dickson, P R (eds.), Enhancing Knowledge Development in Marketing, Chicago, IL: American Marketing Association, 1-6.
Teas, K R (1993). "Expectations, Performance Evaluation, and Consumers' Perceptions of Quality," Journal of Marketing, 57(October), 18-34.
Teas, K R (1994). "Expectations as a Comparison Standard in Measuring Service Quality: An Assessment of a Reassessment," Journal of Marketing, 58(January), 132-39.
Witkowski, T H and Wolfinbarger, M F (2002). "Comparative Service Quality: German and American Ratings across Service Settings," Journal of Business Research, 55(11), 875-81.
Woodruff, R B, Cadotte, E R and Jenkins, R L (1983). "Modelling Consumer Satisfaction Processes Using Experience-Based Norms," Journal of Marketing Research, 20(August), 296-304.
Young, C, Cunningham, L and Lee, M (1994). "Assessing Service Quality as an Effective Management Tool: The Case of the Airline Industry," Journal of Marketing Theory and Practice, 2(Spring), 76-96.
Zeithaml, V A, Parasuraman, A and Berry, L L (1990). Delivering Quality Service: Balancing Customer Perceptions and Expectations, New York: The Free Press.
Zeithaml, V A and Parasuraman, A (1991). "The Nature and Determinants of Customer Expectations of Service," Marketing Science Institute Working Paper No. 91-113, Cambridge, MA: Marketing Science Institute.
Zeithaml, V A, Berry, L L and Parasuraman, A (1996). "The Behavioral Consequences of Service Quality," Journal of Marketing, 60(April), 31-46.
Zeithaml, V A and Bitner, M J (2001). Services Marketing: Integrating Customer Focus Across the Firm, 2nd Edition, Boston: Tata McGraw-Hill.

Sanjay K Jain is Professor of Marketing and International Business in the Department of Commerce, Delhi School of Economics, University of Delhi, Delhi. His areas of teaching and research include marketing, services marketing, international marketing, and marketing research. He is the author of the book titled Export Marketing Strategies and Performance: A Study of Indian Textiles, published in two volumes. He has published more than 70 research papers in reputed journals including Journal of Global Marketing, Malaysian Journal of Small and Medium Enterprises, Vikalpa, Business Analyst, etc., and has also presented papers at various national and international conferences.
e-mail: skjaindse@vsnl.net

Garima Gupta is a Lecturer in Commerce at Kamla Nehru College, University of Delhi, Delhi. She is currently pursuing her doctoral study in the Department of Commerce, Delhi School of Economics, University of Delhi, Delhi.
e-mail: garimagupta77@yahoo.co.in

