JOURNAL OF BUSINESS LOGISTICS, Vol. 25, No. 2, 2004

PERFORMANCE MEASUREMENT: MEASURE SELECTION BASED UPON
FIRM GOALS AND INFORMATION REPORTING NEEDS
by
Stanley E. Griffis
Air Force Institute of Technology
Martha Cooper
Thomas J. Goldsby
The Ohio State University
and
David J. Closs
Michigan State University
Performance measurement is an issue of continued interest in logistics and supply chain management. While research into methods to improve logistics performance has resulted in benefits to
the firm, similar improvements in performance measurement have not necessarily followed. One
possible improvement is better alignment of firm measurement needs with measure choice. Much
of the logistics performance measurement research to date has focused upon: 1) introducing characteristics that measures should possess; 2) perspectives that measures should assume; or 3) specific
measures that firms should choose. Little consideration has been given to information reporting needs
based upon firms' unique environments or strategies. The premise of this paper is that performance
measures should be chosen for their ability to detect performance consistent with the logistics organization's specific mission, goals, and environment.
One way firms differentiate themselves in the marketplace is through their goals, and those
goals determine the nature of firm operations. In turn, these goals affect the type of measures the firm
should select for its logistics system. The potential exists for a disconnect between the logistics organization's measurement needs and the information reporting capabilities of the performance
measures chosen by the firm. Just as referring to a thermometer for a humidity reading is misguided, consulting a logistics performance measure for information it is incapable of reporting is
similarly unproductive.
Note: The views expressed in this article are those of the authors and do not necessarily reflect
the official policy or position of the Air Force, the Department of Defense, or the U.S. Government.

Consider the logistics organization that has long used logistics cost per unit as a key performance
indicator. Perhaps this measure has served the organization well over the years given that cost minimization has been the driving force for action. However, if that company begins to differentiate itself
in the marketplace by offering unique services to different customer segments, changes may be
needed. While logistics cost per unit might still serve as an important measure for the firm, it probably should not be the key performance indicator. Rather, measures that address the organization's
new focus upon the ability to accommodate customer-specific needs in a timely manner should
take precedence. In fact, logistics cost per unit might prove to be a poor measure if costs begin to
stray dramatically from the average as customers demand varying types and levels of service.
Though a hypothetical scenario, this example speaks to the importance of having a measurement system in place that provides the necessary information to guide current and future action. There is no
one measurement system that is right for every company. Rather, logistics measurement systems should
be as unique as the organizations that employ them.
Disconnects between measurement needs and measure choice can take at least three forms. In
the first situation, no measures are chosen to evaluate performance, which can result in a lack of
guidance about system performance. This situation is clearly problematic and, in most cases,
unlikely. The second situation occurs when bad information is consulted. This can occur where a company consults measures that are unreliable, inconsistent, or possibly invalid. Because information
is being collected, the company may be misled into decisions which are not appropriate given the
true state of performance. The third situation is the result of consulting the wrong measures. This
occurs when measures are chosen without regard for the true needs of the company. Though the measures might be reliable, consistent, and valid, they may reveal nothing about the state of company
operations most important to the overall success of the organization. Even if this deficiency is recognized, the ability to guide the firm is hampered. Of more concern is when the problem goes unrecognized. In cases where the wrong measures are employed, and the situation is not recognized, the firm
runs the risk of misdirecting actions out of ignorance of the true state of organizational performance. Each of these cases presents an unfortunate situation for the firm; however, this research
focuses on the third case where the wrong information types are collected and acted upon.
It is imperative that measures be selected for situations where they are most appropriate. This
is particularly true in changing times as boundaries among the firm's various functional areas dissolve, and effective supply chain management requires evolving responsibilities and accountabilities. Choosing measures that meet the general call for characteristics such as reliability, observability,
and ease of capture is not sufficient to guide measure selection. Measures must be consistent with
the specific needs of the firm and be capable of communicating to those within the organization what
type of performance is desired.

In an effort to address the need for better measure alignment, this research developed a performance measurement framework and a methodology to address the question:
Research Question: How can logistics professionals select performance measures
that align measures' reporting capabilities with the firm's information needs?
The framework developed herein identifies four possible performance measurement dimensions that the logistics organization might reference. Each of the dimensions is anchored by two perspectives illustrating the different orientations measures might assume. By assessing the relative
location of each logistics performance dimension, firms can evaluate whether their current measures
actually match their measurement needs or evaluate which performance measures are necessary
to achieve the firm's objectives.
This paper is organized as follows. First, performance measurement literature is reviewed and
a measurement framework is presented. Second, a framework incorporating eight measurement
perspectives that make up four measurement dimensions is developed and subsequently tested by
exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Third, a methodology
describing the use of the framework to place common logistics measures is explained. Finally,
managerial implications, research limitations, and possible topics for future research are discussed.
LITERATURE REVIEW
Logistics performance measurement literature can be characterized into two categories.
Research either suggests specific performance measures for consideration or recommends qualities
that the measures should possess. This section begins with an overview of some of the measures
suggested followed by a review of recommended measure qualities.
Measures
A sampling of measures found in the literature reveals that measures exist for virtually any purpose. There appears to be a sufficient depth of possible measures, and new measures continue to appear
in the literature, each with the promise of being able to answer all of a firm's information needs. In
fact, the richness of available measures presents a challenge. With so many measures from which
to choose, the potential for selecting an inappropriate measure for a particular need increases.
Logistics performance measurement researchers have worked to clarify the issue, some suggesting
specific measures for adoption while others call for the development or adoption of new measures.
In general, the strengths of specific measures are described and adoption in logistics is recommended. Practitioners responsible for developing measurement systems must then sort through
numerous opinions, searching for the right measures to use. Sometimes they succeed in choosing
measures that benefit their systems; though given the continued interest in performance measurement in trade journals and conferences, their overall success might be questioned.

Strong consensus exists on some of the most commonly recommended logistics performance
measures listed below:
Average line item fill rate (Harrington, Lambert, and Christopher 1991; Johnson 1998; Johnson and Davis 1998; Lee and Billington 1992)
Average backorder fill time (Johnson and Davis 1998)
Complete order fill rate (Boyd and Cox 1997; Brewer and Speh 2000; Ellram, La Londe, and Weber 1989; Harding 1998; Johnson 1998; Johnson and Davis 1998; Keebler et al. 1999; Lee and Billington 1992)
Days order late (Davis 1993; Johnson and Davis 1998)
Inventory turnover ratio (Ellram, La Londe, and Weber 1989; Fisher 1997; Johnson 1998; Johnson and Davis 1998; Keebler et al. 1999; Krupp 1994; Wisner and Fawcett 1991)
Logistics cost per unit (Brewer and Speh 2000)
Missed sales due to stockouts (Emmelhainz, Stock, and Emmelhainz 1991; Fisher 1997)
On-time delivery percentage (Boyd and Cox 1997; Davis 1993; Harding 1998; Johnson 1998; Kaplan 1991; Kleinsorge, Schary, and Tanner 1991; Wisner and Fawcett 1991)
Order cycle time variability (Ellram, La Londe, and Weber 1989)
Percent error pick rate (Brewer and Speh 2000)
Weeks of supply (Johnson and Davis 1998; Krupp 1994)
This list represents a sample of measures commonly employed to guide logistics decision-making. As already observed, the wealth of measures available presents the challenge of too many
options. One way to reduce the confusion is to evaluate measures based upon the characteristics
or qualities they possess.
Measure Qualities
Researchers focusing on measure qualities encourage practitioners to evaluate measures for a
set of properties before adoption. Firms are advised to consider measure qualities such as: validity,
robustness, usefulness, integration, economy, compatibility, level of detail, and behavioral soundness before choosing (Caplice and Sheffi 1994). Mentzer and Firman (1994) suggest that measures
should be realistic, representative, consistent, cost effective, understandable, and not underdetermined. Measures that fail to embody these characteristics may lack a sufficient volume of accurate, credible information and could lead to the situation described in the second scenario (bad
information consulted) mentioned earlier. These measure qualities are desirable, but are general enough
to apply to all firms' logistics performance measurement systems. As an example, it is doubtful that
any firm would knowingly choose a measure thought to be difficult to understand or incompatible
with its needs.

Frameworks have been developed to address broad performance measurement needs. Kaplan
and Norton (1992), for example, suggest the need for measures to yield better insight into business
operations as a result of the increased interest in continuous improvement and innovation present
in many companies. They propose a balanced view of the firm involving numerous perspectives at
once. The perspectives found in their balanced scorecard approach include outcomes associated
with finances, customers, internal business, and innovation and learning, all of which should be considered
when selecting measures for a company. Brewer and Speh (2000) adapt the same four perspectives of the balanced scorecard to a supply chain context.
These qualities and frameworks are generally so broad that they do not, nor do they appear to have been
intended to, guide firms in measurement choice with regard to individual firms' specific goals
and measurement situations. Types of measures are recommended, but firm-specific measure selection does not result. Direction regarding which measures are best suited to specific measurement
situations is still needed.
Proposed Framework
The proposed framework seeks to align the logistics organization's specific measurement
needs with those measures most suited to reporting the desired information. The framework is
based upon the measurement perspectives required when choosing measures. The perspectives a measure could take were drawn from the literature and compared to what logistics organizations report
as occurring within their companies.
In discussing supply chains and the products manufactured within them, Fisher (1997) identified the need to align production system capabilities with respective product characteristics.
Fisher's example uses innovative and functional products to show that systems should be tailored
to their respective products. Fisher presents the example of Campbell's tomato soup, and suggests
that soup is a functional product given its stable and predictable demand. Fisher observes that a
functional product is best served by a production system focused upon production efficiency. In comparison, an innovative product such as fashion clothing requires a production environment that
emphasizes responsiveness rather than efficiency due to the perishable nature of the demand for fashion products. Fisher's thinking suggests that tradeoffs exist between products themselves and the
systems in place to support them. Though his central focus is production systems, the performance
measures of a system need to be sensitive to these tradeoffs as well. Fisher's tradeoff provides a starting point for a framework to identify tradeoffs between desirable performance measure characteristics and measurement needs. Numerous possible tradeoffs are suggested within the literature.
The following section identifies a number of perspectives that authors have suggested as relevant
to either system design or measure selection.
Richardson and Gordon (1980) argue that firm capabilities need to be responsive and adjust over
time to the changing needs of aging products. This argument parallels Fisher's discussion of synchronizing systems with the nature of the products produced as seen in his examples of efficient and
responsive systems. Similarly, Mentzer and Konrad (1991) stress the importance of efficiency of
operations and also stress the importance of effectiveness in operations in light of the efficiency
obtained. The pursuit of these two system characteristics is generally considered to be mutually
exclusive; that is, pursuing either one to its extreme appears to preclude pursuit of the other.
Hence the pursuit of responsiveness or efficiency as a basis of competition represents the first
dimension of the framework.
A disconnect is commonly recognized between the top levels of the firm, where strategy is established, and the operational
decisions made as part of day-to-day operations at the lower levels of the firm (Wisner and Fawcett 1991). Bowersox and Daugherty (1995) identify the relevance of sensitivity to strategic orientation, noting that the extent and type of performance
measurement is likely to vary according to strategic orientation. It is imperative to have measures
capable of responding to measurement needs at both the strategic and operational levels of the firm
so that they can be applied to improve overall performance. While Wisner and Fawcett developed a
process to link operational criteria to strategic objectives, the actual selection of the measures for
the different levels within the logistics organization needs to be addressed. The second dimension
of the framework is intended to facilitate the selection of measures that meet the needs of strategic
and operational assessment.
Scott and Westbrook (1991) discuss functionally oriented firms, and their need for measures
focused on individual products or ranges of similar products, while Cooper, Lambert, and Pagh (1997),
Lambert, Cooper, and Pagh (1998), and Stank, Keller, and Daugherty (2001) suggest viewing logistics from a process perspective. The performance measurement system for a functional orientation is likely to differ from that of a process orientation, necessitating tradeoffs in measurement
capabilities. For example, an organization with a functional orientation might view the transportation budget in isolation and therefore attempt to minimize it by avoiding premium transportation
modes. A process oriented organization on the other hand would be less concerned with the individual
transportation costs, instead focusing upon overall costs (transportation costs, inventory costs, costs
of reduced customer service in the event of a late shipment) associated with serving a customer.
Whether the organization's logistics activities are aligned by function or process serves as the third
relevant measurement dimension.
The frequency of measurement and performance analysis is another concern. Taking time to
periodically diagnose system anomalies outside of the normal operational cycle helps firms to correct problems before they get out of control (Lancioni and Gattorna 1992). Kleinsorge, Schary,
and Tanner (1991) share this concern about diagnosing problems and developed a data envelopment analysis tool for that purpose. Meanwhile, system performance monitoring is stressed by others (Bowersox and Daugherty 1992), and evidence is presented that leaders in the field extensively monitor
internal and external operations (Bowersox and Dröge 1989). Routine monitoring is considered essential to drive overall performance (Marien 1995). Therefore, measurement frequency is the fourth and
final measurement dimension in the proposed framework.
To date, literature on specific measures, frameworks, and models does not appear to
adequately guide individual firms in logistics measure selection. Sound direction is provided on
the general criteria, and researchers' models identify issues to consider, but guidance on how to
select measures sensitive to a firm's unique measurement needs appears lacking. The proposed
measurement framework incorporates tradeoffs that can be used to align the measurement needs of
the firm with the measures for a given system. To summarize, the dimensions represent important
aspects of evaluation and guidance within the logistics management domain. Each dimension is summarized below:
Competitive Basis Dimension: The need for measures for systems competing based upon efficiency versus measures for systems competing based upon responsiveness;
Measurement Focus Dimension: The need for measures to support operational decisions versus measures to support strategic-level decisions;
Organizational Type Dimension: The need for measures suited to process orientations versus measures suited to functional orientations;
Measurement Frequency Dimension: The need for measures that routinely monitor performance versus measures that intermittently diagnose performance.
Individual performance measures are hypothesized to identify with each of these dimensions,
reporting one type of information better than the other. Visually representing this
four-dimensional measurement space in a two-dimensional format is impossible. However, a pair
of three-dimensional representations allows for all four dimensions to be visualized simultaneously (Figure 1). Figure 1a represents the three dimensions of measurement at the monitoring end
of the measurement frequency dimension, while Figure 1b represents the diagnostic end of this
fourth measurement dimension.
FIGURE 1A
MONITORING MEASUREMENT SPACE
[Three-dimensional measurement space at the monitoring end of the measurement frequency dimension; axes: Efficiency versus Responsiveness, Process versus Functional, Operational versus Strategic]

FIGURE 1B
DIAGNOSTIC MEASUREMENT SPACE
[Three-dimensional measurement space at the diagnostic end of the measurement frequency dimension; axes: Efficiency versus Responsiveness, Process versus Functional, Operational versus Strategic]

Within this framework, measures can be evaluated for their ability to report different information
types (e.g., efficiency versus responsiveness) and then be placed along each of the four dimensions in this manner, resulting in a location within four-dimensional space.
Once measures are placed in this manner, a firm can evaluate its measurement needs, as guided
by its overall goals and objectives. Upon placing its needs within the measurement space, the
firm can select those measures with characteristics most closely aligned with its measurement needs.
This procedure allows the firm to select measures with capabilities consistent with its needs.
Using the single dimension example of efficiency versus responsiveness, if the goal of a company
is to compete in the marketplace by being the most responsive to the customer, then the firm should
emphasize measures closer to the responsiveness rather than the efficiency end of the scale. It
should be noted that not all measures will be sensitive to all measurement dimensions. While some
measures will naturally favor one end of some measurement dimensions, they may be insensitive
to other measurement dimensions within the framework.
METHODOLOGY
Survey data were used to test the developed framework. The survey collected information on
management characteristics present in logistics firms across multiple industries. In the survey,
senior logistics and supply chain executives from North American manufacturers, wholesalers/
distributors, and retailers were asked to agree or disagree with statements regarding their firms
strategies, structures, and operations. The mail survey sampled 2,680 Council of Logistics Management (CLM) members and resulted
in 306 usable responses for a response rate of 11.5%. Though less than desired, the response rate is
comparable with other similar research concerning industry best practices (Boyson et al. 1999;
Narasimhan and Carter 1998). Additionally, the length and comprehensive nature of the survey
may have played a role in limiting response. Despite these factors, the 306 responses provided a solid
sample comparable with similar studies. Non-response bias was tested for according to procedures
recommended by Armstrong and Overton (1977), where the last quartile of respondents is assumed
to be most like non-respondents given that their responses are the most difficult to obtain and take the
longest. This last quartile was compared with the first three quartiles, and the means for the two groups
showed no significant difference (at the 0.05 level of significance). As such, non-response bias
was not believed to be a concern.
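As a rough illustration of the Armstrong and Overton (1977) extrapolation check described above, late respondents (the last quartile) can stand in for non-respondents and their item means can be compared with those of earlier respondents. The sketch below is an assumption about how such a check might be coded; the file name and the "return_date" column are hypothetical, and the original study simply reports no significant mean differences at the 0.05 level.

```python
# Minimal sketch of a late-respondent (non-response bias) check, assuming a
# hypothetical file of survey responses with a response-timing column.
import pandas as pd
from scipy import stats

responses = pd.read_csv("survey_responses.csv")      # hypothetical file
responses = responses.sort_values("return_date")     # order by response timing

cutoff = int(len(responses) * 0.75)                  # first three quartiles vs. last quartile
early, late = responses.iloc[:cutoff], responses.iloc[cutoff:]

item_columns = [c for c in responses.columns if c != "return_date"]
for item in item_columns:
    t_stat, p_value = stats.ttest_ind(early[item], late[item], equal_var=False)
    flag = "difference" if p_value < 0.05 else "no difference"
    print(f"{item}: t={t_stat:.2f}, p={p_value:.3f} -> {flag}")
```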
A subset of data from the survey's database that had not previously been factor analyzed was
used for model analysis. Model analysis was performed using a split-sample, or cross-validation,
approach (Hair et al. 1995, p. 195). This approach involves randomly dividing the sample into two
groups. The first group, the analysis sample, is used to develop the discriminant function, while the
second group, the holdout sample, is used for testing, rather than developing, discriminant functions.
A two-stage approach was used to first identify the number of factors that the data support and
then to assess the theory-specified measurement model (i.e., hypothesized relationships between measurement items and latent factors). Exploratory factor analysis (EFA) was used to identify the number of latent factors extracted from the data using the analysis sample (Hoyle 1995). EFA is, as its
name implies, exploratory and allows the researcher to determine how the observed variables are
linked to their underlying factors (Byrne 1994). EFA is appropriate for the initial analysis given that
the research team had no prior knowledge that the measurement items for each factor did indeed measure the intended factors (Byrne 1994). Bollen (1989) suggests the use of EFA in substantive areas
where little is known regarding measurement model structures, as is the case with logistics performance measurement.
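The split-sample partition and an exploratory six-factor solution along these lines could be reproduced with general-purpose tools. The sketch below is illustrative only: the original analysis used CEFA (Browne et al. 2000), so the factor_analyzer package here is a stand-in, the data file name is hypothetical, and the item columns are assumed to follow the Appendix labels.

```python
# Sketch of the analysis/holdout split (n=110 / n=196) and a six-factor EFA,
# assuming cleaned item responses in columns E1-E5, R1-R6, O1-O4, S1-S7, M1-M3, D1-D3.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("survey_items.csv")              # hypothetical file of item scores

analysis = items.sample(n=110, random_state=42)      # analysis sample used for EFA
holdout = items.drop(analysis.index)                 # holdout sample reserved for CFA

efa = FactorAnalyzer(n_factors=6, rotation="oblimin", method="ml")
efa.fit(analysis)

loadings = pd.DataFrame(efa.loadings_, index=items.columns,
                        columns=[f"Factor {i + 1}" for i in range(6)])
print(loadings.round(3))                             # compare with Table 2
print(loadings.abs().idxmax(axis=1))                 # strongest factor per item
```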
Upon concluding the initial analysis, the research moves from exploratory to more confirmatory. The theory-specified measurement model was tested by way of confirmatory factor analysis
(CFA) on the holdout sample per Hair et al. (1995). CFA tests whether the intended linkages between
measurement items and their underlying factors do exist (Byrne 1994). CFA relies upon the
researcher's a priori expectations that specific, observed measurement items will correlate significantly with latent factors identified in the theoretical model. Stated another way, CFA evaluates how
well a hypothesized measurement model corresponds to data (Bollen 1989).
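For illustration, the theory-specified measurement model can be written in lavaan-style syntax and fit on the holdout sample. The original study used EQS (Bentler 1998); the semopy package below is only an assumed stand-in, and the holdout data file is hypothetical.

```python
# Sketch of the six-factor CFA on the holdout sample, expressed in lavaan-style
# syntax and fit with semopy purely for illustration (the study itself used EQS).
import pandas as pd
from semopy import Model, calc_stats

holdout = pd.read_csv("holdout_items.csv")   # hypothetical file; the 196-response holdout sample

model_desc = """
Efficiency     =~ E1 + E2 + E3 + E4 + E5
Responsiveness =~ R1 + R2 + R3 + R4 + R5 + R6
Operational    =~ O1 + O2 + O3 + O4
Strategic      =~ S1 + S2 + S3 + S4 + S5 + S6 + S7
Monitoring     =~ M1 + M2 + M3
Diagnostic     =~ D1 + D2 + D3
"""

cfa = Model(model_desc)
cfa.fit(holdout)                             # estimate item-factor loadings

print(cfa.inspect())                         # parameter estimates (cf. Table 3)
print(calc_stats(cfa).T)                     # chi-square, CFI, and other fit indices (cf. Table 4)
```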
To increase reliability, two researchers independently reviewed the survey instrument and
classified survey questions as to how representative they were of one of the eight anchors (factors)
on the four measurement dimensions presented in Table 1.

TABLE 1
MEASUREMENT FACTORS AND DIMENSIONS
Dimension                  Anchors
Competitive basis          Efficiency based vs. Responsiveness based
Measurement focus          Operational level vs. Strategic level
Organizational type        Process orientation vs. Functional orientation
Measurement frequency      Monitoring vs. Diagnostic

High levels of agreement were obtained by the separate rater evaluations of the measurement
items. In the limited cases where contrary evaluations were obtained, the items in question were
dropped. Following this initial review, two additional researchers independently reviewed the items
for inclusion in the analysis. The measurement items appear in the Appendix.
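The paper reports only "high levels of agreement" between the two raters. One common way to quantify such agreement and to drop contested items is Cohen's kappa; the sketch below is an assumption about how that could be done, not the authors' stated procedure, and the item labels shown are hypothetical.

```python
# Hedged illustration: quantify inter-rater agreement on item classifications with
# Cohen's kappa and drop items where the two raters disagree. Data are hypothetical.
from sklearn.metrics import cohen_kappa_score

rater_a = {"E1": "efficiency", "E2": "efficiency", "R1": "responsiveness", "M1": "monitoring"}
rater_b = {"E1": "efficiency", "E2": "efficiency", "R1": "responsiveness", "M1": "diagnostic"}

items = sorted(rater_a)
labels_a = [rater_a[i] for i in items]
labels_b = [rater_b[i] for i in items]

kappa = cohen_kappa_score(labels_a, labels_b)
retained = [i for i in items if rater_a[i] == rater_b[i]]   # drop contested items

print(f"Cohen's kappa: {kappa:.2f}")
print("Items retained:", retained)
```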
RESULTS
Exploratory Factor Analysis
EFA was performed upon the random analysis sample (n=110) of the original database (n=306)
with Comprehensive Exploratory Factor Analysis (CEFA) software (Browne et al. 2000). The largest loading in each row of Table 2 represents the strongest factor loading for that survey measurement item
and shows with which factor each item loaded most strongly.
TABLE 2
EFA ITEM-FACTOR WEIGHTS

Item*    Factor 1   Factor 2   Factor 3   Factor 4   Factor 5   Factor 6
E1        0.120      0.009      0.540      0.095     -0.121      0.068
E2        0.009     -0.003      0.719      0.147      0.061      0.068
E3        0.035      0.218      0.414      0.146      0.420      0.048
E4        0.180      0.593      0.028      0.058     -0.092     -0.057
E5        0.040      0.623     -0.128      0.025      0.040      0.096
R1       -0.038      0.252      0.091      0.091      0.129      0.447
R2       -0.128      0.076     -0.078      0.341      0.115      0.341
R3       -0.212      0.347      0.016      0.049      0.048      0.607
R4        0.080     -0.172      0.082     -0.093     -0.039      0.850
R5        0.160      0.182     -0.161      0.361      0.133      0.304
R6        0.158      0.060     -0.184      0.257     -0.065      0.441
O1        0.219      0.216      0.113      0.282      0.133      0.092
O2        0.023      0.553      0.303     -0.055      0.046      0.041
O3        0.035      0.342      0.231      0.005      0.197      0.161
O4        0.015      0.572      0.116      0.076     -0.016     -0.136
S1        0.330      0.250      0.123      0.272      0.131      0.066
S2        0.419      0.082     -0.184      0.225      0.207      0.126
S3        0.595      0.217     -0.163     -0.041      0.132      0.062
S4        0.546      0.140      0.131     -0.012      0.152     -0.060
S5        0.617     -0.110     -0.155      0.130      0.188      0.022
S6        0.806      0.001      0.098     -0.033     -0.018      0.012
S7        0.509      0.126      0.253      0.008      0.002      0.007
M1       -0.006      0.061      0.014      0.785      0.000     -0.052
M2        0.059     -0.126      0.170      0.722     -0.078     -0.013
M3       -0.152      0.066      0.177      0.599      0.124      0.064
D1        0.169      0.107     -0.042      0.238      0.478     -0.027
D2        0.088      0.031     -0.063      0.011      0.845     -0.088
D3       -0.020     -0.071      0.068     -0.043      0.933      0.068

*Measurement item descriptions appear in the Appendix.

The EFA results indicated that six of the original eight factors (efficiency, responsiveness, operational, strategic, monitoring, and diagnostic) obtained support. The six factors gaining the strongest
support are associated with three measurement framework dimensions (competitive basis, measurement focus, and measurement frequency). The factors associated with organizational type
(functional and process) failed to load in a coherent manner when an eight-factor model was tested.
As a result, those items selected from the survey instrument to represent the functional and process
factors were removed and the resulting six-factor model was obtained. With support for a six-factor model suggested by EFA and the generally low degree of item cross-loadings among
factor pairs, the framework appeared to have support beyond the theoretical underpinnings of the
literature.
Confirmatory Factor Analysis
Confirmatory factor analysis was performed upon the holdout sample (n=196) of the original
data. EQS software was used for confirmatory factor analysis runs (Bentler 1998). The standardized
parameter estimates for the CFA are shown in Table 3.

TABLE 3
CFA ITEM-FACTOR CORRELATIONS

Factor            Item      Standardized          Standard     t-Score
                  Number    Parameter Estimate    Error
Efficiency        E1            0.558                            *
                  E2            0.922               0.192      8.321
                  E3            0.868               0.181      8.253
                  E4            0.286               0.130      3.636
                  E5            0.355               0.139      4.399
Responsiveness    R1            0.755                            *
                  R2            0.659               0.106      8.889
                  R3            0.837               0.100     11.288
                  R4            0.685               0.094      9.262
                  R5            0.618               0.088      8.313
                  R6            0.543               0.099      7.266
Operational       O1            0.632                            *
                  O2            0.682               0.198      6.783
                  O3            0.432               0.166      4.866
                  O4            0.657               0.153      6.653
Strategic         S1            0.579                            *
                  S2            0.558               0.153      5.985
                  S3            0.544               0.153      5.875
                  S4            0.660               0.176      6.693
                  S5            0.586               0.151      6.197
                  S6            0.638               0.159      6.553
                  S7            0.485               0.137      5.390
Monitoring        M1            0.783                            *
                  M2            0.725               0.122      7.852
                  M3            0.586               0.128      6.915
Diagnostic        D1            0.757                            *
                  D2            0.814               0.102     11.154
                  D3            0.883               0.103     11.595

*t-scores for these items were not available because they were fixed for scaling purposes.

All correlation t-test values exceeded a critical value of 2.326, further indicating support for
the three dimensions identified during EFA. These results indicate that the items chosen did correlate with the factors thought to represent the model. In addition, overall model fit indicates support
as well; the fit statistics from the CFA appear in Table 4.

TABLE 4
CFA MODEL FIT
Elliptical Reweighted Least Squares Chi Squared     909.43 (p < 0.001)
Bentler-Bonnett Normed Fit Index                    0.877
Bentler-Bonnett Nonnormed Fit Index                 0.912
Comparative Fit Index                               0.920

The χ² value of 909.43 for the model is based upon 347 degrees of freedom. The p-value of
< 0.001 typically might suggest rejection of the model based upon poor fit, but χ² values are only
one indication of model fit. The other fit statistics reported provide additional insight regarding the
fit assessment. The Bentler-Bonnett Normed Fit Index, Bentler-Bonnett Nonnormed Fit Index, and
Comparative Fit Index are all close to or exceed 0.90 in this model. These indices, at these levels,
are generally considered to indicate acceptable fit in lieu of the p-value (Bagozzi and Yi 1988;
Bentler 1990; Byrne 1994; Fornell and Larcker 1981; Hu and Bentler 1995). Bentler (1990) and Byrne
(1994) additionally hold that the CFI (Comparative Fit Index) is the best of all these indices as it
accounts for sample size, a common bias related to index calculations. The ratio of χ² to degrees of
freedom is 2.61:1 and, although no consensus exists regarding an absolute maximum to this ratio,
values between 2:1 and 5:1 are generally considered acceptable (Arbuckle 1997). Given the strong
values for this collection of indices, and the individual strength of the CFI alone, model fit is thought
to be acceptable, particularly given that the data used in model fitting were not collected specifically for
the testing of this framework.
Given model fit and factor correlation, there is sufficient support to justify the framework
consisting of six factors, comprising three measurement dimensions, within which logistics measures could be placed.
With the framework reduced to three dimensions by the EFA and further supported by CFA,
framework visualization becomes easier. With the organizational type dimension eliminated, the measurement needs of the firm can be illustrated and represented graphically
in a single diagram (Figure 2).

FIGURE 2
MEASUREMENT SPACE
[Three-dimensional measurement space; axes: Efficiency versus Responsiveness, Monitoring versus Diagnostic, Operational versus Strategic]

Placing measures within the framework requires evaluating performance measures for the
information reporting properties they possess. As an example, a measure like logistics cost per unit
would be expected to score closer to the efficiency end of the competitive basis dimension, and would
therefore be on the efficiency side of the cube represented in Figure 2. The measure's position
along the other two dimensions would also be determined, and a specific location within the cube
would result. In this manner, commonly used logistics measures like those identified in the literature review could be evaluated and a consensus built upon which types of information each best
reported. Once these tendencies were characterized, the measures could be placed within the framework in their relative positions for managers of logistics organizations to consult.
As an example of how the measurement framework can be used, Figure 3 positions two different hypothetical measures (circles) within the measurement space. In addition, a star representing a measurement need of the firm has also been placed within the model.

FIGURE 3
MEASURE/NEED PLACEMENT
[Measurement space showing two possible measures (circles) and a measurement need of the firm (star) positioned along the Efficiency versus Responsiveness, Monitoring versus Diagnostic, and Operational versus Strategic axes]

The star represents that the measurement need of the firm in question is for a measure that reports
information about the firm's responsiveness and needs to be employed operationally. Additionally,
the firm needs a measure that is diagnostic in nature rather than a day-to-day monitoring measure.
The circles placed within the measurement space represent two (of many) possible hypothetical measures from which the firm could choose. In this case, measure #1 possesses many of the same
characteristics as the firm's measurement need; it is diagnostic, suited to the operational level of the
firm, and assesses responsiveness. Measure #2 is also depicted but shares fewer properties with the
measurement needs of the firm. Though measure #2 possesses an operational orientation, it represents a poor fit with respect to the firm's measurement needs along the other measurement dimensions. Measure #2 picks up more on efficiency than responsiveness and is more of a routine
monitoring measure than a diagnostic one.
By placing potential measures as well as the measurement needs of the firm within this framework, good alignment, as represented by the first measure, and poor alignment, as represented by
the second measure, can be readily observed. In this manner, the framework provides logic for first
determining the measurement needs throughout the logistics organization in a variety of situations,
and then facilitates the identification of measures that meet these varied measurement needs.
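One way to operationalize the visual alignment shown in Figure 3 is to code each candidate measure's position and the firm's need as coordinates on the three retained dimensions and rank candidates by distance. The sketch below is an assumption about how this could be done; the coordinates and measure names are illustrative and are not taken from the study.

```python
# Minimal sketch of the alignment logic in Figure 3: the firm's measurement need and
# each candidate measure are coded on the three retained dimensions
# (efficiency-responsiveness, monitoring-diagnostic, operational-strategic), each
# scaled here from 0 (left anchor) to 1 (right anchor), and candidates are ranked by
# Euclidean distance to the need. All coordinates are illustrative assumptions.
import numpy as np

need = np.array([0.9, 0.8, 0.2])                # responsive, diagnostic, operational need

candidates = {
    "measure_1": np.array([0.85, 0.75, 0.25]),  # responsive, diagnostic, operational
    "measure_2": np.array([0.20, 0.20, 0.30]),  # efficiency-leaning, monitoring-leaning
}

ranked = sorted(candidates.items(), key=lambda kv: np.linalg.norm(kv[1] - need))
for name, position in ranked:
    print(f"{name}: distance to need = {np.linalg.norm(position - need):.2f}")
# measure_1 ranks first, matching the "good alignment" case described above.
```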

IMPLICATIONS
Given the investment in time and money required to establish a measurement program, firms
should be interested in doing it right the first time. Measurement efforts that yield the wrong
information waste time and energy and are potentially damaging to the performance
of the firm, both today and into the future. This is particularly true given that rewards are often based
upon measured performance (Kerr 1975; Wiersma 1992). This research provides a measurement
framework that appears to have promise given its fit with existing literature and statistical testing.
In addition, it is flexible, dynamic, and robust. It is flexible because it can be applied to virtually any
logistics environment. Regardless of the level of information needed, the frequency of measurement,
or the firms competitive basis, the framework aids in performance measure selection.
The framework is dynamic because as change occurs within the firm, the framework can be
re-visited and new measurement needs considered. Because the framework allows the firm to place
its measurement needs within a measurement space, as internal and external changes affect measurement needs, the firm's position within the measurement space changes as well. Once this positional change is recognized and evaluated, the firm can determine which measures are closest to its
new position and, therefore, most appropriate for its measurement systems. Likewise, different
levels of the organization have different measurement needs that can also be accommodated within
the framework.
The framework is robust in that it allows for the tradeoffs regardless of the origin of the measures themselves. Measures from logistics activities such as transportation, warehousing,
and customer service can all be considered.
In sum, performance measurement is not a one-size-fits-all proposition. This framework
does not try to replace existing measurement models and frameworks in the literature by suggesting that the tradeoffs present in the model are the only ones of concern. Balanced measure selection
as well as measures exhibiting positive characteristics such as robustness and validity will always
be necessary. The framework does, however, provide a new capability. Not all firms employ the
same strategies, and this framework accounts for and supports that reality. The framework helps determine and recognize individual logistics organizations' measurement needs. With this framework,
firm-specific measurement needs can be taken into account in ways not possible within the general
models presently found in the logistics literature.
As with most things, more testing would serve to benefit the framework. In the interim, application of the framework by logistics practitioners in the field is the next step. By evaluating current
firm goals and objectives, and the measurement needs that follow from them, firms would have a better understanding of the types of measures they should be employing. Reviewing performance measures for
their ability to report necessary information would identify which measures are best suited to meet
those needs, thereby helping to align the firm's measurement needs with the capabilities of the
measures available to them.

LIMITATIONS AND FUTURE RESEARCH


A limitation of this research was that the dataset used in framework validation was not collected
with this specific framework in mind. The survey instrument did not address measurement needs
specifically, but rather logistics and supply chain activities found within firms engaged in logistics
operations. Model fit could be expected to improve if data were collected specifically to test this framework. Data collected specifically for testing this model might also lend greater support to the functional and process dimension of the framework, which did not achieve support in this research.
The observed lack of support could have occurred for one of at least three reasons. The survey
instrument used may not have accurately detected this type of performance; industry progress
toward adopting a process approach may not be far enough along to lend support for this type of testing; or it could be that the process approach is present in the field, the survey instrument can detect
this, but that support is still lacking because the measurement dimension simply does not belong.
No matter which is the case, the most conservative approach is to remove the dimension from the
framework, rather than retain it in hopes of later determining more specifically why it failed to
gain support here.
Future research could tailor surveys to provide further validation of these dimensions and
should focus upon placing logistics measures within the framework. A set of hypotheses could be
tested on the tendency of the measures to favor one end of each measurement dimension or another.
Measures showing clear differences between the dimension anchors would provide further support for
the concept that some measures are better suited to specific measurement needs. In addition, research
could investigate new, different measurement dimensions.
CONCLUSION
Performance measures are like the instrument readings that guide a ship traversing the high seas
to port. Without such readings, the ship's captain is unable to take effective corrective action or
even to recognize that corrective action is in order. Basing decisions upon valid but inappropriate
measures (e.g., the outside temperature instead of a compass heading) would be unproductive in returning the ship to port. Simply using measures is not enough; it is important to use the right measure.
Too often, logistics managers either fail to use measures at all or choose to consult measures they
are familiar with, unaware of whether they are appropriate for their measurement needs. This research
presents a framework that seeks to close this gap for the logistics practitioner. It is hoped that this
initial effort will provide some guidance to logistics practitioners and encourage researchers to further investigate this important area of management decision-making.

NOTES
Arbuckle, James L. (1997), Amos User's Guide, Version 3.6, Chicago: SmallWaters.
Armstrong, J. Scott and Terry S. Overton (1977), Estimating Non-Response Bias in Mail
Surveys, Journal of Marketing Research, Vol. 14, No. 3, pp. 396-402.
Bagozzi, Richard P. and Youjae Yi (1988), On the Evaluation of Structural Equation Models,
Journal of the Academy of Marketing Science, Vol. 16, No. 1, pp. 74-94.
Bentler, Peter M. (1990), Fit Indices, Lagrangian Multipliers, Constraint Changes, and Incomplete
Data in Structural Models, Multivariate Behavioral Research, Vol. 25, No. 1, pp. 163-172.
Bentler, Peter M. (1998), EQS for Windows, Version 5.7, Encino, CA: Multivariate Software.
Bollen, Kenneth A. (1989), Structural Equations with Latent Variables, New York: Wiley.
Bowersox, Donald J. and Patricia J. Daugherty (1992), Logistics Leadership - Logistics Organizations of the Future, Logistics Information Management, Vol. 5, No. 1, pp. 12-17.
Bowersox, Donald J. and Patricia J. Daugherty (1995), Logistics Paradigms - The Impact of Information Technology, Journal of Business Logistics, Vol. 16, No. 1, pp. 65-77.
Bowersox, Donald J. and Cornelia Dröge (1989), Similarities in the Organization and Practice of
Logistics, Journal of Business Logistics, Vol. 10, No. 2, pp. 61-72.
Boyd, Lynn H. and James F. Cox III (1997), A Cause and Effect Approach to Analyzing Performance Measures, Production and Inventory Management Journal, Vol. 38, No. 3, pp. 25-32.
Boyson, Sandor, Thomas Corsi, Martin Dresner, and Elliot Rabinovich (1999), Managing
Effective Third Party Logistics Relationships: What Does it Take? Journal of Business Logistics,
Vol. 20, No. 1, pp. 73-100.
Brewer, Peter C. and Thomas W. Speh (2000), Using the Balanced Scorecard to Measure Supply
Chain Performance, Journal of Business Logistics, Vol. 21, No. 1, pp. 75-93.
Browne, Michael W., Krishna Tateneni, Gerhard Mels, and Robert Cudeck (2000), Comprehensive
Exploratory Factor Analysis 1.1 (CEFA).
Byrne, Barbara M. (1994), Structural Equation Modeling with EQS and EQS/Windows: Basic
Concepts, Applications, and Programming, Thousand Oaks, CA: Sage Publications.
Caplice, Chris and Yossi Sheffi (1994), A Review and Evaluation of Logistics Metrics, International Journal of Logistics Management, Vol. 5, No. 2, pp. 11-28.
Cooper, Martha C., Douglas M. Lambert, and Janus D. Pagh (1997), Supply Chain Management:
More Than Just a New Name for Logistics, International Journal of Logistics Management,
Vol. 8, No. 1, pp. 1-13.
Davis, Tom (1993), Effective Supply Chain Management, Sloan Management Review, Vol. 34,
No. 4, pp. 35-46.
Ellram, Lisa M., Bernard J. La Londe, and Mary M. Weber (1989), Retail Logistics, International
Journal of Physical Distribution & Logistics Management, Vol. 19, No. 12, pp. 29-39.
Emmelhainz, Margaret A., James R. Stock, and Larry W. Emmelhainz (1991), Consumer Responses
to Retail Stock-outs, Journal of Retailing, Vol. 67, No. 2, pp. 138-148.
Fisher, Marshall L. (1997), What is the Right Supply Chain for Your Product? Harvard Business
Review, Vol. 75, No. 2, pp. 105-116.
Fornell, Claes and David F. Larcker (1981), Evaluating Structural Equation Models with Unobservable Variables and Measurement Error, Journal of Marketing Research, Vol. 18, No. 1,
pp. 39-50.
Hair, Joseph F. Jr., Rolph E. Anderson, Ronald L. Tatham, and William C. Black (1995), Multivariate
Data Analysis with Readings, 4th Ed. Englewood Cliffs, NJ: Prentice Hall.
Harrington, Thomas C., Douglas M. Lambert, and Martin Christopher (1991), A Methodology for
Measuring Vendor Performance, Journal of Business Logistics, Vol. 12, No. 1, pp. 83-104.
Harding, Forrest E. (1998), Logistics Service Provider Quality: Private Measurement, Evaluation, and Improvement, Journal of Business Logistics, Vol. 19, No. 1, pp. 103-120.
Hoyle, Rick H. (1995), The Structural Equation Modeling Approach: Basic Concepts and Fundamental Issues, in Structural Equation Modeling: Concepts, Issues, and Applications, Rick H.
Hoyle, ed., Thousand Oaks, CA: Sage Publications, pp. 1-15.
Hu, Li-Tze and Peter M. Bentler (1995), Evaluating Model Fit, in Structural Equation Modeling:
Concepts, Issues, and Applications, R. H. Hoyle, ed., Sage, Thousand Oaks, CA, 1995, pp. 76-99.
Johnson, M. Eric (1998), Giving 'Em What They Want, Management Review, Vol. 87, No. 10,
pp. 62-67.
Johnson, M. Eric and Tom Davis (1998), Improving Supply Chain Performance by Using Order
Fulfillment Metrics, National Productivity Review, Vol. 17, No. 3, pp. 3-16.
Kaplan, Robert S. (1991), New Systems for Measurement and Control, The Engineering Economist, Vol. 36, No. 3, pp. 201-218.
Kaplan, Robert S. and David P. Norton (1992), The Balanced Scorecard - Measures That Drive
Performance, Harvard Business Review, Vol. 70, No. 1, pp. 71-80.
Keebler, James S., Karl B. Manrodt, David A. Durtsche, and D. Michael Ledyard (1999), Keeping Score: Measuring the Business Value of Logistics in the Supply Chain, Oak Brook, IL: Council
of Logistics Management.
Kerr, Steven (1975), On the Folly of Rewarding A, While Hoping for B, Academy of Management
Journal, Vol. 18, No. 4, pp. 769-783.
Kleinsorge, Ilene K., Philip B. Schary, and Ray D. Tanner (1991), The Shipper-Carrier Partnership:
A New Tool for Performance Evaluation, Journal of Business Logistics, Vol. 12, No. 2, pp. 35-57.
Krupp, James A. G. (1994), Measuring Inventory Management Performance, Production and
Inventory Management Journal, Vol. 35, No. 4, pp. 1-6.
Lambert, Douglas M., Martha C. Cooper, and Janus D. Pagh (1998), Supply Chain Management:
Implementation Issues and Research Opportunities, The International Journal of Logistics Management, Vol. 9, No. 2, pp. 1-19.
Lancioni, Richard and John L. Gattorna (1992), Setting Standards for Quality Service in
Logistics, International Journal of Physical Distribution and Logistics Management, Vol. 22,
No. 3, pp. 24-29.
Lee, Hau L. and Corey Billington (1992), Managing Supply Chain Inventory: Pitfalls and Opportunities, Sloan Management Review, Vol. 33, No. 3, pp. 65-73.
Marien, Edward J. (1995), Structuring the Shipper/Carrier Relationship, Transportation and
Distribution, Vol. 36, No. 7, pp. 60-62.
Mentzer, John T. and John Firman (1994), Logistics Control Systems in the 21st Century, Journal of Business Logistics, Vol. 15, No. 1, pp. 215-227.
Mentzer, John T. and Brenda P. Konrad (1991), An Efficiency/Effectiveness Approach to Logistics Performance Measurement, Journal of Business Logistics, Vol. 12, No. 1, pp. 33-62.
Narasimhan, Ram and Joseph R. Carter (1998), Linking Business Unit and Material Sourcing
Strategies, Journal of Business Logistics, Vol. 19, No. 2, pp. 155-171.
Richardson, Peter and John Gordon (1980), Measuring Total Manufacturing Performance, Sloan
Management Review, Vol. 21, No. 2, pp. 47-58.
Scott, Charles and Roy Westbrook (1991), New Strategic Tools for Supply Chain Management,
International Journal of Physical Distribution and Logistics Management, Vol. 21, No. 1, pp. 22-33.
Stank, Theodore P., Scott B. Keller, and Patricia J. Daugherty (2001), Supply Chain Collaboration
and Logistical Service Performance, Journal of Business Logistics, Vol. 22, No. 1, pp. 29-49.
Wiersma, Uco J. (1992), The Effects of Extrinsic Rewards in Intrinsic Motivation: A Meta-Analysis, Journal of Occupational & Organizational Psychology, Vol. 65, No. 3, pp. 101-115.
Wisner, Joel D. and Stanley E. Fawcett (1991), Linking Firm Strategy to Operating Decisions
Through Performance Measurement, Production and Inventory Management Journal, Vol. 31,
No. 3, pp. 5-11.

APPENDIX
SURVEY ITEMS

Item #   Item Description                                                                                                              Mean   Standard Deviation
E1       Logistical operations throughout my firm are performed in a standard manner                                                   3.10   0.99
E2       My firm has common, agreed to policies and procedures to standardize logistics operations                                     3.26   0.94
E3       My firm has active programs to enforce standardized logistical performance                                                    3.15   0.95
E4       My firm has simplified our product and promotional offering to reduce ordering and invoice complexity                         2.68   0.99
E5       My firm has substantially reduced operating complexity by developing separate operations focused on individual channels over the past three years   2.83   0.98
R1       My firm is able to accommodate a wide range of unique customer requests by implementing preplanned solutions                  3.14   0.98
R2       My firm has different, unique logistics service strategies for different customers                                            3.36   1.05
R3       My firm has a flexible program of special services that can be matched to changing customer requirements                      3.27   0.99
R4       My firm has established a program to authorize and perform special requests made by selected customers                        3.41   0.96
R5       My firm has established a program to integrate and facilitate individual customer requirements across our strategic business units   3.09   0.92
R6       My firm has developed information linkages with customers that permit substantial last-minute accommodation without loss of planned efficiencies   2.76   0.96
O1       My firm experiences improved performance by integrating operations with supply chain partners                                 3.37   0.94
O2       My firm has substantially reduced facility and operational complexity over the past three years                               3.12   1.15
O3       My firm is committed to achieving zero defect logistical performance                                                          3.33   1.08
O4       My firm has substantially reduced channel complexity over the past three years                                                3.00   0.93
S1       My firm clearly defines specific roles and responsibilities jointly with our supply chain partners                            3.11   0.92
S2       My firm has a track record of allowing suppliers to participate in strategic decisions                                        2.76   0.92
S3       My firm has active programs to positively impact our suppliers' suppliers                                                     2.50   0.92
S4       My firm shares technical resources with key suppliers to facilitate operations                                                3.27   0.94
S5       My firm shares research and development costs and results with primary suppliers                                              2.66   0.85
S6       My firm is committed to sharing responsibility with suppliers in new product/service development and commercialization        3.10   0.89
S7       My firm is willing to enter into long-term agreements with suppliers                                                          3.80   0.81
M1       The information available in my firm is accurate, timely, and formatted to facilitate use                                     2.92   0.98
M2       My firm effectively shares operational information between departments                                                        3.19   1.00
M3       My firm's logistics information systems capture and maintain real time data                                                   3.00   1.15
D1       My firm benchmarks best practices/processes and shares results with suppliers                                                 2.86   1.05
D2       My firm benchmarks outside our primary industry                                                                               2.82   1.13
D3       My firm benchmarks performance metrics                                                                                        3.07   1.08
F1       My firm extensively utilizes cross-functional work teams for managing day-to-day operations                                   3.43   1.13
F2       My firm has extensively redesigned work routines and processes over the past three years                                      3.83   0.99
F3       My firm has invested in technology designed to facilitate cross-organizational data exchange                                  3.40   1.05
P1       The orientation of my firm has shifted from managing functions to managing processes                                          3.48   0.91
P2       My firm has developed performance incentives based upon process achievement                                                   3.00   1.06
P3       My firm acknowledges the importance of functional excellence but focuses on process achievement                               3.40   0.84
P4       My firm has developed performance incentives based upon process improvement                                                   3.05   0.97

*Items were measured using a five-point Likert-type scale, where 1 = Strongly Disagree and 5 = Strongly Agree.

ABOUT THE AUTHORS


Stanley E. Griffis is an Assistant Professor of Logistics Management in the School of Engineering and Management at the Air Force Institute of Technology in Dayton, Ohio. His Ph.D. in Logistics is from The Ohio State University. His current research interests include logistics performance
measurement, reverse logistics, logistics information systems, and the role of logistics in customer
satisfaction and loyalty.
Martha C. Cooper is a Professor of Marketing and Logistics, Fisher College of Business, The
Ohio State University. Her research interests include supply chain management, partnership and other
inter-firm relationships, the role of customer service in corporate strategy, international logistics, strategic planning for logistics, and cluster analysis. She is co-author of: Customer Service: A Management Perspective; Partnerships in Providing Customer Service: A Third-Party Perspective; and
Strategic Planning for Logistics. Her Ph.D. is from The Ohio State University.
Thomas J. Goldsby is an Assistant Professor of Marketing and Logistics, Fisher College of
Business, The Ohio State University. He received his Ph.D. in Marketing and Logistics from Michigan State University. He also holds a BS in Business Administration from the University of Evansville and an MBA from the University of Kentucky. His research interests focus on logistics customer
service and supply chain integration. He also has interest in the development and implementation
of lean and agile supply chain strategies.
David J. Closs (Ph.D. Michigan State University) is the John McConnell Chaired Professor
of Business Administration, Eli Broad College of Business, Michigan State University. He is co-author
of Logistical Management, World Class Logistics: The Challenge of Managing Continuous Change,
and 21st Century Logistics: Making Supply Chain Management a Reality and has published numerous articles on logistics strategies, systems, modeling, inventory management, and forecasting.
