

Supplier evaluation processes: the shaping and reshaping of supplier performance

Kim Sundtoft Hald
Department of Operations Management, Copenhagen Business School, Frederiksberg, Denmark, and

Chris Ellegaard
Copenhagen Business School, Center for Applied Market Science, Herning, Denmark

Received 29 October 2009
Revised 12 July 2010, 15 October 2010
Accepted 28 October 2010

Abstract
Purpose – The purpose of this paper is to illuminate how supplier evaluation practices are linked to supplier performance improvements. Specifically, the paper investigates how performance information travelling between the evaluating buyer and the evaluated suppliers is shaped and reshaped in the evaluation process.
Design/methodology/approach – The paper relies on a multiple, longitudinal case research methodology. The two cases show two companies' efforts in designing, implementing, and using supplier evaluation in order to improve supplier performance.
Findings – The findings show how the dynamics of representing, reducing, amplifying, dampening, and directing shape and reshape supplier evaluation information. In both companies, evaluation practices were defined, redefined, and re-directed by the involved actors' perceptions and decision making, as well as by organisational structures, IT systems, and available data sources.
Research limitations/implications – The identified factors and dynamics could be empirically tested on larger samples to increase generalisability.
Practical implications – The results provide insights into how an evaluating buyer may analyse and control supplier evaluation processes, thereby improving their effects. Managers must know how performance information is altered before reaching key supplier actors in order to optimise supplier performance.
Originality/value – Current studies on supplier evaluation practices are limited in their focus on design, implementation, or use. This paper explores all three phases empirically, and proposes a set of dynamics to better understand and control the often taken-for-granted link between the intentions and outcomes of such practices. In relation to future research, the authors propose a more holistic direction, which will take the entire supplier evaluation process as its unit of analysis.
Keywords Supplier evaluation, Performance measurement, Buyer-supplier relationship
Paper type Research paper

Introduction
Performance measurement in operations management has received some attention during the past three decades (Bourne et al., 2003; Evans, 2004; Wouters and Sportel, 2005). It has been suggested that this attention has been driven by changes in business environments (McAdam and Bailie, 2002) and that these changes in turn triggered a performance measurement revolution (Neely, 1999), fuelled by the inadequacy of previous one-dimensional financially oriented performance measurement practices (Bourne et al., 2000).
The increased scope of managing, where companies now seek to control inter-organisational activities, is one such change in the business environment (Christopher, 1998). To assist in managing the wider supply chain, a multitude of new or modified managerial tools and practices have been suggested by academic writers and implemented by practitioners (Fugate et al., 2006; Gunasekaran and Kobu, 2007). Several authors have suggested that previous internally focused performance measurement practices now need to be broadened and changed. Otherwise they will limit the possibility to optimise dyadic relationships (Lamming et al., 1996) or the supply chain (Van Hoek, 1998).
As a response, numerous papers in the academic literature have reported on studies
of the development of performance measurement systems addressing the evaluation of
activity outside legal company borders. These papers can be classified into three
different streams of research according to the scope of the system they address as
object of management: supplier evaluation (Simpson et al., 2002; Wilson, 1994),
buyer-supplier relationship evaluation (Medlin, 2003; O'Toole and Donaldson, 2002), or supply chain evaluation (Beamon, 1999; Gunasekaran and Tirtiroglu, 2001). Parallel to
research on performance measurement inside organisations, most attention in research
on performance measurement between or outside legal company borders is oriented
towards the identification and development of performance measures and models
(Purdy and Safayeni, 2000). A technical rational logic tends to dominate (Elg and
Kollberg, 2009), where improved measures, systems that are aligned with strategy
(Kaplan and Norton, 1992), as well as an optimum performance measurement system
environment (Neely et al., 1995) will result in improved performance in the activities
measured.
We position this contribution outside the scope of the technical rational logic and
contribute to the emerging and growing literature exploring how organisations handle
performance measurement and make use of the data collected (Elg, 2007; Kennerley
and Neely, 2002). The study represents a shift in focus from studying the
measurements themselves to how they are used in real face-to-face situations (Elg and
Kollberg, 2009). Understanding how supplier evaluation practices are designed,
implemented, and used in and between organisations is critical for managers because it
generates insight into the effective management of suppliers through performance
measurement devices. The objective of the present study is to explore how supplier
evaluation practices are linked to their effects. More specifically, we are interested in
how activities in the supplier evaluation process shape supplier performance outcomes, and the following research question forms the basis of the investigation:
RQ. How is performance measurement information, travelling between the
evaluating buyer and evaluated suppliers, shaped and reshaped in the
evaluation process?
We report on two longitudinal case studies where supplier evaluation practices were developed, implemented, and used in order to manage and influence suppliers.
The paper is organised as follows. First, the literature on supplier evaluation is
reviewed. Following a description of the research methodology, the two case studies
are presented and analysed. The paper concludes with a discussion on the contribution,
managerial implications, limitations, and prospects for future research.
Supplier evaluation
We define supplier evaluation as the process of quantifying the efficiency and effectiveness of supplier action (Neely et al., 1995). Supplier evaluation is a quantification process designed to stimulate the decision process inside the evaluating buying company or, through the incentives it invokes, to stimulate a change in behaviour in the evaluated supplying company (Neely et al., 1997). We explore supplier evaluation practices as instruments designed to influence supplier action (Schmitz and Platts, 2003, p. 719). The underlying assumption is that if such an influence attempt is successful, it will manifest itself in changed supplier behaviour aligned with the evaluating company's interests, improved supplier capabilities and performance, and that this in turn will benefit the evaluating buying company (Prahinski and Benton, 2004).
Taking the categorisation of performance measurement literature offered by
Bourne et al. (2000, p. 758) as our starting point, we adopt a three-phase model of
supplier evaluation:
(1) The design of the supplier performance evaluation system. Key objectives to be
measured are defined and performance measures are selected (Choi and Hartley,
1996; Simpson et al., 2002; Tan et al., 2002; Willis and Huston, 1990; Wilson,
1994).
(2) The implementation of the supplier performance evaluation system. Systems
and procedures are put in place to collect and process the data that enable the
measurements to be made regularly (Araz and Ozkarahan, 2007; Forker and
Mendez, 2001; Morgan and Dewhurst, 2007; Muralidharan et al., 2001; Ross et al.,
2006; Teng and Jaramillo, 2005; Vokurka et al., 1994).
(3) The use of the supplier performance evaluation system. Performance data are
collected, reviewed, and acted upon (Dumond, 1991, 1994; Prahinski and
Benton, 2004; Prahinski and Fan, 2007).
This study contributes to the specific part of the performance measurement literature
occupied with the behavioural implications of supplier evaluation. This literature
explores how buying company evaluators and evaluated suppliers activate and
respond to assessed and communicated supplier performance ratings and how their
behaviours in turn are influenced (Cousins et al., 2008; Dumond, 1991, 1994; Prahinski
and Benton, 2004; Prahinski and Fan, 2007; Purdy et al., 1994). Using experiments as the research instrument, Dumond (1991, 1994) found that different measures produce different procurement manager decisions. Prahinski and Benton (2004) explored how suppliers perceived the buying firm's supplier evaluation communication process and how this in turn impacted supplier performance. Prahinski and Fan (2007) added to this understanding by exploring how content and frequency in communication impacted suppliers' commitment to change behaviour. Cousins et al. (2008) explored the role of socialization mechanisms in mediating the relationship between supplier performance measures and performance outcomes. "Socialization mechanisms provide an avenue for dialogue to act upon issues identified through the performance measurement control process" (Cousins et al., 2008, p. 240). Interestingly, the authors concluded that it is not the performance measurement system, but rather the mediating effect of buyer-supplier socialization mechanisms that is critical to firm performance.
Purdy et al. (1994) and Purdy and Safayeni (2000) explored suppliers' perceptions of the effectiveness of buyers' evaluation processes. Three main conclusions were drawn. First, the majority of suppliers felt that their effectiveness was not accurately reflected in the evaluation. Rather, it seemed a test of how much their organisation looked like the buying organisation. Second, the evaluating buying organisation did not utilise the information gathered through the audit process properly. Instead, suppliers felt that "price and politics were the bottom line of the purchasing decision" (Purdy et al., 1994, p. 102). Finally, suppliers felt that scoring high on the evaluation chart was more a question of game playing and showmanship, of "repackaging material to fit a different format rather than one of looking for ways to improve" (Purdy et al., 1994, p. 102). The article rejects the undisputed power of the measurement system (i.e. the technology), stating that "simply having the required systems and procedures in place does not necessarily ensure an effective or good supplier" (Purdy et al., 1994, p. 102), thereby attributing interpretive and transformative power to the actors involved in activating and responding to the technology.
In this research, we build on the contention that performance measurement
practices and their performance effects are best understood and controlled as a holistic
process (Kuwaiti, 2004). We explore supplier evaluation practices as potentially
including all steps from alignment with strategic objectives, data capture, data
analysis, interpretation, and evaluation to decision making, communication and
information transfer, and taking action (Bourne et al., 2005). This study contests
technology-centred thinking, which implies that simply putting a supplier evaluation
practice in place, which is aligned with strategy, will produce the desired effect.
Instead, it is assumed that supplier evaluation practices cannot be engineered as a move
from one well-defined point of being (state A) to another (state B) (Quattrone and
Hopper, 2001). This implies that we are looking for obstacles to the unproblematic
implementation or the mechanisms shaping and reshaping such practices (Elg, 2007;
Waggoner et al., 1999). We further position our research in the wider accounting
change literature and focus on the forces that put the supplier evaluation practices
under study into motion (Andon et al., 2007; Hopwood, 1987, p. 207).

Methodology
We chose to design the investigation as a qualitative case-based study. The study
object was the supplier evaluation process, which made a qualitative research design
superior (Van Maanen, 1983a). In addition, the "how" research question qualified the case study methodology as the ideal instrument for the investigation (Voss et al., 2002;
Yin, 1994). The aim of the study was to extend existing concepts and understandings
within the field of supplier evaluation, which made case research a highly appropriate
choice (Stuart et al., 2002; Voss et al., 2002). The two buying organisations were
intensely investigated, which allowed data retrieval of the sequential relationship of
events (Voss et al., 2002). The interaction and the decisions made by actors engaged in
the evaluation processes, as well as the changes in information, supplier behaviour,
and performance resulting from the evaluations were studied through the phases
design, implementation, and use.

Studied companies
The supplier evaluation practices of two large corporations producing industrial systems (A) and electronic appliances (B) were studied. Both companies were major competitors in their industries. Their complex supply needs meant that the management and evaluation of supplier performance were particularly challenging, representing intense cases of the studied phenomenon (Miles and Huberman, 1994, p. 28). Performance information travelled a long distance and was subject to decisions and interferences made by a multitude of actors. Both companies were open and granted full access to employees, documents, meetings, and decision making. The researchers played no active part in the design, implementation, and use of the supplier evaluation practices under study.

Data collection
Various forms of inquiry were employed to produce a plausible interpretation of the
supplier performance evaluation phenomenon. Hence, triangulation ensured the
validity of the research findings (Denzin, 1978; Yin, 1994). Employees in the two
buying companies, such as purchasers, product developers, process engineers,
production planners, managers, and executives, were interviewed on the supplier
evaluation process (Table I). In addition, supplier actors, such as sales employees and
key account managers, involved in the supplier evaluation process, were interviewed.
These actors were selected based on supplier importance to the focal buying companies
and their observed involvement in the supplier evaluation process. Our inquiry covered
the decisions and events affecting the performance information flow, and the expected
evaluation outcomes.
A main challenge was to let the interviewee guide the interview, but simultaneously
find ways to ensure that the conversation uncovered all the pertinent data (Stuart et al.,
2002). Hence, we collected the interview data through in-depth open-ended
dialogue-type interviews, where the interviewees were allowed to account for their
stories, experiences, and opinions regarding the supplier evaluation process.
The interview guide only contained basic keywords ensuring that all key aspects
were covered in the interview. Interviewees accounted for the evaluation process and
performance information flow and keyed together decisions made and interaction
encounters. Furthermore, documents and electronic files (e.g. evaluation databases,
exchange agreements, supply strategies, evaluation reports, and suppliers own
performance measurement devices) were studied, which was useful especially to
generate insight into the implementation phase. Finally, observation was a key inquiry
method. Meetings between buyer and supplier personnel, direct observations of work
procedures involved in the use of supplier evaluation systems, as well as internal
meetings between actors in the buying organisations were attended. Observation
allowed the monitoring of behaviour and interaction between actors. It also prevented
the sole reliance on individual actors own accounts. The interview data (presentational
data), as presented by the informants, were complemented with operational data,
describing the activities observed by the researchers, to provide a richer empirical base

Time Average Average Average no. of


studied interview No. of meeting meeting people present at
Table I. (months) Interviews time (min) observations time (min) meetings
Details on the interviews
and observations made A 26 13 55 9 (5 external, 4 internal) 100 4.5
in the two case studies B 36 17 65 12 (6 external, 6 internal) 120 4.0
and to separate fact from fiction (Van Maanen, 1983b). Reactions and counter Supplier
communication of supplier personnel (most often sales employees) were observed at evaluation
close hand. Observing the interactions enabled the researchers to draw conclusions on
the success of the evaluation process (Light, 1983). The data collection process processes
progressed from interviews with the buying company actors, as well as internal
buying company meetings regarding the design and implementation of the evaluation
schemes. This was followed by observations of meetings between buyer and supplier 893
when the schemes were brought to use, and finally interviews again with core actors in
the buying organisation along with the supplier actor interviews.

Data analysis
We relied on several of the coding procedures and tools offered by Miles and
Huberman (1994). First, a basic first-level coding of every interview and observation
was made. This was done in order to identify all events and decisions influencing the
information flow. The outcome was a comprehensive partially ordered data display for
further analysis (Miles and Huberman, 1994). Further, the documents created by the
involved actors were studied in order to identify the information flowing between the
actors involved in the supplier evaluation process. Based on the partially ordered data
display, we then performed within case analysis on the two cases. Here, we analysed all
first-level coded data in the initial document in order to bracket and identify the factors
affecting the information flow. Factors refer to the underlying causes of information
shaping or reshaping, which can be in the form of organisational dimensions, systems
limitations/circumstances, cognitive barriers and/or actor preferences among others.
We then carried out cross-case analysis to determine if similarities could be found between the two cases. The richness of the data meant that we derived some factors that were unique to each of the cases. Across the cases, we identified 13 factors having a shaping effect on the performance information flow. Referring to the research question, however, we were also interested in how the factors affected the information flow.
Hence, we analysed the 13 factors to establish how they shaped the performance
information. For each of the factors, we made a description of the effect on the
information flow and assigned labels to these effects. With these descriptions and
labels as a basis, we made cross factor comparative coding to determine overall
categories of effects. This led to the identification of five types of effects, which we
chose to term dynamics. A dynamic simply refers to a cause of change. Finally, to tie
the knot back to the discussion on the effects of performance measurement, we
identified the actor reactions to the identified dynamics.

Supplier evaluation practices in company A


Company A produces industrial systems and is among the largest manufacturers in its
industry. The following describes how the company designed, implemented, and used
a supplier evaluation system. It focuses specifically on the application of the system to
evaluate key suppliers of electronic components.

Designing the supplier evaluation system in company A


Company A had formed a cross-functional group to develop a simple evaluation scheme
(Figure 1) and carry out the evaluation. The group consisted of the responsible category
manager, a product developer specialising in electronics, and an operational purchaser.
Figure 1. The basic evaluation scheme used by company A (with evaluation example)

Company A supplier evaluation
Supplier: XXX
Supplier no.: XXX
Product: Electronics
Contact persons: Employee x and Employee y
Rated by: CM, PD, and OP
Rating scale: 1 = Excellent; 2 = Good; 3 = Average; 4 = Not satisfactory; 5 = Not acceptable

Rating Q2200X | Grade | Weight (%) | Total
A Product quality | 4 | 25 | 1.00
B On-time-delivery | 5 | 25 | 1.25
C Cooperation | 2 | 15 | 0.30
D Environment | 1 | 10 | 0.10
E Total cost development | 5 | 25 | 1.25
Total grade | | | 3.9
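The total grade in Figure 1 is simply the weight-sum of the five criterion grades (0.25·4 + 0.25·5 + 0.15·2 + 0.10·1 + 0.25·5 = 3.9). The short sketch below reproduces this calculation; the criterion names, grades, and weights are taken from Figure 1, while the function and variable names are illustrative only.

```python
# Minimal sketch of company A's weighted rating (Figure 1).
# Grades run from 1 (excellent) to 5 (not acceptable); weights are percentages.

criteria = {
    "Product quality":        {"grade": 4, "weight": 25},
    "On-time-delivery":       {"grade": 5, "weight": 25},
    "Cooperation":            {"grade": 2, "weight": 15},
    "Environment":            {"grade": 1, "weight": 10},
    "Total cost development": {"grade": 5, "weight": 25},
}

def total_grade(criteria: dict) -> float:
    """Weighted average of the criterion grades (weights given in per cent)."""
    return sum(c["grade"] * c["weight"] / 100 for c in criteria.values())

print(round(total_grade(criteria), 2))  # -> 3.9, as in the Figure 1 example
```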

The inclusion of different functions in the group was intended to produce the best
representation of supplier performance, incorporating business, process, and product
considerations. However, the exercise was dominated by purchasing objectives,
focusing on short-term financial performance. The category manager had the final
decision-making authority and took charge in the evaluation design meetings. The
product developer was very clear in his opinion regarding the design process:
[...] purchasing just says 10% savings, but they do not know the product architecture and the possibilities for benefiting ... otherwise we could gain a lot from focusing more on joint development.
The lack of any measures on joint development and product innovation demonstrated
limited technological attention in the supplier evaluation system. Despite the apparent
disagreement, the product developer did not object to the scheme, possibly to avoid
conflict, or because it was deemed futile since purchasing would have the final word
anyway.
The weights allocated in the scheme (Figure 1) illustrate the prioritisation of quality,
delivery, and cost development over environment and cooperation. Cooperation and
environment were rarely discussed at the meetings with suppliers. Focus was on total
cost development, with quality and on-time delivery playing a role only in case of
urgent problems or when brought into play to negotiate lower prices.

Implementing the supplier evaluation system in company A


The evaluation group based their rating on expectations that had been stated in the
contracts with each supplier. A, B, and E (Figure 1) were retrieved directly from the
enterprise resource planning system (ERP system) and compared to expectations
regarding parts per million (PPM), percentage of supplies delivered on time, and a
10 per cent savings target, respectively. Environment was determined by asking
suppliers for an environmental registration (1), environmental policy (3) or neither (5).
Cooperation was determined qualitatively among the group members based on their supplier interaction experiences.
Based on the emotional reactions of supplier actors, it was apparent that the expectations of company A were perceived as harsh, bordering on the unrealistic. Above, on target, or below the 10 per cent savings expectation, for instance, scored 1, 3, and 5, respectively. The purchaser drove these strict demands, arguing that suppliers needed to be under constant pressure and that poor ratings would motivate creativity. Good ratings, on the other hand, would relax suppliers, and thereby impede performance improvements. Also, since the suppliers were reluctant to open their books, they had to be capable of improving. However, the pressure had the exact opposite effect, as it was perceived as unfair by suppliers. The lack of realism was revealed in the scores, where the best supplier had managed to maintain status quo on prices, while the three others had demanded price increases, one by as much as 8 per cent. The overall average rating of the four suppliers was 4.025. Suppliers argued that prices were very difficult to reduce from a base price that had already been squeezed, while battling currency issues, increasing raw material prices, and a mature design with little room for value engineering. Interestingly, the group consistently computed bad supplier ratings in accordance with the pressure logic, despite actually revealing satisfaction with performance. For instance, the second worst rated supplier overall (3.9, almost "not satisfactory") was actually mentioned in very positive phrases in most interviews. The purchaser especially was impressed: "we do not use much time on this supplier but that is because it is not necessary ... they are really good!".
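To illustrate how raw outcomes were translated into grades under this pressure logic, the sketch below encodes the 10 per cent savings rule described above (above target scores 1, on target 3, below target 5). The thresholds follow the text, but the function name and the exact handling of the "on target" case are illustrative assumptions.

```python
# Minimal sketch of company A's translation of realised savings into a grade,
# following the rule described in the text: above the 10 per cent savings
# target scores 1 (excellent), on target 3 (average), below target 5 (not
# acceptable). How "on target" is bounded is not stated and is assumed here.

SAVINGS_TARGET = 0.10  # 10 per cent annual savings expectation

def savings_grade(realised_savings: float, target: float = SAVINGS_TARGET) -> int:
    if realised_savings > target:
        return 1
    if realised_savings == target:
        return 3
    return 5

# Example: a supplier that merely held prices constant (0 per cent savings)
# ends up with the worst grade on cost development, as in the Figure 1 example.
print(savings_grade(0.0))  # -> 5
```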
Finally, the implementation revealed some data instability issues. Quality scores, especially, were associated with uncertainty. One supplier received notes with the
scorecard stating:
Our automatic measurement system is not running yet. But from manual registration
supplier X appears to live up to contractual PPM demands. In the future more precise
measurements of failure level will be registered.
These uncertainties made it difficult for suppliers to rely on the data as a basis for
improvements and led them to question the whole idea behind the evaluation exercise. In
several meetings with suppliers, the evaluation group admitted that company A employees
were poor at providing the necessary information regarding defect components, which
made it difficult for suppliers to optimise their quality and service deliveries.

Using the supplier evaluation system in company A


Bringing the evaluation to use revealed several information distorting factors. First,
the evaluation group added notes to the scorecard when handing it over to supplier
representatives with the purpose of motivating supplier actors, expecting that the
numbers alone would not have the desired effect. The notes would appeal to supplier
actors in various ways:
• Demonstrate urgency: "29% delivered too late and 42% too early. Very poor performance. Focus is needed here. Improvement MUST take place."
• Refer to supplier status, commitments, and responsibility: "6 months lead-time is simply not good enough from a strategic supplier such as supplier X."
• Refer to past performance: "It seems the past very good performance is starting to slip?"
Second, during evaluation meetings, the evaluation group, led by the purchaser, presented and explained the evaluation scores and thereby attempted to influence the supplier actors to improve performance. But the communicative efforts of company A met serious resistance from supplier actors, who refused to accept these influence attempts and the performance conclusions made. Several of the meetings were tense and emotions were frequently aroused. In one specific relation, the supplier representatives clearly took the rating as an insult. The sales manager responded to the scores: "It looks like we are unacceptable on all parameters, except co-operation ... why do you even buy from us?" He then launched a counter attack, questioning the harsh objectives: "Can any of the other suppliers even meet 10% price reduction or 100 PPM?". The purchaser admitted that no supplier rated better than 5 on cost development and 500 PPM (a rating of 3) on quality. He then admitted that the evaluation was perhaps somewhat harsh and declared that the supplier was actually average. Hence, the reactions from suppliers forced the purchaser to moderate ratings at the scene of interaction. He seemed to realise that pursuing the original demands would not motivate supplier actors. It was more likely that further pressure would damage the relationship.
Third, another issue that incited particularly sharp reactions from suppliers was the
inability to relate supplier performance to company A performance. Provoked by a
poor cost rating, one supplier sales director argued:
We are a very poor supplier [...] but you have not counted in your low volumes in the rating ... I think it is too easy for you to just make this rating ... with your new project of 100,000 delivered pieces there would be a basis for evaluation ... we have a problem with giving you low prices and then get small volumes in return ... you have not even seen what we can do yet.
Company A had promised a volume of 80,000 components annually when price
objectives had been written into the contract (and rating), but had only bought
16,000 annually so far. The supplier representative clearly expected a reciprocal
relationship where supplier performance would be balanced with company
A performance and felt unfairly treated when reciprocity had not been respected.

Supplier evaluation practices in company B


Company B develops, manufactures, and sells electronic appliances. The following
describes how the company designed, implemented, and used two different supplier
evaluation systems. One was originally designed to evaluate all suppliers and
contained quantifiable measures; the other was specifically designed for the evaluation
of strategic component suppliers and was a supplier evaluation sheet containing a
range of different qualitative assessments.

Designing the supplier evaluation system in company B


The supply chain director was in the process of finishing his MBA and became
inspired to introduce a supplier performance evaluation system. His objective was to
increase information availability and alignment in the supply chain. He explained:
I believe that one number is better than a thousand opinions.
One of the main criteria used in the design of the new supplier performance evaluation system was updatability. Performance parameters that were difficult
to measure were kept outside the scope of the system. A manager participating
in the process explained:
I believe that the way we decided on these activities and the measures quantifying their performance was a question of "let us get going with the system" and "let's find some activities for which performance is easily measured".

What is easily measured is primarily that which is easily assessable, and in company B that meant the data which were available, transferable, and structured in the ERP system. Hence, the ERP system somehow determined what could and should be measured and thereby shaped supplier performance for company B.
One year after the implementation, the purchasing director decided to expand the
supplier assessment procedure. Some purchasers and suppliers had expressed
dissatisfaction with the way supplier performance was evaluated in the supplier
performance measurement system. They argued that relationships like the ones between
company B and its strategic suppliers were more complex than what could be represented
by the quantitative measures extracted from the ERP system. Therefore, it was decided that
the evaluation of strategic component suppliers should be expanded with a set of more
qualitative and complex measures. A project group was formed, and a brainstorming
process initiated. All departments interacting directly with suppliers on a daily basis were
included in this process (purchasing planning, sourcing, quality, and product development).
The outcome of this process was an evaluation sheet comprising five main dimensions of
supplier performance, each with a range of defining sub-measures (Table II).
The sheet seemed to mirror the interests of the different departments participating in and giving inputs to the project, and supplier performance, as defined in the supplier evaluation sheet, emerged as a compromise.

Implementing the supplier evaluation system in company B


Updating the supplier evaluation sheet was a resource-demanding task and it was
therefore decided to evaluate only the 20 most important suppliers once a year. For the
remaining group, the 330 suppliers, performance was defined as their ability to deliver
on time, confirm orders, and deliver product quality.
Supplier performance for each of the sub-measures under each of the five
dimensions was to be rated, translated, and thus reduced to a number between 0 and
4. The sheet primarily consisted of qualitative dimensions and measures, which made
data collection a complex task. The raw data, on which the evaluation was performed,
were based on opinions and experiences working with suppliers. A database was
constructed to collect these opinions. All employees could contribute to the process by
writing specific accounts of how the supplier acted in certain situations, which either
deviated positively or negatively from what was expected of normal business. Some
suppliers were concerned with and commented on this information collection process,
claiming that it had a tendency to be biased towards unsatisfactory supplier
performance:
[...] we got to make sure that in this central database where everyone can go in and make comments it's not all negative comments ... because it's so easy for someone to feel a little bit uneasy about something and say: "I put a comment in there" ... whereas if delivery was on time people tend not to put the good things into the database as well.
Once a year, the content of this database was then reviewed in a meeting (one per supplier) between representatives from the different departments. It was decided how to score the supplier on each of the mainly qualitative-based measures in the evaluation sheet. This way, both current (last year's) performance and expectations for the upcoming year were assessed.

Table II. The basic evaluation sheet used in company B (with evaluation example). Ratings run from 0 to 4; an X in the action plan column marks indicators for which an action plan was required. Dimension subtotals are the importance-weighted sums of the expectation and performance ratings.

Performance indicators | Importance | Expectations | Performance | Action plan
Relationship
Key account management | 2 | 3 | 3 |
Commitment | 2 | 2 | 2 |
Communication | 2 | 4 | 3 |
Project management | 2 | 1 | 1 | X
Confidentiality | 2 | 3 | 2 |
Code of conduct | 2 | 3 | 1 | X
Subtotal | | 32 | 24 |
Management
Professionalism | 3 | 2 | 2 |
Inquiry reaction time | 1 | 3 | 4 |
New ideas | 2 | 2 | 2 |
IT set-up | 1 | 1 | 2 |
Economic development | 1 | 2 | 1 | X
Proactive | 3 | 2 | 3 |
Subtotal | | 22 | 26 |
Technology
Fast prototypes | 2 | 3 | 1 | X
Master and prove new technologies | 2 | 2 | 2 |
Master simulations/virtual prototyping | 2 | 2 | 4 |
Responsibility/new components | 2 | 3 | 1 | X
Subtotal | | 20 | 16 |
Delivery
Delivery on time | 2 | 2 | 2 |
Order confirmation | 3 | 2 | 2 |
Spontaneous part deliveries | 1 | 2 | 2 |
Invoicing | 3 | 2 | 2 |
Subtotal | | 18 | 18 |
Quality
Measurement capability | 3 | 2 | 1 | X
Process control capability | 3 | 3 | 3 |
Inspection procedures | 2 | 3 | 2 |
Quality report | 3 | 4 | 3 |
Answers to CAR | 2 | 2 | 4 |
Subtotal | | 37 | 33 |
Total | | 129 | 117 |
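To make the scoring arithmetic in Table II explicit, the sketch below reproduces the Relationship dimension: each column subtotal is the importance-weighted sum of the ratings, yielding 32 and 24 as in the table. The data come from Table II; the variable names are illustrative, and the rule used here for flagging an action plan (a performance rating of 1) is an assumption read off the example rather than a rule stated in the paper.

```python
# Minimal sketch of the scoring arithmetic implied by Table II
# (Relationship dimension only). Ratings run from 0 to 4; higher appears to
# mean better in the published example (direction inferred, not stated).
relationship = [
    # (indicator, importance, expectation, performance)
    ("Key account management", 2, 3, 3),
    ("Commitment",             2, 2, 2),
    ("Communication",          2, 4, 3),
    ("Project management",     2, 1, 1),
    ("Confidentiality",        2, 3, 2),
    ("Code of conduct",        2, 3, 1),
]

# Dimension subtotals: importance-weighted sums of each rating column.
expectation_total = sum(imp * expectation for _, imp, expectation, _ in relationship)
performance_total = sum(imp * performance for _, imp, _, performance in relationship)

# Assumed action-plan trigger: indicators rated 1 on performance (this matches
# the X marks in the published example, but the exact rule is not stated).
action_plan = [name for name, _, _, performance in relationship if performance <= 1]

print(expectation_total, performance_total)  # -> 32 24, as in Table II
print(action_plan)  # -> ['Project management', 'Code of conduct']
```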

Using the supplier evaluation system in company B


Similar to the design phase, the use phase was performed in two different ways, depending on which of the two evaluation methods was used. The three quantitative measures were updated monthly. The purchasers decided which of their suppliers should have the performance data transmitted to them. Only the most important of the individual purchasers' suppliers were usually informed, and even for the most important suppliers, changes in the normal routines occurred.
Such diversions from normal supplier performance communication routines were accounted for in a multitude of different ways. One purchaser explained:

This month I decided not to send the data, since we have had some trouble registering the delivery on time at our inventory location due to a long quality inspection lead time ... if I send them, they will have no effect, since the supplier cannot recognise his own performance.

Another argued:

I stopped e-mailing the performance data to this supplier ... I see no use ... their performance is so poor and has been for a long time ... either they can't improve or are unwilling to do so.

This way, based on personal judgement, purchasers could impede the supplier performance evaluation process. When deciding to e-mail the updated supplier performance data, the purchaser often attached a brief note. One of these notes read:

I hereby forward the performance statistics for the 2nd quarter. I wish I could welcome you back with more positive figures, but the performance figures have not improved since the last quarter.

By attaching notes, the purchaser could either soften or strengthen the performance signal.
Another instance of distortion occurred when the evaluated supplier performance
was re-communicated in the supplier organisation. One supplier representative
explained how he rearranged the received performance data from company B into a
different format, and decided who should be informed inside the supplier organisation.
He then e-mailed it directly to these key people with a comment. He explained:
I take his information (the information e-mailed to him by the purchaser in company B) and put it in my spreadsheet ... I generate all the graphs that I showed you there ... and then I forward his original message and I attach my spreadsheet now, and then I send it to everybody that I think is concerned in our company ... obviously it goes to my boss ... it goes to the quality boss ... it goes to the supply chain director and it goes to our customer service so they can see how they are doing as well ... so all the key people involved in maintaining this position [...] as far as I am concerned, when I look at it, I believe it has a perception or mental effect on our people.
What was defined as supplier performance had finally reached its target, the individuals
in the supplying organisation, whose behaviour it was designed to influence.
The use of the supplier evaluation sheet was less complicated. Each of the 20 most important suppliers was invited to a yearly supplier assessment and relationship development meeting held in one of company B's offices. At this meeting, the purchaser accounted for the supplier performance evaluations made. The feedback to suppliers was often exemplified with verbal descriptions of specific incidents that had occurred during the last year. As a last step, if current performance was evaluated below expectations, the supplier had to create an action plan explaining the changes the supplier would initiate to improve its performance.

Factors involved in shaping and reshaping supplier performance


Factors refer to the underlying causes of information shaping or reshaping, which can
be in the form of organisational dimensions, systems limitations/circumstances,
cognitive barriers and/or actor preferences among others. Based on the within case
analysis of the two cases, 13 diverse factors were identified that either hindered
or transformed the performance measurement information. These factors therefore
shaped and reshaped what supplier performance was and what it was not (Table III).
Table III. Factors and dynamics shaping and reshaping the outcome of supplier evaluation practices in companies A and B

Factor | Company A | Company B | Representing | Reducing | Amplifying | Dampening | Directing
Factors shaping the design of supplier evaluation systems
1. Evaluation group structure | X | X | X | | | |
2. Decision-making authority | X | | X | | | |
3. Performance complexity | | X | X | | | |
4. Assessability/measurability of data | | X | | X | | |
Factors shaping the implementation of supplier evaluation systems
5. Rating/translation models on supplier performance | X | X | | X | | |
6. Buyers' logic on how to motivate suppliers | X | | | | X | |
7. Instability of supplier evaluation system | X | | | X | | |
8. Resource consumption in updating data | | X | | X | | |
Factors shaping the use of supplier evaluation systems
9. Addition of information | X | X | | | X | |
10. Failure to benchmark supplier performance | X | | | | | X |
11. Failure to relate to buying company performance | X | | | | X | |
12. Unwillingness to inform suppliers | | X | | | | | X
13. Re-communicating performance data | | X | | | | | X
Factors shaping the design of supplier evaluation systems
Four factors were identified that shaped supplier performance information in the design phase. The first two identified factors (1 and 2) were closely interlinked with organisational dimensions and design, whereas the last two (3 and 4) were caused by systems limitations and circumstances. In addition, we found that evaluation group structure and the outcome of the design phase were linked to actor preferences and that performance complexity became a shaping factor due to cognitive barriers of involved suppliers and purchasers:
(1) Evaluation group structure. Companies A and B both formed evaluation groups to design and monitor supplier performance. It was the representatives in these groups that defined what to include in and exclude from the performance measurement models. The outcomes reflected the interests of the departments participating in the evaluation groups.
(2) Decision-making authority. In company A, the category manager had the final decision-making authority and the evaluation scheme ended up being dominated by purchasing logic.
(3) Performance complexity. In company B, purchasers and suppliers had argued that relationships like the one between company B and its strategic suppliers were more complex than what could be represented by the quantitative measures extracted from the ERP system. It was a case of perceived misrepresentation of identities.
(4) Assessability/measurability of data. For company B, selecting scope and measures meant finding measures that were easily available. The priority was speed in the implementation phase, which meant focusing on easily assessable measures in the ERP system.

Factors shaping the implementation of supplier evaluation systems


Four factors playing a role in the implementation of supplier evaluation systems were
identified. Factors 5, 7, and 8 were tightly linked to system limitations and
circumstances. However, although related to technical issues, these factors were also
hard to separate from actor cognition and preferences. Both factors 5 and 8, for
instance, were designed into the process due to buyer ideas of simplicity and efficiency
in the decision making involved in the supplier evaluation process. In addition, we
found factor 7 to be interlinked with supplier cognition, as some suppliers began to
question the entire supplier evaluation process. Factor 6 was directly related to the
cognitive barriers and preferences of the evaluating buyer actors:
(5) Rating/translation models on supplier performance. Companies A and B both
worked with rating models that translated more or less easily quantifiable
supplier performance into a short list of numbers. It was an instance of reducing complex dimensions into one number, a sort of condensation.
(6) Buyer logic on how to motivate suppliers. In company A, the evaluating buyer had
ideas on how best to motivate suppliers. As a result, supplier performance
information was deliberately shaped by the evaluating buyer using high and
almost unreachable targets.
(7) Instability of supplier evaluation system. In company A, the implementation revealed some data instability issues. This instability made it difficult for suppliers to rely on the data as a basis for improvements and led them to question the entire supplier evaluation process.
(8) Resource consumption in updating data. The update of the evaluation sheet was a resource-demanding task for company B. It was decided that it should only be conducted on the 20 most important suppliers and only once a year. Thus, concerns regarding resource consumption in updating supplier performance information caused a reduction in its audience.

Factors shaping the use of supplier evaluation systems


Five shaping factors were identified in this final phase of the supplier evaluation process. Overall, issues of actor cognitions and preferences played a major role. First, individual
buying evaluators (e.g. purchasers) had the ability to reshape supplier performance
information through their own cognition and preferences (factors 9 and 12). Such actor
interference and judgments were possible due to the non-automatic transmittal of
supplier evaluation data directly to suppliers. Second, factors 10 and 11 were related to
supplier cognition. Buyers failed to compare supplier performance levels to other
similar suppliers and to their own performance levels. As a result, suppliers became
de-motivated. Finally, factor 13 was linked to suppliers' perceptions of the inability of the existing format to satisfy the internal communicative needs of the supplier:
(9) Addition of information. In both buying companies, communicating supplier
performance to suppliers meant that involved buyers had the option to
pre-translate, soften or strengthen the data communicated by the raw numbers
in the supplier evaluation system. Such softening or strengthening was done by
adding information or notes to the e-mails sent to suppliers.
(10) Failure to benchmark supplier performance. In company A, some suppliers had
questioned the harsh objectives, which triggered further questioning regarding
what other comparable suppliers could achieve. This questioning forced the
purchaser to moderate ratings at the scene of interaction, since it was admitted
that no supplier rated better.
(11) Failure to relate to buying company performance. In company A, the inability to relate supplier performance to the focal buying company's own performance,
accounting for the interdependence between them, meant that suppliers felt
unfairly treated.
(12) Unwillingness to inform suppliers. Purchasers in company B demonstrated their
ability and willingness to block suppliers performance measurement
information, and these actors mobilized a range of different arguments/logics
for doing so.
(13) Re-communicating performance data. In company B, supplier performance
communication involved a chain of actors both inside and outside the
organisation. The key account manager at one supplier decided to
re-communicate and re-direct the data in a new format and to a new audience.
Dynamics involved in shaping and reshaping supplier performance
The 13 factors identified in this study represent the underlying causes of supplier performance information shaping. However, they do not add to our knowledge on how such distortion or impediment works. To that end we adopted the concept of a dynamic, as defined in the methodology section. Based on this definition, the 13 factors were classified into a set of five generic dynamics: representing, reducing, amplifying, dampening, and directing (Table III and Figure 2).
Representing
Representing is defined as the act of speaking on behalf of an object, in this case
supplier performance. As part of the design phase, we found that representation issues
played a major role in shaping the definition of supplier performance. The evaluating
buyer actors produced a set of performance signals to the evaluated suppliers that was
shaped, not only by company or supply chain strategy, but also by the perceptions of
actors in internal functions, departments, and external key suppliers on how well the
supplier performance measurement system represented their identities and interests.
As a result, performance metrics and weightings were negotiated and this in turn produced a set of priorities that, besides being influenced by company strategy, was influenced by local prioritisations shaped by negotiation, power, and authority.
Suppliers performing well on parameters supporting buying company strategy, but
misaligned with the prioritisations as defined in the supplier performance
measurement system became confused, frustrated, and de-motivated.

Reducing
Reducing is defined as the act of making an object smaller or less in amount, degree, or
size. Here the object of reduction is the information contained in the supplier
performance data travelling between the evaluating buyer and the evaluated supplier.
Several factors, mostly occurring in the design and implementation phase, reduced the
supplier performance information. As a result, the evaluating actors produced a set of
performance signals to the evaluated suppliers that was inadequate in its scope and
strategic attention. Concerns about assessability/measurability of data, as well as
resource consumption in updating the supplier performance data led to a reduction in
the feasible set of supplier performance dimensions. Also, the rating/translation
models developed to automate and translate multiple dimensions of supplier performance into a weighted average condensed, or reduced, the signal sent to suppliers into one single number. Finally, instability in the data generated in the buying organisation affected the transmittal of data from the buyer to the supplier and thereby had the ability to hinder or reduce the set of data received. The reducing dynamics were mainly driven by the concerns of the designing and the implementing actors, concerns related to a cost-of-use perspective in which the minimisation of manual resources, the ease of use, and automation all played a part. The reductions that took place limited the information available to supplier actors, information that was needed to support or direct their improvement efforts.

Figure 2. Dynamics involved in shaping and reshaping the outcome of supplier evaluation practices in companies A and B (transmitted supplier performance information travels from the evaluating buyer through design, implementation, and use to the evaluated supplier, shaped by representing, reducing, amplifying, and dampening, with directing affecting the route throughout)

Amplifying
Amplification is defined as the act of making an object more marked or intense.
Amplification increases the strength of performance signals as communicated
to suppliers. Amplification was identified as an inherent dynamic working in both
implementation and use of supplier performance measurement systems and here it
shaped and reshaped supplier performance as transmitted to evaluated suppliers.
Buyers' logics on how to motivate suppliers were found to provoke a change in the scales used to translate good, bad, or average supplier performance outcomes.
Also, in the use phase, when informing evaluated supplier representatives, buyers
sometimes added or dosed information in portions that fitted into their own ideas and
agendas on how best to provoke a behavioural change in the supplier organisation and
a resulting improvement in supplier performance. The key effect of amplification
however was de-motivation. Suppliers felt unfairly treated and found it difficult to
accept performance impressions, which had been inflated beyond reasonable demands.

Dampening
Dampening is defined as the act of restraining or depressing an object. Dampening, in
supplier evaluation processes is a dynamic in opposition to amplification and it
involves the softening of performance expressions, typically because of fear of suppliers' reactions to the transmitted performance data. We found that dampening
took place in interactive encounters between the evaluating buyer and the evaluated
and objecting supplier. In order to avoid relational damage and de-motivation of
supplier actors, the evaluating buyer admitted that performance was not as grave as the transmitted impression implied. Hence, dampening is a withdrawal following
past amplification, taking the performance signal back towards a more neutral level.
We propose that dampening typically occurs when the relationship has run into severe problems. By dampening the signal, buying company actors may succeed to some
extent in restoring face and goodwill. However, the dampening dynamic may also
confuse evaluated suppliers and make them question the accuracy, reliability, and
seriousness of the entire evaluation exercise.

Directing
The final identified dynamic is directing or re-directing. Directing is defined as the act
of assigning a route for an object. Directing, in supplier evaluation processes, is a
dynamic where the actor consciously or unconsciously affects the route that supplier
performance information will follow. A re-direction could imply that new or alternative
actors will receive the supplier performance data, or it could mean that the data will be blocked or hindered in reaching its intended audience. We found evidence that both buying and supplying actors can take part in directing supplier performance information. We propose that directing can be a serious inhibitor of supplier performance improvement, as it can prevent information from reaching its target: the supplier employees who need to change their behaviour to improve performance. In severe instances, such as the one documented in company B, it may even result in a complete blockade of information.

Discussion and conclusion


This study has reported on two longitudinal case studies and investigated how
performance measurement information, travelling between the evaluating buyers and
evaluated suppliers, is shaped and reshaped in the evaluation process. The analysis
strongly supports the contention that the outcome of supplier evaluation processes
cannot be engineered simply by optimising the supplier evaluation systems, the
performance measures, and the data collection procedures that are put in place.
It contributes to a small but growing literature occupied with the study of how supplier
evaluation practices are linked to performance outcomes (Kannan and Tan, 2006;
Vonderembse and Tracy, 1999) and how involved actors have the potential to influence
such outcomes (Cousins et al., 2008; Dumond, 1991, 1994; Prahinski and Benton, 2004;
Prahinski and Fan, 2007; Purdy et al., 1994).
The application of a three-phase supplier evaluation model as an analytical
framework in this study (Bourne et al., 2000) integrates performance measurement and
supplier evaluation research. This study defines the supplier evaluation process as a
connected entity, which broadens performance measurement analysis (Elg and
Kollberg, 2009) within supplier evaluation practices, extending it from a study of single
contingencies (Kannan and Tan, 2006; Prahinski and Benton, 2004) to a study of an
interrelated chain of actor interference, decision making, and communication. The
study does not pre-specify either a buyer or a supplier perspective. Unlike existing
research on supplier evaluation it stays open for all actors (internal or external to the
focal buying organisation) that potentially, through their involvement, might impede
or distort the process.
Focusing on the entire supplier evaluation process as its object of research, this study extends previous research on supplier evaluation practices. First, it extends the link between buyers' decision making and performance outcomes (Dumond, 1991, 1994; Kannan and Tan, 2006; Vonderembse and Tracy, 1999). Specifically, by illuminating information shaping factors such as evaluation group structure, decision-making authority, and buyers' logic on how to motivate suppliers, it adds to our understanding of why and how such observed performance outcomes emerge. Extending prior research (Prahinski and Benton, 2004; Prahinski and Fan, 2007; Purdy et al., 1994), this study produced evidence that perceptions and cognitions related to buyers' performance
information transfer processes have the potential to influence the outcome of such
efforts. By suggesting a set of factors that shape performance information, this study
adds knowledge on the reasons why suppliers may or may not improve performance as a
result of supplier performance evaluation practices. This study seeks to develop a theory
regarding the dynamics involved in shaping and reshaping supplier performance. It
explains how a set of underlying forces impacts, shapes, and reshapes the supplier
performance information flow. Specifically, it contributes by identifying and defining the five generic dynamics: representing, reducing, amplifying, dampening, and directing, which function in supplier evaluation processes to impede and distort the communicated supplier performance information.
The major limitation of this research is that the findings are derived from only two cases. Taking an explorative theory-building approach, we achieved a high level of depth and detail in the retrieved data material, at the expense of a higher number of cases. Although many important dynamics have been identified, more dynamics may be identified when more cases are considered. Only after several studies in which no new dynamics emerge can the identified set be regarded as an exhaustive set of relevant dynamics. We therefore encourage future research to replicate this research with other cases, in order to see if more dynamics will be found, before any attempt at statistical generalisation is made. In addition, we propose a new
and more holistic direction for future research efforts into supplier performance
evaluation. The unit of analysis should be the supplier evaluation process that links
intentions of the evaluating buyer to the actual achieved motivational and behavioural
impacts on the evaluated supplier. Further, based on the five identified dynamics,
we propose that future research into supplier evaluation practices look outside the
operations management community and draw on at least three theoretical
directions:
(1) Organisational theory and concepts such as organisational design, power
structures inside organisations, functional identity, and culture (linked to
representing).
(2) Information systems research and the study of available data structures and the
behaviours and intentions of the actors using them (linked to reducing).
(3) Motivational theories and accounting literature offering theories on incentives
(linked to amplifying, dampening, and directing).
Our results are highly relevant for practitioners. By demonstrating how seemingly
harmless practices and involved actors may influence the functioning and impact of
supplier performance evaluation procedures, we have indicated where evaluating buying managers
should look when dysfunctional outcomes of supplier evaluation practices need to be
remedied. Adopting a process approach to supplier evaluation will help identify and
control possible undesirable dynamics along the activity chain linking the intentions of
the evaluating buyer to supplier performance improvement effects. In more detail, such a
process approach involves mapping all activities and involved actors, the observed or
hypothesised dynamics, and the actual observed supplier motivational impacts.
Despite the work and resources this approach requires, the cases in this
paper demonstrate that the effort should be worthwhile.

References
Andon, P., Baxter, J. and Chua, W.-F. (2007), Accounting change as relational drifting: a field
study of experiments with performance measurement, Management Accounting
Research, Vol. 18, pp. 273-308.
Araz, C. and Ozkarahan, I. (2007), Supplier evaluation and management system for strategic
sourcing based on a new multicriteria sorting procedure, International Journal of
Production Economics, Vol. 106, pp. 585-606.
Beamon, B.M. (1999), Measuring supply chain performance, International Journal of
Operations & Production Management, Vol. 19 No. 3, pp. 275-92.
Bourne, M., Neely, A., Mills, J. and Platts, K. (2003), Implementing performance measurement
systems: a literature review, International Journal of Business Performance Management,
Vol. 5 No. 1, pp. 1-24.
Bourne, M., Mills, J., Wilcox, M., Neely, A. and Platts, K. (2000), Designing, implementing and
updating performance measurement systems, International Journal of Operations &
Production Management, Vol. 20 No. 7, pp. 754-71.
Bourne, M., Kennerley, M. and Franco-Santos, M. (2005), Managing through measures: a study
of impact on performance, Journal of Manufacturing Technology Management, Vol. 16
No. 4, pp. 373-95.
Choi, T.Y. and Hartley, J.L. (1996), An exploration of supplier selection practices across the
supply chain, Journal of Operations Management, Vol. 14 No. 4, pp. 333-43.
Christopher, M. (1998), Logistics & Supply Chain Management, 2nd ed., Pitman, London.
Cousins, P.D., Lawson, B. and Squire, B. (2008), Performance measurement in strategic
buyer-supplier relationships: the mediating role of socialization mechanisms,
International Journal of Operations & Production Management, Vol. 28 No. 3, pp. 238-58.
Denzin, N.K. (1978), Sociological Methods: A Source Book, 2nd ed., McGraw-Hill, New York, NY.
Dumond, E.J. (1991), Performance measurement and decision making in a purchasing
environment, International Journal of Purchasing & Materials Management, Spring,
pp. 21-31.
Dumond, E.J. (1994), Making best use of performance measures and information, International
Journal of Operations & Production Management, Vol. 14, pp. 16-31.
Elg, M. (2007), The process of constructing performance measurement, The TQM Magazine,
Vol. 19 No. 3, pp. 217-28.
Elg, M. and Kollberg, B. (2009), Alternative arguments and directions for studying performance
measurement, Total Quality Management, Vol. 20 No. 4, pp. 409-21.
Evans, J.R. (2004), An exploratory study of performance measurement systems and relationships
with performance results, Journal of Operations Management, Vol. 22, pp. 219-32.
Forker, L.B. and Mendez, D. (2001), An analytical method for benchmarking best peer
suppliers, International Journal of Operations & Production Management, Vol. 21 Nos 1/2,
pp. 195-209.
Fugate, B., Sahin, F. and Mentzer, J.T. (2006), Supply chain management coordination
mechanisms, Journal of Business Logistics, Vol. 27 No. 2, pp. 129-61.
Gunasekaran, A. and Kobu, B. (2007), Performance measures and metrics in logistics
and supply chain management: a review of recent literature (1995-2004) for
research and applications, International Journal of Production Research, Vol. 45 No. 12,
pp. 2819-40.
Gunasekaran, A., Patel, C. and Tirtiroglu, E. (2001), Performance measures and metrics in a supply
chain environment, International Journal of Operations & Production Management,
Vol. 21 Nos 1/2, pp. 71-87.
Hopwood, A.G. (1987), The archaeology of accounting systems, Accounting, Organizations and
Society, Vol. 12 No. 3, pp. 207-34.
Kannan, V.R. and Tan, K.C. (2006), Buyer-supplier relationships: the impact of supplier
selection and buyer-supplier engagement on relationship and firm performance,
International Journal of Physical Distribution & Logistics Management, Vol. 36 No. 10,
pp. 755-75.
Kaplan, R.S. and Norton, D.P. (1992), The balanced scorecard - measures that drive
performance, Harvard Business Review, January/February, pp. 71-9.
Kennerley, M. and Neely, A. (2002), A framework of the factors affecting the evolution of
performance measurement systems, International Journal of Operations & Production
Management, Vol. 22 No. 11, pp. 1222-45.
Kuwaiti, M.E. (2004), Performance measurement process: definition and
ownership, International Journal of Operations & Production Management, Vol. 24
No. 1, pp. 55-78.
Lamming, R.C., Cousins, P.D. and Notman, D.M. (1996), Beyond vendor assessment:
relationship assessment programmes, European Journal of Purchasing & Supply
Management, Vol. 2 No. 4, pp. 173-81.
Light, D. (1983), Surface data and deep structure: observing the organization of
professional training, in Van Maanen, J. (Ed.), Qualitative Methodology, Sage, Beverly
Hills, CA, pp. 57-69.
McAdam, R. and Bailie, B. (2002), Business performance measures and alignment impact on
strategy - the role of business improvement models, International Journal of Operations
& Production Management, Vol. 22 No. 9, pp. 972-96.
Medlin, C.J. (2003), Relationship performance: a relationship level construct, Proceedings of the
19th IMP-Conference, Lugano, Switzerland.
Miles, M.B. and Huberman, M.A. (1994), Qualitative Data Analysis, 2nd ed., Sage, Thousand
Oaks, CA.
Morgan, C. and Dewhurst, A. (2007), Using SPC to measure a national supermarket chain's
suppliers' performance, International Journal of Operations & Production Management,
Vol. 27 No. 8, pp. 874-900.
Muralidharan, C., Anantharaman, N. and Deshmukh, S.G. (2001), Vendor rating in purchasing
scenario: a confidence interval approach, International Journal of Operations &
Production Management, Vol. 21 No. 10, pp. 1305-25.
Neely, A.D. (1999), The performance measurement revolution: why now and what
next, International Journal of Operations & Production Management, Vol. 19 No. 2,
pp. 205-28.
Neely, A.D., Gregory, M. and Platts, K. (1995), Performance measurement system
design, International Journal of Operations & Production Management, Vol. 15 No. 4,
pp. 80-116.
Neely, A.D., Richards, H., Mills, J., Platts, K. and Bourne, M. (1997), Designing performance
measures: a structured approach, International Journal of Operations & Production
Management, Vol. 17 No. 11, pp. 1131-52.
O'Toole, T. and Donaldson, B. (2002), Relationship performance dimensions of
buyer-supplier exchanges, European Journal of Purchasing & Supply Management,
Vol. 8, pp. 197-207.
Prahinski, C. and Benton, W.C. (2004), Supplier evaluations: communication strategies to
improve supplier performance, Journal of Operations Management, Vol. 22, pp. 39-62.
Prahinski, C. and Fan, Y. (2007), Supplier evaluations: the role of communication quality,
Journal of Supply Chain Management, Summer, pp. 16-28.
Purdy, L. and Safayeni, F. (2000), Strategies for supplier evaluation: a framework for potential
advantages and limitations, IEEE Transactions on Engineering Management, Vol. 47
No. 4, pp. 435-43.
Purdy, L., Astad, U. and Safayeni, F. (1994), Perceived effectiveness of automotive supplier
evaluation process, International Journal of Operations & Production Management,
Vol. 14 No. 6, pp. 91-103.
Quattrone, P. and Hopper, T. (2001), What does organizational change mean? Speculations
on a taken for granted category, Management Accounting Research, Vol. 12 No. 4,
pp. 403-35.
Ross, A., Buffa, F.P., Droge, C. and Carrington, D. (2006), Supplier evaluation in a dyadic
relationship: an action research approach, Journal of Business Logistics, Vol. 27 No. 2,
pp. 75-101.
Schmitz, J. and Platts, K.W. (2003), Roles of supplier performance measurement:
indication from a study in the automotive industry, Management Decision, Vol. 41
No. 8, pp. 711-21.
Simpson, P.M., Siguaw, J.A. and White, S.C. (2002), Measuring the performance of suppliers:
an analysis of evaluation processes, Journal of Supply Chain Management, Vol. 38 No. 1,
pp. 29-41.
Stuart, I., McCutcheon, D., Handfield, R., McLachlin, R. and Samson, D. (2002), Effective
case research in operations management, Journal of Operations Management, Vol. 20,
pp. 419-33.
Tan, K.C., Lyman, S.B. and Wisner, J.D. (2002), Supply chain management: a strategic
perspective, International Journal of Operations & Production Management, Vol. 22 No. 6,
pp. 614-31.
Teng, S.G. and Jaramillo, H. (2005), A model for evaluation and selection of suppliers in global
textile and apparel supply chains, International Journal of Physical Distribution &
Logistics Management, Vol. 35 No. 7, pp. 503-23.
Van Hoek, R.I. (1998), Measuring the unmeasurable measuring and improving performance in
the supply chain, Supply Chain Management, Vol. 3 No. 4, pp. 187-92.
Van Maanen, J. (1983a), Reclaiming qualitative methods for organizational research:
a preface, in Van Maanen, J. (Ed.), Qualitative Methodology, Sage, Beverly Hills, CA,
pp. 9-18.
Van Maanen, J. (1983b), The fact of fiction in organizational ethnography, in Van Maanen, J. (Ed.),
Qualitative Methodology, Sage, Beverly Hills, CA, pp. 37-55.
Vokurka, R.J., Choobineh, J. and Vadi, L. (1994), A prototype expert system for the evaluation
and selection of potential suppliers, International Journal of Operations & Production
Management, Vol. 16 No. 12, pp. 106-27.
Vonderembse, M.A. and Tracy, M. (1999), The impact of supplier selection criteria and supplier
involvement on manufacturing performance, Journal of Supply Chain Management,
Vol. 35 No. 3, pp. 33-9.
Voss, C., Tsikriktsis, N. and Frohlich, M. (2002), Case research in operations
management, International Journal of Operations & Production Management, Vol. 22
No. 2, pp. 195-219.
Waggoner, D.B., Neely, A.D. and Kennerley, M.P. (1999), The forces that shape organisational
performance measurement systems: an interdisciplinary review, International Journal of
Production Economics, Vols 60/61 No. 3, pp. 53-60.
Willis, T.H. and Huston, C.R. (1990), Vendor requirements and evaluation in a just-in-time
environment, International Journal of Operations & Production Management, Vol. 10
No. 4, pp. 41-50.
Wilson, E.J. (1994), The relative importance of supplier selection criteria: a review and update,
Journal of Supply Chain Management, Vol. 30 No. 3, pp. 35-41.
Wouters, M. and Sportel, M. (2005), The role of existing measures in developing and
implementing performance measurement systems, International Journal of Operations &
Production Management, Vol. 25 No. 11, pp. 1062-82.
Yin, R.K. (1994), Case Study Research, 2nd ed., Sage, Thousand Oaks, CA.
Corresponding author
Kim Sundtoft Hald can be contacted at: ksh.om@cbs.dk
