
PUBLIC ADMINISTRATION AND DEVELOPMENT

Public Admin. Dev. 22, 109-122 (2002)


Published online in Wiley InterScience
(www.interscience.wiley.com) DOI: 10.1002/pad.219

BENCHMARKING: A TOOL FOR FACILITATING


ORGANIZATIONAL LEARNING?^
RANDHIR AULUCK*
Centre for Management and Policy Studies, Ascot, UK

SUMMARY
This article looks at benchmarking as a tool for promoting performance improvement and the 'learning organization' ideal.
Specifically, it considers some of the ways the European Foundation for Quality Management Excellence Model and self-
assessment approach are being applied within the UK public service. Further, the article introduces 'Dolphin', a new self-
assessment tool based on the Excellence Model, and describes how the tool can be applied in practice. Finally, whether bench-
marking can aid organizational improvement, organizational learning and establish the basis for a 'learning culture' is
discussed. Copyright © Crown Copyright 2002. Recorded with the permission of the Controller of Her Majesty's Stationery
Office. Published by John Wiley & Sons, Ltd.

INTRODUCTION
Benchmarking and the learning organization have become an established part of public service vocabulary over the
past decade, as has the preoccupation with assessment, measurement, target setting, results-driven management
and performance improvement (Oakland, 1999). Indeed, these concepts feature visibly within the current UK modernizing government agenda (Modernising Government White Paper, 1999). As part of the continued drive for responsive, high-quality public services, the White Paper on Modernising Government proposes that: 'The Public Service must become a learning organization' (1999, p. 56).
We are reminded that in order to remain 'globally competitive' and 'ahead of the curve' in terms of managing and delivering high-quality, effective public services, we need to find better ways of harnessing 'knowledge', problem solving and facilitating individual and organizational learning. Benchmarking is presented as one of the key tools to help organizations become more 'learning oriented', to adopt a more systematic and rigorous approach to problem solving, and to become more engaged in learning from others. It can inform strategic planning and policy planning, support the development of new products and programmes, and enable organizations to be more responsive to change. In short, benchmarking can help organizations develop a better awareness of themselves, what they are doing, how they are doing it, how well it works, and how they compare to other organizations.
Many academics and practitioners suggest that benchmarking is now one of the most popular business management tools (Cox and Thompson, 1998; Ahmed and Rafiq, 1998). Also, the Modernising Government White Paper proposes that benchmarking be used by government organizations as a tool for promoting organizational and service improvement, and highlights the importance of sharing good ideas across the public service. To this end, the UK government has launched a Public Sector Benchmarking Programme to promote the use of the Business Excellence Model across the public sector. The Excellence Model is a powerful tool that assesses organizational performance, provides a framework for identifying improvement areas, and offers a process for continuous learning. In short, both benchmarking and the learning organization ideal are seen as 'institutional fairy-godmothers', which offer potential to improve organizational performance in the public sector. Having said this,

*Correspondence to: Randhir Auluck, Centre for Management and Policy Studies, Sunningdale Park, Larch Avenue, Ascot, Berkshire SL5 0QE, UK. E-mail: randhir.auluck@college-cmps.gsi.gov.uk
^Neither Her Majesty's Stationery Office nor the Centre for Management and Policy Studies, Civil Service College accepts any responsibility for the accuracy of any recipe, formula or instruction published in this article or in the journal.

Copyright © Crown Copyright 2002. Recorded with the permission of the Controller of Her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.

both benchmarking and the learning organization are not without their critics and for some they remain a manage-
ment fad.
This article will discuss the ways in which the Excellence Model and the Self-Assessment approach are being
deployed within the UK public service. The article will also describe how the Centre for Management and Policy
Studies' (CMPS) Benchmarking Database Service works and introduces Dolphin, a new self-assessment frame-
work based on the Excellence Model, developed and operated by CMPS. Finally, I will argue that benchmarking
can aid organizational improvement and organizational learning, and establish the basis for a 'learning culture', but
only if implemented effectively.

ORGANIZATIONAL LEARNING AND THE LEARNING ORGANIZATION


Definitions of the concept of the learning organization abound. The UK Industrial Society captures the essence of
the concept as follows:
A learning organization is one which continually transforms itself. The process of transformation is a creative one in which a willingness to change and adapt to its needs exists. (Industrial Society, 1997, p. 3)
Garvin elaborates on this and suggests:
[A learning organization is] skilled at creating, acquiring and transferring knowledge, and at modifying its
behaviour to reflect new knowledge and insights. (Garvin, 1993)
Garvin suggests that there are five distinguishing features of a 'learning organization': systematic problem solving; experimentation and the testing of new knowledge; learning from experience; learning from others; and shared knowledge and knowledge-spreading mechanisms (Garvin, 1993). Arie de Geus in The Living Company emphasizes the capacity to change as a determining feature of a learning organization. He argues that in order for an organization to cope with the demands of a changing world, including globalization, competitiveness and recognition of the need for breakthrough improvements: 'any entity must develop new skills and attitudes: in short, the capability to learn... the essence of learning is the ability to manage change by changing yourself' (de Geus, 1999, p. 27).
Other authors highlight different but related features, including:
1. Learning climate—the way the idea of learning is viewed within the organization; whether it is seen as functional and passive or developmental and integrated with workplace performance (Smith and Taylor, 2000, p. 197).
2. Leadership—for example, Schein argues that leaders play a key role in this process in that they determine the behaviours the organization considers acceptable: 'If the leaders of today want to create organizational cultures that will themselves be more amenable to learning they will have to set the example by becoming learners themselves and involving others in the learning process' (Schein, 1997, p. 392).
3. Communication, team-working, learning from mistakes and continuous learning, and single- and double-loop learning (IRS, 1998).
A notable amount of the literature on organizational learning emphasizes the importance of double-loop learning. However, Hill argues that while many organizations can and do achieve single-loop learning, double-loop learning—the more valuable learning achieved through questioning and challenging the norm, and specifically the strategic norm—is less evident and harder to achieve (Hill, 1996, pp. 19-25). Finally, Smith and Taylor (2000, p. 196) offer a useful summary of the key features of the learning organization ideal (see Figure 1).

THE LEARNING ORGANIZATION IDEAL AND THE PUBLIC SERVICE


The UK public service is actively being coaxed into embracing the principle of the learning organization. The Modernizing Government White Paper spells out the government's aspiration for the public sector to move toward the learning organization ideal and links this shift to the use of benchmarking:


Top managers' behaviour: Learning role models vs. Anti-learning
Relationship with the external environment: Formative vs. Reactive
Conducive structures: Flexible teams vs. Status bound
Fitness of work processes: Continuous review vs. Unchallengeable
Manager's role: Facilitation vs. Close control
How information is found and used: Captured and shared vs. Opportunistic and lost
Learning climate: Self-development through work vs. Training for certainty

Figure 1. Conceptual framework for the learning organization ideal (each dimension runs from a learning-oriented pole to an anti-learning pole). From Smith and Taylor, 2000.

The public sector must become a learning organization. It needs to learn from its past successes and failures. It needs to consistently benchmark itself against the best, wherever that is found. (Modernising Government White Paper, 1999, Part 6)
Nevertheless, there are some commentators who question the applicability of the learning organization 'template' within the UK public sector. For example, Wallace (1997) suggests that the concept of the learning organization is not a useful notion to apply to the public service because of a range of potentially constraining factors, for example fixed structures, regulation, a tradition of non-participative policy making and the expectations of employees to act rather than to learn (Wallace, 1997).
Smith and Taylor refer to the work of Edmonstone (1990), writing about his experience in the NHS, in which he suggests that the applicability of the learning organization concept in the public sector is necessarily limited because of the fragmented structures of thought that accompany a hierarchy of fixed roles, the idea of accountability and the consequent view of mistakes, uncertainty and conflict, and the distance between espoused theories and the theories in use, ambiguity over purposes, task obsession and related short-termism, the absence of rewards for risk taking and the nature of the training culture.
Also, Smith and Taylor themselves challenge the transferability of the learning organization ideal within the public service. Their own research questions the extent to which the application of the learning organization ideal to Civil Service organizations is limited by structural or cultural features common to them (Smith and Taylor, 2000).

BENCHMARKING: WHAT IS IT?


A 'benchmark' is a reference or measurement standard used for comparison. Benchmarking is the continuous process of identifying, understanding and adapting practices and processes that will lead to better performance. Because of the UK public service's perennial preoccupation with 'performance', the emphasis given to 'performance improvement' implicit in benchmarking has contributed to the technique gaining considerable endorsement within this sector. Talbot (2000) offers an informative exploration of the concept of 'performance', especially as defined by 'official scripts' within the public sector. He argues that there is, in fact, little conceptual clarity about what is meant by 'performance' and that this poses a number of complexities in any performance measurement exercise. This undoubtedly highlights one of the fundamental challenges of benchmarking: what precisely is it that you are trying to benchmark?
A spectrum of insight into what constitutes benchmarking is available. According to Oakland, the concept is based on the ancient Chinese quotation: 'If you know your enemy and know yourself, you need not fear the result of a hundred battles' (Sun Tzu, The Art of War, 500 BC, reported in Oakland, 1999). Thurow (1999) describes it as 'copying-to-catch-up'. Ahmed and Rafiq suggest that it is about learning how to improve business activity, processes and management (Ahmed and Rafiq, 1998, p. 228). Also, an IRS Management Review suggests that it is 'considered one of the best management techniques to guard against complacency and support continuous improvement' (Gooch and Suff, 1999, p. 6).
The rise and development of benchmarking are well documented (Camp, 1989; Watson, 1993). Watson (1993),
in an analysis of the historical development of benchmarking, suggests that it is moving from an art to a science. He
further suggests that, in doing so, it has moved through a number of distinct 'development generations'. However,
Ahmed and Rafiq (1998, p. 228) argue that while Watson suggests that the 1990s was the era of strategic and global
benchmarking, they believe that strategic benchmarking is still relatively uncommon and that strategic and global
approaches remain, to a large extent, aspirations for the future.
There is an array of different ways of cataloguing the different types of benchmarking. These can basically be
summarized into three approaches:
1. Comparing outputs, or measures, from different organizations. These can be quantitative, e.g. cost, price,
response time or error rates. They can also be qualitative, e.g. customer satisfaction levels, employee satisfac-
tion levels.
2. Assessing against a level of performance or standard which defines 'best practice' or a range of working prac-
tices and policies. It might be a published standard or a known standard, such as a quality organization.
3. Undertaking a detailed examination of the processes which produce a particular output, through internal and comparative analysis, with a view to understanding the reasons for differences in performance levels, and drawing out best practice.
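The first of these approaches, comparing quantitative output measures, can be sketched in a few lines of code. The organizations and figures below are invented for illustration; the point is simply that each organization's gap against the best observed value yields a crude but immediate benchmark.

```python
# Hypothetical sketch of output benchmarking (approach 1): compare one
# quantitative measure across organizations and express each one's gap
# against the best performer. All names and figures are invented.

# Average response time to enquiries, in days (lower is better)
response_times = {"Dept A": 12.0, "Dept B": 5.0, "Dept C": 8.5}

best = min(response_times.values())  # best-in-class value becomes the benchmark

for org, days in sorted(response_times.items(), key=lambda kv: kv[1]):
    gap_pct = (days - best) / best * 100  # percentage behind the benchmark
    print(f"{org}: {days} days (+{gap_pct:.0f}% vs benchmark)")
```

A real exercise would first need to confirm that the measure is defined the same way in each organization, which is precisely the definitional problem about 'performance' that Talbot (2000) raises.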
The first two approaches focus on outputs and standards and are thus concerned with the desired end result. Process benchmarking differs in that it is concerned not only with what is achieved but also with how it is achieved (Jakes, 1998).
Benchmarking is firmly rooted in the quality movement. David Longbottom (2000) reviewed over 460 articles
on benchmarking. From his analysis, he concludes that benchmarking as a process is similar to the quality cycle
presented by Deming involving a continuous process of plan, do, check and act. Furthermore, together with others,
he observes that the critical common elements of any benchmarking are:
• Planning: involves internal scrutiny, analysis of strengths and weaknesses, flow charting of processes, measurement of performance, preparation for benchmarking.
• Analysis: involves identifying potential partners, exchange of information, site visits and observations of process.
• Implementation: involves adaptation of processes and implementation.
• Review: involves review and repeat as part of the continuous improvement philosophy.
Motwani (2001) agrees with this approach and specifically emphasizes the importance of building top management support, developing an understanding of what benchmarking entails, training every member of the project team, and applying sound project management techniques.
Similarly, Jackson Grayson (1992) offers a useful set of pointers for successful benchmarking—his five 'must
haves':
1. Senior management support—'Not lip service but action. Not applause but participation' (p. 140).
2. Integrated with organizational strategy—'It requires thinking concretely about what customers want, under-
standing the critical success factors and the key processes required to achieve them' (p. 140).
3. Benchmarking must be a team activity—need to involve process owners as partners or you risk information
rejection.
4. Benchmarking must be planned, organized and managed: with goals, measures and monitoring.
5. You must understand your own process—'probably the single most common mistake in benchmarking is to begin benchmarking with someone else before understanding your own process'. This means knowing how you do what you do and what your problems and strengths are, and knowing what you want to learn when you talk to others; otherwise you risk the 'industrial tourism' syndrome.


According to the IRS Management Review 'Benchmarking Best Practice', the choice of approach is likely to be
crucial to the success of the exercise. It suggests that benchmarking exercises that examine processes are more likely
to lead to performance improvement since they examine the reasons for performance gaps. It also suggests that:
• Internal benchmarking is a good starting point, but can be limiting.
• Benchmarking against direct competitors is useful, but difficult particularly at a process or strategic level.
• The best results come from functional benchmarking where comparisons of similar processes are made between
comparable, but not competing, organizations.
• Long-term benchmarking partnerships offer the greatest potential for continuous improvement.
Government, not just private sector companies, has also been keen to promote benchmarking as a way of enabling public services to become more efficient and responsive. For example, it published its 'Best Practice Benchmarking' document in 1995 (DTI publication) and in 1996 the Cabinet Office collaborated with 10 departments and agencies to examine how benchmarking could improve their human resource management (Clulow, 1997, pp. 32-34). More recently, the Modernizing Government White Paper (1999) includes a commitment to benchmark service delivery and policy functions and endorses the importance of peer review. The UK government has also set up the Public Sector Benchmarking Service and is encouraging the take-up of the Excellence framework.

BENCHMARKING TOOLS
There are a variety of frameworks and tools for carrying out benchmarking. This article will describe two that are
being used widely within the UK public sector; namely, peer reviews and the Business Excellence Model.

Peer Reviews
Many organizations use peer review as a tool to support and develop their organization. It can be used to bring new
insights into the organization, help to identify priorities and facilitate the sharing of good practice. The Cabinet
Secretary's report to the Prime Minister on Civil Service Reform (Cabinet Office, 1999) took this further by commit-
ting Civil Service departments to carrying out independent reviews of their business planning through peer groups or
with outside organizations. Two Performance and Innovation Unit (UK Cabinet Office) reports, 'Wiring it Up' and
'Adding it Up' (Cabinet Office, 2000b), also recommend the use of peer review as a means of examining the quality of
analysis and modelling in policy making and so to help spread good practice and stimulate improvement.
Peer review uses people who have practical experience of a particular area or situation to act as 'friendly critics'
to another organization facing similar issues. Some uses of peer review in the public and private sector are listed in
Figure 2.
The Modernizing Government White Paper mandated the Centre for Management and Policy Studies (CMPS) to organize a programme of peer review as a tool to help civil service organizations implement the principles and priorities of the reform agenda. In response, CMPS has developed departmental peer review as one form of the External Challenge model (Cabinet Office, 2000a). Departmental peer review is not an audit but a learning process for the organization. It serves to complement but does not replace other self-assessment and scrutiny tools such as the Excellence Model and Investors in People. It provides an opportunity for the organization to see itself as others see it, benefit from the feedback of experienced practitioners who work in or run other organizations, and share good practice. Furthermore, departmental peer review can serve as a flexible learning tool and one which can enable an organization to strengthen its business planning, policy making and service delivery processes.
The review is carried out over a 1-week period during which the team takes a high-level look at, for example, how the organization is tackling the modernization agenda, performing across all or a significant part of its responsibilities or tackling a key project or area of work. The review team usually comprises six to eight people and can be drawn from different organizations and sectors. Briefly, the review team meets before the review week to agree the approach; the review culminates in a report and presentation of findings to the Department's Management Board.
CMPS's role is to provide advice, guidance and support to departments during the preparation process, throughout
the review week and finally by following up and evaluating the entire review process.


Mutual Challenge: involving two or more organizations reviewing each other and assumes the organizations involved have
a degree of confidence in each other. West Sussex and Surrey County Councils have established such a relationship.
Internal Challenge and Peer Assist: this is essentially concerned with sharing experiences. Someone in a parallel job
within the organization is asked to help prepare, advise on, or challenge a proposal from a perspective of direct experience
in that issue. BP uses this model extensively at all levels of the organization.
External Challenge: peers from outside the organization conduct a review by examining key documents and asking chal-
lenging questions of staff and stakeholders. The results are then discussed with management. This approach enables orga-
nizations to benefit from an unbiased interpretation of what others think of them and to hear ideas about how they might
improve.
Review Against Set Criteria: an example of this is the Local Government Improvement Programme. This approach sets a benchmark of an ideal local authority. A review team is then involved in using this framework to assess the organization against these standards.
Peer Groups: this involves bringing together people within a department who have similar tasks or who are at similar stages
in a project life cycle, to ensure continuous exchange of good practice and mutual support. Examples of this include the UK
Ministry of Defence and the Inland Revenue.
Peer Review of Papers: this is used to review scientific and academic papers prior to publication. It can be used to challenge
papers coming to boards and is an approach used by the UK Audit Commission.
Peer Review of IT-Dependent Projects: this uses practitioners from within an organization, or drawn from a group of organizations across departments, to perform targeted scrutiny of a project elsewhere in the organization. Such reviews are conducted at key stages in the project life cycle. The adoption of this form of review is a key recommendation for UK government IT projects.

Figure 2. Types of peer review. From Peer Review: A Guide, CMPS, 2000a.

CMPS commissioned a review, now complete, of its Departmental Review Programme as delivered to six departments in 2000/2001 (Cabinet Office, 2001a). This concludes that overall the process works well and that recipient departments valued its speed and insights. The process contributed to focusing on priorities and served to further galvanize existing change programmes. However, there were concerns that the process as currently practised does not provide sufficient time for discussion of 'best practice'. Some discussion of best practice did occur but fell short of a detailed exchange.

Business Excellence Model


In 1992, the European Foundation for Quality Management (EFQM) launched a European Quality Award frame-
work as a tool for reviewing and identifying organizational improvements. Basically, the Excellence Model pro-
vides a set of criteria against which any organization can assess itself and use the framework to identify any 'gaps'.
Each criterion is weighted, and these weightings are now widely used for self-assessment scoring and in making award submissions. The EFQM and the British Quality Foundation (BQF) publish useful guidelines for self-assessment, including some specifically directed at the public sector, and for award submissions.
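As a rough illustration of how weighted criteria combine into a single score, the sketch below uses the commonly cited 1999 EFQM criterion weights (1000 points in total); the per-criterion percentage scores are hypothetical, and a real assessment would score sub-criteria rather than whole criteria.

```python
# Illustrative sketch of Excellence Model scoring: each criterion's
# percentage score is weighted and summed into a total out of 1000.
# Weights are the commonly cited 1999 EFQM values; the assessment
# figures below are hypothetical.

WEIGHTS = {
    "Leadership": 100,
    "Policy and Strategy": 80,
    "People": 90,
    "Partnerships and Resources": 90,
    "Processes": 140,
    "Customer Results": 200,
    "People Results": 90,
    "Society Results": 60,
    "Key Performance Results": 150,
}  # sums to 1000 points

def overall_score(percent_scores):
    """Combine per-criterion percentage scores (0-100) into a weighted
    total out of 1000."""
    return sum(WEIGHTS[c] * percent_scores[c] / 100 for c in WEIGHTS)

# A hypothetical organization scoring 50% on every criterion
assessment = {criterion: 50 for criterion in WEIGHTS}
print(overall_score(assessment))  # 500.0
```

As most practitioners advise, the total matters less than the criterion-level strengths and areas for improvement that lie behind it.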
Other strengths of the model are that it:
• provides a comprehensive framework of organizational assessment;
• can be deployed flexibly and in a way that matches organizational need;
• is based on self-assessment—which means the organization owns the process and the outputs;
• enables comparisons within and across the public and private sectors;
• is a pragmatic tool rooted in the experience of practitioners;
• links with other quality improvement programmes such as the Baldrige scheme, and with other tools, e.g. the
Balanced Scorecard;
• provides a common language and framework for organizational assessment and improvement planning;
• promotes openness and transparency.


The Excellence Model of self-assessment can include a number of activities, such as:
• discussion groups/workshops;
• surveys, questionnaires and interviews;
• pro formas;
• activity or process audits;
• an award simulation (Cabinet Office, 2001b).
Self-assessment can be used at various points in the organization's development. For example, it can be used at the beginning of an improvement programme to identify areas for improvement and to establish priorities. It can also be used periodically, yearly or every 2 years, to provide comparative data and to steer the improvement programme. It can take place at the organization level or at a unit/team level.
Most practitioners recommend that the emphasis should be on understanding the organization's strengths and
areas for improvement rather than on scoring. Organizations that are just at the early stages of engaging with the
framework are usually advised not to depend too heavily on the scoring. The scoring mechanism can provide some
quantitative measurement (even if only in an indicative sense) and the very process of gaining consensus can pro-
mote useful discussion about the challenges facing the organization. It can also serve to engage senior managers.
However, the health warning is for organizations to avoid becoming overly obsessed with the scoring and not to lose sight of the areas for improvement. The risk is that the organization expends large amounts of energy on the self-assessment process and then feels too fatigued to implement any of the improvement plans.

LEVEL OF TAKE-UP OF BENCHMARKING AND THE EXCELLENCE MODEL


According to Zairi and Ahmed (1999, pp. 810-816), benchmarking has reached maturity within the UK, with over 60% of UK companies claiming some involvement. It has been reported as being the third most popular management technique worldwide and the fourth in the UK between 1992 and 1996. It is also reported that 65% of central government and 30% of local government organizations have taken up the Business Excellence Model (Benchmarking Human Resource Activity, 1999, pp. 13-14).
In February 2000 the UK Cabinet Office commissioned PricewaterhouseCoopers (PWC, 2000) to undertake an evaluation of its Public Sector Benchmarking Programme (Cabinet Office, 2001b). This programme is a collaboration between the Cabinet Office and HM Customs and Excise. It serves as a cooperative learning facility for the public sector and is set up to act as the focal point for managing knowledge on benchmarking in the public sector. It aims to promote organizational learning across the public sector. It also aims to increase the availability of information on applying benchmarking and sharing good practice. As part of the aforementioned evaluation study, PWC looked at the impact of the Business Excellence Model and its effectiveness as it is being applied within the UK public sector.
The PWC survey comprised a sample of 3500 public sector organizations and to date this represents the largest
study of 'excellence' in the UK public sector. The study shows that there has been an explosion in the level of use of
the Excellence Model in the British public sector. The study estimates that 44% of public sector organizations are
now using the model—more than two-thirds of them starting since 1998. It is interesting to note that according to the
Public Sector Excellence Programme team, interest in the model is growing even more rapidly: they report providing
information to over 500 organizations new to the model since the PWC research was conducted. What this suggests is
that with this level of new uptake a significant number of public sector organizations are at an early stage of maturity
in applying the model. This also suggests that the immediate scope of knowledge sharing through, for example, pro-
cess benchmarking might be limited at present (Public Sector Excellence Programme, Support Pack, 2001, p. 1).

BENCHMARKING: REPORTED BENEFITS AND CONSTRAINTS


The benefits of benchmarking are reported to be potentially very significant. For example, the UK CBI estimates
that if UK manufacturers secured the average best practice levels achieved by leading competitors such as those in
the USA, the UK's gross domestic product would increase by around £60 billion (CBI, 1997).


In terms of the Excellence Model, according to Benchmarking Human Resource Activity (1999), over 90% of
users report that their rate of improvement has increased as a direct result of using the tool. Specific reported ben-
efits include, for example:
• promoting an organizational dialogue about 'how things are' and 'what needs to change'; this gives people an
outlet for their experiences and thinking, throughout different parts of the organization; and
• encouraging innovation and exchange of ideas.
The PWC evaluation survey (2000) reports that 81% of the users surveyed believed that the model was already an
effective tool in their organizations. Almost all agreed that long-term use of the model would help them in achiev-
ing continuous improvement and consequently improve front-line service to customers. Furthermore, 85% of those
surveyed indicated that the model assisted them in linking together key policies and initiatives. The UK govern-
ment's modernization agenda identifies better performance management and better business planning, and 'joined-
up' practices as key strategic and operational imperatives. The PWC study reports that many public sector orga-
nizations are using the model as the overarching framework to help them fulfil this requirement.
The fact that the model is not prescriptive and can easily be adapted to reflect the culture and language of any
given organization was reported as being one of its key strengths.
Fifty-six per cent of survey respondents were current users, and most of these took up the model in the past 3-4 years. This suggests that, because many organizations have only been using the model for a relatively short period of time, they have not yet had sufficient time to link improvements in performance with their use of the model. 'Maturity in using the Model comes over a number of years, conducting internal and externally validated self-assessments and fully deploying the Model across the organization' (PWC, 2000, p. 3). It is interesting
to note the level of deployment—the PWC study indicates that most of the users only managed to deploy the
model across one-quarter of their organization, and few public sector organizations believe that the model is fully
integrated within their organization.
Also, the PWC report suggests that most organizations see its main value as a diagnostic tool for improvement rather than a scoring or performance-measuring device, and there is a fear that the government might seek to enforce compulsory scoring of organizations and generate 'name and shame' league tables (PWC, 2000,
p. 6). Furthermore, lack of leadership commitment (both at an executive and political level) was seen as the
key barrier to driving forward excellence. Initiative overload was also cited as a significant barrier to achieving
excellence.

THE EXCELLENCE MODEL IN PRACTICE


There are many accounts of how the Excellence Model has been deployed within specific public service organizations (Morling and Tanner, 2000; Zaremba and Crew, 1995). The following are offered as three examples.
Jackson (1999) provides a useful account of how one clinical directorate of a National Health Service
(NHS) Trust used self-assessment and the excellence framework to secure a culture of continuous improvement.
Jackson describes how Huddersfield NHS Trust employed these tools to move it from a hierarchical, bureaucra-
tic, individualistic culture to one where the norms, values and beliefs reflected teamwork, involvement and
empowerment.
One of the clinical support services sub-directorates started its self-assessment project in 1993 as part of a wider
quality improvement programme. Jackson reports that between 1993 and 1998 this organization set up 26 quality
improvement initiative groups involving 100 (50%) of the staff. On the value of the Excellence Model, Jackson
reports: 'the self-assessment model proved particularly useful in that it encouraged staff to assess [the Unit] as a whole, before areas for action were prioritized' (p. 62) and '[the organization now has] a sense of identity, an outward-looking approach, and an appreciation of the contribution the whole trust makes towards the delivery of healthcare' (p. 63). Other reported benefits include the organization becoming better at identifying objectives for each improvement project and at action planning. Jackson also highlights the importance of leadership involvement, effective communications and integration with strategy (p. 63).

Copyright © Crown Copyright 2002. Recorded with the permission of the Controller of
Her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd. Public Admin. Dev. 22, 109-122 (2002)

Jackson describes some initial resistance to change, with some people within the organization using defensive arguments ranging from 'the time isn't right for such a change' to 'lack of resources'. She reports that this type
of resistance was managed by careful consideration of timing, keeping people informed about the change process,
increasing feelings of involvement and highlighting opportunities the process generates. Jackson advises against
'biting off more than you can chew' (p. 62): it can be tempting to agree to take on a range of improvement initiatives in the initial rush of enthusiasm for the process, and failure can be demoralizing. Finally, she confirms that 'a common language of assessment' is important, that having a team familiar with the model is helpful, and that in choosing projects for action it is useful to have a mix of 'quick wins' as well as those with a longer payback time.
Secondly, Hooper (1997) provides a useful account of the route the UK Post Office took on its large-scale 'total
quality' programme, and how benchmarking was used as part of this programme. This included a benchmarking
'trip' to the USA by several dozen senior managers, a comprehensive staff attitude survey, an 18-month training
cascade delivered on a learn-use-lead format and use of the excellence framework as a benchmarking tool. Hooper
observes that while these efforts did lead to many benefits (motivation of staff, clear identification of the importance of internal and external customers, alignment of individual, team and organizational objectives, the development of a culture of continuous learning, and a greater understanding of the need for clear leadership) the
programme was not without its difficulties. He states, for example, that early attempts to identify and spread best practice proved a challenge. Specifically, there was an evident lack of consistency in understanding the assessment process and how to use the excellence framework, and this led to differing interpretations and deployment within business units.
Thirdly, in 2000, the UK Civil Service College directorate of the Centre for Management and Policy Studies
used the Excellence Model self-assessment as a framework for internal benchmarking. The college consciously
chose to adopt the Excellence Model as a change management tool. It decided to use the Excellence Model to help
it consider the ways in which it needed to change and to help to create the conditions that would make change
effective. It was also seen as a tool that would support the 'better business planning' imperative spelt out by
the Civil Service modernization agenda.
The process used by the college comprised five stages: briefing and defining the approach, data generation and
data gathering, assessor rating, consensus meeting and action planning and reporting (see Figure 3). This process
culminated in the production of a lengthy 'wish-list' of organizational changes from which a small number were
taken forward for detailed action planning and implementation. The latter included the development of a new
5-year strategic plan and a number of 'quick-wins'. Over a year later, the questions that remain include: What
are the specific tangible improvements achieved as a result of this exercise? Can any such improvements be
directly and causally linked to the self-assessment exercise? What has been the impact of the exercise on 'bot-
tom-line' results like customer satisfaction or level of income generation? Such an analysis remains outstanding,
and therefore one could argue that the impact of the self-assessment really remains untested.
Furthermore, the college would need to carry out another cycle of self-assessment using the same assessment
framework within the near future to identify specific changes and improvements made during the intervening per-
iod. As yet, there is no plan for this to happen.

DOLPHIN: AN EXCELLENCE TOOL FOR SELF-ASSESSMENT


The Civil Service College used Dolphin, a new self-assessment framework developed and operated by the college.
Dolphin is essentially a diagnostic tool that helps organizations assess themselves, using the Excellence Model as a framework. Organizations can use it to diagnose their strengths and weaknesses, develop an action plan to
improve operational services, benchmark results with other public sector organizations, and tap into the Knowl-
edge Pool series of good practice reports. Dolphin is particularly useful for those organizations new to the process
of self-assessment or those taking the first steps towards benchmarking. An electronic and interactive version of
Dolphin was launched in June 2001 and is now available on the Internet.
Dolphin forms part of the benchmarking Database Service operated by the college. This service, established in
1996, is part of an integrated package of support that provides advice about self-assessment, access to Dolphin, and


1. Defining the Approach and Briefing


This started with the appointment of an internal lead assessor and an external assessor/consultant, and included agreeing the assessment team membership and the self-assessment process itself. A team of 10 assessors was identified, representing a
vertical cross-section of the organization, to form the Self-Assessment Team. The assessors were then put through a short
briefing session to ensure there was clarity about the process, its purpose and methodology and expected benefits. Use was
made of Dolphin, an electronic self-assessment pro forma designed by the college and based on the Excellence Model.
2. Data Gathering
Each assessor was given a short questionnaire to use as a basis for seeking information from colleagues throughout the orga-
nization. The questionnaires were disseminated to about 100 members of staff across all sections and levels of the organization.
3. Individual Assessor Rating
Each assessor used the data generated by the questionnaires and discussions with staff to complete the individual pro forma,
'Dolphin' Section A. The team was given 3 weeks to consult with staff and to complete the Dolphin pro forma. An additional
week was allocated for them to return their electronic versions of Dolphin to a central point for data collation. Assessors'
ratings and qualitative examples and comments were collated and coded, and a composite rating and assessment was pro-
duced. Clearly, sufficient time needed to be allowed for the assessors to do the data gathering and to complete Section A of
Dolphin.

4. Consensus Meeting
The aim of the one-day consensus meeting was to enable the Self-Assessment Team to agree an overall rating and qualitative
assessment using their individual assessments. This process was facilitated by an external consultant/adviser who had also
been at the initial briefing meeting and who had been available to the Self-Assessment Team for ongoing advice and support.
It is interesting to note that the Self-Assessment Team did try to come up with a consensus-based team score for each of the criteria. However, the assessors varied so significantly in their individual ratings that consensus building proved difficult. On some of the criteria the spectrum of scoring was so wide it seemed difficult to believe that the assessors had all been assessing the same organization. Ultimately, this breadth of scoring rendered the consensus scoring meaningless and the team agreed to abandon the process part of the way through. No attempt was made to
agree an overall score for the organization. This suggests that 'hard scoring' might in itself be of little value (unless of course
you are going for an Award Submission) and highlights how difficult it can be to reconcile differing individual assessor perspectives. Such a process, if it is to have any real value, will be time-consuming, and the assessors need clarity from the outset about what constitutes 'sufficient evidence' to merit one score or another. The temptation can be to 'rush
through' decisions about the scoring in order to 'just get the thing done'.

5. Action Planning and Reporting


The consensus meeting also examined the priorities for improvement and agreed recommendations (the key improvement
areas). The findings and recommendations were incorporated into a report and an action plan, which were put up to the
Business Executive. The recommendations and action plan were linked into a variety of other organizational systems and
processes including an on-going College Change Management programme. The findings have also been fed into a Strategic
Planning programme and a number of other Project Groups.

Figure 3. The Civil Service College self-assessment approach.

offers information about master class events, quality networks and conferences as well as access to monthly
Knowledge Pool reports. Basically, organizations submit the results of their self-assessment to the Database
Service and can then benefit from the comparative information held on other public sector organizations. The
method of self-assessment used is not important. Organizations can use a range of self-assessment tools, from rapid score to award submissions, as well as, of course, Dolphin; they are all database compatible.
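The collation of individual ratings in stage 3 of Figure 3, and the scoring spread that derailed the stage 4 consensus meeting, can be sketched in a few lines. This is an illustrative sketch only: the criteria shown, the 0-100 rating scale and the spread threshold are assumptions for illustration, not part of Dolphin itself.

```python
from statistics import mean

# Illustrative ratings (0-100) from five individual assessors for three
# Excellence Model criteria; a real Dolphin pro forma covers all nine.
ratings = {
    "Leadership":       [55, 60, 25, 70, 40],
    "People":           [45, 50, 48, 52, 46],
    "Customer Results": [30, 75, 20, 80, 35],
}

SPREAD_THRESHOLD = 30  # assumed cut-off beyond which consensus is unlikely

for criterion, scores in ratings.items():
    composite = mean(scores)             # stage 3: composite rating
    spread = max(scores) - min(scores)   # stage 4: how far apart assessors are
    status = "needs discussion" if spread > SPREAD_THRESHOLD else "near consensus"
    print(f"{criterion}: composite {composite:.0f}, spread {spread} ({status})")
```

A wide spread, as the college found, can make a single team score meaningless; flagging such criteria before the consensus meeting at least focuses discussion on where assessors actually disagree.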

How Dolphin works


Assessment using Dolphin is essentially based on a team approach. It is important to begin the process by deter-
mining whether the whole organization or a single business unit is to be assessed and to appoint a planner/coordi-
nator of the assessment process. The next step is to identify six to eight team members from a diagonal slice of the
organization. Each team member then individually completes Section A of Dolphin. Section B is then completed


as a team. This is then used to compile the action plan. Section B can be returned to the college for those organiza-
tions wishing to participate in the Database Service.
It is advisable for the assessment team to have some top management representation (securing commitment from the top is essential), to include a diagonal slice from throughout the organization and to include any specialist units. External involvement can also be helpful. The role of the assessment team is to collect evidence and to reach
consensus about the evidence, the organization's strengths and areas for improvement, to score (if appropriate), to prioritize the areas for improvement (AFIs) and to develop an action plan. It is important for the assessment team to keep the real goal in mind, that of making improvements, and it is necessary to manage expectations raised within the organization about the kinds of outputs and changes the process will deliver.
The Dolphin self-assessment process may be summarized as follows:
1. Develop commitment
2. Plan the self-assessment exercise
3. Establish and train the self-assessment team
4. Communicate plans
5. Conduct self-assessment
6. Determine action plan
7. Implement action plan
8. Review implementation
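As a minimal sketch of steps 5 and 6 above, the fragment below derives a short list of areas for improvement (AFIs) from hypothetical consensus scores by picking the lowest-scoring criteria. The scores and the limit of three priorities are illustrative assumptions; real prioritization also weighs strategic fit, not scores alone.

```python
# Hypothetical consensus scores (0-100) against the nine Excellence
# Model criteria; in practice these come from the self-assessment team.
consensus_scores = {
    "Leadership": 62,
    "Policy and Strategy": 55,
    "People": 40,
    "Partnerships and Resources": 58,
    "Processes": 35,
    "People Results": 44,
    "Customer Results": 50,
    "Society Results": 65,
    "Key Performance Results": 48,
}

N_PRIORITIES = 3  # keep the action list deliberately short

# Step 6: the lowest-scoring criteria become the priority AFIs
# feeding the action plan.
afis = sorted(consensus_scores, key=consensus_scores.get)[:N_PRIORITIES]
print("Priority areas for improvement:", ", ".join(afis))
```

Capping the list reflects the experience reported throughout this article: a small number of focused priorities, mapped out in depth, tends to achieve more than a lengthy wish-list.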

BENCHMARKING: PROBLEMS AND CONSTRAINTS


Having said all this, effective benchmarking is not that easy in practice. Thurow advocates 'learning from the best'
but argues that this is not always easy to do, given that most societies resist copying:
The process starts by having to admit that there is something to leam. Someone else does it better. Believing
that if it wasn't invented here, it can't be worth copying is a universal human failing. (Thurow, 1999, p. 93)
Those engaged in benchmarking can encounter many challenges, especially if they are not thoroughly prepared or
simply see it as a quick fix to organizational problems. Inadequate understanding of benchmarking and of one's
organizational processes is also a common difficulty and one that can limit the effectiveness of any benchmarking
(Motwani, 2001; Davies, 1998; Cox and Thompson, 1998; Campbell, 1999). Gathering data that is comparable
with one's own organization and identifying suitable benchmarking partners can also present problems. Talbot
(1999) presents a very cogent examination of the constraints of benchmarking, and in particular the value and lim-
itations of private-public sector benchmarking. He argues, for example, that specific benchmarks and assessment
criteria can be played out very differently in the private sector compared to the public sector. To illustrate, Talbot
suggests that criteria such as 'financial results' are likely to be translated as 'input' and 'expenditure' in a public
sector organization and less likely to be translated as 'output' or 'income'/'profit' as in the case of a private sector
organization. These kinds of differences in framing such assessment criteria can introduce difficulties in cross-
sector benchmarking.
Furthermore, benchmarking methods have been subject to challenge by those advocating alternative
approaches. For example, Hammer and Champy (1994) and Davies (1998) (proponents of Business Process
Re-engineering) argue that modern-day organizations are facing such a rapidly changing environment, driven
by technology leaps and the globalization of markets, that traditional approaches to process improvement, such
as benchmarking, are too slow and incremental. They argue for a more radical 'clean-sheet' approach: start from a
clean sheet, brainstorm, eradicate and obliterate processes to create a whole new way of doing things based upon
optimizing the use of technology. Longbottom suggests that this and other such critiques may only serve to expose
the difficulties in gaining total support for benchmarking (Longbottom, 2000, p. 101).
The Business Excellence Model is based on the premise that by improving the 'how', improved results (the
'what'), will follow. Some sceptics have questioned how well self-assessment describes the relationship between


results and enablers, and some report that early users of the model are disenchanted and that bottom-line results are
not always obvious (Seddon, 1998).
Longbottom's own study of the status of benchmarking within the UK indicates that some very successful pro-
jects have been implemented and that the level of interest among practitioners is high, especially within the man-
ufacturing sector. Longbottom, however, expresses surprise at the lack of penetration into other sectors. This study
suggests that some still consider benchmarking to be a routine activity largely based on common sense. 'This
means in practice they do not attach great importance to applying it as a strategically focused research tool for
delivering continuous improvements. There is little evidence of a systematic or rigorous approach in these circum-
stances. Comparisons between organizations tend to be ad hoc and informal' (Longbottom, 2000, p. 106).
Notably, this study reports particularly strong pressure to undertake benchmarking projects within the public
sector. 'However, there are also very strong concerns expressed to us concerning levels of understanding and
underlying motivations' (Longbottom, 2000, p. 106).
On the issue of the types of benchmarking projects undertaken, the Longbottom study reports that most of the respondents undertook internal projects (54%) compared with competitor projects (32%) and generic projects
(14%). Respondents within this study also report problems in negotiating suitable external partners and reluctance from within their own ranks to share information. Furthermore, this study found only a few projects had progressed into process analysis involving site visits and open sharing of best practice. Respondents report lack of confidence and lack of time and resources as blocks (p. 109). Other observations from the study include the following: the focus of benchmarking projects is narrow and selection criteria lack rigour; very few could be described as generic or strategic in nature; few extend to consider supplier, distributor or customer processes; and selection of projects largely arises from benchmarking champions, the need to update equipment and technology, or reaction to rising costs/falling profits.
We find very little evidence to show that organizations are identifying and prioritizing projects based on their
corporate and strategic planning process. The result is that selection is largely ad hoc and even where the
project is deemed to be successful by the participants, overall results in terms of impact on the company's performance may be sub-optimal and, in some cases, even be considered a misuse of resources. (Longbottom,
2000, p. 109)
The study indicates that a high proportion of those that had taken part in a benchmarking project rated the project
as successful in meeting or exceeding the performance improvement measures set. Ninety-six per cent of projects were national and only 4% international. The low volume of international collaborations is surprising given the advance
of new technologies and the globalization of markets.

CONCLUDING COMMENTS
Benchmarking is essentially a continuous improvement tool. It compares processes and practices, challenges exist-
ing approaches and identifies improvement goals. It uses a structured approach to identify what needs to change,
how it can be changed and the benefits of the change. While the concept of benchmarking is used to refer to many
different activities, there are a number of common features. These include:
• Self-assessment
• Measurement by comparison
• Learning and continuous improvement
Benchmarking, however simplified the process, is an aid to organizational learning. In fact, the more simplified the process, the easier it is for people to engage with it and not become overloaded by a cumbersome framework. Benchmarking provides a systematic process for organizational learning. The very act of getting a cross-section
of people from within and outside the organization to think about and to question, consciously and deliberately,
'how things are within the organization', and to do this within a structured framework, can in itself trigger
increased awareness—awareness of 'the need to make things better' and 'how to make things better'—and


actively engages people in a learning process. The art of enabling people to critically challenge (to reflect on and to
debate honestly and rigorously) basic assumptions about the way they do things, why they do things in a certain
way, and to explore the possibility of better altematives, can achieve several things. It can promote awareness of
'the taken for granted' organizational processes and systems, widen and deepen understanding of how things work
for real (and not just what is laid out in any manual), and can build commitment and momentum for change.
The process can make an impact on the way in which people think about their world or work, and how they
engage with work processes. The real challenge is how to embed this 'philosophy' into everyday organizational
moments, seconds, transactions and movements; that is, how to transfer the learning into individual and organiza-
tional practice. Based on the studies referred to in this article and my own practical experience, I would suggest
that it is important to take account of the following issues in order to get the best out of any benchmarking exercise:
1. Reality check: benchmarking can, where it is applied sensibly, hold up a mirror to the organization (one of those
magnifying mirrors that gives a zoomed-in view). However, the process does need to be undertaken with a
degree of integrity, openness and honesty (otherwise the view is likely to be subject to distortion and the
'Mirror, mirror, who is the fairest of them all?' syndrome). Some may feel uncomfortable about the presented
picture and might reframe it into a more positive 'Oh, it's not that bad round here really'; driven by organiza-
tional loyalty, some might not wish to paint what they consider to be an overly negative picture. While this is
understandable, this tendency might give an inaccurate picture and can only limit the potential for forward
movement.
2. Need for a communication strategy: clearly, effective consultation, communication and feedback are essential
ingredients in securing broader support and promoting wider learning about improvements, and can support
learning transfer.
3. Unwavering and consistent leadership commitment: those in leadership positions need to give active support to and participate in the process. They also need to be clear about what is involved and why they want the organization to embark on the activity, and to hold and communicate realistic expectations about what the process will deliver.
4. Skill and know-how development: those who are involved in, and especially those leading, the benchmarking process need to have a detailed understanding of how the process works. This can be achieved through training
events and practical exposure to organizations that have used the approach over a period of time.
5. Less is more: while it might be tempting to produce a great number of exciting recommendations for improve-
ment action, this can end up being rather a paper exercise and lead to 'institutional paralysis'. It is usually more
helpful to identify a small number of focused priorities for action and to ensure that these are then mapped out in
depth and implemented successfully. These priorities also need to be harmonized with and aligned to any other
relevant, parallel initiatives within the organization.
6. Benchmarking is resource intensive and as such needs adequate resourcing: its impact is likely to be limited if the process is rushed through or truncated.
Finally, benchmarking can be a powerful tool for organizational learning and continuous improvement. However, a
shallow understanding of organizational reality and a short-cut type engagement with benchmarking is likely to
produce only shallow end results and short-lived improvement efforts.

REFERENCES

Ahmed PK, Rafiq M. 1998. Integrated benchmarking: a holistic examination of select techniques for benchmarking analysis. Benchmarking 5(3): 225-242.
Ball A, Bowerman M, Hawksworth S. 2000. Benchmarking in local govemment under a central govemment agenda. Benchmarking 7: 20-34.
Benchmarking Human Resource Activity. 1999. Consortium Report 13-14.
Cabinet Office. 1999. Civil Service Reform: Report to the Prime Minister from Sir Richard Wilson, Head of the Home Civil Service. Cabinet
Office: London.
Cabinet Office. 2000a. Peer Review: A Guide. Centre for Management and Policy Studies. Crown Stationery Office: London.
Cabinet Office. 2000b. Wiring it Up and Adding it Up: Performance and Innovation Unit Reports. Crown Stationery Office: London.
Cabinet Office. 2001 a. Evaluation of Departmental Peer Review. Centre for Management and Policy Studies. Crown Stationery Office: London.


Cabinet Office. 2001b. Public Sector Excellence Programme Support Pack. Effective Performance Division, Modernising Public Services
Group. Crown Stationery Office: London.
Camp RC. 1989. Benchmarking: the search for industry best practices that lead to superior performance. Quality Press.
Camp RC. 1995. Why benchmarking improves business performance. Inside UK Enterprise Newsletter 1: 4.
Campbell L. 1999. Tailored not benchmarked. Harvard Business Review March/April: 41-49.
CBI. 1997. Processing power: fit for the future: how competitive is UK manufacturing? Manufacturing Brief. CBI: London.
Clulow C. 1997. Processing power. People Management 32-34.
Cox A, Thompson I. 1998. On the appropriateness of benchmarking. Journal of General Management 23(4): 1-20.
Davies P. 1998. The burgeoning of benchmarking in British local government: the value of learning by looking in the public services. Bench-
marking for Quality Management & Technology 5(4): 260-270.
de Geus A. 1999. The Living Company. Nicholas Brealey: London.
DTI. 1995. Best Practice Benchmarking. DTI: London.
Edmonstone J. 1990. What Price the Learning Organisation in the Public Service. In Pedler M, Burgoyne J, Boydell T, Welshman G (eds). Self-Development in Organisations. McGraw Hill: London.
EFQM. 1999. Assessing for Excellence: A Practical Guide for Self-Assessment. EFQM. Crown Stationery Office: London.
Garvin DA. 1993. Building a learning organization. Harvard Business Review July-August: 78-91.
Gooch R, Suff P. 1999. IRS Management Review 14: Benchmarking Best Practice. IRS Eclipse Group: London.
Hammer M, Champy J. 1994. Re-engineering the Corporation: A Manifesto for Business Revolution. Nicholas Brealey: London.
Hill R. 1996. A measure of the learning organization. Industrial & Commercial Training 28: 19-25.
Hooper G. 1997. Post Office Counters Ltd: The Quality Journey. Total Quality Management 8: 187-190.
Industrial Society. 1997. Learning Organisations: Managing Best Practice, Vol. 33. Industrial Society: London.
IRS. 1998. IRS Management Review: Using Human Resources to Achieve Strategic Objectives. Learning Strategies Review Issue 8. IRS Eclipse Group: London.
Jackson S. 1999. Achieving a culture of continuous improvement by adopting the principles of self-assessment and business excellence. Inter-
national Journal of Health Care Quality Assurance 59-64.
Jackson Grayson C. 1992. Taking on the world. TQM Magazine 139-143.
Jakes N. 1998. Benchmarking for quality training and development programmes. Innovation through Learning Seminar Report. Konrad-Adenauer-Stiftung: Germany.
Kanji GK. 1998. Measurement of business excellence. Total Quality Management 9: 633-643.
Longbottom D. 2000. Benchmarking in the UK: an empirical study of practitioners and academics. Benchmarking 7: 98-117.
Modernising Government White Paper. 1999. Crown Stationery Office: London.
Morling P, Tanner S. 2000. Benchmarking: a public service business management system. Total Quality Management 22(4-6): 417-426.
Motwani J. 2001. Viewpoint. International Journal of Quality & Reliability Management 18(3): 234-236.
Naylor G. 1999. Using the Business Excellence Model to develop a strategy for a healthcare organisation. International Journal of Health Care Quality Assurance 37-44.
Oakland JS. 1999. Total Organizational Excellence: Achieving World-Class Performance. Butterworth-Heinemann: Oxford.
PWC (Price Waterhouse Coopers). 2000. Preface: Report on the Evaluation of the Public Sector Excellence Programme. PWC: London.
Schein E. 1997. Organisational Culture and Leadership. Jossey-Bass: San Francisco, CA.
Seddon J. 1998. The Business Excellence Model: will it deliver? Management Services: 8-9. Institute of Management Services: UK.
Smith KD, Taylor WGK. 2000. The Learning Organisation ideal in civil service organisations: deriving a measure. The Learning Organization
7(4): 194-205.
Talbot C. 1999. Public performance: towards a new model. Public Administration 14: 3.
Talbot C. 2000. Performing 'Performance': a comedy in five acts. Public Money & Management October-December: 63-68.
Thurow LC. 1999. Building Wealth. Harper Collins: London.
Wallace L. 1997. Learning at the whole organisation level. In The Learning Organisation in the Public Services, Cook JA, Staniforth D, Stewart J (eds). Ch. 12. Gower: Aldershot, UK.
Watson GH. 1993. Strategic Benchmarking: How to Rate your Company's Performance against the World's Best. Wiley: Chichester.
Wilson G. 1998. The impact of the European Quality Award model on organizational performance: a Northern Ireland perspective. Total Quality Management 9: 237-240.
Zairi M, Ahmed P. 1999. Benchmarking maturity as we approach the next millennium. Total Quality Management Journal 4(5): 810-816.
Zairi M, Hutton R. 1995. Benchmarking: a process-driven tool for quality improvement. TQM Magazine 7(3): 35-40.
Zairi M, Whymark J. 2000. The transfer of best practices: how to build a culture of benchmarking and continuous learning. Part 1. Benchmarking 7: 62-78.
Zaremba D, Crew T. 1995. Increasing involvement in self-assessment: the Royal Mail approach. TQM Magazine 7(2): 29-32.

