
Performance measurement maturity in a national set of universities

Abstract
Purpose – Performance measurement in higher education has attracted substantial attention,
often focussing on the applicability and value of performance measurement concepts to the
sector. The purpose of this paper is to use components of a seven-element maturity model to
examine the development of performance measurement maturity in New Zealand universities
in the period 2008-2013.
Design/methodology/approach – Documentary analysis was the primary approach. A total of
48 annual reports were examined. The focus was the statement of service performance, but all
surrounding material was also examined. Each annual report was subjected to a range of
quantitative and semi-quantitative analyses.
Findings – Universities have shown strengths in aligning measures to strategic direction, the
quality of commentary, and improvement in the use of outcomes frameworks. More variable
results have been seen in the breadth and quality of measures, and most importantly, in the
use of performance information to guide institutional decision-making. This lack of evolution
is likely to be linked to the particular accountability relationships surrounding the
universities, which, while part of the public sector, are semi-autonomous. It is also likely to be
linked to academic organisational culture.
Originality/value – There have been few examinations of the use of performance
measurement by universities, with most studies focussing less on operational practice than on
broader theoretical issues. This study provides useful information about the actual use of
performance measurement.
Keywords Performance measurement, Performance management, Organizational culture, Higher education
Paper type Research paper

Introduction
This study examines the development of performance measurement maturity in a national set
of universities. Over the past three decades, public sector performance measurement has
emerged as a topic of both substantial academic interest and practical importance. The
introduction – or imposition – of performance measurement in higher education has not been
without controversy. There has been substantial debate about the relevance of practices
derived from the broader public sector to the field (Johnes and Taylor, 1990; Cave et al.,
1997; Cullen et al., 2003; Broadbent, 2007; Broadbent and Laughlin, 2009; Chitty, 2009;
Skolnik, 2010). A range of studies have examined the rationale behind the introduction of
performance measurement in specific institutions and educational jurisdictions (Niklasson,
1996; Dill, 1997, 2007; Gaither, 1997; Sanders et al., 1997; Stein, 1997; Jongbloed and
Vossensteyn, 2001; Orr et al., 2007; Hicks, 2008; McLendon et al., 2008; Feller, 2009;
Ginsberg, 2011; Kallio and Kallio, 2014). Few studies, however, have focussed directly on
the degree of maturity shown in the performance measurement practices of those institutions
– simply put, how well universities use performance information. This study aims to provide
a small contribution to this area by focussing on New Zealand’s eight universities.
The primary focus of this study is the degree of change in practice – what has occurred –
rather than the reasons for the change – the why. It is essential to understand the what before engaging with the why. This information is valuable to academics and policymakers alike: for academics, it provides newly tilled ground to explore with more potent conceptual tools focussed on the why; for policymakers, it provides additional evidence from which to develop practice.

This study begins by noting the broader literature on public sector performance measurement,
and from this synthesises a public sector performance measurement maturity model
encompassing seven primary elements. It then identifies the specific New Zealand context,
including governance relationships. Components of the model are then used to evaluate the
degree of evolution in performance measurement displayed by New Zealand’s universities in
the period 2008-2013 inclusive. This period was chosen as it encompassed several major
efforts by governmental agencies to improve public sector, and specifically tertiary
education, performance measurement. The primary methodology utilised is documentary
analysis, focussing on the 48 annual reports produced by the universities in the period. Both
quantitative and qualitative lenses are utilised. University names are anonymised using the
letters A to H to avoid distractions or accusations of bias. This documentary analysis is then
used to identify a number of overarching trends, as well as to inductively develop two
hypotheses for further testing.
This study is introductory in nature and, in its use of documentary analysis, provides little
insight into the internal use of performance measurement. Despite its limitations, it allows for
some tentative conclusions to be reached about the general standard of performance
measurement maturity in New Zealand universities, conclusions that may further our
understanding of the field in a more global sense.
Performance measurement in higher education
Performance measurement has both narrow and broad definitions; the former focus on the use of qualitative and quantitative indicators to measure activities and achievements (Ghobadian and Ashworth, 1994; Wang, 2002); the latter encompass not merely measurement, but also the use of performance information to control and manage (Kloot and Martin, 2000; Broadbent and Laughlin, 2009; Bisbe and Malagueno, 2012; Bititci et al., 2012).
This study uses the broader approach. While “management by numbers” probably dates back
to the development of numeracy, performance measurement as a more formal concept
became an increasingly important issue in the western world through the 1980s and 1990s
(Carter et al., 1992; Schick, 1996; Fleming and Lafferty, 2000; Proppers and Wilson, 2003;
Taylor, 2009; Moynihan and Pandey, 2010; Van Dooren et al., 2010; Hood, 2012; Lewis,
2015). Performance measurement was seen as a way to improve the effectiveness and
efficiency of bureaucratic public sector organisations (van Sluis et al., 2008; Forrester, 2011;
McAdam et al., 2011), as well as improve their accountability (Poister and Streib, 2005).
Higher education soon absorbed this new trend, with tertiary performance measurement emerging in the UK (Johnes and Taylor, 1990; Yorke, 1991), the USA (Burke and Freeman, 1997; Cunningham, 1997; McLendon et al., 2008), Spain (García-Aracil and Palomares-Montero, 2010), Germany (Orr et al., 2007), Ireland (Irish Higher Education Authority, 2013), and Australia (Campbell and Siew Haw, 2012; Freeman, 2014), amongst others. The
adoption of performance measurement in disparate geographical areas was often driven by
similar factors: a relaxation of central control over finance, a move from elite to mass
education, changes from industrial to knowledge-based economies, and increased funding
pressures (Dill, 1997; Layzell, 1999; Cullen et al., 2003; Martin and Sauvaugeot, 2011;
Duque, 2013).
Elements of these conditions remain today, with many tertiary systems around the world
experiencing continued funding pressures (Mitchell et al., 2014; Morgan, 2015).
Performance measurement in tertiary education has attracted a range of critiques. Some have opposed what
are seen as quasi-market and neo-liberal approaches intruding into a non-market field (Curtis,
2008; Forrester, 2011; Kallio and Kallio, 2014). Focussing on issues of power and control,
some have queried the choice of performance measures, and thus the underlying conception
of higher education quality being utilised (Dill, 1997; Cullen et al., 2003). The frequent focus
on research performance has been criticised for its effect on quality, autonomy and
motivation (Hicks, 2008; Feller, 2009; Kallio and Kallio, 2014).
Other criticisms have been more technical, focussing on the ability of relatively simple
performance indicators to validly measure performance in so complex an environment as
education (Owlia and Aspinwall, 1996; Australian Government, 2012). While there is
insufficient space in this paper to adequately consider the validity of these various arguments,
there is evidence in later sections that suggests they have had an influence on the practice of
performance measurement.
A synthetic performance measurement maturity model
In order to trace the evolution of
performance measurement maturity within public sector organisations, it is first necessary to
have some criteria against which to judge such evolution – a maturity model. While such a
model is not readily available, the literature is full of normative recommendations from which
one can be developed (Grizzle, 1982; Flynn, 1986; Smith, 1990, 1995b; Yorke, 1991;
Ghobadian and Ashworth, 1994; Ammons, 1995; Flapper et al., 1996; Kravchuk and Schack,
1996; Blumstein, 1999; Kloot and Martin, 2000; Kelly and Swindell, 2002; Behn, 2003;
Proppers and Wilson, 2003; Ferreira and Otley, 2009; Choong, 2013, 2014; NZ Office of the
Auditor General, 2013). The author has integrated these recommendations into a synthetic
model, a tool that can be used for the critique of any public sector performance measurement
framework. The synthetic model adopts a purposive and rational conception of performance measurement (Boyne and Chen, 2006; Taylor, 2009), rather than a symbolic one (Roy and Segun, 2000; Modell, 2004).
The model consists of seven elements, organised into three groups, against which evaluation
is conducted. These elements and groups were derived from the literature and tested for ease
of use and explanatory power; multiple revisions were undertaken before the final set was
chosen. This scope and depth allow for a more nuanced exploration of the specifics of
performance measurement usage than is usually encountered in the literature, which is more
focussed on issues of adoption and implementation (de Lancer Julnes and Holzer, 2001;
Henri, 2006; Moynihan and Pandey, 2010). The following sections briefly summarise these
groups and elements.
Group one: use and results
Group one is concerned with the use of performance information by the organisation, and the
results that are achieved from this use.
Element one: usage of performance information in decision-making including trade-offs
For a performance framework to facilitate organisational actions, performance information must be used by decision-makers. Specifically:
A mature performance measurement framework uses performance information to guide
organisational decision-making, including the development of budgets.
Element two: strategic alignment and prioritisation
Not all organisational actions are necessarily conducive to the achievement of organisational
purpose and creation of public value. Performance information should be linked to those
actions of greater importance. Specifically:
A mature performance measurement framework will clearly link performance measures to
strategic direction, including the explicit prioritisation of those measures deemed more
strategically important.
Group two: key design elements
Group two is concerned with the specific design of the framework and its inherent logic and
value.
Element three: use of outcomes framework
The outcomes framework, sometimes termed a “production model”, “intervention logic” or
“logic model”, is a common tool that links outcomes and impacts (the effects generated in the
external environment) to outputs (goods and services produced by the reporting entity) and
to processes (internal activities) and inputs (resources consumed). Specifically:
A mature performance measurement framework will use a logical outcomes framework to
show the resources consumed, services delivered, and results achieved by the actions of the
reporting entity, and thus the value generated for broader society, community, and economy.
Element four: variety, comprehensiveness and quality of measures
Public sector organisations are complex. Too few measures may provide an appearance of
coherence and focus, but may ignore major areas of organisational activity. Both quantitative
and qualitative measures are required. Poorly chosen measures may provide little insight into
organisational actions, or may cause perverse behaviour by staff (Smith, 1995a; Proppers and
Wilson, 2003; McLean et al., 2007). Specifically:
A mature performance measurement framework will utilise a variety of valid, purposive performance measures covering the full scope of organisational action, and will combine
financial and non-financial measures to make better sense of complexity. The framework will
select measures that are less prone to gaming and other perverse behaviour.
Element five: depth and insight of commentary
Performance measures by themselves, even if well-chosen, may provide insufficient
information for decision-making. Commentary is often essential to the interpretation of
performance information. Specifically:
A mature performance measurement framework incorporates rigorous, analytical
commentary to make sense of trends, achievements, and to contextualise quantitative data.
Group three: key shapers
Group three is concerned with elements outside the performance framework itself, but which heavily affect its utility.
Element six: internal ownership of performance framework
If staff own and understand the performance framework, measures are more likely to reflect
activities and results of substantive rather than symbolic value, and staff are less likely to
engage in perverse behaviour. Collaborative development may ensure this ownership.
Specifically:
A mature performance measurement framework is well understood and accepted by staff,
which may require collaborative development of the framework.
Element seven: accurate and timely underlying data
Without accurate underlying data, any performance measurement framework is fatally
flawed. Specifically:
A mature performance measurement framework includes accurate and timely underlying data
capable of addressing the full range of measures required.
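For readers who wish to operationalise the model, the following is a minimal sketch (in Python, purely illustrative rather than part of the study's instrumentation) of how the three groups and seven elements could be encoded for a scoring exercise; the group and element names are taken from the sections above, while the data structure itself is an assumption.

```python
# Illustrative encoding of the synthetic maturity model described above.
# The group/element names follow the paper; the structure is a hypothetical
# convenience for scoring exercises, not the author's actual instrument.
SYNTHETIC_MATURITY_MODEL = {
    "Group one: use and results": [
        "Element one: usage of performance information in decision-making including trade-offs",
        "Element two: strategic alignment and prioritisation",
    ],
    "Group two: key design elements": [
        "Element three: use of outcomes framework",
        "Element four: variety, comprehensiveness and quality of measures",
        "Element five: depth and insight of commentary",
    ],
    "Group three: key shapers": [
        "Element six: internal ownership of performance framework",
        "Element seven: accurate and timely underlying data",
    ],
}

if __name__ == "__main__":
    for group, elements in SYNTHETIC_MATURITY_MODEL.items():
        print(group)
        for element in elements:
            print(f"  - {element}")
```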

The New Zealand context


New Zealand’s eight universities are semi-autonomous parts of the broader public sector,
formally defined as Crown entities. Three key government bodies have governance relationships with them. The Ministry of Education (MoE) sets national tertiary education strategy, but
the universities are primarily funded via the Tertiary Education Commission (TEC). The
Office of the Auditor General (OAG) is interested in the universities’ use of public money
and reported performance information.
New Zealand was one of the first adopters of New Public Management models in its public
sector in the 1980s and 1990s, including performance measurement (Boston, 1996; Schick,
1996). However, until the mid-2000s, little attention was paid to the performance of tertiary
institutions, including universities. In 2006, the Performance Based Research Fund was
implemented, focussing on research performance (Curtis, 2008). This was followed by
baseline monitoring reports and efforts to develop common indicators across the tertiary sector
(University C, 2009; University D, 2009). After a 2008 publication by the OAG stating a
desire to improve the standard of public sector performance reporting (NZ Office of the
Auditor General, 2008), the TEC and New Zealand Vice Chancellors’ Committee (NZVCC)
developed university-specific outcome frameworks in early 2010 (Eng, 2015). In the
intervening period, the first set of educational performance indicators (EPIs), which provided
standard, consistent data, was published (Tertiary Education Commission, 2009).

Methodology
Given the developments noted above, it was decided to focus on the period 2008-2013, as
this would encompass the period of major governmental intervention. As there was no
existing literature available, a method suitable to the exploration of new ground was
necessary.
Documentary analysis, a common approach in policy analysis, history, and related fields, was chosen; it has proven a useful tool for the examination of performance measurement use in
other fields (Hatry, 1978; Usher and Cornia, 1981).
It was decided to focus primarily on university annual reports published during the evaluation
period. Under the Education Act 1989, the annual report is the university’s primary external
accountability document. Each annual report is required to include a statement of service
performance (SSP) that provides performance information for the calendar year. It was
thought that annual reports would likely represent the most advanced performance
measurement practice undertaken by the universities, given that the documents are externally audited, publicly released, and subject to substantial guidelines governing their production. It
was realised that only five of the model elements could be adequately tested through a
documentary approach, with little information on elements six and seven likely to be
discovered. However, this shortcoming was regarded as acceptable, given the introductory
nature of the study. A total of 48 annual reports were examined, comprising more than 1.64
million words. The primary focus was the SSP, but all surrounding material was also
examined. Each annual report was subjected to a range of quantitative and semi-quantitative
analyses (of the SSP unless stated otherwise) derived from the synthetic model (an illustrative coding sketch follows the list):
• page length;
• word count;
• number of quantitative measures;
• number of measure-years (the sum, across all measures, of the number of years of reported data provided for each measure);
• use of outcomes framework (tri-fold categorisation: no – measures not fitted to outcomes
framework; partial – some effort made to link measures; yes – stated explicitly and measures
arranged);
• link to strategic plan (tri-fold categorisation: no – not in the SSP or elsewhere; partial –
measures linked but not explicitly; yes – measures linked explicitly);
• relative position of SSP and financials;
• costing of services (tri-fold categorisation: no – no effort to link costs with outputs; partial –
some effort made to link costs with outputs; yes – costs and outputs linked); and
• breadth of measures (tri-fold categorisation: few – cover minority of activities; some –
cover a majority of activities; full – cover entire scope of university activities and results).
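As a concrete illustration of how such an analysis could be tabulated, the sketch below (Python, hypothetical; the SSPAnalysis record, its field names, and the example values are assumptions for illustration, not data from the study) encodes the metrics and tri-fold categorisations listed above as a single record per annual report.

```python
# A minimal sketch, assuming a Python-based tabulation of the documentary
# analysis above. Field names mirror the listed metrics; the tri-fold
# categories are encoded as simple string enumerations. The example values
# are placeholders, not figures from the study.
from dataclasses import dataclass

TRIFOLD = ("no", "partial", "yes")   # outcomes framework, strategic link, costing of services
BREADTH = ("few", "some", "full")    # breadth of measures

@dataclass
class SSPAnalysis:
    university: str                # anonymised, A to H
    year: int                      # calendar year reported on
    page_length: int
    word_count: int
    quantitative_measures: int
    measure_years: int             # sum of years of reported data across all measures
    outcomes_framework: str        # one of TRIFOLD
    strategic_link: str            # one of TRIFOLD
    ssp_before_financials: bool    # relative position of SSP and financial statements
    service_costing: str           # one of TRIFOLD
    breadth_of_measures: str       # one of BREADTH

# Placeholder record showing how one annual report might be coded.
example = SSPAnalysis(
    university="A", year=2011, page_length=15, word_count=6200,
    quantitative_measures=40, measure_years=120,
    outcomes_framework="partial", strategic_link="yes",
    ssp_before_financials=True, service_costing="no",
    breadth_of_measures="some",
)
```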
Each annual report was also subjected to qualitative analysis via the extraction of key themes
that aligned to the elements of the synthetic model. This was done through close reading of
the documents, initial categorisation and then revision of qualitative categories. This
qualitative analysis allowed for the identification of nuance that escaped the quantitative
tools. The approach taken allowed for substantial breadth of analysis – all universities could
be examined for the entire period. However, there were some shortcomings with the method.
It was anticipated that annual reports would likely show a more advanced approach to
performance measurement than might be representative of internal practice. It
was also anticipated that internal performance practices would be invisible, except where
reflected in the reports themselves, which might itself be an inaccurate reflection. A third
potential issue, bias, was discounted as the focus was on performance measurement rather
than performance per se; as such, any inflation or manipulation of results in the reports
(unlikely given the auditing process) would have no bearing on the analysis conducted for
this study.
Findings
The following sections present findings against each of the elements examined (one through
five). Each section includes tables of data as well as qualitative observations.
Element one: usage of performance information in decision-making including trade-offs
While an analysis of annual reports cannot provide comprehensive insight into this element,
some information can be gained. In particular, the relative positioning of SSPs may indicate
the importance accorded to service performance information.
Potentially indicating a growing positive attitude towards the importance of service
performance information, there has been a slight trend towards positioning the SSP before
financial statements (Table I), although a majority already did so. It is also illuminating to
consider whether SSPs (or other parts of the annual report) include service costing
information, essential for the consideration of trade-offs. The evidence (Table I) suggests that
universities do not possess sufficient information about the relative costs of service to
adequately consider trade-offs, a point also noted by the OAG (NZ Office of the Auditor
General, 2012, 2013). This might be compared to other parts of the public sector, such as the
police, where different output costs are clearly specified, allowing for the consideration of
different output mixes (New Zealand Police, 2011).
Element two: strategic alignment and prioritisation
Annual reports provide useful information about the alignment of performance measures to
strategic plans, as shown in Table II.

The frequency of alignment has remained consistent during the evaluation period (Table II),
with a slight shift from those only showing partial alignment to those showing full alignment.
A common practice classed as partial alignment is exemplified by University H, which
arranges measures by strategic plan themes in its SSP but does not make the links explicit.
This might be contrasted with University G, which in its 2009 SSP explains the link between
strategic direction, objectives, and specific performance measures (University G, 2010). A
similar approach is taken by University C in linking key strategic areas, strategic targets, and
key performance indicators (University C, 2009, 2010). There was an absence of clear
prioritisation. While “Research” might be listed as a strategic goal, it was seldom clear which
were the associated priority performance targets, such as research outputs or citation scores.
There was no explicit weighting of performance targets in any of the annual reports.
Element three: use of outcomes framework
Annual reports display an interesting disjuncture between the rhetoric used, which is often
focussed on societal outcomes, and the more formal presentation of performance information.
Table III shows the use of outcomes frameworks to present performance information. There
is a clear increase in use of outcomes frameworks from 2011 annual reports onwards (Table
III), likely linked to the work done by the TEC and NZVCC, but still only half of the universities use them in any form, and only one does so fully. The timing of adoption is particularly
interesting. The outcomes framework was agreed in early 2010, but not used until 2011
annual reports, which were produced in early 2012. This year-long delay may indicate a desire
by universities to first incorporate the framework into planning documents before utilising it
as a performance reporting tool.
This lack of formal attention might be compared to statements in the reports, such as that a
particular university contributes to desired economic, social, and environmental outcomes
(University H, 2012); that another wishes to ensure all activities “contribute to New
Zealand’s economic and social transformation goals” (University G, 2010, p. 5); that a third
is “committed to meeting the needs of New Zealand and New Zealanders” (University E,
2009, p. 1); and that a fourth has a key focus of public contribution, including New Zealand’s
culture, society, and economy (University D, 2009). Alongside this rhetoric, concepts
derived from the outcomes framework emerge in various forms – University F mentions it
from 2009, but never utilises it for performance information; University E uses the
terminology from 2010, but does not link it to measures; and University C utilises the
concepts correctly from 2010 but does not use a visual framework until 2013.
It seems clear that the language of outcomes frameworks has confused many, despite
substantial national guidance (Public Finance Act, 1989; NZ Office of the Auditor General,
2002, 2008, 2012, 2013). The term “outcome” is often linked to generic “results”, and the
term “output” is often solely linked to research outputs. Some stated intermediate outcomes
(impacts) are inputs, such as the quality of staff and institutional reputation (University G,
2013), or funding received (University C, 2011). Others are outputs, such as the quantum of
research produced (University F, 2009).
Element four: variety, comprehensiveness and quality of measures
Annual reports provide useful information about the types of measures used by universities,
much of which can be assessed quantitatively.
The breadth of measures figure reflects the degree to which the SSP provides performance
information across the scope of the organisation’s activities. There has been very little change
in the breadth of measures reported (Table IV). One university (C) has retrenched
considerably from a very comprehensive approach; another (D) has slightly expanded. The
rest have remained consistent in their approach. Focus areas include research and teaching,
but there are few measures related to community service or support services. This lack of
attention to the full breadth of organisational activities has occurred despite government
guidance to the contrary (NZ Office of the Auditor General, 2013).
SSPs are largely dominated by input and process measures. There is also a consistent use of
particular output measures, notably course and qualification completion, and research
outputs. There are few impact and outcome measures. Input measures such as enrolments are
difficult to game without deliberately corrupt behaviour (Guerin, 2015; Hunter, 2015), but
output measures such as course and qualification completion are more vulnerable, and could
be gamed through reduced evaluation standards (Tobenkin, 2011; Bachan, 2015). Research
outputs, because of the intermediating nature of an external peer-reviewer, are less prone to
gaming, though it is possible (Elder, 2012; Tertiary Education Commission, 2015).
From 2008 to 2012, there was a slight decline in the mean, and a larger decline in the median
number of measures reported (Table IV); from 2012 to 2013 there was an even more sizeable
decline. In some cases this was due to the rationalisation of similar measures into single
categories; in others, it was due to the elimination of entire categories of measures.
Universities C and B account for substantial variation from 2010 to 2011. Measure-years
provide further illumination. A measure-year is one measure reported for one year; a single
measure with four years of reported data would provide four measure-years, as would four
measures with only a single year of reported data each. Measure-years show a steady decline
in median throughout the evaluation period, with more oscillation in mean (Table IV).
Governmental action in regard to EPIs appears to have had very little effect on the quantity of performance information presented; if anything, it seems to have led to universities reporting less.
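To make the measure-year arithmetic concrete, the brief sketch below (illustrative Python, not code from the study) computes measure-years as the sum of years of reported data across measures, reproducing the two equivalent cases described above.

```python
# Measure-years = sum, across all reported measures, of the number of years
# of data each measure carries (illustrative sketch, not study code).
def measure_years(years_per_measure):
    return sum(years_per_measure)

# One measure with four years of reported data -> 4 measure-years.
assert measure_years([4]) == 4
# Four measures with a single year of reported data each -> also 4 measure-years.
assert measure_years([1, 1, 1, 1]) == 4
```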
Element five: depth and insight of commentary

Annual reports provide substantial information about the depth and analytical insight of
performance measurement commentary within universities. Approaches to commentary differ
dramatically. In its 2008 report, University F did not provide any descriptive commentary; by
2011, it included not merely commentary on specific results but also a discussion about the
selection of measures. University H has consistently presented quantitative measures first,
and then associated commentary and highlights; University B has taken an opposite
approach. Commentary and highlights are often integrated, but not always (University G,
2009). At times, commentary is used to make up for shortfalls in specific measures. This is
particularly notable with outcome and impact measures that are not clearly defined; in such
cases, rather than a measure such as economic or research impact, a set of highlights showing
various impacts is provided (University D, 2013). Commentary is seldom self-critical –
failures to achieve performance targets are often glossed over, noted as being remedied in the
future through new projects or programmes, or blamed on external events beyond the control
of the university. Explanations for specific results, whether positive or negative, are seldom
particularly nuanced.
The raw quantity of commentary provided has not changed dramatically. Both mean and
median page length have remained around 15 pages (Table V), but some universities have
seen more dramatic increases or decreases. Median word count has increased, whereas mean
word count has decreased. University C is primarily responsible for the latter, as its word count has dropped by almost
17,000 words over the evaluation period. While the SSP is the main source of commentary,
Chancellors’ and Vice-Chancellors’ statements may provide substantial amounts as well,
often focussed on specific measures such as international rankings (University H, 2012,
2014) or the imposition of centrally directed targets (University A, 2011).
Discussion
In regards to element one of the synthetic model, there is limited evidence that service
performance information is used to guide substantive decision-making in New Zealand
universities, mirroring findings elsewhere (Vakkuri and Meklin, 2003). The lack of clear
output costing illustrates a disjuncture between targets and resourcing, and is itself linked to
the input-focussed system of governmental tertiary funding in New Zealand. There are also
likely difficulties in specifying and disaggregating the different products/services produced
by the universities, whether teaching, research, or community service. Because of this, it
might be questioned whether the true value generated by the expenditure of public
money on tertiary education is captured. In relation to element two, New Zealand universities
show some degree of maturity in strategically aligning performance measures, although
explicit prioritisation is still rare.
With element three, there has been increased adoption of an outcomes framework, albeit to a
limited extent. This limited use might be compared to the broader New Zealand public sector,
where outcomes frameworks are frequently used not merely for reporting purposes, but also
to drive planning. It is possible that universities do not perceive themselves as being akin to
other parts of the public sector in terms of providing specific services for the purpose of
creating public value, but instead hold a more introspective self-perception. Alternatively, it
may simply be that the outcomes framework as a tool is inadequate for the analysis of tertiary
education performance (Broadbent, 2007), due to the complexities imposed by the presence
of students as simultaneously inputs, processes and outputs (Sirvanci, 1996).
New Zealand universities demonstrate substantial variation in the number and breadth of
performance measures reported, element four of the synthetic model. They seldom report
against the full scope of organisational activities. The infrequency of outcome and impact
measures, and prevalence of teaching output measures, makes many frameworks potentially
vulnerable to gaming. Lastly, in element five, New Zealand universities generally provide
substantial commentary. This is unsurprising, given issues surrounding the relevance of
quantitative measures to university activities; in many cases, commentary is used to
make up for shortfalls elsewhere in the performance measurement framework utilised. The
commentary is seldom incisive and self-critical, however.
External actors, including the MoE, TEC, and OAG, appear to have had only a limited effect
on performance measurement practices. The introduction of EPIs standardised measures that universities were already reporting in different forms, but it has not led to the development of new
teaching output measures. The development of a sectoral outcomes framework has not been
followed by its widespread adoption. There has been limited attention to output costing and
associated issues of effectiveness and efficiency. This lack of adaptation may simply reflect
the semi-autonomous nature of New Zealand universities, or it may reflect funding
mechanisms. There is little incentive for universities to focus more closely on outcomes or
even outputs when funding is not provided against such criteria.
From a broader perspective, it is notable that much university performance measurement
focusses on institutional capability and as such is introspective. This is intriguing, as it is
diametrically opposed to what emerged in the New Zealand public sector when performance
measurement was first introduced, where service provision was at times myopically elevated
over vital issues of institutional capability (Schick, 1996). This is likely to reflect differences
in the accountability of the core public sector, held to be directly responsible for the delivery
of specific outputs, and the more diffuse accountability of the university sector due to its
semi-autonomous nature. This difference has likely encouraged a shorter-term, service-
focussed approach in the former group, whereas the latter have been able to take a longer-
term, capability-focussed approach.
The lack of evolution in the use of performance measurement is likely to be strongly linked to
university organisational culture (Henri, 2006; Heinrich and Marschke, 2010; Hood, 2012;
Jennings, 2012; Taylor, 2014). Academic culture, being professional and individualistic, may
be highly resistant to externally imposed managerialist trends such as performance
measurement, as noted earlier in summarising the critical literature on the topic.
This resistance would result in a bare minimum of adherence to externally directed reporting
requirements, expressed via annual reports, coupled with limited substantive and internal
implementation of performance measurement. As such, the untested element six of the model
– internal ownership of the performance framework – may have affected maturity in the
elements tested. This suggests that much value could be gained by exploring processes of
performance-related knowledge generation and ownership within universities (Hess and
Ostrom, 2005; Ostrom, 2010), and in particular by examining the mechanisms whereby
“managerial” emphases are converted into the everyday work of academics and
administrative staff (Feller, 2009).
The findings of this study suggest at least two tentative hypotheses worthy of further testing:
H1. Public sector universities will show lesser evolution in performance measurement
practice than other public sector organisations, even when subjected to equal or similar
external accountability regimes.
H2. Public sector universities’ degree of maturity in the use of performance measurement will
reflect the extent to which resources are allocated according to results-focussed (outcome and
output) performance targets.
Next steps
It is important to explore whether the trends discovered here have broader applicability. It is
planned to conduct a similar study of the annual reports of polytechnic institutes in New
Zealand, and universities in Australia, which operate under a similar governance model. It is
hoped that other researchers might utilise the synthetic model to evaluate other national or
subnational sets of universities, enabling the creation of a global picture. These studies could
either be conducted deductively utilising H1 and/or H2, or could be conducted in an inductive
fashion to further explore the way in which universities utilise performance measurement.
Even more intriguing is the possibility of inter-sectoral comparisons. The synthetic model,
being sector-agnostic, would allow for the comparison of performance measurement maturity in
universities with other parts of the public sector.
