John Welsh
To cite this article: John Welsh (2017): Ranking academics: toward a critical politics of academic
rankings, Critical Policy Studies, DOI: 10.1080/19460171.2017.1398673
Download by: [Gothenburg University Library] Date: 16 November 2017, At: 06:27
CRITICAL POLICY STUDIES, 2017
https://doi.org/10.1080/19460171.2017.1398673
ABSTRACT
KEYWORDS: apparatus of security; critical philosophy; governmentality; higher education; league tables; methodology; sociology of education; university

There is a need in academic rankings research for a more critical and political analysis beyond the register of normative global governance studies and the pervasive positivism of new public management that dominates the literature of social policy in the area of higher education and research. Given that academic rankings are
powerful topological mechanisms of social transformation, critical
theorists have a responsibility to engage with this extant research
and to establish a politically sensitive agenda of relevant critical
analysis. Thus, this article identifies three uncritical and pervasive
assumptions that dominate academic rankings research, and which
preclude a properly critical, and thus political, understanding of the
ranking phenomenon. The powerful imbrication of these assump-
tions in rankings research will then be demonstrated by a review of
the extant literature broken down into three broad categories of
recent research (micro-methodology, sociocultural criticism, poten-
tially critical). Building on points of departure in the third category
that are promising for a critical agenda in future analyses of rank-
ings, the piece concludes by suggesting three specific and under-
treated aspects of academic rankings promising for future critical
analysis. These aspects concern the roles of social apparatus, poli-
tical arkhê, and historical dialectic.
Introduction
Why write another article about academic rankings? The reason, quite simply, is that
there still exists a great deal to be said about them beyond the bureaucratic analyses that
dominate the literature. This is especially so given the prospective mutation of the
ranking logic from league tables into a second generation of multidimensional forms
(CHE, Multirank), and because of their continued reconfigurations of social life within
and without the academy. The form might change, but the logic, agenda, and decisive
effects of rankings as a topological apparatus remain.
We are all tired of hearing about rankings, and I am more tired than everybody else.
But the need to write about them, and the need to engage them critically, seems to
intensify in direct proportion to this sentiment. So, rather than bore the reader with the
usual introductory grist about rankings ‘spreading like a virus’ (Hazelkorn 2011, 85),
having ‘attracted wide attention recently’ (Ioannidis et al., 1), and us all having ‘to learn
to live with them’ (Bowden 2000, 58; Hazelkorn 2008), I will simply say that, with due
respect to the many thoughtful researches on them, research on academic ranking is
one of those areas which makes a critical theorist groan. If one wanted an example for
one’s fresh-faced undergraduates of how turgid social science can pave over an impor-
tant area of sociological interest, annihilating the potential for critical interrogation of
the political and social forces of transformation in our time, then this would be it.
Considering how academic ranking in particular is bound instrumentally into the
transformation of our own academic lives and laboring, one would have thought a
reaction beyond the tepid bureaucratic temperatures that dominate the discourse today
would have been more forthcoming amongst academics. But I suppose that if ranking
is, as I will argue, an apparatus of governmentality, then perhaps, with a minor
alteration, the old adage of de Maistre and Tocqueville is indeed correct . . . people get
rankings, to find problems with them, to be ‘skeptical of current ranking exercises’ (Kehm
2013, 20), and simply to dwell on their ‘possible negative effects’ in pedagogic, organizational, economic, or psychosocial terms (Erkkilä 2013, 5). Such work is happy, however, to proceed within the logic of the very object toward which it is avowedly ‘critical’, serving practically to reproduce the rationality of that of which it disapproves and so to
contribute to the police preservation of the social and political status quo. In this anodyne
sense, all inter-textual utterances are critical, and so any academic intervention into a
particular discourse is implicitly critical. Rarely is the word used in a manner that keeps
faith with its proper usage understood in the tradition of German critical philosophy,
whereby ‘critical’ entails certain epistemological, ontological, political, affective, personal,
and even procedural predicates and commitments. Some normatively ‘critical’ works are
promising for further development, and arguably progressive in their own way, but by
taking them to be ‘critical’, we satisfy ourselves that rigorous interrogation and scrutiny of
an immanent nature is being undertaken on the ranking phenomenon somewhere out
there, constitutive of opposition to the institutional penetration of rankings, and contrib-
utory to a politics of rankings, when it is actually hardly taking place at all.
In what follows, I shall lay out three core aspects to rankings that pervade the
literature as assumptions, and which frustrate movement toward more critical analyses
and insight into rankings. They are repeated ad nauseam throughout the vast body of
research and are blithely accepted with little controversy, contestation, or exploration.
On the basis of these core characteristics, I shall then review three literatures of the
extant research on rankings, outlining how each relates to properly critical analysis, but
how they are to varying degrees reproductive of the uncritical core characteristics to be
outlined below, and suggesting how this reproduction frustrates movement in more
critical, and therefore political, directions.
I should concede at this point that the researches I cite are more nuanced than the
categorizations into which I place them for the purposes of general exposition. I can only
justify this violence by claiming that such is necessary in order concisely to identify their
common denominator. Despite their varied nuance, the register, procedural loyalties,
epistemic assumptions, and implicitly political conclusions in these works nevertheless
remain within the paradigm I am criticizing. As such, the injustice of analytical reduction is
the price that must be paid for trenchancy and concision in argumentation.
the most of it’ (Altbach 2012, 31) to accept that ‘railing’ will not make them go away; we
are assured that ‘within only a few years, rankings have become an unavoidable part of
academic life, for better or worse’ (Ranking Forum of Swiss Universities 2008), and are
thus ‘part of the landscape whether we like it or not’ (Labi 2008). In fact, it is acceptance
of their putative permanence that principally characterizes the extant literature on
academic ranking, and it is in this acceptance that the literature’s most objectionable feature consists.
Fascinatingly, the doxa of ranking’s inevitability seems even to be willfully accepted by
its most pessimistic critics. Examples: ‘league tables rule’ in academia (Marginson 2009,
6); ‘rankings are also inevitable’ (Altbach 2006, 2); the ‘unstoppable rise of league tables’
is but one step in the irresistible ‘direction of audit cultures and an audit society’
(Lorenz 2012, 607). Given that contingency is essential to the political, to challenge the
complacent and interested assertion that rankings are ‘here to stay’, ineluctable, irresistible, and an unassailable fact of social life is an urgent priority. The goal here is then
to emphasize not rankings’ positivist ontological status but in fact their profoundly
contingent character. For out of this realization can be born a properly critical and
political response to their overbearing emergence. I want to take a dissenting view, and
in many respects a properly critical and optimistic view, which seeks to convert insight
into ‘conditions of possibility’ over despair, resignation, and the TINA refrain (There Is
No Alternative) of ‘capitalist realism’ (Fisher 2009).
That is the invariant refrain to which we are all expected in the extant literature to
resign ourselves and adapt. The inevitability of rankings is presented in almost deter-
ministic terms. Given the historical social forces assumed, why have they arisen in the
particular form of ordinal league tables, why principally since the 1980s, why is there a
pressure to develop multidimensional ranking forms, why do rankings not result in the
market behavior for which they are justified, and why is the ranking form so seemingly
irresistible? These questions never seem to arise when the historical force is welded so
unquestionably to historical form.
The history of the academic ranking literature itself reflects the changing modula-
tions and tempos of social force in the accumulation regime of the mode of
production. The literature around those first wave formulations of rankings
(1980s) focused on the fairly mundane matter of numerating for the purposes of
privatization (Drew and Karpf 1981). The expanding interest in rankings after the
end of the Cold War (1990s), and amidst the hegemonic ascendency of neoliberal
ideology, began to concern itself with the question of global governance and the
need for ordered knowledge to this end. The 2000s saw a turn toward multidimen-
sionality and a drive for more complex and responsive rankings commensurate to
the growing need for market-like behaviors at the micro-level in higher education.
After 2008, a more normatively critical voice began to emerge as part of the general
disillusionment with the neoliberal paradigm but nevertheless bound into its appar-
ently inescapable logic. What is now required is a further critical step that has
greater potential to break with the gravity of this paradigm.
When Altbach states that ‘if rankings did not exist, someone would have to invent
them’ (Altbach 2012, 27), he is reinforcing a doxa that apparently denominates most
rankings research. In fact, there is an unacknowledged historical interpretation at work
here. Whilst superficially the ranking form is taken to be unchangeable and necessary,
and thus ahistorical in terms of the impossibility for social change through political
action, there is an implicit recognition in this discourse of the historical nature of its
emergence through time. However, the social forces from which it has arisen are
unstoppable and ineluctable, making of this assumption regarding rankings one that
is not only explicitly conservative but also implicitly determinist.
But surely a key contradictory tension in the rankings phenomena is made manifest in
their formal emergence, where a space opens up between the forms themselves and the social
and historical forces which have occasioned their emergence (massification, telecoms
revolution, global migration, ‘globalization’, etc.; see Altbach 2012, 27, 31; Shin et al.
2011, 3; Hazelkorn 2011, 7; Hinchcliffe 2007, 56). It is a non sequitur to claim that these
particular forms are inevitable on the basis of the historical momentum of their emergence
alone, as most researchers seem to do, and certainly not without adequately exploring what
these historical forces are. Furthermore, whilst rankings are realized within the power of a
certain rationality that grants them their ‘inevitability’, their form is the contingent
product of discursive conflict – politics. The task ahead then is to articulate this tensile
fissure, between historical form and historical force, and drive a wedge into it. I recom-
mend, according to this analysis, that for rankings research to be critical a historical view
must be taken of them, but one that recognizes and explores the dialectical contradictions
internalized to its form, rather than one that accepts the causal force of its emergence as a
completed process and a social fait accompli. To fail to do this is not only to fail to be
critical, but it is to be politically reactionary, perhaps even dogmatic.
In one sense, and one sense only, are rankings truly inevitable and irresistible, and
that is when they take the form of a ‘reversal of contingency into necessity’ (Žižek
2014, 146). This is where the actualization of the event retroactively creates its always
already historical necessity. As such, it is by the very preoccupations, treatments, and
mentioning of rankings themselves, and the repetition of the epistemic conditions of
their constitution, along with constant referral to the predicative assumptions and
legitimizations, that we come to make an unstoppable necessity of the ranking form.
By conceding to the terms of this terrain, we make our critical proposals and
objections not simply incorrect but incapable of ever being correct, as the terms
on which they themselves seek predication are expunged from the past of the
Present. This leaves but one option: to refuse the dialectical reversal of contingency
into necessity, to deny the very rationality of the Present that makes of itself a
necessity from a contingent past, and irrationally to reject ranking in the very essence
of its form. It is only at the moment of recognition that the potency of the trompe-l’œil
evaporates, proving itself in the moment that it ends. But given the
historicity of ranking’s emergence, how is this operation to be undertaken in a
sublative rather than reactionary manner?
evaluation of social conditions (Flyvbjerg 2001, 125), they are treated as reality-reflect-
ing models first and foremost.
As we shall see further down, rankings may subsequently be spoken of, in the
limpid language of normative social science, as ‘performative’, but that aspect of
them is necessarily secondary to their essence as constructed presentations of
measurement. To accept or adhere to the practice of rankings with any consistency
of principle must entail the assumption that they represent reality faithfully in
accord with a positivist correspondence notion of truth. This positivism, whilst
congenial to policemen, is devastating to critical thought or politics, for which any
absence of ontological contingency precludes its very existence. Empirical science is
not politics.
This view of rankings forces us into a duality, wherein we are permitted only to
Downloaded by [Gothenburg University Library] at 06:27 16 November 2017
affirm them as true or false, correct or incorrect, better or worse, and thus precludes
other kinds of judgment over their desirability or actuality. This enforcement of a lack
of choice is how ‘numerical objectification of social phenomena can function to
depoliticize potentially political issues’ (Erkkilä and Piironen 2014, 1) and replace
them with Marcuse’s ‘matter-of-factness’ that lies at the epistemological heart of police
and its instrumental reason (Marcuse 1982, 141–142). It is this assumption that
propagates the dubious and popular understanding of ‘transparency’, where otherwise
one might see just an infinite variety of opacities in the panoply of policy options laid
before us. Needless to say, challenging this implicit, yet regnant, pseudo-epistemology
in the literature will be a priority.
Political and social scientists are preoccupied with being satisfactorily ‘scientific’
in their research on ranking, obsessing over a limited and impoverished notion
of rigor, rather than with being ‘political’. In his great founding work of Regulation
Theory, Michel Aglietta noted that the thing about the ‘criteria of scientificity’ is that
‘it can only proceed normatively’ and that ‘the apparently rigorous character of the
theory should not deceive us . . . the stricter its logic, the more divorced from
reality’. It is in fact a metaphysics bordering on the theological that postulates an
‘immutable essence underlying the variability of phenomena’ (Aglietta, 13–15), a
metaphysics that assumes the contingent as a datum. This is how we can get a
supposedly ‘critical approach’ in the form of ‘a university benchmarking exercise’
(Proulx 2007, 71), which of course is not critical in any meaningfully intellectual or
political way.
To be critical or political will be to abandon the positivist pretentions to corre-
spondence truth in rankings research, not because it is in error, but because it is at
best irrelevant and at worst counterproductive to critical motion and political
analysis.
and performativity (Usher 2009, 88), the former concerning consumer choice and the
‘need to benchmark’ (Altbach 2012, 31), whilst the latter relates to various ‘government
performance funding initiatives’ (Dill 2006, 1), such as the RAE and REF in Britain or
the Exzellenzinitiative in Germany, which are based to a great extent on the rankings
phenomenon and the qualification and indexing that is constitutive of it. The political
raison d’état Foucault observed in the official establishment of ‘knowledge’ (savoirs) can
clearly be seen in the way in which rankings are discussed as essential to national
economies (Foucault 2002). These assumptions, priorities, and imperatives have been
recently rolled out again in the Stern report, based upon the latest REF assessment
round in the United Kingdom, where one can read Lord Stern’s simultaneously
reassuring and admonishing conclusions that entirely endorse the logic of ranking
and reinforce the ‘imperative to govern’ that motivates their continued existence.
The UK is at the forefront of world research and has the most productive science base in
the G7, ranking first amongst comparable major research nations for Field Weighted
Citations Impact. Its great strengths in research are a special asset and comparative
advantage and are crucial to the future of the UK in a rapidly changing and sometimes
turbulent world. (Stern et al. 2016, §120)
The distribution of knowledge is only socially efficacious to the extent that it is also a (re)
distribution of positions. To gauge the distance between the two distributions, one must
therefore have an additional science. Ever since Plato, this royal science has had a name.
That name is political science (Rancière 2014, 68).
The research contributory to the first literature is that of the micro-methodologists. This
research is decidedly positivist and is concerned with detailed analyses of the statistical
and quantitative methodological aspects of how the data for rankings are collated and
presented as information for social scientific ends. These ends, however, are not
addressed within the assumptions of the research itself, aside from reference to
substitutive bromides like ‘quality’ and ‘excellence’ (Hazelkorn 2011), which, at best, are
only ever bracketed and rarely scrutinized. This category contains all those ‘biblio-
metric’ research papers, filled with formulae and Greek letters, as highly numerate as
they are often illiterate, and that are legion in the pages of journals like Research in
Higher Education (Drew and Karpf 1981; Rinia et al 1998; Meredith 2004). This kind of
research, plausibly ‘science’ but unequivocally not ‘political’, merely tinkers with the
micro-functioning of ranking methodologies, so as to ‘identify better practices’
(Usher and Savino 2007, 5) and apply ‘corrections’ to ‘inappropriate design’ (Van Raan
2005, 135). What is politically significant about this literature is not just that it operates
in a positivist mode but also within a positivist ontology. Taken together, this metaphysical positivist paradigm manifests as total. As such, micro-methodological criticism
tends to share a rather limited set of thematic complaints over rankings, of which the
principal ones are:
alumni contributions (240), and which ‘reinforce existing positive and negative
stereotypes’ contrary to justifiably rigorous method.
(6) Circularity: Rankings are recognized to be ‘instrumental’, in the sense that
methodologists unwittingly select the values and variables to suit their own
ends or tastes in compilation.
(7) Gaming: The conscious manipulation of data or statistical methods for self-
interested purposes on the part of individuals and institutions (Bastedo and
Bowman 2011, 6; Sauder and Fine 2008; Stevens 2007; Ehrenberg 2002).
formed into objective characteristics’ (Artushina and Troyan 2007, 84). In this kind of
ranking research, the overriding question is simply that of arriving at a ‘superior
indicator of university excellence in terms of objectivity and comprehensiveness’ (Li,
Shankar, and Ki Tang 2011, 927), yet at no point will ‘excellence’, ‘objectivity’, or
‘superior’ be problematized, explored, or justified in themselves. Conversely, proble-
matic ‘subjectivity’ is of a kind that is merely understood as ‘dependence of the
outcomes [of peer-review] on the choice of individual committee members’ (Van
Raan 2005, 135), that is, it is simply conflated with interested opinion and ‘bias’. Most
recently, one can read in the Stern Report how the principal concern with rankings lies
not in the contribution to governing derived from their very being, but in their
‘systematic bias in favour of . . .’ (Stern et al. 2016, §42). The criticisms of the extant
literature have been integrated into the logic of government, but the imperative to
govern through ‘qualculation’ and numeration is not challenged.
For the micro-methodologists, the scope of their critique of rankings stops at the door
of the ranking compilers (THE, QS, ARWU, Leiden, CHE, USNWR, etc.), whom they
usually consider too ‘whimsical’, ‘arbitrary’, ‘incompetent’, and lacking
rigor in methodological procedures, choice of variables, assignation of significance, etc.
(Harvey 2008, 190; Van der Wende 2008, 56). For Meredith (2004, 445), problems arise
from ‘arbitrary changes in rank’ that are ‘due to misleading data or poor methodology’.
Lee Harvey’s primary concern is that ‘indicator selection is rigorous’ (2008, 190) and
seems content to rest at that. The implication of these studies undertaken by the micro-
methodologists is therefore that academic scientists, such as themselves, ought to have
their hands on the tiller to correct the false courses resulting hitherto from the partial and
interested subjectivities of the compilers of rankings and their ‘greedy’ as well as ‘non-
expert use of bibliometric indicators’ (Van Raan 2005, 133–134).
The first problem to rear up in this micro-methodological literature, which seems
to comprise the largest single category of research on academic rankings, is in its
symptomatic ‘methodological fetishism’ (Wright Mills 1970), which has dogged
academic social science at least since the advent of the Cold War. This epistemic
mode has become a defining feature of the currently ‘dominant streak in social
science’ (Flyvbjerg 2001, 168; Jameson 2007, 95, 101) and suffocates our ability to
break paradigmatically out of ‘persistent pathologies’ (Hay 2011), which might
otherwise allow us to conceive of other ways of being (or even seeing) in the
world. Little wonder that both the contingency and historicity of ranking, the two
aspects that make of it a political rather than scientific phenomenon, are clearly
marginalized. There is a seeming dearth of reflexivity or epistemological introspec-
tion amongst the micro-methodologists, and not without social implications as
profound as they are dark.
Highly scientific researchers, such as Li, Shankar, and Ki Tang (2011), busy them-
selves with ‘factors at the micro or institutional level that are driving the success of
[American] universities’, and yet they seem somewhat naïve regarding world-historic
social forces, the problem of cause and effect in social explanation, and the epistemo-
logical dubiousness of terms like ‘success’. None of these intellectual questions is
addressed or even bracketed; they do not seem even to be recognized. But then, this is not the proper
function of this bureaucratic undertaking . . . contributing to the police power is. This is
how we can end up with an article in which the domination of university league tables
by American institutions – the world’s richest institutions in the world’s most powerful
state – is treated as ‘an intriguing question’ (Li, Shankar, and Ki Tang 2011, 923), when
such a question is not only not intriguing, but obvious. In fact, it is the wrong question
if we want sincerely and fearlessly to speak ‘truth to power’ in the pursuit of parrhesia
(Foucault 2001), rather than merely to bend its force in our favor. It is no surprise that
micro-methodologists in China and Southeast Asia tend to recommend the Chinese
ARWU (Li, Shankar, and Ki Tang 2011), British micro-methodologists the THE or QS,
and Continental European micro-methodologists the German CHE or the EU’s
Multirank (Van Vught and Ziegele 2011; Van Vught and Westerheijden 2010).
Even if we were all to share the micro-methodologist’s blue-eyed mission to ‘improve
ranking systems for the benefit of higher education as a whole’ (Liu and Cheng 2005,
135), it is hard to see how this might be accomplished given that the social, political,
and cultural aims and purposes for rankings, and the activities they purport to measure
and assess, are never adequately considered.
it’ (Svensson et al. 2010, 2) and thus agree to work with ranking but in more
methodologically sophisticated and socially sensitive ways. In the humanities and social
sciences, this intellectual ballast has proven decisive, and it is to these agnostics that my
critical analysis is principally addressed. The main themes of criticism conjured by the
sociocultural critics are therefore as follows:
(7) Transparency (Altbach 2006, 3): Institutions and academics are to be ‘accoun-
table’ for the resources allotted to them by society, and so, they should provide
‘evidence about the results they achieve’ (Stensaker and Kehm 2009, vii). The
transparency discourse is accepted unproblematically as a valid social epistemo-
logical argument for further penetration of techniques of quantified calculation
and measurement (xviii).
As can be seen, even where this literature is mildly critical of rankings, the epistemological legitimacy
of ranking methodology is never seriously challenged, and rankings are ontologically
accepted as reality-reflecting phenomena, as befits the hegemonic empiricist paradigm of
academic social science. This means that the epistemological critique of this literature
cannot become much more sophisticated than its general fixation on ‘bias’ (Dill 2006, 5;
Holmes 2006). In sum, the problem with rankings to most sociocultural critics ‘con-
cerns the practice, not the principle’ (Altbach 2006, 2). In this way, those on the
threshold of both micro-methodology and sociocultural criticism, like Anthony Van
Raan, can still maintain that ‘bibliometric indicators have great potential’, but only on
condition that the methodological technique is ‘sufficiently advanced and sophisticated’
(Van Raan 2005, 135). Likewise, whilst Philip Altbach might be disapproving of the
positivist claim that ‘numbers are assumed to be a proxy for quality’, the concession
nevertheless follows that ‘they are to a significant extent’ (Altbach 2006, 2). This reveals
the shared epistemological denominator for both the socioculturally sensitive critic and
the devotee of micro-methodological positivism: their acceptance of the epistemic
parameters of police.
The difference is usually no more than a shift from a self-professed ‘quantitative’ to a
‘qualitative’ procedure. Consequently, this kind of research endeavors, like the micro-
methodologists, to ‘improve’ rankings, though now with the eye of the bourgeois refor-
mer rather than that of the garage mechanic. While it is the case that ‘they [rankings]
often measure the wrong things, and they use flawed metrics to do the
measurements’, the legitimate use of measurement and ranking itself is approved, as long
as it is ‘analyzed properly’ (Stensaker and Kehm 2009, vii). Such criticisms of method are
brought to bear without questioning the bases of ranking itself, even if the sociocultural
critics often prefer a form of assessment and organization more nuanced, worldly, and
sometimes more discursive, than the unpalatable seriality of the ordinal league table.
Rather than rejection, deconstruction, or transcendence, the ‘challenge’ is then ‘to ensure
that they [rankings] provide accurate and relevant assessments, and measure the right
things’ (Altbach 2006, 2), and to steer analysis toward the ‘nuances, problems, uses – and
misuses’ of rankings (Altbach 2012, 27). In short, the sociocultural critics are to be
humanist and socially scientific advisors to the numbers bods.
The main thrust of the advice produced in this literature is to move academic
analyses on rankings beyond the ‘holistic rank ordering of institutions in league tables’
(Marginson 2009, 13), of the kind familiar from the main compilations, and to place
these micro-ranking phenomena into a broader and more fractured terrain where
ranking is properly grasped within an emergent reconfiguration of social relations. If
ranking is to remain ‘relevant’, and of course we all hope and nightly pray that it does,
this apparently will require efforts to break down the no longer ‘feasible’ institutional
horizon of rankings, and instead ‘to compare world universities on comparable
disciplinary fields’ (Proulx 2007, 76). This kind of movement means two key changes in
academic rankings: multidimensionality and user-orientation. League tables will be
replaced by a multiplying and dizzying array of assessable objects beyond the institu-
tional level, with the latter being expanded or shattered into departments, states,
disciplines, regions, even individuals, to be placed into some kind of ordered relation
predicated on whatever assessed criteria. Simultaneously, the proximate compilation of
data will be made no longer by king-making compilation organizations, nor even by
micro-methodological experts, but by the ‘user’ itself (students, parents, state financiers,
job applicants), who, after punching in their preferences to responsive software pro-
grams and giving the handle a swift crank, will receive the offerings of the algorithm.
The German CHE ranking and the EU’s Multirank are the favorites of this research
literature and their recommendation is an instantly recognizable badge of membership
amongst the sociocultural critics. Though free of the serial form of the league table, the
CHE and Multirank rankings are nevertheless rankings. They are just as much a
systemic set of relations regulating the disposition of objects in social reality, given a
quantitative expression, established according to defined criteria, and brought into
being to satisfy the imperative of human government. To this extent, U-Multirank is
a ranking, but a new generation of ranking. As such, it is how these relations between
subjects and objects are conceived, constituted, and justified that is essential for under-
standing the social rationalities that inhere to rankings, and the conceptual claims they
make.
Despite the apparent benefits of this new and more plural and versatile form of
ranking, and aside from the corresponding objections made by the ‘distinctive bene-
ficiaries’ and ‘conflicting interests’ whom it threatens (Marginson 2009, 13; Altbach
2006, 3), it still runs contrary to a critical and political analysis of ranking. For this, we
must reverse Altbach’s position and find fault not in the practice of ranking but in the
principle. This is an epistemological leap that neither the micro-methodologists nor the
sociocultural critics can make.
This is the case specifically as regards the instrumental relationship between the
epistemological bases for rankings and their resultant ontological effects.
The ontological trap refers to the fact that once a consensus over the major
characteristics of reality is achieved, it will be very difficult to question this reality
and its social structures of domination. The symbolic and institutional structuration
covers ontological and epistemological layers that are mutually dependent. The
knowledge produced and reproduced about reality thus determines what will be
considered as forming the elemental structures of this same reality. (Kauppi and
Erkkilä 2011, 315)

We can therefore see how 'the problems with positivism lead to
deeper issues’, and how ‘if such questions remain unanswered it is possible that the
prevailing intellectual climate will regard them as essentially unaskable’ (O’Connor
2004, 7). There crystallizes a kind of doxa, which is in fact what is happening when
we hear claims over the ‘inevitability’ of university ranking ascendency.
But this line of criticism must be taken a step further for a critical analysis to be
forthcoming. A more theoretically articulate critique must be elaborated, one that
situates the 'instrumentality' of academic rankings within the instrumental
social actors are coerced and seduced to internalize the normative pressures
imposed upon them’ (Bastedo and Bowman 2011, 9). As anodyne a description
of the disciplinary mode as this might be, it is at least a start. In an explicitly
Foucauldian vein, Sauder and Espeland take this line further by presenting
rankings as a technology of ‘disciplinary rationality’ revealed through ‘processes
of surveillance and normalization’ (2009, 64).
In keeping with the Foucault of the middle works (1991, 2003, 2006), this is a
fruitful line of analysis but remains nevertheless resolutely somatic, spatial, and
optic in its capillary modeling of the disciplinary power in rankings. The
argument here is important and fresh, but I advocate taking this line of analysis
forward from this point into Foucault’s later lectures and the crucial shift in his
genealogical treatment there of modern modalities and rationalities of power.
Whilst Sauder and Espeland mobilize Foucault’s classic of ‘disciplinary power’
(Foucault 1991), it is in his later lectures on ‘governmentality’ (governmental
rationality) that the emergence of global rankings as global ‘apparatuses of
security’ (dispositives) can more strikingly and comprehensively be apprehended
and their historic potency grasped. In fact, this line of further research is indeed
hinted at toward the end of their article, where they state that 'rankings are part of a
global movement that is redefining accountability, transparency, and good
governance in terms of quantitative measures. We ignore these trends at our
peril’ (Sauder and Espeland 2009, 80).
By bringing these three lines of critical analysis on academic rankings – doxa, instru-
mentality, and (meta-)disciplinarity – into contact with one another, a new agenda of
critical analysis can be brought to bear on rankings that posits them as apparatuses of
governmentality that have emerged in historical capitalism between the logic of police
power and the imperative of capital accumulation in the world system.
clear distribution of positions and capacities, grounding the distribution of power between
rulers and ruled; it is a temporal beginning entailing that the fact of ruling is anticipated in
the disposition to rule and, conversely, that the evidence of this disposition is given by the
fact of its empirical operation. (Rancière 2015, 59)
The arkhê is the political counterpart to a social doxa. Where a doxa realizes and
reproduces a given set of epistemological assumptions through an instrumental ration-
ality set in motion, the arkhê is the political moment that motivates this motion and the
logic that establishes the doxa’s circularity into social relations. Research must then
elaborate how rankings have emerged and operate as arkhê and then empirically
investigate how this rankings arkhê has emerged historically in the transformation of
academic life and the wider social relations of which it is a part.
Third, there is the dialectical relationship alluded to earlier between the contingency
of political form and the necessity of social force. The conventional research collapses
these two into a static and reactionary acceptance of the status quo established by the
arkhê of rankings. The critical aim is to separate form from force and then place them
back into dialectical relation with each other through historical time. It is in this
manner that we can see how nothing is 'here to stay', how
resignation can be converted into conditions of possibility, and how political practice
can transform the contingency of rankings form through the application of critical
thought.
This takes us to the beginning of a potentially new agenda in rankings research from
which we can possibly articulate a rejection of the principle of rankings so that we need
not collaborate in its practice. However, this will only be possible if we resist the
Disclosure statement
No potential conflict of interest was reported by the author.
Notes on contributor
John Welsh is a researcher at the Department of Political and Economic Studies, University of
Helsinki. His principal current research concentration is on the transformation and governing of
academic life, primarily within traditions of critical social theory. Recently published articles can
be found in Critical Sociology, Housing, Theory & Society, and the International Journal of
Politics, Culture and Society.
ORCID
John Welsh http://orcid.org/0000-0002-7136-1001
References
Altbach, P. 2006. “The Dilemmas of Rankings.” Bridges 12: 2–3.
Altbach, P. 2012. “The Globalization of College and University Rankings.” Change: the Magazine
of Higher Learning 44 (1): 26–31. doi:10.1080/00091383.2012.636001.
Amsler, S., and C. Bolsmann. 2012. “University Ranking as Social Exclusion.” British Journal of
Sociology of Education 33 (2): 283–301. doi:10.1080/01425692.2011.649835.
Artushina, I., and V. Troyan. 2007. "Methods of the Quality of Higher Education Social
Assessment." Higher Education in Europe 32 (1): 83–89.
Bastedo, M., and N. Bowman. 2011. “College Rankings as an Organizational Dependency.”
Research in Higher Education 52 (1): 3–23. doi:10.1007/s11162-010-9185-0.
Berndtson, E. 2013. "Global Disciplinary Rankings and Images of Quality: The Case of Political
Science." In Global University Rankings: Challenges for European Higher Education, edited by
T. Erkkilä, 178–195. Basingstoke: Palgrave Macmillan.
Bhaskar, R. 2008. A Realist Theory of Science. New York: Verso.
Bowden, R. 2000. "Fantasy Higher Education: University and College League Tables." Quality in
Higher Education 6 (1): 41–60.
Boyer, P. 2004. College Rankings Exposed. Princeton, NJ: Princeton University Press.
Dill, D. 2006. “Convergence and Diversity: The Role and Influence of University Rankings.”
Consortium of Higher Education Researchers (CHER) 19th Annual Research Conference,
Germany, University of Kassel, September, 9.
Dill, D., and M. Soo. 2005. “Academic Quality, League Tables, and Public Policy: A Cross-
National Analysis of University Ranking System.” Higher Education 49: 495–533. doi:10.1007/
s10734-004-1746-8.
Dowling, W. C. 1984. Jameson, Althusser, Marx: An Introduction to the Political Unconscious.
Ithaca, NY: Cornell University Press.
Drew, D., and R. Karpf. 1981. "Ranking Academic Departments: Empirical Findings and a
Theoretical Perspective." Research in Higher Education 14 (4): 305–320.
Ehrenberg, R. 2002. Tuition Rising: Why College Costs so Much. Cambridge, MA: Harvard
University Press.
Erkkilä, T., and O. Piironen. 2013. “Reforming Higher Education Institutions in Finland:
Competitiveness and Global University Rankings.” In Global University Rankings: Challenges
for European Higher Education, edited by T. Erkkilä, 124–143. Basingstoke: Palgrave
Macmillan.
Erkkilä, T. 2013. “Introduction: University Rankings and University Higher Education.” In
Global University Rankings: Challenges for European Higher Education, edited by T. Erkkilä,
3–19. Basingstoke: Palgrave Macmillan.
Espeland, W., and M. Sauder. 2007. “Rankings and Reactivity: How Public Measures Create
Social Worlds.” American Journal of Sociology 113 (1): 1–40. doi:10.1086/517897.
Fisher, M. 2009. Capitalist Realism: Is There No Alternative? London: Zero Books.
Flyvbjerg, B. 2001. Making Social Science Matter. Cambridge: Cambridge University Press.
Foucault, M. 1991. Discipline and Punish: The Birth of the Prison. London: Penguin.
Foucault, M. 2001. Fearless Speech. Los Angeles: Semiotext(e).
Foucault, M. 2003. Society Must Be Defended: Lectures at the College De France, 1975-76.
Labi, A. 2008. "Obsession with Rankings Goes Global." The Chronicle of Higher Education, 17
October. Available online at: http://www.chronicle.com/article/Obsession-With-Rankings-
Goes/33040 (Accessed 30 October 2017).
Lang, D., and Q. Zha. 2004. “Comparing Universities: A Case Study between Canada and China.”
Higher Education Policy 17 (4): 339–354. doi:10.1057/palgrave.hep.8300061.
Li, M., S. Shankar, and K. K. Tang. 2011. "Why Does the USA Dominate University League
Tables?" Studies in Higher Education 36 (8): 923–937. doi:10.1080/03075079.2010.482981.
Liu, N. C., and Y. Cheng. 2005. “The Academic Ranking of World Universities.” Higher
Education in Europe 30 (2): 127–136. doi:10.1080/03797720500260116.
Lordon, F. 2014. Willing Slaves of Capital: Spinoza & Marx on Desire. New York: Verso.
Lorenz, C. 2012. “If You’re so Smart, Why are You under Surveillance? Universities,
Neoliberalism and New Public Management.” Critical Inquiry 38 (3): 599–629. doi:10.1086/
664553.
Marcuse, H. 1982. “Some Social Implications of Modern Technology.” In The Essential Frankfurt
School Reader, edited by A. Arato and E. Gebhardt, 138–162. New York: Continuum.
Marginson, S. 2009. “University Rankings, Government and Social Order: Managing the Field of
Higher Education according to the Logic of the Performative Present-as-Future.” In Re-
Reading Education Policies: A Handbook Studying the Policy Agenda of the 21st Century,
edited by M. Simons, M. Olssen, and M. Peters. Rotterdam: Sense Publishers.
Meredith, M. 2004. "Why Do Universities Compete in the Ratings Game? An Empirical Analysis
of the Effects of the U.S. News and World Report College Rankings." Research in Higher
Education 45 (5): 443–461.
Merisotis, J. 2002. "On the Ranking of Higher Education Institutions." Higher Education in
Europe 27 (4): 361–363.
Münch, R. 2013. “The Colonization of the Academic Field by Ranking.” In Global University
Rankings: Challenges for European Higher Education, edited by T. Erkkilä, 196–219.
Basingstoke: Palgrave Macmillan.
Mustajoki, A. 2013. “Measuring Excellence in Social Sciences and Humanities.” In Global
University Rankings: Challenges for European Higher Education, edited by T. Erkkilä, 147–
165. Basingstoke: Palgrave Macmillan.
Nixon, J. 2013. "The Drift to Conformity: The Myth of Institutional Diversity." In Global
University Rankings: Challenges for European Higher Education, edited by T. Erkkilä, 92–106.
Basingstoke: Palgrave Macmillan.
O'Connor, B. 2004. Adorno's Negative Dialectic: Philosophy and the Possibility of Critical
Rationality. Cambridge, MA: MIT Press.
Proulx, R. 2007. “Higher Education Ranking and League Tables: Lessons Learned from
Benchmarking.” Higher Education in Europe 32 (1): 71–82. doi:10.1080/03797720701618898.
Rancière, J. 1999. Disagreement: Politics and Philosophy. Minneapolis, MN: University of
Minnesota Press.
Rancière, J. 2014. The Hatred of Democracy. London: Verso.
Rancière, J. 2015. Dissensus: On Politics and Aesthetics. London: Bloomsbury.
Ranking Forum of Swiss Universities. 2008. Introduction: University Rankings Based on Football
League Tables. Available online at: http://www.universityrankings.ch/en/on_rankings/introduction
(Accessed 30 October 2017).
Rinia, E., T. Van Leeuwen, H. van Vuren, and A. Van Raan. 1998. "Comparative Analysis of a
Set of Bibliometric Indicators and Central Peer Review Criteria." Research Policy 27 (1):
95–107.
Salmi, J., and A. Saroyan. 2007. “League Tables as Policy Instruments.” Higher Education
Management and Policy 19 (2): 1–38. doi:10.1787/hemp-v19-2-en.
Sauder, M., and G. Fine. 2008. “Arbiters, Entrepreneurs, and the Shaping of Business School
Reputations.” Sociological Forum 23 (4): 699–723. doi:10.1111/socf.2008.23.issue-4.
Sauder, M., and W. N. Espeland. 2009. “The Discipline of Rankings: Tight Coupling and
Organizational Change.” American Sociological Review 74 (1): 63–82. doi:10.1177/
000312240907400104.
Shin, J. C., R. K. Toutkoushian, and U. Teichler, eds. 2011. University Rankings: Theoretical Basis,
Methodology and Impacts on Global Higher Education. Dordrecht, NL: Springer.
Stensaker, B., and B. Kehm. 2009. “Introduction.” In University Rankings, Diversity, and the New
Landscape of Higher Education, edited by B. Kehm and B. Stensaker, vii–xix. Rotterdam, NL:
Sense Publishers.
Stern, N. 2016. Building on Success and Learning from Experience: An Independent Review of
the Research Excellence Framework. London: Department for Business, Energy & Industrial
Strategy, UK Government. Available online at: https://www.gov.uk/government/
publications/research-excellence-framework-review (Accessed 30 October 2017).
Stevens, M. 2007. Choosing a Class: College Admissions and the Education of Elites. Cambridge,
MA: Harvard University Press.
Sünker, H. 2006. "Knowledge Society/Knowledge Capitalism and Education." Policy Futures in
Education 4 (3): 217–219.
Svensson, P., S. Spoelstra, M. Pedersen, and S. Schreven. 2010. “The Excellent Institution.”
Ephemera 10 (1): 1.
Usher, A. 2009. “University Rankings 2.0: New Frontiers in Institutional Comparisons.”
Australian Universities Review 51 (2): 87–90.
Usher, A., and J. Medow. 2009. "A Global Survey of University Rankings and League Tables." In
University Rankings, Diversity, and the New Landscape of Higher Education, edited by B. Kehm
and B. Stensaker, 3–18. Rotterdam, NL: Sense Publishers.
Usher, A., and M. Savino. 2006. A World of Difference. A Global Survey of University League
Tables, 1–21. Toronto: Educational Policy Institute (EPI).
Usher, A., and M. Savino. 2007. “A Global Survey of University Ranking and League Tables.”
Higher Education in Europe 32 (1): 5–15. doi:10.1080/03797720701618831.
Van der Wende, M. 2008. "Rankings and Classifications in Higher Education: A European
Perspective." Higher Education: Handbook of Theory and Research 23: 49–73.
Van Dyke, N. 2005. “Twenty Years of University Report Cards: Where are We Now?” Higher
Education in Europe 30 (2): 103–125. doi:10.1080/03797720500260173.
Van Raan, A. 2005. “Fatal Attraction: Conceptual and Methodological Problems in the Ranking
of Universities by Bibliometric Methods.” Scientometrics 62 (1): 133–143. doi:10.1007/s11192-
005-0008-6.
Van Vught, F., and D. Westerheijden. 2010. "Multidimensional Ranking: A New Transparency
Tool for Higher Education and Research." Higher Education Management and Policy 22 (3):
1–26.
Webster, T. 2001. “A Principal Component Analysis of the U.S. News & World Report Tier
Rankings of Colleges and Universities.” Economics of Education Review 20 (3): 235–244.
doi:10.1016/S0272-7757(99)00066-7.
Wright Mills, C. 1970. The Sociological Imagination. London: Penguin.
Zizek, S. 2014. Event: Philosophy In Transit. London: Penguin.