
Accounting, Organizations and Society 36 (2011) 269–283


On audits and airplanes: Redundancy and reliability-assessment in high technologies


John Downer
Center for International Security and Cooperation (CISAC), Encina Hall, Stanford University, Stanford, CA 94305, United States

Abstract
This paper argues that reliability assessments of complex technologies can usefully be construed as audits and understood in relation to the literature on audit-practices. It looks at a specific calculative tool – redundancy – and explores its role in the assessments of new airframes by the Federal Aviation Administration (FAA). It explains the importance of redundancy to both design and assessment practices in aviation, but contests redundancy's ability to accurately translate between them. It suggests that FAA reliability assessments serve a useful regulatory purpose by couching the qualitative work of engineers and regulators in an idiom of calculative objectivity, but cautions that this comes with potentially perverse consequences. For, like many audit-practices, reliability calculations are constitutive of their subjects, and their construal of redundancy shapes both airplanes and aviation praxis.

"There is no safety in numbers" – James Thurber

Introduction

Reliability, accounting and modernity

From power-stations to pacemakers, modern industrial societies increasingly depend on technologies that cannot be allowed to fail. Technological risk, therefore, has become a signal feature of modernity (Beck, 1992), and its administration an obligation of modern governance. Integral to this task are the auditing and accounting practices through which we know the reliability of complex technologies. These frame vital societal choices about the worlds we create around us. For all this, however, reliability assessments of the most publicly significant high technologies – from nuclear power-plants to civil jetliners – invoke calculative practices that are both opaque to the public gaze and largely neglected by sociologists of accounting.

This paper is an effort to begin the long process of redressing the latter.

A panoply of oversight bodies are responsible for performing reliability assessments of high technologies. Invariably, these are state regulators, such as the Nuclear Regulatory Commission (NRC) and Federal Aviation Administration (FAA) in the US. Among other functions, they measure and verify the reliability of complex and potentially dangerous technologies. This function is important, not only because the technologies involved are consequential, but also because the reliability they require is not readily knowable. At low levels of reliability it is possible to simply inspect performance: the reliability of mundane artifacts such as light bulbs, for instance, is readily observable to a point where audit-practices become trivial (Power, 1996: 301).1 At the same time, relatively little rides on predicting the reliability of such artifacts before their diffusion through society: if a batch of lightbulbs is unreliable then this will quickly become apparent and the wider social consequences will be limited.

1 As Pentland (1993: 611) puts it: "Fundamentally, auditing involves the certification of the unknowable."


In these instances, therefore, the well-studied practices of quality control are very adequate. Complex, dangerous technologies, however, are very different in both respects. Such technologies demand ultra-high levels of reliability, and their assessments must be prospective, as much rides on them being known to be reliable before they are deployed. Yet reliability at such levels cannot simply be inspected in a laboratory setting, where billions of hours of observation would be required to achieve statistically meaningful measures [which, even then, would be problematic (Downer, 2007; Pinch, 1993)]. In such cases, the auditability of reliability almost becomes the essence of reliability itself. A new airplane design awaiting the FAA's approval to carry passengers, for instance, is reliable only insofar as it can be measured to be. Reliability assessments, accordingly, become the means by which we make reliability visible. Like many audit calculations, they "bring the future into the present" (Miller, 2003: 186): portraying the performance of yet-unrealized machines as knowable, calculable and amenable to control. Without them, the reliability required of socially consequential technologies is too private to be socially or politically meaningful.

As sociologists have long recognized, however, this kind of constitutive relationship – where a variable is only knowable through its assessment – makes measurement highly consequential. It inevitably grants audits and their calculative practices great influence over the institutions they assess, even to the extent that certain social phenomena come to hinge on specific accountings (e.g.: Burchell, Clubb, & Hopwood, 1985; Hopwood & Miller, 1994). All the public expenditure on initiatives to reduce climate change, for instance, is justified in relation to the calculations that tell us the phenomenon exists; without them we would be unaware of the problem.

Unsurprisingly, perhaps, such influence can lead to perverse consequences. Sociologists working in a variety of domains report that audits often colonize practices more fully than auditors intend (e.g.: Davenhill & Patrick, 1998; Day & Klein, 2001; Munro, 2004; Neyland & Woolgar, 2002; Strathern, 2000). They observe that calculative tools can transfigure practices in unexpected ways and with unanticipated consequences (e.g.: Miller & Napier, 1993; Power, 2003). [This is visible, for instance, when US universities offer early admissions to boost their acceptance ratios and enhance their performance in rankings tables (Wickenden, 2006).] Power (1994) expresses this in terms of the rise of the 'auditee' and 'audit mentalities'. One institutional response to the rise of the auditee has been to quietly separate audit calculations from the work they assess, such that audits become performances with limited effect on practice. Meyer and Rowan (1977), for instance, argue that attempts to rationalize organizational behavior frequently result in the creation of formal rules and structures that have symbolic value but are decoupled from actual practice. (Such as the nine-to-five working week that purportedly governs the vocational lives of untenured academics.) As Power (2003: 190) puts it: modern audit societies are often characterized by "elaborate games of compliance."

Such studies have far-reaching implications for our understanding of modern bureaucracies. Given the growing significance of high technology to modernity, therefore, and the central role of assessments in their governance, it is important that we explore them in this context.

Technology regulation as engineering audit

The neglect by sociologists of high-technological counting practices is surprising. It almost certainly owes something to the origins of such work being focused on audit and accounting practices in the financial sense, and the fact that technological assessments are rarely construed in these explicit terms. Sociologists have long recognized that the definitions of both 'audit' and 'accounting' are mutable, however, with the boundaries and forms of both being constantly renegotiated as practices evolve and are transferred to new domains (Miller & Napier, 1993: 631–2). In keeping with this, sociological definitions of audit and accounting have long been expanding to register more fully the diversity and multiplicity of modern 'audit societies' (Power, 1997), and sociologists interested in audit-practices now look far beyond purely financial domains. They study monitoring and assessment in realms such as healthcare (Day & Klein, 2001) and social work (Munro, 2004), for instance, both of which invoke a range of non-monetized, but nevertheless auditable, indicators of quality. Power (1996), perhaps most notably, proposes an inclusive 'sociology of audit knowledge' to explore how new domains are made auditable through the social construction of measurable facts.

If we follow Power, therefore, and define auditors as "... the many forms of inspectorate and evaluative bodies that take as their objects the performance of the auditee" (2003: 188), then it is difficult to justify the exclusion of high-technology regulators. Indeed, the work of high-reliability assessment differs very little in principle (although greatly in practice, as we will see) from that of quality assurance, which Power sees as a driving idiom of the modern audit society (1997, 2003). If this realm rarely employs the language of audits or accountancy (or, indeed, the terms themselves), it is because it evolved in parallel to financial institutions rather than emerging in the recent 'audit explosion' (Power, 1994) that borrowed heavily from the language of finance. But this is not a justification for excluding them.

Mechanical objectivity

A more credible justification for eschewing the calculative practices of technology assessment would be their widespread perception of objectivity, which suggests that such calculations would be sociologically uninteresting. Machines, perhaps more than almost any other non-financial objects, are invariably portrayed as discrete and quantifiable by the people who assess them (Wynne, 1988). It is almost self-evident that the measurable facts of many audit regimes are, to some degree, social constructions, but engineering is often seen as privileged in this regard: as being possessed of an unusually robust evaluative framework that sets its calculations apart from social influence. Next to research quality in universities, for example, mechanical reliability in technologies appears to be an eminently calculable and quantifiable quality: one that can be determined with formal rules and objective algorithms that promise incontrovertible, reproducible and value-free assessments.


This understanding of technology assessment is pervasive. Gherardi and Nicolini (2000: 343) call it the 'bureaucratic vision' of safety; Porter (1995) calls it the ideal of 'mechanical objectivity'. It suggests a compelling reason to imagine why sociologists interested in counting practices might look elsewhere for their case-studies.

Sociologists would be wrong to be deterred, however, as modern research on technological work convincingly argues that the orderly public image invoked by the ideal of mechanical objectivity belies a messy reality of real engineering practice (e.g.: Collins & Pinch, 1998; Downer, 2007; MacKenzie, 1996a; Wynne, 1988). This literature explores the practical implications of epistemological dilemmas such as the 'problem of relevance' or the 'experimenter's regress' (Collins, 1985; Pinch, 1993), to argue that technological disputes cannot be definitively resolved: a point it illustrates through a series of case-studies showing how proximity to real engineering work reveals hidden ambiguities and disagreements. As Wynne (1988) writes:

In all good ethnographic research [of] normally operating technological systems, one finds the same situation. [. . .] Beneath a public image of rule-following behavior [. . .] experts are operating with far greater levels of ambiguity, needing to make uncertain judgments in less than clearly structured situations (1988: 153).

By this view, it is quixotic to imagine that technological practice can be governed or assessed by formal rules or rubrics without leaning heavily on interpretation and judgment. The ideal of mechanical objectivity, in other words, does not hold up to scrutiny. There is more to understanding technology than can be captured in standardized tests, and something about experts that eludes expert-systems. Collins (1985) and MacKenzie (1996b) reflect on this, both arguing that technical knowledge-claims, much like sausages and scriptures, appear increasingly meritorious the further they are viewed from the circumstances of their production.

To the extent that the calculative practices of engineering are construed as objective, however, this deep ambiguity has far-reaching implications. It suggests that, even in engineering, mechanical objectivity is an unrealizable and misleading goal, and that, where technological assessments trade in hard data and solid technical conclusions, their discourse is masking the ambiguities and social processes behind the numbers. It points, in short, to a subjectivity in engineering's calculative practices that merits exploration by sociologists of accounting.

The following paper will illustrate this argument by exploring a specific calculative practice in the context of an important modern technology audit: the FAA's Type-Certification process.

Type-Certification

The FAA is congressionally charged with the role of assessing the designs of new civil aircraft and declaring them fit (or unfit) for public use.

Type-Certification is the process through which it executes this duty. It allows regulators to audit the reliability of new designs, by defining the calculative practices that make reliability visible. Like most public engineering discourse, it is couched in a highly quantitative idiom that reflects the ideal of mechanical objectivity. When asked the reliability of a new aircraft, aviation regulators offer a strictly quantitative answer: a number, invariably expressed as a probability. When asked for the justification of that number, they offer a formal, mathematical justification (e.g.: Lloyd & Tye, 1982).

The minutiae of Type-Certification are an extensive pyramid of guidance material – usually 'Technical Standard Orders' – that specify detailed stipulations for each part and system of a civil aircraft, down to the last bolt. At the tip of this pyramid, in the US, is Part 25 of the code of Federal Aviation Regulations (FAR-25): the master-document governing the design of large civil aircraft as an integrated system (National Transportation Safety Board, 2006).2

Like many forms of audit (e.g.: the latest ISO-9000 standards), Type-Certification aims to be outcome focused and to act on its subjects by 'utilising their autonomy' (Miller, 2003: 180). To this end, FAR-25 avoids stipulating specific design solutions, as early aviation standards did (Komons, 1978), and instead specifies minimum reliability goals for each system and the rules for assessing its compliance (NRC, 1998). A 'technology of government' (Rose & Miller, 1992: 183), it links responsibility with calculation to foster the calculating engineer and, thereby, the reliable airplane. The purpose of this is to avoid stifling innovation by imposing unnecessary constraints on manufacturers: the FAA, like most auditors, aspires to 'audit neutrality' (Power, 1996: 296), in that it wants to colonize practices only insofar as this promotes a specific goal: in this case, reliability.

The certification process is demanding and typically lasts through the entire development cycle of a new aircraft.3 Manufacturers supply regulators with detailed plans, drawings and analyses; the regulators then scrutinize the designs and oversee extensive tests on prototypes [usually through designees (Downer, 2010)]. If a design meets the required standards, the regulator then issues a certificate approving it for commercial flight and avowing (among other things) that the FAA has found its reliability to be 'satisfactory' (Government Accountability Office, 1993: 10).

This process might sound straightforward from a distance, but from close-up the calculations involved are epistemologically forbidding.

2 Apart from the engines, which are governed by a separate standard: FAR-33. The EU, through their regulator EASA, uses a different code, which, after a long period of harmonization, is almost identical to FAR-25.
3 The amounts of money and labor invested in this process are difficult to ascertain precisely, but are, by all accounts, extremely onerous. By 1980, Lockheed was estimating that it would submit approximately 300,000 engineering drawings, 2000 engineering reports, and 200 vendor reports in the course of certificating a new wide-body aircraft. It would also conduct about 80 major ground tests and 1600 flight test hours, and send around 1500 letters to the FAA (National Academy of Sciences, 1980: 29). These figures have undoubtedly grown considerably in the intervening years, as aircraft have become more complex and regulations more demanding.


The level of reliability that regulators demand of an aircraft design is extremely high, and so the reliability they require of each of an aircraft's constituent elements must be commensurately higher. (If an airplane is composed of 100 critical systems, all of which must function simultaneously for the airplane itself to function, then each must be 100 times more reliable than the airplane itself needs to be.) The widely cited, although indirectly derived, goal of Type-Certification is to ensure that each safety-critical element in an airliner has a proven reliability of no more than one failure in every billion hours of flight. This is to say that, for each hour of flight, each system should function 99.9999999% of the time. This figure – the critical and oft-cited 'nine 9s' (usually expressed as a probability of 0.999999999, where a value of 1 represents perfection) – appears frequently in public aviation discourse and lies at the core of the 'bureaucratic vision' of safety promulgated by aviation regulators in the US and around the world (NRC, 1998; Lloyd & Tye, 1982).

In accounting terms, there is a useful sense in which we can understand the FAA as what Miller (1994, 1998) and others have called a 'hybrid', in that it is an institution that translates between, and reconciles, different epistemic worlds. It helps publics and policymakers make informed, evidence-based decisions about aviation by translating its inscrutable technologies into the language of formal management schema. In performing this bridging function, hybrids invoke specific calculative practices: what Miller, Kurunmäki, and O'Leary (2008) call 'hybrid instruments'. These are the conceptual tools through which auditors transform disorderly practices into quantitative variables. In different guises, such transformations – together with questions of their fidelity, utility and influence – are central concerns of the sociological literature on accounting, and it is these themes that this paper will explore in relation to aviation assessment. It will do this by focusing on a specific calculative practice that is central to almost all formal assessments of reliability in complex technological systems: redundancy. To understand the significance of redundancy – both to high-reliability engineering, and to assessments thereof – it helps to begin with an illustration.
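A brief aside before that illustration: the arithmetic of the preceding paragraphs can be made concrete in a few lines. The snippet below is illustrative only – written for this discussion, not drawn from any FAA material – and the whole-airplane failure budget it assumes is a hypothetical figure, chosen so that the 100-system example above yields the 'nine 9s' target.

```python
# Illustrative sketch only: the series-reliability budget described above.
# Assumes 100 independent safety-critical systems, all of which must work
# for the airplane to work; the whole-airplane budget is a hypothetical figure.

n_systems = 100
p_airplane_per_hour = 1e-7   # assumed whole-airplane failure budget per flight-hour

# If every system must function, each system's share of the budget is roughly
# the airplane budget divided by the number of systems (for small probabilities).
p_system_per_hour = p_airplane_per_hour / n_systems

print(f"per-system failure budget: {p_system_per_hour:.0e} per flight-hour")  # 1e-09
print(f"required per-hour reliability: {1 - p_system_per_hour:.9f}")          # 0.999999999, the 'nine 9s'
```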

A brush with disaster

On 24 June 1982, high above the Indian Ocean, a British Airways Boeing 747 began losing power from its four engines. Passengers and crew noticed a numinous blue glow around the engine housings and acrid smoke creeping into the cabin. Minutes later, one engine shut down completely. Before the crew could fully respond, a second failed. Then a third. And then, upsettingly, the fourth. The pilots found themselves gliding a powerless Jumbo-Jet, 80 nautical miles from land and 180 miles from a viable runway. Unsurprisingly, this caused a degree of consternation among the crew and (despite a memorably quixotic request that they not let the circumstances ruin their flight) the passengers, who began writing notes to loved ones while the aircraft torturously earned its mention in The Guinness Book of Records for the longest unpowered flight by a non-purpose-built aircraft. Finally, thirty seconds from disaster, with the crew prepared for a desperate mid-ocean ditching, the aircraft coughed back into life. Passengers in life-preservers applauded the euphony of restarting engines and the crew landed them safely in Jakarta (Tootell, 1985).

The cause of their close call was not a mystery for long. Investigators soon determined that ash from a nearby volcano had clogged the engines, causing them to fail and restart only after the aircraft lost altitude and left the cloud. This finding quietly unnerved the aviation industry. Prior to the Jakarta incident, nobody imagined a commercial aircraft could lose all its propulsion. A Boeing 747 can fly relatively safely on a single engine, so with four it enjoys a comfortable degree of redundancy. This is why Boeing puts them there. Under normal circumstances, engineers consider even a single engine failure highly unlikely, so, until British Airways inadvertently proved otherwise, they considered the likelihood of all four failing during the same flight to be negligible – as near to impossible as to make no difference. At the time, the aviation industry considered a quadruple engine failure so unlikely that they taught pilots to treat indications of it as an instrumentation failure. Nobody had considered the effects of volcanic ash.

The incident, and others like it, was significant more widely because civil aircraft, like most complex technologies, depend heavily on redundancy for their safety. If a critical system fails then another should always be there to do its work. This is true even of airframes, which are designed with redundant 'load-paths'. In fact, although redundancy may seem abstract and esoteric, it is perhaps the single most important engineering tool for designing, implementing, and – importantly – assessing reliability in all complex, safety-critical technologies: the sine qua non of high-reliability engineering and its public administration.4

4 This is also true, to some extent, of social systems. Rochlin, La Porte, and Roberts (1987) identify it as a key strategy of successful 'high reliability organizations' and are skeptical of mechanistic management models that seek to eliminate it in the name of efficiency.

Two redundancies

The meaning of redundancy, and its importance to engineering, are simple, at least in principle. If we imagine that a system is composed of many elements, then an element is redundant if it is part of a system that contains backups to do its work if it fails. A system has redundancy, meanwhile, if it contains redundant elements. This can mean having elements that work simultaneously but are capable of carrying the load individually if required – such as the engines and structural beams on civil airliners – or it can mean having idle elements that awake when needed, such as backup generators. It can work at different levels: sometimes being two capacitors on a circuit-board, and, at others, the duplication of whole systems, such as in the US missile programs of the 1950s, which entrusted mission success to the doctrine of 'overkill' and the redundancy of the entire missile (Swenson, Grimwood, & Alexander, 1998: 182). The essential principle is that all redundant elements must fail before the complete system fails. If each element only fails infrequently, therefore, then there is a vanishingly small probability of them all failing simultaneously.


John von Neumann is generally credited as being the first person to propose redundancy as an autonomous solution for reliability problems in complex, tightly-coupled systems, in his classic (1956) treatise: 'Probabilistic Logics and Synthesis of Reliable Organisms from Unreliable Components'. Grappling with the dilemmas of aggregating thousands of unfaithful vacuum tubes into a working computer, the émigré savant radically perceived that a redundant system, if configured well, could be more reliable than its constituent parts. If 1956 seems incongruously late for such a straightforward-seeming concept, then understand that von Neumann's true innovation lay not in the idea of redundancy per se, but in envisaging a system that could rely on it without human intervention by automatically switching between components. He understood that redundant elements require managerial systems to determine, indicate, and/or mediate between failures. "The basic idea [. . .] is very simple," he writes. "Instead of running the incoming data into a single machine, the same information is simultaneously fed into a number of identical machines, and the result that comes out of a majority of these machines is assumed to be true. [. . .] this technique can be used to control error." (1956: 44).

This was exemplary systems-level engineering, and found application far beyond computers. Indeed, Perrow (1984: 196), pointing to redundancy's significance across engineering domains, equates the engineering conception of redundancy with a Kuhnian scientific paradigm.5 Paradigms are probably an over-invoked trope in academic discourse, but the analogy is unusually apposite in this instance: it highlights redundancy's capacity to frame the design of engineering objects, and, simultaneously, its ability to act as a conceptual lens through which to know the designs it frames. Redundancy, we might say, serves as both design- and audit-paradigm: enabling engineers to conceive ultra-reliable systems, but also enabling regulators to assess them.

Redundancy's significance to building reliable machines is self-evident, but its significance to assessment practices is less so. To understand the latter it is important to grasp redundancy's role in mathematical reliability calculations. The basic principles here are also straightforward. Put simply, where two redundant and independent elements operate in parallel to form a system, the probability of system failure is often expressed as the probability that both elements will fail at the same time. If one element has a 0.0001 probability of failing over a given period, for instance, then the probability of two redundant elements failing over the same period can be construed as that number squared: (0.0001)² or 0.00000001. Note that redundancy, here, has demonstrated a ten-thousandfold reliability increase – and, significantly, without recourse to lab tests (Shinners, 1967: 56; Littlewood, Popov, & Strigini, 2002: 781).
5 Petroski (1994) argues persuasively for the existence of design paradigms in engineering, describing them as principles that are common across ostensibly disparate specialties and give engineering a theoretical foundation.

Herein lies its importance to technology auditors. This seemingly prosaic calculation (or variations on it) is enormously significant because lab tests alone cannot publicly demonstrate the reliability required of safety-critical elements to the one-in-a-billion levels that Type-Certification requires. According to both classical and Bayesian probability theory, a test system would have to run, failure-free, for a little over 114,000 years before the FAA could say with plausible confidence that the system would fail only once in every billion hours (Rushby, 1993).6 Clearly such a test is impractical, but redundancy resolves the problem by offering a powerful, straightforward and convincing rubric with which regulators can mathematically establish reliability levels that are much higher than they could derive from testing alone. In a world where quantitative technology assessments are both politically indispensable and statistically problematic, therefore, redundancy offers the surety that publics and policymakers require to audit, and thereby govern, the technological world.

As I will explain below, however, this account of redundancy, although widely promulgated and highly influential, is less straightforward than it appears. On close examination it becomes clear that the engineering conception of redundancy as a design-tool maps imperfectly onto the way it is invoked by auditors as a calculative premise,7 and this undermines its effectiveness as a means of accurately quantifying engineering practice. This is to say that the regulatory conception of redundancy, as a neat reliability multiplier, fits awkwardly with sociological accounts of real engineering work, which construe it as a messy practice that draws deeply on informal knowledge and resists formal quantification (Collins & Pinch, 1998; Wynne, 1988). It should be no surprise, therefore, that various observers see redundancy calculations as resting on unrealistic engineering assumptions that ignore important variables (e.g.: Littlewood, 1996; Littlewood, Popov, & Strigini, 1999; Littlewood & Strigini, 1993; Popov, Strigini, & Littlewood, 2000). In different ways, these critics articulate the idea that the platonic niceties of mathematics fit awkwardly onto the ambiguities of real engineering practice. Redundancy audit calculations, we might say, represent the fusion of fundamentally incommensurable ways of knowing.

The following sections illustrate this incommensurability. Although far from comprehensive, they unpack some of the messiness of redundancy as it is used by engineers, and explain why this messiness subverts the role ascribed to redundancy by auditors. Each section outlines a specific logical intricacy of redundancy, illustrates its practical consequences, and explains how this complicates its use as a reliability audit tool. The first outlines redundancy's troubled relationship with mediation.
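Before turning to those intricacies, the two calculations above can be restated in a short sketch. It is mine rather than the FAA's – a minimal illustration assuming perfect independence – and it simply reproduces the figures quoted in the text: the test time implied by a one-in-a-billion-hours target, and the squared failure probability of two redundant elements.

```python
# Illustrative sketch of the arithmetic above; not a certification method.

# 1. Demonstrating a 1e-9 per-hour failure rate by testing alone would require
#    a failure-free test on the order of a billion hours.
hours_required = 1e9
years_required = hours_required / (24 * 365)
print(f"roughly {years_required:,.0f} years of failure-free testing")   # ~114,155 years

# 2. The redundancy shortcut: two parallel elements, each with a 0.0001
#    probability of failing over a given period, assumed perfectly independent.
p_element = 0.0001
p_system = p_element ** 2     # both must fail for the system to fail
print(f"claimed system failure probability: {p_system:.0e}")            # 1e-08
```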

6 Even this ignores the argument made by sociologists of technology, outlined above, that all technological tests contain profound and irreducible epistemic uncertainties (Downer, 2007; Pinch, 1993).
7 Or, more accurately: people use redundancy differently in different contexts, as, technically speaking, most technology regulators are themselves engineers.


Mediation

Recall that redundancy only became useful in complex automated systems after von Neumann envisaged a system for managing it. Although useful, however, his innovation points to a deep flaw in redundancy's neat logical clarity. It may sound straightforward to add an engine to the wing of an airplane, but this simplicity dissolves in light of the systems needed to manage that engine: monitoring its performance and choreographing the aircraft's response to its potential failure. This is a complex role. Engine failures on modern jetliners instigate a ballet of automated actions, especially at critical moments such as takeoff. A computer must first sense the failure, for instance, then coordinate with an array of other systems to alert the pilot, cut the fuel, adjust the rudder appropriately, compensate for the missing thrust, and more besides (Rozell, 1996).

For various reasons, the presence of mediating elements muddies the calculable impact of redundancy on a system's reliability. Perhaps the most significant of these is that they themselves can fail and cause accidents.8 In January 1989, for instance, a British Midland 737-400 crashed at Kegworth, near Nottingham, killing 44 people. One of the aircraft's two redundant engines caught fire, and a fault in the – not redundant – system designed to identify failures led the pilots to shut down the wrong engine (Krohn & Weyer, 1994: 177).9 (In this incident the original failure was in the redundant system – i.e.: the engine – but it is certainly not unknown for management systems to even instigate aviation disasters. Rushby (1993) gives a list.)10

Despite being critical to the reliability of redundant systems, however, mediating systems cannot be redundant themselves, as then they would need mediating, leading to an infinite regress. The daunting truth, to quote a 1993 report to the FAA, is that some of the core [mediating] mechanisms in fault-tolerant systems are "single points of failure: they just have to work correctly" (Rushby, 1993).11 This problem is exacerbated by a necessary consequence of mediation: the fact that it increases the number of elements in a system.

It represents an exemplary case of what Arthur (2009) describes as 'structural deepening', leading to more unexpected interactions between elements: an effect that many observers equate with unreliability (Hopkins, 1999; Perrow, 1984; Rushby, 1993).12 (I will discuss below how such interactions create another source of ambiguity by encouraging failures to propagate.)

Such dilemmas pose manageable challenges for manufacturers, but almost insuperable difficulties for auditors. The former can mitigate the problem by designing mediating systems to be simpler – and therefore more reliable – than the elements they mediate. This does little for auditors, however, because even if manufacturers can make mediating systems simpler and more reliable than the systems they manage, the limits of empirical lab-tests still preclude any demonstration or proof of that reliability to the levels the FAA requires.

This is hardly the end of redundancy's intricacies, however. Further challenges arise from a closely related phenomenon, concerning what we might call the assumption of independence.

Independence

The spectacular 2009 ditching of US Airways flight 1549 into the Hudson River by midtown Manhattan was instigated by an abrupt and total engine failure, much like the near-ditching of the British Airways flight outside Jakarta. In both cases the redundant engines failed because they were equally susceptible to a common external pressure: Canada geese, in one case, and volcanic ash in the other.

A significant critique of redundancy audit calculations lies in the observation that they erroneously assume that redundant elements are independent in respect to their failure behavior (Popov, Strigini, May, & Kuball, 2003: 2). To say that two elements are independent is to say that the chances of one failing are not linked, in any way, to the chances of the other failing. This assumption permeates the logic of reliability calculations, but, as the 155 stunned passengers whose flight ended prematurely in the Hudson River will attest, it is far from safe.

There are a multitude of reasons to doubt the independence of redundant elements, which inevitably share a common function and, more often than not, a common design. One, as we saw above, is that they are all, necessarily, connected to common (and fallible) mediating systems (such that a failure of one engine, for instance, can lead to the erroneous shutdown of another). Another is that many failures result from pressures that rarely affect redundant elements entirely independently. A cloud of ash or flock of birds is likely to stress all the engines in an airplane simultaneously, for example, as might an internal event such as a fire or a fuel leak.13 Even idle elements may succumb to such pressures, and fail even as they wait in reserve.
12 Simplicity and reliability go hand in glove, or so many commentators suggest. Mary Kaldor, for instance, argues that Soviet technology was often more reliable than its Western equivalent because it was "uncomplicated, unadorned, unburdened [. . .] performing only of what [was] required and no more" (1981: 111). Yet modern airliners, assembled from over a million parts, are extraordinarily complex machines, and much of this complexity stems from their redundancy.
13 See, for instance, Eckhard & Lee, 1985; Hughes, 1987; Littlewood, 1996; Littlewood & Miller, 1989.

8 Sagan (1993) reports that false indications from safety devices and backup systems in the US missile-warning apparatus have nearly triggered nuclear wars!
9 Interestingly, the pilot disregarded vibration gauges indicating the error because these had gained a common-knowledge reputation for unreliability.
10 MacKenzie (2001: 228–29) highlights further problems inherent to redundancy management by illustrating the epistemic problems of discerning when, exactly, a system is malfunctioning.
11 Or, in engineering parlance: "A centralized redundancy management function will likely inherit the criticality level of the most critical task that it supports" (Rushby, 1993: 69).


[Indeed, such 'latent' or 'dormant' failures pose particular dangers because they can go undetected. In May 1995, for instance, the NTSB criticized the Boeing 737's rudder control system on this basis. The system involved two slides: one that usually did the work, and a second, redundant, slide that lay in reserve. The NTSB argued that since the system rarely used the second slide it could fail silently, leaving the aircraft a single failure away from disaster for long periods (Acohido, 1996).]

Again, engineers have practical techniques for mitigating such difficulties, much as they have techniques for mitigating the drawbacks of management systems. These usually rely on what is known as design-diversity. Design-diversity is the practice of engineering redundant elements differently whilst keeping their functions the same: producing interchangeable black boxes with dissimilar contents. The basic idea is that if elements differ from each other they will have different weaknesses and fail in different ways and at different times. Manufacturers approach design-diversity in varying ways. Some leave it to evolve spontaneously by giving isolated design teams responsibility for different elements and hoping a lack of central authority will result in different designs with independent failure behavior (Littlewood & Strigini, 1993: 9). Others adopt an opposite approach, where a central design authority actively promotes diversity by explicitly requiring different teams to use divergent approaches, solutions, and testing regimens. Software-makers, for instance, might require teams to code in different languages (Popov et al., 2000: 2). In an extension of design-diversity, known as functional diversity, manufacturers build redundant elements around different underlying phenomena (Littlewood et al., 1999: 2). Civil aircraft, for example, use pressure, radar and GPS to determine their altitude.

Design-diversity might ameliorate the engineering problems associated with independence, but, like the engineering response to mediation, it does little for auditors. Determining the dissimilarity of artifacts would require a well-defined metric of dissimilarity, yet dissimilarity – like truth, beauty, and contact lenses – inevitably rests in the eye of the beholder. Diversity, therefore, will always have a bounded and socially-negotiated meaning (Collins, 1985: 65), such that there will never be a philosophically absolute sense in which two technologies are different, or a mathematically useful measure of it that escapes the bounds of lab-tests.14 In principle, regulators could quantify the extent of a system's independence by testing all its redundant elements together as a single system to establish a correlation coefficient, but such tests, again, would be constrained by the same practical limits of all lab-tests, which reliability assessments need to transcend.
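The stakes of the independence assumption can be sketched numerically. The toy model below is mine – loosely in the spirit of the common-cause ('beta-factor') models used in reliability engineering, not a method prescribed by the FAA – and its numbers are assumptions. It supposes only that some small fraction of element failures stem from a cause shared by both redundant elements (ash, birds, fuel, a common mediating system) and shows how quickly that term swamps the naive squared estimate.

```python
# Hypothetical illustration: a small common-cause term overwhelms the
# 'independent' squared estimate. Numbers are assumptions, not FAA figures.

p_element = 1e-4   # per-period failure probability of each redundant element
beta = 0.01        # assumed fraction of failures arising from a shared cause

p_both_independent = ((1 - beta) * p_element) ** 2   # both fail for separate reasons
p_common_cause = beta * p_element                    # one shared cause defeats both at once
p_system = p_both_independent + p_common_cause

print(f"naive estimate (perfect independence): {p_element ** 2:.1e}")   # 1.0e-08
print(f"with a 1% common-cause fraction:       {p_system:.1e}")         # ~1.0e-06
```

Even a one-percent shared-cause fraction, in this toy model, moves the system two orders of magnitude away from the figure that the squaring logic alone would report.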

Propagation

For both engineers and auditors, the problems of isolation and mediation are exacerbated by the fact that failures, even in diverse and functionally-independent elements, have a tendency to propagate.

On July 19, 1989, near Sioux City, Iowa, fatigue from a manufacturing flaw led to an explosion in the tail-mounted engine of a United Airlines DC-10. The two wing-mounted engines ensured that the aircraft, with its 285 passengers, still had ample thrust, but shrapnel from the explosion tore into the fuselage, severing all three of the triple-redundant hydraulic systems and rendering the airplane uncontrollable. As with a quadruple engine failure, the aviation community had deemed a triple hydraulic failure to be almost mathematically impossible. Each hydraulic system had its own redundant pumps connected to redundant (and differently designed) power sources with redundant reservoirs of hydraulic fluid. The safety inherent in this redundancy was "so straightforward and readily obvious," industry experts confidently asserted, "that [. . .] any knowledgeable, experienced person would unequivocally conclude that the failure mode [i.e.: triple hydraulic failure] would not occur" (Haynes, 1991). (Remarkably, the crew obtained some control by manipulating the power to different engines. They made a credible attempt at a landing, saving themselves and 175 of the passengers.)

The Sioux City accident illustrates an important dimension of technological failures that redundancy calculations cannot fully account for: that they rarely keep to themselves. Indeed, the majority of fatal airplane accidents involve unanticipated chains of failures, where the malfunction of one element causes others to fail in what the NTSB call a 'cascade' (National Academy of Sciences, 1980: 41). Many aircraft systems have elements with catastrophic potential (loosely defined as a capacity to fail in a way that harms other elements in the same system). Good examples of these, Perrow suggests, are 'transformation' devices, involving chemical reactions, high temperature and pressure, or air, vapor, or water turbulence, all of which, he says, make a system particularly vulnerable to small failures that spread unexpectedly with disastrous results (1984: 9).15 The accident record supports this view. Several commercial aircraft have crashed because their fuel tanks unexpectedly exploded, such as on December 8, 1963, when a lightning strike left a Boeing 707 – Pan Am Flight 214 – buried in a Maryland cornfield.16
15 Perrow is probably too limited in his scope here, as it makes sense to expand his set of potentially catastrophic elements to include anything that contains large amounts of energy of any kind – be it electrical, kinetic, potential or chemical. An element may operate at high pressure or speed, it may be explosive, corrosive, or simply have a potentially destabilizing mass. An element may even propagate failure simply by drawing on a resource required by other systems – such as electricity, fuel, or oil. A faulty engine that leaks fuel will eventually threaten the other engines, as in August 2001 when a faulty crossfeed valve near the right engine of an Airbus A330-200 leaked fuel until none remained to power the plane and both engines failed.
16 The aircraft – en route to Lisbon from Toronto with 291 passengers – glided for 115 miles before making a high-speed touchdown in the Azores that wrecked the undercarriage and blew eight of the ten tires (GPIAA, 2004).

14 The idea that different groups of engineers, if left to their own devices, will design the same artifact differently, finds some support in the Social Construction Of Technology (SCOT) literature, which often highlights invisible 'technological options' (Bijker, Hughes, & Pinch, 1989). Yet the same literature also suggests that where designers come from similar professional cultures and have problems specified for them in similar ways, their designs will likely converge.


If taken seriously, the notion of propagation has far-reaching implications for the understanding of redundancy and its relationship to reliability. Consider, for instance, the United Airlines flight outlined above, and what it suggests about redundant engines. "Two engines are better than one," writes Perrow, "four better than two" (1984: 128). This seems simple enough and is perhaps correct, but his axiom is less simple than it appears. If, when engines fail, they do so catastrophically, fatally damaging the aircraft – as happened to United Flight 232 in July 1989 – then it is not at all clear that four engines are safer than two. In fact, it is hypothetically possible that four engines could be much less safe than two.

Imagine, for the sake of example, that an aircraft can have a configuration of either two or four engines. It can function adequately with only one engine, and the engines enjoy mathematically perfect independence in their failure behavior. Further imagine that the chance of any given engine failing during a flight is one-in-ten (it is a very unreliable design!). Also, however, imagine that one out of every ten engines that fail will explode and destroy the airplane. (Of course, the airplane also suffers a catastrophic failure if all the engines fail in the same flight.) Now, it follows that an aircraft has a higher chance of an all-engine failure with the two-engine configuration than with four, but it also enjoys a lower chance of a catastrophic explosion. The math works out such that the combined risk of any catastrophic event during a flight (an all-engine failure or an explosion) is higher with a four-engine configuration than with two. This is to say that two engines would be safer than four.17 Of course, the probabilities in this example use what engineers might call very relaxed empirical assumptions, but understand that Boeing has, in fact, made this very argument, albeit using very different numbers. Advocating for lower restrictions on two-engined airplanes, the manufacturer suggested that its 777 is safer with two engines than with four because of the reduced risk of one failing catastrophically (Taylor, 1990, cited in Sagan, 2004: 938).
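The arithmetic of this thought experiment can be checked directly. The sketch below uses the text's deliberately relaxed numbers – a one-in-ten chance of an engine failing per flight, one in ten of those failures destroying the airplane, perfect independence – and the three-way split into 'works', 'fails benignly' and 'explodes' is simply one way of organising those same assumptions.

```python
# Check of the toy example above. Per flight, each engine independently either
# works (0.90), fails benignly (0.09), or explodes and destroys the airplane (0.01).
# A flight is catastrophic if any engine explodes or if no engine keeps working.

def p_catastrophe(n_engines: int) -> float:
    p_work, p_benign_fail, p_explode = 0.90, 0.09, 0.01
    p_no_explosion = (p_work + p_benign_fail) ** n_engines
    p_no_explosion_and_none_work = p_benign_fail ** n_engines
    # safe flight = no explosion AND at least one engine still working
    return 1 - (p_no_explosion - p_no_explosion_and_none_work)

for n in (2, 4):
    print(f"{n} engines: {p_catastrophe(n):.4f}")
# prints roughly 0.0280 for two engines and 0.0395 for four: with these numbers,
# the extra engines raise the overall risk of a catastrophic flight.
```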
17 This calculation is hopelessly incomplete, moreover, as exploding engines might damage more than just the other engines, and disable other critical systems, such as the hydraulics – as in the Sioux City accident.

Again, propagation is a manageable engineering problem. Aircraft manufacturers can mitigate it by partitioning or isolating different elements. They do this by physically segregating and shielding them from each other; separating engines on the wing, for example, and shielding them to contain broken fan blades. Similarly, high-grade electronics are sometimes encased in ceramic for this reason. Once more, however, this is a practical approach to an engineering problem that offers little succor to auditors seeking to quantify its effectiveness. An appreciation of propagation implies that a perfect measure of reliability-from-redundancy must account, not only for the independence between redundant elements in the same system, but also for the independence of these elements from other functionally unrelated elements in separate systems. In Perrow's (1984) terms, this would amount to an exact measure of the 'coupling' in a technology; and, as several observers have pointed out, such measures are impractical (e.g.: Hopkins, 1999). Isolation cannot solve this dilemma. It might help improve system safety, but perfect isolation is impossible in an integrated system, as the accident record eloquently testifies. Factoring it into reliability calculations, therefore, would require a means of quantifying it. This, in turn, would require a set of calculative tools by which to assess another irreducibly subjective variable.

This subjectivity occasionally becomes visible in negotiations between different national regulators. During the certification of the Boeing 747-400, for example, the FAA and its European counterpart, the JAA,18 differently interpreted an identically worded regulation governing the isolation of redundant wiring. The JAA interpreted the word 'segregation' more conservatively than the FAA, forcing Boeing to redesign the wiring of the aircraft late in the certification process (Government Accountability Office, 1992: 16). Because of this, the 747-400 now exists in two slightly different designs. Ultimately, therefore, the chance of failure propagation in an airplane must be assessed by human judgement: fallible and unquantifiable. This leads neatly into a (but certainly not the) final shortcoming of redundancy calculations.

People

On 29 December 1972, just outside Miami, the crew of Eastern Airlines Flight 401 became so fixated with a faulty landing-gear light that they failed to notice they had inadvertently disengaged the autopilot. They continued in their distraction until the aircraft smashed into the Everglades, killing 101 of the 176 passengers.

Another often hidden aspect of redundant systems is that they require people to build, maintain and work them, and people are far from infallible. The NTSB has estimated that 43% of fatal accidents involving commercial jetliners are initiated by pilot error (Lewis, 1990: 196).19 'Pilot error' is a flexible term, and poor interface design – misleading displays, for instance – is undoubtedly responsible for many such errors (Perrow, 1983), but people are undeniably fallible. They get ill. They get impatient; stressed; scared; distracted and bored. They make mistakes. Indeed, there exists an extensive literature on human error as it relates to expert systems (e.g.: Reason, 1990).20

In modern civil aviation, people, in the capacity of pilots, frequently serve as the functional equivalent of mediating elements: monitoring highly automated systems, and switching between them when failures occur. Humans can be uniquely versatile in this role because they can respond creatively to unanticipated errors and interactions (Hollnagel, 2006: 4).

18 The Joint Aviation Authorities (JAA) was the precursor to what is now the European Aviation Safety Agency (EASA).
19 A surprising number happen when pilots misread navigational instruments – usually under stress – and fly into the ground. (Euphemistically known as 'Controlled Flight Into Terrain' or CFIT.) There were at least 43 CFIT incidents involving large commercial jets in the decade between 1992 and 2002 (Flight Safety Foundation, 2004).
20 Human errors in redundant systems can also be latent, especially in systems where – unlike driving in Paris – errors do not automatically and immediately reveal themselves (Reason, 1990: 173–83). Pilots who misunderstand the procedures for shutting down an engine, for instance, may never discover their error until it is too late.


Recall that the crew of the hydraulically stricken DC-10 outside of Sioux City were able to save many lives by steering the airplane with the throttles: a hitherto unimagined response to an 'impossible' problem. Despite such advantages, however, people are far from flawless mediators. The job of monitoring highly stable systems for very rare failures arguably requires an unrealistic capacity for boredom, while offering little practice for whatever interventions (invariably to be performed under stress) a failure might require (Reason, 1990: 180–82). A relatively common mistake in twin-engine aircraft, for instance, is for the pilot to respond to an engine failure by shutting down the wrong engine. (Here we see a failure cascade, with one engine indirectly precipitating the failure of another because of their common link at the level of the pilot.)21 They must also interpret what are often ambiguous signals from indicators, distinguishing between false alarms, which are quite common, and real failures.

As with the other shortcomings of redundancy, designers are able to mitigate the problems that people pose. Ironically, they often do this through redundancy: commercial aircraft have two pilots, for instance.22 Diversity is also invoked in this context: such as in the rule that requires twin-engined aircraft that traverse wide oceans (so-called ETOPS flights) to have each engine maintained by different people. Of course, such practical measures offer little to formal reliability calculations. Even if it were possible to quantitatively assess the reliability of individual human beings – itself an enormously doubtful proposition – accurate measures of their role in the larger system would have to account for inscrutable social phenomena arising from their interactions. Where more than one person has responsibility for a single task, for instance, there is a risk of what Sagan (2004: 939) calls 'social shirking': the mutual belief of each person that the other will take up any slack.

Rituals of verification

Redundancy, as an engineering technique, undoubtedly enhances the reliability of many technological systems. Indeed, there is a very real sense in which it makes commercial air travel possible. Yet it comes with some perverse consequences that mean it does not improve reliability in every instance, and rarely improves it as much as straightforward reliability calculations might suggest. [There is a continuing debate, in the organisational sociology literature, about the relative benefits of redundancy (e.g.: Clarke, 1993: 680), especially its role in social systems (e.g.: La Porte & Consolini, 1991; Sagan, 2004). It is probably fair to say, however, that both sides would agree on this.]
21 Failures need not originate with the people operating the system directly. Investigators of a multiple engine failure on a Lockheed L-1011, for instance, determined the engines were 'united' by their maintenance. The same personnel had checked all three engines, and on each they had refitted the oil-lines without the O-rings necessary to prevent in-flight leakage. For further examples of maintenance-induced common-mode failures see Ladkin (1983).
22 Indeed, many complex social organizations lean heavily on redundant personnel, including aircraft carriers, missile control centers, and air traffic control networks (see e.g.: LaPorte, 1982).

For all the reasons outlined above, redundancy's effects are impossible to quantify exactly, and this undermines its value as a calculative tool. Manufacturers consider its manifest shortcomings and ambiguities in great depth, and routinely employ elaborate formal tools to analyze their implications. Failure Modes and Effects Analysis (FMEA), for instance, is a tool that allows engineers to address the problems of independence and isolation by mapping each possible interaction in a system as a fault-tree. But while tools like FMEA are undoubtedly useful in building a more reliable airplane, they are much less useful for quantifying the extent of this reliability. As accident reports periodically attest (e.g.: National Transportation Safety Board, 2006: 10), FMEA and its kin require as much art as science from the people who employ them – it being impossible to foresee every possible interaction between parts, or to calculate the degree to which one has (Hollnagel, 2006: 15). Such tools are primarily design-, rather than audit-, aids, therefore. Much like the techniques they assess, they serve the architects of airframes better than their auditors.

Genuinely accurate measures of the reliability that redundancy offers would require the quantification of variables such as the degree of independence, isolation and similarity in a system, as well as measures of that system's human dimensions, and many other variables besides. FMEA cannot perform this function. No tool can. Such variables are unavoidably subjective, and yet, at the levels of reliability at which airplanes must operate, the uncertainties they create can be highly consequential.

Aviation regulators, much like auditors in other spheres, are aware of the vagaries and judgments hidden in their quantitative assessments, yet their assessments, like most audits, have little room for such vagaries. This is because, as outlined above, the ideal of mechanical objectivity (with its portrayal of machines as fundamentally quantifiable) is central to the legitimacy of their work. Why this should be so is too expansive a question to answer satisfactorily in this essay, but we might glimpse the outlines of an answer. A rhetoric of objectivity is central to the legitimacy of all audits (Van Maanen & Pentland, 1994: 54; Power, 1997),23 in part because both auditors and auditees are vulnerable to what Rothstein, Huber, and Gaskell (2006) describe as 'institutional risks' (from publics, courts and so forth). Objectivity, moreover, has long been associated with formal quantification, as Porter (1995) makes clear, and this is especially so in relation to technology (e.g.: Wynne, 1988). Where high-profile technologies (such as civil aircraft) pose readily apparent public dangers, therefore, the institutional risks are unusually high, and there are corresponding pressures on auditors to accommodate Gherardi and Nicolini's (2000: 343) 'bureaucratic vision' of safety, with its idealized model of regulation as a process governed by formal rules and objective algorithms that promise precise, value-free assessments.

23 Jasanoff (2003) argues that this is especially true of the United States; an effect, she suggests, of a distinctive American 'civic epistemology' born of strong democratic inclinations and the litigation-heavy nature of American public life. Vogel (2003: 567) echoes this view, linking the adversarial US legal system with an emphasis on highly formalized, and hence legally defensible, risk assessments in a wide range of regulatory regimes.


As we have seen, however, this ideal requires the systematic elision of calculative ambiguities. Airplane manufacturers could never quantitatively demonstrate the levels of reliability required of aircraft if they assiduously accounted for every uncertainty, and regulators could not demand such assurances without condemning the entire industry. For the sake of rhetorical legitimacy, therefore, both systematically promulgate an unrealistic vision of Type-Certification as a process governed by objective calculations, and where a calculation has to be made explicit, they hide its ambiguities.

They achieve this elision in varying ways. They negotiate the problem of propagation by largely ignoring it. As one report puts it: 'The failure of a neighbouring system or structure is not considered [by the FAA] within a system's design environment and so is not taken into account when analysing possible ways a system can fail' (National Academy of Sciences, 1980: 41). And where regulations require redundant elements to be isolated (as they do with engines), audit calculations assume that isolation to be mathematically perfect. (As outlined above, engineering practice is much more subtle, with engineers even using formal tools to explore possible correlation coefficients, but these calculations undermine rather than refine formal regulatory proof-claims, which must ignore them.) The solution to the problem of independence is much the same: from an audit standpoint, it is largely ignored; and where regulations mandate design-diversity, the calculations assume it creates perfect failure independence (even if engineers do not). Mediating systems, meanwhile, are negotiated with semantics: regulations do not designate management systems to be safety-critical, thereby excluding those systems from reliability proofs. (Even though, again, manufacturers understand their criticality.) People, however, cannot be ignored in the same way, pilots being conspicuously safety-critical to airplanes (unlike service personnel, who are less conspicuous). Reliability calculations negotiate the ambiguities of human behavior by invoking bureaucratic firewalls. This is to say that although the FAA invests heavily in training and evaluating the people who work in aviation, it assesses them separately: Type-Certification is strictly an analysis of the airplane, independent of the vagaries of its operation. In other words, reliability assessments come with implicit caveats, such as '... given proper maintenance' or '... if handled correctly'. (The regulatory separation of flight crews from airframes creates another avenue the industry can use to manage the calculative difficulties imposed by mediating elements. They can exploit the interstices between regulatory regimes by passing mediating roles, and, hence, the epistemological buck, to the pilots.)

In effect, we might say that the FAA's reliability calculations are performative more than they are functional.

As outlined above, they are hardly unique in this: audits in many domains have long been construed in a similar vein (e.g.: Power, 2003: 190). Yet Hilgartner (2000) argues that this decoupling is especially true of scientific or technological practice. In these arenas, he suggests, the more qualitative aspects of knowledge production are frequently hidden or stage-managed, and this plays an important role in constituting expert authority.24 Wynne (1988) makes a similar point, writing that technical authorities invariably present '... a dramatic, unequivocal display of scientific authority', while walling off the private space in which it was prepared. This is his 'white box', which hides the messiness of engineering behind walls of false transparency.

Inside the white box

If regulators 'white box' airplanes for reasons of rhetorical legitimacy, then what exactly is inside the box? It is certainly not the intention of this paper to suggest that airplanes are unsafe or that their engineers are careless in their methods. Quite the opposite. The manufacturers of modern aircraft are, by all accounts, both diligent and subtle, certainly to a degree that exceeds the author's ability to criticize, as the extensive reliability data collected from aircraft in service attest. Yet this same data also vindicates regulators, in that it confirms that, in the past at least, airframes have been about as reliable as the FAA has predicted them to be. This, again, is commensurate with the author's perception of backstage regulatory practice, which he is in no position to reproach (see: Downer, 2010). It does, however, pose an important question: if the calculative formalities of Type-Certification are fundamentally misleading, how then are regulators achieving such accurate assessments? If redundancy calculations are front-stage, in other words, then what is backstage?

The full answer to this question is too complex to fully demonstrate in a paper of this scope, but it is possible to outline its essential principles.25 In essence, we should understand that, even if those closest to aviation engineering understand that redundancy metrics are misleading, they have other reasons to trust in the safety of new airframes. These reasons can be summarized as (a) judgment and (b) inference.

(a) Where inevitable gaps exist between calculative models and the reality of practice, what remains is judgment. Much of the real work of airframe assessment, therefore, rests on tacit expertise, which the FAA engages with in varying ways. For instance, it mobilizes the knowledge and experience of the manufacturers by deputising some of their engineers to act as its surrogates in overseeing tests and calculations: a relationship formalized in what the FAA calls Designated Engineering Representatives, or DERs (NAS, 1980: 7).26
24 Successive studies of complex systems support his insight, highlighting deficiencies in the descriptions of technical work embodied in formal assessments (e.g.: Schulman, 1993; Woods and Hollnagel, 2006).
25 Although I have looked at it in more detail elsewhere (Downer, 2007; 2010).


Beyond the DERs, the FAA also assesses broader subjective qualities of the manufacturers. To an unrecognised degree, for instance, regulators draw heavily on their personal knowledge of the people and institutions involved in an airplane's design and manufacture to make judgements about the competence of those actors. In other words, they quietly assess the people who build airplanes in lieu of the airplanes themselves (Downer, 2010).

(b) More substantively, the FAA draws inferences from the service experience of airplanes in operation. Reliability assessments of new civil aircraft lean very heavily on inferences from the statistically well-established data from earlier, different, aircraft designs. This is viable because the architects of new aircraft are highly conservative when developing new models. Large civil aircraft change only very incrementally between generations. Innovations are extremely modest, with new technologies being withheld until their reliability has been well-established in other contexts (in military aircraft, for instance) (Downer, 2007). Insofar as a new airplane is no different from its predecessors, therefore, there is a meaningful sense in which its auditors can know its reliability before they begin calculating. (The FAA's subjective judgements of the manufacturers become more meaningful in this context: when it evaluates manufacturers, it is looking, among other things, at their commitment to conservatism.)

These practices are effective and important, but they are rhetorically weak and difficult to defend in the context of modern audit societies. Hence the need to create a space for them. Redundancy may not be an accurate tool for quantitatively assessing technological reliability, but it is essential to the ritualized theatre of verification that creates a front-stage space that both masks and legitimates a series of backstage assessment practices. There is an extent, therefore, to which FAA reliability figures should be understood as what Lampland (2010) refers to as 'provisional numbers', and formal justifications for them as what Clarke (1999) calls 'fantasy documents', in that both serve a social function that is distinct from their ostensive purpose. Again, such an arrangement is hardly unique. Indeed, the literature on contemporary organisations offers many examples of complex but ultimately invalid calculations being exploited for instrumental ends such as the negotiation of less clear-cut practices (e.g.: Bowker & Star, 1999; Lampland, 2010; Thomas, 1994). It may be true, as Shapin (1995: 255) has argued, that most of us are like Shakespeare's Cordelia, in that we expect truth to shine by its own lights, and are skeptical of claims that are lubricated by the oily art of stage management, but it is arguable that sometimes even truth needs lubrication.
26 DERs are employees of the manufacturers, usually with 15-20 years' experience, who hold key technical positions and work on the aircraft they assess.
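The inferential logic of point (b) can be given a rough quantitative shape. The sketch below is a deliberately crude illustration rather than the FAA's actual procedure: it applies the standard zero-failure (or 'rule of three') bound to an invented figure for accumulated fleet experience, and its one-line conclusion is only meant to show why such inference must lean on conservatism and judgment rather than replace them.

```python
# A minimal sketch, not the FAA's procedure: the kind of crude inference that
# service experience permits. With zero catastrophic failures of a given kind
# in T hours of fleet service, a one-sided upper confidence bound on the
# per-hour failure rate is -ln(alpha) / T (roughly 3/T at 95% confidence).

import math

def zero_failure_upper_bound(fleet_hours: float, confidence: float = 0.95) -> float:
    """Upper confidence bound on a constant failure rate after observing no
    failures in `fleet_hours` of service (Poisson assumption)."""
    alpha = 1.0 - confidence
    return -math.log(alpha) / fleet_hours

if __name__ == "__main__":
    hours = 50e6  # invented figure for a closely related predecessor fleet
    print(f"95% upper bound: {zero_failure_upper_bound(hours):.1e} failures per hour")
    # ~6.0e-08 per hour: informative, yet still far above the 1e-9-per-hour
    # order of reliability conventionally demanded of catastrophic failure
    # conditions, which is why the inference leans so heavily on the design
    # conservatism and judgment discussed above.
```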

Such lubrication undoubtedly confers benefits, as outlined above, but as sociologists have long recognized (e.g.: Hood, 2002; Hunt, 2003; Miller, 2003b) it also exerts unexpected pressures and even, on occasion, exacts costs.

Shaping systems

To understand the potential costs exacted by misleading audit-practices, it is necessary to first understand how those practices influence the objects they govern. As outlined above, numerous studies show how accounting practices can be constitutive of the objects they govern (e.g.: Burchell et al., 1985; Hopwood & Miller, 1994), and there is good reason to believe that technologies might be similarly affected. The determinist idea that technologies are shaped entirely by physical constraints has been convincingly discredited by sociological accounts that show how social influences come to shape technological systems (e.g.: Bijker et al., 1989; Mackenzie & Wajcman, 1999). We might reasonably imagine, therefore, that redundancy calculations shape airplanes. After all, the explicit and intended purpose of Type-Certification is to shape designs: bending them towards reliability, even if only partially and inconsistently. This is where the constitutive properties of certification assessments are supposed to end, however. As outlined above, they adhere to the ideal of audit neutrality (Power, 1996: 296), with performance- (as opposed to prescription-) based rules that are intended to colonize engineering practices and constrain design options only to the extent that they promote reliability, and no further (NRC, 1998). The ideal of audit neutrality has long been problematic in the accounting literature, however. As outlined above, sociologists regularly find that audits colonize practices more fully than auditors intend, often transfiguring practices in unexpected ways and with unanticipated consequences (e.g.: Miller & Napier, 1993; Munro, 2004; Neyland & Woolgar, 2002; Power, 2003; Strathern, 2000).

It is not difficult to see how redundancy, through its importance to quantitative reliability claims, might constrain airplane designs. In a broad sense, for instance, this might be reflected in the sheer prevalence of redundancy in modern airframes. On a more direct level, meanwhile, the selective ways that calculations recognize the redundancy-related problems of failure inter-dependence and failure propagation almost force manufacturers to implement either design-diversity or isolation in specific instances. The calculations formally recognize that engines can fail explosively, for instance, so manufacturers must isolate them; the calculations do not recognize that engines have interrelated failure behavior, however, so manufacturers can quantitatively justify an airplane's reliability without recourse to varying means of propulsion. This is not to say that such effects are unintended. There are good engineering reasons why manufacturers might want to isolate engines but not vary their design, even if this offers a weak justification for a quantitative reliability claim. Also, of course, redundancy is an effective technique for building, as well as assessing, highly reliable systems, and so it is very plausible that manufacturers would use it extensively even if they were unconstrained. Constraint is not colonization. As we have seen, however, designing reliability is one thing and quantitatively demonstrating it is another, and insofar as these pull in different directions we might expect the calculative rhetoric of Type-Certification to shape airplanes in unintended ways.

Establishing unequivocally that Type-Certification leads to compromised airplane designs is an unrealizable ambition, given the inherent subjectivities of technology assessment and the absence of data from equivalent aircraft built under different audit regimes (all modern airliners comply with FAR-25). Certainly no manufacturer will attest to their designs being anything less than optimal. This is not to say that assessment does not have perverse consequences, however, and if audit calculations constrain airplane designs, then it is interesting to consider how those constraints reflect the ways that calculations misconstrue redundancy.

We might begin by reexamining the neutrality of FAA reliability audits. As discussed, the need to establish quantitative proof of reliability requires the extensive use of redundancy in airplane designs, and in many cases this is probably ideal. Yet it is important to remember that there are various engineering routes to reliability. Instead of investing a limited weight allowance in adding a redundant system (with its mediating elements), for instance, it might be preferable to invest that weight in overdesigning a single element. (Airplanes have very strict weight budgets, and the weight allowance for two redundant structural beams, for instance, could equally be allocated to one beam of twice the size.) For all its usefulness, redundancy demands compromises. It increases the number of elements in a system, with complex ramifications such as magnifying the system's weight and complexity. It requires mediating systems, which further exacerbate weight and complexity, and which may themselves be loci of failure. It can also mean duplicating systems that have inherent catastrophic potential, such as engines with their fast-moving fan-blades. Indeed, such considerations have led several experts to argue that redundancy can become a primary source of instability in complex systems (e.g.: Hopkins, 1999; Rushby, 1993). As Popov et al. (2003: 2) put it: 'Redundancy [is only] a reasonable use of project resources if it delivers a substantial [...] increase in reliability, greater than would be delivered by spending the same amount on other ways of improving reliability'.

For all these reasons, it is difficult to imagine that redundancy is always the optimal design solution for reliability where it is used in airplane architectures. And, hence, it seems likely that if redundancy were not the most bureaucratically satisfying solution for demonstrating reliability, then it would be less ubiquitous in airframes. Demonstrating this is difficult, for the reasons outlined above, but some anecdotal evidence supports the idea. It is not unusual for experts, if pressed, to indicate that they sometimes feel airplanes use redundancy in areas where other design options might have been preferred, or at least explored. Perhaps the most compelling evidence for this lies in Boeing's argument, outlined above, that two engines on their 777 were safer than four (Taylor, 1990 cited in Sagan, 2004: 938): an implicit admission that the original four-engine configuration owed more to regulatory compliance than design optimization.27 To a large degree, however, the question of optimality is moot because the principle still stands. The fact is that redundancy need not be ideal from an engineering standpoint, but manufacturers need it to demonstrate compliance with certain regulations. Hence, certification requirements, through their calculative criteria, constrain manufacturing choices more than they purport to, and audit neutrality is as problematic an ideal in engineering as in any domain.
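The weight trade-off raised above (two redundant beams versus one beam of twice the size) can be made concrete with a toy comparison. The sketch below is illustrative only: the failure probabilities, the common-cause fraction and the mediating-element term are invented, and the beta-factor treatment is a textbook device rather than anything drawn from Type-Certification practice.

```python
# A toy comparison, not an engineering analysis: two ways to spend the same
# structural weight budget. All probabilities are invented for illustration.
#   Option A: one overdesigned element of twice the size (greater margin,
#             lower failure probability, nothing to mediate between).
#   Option B: two lighter redundant elements, coupled by a common-cause
#             fraction (beta) and by the mediating parts redundancy needs.

def option_a(p_strong: float) -> float:
    """Failure probability of the single overdesigned element."""
    return p_strong

def option_b(p_each: float, beta: float, p_mediator: float) -> float:
    """Failure probability of the duplicated arrangement: both elements fail
    (via a shared cause or by coincidence), or the mediating element fails."""
    both_fail = beta * p_each + ((1.0 - beta) * p_each) ** 2
    return both_fail + p_mediator

if __name__ == "__main__":
    p_strong = 1e-7    # invented: the extra margin buys a lower failure probability
    p_each = 1e-5      # invented: each lighter element is individually weaker
    p_mediator = 5e-8  # invented: switch-over logic, load paths, extra fittings
    for beta in (0.0, 0.01, 0.05):
        a, b = option_a(p_strong), option_b(p_each, beta, p_mediator)
        print(f"beta = {beta:<5} A = {a:.1e}  B = {b:.1e}")
    # beta = 0.0  -> B (~5.0e-08) beats A (1.0e-07): duplication wins on paper.
    # beta = 0.01 -> B (~1.5e-07) is already worse than A, though only Option B
    #                yields the tidy multiplication an audit can point to.
```

Under the independence assumption duplication looks clearly superior; with even a modest common-cause fraction the ranking reverses. The formal calculation can only adjudicate the first case, which is part of why redundancy remains the more bureaucratically satisfying answer.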

Framing practices

The idea that audits shape (or could shape) technologies in unexpected ways is an unfamiliar theme in the accountancy literature. A more familiar claim, however, is the idea that audits adversely shape, or colonize, organizational practices. We might expect this to be especially true in technological domains, where practices are framed in relation to the requirements of machines (or their perceived requirements), and where there is a strong commitment to an ideal of mechanical objectivity. Calculations wield more unfettered influence where they are construed to be objective.28

Redundancy calculations shape aviation practices, in part, through the engineering eventualities they publicly validate (or, more often, invalidate). Take, for instance, the way airlines train pilots. At the time of the Jakarta Incident, remember, the industry considered a quadruple engine failure so unlikely that it taught pilots to treat indications of it as an instrumentation failure. This logic still structures pilot training, despite there having been several high-altitude all-engine failures since then [all from common causes, such as fuel exhaustion (e.g.: Williams, 2003)]. Remember also the incident outside Sioux City, where the aviation industry had been so unequivocally confident that a triple hydraulic failure was impossible on a DC-10 that the airline had never simulated such a contingency during pilot training, and so the crew had to improvise ways of maneuvering the airplane using the throttles on the fly (Haynes, 1991). The engineers who designed the DC-10's hydraulic system might have imagined a triple failure to be theoretically possible, at least to the degree of some of the other contingencies that shape pilot training. These are not the people who frame training manuals, however, and these kinds of deep insights quickly get lost in quantification when they contradict the numbers.

27 There are many other potential ramifications of the way that Type-Certification manages the complexities of redundancy. For instance, we might hypothesize that the regulatory separation of operators and airframes encourages manufacturers to use pilots in redundancy-mediating roles instead of automating such functions with systems that would then need proving.
28 This is especially true of policy domains, such as the UK's, that are increasingly committed to the idea of evidence-based policymaking (Rothstein & Downer, in press).


In a much broader sense, the ideal of mechanical objectivity (to which redundancy calculations are integral) creates its own unintended consequences for organizational practices. We saw above how idealized notions of technology assessment in aviation create a backstage space for its informal practices, yet, as Sims (1999) argues, formal perceptions of technological work can sometimes be disruptive of the practices they mask. This happens, in part, because when technological discourse emphasizes objectivity and hides social processes, it is easy for observers to lose sight of those processes and misconstrue the organizational value of expertise (Perrow, 1984: 273). Indeed, many observers suggest that audit regimes in general (e.g.: O'Neill, 2002; Power, 1997), and the techno-scientific ideal of mechanical objectivity in particular (e.g.: Wynne, 1988; Sims, 1999; Jasanoff, 2003), both serve to marginalise expertise that cannot be quantified: sometimes bounding crucial expertise and judgments out of assessments (and, thereby, out of decision-making). Given the centrality of judgement in redundancy calculations (and, therefore, in the reliability assessments built on them), we should be sensitive to the constraints of portraying them as formal and objective.

Many such constraints arise when the informal practices of aviation regulation come to light and regulators are seen to be defying their own calculative rhetoric, as this then appears as either a failure or a corruption of the system. As noted, for instance, the practice of deputising engineers for their tacit knowledge (in the DER system) is vital to the FAA's assessment work, offering regulators access to expertise and experience on which they rely. In relation to the regulatory ideal, however, it appears like a conflict of interest, and so the FAA repeatedly has to re-justify it to formal investigations launched in the wake of public exposés that portray it as dysfunctional (Downer, 2010).

Such situations are not unusual. By fronting an ideal of objectivity in its assessment practices, the FAA often becomes prisoner to its own performances. Aviation is a highly scrutinized industry, and the closer that institutions are scrutinized (be it in courts, the press or public hearings), the more they must conform to their own representations. High-profile accidents, in particular, inevitably open up the backstage of technological practice to intense public scrutiny. This invariably reveals a jarring disjuncture between practice and portrayal, begetting an image of culpability and, with it, a demand for accountability.29 For this reason, high-profile failures are invariably followed by acts of censure that bolster institutional credibility more than outcomes, while exacting long-term and under-recognized costs. As the NRC has bluntly conceded: '... pressures have sometimes caused the FAA to take corrective action that is otherwise unjustified' (NRC, 1998). Such action invariably conforms to the objective mechanical ideal: misconstruing the subjectivity of assessment by narrowing the scope for judgement, for example, or by draining experts and expert knowledge from organisations through the ritual sacrifice of hard-won careers just when those organisations need it most.
29 An important critical theme in the sociology of accounting literature is that the deepening of audit-cultures can lead to a decline in organisational trust (Power, 2003: 190).

Perhaps more consequential than the direct effects of censure on institutional cultures, moreover, are its indirect effects. Several studies have observed that institutional risks, such as the threat of legal, vocational or reputational damage, create incentives for managers to adopt an extremely defensive organisational posture (e.g.: Rothstein & Downer, in press; Rothstein et al., 2006). They further suggest that this can conflict with the management of the societal risks (such as passenger safety) that should be the primary concern of organisations like the FAA. (There may well come a time when it can no longer justify its DER program.) Indeed, such effects are already apparent in some technological domains, where formal metrics have eclipsed informal practices. Take, for instance, this lament from a British expert working in aviation procurement for the Ministry of Defence: 'When performing quality assessment [...] you used to go away and you used to meet the workers, and you could get a feel for those people and you'd get a feel for the company. And it would be people like me that would have that feel, [...] but now we just don't get that insight at all [...]. It's a shame.'30

A final notable consequence of the calculative ideal that redundancy calculations help reify is a misplaced perception of the broader utility of technology regulation. Where engineering audit-practices downplay the role of qualitative judgement in favour of an ideal that suggests there is some endpoint of total or complete rigour, they reify an idealized relationship between experts and policymakers where science 'speaks truth to power' (Jasanoff & Wynne, 1998). Redundancy is deeply woven into the epistemic fabric of modernity: silently implicated in the reliability assessments publics and policymakers use to make important choices about a wide variety of dangerous technological systems. In aviation this works quite well because the assessments tend to be accurate. As discussed above, however, aviation reliability assessments are accurate despite, rather than because of, redundancy calculations, which largely just mask less quantifiable practices. Just because government regulators can invoke such calculations to accurately assess airplanes, therefore, does not necessarily imply they can do so to accurately assess other systems. We expect an increasing number of technologies to be as reliable as modern aircraft, yet few technological domains share the extensive service experience and commitment to design conservatism that the FAA leverages in its informal practices. The implications of this are important.

Coda

It is important to recognize that the counting practices of high technology, like those of other audit domains, are worthy of sociological consideration and amenable to sociological deconstruction. Like all institutional audit-practices, they are socially constructed and materially constitutive: shaping technological systems and colonizing engineering practices.
30 Anonymous interview.


They may even be more constitutive in engineering than in other domains. Modernity widely subscribes to a pervasive but misleading ideal that construes technologies as more objectively knowable than most audit objects, and this imbues engineering calculations with uncommon influence. In the specific case of FAA reliability assessments, the ideal of objectivity leans heavily on redundancy because of its usefulness in enacting quantitative proof. It allows auditors to translate the vicissitudes of engineering practice into the formal language of regulatory assessment. Yet this function depends on a false equivalence between redundancy as engineering tool and redundancy as audit paradigm, where the latter is misconstrued as accurately reflecting the former. And, given the socially and materially constitutive nature of formal reliability assessments, this disjuncture has complex ramifications with significant social consequences. Calculative practices like redundancy are not important because they accurately represent the technological world but because they claim to, and because such claims become institutionalized in ways that shape technological designs, influence technological practices, and frame important technological choices.

Acknowledgements

The author would like to thank Chick Perrow; Trevor Pinch; Scott Sagan; Michael Lynch; Ron Kline; Terry Drinkard; David Demortain; Anne Harrington de Santana; Matthais Englert; Jeanette Hoffman; Bridget Hutter; Peter Miller and Anna Phillips: all of whom have read and commented on this paper at different times in its long gestation. Also some very indulgent reviewers, plus innumerable interviewees and email correspondents who will remain anonymous but who were integral to its technical details, such as they are. All errors, of course, are the author's alone.

References

Acohido, B. (1996). Pittsburgh disaster adds to 737 doubts. Seattle Times, 29 October.
Arthur, B. W. (2009). The nature of technology: What it is and how it evolves. New York: Free Press.
Beck, U. (1992). Risk society: Towards a new modernity. London: Sage.
Bijker, W., Hughes, T., & Pinch, T. (Eds.). (1989). The social construction of technological systems: New directions in the sociology and history of technology. Cambridge, MA: MIT Press.
Bowker, G., & Star, S. L. (1999). Sorting things out: Classification and its consequences. Cambridge, MA: MIT Press.
Burchell, S., Clubb, C., & Hopwood, A. (1985). Accounting in its social context: Towards a history of value-added in the United Kingdom. Accounting, Organizations and Society, 10(4), 381-413.
Clarke, L. (1993). Drs. Pangloss and Strangelove meet organizational theory: High reliability organizations and nuclear weapons accidents. Sociological Forum, 8(4), 675-689.
Clarke, L. (1999). Mission improbable: Using fantasy documents to tame disaster. Chicago: University of Chicago Press.
Collins, H. (1985). Changing order. London: Sage.
Collins, H., & Pinch, T. (1998). The golem at large: What you should know about technology. Cambridge: Cambridge University Press.
Davenhill, R., & Patrick, M. (Eds.). (1998). Rethinking clinical audit: The case of psychotherapy services in the NHS. London: Routledge.
Day, P., & Klein, R. (2001). Auditing the auditors: Audit and the national health service. London: Nuffield Trust.
Downer, J. (2007). When the chick hits the fan: Representativeness and reproducibility in technological tests. Social Studies of Science, 31(1), 7-26.
Downer, J. (2010). Trust and technology: The social foundations of aviation regulation. British Journal of Sociology, 61(1), 87-110.
Eckhard, D., & Lee, L. (1985). A theoretical basis of multiversion software subject to coincident errors. IEEE Transactions on Software Engineering, 11, 1511-1517.
Gherardi, S., & Nicolini, D. (2000). To transfer is to transform: The circulation of safety knowledge. Organization, 7(2), 329-348.
Government Accountability Office, GAO (1992). Aircraft certification: Limited progress on developing international design standards. Report to the Chairman, Subcommittee on Aviation, Committee on Public Works and Transportation, House of Representatives. Report No. 147597, August.
Government Accountability Office, GAO (1993). Aircraft certification: New FAA approach needed to meet challenges of advanced technology. Report to the Chairman, Subcommittee on Aviation, Committee on Public Works and Transportation, House of Representatives. GAO/RCED-93-155, September 16.
Haynes, A. (1991). The crash of United Flight 232. Paper presented at NASA Ames Research Center, Dryden Flight Research Facility, Edwards, California, 24 May.
Hilgartner, S. (2000). Science on stage: Expert advice as public drama. Stanford, CA: Stanford University Press.
Hollnagel, E. (2006). Resilience: The challenge of the unstable. In E. Hollnagel, D. Woods, & N. Leveson (Eds.), Resilience engineering: Concepts and precepts (pp. 9-17). Aldershot: Ashgate.
Hood, C. (2002). The risk game and the blame game. Government and Opposition, 37, 15-37.
Hopkins, A. (1999). The limits of normal accident theory. Safety Science, 32, 93-102.
Hopwood, A., & Miller, P. (1994). Accounting as social and institutional practice. Cambridge, UK: Cambridge University Press.
Hughes, R. (1987). A new approach to common cause failure. Reliability Engineering, 17, 21112136.
Hunt, B. (2003). The timid corporation: Why business is terrified of taking risk. Leicester, UK: Wiley.
Jasanoff, S. (2003). Breaking the waves in science studies: Comment on H. M. Collins and Robert Evans, 'The Third Wave of Science Studies'. Social Studies of Science, 33(3), 389-400.
Jasanoff, S., & Wynne, B. (1998). Science and decisionmaking. In S. Rayner & E. Malone (Eds.), Human choice and climate change, Vol. 1: The societal framework (pp. 1-88). Pacific Northwest Labs: Batelle Press.
Komons, N. (1978). Bonfires to beacons: Federal civil aviation policy under the Air Commerce Act, 1926-1938. Washington, DC: US Government Printing Office.
Krohn, W., & Weyer, J. (1994). Society as a laboratory: The social risks of experimental research. Science & Public Policy, 3(21), 173-183.
Ladkin, P. (1983). The Eastern Airlines L1011 common mode engine failure. <http://www.ntsb.gov/ntsb/brief.asp?ev_id=20001212X19912&key=1> (accessed 5 May).
Lampland, M. (2010). False numbers as formalizing practices. Social Studies of Science, 40(3), 377-404.
LaPorte, T. (1982). On the design and management of nearly error-free organizational control systems. In D. Sills, V. Shelanski, & C. Wolf (Eds.), Accident at Three Mile Island. Boulder, CO: Westview.
La Porte, T., & Consolini, P. (1991). Working in practice but not in theory: Theoretical challenges of high reliability organizations. Journal of Public Administration Research and Theory, 1, 19-47.
Lewis, H. (1990). Technological risk. New York: Norton & Co.
Littlewood, B., Popov, P., & Strigini, L. (2002). Assessing the reliability of diverse fault-tolerant systems. Safety Science, 40, 781-796.
Littlewood, B., & Strigini, L. (1993). Validation of ultra-high dependability for software-based systems. Communications of the ACM, 36(11), 69-80.
Littlewood, B., Popov, P., & Strigini, L. (1999). A note on the reliability estimation of functionally diverse systems. Reliability Engineering and System Safety, 66, 93-95.
Littlewood, B. (1996). The impact of diversity upon common cause failure. Reliability Engineering and System Safety, 51, 101-113.
Littlewood, B., & Miller, D. (1989). Conceptual modeling of coincident failures in multi-version software. IEEE Transactions on Software Engineering, 15(12), 1596-1614.
Lloyd, E., & Tye, W. (1982). Systematic safety: Safety assessment of aircraft systems. London: Civil Aviation Authority.
MacKenzie, D. (2001). Mechanizing proof: Computing, risk, and trust. Cambridge, MA: MIT Press, pp. 228-229.
MacKenzie, D. (1996). Knowing machines: Essays on technical change. Cambridge, MA: MIT Press.
Mackenzie, D. (1996b). How do we know the properties of artifacts? Applying the sociology of knowledge to technology. In R. Fox (Ed.), Technological change. Amsterdam, pp. 249-251.
MacKenzie, D., & Wajcman, J. (Eds.). (1999). The social shaping of technology (2nd ed.). Buckingham: Open University Press.
Meyer, J., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83, 340-363.
Miller, P., & Napier, C. (1993). Genealogies of calculation. Accounting, Organizations and Society, 18(7-8), 631-647.
Miller, P. (1994). Accounting as social and institutional practice: An introduction. In A. Hopwood & P. Miller (Eds.), Accounting as social and institutional practice. Cambridge: Cambridge University Press.
Miller, P. (1998). The margins of accounting. European Accounting Review, 7, 605-621.
Miller, P. (2003). Governing by numbers: Why calculative practices matter. In A. Amin & N. Thrift (Eds.), The Blackwell cultural economy reader. London: Blackwell.
Miller, P., Kurunmäki, L., & O'Leary, T. (2008). Accounting, hybrids and the management of risk. Accounting, Organizations and Society, 33(7-8), 942-967.
Munro, E. (2004). The impact of audit on social work practice. British Journal of Social Work, 34(8), 1075-1095.
National Academy of Sciences, NAS (1980). Improving aircraft safety: FAA certification of commercial passenger aircraft. Committee on FAA Airworthiness Certification Procedures, Assembly of Engineering, National Research Council. Washington, DC: NAS.
National Research Council (1998). Improving the continued airworthiness of civil aircraft: A strategy for the FAA's aircraft certification service. Washington, DC: National Academy Press.
National Transportation Safety Board, NTSB (2006). Safety report on the treatment of safety-critical systems in transport airplanes. Safety Report NTSB/SR-06/02, PB2006-917003, Notation 7752A. Washington, DC.
Neyland, D., & Woolgar, S. (2002). Accountability in action? The case of a database purchasing decision. British Journal of Sociology, 53, 259-274.
O'Neill, O. (2002). Called to account. Third 2002 Reith Lecture. London: BBC.
Pentland, B. (1993). Getting comfortable with the numbers: Auditing and the micro production of macro order. Accounting, Organizations and Society, 18(7-8), 605-620.
Perrow, C. (1983). The organisational context of human factors engineering. Administrative Science Quarterly, 28, 521-541.
Perrow, C. (1984). Normal accidents: Living with high-risk technologies. New York: Basic Books.
Petroski, H. (1994). Design paradigms: Case histories of error and judgment in engineering. Cambridge: Cambridge University Press.
Pinch, T. (1993). Testing - one, two, three ... testing!: Toward a sociology of testing. Science, Technology & Human Values, 18(1), 25-41.
Popov, P., Strigini, L., May, J., & Kuball, S. (2003). Estimating bounds on the reliability of diverse systems. IEEE Transactions on Software Engineering, 29(4), 345-359.
Popov, P., Strigini, L., & Littlewood, B. (2000). Choosing between fault-tolerance and increased V&V for improving reliability. DISPO Technical Report.


Porter, T. (1995). Trust in numbers: The pursuit of objectivity in scientific and public life. Princeton, NJ: Princeton University Press.
Power, M. (1996). Making things auditable. Accounting, Organizations and Society, 21(2/3), 289-315.
Power, M. (1997). The audit society: Rituals of verification. Oxford: Oxford University Press.
Power, M. (2003). Evaluating the audit explosion. Law & Policy, 25(3), 185-202.
Power, M. (1994). The audit explosion. London: Demos.
Reason, J. (1990). Human error. Cambridge: Cambridge University Press.
Rochlin, G. I., La Porte, T. R., & Roberts, K. H. (1987). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review, 40(4), 76-90.
Rothstein, H., Huber, M., & Gaskell, G. (2006). A theory of risk colonization: The spiralling regulatory logics of societal and institutional risk. Economy and Society, 35(1), 91-112.
Rothstein, H., & Downer, J. (in press). Risk-based policymaking and the institutional modulation of risk. Public Administration.
Rose, N., & Miller, P. (1992). Political power beyond the state: Problematics of government. British Journal of Sociology, 43(2), 173-205.
Rozell, N. (1996). The Boeing 777 does more with less. Alaska Science Forum, 23 May.
Rushby, J. (1993). Formal methods and the certification of critical systems. SRI Technical Report CSL-93-7, December. <http://www.csl.sri.com/papers/csl-93-7/>.
Sagan, S. (2004). The problem of redundancy problem: Why more nuclear security forces may produce less nuclear security. Risk Analysis, 24(4), 935-946.
Sagan, S. (1993). The limits of safety. Princeton, NJ: Princeton University Press.
Shinners, S. (1967). Techniques of system engineering. New York: McGraw-Hill.
Sims, B. (1999). Concrete practices: Testing in an earthquake-engineering laboratory. Social Studies of Science, 29(4), 483-518.
Strathern, M. (2000). Audit cultures: Anthropological studies in accountability, ethics and the academy. London: Routledge.
Swenson, L., Grimwood, J., & Alexander, C. (1998). This new ocean: A history of Project Mercury. Washington, DC: NASA History Office.
Taylor (1990). Twin-engine transports: A look at the future. Seattle, WA: Boeing Corporation Report.
Thomas, R. (1994). What machines can't do: Politics and technology in the industrial enterprise. Berkeley: University of California Press.
Tootell, B. (1985). All four engines have failed: The true and triumphant story of BA 009 and the Jakarta incident. London: Andre Deutsch.
Van Maanen, J., & Pentland, B. (1994). Cops and auditors: The rhetoric of records. In S. B. Sitkin & R. J. Bies (Eds.), The legalistic organization. Newbury Park: Sage.
Wickenden, D. (2006). Top of the class. The New Yorker, October 2. New York: Condé Nast.
Williams, M. (2003). The 156-tonne Gimli Glider. Flight Safety Australia, 25.
Wynne, B. (1988). Unruly technology: Practical rules, impractical discourses and public understanding. Social Studies of Science, 18, 147-167.
