Niklas Möller
STOCKHOLM 2006
This licentiate thesis consists of the following introduction and:
Möller N., Hansson S. O., Peterson M. (2005), “Safety is More Than the Antonym
of Risk”, forthcoming in Journal of Applied Philosophy.
Niklas Möller, Division of Philosophy, Department of Philosophy and the History of Technology,
Royal Institute of Technology (KTH), SE-100 44 Stockholm, Sweden.
Abstract
Möller, N., 2006. Safety and Decision-making. Theses in Philosophy from the Royal Institute of Technology
No. 12. 92 + viii pp. Stockholm. ISBN 91-7178-272-9.
Safety is an important topic for a wide range of disciplines, such as engineering, economics,
sociology, psychology, political science and philosophy, and plays a central role in risk analysis and
risk management. The aim of this thesis is to develop a concept of safety that is relevant for
decision-making, and to elucidate its consequences for risk and safety research and practices.
Essay I provides a conceptual analysis of safety in the context of societal decision-making,
focusing on some fundamental distinctions and aspects, and argues for a more complex notion than
what is commonly given. This concept of safety explicitly includes epistemic uncertainty, the degree
to which we are uncertain of our knowledge of the situation at hand. The extent to which
such a concept may be considered objective is discussed, and it is concluded that it is better
seen as an intersubjective concept. Some formal versions of a comparative safety concept are also
proposed.
Essay II explores some consequences of epistemic uncertainty. It is commonly claimed that
the public is irrational in its acceptance of risks. An underlying presumption in such a claim is that
the public should follow the experts’ advice in recommending an activity whenever the experts have
better knowledge of the risk involved. This position is criticised based on considerations from
epistemic uncertainty and the goal of safety. Furthermore, it is shown that the scope of the
objection covers the entire field of risk research, risk assessment as well as risk management.
Essay III analyses the role of epistemic uncertainty for principles of achieving safety in an
engineering context. The aim is to show that to account for common engineering principles we
need the understanding of safety that has been argued for in Essays I-II. Several important principles
in engineering safety are analysed, and it is argued that we cannot fully account for them on a
narrow interpretation of safety as the reduction of risk (understanding risk as the combination of
probability and severity of harm). An adequate concept of safety must include not only the
reduction of risk but also the reduction of uncertainty.
Keywords: conceptual analysis, safety, risk, epistemic uncertainty, epistemic values, values in risk
assessment, risk analysis, risk management, safety engineering
ISSN 1650-8831
ISBN 91-7178-272-9
Acknowledgements
I am very grateful to my supervisors Sven Ove Hansson and Martin Peterson for
their valuable comments, suggestions and support without which this licentiate
thesis would not have come about. Thanks also to my colleagues at the Division of
Philosophy for valuable suggestions and comments on earlier versions of the papers.
A special thanks goes to Lars Lindblom, Kalle Grill and Per Wikman-Svahn, for
their friendship, inspiration, careful reading and lengthy discussions. And Eva, for
her love and our never-ending deliberation.
This work has been financially supported by Krisberedskapsmyndigheten, The
Swedish Emergency Management Agency. The support is gratefully acknowledged.
Contents
Abstract
Acknowledgements
Introduction
Aim and scope of the thesis
Decision theoretical background
Preview of Essays I-III
References
Essay I
“Safety is More Than the Antonym of Risk”
Forthcoming in Journal of Applied Philosophy.
Essay II
“Should We Follow the Experts’ Advice? On Epistemic Uncertainty and
Asymmetries of Safety”
Submitted manuscript.
Essay III
“Interpreting Safety Practices: Risk versus Uncertainty”
Submitted manuscript.
Introduction
Safety considerations are important in a wide range of disciplines. Risk
assessment, risk management, risk perception and risk communication
directly revolve around questions of safety. There is also a wide range of
other disciplines where safety concerns are important: engineering,
economics, sociology, psychology, political science and philosophy. It
should therefore not come as a surprise that there are many different
conceptions of safety. This is as it should be, since different aspects are
more or less important depending on what our goals are. Depending on
whether we are interested in how non-experts think about (“perceive”) risk
and safety, or what role such concepts have in forming governmental
policies, or how we should construct a bridge, we might expect quite
different – yet hopefully related – conceptualisations of safety.
That does not mean that all conceptualisations are equally good. Quite
the opposite, since an understanding of a concept in one field often has a
bearing on another, related field. This is especially true in such an
interdisciplinary field as risk and safety research. Here, influences from
many different disciplines come together, and it is of the utmost
importance that the concepts we use are clear and adequate. When we set
safety as a goal for our systems and processes, or claim that one type of
technology is safer than another, we must ensure that the concepts that we
use are the ones we should use. Otherwise we are not addressing the right
question, and then the answer does not have the right significance. This is
not only a logical possibility. In this thesis I claim that common conceptions
of safety used in risk research and risk practice are deficient, and I develop a
more adequate concept. Also, I point out some of the consequences of a
better understanding of safety for risk and safety research and practices.
Aim and scope of the thesis
The aim of this thesis is to develop a concept of safety that is relevant for
decision-making, and to elucidate its consequences for risk and safety
research and practices. Arguably, the concept of safety is important in
decision-making, since concern about safety is a significant factor in many
decision processes.
The concerns on which I focus are primarily normative. In developing
and defending a concept of safety, I argue for a certain understanding of
how we should look at safety as opposed to “simply” how we do look at it.
The conceptual analysis of safety is closely related to the use of this concept
in decision-making. There are two basic approaches to this relation.
According to one of them, safety is a factual concept much like length or
mass. Whether something is more or less safe is analogous to whether one
person is taller than the other: with careful measurement we can find the
answer that is there. On such an understanding of safety, safety concerns
can be treated as a factual input in the overall decision process. Risk and
safety assessment can then be successfully isolated from the normative
dimension of decision-making.1 According to the other approach to the
relation between safety and decision-making, safety cannot be separated
from the normative aspects of decision-making, since there are essential
normative aspects already in the safety assessment. I argue for this second
way of looking at the relation between safety and decision-making.2
1 Cf. e.g. Ruckelshaus (1983) and the treatment in Mayo (1991), 252.
2 Mainly in Essay II.
knowledge and are therefore of interest for philosophy of science.3 Moral
philosophy deals with the “narrow” question of how to act, viz. how to act
morally right. The growing sub-field of practical reasoning deals with how we
should reason about what to do in more general terms (not exclusively
dealing with the moral question).4 However, decision-making has most
systematically been studied in decision theory. Several decision theoretical
themes and concepts are highly relevant for the essays in the thesis, notably
the conceptualisation of probability, epistemic uncertainty, and utility, as
well as decision rules such as Maximising Expected Utility. In this section I
will give a short introduction of these topics.
Decision theory is an interdisciplinary field of inquiry of interest for
mathematics, statistics, economics, philosophy and psychology. It covers a
broad spectrum of questions involving decision-making, from experimental
studies and theories of how we do in fact make decisions (descriptive decision
theory) to theories about how we should make them (normative decision theory).5
Even though the foundations were laid in the seventeenth and eighteenth
centuries by authors such as Pascal, Bernoulli and Condorcet, modern
decision theory dates to the twentieth century and gained influence from the
1950s onwards.6
A fundamental concern in decision theory is information on which
decisions are made. Classical decision theory divides decision problems into
different categories. One category is called decision under certainty. This is when
we know what the outcome will be when we choose an alternative. Many
decisions that we make are – or can at least be approximated as – decisions
under certainty. If I have the desire for a mango and contemplate whether
3 Even though most philosophers of science have been mainly interested in basic sciences
such as physics – e.g. Kuhn ([1962] 1970), Lakatos and Musgrave (1970) – in the last
decades there has been an interest in more applied areas as well. See for example Mayo
and Hollander (1991).
4 For a recent anthology, cf. Millgram (2001).
5 Cf. Resnik (1987) for an introduction to the field, and Gärdenfors and Sahlin (1988) for
to go to my local supermarket to buy one, the decision may not be
characterised as a decision under certainty, since the store is sometimes out
of mangos. If I, on the other hand, call my friend who works there and get a
positive answer that the store is full of them, I know the outcomes of my
alternatives: a walk to the store, less money and a mango if I decide to go,
and no walk, more money but no mango if I do not.
There are many ways of categorising situations with less than certainty
about the outcome. The modern point of departure here is the seminal work
from 1921 by the economist Frank Knight.7 He made a distinction between
on the one hand “measurable uncertainty” and on the other “something
distinctly not of this character”.8 For the first kind he reserved the term
“risk”. This kind, he claims, “is so far different from an unmeasurable one that
it is not in effect an uncertainty at all”.9 For the other, “unmeasurable” kind,
he reserved the term “uncertainty”. The entities referred to as measurable or
unmeasurable are the probabilities of the different outcomes.10
In effect, this categorisation remains the basic distinction still used. In
their classical textbook, Duncan Luce and Howard Raiffa define decision-
making under risk as when “each action leads to one set of possible
outcomes, each outcome occurring with a known probability” and decision-
making under uncertainty “if either action or both has as its consequence a set
of possible specific outcomes, but where the probabilities of these outcomes
are completely unknown or not even meaningful”.11
Decision under certainty, risk and uncertainty are the three basic categories
in classical decision theory. As they have been described by Luce and Raiffa
above, however, they are not exclusive. Our knowledge of the probability
can be partially known. We may, for example, know that the probability that
a chess game between Garry Kasparov and Viswanathan Anand will end in a
tie or a win for Kasparov when he is holding the white pieces is, say, 60-70
percent, without pretending that we know what it is exactly.12 Then we
are not at all completely ignorant of the probabilities, but they are not
known with total precision, either. Some textbooks, like Michael Resnik’s
Choices, reserve the third category for when the probability of the outcomes
is unknown or only partly known.13 I will use the concept decision under
uncertainty in this latter construal, including partial as well as no knowledge of
the probabilities.
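The threefold classification can be illustrated with a small sketch. The following function is purely illustrative (the interval representation and all numerical values are my own, not taken from the text): it represents what is known about an outcome's probability as an interval and classifies the decision situation accordingly.

```python
# Illustrative sketch: classifying a decision situation by how much is
# known about the probability of an uncertain outcome. The interval
# bounds below are hypothetical examples, not values from the thesis.

def classify(p_low, p_high):
    """Classify a decision situation from an interval [p_low, p_high]
    of possible probability values for an outcome."""
    if p_low == p_high == 1.0 or p_low == p_high == 0.0:
        return "certainty"    # the outcome is known
    if p_low == p_high:
        return "risk"         # a single, known probability
    return "uncertainty"      # partial (or no) probability knowledge

# The chess example: the probability of a tie or a Kasparov win is
# known only to lie between 60 and 70 percent.
print(classify(0.6, 0.7))    # uncertainty
print(classify(0.65, 0.65))  # risk
print(classify(1.0, 1.0))    # certainty
```

On Resnik's construal, which the thesis adopts, both partial and absent probability knowledge fall under "uncertainty", as the sketch reflects.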
This distinction between decision under risk and decision under
uncertainty is fundamental in classical decision theory, where the probability
referred to is thought to be an objective concept, a property of the world
itself.14 An alternative is to construe probability as a subjective concept. In
Bayesian decision theory, probability is conceived of as a measure of the degree
of belief that an agent has in a proposition or a state of affairs (such as, say,
“it will rain tomorrow”). This is combined with a notion of utility into a
sophisticated decision system. Frank Ramsey was the first to show that it is
possible to represent the beliefs of an agent by a unique probability measure
based on some rationality assumptions and assumptions on ordering of
utilities.15 Authors such as de Finetti, von Neumann and Morgenstern, and
Leonard Savage have suggested alternative axiomatisations and
developments of Ramsey’s work.16
On a Bayesian construal, all (rational) decisions are decisions under risk
(known probabilities), since the rational decision-maker always, at least
“decision under ignorance”, an expression that is more commonly used to mark out the
case when there is not even partial knowledge of the probability. I will stick to the more
common term “uncertainty”.
14 Relative frequencies or logical (”a priori”) probability. E.g. Knight ([1921] 1957), 214-
216, 223-224.
15 Ramsey (1931).
16 de Finetti (1937), von Neumann and Morgenstern (1944), and Savage ([1954] 1972).
implicitly, assigns a probability value to an outcome. Faced with new
information, the agent may change her probability assessment (in
accordance with Bayes’ theorem), but she always assigns determinable
probabilities to all states of affairs. Critics, internal as well as external to the
Bayesian framework, have challenged the plausibility of this view. Daniel
Ellsberg, Henry Kyburg, Isaac Levi, Gärdenfors and Sahlin and others have
pointed out that there seems to be a significant difference between some
decision situations that should be given the same probability distribution
according to classical Bayesianism.17 The amount of knowledge may vary
with the situation at hand, and this seems to be relevant for decision-
making. Judging situations such as coin-tossing and the likelihood of picking
a red ball from an urn with a known configuration seems very different
from judging whether a bridge will hold or what the weather will be like in
Rome a month from now. Thus, there seems to be a difference in how
certain we can be of the likelihood of an outcome depending on the
information and situation at hand, i.e. there is an epistemic uncertainty that it
may not be reasonable to reduce to a unique probability value.
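The updating rule mentioned above can be made explicit. On the Bayesian construal, an agent who learns evidence $E$ revises her degree of belief in a hypothesis $H$ according to Bayes' theorem (a standard statement, added here for reference):

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

Note that the posterior $P(H \mid E)$ is always a single number; the critics' point is precisely that this format cannot express how well-founded that number is.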
With the inclusion of epistemic uncertainty into the Bayesian framework
the gap between classical decision theory and its Bayesian counterpart is
considerably narrowed. One common problem is how to specify this
epistemic uncertainty, the entity Knight described as “unmeasurable”. There
has been a significant amount of work on trying to specify the notion in the
literature, and in Section 4 of Essay I we provide a short overview.
A central tenet in Bayesian decision theory, dominating also in classical
decision theory, is the notion of Maximising Expected Utility. The common
idea is that each outcome can be assigned a numerical value signifying the
goodness of the outcome (the “utility”) as well as a probability value, and
that the decision-maker should pick the alternative that has the largest sum
17 Ellsberg (1961), Kyburg (1968), Levi (1974). Gärdenfors and Sahlin (1982). These
articles and others dealing with ”unreliable probabilities” are collected in Gärdenfors and
Sahlin (1988).
of the products of the utilities and the probabilities of outcomes. With the
inclusion of epistemic uncertainty into the framework, this decision criterion
has been questioned and others have been proposed.18 There is however a
fundamental problem with utility ascriptions, namely the strong assumptions
that must be in place in order for any criterion even remotely similar to the
principle of maximising utility to be meaningful. It is quite easily shown that
the utility numbers that are assigned to an outcome must conform to a
cardinal scale. This means that we should not only be able to rank an
outcome as better than another, but also tell how much better, viz. the
magnitude of distances between them. This is an assumption that has
received a great deal of criticism.19
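The decision rule and the cardinality requirement can be stated compactly (standard formulations, added for reference, not quoted from the thesis). For an alternative $a$ with possible outcomes $o_1, \ldots, o_n$:

```latex
EU(a) = \sum_{i=1}^{n} p(o_i)\, u(o_i), \qquad \text{choose the } a \text{ that maximises } EU(a).
```

For this sum to be meaningful, the utility function $u$ must be cardinal, i.e. unique up to a positive affine transformation $u' = \alpha u + \beta$ with $\alpha > 0$; a merely ordinal ranking of outcomes does not suffice.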
Most problems where safety is an issue should be considered decisions
under various degrees of uncertainty. The possibility of a harmful event
taking place is at the heart of the safety consideration. Therefore, it should
not come as a surprise that probability as well as the comparison of utility
(severity of harm) is important in risk and safety analysis.
18 Cf. Gärdenfors and Sahlin (1982) for an example as well as comparisons with earlier
attempts.
19 Apart from criticism regarding the possibility for one agent of comparing outcomes,
there is the question of how to compare utilities among different agents. This is known as
the problem of interpersonal comparisons. Cf. Harsanyi (1976), Sen (1970) and Weirich
(1984) for some influential views on the subject.
includes epistemic uncertainty, the degree to which we are uncertain of our
knowledge of the situation at hand. We also discuss the extent to which
such a concept may be considered an objective concept, and conclude that it
is better seen as an intersubjective concept. We end by proposing some
formal versions of a comparative safety concept.
By discussing the distinctions between absolute and relative safety and
that between objective and subjective safety, an initial clarification of the
concept is reached. All combinations of these have their uses, and our
conclusion is that it is more important to keep them apart than to use
only one of them. In most societal applications, however, reference seems to
be made to the objective safety concept (often in its relative form, allowing
safety to be a matter of degree).
We then analyse safety in terms of another important concept, risk.
Safety is often characterised as the antonym of risk, such that if the risk is
low then the safety is high and vice versa, where risk is construed as the
combination of probability of a harmful event and the severity of its
consequences.20 We criticise this understanding of safety by pointing to the
importance of epistemic uncertainty for safety – which is construed broadly, as
an uncertainty of the probabilities as well as the uncertainty of the severity
of the harmful events – concluding that a concept of safety relevant for
decision-making should acknowledge this aspect and thus include it in the
characterisation of safety. We also give an overview of the discussion of
uncertainty about probability ascription.
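The criticised construal can be written out (a standard formulation, not quoted from the essay). With harmful events $e_1, \ldots, e_n$, probabilities $p_i$ and severities $s_i$:

```latex
\text{risk} = \sum_{i=1}^{n} p_i \cdot s_i
```

The objection is that no term in this expression reflects how uncertain we are about the $p_i$ and $s_i$ themselves, which is why a safety concept built on risk alone leaves out epistemic uncertainty.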
Our characterisation of safety thus includes three aspects: harm,
probability and epistemic uncertainty. Another potential component of the
safety concept is then considered, viz. control. There is a connection between
the degree to which the agent is able to control an outcome and her
perceived safety. However, emphasis on the objective safety concept for
societal decision-making reveals that there is no general connection between
20 Normally as the product of the probability and the severity of the harmful event.
control and safety in the relevant interpretations of the concept: sometimes
less control enhances safety, sometimes the opposite is true. Control is thus
only an apparent aspect of safety.
A strictly subjective notion of safety cannot be very useful in societal
decision-making, but on the other hand an objective safety concept is not
easy to construe. Particularly regarding the aspect of harm, with its essential
value-ladenness, but also regarding probability and epistemic uncertainty, it
is hard if not impossible to reach an objective standard. What we can argue
for is instead an intersubjective concept, with an independence from pure
agent-subjectivity, relying on intersubjective values (in the case of harm) and
the best available expert judgements (in the case of probability and
uncertainty).
We end the essay with some formal definitions showing a way of
including an interpretation of uncertainty as degrees of confidence.
of the public in matters of risk in general are unreliable. In particular, I argue for
the reasonableness of two alleged discrepancies, Consequence Dominance (the
perceived severity of the harmful consequence is the predominant factor
that determines whether an activity is accepted or rejected) and Value
Asymmetry of Safety (avoiding bad outcomes is much more important than
receiving good ones).
To evaluate the claim of the Expert Argument, I use the common
understanding of risk as the statistical expectation value of the severity of a
harmful outcome. Such an interpretation depends on very strong premises.
Here I can grant them, however, since I show that even with such strong
premises, the Expert Argument is invalid. I argue that decisions regarding
risks in the relevant context are decisions regarding safety, and as such
considerations about epistemic uncertainty are vital (a main conclusion of
Essay I). Even if the risk is judged to be small, it is not safe unless the
epistemic uncertainty is sufficiently small as well. The vital concern is not
whether the expert knowledge of the risk is the best one available, but
whether that knowledge is good enough.
I then show that the invalidity of the Expert Argument is more than a
logical possibility by defending the Consequence Dominance. I argue for the
plausibility of the Consequence Dominance by connecting it to (and
arguing for) what I call the Knowledge Asymmetry of Safety:
Finally, I show that the scope of the Expert Argument does not limit itself
to risk management only but is evident also in risk assessment. The natural
objection against this conclusion is that the Expert Argument explicitly
mentions the recommendation of the experts and the purpose of risk
assessment is not to make any recommendations but merely to state the
scientific facts of the matter. However, epistemic uncertainty is an important
decision factor in the scientific process of risk assessment and this has a
different relevance in risk assessment than for science in general since these
two activities have different goals. The main goal of science is to gain
knowledge of the world, and it may be argued that there are acknowledged
methods of handling different grades of epistemic uncertainty, since there
are certain epistemic values integral to the entire process of assessment in
science. For risk assessment, however, the goal is safety. There is reason to
consider a potential harm relevant to risk assessment, even if the “purely”
scientific case cannot (as of yet) be made. Therefore, since there is no
possibility of “insulating” risk assessment from epistemic uncertainty and
thus the kind of normative aspects we recognised in risk management, the
objections to the Expert Argument based on considerations of epistemic
uncertainty are relevant also for risk assessment.
Essay III. The final essay, written in collaboration with Sven Ove Hansson,
analyses the role of epistemic uncertainty for principles and techniques of
achieving safety in an engineering context. The aim of this essay is to show
that to account for common engineering principles we need the
understanding of safety that has been argued for in the previous essays. On
a narrow risk reduction interpretation of safety (understanding risk as the
combination of probability and severity of harm) we cannot fully account
for these principles. This is not due to deficiencies of those principles,
however, but due to a shortcoming in the capability of the theoretical
framework to capture the concept of safety. An adequate concept of safety
must include not only the reduction of risk but also the reduction of
uncertainty.
After giving an initial theoretical background, we analyse the principles
and methods put forward in the engineering literature (giving a list of 24
principles of various levels of abstraction in Appendix 1). These principles
are divided into four categories:
(1) Inherently safe design. Minimizing inherent dangers in the process as far as
possible. This means that potential hazards are excluded rather than just
enclosed or otherwise coped with. Hence, dangerous substances or
reactions are replaced by less dangerous ones, and this is preferred to
using the dangerous substances in an encapsulated process.
(2) Safety reserves. Constructions should be strong enough to resist loads and
disturbances exceeding those that are intended. A common way to obtain
such safety reserves is to employ explicitly chosen, numerical safety
factors.
(3) Safe fail. The principle of safe fail means that the system should fail
“safely”; either the internal components may fail without the system as a
whole failing, or the system fails without causing harm. For example, fail-
silence (also called “negative feedback”) mechanisms are introduced to
achieve self-shutdown in case of device failure or when the operator loses
control.
(4) Procedural safeguards. Procedures and control mechanisms for enhancing
safety, ranging from general safety standards and quality assurance to
training and behaviour control of the staff. One example of such
procedural safeguards is the regulation requiring vehicle operators to have ample
rest between periods of actual driving in order to prevent fatigue. Frequent training
and checkups of staff are another. Procedural safeguards are important as a
‘soft’ supplement to ‘hard’ engineering methods.
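Category (2) lends itself to a simple numerical sketch. The code below is illustrative only; the factor and load values are hypothetical and not taken from the essay.

```python
# Illustrative sketch of a safety reserve: a construction is dimensioned
# to withstand loads exceeding the intended (expected) load by an
# explicitly chosen numerical safety factor. All values are hypothetical.

def required_capacity(expected_load, safety_factor):
    """Capacity the construction must have, given the chosen safety factor."""
    return expected_load * safety_factor

def is_acceptable(actual_capacity, expected_load, safety_factor):
    """Check that the design leaves the intended safety reserve."""
    return actual_capacity >= required_capacity(expected_load, safety_factor)

# A beam expected to carry 10 kN, designed with a safety factor of 3:
print(required_capacity(10.0, 3.0))    # 30.0
print(is_acceptable(35.0, 10.0, 3.0))  # True
print(is_acceptable(25.0, 10.0, 3.0))  # False
```

In the essay's terms, the point of such a factor is that the reserve covers not only known probabilistic variation in loads but also uncertainty about them.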
engineering contexts: it may be an important tool for safety, but it is not the
final arbiter since it does not deal adequately with issues of uncertainty.21
21 I would like to thank Sven Ove Hansson, Martin Peterson, Rikard Levin, Lars
Lindblom, Kalle Grill and Per Wikman-Svahn for their helpful comments on drafts of this
introduction.
References
Bernoulli, D., "Exposition of a New Theory on the Measurement of Risk",
Commentarii Academiae Scientiarum Imperialis Petropolitanae, as translated and
reprinted, Econometrica 22 ([1738] 1954), 23-36.
Condorcet, M., “Plan de Constitution, presenté a la convention nationale les
15 et 16 février 1793”, Oeuvres 12 ([1793] 1847), 333-415.
de Finetti, B., ”La prévision: ses lois logiques, ses sources subjectives”,
Annales de l’Institut Henri Poincaré 7 (1937).
Gärdenfors, P. and Sahlin, N.-E., Decision, Probability and Utility, Cambridge:
Cambridge University Press (1988).
Gärdenfors, P. and Sahlin, N.-E., ”Unreliable Probabilities, Risk Taking, and Decision
Making”, Synthese 53 (1982), 361-386.
Harsanyi, J., Essays on Ethics, Social Behavior, and Scientific Explanation,
Dordrecht: Reidel (1976).
Knight, F., Risk, Uncertainty and Profit, New York: Houghton Mifflin ([1921]
1957).
Kuhn, T. S., The Structure of Scientific Revolutions, Chicago: The
University of Chicago Press, ([1962] 1970).
Lakatos, I. and Musgrave, A. (eds), Criticism and the Growth of Knowledge,
Cambridge: Cambridge University Press (1970).
Luce, R. D. and Raiffa, H., Games and Decisions, New York: Wiley (1957).
Mayo, D. “Sociological Versus Metascientific Views of Risk Assessment”, in
Mayo and Hollander (1991).
Mayo, D. G. and Hollander, R. D. (eds.), Acceptable Evidence, Science and
Values in Risk Management. Oxford: Oxford University Press (1991).
Millgram, E. (ed), Varieties of Practical Reasoning, Cambridge MA: MIT Press
(2001).
Pascal, B., Pensées, Paris: Garnier ([1670] 1961).
Ramsey, F. P., ”Truth and Probability”, in R. B. Braithwaite (ed), The Foundations of
Mathematics, London: Routledge and Kegan Paul (1931), 156-198.
Resnik, M. D., Choices: An Introduction to Decision Theory, Minneapolis:
University of Minnesota Press (1987).
Ruckelshaus, W. D., “Science, Risk and Public Policy”, Science 221 (1983).
Savage, L. J., The Foundations of Statistics, New York: Dover ([1954] 1972).
Sen, A., Collective Choice and Social Welfare, San Francisco: Holden-Day
(1970).
von Neumann, J. and Morgenstern, O., Theory of Games and Economic Behavior,
Princeton: Princeton University Press (1944).
Weirich, P., “Interpersonal Utility in Principles of Social Choice”, Erkenntnis
21 (1984), 295-317.
Forthcoming in Journal of Applied Philosophy
1. Introduction
Even though much research has been devoted to studies of safety, the
concept of safety is in itself under-theorised.1 In most research on safety the
meaning of the term is taken for granted. A closer scrutiny will show that its
meaning is often far from well defined. The Oxford English Dictionary divides
its definition of safety into eleven denotations with several branches.2 In
technical contexts, safety is frequently defined as the inverse of risk: the
lower the risk, the higher the safety.3 In this article we are going to show that
1 The authors would like to thank Nils-Eric Sahlin, the members of the Risk Seminar at
the Department of Philosophy and the History of Technology at the Royal Institute of
Technology, and an anonymous referee for their helpful criticism.
2 Oxford English Dictionary (1989, 2nd edition).
3 In Oxford English Dictionary, the corresponding definition would be 5a, “The quality of
such a definition of safety is insufficient, since it leaves out the crucial aspect
of deficiencies in knowledge. Safety-as-the-antonym-of-risk certainly
captures important dimensions of safety, but it does not give an exhaustive
understanding of the concept.
The aim of this article is to provide a conceptual analysis of safety. We
aim to offer an analysis that captures what experts in risk and safety
research as well as ordinary laypersons would be liable to include in the
concept. Without an in-depth understanding of the concept of safety, the
subject matter of risk and safety research remains fuzzy and it is not clear
what the objectives of reducing risk and achieving safety really mean. This
article contributes to such an understanding by analysing several aspects of
safety – mainly focusing on the neglected aspect of epistemic uncertainty –
as well as by drawing central distinctions between different types of safety.
In Section 2 we introduce distinctions between absolute and relative
safety and between objective and subjective safety. In Section 3, we
investigate the relation between safety and risk, in Section 4 its relation to
uncertainty, and in Section 5 its relation to control. In Section 6 we question
whether an objective safety concept is attainable. In Section 7, the
results from the previous sections are summarised in the form of a proposed
definition of safety, and in the concluding section, Section 8, some more
general conclusions are offered.
means no harm”4 and that “[s]afety is by definition the absence of
accidents”.5 However, the absolute concept of safety is problematic in this
and many other contexts, since it represents an ideal that can never be
attained.6 Therefore, operationalisations of the safety concept that have been
developed for specific technological purposes typically allow for risk levels
above zero. Such a relative concept of safety is also expressed in the US
Supreme Court’s statement that “safe is not the equivalent of ‘risk free’”.7
According to a relative safety concept, a statement such as “this building is
fire-safe” can be interpreted as a short form of the more precise statement
“the safety of this building with regard to fire is as high as can be expected in
terms of reasonable costs and preventive actions, and the risk of a fire
spreading in the building is very low”. Another example, taken from the
American Department of Defense, states that safety is “the conservation of
human life and its effectiveness, and the prevention of damage to items,
consistent with mission requirements.”8
For our present purposes there is no need to denounce either the
absolute or the relative safety concept. They can both be retained, but must
be carefully kept apart.
Another, equally important distinction is that between objective and
subjective concepts of safety. According to the subjective concept of safety,
“X is safe” means that “S believes that X is safe”, where S is a subject from
whose viewpoint the safety of X is assessed.9 According to the objective
concept of safety, the truth of the claim “X is safe” depends on S’s beliefs
only insofar as these beliefs have an influence, through S’s action, as to
whether or not any harm will occur. For example, if you are about to drink a
4 Miller, C.O. (1988) ‘System Safety’, in Wiener, E.L. & Nagel, D.C. (eds.), Human factors in
aviation (San Diego, Academic Press), 53-80.
5 Tench, W. (1985) Safety is no accident (London, Collins).
6 On the use of “utopian” goals that cannot be attained, see Edvardsson, K. and Hansson
S. O. (2005) ‘When is a goal rational?’, Social Choice and Welfare, 24, 343-361.
7 Miller, ‘System Safety’, 54.
8 Ibid.
9 We will here leave aside the problem of selecting an appropriate subject S for a particular
safety issue.
large glass of something you believe to be gin, but which is in fact a lethal
drug, then this is a situation of subjective rather than objective safety.
As the example shows, when we talk about safety, the objective safety
concept is indispensable. If we only use the subjective safety concept we will
not have a language suited to dealing with the dangers of the real
world. On the other hand, we also need to be able to talk about (different
persons’) subjective concepts of safety. We propose to deal with this
terminological issue by reserving “safe” for the objective concept and using
appropriate belief-ascribing phrases to denote subjective safety.
The objective safety concept constitutes a terminological ideal that may
be difficult to realise. If our knowledge about every determinant of safety
cannot be considered objective, it may be impossible to construct a fully
objective concept of safety. We will return to this issue in Section 6 after
having introduced the dimensions of the safety concept.
10 Note that the standard theory of safety presupposes a relative conception of safety.
(5) risk = the fact that a decision is made under conditions of known
probabilities (“decision under risk”).11
The last definition, (5), is the interpretation of risk used in decision theory
and is of little interest in the present context. (4) may be considered as a
compound of (1) and (3), where the unwanted event has been assigned a
value (disutility) and a probability. However, there are aspects of (1) and (3)
that are not very easily included in the representation of (4). Notably, (4)
rests on the presupposition that an unwanted event can be given a precise
value, which is a strong assumption not needed in (1) and (3).
For the purpose of this article, we shall assume that probability and harm
are the major components of risk, but we do not need to assume that they
can be combined into a one-dimensional measure of risk as in definition
(4).12 It should be obvious that safety increases as the probability of harm, or
its severity, decreases. One of the definitions of safety in the Oxford English
Dictionary is fully in line with the interpretation of risk in terms of probability
and harm: “The quality of being unlikely to cause or occasion hurt or
injury”.13 A typical example taken from a safety application states: “[R]isks
are defined as the combination of the probability of occurrence of
hazardous event and the severity of the consequence. Safety is achieved by
reducing a risk to a tolerable level”.14
If an unwanted event (harm) associated with a given risk is small, it may
be incorrect to talk about safety. For example, drawing a blank ticket in a
lottery would be an unwanted event, but the avoidance of this would not be
described as a matter of safety (unless, of course, the lottery was about
something severe, such as when a person participates in a game of Russian
roulette). Thus, the nature of the unwanted event is relevant here: if its
severity is below a certain level it does not count as a safety issue.15 We will
use the term “harm”, which (contrary to “unwanted effect”) implies a non-trivial level of damage. There are many types of harm that may be relevant to
safety. We will assume that all statements about safety refer to an either
explicitly or implicitly delimited class of harms (”safety against accidents”,
“safety against device failure” etc.).
The probabilities referred to in a risk or safety analysis are in most cases not
known with certainty, and are therefore subject to epistemic uncertainty. This
aspect is paramount for the notion of safety, but is often neglected in the
safety discourse. The relevance of uncertainty for decision-making has been
shown in empirical studies, e.g. by Daniel Ellsberg.16 Ellsberg showed that
people have a strong tendency in certain situations to prefer an option with
low uncertainty to one with high uncertainty, even if the expectation value is
somewhat lower in the former case. Below we show a similar point using a
different example.17
Suppose that you are walking in the jungle, and are about to cross an old
wooden bridge. The bridge looks unsafe, but the innkeeper in a nearby
village has told you that the probability of a breakdown of this type of bridge
is less than one in ten thousand. Contrast this example with a case in which you
are accompanied by a team of scientists at the bridge, who have just
examined this particular bridge and discovered that the actual probability
that it will break is one in five thousand. Even though the probability is now
for some reason, had a desire to hurt yourself, and therefore engaged in a stunt act in
which the probability of a severe accident is very high, we might say that you chose a safe
way to kill yourself, meaning certain, but we would never say that you were safe.
16 Ellsberg, D. (1961) ‘Risk, Ambiguity and the Savage axioms’, Quarterly Journal of Economics, 75, 643-669.
17 Gärdenfors, P. and Sahlin, N.-E. (1988, [1982]) ‘Unreliable probabilities, risk taking, and decision making’, in P. Gärdenfors and N.-E. Sahlin (eds.), Decision, Probability, and Utility (Cambridge, Cambridge University Press), 313-334.
judged as higher, the epistemic uncertainty is far smaller and it is not
unreasonable to regard this situation as preferable to the first one in terms of
safety.
This kind of example indicates that safety should be a function that
decreases with the probability of harm, with the severity of harm, and with
uncertainty. In a discussion of safety against one particular type of harm, the
severity-of-harm factor is constant, and we can schematically summarise the
effects of probability and uncertainty as in figure 1. A shift from a to b
illustrates how we may have the same level of safety for different estimates
of probability, if there is a corresponding difference in uncertainty. Likewise,
the same probability estimate gives rise to different levels of safety when
there is a difference in uncertainty, which is illustrated by a shift from b to c.
Figure 1. Safety as a function of probability and uncertainty. x, y and z are levels of safety
such that x>y>z.
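The trade-off depicted in figure 1 can be sketched numerically. The functional form below is our own hypothetical illustration (the article commits to no particular formula); it is chosen only to decrease in each of the three dimensions, and inputs are on a 0-100 scale so the arithmetic stays exact.

```python
# Illustrative sketch only: the article holds that safety decreases
# with the probability of harm, its severity, and the epistemic
# uncertainty, but commits to no particular formula. The function
# below is a hypothetical index with exactly that monotonicity.

def safety_level(probability, severity, uncertainty):
    """Toy safety index: higher is safer; strictly decreasing in
    each argument (all on a 0-100 scale), mirroring figure 1."""
    return 1.0 / (1 + probability + severity + uncertainty)

# A shift like a -> b in figure 1: a higher probability estimate is
# offset by correspondingly lower uncertainty, leaving safety equal.
a = safety_level(probability=20, severity=50, uncertainty=40)
b = safety_level(probability=40, severity=50, uncertainty=20)
assert a == b

# A shift like b -> c: same probability estimate, higher uncertainty,
# hence a lower level of safety.
c = safety_level(probability=40, severity=50, uncertainty=50)
assert c < b
```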
The relevance of uncertainty for safety is also evident from the common
engineering practice of adding an “extra” safety barrier even if the
probability that this barrier will be needed is estimated to be extremely low,
e.g. in the context of nuclear waste facilities. Such extra barriers that make
the construction fail-safe are best argued for in terms of the possibility that
the probability estimate may be incorrect.18
It should thus be clear that epistemic uncertainty is a necessary aspect of
the concept of safety. A fundamental question is how epistemic uncertainty
should be characterised in more detail. This is a controversial area in which
no consensus has been reached. The most extensive discussions have been
in decision theory regarding how to express the uncertainty of probability
assessments. In order to give an indication of possible directions to develop
the concept we will conclude this section with an overview of this
discussion.
Two major types of measures of incompletely known probabilities have
been proposed. Let us call them binary and multivalued measures. A binary
measure divides the probability values into two groups, possible and
impossible values. In typical cases, the set of possible probability values will
form an interval, such as: ”The probability of a major earthquake in this area
within the next 20 years is between 5 and 20 per cent.” Binary measures
have been used by Ellsberg, who refers to a set of ”reasonable” probability
judgments.19 Similarly, Levi refers to a ”permissible” set of probability
judgments.20 Kaplan has summarised the intuitive appeal of this approach as
follows:
As I see it, giving evidence its due requires that you rule out as too
high, or too low, only those values of con [degree of confidence]
which the evidence gives you reason to consider too high or too
low. As for the values of con not thus ruled out, you should remain
undecided as to which to assign.21
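A binary measure of this kind has a very simple formal structure: a set (here an interval) of admissible probability values, with everything outside it ruled out. The following minimal sketch is our own illustration; the class name and numbers are assumptions, with the interval taken from the earthquake example above.

```python
class BinaryMeasure:
    """A binary measure of incompletely known probabilities: every
    probability value is either possible or impossible. Here the
    possible values form an interval."""

    def __init__(self, lower, upper):
        assert 0.0 <= lower <= upper <= 1.0
        self.lower = lower
        self.upper = upper

    def possible(self, p):
        """Is p an admissible probability value?"""
        return self.lower <= p <= self.upper

# "The probability of a major earthquake in this area within the
# next 20 years is between 5 and 20 per cent."
quake = BinaryMeasure(0.05, 0.20)
assert quake.possible(0.10)
assert not quake.possible(0.30)
```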
18 Contrast this with another type of safety barrier, serving a somewhat different purpose:
river dikes in the Netherlands, a case where the frequency data for water levels in different
seasons are well known, but different layers of dikes are built to contain different water
levels (summer dike, winter dike and sleeper dike).
19 Ellsberg, ‘Risk, Ambiguity and the Savage axioms’.
20 Levi, I. (1986) Hard Choices: Decision Making under Unresolved Conflict (Cambridge, Cambridge University Press).
Multivalued measures generally take the form of a function that assigns a
numerical value to each probability value between 0 and 1. This value
represents the degree of reliability or plausibility of each particular
probability value. Several interpretations of the measure have been used in
the literature, of which we will mention (1) second-order probability, (2)
fuzzy set membership, and (3) epistemic reliability:
1. Second-order probability. The reliability measure may be seen as a measure
of the probability that the (true) probability has a certain value. We may
think of this as the subjective probability that the objective probability has a
certain value. Alternatively, we may think of it as the subjective probability,
given our present state of knowledge, that our subjective probability would
have had a certain value if we had ”access to a certain body of
information”.22
As was noted by Brian Skyrms, it is ”hardly in dispute that people have
beliefs about their beliefs. Thus, if we distinguish degrees of belief, we
should not shrink from saying that people have degrees of belief about their
degrees of belief. It would then be entirely natural for a degree-of-belief
theory of probability to treat probabilities of probabilities.”23
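A second-order probability can be sketched as a (discrete) probability distribution over candidate first-order probability values. The numbers below are invented for illustration; the sketch also shows how collapsing the distribution into a single point estimate discards exactly the information that the uncertainty dimension of safety is concerned with.

```python
# Sketch of a second-order probability: a discrete distribution over
# candidate values of the first-order probability. All numbers are
# illustrative assumptions.

second_order = {
    0.10: 0.2,   # probability 0.2 that the true probability is 0.10
    0.20: 0.5,
    0.30: 0.3,
}
# The second-order weights themselves form a probability distribution.
assert abs(sum(second_order.values()) - 1.0) < 1e-9

# Collapsing the distribution into its expectation gives a single
# point estimate, but throws away the spread (the uncertainty).
point_estimate = sum(p * w for p, w in second_order.items())
assert abs(point_estimate - 0.21) < 1e-9
```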
In spite of this, the attitude of philosophers and statisticians towards
second-order probabilities has been mostly negative, due to fears of an
infinite regress of higher-and-higher orders of probability. David Hume
expressed strong misgivings against second-order probabilities.24 Similar
doubts are expressed in a modern formulation: ”merely an addition of
second-order probabilities to the model is no real solution, for how certain
are we about these probabilities?”25
22 Baron, J. (1987) ‘Second-order probabilities and belief functions’, Theory and Decision, 23,
27.
23 Skyrms, B. (1980) ‘Higher order degrees of belief’, in D. H. Mellor (ed.), Prospects for Pragmatism (Cambridge, Cambridge University Press), 189.
This is not the place for a discussion of the rather intricate regress
arguments against second-order probabilities.26 It should be noted, however,
that similar arguments can also be devised against the other types of
measures of incomplete probability information. The basic problem is that a
precise formalization is sought for the lack of precision in a probability
estimate.
2. Fuzzy set membership. In fuzzy set theory, uncertainty is represented by
degrees of membership in a set. In common set theory, an object is either a
member or not a member of a given set. A set can be represented by an
indicator function (membership function, element function) µ. Let µY be the
indicator function for a set Y. Then for all x, µY(x) is either 0 or 1. If it is 1,
then x is an element of Y. If it is 0, then x is not an element of Y. In fuzzy
set theory, by contrast, the indicator function can take any value between 0
and 1. If µY(x) = 0.5, then x is a ”half member” of Y. In this way, fuzzy sets
provide us with representations of vague notions. Vagueness is different
from randomness.
In fuzzy decision theory, uncertainty about probability is taken to be a
form of (fuzzy) vagueness rather than a form of probability. Consider an
event about which the subject has partial probability information (such as
the event that it will rain in Oslo tomorrow). Then to each probability value
between 0 and 1 is assigned a degree of membership in a fuzzy set A. For
each such probability value x, the value µA(x) of the membership function
represents the degree to which the proposition ”it is possible that x is the
probability that the event occurs” is true. In other words, µA(x) is the
possibility of the proposition that x is the probability that a certain event will
happen.27 The difference between fuzzy membership and second-order
26 Skyrms ‘Higher order degrees of belief’. Cf. Sahlin, N.-E. (1983) ‘On second order
probability and the notion of epistemic risk’, in B.P. Stigum and F. Wenztop (eds.),
Foundations of Utility Theory with Applications (Dordrecht, Reidel), 95-104.
27 On fuzzy representations of uncertainty, see Unwin, S. (1986) ’A Fuzzy Set Theoretic
Foundation for Vagueness in Uncertainty Analysis’, Risk Analysis, 6, 27-34, and Dubois, D.
and Prade, H. (1988) ‘Decision evaluation methods under uncertainty and imprecision’, in
J. Kacprzyk and M. Fedrizzi (eds.), Combining Fuzzy Imprecision with Probabilistic Uncertainty in
Decision Making (Berlin, Springer Verlag), 48-65.
probabilities is not only of a technical or terminological nature. Fuzziness is
a non-statistical concept, and the mathematical laws of fuzzy membership
are not the same as the laws of probability.
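A fuzzy representation of uncertain probability can be sketched with a simple membership function µA over probability values. The triangular shape and the particular numbers are our illustrative assumptions, not taken from the article; probabilities are given as percentages so that the arithmetic stays exact.

```python
# Sketch of a fuzzy membership function over probability values.
# mu_A(x) is the degree to which "x per cent is a possible value of
# the probability of the event" is true. The triangular shape is an
# illustrative assumption only.

def mu_A(x):
    """Peaks at 20 per cent and falls linearly to zero 10
    percentage points away."""
    return max(0.0, 1.0 - abs(x - 20) / 10)

assert mu_A(20) == 1.0   # full member: the most plausible value
assert mu_A(25) == 0.5   # a "half member" of the fuzzy set A
assert mu_A(40) == 0.0   # not a member at all
```

Note that µA need not sum or integrate to 1 over the probability values: fuzziness is a non-statistical concept, so the laws of probability do not apply to it.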
3. Epistemic reliability. Gärdenfors and Sahlin take a different approach
than the traditional Bayesian approach. They use a set of possible probability
distributions and assign to each probability representation a real-valued
measure ρ that represents the ”epistemic reliability” of the probability
representation in question.28 The specific mathematical properties of ρ are
kept open.
As should be obvious, a binary measure can readily be derived from a
multivalued measure.29 The latter carries more information, but this is an
advantage only to the extent that such additional information is meaningful.
Another difference between the two approaches is that binary measures are
in an important sense more operative. In most cases it is a much simpler
task to express one's uncertain probability estimate as an interval than as a
real-valued function over probability values. For our present purposes, we
do not have to assume that epistemic uncertainty is expressible in a
particular format, but in Section 7 we will use a multivalued measure to
illustrate the relation between probability and uncertainty.
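The derivation of a binary measure from a multivalued one, mentioned above, can be sketched by a "cut": count as possible exactly those probability values whose reliability reaches some threshold. The threshold of 0.5 and the toy reliability function are our own illustrative assumptions.

```python
# The text notes that a binary measure can readily be derived from a
# multivalued one. One standard way is a cut at a reliability
# threshold (0.5 below is an arbitrary assumption).

def derive_binary(multivalued, grid, threshold=0.5):
    """Return the set of probability values (from `grid`) that the
    cut of the multivalued measure counts as possible."""
    return {p for p in grid if multivalued(p) >= threshold}

# A toy multivalued measure peaking at 20 per cent (percentage scale
# for exact arithmetic):
def reliability(p_percent):
    return max(0.0, 1.0 - abs(p_percent - 20) / 10)

grid = range(0, 101, 5)          # 0%, 5%, ..., 100%
possible = derive_binary(reliability, grid)
assert possible == {15, 20, 25}  # roughly the interval 15-25 per cent
```

As the text observes, the derived interval carries less information than the multivalued measure itself, but it is often easier to elicit and work with.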
To say that a certain risk is more controllable for an agent than another risk
means, in the present context, that there is a more reliable causal relationship
between the acts of the agent and the probability and/or the severity of the
harm.31 Controllability can be conceived either as an objective or a
subjective concept. Objective controllability is hard if not impossible to
measure, but subjective controllability can be measured (on an ordinal scale)
by asking respondents to indicate to what degree they can control various
processes, for instance by ticking the appropriate box on a Likert scale.
At first sight one might believe that, everything else being equal, more
controllability implies more safety. However, this does not seem to hold in
general. Consider two nuclear power plants, one of which runs almost
automatically without the intervention of humans, whereas the other is
dependent on frequent decisions taken by humans. If the staff responsible
for the non-automatic plant is poorly trained, it seems reasonable to
maintain that the automatic plant is safer than the non-automatic one because
the degree of controllability is lower in the automatic plant. Arguably, even if the staff are excellently trained, a high degree of control may lower safety.
Consider a third nuclear plant that is much less automated than the ones we
have today. Due to cognitive and mechanical shortcomings, a human being
could never make the fast adjustments that an automatic system does, but
this plant has excellent staff that perform as well as any human being can be
expected to do under the circumstances. In spite of this increased control
this too would be an unsafe plant. The reason is that the probability of
accidents due to human mistakes would be much greater than in a properly
designed, more automatic system.
The effects of control on safety in our nuclear plant example can be
accounted for in terms of the effects of control on probability. If increased
human control decreases the probability of an accident, then it leads to
higher safety. If, on the other hand, it increases the probability of accidents,
then it decreases safety. Hence, this example does not give us reason to add
31 This is an extension of the psychometric concept, which only regards the subjective dimension.
control to the three dimensions of safety that we have already listed; it acts
here via one of the dimensions we already have. However, there are other
cases in which such a reduction is not equally easy to perform. For example,
consider a person sitting in a car next to the driver in heavy traffic. Even if she judges the driver to be just as skilled as herself, she
may feel less safe than if she were herself the driver, i.e. in control. She may
agree that the probability of an accident is the same in the two cases, but
nevertheless feel much safer when able to control the situation. Another
example would be a person who is afraid of being robbed, and therefore
carries a gun. Even if she were convinced by statistical evidence showing
that robbery victims who carry arms run a greater risk of injury or death, she
would feel safer when carrying a gun.
For the last two examples, the distinction between subjective and
objective safety is essential for the analysis. Being the driver or carrying a
gun makes the person feel safer. However, in both cases it would be sensible
for someone else to say: “You only feel safer. In fact, you are exposed to a
greater risk and therefore you have lost rather than gained in safety.” Given
our choice in Section 2 of an objective safety concept (or rather, a safety
concept that is as objective as possible) these and similar cases can be dealt
with by showing that the control factor is relevant only for a subjective
concept of safety and should therefore be excluded from the analysis.
In some safety issues, the effects of control can be reduced to
uncertainty rather than probability. A preference for driving rather than
being a passenger may be a case of this. I may judge the other to be just as
good a driver as I am, but I am more certain about my own skills than about
those of the other. Hence, I am safer (from a rational subjective point of
view) if I am driving, even if I judge our skills to be equal. Here, the control
dimension is reduced to the uncertainty dimension. However, for an external
observer, the degree of epistemic certainty about their skills might be equal;
hence from that viewpoint the level of safety is equal in the two cases.32
6. The limits of objective safety
With the aim of elucidating important aspects of the safety concept for
technical and scientific use, we have identified three dimensions: severity of
harm, probability and uncertainty. As noted in Section 2 our aim is a safety
concept that is as objective as possible, and we have eliminated subjective
elements wherever possible. It is now time to evaluate how far we have been
successful in eliminating subjective elements from each of these three
dimensions.
The severity of harm is, of course, essentially value-laden. Even if we are
able to compare some harms, such as a broken finger vs. a broken finger and
a broken leg, this comparison is in general a subjective evaluation. For
example, in a comparison between two accidents, one of which caused a few
serious injuries and the other a larger number of less severe injuries, it might
be far from clear which is the most severe accident. To some extent we may
use intersubjective methods for judging degrees of harm. Attempts such as
QALY (quality adjusted life years) in medical ethics have been made to solve
such issues of comparison.33 In some variants of risk-benefit analysis, harms
such as deaths and injuries are assigned monetary values. However, whatever
currency the severity of harm is measured in – be that euros or healthy years
– rational persons can disagree about the comparative severity of different
harms and have no access to objective means, no independent standard, by
which their disagreement can be resolved. It is hard to see how the
subjective aspect of the value-ladenness of severity of harm can be
eliminated. What we can often achieve, however, is an intersubjective
assessment that is based on evaluative judgments that the vast majority of
humans would agree on. As can be seen from studies of the QALY concept,
there is indeed agreement on a wide range of such judgments.
33Nord, E. (1999) Cost-Value Analysis in Health Care: Making Sense out of QALYs
(Cambridge, Cambridge University Press). See also the review by Hansson, S. O. (2001) of
Erik Nord, ‘Cost-Value Analysis in Health Care: Making Sense out of QALYs’ in
Philosophical Quarterly, 51, 132-133.
There is a well-established distinction in probability theory between
subjective and objective interpretations of probability.34 According to the
objective interpretation, probability is a property of the external world, e.g.
the propensity of a coin to land heads up. According to the subjective
interpretation, to say that the probability of a certain event is high means
that the speaker’s degree of belief that the event in question will occur is
strong. When we are dealing with the repetition of technological procedures
with historically known failure frequencies, it may be possible to determine
(approximate) probabilities that can be called objective. However, in most
cases when a safety analysis is called for, such frequency data are not
available, unless perhaps for certain parts of the system under investigation.
Therefore, (objective) frequency data will have to be supplemented or
perhaps even replaced by expert judgment. Expert judgments of this nature
are not, and should not be confused with, objective fact. Neither are they
subjective probabilities in the classical sense, since by this is meant a
measure of a person’s degree of belief that satisfies the probability axioms
but does not have to correlate with objective frequencies or propensities.
They are better described as subjective estimates of objective probabilities.
However, what we aim at in a safety analysis is not a purely personal
judgment but the best possible judgments that can be obtained from the
community of experts. Therefore, this is also essentially an intersubjective
judgment.
Procedures for expressing and reporting uncertainties are much less
developed than the corresponding procedures for probabilities.35 However,
the aim should be analogous to that of probability estimates, namely to
obtain the best possible judgment that the community of experts can make
on the extent and nature of the uncertainties involved. Objective knowledge
34 The subjective approach to probability theory was introduced by Ramsey 1926 in his
paper ‘Truth and probability’ which can be found in P. Gärdenfors and N.-E Sahlin (eds.)
(1988), Decision, Probability, and Utility (Cambridge, Cambridge University Press), 19-47. See
also Savage, L. (1972, [1954]) The foundations of statistics, 2nd ed. (New York, Dover).
35 Levin, R., Hansson, S. O., and Rudén, C. (in press) ‘Indicators of Uncertainty in
about uncertainties is at least as difficult to obtain as objective knowledge
about probabilities.
In summary, then, an objective safety concept is not attainable, but on
the other hand we do not have to resort to a subjective safety concept that is
different for different persons. The closest we can get to objectivity is a
safety concept that is intersubjective in two important respects: (1) it is
based on the comparative judgments of severity of harm that the majority of
humans would agree on, and (2) it makes use of the best available expert
judgments on the probabilities and uncertainties involved. This intersubjective
concept of safety should be our main focus in technical and scientific
applications of safety.36
36 Even those who, in analogy with the case of probability, deny the existence of anything
like objective safety could accept an intersubjective usage of the concept.
37 In this we follow the convention of preference logic, in which “at least as good as” is taken as the primitive relation.
knowledge about the harm. For expository reasons, we will first propose a
definition that only takes severity and probability into account, and then add
uncertainty.
One way to deal with the combination of severity and probability is to
consider only safety against classes of events that are so narrowly defined
that all elements in each class have the same degree of severity. This would
mean for instance that we would not talk about “safety against industrial
accidents” but about “safety against an industrial accident killing exactly one
person”, etc. For each of these categories, a state is then at least as safe as
another, with respect to that category, if and only if the probability of an
event in that category is at most as large. However, this way of speaking
does not correspond to how we in practice talk about safety. For our
definition to be at all compatible with common usage, it must be applicable
to categories of harm that contain events of different severity, such as
“safety against industrial accidents”.
In order to avoid unnecessary complications we will assume that the
intersubjective relation “at least as severe as”, as introduced in Section 5, is
complete.38 The two-dimensional comparative safety concept can then be
defined as follows:
38 There are at least two ways to deal with cases when it is not. One is to replace “at least as severe as” by “not more severe than” in the definitions that follow. This relation is
complete. Another is to consider all reasonable relations of severity, and replace “at least
as severe as” by the more demanding “at least as severe as according to all reasonable
relations of severity”. In the latter case, incompleteness in severity may (but need not,
depending on the probability distribution) give rise to incompleteness in the comparative
“at least as safe as”.
[Figure 2: probability as a function of harm for the two states A and B.]
39Note that whereas (1) only requires a relation “at least as severe as”, the expected utility
rule requires a cardinal measure of severity.
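The dyadic comparison can be sketched in code. Since the formal definition (1) is not reproduced in this excerpt, the reading below is only one natural interpretation of the figure 3 description, and all numbers are invented: state A counts as at least as safe as state B when, for every severity level, the probability of a harm of at least that severity under A does not exceed that under B.

```python
# Hedged sketch: one natural reading of the two-dimensional
# comparative safety relation, not a transcription of the article's
# definition (1), which is not reproduced in this excerpt.

def at_least_as_safe(prob_a, prob_b, severities):
    """prob_a(s), prob_b(s): probability of a harm at least as severe
    as s, in states A and B respectively."""
    return all(prob_a(s) <= prob_b(s) for s in severities)

severities = [1, 2, 3]                     # ordinal severity levels
prob_a = {1: 0.10, 2: 0.05, 3: 0.01}.get   # invented numbers
prob_b = {1: 0.20, 2: 0.05, 3: 0.02}.get

assert at_least_as_safe(prob_a, prob_b, severities)
assert not at_least_as_safe(prob_b, prob_a, severities)
```

Note that only the ordering of severities is used here, in line with footnote 39: no cardinal measure of severity is required.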
should be taken into account and how they should be grouped, etc.40
Arguably the most important of these is probability. We will focus on
uncertainty about probabilities.
The assumption that we need is that there are at least two well-defined
“degrees of confidence” with which a probability can be stated. Hence,
instead of dealing with statements of the form
It should be noted that definition (2) does not require more than an ordering
of severities, and does not even require comparability of degrees of
confidence.41
Clearly, (2) introduces incomparable situations, namely when A is safer
than B with respect to one level of confidence whereas B is safer than A
according to another level of confidence. These levels of confidence might
for example represent the shift from a 50% confidence interval to a 1%
interval that includes uncertain threats that concern mainly B. See figure 3.
Figure 3. A case in which A is at least as safe as B holds with respect to (1), but not (2). In
the two graphs to the left the probabilities as a function of different severities of harm are
shown for the two states A (thin lines) and B (thick lines). The full lines represent the
most likely probability estimate, whereas the two dotted lines in each graph represent the
upper probability limit for different degrees of confidence. The dotted line with the
highest probabilities (square dots) represents the “worst case” probability assessment in regard to the knowledge at hand. The graphs in the rightmost part show comparisons of each of the three degrees of confidence. The reason that the relation A is at least as safe as B holds with respect to (1), but not with respect to (2), is that whereas (1) holds both for the
most likely probability assessment (full line; lowest graph) and for the next level of
confidence (middle graph), it does not hold for the lowest level of confidence, the “worst
case scenarios”.
Given a relation “at least as safe as”, a monadic predicate representing the
adjective “safe” can be constructed at different levels of stringency.42 The
logical technique for this is the same as that developed for other pairs of
dyadic and monadic predicates, such as “at least as long as” vs. “long”, etc.43
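This logical technique can be sketched in miniature: fix a threshold state t and call a state safe* when it is at least as safe as t; raising the threshold yields a more stringent predicate. The numeric encoding of states, the relation and the threshold values below are all our illustrative assumptions.

```python
# Toy sketch: a monadic predicate safe* constructed from the dyadic
# relation "at least as safe as" and a threshold state. All
# encodings are illustrative assumptions.

def make_safe_predicate(threshold_state, at_least_as_safe):
    """A state is safe* iff it is at least as safe as the chosen
    threshold state."""
    def safe_star(state):
        return at_least_as_safe(state, threshold_state)
    return safe_star

# Toy encoding: a state is just a numeric safety score (higher =
# safer), and "at least as safe as" is plain comparison.
geq = lambda a, b: a >= b

reasonably_safe = make_safe_predicate(0.7, geq)
absolutely_safe = make_safe_predicate(1.0, geq)  # more stringent

assert reasonably_safe(0.9)
assert not reasonably_safe(0.5)
assert not absolutely_safe(0.9)   # stringency: absolute is stricter
assert absolutely_safe(1.0)
```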
The following basic criteria should be satisfied by any monadic concept
“safe” that conforms with a given relation “at least as safe as”.
Clearly, there may be many predicates safe* that satisfy this criterion.
Different such predicates may differ in terms of stringency, defined as
follows:
The term absolutely safe can be constructed as the most stringent safety
predicate. Other terms such as reasonably safe and relatively safe can be
constructed as safety concepts at lower levels of stringency. However, not all
predicates that satisfy the positivity property correspond to the term “safe”
in natural language. Predicates with a low degree of stringency (such as
“safer than the least safe alternative”, which can be substituted for safe* in
42 It should be noted that the relation ”at least as safe as” is not defined in terms of a
monadic predicate “safe” (such a definition would be impossible) and therefore a
definition of “safe” in terms of that relation is not circular.
43 Hansson, S. O. (2001) The Structure of Values and Norms (Cambridge, Cambridge University Press).
the definition, and satisfies positivity) would not be so described. There
seems to be a socially constructed level, perhaps best specified as
“acceptable level of safety”, below which it is misleading to use the term
“safe”. This leads us to a second criterion for a concept of safety:
8. Conclusion
precisely as possible in terms of the defining constituents of the respective
safety concepts.
Submitted manuscript
Niklas Möller
Division of Philosophy, Royal Institute of Technology
1. Introduction
It is a common opinion in risk research that the public is irrational in its
perception and acceptance of risks.1 Many activities that are considered safe
by experts, such as storage of nuclear waste, are considered to be unsafe by
the public. Other activities that are known by all to be hazardous – such as
driving a car without wearing a safety belt – are avoided by the public only
1 I would like to thank Sven Ove Hansson, Martin Peterson, Nils-Eric Sahlin, Birgitte
Wandall, Lars Lindblom and Kalle Grill for their detailed and constructive criticism, as
well as all members of the risk seminar whose help has guided me along the way.
when forced by regulation. A paradigmatic example of the discrepancy
between the views of experts and the public is nuclear power production: a
large number of people regard nuclear power as dangerous even though
most experts regard it as being a safe method of energy production. A more
recent example is GMOs, genetically modified organisms, which are
considered safe to a much higher degree by experts than by laypeople.
The influence of laypeople is often characterised as harmful,2 and the
analysis of the experts characterised as complete in itself, leaving no more
than the implementation of their recommendation to the decision makers.3
Reasoning in risk communication frequently rests on the premise that
people are too ignorant of the real risk and tend to make uninformed
judgments concerning it.4 Therefore, it is asserted, a major objective is to
adjust the perceptions of laypersons in order to narrow this gap.5
In this article, normative objections are put forward against the argument
that the public should follow the experts’ advice because the experts have
better knowledge of the risk. Let us refer to this argument as the Expert
Argument:
(1) Experts recommend X.6
(2) Experts have better knowledge than the public of the risk involved
in X.
---
(3) Therefore: the public should accept X.
The aim of this article is threefold. First and foremost, I will argue that
even when reasonably restricted, the Expert Argument is not valid. It is not valid
because there is always epistemic uncertainty involved apart from estimations of
the risk. This highlights a vital concern: the question is not whether the
4 … public distrust of the risk assessments and point to information made accessible to the
public as the solution.
5 This aim is expressed in Kraus, Malmfors, & Slovic (1992/2000), 312.
6 ‘X’ may be an activity or technology like nuclear energy production, or a
expert knowledge of the risk is the best available, but whether it is good
enough. Second, I will show that the objection from epistemic uncertainty is
more than a logical possibility. Central tendencies from the empirical study
of risk perception, though criticised by experts, may be considered
reasonable in light of considerations from epistemic uncertainty. Third, I will
show that the scope of the objection from epistemic uncertainty covers the
entire field of risk research, risk assessment as well as risk management.
In Section 2, some results from empirical studies of risk perception are
presented in order to distinguish discrepancies that will not be defended here
from “discrepancies” that can be defended for the very reasons that render the
Expert Argument invalid. Section 3 consists of some specifications and limitations
of the Expert Argument. In Section 4, the basic objection to the Expert
Argument is presented, and in Section 5 I argue that the invalidity of the
Expert Argument is more than a mere theoretical possibility. In Section 6,
the scope of the objections is explored and in the final section, Section 7, it
is concluded that focusing on ways of describing and communicating
epistemic uncertainty is necessary to make the best possible decisions
regarding risks.
Let us start by discussing some claims regarding discrepancies that will not
be disputed in this paper. Many of the findings on risk perception show
tendencies that might be regarded as irrational. One type of discrepancy may
be labelled value incoherence. For example, when asked about acceptable levels
of risk for an activity, different framings of the question – whether stated in
lives saved or lives lost – render different answers from the respondents.7
Likewise, ordinary laypersons may assess a program that saves, for example,
4500 out of 11000 lives as more valuable than one of equivalent cost that
saves 10000 out of 100000, even though the total number of lives saved is
much higher in the second case.8 This is also mirrored in the fact that actual
spending per life saved varies by several orders of magnitude; Slovic (1997) shows
a variation from US$500 up to US$10 million per life saved by various
interventions, and Ramsberg & Sjöberg (1996) show a similar variation.9
Another type of discrepancy is epistemic, caused by insufficient knowledge
of risks. One such example is the perception of the relation between
dose/exposure (of radiation and chemicals) and risk. Kraus et al (1992)
show that the public tend to be much less sensitive to dose and exposure
considerations than are experts.10 Chemicals, for example, tend to be
perceived as either safe or unsafe, regardless of quantities such as dose or
exposure.11 In many areas where the connection between exposure and
harmful effect is well documented, this perception may be regarded as a
genuine lack of knowledge on the part of the public.
From cases of discrepancies like these it may be tempting to conclude
that the dispositions of the public in general in matters of risk are unreliable.
This conclusion is however premature. One important tendency of the
public that is tempting but – as will become clear – wrong to regard merely as
a discrepancy is the tendency to base acceptance or rejection of risk mainly
on the possible harmful consequences of the risk. Several studies have
shown that the severity of consequences is by far the strongest parameter for
explaining acceptance of an activity that involves risk. Starr (1969) showed
that the accepted level of risk was inversely related to the number of people
exposed.12 In the seminal psychometric study by Fischhoff et al (1978),
seven different influencing aspects were examined, and severity of
consequences was by far the most influential factor regarding whether the
risk was considered acceptable or not.13 More recently, Sjöberg (1999, 2003,
2004) has demonstrated this tendency in various contexts.14 Let us call this
the Consequence Dominance:
(Consequence Dominance) The predominant factor for accepting or rejecting
a risky activity is the severity of its possible consequences.
To the expert eye, this tendency is deplorable, since an improbable outcome
such as an airplane crash may be used as an argument for preferring to go
by car even if the expected outcome of the latter action is much worse, or
for preferring old means of energy production instead of nuclear power in
light of the catastrophic potential of a meltdown regardless of how unlikely
this outcome may be. Therefore, this looks like yet another misconception
that should be corrected, another instance of the application of the Expert
Argument. In Section 5, however, this tendency will stand out as reasonable
once the problem with the Expert Argument is revealed.
There is another empirical result that provides a link for understanding
why the Consequence Dominance is telling against the Expert Argument.
Several studies by Sjöberg et al (1993, 1999, 2001) have shown an asymmetry
between safety and (positive) utility, which we will call the Value Asymmetry of
Safety.15
12 Starr (1969).
13 Fischhoff, Slovic, Lichtenstein, Read & Combs (1978/2000). The aspects examined were
Voluntariness of risk, Immediacy of effect, Knowledge about risk, Control over risk, Newness, Chronic-
catastrophic, Common-dread and Severity of consequences (p. 86).
14 Cf. Sjöberg, (1999), Sjöberg (2003) and Sjöberg (2004).
15 Sjöberg & Drottz-Sjöberg (1993), Sjöberg (1999), Sjöberg & Drottz-Sjöberg (2001).
(Value Asymmetry of Safety) Avoiding bad outcomes is much more
important than receiving good ones.
16 The seminal paper Ellsberg (1961) shows a similar tendency. This tendency does not, of
course, imply that there is no breaking point: for example, if your own car may break down
at any minute anyway, if the value of the new car is very high for you, or if the risk of
losing your own car is very small.
17 In its strongest interpretation it is an instance of the maximin rule in decision theory, a
rule saying that one should choose the alternative whose worst possible outcome is
best.
18 In present risk management, the critique of pure risk-benefit analysis is a sign of the
general acceptance of the principle of the Value Asymmetry of Safety: it is not acceptable
for an activity to be too risky, even if the gains are significant (cf. Hansson (1993)). Of
course, we may accept an activity that includes risks even though we could avoid it.
Automobile driving is a classic example of an activity we seem to accept even though it is
to blame for a significant percentage of the annual unnatural deaths. Other voluntary
activities like rock climbing and parachuting are examples of situations where we are
prepared to increase risks in exchange for something we desire. Therefore, we have an
asymmetry here rather than “safety at all costs”.
question of safety is therefore the primary concern implicit in the explicit
mentioning only of the risk.
The Value Asymmetry of Safety is thus not only an empirical fact of
public perception, but also (albeit implicitly) accepted by proponents of the
Expert Argument. This principle points to a shift from considerations of
risk to considerations of safety, which in Section 4 will be shown to have
important consequences for our evaluation of the Consequence Dominance
and the Expert Argument.
The main critique of the Expert Argument proposed here focuses on this
bridge premise. Even if experts recommend X, and experts have better
knowledge of risk than the public, it is not at all certain that (φ) is valid.
Therefore, what has been characterised as a disagreement between experts
and the public is not sufficient grounds for the normative conclusion that
the public should accept X.
19 This is the thesis called Hume’s law, stating that a moral conclusion cannot be validly
inferred from non-moral premises. In this case, (1) and (2) are descriptive, non-normative
(and thus non-moral) claims, whereas (3) is normative. Some interpretations of the thesis
are controversial for moral philosophers, but the weak logical interpretation I make use of
here is not much disputed. Cf. Hume (1967 [1739-40]), 469-470, for the original
formulation.
20 We need the subordinate clause since, in the general case, there could be a third agent
with even better information, and then both agents should follow the recommendation of this
third agent rather than anyone else’s. Thus, we interpret (1) as entailing that there is no
third agent with better knowledge than the experts in question.
For the Expert Argument to have any initial plausibility, we have to impose
certain limitations. Firstly, it will be assumed that what is broadly captured
by the term “experts” are the relevant technical and scientific experts
specialised in X as well as the non-technical experts on managing risks, i.e.
professionals preparing and making risk decisions.21 Secondly, we assume
that there are no specific morally relevant considerations involved, such as
considerations of justice.22 This is indeed idealised, but may be granted since
the aim of this paper is to show that even under these circumstances the
Expert Argument is invalid.
21 If we only include the scientific experts it may be argued that the Expert Argument
doesn’t even get off the ground, since the prevailing opinion is that scientific experts
should produce scientific facts, not decide courses of action. This argument will be met in
Section 6, where the question of science and values will be touched upon.
22 What is morally relevant is of course a matter of debate, since anything affecting the well
“the combination of the probability of an event and its consequences” which may
reasonably be interpreted as referring to the expectation value. See also Cohen (2003) for a
recent example.
25 More correctly, we should sum up the values for all different harmful outcomes, since
there is often more than one harmful event to take into consideration.
product of these values as the measure of the risk.26 However, for our
purposes we may grant that the risk associated with X is the statistical
expectation value of the severity of the harmful event. We may here accept
this strong conception of risk, even if weaker conceptions may be
considered more reasonable, since this gives a maximum of force to the
Expert Argument. Weakening the assumption only gives us further reasons
against it.27
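The expectation-value conception of risk granted here can be made concrete with a minimal sketch. The probability and severity figures are invented; following footnote 25, the values for all harmful outcomes are summed.

```python
# Expectation-value conception of risk: the risk of activity X is the
# probability-weighted sum of the severities of its harmful outcomes
# (summing over all harmful events, as footnote 25 notes). All numbers
# are invented for illustration.
def expected_harm(outcomes):
    """outcomes: list of (probability, severity) pairs for harmful events."""
    return sum(p * severity for p, severity in outcomes)

# Hypothetical activity with two harmful outcomes: a common mild harm and
# a rare severe one. Both contribute equally to the expectation value.
risk_x = expected_harm([(1e-4, 10.0), (1e-6, 1000.0)])
print(risk_x)  # approximately 0.002
```

Note that this presupposes a cardinal severity scale, which, as footnote 26 stresses, is itself controversial.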
In decision theory, decisions under known probabilities are called
“decisions under risk”.28 Known probabilities are always an idealization, of
course, since even a perfectly normal coin-tossing situation may be biased in
some subtle way, not to mention any normal complex decision situation. In
reality, there is always an epistemological primacy to our ascriptions of
probabilities and outcome evaluations: the probability and degree of severity
that we assign to an outcome are always based on the estimations we have
access to. The objective probabilities – if such an entity exists beyond our
epistemic limitations – are in general different from our estimations.
This almost trivial point is always a matter of concern, but in general
decision-making, it may sometimes be considered a matter of gaining on the
swings what you lose on the roundabouts: the estimates may be just as often
too high as too low. This symmetric relation between over- and
underestimating the probabilities involved may be argued to hold for most
outcomes: using the best estimations we have may be the most rational way
to go when making decisions. However, in the cases we are considering
here, we have no such symmetry: instead the background assumption of the
26 For the statistical expectation value to be meaningful we must be able to compare the
utilities of outcomes not only on an ordinal scale, like grading them from best to worst, but
on a cardinal (interval) scale, like the measurement of temperature in Celsius or
Fahrenheit. (Cf. e.g. Resnik (1987), especially 81-85, for an introduction to utility scales.)
When the only harmful effect we consider is number of casualties this assumption may be
reasonable, but how much worse is e.g. the loss of a leg compared to a whiplash injury?
That these effects may be measured on a cardinal scale is very much a matter of
controversy.
27 For example, if there is no justified way of scientifically comparing the value of the
harmful effects, and thus no way of using an expectation value, referring to expert
knowledge of the risk may be considered incorrect.
entire Expert Argument is what we above have called the Value Asymmetry
of Safety. In the context in which we are interested, decisions involving risk
are decisions regarding safety, and then swing-and-roundabout arguments
are not valid. The reasons for this are many. Firstly, it follows from the Value
Asymmetry of Safety that we cannot “compensate” a higher risk
with a higher utility if safety really is our primary concern. Secondly, we are
not helped by arguments saying that the risks of other activities are probably
overestimated even if the risk of activity X turns out to be underestimated, so that for
society at large it “evens out”. For what is at stake in the Expert
Argument is explicitly the safety of activity X. Finally, and most importantly,
when dealing with safety, more than the best estimation of risk is involved,
as the following paragraphs show.
Often, safety is used as the antonym of risk: the less risky an activity is,
the safer it is, and vice versa. The Value Asymmetry of Safety, however,
suggests that something more must be taken into consideration than the best
available knowledge of the risk.29 This is so because in the context of risky
activities, most of our decisions are made with different levels of uncertainty
about the probability and severity of harm. In those cases, shifting focus
from risk to safety includes an additional epistemic aspect, as the following
example illustrates: Suppose you are about to make a parachute jump in
a poor country far away from home. You have never been to this beautiful
place before and do not have much knowledge of the exact equipment you
are about to use, but have previously read from a reliable source that the
probability of equipment malfunctioning in the air is low, less than one in a
hundred thousand on a normal jump. The equipment does look somewhat
old and worn, however, and you cannot help feeling worried. Contrast this
with a case in which a team of experts flown in from home tells you that this
exact equipment (in light of its long usage) actually has a somewhat higher
probability of malfunctioning, and that a better probability estimate is one in
28 “Risk” here thus refers to the type of information at hand (known probabilities) and not
thirty thousand per jump. Even though the probability is now judged as
higher, the uncertainty whether the information is trustworthy is significantly
lower than in the previous case and it is not unreasonable to regard this
situation as preferable to the first one in terms of safety. Whereas in the first
case the knowledge at hand was such that it allowed for a rather large range
of actual probabilities – it could be much less safe than your best estimate
tells you – the estimation of the second case is much more certain. Different
levels of epistemic justification are thus significant for decisions regarding
safety.
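The parachute example can be put in numbers. Only the two point estimates (one in a hundred thousand and one in thirty thousand per jump) come from the example itself; the upper bounds, standing for what the available evidence cannot rule out, are invented for illustration.

```python
# The parachute example in numbers. Point estimates are from the text;
# the upper bounds on the probability of malfunction are invented.
case_worn_equipment = {"estimate": 1e-5, "upper_bound": 1e-3}     # reliable source, unknown gear
case_expert_team = {"estimate": 1 / 30_000, "upper_bound": 5e-5}  # inspected gear, tight estimate

# Judged by point estimates alone, the first case looks safer...
assert case_worn_equipment["estimate"] < case_expert_team["estimate"]
# ...but judged by what the evidence leaves open, it may be far worse.
assert case_worn_equipment["upper_bound"] > case_expert_team["upper_bound"]
```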
What is referred to in this example is the aspect of epistemic uncertainty.
The safety of an activity in the context of decision-making is not only a
function of risk in terms of the estimated probability and severity of harm,
but also of the epistemic uncertainty of these entities.
Now we may see that the vital premise in the Expert Argument that one
should follow the recommendation of the agent with the best knowledge of
the risk involved is far from evident. For us to say that the safety is high
requires not only that the expected value of the harm is small, but also that
the epistemic uncertainties involved are small enough.30 The premises of the
Expert Argument are thus insufficient for its conclusion; hence,
the argument is invalid.
29 For a more elaborated treatment of the difference of the concepts of risk and safety
than what follows here, see Möller et al (2005).
30 Note here that we have accepted, for the sake of argument, the expectation value
conception of risk. However, the argument in this section is not dependent on this
interpretation but is valid as long as risk is understood as some combination of the
estimations of the severity of harm and its probability.
Consequence Dominance states that the predominant factor for
accepting/rejecting a risky activity is the severity of the consequence of this
activity. If, for example, activities A and B have the same probability of a
lethal accident, but the number of casualties in an accident is higher for
activity A than for activity B, then A is normally considered less safe than B and
thus less likely to be accepted.31 This is consistent with the expectation value
approach to risk. However, the tendency of the public is to prefer B to A as
soon as the possible harm of A is considered significantly higher, even if the
statistical expectation value of harm for A is significantly less than that of B.
Focusing only on the outcome thus goes against general expert opinion and
is judged as irrational. The modern classic is the nuclear power debate.
Compared to the harmful effects of several competing methods of energy
production (e.g. using coal, oil etc) most experts consider nuclear energy to
be much safer. Still many people are reluctant to accept nuclear energy in
view of the catastrophic potential of a meltdown, even though the estimated
probability for such an event is deemed as very low.
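A numeric sketch of this preference pattern, with all figures invented for illustration:

```python
# Consequence Dominance in numbers: the public tends to prefer B, whose
# worst outcome is mild, over A, whose worst outcome is catastrophic,
# even though A has the lower expectation value of harm.
p_a, harm_a = 1e-6, 10_000  # activity A: very improbable, catastrophic
p_b, harm_b = 1e-2, 10      # activity B: more probable, mild

expected_a = p_a * harm_a   # ~0.01
expected_b = p_b * harm_b   # ~0.1
assert expected_a < expected_b  # the expectation-value ranking favours A
assert harm_a > harm_b          # yet A's possible harm dominates, so the
                                # public tends to prefer B
```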
There are many important psychological factors, such as control, affect and
dread, put forward in the literature to account for the Consequence
Dominance. Many of these have restricted normative force in terms of
accepting or rejecting an activity. In the case of dread as well as affect and
control a relevant question is whether these attitudes are reasonable in light of
the evidence. They seem to be descriptive claims only and we still need an
analysis of why they carry any normative force. If such an explanation cannot
be given, one might argue that it is the attitudes rather than the activities that
are in need of changing. After a person has claimed that she feels more in
control when driving a car than going by airplane, and thus safer, we may
still ask what relevance this has in light of the amount of statistical data
telling another (general) story.
The epistemic uncertainty aspect of safety, however, provides a case for
the normative force of the Consequence Dominance that the mere
31Let us say, for the sake of argument, that the severity of an event is measured in number
of casualties.
descriptive factors fail to provide. This case is built on a very plausible
assumption concerning epistemic uncertainty that we may call the Knowledge
Asymmetry of Safety:
(Knowledge Asymmetry of Safety) The lower the probability estimate of a
harmful event, the higher the epistemic uncertainty of that estimate, and
this increase in uncertainty is higher for the probability than for the
severity of harm.32
32 There is an implicit assumption here, namely that all probability estimates are
comparably low – as is the case with relevant societal activities and substances. For high
probability events such a knowledge asymmetry is less likely: if the probability for an event
is 40% and 50% respectively, there is probably no evident difference in uncertainty, ceteris
paribus.
33 I.e. the classical interpretation (Laplacean view) of probability. Cf. Resnik (1987), ch. 3
For harm, there is an open question regarding individuation. On the one
hand, a harmful event may be an end-state outcome such as number of
casualties.35 Then there is no uncertainty about the severity of harm and the
first part of the claim of the Knowledge Asymmetry does not apply. However, the
important second statement – that the increase in uncertainty
about probability is higher than the corresponding increase in uncertainty
about harm – is then trivially true. Hence, the conclusion of the argument is still
valid. On the other hand, a harmful event may be a non end-state harm. In
these more interesting cases – such as “a plane crash” or “a nuclear
meltdown” – the uncertainty about the effect in human lives is evident.
Analogous to the case of probability estimates, the lower the reasonable
estimate, the less knowledge we have of the harmful outcome of the
event. In the beginning of nuclear energy production, for example, there was
comparably little knowledge of the human effect of radiation. Thus, the
estimations of the severity of harm had higher uncertainty than current ones.
However, even though there is a correlation between probability
estimates and the uncertainties about the severity of harmful events, this
correlation is much weaker than the essential one mentioned above between
probability estimates and the uncertainty of that estimate. This is because the
severity of a harmful event is less dependent on the probability estimate for
a certain activity: what is of importance is the total knowledge of that type of
harm. Thus, if the harmful event is a certain amount of radioactivity, the
cause of this harmful event is not important for the estimation of the severity of
the harm. The less the knowledge of the type of event depends on the type
of activity in question, the less correlation there is between the probability
estimate and the uncertainty of the severity of an event. This means that the
increase in uncertainty for lower probability estimates is, ceteris paribus, higher
for probability than for severity of harm. In terms of our parachute
illustration: even if the estimated probability for the malfunctioning of the
equipment decreases, the uncertainty about the severity of the harmful event
of malfunctioning during a jump stays the same because we have good
enough knowledge of what happens when falling to the ground from high
altitudes.
From the considerations of the Knowledge Asymmetry of Safety, the
initial discussion of this section can now be seen in a different light. When
the possible harm of A was considered significantly higher than that of B,
but A had the lower expectation value of harm, it seemed at first glance irrational to
prefer B to A. However, when the epistemic uncertainty is included, this is
not at all certain, since the difference in epistemic uncertainty about the
probabilities may be many times greater than the differences in the statistical
expectation value of harm.
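How epistemic uncertainty can swamp a difference in expectation values may be sketched as follows; all figures are invented for illustration.

```python
# A: low estimated probability of a catastrophic harm, but the low estimate
#    comes with wide uncertainty about the true probability (the Knowledge
#    Asymmetry). B: well-studied mild harm with a tight estimate.
harm_a, p_a_est, p_a_upper = 10_000, 1e-6, 1e-4  # wide range still open for A
harm_b, p_b_est, p_b_upper = 10, 1e-2, 1.2e-2    # narrow range for B

assert p_a_est * harm_a < p_b_est * harm_b      # best estimates favour A...
assert p_a_upper * harm_a > p_b_upper * harm_b  # ...but A may be far worse
```

Under the point estimates A looks roughly ten times safer, yet within the uncertainty that the evidence leaves open, A's expected harm may exceed B's many times over.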
The Knowledge Asymmetry of Safety is of course not a complete
justification for a strict use of consequence reasoning alone when it comes
to risky activities. Indeed it cannot be, since it describes a tendency on
theoretical grounds for what on each occasion (to a significant degree) is an
empirical matter. What the above considerations do show, however, is the
reasonableness of putting a strong emphasis on the possible harmful outcome
even if the probability is low, which is the criticised tendency of the
Consequence Dominance. Expert criticism of this tendency thus emerges
as yet another application of the invalid Expert Argument.
Based on this knowledge asymmetry, the factors of control, affect and
dread, if considered at least loosely connected to consequences, may be
considered reasonable dispositions to have, since they make us avoid
situations with low but uncertain (estimated) probabilities of great harm. The
same applies to low acceptance of new and ‘unnatural’ activities.36 Thus,
epistemic uncertainty provides a normatively relevant explanation of many
common psychological tendencies surrounding risk perception.
mentioned earlier about the possibility of comparing different harmful effects. However,
this is not the outcome-uncertainty to which we are referring here.
36 Fischhoff, Slovic, Lichtenstein, Read & Combs (1978/2000).
In summary, the Knowledge Asymmetry of Safety thus shows that
rejection of the best estimation of the risk involved need not be a sign of
irrationality but may in many cases be reasonable in matters of safety. Even
if there is no party with better knowledge of the risk of a certain activity, that
fact is not sufficient for accepting it. Depending on the epistemic
uncertainty, the activity may be much more risky than the best estimations
state, and that possibility may not be a price we are willing to pay.37
37 In light of the reasoning in this section, the public rejection of what experts recommend
need not be considered a sign of distrust of the experts. However, that is the way it is often
interpreted, especially in the risk communication field, with “public relations” consequences
such as focusing on how to gain the public’s trust in ways that do not have
anything remotely to do with science and justification. In many cases, distrust may perhaps
be reasonable, since the acclaimed experts may be stakeholders in the very same activity
they are recommending. (In resource-demanding areas, like parts of the chemical industry,
the tests as well as the main part of the analysis are made by the industry itself. In such
cases, public scepticism may be especially reasonable. See Hansson (2004), 357-58.) In
general, however, the public rejection of expert recommendations need not and should
not be seen as distrust of experts (cf. Drottz-Sjöberg (1996), Sjöberg & Drottz-Sjöberg
(2001) and Sjöberg (2001)). It may fruitfully be interpreted as distrust of the level of
scientific knowledge.
38 Cf. National Research Council (1983) and European Commission (2003) for standard
classifications.
39 Cf. Mayo (1991).
binding capacity of a molecule in a cell-culture may not be considered a
decision maker in the relevant sense any more than the mathematician
working with the basic theoretical models underlying such an analysis. But
what about the toxicological experts responsible for the complex evaluation
giving the outcome of the risk assessment of this molecule? In this section, it
is argued that epistemic uncertainty is an important decision factor in the
scientific process of risk assessment, having consequences for the scope of
the objections to the Expert Argument.
Until now we have been focusing mainly on the impact of the best
estimations of risk on decisions involving safety, i.e. on what is traditionally
called risk management. The presence of epistemic uncertainty questions the
claim that we should follow those with the best knowledge of the risk. It
may be perfectly reasonable to accept the estimations of the risks involved
and still, due to epistemic uncertainty, reject the recommendation of the
experts.40 The experts and the public are then using different methods for
handling this uncertainty, and only referring to the best available estimation
of the risk is not sufficient for settling the issue.
However, epistemic uncertainty is not a product of risk management but
of risk assessment. Among the data relevant for risk assessment there are
likely to be parts that are based on a core of scientific theories and
observational data with relatively high evidential justification.41 But the more
models and approximations of limited justification and application are
included in the final inferences, the more these evaluations – and thus the
output of the process – contain epistemic uncertainty. This may be
illustrated in figure 1. The layers in the triangle to the left illustrate different
40 Note that ’experts’ here is understood in a broad sense (as stated in Section 3), including
not only scientific experts but also ‘risk management experts’, i.e. experts with
understanding of the relevant risk assessment data that are responsible for making
recommendations for decisions.
41 The following description of scientific knowledge as a core of deeply justified
knowledge and a less deeply justified periphery should be seen as in line with Willard Van
Orman Quine’s holistic “web of knowledge” approach. Cf. Quine (1951). The necessity of
theory in making observations (a reason for including theoretical statements as well as
observational statements in the “core”) is commonly labelled theory-laden observation;
degrees of justification for the different scientific data and relations. The
bottom layer of the figure consists of the core data of direct observations
and theoretical knowledge. The higher levels illustrate scientific hypotheses
of less direct justification.42 An example is the carcinogenic effects of chemical
substances. Our primary interest is the effects on humans, but for obvious
reasons we mainly perform animal testing and extrapolate the results. The
direct observations of animal effects may here represent lower levels in the
figure, whereas the extrapolations to humans have a higher degree of
epistemic uncertainty and are represented in the upper levels of the figure.
The data serving as input to the risk assessment consists of complex
inference chains with different levels of justification and, hence, different
degrees of epistemic uncertainty.
[Figure 1. Layers of scientific data, from a core with low epistemic uncertainty (bottom) to hypotheses with high epistemic uncertainty (top).]
Hanson (1958) coined the phrase. Cf. Churchland (1979), Scheffler (1982) and Hesse
(1974) for important contributions in support of this claim.
42 Even the “core statements” are, in the holistic view, susceptible to change in view of the
In science in general, it may be argued that there are acknowledged methods
of handling these different grades of epistemic uncertainty, since there are
certain epistemic values “integral to the entire process of assessment in
science”.43 Epistemic values are normally taken to be internal to science, i.e.
they are exactly the kind of values that the scientific experts are competent
to apply in their scientific enterprise.44 Thus, in science there exist standards
for when to acknowledge a hypothesis and regard it as scientific knowledge
as well as when to discard it, i.e. ways to handle epistemic uncertainty.
However, risk assessment does not have the same goal as science. The
main goal of science is to gain knowledge of the world, and the function of
epistemic values is to accept as scientific knowledge only what has been
sufficiently “confirmed”.45 For risk assessment, however, as has been argued
throughout this article, the goal is safety. Everything there is reason to
regard as a potential harm is relevant to risk assessment, even if the “purely”
scientific case cannot (as of yet) be made.46 Therefore, epistemic values
designed for the binary goal of inclusion/exclusion do not provide the full story
for risk assessment. The aggregation of the data into assessment output is thus
made with reference – implicit or explicit – to different methods for
aggregating uncertainties, and hence with reference to values other than those
internal to science.47
An illustrative example by Nils-Eric Sahlin and Johannes Persson shows
how different estimations of epistemic uncertainty, not different
scientific data, result in different safety levels regarding dioxin in fish for
the United States and the Nordic countries.48 Different estimations of how
the sensitivity of humans compares to that of the test animals, together with
different safety factors, resulted in safety levels for dioxin intake differing
by almost a factor of 1000 (5 pg/kg body weight and day in the Nordic countries
compared to 0.0064 pg/kg for the EPA, the US Environmental
Protection Agency).49
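The size of this divergence can be checked directly from the two figures cited above; nothing beyond those numbers is assumed:

```python
# Quick check of the divergence between the two recommended intake limits
# discussed above; only the figures given in the text are used.
nordic_limit = 5.0      # pg/kg body weight and day (Nordic countries)
epa_limit = 0.0064      # pg/kg body weight and day (US EPA)

ratio = nordic_limit / epa_limit
print(round(ratio))     # prints 781, i.e. a factor approaching 1000
```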
The conclusion to be drawn from the above considerations is that even
if risk assessment does not conclude in an explicit recommendation of whether or
not to accept a certain activity or substance, it is susceptible to the same type of
objections due to epistemic uncertainties. Just as was the case when we
focused on the risk management process, what weight we should put on
different cases of uncertain knowledge in the risk assessment stage reflects
our values in regard to uncertain knowledge, and is thus an open question
not to be settled only by those with the best knowledge of the risk.
Therefore, depending on where we draw the line for the scientific process,
we get different “Expert Arguments”. If we say that the scientific process
actually does conclude in statements of safety, such as “activity X is safer
than Y”, we may in practice consider this very close to a recommendation
indeed, and the step to our Expert Argument is short. If, on the other hand,
the outcome of the scientific process is only the most basic scientific data –
the core level in figure 1 – we move considerably further away from the Expert
Argument, since the gap between data presented in full and a
recommendation will be quite large. But then the question emerges how this
data itself may be of help to risk management, since it will probably be in a
form that requires considerable interpretation and merging to be manageable
– i.e., “risk assessment”.
It seems reasonable to conclude that when the outcome of the risk
assessment is such that it is possible to use as basis for a decision regarding a
complex societal activity, it is on such a high level that it includes, explicitly
or implicitly, recommendations or proto-recommendations. Therefore, since
there is no possibility of “insulating” risk assessment from epistemic
uncertainty, and thus from the kind of normative aspects we recognised in risk
management, the objections to the Expert Argument based on
considerations of epistemic uncertainty are relevant also for risk assessment.
Trying to improve the scientific basis and to reach better estimations of the
risks involved in hazardous activities, technologies and substances is
certainly a major aim of risk assessment and risk management. However,
epistemic uncertainty will always be present where risk is involved and the
focus is on safety. The quality of knowledge differs for different activities,
technologies and substances, and it is far from evident that what on the
surface looks like the same degree of risk should be handled the same way.
Therefore, we must actively focus on ways of measuring and communicating
epistemic uncertainty, giving societal decision-making the best foundation
possible.
There are other important aspects outside the scope of the argument
from epistemic uncertainty and a focus on safety, such as moral aspects of
risk taking and questions of whether there is a just distribution of risks and
benefits among the population.50 These aspects should be considered in
addition to the epistemic uncertainty factor. Even without them, however, what
has here been called the Expert Argument is seriously undermined. Having the best
knowledge of the risk is not enough. Epistemic uncertainty is a basic fact
relevant to risk and safety decisions and this should be reflected in the entire
field of risk research.
References
Ackerman, F. and Heinzerling, L. (2002) Pricing the priceless: cost-benefit
analysis of environmental protection, University of Pennsylvania Law Review
150:1553-1584.
Möller N., Hansson S. O., Peterson M. (2005), Safety is more than the
antonym of risk, forthcoming in Journal of Applied Philosophy.
Cohen, B. (2003) Probabilistic Risk Analysis for a High-Level Radioactive
Waste Repository, Risk Analysis 23:909-915.
Churchland, P. (1979) Scientific Realism and the Plasticity of Mind. Cambridge:
Cambridge University Press.
Drottz-Sjöberg, B.-M. (1996) Stämningar i Storuman efter Folkomröstningen om ett
Djupförvar (Projekt Rapport No. PR D-96-004). Stockholm: SKB.
Durodié, B. (2003a) The True Cost of Precautionary Chemicals Regulation,
Risk Analysis 23:389-398.
Durodié, B. (2003b) Letter to the Editor Regarding Chemical White Paper
Special Issue, Risk Analysis 23:427-428.
Ellsberg, D. (1961) Risk, Ambiguity and the Savage axioms, Quarterly Journal
of Economics 75:643-669.
European Commission (2003) Technical Guidance Document in support of
Commission Directive 93/67/EEC on Risk Assessment for new notified
substances, Commission Regulation (EC) No 1488/94 on Risk Assessment for
existing substances and Directive 98/8/EC of the European Parliament and of the
Council concerning the placing of biocidal products on the market. Luxembourg:
Joint Research Centre, EUR 20418 EN, Office for Official Publications
of the EC.
Fetherstonhaugh, D., Slovic, P., Johnson, S. M., & Friedrich, J. (1997/2000)
Insensitivity to the Value of Human Life: A Study of psychophysical
Numbing, in Slovic (2000), 372-389.
Fischhoff, B., Slovic, P., Lichtenstein, S., Read, S., & Combs, B.
(1978/2000) How Safe Is Safe Enough? A Psychometric Study of
Attitudes Toward Technological Risk and Benefits, in Slovic (2000), 80-
103.
Hanson, N. (1958) Patterns of Discovery. Cambridge: Cambridge University
Press.
Hansson, S.O. (1993) The false promises of risk analysis. Ratio 6:16-26.
Hansson, S.O. (1998) Setting the limit: Occupational Health Standards and the
Limits of Science. Oxford: Oxford University Press.
Hansson, S.O. (2004) Fallacies of risk, Journal of Risk Research 7:353-360.
Hansson, S.O. (in press), Seven Myths of Risk, Risk Management.
Hesse, M. (1974) The structure of Scientific Inference. Berkeley: University of
California Press.
Hume, D. (1967 [1739-40]) A Treatise of Human Nature. Ed. Selby-Bigge, L.
A. Oxford: Clarendon Press.
International Organization for Standardization (2002) Risk Management –
Vocabulary – Guidelines for use in standards, ISO/IEC Guide 73:2002.
Kuhn, T. (1962) The Structure of Scientific Revolutions. Chicago: University of
Chicago Press.
Kraus, N., Malmfors, T. & Slovic, P. (1992/2000) Intuitive Toxicology:
Experts and Lay Judgements of Chemical Risks, in Slovic (2000), 285-
315.
Lakatos, I. and Musgrave, A., eds. (1970) Criticism and the Growth of Knowledge,
London: Cambridge University Press.
Leiss, W. (2004) Effective risk communication practice, Toxicology Letters
149:399-404.
Mayo, D. (1991) Sociological Versus Metascientific Views of Risk
Assessment, in Acceptable Evidence, Science and Values in Risk Management,
249-280. Eds. Mayo, D. G., and Hollander, R. D. Oxford: Oxford
University Press.
McMullin, E. (1982), Values in Science, PSA: Proceedings of the Biennial Meeting
of the Philosophy of Science Association 2:3-28.
McMullin, E. (1993). Rationality and paradigm change in science. In World
Changes: Thomas Kuhn and the Nature of Science, ed. P. Horwich, 55-78.
Cambridge: The MIT Press.
National Research Council (1983) Risk assessment in the federal government –
managing the process. Washington: National Academy Press.
Quine, W. (1951) Two dogmas of empiricism, Philosophical Review 60: 20-43.
Ramsberg, J. & Sjöberg, L. (1996) The cost-effectiveness of lifesaving
interventions in Sweden, Rhizikon: Risk Research Report No. 24, 271-290.
Stockholm: Center for Risk Research.
Resnik, M. (1987) Choices: An introduction to decision theory. Minneapolis:
University of Minnesota Press.
Sahlin, N.-E. & Persson, J. (1994) Epistemic Risk: The Significance of
Knowing What One Does Not Know, in Future Risks and Risk
Management, 37-62. Eds. Brehmer, B. & Sahlin, N.-E. Berlin: Springer.
Scheffler, I. (1982, 2nd ed.) Science and Subjectivity. Indianapolis: Hackett.
Slovic, P. (1997/2000) Trust, Emotion, Sex, Politics and Science: Surveying
the Risk-assessment Battlefield, in Slovic (2000), 390-412.
Slovic, P. (2000) The Perception of Risk. London: Earthscan.
Sjöberg, L. (1999) Risk Perception in Western Europe, Ambio 28 (6):543-
549.
Sjöberg, L. (2001) Limits of Knowledge and the Limited Importance of
Trust, Risk Analysis 21:189-198.
Sjöberg, L. (2003) Risk Perception, Emotion, and Policy: The case of
Nuclear Technology, European Review 11:109-128.
Sjöberg, L. (2004) Explaining Individual Risk Perception: The Case of
Nuclear Waste, Risk Management: An International Journal 6 (1):51-64.
Sjöberg, L. & Drottz-Sjöberg, B.-M. (1993) Attitudes to Nuclear Waste,
Rhizikon: Risk Research Report No. 125. Stockholm: Center for Risk
Research.
Sjöberg, L. & Drottz-Sjöberg, B.-M. (2001) Fairness, risk and risk tolerance
in the siting of a nuclear waste repository, Journal of Risk Research 4:75-
101.
Starr, C. (1969) Social benefit versus technological risk, Science, 165:1232-
1238.
Wandall, B. (2004) Values in science and risk assessment, Toxicology Letters,
152:265–272.
Wynne, B. (1982) Institutional mythologies and dual societies in the
management of risk, in The risk analysis controversy, 127-143. Eds.
Kunreuther, H. & Ley, E. Berlin: Springer-Verlag.
Submitted manuscript
ABSTRACT. There are many principles and methods recommended for the
engineer as means to ensure safety, as well as methods for assessing the safety
of a system. In Probabilistic risk analysis (PRA) and Probabilistic safety analysis
(PSA), risk is interpreted as a combination of the probability of an adverse
consequence and the severity of the consequence, and safety is seen as the
antonym of risk. This account is insufficient on theoretical grounds.
Furthermore, common practices in safety engineering supply evidence of its
insufficiency. Putting forward a number of principles referred to in the
literature, and focusing on three important covering principles of safety
engineering, Inherently safe design, Safety reserves and Safe fail, we show that
engineering principles for achieving safety cannot be fully accounted for on
a probabilistic risk reduction interpretation. General theoretical reasons and
safety practices alike show that an adequate concept of safety must include
not only the reduction of risk but also the reduction of uncertainty.
1. Introduction
Safety is a concern in virtually all engineering processes and systems.1 There
are many principles and methods recommended for the engineer as means
to ensure safety. There are also methods for assessing the safety of a system. A
dominating quantitative method is probabilistic risk and safety analysis.
Safety is here conceived as the antonym of risk, and risk is interpreted as a
combination of the probability of an adverse consequence and the severity
of the consequence. On such an understanding, achieving safety is
conceived as risk reduction.
Risk reduction is indeed an important aspect of gaining safety. We will
argue, however, that it is insufficient as a total understanding of safety:
ensuring safety is not only a matter of reducing the risk. The purpose of this
paper is to show that common engineering principles, if taken seriously,
cannot be fully accounted for on a narrow risk reduction interpretation. This
is not due to deficiencies of those practices, however, but to a shortcoming
in the capability of the theoretical framework to capture the concept of
safety. We will propose that an adequate concept of safety must include not
only the reduction of risk but also the reduction of uncertainty. (Uncertainty
differs from risk in the absence of well-determined probabilities.) Only with
such a broadened concept of safety can we adequately account for the
success of important engineering principles in achieving safety.
a nuclear power plant follow the principle that plant conditions associated
with high radiological doses or releases shall be of low likelihood of
occurrence, and conditions with relatively high likelihood shall have only
small radiological consequences.”4
Stated in such a vague manner, the identification of safety with risk
reduction appears uncontroversial. Clearly, if the probability or the severity
of an accident has been reduced, then this is an advantage in terms of safety.
However, it does not follow that a complete characterisation of safety can be
obtained with reference only to probability and severity. In particular, an
account restricted to these variables assumes that they are both known (or
knowable). Hence, there is an underlying premise of knowability regarding
the probabilities and consequences involved. In decision theory, such a
situation is called decision under risk.5 Events like coin-flipping or roulette
spinning are paradigmatic examples of such decision situations: if the coin or
roulette is correctly made, the probabilities as well as the possible outcomes
are known.
Decisions in most complex areas of life are not based on probabilities
that are known with certainty. Strictly speaking, the only clear-cut cases of
decision under risk (known probabilities) seem to be idealized textbook cases
such as the abovementioned coin-tossing. Even statistical data based on an
abundance of experience, such as the probability of rainfall in June in a given
European city or the accident frequency of a Boeing 747, are not fully
known in this idealized way. They provide important information to be used
in decisions, but it is perfectly reasonable to doubt whether the values apply
to the situations at hand; they are characterized by epistemic uncertainty and
should not be treated as probabilities known with full precision. Hence,
almost all decisions are decisions under uncertainty. To the extent that we make
decisions under risk, this does not mean that these decisions are made under
conditions of completely known probabilities. Rather, it means that we have
chosen to simplify our description of these decision problems by treating
them as cases of known probabilities.
The presence of epistemic uncertainty specifically applies to engineering
contexts. An engineer performing a complex design task has to take into
account a large number of hazards and eventualities. Some of these
eventualities can be treated in terms of probabilities; the failure rates of
some components may for instance be reasonably well-known from
previous experiences. However, even when we have a good experience-
based estimate of a failure rate, some uncertainty remains about the
correctness of this estimate and in particular about its applicability in the
context to which it is applied. In addition, in every system there are
uncertainties for which we do not have good or even meaningful probability
estimates. This may include the ways in which humans will interact with as
yet untested devices. It may also include unexpected failures in new materials
and constructions or complex new software, and many other types of more
or less well-defined hazards. It always includes the eventuality of new types
of failures that we have not been able to foresee.
In the presence of epistemic uncertainty the identification of safety with
risk reduction is problematic. The best estimate may state that the
probability that a particular patient survives for, say, 15 years with a left
ventricular support device is 50 percent. Yet, we have good reasons to be
uncertain of this estimate in a way we do not have to be in the case of coin
tossing. We have far more reason to believe in the probability estimates of
some failures than others, and as soon as there is uncertainty it is an open
question whether the probability of a harmful consequence is in fact higher
than what our best estimates say. If we can choose between two
components such that the best estimate of the probability of malfunction is
the same in both cases but the uncertainty about the first is greater than
about the second, then we have good reasons to use the second. In terms of
safety, ceteris paribus, an old computer system that has been running in the
environment for years may be preferable to a new one, even if the best
estimate of the probability of malfunction for the latter is the same (or even
somewhat lower).
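The point about equal best estimates with unequal uncertainty can be made precise in a minimal sketch. Here the uncertainty about each component’s failure rate is modelled, purely for illustration, as a Beta distribution; the distributions and trial counts are invented assumptions, not drawn from any engineering source:

```python
def beta_mean_var(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Component 1: roughly 1 failure observed in 100 trials.
m1, v1 = beta_mean_var(1, 99)
# Component 2: roughly 100 failures observed in 10 000 trials.
m2, v2 = beta_mean_var(100, 9900)

# The best estimates coincide (1 per cent in both cases) ...
assert abs(m1 - m2) < 1e-12
# ... but the epistemic uncertainty about component 1 is far greater.
assert v1 > 50 * v2
```

On this sketch the two components are indistinguishable by their point estimates alone; only the spread of the distributions records the reason for preferring the second component.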
In summary, engineering safety always has to take into account
uncertainties that can be meaningfully expressed in probabilistic terms as well
as eventualities for which this is not possible. In the following sections we
will argue that a reasonable interpretation of several common engineering
practices is to view them not only as methods for reducing the risk but also
as methods for reducing the uncertainty.
Firstly, Bahr writes, we should “design out” the hazard from the system.9 If
that is not possible, we should control the hazard using various fail-safe
devices, e.g. pressure valves relieving the system of dangerous pressure
build-up.10 When designing out or controlling is not an option, warning
devices (e.g. smoke alarm) and procedures (e.g. emergency shutdown) and
training should be used.11
We have divided the principles listed in Appendix 1 into four categories
or covering principles:12
(1) Inherently safe design. A recommended first step in safety engineering is to
minimize the inherent dangers in the process as far as possible. This
means that potential hazards are excluded rather than just enclosed or
otherwise coped with. Hence, dangerous substances or reactions are
replaced by less dangerous ones, and this is preferred to using the
dangerous substances in an encapsulated process. Fireproof materials
are used instead of inflammable ones, and this is considered superior to
using inflammable materials but keeping temperatures low. For similar
reasons, performing a reaction at low temperature and pressure is
considered superior to performing it at high temperature and pressure in
a vessel constructed for these conditions.
(2) Safety reserves. Constructions should be strong enough to resist loads and
disturbances exceeding those that are intended. A common way to
obtain such safety reserves is to employ explicitly chosen, numerical
safety factors. Hence, if a safety factor of 2 is employed when building a
bridge, then the bridge is calculated to resist twice the maximal load for
which it is intended.
(3) Safe fail. There are many ways a complex system may fail. The principle
of safe fail means that the system should fail “safely”: either the internal
components may fail without the system as a whole failing, or the
system fails without causing harm. One common example is fail-silence
mechanisms (also called “negative feedback” mechanisms), which are
introduced to achieve self-shutdown in case of device failure or
when the operator loses control. A classical example is the dead man’s
handle that stops the train when the driver falls asleep. One of the most
important safety measures in the nuclear industry is to ensure that
reactors close down automatically in critical situations.
(4) Procedural safeguards. There are several procedures and control
mechanisms for enhancing safety, ranging from general safety standards
and quality assurance to training and behaviour control of the staff.
Procedural safeguards are especially important in identifying new
potential harms (audits, job studies) and controlling employee behaviour
that cannot be “designed out” from the process (warnings, training).
One example of such procedural safeguards is regulations requiring vehicle
operators to rest adequately between driving periods in order to prevent
fatigue. Frequent training and checkups of staff are another. Procedural
safeguards are important as a ‘soft’ supplement to ‘hard’ engineering
methods.
interaction. Probabilistic treatment of human agent-hood is even more
problematic than the cases we are explicitly dealing with in what follows.
general principle includes avoidance of all types of hazardous
“concentrations” such as accumulation of energy and storage of large
quantities of hazardous substances in one place.15 An example of
simplification is the method of making incorrect assembly of a system part
impossible, for example by making asymmetric parts.16 Instead of training
the staff to avoid incorrect assembly, failure is simply made practically
impossible.17
The principle of inherently safe design aims at eliminating the sources of
harm. The natural interpretation of such a method is that we ensure that the
harmful event will not take place: if we remove the flammable substance, fire
will not occur. Naturally, we may say that the probability of a harmful
consequence is reduced. However, in most cases of inherently safe design
we are dealing with issues that are hard to give probabilistic treatment. The
principle of inherently safe design is best viewed as a method for protection
against the unforeseen: it is not mainly the worker at the assembly line
putting together the same parts a hundred times a day that is in need of
asymmetric parts, but the passenger in a burning lower deck compartment
trying to assemble the fire extinguisher.18 The principle of inherently safe
design is a way to decrease the uncertainty about whether harmful events
will take place. The soundness of the principle even in absence of any
(meaningful) probability estimates is an indication that something is lacking
from the traditional conception of safety as risk reduction.
5. Safety reserves
Humans have presumably made use of safety reserves since the origin of our
species. They have added extra strength to their houses, tools, and other
constructions in order to be on the safe side. However, the use of numerical
factors for dimensioning safety reserves seems to be of relatively recent
origin, probably from the latter half of the 19th century. The earliest usage of the
term recorded in the Oxford English Dictionary is from W. J. M. Rankine’s
book A Manual of Applied Mechanics from 1858. In the 1860s, the German
railroad engineer A. Wöhler recommended a factor of 2 for tension.19 The
use of safety factors has been well established for a long time in structural
mechanics and its many applications in different engineering disciplines.
Elaborate systems of safety factors have been developed, and specified in
norms and standards.
A safety factor is typically specified to protect against a particular
integrity-threatening mechanism, and different safety factors can be used
against different such mechanisms. Hence one safety factor may be required
for resistance to plastic deformation and another for fatigue resistance. As
already indicated, a safety factor is most commonly expressed as the ratio
between a measure of the maximal load not leading to the specified type of
failure and a corresponding measure of the applied load. In some cases it
may instead be expressed as the ratio between the estimated design life and
the actual service life.
In some applications safety margins are used instead of safety factors.
Although closely linked concepts, a safety margin differs from a safety factor
in being additive rather than multiplicative. In order to keep airplanes at a
sufficiently long distance from one another a safety margin in the form of a
minimal distance is used. Safety margins are also used in structural
engineering, for instance in geotechnical calculations of embankment
reliability.20
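The distinction can be summed up in a minimal sketch, with all numerical values invented for illustration: a safety factor scales the expected load, whereas a safety margin is added to it.

```python
def required_capacity_with_factor(design_load, safety_factor):
    """Capacity demanded when a multiplicative safety factor is applied."""
    return design_load * safety_factor

def required_capacity_with_margin(design_load, safety_margin):
    """Capacity demanded when an additive safety margin is applied."""
    return design_load + safety_margin

# A bridge intended for a 40-tonne maximal load, built with safety factor 2,
# is calculated to resist 80 tonnes.
assert required_capacity_with_factor(40, 2) == 80

# An embankment carrying an expected load of 40 units, designed with an
# additive margin of 15 units, must resist 55 units.
assert required_capacity_with_margin(40, 15) == 55
```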
According to standard accounts in structural mechanics, safety factors
are intended to compensate for five major categories of sources of failure.
The first two of these refer to the variability of loads and material properties.
Such variabilities can often be expressed in terms of probability
distributions. However, when it comes to the extreme ends of the
distributions, lack of statistical information can make precise probabilistic
analysis impossible. Let us consider the variability of the properties of
materials. Experimental data on material properties are often insufficient for
making a distinction between e.g. gamma and lognormal distributions, a
problem called distribution arbitrariness.22 This has little effect on the central
part of these distributions, but in the distribution tails the differences can
become very large. This is a major reason why safety factors are often used
for design guidance instead of probabilities, although the purpose is to
protect against failure types that one would, theoretically, prefer to analyze in
probabilistic terms, a point also made by Zhu (1993).
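Distribution arbitrariness can be sketched numerically; all parameter values below are invented for illustration. A gamma and a lognormal distribution matched in mean and variance are nearly indistinguishable from moderate amounts of data, yet their far tails, where failure probabilities reside, differ by an order of magnitude or more.

```python
import math

def gamma_tail(k, theta, x):
    """P(X > x) for a gamma distribution with integer shape k and scale theta."""
    terms = sum((x / theta) ** n / math.factorial(n) for n in range(k))
    return math.exp(-x / theta) * terms

def lognormal_tail(mean, var, x):
    """P(X > x) for the lognormal distribution matched to a given mean and variance."""
    sigma2 = math.log(1 + var / mean ** 2)
    mu = math.log(mean) - sigma2 / 2
    z = (math.log(x) - mu) / math.sqrt(sigma2)
    return 0.5 * math.erfc(z / math.sqrt(2))

k, theta = 4, 1.0                      # gamma with mean 4 and variance 4
mean, var = k * theta, k * theta ** 2

g = gamma_tail(k, theta, 20.0)         # eight standard deviations above the mean
l = lognormal_tail(mean, var, 20.0)    # same mean and variance

# Despite agreeing in mean and variance, the two distributions disagree
# by more than a factor of 10 in this region of the tail.
assert l > 10 * g
```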
The last three of the five items in the list of what safety factors should
protect against all refer essentially to errors in our theory and in our
application of it. They are therefore clear examples of uncertainties that are
not easily amenable to probabilistic treatment. The eventuality of errors in
our calculations or their underpinnings is an important reason for applying
safety factors. This uncertainty is not reducible to probabilities that we can
determine and introduce into our calculations. (It is difficult to see how a
probability estimate could be accurately adjusted to compensate self-
referentially for the possibility that it may itself be wrong.) It follows from
this that safety factors cannot be accounted for exclusively in probabilistic
terms.
6. Safe fail
The concept of safe fail is that of safety even if parts of the system or even
the whole system fails. There are many concepts in engineering safety for
which the principle may be applicable; several of the principles in the
Appendix may indeed be seen as safe fail applications. Sometimes in the
literature safe fail is put in contrast to fail-safe: a safe fail system, then, is a
system designed to safely fail whereas a fail-safe system is one designed not
to fail. (Put differently: it is safe from failing rather than safe when failing.)
The point is somewhat polemical but, rightly used, an instructive one, and it
carries an important lesson to which we will return later (the Titanic lesson).
That said, however, it should be noted that this polemical distinction between
safe fail and fail-safe does not hold for most explications of ‘fail-safe’ in the
literature, since those explications would rightly be interpreted
as referring to safe fail rather than fail-safe. Thus, both terms would refer to
the same thing. Hammer (1980), for example, says: “Fail-safe design tries to
ensure that a failure will leave the product unaffected or will convert it to a
state in which no injury or damage will occur.”24 Similarly, the IAEA (1986)
states that “the principle of ‘fail-safe’ should be incorporated […] i.e. if a
system or component should fail the plant should pass into a safe state”.25
Rather, the distinction seems to be one of perspective: for any level of a
system/component, we may ask whether it will fail as well as whether its
failure will result in danger beyond the system/component (e.g. harm to
humans). Using the concept of safe fail means ultimately focusing on the
latter question. We will therefore use this term to cover the entire spectrum
including this latter concern of being safe when the system fails.
Fail-safe. The concept of fail-safe is mainly used for specific methods and
principles for keeping the system safe in case of failure, such as shutting
down the components or the entire system. Basically there are two modes of
fail-safe (in this narrower construal), fail-silence and fail-operational. Fail-silence
means that the system is stopped when a critical failure is detected,
prohibiting any harmful event from occurring (the expressions “negative
feedback” and “fail-passive”26 are also used).27 An electrical fuse is a
paradigmatic example of a fail-silence application, as is the dead man’s
handle that stops the train when the driver falls asleep. Fail-operational
means that the system will continue to work despite the fault.28 In aviation,
fail-operational systems are paramount: airborne failures may lead to partial
operational restrictions, but system shutdown is normally not a particularly
safe option. A safety-valve is another paradigmatic fail-operational device: if
the pressure becomes too high in a steam-boiler, the safety-valve lets out
steam from the boiler (without shutting down the system).
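The two modes can be caricatured in a few lines of code; the example is entirely illustrative and the device representations are invented placeholders:

```python
def fail_silent(device):
    """On a detected fault, stop the device: a fuse-like response."""
    device["running"] = False
    return device

def fail_operational(device, lost_function):
    """On a detected fault, keep running with reduced function:
    a safety-valve-like response."""
    device["degraded"].add(lost_function)
    return device

train = {"running": True, "degraded": set()}
plane = {"running": True, "degraded": set()}

# Dead man's handle: the train is brought to a safe standstill.
assert fail_silent(train)["running"] is False

# Airborne failure: shutdown is not a safe state, so the plane flies on
# with the failed function flagged as degraded.
plane = fail_operational(plane, "autopilot")
assert plane["running"] is True and "autopilot" in plane["degraded"]
```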
Safety barriers. Another application of the safe fail principle is the usage of
several safety barriers. Some of the best examples of the use of multiple safety
barriers can be found in nuclear waste management. The proposed
subterranean nuclear waste repositories all contain multiple barriers. We can
take the current Swedish nuclear waste project as an example: The waste will
be put in a copper canister that is constructed to resist the foreseeable
challenges. The canister is surrounded by a layer of bentonite clay that
protects the canister against small movements in the rock and “acts as a
filter in the unlikely event that any radionuclides should escape from a
canister”.29 This whole construction is placed in deep rock, in a geological
formation that has been selected to minimize transportation to the surface
of any possible leakage of radionuclides. The whole system of barriers is
constructed to have a high degree of redundancy, so that if one of the
barriers fails the remaining ones will suffice. By usual PRA standards, the
whole series of barriers would not be necessary. Nevertheless, sensible
reasons can be given for this approach, namely reasons that refer to
uncertainty. Perhaps the copper canister will fail for some unknown reason
not included in the calculations. Then, hopefully, the radionuclides will stay
in the bentonite, etc.
In nuclear power plant safety the principle of safety barriers has been
generalised to the concept of Defence in depth.30 Even if physical layers of
safety barriers are the foundation of the concept, Defence in depth includes a
large superstructure of procedural and technical safeguards for events
including and going beyond emergency preparedness in a worst-case
scenario.
Reliability, Redundancy, Segregation, Diversity. Several of the principles listed
in the appendix can be seen as applications of the safe fail principle. Four
closely related principles are reliability, redundancy, segregation and diversity.
Reliability is here the key concept which the latter three may be seen as
means of achieving.31 Reliability and safety are related concepts but
important to keep apart. A system can be very unreliable and yet perfectly
safe, as long as the failures in question are minor. A regular office PC is an
example of a (relatively) unreliable but safe system: even if you have to boot
the system now and then, the (physical) safety consequences are not
severe.32 In the context of engineering safety, reliability issues mainly
concern system functions important for safety.
Redundancy is an important means of achieving reliability: if one
component fails, there are alternative ways to achieve the function in
question and the system performance is unaltered.33 Having more engines
than needed for flight is an example of system redundancy. Sending a piece
of information two independent ways through a system is another.
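The arithmetic behind redundancy can be sketched as follows. The failure probability and component count below are purely illustrative assumptions, and the calculation presumes that the redundant components fail independently of one another, an assumption that the discussion of segregation and diversity below calls into question:

```python
# Illustrative sketch (not from the thesis): with n independent redundant
# components, each failing with probability p, the redundant function is
# lost only if all n components fail. The numbers are assumed for illustration.

def redundant_failure_probability(p: float, n: int) -> float:
    """Probability that all n independent components fail."""
    return p ** n

single = redundant_failure_probability(0.01, 1)  # one component: 0.01
duplex = redundant_failure_probability(0.01, 2)  # two redundant: ~1e-4
print(single, duplex)
```

On these assumptions, a second redundant component reduces the failure probability by two orders of magnitude, but only so long as the two components really do fail independently.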
Segregation is a related concept. Two parts of a system may both fail if they
are physically (or temporally) too close; there may for example be a common
cause of their failure due to their proximity, or the failure of one part may
cause the failure of the other, for instance if overload of an engine causes a
fire that makes another engine fail. Yet another related concept is diversity:
redundant parts should have different realisations of the same function in
order to avoid common cause failures, i.e. failures resulting from one
common cause. Redundant software systems, for example, should not be
based on the same algorithms.34
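The effect of common cause failures on redundancy can be illustrated with the so-called beta-factor model, a standard textbook device in reliability engineering (not drawn from this thesis); all figures below are assumptions chosen for illustration:

```python
# Beta-factor sketch: a fraction beta of each component's failure probability
# p is attributed to a shared cause that defeats all redundant copies at once;
# only the remaining (1 - beta) * p fails independently. Diversity is, in
# effect, an attempt to drive beta down. All numbers are illustrative.

def duplex_failure_probability(p: float, beta: float) -> float:
    """Failure probability of two redundant components under the beta-factor model."""
    independent = ((1 - beta) * p) ** 2  # both fail for separate reasons
    common = beta * p                    # one shared cause fails both
    return independent + common

p = 0.01
print(duplex_failure_probability(p, beta=0.0))  # fully diverse parts: ~1e-4
print(duplex_failure_probability(p, beta=0.1))  # 10% common cause: ~1e-3
```

Even a modest common-cause fraction dominates the result, which is why redundant software built on the same algorithms gains far less than the naive independence calculation suggests.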
The principle of safe fail in all its different applications (far from all of
which have been mentioned here) is yet another example of how the
uncertainty aspect is inherent in the concept of safety. Meaningful
probability estimates may perhaps, in some contexts, be possible for most of
the sub-principles mentioned here. Even so, as should now be clear, the
purpose of these methods is to prevent the unforeseeable, and here the
tools of probabilistic risk and safety assessment are insufficient. We may call
this the Titanic lesson. We now know that the Titanic was far from unsinkable.
But let us consider a hypothetical scenario. Suppose that tomorrow a
shipbuilder comes up with a convincing plan for an unsinkable boat. A
probabilistic risk analysis has been performed, showing that the probability
of the ship sinking is incredibly low. Based on the PRA, a risk-benefit
analysis has been performed. It shows that the cost of lifeboats would be
economically indefensible. Because of the low probability of an accident, the
expected cost per life saved by the lifeboats is above 1000 million dollars,
way above what society at large is prepared to pay for a life saved in areas
such as that of road traffic. The risk-benefit analysis therefore clearly shows
us that the ship should not have any lifeboats. How should the naval
engineer respond to this proposal? Should she accept the verdict of the
economic analysis and exclude lifeboats from the design? Our proposal is
that a good engineer should not act on the risk-benefit analyst’s advice in a
case like this. The reason should now be obvious: it is possible that the
calculations may be wrong, and if they are, then the outcome may be
disastrous. Therefore, the additional safety barrier in the form of lifeboats
(and evacuation routines and all the rest) should not be excluded, in spite of
the probability estimates showing them to be uncalled for.
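The risk-benefit analyst's arithmetic in this scenario can be made explicit in a short sketch. None of the figures below come from the text; the lifeboat cost, sinking probability and number of lives saved are invented assumptions, chosen only to show how a tiny probability estimate drives the expected cost per life saved above the 1000-million-dollar mark:

```python
# Illustrative sketch of the lifeboat risk-benefit arithmetic.
# All figures are assumed for illustration, not taken from the thesis.

lifeboat_cost = 5_000_000        # dollars, assumed
p_sinking = 1e-6                 # PRA estimate that the ship sinks, assumed
lives_saved_if_sinking = 2000    # passengers the lifeboats would save, assumed

expected_lives_saved = p_sinking * lives_saved_if_sinking
cost_per_life = lifeboat_cost / expected_lives_saved
print(f"Expected cost per life saved: ${cost_per_life:,.0f}")
# With these assumed numbers: $2,500,000,000 per life saved.
```

The point of the scenario is, of course, that this conclusion stands or falls with the estimate of p_sinking, and it is precisely such estimates that epistemic uncertainty calls into question.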
7. Conclusions
We have seen that major strategies in safety engineering are used to deal not
only with risk – in the standard, probabilistic sense of the term – but also
with uncertainty (in a way that is not reducible to risk). From this either of
two conclusions can be drawn: either these principles are inadequate to deal
with safety; or the concept of safety as the antonym of risk is insufficient.
We propose that the latter conclusion is the more plausible one. If so, then
this has important implications for the role of probabilistic risk analysis in
engineering contexts. PRA is an important tool for safety, but it is not the
final arbiter, since it does not deal adequately with issues of uncertainty.
Although engineers calculate more than members of most other professions,
the purpose of these calculations is to support, not to supplant, the
engineer’s judgment. Safety is a more complex matter than what can be
captured in probabilistic terms, and our understanding of the concept must
mirror this fact.
Appendix 1
Principles of engineering safety. In the third column, they are divided into
different categories: (1) inherently safe design, (2) safety reserves, (3) safe fail
and (4) procedural safeguards.
Principle, reference Brief description Cat
Inherently safe design35 Potential hazards are avoided rather than 1
controlled.
Safety factor36 The system is constructed to resist loads 2
and stresses exceeding what is necessary
for the intended usage by multiplying the
intended load by a factor (>1).
Margin of safety37 An (additive) margin is used for 2
acceptable system performance as a
precautionary measure.
Stress margins38 The system is designed so that statistical 2
variations in stresses do not lead to
failure.39
Screening40 Control measure to eliminate 3,
components that may pass operating tests 4
for specific parameters but show signs of
possible future failure (or reduced
sustainability).41
Safety barriers42 Physical barriers providing multiple layers 3
of protection; if one layer fails, the next
will protect from system failure.
Reliability43 A measure of system failure rate. High 3
reliability against certain types of failures
is necessary for system safety.
Redundancy44 Method of achieving reliability for 3
important system functions. Redundant
parts protect the system in case of failure
of one part.
Diversity45 Redundant system parts are given 3
different design characteristics, so that a
common cause cannot lead to failure in
all redundant parts.
Segregation (Independence, Redundant parts should not be 3
Isolation)46 dependent on each other. Malfunction in
one part should not have any consequences
for a redundant part. One way to avoid
this is to keep the parts physically apart.
Fail-safe design47 Method to ensure that even if a failure of 3
one part occurs the system remains safe,
often by system shut down or by entering
a “safe mode” where several events are
not permitted.
Proven design48 Relying on design that has been proven 3
by the “test of time”, i.e. using solutions
or materials that have been used on many
occasions and over time without failure.
Single failure criterion Design criterion stating that a failure of a 3
(Independent single system part should not lead to
malfunction)49 system failure. System failure should only
be possible in case of independent
malfunction.
Pilotability (safe The system operator should have access 3
information load)50 to the control means necessary to prevent
failure, and the work should not be too
difficult to perform.51
Quality52 Reliance on materials, constructions etc 3,4
of proven quality for system design.
Operational interface Focusing on controlling the interface 3
control53 between humans and (the rest of) the
system and equipment. For example,
using interlocks to prevent human action
from having harmful consequences.
Environmental control54 The environment should be controlled so 4
that it cannot cause failures. Especially,
neither extremes of normal
environmental fluctuations nor energetic
events such as fire should be able to
cause failures.
Operating and Automatic as well as manual procedures 4
maintenance are used as a defence against failures.
procedures55 Training in order to follow procedures is
a part of such safety procedures.
Job study Identifying potential causes through 4
observations56 collecting data from observations and
audits, e.g. interviewing staff about
potential or existent hazardous practices.
Controlling behaviour57 Controlling certain types of behaviour 4
(e.g. alcohol and drug abuse, lack of
sleep), e.g. by tests and audits.
Standards58 Standardised solutions of system design, 1-4
material usage, maintenance procedures
etc. Standards may be applied to all areas
of safety engineering.
Timed replacement59 Replacing components before their 4
performance has decreased as a
precautionary procedure. This can be
done regularly without any signs of
decreased performance, or by using
indicators of potential failure such as
component degradation or drift.
Procedural safeguards60 Procedures such as instructions to 4
operators to take or avoid specific actions
in general or in special circumstances.
Warnings61 Warning devices and information are 4
provided when control measures are
insufficient (or in addition to them).
Notes
1 The authors would like to thank Martin Peterson, Kalle Grill and the members of the
Risk Seminar at the Department of Philosophy and the History of Technology at the
Royal Institute of Technology for their helpful criticism.
2 For example, the International Organisation for Standardization (2002) defines risk as
“the combination of the probability of an event and its consequences”. Green (1982), 3,
and Cohen (2003) are two examples of this usage. Conceptualising safety as the inverse of
risk is frequently found in the literature, e.g. Harms-Ringdahl (1993), 3. Misumi
and Sato (1999), 135-144, write “[R]isks are defined as the combination of the probability
of occurrence of hazardous event and the severity of the consequence. Safety is achieved
by reducing a risk to a tolerable level”. Koivisto (1996), I/5, defines safety as a function of
probability and consequence, as do Roland & Moriarty (1983), 8-9.
3 International Atomic Energy Agency (1986), 2 (my emphasis).
4 International Atomic Energy Agency (1986), 3.
5 Decision-making under perfect deterministic knowledge is called “decision under certainty”.
6 E.g. International Atomic Energy Agency (1986), Hammer, W. (1980); Nolan (1996).
7 Koivisto (1996), 18.
8 N. J. Bahr (1997), 14-17. NASA (1993), 1-3.
9 Bahr (1997), 14.
10 Bahr (1997), 15.
11 Bahr (1997), 16-17.
12 The categorization is tentative; as should be clear from the above discussion, there are
several options both for which categories to choose and for which category each principle
fits (since they are not mutually exclusive).
13 Example from Bahr (1997), 14-15.
14 Kletz (1991).
15 Gloss & Wardle (1984), 174.
16 Hammer (1980), 108-109.
17 C.f. D. Gloss & Wardle (1984), 171-172, Hammer (1980), 108-109.
18 The worker in the assembly line also needs help not to make mistakes, of course,
primarily in the cases where she is tired, ill or absent-minded, i.e. the cases that are
hardest to treat probabilistically.
19 Randall (1976).
20 Duncan (2000).
21 Knoll (1976). Moses (1997).
22 Ditlevsen (1994).
23 Zhu, T.L. (1993).
24 Hammer (1980), 115.
25 International Atomic Energy Agency (1986), 9.
26 E.g. Hammer (1980), 115.
27 Jacobsson, Johansson, Lundin (1996), 7.
28 Jacobsson, Johansson, Lundin (1996), 7. Some authors distinguish between partial
35 Nolan (1996), 22; Gloss & Wardle (1984), 171-174, give several principles in this
category; Koivisto (1996), 25-28; Bahr (1997), 14.
36 Hammer (1980), 66, 71 (derating).
37 Hammer (1980), 67.
38 International Atomic Energy Agency (1990), 34.
39 International Atomic Energy Agency (1990), 34 and Hammer (1980), 67.
40 Hammer (1980), 76.
41 Hammer (1980), 76.
42 International Atomic Energy Agency (2000); International Atomic Energy Agency
(1986), 4.
43 International Atomic Energy Agency (1986), 7; Nolan (1996), 22.
44 International Atomic Energy Agency (1986), 7; Nolan (1996), 23; Hammer (1980), 71,
74-76.
45 International Atomic Energy Agency (1986), 8.
46 International Atomic Energy Agency (1986), 8. In International Atomic Energy Agency
(1990), 34, the term “segregation” is used (a better term perhaps if including also temporal
“isolation”). Gloss & Wardle (1984), 174, use the term “isolation” here. C.f. also Nolan
(1996), 23, 117.
47 Jakobson & Johansson (1996), 7-8; IAEA (1986), 9; Nolan (1996), 22, 119; Bahr (1997).
48 Koivisto (1996), 19-24; International Atomic Energy Agency (1986), 10; International
References
Bahr, N. J. (1997), System Safety Engineering and Risk Assessment: A Practical Approach,
Carnino, A. Nicolet, J.-L. , Wanner, J.-C. (1990) Man and risks: technological and human risk
Shinozuka, M. and Yao, J. (1994) Proc. of ICOSSAR'93: Structural Safety & Reliability,
1241-1247.
Duncan, J.M. (2000) Factors of safety and reliability in geotechnical engineering, Journal of
Gloss, D., & Wardle, M. Gayle (1984), Introduction to safety engineering, New York: Wiley.
Hammer, W. (1980) Product Safety Management And Engineering, Englewood Cliffs, New
Jersey: Prentice-Hall.
Harms-Ringdahl, Lars (1993), Safety analysis: principles and practice in occupational safety, London:
International Atomic Energy Agency (1986), General design safety principles for nuclear power
International Atomic Energy Agency (1990), Application of the single failure criterion, Vienna:
International Atomic Energy Agency (2000), Safety of Nuclear Power Plants: Design, Vienna:
International Organization for Standardization (2002) Risk Management – Vocabulary –
Jacobsson, J., Johansson, L.-Å., Lundin, M. (1996), Safety of Distributed Machine Control
Kletz, T.A. (1991), Plant Design for Safety, a user-friendly approach, New York: Hemisphere Pub.
Corp.
Knoll, F. (1976) Commentary on the basic philosophy and recent development of safety
allocation of safety-integrity levels, Reliability Engineering & System Safety, 66(2): 135-144.
Structures 19:293-301.
NASA (1993), Safety Policy and Requirements Document. NHB 1700.1 (V1-B). Washington DC:
NASA, 1-3.
Nolan, D. (1996), Handbook of fire and explosion protection engineering principles for oil,
Randall, F.A. (1976) The safety factor of structures in history, Professional Safety January:12-
28.
Roland H., Moriarty B. (1983), System safety engineering and management, New York: John Wiley
& Sons.
Zhu, T.L. (1993) A reliability-based safety factor for aircraft composite structures, Computers
& Structures 48: 745-748.
Theses in Philosophy from the Royal Institute of Technology
1. Martin Peterson, Transformative Decision Rules and Axiomatic Arguments for the Principle
of Maximizing Expected Utility, Licentiate thesis, 2001.
2. Per Sandin, The Precautionary Principle: From Theory to Practice, Licentiate thesis, 2002.
3. Martin Peterson, Transformative Decision Rules. Foundations and Applications, Doctoral
thesis, 2003.
4. Anders J. Persson, Ethical Problems in Work and Working Environment Contexts,
Licentiate thesis, 2004.
5. Per Sandin, Better Safe than Sorry: Applying Philosophical Methods to the Debate on Risk and
the Precautionary Principle, Doctoral thesis, 2004.
6. Barbro Björkman, Ethical Aspects of Owning Human Biological Material, Licentiate
thesis, 2005.
7. Eva Hedfors, The Reading of Ludwig Fleck. Sources and Context, Licentiate thesis, 2005.
8. Rikard Levin, Uncertainty in Risk Assessment – Contents and Modes of Communication,
Licentiate thesis, 2005.
9. Elin Palm, Ethical Aspects of Workplace Surveillance, Licentiate thesis, 2005.
10. Jessica Nihlén Fahlquist, Moral Responsibility in Traffic Safety and Public Health,
Licentiate thesis, 2005.
11. Karin Edvardsson, How To Set Rational Environmental Goals: Theory and Applications,
Licentiate thesis, 2006.
12. Niklas Möller, Safety and Decision-making, Licentiate thesis, 2006.