
REASONS AS DEFAULTS

John F. Horty
Department of Philosophy & Institute for Advanced Computer Studies
University of Maryland, College Park

Philosophers' Imprint, vol. 8, no. 3 (april 2007)
© John F. Horty
<www.philosophersimprint.org//>

1. Introduction

Much of the recent literature on reasons is focused on a common range of issues, concerning, for example, the relation between reasons and motivation, desires, and values, the issue of internalism versus externalism in the theory of reasons, or the objectivity of reasons. This paper is concerned with a different, and orthogonal, set of questions: What are reasons, and how do they support actions or conclusions? Given a collection of individual reasons, possibly suggesting conflicting actions or conclusions, how can we determine which course of action, or which conclusion, is supported by the collection as a whole? What is the mechanism of support?

I begin by considering one possible line of response, which I refer to as the weighing conception, since it is based on the idea that reasons support actions or conclusions by contributing a kind of normative or epistemic weight, and that the goal is then to select those options whose overall weight is greatest. This general idea is almost certainly ancient, but we know that it goes back at least to 1772, when we can find a version of the weighing conception described with some elegance in a letter from Benjamin Franklin to his friend Joseph Priestley, the chemist. Priestley had written to Franklin for advice on a practical matter. In his reply, Franklin regrets that he has no help to offer on the specific matter at hand, since he is not sufficiently familiar with the facts, but recommends a general technique for reaching decisions in situations of the kind facing his friend:

    My way is to divide half a sheet of paper by a line into two columns; writing over the one Pro, and over the other Con. Then, during the three or four days consideration, I put down under the different heads short hints of the different motives, that at different times occur to me, for or against the measure. When I have thus got them all together in one view, I endeavor to estimate their respective weights; and where I find two, one on each side, that seem equal, I strike them both out. If I find a reason pro equal to some two reasons con, I strike out the three.

    If I judge some two reasons con, equal to three reasons pro, I strike out the five; and thus proceeding I find at length where the balance lies . . . (Franklin 1772, pp. 348–349)

I suspect that most of us would now regard Franklin's picture as quixotic, or at least extraordinarily optimistic, both in its assumption that the force of reasons can be captured through an assignment of numerical weights, and in the accompanying assumption that practical reasoning can then be reduced to an application of arithmetic operations (indeed, Franklin goes on to characterize his technique as a "moral or prudential algebra"). But neither of these assumptions is actually essential. A number of contemporary writers are still willing to endorse what they think of as a more general form of the weighing conception—like Franklin's, but without the commitment to a precise numerical representation, or to arithmetic operations. According to this more general view, reasons can still be thought of as supporting conclusions by contributing weights, of a sort; the weights contributed by different reasons can still be thought of as combined in some way, even if the combination function is not arithmetic; and these combined weights can still be balanced against each other, with the correct outcome defined as that whose weight is greatest. One example of this generalized weighing conception, distinguished by its exceptional clarity, can be found in a recent paper by John Broome, who summarizes his position as follows:

    Each reason is associated with a metaphorical weight. This weight need not be anything so precise as a number; it may be an entity of some vaguer sort. The reasons for you to φ and those for you not to φ are aggregated or weighed together in some way. The aggregate is some function of the weights of the individual reasons. The function may not be simply additive . . . It may be a complicated function, and the specific nature of the reasons may influence it. Finally, the aggregate comes out in favor of your φing, and that is why you ought to φ. (Broome 2004, p. 37)

My objection to this picture is not so much that it is a version of the weighing conception—although, in fact, the theory I present in this paper is set out explicitly as an alternative to this view. Instead, my objection is that the generalized weighing conception as described here is simply incomplete as an account of the way in which reasons support conclusions. Broome distances himself from the more objectionable features of the quantitative weighing conception—numbers, additive functions—but fails to tell us what should take their place. If the weights associated with reasons are not numbers, what are they; what are these entities of a vaguer sort? If these weights are not aggregated through simple addition, how are they aggregated; what is this more complicated function?

In raising this objection, I do not mean to criticize Broome, who surely does not intend to present anything like a complete account of his generalized weighing conception in the paper I have cited, but only to sketch the view before getting on with other work. Nevertheless, I do feel that the objection highlights a real problem for contemporary advocates of the generalized weighing conception, and one that I have not seen addressed. Once we move past the level of a rough outline, it will not do to say only that reasons lend some kind of weight to conclusions, and that these weights are assembled somehow. A theory of the relation between reasons and their outcomes should be subject to the same standards of rigor that Frege brought to the study of the relation between premises and their consequences.

Let us return to our initial questions: What are reasons, and how do they support conclusions? My answer is that reasons are provided by defaults, and that they support conclusions in accord with the logic of default reasoning (sometimes known as nonmonotonic logic, sometimes as defeasible logic). The goal of this paper is to articulate and begin to develop this answer.

Although there is no single theory that we can now point to as the correct logic for default reasoning, I begin by describing what seems to me to be one particularly useful way of developing such a logic. This logic is presented here only in as much detail as necessary to show that there really is a concrete theory at work, to provide some idea of the shape of that theory, and also of the issues it raises.
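Before turning to that logic, it is worth seeing just how easy the original, quantitative weighing conception is to make precise—which highlights, by contrast, exactly what the generalized version still owes us. The sketch below is purely illustrative: the numerical weights and the additive combination function are precisely the assumptions the generalized conception abandons.

```python
# Illustrative only: the quantitative weighing conception, with invented
# numerical weights and simple addition as the combination function.

def balance(pro_weights, con_weights):
    """Return 'pro', 'con', or 'tie' by comparing summed weights."""
    total = sum(pro_weights) - sum(con_weights)
    if total > 0:
        return "pro"
    if total < 0:
        return "con"
    return "tie"

# Two reasons pro (weights 2 and 1) exactly balance one reason con (weight 3),
# as when Franklin strikes out "a reason pro equal to some two reasons con".
print(balance([2, 1], [3]))      # tie
print(balance([2, 2], [1, 2]))   # pro
```

The generalized weighing conception keeps the shape of this procedure while refusing to say what replaces the numbers or the sum—which is the incompleteness charged above.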

After presenting this default logic, I show that it can be elaborated to deal with certain issues involved in developing a more robust theory of reasons.[1] I focus on two such issues: first, situations in which the priority relations among reasons themselves seem to be established through default reasoning; and second, the treatment of undercutting defeat and exclusionary reasons. Finally, by way of application, I show how the resulting account can shed some light on a topic in the theory of reasons that has recently attracted a good deal of attention: Jonathan Dancy's interesting and influential argument from reason holism to a form of extreme particularism in moral theory.

2. A theory of default reasoning

The particular logic presented here has its roots in two earlier sources. The first is the original default logic of Reiter (1980), one of the most widely applied formalisms for nonmonotonic reasoning, and arguably the most successful; the second is a body of literature on the semantics of nonmonotonic inheritance networks, initiated by Touretzky (1986), developed by a number of authors, and reviewed in my (1994b). The current theory is mapped out along the general lines suggested by Reiter, but includes an account of priority relations among default rules based on the treatment of this topic found in the literature on inheritance reasoning.

To see the shape of the theory, let us begin with a standard example, known as the Tweety Triangle.[2] Our everyday reasoning seems to be governed by a general default according to which birds, as a rule, are able to fly. If an agent is told only that Tweety is a bird, it would be natural for that agent to conclude that Tweety is able to fly, and it is natural to suppose that the reason for this conclusion is provided by the general default about birds, instantiated for Tweety.[3] But suppose the agent is told, in addition, that Tweety is a penguin. There is also a default according to which penguins, as a rule, are not able to fly, and once it has been established that Tweety is a penguin, it is this default, instantiated for Tweety, which now provides a reason for a conflicting conclusion. Because the default about penguins is stronger than the default about birds, the first of these reasons is defeated by the second, so that the agent should withdraw its initial judgment and conclude instead that Tweety cannot fly.

[1] The overall theory is set out more carefully, and explored from a technical perspective, in my (2007).
[2] The example is known as the Tweety Triangle because of its triangular shape when depicted as an inheritance network—a graphical representation of default relations among classes of entities; see my (1994b) for an overview of nonmonotonic inheritance reasoning.
[3] Throughout this paper, in an effort to find language that is both gender neutral and unobtrusive, I often assume that the agent is an impersonal reasoning device, such as a computer, which can appropriately be referred to with the pronoun 'it'.

2.1 Default theories and scenarios

We take as background an ordinary propositional language, with ⊃, ∧, ∨, and ¬ as the usual propositional connectives, and with ⊤ as a special constant representing truth. The turnstile ⊢ indicates ordinary logical consequence. Where A and B are formulas from the background language, let A → B represent the default rule that allows us to conclude B, by default, whenever it has been established that A. To illustrate: think of B as an abbreviation for the statement Bird(Tweety) and of F as an abbreviation for Fly(Tweety), so that B stands for the statement that Tweety is a bird, and F for the statement that Tweety can fly. Then B → F is the rule that allows us to conclude that Tweety can fly, by default, once it has been established that Tweety is a bird. This particular default can be thought of as an instance for Tweety of the general default Bird(x) → Fly(x), telling us that, as a rule, birds are able to fly. It is, in many ways, easier to understand general default rules like this—defeasible generalizations—than it is to understand their particular instances. But in order to avoid the complexities involved in a treatment of variables and instantiation in default logic, we restrict ourselves in this paper to a propositional language, and therefore focus only on particular defaults, rather than the defeasible generalizations they instantiate.

We assume two functions—Premise and Conclusion—that pick out the premises and conclusions of default rules: if δ is the default A → B, then Premise(δ) is the statement A and Conclusion(δ) is the statement B. The second of these functions is lifted from individual defaults to sets of defaults in the obvious way, so that, where D is a set of defaults,

    Conclusion(D) = {Conclusion(δ) : δ ∈ D}.

Throughout the discussion, we will be slipping back and forth, rather casually, between what might be called practical and epistemic reasons—reasons for actions, versus reasons for conclusions. The default that I ought to have lunch with you given that I promised to do so (a particular instance of the defeasible generalization that I ought to do whatever I promise) is most naturally understood as recommending actions rather than supporting propositions; our reason for concluding that Tweety flies, by contrast, is most naturally understood as supporting the proposition that Tweety flies, providing an epistemic reason. Various theses could be advanced concerning the relation between these two kinds of reasons. One such thesis is that epistemic reasons should be subsumed as a species under the genus of practical reasons: on this view, what we have interpreted as an epistemic reason, supporting the proposition that Tweety flies, would be taken not to support a proposition but actually to recommend an action—concluding that Tweety flies. Another thesis is that practical reasons should be subsumed as a species under the genus of epistemic reasons: on this view, what we have interpreted as providing a practical reason would be taken to provide an epistemic reason, since my reason to have lunch with you does not recommend an action but actually supports a proposition—that I ought to have lunch with you. (It does not support the conclusion that I will have lunch with you, but provides me with a reason for doing so.) Yet a third thesis is that neither practical nor epistemic reasons can be assimilated to the other, but that they are simply distinct kinds of reasons, though sharing many important properties. The account set out here is intended to be independent of these theses concerning the relation between practical and epistemic reasons, or the complicated topic of their interactions; I believe, in fact, that it can be adapted to accommodate a variety of different positions on the topic. For expository convenience, we will begin by focusing on epistemic reasons almost exclusively, simply because the theory is more naturally motivated in this way, and will then turn to practical reasons later on. We will, at various points, be discussing each of these two kinds of reasons individually, and will use the same notation in both cases—relying on context to indicate whether the conclusion B in a default of the form A → B is supposed to represent a supported proposition or a recommended action.

As we have seen, some reasons are better than others; some defaults have greater strength, or higher priority, than others. In order to represent this information, we introduce an ordering relation < on the set of defaults, with δ < δ′ taken to mean that the default δ′ has a higher priority than δ. We assume that this priority ordering on defaults is transitive, so that δ < δ′ and δ′ < δ″ implies δ < δ″, and also irreflexive, so that δ < δ always fails; such an ordering relation is referred to as a strict partial ordering.

The priority relations among defaults can have different sources. In the case of the Tweety Triangle, the priority of the default about penguins over the default about birds has to do with specificity: a penguin is a specific kind of bird, and so information about penguins in particular takes precedence over information about birds in general. But there are also priority relations that have nothing to do with specificity. Reliability is one such source: both the weather channel and the arthritis in my left knee provide reasonably reliable predictions of oncoming precipitation, but the weather channel is more reliable. And authority provides yet another source: national laws typically override state or provincial laws; direct orders override standing orders, and orders from the Colonel override orders from the Major; more recent court decisions have more authority than older decisions.
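The notation just introduced can be mirrored in code. The following is an illustrative sketch, not part of the paper's formal apparatus: defaults are premise–conclusion pairs, the Conclusion function is lifted to sets of defaults, and the strict partial priority ordering is represented as a set of pairs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Default:
    """A default rule A -> B: conclude B, by default, once A is established."""
    premise: str      # Premise(delta)
    conclusion: str   # Conclusion(delta)

def conclusion_set(defaults):
    """Lift Conclusion from individual defaults to sets: {Conclusion(d) : d in D}."""
    return {d.conclusion for d in defaults}

# B = Bird(Tweety), F = Fly(Tweety); '~' marks the negation of an atom.
d1 = Default("B", "F")    # birds fly, instantiated for Tweety
d2 = Default("P", "~F")   # penguins do not fly

# The strict partial ordering: (x, y) means x < y, i.e. y has higher priority.
priority = {(d1, d2)}     # the penguin default outranks the bird default

print(sorted(conclusion_set({d1, d2})))   # ['F', '~F']
```

Representing the ordering extensionally as a set of pairs leaves transitivity and irreflexivity as conditions the modeler must supply, just as the text assumes them of <.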

In what follows, we will begin by considering the special case in which all priority relations among defaults are fixed in advance, so that there is no need to consider either the source of these priority relations or the way in which they are established, but only their effect on the conclusions reached through default reasoning. Formally, we define a fixed priority default theory as a structure of the form ⟨W, D, <⟩, where W is, of course, some set of ordinary formulas, D is a set of defaults, and < is a strict partial ordering on D. Such a structure—a body of ordinary information together with an ordered set of defaults—represents the initial data provided to the agent as a basis for its reasoning.

Where ⟨W, D, <⟩ is some ordered default theory, a scenario based on this theory is defined simply as a particular subset S of the set D of defaults contained in the theory. From an intuitive standpoint, a scenario is supposed to represent the set of defaults that have been accepted by the agent, at some stage of the reasoning process, as providing sufficient support for their conclusions. With this notion in hand, we can define the belief set that is generated by a scenario as the set of formulas derivable from its conclusions together with the agent's initial information—that is, where S is a scenario based on ⟨W, D, <⟩, the belief set generated by S is the set of formulas derivable from W ∪ Conclusion(S).

Most research in nonmonotonic logic is motivated by the epistemic interpretation of defaults, and so concentrates on the problem of characterizing the belief sets supported by default theories; the defaults themselves are thought of as rules for extending the beliefs derivable from a set of formulas beyond their classical consequences, and the belief sets they support are often referred to as extensions. We will concentrate here, instead, on scenarios. This reference to a scenario is not accidental: according to the theory developed in this paper, our initial task is to characterize the proper scenarios—those sets of defaults that might ultimately be accepted by an ideal reasoning agent on the basis of the information contained in some ordered default theory. The ideal belief sets can then be defined as those belief sets that are generated by proper scenarios.

The concept of a scenario has a natural interpretation under both the epistemic and the practical readings of default rules. Under the epistemic reading, the agent, in selecting a scenario, can be thought of as choosing the defaults it will use to extend its initial information to a full belief set. Under the practical reading, with defaults understood as providing reasons for actions, a proper scenario can be taken to specify a set of actions that a rational individual might decide to perform; the set of conclusions of the defaults belonging to a scenario S—that is, the set Conclusion(S)—can be taken as the set of actions that the agent has settled upon.

2.2 Binding defaults

Not every default, of course, provides a good reason; indeed, not every default is even applicable in every scenario. The default that birds fly, for example, provides no reason at all for an agent to conclude that Tweety flies unless the agent is already committed to the proposition that Tweety is a bird. If defaults provide reasons, then the binding defaults are supposed to represent those that provide good reasons, in the context of a particular scenario: the set of defaults that an agent might take as providing good reasons depends on the set of defaults it already accepts. The concept of a binding default is defined in terms of three preliminary ideas, which we turn to first—triggering, conflict, and defeat.

The first is easy to describe. The triggered defaults are those that are applicable in a particular scenario—those whose premises follow from the agent's initial information together with the conclusions of the defaults that the agent has already accepted.
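The belief set generated by a scenario can be sketched computationally. One simplifying assumption is mine, not the theory's: since every formula in the paper's examples is a literal, derivability is approximated here by membership, together with the classical principle that an inconsistent set entails everything.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Default:
    premise: str
    conclusion: str

def neg(a):
    """Negate a literal: 'F' <-> '~F'."""
    return a[1:] if a.startswith("~") else "~" + a

def derivable(facts, a):
    """Literal-level stand-in for |-: membership, plus classical explosion."""
    inconsistent = any(neg(f) in facts for f in facts)
    return a in facts or inconsistent

def belief_base(w, scenario):
    """W together with Conclusion(S); its closure is the generated belief set."""
    return set(w) | {d.conclusion for d in scenario}

# Tweety: W = {B}. Accepting the default B -> F commits the agent to F.
d1 = Default("B", "F")
base = belief_base({"B"}, {d1})
print(derivable(base, "F"), derivable(base, "~F"))   # True False
```

A full implementation would replace `derivable` with genuine propositional entailment; nothing in the definitions above depends on the literal-only restriction.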

Definition 1 (Triggered defaults) Where S is a scenario based on the fixed priority default theory ⟨W, D, <⟩, the defaults from D that are triggered in S are those belonging to the set

    Triggered_{W,D,<}(S) = {δ ∈ D : W ∪ Conclusion(S) ⊢ Premise(δ)}.

We will speak of these triggered defaults as providing the agent with reasons for their conclusions. To illustrate, imagine that an agent is provided with the ordered default theory ⟨W, D, <⟩ as its initial information, where W = {B}, D = {δ1, δ2}, and the ordering < is empty; here δ1 and δ2 stand for the defaults B → F and F → W, instances for Tweety of the general defaults that birds fly and that flying animals tend to have wings, with W now standing for the proposition that Tweety has wings. And suppose that the agent has not yet accepted any of the defaults from D, so that its initial scenario is S0 = ∅. Then since Triggered_{W,D,<}(S0) = {δ1}, only δ1 is triggered in this initial scenario, providing the agent with a reason for its conclusion, the proposition F. Now suppose the agent does in fact accept this default, and so moves to the new scenario S1 = {δ1}. In this case, the default δ2 is now triggered as well—Triggered_{W,D,<}(S1) = {δ1, δ2}—providing a reason for the new conclusion W.

This discussion of triggered defaults leads to a terminological question. Suppose, as in our example, that the agent's background theory contains the default B → F, an instance for Tweety of the general default that birds fly, and that the agent is committed to B, the proposition that Tweety is a bird, so that the default is triggered. It seems plain that the agent has a reason to conclude that Tweety flies. But how, exactly, should this reason be reified? Should it be identified with the default B → F itself, or with the proposition B? This question, like many questions concerning reification, is somewhat artificial. Evidently, both the default and the proposition are involved in providing the agent with a reason for concluding that Tweety flies: the default would have no bearing if it were not triggered by some fact, and the fact would be nothing but an incidental feature of the situation if it did not trigger some default. The reason relation could therefore be projected in either direction, toward defaults or propositions, and the choice is largely arbitrary. Nevertheless, when it comes to reification, it seems to correspond most closely to our ordinary usage to reify reasons as propositions. The present paper will therefore be based on an analysis according to which reasons are identified with the premises of triggered defaults, so that it is B, the proposition that Tweety is a bird, that is a reason for concluding that Tweety flies; the defaults themselves are thought of, not as reasons, but as providing certain propositions—their premises—as reasons for their conclusions.

Triggering is a necessary condition that a default must satisfy in order to be classified as binding in a scenario, providing the agent with a reason for its conclusion, but it is not sufficient: even if some default is triggered, two further aspects of the scenario could interfere. First, a default will not be classified as binding in a scenario, even if it happens to be triggered, if it is conflicted—that is, if the agent is already committed to the negation of its conclusion.

Definition 2 (Conflicted defaults) Where S is a scenario based on the fixed priority default theory ⟨W, D, <⟩, the defaults from D that are conflicted in S are those belonging to the set

    Conflicted_{W,D,<}(S) = {δ ∈ D : W ∪ Conclusion(S) ⊢ ¬Conclusion(δ)}.

The intuitive force of this restriction can be illustrated through another standard example, known as the Nixon Diamond.[4]

[4] It is called this because its depiction as an inheritance network has the shape of a diamond.
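Definition 1 can be checked mechanically on the wings example. As in the earlier sketches (repeated here so the block runs on its own), entailment is approximated at the level of literals—an assumption of the illustration, not of the definition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Default:
    premise: str
    conclusion: str

def neg(a):
    return a[1:] if a.startswith("~") else "~" + a

def derivable(facts, a):
    return a in facts or any(neg(f) in facts for f in facts)

def triggered(w, defaults, scenario):
    """Definition 1: defaults whose premises follow from W u Conclusion(S)."""
    facts = set(w) | {d.conclusion for d in scenario}
    return {d for d in defaults if derivable(facts, d.premise)}

# B -> F (birds fly) and F -> Wings (flying animals tend to have wings).
d1 = Default("B", "F")
d2 = Default("F", "Wings")
w, ds = {"B"}, {d1, d2}

print(triggered(w, ds, set()) == {d1})        # True: only d1 fires in S0
print(triggered(w, ds, {d1}) == {d1, d2})     # True: d2 fires once F is accepted
```

The second call shows why triggering is scenario-relative: δ2 becomes applicable only after the agent has accepted δ1 and is thereby committed to F.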

Let Q, R, and P stand for the respective propositions that Nixon is a Quaker, that Nixon is a Republican, and that Nixon is a pacifist, and let δ1 and δ2 represent the defaults Q → P and R → ¬P, instances of the generalizations that Quakers tend to be pacifists and that Republicans tend not to be pacifists. Imagine that the agent's initial information is provided by the theory ⟨W, D, <⟩, where W = {Q, R}, D = {δ1, δ2}, and the ordering < is again empty, and suppose that the agent has not yet accepted either of these two defaults, so that its initial scenario is S0 = ∅. In this situation, we have Triggered_{W,D,<}(S0) = {δ1, δ2}: the default δ1 provides a reason for the conclusion P, and the default δ2 provides a reason for the conclusion ¬P. Although these two defaults support conflicting conclusions, neither of these defaults is itself conflicted in the initial scenario: Conflicted_{W,D,<}(S0) = ∅. The agent must therefore find some way of dealing with the conflicting reasons presented by its epistemic state. Now suppose that the agent decides, on whatever grounds, to settle this conflict by favoring one of the two defaults—say δ1, with the conclusion P—and so moves to the new scenario S1 = {δ1}. From the standpoint of this new scenario, the other default will now be conflicted: Conflicted_{W,D,<}(S1) = {δ2}. In this new scenario, the reason provided by δ2 can no longer be classified as a good reason, since the agent has already settled on a default that provides a reason for a conflicting conclusion.

The second restriction governing the notion of a binding default holds that a default cannot be classified as binding in a scenario, even if it happens to be triggered, if it is defeated there. Although the concept of a defeated default is considerably more difficult to define than that of a conflicted default, the basic idea is simple enough: an agent should not accept a default in the face of a stronger default supporting a conflicting conclusion. Motivated by this idea, it is natural to propose a definition according to which a default is defeated in a scenario if that scenario triggers some stronger default with a conflicting conclusion.

Definition 3 (Defeated defaults: preliminary definition) Where S is a scenario based on the fixed priority default theory ⟨W, D, <⟩, the defaults from D that are defeated in S are those belonging to the set

    Defeated_{W,D,<}(S) = {δ ∈ D : ∃δ′ ∈ Triggered_{W,D,<}(S) such that (1) δ < δ′ and (2) Conclusion(δ′) ⊢ ¬Conclusion(δ)}.

This idea can be illustrated by returning to our original Tweety Triangle, with P, B, and F representing the propositions that Tweety is a penguin, that Tweety is a bird, and that Tweety flies. Let us take δ1 and δ2 as the defaults B → F and P → ¬F, instances of the general rules that birds fly and that penguins do not. Imagine that the agent is provided with the theory ⟨W, D, <⟩ as its initial information, where W = {P, B}, D = {δ1, δ2}, and now δ1 < δ2, since the default about penguins has higher priority than the default about birds. And suppose once again that the agent has not yet accepted either of these two defaults, so that its initial scenario is S0 = ∅. As in the previous Nixon Diamond, we again have Triggered_{W,D,<}(S0) = {δ1, δ2}—the default δ1 provides a reason for concluding F, while the default δ2 provides a reason for concluding ¬F—and we again have Conflicted_{W,D,<}(S0) = ∅. Nevertheless, it does not seem that the agent should be free, in this case, to settle the conflict however it chooses. Since δ2 ∈ Triggered_{W,D,<}(S0), since δ1 < δ2, and since δ2 supports the conflicting conclusion ¬F, it follows from the definition that δ1 is defeated by the stronger default δ2: both (1) δ1 < δ2 and (2) Conclusion(δ2) ⊢ ¬Conclusion(δ1). The preliminary definition thus yields the correct result in the case of the Tweety Triangle, where it seems appropriate to say, on intuitive grounds, that the weaker default is defeated by the stronger. Indeed, it yields correct results in all of the examples to be considered here, and we can safely rely on it as our official definition throughout this paper.
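Definitions 2 and 3 can likewise be checked on the two examples. This sketch repeats the toy definitions from above so that it is self-contained; with atomic conclusions, the entailment Conclusion(δ′) ⊢ ¬Conclusion(δ) reduces to a syntactic comparison of literals, which is a simplification of mine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Default:
    premise: str
    conclusion: str

def neg(a):
    return a[1:] if a.startswith("~") else "~" + a

def derivable(facts, a):
    return a in facts or any(neg(f) in facts for f in facts)

def facts_of(w, scenario):
    return set(w) | {d.conclusion for d in scenario}

def triggered(w, defaults, scenario):
    return {d for d in defaults if derivable(facts_of(w, scenario), d.premise)}

def conflicted(w, defaults, scenario):
    """Definition 2: the scenario already entails the negated conclusion."""
    return {d for d in defaults
            if derivable(facts_of(w, scenario), neg(d.conclusion))}

def defeated(w, defaults, order, scenario):
    """Definition 3 (preliminary): some stronger triggered default conflicts."""
    trig = triggered(w, defaults, scenario)
    return {d for d in defaults
            if any((d, e) in order and e.conclusion == neg(d.conclusion)
                   for e in trig)}

# Nixon Diamond: Q -> P and R -> ~P, with no priorities.
dq, dr = Default("Q", "P"), Default("R", "~P")
print(conflicted({"Q", "R"}, {dq, dr}, set()) == set())   # True: neither conflicted
print(conflicted({"Q", "R"}, {dq, dr}, {dq}) == {dr})     # True: after favoring dq

# Tweety Triangle: B -> F is defeated by the stronger P -> ~F.
d1, d2 = Default("B", "F"), Default("P", "~F")
print(defeated({"P", "B"}, {d1, d2}, {(d1, d2)}, set()) == {d1})   # True
```

Note the asymmetry the output exhibits: in the Nixon Diamond nothing is defeated and conflict arises only after a choice, while in the Tweety Triangle the priority ordering defeats δ1 already in the empty scenario.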

The preliminary definition of defeat is not uniformly accurate, however, and leads to incorrect results in certain more complicated cases; those readers interested in the problems involved in formulating a proper definition can find a more extensive discussion in my (2007), as well as the papers cited there.

With the concept of defeat in place, we can define the set of defaults that are classified as binding in a particular scenario quite simply, as those that are triggered in that scenario but neither conflicted nor defeated.

Definition 4 (Binding defaults) Where S is a scenario based on the fixed priority default theory ⟨W, D, <⟩, the defaults from D that are binding in S are those belonging to the set

    Binding_{W,D,<}(S) = {δ ∈ D : δ ∈ Triggered_{W,D,<}(S), δ ∉ Conflicted_{W,D,<}(S), δ ∉ Defeated_{W,D,<}(S)}.

2.3 Reasoning with proper scenarios

Our goal, we recall, is to characterize the proper scenarios—those sets of defaults that an ideally rational agent might come to accept based on the initial information contained in some default theory. I offer two answers to this question: a preliminary answer and a final answer. The preliminary answer begins with the observation that, since the binding defaults are supposed to represent the good reasons in the context of a particular scenario, it is natural to isolate the concept of a stable scenario, one containing all and only the defaults that are binding in that very context. An agent that has accepted a set of defaults forming a stable scenario is in an enviable position: such an agent has already accepted exactly those defaults that it recognizes as providing good reasons, and so has no incentive either to abandon any of the defaults it has already accepted or to accept any others. Formally, where S is a scenario based on ⟨W, D, <⟩, we can say that S is a stable scenario just in case S = Binding_{W,D,<}(S). The preliminary answer—that the proper scenarios are simply the stable scenarios—can then be solidified into a preliminary definition.

Definition 5 (Proper scenarios: preliminary definition) Let S be a scenario based on the ordered default theory ⟨W, D, <⟩. Then S is a proper scenario based on ⟨W, D, <⟩ just in case S = Binding_{W,D,<}(S).

Can we, then, simply identify the proper scenarios with the stable scenarios? In the vast range of ordinary cases, including all of those to be considered in this paper, we can indeed. Unfortunately, however, the final answer is that there are also certain aberrant theories which allow stable scenarios that cannot really be classified as proper—that is, as scenarios that an ideal reasoner would accept. Since these aberrant cases do not concern us here, we will again rely on the preliminary definition as our official definition throughout this paper; those readers who are interested in a correct definition can turn to my (2007) for a discussion.

The concept of a proper scenario can be illustrated by returning to the Tweety Triangle. As the reader can verify, the unique proper scenario based on this default theory is S1 = {δ2}, where δ2 is the default P → ¬F, so that Conclusion(S1) = {¬F}. In this case, then, the account proposed here associates with the default theory a unique proper scenario supporting the intuitively correct conclusion, that Tweety cannot fly. In other cases, however, this account—like many others in nonmonotonic reasoning—defines a relation between default theories and their proper scenarios that may seem anomalous from a more conventional logical perspective: certain default theories may be associated with multiple proper scenarios, and others may be associated with no proper scenarios at all.
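The fixed-point condition S = Binding(S) can be verified by brute-force search over subsets of D—feasible only for tiny theories, and offered purely as an illustration under the same literal-entailment simplification as the earlier sketches (repeated here so the block runs on its own).

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Default:
    premise: str
    conclusion: str

def neg(a):
    return a[1:] if a.startswith("~") else "~" + a

def derivable(facts, a):
    return a in facts or any(neg(f) in facts for f in facts)

def facts_of(w, s):
    return set(w) | {d.conclusion for d in s}

def triggered(w, ds, s):
    return {d for d in ds if derivable(facts_of(w, s), d.premise)}

def conflicted(w, ds, s):
    return {d for d in ds if derivable(facts_of(w, s), neg(d.conclusion))}

def defeated(w, ds, order, s):
    trig = triggered(w, ds, s)
    return {d for d in ds
            if any((d, e) in order and e.conclusion == neg(d.conclusion)
                   for e in trig)}

def binding(w, ds, order, s):
    """Definition 4: triggered but neither conflicted nor defeated."""
    return triggered(w, ds, s) - conflicted(w, ds, s) - defeated(w, ds, order, s)

def proper_scenarios(w, ds, order):
    """Definition 5 (preliminary): the stable scenarios S = Binding(S)."""
    return [set(c)
            for n in range(len(ds) + 1)
            for c in combinations(ds, n)
            if set(c) == binding(w, ds, order, set(c))]

# Tweety Triangle: the unique proper scenario is {P -> ~F}.
d1, d2 = Default("B", "F"), Default("P", "~F")
print(proper_scenarios({"P", "B"}, [d1, d2], {(d1, d2)}) == [{d2}])   # True
```

The search confirms the example in the text: neither the empty scenario nor {δ1} is stable (δ1 is defeated, so it can never be binding), and {δ2} is the unique fixed point.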

The canonical example of a default theory with more than one proper scenario is the Nixon Diamond, which has two: both S1 = {δ1} and S2 = {δ2}, where δ1 is the default Q → P and δ2 is R → ¬P, so that Conclusion(S1) = {P} and Conclusion(S2) = {¬P}. In the case of the Nixon Diamond, then, what is the agent supposed to conclude: is Nixon a pacifist or not? More generally, when an ordered default theory allows more than one proper scenario, how should we define its consequences? The question is vexed, and has not been adequately addressed even in the literature on nonmonotonic reasoning. I do not have space to explore the matter in detail here, but will simply describe three options.

One option is to interpret the different proper scenarios associated with a default theory simply as different equilibrium states that an ideal reasoner might arrive at on the basis of its initial information. This option—now generally described as the credulous, or choice, option—is highly nonstandard from a theoretical perspective.5 It involves viewing the task of a default logic, not as guiding the reasoning agent to a unique set of appropriate conclusions, but as characterizing different, possibly conflicting conclusion sets as rational outcomes based on the initial information. The agent could then be expected to select, arbitrarily, a particular one of these scenarios and endorse the conclusions supported by it. In the case of the Nixon Diamond, for example, the agent could arrive, arbitrarily, either at the scenario S1 or at the scenario S2, appropriately endorsing either the conclusion that Nixon is a pacifist, or else the conclusion that he is not. On this interpretation, default logic could be seen as analogous to other fields, such as game theory, that appeal to multiple equilibrium states in their characterization of rationality. And regardless of its theoretical pedigree, it seems clear that the credulous option is frequently employed in our everyday reasoning: given conflicting defeasible rules, we often simply do adopt some internally coherent point of view in which these conflicts are resolved in some particular way, regardless of the fact that there are other coherent points of view in which the conflicts are resolved in different ways.

5. This reasoning strategy was first labelled as “credulous” by Touretzky et al. (1987); it had earlier been characterized as “brave” by McDermott (1982), and as the “choice” option by Makinson (1994).

A second option is to suppose that each formula supported by some proper scenario must be given some weight. We might, for example, take B(A) to mean that there is good reason to believe the statement A. In the case of the Nixon Diamond, the agent could then be expected to endorse both B(P) and B(¬P)—since each of P and ¬P is supported by some proper scenario—thus concluding that there is good reason to believe that Nixon is a pacifist, and also good reason to believe that he is not. This general approach is particularly attractive when defaults are provided with a practical, rather than an epistemic, interpretation, so that the default A → B is taken to mean that A provides a reason for performing the action indicated by B. In that case, it is natural to appeal to the deontic operator ◯, representing what the agent ought to do, wrapped around the conclusions supported by the various proper scenarios: we could then expect the reasoning agent to endorse both ◯(A) and ◯(¬A), thereby facing a normative, but not a logical, conflict.6 The resulting logic generalizes that of van Fraassen (1973), and leads, I think, to an attractive deontic logic.7

6. The interpretation of van Fraassen’s account within default logic was first established in my (1994a).

7. A defense of the overall approach can be found in my (2003).
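As a rough illustration of how the options just described operate, the sketch below (the same simplified, literal-based model as before; all names are my own) enumerates the Nixon Diamond's two stable scenarios, then forms both the union of their conclusion sets—everything supported by some scenario—and their intersection—what every scenario supports:

```python
from itertools import combinations

# Toy model of the Nixon Diamond (literal-based entailment; my encoding).
def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

W = {"Q", "R"}                                    # Quaker, Republican
defaults = {"d1": ("Q", "P"), "d2": ("R", "~P")}  # no priority between them

def binding(S):
    fs = W | {defaults[d][1] for d in S}
    triggered = {d for d, (prem, _) in defaults.items() if prem in fs}
    return {d for d in triggered if neg(defaults[d][1]) not in fs}

proper = [set(S) for r in range(3)
          for S in combinations(defaults, r) if binding(set(S)) == set(S)]
conclusions = [{defaults[d][1] for d in S} for S in proper]
print(conclusions)                  # [{'P'}, {'~P'}]

# Supported by SOME proper scenario, versus supported by EVERY one:
some = set().union(*conclusions)
every = set.intersection(*conclusions)
print(sorted(some), every)          # ['P', '~P'] set()
```

The union corresponds to endorsing both B(P) and B(¬P); the intersection is empty, which anticipates how a consequence relation quantifying over every proper scenario behaves on this theory.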

A third option is to suppose that the agent should endorse a conclusion just in case it is supported by every proper scenario based on the original default theory. This option is now generally described as skeptical.8 In the case of the Nixon Diamond, for example, the agent would then conclude neither that Nixon is a pacifist nor that he is not, since neither P nor ¬P is supported by both proper scenarios. The skeptical option is by far the most popular, and is sometimes considered to be the only coherent form of reasoning in the presence of multiple proper scenarios.9

8. The label is again due to Touretzky et al. (1987); the same reasoning strategy had earlier been described as “cautious” by McDermott (1982).

9. An argument that the skeptical approach is the only coherent option for epistemic default reasoning is presented by Pollock (1995, pp. 62–63). I have recently argued that the issue is more complex; some of my doubts can be found in my (2002).

3. Elaborating the theory

The central thesis of this paper is that reasons can usefully be thought of as provided by defaults, but so far I have done little more than provide an account of default reasoning. I now want to support my central thesis by showing how this account can be elaborated to deal with two issues involved in developing a more robust theory of reasons. The first concerns the way in which priorities among defaults are established. The second concerns the variety of defeat: the notion of defeat defined here captures only one form, generally called “rebutting” defeat, in which a stronger default defeats a weaker default by contradicting its conclusion, but there is at least one other form, generally called “undercutting” defeat—related to the discussion of “exclusionary” reasons from the literature on practical reasoning—in which one default defeats another, not by contradicting its conclusion, but by undermining its capacity to provide a reason.

3.1 Variable priority default theories

The definition

We have concentrated on fixed priority default theories, in which the priorities among default rules are fixed in advance. There are cases, however, in which these priorities must themselves be established through defeasible reasoning. This is particularly true in well-structured normative domains, such as the law, where the resolution of a dispute often involves an explicit decision concerning the priority relations among the different rules bearing on some issue. Indeed, some of the most important things we reason about, defeasibly, are the priorities among the very defaults that guide our defeasible reasoning. What we want, then, is an account in which, although our reasoning is guided by a set of defaults subject to a priority ordering, the priorities among those defaults can themselves be established through the same process of reasoning they serve to guide. The idea may sound complicated—perhaps forbiddingly so, perhaps even circular—but in fact, the present theory can be extended, through the adaptation of known techniques, to provide such an account in four simple steps.10 Our first task, then, is to show how this kind of reasoning can be accommodated within the general framework presented here.

10. The basic idea underlying the techniques to be described here was first introduced by Gordon (1993); it was then refined and developed by a number of people, most notably Brewka (1994, 1996) as well as Prakken and Sartor (1995, 1996).

The first step is to enrich our object language with the resources needed to enable formal reasoning about priorities among defaults: a new set of individual constants, to be interpreted as names of defaults, together with the relation symbol ≺, representing priority among defaults. For the sake of simplicity, we will assume that each of these new constants has the form dX, for some subscript X, and that each such constant refers to the default δX.

To illustrate this notation, suppose that δ1 is the default A → B, that δ2 is the default C → ¬B, and that δ3 is the default ⊤ → d1 ≺ d2.

Then what δ3 says is that, by default, δ2 has a higher priority than δ1. In that case, when both of these defaults are triggered—that is, when both A and C hold—the default δ1 will generally be defeated by δ2, since the two defaults have conflicting conclusions. Of course, since δ3 is itself a default, the information it provides concerning the priority between δ1 and δ2 is defeasible as well, and can likewise be overridden.

The second step is to shift our attention from structures of the form ⟨W, D, <⟩—that is, from fixed priority default theories—to structures of the form ⟨W, D⟩, containing a set W of ordinary formulas as well as a set D of defaults, but no priority relation on the defaults that is fixed in advance; these new structures are known as variable priority default theories. Both W and D may, of course, contain initial information concerning priority relations among defaults. We stipulate as part of this definition that the set W of ordinary formulas from a variable priority default theory must contain each possible instance of the irreflexivity and transitivity schemata

¬(d ≺ d),
(d ≺ d′ ∧ d′ ≺ d″) ⊃ d ≺ d″,

in which the variables are replaced with names of the defaults belonging to D.

The intuitive picture is this. In searching for a proper scenario, based on the initial information contained in some default theory, the agent arrives at some scenario S. The conclusions of the defaults belonging to this scenario, taken together with the ordinary information from the background theory, entail conclusions about various aspects of the world, including priority relations among the agent's own defaults. Because conclusions about the priorities among defaults might themselves vary depending on which defaults the agent accepts, these conclusions about priorities, like any other conclusions, are arrived at through defeasible reasoning.

The third step, then, is to lift the priority ordering that is implicit in the agent's scenario to an explicit ordering that can be used in our metalinguistic reasoning. This is done in the simplest possible way. If S is some scenario based on the default theory ⟨W, D⟩, we now take the statement δ <S δ′ to mean that the default δ′ has a higher priority than δ according to the scenario S, where this notion is defined as follows:

δ <S δ′ if and only if W ∪ Conclusion(S) ⊢ d ≺ d′,

where, in keeping with our convention, d and d′ refer to the defaults δ and δ′. What this means is that δ′ has a higher priority than δ according to the scenario S just in case the conclusions of the defaults belonging to this scenario, taken together with the ordinary information from W, entail the formula d ≺ d′. Because the set W contains all instances of the transitivity and irreflexivity schemata, the derived priority relation <S is guaranteed to be a strict partial ordering.

The fourth and final step is to define the notion of a proper scenario for variable priority default theories. This is accomplished by leveraging our previous definition, which sets out the conditions under which a scenario S counts as a proper scenario for a fixed priority theory ⟨W, D, <⟩, where < can be any strict partial ordering over the defaults. Using this previous definition, we can now stipulate that S is a proper scenario for the variable priority theory ⟨W, D⟩ just in case S is a proper scenario for the particular fixed priority theory ⟨W, D, <S⟩, where W and D are carried over from the variable priority theory, and <S is the priority relation derived from the scenario S itself.

Definition 6 (Proper scenarios: variable priority default theories) Let ⟨W, D⟩ be a variable priority default theory and S a scenario. Then S is a proper scenario based on ⟨W, D⟩ if and only if S is a proper scenario based on the fixed priority default theory ⟨W, D, <S⟩.

The idea, again, is that the agent arrives at some scenario S, derives from it a priority ordering, and then asks whether these derived priority relations can be used to justify the agent in accepting exactly the original scenario S; if so, the scenario is proper.
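The lifting step can be sketched in a few lines. Assuming priority conclusions are encoded as pairs (d, e), read as "d ≺ e", the derived ordering is just the closure of the asserted pairs under the transitivity schema (the default names d1, d2, d4 here are illustrative only):

```python
# Priority conclusions are encoded as pairs (d, e), read as "d ≺ e".

def derived_order(prec_facts):
    """Close the asserted ≺ facts under the transitivity schema."""
    order = set(prec_facts)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(order):
            for (c, e) in list(order):
                if b == c and (a, e) not in order:
                    order.add((a, e))
                    changed = True
    return order

# A scenario whose conclusions assert d1 ≺ d2 and d2 ≺ d4:
lt = derived_order({("d1", "d2"), ("d2", "d4")})
print(("d1", "d4") in lt)          # True: added by transitivity
print(all(a != b for a, b in lt))  # True: irreflexivity holds, so this
                                   # derived ordering is a strict partial order
```

A full implementation would also have to reject scenarios whose conclusions violate irreflexivity, mirroring the role of the schema instances included in W.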

Some examples

The approach described here can be illustrated through a variant of the Nixon Diamond. Imagine, to begin with, the practical perspective of a young Nixon trying to decide whether or not to become a pacifist. Nixon is confronted with the default theory ⟨W, D⟩, with W containing the formulas Q and R, and with D containing only δ1 and δ2, where δ1 is the default Q → P and δ2 is R → ¬P. Given our practical perspective, these two defaults should be interpreted as providing practical reasons: δ1 tells Nixon that, as a Quaker, he ought to become a pacifist, while δ2 tells him that, as a Republican, he ought not to become a pacifist. The two defaults favor conflicting courses of action, and so Nixon is faced with a practical dilemma: his initial theory yields two proper scenarios, S1 = {δ1} and S2 = {δ2}, supporting the conflicting conclusions P and ¬P. Nothing in his initial theory tells Nixon how to resolve the conflict between these two defaults.

Now imagine that Nixon decides to consult with certain authorities to help him resolve his dilemma. Let us suppose that he discusses the problem first with a respected member of his Friends Meeting House, who tells him that δ1 should take priority over δ2, but that he also talks with a Republican party official, who tells him just the opposite. The advice of these two authorities can be encoded by supplementing the set D with the new defaults δ3 and δ4, where δ3 is ⊤ → d2 ≺ d1 and δ4 is ⊤ → d1 ≺ d2. Given our practical perspective, these two defaults should be interpreted, not as evidence, but as advice: the default δ3, for example, should be interpreted, not as providing Nixon with evidence that δ1 actually has more weight than δ2, but as suggesting that he should place more weight on δ1 in his deliberations.11

11. The idea that our reasoning itself determines what reasons we ought to place more weight on is discussed and defended in Schroeder (2007).

Since his chosen authorities disagree, Nixon has not yet resolved his practical dilemma, which is now represented by the two proper scenarios S3 = {δ1, δ3} and S4 = {δ2, δ4}—supporting the statements d2 ≺ d1 and P, in the first case, and d1 ≺ d2 and ¬P, in the second—again favoring conflicting courses of action. What is especially interesting about this theory, however, is not that it yields two proper scenarios, favoring two courses of action, but that it yields only two proper scenarios. From an intuitive standpoint, the default δ1 conflicts with δ2, while the default δ3 conflicts with δ4. Since there are two conflicts, each of which can go either way, why are there not four proper scenarios, favoring four courses of action? The answer is that the two conflicts are not independent: any resolution of the conflict between δ3 and δ4 commits Nixon to a particular priority ordering between δ1 and δ2, which then determines the resolution of that conflict as well. It would be incorrect, for example, for Nixon to accept δ3, the default according to which more weight is to be given to δ1 than to δ2, but then to accept δ2 anyway, and choose not to become a pacifist. This intuition is captured formally. Consider the scenario S5 = {δ2, δ3}—the scenario containing the combination of δ3 and δ2, and supporting the statements d2 ≺ d1 and ¬P. This scenario leads to a derived priority ordering according to which δ2 <S5 δ1. If we supplement our current variable priority theory with this derived priority ordering to get the fixed priority theory ⟨W, D, <S5⟩, we can see that the default δ2 is defeated in the context of S5 by the default δ1, a stronger default with a conflicting conclusion. The scenario S5, then, is not a proper scenario based on the fixed priority theory ⟨W, D, <S5⟩, and so it cannot, according to our definition, be a proper scenario based on our original variable priority theory either.

Finally, let us imagine that Nixon, still faced with the conflict, continues to seek further counsel. Perhaps he now goes to his father, who tells him that the church elder's advice is to be preferred to that of the party official. This information can be represented by adding the default δ5 to the set D of defaults, where δ5 is ⊤ → d4 ≺ d3.
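Under the same simplifying assumptions as before (literal-based entailment; a priority atom ("prec", d, e) read as "d ≺ e", with the reversed atom treated as its negation; no transitive closure, since none is needed in so small a theory), Definition 6 can be checked mechanically on the Nixon-with-advice theory. The sketch below—my encoding, not the paper's—confirms that exactly two scenarios are proper, not four:

```python
from itertools import combinations

TOP = "TRUE"   # stands in for the trivial premise ⊤

def neg(c):
    if isinstance(c, tuple):              # ("prec", d, e) vs ("prec", e, d)
        return ("prec", c[2], c[1])
    return c[1:] if c.startswith("~") else "~" + c

W = {"Q", "R", TOP}
defaults = {
    "d1": ("Q", "P"),                     # Quaker: become a pacifist
    "d2": ("R", "~P"),                    # Republican: do not
    "d3": (TOP, ("prec", "d2", "d1")),    # elder's advice: d1 outranks d2
    "d4": (TOP, ("prec", "d1", "d2")),    # official's advice: d2 outranks d1
}

def binding(S):
    fs = W | {defaults[d][1] for d in S}
    trig = {d for d, (prem, _) in defaults.items() if prem in fs}
    conf = {d for d in trig if neg(defaults[d][1]) in fs}
    defe = {d for d in trig
            if any(("prec", d, e) in fs   # d <_S e, derived from S itself
                   and defaults[e][1] == neg(defaults[d][1])
                   for e in trig)}
    return trig - conf - defe

proper = [set(S) for r in range(5)
          for S in combinations(defaults, r) if binding(set(S)) == set(S)]
print([sorted(S) for S in proper])  # [['d1', 'd3'], ['d2', 'd4']] — two, not four
```

Mixed scenarios such as {δ2, δ3} are filtered out exactly as in the text: the ordering they induce turns one of their own members into a defeated default.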

With this new default, the theory yields the single proper scenario {δ1, δ3, δ5}—supporting the conclusions d4 ≺ d3, d2 ≺ d1, and P—according to which Nixon should favor δ3 over δ4, and so favor δ1 over δ2, and so become a pacifist. Nixon has at last resolved his conflict.

This modification of the Nixon Diamond should, I hope, serve to illustrate the workings of variable priority default theories, but perhaps not their usefulness, due to the whimsical nature of the example. For a more realistic example, we consider a situation from commercial law, originally described by Thomas Gordon (1993), but simplified and adapted for present purposes.12

12. Other realistic examples are developed by Prakken and Sartor (1996), who consider the issues surrounding a conflict between European Community and Italian law concerning the marketing of a particular product under the label of “pasta,” and also a conflict between separate Italian laws concerning the renovation of historic buildings.

We are to imagine that both Smith and Jones have individually lent money to Miller for the purchase of an oil tanker, which serves as collateral for both loans, so that both lenders have a security interest in the ship—a right to recoup the loan value from the sale of the ship in case of default. Miller has, we imagine, defaulted on both the loans. In that case, the ship will be sold, and the practical question is which of the two lenders has an initial claim on the proceeds. The specific legal issue that arises is whether Smith's security interest in the ship has been perfected—roughly, whether it can be protected against security interests that might be held by others, such as that of Jones.

As it happens, there are two relevant bodies of regulation governing the situation: the Uniform Commercial Code (UCC), according to which a security interest can be perfected by taking possession of the collateral, and the Ship Mortgage Act (SMA), according to which a security interest in a ship can be perfected only by filing certain financial documents. In the case we are to imagine, Smith is in possession of the ship but has failed to file the necessary documents, so the two statutes yield conflicting results: according to UCC, Smith's security interest in the ship is perfected, but according to SMA, it is not.

To aid comprehension, we use mnemonic symbols in our formalization. Let Perfected, Possession, and Documents represent the respective propositions that Smith's security interest in the ship is perfected, that Smith possesses the ship, and that Smith has filed the appropriate financial documents. The relevant portions of UCC and SMA can then be represented as the defaults δUCC and δSMA, where δUCC is Possession → Perfected and δSMA is ¬Documents → ¬Perfected.

There are, of course, various legal principles for resolving conflicts of this kind. One is the principle of Lex Posterior, which gives precedence to the more recent of two regulations; here, UCC supplies the more recent regulation, since SMA dates from 1920, while UCC was drafted and then enacted by all the various states (except Louisiana) in the period between 1940 and 1964. Another is the principle of Lex Superior, which gives precedence to the regulation supported by the higher authority; here, SMA derives from the higher authority, since it is federal law, while UCC is state law. The principles of Lex Posterior and Lex Superior can be captured by defaults instantiating the generalizations

Later(d, d′) → d ≺ d′,
Federal(d) ∧ State(d′) → d′ ≺ d,

telling us, quite generally, that later regulations are to be preferred to earlier regulations, and that federal regulations are to be preferred over those issued by states. The particular instances of these two principles of concern to us here can be represented as δLP and δLS, where δLP is Later(dSMA, dUCC) → dSMA ≺ dUCC and δLS is Federal(dSMA) ∧ State(dUCC) → dUCC ≺ dSMA. Given only this information, the conflict remains: according to Lex Posterior, UCC should take precedence over SMA, while according to Lex Superior, SMA should take precedence over UCC. The current situation is thus analogous in structure to the previous Nixon example.

But let us suppose that, for whatever reason—custom, legislation, a court decision—one of these two principles for conflict resolution has gained favor over the other: perhaps Lex Posterior is now favored over Lex Superior.

In that case, we can take δLSLP as the default ⊤ → dLS ≺ dLP, an instance of a general principle telling us that Lex Posterior is to be favored over Lex Superior.

Now let ⟨W, D⟩ be the variable priority default theory in which D contains these five defaults—δUCC, δSMA, δLP, δLS, and δLSLP—and in which W contains the facts of the situation—Possession, ¬Documents, Later(dSMA, dUCC), Federal(dSMA), and State(dUCC)—telling us that Smith has possession of the ship but did not file documents, that UCC is later than SMA, and that SMA is federal law while UCC is state law. As the reader can verify, this default theory yields the set S = {δUCC, δLP, δLSLP} as its unique proper scenario—supporting the conclusions dLS ≺ dLP, dSMA ≺ dUCC, and Perfected. According to this scenario, δLP is to be favored over δLS, and δUCC over δSMA, so that δSMA is defeated; we should therefore judge that Smith's security interest in the oil tanker is perfected.

3.2 Threshold default theories

The definition

We have considered, thus far, only one form of defeat—generally called “rebutting” defeat—according to which a default supporting a conclusion is said to be defeated by a stronger default supporting a conflicting conclusion. There is also a second form of defeat, according to which one default supporting a conclusion is thought to be defeated by another default, not because it supports a conflicting conclusion, but because it challenges the connection between the premise and the conclusion of the original default. In the literature on epistemic reasons, this second form of defeat is generally referred to as “undercutting” defeat; the phenomenon, or something very close to it, was first pointed out by John Pollock in (1970).13 The second form of defeat is discussed also in the literature on practical reasoning, where it is considered as part of the general topic of “exclusionary” reasons, first introduced by Joseph Raz in (1975).

13. Some of the early formalisms for knowledge representation in artificial intelligence, such as the NETL system described in Fahlman (1979), allowed for a form of undercutting defeat, but the idea quickly evaporated in the artificial intelligence literature, and did not appear in this field again until it was reintroduced by writers explicitly reflecting on Pollock's work.

The distinction between these two forms of defeat can be illustrated by a standard example. Suppose an object in front of me looks red. Then it is reasonable for me to conclude that it is red, through an application of a general default according to which things that look red tend to be red. But let us imagine two drugs. The effect of Drug #1 is to make red things look blue and blue things look red. If the object looks red but I have taken Drug #1, then it is natural to appeal to another default, stronger than the original, according to which things that look red once I have taken Drug #1 tend to be blue. This new default would then defeat the original in the sense we have considered so far, by providing a stronger reason for a conflicting conclusion. The effect of Drug #2, on the other hand, is simply to make everything look red. If the object looks red but I have taken Drug #2, then it seems again that I am no longer entitled to the conclusion that the object is red. In this case, however, the original default is not defeated by a stronger reason for a conflicting conclusion—there is no stronger reason for concluding that the object is not red. Instead, it is as if the original default is itself undercut, and no longer provides any reason for its conclusion.
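The Drug #1 half of the example—rebutting defeat—fits the machinery already defined; a toy sketch (hypothetical encoding, with the "blue" conclusion collapsed to ¬Red for simplicity) is below. The Drug #2 case deliberately cannot be captured this way: there is no stronger conflicting default to add, which is just the point of the discussion.

```python
# Hypothetical encoding of the Drug #1 case (rebutting defeat only).
def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

W = {"LooksRed", "Drug1"}
defaults = {
    "dR": ("LooksRed", "Red"),   # things that look red tend to be red
    "dD": ("Drug1", "~Red"),     # after Drug #1, looks-red things are not red
}
stronger = {("dR", "dD")}        # the Drug #1 default is the stronger one

def binding(S):
    fs = W | {defaults[d][1] for d in S}
    trig = {d for d, (prem, _) in defaults.items() if prem in fs}
    return {d for d in trig
            if neg(defaults[d][1]) not in fs
            and not any((d, e) in stronger
                        and defaults[e][1] == neg(defaults[d][1])
                        for e in trig)}

print(binding({"dD"}) == {"dD"})  # True: dR is rebutted by the stronger dD
print(binding({"dR"}) == {"dR"})  # False: accepting dR alone is unstable
```

Handling Drug #2 requires the threshold machinery developed next, in which a default is disabled by pushing its priority below a threshold rather than by outweighing it.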

Raz provides a number of examples to motivate the concept of an exclusionary reason, but we consider here only the representative case of Colin, who must decide whether to send his son to a private school. We are to imagine that there are various reasons pro and con. On one hand, the school will provide an excellent education for Colin's son, as well as an opportunity to meet a more varied group of friends; on the other hand, the tuition is high, and Colin is concerned that a decision to send his own son to a private school might serve to undermine his support for public education more generally. But in addition to these ordinary reasons pro and con, Raz asks us to imagine also that Colin has promised his wife that, in all decisions regarding the education of his son, he will consider only those reasons that bear directly on his son's interests. And this promise, Raz believes, cannot properly be viewed as just another one of the ordinary reasons bearing on the decision, like the fact that the school provides a good education. It must be viewed, instead, as a reason of an entirely different sort—a “second-order” reason for excluding from consideration all those ordinary, or “first-order,” reasons that do not bear on the interests of Colin's son.

An exclusionary reason, on this interpretation, is nothing but an undercutting defeater in the practical domain. Just as, once I have taken Drug #2, I should disregard the default according to which things that look red tend to be red, Colin's promise should lead him to disregard those defaults that do not bear on the interests of his son.

Now, how can this phenomenon of undercutting, or exclusionary, defeat be accounted for? The standard practice is to postulate undercutting defeat as a separate, and primitive, form of defeat, to be analyzed alongside the concept of rebutting defeat; this practice is followed, most notably, by Pollock.14 What I would like to suggest, instead, is that, once we have introduced the ability to reason about priorities among defaults, the phenomenon of undercutting defeat can be analyzed, more naturally, simply as a special case of priority adjustment.15 The basic idea is straightforward. We posit some particular value—say, τ, the threshold weight—low enough that we feel safe in considering only those defaults whose priority lies above this threshold. A default is then undercut when our reasoning leads us to conclude that its priority falls below the threshold value.

14. See Pollock (1995) and the papers cited there; a survey of the work on this topic in nonmonotonic reasoning can be found in Prakken and Vreeswijk (2002).

15. The general idea developed here conforms to a proposal put forth by Dancy (2004), who introduces the concepts of “intensifiers” and “attenuators” as considerations that strengthen or weaken the force of reasons, and who then thinks of a “disabler” as a consideration that attenuates the strength of a reason more or less completely—or in our current vocabulary, attenuates the default providing the reason so thoroughly that it falls below threshold. The idea that what is typically referred to as “undercutting” is best analyzed as an extreme case of attenuation in the strength of reasons is likewise noted by Schroeder (2005), who refers to this idea as the “undercutting hypothesis.”

In order to implement this idea, we must supplement our variable priority default theories with some additional information, and then modify our definition of a triggered default. We begin by introducing the concept of a threshold default theory as a variable priority default theory ⟨W, D⟩ in which the set D of defaults and the set W of background facts are subject to two further constraints. In formulating these constraints, we refer to defaults of the sort considered thus far as ordinary defaults. The first constraint on threshold default theories is that, for each ordinary default δX belonging to D, there is also a special threshold default δ*X, of the form ⊤ → t ≺ dX. Just as each ordinary default δX is represented in the object language by a term dX, each threshold default δ*X is represented by a term d*X; and we take t as the linguistic constant corresponding to the threshold value τ. We let D* represent the entire set of threshold defaults from D. The role of the threshold default δ*X is to tell us that, by default, the ordinary default δX lies above threshold—that is, that τ < δX. The second constraint we impose is that the set W of ordinary information must contain, in addition to the formulas mentioned earlier guaranteeing a strict partial ordering, each instance of the schema d* ≺ d, in which d* ranges over arbitrary threshold defaults and d ranges over ordinary defaults; the point of these formulas is to guarantee that threshold defaults are uniformly lower in priority than ordinary defaults.

Now, in order to guarantee that the right defaults, and only the right defaults, guide our reasoning, we revise our earlier definition of triggering to include the additional requirement that a default cannot be triggered unless it lies above the threshold value—that is, we now require that a default δ cannot belong to Triggered_{W,D,<}(S) unless W ∪ Conclusion(S) ⊢ Premise(δ) and, in addition, τ < δ. With this revision alone, however, no defaults at all would now be triggered, since it has not yet been established that any defaults actually lie above threshold; the role of the threshold defaults is precisely to establish this, by default.

Since our plan is to modify the definition of triggering so that only defaults lying above threshold can be triggered, we must first guarantee that the threshold defaults themselves are triggered. This cannot be done by appealing to a further set of defaults whose role is to place the threshold defaults above threshold, since that idea leads to a regress: we would then have to guarantee that the defaults belonging to this further set lie above threshold, and so on. Instead, we halt the regress at the first step simply by stipulating that each threshold default is triggered. For threshold defaults, then, the requirement that a triggered default must lie above threshold is suspended: the defaults triggered in the context of a scenario S are defined as including the entire set D* of threshold defaults, as well as those ordinary defaults lying above threshold whose premises follow from W together with the information contained in that scenario.

Definition 7 (Triggered defaults: revised definition) Where S is a scenario based on the fixed priority default theory ⟨W, D, <⟩, the defaults from D that are triggered in S are those belonging to the set

Triggered_{W,D,<}(S) = D* ∪ {δ ∈ D : W ∪ Conclusion(S) ⊢ t ≺ δ and W ∪ Conclusion(S) ⊢ Premise(δ)}.

The first of these two constraints can be provided with a sort of pragmatic justification. Only defaults that are triggered can affect our reasoning, and so there would be little point in including an ordinary default within D unless we could assume, at least by default, that it lies above threshold, where it can be triggered to guide our reasoning. There is, however, a complication. We want to be able to assume that the ordinary defaults lie above threshold, but we cannot require that they do, since we need to allow for the possibility that they might be undercut—that our reasoning might lead us to conclude that some particular default should fall below threshold. We therefore cannot simply postulate, as a hard fact, that the ordinary defaults lie above threshold; instead, we rely on threshold defaults, whose role is to place the ordinary defaults above threshold by default. Since each threshold default has as its premise the trivial statement ⊤, which follows from the information contained in any scenario whatsoever, and since each threshold default is triggered by stipulation, the net effect is that there will always be some reason for concluding that any ordinary default should lie above threshold. This result is, in a sense, simply a restatement of our pragmatic idea that there is no point even in registering a default unless we have reason to believe that it lies above threshold.

The second constraint governing threshold default theories—that the set W of ordinary formulas must contain each instance of the schema d* ≺ d, in which d* ranges over threshold defaults and d ranges over ordinary defaults—can now be understood against the background of this pragmatic idea. The point of these formulas is to guarantee that threshold defaults are uniformly lower in priority than ordinary defaults: we stipulate that each ordinary default has a higher priority than any threshold default. What this means is that a threshold default can never be undercut—we can never conclude that such a default cannot be triggered because it falls below threshold. On the other hand, we do want to allow for the possibility that threshold defaults might be defeated by stronger reasons for placing ordinary defaults below threshold.
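The revised triggering condition can be illustrated with a small sketch. This is a toy model, not Horty's formalism: classical entailment is replaced by simple set membership, and all names and the string encoding of threshold statements such as "t&lt;d1" are illustrative assumptions.

```python
# Toy sketch of Definition 7 (revised triggering). Entailment is simplified
# to membership: a statement "follows" from W ∪ Conclusion(S) iff it occurs
# there literally. Names and encodings are illustrative, not the paper's own.

from dataclasses import dataclass

@dataclass(frozen=True)
class Default:
    name: str          # e.g. "d1"
    premise: str       # e.g. "L"; "TRUE" encodes the trivial premise
    conclusion: str    # e.g. "R", or a threshold statement like "t<d1"
    threshold: bool = False   # True for the threshold defaults in D*

def entails(info, formula):
    # Stand-in for classical entailment.
    return formula == "TRUE" or formula in info

def triggered(W, defaults, scenario):
    """Threshold defaults are triggered by stipulation; an ordinary default
    is triggered when its threshold statement and its premise both follow
    from W together with the conclusions of the scenario."""
    info = set(W) | {d.conclusion for d in scenario}
    return {d for d in defaults
            if d.threshold
            or (entails(info, "t<" + d.name) and entails(info, d.premise))}

# Two ordinary defaults and their threshold defaults, as in the text.
d1 = Default("d1", "L", "R")
d2 = Default("d2", "L&D1", "-R")
d1s = Default("d1*", "TRUE", "t<d1", threshold=True)
d2s = Default("d2*", "TRUE", "t<d2", threshold=True)

W = {"L"}
S = {d1s, d2s}   # a scenario containing just the threshold defaults
out = triggered(W, {d1, d2, d1s, d2s}, S)
```

Here d1 is triggered, since its threshold statement and its premise L both follow, while d2 is not, its conjunctive premise failing the toy membership test; both threshold defaults are triggered by stipulation.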

Some examples

To illustrate these ideas, we begin by describing a particular threshold default theory ⟨W, D⟩ representing the epistemic example sketched earlier. Let L, R, D1, and D2 stand for the respective propositions that the object before me looks red, that the object is red, that I have taken Drug #1, and that I have taken Drug #2. We then take δ1, δ2, δ3, and δ4 as the ordinary defaults belonging to D, where δ1 is L → R, δ2 is L ∧ D1 → ¬R, δ3 is ⊤ → d1 ≺ d2, and δ4 is D2 → d1 ≺ t. According to the first of these defaults, it is reasonable to conclude that the object is red if it looks red; according to the second, that the object is not red if it looks red once I have taken Drug #1; according to the third, that the second default is stronger than the first; and according to the fourth, that the first default falls below threshold if I have taken Drug #2. By the first constraint, the set D must also contain four threshold defaults, one corresponding to each of the ordinary defaults—a default δi* of the form ⊤ → t ≺ di, for each i from 1 through 4. And by the second constraint, the set W of ordinary formulas must contain sixteen statements of the form di* ≺ dj, where i and j range independently from 1 through 4.

Now let us first suppose that W contains, in addition, the formulas L and D1, representing the situation in which the object looks red but I have taken Drug #1. In this case, the threshold default theory yields S1 = {δ1*, δ2*, δ3*, δ4*, δ2, δ3} as its unique proper scenario, supporting the statements t ≺ di for each i from 1 through 4, along with d1 ≺ d2 and ¬R. The default δ4 is not triggered, since its premise is not entailed in the context of this scenario. The default δ1 is defeated in the context of the scenario, in our standard sense of rebutting defeat, by δ2, whose greater strength is established by δ3. The scenario S1 thus allows us to conclude that each of the ordinary defaults lies above threshold, that δ2 has a higher priority than δ1, and that the object is not red.

Next, let us suppose instead that W contains the formulas L and D2, representing the situation in which the object looks red but I have taken Drug #2. The theory now yields S2 = {δ2*, δ3*, δ4*, δ3, δ4} as its unique proper scenario. Here, the threshold default δ1*, which would otherwise have placed δ1 above threshold, is defeated by δ4—a stronger default, whose greater strength is established by the formula d1* ≺ d4 from W—supporting the conflicting conclusion that δ1 should fall below threshold. We are forced to conclude, then, that δ1 must lie below threshold. Neither δ1 nor δ2 is triggered: δ2 because its premise is not entailed in the context of the scenario, and δ1 because it fails to satisfy our new requirement that triggered defaults must lie above threshold. As a result, no conclusions can be reached about the actual color of the object.

It is useful to consider this situation from the standpoint of our earlier analysis of a reason as the premise of a triggered default. Since δ1 falls below threshold, so that it cannot itself be triggered, this default no longer provides any reason for concluding that the object is red. It is not as if δ1 is defeated—there is no stronger triggered default supporting the contrary conclusion—but instead, δ1 provides no reason of its own, no reason at all.

It is possible, of course, for situations to be considerably more complicated than this: ordinary defeaters and undercutters can themselves be defeated or undercut, and both defeaters and undercutters of defeaters and undercutters can likewise be defeated or undercut. We cannot explore the ramifications among these possibilities in any detail here, but it is worth considering a situation that is just one degree more complex, involving the two drugs. Suppose that, as before, the object looks red and I have taken Drug #2, but that I have also taken Drug #3, an antidote to Drug #2 that neutralizes its effects. How should this situation be represented?
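The bookkeeping behind these two cases can be sketched in a few lines. This is a deliberately simplified, one-pass reading with hypothetical names: priority formulas are left implicit, and a default is pushed below threshold whenever some default whose premises all hold concludes as much (ordinary defaults outrank threshold defaults, so the push always succeeds here).

```python
# One-pass threshold bookkeeping for the two drug cases (toy encoding):
# a default named n drops below threshold when some default, all of whose
# premises are among the facts, concludes "n<t".

def above_threshold(facts, ordinary):
    below = {concl[:-2]                     # strip the "<t" suffix
             for prem, concl in ordinary.values()
             if concl.endswith("<t") and prem <= facts}
    return set(ordinary) - below

# d1: L -> R;  d2: L & D1 -> -R;  d3: TRUE -> d1<d2;  d4: D2 -> d1<t
D = {"d1": ({"L"}, "R"),
     "d2": ({"L", "D1"}, "-R"),
     "d3": (set(), "d1<d2"),
     "d4": ({"D2"}, "d1<t")}

case1 = above_threshold({"L", "D1"}, D)  # Drug #1: all four stay above
case2 = above_threshold({"L", "D2"}, D)  # Drug #2: d1 drops below threshold
```

In the first case all four ordinary defaults remain above threshold, mirroring S1; in the second, δ4 is active and d1 alone falls below, mirroring S2.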

Formally, the effect can be captured through a new default. Letting D3 stand for the proposition that I have taken Drug #3, we can take δ5 as the default D3 → d4 ≺ t, according to which the previous default δ4 should fall below threshold once I have taken this new drug. From an intuitive standpoint, what Drug #3 gives me is, in fact, a reason for disregarding the reason provided by Drug #2 for disregarding the reason provided by my senses for concluding that the object is red. Its associated default therefore provides a reason for disregarding a reason for disregarding a reason—an undercutter undercutter.

When the ordinary default δ5 is added to D, the set D must also contain δ5*, the corresponding threshold default ⊤ → t ≺ d5, and W must contain the formulas di* ≺ dj for i and j from 1 through 5, along with the statements L, D2, and D3—representing the situation in which the object looks red and I have taken both Drugs #2 and #3. The unique proper scenario for the corresponding threshold default theory is S3 = {δ1*, δ2*, δ3*, δ5*, δ1, δ3, δ5}, supporting the statements t ≺ di where i is 1, 2, 3, or 5, as well as d4 ≺ t, d1 ≺ d2, and R. Here the new default δ5, forcing δ4 below threshold, undercuts any reason for disregarding δ1, which can now emerge from below threshold to support the conclusion that the object is red. The scenario S3 therefore allows us to conclude that all the ordinary defaults except δ4 lie above threshold, that δ2 has a higher priority than δ1, and that the object is red.

Turning to the practical domain, we can illustrate the use of undercutting defeat—or exclusionary reasons—by reconsidering the case of Colin, who is deliberating about sending his son to a private school. Let S, E, and H represent the propositions that Colin's son is sent to the school, that the education is excellent, and that the tuition is high, and take δ1 as the default E → S and δ2 as H → ¬S. These two defaults should be interpreted as telling Colin that the excellent education favors sending his son to the private school, while the high tuition favors not doing so; simplifying somewhat, let us suppose that these are the only two reasons bearing directly on the issue. Now if Colin were to consider only the defaults δ1 and δ2 in this situation, it is easy to see that he would be faced with a conflict—incomparable reasons recommending different actions.

But there is also Colin's promise to his wife. Because of this promise, Colin's deliberation is constrained by an exclusionary reason telling him that, in this decision, he should disregard any considerations that do not center around his son's welfare. Since the consideration provided by the high tuition does not concern the welfare of Colin's son, this is a reason requiring him to remove δ2 from consideration. Formally, the effect can be captured through the generalization

¬Welfare(d) → d ≺ t,

and the particular instance of this general default that is of concern here is

¬Welfare(d2) → d2 ≺ t,

which we can take as the default δ3. Let ⟨W, D⟩ be the threshold default theory in which D contains δ1, δ2, and δ3 as its ordinary defaults, as well as the corresponding threshold defaults δi*, of the form ⊤ → t ≺ di, for i from 1 through 3, while W contains E, H, and ¬Welfare(d2), together with statements of the form di* ≺ dj for i and j from 1 through 3. By our threshold constraints, the ordinary default δ3 is stronger than the threshold default δ2*, which it defeats, placing δ2 below threshold. The theory thus yields S1 = {δ1*, δ3*, δ1, δ3} as its unique proper scenario, supporting the statements t ≺ d1, t ≺ d3, and d2 ≺ t, along with S. According to S1, then, what Colin should conclude is that δ1 and δ3 lie above threshold, that δ2 falls below threshold, and that he should send his son to the private school. Since δ2 falls below threshold, it does not provide any reason to the contrary; the exclusionary reason undercuts the consideration provided by the high tuition, allowing δ1 to stand unopposed.

Just as in the epistemic case, where undercutters can be undercut, exclusionary reasons can themselves be excluded.
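The undercutter-undercutter pattern of the Drug #3 case can be sketched by iterating the same toy bookkeeping to a fixpoint, so that a default pushed below threshold loses its own power to push others below. Again, names and encoding are illustrative assumptions, and the naive iteration is adequate only for small examples like this one.

```python
def below_threshold(facts, ordinary):
    # Naive fixpoint iteration: a default pushes its target below threshold
    # only if it itself still lies above threshold and its premises hold.
    below = set()
    while True:
        nxt = {concl[:-2]
               for name, (prem, concl) in ordinary.items()
               if name not in below and concl.endswith("<t") and prem <= facts}
        if nxt == below:
            return below
        below = nxt

D = {"d1": ({"L"}, "R"),        # looks red -> red
     "d2": ({"L", "D1"}, "-R"),
     "d3": (set(), "d1<d2"),
     "d4": ({"D2"}, "d1<t"),    # Drug #2 undercuts d1
     "d5": ({"D3"}, "d4<t")}    # Drug #3 undercuts the undercutter d4

drug2 = below_threshold({"L", "D2"}, D)         # d1 falls below threshold
drug23 = below_threshold({"L", "D2", "D3"}, D)  # only d4 falls; d1 re-emerges
```

With Drug #2 alone, d1 falls below threshold; adding Drug #3 pushes d4 itself below, cancelling its effect, so that d1 re-emerges to support the conclusion that the object is red.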

Perhaps Colin has promised his mistress to disregard any promises made to his wife; and in addition to promising his mistress to disregard any promises made to his wife, Colin might easily have promised his wife to disregard any promises made to his mistress. His entire life could be a tangled mess. Just as undercutters are "second-order" reasons, undercutter undercutters are "third-order" reasons, and so on.

4.

In this final section, I now want to apply the account of reasons developed in this paper to an argument recently advanced by Jonathan Dancy in support of particularism in moral theory.

It is often thought that our ability to settle on appropriate actions, or at least to justify these actions as appropriate, involves, at some level, an appeal to general principles, so that our everyday evaluative reasoning is typically based on principles. Let us refer to any view along these lines as a form of generalism. Standing in contrast to generalism is the position known as particularism, which tends to downplay the importance of general principles and to emphasize instead a kind of receptivity to the features of particular situations. It is useful, first, to distinguish different versions of this particularist perspective.

A moderate particularism holds only that a significant part of our practical evaluation is not grounded in an appeal to general principles. Phrased in this way, the moderate view is compatible with generalism: the two ideas can be combined in a view according to which our everyday evaluative reasoning is typically based on principles, but which also admits the possibility of situations in which the accepted stock of principles yields incorrect results, and must then be emended by a process of reasoning that does not itself involve appeal to further principles. Moderate particularism is an irenic doctrine, insisting only that there may be special situations of this kind.

It is hard to argue with moderate particularism. It frequently happens, I believe, that the application of a set of principles in some particular situation yields what appears to be the wrong result from an intuitive standpoint, and that these principles must themselves be revised. But how can we conclude, in a case like this, that the original result was wrong? It cannot be through an application of our principles, since it is these principles that generated the result we now disagree with. And what guides us as we revise our principles? Some writers have suggested that the evaluation must appeal to a more fundamental set of principles, somehow lying in the background; what I disagree with is exactly this suggestion—occasionally found in the literature on practical reasoning, and even in the epistemic literature—that reasons form a kind of hierarchy. As long as we admit that any finite set of principles can lead, at times, to errors in practical evaluation, the appeal to still further principles for the diagnosis and repair of these errors must eventually result in either regress or circularity. Perhaps there are some domains, such as the law, where this kind of stratification is the ideal—as when we rely on moral principles to correct errors in a legal system—but even there I suspect the ideal is seldom realized, and it is hard to see why we should assume any sort of neat stratification in less regimented areas.

Moderate particularism is, then, a sensible view, and one that is consistent with the account developed here. Certainly the view presented here qualifies as a form of generalism, since it is based, ultimately, on a system of principles intended to capture defeasible generalizations; at the same time, it suggests a promising research agenda centered around questions concerning the update and maintenance of complex systems of principles. Many of these questions would have analogs in legal theory, and in the study of normative systems more generally.

In addition to moderate particularism, however, there is also a more radical position that might be called extreme particularism, which holds that principles have no role to play in practical evaluation at all. Since it denies the legitimacy of any appeal to principles whatsoever, extreme particularism is flatly inconsistent with generalism. Nevertheless, it is just this radical position that has been advanced by Dancy, who argues that extreme particularism follows from a more general holism about reasons.

4.1 The argument

Holism is the idea that the force of reasons is variable, so that what counts as a reason for an action or conclusion in one setting need not support the same action or conclusion in another. In Dancy's view, reasons, in both the practical and theoretical domains, are capable of shifting polarity: a consideration that functions as a reason for some action or conclusion in one context need not serve as a reason for the same action or conclusion in another, and might even be a reason against it.17 Dancy presents a variety of cases intended to establish this possibility, and it is useful to focus on a concrete example; for the moment, we consider only two representatives, which follow a common pattern.18

Beginning with the practical domain, imagine that I have borrowed a book from you. In most situations, the fact that I have borrowed a book from you would give me a reason to return it to you. But suppose I discover that the book I borrowed is one you had previously stolen from the library. In that context, according to Dancy, the fact that I borrowed the book from you no longer functions as a reason to return it to you; I no longer have any reason to return it to you at all. In order to illustrate the same phenomenon in the epistemic domain, Dancy turns to a standard example, and one that we have already considered. In most situations, the fact that an object looks red gives me a reason for thinking that it is red. But suppose I know that I have taken a drug that makes red things look blue and blue things look red. In this new context, the fact that an object looks red no longer functions as a reason for thinking that it is red; it is, instead, a reason for thinking that the object is blue, and so not red.19

Let us grant, then, that examples like these are sufficient to establish a general holism of reasons. How is this holistic view supposed to lead to extreme particularism, a thoroughgoing rejection of any role for general principles in practical evaluation? To answer this question, we must consider the nature of the generalizations involved in these principles. Consider, first of all, the principle that lying is wrong. What could this mean? We might understand this principle, first, as a universally quantified material conditional according to which any action that involves lying is thereby wrong, regardless of the circumstances. Very few people today would find such an unyielding conception to be tenable, though some writers have endorsed a view along these lines: when various other reasons are taken into account, an action that involves lying might actually be judged as the right thing to do overall. It is, of course, possible to weaken the proposal by viewing the simple principle that lying is wrong as a sort of abbreviation for a much more complicated rule—still a material conditional, but one laden with exception clauses covering all the various circumstances in which it may be acceptable to lie: saving a life, avoiding a pointless insult, and so on. The problem with this suggestion, according to Dancy, is that no satisfactory rule of this form has ever been displayed, and it is legitimate to doubt our ability even to formulate such fully-qualified rules with any degree of confidence, let alone learn these rules or reason with them. Alternatively, then, we might take the principle that lying is wrong to express, not that all acts that involve lying are wrong, but simply that lying is always a feature that counts against an action—a "wrong-making" feature.

17. Similar arguments have been advanced by others; a particularly clear presentation can be found in Little (2000).
18. The argument is set out with minor variations in a number of publications, beginning with Dancy's (1983), but I focus here on the versions found in his (1993), (2000), (2001), and particularly the canonical (2004).
19. This example can be found in Dancy (1993, p. 60); it is discussed also in (2000, p. 132) and in Section 3 of his (2001).

This is the view suggested by some of W. D. Ross's remarks about prima facie duties. On this view, the fact that an action involves lying would always count as some reason for taking it to be wrong—that is, it always makes the same negative contribution—even though that action might actually be judged as the right thing to do overall; the reason provided by lying may not be decisive, and it often does not succeed in making the action wrong overall. Principles, so understood, would specify considerations that play an invariant role in our deliberation, always favoring one particular side. Moral principles, however we conceive of them, seem to be all in the business of specifying features as general reasons; their function would be to articulate general reasons for or against actions or conclusions.

But now, suppose reason holism is correct, so that a consideration favoring an outcome in one situation might not favor the same outcome in another. In that case, there would be no general reasons, carrying the same force regardless of context; any principle telling us that a reason plays some particular role in our deliberation would have to be incorrect, since there would always be some context in which that reason plays a different role. If the business of principles is to specify features as general reasons, there is simply nothing for them to specify. This is Dancy's conclusion—as he writes, "a principle-based approach to ethics is inconsistent with the holism of reasons."20 A couple of paragraphs later, he presses the idea that a robust generalism would require principles specifying features whose role is invariant, rather than shifting here and there—telling us, for example, that mendacity is always a wrong-making feature wherever it occurs.21 Dancy admits that there may actually be a few factors whose role in normative evaluation is not sensitive to context, such as the "causing of gratuitous pain on unwilling victims," but he tends to downplay the theoretical significance of allowing isolated exceptions like these; in his view, holism is a general phenomenon that applies to both practical and epistemic reasons, not just an occasional occurrence.

4.2 Evaluating the argument

Dancy's argument depends, not so much on the conclusions to be arrived at in particular situations, but on the statements that are or are not to be classified as reasons in those situations. This is an issue that can usefully be explored here from the standpoint of our analysis of a reason as the premise of a triggered default. Let us start with the epistemic example that Dancy offers to establish reason holism, which we now reformulate, somewhat simplified for convenience; the simplification is that we now ignore Drug #2, which no longer concerns us. Suppose again that L, R, and D1 stand for the respective propositions that the object before me looks red, that it is red, and that I have taken Drug #1—the drug which makes everything look red—and suppose that the default δ1 is L → R, that δ2 is L ∧ D1 → ¬R, and that δ3 is ⊤ → d1 ≺ d2, according to which δ2 has a higher priority than δ1. This example was represented earlier as a threshold default theory ⟨W, D⟩, in which D must again contain, in addition to these three ordinary defaults, the corresponding threshold defaults δi*, of the form ⊤ → t ≺ di, for each i from 1 through 3, and in which W contains L and D1, as well as statements of the form di* ≺ dj for i and j from 1 through 3. It is easy to see that this default theory yields S1 = {δ1*, δ2*, δ3*, δ2, δ3} as its unique proper scenario, supporting the three statements of the form t ≺ di, for i from 1 through 3, as well as ¬R and d1 ≺ d2. The theory thus allows us to conclude that each of the ordinary defaults lies above threshold, that δ2 has a higher priority than δ1, and that the object is not red.

20. Dancy (2004, p. 76).
21. Dancy (2004, p. 77).

On our current representation, the statement L is still classified as a reason for R: it is the premise of the default δ1, which is triggered in the context of the scenario S1. The default δ1, which provides L as a reason for the conclusion R, is defeated by the stronger default δ2, which provides L ∧ D1 as a reason for the conflicting conclusion ¬R. Since, on our analysis, a reason is some proposition favoring a conclusion—the premise of a triggered default—it is natural to say that a reason is defeated whenever the default that provides that proposition as a reason is itself defeated. Our initial representation thus illustrates the possibility of an alternative interpretation of one of the key examples of the sort that Dancy relies on to establish reason holism: the situation in which L and D1 both hold—the object looks red but I have taken the drug—provides a context in which, although L is normally a reason for R, that reason is overwhelmed by a stronger reason for believing the opposite; looking red still functions as a reason, though a reason that is defeated.

Dancy's preferred interpretation, by contrast, is one according to which, once I have taken the drug, the status of looking red as a reason is itself undermined: what is a reason in the normal run of cases is not a reason in this case, and it loses its status as a reason entirely. Indeed, the interpretation just sketched is one that Dancy entertains but immediately rejects:

It is not as if it is some reason for me to believe that there is something red before me, but simply a defeated reason ... it just does seem that the proposition that the object looks red is no longer any reason at all to believe that the object is red. (Dancy 2004, p. 74)

And in the particular case at hand, I would have to agree: once I have taken the drug, it does not seem as if looking red still provides some reason for concluding that the object is red.

Dancy's preferred interpretation can likewise be accommodated within the framework set out here, by appeal to our treatment of undercutting defeat. Let δ4 be the new default D1 → d1 ≺ t, according to which the previous default δ1—which established looking red as a reason for being red—now falls below threshold in any situation in which I have taken the drug. Then the appropriate results can be reached by supplementing the previous theory with this new default, satisfying the necessary threshold constraints: formally, let ⟨W, D⟩ be the threshold default theory in which D contains the ordinary defaults δ1, δ2, δ3, and now δ4, as well as the corresponding threshold defaults δi*, of the form ⊤ → t ≺ di, for i from 1 through 4, and in which W contains L and D1, together with statements of the form di* ≺ dj for i and j from 1 through 4. Since its premise D1 is already included with the ordinary information provided by W, and so entailed by any scenario, the default δ4 is triggered; and of course, as an ordinary default, it lies above threshold. This new default theory now yields S2 = {δ2*, δ3*, δ4*, δ2, δ3, δ4} as its unique proper scenario, supporting three statements of the form t ≺ di, for i from 2 through 4, as well as d1 ≺ d2, d1 ≺ t, and ¬R. The theory thus allows us to conclude that the ordinary defaults δ2, δ3, and δ4 lie above threshold, that δ2 has a higher priority than δ1, that δ1 in fact falls below threshold, and that the object is not red.

In each case of this kind, then, we must ask: does the consideration fail to function as a reason at all, or does it indeed function as a reason, though a reason that is defeated? The answer often requires a delicate judgment, and at times different interpretations of the same situation are possible. I mention this possibility here only to show that there are different ways of interpreting those situations in which some familiar consideration appears not to play its usual role as a forceful or persuasive reason; this is a point we will return to when we consider Dancy's practical example.
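The contrast between the two interpretations—L as a defeated reason versus L as no reason at all—can be sketched as follows. This is a toy encoding with hypothetical names: a proposition counts as a "reason" just when it is the premise of a default that survives the simplified threshold test and has its premise among the facts.

```python
# Toy classification of reasons under the rebutting vs. undercutting readings.
def reasons(facts, ordinary):
    below = {concl[:-2] for prem, concl in ordinary.values()
             if concl.endswith("<t") and prem <= facts}
    return {name for name, (prem, concl) in ordinary.items()
            if name not in below and prem <= facts
            and not concl.endswith("<t")}   # ignore the bookkeeping defaults

D_defeat = {"d1": ({"L"}, "R"),
            "d2": ({"L", "D1"}, "-R"),
            "d3": (set(), "d1<d2")}               # d1 merely outweighed by d2
D_undercut = dict(D_defeat, d4=({"D1"}, "d1<t"))  # Dancy's reading: d1 undercut

facts = {"L", "D1"}
defeated = reasons(facts, D_defeat)    # d1 present: L is a (defeated) reason
undercut = reasons(facts, D_undercut)  # d1 absent: L is no reason at all
```

On the first encoding d1 remains among the triggered defaults, so L still counts as a reason, though a defeated one; on the second, d4 pushes d1 below threshold, and L drops out of the reasons altogether.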

Given this new representation of the situation, the default δ1, which had previously provided L as a reason for R, is no longer triggered. Why not? Because a default is triggered in the context of a scenario if its premise is entailed by that scenario and, in addition, it lies above threshold. And while the premise of δ1 is indeed entailed, the default now falls below threshold, and therefore, it is not classified as a reason. What is a reason in the usual range of cases is, in this situation, not just a defeated reason, but no longer any reason at all—exactly as Dancy claims.

Reason holism, then, is supported by the present framework: what counts as a reason in one situation need not be a reason in another. Yet this framework is itself based on a system of general principles. There are certain defaults that can be thought of as instances of defeasible generalizations, and what is and what is not a reason in any particular situation is determined by these defaults. The framework thus provides a counterinstance to the idea that reason holism entails extreme particularism.

With this observation in place, we can now turn to an evaluation of Dancy's argument. The argument hinges on the idea that extreme particularism, the rejection of general principles, follows from reason holism; while in the framework developed here, reason holism is consistent with general principles. Dancy's argument is, therefore, invalid—or, strictly speaking, faulty. How should it be diagnosed? Why is it that Dancy considers holism to be inconsistent with any appeal to general principles?

The disagreement has its roots, I believe, in our different views concerning the meaning of general principles. What Dancy suggests is that these principles should be taken to specify considerations that play an invariant role as reasons. On this understanding, the practical principle that lying is wrong is supposed to mean that lying always provides some reason for evaluating an action less favorably—that it always favors a negative evaluation of an action, even in those cases in which our overall evaluation is favorable. And the epistemic principle that looking red indicates being red is supposed to mean that looking red always provides some reason for concluding that an object is red, even in those cases in which our overall conclusion is that the object is not red. And presumably, given this understanding of general principles, it follows at once—it is obvious—that reason holism must lead to their rejection. If holism is correct, allowing for the possibility that a consideration that counts as a reason in one situation might not be classified as a reason in another, then any principle that identifies some consideration as playing an invariant role as a reason has to be mistaken, and cannot properly guide our actions.

In developing a framework within which holism is consistent with general principles, I mean to rely on a different understanding of these principles—not as specifying invariant reasons, but as codifying the defaults that guide our reasoning. On this view, the general principle that lying is wrong cannot mean that every action that involves lying is wrong; the principles guiding practical reasoning cannot usefully be taken to express universally generalized material conditionals. The principle should be taken to mean simply that lying is wrong by default—that, unless certain complicating factors interfere, once we learn that an action involves lying, we ought to judge that it is wrong. And the principle that looking red indicates being red should likewise be taken to mean that this relation holds by default—that, to a first approximation, once we see that an object looks red, we ought to conclude that it is red, again in the absence of complicating factors. Given this understanding of general principles, reason holism is consistent with this form of generalism; the two ideas, holism and generalism, can be accommodated together.

philosophers' imprint — vol. 7, no. 3 (april 2007)
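The triggering-and-threshold behavior just described can be sketched in a few lines of code. This is my own toy illustration, not machinery from the paper: the field name `out_when`, the function `reasons`, and the string atoms are all invented for the sketch, and the actual framework treats thresholds through prioritized defaults rather than a hard-coded undercutting fact.

```python
# Toy sketch (mine, not the paper's formalism) of the drug example:
# delta-1 provides L ("looks red") as a reason for R ("is red") unless
# the drug fact D1 holds, in which case delta-1 falls below threshold
# and is no longer triggered -- no longer a reason at all.
from typing import FrozenSet, NamedTuple

class Default(NamedTuple):
    name: str
    premise: FrozenSet[str]
    conclusion: str
    out_when: str = ""  # a fact that drops this default below threshold

# delta-1: looking red is a reason for red -- unless D1 (the drug) holds
d1 = Default("d1", frozenset({"L"}), "R", out_when="D1")
# delta-2: looking red while drugged is a reason for not-red
d2 = Default("d2", frozenset({"L", "D1"}), "not-R")

def reasons(facts, defaults):
    """Defaults triggered in `facts`: premise satisfied, and the default
    not pushed below threshold by its undercutting fact."""
    return [d for d in defaults
            if d.premise <= facts
            and (not d.out_when or d.out_when not in facts)]

print([d.name for d in reasons({"L"}, [d1, d2])])        # ['d1']
print([d.name for d in reasons({"L", "D1"}, [d1, d2])])  # ['d2']
```

In the first call, L alone is a reason for R; in the second, once D1 is learned, δ1 is not merely outweighed but fails to be triggered, so only δ2 supplies a reason.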

This explication of general principles as statements codifying defaults involves a frank and explicit appeal to ceteris paribus restrictions. Ceteris paribus statements like these are sometimes criticized as vacuous (the joke is that an explication of this kind reduces the substantive claim that lying is wrong to a less substantive claim along the lines of "Lying is wrong except when it isn't"). It is also argued that these statements, since they do not move beyond the level of a first approximation, tell us nothing about the cases in which complicating factors are present. Both of these criticisms have, I believe, some merit when lodged against many of the usual appeals to ceteris paribus generalizations. The criticisms have no merit in the present case, however. What the present account has to offer, in support of the explication of general principles as defaults, is a concrete theory of default reasoning—a theory that is precise, and consistent with reason holism—which specifies in detail, not only what the complicating factors might be (when a default is conflicted, or undercut) but also how the various issues introduced by these complicating factors are to be resolved, and the appropriate set of conclusions to be arrived at in each case. On this account, a first approximation to the meaning of a general principle, which tells us what to conclude in the absence of complicating factors, is nothing but a high level summary of the workings of the defaults in the underlying default theory.

4.3 Loose ends

I want to conclude by considering two remaining issues presented by Dancy's examples. First, we have seen how the present account allows us to understand the idea that L, looking red, which normally counts as a reason for concluding that an object is red, fails to count as a reason in the situation in which I have taken the drug—not just a defeated reason, but no longer a reason at all. Dancy claims more than this, however. What he claims is that L not only fails to count as a reason for R but is actually, in this situation, a reason for ¬R, supporting the conclusion that the object is not red. This further claim is not yet supported by the present framework, and I do not believe that it should be. It can be, and often is, supported by our intuitions; but I believe that it requires a different sort of explanation—roughly, pragmatic rather than semantic. What the current theory tells us is that my real reason for concluding that the object is not red is, not just L, the simple statement that it looks red, but the conjunction L ∧ D1—that it looks red and I have taken the drug, which makes red things look blue and blue things look red. In this situation, L ∧ D1 is the antecedent of the triggered default δ2, which supports the conclusion ¬R. Although I actually mean to cite the proposition L ∧ D1 as my reason for concluding that the object is not red, I would argue that there is a pragmatic principle at work that allows me to cite this conjunctive reason by mentioning only a salient part; and different propositions can be salient at different times and in different ways.

To see why it sometimes seems natural to think of L as a reason for ¬R, consider a slight variation on Dancy's example. Suppose you and I both know that I have taken the drug—the statement D1 was already part of our background knowledge—but that my eyes are closed, so that I cannot see the object before me. Now imagine that I open my eyes, see that the object looks red, and so announce that it must be blue, and so not red. And suppose you ask me why I concluded this. It would then be appropriate for me to respond, "Because I realized that it looks red," apparently citing the simple statement L, looking red, as a reason for concluding that the object is not red. But we cannot just ignore how we speak: the ease of my response in this situation certainly suggests that there is a sense in which looking red can, in some cases, be taken as a reason for not being red. In the present situation, since I know I have taken the drug—we both know D1—what makes L salient is simply that it provides new information, and a pragmatic principle of conversational economy allows me to cite the conjunctive reason by mentioning only its new, salient conjunct.

This pragmatic analysis can be supported by considering another variant of the situation. Suppose, this time, that I can see that the object looks red, but I am not yet aware that I have taken the drug. Since the object looks red, my initial judgment would be that it is red. But imagine that I am now told about the drug, and so revise my judgment to conclude that the object is blue, and so not red. In this case, if I were again asked why I changed my mind, it would no longer be appropriate to cite the fact that the object looks red. After all, I knew that it looked red before, when I judged it to be red. Instead, I would now be more likely to say, "Because I learned that I took the drug," again citing my conjunctive reason by mentioning only the new information.

This response conforms to the pattern of the previous variant, and so suggests a common pragmatic principle at work. In both cases, the actual reason for my conclusion that the object is not red is L ∧ D1 which, taken together with our shared background knowledge, entails this proposition. In the first case, I arrive at my final state of information by adding L to the set {D1} of background information; in the second, by adding D1 to the background set {L}. In both cases, my final state of information can be represented by the set {L, D1}, which entails L ∧ D1, and I am able to cite L ∧ D1 itself as a reason for my conclusion by mentioning only a salient part—the new information: L in the first case, and D1 in the second.

The second matter that I want to discuss is the practical example offered by Dancy to illustrate reason holism, which we have yet to consider in any detail: having borrowed a book from you, I then learn that this book is one you have previously stolen from the library. In this situation, my borrowing the book, which generally functions as a reason for returning it to you, no longer counts as such a reason; what is generally a reason is not a reason in this particular situation. What I ought to do, on this reading, is return the book, not to you, but to the library. In his discussion of this example, just as in his discussion of the epistemic case, Dancy explicitly considers and rejects the possibility that the consideration—my having borrowed the book from you—is still functioning as a reason, but simply as a defeated reason:

It isn't that I have some reason to return it to you and more reason to put it back in the library. I have no reason at all to return it to you. (Dancy 1993, p. 60)

But here, in contrast to the epistemic case, I cannot agree that Dancy's reading of the situation provides the unique, or even the most natural, coherent interpretation of the situation as described; I do not think the matter is so straightforward. I would, in fact, be inclined toward a very different interpretation myself. I would tend to feel that my having borrowed the book from you gives me a personal obligation to return it to you, and that it is simply not my business to supervise your relations with the library—that is someone else's job. On my interpretation, your having stolen the book from the library gives me no particular reason to do anything at all (though it might well count as a reason supporting certain actions by the library and the police), so that what I ought to do is return the book to you.

This autobiographical detail—how I personally would view the matter—carries little importance except that it suggests a different, coherent interpretation of the situation Dancy describes. The situation is especially interesting, I believe, precisely because it does serve so naturally to illustrate what I consider to be a pervasive phenomenon: situations described at this level of generality often allow for a number of different, equally coherent interpretations. I will simply list five different interpretations of the situation as Dancy describes it, arranged in a sort of spectrum depending on the relative strength given to my reasons for returning the book to you compared to my reasons for returning it to the library.

First, there is my own interpretation: borrowing the book from you gives me a reason for returning it to you, but your having stolen it from the library gives me no reason at all. Second, I can imagine someone who agrees that my borrowing the book gives me a reason for returning it to you, but who also feels that your having stolen it from the library gives me some reason for returning it to the library, and that the reason for returning it to you is stronger, so that, on balance, I ought to return it to you.

Third, I can imagine someone who feels that my borrowing the book gives me a reason for returning it to you, that your having stolen it gives me a reason for returning it to the library, and that these two reasons are, in fact, incomparable—incomparable reasons supporting conflicting actions—so that I am now faced with a dilemma; I would then have to resolve the matter in whatever way I resolve dilemmas, perhaps flipping a coin or seeking further advice. Fourth, I can imagine someone who feels that my borrowing the book gives me a reason for returning it to you, that your having stolen it gives me a reason for returning it to the library, and that the reason for returning it to the library is stronger, so that what I ought to do is return it to the library. And fifth, we have Dancy's preferred interpretation: your having stolen the book both gives me a reason for returning it to the library and undercuts any reason I might otherwise have had for returning it to you, so that what I ought to do is return the book to the library.

I hope the reader will agree that each of these interpretations is indeed coherent—a rational person could adopt any of them. And each of these interpretations can be given a precise representation within the framework developed here. I will forbear from actually displaying these five different representations in the text of this paper—the interested reader can find them in the appendix—but not from drawing the moral that the possibility of these different formalizations suggests: that the formal study of reasons presented here carries benefits analogous to the benefits of formal work in other areas. By providing a precise representation of reasons and their interactions, it allows us to tease apart different interpretations of particular situations that might otherwise escape notice, to suggest new possibilities, and, where disagreement occurs, to localize the source of that disagreement.

Appendix A. The library book example

This appendix presents threshold default theories representing, in order, each of the five interpretations mentioned in the text of Dancy's library book example. Let B, S, Y, and L represent the respective propositions that I borrowed the book from you, that you stole it from the library, that I return the book to you, and that I return it to the library; and suppose that Y and L are inconsistent—I cannot return the book both to you and to the library. Suppose that δ1 is the default B → Y, that δ2 is S → L, that δ3 is ⊤ → d2 ≺ d1, that δ4 is ⊤ → d1 ≺ d2, and that δ5 is S → d1 ≺ t.

Interpretation #1: Let ⟨W, D⟩ be the threshold default theory in which the set W of ordinary information contains B and S, and the set D of defaults contains δ1, where both W and D contain, in addition, the necessary threshold information, supporting the statement t ≺ d1. Since δ1 lies above threshold and its premise is entailed, this default is triggered, providing B as a reason for Y. The unique proper scenario based on this theory is S1 = {δ1∗, δ1}, supporting Y, which is what I ought to do.

Interpretation #2: Let ⟨W, D⟩ be the threshold default theory in which the set W of ordinary information contains B and S, and the set D of defaults contains δ1, δ2, and δ3, where both W and D contain, in addition, the necessary threshold information, supporting the statements t ≺ di for i from 1 to 3. Since both δ1 and δ2 lie above threshold and their premises are entailed, both defaults are triggered, providing B as a reason for Y and S as a reason for L; and δ3 is triggered as well, supporting the statement d2 ≺ d1. Since δ1 is accorded a higher priority than δ2, the reason S for L is defeated by the reason B for Y. The unique proper scenario based on this theory is S2 = {δ1∗, δ2∗, δ3∗, δ3, δ1}, so that what I ought to do is Y.

Interpretation #3: Let ⟨W, D⟩ be the threshold default theory in which the set W of ordinary information contains B and S, and the set D of defaults contains δ1 and δ2 without any specification of priority, where both W and D contain, in addition, the necessary threshold information, supporting the statements t ≺ di where i is 1 or 2. Since δ1 and δ2 lie above threshold and their premises are entailed, both defaults are triggered; but since there is no priority information, neither default is defeated, and so there is a conflict. This theory allows two proper scenarios, S3 = {δ1∗, δ2∗, δ1} and S3′ = {δ1∗, δ2∗, δ2}, where both S3 and S3′ support t ≺ di where i is 1 or 2, but S3 supports Y while S3′ supports L. The two scenarios represent different ways of resolving the conflict.
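Though the appendix develops these theories purely formally, the interpretations can also be cross-checked with a small brute-force search over candidate scenarios. The sketch below is my own crude approximation of threshold default theories, not the paper's official definitions: priorities are folded into numeric `rank` values (standing in for the priority defaults δ3 and δ4), and the undercutting default δ5 is modeled as an `out_when` condition that drops δ1 below threshold when S holds.

```python
# Toy brute-force check of the library book theories (my simplification,
# not Horty's formal definitions). A candidate scenario is "stable" when
# it contains exactly those defaults that are triggered, undefeated by a
# higher-ranked conflicting default, and consistent with the scenario.
from itertools import combinations
from typing import NamedTuple, Optional

class Default(NamedTuple):
    name: str
    premise: str        # "T" would mean trivially satisfied
    conclusion: str     # "Y" = return to you, "L" = return to library
    rank: int = 0       # higher rank = higher priority
    out_when: Optional[str] = None  # fact undercutting this default

CONFLICT = {frozenset({"Y", "L"})}  # cannot return the book to both

def conflicts(a, b):
    return frozenset({a, b}) in CONFLICT

def triggered(d, facts):
    return (d.premise == "T" or d.premise in facts) and \
           not (d.out_when is not None and d.out_when in facts)

def stable(scenario, facts, defaults):
    concs = {d.conclusion for d in scenario}
    for d in defaults:
        defeated = any(triggered(e, facts)
                       and conflicts(e.conclusion, d.conclusion)
                       and e.rank > d.rank for e in defaults)
        consistent = not any(conflicts(d.conclusion, c)
                             for c in concs - {d.conclusion})
        if (triggered(d, facts) and not defeated and consistent) != (d in scenario):
            return False
    return True

def scenarios(facts, defaults):
    return [set(combo)
            for r in range(len(defaults) + 1)
            for combo in combinations(defaults, r)
            if stable(set(combo), facts, defaults)]

facts = {"B", "S"}
d1 = Default("d1", "B", "Y")  # borrowed -> return to you
d2 = Default("d2", "S", "L")  # stolen -> return to library
interpretations = {
    "#1": [d1],                             # S gives me no reason at all
    "#2": [d1._replace(rank=1), d2],        # your reason is stronger
    "#3": [d1, d2],                         # incomparable: a dilemma
    "#4": [d1, d2._replace(rank=1)],        # library reason is stronger
    "#5": [d1._replace(out_when="S"), d2],  # Dancy: S undercuts d1
}
for label, ds in interpretations.items():
    print(label, [sorted(d.conclusion for d in s)
                  for s in scenarios(facts, ds)])
```

Run as written, the search finds a unique scenario supporting Y for #1 and #2, two scenarios (one for Y, one for L) for #3, and a unique scenario supporting L for #4 and #5, matching the spectrum described in the text.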

Interpretation #4: Let ⟨W, D⟩ be the threshold default theory in which the set W of ordinary information contains B and S, and the set D of defaults contains δ1, δ2, and δ4, where both W and D contain, in addition, the necessary threshold information, supporting the statements t ≺ di where i is 1, 2, or 4. Since both δ1 and δ2 lie above threshold and their premises are entailed, both defaults are triggered, providing B as a reason for Y and S as a reason for L; and δ4 is triggered as well, supporting the statement d1 ≺ d2. Since δ2 is now accorded a higher priority than δ1, the reason B for Y is now defeated by the reason S for L. The unique proper scenario based on this theory is S4 = {δ1∗, δ2∗, δ4∗, δ4, δ2}, so that what I ought to do is L.

Interpretation #5: Let ⟨W, D⟩ be the threshold default theory in which the set W of ordinary information contains B and S, and the set D of defaults contains δ1, δ2, and δ5, where both W and D contain, in addition, the necessary threshold information, supporting the statements t ≺ di where i is 2 or 5. Since δ2 lies above threshold and its premise is entailed, the default is triggered, providing S as a reason for L. And δ5 is triggered as well, supporting the statement d1 ≺ t; since δ1 now falls below threshold, it is no longer triggered, and so B no longer counts as a reason for Y. Since there is a reason for L and no conflicting reason at all, L is what I ought to do. The unique proper scenario based on this theory is S5 = {δ2∗, δ5∗, δ5, δ2}.

Acknowledgements

Three people were extraordinarily generous to me during the preparation of this paper. Henry Richardson read an early draft and discussed it with me extensively; many of the key ideas in this paper were formed during those conversations. Jonathan Dancy read the same draft (a long paper by an unknown author that arrived on his desk out of the blue), sent me a remarkably perceptive set of comments, and then invited me to present this material at the Texas particularism conference, where I was able to discuss it with an audience of experts. Finally, Mark Schroeder, who read the early draft as well as the current version, has been helpful beyond all measure during a series of conversations spanning a period of two years; his insight and enthusiasm kept me working on a project I would not have finished without him.

References

[Brewka, 1994] Gerhard Brewka. Reasoning about priorities in default logic. In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94), pages 940–945. AAAI/MIT Press, 1994.

[Brewka, 1996] Gerhard Brewka. Well-founded semantics for extended logic programs with dynamic preferences. Journal of Artificial Intelligence Research, 4:19–36, 1996.

[Broome, 2004] John Broome. Reasons. In R. Jay Wallace, Philip Pettit, Samuel Scheffler, and Michael Smith, editors, Reason and Value: Essays on the Moral Philosophy of Joseph Raz, pages 28–55. Oxford University Press, 2004.

[Dancy, 1983] Jonathan Dancy. Ethical particularism and morally relevant properties. Mind, 92:530–547, 1983.

[Dancy, 1993] Jonathan Dancy. Moral Reasons. Basil Blackwell Publisher, 1993.

[Dancy, 2000] Jonathan Dancy. The particularist's progress. In Brad Hooker and Margaret Little, editors, Moral Particularism. Oxford University Press, 2000.

[Dancy, 2001] Jonathan Dancy. Moral particularism. In Edward Zalta, editor, The Stanford Encyclopedia of Philosophy (Summer 2001 Edition). Available at http://plato.stanford.edu/archives/sum2001/entries/moral-particularism/.

[Dancy, 2004] Jonathan Dancy. Ethics Without Principles. Oxford University Press, 2004.

[Fahlman, 1979] Scott Fahlman. NETL: A System for Representing and Using Real-world Knowledge. The MIT Press, 1979.

[Franklin, 1772] Benjamin Franklin. Letter to Joseph Priestley, 1772. Reprinted in Frank Mott and Chester Jorgenson, editors, Benjamin Franklin: Representative Selections, pages 348–349. American Book Company, 1936; pagination refers to this version.

[Gordon, 1993] Thomas Gordon. The Pleadings Game: An Artificial Intelligence Model of Procedural Justice. PhD thesis, Technische Hochschule Darmstadt, 1993.

[Horty, 1994a] John Horty. Moral dilemmas and nonmonotonic logic. Journal of Philosophical Logic, 23:35–65, 1994.

[Horty, 1994b] John Horty. Some direct theories of nonmonotonic inheritance. In D. Gabbay, C. Hogger, and J. Robinson, editors, Handbook of Logic in Artificial Intelligence and Logic Programming, Volume 3: Nonmonotonic Reasoning and Uncertain Reasoning, pages 111–187. Oxford University Press, 1994.

[Horty, 2002] John Horty. Skepticism and floating conclusions. Artificial Intelligence, 135:55–72, 2002.

[Horty, 2003] John Horty. Reasoning with moral conflicts. Nous, 37:557–605, 2003.

[Horty, 2007] John Horty. Defaults with priorities. Journal of Philosophical Logic, forthcoming.

[Little, 2000] Margaret Little. Moral generalities revisited. In Brad Hooker and Margaret Little, editors, Moral Particularism. Oxford University Press, 2000.

[Makinson, 1994] David Makinson. General patterns in nonmonotonic reasoning. In D. Gabbay, C. Hogger, and J. Robinson, editors, Handbook of Logic in Artificial Intelligence and Logic Programming, Volume 3: Nonmonotonic Reasoning and Uncertain Reasoning, pages 35–110. Oxford University Press, 1994.

[McDermott, 1982] Drew McDermott. Non-monotonic logic II. Journal of the Association for Computing Machinery, 29:33–57, 1982.

[Pollock, 1970] John Pollock. The structure of epistemic justification. In Studies in the Theory of Knowledge, American Philosophical Quarterly Monograph Series, number 4, pages 62–78, 1970.

[Pollock, 1995] John Pollock. Cognitive Carpentry: A Blueprint for How to Build a Person. The MIT Press, 1995.

[Prakken and Sartor, 1995] Henry Prakken and Giovanni Sartor. On the relation between legal language and legal argument: assumptions, applicability, and dynamic priorities. In Proceedings of the Fifth International Conference on Artificial Intelligence and Law (ICAIL-95). The ACM Press, 1995.

[Prakken and Sartor, 1996] Henry Prakken and Giovanni Sartor. A dialectical model of assessing conflicting arguments in legal reasoning. Artificial Intelligence and Law, 4:331–368, 1996.

[Prakken and Vreeswijk, 2002] Henry Prakken and Gerard Vreeswijk. Logical systems for defeasible argumentation. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic (Second Edition), pages 219–318. Kluwer Academic Publishers, 2002.

[Raz, 1975] Joseph Raz. Practical Reasoning and Norms. Hutchinson and Company, 1975. Reprinted by Oxford University Press.

[Reiter, 1980] Raymond Reiter. A logic for default reasoning. Artificial Intelligence, 13:81–132, 1980.

[Schroeder, 2005] Mark Schroeder. Holism, weight, and undercutting. Unpublished manuscript, 2005.

[Schroeder, 2007] Mark Schroeder. Weighting for a plausible Humean theory of reasons. Nous, 41:138–160, 2007.

[Touretzky, 1986] David Touretzky. The Mathematics of Inheritance Systems. Morgan Kaufmann, 1986.

[Touretzky et al., 1987] David Touretzky, John Horty, and Richmond Thomason. A clash of intuitions: the current state of nonmonotonic multiple inheritance systems. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence (IJCAI-87), pages 476–482. Morgan Kaufmann, 1987.

[van Fraassen, 1973] Bas van Fraassen. Values and the heart's command. The Journal of Philosophy, 70:5–19, 1973.
