Abstract In this paper, we re-examine a classic informal reasoning fallacy, the so-called argumentum ad ignorantiam. We argue that the structure of some versions of this argument parallels examples of inductive reasoning that are widely viewed as unproblematic. Viewed probabilistically, these versions of the argument from ignorance constitute a legitimate form of reasoning; the textbook examples are inductive arguments that are not unsound but simply weak, due to the nature of the premises and conclusions involved. In an experiment, we demonstrated some of the variables affecting the strength of the argument, and conclude with some general considerations towards an empirical theory of argument strength.

…of informal argument. We argue that a probabilistic approach to human reasoning (Chater & Oaksford, 2001; Oaksford & Chater, 1998, 2001) may be extended to many of these informal arguments. The goal of this paper is to provide a Bayesian analysis of at least some versions of the argument from ignorance (Walton, 1992). This is "the mistake that is committed whenever it is argued that a proposition is true simply on the basis that it has not been proved false, or that it is false because it has not been proved true." For example:

Ghosts exist because no one has proved that they do not. (1)

However, the inference from God has all the virtues to God is benevolent, considered as an informal argument, would be condemned as circular. That is, it assumes what it is supposed to establish, even though it can be viewed as logically valid (at least given the additional premise "benevolence is a virtue"). So this argument succeeds as a logical argument but fails as an informal one (see e.g., Walton, 1989, 1996). These examples make it clear that standard logic provides little guidance regarding the acceptability of a pattern of informal reasoning. This conclusion is further borne out by recent studies showing that the ability to identify informal reasoning fallacies is not correlated with deductive reasoning performance (Neuman, in press; Ricco, 2003).

Furthermore, as many authors have observed, a pattern of informal reasoning that is unacceptable in one context may be acceptable in another. This can often be shown by placing the argument in an appropriate dialogical context (Walton, 1992, 1996). For example, if a novice Christian was ignorant of God's properties and asked his vicar if God was benevolent, then the reply, "Yes, God has all the virtues," seems acceptable. Certainly it seems no less acceptable than the novice Classicist asking whether Hercules managed to clean the Augean stables, to be told by his lecturer that, "Yes, Hercules succeeded in all his labours." We argue that acceptability is a matter of degree and that a Bayesian approach may provide a useful metric of acceptability for informal arguments. This approach is related to Kuhn's (1993) attempts to relate scientific and informal reasoning. The most general formal model of scientific reasoning currently available is provided by Bayesian probability theory (Earman, 1992; Howson & Urbach, 1989).

In some psychological work, it is assumed that the fallacies are instances of bad argumentation and the focus is on the factors that allow people to avoid them (e.g., Neuman & Weizman, 2003). By contrast, following some recent philosophical work in this area (e.g., Copi & Burgess-Jackson, 1996; Eemeren & Grootendorst, 1992; Walton, 1996), we argue that whether an argument ad ignorantiam is fallacious depends on the context in which it occurs. Moreover, we attempt to show that at least some versions can be viewed as an instance of – fundamentally sound – inductive inference.

The Argument From Ignorance

Walton (1996) identifies the following form for the argument from ignorance:

If A were true (false), it would be known (proved, presumed) to be true (false). (2)
A is not known (proved, presumed) to be true (false).
Therefore, A is (presumably) false (true).

In general, of course, lack of knowledge, evidence, or proof is not sufficient to establish that a proposition is false. Indeed, if it were, then all kinds of absurd conclusions could be licensed by such arguments. For example, the fact that we have no evidence that flying pigs do not exist outside our solar system does not imply that we should conclude that they do (we thank an anonymous reviewer for this example). Similarly, the fact that we have no evidence that flying pigs do exist outside our solar system does not imply that we should conclude that they do not. Both these arguments are instances of the argument from ignorance, and both seem to be strictly fallacies (although our prior beliefs seem to suggest that the latter argument is more acceptable).

Walton (1996) identifies three basic types of the argument from ignorance where fallacies may arise: shifting the burden of proof, epistemic closure, and negative evidence.

Shifting the Burden of Proof

The classic example of shifting the burden of proof comes from the anticommunist trials overseen by Senator Joseph McCarthy in the 1950s. The proposition in question is that the accused is a communist sympathizer. In one case, the only evidence offered to support this conclusion was the statement that "…there is nothing in the files to disprove his Communist connections" (Kahane, 1992, p. 64). This argument attempts to place the burden of proof onto the accused person to establish that he is not a Communist sympathizer. Indeed, it is an attempt to reverse the normal burden of proof in law that someone is innocent until proved guilty, which itself licenses one of the few arguments from ignorance that some philosophers regard as valid (e.g., Copi & Cohen, 1990). That is, if the prosecution cannot prove that a defendant is guilty, then he/she is innocent. In the McCarthy example, it is clear that the argument is open to question. The conditional premise in this argument is, "if A were not a Communist sympathizer, there would be something in the files to prove it." However, there is no reason at all to believe that this should be the case.

Epistemic Closure

The second type of argument from ignorance is knowledge-based and relies on the concept of epistemic closure (De Cornulier, 1988; Walton, 1992) or what is known as the closed world assumption in Artificial Intelligence (AI) (Reiter, 1980, 1985).
CJEP 58-2 New Order 5/19/04 9:23 AM Page 77
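To make the closed world assumption concrete, here is a minimal Python sketch of ours (not code from the article; the station facts are invented): any proposition that cannot be proved from the data base is treated as false.

```python
# Minimal sketch of the closed world assumption (illustrative only;
# the station facts are invented). A proposition is "provable" only
# if it is in the data base; anything unprovable is treated as false.

database = {"stops(Kings Cross)", "stops(Peterborough)", "stops(Newcastle)"}

def provable(proposition: str) -> bool:
    """A proposition is provable just in case it is in the data base."""
    return proposition in database

def negated(proposition: str) -> bool:
    """Negation as failure: infer not-A whenever A cannot be proved."""
    return not provable(proposition)

# Under the closed world assumption the data base is complete, so failure
# to find Hatfield licenses the conclusion that the train does not stop there.
print(negated("stops(Hatfield)"))   # True

# Adding the missing fact overrides the conclusion: the inference is defeasible.
database.add("stops(Hatfield)")
print(negated("stops(Hatfield)"))   # False
```

Note that the conclusion is relativized to the current state of the data base: truth here is truth in a knowledge state, not classical truth.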
The negation-as-failure procedure (Clark, 1978) is a clear example, where one argues that a proposition is false and therefore that its negation is true, because it cannot be proved from the contents of a data base, Δ. As a result, the meaning of ¬A is that A cannot be proved from the other statements in Δ, that is, negation is intuitionistic (McCarty, 1983), not classical. This pattern of reasoning assumes that all knowledge relevant to this question is in the Δ data base. However, the simple addition of A to Δ would override this assumption and consequently this style of reasoning relativizes truth to truth in a knowledge state. It is clearly also nonmonotonic or defeasible (Oaksford & Chater, 1991) (i.e., conclusions can be overridden by new information).

Negation-as-failure is a practical necessity in knowledge-based systems but there are also more mundane examples. Walton (1992) provides the example of a railway timetable. Suppose the point at issue is whether the 13:00 train from London, Kings Cross to Newcastle stops at Hatfield. If the timetable is consulted and it is found that Hatfield is not mentioned as one of the stops, then it can be inferred that the train does not stop there. That is, it is assumed that the timetable is epistemically closed such that if there were further stops they would have been included. The reason why such arguments may fail is again related to the conditional premise in the argument from ignorance. In the real world, the closed world assumption is rarely justified, so it is not reasonable to assume that if A were true this would be known.

Negative Evidence

The final type of the argument from ignorance that Walton (1996) identifies is based on negative evidence, and so the conclusion of interest is a hypothesis under test. If this hypothesis is true, then the experiments that are conducted to test it would reveal positive results (i.e., the predictions that can be deduced from the hypothesis would actually be observed). However, if they are not observed, then the hypothesis is false. A mundane example of this style of reasoning is testing new drugs for safety. The argument from ignorance here is that a drug is safe if no toxic effects have been observed in tests on laboratory animals (Copi & Cohen, 1990). The critical point about such arguments is that the tests are well conducted and performed in sufficient number that if the drug were truly toxic the tests would have revealed it. As with the other arguments from ignorance, if this conditional premise cannot be established then fallacious conclusions may result.

There is some disagreement in the literature as to whether these last two types are genuine arguments from ignorance. Copi and Cohen (1990), for example, argue that these arguments do in fact rely on knowledge (i.e., for epistemic closure it is known that something is not known and for negative evidence it is known that there are failed tests of a hypothesis). Hence, strictly, they are not arguments from, at least total, ignorance. However, we follow Walton (1996) in grouping all these types of argument under the same heading as they do have the same underlying form.

Walton (1992) points out that the conclusion of any argument from ignorance is open to refutation (i.e., the conclusion can really only be accepted tentatively). He suggests dealing with this uncertainty by regarding the argument from ignorance as a case of presumptive or nonmonotonic reasoning. That is, conclusions based on the results of the argument from ignorance are presumed true until proven otherwise. In the next section, we discuss some problems for this approach that we think our probabilistic approach can resolve.

Coping With Uncertainty

Defeasible reasoning, where conclusions can be defeated, creates many problems in designing knowledge-based systems in Artificial Intelligence (for extensive discussion in relation to the psychology of human reasoning, see Oaksford & Chater, 1991, 1993, 1995, 1998). Problems for this style of reasoning can be readily illustrated using another form of presumptive reasoning using conditional rules (Oaksford & Chater, 1991, 1993).

For example, if I know that Jane is a runner I may infer that she is fit. This is a case of presumptive reasoning because she may have a heart condition that you do not know about. In an AI knowledge-based system, this style of presumptive reasoning is dealt with in a similar way to negation-as-failure (Reiter, 1985). Jane is fit can be inferred from the fact that she is a runner, as long as it cannot be proved from the contents of your knowledge base that Jane is unfit. While runners tend to be fit, academics tend not to be. Thus, by the same pattern of reasoning, if you found out that Fred is an academic, and you could not prove from the contents of your knowledge base that Fred is fit, you could conclude that Fred is unfit. What happens when you find out that Amelia is an academic runner? The problem here is that, depending on which general rule is applied first, you can conclude, presumptively, either that Amelia is fit or that Amelia is unfit. Overall, therefore, all that can be concluded is that Amelia is fit or she is unfit. But of course that was known before any presumptive inferences were drawn (i.e., this is an uninformative tautology). However, intuitively, most people would probably infer that Amelia was fit.
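The order dependence in the Amelia example can be reproduced in a few lines. This is a sketch of ours (not code from the article): two default rules applied naively license whichever conclusion happens to be drawn first.

```python
# Sketch of naive presumptive (default) reasoning: apply the first default
# rule whose precondition holds, and return its conclusion.

def infer_fitness(facts, rules):
    """Return 'fit' or 'unfit' according to the first applicable default rule."""
    for precondition, conclusion in rules:
        if precondition in facts:
            return conclusion
    return None

# Two defaults: runners tend to be fit, academics tend not to be.
runner_first   = [("runner", "fit"), ("academic", "unfit")]
academic_first = [("academic", "unfit"), ("runner", "fit")]

amelia = {"runner", "academic"}

# Depending on which rule is tried first, Amelia is presumed fit or unfit.
print(infer_fitness(amelia, runner_first))    # fit
print(infer_fitness(amelia, academic_first))  # unfit
```

Which answer you get is an accident of rule order, which is precisely the uninformative "fit or unfit" situation just described.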
That presumptive, nonmonotonic reasoning systems lead to such counterintuitive, tautological conclusions is what McDermott (1987) referred to as the "you don't want to know" problem, and it is related to the "frame problem" in AI (Oaksford & Chater, 1991; Pylyshyn, 1987). One way round this problem is to deal with uncertainty in human reasoning using probability theory (Oaksford & Chater, 1998). Thus the following statements are perfectly consistent with each other: the probability of being fit given someone is an academic is low, the probability of being fit given someone is a runner is high, and the probability of being fit given someone is an academic runner is high. Consequently, the intuitively correct inferences are at least possible.

A Bayesian Approach to the Argument From Ignorance

In this section, we show how the negative evidence case of the argument from ignorance can be understood from a Bayesian perspective. We describe our approach to the argument from ignorance as "Bayesian" not only because we use Bayes' rule. We also regard the probabilities that figure in these calculations as subjective degrees of belief and we treat prior probabilities as important in the reasoning process (Oaksford & Chater, 1996). There has been one other attempt, to our knowledge, to offer a Bayesian justification for an informal reasoning fallacy. Shogenji (2000) provided such a justification for circularity, which is also the only other informal reasoning fallacy that has been investigated experimentally (Rips, 2002). However, as far as we are aware, this is the first attempt to offer a Bayesian analysis of the argument from ignorance. As we pointed out earlier, the argument from ignorance can be used nonfallaciously, but whether it is depends on the context. We now show that our Bayesian account can specify the contextual conditions in which the argument from ignorance may be viewed as nonfallacious.

We use the example of testing drugs for safety and first show how it fits the form of this argument (see (2) above):

If drug A were toxic, it would produce toxic effects in legitimate tests.
Drug A has not produced toxic effects in such tests.
Therefore, A is not toxic.

A Bayesian analysis makes three main testable predictions, once some reasonable assumptions about what constitutes a legitimate test are made. First, the argument from ignorance (what we will refer to as negative arguments) should be acceptable but less acceptable than positive arguments. Thus the argument that drug A is toxic given a toxic effect is observed (positive argument) is more acceptable than the argument that it is not toxic given no toxic effects are observed (negative argument, i.e., the argument from ignorance). Second, people's prior beliefs should influence argument acceptance. The higher the degree of belief in the conclusion of an argument, the more acceptable an argument should be. Third, the more evidence found that is compatible with the conclusion of an argument, the more acceptable it should be. We look at each prediction in turn.

Negative Versus Positive Arguments

We concentrate in this section on defining the conditions for a legitimate test. Let e stand for an experiment where a toxic effect is observed and ¬e stand for an experiment where a toxic effect is not observed. Similarly, let T stand for the hypothesis that the drug produces a toxic effect and ¬T stand for the alternative hypothesis that the drug does not produce toxic effects. The strength of the argument from ignorance is given by the conditional probability that the hypothesis, T, is false given that a negative test result, ¬e, is found, P(¬T|¬e). This probability is referred to as negative test validity. The strength of the argument we wish to compare with the argument from ignorance is given by positive test validity, that is, the probability that the hypothesis, T, is true given that a positive test result, e, is found, P(T|e). These probabilities can be calculated from the sensitivity (P(e|T)) and the selectivity (P(¬e|¬T)) of the test and the prior belief that T is true (P(T)) using Bayes' theorem:

P(T|e) = P(e|T)P(T) / P(e)   (3)

P(¬T|¬e) = P(¬e|¬T)(1 − P(T)) / (1 − P(e))   (4)

P(e) can be calculated from these other probabilities by total probability. P(e) corresponds to the base rate of toxic effects in the population. Sensitivity corresponds to the "hit rate" of the test and 1 minus the selectivity corresponds to the "false positive rate." There is a trade-off between sensitivity and selectivity that is captured in the receiver operating characteristic curve (Green & Swets, 1966), which plots sensitivity against the false positive rate (1 − selectivity). Where the criterion is set along this curve will determine the sensitivity and selectivity of the test.

Generally, positive test validity will be greater than negative test validity when P(e) is low and P(e|T) is high. Toxic effects are abnormal. That is, they are physical or psychological effects that are uncommon,
TABLE 1
The Mean Acceptance Ratings for Each Dialogue by Evidence (One vs. Fifty Experiments), Prior Belief (Strong vs. Weak), Argument Type (Positive vs. Negative), and Scenario (Drugs or TV Violence) in the Experiment (N = 84)

                                One Experiment                          Fifty Experiments
                          Strong            Weak              Strong            Weak              Total
Scenario   Argument     Mean    SD        Mean    SD        Mean    SD        Mean    SD        Mean    SD
Drugs      Positive     68.00  (25.10)    53.58  (23.85)    84.20  (19.11)    70.87  (21.58)    69.16  (24.93)
           Negative     56.88  (25.16)    38.86  (22.49)    74.69  (22.71)    59.76  (24.07)    57.55  (26.76)
TV         Positive     65.33  (26.32)    50.75  (23.24)    80.35  (21.59)    67.26  (22.37)    65.92  (25.60)
           Negative     52.61  (26.76)    41.62  (22.66)    72.32  (23.04)    59.24  (22.99)    56.45  (26.29)
Total                   60.71  (26.48)    46.20  (23.77)    77.89  (22.07)    64.28  (23.20)
Experiment

This experiment was designed to test the predictions we derived from our Bayesian account of negative evidence versions of the argument from ignorance. That is, our substantial psychological claim is that people think about the negative evidence case of the argument from ignorance in this Bayesian way. We placed these arguments in their normal dialogical context. We chose this format because it allowed us to present the argument in a natural, engaging format, and it also allowed us to introduce prior beliefs without needing to explicitly record or influence participants' own beliefs. This was achieved by presenting participants with dialogues between two fictitious interlocutors. Participants acted as a third party who had to judge how convinced one of the interlocutors should be by the argument. Participants would be presented with a piece of dialogue such as:

Barbara: Are you taking digesterole for it?
Adam: Yes, why?
Barbara: Good, because I strongly believe that it does not have side effects.
Adam: It does not have any side effects.
Barbara: How do you know?
Adam: Because I know of an experiment where they failed to find any.

Participants would then be asked, given Adam's argument, how strongly they thought that Barbara should now believe that taking digesterole does not have side effects (the names of the drugs were fictitious). This is a strong prior belief, weak evidence (only one experiment), negative polarity argument. In the weak prior belief condition, "I strongly believe" was replaced with "I weakly believe." In the strong evidence condition, participants were told that there were 50 experiments involved. In the positive argument condition, rather than discussing whether the drug does not have side effects and experiments failing to find them, the interlocutors discuss whether the drug has side effects and the experiments that have found them. We devised two scenarios, one involving drug side effects and the other involving the relationship between violent behaviour and TV violence (see Appendix).

Method

Participants. Eighty-four undergraduate psychology students from Cardiff University participated in this experiment on a volunteer basis.

Design. The experiment was a 2 x 2 x 2 x 2 repeated measures design. The independent variables manipulated within each argumentative dialogue were: belief strength (weak or strong), polarity of the argument (positive or negative), amount of evidence (1 experiment or 50 experiments), and the scenario (drugs or TV violence). The dependent measure was the rating of how strongly one interlocutor should now believe the conclusion.

Materials and procedure. The 16 dialogues were presented in a booklet to groups of participants ranging in size from two to five people. There was one dialogue per page and they appeared in random order in the booklet. The dialogues are illustrated in the Appendix. The names of the interlocutors were different in each dialogue. Participants were instructed to read the argumentative dialogue and rate how strongly one interlocutor should now believe the conclusion given the argument put forward by the other interlocutor by using a 0 to 100 scale, where 0 denotes the interlocutor should not believe the conclusion at all and 100 denotes the interlocutor is certain that the conclusion is true.
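The 2 x 2 x 2 x 2 repeated measures design described in the Method can be enumerated mechanically; this sketch of ours (factor labels paraphrased from the Method) lists the 16 conditions, one per dialogue.

```python
from itertools import product

# Enumerate the 16 cells of the 2 x 2 x 2 x 2 repeated measures design.
# Factor labels are paraphrased from the Method section.
factors = {
    "scenario": ["drugs", "TV violence"],
    "evidence": ["1 experiment", "50 experiments"],
    "polarity": ["positive", "negative"],
    "belief":   ["strong", "weak"],
}

conditions = list(product(*factors.values()))
print(len(conditions))  # 16: one dialogue per condition
```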
Results

The means (and SDs) for each cell in this experiment are shown in Table 1. We performed a 2 (Scenario: drugs or TV violence) x 2 (Evidence: 1 vs. 50 experiments) x 2 (Polarity: positive vs. negative) x 2 (Prior Belief: weak vs. strong) repeated measures ANOVA with acceptance rating as the dependent variable. The drugs scenario arguments (M = 63.36, SE = 1.02) were marginally more acceptable than the TV violence arguments (M = 61.19, SE = 1.02), F(1, 83) = 3.59, MSE = 440.55, p = .062.

Positive arguments (M = 67.54, SE = .98) were more acceptable than negative arguments (M = 57.00, SE = 1.02), F(1, 83) = 53.46, MSE = 698.81, p < .0001. Acceptability ratings for negative arguments were also significantly above zero for all eight dialogues involving a negative argument, t(83) > 15.83, p < .0001, in all cases. This result confirms the prediction that the argument from ignorance would be regarded as acceptable, although not as acceptable as its positive counterpart.

Arguments for which only one piece of evidence was provided (M = 53.46, SE = 1.01) were less acceptable than arguments for which 50 pieces of evidence were provided (M = 71.09, SE = .91), F(1, 83) = 100.73, MSE = 1,036.93, p < .0001. It was also predicted that the impact of evidence should be greater for negative arguments than for positive arguments; such an interaction was observed, F(1, 83) = 5.22, MSE = 122.51, p < .025. All the simple effects comparisons were highly significant, but while both positive and negative arguments were endorsed more strongly when the evidence was strong, this effect was greater for negative arguments (difference = 19.01) than for positive arguments (difference = 16.23).

Arguments for which prior belief in the conclusion was strong (M = 69.30, SE = 1.00) were more acceptable than arguments for which prior belief was weak (M = 55.24, SE = .97), F(1, 83) = 92.67, MSE = 716.34, p < .0001. It was also predicted that the advantage for positive arguments over negative arguments would reduce when prior belief was weak. The interaction between prior belief and polarity was not significant, F(1, 83) < 1. However, the three-way interaction with scenario was significant, F(1, 83) = 6.46, MSE = 62.80, p < .025. For the TV scenario, although the belief by polarity interaction was not significant, F(1, 83) = 1.60, MSE = 84.75, p = .21, the increase in acceptability for positive over negative arguments was greater when prior belief was strong (13.83) than when it was weak (12.04). This trend was in the predicted direction. For the drugs scenario, the belief by polarity interaction was again not significant, F(1, 83) = 3.36, MSE = 84.57, p = .07, but the increase in acceptability for positive over negative arguments was greater when the prior belief was weak (16.48) than when it was strong (13.87). This trend was opposite to prediction. This effect may just be a statistical or materials artifact. We will only know when we use more powerful manipulations of prior belief with a wider variety of materials.

This experiment bore out most of the predictions of our Bayesian account (i.e., participants' rated degree of belief in the conclusion, given the arguments they were presented with, varied, in the main, as Bayes' rule would predict).

Discussion

In this experiment, we tested the four main predictions that follow from our Bayesian account of the argument from ignorance. The first was that negative arguments should be acceptable, but less acceptable than the positive arguments. This prediction was confirmed in that negative arguments were significantly less acceptable than positive arguments, but their ratings were nevertheless significantly above zero. Second, the Bayesian account predicts that an argument should become more acceptable the more evidence is found for its conclusion. This prediction, too, was confirmed, in that the manipulation of amount of evidence had a significant effect on participants' ratings. Third, it was also predicted that the effect of evidence should interact with argument polarity, such that a greater effect of evidence would be observed for negative rather than positive arguments. This prediction was also confirmed. Finally, the Bayesian account predicts that an argument should be more acceptable the stronger one's prior belief in the conclusion. This effect was also observed.

We also believe that our Bayesian account can be generalized to the epistemic closure form of the argument from ignorance, because the subjective probabilities involved may vary with other beliefs as well as with objective experimental tests. If a closed world assumption can be made, then the epistemic argument from ignorance is deductively valid. For example, if all relevant information about where trains stop is displayed in the timetable, then the conclusion that the train does not stop at Hatfield, because it does not say it does in the timetable, follows deductively. Of course, we may believe that the epistemic closure condition is met in varying degrees. For example, suppose you ask one of your friends, who rarely travels on this line, whether the train stops at Hatfield, and she says she cannot recall it ever stopping there, so it probably does not. You are more likely to be confident in this conclusion if you asked another of your friends who regularly uses that line. Similarly, you would be more confident again if you asked the railway guard. In all cases, your confidence in the conclusion rises in proportion to your degree of belief that the epistemic closure condition is met. This degree of belief may also be affected by how effectively you believe that the search has been conducted. So, for example, if you go to the inquiry desk where they conduct a computer search, you may be more confident again than when asking the railway guard. Our future experiments will investigate the effects of these manipulations.
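The graded reading of epistemic closure just described can be written as a simple mixture. This is a sketch of ours with invented numbers: if the source is epistemically closed, absence from it settles the matter; otherwise the absence is uninformative and only the prior remains.

```python
# Sketch: confidence in "the train does not stop at Hatfield" as a mixture.
# p_closure is the (invented) degree of belief that the source is complete.

def confidence_no_stop(p_closure, prior_no_stop):
    """P(no stop | not listed) under a given degree of belief in closure."""
    # Closure holds: absence entails no stop (probability 1).
    # Closure fails: the absence is uninformative, so fall back on the prior.
    return p_closure * 1.0 + (1 - p_closure) * prior_no_stop

prior = 0.5  # invented prior that a given train skips Hatfield
for source, p_closure in [("occasional traveller", 0.6),
                          ("regular traveller", 0.8),
                          ("railway guard", 0.9),
                          ("computer search", 0.99)]:
    print(source, round(confidence_no_stop(p_closure, prior), 2))
# Confidence rises monotonically with the degree of belief that closure is met.
```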
search, you may be more confident again than when Kuhn & Felton, 2000). A Bayesian approach may sug-
asking the railway guard. Our future experiments will gest that in justifying their informal conclusions, peo-
investigate the effects of these manipulations. ple concentrate on the evidence. Hence, such an
Our account may even apply to cases of shifting the approach cannot account for people’s reliance on
burden of proof. This is because it is not always clear causal mechanism. However, recent proposals for
which of the three types of argument from ignorance implementing a Bayesian approach indicate that it may
(Walton, 199) a particular instance falls under. capture the effects of both evidence and explanation.
Moreover, such an argument may have more than one In Bayesian networks (Pearl, 1988, 2000) and other
construal. For example, the argument that ghosts exist graphical approaches to statistical inference (Glymour
because no one has proved that they do not may be & Cheng, 1998; Spirtes, Glymour, & Scheines, 1993),
used to shift the burden of proof. If we assume joint probability distributions can only be identified
Occam’s Razor as a principle of reasoning, then this when appropriate causal mechanisms have been speci-
argument is illegitimate because the burden of proof is fied. Only once these have been determined can the
on those who want to introduce new entities into the force of evidence be assessed. Consequently, a
world. However, it could also be construed as a weak Bayesian approach may also capture one of the key
argument in terms of evidential considerations. That is, distinctions identified in the psychological literature on
the second premise of the argument, that there is no informal and scientific reasoning.
evidence that ghosts do not exist, can be questioned.
Extensive research into the paranormal has consistent- Conclusion
ly been able to explain these phenomena as hoaxes or In conclusion, we have provided a Bayesian recon-
normal physical effects. Consequently, just as in the drugs case, there is a host of legitimate tests that have indeed provided negative evidence that ghosts do not exist. In our view, this particular argument from ignorance is wrongheaded both as a shifting of the burden of proof and as an argument based on negative evidence.

The effect of prior belief is an instance of a fundamental aspect of argumentation – namely, the nature of the audience and its role in argumentation. Our prior belief manipulation, which presents the recipients of the evidential argument as either strongly or weakly believing the conclusion, constitutes an instance of differences between audiences. It has been widely assumed that the nature of the audience is a crucial variable, both from the pragmatic perspective of seeking to convince and for any rational reconstruction of argumentation (e.g., Perelman & Olbrechts-Tyteca, 1969). Hence, it would have been extremely surprising, and of considerable consequence, if sensitivity to the nature of the audience had not been found in this experiment. It is also interesting that, for arguments from ignorance involving negative evidence, such sensitivity can readily be assimilated to a Bayesian account.

In the Introduction, we observed that our approach to informal reasoning in general (i.e., beyond the confines of the argument from ignorance) is similar to Kuhn's (1993) proposal that scientific reasoning and informal reasoning are very closely related. In justifying theoretical claims, people tend to concentrate on explanation in terms of causal mechanism rather than on evidence (Kuhn, 2001), although this tendency is mediated by a variety of factors (Brem & Rips, 2000; …).

In summary, we have provided a Bayesian reconstruction of some versions of a classic informal reasoning fallacy – the argument from ignorance. We have also provided initial support for this reconstruction through an experimental investigation of the perceived acceptability of some everyday examples of this argument structure. This is, as yet, clearly insufficient to establish that a Bayesian framework is an, or possibly even the, appropriate framework for the study of informal argument. However, we think it does demonstrate the basic utility of the approach, both for providing a novel perspective on classic issues, for suggesting novel research questions, and for motivating experimental research. We hope this study will provide an impetus for future experimental investigations of this kind.

We thank Valerie Thompson for inviting us to contribute to this very timely special edition of this journal, and Nick Chater for directing us to the Shogenji (2000) paper. We also thank Tim van Gelder and an anonymous reviewer for their very useful and constructive comments on the original manuscript.

Correspondence concerning this article may be sent to Mike Oaksford or to Ulrike Hahn, School of Psychology, Cardiff University, P.O. Box 901, Cardiff, CF10 3YG, Wales, UK (E-mail: oaksford@cardiff.ac.uk, hahnu@cardiff.ac.uk).

CJEP 58-2 New Order 5/19/04 9:23 AM Page 83

References

Birnbaum, M. H. (1983). Base rates in Bayesian inference: Signal detection analysis of the cab problem. American Journal of Psychology, 96, 85-94.
Brem, S. K., & Rips, L. J. (2000). Evidence and explanation in informal argument. Cognitive Science, 24, 573-604.
Chater, N., & Oaksford, M. (2001). Human rationality and
the psychology of reasoning: Where do we go from here? British Journal of Psychology, 92, 193-216.
Clark, K. L. (1978). Negation as failure. In H. Gallaire & J. Minker (Eds.), Logic and databases (pp. 293-322). New York: Plenum Press.
Copi, I. M., & Burgess-Jackson, K. (1995). Informal logic. Englewood Cliffs, NJ: Prentice Hall.
Copi, I. M., & Cohen, C. (1990). Introduction to logic (8th ed.). New York: Macmillan Press.
De Cornulier, B. (1988). Knowing whether, knowing who, and epistemic closure. In M. Meyer (Ed.), Questions and questioning (pp. 182-192). Berlin: Walter de Gruyter.
Earman, J. (1992). Bayes or bust? Cambridge, MA: MIT Press.
Edwards, W. (1968). Conservatism in human information processing. In B. Kleinmuntz (Ed.), Formal representation of human judgement (pp. 17-52). New York: Wiley.
Felton, M., & Kuhn, D. (2001). The development of argumentative discourse skill. Discourse Processes, 32, 135-153.
Freeman, J. B. (1991). Dialectics and the macrostructure of arguments: A theory of argument structure. Berlin: Foris Publications.
Glymour, C., & Cheng, P. W. (1998). Causal mechanism and probability: A normative approach. In M. Oaksford & N. Chater (Eds.), Rational models of cognition (pp. 295-313). Oxford, UK: Oxford University Press.
Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.
Harman, G. (1965). The inference to the best explanation. Philosophical Review, 64, 88-95.
Howson, C., & Urbach, P. (1989). Scientific reasoning: The Bayesian approach. La Salle, IL: Open Court.
Kahane, H. (1992). Logic and contemporary rhetoric. Belmont, CA: Wadsworth.
Kuhn, D. (1993). Connecting scientific and informal reasoning. Merrill-Palmer Quarterly, 39, 74-103.
Kuhn, D. (2001). How do people know? Psychological Science, 12, 1-8.
Kuhn, D., & Felton, M. (2000). Developing appreciation of the relevance of evidence to argument. Paper presented at the Winter Conference on Discourse, Text, and Cognition, Jackson Hole, WY.
McCarty, C. (1983). Intuitionism: An introduction to a seminar. Journal of Philosophical Logic, 12, 105-149.
McDermott, D. (1987). A critique of pure reason. Computational Intelligence, 3, 151-160.
Neuman, Y. (in press). Go ahead, prove that God does not exist! Learning and Instruction.
Neuman, Y., & Weizman, E. (2003). The role of text representation in students' ability to identify fallacious arguments. Quarterly Journal of Experimental Psychology, 56A, 849-864.
Oaksford, M., & Chater, N. (1991). Against logicist cognitive science. Mind & Language, 6, 1-38.
Oaksford, M., & Chater, N. (1993). Reasoning theories and bounded rationality. In K. I. Manktelow & D. E. Over (Eds.), Rationality (pp. 31-60). London: Routledge.
Oaksford, M., & Chater, N. (1995). Theories of reasoning and the computational explanation of everyday inference. Thinking & Reasoning, 1, 121-152.
Oaksford, M., & Chater, N. (1996). Rational explanation of the selection task. Psychological Review, 103, 381-391.
Oaksford, M., & Chater, N. (1998). Rationality in an uncertain world: Essays on the cognitive science of human reasoning. Hove, UK: Psychology Press.
Oaksford, M., & Chater, N. (2001). The probabilistic approach to human reasoning. Trends in Cognitive Sciences, 5, 349-357.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems. San Mateo, CA: Morgan Kaufman.
Pearl, J. (2000). Causality. Cambridge, UK: Cambridge University Press.
Perelman, C., & Olbrechts-Tyteca, L. (1969). The new rhetoric: A treatise on argumentation. Notre Dame, IN: University of Notre Dame Press.
Pylyshyn, Z. W. (Ed.) (1987). The robot's dilemma: The frame problem in artificial intelligence. Norwood, NJ: Ablex.
Reiter, R. (1980). A logic for default reasoning. Artificial Intelligence, 13, 81-132.
Reiter, R. (1985). On reasoning by default. In R. Brachman & H. Levesque (Eds.), Readings in knowledge representation (pp. 401-410). Los Altos, CA: Morgan Kaufman.
Ricco, R. B. (2003). The macrostructure of informal arguments: A proposed model and analysis. Quarterly Journal of Experimental Psychology, 56, 1021-1052.
Rips, L. J. (1998). Reasoning and conversation. Psychological Review, 105, 411-441.
Rips, L. J. (2002). Circular reasoning. Cognitive Science, 26, 767-795.
Shogenji, T. (2000). Self-dependent justification without circularity. British Journal for the Philosophy of Science, 51, 287-298.
Spirtes, P., Glymour, C., & Scheines, R. (1993). Causation, prediction and search. New York: Springer-Verlag.
Van Eemeren, F. H., & Grootendorst, R. (1992). Argumentation, communication, and fallacies. Hillsdale, NJ: Lawrence Erlbaum.
Voss, J. F., & Van Dyke, J. A. (2001). Argumentation in psychology: Background comments. Discourse Processes, 32, 89-111.
Walton, D. N. (1989). Informal logic. Cambridge, UK: Cambridge University Press.
Walton, D. N. (1992). Nonfallacious arguments from ignorance. American Philosophical Quarterly, 29, 381-387.
Walton, D. N. (1996). Arguments from ignorance. Philadelphia, PA: Pennsylvania State University Press.
Appendix
This is a positive argument with strong prior belief and weak evidence. Changing this argument
means making the following changes: those marked “N” for a negative argument; those marked “W” for
a weak prior belief argument; and those marked “E” for a strong evidence argument. In each case
either the word or phrase in parentheses is introduced or it replaces the word or phrase in italics. Half
the dialogues used the made-up drug name “digesterole,” the other half used “parazepam.” The TV
violence dialogues were:
Summary

This article re-examines, from a Bayesian perspective, a classic fallacy of informal reasoning, namely the argumentum ad ignorantiam. The fallacy is this: the mistake committed whenever it is asserted that a proposition is true simply because its falsity has not been proved, or that it is false because its truth has not been proved. For example: ghosts exist because no one has proved that they do not. Recently, philosophers have begun to identify contextual conditions under which the argument from ignorance is acceptable (van Eemeren & Grootendorst, 1992; Walton, 1992, 1996). The authors contend that a Bayesian analysis of the argument from ignorance can yield a measure of the acceptability of this form of argument. In this article, they present such an analysis together with an experimental test of its predictions.

Walton (1996) distinguished three forms of the argument from ignorance: shifting the burden of proof, epistemic closure, and negative evidence. The authors concentrate on the negative-evidence case, for example, accepting the conclusion that a drug is not toxic because no toxic effects have been observed. On a Bayesian analysis, as long as the base rate of toxic effects is low and the sensitivity of the test is high, the argument from ignorance is acceptable, although less so than its positive counterpart, that is, that the drug is toxic because toxic effects have been observed. Moreover, the amount of evidence and the strength of prior belief should also influence the acceptability of the conclusion.

An experiment tested these predictions by varying three factors, namely argument polarity (negative, positive), prior belief (strong, weak), and evidence (1 test, 50 tests), within argumentative dialogues. Two scenarios were used: one concerning violence on television, the other concerning drug tests. Most of the predictions of the Bayesian analysis were confirmed. The argument from ignorance was endorsed, but less strongly than positive arguments. The stronger a person's prior beliefs and the more extensive the evidence, the greater the credence accorded to the conclusion. Moreover, the interaction between polarity and the other factors largely conformed to the predictions of Bayes' theorem.

In conclusion, a Bayesian approach to the fallacies of informal reasoning would seem to open up a promising field of inquiry. It generalizes the probabilistic approach to reasoning (Oaksford & Chater, 2001) by extending it to informal arguments. Moreover, such an analysis could readily be applied to other forms of the argument from ignorance and, beyond it, could explain other types of informal argumentation.
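
The Bayesian treatment of the negative-evidence case can be illustrated with a small numerical sketch. This is not the authors' actual model: the function, the parameter names, and the values (base rate of toxicity, test sensitivity, false-positive rate) are illustrative assumptions, chosen only to match the qualitative claims above — with a low base rate and a sensitive test, a run of negative tests makes "the drug is not toxic" highly acceptable, and more tests make it more so.

```python
def posterior_not_toxic(prior_toxic, sensitivity, false_pos, n_neg_tests):
    """P(drug is not toxic | n independent tests all showed no toxic effect),
    computed by Bayes' rule. All parameter values are illustrative assumptions."""
    # A toxic drug slips through every one of the n tests:
    p_neg_given_toxic = (1 - sensitivity) ** n_neg_tests
    # A safe drug correctly shows no effect in every test:
    p_neg_given_safe = (1 - false_pos) ** n_neg_tests
    num = p_neg_given_safe * (1 - prior_toxic)
    return num / (num + p_neg_given_toxic * prior_toxic)

# Low base rate of toxicity, highly sensitive test (invented values)
one = posterior_not_toxic(prior_toxic=0.1, sensitivity=0.9,
                          false_pos=0.05, n_neg_tests=1)   # ≈ 0.988
fifty = posterior_not_toxic(prior_toxic=0.1, sensitivity=0.9,
                            false_pos=0.05, n_neg_tests=50)  # ≈ 1.0
print(one, fifty)
```

Under these assumed values, one negative test already raises the acceptability of the conclusion above the prior of 0.9, and fifty negative tests make it near-certain — the same direction as the 1-test versus 50-test evidence manipulation in the experiment.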