
Heuristics in judgment and decision-making

In psychology, heuristics are simple, efficient rules which people often use to form judgments and make decisions. They are mental shortcuts that usually involve focusing on one aspect of a complex problem and ignoring others.[1][2][3] These rules work well under most circumstances, but they can lead to systematic deviations from logic, probability or rational choice theory. The resulting errors are called "cognitive biases" and many different types have been documented. These have been shown to affect people's choices in situations like valuing a house or deciding the outcome of a legal case. Heuristics usually govern automatic, intuitive judgments but can also be used as deliberate mental strategies when working from limited information.

Cognitive scientist Herbert A. Simon originally proposed that human judgments are based on heuristics, taking the concept from the field of computation.[lower-alpha 1] In the early 1970s, psychologists Amos Tversky and Daniel Kahneman demonstrated three heuristics that underlie a wide range of intuitive judgments. These findings set in motion the Heuristics and Biases (HB)[4] research program, which studies how people make real-world judgments and the conditions under which those judgments are unreliable. This research challenged the idea that human beings are rational actors, but provided a theory of information processing to explain how people make estimates or choices. This research, which first gained worldwide attention in 1974 with the Science paper "Judgment Under Uncertainty: Heuristics and Biases",[5] has guided almost all current theories of decision-making.[6]

Although a lot of research has focused on how heuristics lead to errors, they can be seen as rational in an underlying sense. According to this perspective, heuristics are good enough for most purposes without being too demanding on the brain's resources. Another theoretical perspective sees heuristics as fully rational in that they are rapid, can be made without full information and can be as accurate as more complicated procedures. By understanding the role of heuristics in human psychology, marketers and other persuaders can influence decisions, such as the prices people pay for goods or the quantity they buy.

1 Types

In their initial research, Tversky and Kahneman proposed three heuristics: availability; representativeness; and anchoring and adjustment. Subsequent work has identified many more. Heuristics that underlie judgment are called judgment heuristics. Another type, called evaluation heuristics, are used to judge the desirability of possible choices.[7]

1.1 Availability

Main article: Availability heuristic

In psychology, availability is the ease with which a particular idea can be brought to mind. When people estimate how likely or how frequent an event is on the basis of its availability, they are using the availability heuristic.[8] When an infrequent event can be brought easily and vividly to mind, this heuristic overestimates its likelihood. For example, people overestimate their likelihood of dying in a dramatic event such as a tornado or terrorism. Dramatic, violent deaths are usually more highly publicised and therefore have a higher availability.[9] On the other hand, common but mundane events are hard to bring to mind, so their likelihoods tend to be underestimated. These include deaths from suicides, strokes, and diabetes. This heuristic is one of the reasons why people are more easily swayed by a single, vivid story than by a large body of statistical evidence.[10] It may also play a role in the appeal of lotteries: to someone buying a ticket, the well-publicised, jubilant winners are more available than the millions of people who have won nothing.[9]

When people judge whether more English words begin with T or with K, the availability heuristic gives a quick way to answer the question. Words that begin with T come more readily to mind, and so subjects give a correct answer without counting out large numbers of words. However, this heuristic can also produce errors. When people are asked whether there are more English words with K in the first position or with K in the third position, they use the same process. It is easy to think of words that begin with K, such as kangaroo, kitchen, or kept. It is harder to think of words with K as the third letter, such as lake or acknowledge, although objectively these are three times more common. This leads people to the incorrect conclusion that K is more common at the start of words.[11] In another experiment, subjects heard the names of many celebrities, roughly equal numbers of whom were male and female. The subjects were then asked whether the list of names included more men or more women. When the men in the list were more famous, a great majority of subjects incorrectly thought there were more of them, and vice versa for women. Tversky and Kahneman's interpretation of these results is that judgments of proportion are based on availability, which is higher for the names of better-known people.[8]

In one experiment that occurred before the 1976 U.S. Presidential election, some participants were asked to imagine Gerald Ford winning, while others did the same for a Jimmy Carter victory. Each group subsequently viewed their allocated candidate as significantly more likely to win. The researchers found a similar effect when students imagined a good or a bad season for a college football team.[12] The effect of imagination on subjective likelihood has been replicated by several other researchers.[10]

A concept's availability can be affected by how recently and how frequently it has been brought to mind. In one study, subjects were given partial sentences to complete. The words were selected to activate the concept either of hostility or of kindness: a process known as priming. They then had to interpret the behavior of a man described in a short, ambiguous story. Their interpretation was biased towards the emotion they had been primed with: the more priming, the greater the effect. A greater interval between the initial task and the judgment decreased the effect.[13]

Tversky and Kahneman offered the availability heuristic as an explanation for illusory correlations, in which people wrongly judge two events to be associated with each other. They explained that people judge correlation on the basis of the ease of imagining or recalling the two events together.[8][11]

1.2 Representativeness

Main article: Representativeness heuristic

The representativeness heuristic is seen when people use categories, for example when deciding whether or not a person is a criminal. An individual thing has a high representativeness for a category if it is very similar to a prototype of that category. When people categorise things on the basis of representativeness, they are using the representativeness heuristic. "Representative" is here meant in two different senses: the prototype used for comparison is representative of its category, and representativeness is also a relation between that prototype and the thing being categorised.[11][14] While it is effective for some problems, this heuristic involves attending to the particular characteristics of the individual, ignoring how common those categories are in the population (called the base rates). Thus, people can overestimate the likelihood that something has a very rare property, or underestimate the likelihood of a very common property. This is called the base rate fallacy. Representativeness explains this and several other ways in which human judgments break the laws of probability.[11]

The representativeness heuristic is also an explanation of how people judge cause and effect: when they make these judgments on the basis of similarity, they are also said to be using the representativeness heuristic. This can lead to a bias, incorrectly finding causal relationships between things that resemble one another and missing them when the cause and effect are very different. Examples of this include both the belief that emotionally relevant events ought to have emotionally relevant causes, and magical associative thinking.[15]

1.2.1 Ignorance of base rates

Main article: Base rate fallacy

A 1973 experiment used a psychological profile of Tom W., a fictional graduate student.[16] One group of subjects had to rate Tom's similarity to a typical student in each of nine academic areas (including Law, Engineering and Library Science). Another group had to rate how likely it is that Tom specialised in each area. If these ratings of likelihood are governed by probability, then they should resemble the base rates, i.e. the proportion of students in each of the nine areas (which had been separately estimated by a third group). If people based their judgments on probability, they would say that Tom is more likely to study Humanities than Library Science, because there are many more Humanities students, and the additional information in the profile is vague and unreliable. Instead, the ratings of likelihood matched the ratings of similarity almost perfectly, both in this study and a similar one where subjects judged the likelihood of a fictional woman taking different careers. This suggests that rather than estimating probability using base rates, subjects had substituted the more accessible attribute of similarity.[16]
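The normative calculation that the subjects skipped is Bayes' rule, in which the base rates enter as prior probabilities. A minimal sketch with invented numbers (the 20:1 base rate and the 4:1 fit of the profile are hypothetical illustrations, not figures from the study):

```python
# Sketch of the calculation the subjects skipped: Bayes' rule with
# hypothetical numbers (a 20:1 base rate for Humanities over Library
# Science, and a profile that fits Library Science 4 times better).

def posterior(prior_a, prior_b, like_a, like_b):
    """P(field A | profile) when only fields A and B are considered."""
    num_a = prior_a * like_a
    num_b = prior_b * like_b
    return num_a / (num_a + num_b)

p_humanities = posterior(prior_a=20, prior_b=1, like_a=1.0, like_b=4.0)
print(f"P(Humanities | profile) = {p_humanities:.2f}")  # 0.83: base rate dominates
```

Even when the profile fits the rarer field several times better, the lopsided base rate keeps the common field more probable, which is exactly the information a similarity judgment ignores.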

1.2.2 Conjunction fallacy

Main article: Conjunction fallacy

When people rely on representativeness, they can fall into an error which breaks a fundamental law of probability.[14] Tversky and Kahneman gave subjects a short character sketch of a woman called Linda, describing her as "31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations." People reading this description then ranked the likelihood of different statements about Linda. Amongst others, these included "Linda is a bank teller" and "Linda is a bank teller and is active in the feminist movement". People showed a strong tendency to rate the latter, more specific statement as more likely, even though a conjunction of the form "Linda is both X and Y" can never be more probable than the more general statement "Linda is X". The explanation in terms of heuristics is that the judgment was distorted because, for the readers, the character sketch was representative of the sort of person who might be an active feminist but not of someone who works in a bank. A similar exercise concerned Bill, described as intelligent but unimaginative. A great majority of people reading this character sketch rated "Bill is an accountant who plays jazz for a hobby" as more likely than "Bill plays jazz for a hobby".[17]

Without success, Tversky and Kahneman used what they described as "a series of increasingly desperate manipulations" to get their subjects to recognise the logical error. In one variation, subjects had to choose between a logical explanation of why "Linda is a bank teller" is more likely, and a deliberately illogical argument which said that "Linda is a feminist bank teller" is more likely "because she resembles an active feminist more than she resembles a bank teller". Sixty-five percent of subjects found the illogical argument more convincing.[17][18] Other researchers also carried out variations of this study, exploring the possibility that people had misunderstood the question. They did not eliminate the error.[19][20] The error disappears when the question is posed in terms of frequencies. Everyone in these versions of the study recognised that out of 100 people fitting an outline description, the conjunction statement ("She is X and Y") cannot apply to more people than the general statement ("She is X").[21]
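The frequency framing makes the conjunction rule mechanical: however a group of 100 people is generated, those fitting both descriptions are a subset of those fitting one. A small Python sketch with made-up trait frequencies:

```python
# Frequency framing of the conjunction rule: in any group of 100 people,
# those who are "X and Y" are a subset of those who are "X".
# The trait frequencies below are invented for the illustration.
import random

random.seed(0)
people = [{"bank_teller": random.random() < 0.1,
           "feminist": random.random() < 0.5}
          for _ in range(100)]

tellers = sum(p["bank_teller"] for p in people)
feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in people)

print(tellers, feminist_tellers)
assert feminist_tellers <= tellers  # holds for every possible sample
```

No choice of seed or frequencies can make the assertion fail, which is the sense in which the conjunction judgment breaks a law of probability rather than merely an empirical regularity.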
1.2.3 Ignorance of sample size

Main article: Insensitivity to sample size

Tversky and Kahneman asked subjects to consider a problem about random variation. Imagining for simplicity that exactly half of the babies born in a hospital are male, the ratio will not be exactly half in every time period. On some days, more girls will be born and on others, more boys. The question was: does the likelihood of deviating from exactly half depend on whether there are many or few births per day? It is a well-established consequence of sampling theory that proportions will vary much more day-to-day when the typical number of births per day is small. However, people's answers to the problem do not reflect this fact. They typically reply that the number of births in the hospital makes no difference to the likelihood of more than 60% male babies in one day. The explanation in terms of the heuristic is that people consider only how representative the figure of 60% is of the previously given average of 50%.[11][22]
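The sampling-theory claim is easy to verify by simulation. A sketch, assuming hospitals with roughly 15 and 45 births per day (the contrast used in the original problem), which counts how often a day exceeds 60% boys:

```python
# Simulating the hospital problem: days on which more than 60% of births
# are boys are far more common at the small hospital (roughly 15% of days
# at 15 births/day versus roughly 7% at 45 births/day).
import random

random.seed(1)

def share_of_extreme_days(births_per_day, days=10_000):
    """Fraction of simulated days with more than 60% boys."""
    extreme = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.6:
            extreme += 1
    return extreme / days

small = share_of_extreme_days(15)
large = share_of_extreme_days(45)
print(f"small hospital: {small:.1%}, large hospital: {large:.1%}")
assert small > large
```

The gap widens further as the large hospital grows: proportions from big samples cluster ever more tightly around 50%, which is exactly what the representativeness judgment fails to track.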

1.2.4 Dilution effect

Richard E. Nisbett and colleagues suggest that representativeness explains the dilution effect, in which irrelevant information weakens the effect of a stereotype. Subjects in one study were asked whether Paul or Susan was more likely to be assertive, given no other information than their first names. They rated Paul as more assertive, apparently basing their judgment on a gender stereotype. Another group, told that Paul's and Susan's mothers each commute to work in a bank, did not show this stereotype effect; they rated Paul and Susan as equally assertive. The explanation is that the additional information about Paul and Susan made them less representative of men or women in general, and so the subjects' expectations about men and women had a weaker effect.[23] This suggests that irrelevant, non-diagnostic information about an issue can dilute the influence of relevant information.[24]

1.2.5 Misperception of randomness

Representativeness explains systematic errors that people make when judging the probability of random events. For example, in a sequence of coin tosses, each of which comes up heads (H) or tails (T), people reliably tend to judge a clearly patterned sequence such as HHHTTT as less likely than a less patterned sequence such as HTHTTH. These sequences have exactly the same probability, but people tend to see the more clearly patterned sequences as less representative of randomness, and so less likely to result from a random process.[11][25] Tversky and Kahneman argued that this effect underlies the gambler's fallacy: a tendency to expect outcomes to even out over the short run, like expecting a roulette wheel to come up black because the last several throws came up red.[14][26] They emphasised that even experts in statistics were susceptible to this illusion: in a 1971 survey of professional psychologists, they found that respondents expected samples to be overly representative of the population they were drawn from. As a result, the psychologists systematically overestimated the statistical power of their tests, and underestimated the sample size needed for a meaningful test of their hypotheses.[11][26]
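The claim that the patterned and unpatterned sequences are equiprobable can be checked by exhaustive enumeration of the 64 possible outcomes of six fair tosses:

```python
# All 2**6 = 64 outcomes of six fair tosses are equally likely, so any
# exact sequence, patterned or not, has probability 1/64.
from itertools import product

outcomes = ["".join(s) for s in product("HT", repeat=6)]
assert len(outcomes) == 64

for seq in ("HHHTTT", "HTHTTH", "HHHHHH"):
    assert outcomes.count(seq) == 1  # each exact sequence appears once
    print(seq, 1 / len(outcomes))    # 0.015625 for every sequence
```

What differs between the sequences is not their probability but their typicality: far more of the 64 outcomes have a jumbled look than a neatly patterned one, and the representativeness judgment conflates the two.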

1.3 Anchoring and adjustment

Main article: Anchoring

Anchoring and adjustment is a heuristic used in many situations where people estimate a number.[27] According to Tversky and Kahneman's original description, it involves starting from a readily available number (the "anchor") and shifting either up or down to reach an answer that seems plausible.[27] In Tversky and Kahneman's experiments, people did not shift far enough away from the anchor. Hence the anchor contaminates the estimate, even if it is clearly irrelevant. In one experiment, subjects watched a number being selected from a spinning wheel of fortune. They had to say whether a given quantity was larger or smaller than that number. For instance, they might be asked, "Is the percentage of African countries which are members of the United Nations larger or smaller than 65%?" They then tried to guess the true percentage. Their answers correlated well with the arbitrary number they had been given.[27][28] Insufficient adjustment from an anchor is not the only explanation for this effect. An alternative theory is that people form their estimates on evidence which is selectively brought to mind by the anchor.[29]

The anchoring effect has been demonstrated by a wide variety of experiments, both in laboratories and in the real world.[28][30] It remains when the subjects are offered money as an incentive to be accurate, or when they are explicitly told not to base their judgment on the anchor.[30] The effect is stronger when people have to make their judgments quickly.[31] Subjects in these experiments lack introspective awareness of the heuristic, denying that the anchor affected their estimates.[31]
Even when the anchor value is obviously random or extreme, it can still contaminate estimates.[30] One experiment asked subjects to estimate the year of Albert Einstein's first visit to the United States. Anchors of 1215 and 1992 contaminated the answers just as much as more sensible anchor years.[31] Other experiments asked subjects if the average temperature in San Francisco is more or less than 558 degrees, or whether there had been more or fewer than 100,025 top ten albums by The Beatles. These deliberately absurd anchors still affected estimates of the true numbers.[28]
Anchoring results in a particularly strong bias when estimates are stated in the form of a confidence interval. An example is where people predict the value of a stock market index on a particular day by defining an upper and lower bound so that they are 98% confident the true value will fall in that range. A reliable finding is that people anchor their upper and lower bounds too close to their best estimate.[11] This leads to an overconfidence effect. One much-replicated finding is that when people are 98% certain that a number is in a particular range, they are wrong about thirty to forty percent of the time.[11][32]
Anchoring also causes particular difficulty when many numbers are combined into a composite judgment. Tversky and Kahneman demonstrated this by asking a group of people to rapidly estimate the product 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1. Another group had to estimate the same product in reverse order: 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8. Both groups underestimated the answer by a wide margin, but the latter group's average estimate was significantly smaller.[33] The explanation in terms of anchoring is that people multiply the first few terms of each product and anchor on that figure.[33] A less abstract task is to estimate the probability that an aircraft will crash, given that there are numerous possible faults each with a likelihood of one in a million. A common finding from studies of these tasks is that people anchor on the small component probabilities and so underestimate the total.[33] A corresponding effect happens when people estimate the probability of multiple events happening in sequence, such as an accumulator bet in horse racing. For this kind of judgment, anchoring on the individual probabilities results in an overestimate of the combined probability.[33]
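The arithmetic behind these anchors can be spelled out. A minimal sketch (the 400-fault count in the second part is an invented illustration, not a figure from the studies):

```python
import math

# The product both groups estimated, and the partial products a subject
# might anchor on after multiplying the first three terms.
full = math.factorial(8)     # 40320
descending = 8 * 7 * 6       # 336  (descending group's early anchor)
ascending = 1 * 2 * 3        # 6    (ascending group's early anchor)
print(full, descending, ascending)

# Combining many small risks: with 400 independent one-in-a-million faults
# (an invented count for illustration), the overall probability is far
# above any single component, which anchoring on a component misses.
p_fault = 1e-6
n_faults = 400
p_crash = 1 - (1 - p_fault) ** n_faults
print(f"{p_crash:.6f}")      # 0.000400, vs 0.000001 per component
```

The descending group's early anchor (336) is over fifty times the ascending group's (6), matching the direction of the reported difference in average estimates, while both anchors fall far short of the true product of 40,320.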

The amount of money people will pay in an auction for a bottle of wine can be influenced by considering an arbitrary two-digit number.

1.3.1 Applications

People's valuation of goods, and the quantities they buy, respond to anchoring effects. In one experiment, people wrote down the last two digits of their social security
numbers. They were then asked to consider whether they would pay this number of dollars for items whose value they did not know, such as wine, chocolate, and computer equipment. They then entered an auction to bid for these items. Those with the highest two-digit numbers submitted bids that were many times higher than those with the lowest numbers.[34][35] When a stack of soup cans in a supermarket was labelled "Limit 12 per customer", the label influenced customers to buy more cans.[31] In another experiment, real estate agents appraised the value of houses on the basis of a tour and extensive documentation. Different agents were shown different listing prices, and these affected their valuations. For one house, the appraised value ranged from US$114,204 to $128,754.[36][37]
Anchoring and adjustment has also been shown to affect grades given to students. In one experiment, 48 teachers were given bundles of student essays, each of which had to be graded and returned. They were also given a fictional list of the student's previous grades. The mean of these grades affected the grades that teachers awarded for the essay.[38]
One study showed that anchoring affected the sentences handed down in a fictional rape trial.[39] The subjects were trial judges with, on average, more than fifteen years of experience. They read documents including witness testimony, expert statements, the relevant penal code, and the final pleas from the prosecution and defence. The two conditions of this experiment differed in just one respect: the prosecutor demanded a 34-month sentence in one condition and 12 months in the other; there was an eight-month difference between the average sentences handed out in these two conditions.[39] In a similar mock trial, the subjects took the role of jurors in a civil case. They were either asked to award damages in the range from $15 million to $50 million or in the range from $50 million to $150 million. Although the facts of the case were the same each time, jurors given the higher range decided on an award that was about three times higher. This happened even though the subjects were explicitly warned not to treat the requests as evidence.[34]

1.4 Affect heuristic

Main article: Affect heuristic

"Affect", in this context, is a feeling such as fear, pleasure or surprise. It is shorter in duration than a mood, occurring rapidly and involuntarily in response to a stimulus. While reading the words "lung cancer" might generate an affect of dread, the words "mother's love" can create an affect of affection and comfort. When people use affect ("gut responses") to judge benefits or risks, they are using the affect heuristic.[40] The affect heuristic has been used to explain why messages framed to activate emotions are more persuasive than those framed in a purely factual way.[41]

1.5 Others

2 Theories

There are competing theories of human judgment, which differ on whether the use of heuristics is irrational. A cognitive laziness approach argues that heuristics are inevitable shortcuts given the limitations of the human brain. According to the natural assessments approach, some complex calculations are already done rapidly and automatically by the brain, and other judgments make use of these processes rather than calculating from scratch. This has led to a theory called attribute substitution, which says that people often handle a complicated question by answering a different, related question, without being aware that this is what they are doing.[42] A third approach argues that heuristics perform just as well as more complicated decision-making procedures, but more quickly and with less information. This perspective emphasises the fast and frugal nature of heuristics.[43]

2.1 Cognitive laziness

An effort-reduction framework proposed by Anuj K. Shah and Daniel M. Oppenheimer states that people use a variety of techniques to reduce the effort of making decisions.[44]

2.2 Attribute substitution

Main article: Attribute substitution


In 2002 Daniel Kahneman and Shane Frederick proposed a process called attribute substitution which happens without conscious awareness. According to this theory, when somebody makes a judgment (of a target attribute) which is computationally complex, a rather more easily calculated heuristic attribute is substituted.[45] In effect, a difficult problem is dealt with by answering a rather simpler problem, without the person being aware this is happening.[42] This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean.[42][45][46]

This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system. Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place.[42][45]

A visual example of attribute substitution. This illusion works because the 2D size of parts of the scene is judged on the basis of 3D (perspective) size, which is rapidly calculated by the visual system.

Kahneman gives an example where some Americans were offered insurance against their own death in a terrorist attack while on a trip to Europe, while another group were offered insurance that would cover death of any kind on the trip. Even though "death of any kind" includes "death in a terrorist attack", the former group were willing to pay more than the latter. Kahneman suggests that the attribute of fear is being substituted for a calculation of the total risks of travel.[47] Fear of terrorism for these subjects was stronger than a general fear of dying on a foreign trip.

2.3 Fast and frugal

Gerd Gigerenzer and colleagues have argued that heuristics can be used to make judgments that are accurate rather than biased. According to them, heuristics are fast and frugal alternatives to more complicated procedures, giving answers that are just as good.[48] The benefits of heuristic or 'less is more' decision-making strategies have been observed in a variety of settings, ranging from food consumption, to the stock market, to online dating.[49]
In 1975, psychologist Stanley Smith Stevens proposed that the strength of a stimulus (e.g. the brightness of a light, the severity of a crime) is encoded by brain cells in a way that is independent of modality. Kahneman and Frederick built on this idea, arguing that the target attribute and heuristic attribute could be very different in nature.[42]

"[P]eople are not accustomed to thinking hard, and are often content to trust a plausible judgment that comes to mind."

Daniel Kahneman, American Economic Review 93 (5), December 2003, p. 1450[46]

Kahneman and Frederick propose three conditions for attribute substitution:[42]

1. The target attribute is relatively inaccessible. Substitution is not expected to take place in answering factual questions that can be retrieved directly from memory ("What is your birthday?") or about current experience ("Do you feel thirsty now?").

2. An associated attribute is highly accessible. This might be because it is evaluated automatically in normal perception or because it has been primed. For example, someone who has been thinking about their love life and is then asked how happy they are might substitute how happy they are with their love life rather than other areas.

3. The substitution is not detected and corrected by the reflective system. For example, when asked "A bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?", many subjects incorrectly answer $0.10.[46] An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do. Whether they feel that is the right answer will depend on whether they check the calculation with their reflective system.

3 Consequences

3.1 Efficient decision heuristics

Warren Thorngate, an emeritus social psychologist, implemented 10 simple decision rules or heuristics in a simulation program; as computer subroutines, they chose among alternatives. He determined how often each heuristic selected alternatives with highest-through-lowest expected value in a series of randomly generated decision situations. He found that most of the simulated heuristics selected alternatives with highest expected value and almost never selected alternatives with lowest expected value. More information about the simulation can be found in his "Efficient decision heuristics" article (1980).[50]

3.2 Beautiful-is-familiar effect

Psychologist Benoît Monin reports a series of experiments in which subjects, looking at photographs of faces, have to judge whether they have seen those faces before. It is repeatedly found that attractive faces are more likely to be mistakenly labeled as familiar.[51] Monin interprets this result in terms of attribute substitution. The heuristic attribute in this case is a "warm glow"; a positive feeling
towards someone that might either be due to their being
familiar or being attractive. This interpretation has been
criticised, because not all the variance in familiarity is accounted for by the attractiveness of the photograph.[44]

3.3 Judgments of morality and fairness

Legal scholar Cass Sunstein has argued that attribute substitution is pervasive when people reason about moral, political or legal matters.[52] Given a difficult, novel problem in these areas, people search for a more familiar, related problem (a "prototypical case") and apply its solution as the solution to the harder problem. According to Sunstein, the opinions of trusted political or religious authorities can serve as heuristic attributes when people are asked their own opinions on a matter. Another source of heuristic attributes is emotion: people's moral opinions on sensitive subjects like sexuality and human cloning may be driven by reactions such as disgust, rather than by reasoned principles.[53] Sunstein has been challenged as not providing enough evidence that attribute substitution, rather than other processes, is at work in these cases.[44]

See also

Behavioral economics
Bounded rationality
Ecological Rationality
Cognitive miser
Low information voter
Methodology of heuristics
Adaptive Toolbox
List of memory biases
List of cognitive biases

Footnotes

[1] "Heuristic" was originally an adjective and is derived from the Greek word εὑρίσκειν (heuriskein), meaning "serving to discover". Its use as a noun came later, as an abbreviation for "heuristic method" or "heuristic principle". (Baron 2000, p. 50)

Citations

[1] Lewis, Alan (17 April 2008). The Cambridge Handbook of Psychology and Economic Behaviour. Cambridge University Press. p. 43. ISBN 978-0-521-85665-2. Retrieved 7 February 2013.
[2] Harris, Lori A. (21 May 2007). CliffsAP Psychology. John Wiley & Sons. p. 65. ISBN 978-0-470-19718-9. Retrieved 7 February 2013.
[3] Nevid, Jeffrey S. (1 October 2008). Psychology: Concepts and Applications. Cengage Learning. p. 251. ISBN 978-0-547-14814-4. Retrieved 7 February 2013.
[4] Kahneman, Daniel; Klein, Gary (2009). "Conditions for intuitive expertise: A failure to disagree". American Psychologist 64 (6): 515–526. doi:10.1037/a0016755.
[5] Kahneman, Daniel (2011). "Introduction". Thinking, Fast and Slow. Farrar, Straus and Giroux. ISBN 978-1-4299-6935-2.
[6] Plous 1999, p. 109
[7] Hastie & Dawes 2009, pp. 210–211
[8] Tversky, Amos; Kahneman, Daniel (1973). "Availability: A Heuristic for Judging Frequency and Probability". Cognitive Psychology 5: 207–232. doi:10.1016/0010-0285(73)90033-9. ISSN 0010-0285.
[9] Sutherland 2007, pp. 16–17
[10] Plous 1993, pp. 123–124
[11] Tversky & Kahneman 1974
[12] Carroll, J. (1978). "The Effect of Imagining an Event on Expectations for the Event: An Interpretation in Terms of the Availability Heuristic". Journal of Experimental Social Psychology 14 (1): 88–96. doi:10.1016/0022-1031(78)90062-8. ISSN 0022-1031.
[13] Srull, Thomas K.; Wyer, Robert S. (1979). "The Role of Category Accessibility in the Interpretation of Information About Persons: Some Determinants and Implications". Journal of Personality and Social Psychology 37 (10): 1660–72. doi:10.1037/0022-3514.37.10.1660. ISSN 0022-3514.
[14] Plous 1993, pp. 109–120
[15] Nisbett, Richard E.; Ross, Lee (1980). Human inference: strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall. pp. 115–118. ISBN 978-0-13-445073-5.
[16] Kahneman, Daniel; Tversky, Amos (July 1973). "On the Psychology of Prediction". Psychological Review (American Psychological Association) 80 (4): 237–51. doi:10.1037/h0034747. ISSN 0033-295X.
[17] Tversky, Amos; Kahneman, Daniel (1983). "Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment". Psychological Review 90 (4): 293–315. doi:10.1037/0033-295X.90.4.293. Reprinted in Gilovich, Thomas; Griffin, Dale; Kahneman, Daniel, eds. (2002), Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge: Cambridge University Press, pp. 19–48, ISBN 978-0-521-79679-8, OCLC 47364085.
[18] Poundstone 2010, p. 89
[19] Tentori, K.; Bonini, N.; Osherson, D. (1 May 2004). "The conjunction fallacy: a misunderstanding about conjunction?". Cognitive Science 28 (3): 467–477. doi:10.1016/j.cogsci.2004.01.001.
[20] Moro, Rodrigo (29 July 2008). "On the nature of the conjunction fallacy". Synthese 171 (1): 1–24. doi:10.1007/s11229-008-9377-8.
[21] Gigerenzer, Gerd (1991). "How to make cognitive illusions disappear: Beyond heuristics and biases". European Review of Social Psychology 2: 83–115. doi:10.1080/14792779143000033.
[22] Kunda 1999, pp. 70–71
[23] Kunda 1999, pp. 68–70
[24] Zukier, Henry (1982). "The dilution effect: The role of the correlation and the dispersion of predictor variables in the use of nondiagnostic information". Journal of Personality and Social Psychology 43 (6): 1163–1174. doi:10.1037/0022-3514.43.6.1163.
[25] Kunda 1999, pp. 71–72
[26] Tversky, Amos; Kahneman, Daniel (1971). "Belief in the law of small numbers". Psychological Bulletin 76 (2): 105–110. doi:10.1037/h0031322. Reprinted in Kahneman, Daniel; Slovic, Paul; Tversky, Amos, eds. (1982). Judgment under uncertainty: heuristics and biases. Cambridge: Cambridge University Press. pp. 23–31. ISBN 978-0-521-28414-1.
[27] Baron 2000, p. 235
[28] Plous 1993, pp. 145–146
[29] Koehler & Harvey 2004, p. 99
[30] Mussweiler, Englich & Strack 2004, pp. 185–186, 197
[31] Yudkowsky 2008, pp. 102–103
[32] Lichtenstein, Sarah; Fischhoff, Baruch; Phillips, Lawrence
D. (1982), Calibration of probabilities: The state of the
art to 1980, in Kahneman, Daniel; Slovic, Paul; Tversky, Amos, Judgment under uncertainty: Heuristics and
biases, Cambridge University Press, pp. 306334, ISBN
9780521284141

CITATIONS

[39] Mussweiler, Englich & Strack 2004, p. 183


[40] Finucane, M.L.; Alhakami, A.; Slovic, P.; Johnson,
S.M. (January 2000). The Aect Heuristic in Judgment of Risks and Benets. Journal of Behavioral Decision Making 13 (1): 117. doi:10.1002/(SICI)10990771(200001/03)13:1<1::AID-BDM333>3.0.CO;2-S.
[41] Keller, Carmen; Siegrist, Michael, Gutscher, Heinz (June
2006). The Role of Aect and Availability Heuristics in
Risk Analysis 26 (3). pp. 631639. doi:10.1111/j.15396924.2006.00773.x.
[42] Kahneman, Daniel; Frederick, Shane (2002), Representativeness Revisited: Attribute Substitution in Intuitive
Judgment, in Gilovich, Thomas; Grin, Dale; Kahneman, Daniel, Heuristics and Biases: The Psychology
of Intuitive Judgment, Cambridge: Cambridge University Press, pp. 4981, ISBN 9780521796798, OCLC
47364085
[43] Hardman 2009, pp. 1316
[44] Shah, Anuj K.; Daniel M. Oppenheimer (March 2008).
Heuristics Made Easy: An Eort-Reduction Framework. Psychological Bulletin (American Psychological Association) 134 (2): 207222. doi:10.1037/00332909.134.2.207. ISSN 1939-1455. PMID 18298269.
[45] Newell, Benjamin R.; David A. Lagnado; David R.
Shanks (2007). Straight choices: the psychology of
decision making. Routledge. pp. 7174. ISBN
9781841695884.
[46] Kahneman, Daniel (December 2003).
Maps of
Bounded Rationality:
Psychology for Behavioral
Economics.
American Economic Review (American Economic Association) 93 (5): 14491475.
doi:10.1257/000282803322655392. ISSN 0002-8282.
[47] Kahneman, Daniel (2007). Short Course in Thinking
About Thinking. Edge.org. Edge Foundation. Retrieved
2009-06-03.
[48] Gerd Gigerenzer, Peter M. Todd, and the ABC Research Group (1999). Simple Heuristics That Make Us
Smart. Oxford, UK, Oxford University Press. ISBN 019-514381-7

[34] Hastie & Dawes 2009, pp. 7880

[49] van der Linden, S. (2011). Speed Dating and Decision


Making: Why Less is More. Scientic American - Mind
Matters (Nature). Retrieved 2013-11-14.

[35] George Loewenstein (2007), Exotic Preferences: Behavioral Economics and Human Motivation, Oxford University Press, pp. 284285, ISBN 9780199257072

[50] Thorngate, Warren (1980).


Ecient decision
heuristics.
Behavioral Science 25 (3): 219225.
doi:10.1002/bs.3830250306.

[36] Mussweiler, Englich & Strack 2004, p. 188

[51] Monin, Benot; Daniel M. Oppenheimer (2005),


Correlated Averages vs.
Averaged Correlations:
Demonstrating the Warm Glow Heuristic Beyond
Aggregation, Social Cognition 23 (3): 257278,
doi:10.1521/soco.2005.23.3.257, ISSN 0278-016X

[33] Sutherland 2007, pp. 168170

[37] Plous 1993, pp. 148149


[38] Caverni, Jean-Paul; Pris, Jean-Luc (1990), The
Anchoring-Adjustment Heuristic in an 'Information-Rich,
Real World Setting': Knowledge Assessment by Experts,
in Caverni, Jean-Paul; Fabr, Jean-Marc; Gonzlez,
Michel, Cognitive biases, Elsevier, pp. 3545, ISBN
9780444884138

[52] Sunstein, Cass R. (2005). Moral heuristics. Behavioral and Brain Sciences (Cambridge University Press) 28
(4): 531542. doi:10.1017/S0140525X05000099. ISSN
0140-525X. PMID 16209802.

[53] Sunstein, Cass R. (2009). Some Eects of Moral Indignation on Law. Vermont Law Review (Vermont Law
School) 33 (3): 405434. SSRN 1401432. Retrieved
2009-09-15.

References

Baron, Jonathan (2000), Thinking and deciding (3rd ed.), New York: Cambridge University Press, ISBN 0521650305, OCLC 316403966

Gilovich, Thomas; Griffin, Dale W. (2002), "Introduction – Heuristics and Biases: Then and Now", in Gilovich, Thomas; Griffin, Dale W.; Kahneman, Daniel, Heuristics and biases: the psychology of intuitive judgement, Cambridge University Press, pp. 1–18, ISBN 9780521796798

Hardman, David (2009), Judgment and decision making: psychological perspectives, Wiley-Blackwell, ISBN 9781405123983

Hastie, Reid; Dawes, Robyn M. (29 September 2009), Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making, SAGE, ISBN 9781412959032

Koehler, Derek J.; Harvey, Nigel (2004), Blackwell handbook of judgment and decision making, Wiley-Blackwell, ISBN 9781405107464

Kunda, Ziva (1999), Social Cognition: Making Sense of People, MIT Press, ISBN 978-0-262-61143-5, OCLC 40618974

Mussweiler, Thomas; Englich, Birte; Strack, Fritz (2004), "Anchoring effect", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 183–200, ISBN 9781841693514, OCLC 55124398

Plous, Scott (1993), The Psychology of Judgment and Decision Making, McGraw-Hill, ISBN 9780070504776, OCLC 26931106

Poundstone, William (2010), Priceless: the myth of fair value (and how to take advantage of it), Hill and Wang, ISBN 9780809094691

Reber, Rolf (2004), "Availability", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 147–163, ISBN 9781841693514, OCLC 55124398

Sutherland, Stuart (2007), Irrationality (2nd ed.), London: Pinter and Martin, ISBN 9781905177073, OCLC 72151566

Teigen, Karl Halvor (2004), "Judgements by representativeness", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 165–182, ISBN 9781841693514, OCLC 55124398

Tversky, Amos; Kahneman, Daniel (1974), "Judgment Under Uncertainty: Heuristics and Biases", Science 185 (4157): 1124–1131, doi:10.1126/science.185.4157.1124, PMID 17835457. Reprinted in Daniel Kahneman, Paul Slovic, Amos Tversky, ed. (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press. pp. 3–20. ISBN 9780521284141.

Yudkowsky, Eliezer (2008), "Cognitive biases potentially affecting judgment of global risks", in Bostrom, Nick; Ćirković, Milan M., Global catastrophic risks, Oxford University Press, pp. 91–129, ISBN 9780198570509

8 Further reading

Slovic, Paul; Melissa Finucane; Ellen Peters; Donald G. MacGregor (2002). "The Affect Heuristic". In Thomas Gilovich, Dale Griffin, Daniel Kahneman. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press. pp. 397–420. ISBN 9780521796798.

9 External links
Test Yourself: Decision Making and the Availability Heuristic

10 Text and image sources, contributors, and licenses

10.1 Text

Heuristics in judgment and decision-making Source: http://en.wikipedia.org/wiki/Heuristics%20in%20judgment%20and%20decision-making?oldid=633993490 Contributors: BD2412, Rjwilmsi, Crisco 1492, Sardanaphalus, Chris the speller, MartinPoulter, Udufruduhu, AnomieBOT, Omnipaedista, Haeinous, U3964057, Dexbot, DangerouslyPersuasiveWriter, Interested2013, Jacapsicum, Monkbot, Kjeongeun, Luthien22, Ihaveacatonmydesk and Anonymous: 8

10.2 Images

File:Brain.png Source: http://upload.wikimedia.org/wikipedia/commons/7/73/Nicolas_P._Rougier%27s_rendering_of_the_human_brain.png License: GPL Contributors: http://www.loria.fr/~{}rougier Original artist: Nicolas Rougier

File:Cote_de_Brouilly_bottle_of_Beaujolais_wine.jpg Source: http://upload.wikimedia.org/wikipedia/commons/7/77/Cote_de_Brouilly_bottle_of_Beaujolais_wine.jpg License: CC BY-SA 2.0 Contributors: Transferred from en.wikipedia Original artist: Original uploader was Agne27 at en.wikipedia

File:Edit-clear.svg Source: http://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The Tango! Desktop Project. Original artist: The people from the Tango! project. And according to the meta-data in the file, specifically: Andreas Nilsson, and Jakub Steiner (although minimally).

File:Opt_taeuschung_groesse.jpg Source: http://upload.wikimedia.org/wikipedia/commons/c/ce/Opt_taeuschung_groesse.jpg License: CC-BY-SA-3.0 Contributors: picture made by Anton, who uploaded it into German Wikipedia Original artist: Anton

File:Psi2.svg Source: http://upload.wikimedia.org/wikipedia/commons/6/6c/Psi2.svg License: Public domain Contributors: ? Original artist: ?

10.3 Content license

Creative Commons Attribution-Share Alike 3.0
