SPECIAL SECTION: Uncertainty analysis

The rational geoscientist

MATT HALL, ConocoPhillips Canada

"The first principle is that you must not fool yourself, and you are the easiest person to fool. So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists." Richard Feynman, 1974.

That intuition can lead us astray is obvious and well known, but watch for examples of people relying on intuition to solve a problem and you will see it everywhere. When people talk of a hunch or gut feeling, they're talking about using their intuition.
I have done a seismic inversion and have a Poisson's ratio attribute volume. My hypothesis is that low Poisson's ratio means gas. I have some wells, represented by double-sided cards in Figure 1, to calibrate to. The game is to answer this question: Which cards do I need to turn over to prove the hypothesis that all the cards with low PR on one side have gas on the other? Take a moment to look at the four cards and decide which you will flip.
In the course of its evolution, the human brain has developed heuristics, rules of thumb, for dealing with problems like this one. These rules constitute our intuition. We're wary of the outsider with the thick accent. We balk at a garden hose in the grass; in the jungle, it could have been a snake. We are programmed to see faces in the topography of Mars or burnt toast. These rules are useful to us in urgent matters of survival, letting us take the least risky course of action without delay. But they're limiting and misleading when rational decisions are required. That's why most people, even educated people, get this problem wrong; the so-called confirmation bias is almost unavoidable.
As scientists we should be especially wary of this, but the fact is that we all tend to seek information that confirms our hypotheses, rather than trying to disprove them. In the problem above, the cards to flip are the low PR card (of course, it had better have gas on the other side) and the wet card, because it had better not say low PR. Most people select the gas card, but it is not required, because its reverse cannot prove or disprove our hypothesis; we don't care if high PR also means gas sometimes (or even all the time). It's easier to see why we don't care about the high PR card.
When is intuition useful?
"I believe in intuition and inspiration. Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution. It is, strictly speaking, a real factor in scientific research." Albert Einstein, 1931.

Figure 1. Four cards, each with low or high Poisson's ratio (PR) on one side, and water or gas on the other. My hypothesis: cards with low PR on one side have gas on the other. Which cards must I turn over to prove or disprove my hypothesis?

It would be wrong to conclude that our intuition is broken and unreliable. There's plenty of evidence, and recent literature (e.g., Gladwell, 2005), to show that it can be accurate in some situations, and we all know people in whom it seems to be particularly sharp. We also know that experience and reflective thought can hone our intuition, so we can be better seismic interpreters, better scientists, better managers, or better squash players. If diagnosing problems had to be reasoned from first principles, visits to the car mechanic or family doctor would be very tiresome.
Often people are proud of knowing things with their gut feeling. Politicians might refer to knowing something "in my heart," or even "in my heart of hearts." As scientists, however, we should be wary, because these declarations usually precede an assertion or belief for which there is no evidence, no rational basis. In some cases (prognostication about the future, for example), this might be entirely justified and, if the person has a track record of proven insight, we should listen. But we are not necessarily hearing the imaginative insight of a creative genius, and might possibly be subjected to the illogical ranting of a dogmatic bigot.

Perhaps because of the complexity of human interaction, we feel especially drawn to intuitive feelings in dealing with people, and may rely strongly on intuition when interviewing a job candidate, for example. And we feel okay about it, even though we're often dead wrong (and plenty of research indicates that men are more wrong than women in this situation). I think it's important to recognize your first instinct, listen to it, write it down if you have to, but then ignore it and work on the problem. Maybe you'll come back to the same conclusion, but maybe you won't.
There are things we learned to reason about as children, or have learned by rote, and now we don't really think about them any more. Reno is east of Los Angeles. We can tell when someone's lying. Other people are biased, but I am objective. The Great Wall of China is visible from space. The Coriolis force determines the swirl direction in a bathtub. There is a scientific method. We can discriminate between different colors (test yourself on Figure 2). We're confident in our reasoning and our knowledge, and comfortable with our confidence. Unfortunately, in these cases, we're wrong about all of it.
When does intuition falter?
There are dozens of situations in which we're often fooled, but I think five are of special concern to us as geoscientists in the oil and gas industry. These are when we're thinking about big numbers, permutations and combinations, probability, proportions, and randomness.

Figure 2. It doesn't matter how hard you stare; squares A and B will never appear to be the same color, even though they are. They're both 45% black. You cannot stop your brain from compensating for the shadow you perceive in the scene. (Image copyright Edward Adelson of MIT; used with permission.)

Figure 3. One route for my walk from home to work, and some simpler cases. The number of possible routes grows very quickly, following the binomial coefficients of Pascal's triangle. All 2,496,144 of them are 1480 m long.
Big numbers. Imagine you take a glass of water from the sea, and somehow label or tag the water molecules in there so that you can recognize them again. Then pour the water back into the ocean and wait, say, 10,000 years for the tides and currents to mix thoroughly. Now go somewhere, anywhere, and take a glass of sea water. How many of those original molecules will you see again?

Most people, confronted with this question, are suspicious. "Of course, the answer would seem to be zero," they reason, "but since you're asking me in this way, it must be nonzero. Maybe one molecule, or ten, or maybe even 100." Some lowish, positive number. But for a 250-ml glass, about half a pint, you will likely recognize about 1500 of the original molecules.
This amazing result, which is hard to believe but easy to prove to yourself, simply reflects the fact that there are more molecules in a glass than there are glassfuls in the ocean (indeed, about three orders of magnitude more). Molecules exist on a scale so far outside our experience, so alien to us, that most of us are unable to rely on our intuition to help estimate results. In general, my own experience suggests that if I am dealing with numbers beyond the range 10⁻⁶ to 10⁺⁶, I need to pull out a calculator.
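The arithmetic behind the glass of water is easy to check. Here is a minimal back-of-the-envelope sketch in Python; the 250-ml glass and the ocean volume of about 1.34 × 10⁹ km³ are round-number assumptions, good only to order of magnitude.

# Back-of-envelope check of the tagged-molecule problem.
# Assumptions: 250 ml (250 g) glass of water; ocean volume ~1.34e9 km^3.
AVOGADRO = 6.022e23          # molecules per mole
MOLAR_MASS_WATER = 18.0      # grams per mole
GLASS_G = 250.0              # grams of water in the glass
OCEAN_L = 1.34e9 * 1e12      # ~1.34e9 km^3 expressed in liters

molecules_per_glass = GLASS_G / MOLAR_MASS_WATER * AVOGADRO  # ~8.4e24
glasses_in_ocean = OCEAN_L / 0.25                            # ~5.4e21

# After perfect mixing, the expected number of tagged molecules recaptured
# is simply the ratio of the two numbers above.
print(round(molecules_per_glass / glasses_in_ocean))  # ~1500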
It's not a completely hopeless case. You can help tune your intuition with so-called Fermi problems. The physicist Enrico Fermi often set his students apparently unsolvable estimation problems to solve in a short time, most famously: How many piano tuners are there in Chicago? Here's one for you: How many seismic traces have been recorded?
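Before tackling the seismic-trace question, it is worth seeing how the piano-tuner estimate goes. In the sketch below, every input is an invented assumption, chosen only to be roughly the right size; writing the assumptions down explicitly is the whole method.

# Classic Fermi estimate: piano tuners in Chicago.
# Every number here is an assumption, good only to order of magnitude.
population = 3e6                   # people in Chicago, roughly
people_per_household = 2.5
pianos_per_household = 1 / 20      # one household in twenty owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 4 * 5 * 50  # 4 a day, 5 days a week, 50 weeks

pianos = population / people_per_household * pianos_per_household  # ~60,000
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # ~60: Fermi estimates aim to be within a factor of a few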
Permutations and combinations. I live in Calgary, which is built on a regular grid street pattern. I walk to work every day, going five blocks south and six blocks east (Figure 3). There are walkways along both sides of each street. The walk takes about 15 minutes, and one day I pondered how many different routes I might have to choose from, including both sidewalks and always taking the shortest route, and I started to count. Maybe I could walk them all!

First, the two-sided streets make the problem identical to a regular 11 × 13 grid. Fair enough. One block is easy; there are two ways around it. For a 1 × 2 grid, there are three ways. A 2 × 2 grid has six ways. Anyone familiar with Pascal's triangle may start to see a pattern, but I'm still counting routes in my head. When I imagined a grid of about 3 × 3, I started to wish I had something to write on (it turns out there are 20). So I guessed a few thousand and resolved to compute it sometime.

Naturally I worked it out as soon as I got home. And I was a bit off. No, I was a long way off; the actual answer is 2,496,144. I could walk two different routes every working day, and it would take me 6240 years to complete my collection. Even walking one every 15 minutes, nonstop, would take 71 years. And an obvious but still surprising point: all the routes are exactly the same length (the so-called Manhattan distance from my house to my office).
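If counting in your head fails, the computation is a single binomial coefficient: a shortest route is just a choice of which 11 of the 24 moves go one way rather than the other. A quick check in Python:

from math import comb

# A shortest route on an 11 x 13 grid is an ordering of 11 moves one way
# and 13 the other, so the count is "24 choose 11."
print(comb(11 + 13, 11))  # 2496144

# The smaller grids counted in the text:
for m, n in [(1, 1), (1, 2), (2, 2), (3, 3)]:
    print(f"{m} x {n} grid: {comb(m + n, m)} routes")  # 2, 3, 6, 20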
Probability (a straightforward geoscience-related problem). Imagine you are working in a newly accessible and underexplored area (Figure 4) of an otherwise mature basin. Statistics show that on average 10% of structures are filled with gas and the rest are dry. Fortunately, you have some seismic analysis technology that allows you to predict the presence of gas with 80% reliability. In other words, four out of five gas-filled structures test positive with the technique, and when it is applied to water-filled structures, it gives a negative result four times out of five.

You acquire some undrilled acreage, map some structures, and perform the seismic analysis. One of the structures tests positive. Assuming this is the only information you have, what is the probability that it is gas-filled?

Figure 4. Your acreage, top right, in a little-drilled area of the basin. On average, 10% of structures are gas-filled. The seismic attribute has a reliability of 80%. What is the probability your location will be a discovery?
This is a classic problem of embracing Bayesian likelihood and ignoring your built-in representativeness heuristic (Kahneman et al., 1982). Bayesian probability combination does not come naturally to most people but, once understood, can at least help you see the way to approach similar problems in the future. The way the problem is framed here is identical to the original formulation of Kahneman et al., the taxicab problem. This takes place in a town with 90 yellow cabs and 10 blue ones. A taxi is involved in a hit-and-run, witnessed by a passerby. Eye-witness reliability is shown to be 80%; so if the witness says the taxi was blue, what is the probability that the cab was indeed blue? Most people go with 80%, but in fact the witness is probably wrong. To see why, let's go back to the exploration problem and look at 100 test cases, shown in Table 1.

           Negative    Positive
Water      72/100      18/100
Gas         2/100       8/100

Table 1. The possible scenarios in the exploration problem. Water scenarios make up 90/100 cases; gas just 10/100. The taxicab problem is identical: replace gas and water with blue and yellow, "tested positive" with "saw blue," and "tested negative" with "saw yellow." The high rate of false positives confounds the reliability for the minority scenario.

Looking at the rows, we see that there are 90 water cases and 10 gas cases; 80% of the water cases test negative and 80% of the gas cases test positive. We can use this table to compute that when we get a positive test, the probability that the test is true is not 0.80 but much less: 8 / (8 + 18) = 0.31. In other words, a test that is mostly reliable is probably wrong when applied to an event that doesn't happen very often (a structure being gas-charged). It's still good news for us, though, because a probability of discovery of 0.31 is much better than the 0.10 we started with.
Here is Bayes's theorem for calculating the probability P of event A (say, a gas discovery) given event B (say, a positive test in our seismic analysis):

P(A|B) = P(B|A) × P(A) / P(B)

So we can express our problem in these terms:

P(gas | pos) = (0.8 × 0.1) / (0.8 × 0.1 + 0.2 × 0.9) = 0.31

Figure 5. Predictive power (in Bayesian jargon, the posterior probability) as a function of test reliability and the base rate of occurrence (also called the prior probability of the event or phenomenon in question). The position of the scenario in the exploration problem is shown by the white square. (Thanks to UBC Bioinformatics for the heatmap software, heatmap.notlong.com.)
This result is so counterintuitive, for me at least, that I can't resist illustrating it with another well-known example that takes it to extremes. Imagine you test positive for a rare disease, seismitis. It affects only 1 person in 10,000. The test is 99% reliable. What is the probability that you do indeed have seismitis?

Notice that the unreliability (1%) of the test is much greater than the rate of occurrence of the disease (0.01%). It's not hard to see that there will be many false positives: only 1 person in 10,000 is ill, and that person tests positive 99% of the time (almost always), but 1% of the other 9999 healthy people, which amounts to about 100 people, will test positive too. So for every 10,000 people tested, 101 test positive even though only 1 is ill. So the probability of being ill, given a positive test, is only about 1/101!
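Both results drop out of a few lines of Python. The function below is a direct transcription of Bayes's theorem for a binary test; the reliabilities and base rates come from the two examples in the text.

def posterior(prior, p_pos_given_true, p_pos_given_false):
    """Probability the event is real, given a positive test (Bayes's theorem)."""
    p_pos = p_pos_given_true * prior + p_pos_given_false * (1 - prior)
    return p_pos_given_true * prior / p_pos

# Exploration problem: 10% base rate, 80% reliability both ways.
print(posterior(0.10, 0.8, 0.2))      # 0.307..., the 0.31 in the text

# Seismitis: 1-in-10,000 base rate, 99% reliable test.
print(posterior(0.0001, 0.99, 0.01))  # 0.0098..., about 1/101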
Next time you confidently predict something with a seismic attribute, stop to think not only about the reliability of the test you have made, but also about the rate of occurrence of the thing you're trying to predict. Figure 5 shows how actual prediction power depends on both test reliability and the occurrence rate of the predicted event. You may be doing worse (or better!) than you think.

Fortunately, in this case, there is a simple mitigation: use other, independent methods of prediction. Mutually uncorrelated seismic attributes, well data, and engineering test results, if applied diligently, can improve the odds of a correct prediction. But computing the posterior probability of event A given independent observations B, C, D, E, and F is beyond the scope of this article (not to mention this author!).
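For the curious, here is the flavor of it. Under the strong (and rarely strictly true) assumption that the observations are conditionally independent, the calculation is just Bayes's theorem applied repeatedly, each posterior becoming the next prior. A minimal sketch, with invented reliabilities for illustration only:

def update(prior, p_pos_given_true, p_pos_given_false):
    """One Bayesian update for a single positive observation."""
    p_pos = p_pos_given_true * prior + p_pos_given_false * (1 - prior)
    return p_pos_given_true * prior / p_pos

# Hypothetical: three independent positive indicators of gas, each with
# its own (made-up) reliability. Start from the 10% base rate.
p = 0.10
for p_hit, p_false in [(0.8, 0.2), (0.7, 0.3), (0.9, 0.15)]:
    p = update(p, p_hit, p_false)
print(p)  # ~0.86: each consistent, independent positive raises the posterior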
Proportions. Proportions are really the same thing as probabilities, so it shouldn't be surprising that they can catch us out. Simpson's paradox is perhaps the best example of something we think we understand, we've always understood, suddenly turning on us.

Suppose you are comparing two new seismic attributes, truth and beauty. You compare their hydrocarbon-predicting success rates on 35 discoveries and it's close, but beauty wins with an 83% hit rate. Truth manages only 77%. Not exactly unequivocal, but if you had to pick one, all else being equal, beauty it is.
But then someone asks you about predicting oil. You dig out your data and show them Table 2. Clearly, truth did a little better when you just look at oil. And what about gas, they ask? Well, the data showed that truth was better than beauty at predicting gas. So truth does a better job at each of oil and gas, but somehow beauty edges out overall.

           Truth         Beauty
Oil        8/8, 100%     25/29, 86%
Gas        19/27, 70%    4/6, 67%
Overall    27/35, 77%    29/35, 83%

Table 2. The performance of two seismic attributes, truth and beauty. Which is better?

Impossible? Apparently not. In this case, hydrocarbon type is a confounding variable, and it's important to look for such groupings in your data. Improbable? Actually, it's quite common in all kinds of data and well known among statisticians. Be especially wary when one or more of the groups is much smaller than the others, and even more so if group sizes are inconsistent with respect to the confounding variable, as in my example. And what can we do about it? Try to avoid it by keeping sample sizes consistent across the variables that might interest you. But ultimately, we can't guarantee it won't crop up; that's just how proportions are. All you can do is make sure you ask your data the questions you care about.
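The paradox is easy to verify with the numbers from Table 2. A short Python check:

# Hits and trials for each attribute, split by hydrocarbon type (Table 2).
truth  = {"oil": (8, 8),   "gas": (19, 27)}
beauty = {"oil": (25, 29), "gas": (4, 6)}

for name, results in [("truth", truth), ("beauty", beauty)]:
    for fluid, (hits, trials) in results.items():
        print(name, fluid, f"{hits / trials:.0%}")
    total_hits = sum(h for h, _ in results.values())
    total_trials = sum(n for _, n in results.values())
    print(name, "overall", f"{total_hits / total_trials:.0%}")

# truth wins within each group (100% > 86%, 70% > 67%),
# yet beauty wins overall (83% > 77%): Simpson's paradox.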
Randomness. There is a strong tendency for us to see, and draw conclusions from, clusters and patterns in random data. Talk to an economist or baseball fan to see how far we humans can take this. Figure 6 shows how constellations are really pattern-free; astrologers clearly don't think in 3D. And Figure 7 shows how not to do exploration.

Figure 6. Two-dimensional projections can be misleading. Constellations, shown in blue, look even more random in three dimensions. The view from 40,000 light years is included out of interest: the constellations are local to our solar system. (Images from Chris Laurel's excellent Celestia software, www.shatters.net. Used with permission.)
Paradoxically, clusters and trends are sometimes a sure sign of randomness. In one-dimensional data sets, this can be striking. The so-called random walk is a sequence of random steps, but the walk still manages to produce a journey, so to speak. For instance, we might guess that playing a game of coin tosses, where we win a bonus for heads and pay an equal penalty for tails, tends to break even. But in fact, most iterations of this game produce a surprisingly large payout, or loss, over time (Figure 8). Notice how small segments of these curves resemble larger segments; they are fractal, and this self-similarity across all scales is a hallmark of such systems, and of many natural systems like coastlines and drainage networks.
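The coin-toss game of Figure 8 takes only a few lines to simulate; the payout curve is just the cumulative sum of the tosses. A sketch with NumPy:

import numpy as np

rng = np.random.default_rng()            # new seed, new "market"
tosses = rng.choice([1, -1], size=1000)  # +1 for heads, -1 for tails
payout = np.cumsum(tosses)               # running total of the game

# Despite a fair coin, the running total usually wanders far from zero.
print(payout[-1], payout.min(), payout.max())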
Figure 7. Maps of 100 wells. Which distribution is random? Can you visualize where the oil pools are? No, because the map on the left is completely random, while that in the center is highly constrained, with one well per section, as shown at right.

Figure 8. A sequence of 1000 coin-toss games, winning 1 unit for heads and paying 1 for tails. (a) What we expect. (b) and (c) What we usually get. Most sequences look like one of these. It's not a coincidence if these charts remind you of the stock market.

There are ways to check for randomness. You can compare your data to a random data set, qualitatively or even quantitatively. And of course there are statistical tools for checking the probability of a pattern occurring by chance. The important thing is to be vigilant about spurious or arbitrary clusters and patterns. The ultimate test is obvious: more data.
Geophysicists are trained to think of random noise as a pest. One of the chief goals of the seismic processor is to eradicate noise from the data as thoroughly as possible. But there is one important and highly counterintuitive effect that noise can have on a detection system: signal enhancement. I know that sounds wrong, but the effect, known as stochastic resonance, is real.

In some signal detection systems, the most common example perhaps being neuronal systems, a small amount of noise can actually improve the signal-to-noise ratio. There are two conditions: the detection system must have a nonlinear response, and it must have some lower-limit threshold of detection. In such systems, adding a small amount of noise raises the average energy of the low-level signal enough for it to get above the detection limit. Adding more noise deteriorates the signal and returns us to normality. For example, one study found that elderly people's balance improved when they listened to very low-level white noise; the balance receptors in the ears were stimulated to close to their detection limit, so that small but important balance signals became detectable.
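Stochastic resonance is straightforward to demonstrate numerically. In this toy sketch (not a seismic workflow; the threshold and amplitudes are invented), a sine wave sits just below a hard detection threshold. We add noise of increasing strength and measure how well the thresholded output tracks the clean signal; the tracking is best at some intermediate noise level, which is the signature of the effect.

import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 5000)
signal = 0.8 * np.sin(2 * np.pi * t)  # peak 0.8: below the threshold of 1.0
threshold = 1.0

for amp in [0.0, 0.2, 0.5, 1.0, 2.0, 4.0]:
    detected = (signal + amp * rng.standard_normal(t.size)) > threshold
    if detected.any():
        # Correlation between the detector's binary output and the clean signal
        r = np.corrcoef(detected.astype(float), signal)[0, 1]
        print(f"noise {amp:.1f}: correlation {r:.2f}")
    else:
        print(f"noise {amp:.1f}: nothing detected")  # signal never crosses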
In my experience, when a claim runs counter to our scientific intuition, it's because there are some unusual circumstances or conditions that must be met. For noise to help a signal, you need a nonlinear response function and an absolute detection threshold, for example. So when you hear about an unlikely sounding seismic inversion result, you might ask questions about the conditions under which the tests were performed, and about possible scenarios in which the proposed result might be possible. Then ask yourself how likely those conditions are to be met. The question must always be: Is there another explanation? Someone has an 80% success rate predicting sand with their sandiness attribute? I can do that without technology if the stratigraphic interval in question is 80% sand; if it's only 10% sand, then it's an amazing result.
Ways to not fool yourself
I don't think you can stop your intuitive brain from fooling your rationality. There's even a bias called the bias blind spot, which is the ubiquitous tendency not to see, even to deny, one's own biases. But a good start is to be aware of the biases we are prone to. Sometimes we know we're being fooled, but it doesn't matter because we're having fun or there's little at stake (many otherwise rational people buy lottery tickets, play blackjack, or buy homeopathic remedies).

But if we're offered evidence or we're implored to act on some information, and the outcome is important to us, or comes at great cost, we need to stop and think. Banish the rhetoric from the exhortation: assume it is biased. Banish emotion: assume you are being manipulated. Banish any expectations or preconceptions: assume they are wrong. Write down the first answer you think you see intuitively, then ignore it. Consider the information on its own. Check facts. Double-check calculations. Make a spreadsheet and model the solution. Ask someone else's opinion. Ask how you could be wrong. Ask why you are wrong. Then figure out how to be rational.
References
Gladwell, M., 2005, Blink: The power of thinking without thinking: Little, Brown and Company.
Kahneman, D., P. Slovic, and A. Tversky, 1982, Judgment under uncertainty: Heuristics and biases: Cambridge University Press.
Myers, D., 2002, Intuition: Its powers and perils: Yale University Press.
List of cognitive biases, 2010, http://en.wikipedia.org/wiki/List_of_cognitive_biases, accessed 15 December 2009.

Acknowledgments: I am grateful to Tooney Fink for stimulating conversation on this subject and helpful comments on the manuscript, and to ConocoPhillips for permission to publish.

Corresponding author: matt.hall@conocophillips.com
