d1 = academic documents, d2 = professional documents, d3 = cultural documents

      d1   d2   d3
s1     3    2    5
s2     2    0    2
s3     4    4    0
s4     6    1    3
This game is
(1) asymmetric: the decision maker is rational (looks at the payoffs), while nature is a random player;
(2) simultaneous: we do not know in advance which state of nature will be chosen.
Consider the continuum between certainty, risk, and uncertainty.

Certainty: we know exactly which strategy nature will play.
Risk: we know the probability distribution nature uses to play her strategies (e.g., by way of past observations).
Uncertainty: we do not even know the probability distribution of nature's strategies (e.g., a new, untried product).
[Figure: each (decision, state) pair leads to a consequence, measured here as profit.]
Example:

      d1   d2   d3   d4
s1     2    0    2    2
s2    -2   -1    1   -3
s3     5    7    1    4

The corresponding regret (opportunity loss) table, where the regret of a decision in a state is the best payoff attainable in that state minus the decision's payoff:

      d1   d2   d3   d4
s1     0    2    0    0
s2     3    2    0    4
s3     2    0    6    3
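The regret numbers can be recomputed directly from the payoff matrix. A minimal Python sketch; the s2 payoffs (−2, −1, 1, −3) are a sign assumption, chosen to be consistent with the expected values EMV = (1.4, 1.1, 1.5, .9) computed later for p = (.5, .3, .2):

```python
# Regret (opportunity loss) of decision d in state s:
# (best payoff attainable in s) minus (payoff of d in s).
payoff = {
    "s1": [2, 0, 2, 2],     # payoffs of d1..d4 in state s1
    "s2": [-2, -1, 1, -3],  # assumed signs; see lead-in
    "s3": [5, 7, 1, 4],
}

regret = {s: [max(row) - a for a in row] for s, row in payoff.items()}
for s, row in regret.items():
    print(s, row)

# Worst-case regret of each decision; the minimax-regret rule would
# pick the decision with the smallest worst-case regret (here: d2).
worst = [max(regret[s][j] for s in regret) for j in range(4)]
print(worst)
```

Under these assumptions the regret rows come out as (0, 2, 0, 0), (3, 2, 0, 4), and (2, 0, 6, 3), matching the table.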
        s1      s2      s3      s4
d1    $7.00    5.50   12.50   19.50
d2     5.50   14.00    2.50   19.50
d3     4.00   12.50   11.00   28.00
d4     2.50   11.00   21.00    4.00
      d1   d2   d3   d4    p
s1     2    0    2    2   .5
s2    -2   -1    1   -3   .3
s3     5    7    1    4   .2
Suppose that we are uncertain about a23. Rewrite the payoff as a23 = 7 + ε with an unknown ε ∈ [−2, 3], meaning that we expect the payoff to be between 5 & 10.
Expected payoffs:

EMV(d1) = 1.4
EMV(d2) = 1.1 + .2ε
EMV(d3) = 1.5
EMV(d4) = .9
This leads to the following decision rule:
If ε > 2 (i.e., a23 > 9), then decision d2 is best, &
if ε ≤ 2 (i.e., a23 ≤ 9), then decision d3 is best.
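This sensitivity analysis can be checked numerically. A small Python sketch; the minus signs in row s2 are the sign assumption consistent with the baseline EMVs (1.4, 1.1, 1.5, .9):

```python
# EMV of each decision when the uncertain payoff a23 = 7 + eps.
p = [.5, .3, .2]             # state probabilities for s1, s2, s3

def emv(eps):
    A = [
        [2, 0, 2, 2],        # s1
        [-2, -1, 1, -3],     # s2 (assumed signs; see lead-in)
        [5, 7 + eps, 1, 4],  # s3: a23 = 7 + eps
    ]
    return [sum(p[i] * A[i][j] for i in range(3)) for j in range(4)]

print([round(v, 4) for v in emv(0)])   # baseline: [1.4, 1.1, 1.5, 0.9]

# Crossover at eps = 2 (a23 = 9): d3 is best below, d2 above.
for eps in (1.9, 2.1):
    vals = emv(eps)
    print(eps, "-> d%d" % (vals.index(max(vals)) + 1))
```

Only EMV(d2) depends on ε, so the best decision switches exactly where 1.1 + .2ε crosses 1.5, i.e., at ε = 2.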
Another source of uncertainty relates to the magnitude of the probabilities.
Suppose that we are unsure about p1. Similar to the above, we can use p1 + ε with some unknown ε.
However: the sum of the probabilities must equal 1, so if p1 increases by ε, the other probabilities must jointly decrease by ε. Assume that the other two probabilities decrease by the same amounts, i.e.,
p = [.5 + ε, .3 − ε/2, .2 − ε/2].
Given the same payoff matrix

      d1   d2   d3   d4    p
s1     2    0    2    2   .5
s2    -2   -1    1   -3   .3
s3     5    7    1    4   .2
the expected payoffs are

EMV(d1) = 1.4 + .5ε
EMV(d2) = 1.1 − 3ε
EMV(d3) = 1.5 + ε
EMV(d4) = .9 + 1.5ε
Decision rule:
If ε ≤ −.1 (i.e., p1 ≤ .4), then decision d2 is best, &
if ε > −.1 (i.e., p1 > .4), then decision d3 is best.
Different example: Same payoff matrix, but as p1 increases by ε, p2 decreases by 2ε/3 & p3 decreases by ε/3, i.e., p = [.5 + ε, .3 − 2ε/3, .2 − ε/3]. The expected payoffs are then

EMV(d1) = 1.4 + 5ε/3
EMV(d2) = 1.1 − 5ε/3
EMV(d3) = 1.5 + ε
EMV(d4) = .9 + 8ε/3
Decision rule:
If ε < −.15 (i.e., p1 < .35), then decision d2 is optimal, &
if ε ≥ −.15 (i.e., p1 ≥ .35), then decision d3 is optimal.
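Both probability-sensitivity cases can be verified with one helper, where (w2, w3) describes how p2 and p3 absorb the change in p1. The s2 signs are the same assumption as before:

```python
# EMVs when p1 = .5 + eps, p2 = .3 - w2*eps, p3 = .2 - w3*eps.
A = [
    [2, 0, 2, 2],     # s1
    [-2, -1, 1, -3],  # s2 (assumed signs, consistent with the EMVs)
    [5, 7, 1, 4],     # s3
]

def emv(eps, w2, w3):
    p = [.5 + eps, .3 - w2 * eps, .2 - w3 * eps]
    return [round(sum(p[i] * A[i][j] for i in range(3)), 2)
            for j in range(4)]

# Equal split (w2 = w3 = 1/2): d2 and d3 tie at eps = -.1 (p1 = .4).
print(emv(-0.1, .5, .5))        # [1.35, 1.4, 1.4, 0.75]
# Split (2/3, 1/3): the tie moves to eps = -.15 (p1 = .35).
print(emv(-0.15, 2/3, 1/3))     # [1.15, 1.35, 1.35, 0.5]
```

At each tie point the two leading decisions d2 and d3 have equal expected payoff, which is exactly where the decision rules above switch.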
9.5 Decision Trees and the Value of Information
      d1   d2   d3   d4    p
s1    2*    0   2*   2*   .5
s2   -2   -1    1*  -3    .3
s3    5    7*    1    4   .2

(An asterisk marks the best payoff in each state.)
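The starred entries determine the expected payoff under perfect information; a quick check, assuming the asterisks mark the per-state maxima:

```python
# With perfect information we learn the state before deciding, so in
# each state we collect the starred (row-maximum) payoff.
p = [.5, .3, .2]   # P(s1), P(s2), P(s3)
best = [2, 1, 7]   # starred payoffs for s1, s2, s3
eppi = sum(pi * b for pi, b in zip(p, best))
print(round(eppi, 2))        # EPPI = 2.7

# Expected value of perfect information: EPPI minus the best EMV
# without information (EMV(d3) = 1.5 from the earlier computation).
print(round(eppi - 1.5, 2))  # EVPI = 1.2
```

EVPI is an upper bound on what any information about the state of nature can be worth.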
P(I|s):

       I1   I2
s1     .6   .4
s2     .9   .1
s3     .2   .8
Decision tree: [figure omitted]

Here, for I1, we compute P(I1) and P(s|I1):
 s    P(s)   P(I1|s)   P(I1|s)P(s)   P(s|I1)
s1     .5      .6          .30        .4918
s2     .3      .9          .27        .4426
s3     .2      .2          .04        .0656
                      P(I1) = .61
Similarly, for I2:

 s    P(s)   P(I2|s)   P(I2|s)P(s)   P(s|I2)
s1     .5      .4          .20        .5128
s2     .3      .1          .03        .0769
s3     .2      .8          .16        .4103
                      P(I2) = .39
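The two Bayes tables can be reproduced in a few lines:

```python
# Bayesian revision: P(I) = sum_s P(I|s) P(s),
#                    P(s|I) = P(I|s) P(s) / P(I).
prior = {"s1": .5, "s2": .3, "s3": .2}
likelihood = {            # (P(I1|s), P(I2|s)) for each state
    "s1": (.6, .4),
    "s2": (.9, .1),
    "s3": (.2, .8),
}

posteriors = {}
for k, name in enumerate(("I1", "I2")):
    joint = {s: likelihood[s][k] * prior[s] for s in prior}
    p_I = sum(joint.values())
    posteriors[name] = {s: round(v / p_I, 4) for s, v in joint.items()}
    print(name, round(p_I, 2), posteriors[name])
```

The printed marginals and posteriors match the tables: P(I1) = .61 with posteriors (.4918, .4426, .0656), and P(I2) = .39 with posteriors (.5128, .0769, .4103).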
Now consider a second information structure with

       I1   I2
s1     .9   .1
s2     .6   .4
s3     .2   .8
Again, for I1:

 s    P(s)   P(I1|s)   P(I1|s)P(s)   P(s|I1)
s1     .5      .9          .45        .6716
s2     .3      .6          .18        .2687
s3     .2      .2          .04        .0597
                      P(I1) = .67
And for I2:

 s    P(s)   P(I2|s)   P(I2|s)P(s)   P(s|I2)
s1     .5      .1          .05        .1515
s2     .3      .4          .12        .3636
s3     .2      .8          .16        .4848
                      P(I2) = .33

We then obtain EPII = 2.12, so that the expected value of this imperfect information is EVII = EPII − EMV = 2.12 − 1.50 = .62.
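Putting the pieces together for the second information structure, a sketch that reproduces EPII = 2.12 (the s2 payoff signs are the same assumption used throughout):

```python
# EPII: for each signal I, act optimally against the posterior P(s|I),
# then weight the resulting best expected payoffs by P(I).
prior = [.5, .3, .2]
likelihood = [(.9, .1), (.6, .4), (.2, .8)]  # (P(I1|s), P(I2|s)) per state
A = [
    [2, 0, 2, 2],     # s1
    [-2, -1, 1, -3],  # s2 (assumed signs; see the earlier EMVs)
    [5, 7, 1, 4],     # s3
]

epii = 0.0
for k in range(2):                        # signals I1, I2
    joint = [likelihood[i][k] * prior[i] for i in range(3)]
    p_I = sum(joint)
    post = [v / p_I for v in joint]
    best = max(sum(post[i] * A[i][j] for i in range(3)) for j in range(4))
    epii += p_I * best

print(round(epii, 2))        # EPII = 2.12
print(round(epii - 1.5, 2))  # EVII = EPII - best EMV = 0.62
```

Given I1 the posterior favors d3; given I2 it favors d2. Averaging the two conditional optima over P(I1) = .67 and P(I2) = .33 yields EPII = 2.12.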