Probably I will take no insurance in state G and full insurance in state B. This assumes that the insurance company does not know I have done the testing! Otherwise the outcome would be worse than v_0: I would not be able to buy insurance at all. We will see this when we study adverse selection.
Bayes' formula follows from this; we shall come to this later. If P(S) = 0, the formula does not apply: any posterior is compatible with Bayesian updating after observing a zero-probability event.
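A quick numeric sketch of the updating formula (the prior and likelihoods below are invented for illustration, not taken from the notes):

```python
# Bayes' formula: P(H | S) = P(S | H) P(H) / P(S), which requires P(S) > 0.
# Illustrative numbers: prior P(rain) = 0.3; an unreliable forecast says
# "rain" with probability 0.8 if it rains and 0.2 if it does not.
prior_rain = 0.3
p_sig_given_rain = 0.8
p_sig_given_dry = 0.2

p_sig = p_sig_given_rain * prior_rain + p_sig_given_dry * (1 - prior_rain)
posterior_rain = p_sig_given_rain * prior_rain / p_sig  # undefined if p_sig == 0

print(posterior_rain)  # about 0.632
```

If instead `p_sig` were 0, the last line would divide by zero: exactly the case where any posterior is consistent with Bayesian updating.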
The optimal plan can be found for each signal realization separately: maximizing conditional expected utility signal by signal.
So if I wait, learn that S happened, do a Bayesian update, and then take the EU-maximizing decision based on the conditional probability, I end up making the same decision. Similarly for learning that S did not happen. Conclusion: if I am a Bayesian EU agent, I do not want to change my plans after observing information. We used binary information (S or not S), but this holds for an arbitrary information structure (I learn that S_i happened, where S_1, S_2, ... are events such that exactly one must happen²). If an EU maximizer is non-Bayesian, the following sort of inconsistency is possible: I listen to the (unreliable) weather report and decide in advance that it is best to take an umbrella if it predicts rain, and not if it predicts sun.
² I.e., the S_i are disjoint and ∪_i S_i = Ω. The S_i are then called a partition of Ω.
But I know that when I see that it predicts sun, I will make a non-Bayesian update about the probability of rain and decide to take an umbrella anyway. So because I do not agree with the strategy of my future self, I make sure to put the umbrella in the car before listening to the weather report.
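The dynamic-consistency point can be checked in a small worked example. All the numbers (prior, forecast reliability, payoffs) are invented for illustration; the point is only that the ex ante optimal plan and the ex post Bayesian decisions coincide:

```python
import itertools

# States, signals, actions, and payoffs for the umbrella story (illustrative).
states = ["rain", "dry"]
prior = {"rain": 0.3, "dry": 0.7}
signals = ["predict_rain", "predict_dry"]
# Unreliable forecast: P(signal | state).
lik = {("predict_rain", "rain"): 0.8, ("predict_dry", "rain"): 0.2,
       ("predict_rain", "dry"): 0.2, ("predict_dry", "dry"): 0.8}
actions = ["umbrella", "no_umbrella"]
u = {("umbrella", "rain"): 0, ("umbrella", "dry"): -1,
     ("no_umbrella", "rain"): -5, ("no_umbrella", "dry"): 0}

# Ex ante: commit to a plan (one action per signal) maximizing expected utility.
def plan_value(plan):
    return sum(prior[w] * lik[(s, w)] * u[(plan[s], w)]
               for w in states for s in signals)

plans = [dict(zip(signals, a))
         for a in itertools.product(actions, repeat=len(signals))]
best_plan = max(plans, key=plan_value)

# Ex post: after seeing signal s, do a Bayesian update and maximize conditionally.
def bayes_action(s):
    p_s = sum(lik[(s, w)] * prior[w] for w in states)
    post = {w: lik[(s, w)] * prior[w] / p_s for w in states}
    return max(actions, key=lambda a: sum(post[w] * u[(a, w)] for w in states))

# A Bayesian EU agent does not want to deviate from her ex ante plan.
assert all(best_plan[s] == bayes_action(s) for s in signals)
print(best_plan)
```

With a non-Bayesian update inside `bayes_action`, the assertion could fail, which is exactly the umbrella-in-the-car inconsistency above.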
$$ v_1 = \max_{\alpha_1, \alpha_2, \ldots} \left[ \sum_{\omega : S(\omega) = s_1} p_\omega \, u(X(\omega), \alpha_1) + \sum_{\omega : S(\omega) = s_2} p_\omega \, u(X(\omega), \alpha_2) + \cdots \right] $$
Under the constraint α_1 = α_2 = ⋯ = α, this is the same as maximizing formula 1. So v_1 ≥ v_0, with strict inequality when it is not optimal to choose the same action for each signal. When information is used for decision making, it is valuable; otherwise it is not.

This is related to Bernanke's bad news principle (Bernanke, QJE 1983) about the timing of binary investments. Suppose a firm is deciding whether or not to make an investment. Under normal conditions, it would make the investment. Suppose it waits and acquires information (economic news). The value of waiting (the option value) is governed by the likelihood (and distribution) of bad news, i.e., news that would cause the investment not to be made. So if bad news may come, it is valuable to wait and not make current investments.

5.1.1 The value function associated with the probability distribution of X is convex

For simplicity, take X ∈ {x_1, x_2}, and let p be the probability of x_2. (X = x_2: it will rain, a company will perform well, a job candidate will perform well in the job, ...) This results in a value

$$ v_p = \max_{\alpha} \left[ (1 - p) \, u(x_1, \alpha) + p \, u(x_2, \alpha) \right]. $$

This is convex in p. Why?

Argument 1
Suppose there is a signal S ∈ {s′, s″}, where S = s″ happens with probability λ; conditional on S = s′ the probability of x_2 is p′, and conditional on S = s″ it is p″. Then the unconditional probability of x_2 is p = (1 − λ) p′ + λ p″. If the agent does not observe the signal, he has resulting utility v_{(1−λ)p′ + λp″} = v_p.
If the agent does observe the signal, he has resulting utility (1 − λ) v_{p′} + λ v_{p″}. Since information is valuable, the second quantity is at least the first:

$$ (1 - \lambda) \, v_{p'} + \lambda \, v_{p''} \ge v_{(1-\lambda) p' + \lambda p''}. $$

So v is convex.
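Argument 1 can be spot-checked numerically. The two actions and their payoffs below are invented for illustration:

```python
# v(p) = max over actions of (1 - p) * u(x1, a) + p * u(x2, a).
# u[a] = (u(x1, a), u(x2, a)): a "safe" action and a "risky" action (illustrative).
u = {"safe": (1.0, 1.0), "risky": (0.0, 3.0)}

def v(p):
    return max((1 - p) * ux1 + p * ux2 for ux1, ux2 in u.values())

# Mixture of posteriors p1, p2 with weight lam, as in the argument above.
lam, p1, p2 = 0.4, 0.1, 0.9
pooled = (1 - lam) * p1 + lam * p2

# Value without the signal vs. expected value with the signal.
assert v(pooled) <= (1 - lam) * v(p1) + lam * v(p2)
print(v(pooled), (1 - lam) * v(p1) + lam * v(p2))
```

Here v(pooled) = 1.26 while the informed value is 1.68; the gap is the value of the signal.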
Figure 1: v_p is convex

Argument 2

Fixing a decision α, expected utility (1 − p) u(x_1, α) + p u(x_2, α) is linear in the probability p. (This has nothing to do with risk preferences!) Maximizing over α, the value v_p is the maximum of a collection of linear functions, which is convex in p. Convexity of v_p is another way to think about information being valuable.
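Argument 2 in code: each action contributes a line in p, and the value function is their upper envelope. The random lines below are stand-ins for an arbitrary finite action set; the convexity check is a grid approximation, not a proof:

```python
import random

random.seed(0)
# Each action a gives expected utility intercept_a + slope_a * p, linear in p.
lines = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5)]

def v(p):
    # Upper envelope: maximize over actions at each p.
    return max(a + b * p for a, b in lines)

# Midpoint convexity on a grid: v((p1 + p2) / 2) <= (v(p1) + v(p2)) / 2.
grid = [i / 100 for i in range(101)]
for p1 in grid:
    for p2 in grid:
        assert v((p1 + p2) / 2) <= (v(p1) + v(p2)) / 2 + 1e-12
print("upper envelope of linear functions is convex")
```

The envelope is piecewise linear and convex no matter which lines are drawn, which is exactly why v_p is convex for any finite set of actions.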