
Economics of Uncertainty and Information

Charles Roddie (cr250@cam.ac.uk) February 23, 2012

Part IIIa: Information


1 What is information?
What is information, and how does it relate to decisions? Information is knowledge, or something that gives knowledge. In our models, it is knowledge about the state of the world.[1]

[1] Information can also mean knowledge about what other people have done: we shall study this in later models.

2 Complete information vs. no information


A strong form of information: I know the state. Without information, I act on the basis of a probability distribution over the state. With complete information, I act knowing the state.

E.g. insurance. With probability $p$ I will face a health problem (state $B$), and with probability $1-p$ I will not (state $G$). I may decide a level of insurance $I$. This results in utility
$$v_0 = \max_I \; p\,u(B, I) + (1-p)\,u(G, I)$$
Suppose I conduct genetic testing which reveals whether I will have the health problem. This results in utility
$$v_1 = \max_{I_B, I_G} \; p\,u(B, I_B) + (1-p)\,u(G, I_G)$$
This is (weakly) greater than $v_0$ because it maximizes the same objective function $p\,u(B, I_B) + (1-p)\,u(G, I_G)$ without the constraint $I_G = I_B$.

Probably I will take no insurance in state $G$ and full insurance in state $B$. This assumes that the insurance company does not know I have done the testing! That would lead to an outcome worse than $v_0$, in which I am not able to buy insurance. We will see this when we study adverse selection.
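A minimal numerical sketch of this comparison (the CARA utility, loss size, and loaded premium below are illustrative assumptions, not from the notes):

    import numpy as np

    p, W, L = 0.2, 10.0, 5.0      # illustrative: P(state B), initial wealth, loss in B
    rate = 1.2                    # loading: price per unit of coverage is rate * p

    def u(w):
        return -np.exp(-0.5 * w)  # CARA vNM utility (an assumption)

    def wealth(state, I):
        premium = rate * p * I    # the premium is paid in both states
        return W - premium - (L - I if state == "B" else 0.0)

    Is = np.linspace(0.0, L, 501) # candidate coverage levels

    # No information: one coverage level I for both states
    v0 = max(p * u(wealth("B", I)) + (1 - p) * u(wealth("G", I)) for I in Is)

    # Complete information: choose I_B and I_G separately
    v1 = (p * max(u(wealth("B", I)) for I in Is)
          + (1 - p) * max(u(wealth("G", I)) for I in Is))

    print(v0, v1)  # v1 >= v0; here I_G = 0 and I_B = L are optimal, as above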

3 Partial information: why use Bayesian updating?


3.1 Bayesian agents
States $\Omega = \{\omega_1, \omega_2, \dots\}$. A Bayesian agent starts with a prior $P$ on states, assigning probability $P(\omega)$ to each state $\omega$. Suppose he receives information that tells him whether $\omega \in S$ or $\omega \notin S$: whether the event $S$ happened or not. Then if he learns that $\omega \in S$, he updates his probability distribution to the conditional probability:
$$P(\omega \mid S) := \begin{cases} P(\omega)/P(S) & \omega \in S \\ 0 & \omega \notin S \end{cases}$$
Equivalently, summing the above over $\omega \in T$:
$$P(T \mid S) := P(T \cap S)/P(S)$$
Similarly if he learns $\omega \notin S$. This gives his posterior beliefs.

Example 1. All men are mortal. Socrates is mortal. Therefore Socrates is more likely to be a man.

Proof.
$$P(\text{man} \mid \text{mortal}) = \frac{P(\text{man and mortal})}{P(\text{mortal})} = \frac{P(\text{man})}{P(\text{mortal})} \ge P(\text{man})$$
since all men are mortal (so the two events in the numerator coincide) and $P(\text{mortal}) \le 1$.

Bayes' rule follows from this; we shall come to it in Section 4. If $P(S) = 0$, then the formula does not apply: any posterior is compatible with Bayesian updating after observing a probability-zero event.
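A minimal sketch of this update rule for a finite state space (the states and probabilities below are illustrative):

    def condition(prior, S):
        """Posterior P(. | S) for a discrete prior, given an event S (a set of states)."""
        pS = sum(q for w, q in prior.items() if w in S)
        if pS == 0:
            raise ValueError("P(S) = 0: the updating formula does not apply")
        return {w: (q / pS if w in S else 0.0) for w, q in prior.items()}

    # Example 1: states record (man or not, mortal or not); all men are mortal
    prior = {("man", "mortal"): 0.3,
             ("not man", "mortal"): 0.5,
             ("not man", "not mortal"): 0.2}
    mortal = {w for w in prior if w[1] == "mortal"}
    print(condition(prior, mortal)[("man", "mortal")])  # 0.375 > 0.3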

3.2 Why be a Bayesian agent?


Both ascribing probabilities and doing Bayesian updating have justifications in terms of decision making.

Why start with a prior $P$? Savage's theory of expected utility with subjective probability is a justification.

Why update in a Bayesian way? It ensures consistent decisions for an EU maximizer:

If I do $\alpha$ (for example, a quantity of investment), I get utility $u(\omega, \alpha)$ depending on the state.
* This comes from an outcome $o(\omega, \alpha)$ depending on $\omega$ and $\alpha$ (e.g. wealth), leading to utility $\bar u(o(\omega, \alpha))$, where $\bar u$ is the vNM utility function.
Suppose I am an EU maximizer and make a plan in advance: do $\alpha_S$ if I learn $S$, and $\alpha_{\neg S}$ if I learn $\neg S$. (The symbol $\neg$ means "not".)
* Assume $P(S) \in (0, 1)$.
Then I maximize expected utility:
$$\sum_{\omega \in S} P(\omega)\, u(\omega, \alpha_S) + \sum_{\omega \notin S} P(\omega)\, u(\omega, \alpha_{\neg S})$$
This max is equivalent to maximizing the two expressions separately. Maximizing $\sum_{\omega \in S} P(\omega)\, u(\omega, \alpha_S)$ is equivalent to maximizing
$$\frac{1}{P(S)} \sum_{\omega \in S} P(\omega)\, u(\omega, \alpha_S) = \sum_{\omega} P(\omega \mid S)\, u(\omega, \alpha_S)$$
and similarly for learning $\neg S$.

So if I wait, learn that $S$, do a Bayesian update, and then take the EU-maximizing decision based on the conditional probability, I end up making the same decision.
* Similarly for learning $\neg S$.
Conclusion: if I am a Bayesian EU agent, I do not want to change my plans after observing information. We used binary information ($S$ or $\neg S$), but this holds for an arbitrary information structure (I learn which $S_i$ happened, where $S_1, S_2, \dots$ are events such that exactly one must happen).[2]

If an EU maximizer is non-Bayesian, the following sort of inconsistency is possible: I will listen to the (unreliable) weather report, and decide in advance that it is best to take an umbrella if it predicts rain, and not if it predicts sun.
[2] I.e. the $S_i$ are disjoint and $\bigcup_i S_i = \Omega$. The $S_i$ are then called a partition of $\Omega$.

But I know that when I see that it predicts sun, I will make a non-Bayesian update about the probability of rain and decide to take an umbrella anyway. So because I do not agree with the strategy of my future self, I make sure to put the umbrella in the car before listening to the weather report.
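A minimal sketch of the consistency result (the states, prior, and payoffs below are illustrative assumptions): the ex-ante optimal plan prescribes, for each signal realization, exactly the action that maximizes expected utility under the Bayesian posterior.

    from itertools import product

    states = [1, 2, 3, 4]
    P = {1: 0.1, 2: 0.4, 3: 0.3, 4: 0.2}   # prior (illustrative)
    S = {1, 2}                              # the event I may learn
    actions = ["a", "b", "c"]
    u = {(w, a): ((w - 2) ** 2 if a == "a" else w if a == "b" else 3 - w)
         for w in states for a in actions}  # arbitrary payoffs

    # Ex ante: choose the plan (alpha_S, alpha_notS) maximizing expected utility
    def plan_value(aS, aN):
        return (sum(P[w] * u[w, aS] for w in S)
                + sum(P[w] * u[w, aN] for w in states if w not in S))
    plan = max(product(actions, actions), key=lambda x: plan_value(*x))

    # Ex post: after learning S, maximize conditional (posterior) expected utility
    pS = sum(P[w] for w in S)
    post = max(actions, key=lambda a: sum(P[w] / pS * u[w, a] for w in S))

    print(plan[0] == post)  # True: Bayesian updating preserves the plan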

4 Signals and Bayes rule


Suppose we are interested in a random variable $X$ taking values in $\{x_1, x_2, \dots\}$.
* E.g. the distance to the nearest habitable planet, $X \in \{20, 200\}$ light years.
This will help us to take a decision $\alpha$, giving utility $u(x, \alpha)$.
* E.g. how much fuel to bring.
To help us evaluate the probability, we use another random variable called a signal, $S \in \{s_1, s_2, \dots\}$.
* E.g. the number of UFO sightings.
Often we know the distribution of $X$, and the distribution of $S$ conditional on $X$. Then the conditional probability is given by Bayes' rule:
$$P(X = x_i \mid S = s_j) = \frac{P(X = x_i \text{ and } S = s_j)}{P(S = s_j)} = \frac{P(S = s_j \mid X = x_i)\, P(X = x_i)}{P(S = s_j)}$$
where the denominator can be computed as $P(S = s_j) = \sum_i P(S = s_j \mid X = x_i)\, P(X = x_i)$.
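A minimal sketch of this computation (the prior and conditional distributions below are illustrative assumptions):

    # Illustrative prior over X (distance in light years) and signal likelihoods
    prior = {20: 0.7, 200: 0.3}            # P(X = x)
    lik = {20: {"few": 0.6, "many": 0.4},  # P(S = s | X = x)
           200: {"few": 0.9, "many": 0.1}}

    def posterior(s):
        # Bayes' rule, with P(S = s) computed by summing over the values of X
        pS = sum(lik[x][s] * prior[x] for x in prior)
        return {x: lik[x][s] * prior[x] / pS for x in prior}

    print(posterior("many"))  # {20: 0.903..., 200: 0.096...}: belief shifts toward 20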

5 The value of partial information


The DARPA Policy Analysis Market allowed betting on future foreign events of interest to the US government. The objective was to use the prices to inform the US government of the probabilities of these events. It was quickly defeated by political and media criticism. One of the criticisms was that the information would be unreliable. Can unreliable (partial) information lead to worse decisions? No, at least not for (Bayesian) EU maximizers.

5.1 Partial information is valuable


Continuing with the framework in Section 4, what is expected utility if we observe the signal?
$$v_1 = \max_{\alpha_1, \alpha_2, \dots} \; \sum_{\omega : S(\omega) = s_1} p_\omega\, u(X(\omega), \alpha_1) + \sum_{\omega : S(\omega) = s_2} p_\omega\, u(X(\omega), \alpha_2) + \cdots \quad (1)$$
What is expected utility if we do not observe the signal?
$$v_0 = \max_\alpha \; \sum_\omega p_\omega\, u(X(\omega), \alpha)$$

This is the same as maximizing formula (1) under the constraints $\alpha_1 = \alpha_2 = \cdots$. So $v_1 \ge v_0$, with strict inequality when it is not optimal to choose the same action for each signal. When information is used for decision making, it is valuable; otherwise it is not. (A numerical check is sketched below.)

Bernanke's bad news principle (Bernanke, QJE 1983), about the timing of binary investments. Suppose a firm is deciding whether or not to make an investment. Under normal conditions, it would make the investment. Suppose it waits and acquires information (economic news). The value of waiting (the option value) is governed by the likelihood (and distribution) of bad news: news that would cause the investment not to be made. So if bad news may come, it is valuable to wait, and not make current investments.
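A minimal numerical check of $v_1 \ge v_0$ (the joint distribution and payoffs below are illustrative assumptions):

    import numpy as np

    # Illustrative joint distribution: rows index x in {x1, x2}, columns s in {s1, s2}
    joint = np.array([[0.35, 0.15],   # P(X = x1, S = s)
                      [0.10, 0.40]])  # P(X = x2, S = s)
    U = np.array([[4.0, 1.0, 2.5],    # u(x1, alpha) for three actions
                  [0.0, 5.0, 2.5]])   # u(x2, alpha)

    # Without the signal: one action against the marginal distribution of X
    pX = joint.sum(axis=1)
    v0 = max(pX @ U[:, a] for a in range(U.shape[1]))

    # With the signal: the best action separately for each realization s
    v1 = sum(max(joint[:, s] @ U[:, a] for a in range(U.shape[1]))
             for s in range(joint.shape[1]))

    print(v0, v1)  # v1 >= v0; strictly here, since the optimal action varies with s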

5.1.1 The value function associated with the probability distribution of X is convex

For simplicity, take $X \in \{x_1, x_2\}$. Suppose the probability of $x_2$ is $p$.
* $X = x_2$: it will rain, a company will perform well, a job candidate will perform well in the job...
This results in a value
$$v_p = \max_\alpha \; (1 - p)\, u(x_1, \alpha) + p\, u(x_2, \alpha)$$
This is convex. Why?

Argument 1

Suppose there is a signal $S \in \{s, s'\}$, where $S = s$ happens with probability $1 - \lambda$ and $S = s'$ with probability $\lambda$; conditional on $S = s$ the probability of $x_2$ is $p$, and conditional on $S = s'$ it is $p'$. Then the unconditional probability of $x_2$ is $(1 - \lambda) p + \lambda p'$.

If the agent does not observe the signal, he has resulting utility $v_{(1-\lambda) p + \lambda p'}$. If the agent does observe the signal, he has resulting utility $(1 - \lambda)\, v_p + \lambda\, v_{p'}$. Since information is valuable, the second quantity is (weakly) greater than the first:
$$(1 - \lambda)\, v_p + \lambda\, v_{p'} \ge v_{(1-\lambda) p + \lambda p'}$$
So $v$ is convex.


[Figure 1: $v_p$ is convex]

Argument 2

Fixing a decision $\alpha$, expected utility $(1 - p)\, u(x_1, \alpha) + p\, u(x_2, \alpha)$ is linear in $p$.
* This has nothing to do with risk preferences!
Maximizing over $\alpha$, the value $v_p$ is the maximum of a collection of linear functions, which is convex in $p$. Convexity of $v_p$ is another way to think about information being valuable.
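A minimal sketch of Argument 2 (the three actions and their payoffs below are illustrative): each action contributes a line in $p$, and $v_p$ is their upper envelope, which a midpoint check confirms is convex (this reproduces the shape in Figure 1).

    import numpy as np

    # Illustrative payoffs u(x, alpha): rows are actions, columns are u(x1,.), u(x2,.)
    U = np.array([[5.0, 0.0],   # best when x1 is likely
                  [3.0, 3.0],   # hedged
                  [0.0, 5.0]])  # best when x2 is likely

    ps = np.linspace(0.0, 1.0, 101)
    # v_p = max over actions of (1-p) u(x1,a) + p u(x2,a): an upper envelope of lines
    v = np.max((1 - ps)[:, None] * U[:, 0] + ps[:, None] * U[:, 1], axis=1)

    # Midpoint check of convexity: v((p + p')/2) <= (v(p) + v(p'))/2
    i, j = 10, 90
    assert v[(i + j) // 2] <= (v[i] + v[j]) / 2 + 1e-12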
