
AN INTRODUCTION TO GAME THEORY

A game is any situation in which players make decisions that take into account each other's actions and responses. Game Theory analyzes the way two or more players or parties choose actions or strategies that jointly affect each participant.

In economics, the study of games provides an effective tool, as games are a convenient way to model the strategic interactions among economic agents, and many economic issues involve such interaction. For example, the behavior of firms in imperfectly competitive markets, behavior in auctions (investment banks bidding on treasury bills) and the responses of different interest groups in economic negotiations can all be studied, analyzed and understood using the theory of games.

History of the Theory of Games:

The very first game-theoretic ideas can be traced to the 18th century, but the major development of the theory began in the 1920s with the work of the mathematicians Emile Borel (1871-1956) and John von Neumann (1903-57). A decisive event in the development of the theory was the publication in 1944 of the book "Theory of Games and Economic Behavior" by von Neumann and Oskar Morgenstern, which established the foundations of the field. In the early 1950s, John F. Nash developed the key concept of Nash equilibrium and initiated the study of bargaining. Soon after Nash's work, models based on game theory began to be used in economic theory and political science, and psychologists began studying how human subjects behave in experimental games. In the 1970s game theory was first used as a tool in evolutionary biology. Subsequently, the methods of game theory have come to dominate microeconomic theory and are used also in many other fields of economics and a wide range of other social and behavioral sciences. The 1994 Nobel Prize in economics was awarded to the game theorists John C. Harsanyi (1920-2000), John F. Nash (b. 1928) and Reinhard Selten (b. 1930).

STRATEGIC GAMES AND EQUILIBRIUM

The Theory of Games is based on the Theory of Rational Choice. This theory states that a decision maker chooses the best action according to her preferences, among all actions available to her. No qualitative restriction is placed on the decision-maker's preferences. Her rationality lies in the consistency of her decisions when faced with different sets of available actions, not in the nature of her likes and dislikes. Formally, the theory of rational choice states:

“The action chosen by a decision-maker is at least as good, according to her preferences, as every other available action.”

Strategic Games

The choice of a particular move or step in a game, taking into account the responses and actions of other players, or the planned movement of a particular player in a game-like setup, is termed a Strategy.

Games may be strategic or non-strategic. Examples of non-strategic games are games of pure chance such as lotteries, while games such as chess are based on strategic interaction. A strategic game (with ordinal preferences) consists of a set of players, a set of actions for each player and preferences for each player over the set of action profiles.

A very wide range of situations may be modeled as strategic games. For example, the players may be firms, the actions prices, and the preferences a reflection of the firms' profits.

In considering possible strategies, the simplest case is that of a Dominant Strategy. This situation arises when one player has a best strategy no matter what strategy the other player follows.

Analyzing the situation of a duopoly price war, we assume two firms, Berkley and Swells. In a duopoly price war, the market is supplied by two firms that are deciding whether to engage in economic warfare of ruinously low prices. Assuming that the two firms have similar cost and demand structures, each firm's profits in such a game will depend on its rival's strategy as well as on its own. The interaction between the two firms can be represented using a pay-off matrix:

                              Swells' price
                        Normal price         Price war
Berkley's price
  Normal price          A) $10, $10          B) -$10, -$100
  Price war             C) -$100, -$10       D) -$50, -$50

(Each cell shows Berkley's profit first, then Swells' profit.)

According to the above pay-off matrix, each duopolist faces four possible outcomes. The numbers in the cells show the profits earned by each firm in each of the four outcomes. Here, charging the normal price is a dominant strategy for both firms. When both firms (or all players, in the generalized case) have a dominant strategy, the outcome is said to be a Dominant Equilibrium.
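A minimal sketch in Python (using the profit figures from the matrix above) shows how the dominance check works:

    # Payoffs for the duopoly price-war game above.
    # Each cell maps (Berkley's strategy, Swells' strategy) to
    # (Berkley's profit, Swells' profit).
    STRATEGIES = ["Normal price", "Price war"]
    PAYOFFS = {
        ("Normal price", "Normal price"): (10, 10),      # cell A
        ("Normal price", "Price war"):    (-10, -100),   # cell B
        ("Price war",    "Normal price"): (-100, -10),   # cell C
        ("Price war",    "Price war"):    (-50, -50),    # cell D
    }

    def is_dominant_for_berkley(candidate):
        # True if `candidate` does at least as well as every other strategy
        # of Berkley's against each of Swells' strategies.
        return all(
            PAYOFFS[(candidate, s)][0] >= PAYOFFS[(other, s)][0]
            for s in STRATEGIES for other in STRATEGIES
        )

    def is_dominant_for_swells(candidate):
        return all(
            PAYOFFS[(b, candidate)][1] >= PAYOFFS[(b, other)][1]
            for b in STRATEGIES for other in STRATEGIES
        )

    print(is_dominant_for_berkley("Normal price"))  # True
    print(is_dominant_for_swells("Normal price"))   # True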

However, in most situations, a dominant equilibrium does not exist. Such cases were studied by John Forbes Nash. He derived the solution for games without a clear dominant strategy by introducing the concept of Nash Equilibrium.

NASH EQUILIBRIUM

In a game, the best action for any given player depends, in general, on the other players' actions. So when choosing an action a player must have in mind the actions the other players will choose. That is, she must form a belief about the other players' actions. Each player's belief is derived from her past experience playing the game, and this experience is assumed to be sufficiently extensive that she knows how her opponents will behave. No one tells her the actions her opponents will choose, but her previous involvement in the game leads her to be sure of these actions.

A Nash equilibrium is an action profile a* with the property that no player i can do better by choosing an action different from a*_i, given that every other player j adheres to a*_j.

In the idealized setting in which the players in any given play of the game are drawn randomly from a collection of populations, a Nash equilibrium corresponds to a steady state. If, whenever the game is played, the action profile is the same Nash equilibrium a*, then no player has a reason to choose any different action. Expressed differently, Nash equilibrium embodies a stable social norm: if everyone else adheres to it, no individual wishes to deviate from it. The second component of the theory of Nash equilibrium, namely that the players' beliefs about each other's actions are correct, implies in particular that two players' beliefs about a third player's action are the same. For this reason, the condition is sometimes described as requiring that the players' expectations are coordinated. Simply stated, a Nash Equilibrium is one in which no player can improve his or her payoff given the other player's strategy. Each strategy is a best response against the other player's strategy. This equilibrium is also called the non-cooperative equilibrium because each party chooses the strategy that is best for itself, without collusion or cooperation and without regard for the welfare of society or any other party.

Prisoner’s Dilemma: an example of Nash Equilibrium

Two suspects in a major crime are held in separate cells. There is enough evidence to convict each of them of a minor offense, but not enough evidence to convict either of them of the major crime unless one of them acts as an informer against the other (finks). If they both stay quiet, each will be convicted of the minor offense and spend one year in prison. If one and only one of them finks, she will be freed and used as a witness against the other, who will spend four years in prison. If they both fink, each will spend three years in prison.

This situation may be modeled as a strategic game:

Players: The two suspects.

Actions: Each player's set of actions is {Quiet, Fink}.

Preferences: Suspect 1's ordering of the action profiles, from best to worst, is (Fink, Quiet) (she finks and suspect 2 remains quiet, so she is freed), (Quiet, Quiet) (she gets one year in prison), (Fink, Fink) (she gets three years in prison), (Quiet, Fink) (she gets four years in prison). Suspect 2's ordering is (Quiet, Fink), (Quiet, Quiet), (Fink, Fink), (Fink, Quiet).

Representing the players' action profiles in the form of a payoff matrix, we have:

                      Suspect 2
                  Quiet          Fink
Suspect 1
  Quiet           1, 1           4, 0
  Fink            0, 4           3, 3

(Entries are years in prison for Suspect 1 and Suspect 2 respectively; fewer years is better.)

The Prisoner's Dilemma models a situation in which there are gains from cooperation (each player prefers the outcome in which both choose Quiet to the one in which both choose Fink) but each player has an incentive to free ride (choose Fink) whatever the other player does.

The action pair (Fink, Fink) is a Nash equilibrium because:

  • i) Given that player 2 chooses Fink, player 1 is better off choosing Fink than Quiet.
  • ii) Given that player 1 chooses Fink, player 2 is better off choosing Fink than Quiet.

Here, according to Nash equilibrium, the incentive to free ride eliminates the possibility of occurrence of the mutually desirable outcome (Quiet, Quiet).
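This best-response check can be automated with a brute-force search over pure action profiles (a minimal sketch, with prison terms written as negative payoffs so that larger numbers are better):

    import itertools

    ACTIONS = ["Quiet", "Fink"]
    # Payoffs are minus the years in prison, so larger is better.
    PAYOFF = {
        ("Quiet", "Quiet"): (-1, -1),
        ("Quiet", "Fink"):  (-4,  0),
        ("Fink",  "Quiet"): ( 0, -4),
        ("Fink",  "Fink"):  (-3, -3),
    }

    def is_nash(a1, a2):
        # Neither player can gain by deviating unilaterally.
        best_1 = all(PAYOFF[(a1, a2)][0] >= PAYOFF[(d, a2)][0] for d in ACTIONS)
        best_2 = all(PAYOFF[(a1, a2)][1] >= PAYOFF[(a1, d)][1] for d in ACTIONS)
        return best_1 and best_2

    equilibria = [pair for pair in itertools.product(ACTIONS, ACTIONS) if is_nash(*pair)]
    print(equilibria)   # [('Fink', 'Fink')]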

Problems associated with Nash equilibrium:

1. A problem may have more than one Nash equilibrium: a game with a payoff matrix like the one shown below has more than one Nash equilibrium. This can lead to confusion in decision making.

             A          B
    A       2, 2       0, 0
    B       0, 0       1, 1

2. There are games that have no Nash equilibrium: often, in the case of pure strategies, a clear Nash equilibrium does not exist. In such cases, each player prefers to choose the action opposite to the action taken by the opponent.

As previously observed in the Prisoner's Dilemma example, the Nash equilibrium may not be Pareto efficient, as it is based on each player's contemplation of the probable actions of the other players.

TYPES OF GAMES

Repeated Games

Real life is a bigger game in which what a player does early on can affect what others choose to do later on. Games may be repeated a finite number of times or infinitely. Consider a game G. Let G be played several times (perhaps an infinite number of times) and award each player a payoff that is the sum (perhaps discounted) of the payoffs she received in each period from playing G. This sequence of stage games is itself a game: a repeated game, or supergame.

In repeated games, the sequential nature of the relationship allows for the adoption of strategies that are contingent on the actions chosen in previous plays of the game. Most contingent strategies are of the type known as "trigger" strategies. For example, in the Prisoner's Dilemma:

Initially play Quiet. If your opponent plays Fink, then play Fink in the next round. If your opponent plays Quiet, then play Quiet in the next round. This is known as the "tit for tat" strategy. When the game is repeated an infinite number of times, each player can influence the other player's strategy; in fact, the threat of future non-cooperation often enforces the adoption of the Pareto efficient strategy.
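A small simulation can illustrate the tit-for-tat trigger strategy (a sketch only: the payoffs restate the prison terms above as negative numbers, and the "always Fink" opponent is a hypothetical benchmark):

    PAYOFF = {("Quiet", "Quiet"): (-1, -1), ("Quiet", "Fink"): (-4, 0),
              ("Fink", "Quiet"): (0, -4),   ("Fink", "Fink"): (-3, -3)}

    def tit_for_tat(opp_history):
        # Start with Quiet, then copy the opponent's previous move.
        return "Quiet" if not opp_history else opp_history[-1]

    def always_fink(opp_history):
        return "Fink"

    def play(strategy_1, strategy_2, rounds=10):
        h1, h2, total_1, total_2 = [], [], 0, 0
        for _ in range(rounds):
            a1, a2 = strategy_1(h2), strategy_2(h1)   # each sees the other's history
            p1, p2 = PAYOFF[(a1, a2)]
            total_1, total_2 = total_1 + p1, total_2 + p2
            h1.append(a1)
            h2.append(a2)
        return total_1, total_2

    print(play(tit_for_tat, tit_for_tat))   # (-10, -10): cooperation is sustained
    print(play(tit_for_tat, always_fink))   # (-31, -27): cooperation breaks down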

For a finite number of rounds, it is notable that if there is no way to enforce cooperation in the last round, cooperation cannot be enforced in the other rounds. Players will be willing to cooperate only if there is a possibility of future cooperation.

In repeated games, information plays a significant role. Players have perfect information if they know exactly what has happened every time a decision needs to be made, e.g. Chess.

Sequential and Simultaneous Games

Games in which players choose their actions simultaneously are called simultaneous move games. The example of the Prisoner's Dilemma discussed previously is a simultaneous move game. In this case, each player must anticipate what the opponents will do, knowing that the opponents are anticipating the same.

Games where players choose actions in a particular sequence are called sequential move games. Examples include games of chess, bargaining and negotiations. In such games, the players must be prepared to react in an efficient manner to each of the opponents’ possible responses. Many sequential move games have time limits and deadlines. Some strategic interactions may include sequential as well as simultaneous moves.


A two-stage two-player sequential game

Assume that the two players of a sequential game, P1 and P2, take turns playing the game as shown above. It is usually assumed that P1 always starts, followed by P2, then again P1, and so on. Player alternations continue until the game ends. The model reflects the rules of many popular games such as chess or poker. As each player takes a turn, the player chooses from a nonempty, finite set of actions. At the end of the game, a cost for P1 is incurred based on the sequence of actions chosen by each player. The cost is interpreted as a reward for P2. The amount of information that each player has when making a decision must be specified. This is usually expressed by indicating what portions of the action histories are known.

In the game tree corresponding to the example above, every vertex corresponds to a point at which a decision needs to be made by one player. Each edge emanating from a vertex represents an action. The root of the tree indicates the beginning of the game, which usually means that P1 chooses an action. The leaves of the tree represent the end of the game, the points at which a cost is received; the cost is usually shown below each leaf. In such games, the information available to the players prior to making any moves should be specified.

The representation of a sequential move game in the form of a decision tree is referred to as the extensive form. A common example of a sequential move game is the game of entry deterrence. In this game we consider a monopolist facing the threat of entry by a new firm. The new firm has to decide whether or not to enter the market, while the original monopolist has to decide whether or not to cut its price in response.
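Since the text gives no numbers for the entry-deterrence game, the sketch below uses hypothetical payoffs purely to show how such a game tree can be solved by backward induction:

    # A leaf is a pair of payoffs (entrant, incumbent); a decision vertex is
    # (mover, {action: subtree}).  The payoff numbers are hypothetical.
    TREE = ("entrant", {
        "stay out": (0, 10),                      # monopolist keeps the market
        "enter": ("incumbent", {
            "accommodate": (4, 5),                # share the market
            "price war":   (-2, 1),               # ruinously low prices hurt both
        }),
    })

    PLAYER_INDEX = {"entrant": 0, "incumbent": 1}

    def backward_induction(node):
        # Return (payoffs, sequence of actions) chosen by rational players.
        if isinstance(node[1], dict):             # decision vertex
            mover, branches = node
            best = None
            for action, subtree in branches.items():
                payoffs, path = backward_induction(subtree)
                if best is None or payoffs[PLAYER_INDEX[mover]] > best[0][PLAYER_INDEX[mover]]:
                    best = (payoffs, [action] + path)
            return best
        return node, []                           # leaf

    print(backward_induction(TREE))   # ((4, 5), ['enter', 'accommodate'])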

Zero sum games

Zero-sum games are games in which the amount of "winnable goods" is fixed. Whatever is gained by one actor is therefore lost by the other actor: the sum of gains (positive) and losses (negative) is zero. Games of competition such as soccer and baseball are zero-sum games. Games whose payoffs do not sum to zero are non-zero-sum games.

APPLICATIONS OF GAME THEORY: COOPERATION, COMPETITION, COEXISTENCE AND COMMITMENT

Mixed strategies in Nash equilibrium

A mixed strategy is "the probability distribution over (some or all of) a player's available pure strategies". Consider the game of matching pennies: Bill and Ben simultaneously show the faces of a coin each. If the faces are the same, Ben wins both coins; if the faces are different, Bill wins both coins.


Here, there are two pure strategies: to play Head or to play Tail. A mixed strategy is a probability distribution (p, 1-p), where p = prob(Head) and 1-p = prob(Tail). Bill thinks Ben will play Heads with a probability of p and Tails with a probability of (1-p). Ben thinks Bill will play Heads with a probability of q and Tails with a probability of (1-q).


Expected payoffs for Bill:
  Head: (-1)p + (1)(1-p) = 1 - 2p
  Tail: (1)p + (-1)(1-p) = 2p - 1

Expected payoffs for Ben:
  Head: (1)q + (-1)(1-q) = 2q - 1
  Tail: (-1)q + (1)(1-q) = 1 - 2q

Ben's equilibrium mixing probability can be derived by equating Bill's expected payoffs from playing Heads and Tails: 1 - 2p = 2p - 1, so p = 1/2. Thus, Bill is indifferent between playing Heads and Tails whenever Ben plays Heads with probability 1/2. If p ≠ 1/2, Bill would not be indifferent: if p < 1/2, Bill will play Heads (at p = 0, Heads gives 1 - 2p = 1 while Tails gives 2p - 1 = -1), and if p > 1/2, Bill will play Tails (at p = 1, Heads gives -1 while Tails gives 1). Similarly for Ben, 2q - 1 = 1 - 2q gives q = 1/2; if q < 1/2, Ben will play Tails, and if q > 1/2, Ben will play Heads. There is a unique Nash equilibrium, in which Bill plays (q, 1-q) = (1/2, 1/2) and Ben plays (p, 1-p) = (1/2, 1/2).

Games like poker essentially require a mixed strategy.
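The indifference argument can be verified numerically with a short sketch (assuming the ±1 payoffs of matching pennies, with each cell written as (Bill's payoff, Ben's payoff)):

    # Matching pennies: Ben wins when the faces match, Bill wins when they differ.
    PAYOFF = {("Head", "Head"): (-1, 1), ("Head", "Tail"): (1, -1),
              ("Tail", "Head"): (1, -1), ("Tail", "Tail"): (-1, 1)}

    def expected_bill(bill_action, p):
        # Bill's expected payoff when Ben plays Head with probability p.
        return p * PAYOFF[(bill_action, "Head")][0] + (1 - p) * PAYOFF[(bill_action, "Tail")][0]

    def expected_ben(ben_action, q):
        # Ben's expected payoff when Bill plays Head with probability q.
        return q * PAYOFF[("Head", ben_action)][1] + (1 - q) * PAYOFF[("Tail", ben_action)][1]

    # At p = q = 1/2 each player is indifferent between Head and Tail,
    # which is exactly the mixed-strategy equilibrium condition.
    print(expected_bill("Head", 0.5), expected_bill("Tail", 0.5))   # 0.0 0.0
    print(expected_ben("Head", 0.5), expected_ben("Tail", 0.5))     # 0.0 0.0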

COORDINATION GAMES

Coordination games are those in which the corresponding payoffs of the players are highest when the players coordinate their actions or strategies. Some common examples of games of coordination are:

Battle of the Sexes

The example of battle of the sexes consists of a fictional couple. The husband would most of all like to go to the football game. The wife would like to go to the opera. Both would prefer to go to the same place rather than different ones. If they cannot communicate, where should they go?

In the payoff matrix below, the wife chooses a row and the husband chooses a column.

This representation does not account for the additional harm that might come from going to different locations and going to the wrong one.

 

                 Opera        Football
  Opera           3, 2          0, 0
  Football        0, 0          2, 3

Battle of the Sexes

This game has two pure strategy Nash equilibria: one where both go to the opera and another where both go to the football game. There is also a Nash equilibrium in mixed strategies, in which each player attends his or her preferred event with probability 3/5. However, as they prefer going together, they choose one of the first two equilibria depending on other variable factors, such as the distance of the football ground and the opera theatre from their house.
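The 3/5 figure can be checked by an indifference argument (a quick sketch assuming the payoffs in the matrix above):

    # Suppose the wife attends the opera with probability w.  The husband's
    # expected payoff is 2*w from choosing Opera and 3*(1 - w) from Football.
    # He is indifferent exactly when these are equal, i.e. w = 3/5.
    w = 3 / 5
    print(2 * w, 3 * (1 - w))   # 1.2 1.2 -> equal, so w = 3/5 makes him indifferent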

Chicken

Two drivers are involved in a game in which both speed towards each other along a narrow path. If they keep driving straight, they would crash into each other, but whoever swerves first in order to avoid the crash loses face. Here, there are two pure strategy Nash equilibria (A swerves, B doesn’t) and (B swerves, A doesn’t). The players are also aware that if one of them keeps driving straight, the other will chicken out. Hence, at the risk of crashing, the players may choose to keep driving straight, hoping that the other will swerve.

GAMES OF COMPETITION

Zero-sum games correspond to a situation of pure competition. Taking the example of a game of soccer played between Alex and Ben, we assume that Ben is taking a penalty kick: he can kick to the left or to the right. Alex, the opposing goalkeeper, either defends the left or the right side. The following payoff matrix corresponds to such a situation:

                              Alex
                     Defend Left      Defend Right
  Ben   Kick left     100, -100         160, -160
        Kick right    180, -180          40, -40

(Each cell shows Ben's payoff first, then Alex's.)

The distinctive feature of these games of competition is that the payoffs in each cell sum to zero. In the above and similar cases, information plays a significant role: if either of the players knows the other's strategy, he will have a tremendous advantage. If Ben kicks left with probability P, then his expected payoff is 100P + 180(1-P) when Alex defends left and 160P + 40(1-P) when Alex defends right. While Ben tries to maximize his payoff, Alex tries to minimize Ben's payoff. Equilibrium thus occurs at the point E, as shown in the figures below, depicting Ben's and Alex's strategies respectively:

[Figure: Ben's expected payoff (percent success) plotted against his probability of kicking left, P; the payoff lines for Alex defending left and right intersect at the equilibrium point E.]

[Figure: Alex's strategy plotted against his probability of jumping left, Q, with the equilibrium at point E.]

The corresponding values of P and Q represent the Nash equilibrium; both players are optimizing at this point. The values of P and Q can be found algebraically from the equations 100P + 180(1-P) = 160P + 40(1-P) and 100Q + 160(1-Q) = 180Q + 40(1-Q). The equilibrium values are P = 0.7 and Q = 0.6.
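As a quick check (a minimal sketch using the equations quoted above), the equilibrium mixing probabilities can be computed directly:

    def equalizing_mix(a, b, c, d):
        # Probability p solving a*p + b*(1-p) = c*p + d*(1-p).
        return (d - b) / ((a - b) - (c - d))

    # Alex's indifference condition pins down Ben's kicking probability P,
    # and Ben's indifference condition pins down Alex's probability Q.
    P = equalizing_mix(100, 180, 160, 40)
    Q = equalizing_mix(100, 160, 180, 40)
    print(P, Q)   # 0.7 0.6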

COEXISTENCE: the Hawk-Dove game

The Hawk-Dove game describes a strategic interaction in which the players face the problem of whether to share or fight over an object. Fighting is considered a hawk strategy, while peaceful sharing is considered a dove strategy. When both indulge in fighting, they are badly wounded. The dilemma in this case is that if all players played dove, it would be beneficial for one player to defect and play hawk. Using the corresponding probabilities of a given payoff matrix, a Nash equilibrium can be reached in such cases.
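The text does not give a specific Hawk-Dove matrix, so as a purely hypothetical illustration take a prize worth V = 2 and a fighting cost of C = 4; the equilibrium probability of playing hawk can then be computed from the same indifference logic used above:

    # Hypothetical Hawk-Dove payoffs for the row player:
    #   (Hawk, Hawk) = (V - C)/2,  (Hawk, Dove) = V,
    #   (Dove, Hawk) = 0,          (Dove, Dove) = V/2.
    V, C = 2, 4

    # If the opponent plays Hawk with probability h, equating the expected
    # payoffs of Hawk and Dove gives h = V / C.
    h = V / C
    hawk_payoff = h * (V - C) / 2 + (1 - h) * V
    dove_payoff = (1 - h) * V / 2
    print(h, hawk_payoff, dove_payoff)   # 0.5 0.5 0.5 -> both strategies do equally well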

COMMITMENT

In the case of sequential move games, the players may choose a commitment strategy in order to achieve a mutually favorable outcome. The committed choice must be irreversible and observable by all the players. Problems of commitment apply to real-life situations such as saving for retirement. The two strategies faced by the player "Old" (ageing persons) are to save or to squander. The other player, "Young", can choose between supporting their elders or saving for their own retirement. A corresponding payoff matrix is:

                          Young
                   Support        Not support
  Old   Save        3, -1            1, 0
        Squander    2, -1           -2, -2

(Each cell shows Old's payoff first, then Young's.)

There are two Nash equilibria. If Old chooses "Save", Young will choose "Not support"; but if Old chooses "Squander", Young will choose "Support". The older generation is aware that if they do not save, the younger generation will support them. Thus, it is optimal for them to squander.
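Using the payoff matrix above, the commitment logic can be sketched as follows (Old commits to a choice first and Young best-responds; the dictionaries simply restate the matrix):

    YOUNG = {"Save":     {"Support": -1, "Not support": 0},
             "Squander": {"Support": -1, "Not support": -2}}
    OLD   = {("Save", "Support"): 3,     ("Save", "Not support"): 1,
             ("Squander", "Support"): 2, ("Squander", "Not support"): -2}

    def young_best_response(old_action):
        return max(YOUNG[old_action], key=YOUNG[old_action].get)

    # Old anticipates Young's reaction and picks the better committed choice.
    best_old = max(["Save", "Squander"],
                   key=lambda a: OLD[(a, young_best_response(a))])
    print(best_old, young_best_response(best_old))   # Squander Support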

The Bargaining Problem

The bargaining problem is the problem of the division of a dollar between two players. The Nash bargaining model specifies properties that a reasonable solution should satisfy and then proves that there is a unique outcome which satisfies the stated axioms. According to the Rubinstein bargaining model, if A and B have to divide a dollar, A makes an offer in the first period; B either accepts or comes up with a counter-offer in the next period. In the final period, A either accepts or makes a final offer. They must come to an agreement within a specified time period, say 3 days, failing which neither gets anything. Assume that A discounts future payoffs at a rate of p while B discounts payoffs at the rate of q. The game will end in the third period if A offers (1, 0) to B, as he can make a take-it-or-leave-it offer. The game will end in the 2nd period if B makes an offer of (p, 1-p). In the 1st period, the game ends only if A makes an offer of [1 - q(1-p), q(1-p)]. With an infinite time horizon, A's payoff = (1-q)/(1-pq) and B's payoff = q(1-p)/(1-pq). In the figure below, the infinite-horizon case is shown, with A's payoff and B's payoff on the axes; the diagonal parallel lines represent the total payoff available in each period, whose value is 1 in the first period, p in the second period, and so on, and the broken line connects the subgame equilibrium outcomes at each time period.

[Figure: the infinite-horizon bargaining case, with A's payoff and B's payoff on the axes.]
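A minimal sketch of the backward-induction logic described above (the function and the sample discount rates p = 0.9 and q = 0.8 are illustrative assumptions, not values from the text):

    def alternating_offers(p, q, periods=3):
        # Backward induction for the alternating-offers game: A proposes in odd
        # periods, B in even ones; p and q are A's and B's discount rates.
        # Returns (A's share, B's share) agreed on in period 1.
        a_share = 1.0 if periods % 2 == 1 else 0.0   # the last proposer takes everything
        for t in range(periods - 1, 0, -1):
            if t % 2 == 1:
                # A proposes: B must get at least q times B's continuation share.
                a_share = 1.0 - q * (1.0 - a_share)
            else:
                # B proposes: A must get at least p times A's continuation share.
                a_share = p * a_share
        return a_share, 1.0 - a_share

    p, q = 0.9, 0.8
    print(alternating_offers(p, q, periods=3))   # approx (0.92, 0.08), i.e. [1-q(1-p), q(1-p)]
    # Infinite-horizon shares quoted in the text:
    print((1 - q) / (1 - p * q), q * (1 - p) / (1 - p * q))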

The scope of game theory and its applications in several fields is tremendous. As new applications and models are developed, facilitating economists, biologists, sports persons, diplomats and others in strategic decision making, game theory, first popularized in the early 20th century, continues to intrigue students and researchers from various fields.
