
Curbing Negative Externalities

To many economists interested in environmental problems, the key is to internalise external
costs and benefits, ensuring that those who create the externalities take them into account when
making decisions.

Pollution Taxes

One common approach to adjust for externalities is to tax those who create negative
externalities.

This is known as "making the polluter pay".

Introducing a tax increases the private cost of consumption or production, and so ought to
reduce demand for, and output of, the good that is creating the externality.

Some economists argue that the revenue from pollution taxes should be 'ring-fenced' and
allocated to projects that protect or enhance our environment.

For example, the money raised from a congestion charge on vehicles entering busy
urban roads, might be allocated towards improving mass transport services; or the
revenue from higher taxes on cigarettes might be used to fund better health care
programmes.

Examples of Environmental Taxes include:


1. The Landfill Tax - this tax aims to encourage producers to produce less waste, to recover
more value from waste (for example through recycling or composting), and to use
environmentally friendly methods of waste disposal

2. The Congestion Charge - this is a high-profile environmental charge introduced in
February 2003. It is designed to cut traffic congestion in inner London by charging
motorists £8 per day to enter the central charging zone

3. Plastic Bag Tax - a tax on plastic bags has not been introduced in England and Wales

4. Vehicle Excise Duty (VED) - VED starts from a theoretical 'nil' rate and rises with the
carbon emissions of the vehicle

There are two market-based solutions to controlling pollution:

fiscal measures (price-based)

trading in emissions quotas (rights-based)


Fiscal measures

A common approach to aligning the private and social costs of negative externalities is through a
tax on the polluter based on an evaluation of the damage caused. In the diagram below, X is the
level of output if the costs of pollution are ignored and K is the socially optimal production level.
Two points should be noted about this diagram. First, the optimal level of pollution is not
necessarily zero, contrary to what many environmentalists argue. Second, the environment often
has some assimilative capacity, so that up to a certain level of production no pollution costs are
incurred.

The government could impose a simple flat-rate tax QZ on output which would remove the
incentive to increase production beyond the socially-optimal level K. Note that a flat-rate tax can
be criticised as unjust because (a) it taxes output over levels OZ which do not generate external
costs, and (b) it has a uniform effect on all output YK, despite the fact that marginal increases in
output above Z add increasing marginal amounts of external costs. A tax system which would
overcome these objections is shown by the curve RK. Such a tax would be levied on pollution
emissions above the level where they incur environmental costs.
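
The difference between the flat-rate tax and the emissions-based tax RK can be sketched numerically. All numbers below are hypothetical, chosen only to illustrate the two objections in the text: a flat-rate tax charges harmless output, and it ignores the fact that marginal external costs rise with output beyond the threshold.

```python
# Illustrative sketch (hypothetical numbers): a flat-rate output tax versus
# a tax levied only on units beyond the assimilative threshold z.

def external_cost(q, z=40):
    """Marginal external cost of unit q: zero up to the threshold z (the
    environment's assimilative capacity), then rising linearly beyond it."""
    return max(0, 2 * (q - z))

# A flat-rate tax charges every unit the same amount, even the first
# z units, which cause no environmental damage at all.
flat_tax = 20
flat_bill = lambda q: flat_tax * q

# An emissions-based tax charges only the units beyond z, and charges
# each extra unit its actual (rising) marginal external cost.
def marginal_bill(q, z=40):
    return sum(external_cost(u, z) for u in range(z + 1, q + 1))

q = 60
print(flat_bill(q))      # 1200: every unit taxed, including harmless ones
print(marginal_bill(q))  # 420: only units above the threshold taxed
```

The emissions-based bill is both smaller for harmless output (zero up to the threshold) and steeper at the margin, which is exactly why the curve RK avoids the two criticisms of the flat-rate tax.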

In law and economics, the Coase theorem (pronounced /kos/) describes the economic
efficiency of an economic allocation or outcome in the presence of externalities. The theorem
states that if trade in an externality is possible and there are sufficiently low transaction costs,
bargaining will lead to a Pareto efficient outcome regardless of the initial allocation of property.
In practice, obstacles to bargaining or poorly defined property rights can prevent Coasian
bargaining. This "theorem" is commonly attributed to Nobel Prize laureate Ronald Coase during
his tenure at the University of Chicago. However, Coase himself stated that the theorem was
based on perhaps four pages of his 1960 paper "The Problem of Social Cost",[1] and that the
"Coase theorem" is not about his work at all.[2]

What is the 'Coase Theorem'

The Coase theorem is a legal and economic theory which affirms that, where there are complete
competitive markets with no transaction costs, an efficient set of inputs and outputs will be
selected and production will be optimally distributed, regardless of how property rights are
divided. The Coase theorem asserts that when property rights are involved, parties naturally
gravitate toward the most efficient and mutually beneficial outcome.

The Coase theorem states that where there is a conflict of property rights, the involved parties
can bargain or negotiate terms that are more beneficial to both parties than the outcome of any
assigned property rights. The theorem also asserts that in order for this to occur, bargaining must
be costless; if there are costs associated with bargaining (such as meetings or enforcement), it
will affect the outcome. The Coase theorem shows that where property rights are concerned,
involved parties do not necessarily consider how the property rights are granted if they can trade
to produce a mutually advantageous outcome.

This theorem was developed by Ronald Coase when considering the regulation of radio
frequencies. He posited that regulating frequencies was not required because stations with the
most to gain by broadcasting on a particular frequency would have an incentive to pay other
broadcasters not to interfere.
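
The bargaining logic can be made concrete with a small numeric sketch. The payoffs here are hypothetical: a factory gains `profit` by polluting and a neighbour suffers `damage` from the pollution. With costless bargaining, the pollution decision comes out the same under either assignment of the property right, as the theorem claims.

```python
# Coasian bargaining sketch (hypothetical payoffs): the efficient outcome
# is reached regardless of who initially holds the property right.

def outcome(profit, damage, right_holder):
    """Return True if pollution occurs after costless bargaining."""
    if right_holder == "neighbour":
        # The factory must buy the right to pollute: it will pay up to
        # `profit`, while the neighbour demands at least `damage`.
        return profit > damage
    else:  # the factory holds the right
        # The neighbour must pay the factory to stop: worthwhile only
        # if the damage exceeds the factory's forgone profit.
        return not (damage > profit)

# Damage (150) exceeds profit (100): pollution stops under either assignment.
print(outcome(100, 150, "neighbour"))  # False
print(outcome(100, 150, "factory"))    # False
# Profit (200) exceeds damage (150): pollution continues under either assignment.
print(outcome(200, 150, "neighbour"))  # True
print(outcome(200, 150, "factory"))    # True
```

Only the direction of the side-payment changes with the initial allocation; the quantity of pollution does not. This is exactly what breaks down once bargaining is costly.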

What is a 'Pigovian Tax'

A Pigovian tax is a way of correcting for negative externalities, or consequences for society,
arising from the actions of a company or industry sector, by levying additional taxes on that
company or sector. Examples include higher taxes on tobacco products, or taxes placed on
polluting power companies.

A Pigovian tax (also spelled 'Pigouvian tax') is a form of tax levied on negative externalities,
such as those that cause pollution of the environment. It is imposed when business practices
give rise to excess social costs through negative externalities. Many economists regard a
Pigovian tax as an efficient and effective way to correct negative externalities. The tax is
named after the economist Arthur Pigou, who also developed the concept of economic
externalities.

Pigovian tax provides incentive to reduce negative externalities. Pigovian tax is also a form of
regulation that helps in controlling pollution caused in the market economy. Pigovian tax
provides an incentive to reduce pollution, whereas with direct regulation, a polluting company
has no incentive to pollute any less than what is allowable.

A Pigovian tax is a strategic effluent fee assessed against private individuals or businesses for engaging in
a specific activity. It is meant to discourage activities that impose a net cost of production on third parties;
economists call this a negative externality. Pigovian taxes were named after English economist Arthur
C. Pigou, a major contributor to early externality theory in the Cambridge tradition.

According to Pigou, negative externalities prevent a market economy from reaching equilibrium when
producers do not internalize all costs of production. This negative externality might be corrected, he
contended, by levying taxes equal to the externalized costs.
A Pigovian tax (also spelled Pigouvian tax) is a tax levied on any market activity that generates
negative externalities (costs not internalized in the market price). The tax is intended to correct
an inefficient market outcome, and does so by being set equal to the social cost of the negative
externalities. In the presence of negative externalities, the social cost of a market activity is not
covered by the private cost of the activity. In such a case, the market outcome is not efficient and
may lead to over-consumption of the product.[1] An often-cited example of such an externality is
environmental pollution.[2]

In the presence of positive externalities, i.e., public benefits from a market activity, those who
receive the benefit do not pay for it and the market may under-supply the product. Similar logic
suggests the creation of a Pigovian subsidy to make the users pay for the extra benefit and spur
more production.[3] An example sometimes cited is a subsidy for provision of flu vaccine.[4]

Pigouvian taxes, named after Arthur C. Pigou, a renowned English economist from the early 20th
century, are designed to correct what economists call "market failures" or "negative externalities"
that impose spillover costs on society, such as pollution.

In theory, Pigouvian taxes are efficient and straightforward, but in practice, they're anything but
simple. Calculating the precise social costs of gasoline consumption, or some other good deemed
dangerous to the environment, is very difficult. Even if policymakers are able to solve the
"knowledge problem" that plagues Pigouvian taxes, finding the optimal policy solutions may
require additional analysis. If lawmakers overestimate the costs of externalities and implement
an excessive Pigouvian tax, those hit hardest by the tax will be lower-income Americans.

Negative Externalities

Negative externalities are not necessarily bad in the normative sense. Instead, a negative
externality occurs whenever an economic actor does not fully internalize the costs of their
activity, meaning third parties unwillingly subsidize extra production.

A popular example of a Pigovian-style tax is a tax on pollution. Pollution from a factory creates a
negative externality because part of the cost of pollution is borne by third parties nearby. This
cost might manifest through dirtied property or health risks.

The polluter only internalizes the marginal private costs, not the marginal external costs. Once
Pigou added in the external costs, creating what he called the marginal social cost, he argued
that the economy suffered a deadweight loss from excess pollution beyond the socially optimal
level.

Socially Optimal Taxes

A.C. Pigou popularized the concept of a Pigovian tax in his influential book The Economics of
Welfare (1920). Building on Alfred Marshall's analysis of markets, Pigou believed state
intervention should correct negative externalities, which he considered a market failure. This
could be accomplished, Pigou contended, through scientifically measured and selective taxation.

To arrive at the social optimal tax, the government regulator must estimate the marginal social
cost and marginal private cost, extrapolating from those the deadweight loss to the economy.

The Problem of Social Cost

Pigou's externality theories were dominant in mainstream economics for 40 years, but lost favor
after Nobel Prize-winner Ronald Coase published The Problem of Social Cost (1960). Coase
demonstrated that Pigou's examination and solution were often wrong, for at least three
separate reasons.

First, using Pigou's analytical framework, Coase showed that negative externalities did not
necessarily lead to an inefficient result. Second, even when outcomes were inefficient, Pigovian
taxes did not tend to lead to an efficient result. Last, Coase argued the critical element was
transaction cost theory, not externality theory.

The diagram illustrates the working of a Pigovian tax. A tax shifts the marginal private cost curve
up by the amount of the tax. If the tax is placed on the quantity of emissions from the factory, the
producers have an incentive to reduce output to the socially optimum level. If the tax is placed on
the percentage of emissions per unit of production, the factory has the incentive to change to
cleaner processes or technology.
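
The upward shift of the marginal private cost curve can be worked through with simple linear curves. The curves below are hypothetical (inverse demand 100 - q, marginal private cost 20 + q, constant marginal external cost of 20): setting the per-unit tax equal to the marginal external cost moves the market from the private equilibrium to the social optimum.

```python
# Minimal sketch (hypothetical linear curves) of the diagram's logic:
# a per-unit tax equal to the marginal external cost shifts the marginal
# private cost (MPC) curve up and decentralises the social optimum.

def equilibrium(demand_intercept, demand_slope, mc_intercept, mc_slope):
    """Quantity where inverse demand (a - b*q) meets marginal cost (c + d*q)."""
    return (demand_intercept - mc_intercept) / (demand_slope + mc_slope)

MEC = 20   # constant marginal external cost (assumed)
tax = MEC  # Pigovian tax set equal to the marginal external cost

q_private = equilibrium(100, 1, 20, 1)        # polluter ignores the external cost
q_social = equilibrium(100, 1, 20 + MEC, 1)   # planner adds the external cost
q_taxed = equilibrium(100, 1, 20 + tax, 1)    # tax shifts the MPC curve up

print(q_private)            # 40.0: over-production without the tax
print(q_social)             # 30.0: the socially optimal level
print(q_taxed == q_social)  # True: the tax reproduces the optimum
```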

What is 'Game Theory'

Game theory is a model of optimal decision-making that takes into consideration not only
benefits less costs, but also the interaction between the participants.

Game theory attempts to look at the relationships between participants in a particular model and
predict their optimal decisions.

What is the 'Nash Equilibrium'

The Nash equilibrium is a concept of game theory where the optimal outcome of a game is one
where no player has an incentive to deviate from his or her chosen strategy after considering an
opponent's choice. Overall, an individual can receive no incremental benefit from changing
actions, assuming other players remain constant in their strategies. A game may have multiple
Nash equilibria or none at all.

This concept is named after its inventor John Nash and is incorporated in multiple disciplines
(ranging from behavioral ecology to economics). If you want to test for a Nash equilibrium,
simply reveal each person's strategy to all players. The Nash equilibrium exists if no players
change their strategy, despite knowing the actions of their opponents. For example, let's examine
a game between Tom and Sam. In this simple game, both players can choose: A) receive $1, or
B) lose $1.
Logically, both players choose strategy A and receive a payoff of $1. If you revealed Sam's
strategy to Tom and vice versa, you will see that no player deviates from the original choice.
Knowing the other player's move means little, and doesn't change behavior. The outcome A,A
represents a Nash equilibrium.
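
The "reveal each person's strategy" test described above can be written out directly for the Tom-and-Sam game (A = receive $1, B = lose $1):

```python
# Nash-equilibrium check for the Tom-and-Sam game: a profile is a Nash
# equilibrium if neither player gains by deviating unilaterally.

payoffs = {  # (Tom's move, Sam's move) -> (Tom's payoff, Sam's payoff)
    ("A", "A"): (1, 1), ("A", "B"): (1, -1),
    ("B", "A"): (-1, 1), ("B", "B"): (-1, -1),
}

def is_nash(profile):
    """True if no player can improve by changing only their own move."""
    t, s = profile
    tom_ok = all(payoffs[(t, s)][0] >= payoffs[(alt, s)][0] for alt in "AB")
    sam_ok = all(payoffs[(t, s)][1] >= payoffs[(t, alt)][1] for alt in "AB")
    return tom_ok and sam_ok

print(is_nash(("A", "A")))  # True: knowing the other's move changes nothing
print(is_nash(("B", "B")))  # False: either player would switch to A
```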

In game theory, a subgame perfect equilibrium (or subgame perfect Nash equilibrium) is a
refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect
equilibrium if it represents a Nash equilibrium of every subgame of the original game.
Informally, this means that if (1) the players played any smaller game that consisted of only one
part of the larger game and (2) their behavior represents a Nash equilibrium of that smaller game,
then their behavior is a subgame perfect equilibrium of the larger game. Every finite extensive
game has a subgame perfect equilibrium.[1]

A common method for determining subgame perfect equilibria in the case of a finite game is
backward induction. Here one first considers the last actions of the game and determines which
actions the final mover should take in each possible circumstance to maximize his/her utility.
One then supposes that the last actor will do these actions, and considers the second to last
actions, again choosing those that maximize that actor's utility. This process continues until one
reaches the first move of the game. The strategies which remain are the set of all subgame
perfect equilibria for finite-horizon extensive games of perfect information.[1] However,
backward induction cannot be applied to games of imperfect or incomplete information because
this entails cutting through non-singleton information sets.

A subgame perfect equilibrium necessarily satisfies the one-shot deviation principle.

The set of subgame perfect equilibria for a given game is always a subset of the set of Nash
equilibria for that game. In some cases the sets can be identical.

The Ultimatum game provides an intuitive example of a game with fewer subgame perfect
equilibria than Nash equilibria.

The payoff matrix of the game is shown in Table 1. Observe that there are two different
equilibria, which are also shown in Figure 1. Consider the equilibrium given by the first strategy
profile (shown in the middle). While the profile is obviously a Nash equilibrium, the behaviour
of player 2 is rather hard to justify at the unreached decision node: by switching to his other
strategy, player 2 would increase his payoff if that node were actually reached during the
progress of the game. More formally, the profile is not an equilibrium with respect to the
subgame induced by that node. It is likely that in real life player 2 would choose the other
strategy instead, which would in turn inspire player 1 to change his strategy. The resulting
profile (shown on the right) is not only a Nash equilibrium but also an equilibrium in all
subgames; it is therefore a subgame perfect equilibrium.

Table 1: Payoff matrix

      (K, K)   (K, U)   (U, U)   (U, K)
L     (3, 1)   (3, 1)   (1, 3)   (1, 3)
R     (2, 1)   (0, 0)   (0, 0)   (2, 1)
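
The claim that Table 1 contains exactly two Nash equilibria can be verified by brute force over all strategy profiles, using the payoffs as given:

```python
# Brute-force enumeration of the Nash equilibria of Table 1.
# Rows are player 1's strategies (L, R); columns are player 2's
# strategy pairs for his two decision nodes.

rows = {"L": [(3, 1), (3, 1), (1, 3), (1, 3)],
        "R": [(2, 1), (0, 0), (0, 0), (2, 1)]}
cols = ["(K,K)", "(K,U)", "(U,U)", "(U,K)"]

def nash_equilibria():
    found = []
    for r in rows:
        for j, c in enumerate(cols):
            p1, p2 = rows[r][j]
            # Player 1 must not gain by switching rows, player 2 by switching columns.
            best_row = all(p1 >= rows[alt][j][0] for alt in rows)
            best_col = all(p2 >= rows[r][k][1] for k in range(len(cols)))
            if best_row and best_col:
                found.append((r, c))
    return found

print(nash_equilibria())  # [('L', '(U,U)'), ('R', '(U,K)')]
```

Both profiles are Nash equilibria, but as the discussion above explains, only one of them survives the subgame-perfection refinement.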

A subgame perfect Nash equilibrium is an equilibrium such that players' strategies constitute a
Nash equilibrium in every subgame of the original game. It may be found by backward
induction, an iterative process for solving finite extensive form or sequential games. First, one
determines the optimal strategy of the player who makes the last move of the game. Then, the
optimal action of the next-to-last moving player is determined taking the last player's action as
given. The process continues in this way backwards in time until all players' actions have been
determined.

Subgame perfect equilibria eliminate noncredible threats.

A Nash equilibrium is a combination of strategies for all players in a game where each player is playing
a best response to each other player's actual strategy, which means that each player, acting in isolation,
cannot achieve a better outcome for themselves by altering their strategy, given the strategy each
other player has adopted. A subgame-perfect Nash equilibrium is a Nash equilibrium with the
additional restriction that each individual decision in a player's strategy would be the one that
gets them the best outcome, including the decisions which never come up in practice given the
strategies that are actually being played (these are called decisions "off the equilibrium path" in
game theory parlance). All subgame perfect equilibria are Nash equilibria, but the reverse is not
true. A more generally understandable way to describe subgame perfect equilibria is as equilibria
where "all players' threats are credible."

One important thing to understand about equilibrium analysis is that in order to analyze whether
a game is in equilibrium, one has to know what decision a player plans to make at every decision
point that could occur--including the ones that don't in practice. Game theorists define a
'strategy' for a player in this way--a description of the actions that player is going to take in all
possible circumstances.
The issue of subgame perfection is most apparent in sequential games. Take the following Cold
War-themed example game:

There are two players, a president P and a party secretary S.


P has the first choice--they can begin instigations, or not. If they don't instigate, the game ends
and both P and S get a payoff of 0.
If P instigates, then S has a choice. S can escalate or back down. If S backs down, the game is
over, S loses reputation (loses 5 points) and P gains reputation (gains 5 points). If S escalates,
then P has one final decision--to back down, or to start nuclear war. If P backs down at this
point, P looks weak and loses 8 points and S gains 8 points. If P starts nuclear war, the whole
world suffers--both sides lose 10 points.

Consider the following set of strategies: P's strategy is to instigate at the first decision point, and
start nuclear war at the third decision point; S's strategy is to back down at the second decision
point. In practice with this set of strategies, P will instigate, S will back down, and the game will
end with P gaining 5 points and S losing 5 points.

Is this a Nash equilibrium? If P changes their first choice to not instigate, they get 0 instead of
+5, and if they change their later choice, the outcome is unchanged (+5 vs. +5). If S changes
their only choice to escalate, this will lead to nuclear war, so this strategy change would result in
-10 for S instead of -5. So, yes, it's a Nash equilibrium: each player considered in isolation
cannot improve their own outcome by changing an aspect of their strategy.

Is this a subgame perfect Nash equilibrium, though? If we get to the decision point where P can
elect to back down or fire the nukes, P's best option would not be what their strategy indicates--
they can back down and score -8 instead of firing the nukes and scoring -10. So, they aren't
making their best decision at each decision point; it's not a subgame perfect equilibrium.

Note that if we simply changed P's strategy to (instigate at stage 1, back down at the end), now
we no longer have a Nash equilibrium: S could change their strategy to escalate instead of
backing down and achieve a higher score.

The good news is that it's actually generally easier to find the subgame perfect equilibrium for
most simple games by working backwards. Since we know that each individual decision needs
to be economically rational, we can start at the last decision and observe the tradeoffs available
to the players directly. For example, in this game, as we said above, if we get to the last
decision, P would choose not to fire the nukes in the subgame perfect equilibrium. That means
that from S' perspective, the payoff of getting the game to P's final decision is +8, since that's
what P's 'rational' choice would lead to. That means S' best strategy is to escalate at their
decision point. We now apply the same logic once more to P's first decision: if they instigate, S
will escalate, and P will have to back down, so P will earn -8. That's worse than the 0 they get if
they don't instigate at all, so they would choose not to instigate. Thus, the subgame perfect Nash
equilibrium is (P: don't instigate, don't fire nukes; S: escalate).

Note that in this example, which equilibrium concept we use drastically changes the outcome we
expect to occur! If we discount P's threat to launch nukes as irrational, we conclude they won't
instigate in the first place. If we don't, then we expect them to instigate. Normally the subgame
perfect Nash equilibrium more closely mirrors what we expect to see in real life, but not always:
if a player is able to publicly commit to a strategy that otherwise seems irrational (committing to
launching the nukes a la Dr. Strangelove in this example), that can change the outcome (though,
strictly speaking, if we build this commitment into the game we've changed the rules of the
game).
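
The backward-induction reasoning just described can be carried out mechanically. The sketch below encodes the Cold-War game above as a small nested tree and solves it by recursing from the leaves, exactly mirroring the "work backwards from the last decision" argument:

```python
# Backward induction on the Cold-War game. Each internal node is
# (mover, {action: subtree}); each leaf is a (P payoff, S payoff) pair.

game = ("P", {
    "don't instigate": (0, 0),
    "instigate": ("S", {
        "back down": (5, -5),
        "escalate": ("P", {
            "back down": (-8, 8),
            "nuke": (-10, -10),
        }),
    }),
})

def solve(node):
    """Return the payoff pair reached when every mover chooses rationally,
    i.e. the subgame perfect equilibrium outcome of the subtree."""
    if not isinstance(node[0], str):   # leaf: a payoff pair
        return node
    player, branches = node
    idx = 0 if player == "P" else 1    # payoff index this mover maximises
    return max((solve(sub) for sub in branches.values()),
               key=lambda payoffs: payoffs[idx])

print(solve(game))  # (0, 0): P never instigates in the subgame perfect equilibrium
```

Solving the escalation subgame alone returns (-8, 8), confirming that P would back down rather than fire, which is why S would escalate and P therefore never instigates.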

Backward induction is the process of reasoning backwards in time, from the end of a problem
or situation, to determine a sequence of optimal actions. It proceeds by first considering the last
time a decision might be made and choosing what to do in any situation at that time. Using this
information, one can then determine what to do at the second-to-last time of decision. This
process continues backwards until one has determined the best action for every possible situation
(i.e. for every possible information set) at every point in time.

In the mathematical optimization method of dynamic programming, backward induction is one
of the main methods for solving the Bellman equation.[1][2] In game theory, backward induction is
a method used to compute subgame perfect equilibria in sequential games.[3] The only difference
is that optimization involves just one decision maker, who chooses what to do at each point of
time, whereas game theory analyzes how the decisions of several players interact. That is, by
anticipating what the last player will do in each situation, it is possible to determine what the
second-to-last player will do, and so on. In the related fields of automated planning and
scheduling and automated theorem proving, the method is called backward search or
backward chaining. In chess it is called retrograde analysis.

What is the 'Prisoner's Dilemma'

The prisoner's dilemma is a paradox in decision analysis in which two individuals acting in their
own best interest pursue a course of action that does not result in the ideal outcome. The typical
prisoner's dilemma is set up in such a way that both parties choose to protect themselves at the
expense of the other participant. As a result of following a purely logical thought process to help
oneself, both participants find themselves in a worse state than if they had cooperated with each
other in the decision-making process.

Suppose two friends, Dave and Henry, are suspected of committing a crime and are being
interrogated in separate rooms. Both individuals want to minimize their jail sentence. Both of
them face the same scenario: Dave has the option of pleading guilty or not guilty. If he pleads not
guilty, Henry can plead not guilty and get a two-year sentence, or he can plead guilty and get a
one-year sentence. It is in Henry's best interest to plead guilty if Dave pleads not guilty. If Dave
pleads guilty, Henry can plead not guilty and receive a five-year sentence. Otherwise he can
plead guilty and get a three-year sentence. It is in Henry's best interest to plead guilty if Dave
pleads guilty. Dave faces the same decision matrix and follows the same logic as Henry. As a
result, both parties plead guilty and spend three years in jail although through cooperation they
could have served only two. A true prisoner's dilemma is typically "played" only once; otherwise
it is classified as an iterated prisoner's dilemma.

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary
confinement with no means of communicating with the other. The prosecutors lack sufficient
evidence to convict the pair on the principal charge. They hope to get both sentenced to a year in
prison on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain. Each
prisoner is given the opportunity either to: betray the other by testifying that the other committed
the crime, or to cooperate with the other by remaining silent. The offer is:
If A and B each betray the other, each of them serves 2 years in prison

If A betrays B but B remains silent, A will be set free and B will serve 3 years in prison
(and vice versa)

If A and B both remain silent, both of them will only serve 1 year in prison (on the lesser
charge)

The prisoners cannot communicate; they are held in two separate rooms. The normal-form game
is shown below:

                          Prisoner B stays silent    Prisoner B betrays
                          (cooperates)               (defects)

Prisoner A stays silent   Each serves 1 year         Prisoner A: 3 years
(cooperates)                                         Prisoner B: goes free

Prisoner A betrays        Prisoner A: goes free      Each serves 2 years
(defects)                 Prisoner B: 3 years

Dominant Strategy
A strategy is dominant if, regardless of what any other players do, the strategy earns a player a larger
payoff than any other. Hence, a strategy is dominant if it is always better than any other strategy, for any
profile of other players' actions. Depending on whether "better" is defined with weak or strict inequalities,
the strategy is termed strictly dominant or weakly dominant. If one strategy is dominant, then all others
are dominated. For example, in the prisoner's dilemma, each player has a dominant strategy.
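
The dominance claim for the prisoner's dilemma can be checked explicitly. Using the sentence lengths from the gang example above (lower is better, since these are years in prison), "betray" is strictly dominant for each player:

```python
# Strict-dominance check for the prisoner's dilemma: a strategy is strictly
# dominant if it gives a strictly shorter sentence than every alternative,
# whatever the other prisoner does.

years = {  # (A's move, B's move) -> (A's sentence, B's sentence)
    ("silent", "silent"): (1, 1), ("silent", "betray"): (3, 0),
    ("betray", "silent"): (0, 3), ("betray", "betray"): (2, 2),
}
MOVES = ["silent", "betray"]

def strictly_dominant(strategy, player):
    for other in MOVES:           # every possible move of the opponent
        for alt in MOVES:         # every alternative to `strategy`
            if alt == strategy:
                continue
            if player == "A":
                mine, theirs = years[(strategy, other)][0], years[(alt, other)][0]
            else:
                mine, theirs = years[(other, strategy)][1], years[(other, alt)][1]
            if not mine < theirs:  # sentences: strictly lower is better
                return False
    return True

print(strictly_dominant("betray", "A"))  # True
print(strictly_dominant("silent", "A"))  # False: silent is dominated
```

Since betrayal dominates for both, (betray, betray) is the unique Nash equilibrium, even though mutual silence would leave both prisoners better off.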
Moral Hazard
In economics, moral hazard occurs when one person takes more risks because someone else
bears the cost of those risks. A moral hazard may occur where the actions of one party may
change to the detriment of another after a financial transaction has taken place.

Moral hazard occurs under a type of information asymmetry where the risk-taking party to a
transaction knows more about its intentions than the party paying the consequences of the risk.
More broadly, moral hazard occurs when the party with more information about its actions or
intentions has a tendency or incentive to behave inappropriately from the perspective of the party
with less information.

Moral hazard also arises in a principal-agent problem, where one party, called an agent, acts on
behalf of another party, called the principal. The agent usually has more information about his or
her actions or intentions than the principal does, because the principal usually cannot completely
monitor the agent. The agent may have an incentive to act inappropriately (from the viewpoint of
the principal) if the interests of the agent and the principal are not aligned.

Definition: Moral hazard is a situation in which one party gets involved in a risky event knowing
that it is protected against the risk and the other party will incur the cost. It arises when both the
parties have incomplete information about each other.

Description: In a financial market, there is a risk that the borrower might engage in activities that
are undesirable from the lender's point of view because they make him less likely to pay back a
loan.

It occurs when the borrower knows that someone else will pay for the mistake he makes. This in
turn gives him the incentive to act in a riskier way. This economic concept is known as moral
hazard.

Example: Suppose you have not insured your house against any future damage. This implies that
any loss will be borne entirely by you in the event of a mishap such as fire or burglary. Hence
you will take extra care and attention: you might install high-tech burglar alarms and hire
watchmen to avoid any unforeseen event.

But if your house is insured for its full value, then you do not really lose anything if something
happens. Therefore, you have less incentive to protect against any mishap. In this case, the
insurance firm bears the losses and the problem of moral hazard arises.
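
The insurance example can be put into numbers. The figures below are hypothetical: taking care costs 50 and cuts the chance of a 1000 loss from 20% to 5%. Comparing expected costs shows why full insurance removes the incentive to take care:

```python
# Stylised moral-hazard sketch (hypothetical numbers): expected cost to the
# homeowner of taking care versus not, with and without full insurance.

LOSS, CARE_COST = 1000, 50
p_loss = {"care": 0.05, "no care": 0.20}

def expected_cost(effort, insured):
    """Owner's expected outlay: cost of care plus any uninsured expected loss."""
    care = CARE_COST if effort == "care" else 0
    loss = 0 if insured else p_loss[effort] * LOSS
    return care + loss

# Uninsured: taking care (50 + 50 = 100) beats no care (200).
print(expected_cost("care", insured=False))     # 100.0
print(expected_cost("no care", insured=False))  # 200.0
# Fully insured: the insurer bears the loss, so care is a pure cost.
print(expected_cost("care", insured=True))      # 50
print(expected_cost("no care", insured=True))   # 0
```

Once insured, the cheapest option for the owner is to take no care at all; the extra expected losses fall on the insurer, which is precisely the moral hazard.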

The principal-agent problem, in political science and economics (also known as the agency
dilemma or the theory of agency), occurs when one person or entity (the "agent") is able to make
decisions on behalf of, or that impact, another person or entity: the "principal".[1] This dilemma
exists in circumstances where the agent is motivated to act in his own best interests, which are
contrary to those of the principal, and is an example of moral hazard.

Common examples of this relationship include corporate management (agent) and shareholders
(principal), or politicians (agent) and voters (principal).[2] Consider a legal client (the principal)
wondering whether his lawyer (the agent) is recommending protracted legal proceedings because
it is truly necessary for the client's well being, or because it will generate income for the lawyer.
In fact the problem can arise in almost any context where one party is being paid by another to
do something where the agent has a small or nonexistent share in the outcome, whether in formal
employment or a negotiated deal such as paying for household jobs or car repairs.

The problem arises where the two parties have different interests and asymmetric information
(the agent having more information), such that the principal cannot directly ensure that the agent
is always acting in his (the principal's) best interest,[3] particularly when activities that are useful
to the principal are costly to the agent, and where elements of what the agent does are costly for
the principal to observe (see Moral hazard and conflict of interest). Often, the principal may be
sufficiently concerned at the possibility of being exploited by the agent that he chooses not to
enter into the transaction at all, even when it would have been mutually beneficial: a suboptimal outcome
that can lower welfare overall. The deviation from the principal's interest by the agent is called
"agency costs".[3]

Various mechanisms may be used to align the interests of the agent with those of the principal. In
employment, employers (principal) may use piece rates/commissions, profit sharing, efficiency
wages, performance measurement (including financial statements), the agent posting a bond, or
the threat of termination of employment to align worker interests with their own.

What is the 'Principal-Agent Problem'

The principal-agent problem develops when a principal creates an environment in which an
agent's incentives don't align with its own. Generally, the onus is on the principal to create
incentives for the agent to ensure the agent acts as the principal wants. This includes everything
from financial incentives to the avoidance of information asymmetry.

What is the 'Lemons Problem'

The lemons problem is the issue of information asymmetry between the buyer and seller of an
investment or product. The lemons problem was popularized by a 1970 research paper by
economist George Akerlof. The term is derived from Akerlof's demonstration of the concept of
asymmetric information through the example of defective used cars, which are known as
'lemons' in the marketplace. In the investment field, the lemons problem is apparent in areas
such as insurance and corporate finance.

BREAKING DOWN 'Lemons Problem'

Information asymmetry arises when the parties to a transaction do not have the same degree of
information necessary to make an informed decision. For example, in the market for used cars,
the buyer generally cannot ascertain the value of a vehicle accurately and may therefore only be
willing to pay an average price for it, somewhere between a bargain price and a premium price.

However, this tilts the scales in favor of a lemon seller, since even an average price for this
lemon would be higher than the price it would command if the buyer knew beforehand that it
was indeed a lemon. This phenomenon also puts the seller of a good used car at a disadvantage,
since the best price such a seller can expect is an average price, and not the premium price the
car should command.
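The squeeze on both sellers can be put in numbers (the car values below are hypothetical):

```python
# Hypothetical values: a good used car (a "peach") and a defective one (a "lemon").
peach, lemon = 10_000, 4_000
p_avg = (peach + lemon) / 2      # a buyer who can't tell them apart offers the average

print(p_avg - lemon)   # 3000.0  -> the lemon seller's windfall over the car's true value
print(p_avg - peach)   # -3000.0 -> the peach seller's shortfall below the car's true value
```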

What is 'Asymmetric Information'

Asymmetric information is a situation in which one party in a transaction has more or superior
information compared to another. This often happens in transactions where the seller knows
more than the buyer, although the reverse can happen as well. Potentially, this could be a harmful
situation because one party can take advantage of the other party's lack of knowledge.

BREAKING DOWN 'Asymmetric Information'

With advances in technology, asymmetric information has been on the decline, as more and more
people are able to easily access many types of information.

Information Asymmetry can lead to two main problems:


1. Adverse selection - taking advantage of asymmetric information before a transaction. For
example, a person who is not in optimal health may be more inclined to purchase life insurance
than someone who feels fine.
2. Moral hazard - taking advantage of asymmetric information after a transaction. For example,
someone who has fire insurance may be more likely to commit arson to reap the benefits of the
insurance.

The Market for Lemons


"The Market for Lemons: Quality Uncertainty and the Market Mechanism" is a 1970
paper by the economist George Akerlof which examines how the quality of goods traded in a
market can degrade in the presence of information asymmetry between buyers and sellers,
leaving only "lemons" behind. A lemon is an American slang term for a car that is found to be
defective only after it has been bought.

Suppose buyers can't distinguish between a high-quality car (a "peach") and a "lemon". Then
they are only willing to pay a fixed price for a car that averages the value of a "peach" and a
"lemon" together (p_avg). But sellers know whether they hold a peach or a lemon. Given the fixed
price at which buyers will buy, sellers will sell only when they hold "lemons" (since p_lemon < p_avg)
and they will leave the market when they hold "peaches" (since p_peach > p_avg). Eventually, as
enough sellers of "peaches" leave the market, the average willingness-to-pay of buyers will
decrease (since the average quality of cars on the market has decreased), leading even more
sellers of high-quality cars to leave the market through a positive feedback loop.

Thus the uninformed buyer's price creates an adverse selection problem that drives the high-
quality cars from the market. Adverse selection is the market mechanism that leads to a market
collapse.
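This unraveling can be simulated directly. In the sketch below (assumed setup: 10,000 cars with quality uniformly distributed on [0, 2000]), buyers repeatedly offer the average quality of the cars still on the market, and sellers withdraw any car worth more than the offer; the offer ratchets downward until essentially nothing trades.

```python
import random

random.seed(1)
# Each seller's car quality; a seller only sells if the offer covers it.
qualities = [random.uniform(0, 2000) for _ in range(10_000)]

price = sum(qualities) / len(qualities)              # initial offer: overall average
for _ in range(30):
    on_market = [q for q in qualities if q <= price] # only low-quality cars stay
    if not on_market:                                # nobody is willing to sell
        price = 0.0
        break
    new_price = sum(on_market) / len(on_market)      # buyers revise toward the new average
    if abs(new_price - price) < 1e-9:                # offer has stopped moving
        break
    price = new_price

print(f"final offer: {price:.2f}")                   # collapses toward zero
```

With a uniform quality distribution the offer roughly halves each round, which is the positive feedback loop in the text made explicit.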

Akerlof's paper shows how prices can determine the quality of goods traded on the market. Low
prices drive away sellers with high-quality goods leaving only lemons behind. Akerlof, Michael
Spence, and Joseph Stiglitz jointly received the Nobel Memorial Prize in Economic Sciences in
2001 for their research related to asymmetric information.

Akerlof's paper uses the market for used cars as an example of the problem of quality
uncertainty. A used car is one in which ownership is transferred from one person to another, after
a period of use by its first owner and its inevitable wear and tear. There are good used cars
("cherries") and defective used cars ("lemons"), normally as a consequence of several not-
always-traceable variables, such as the owner's driving style, quality and frequency of
maintenance, and accident history. Because many important mechanical parts and other elements
are hidden from view and not easily accessible for inspection, the buyer of a car does not know
beforehand whether it is a cherry or a lemon. So the buyer's best guess for a given car is that the
car is of average quality; accordingly, he/she will be willing to pay for it only the price of a car of
known average quality. This means that the owner of a carefully maintained, never-abused, good
used car will be unable to get a high enough price to make selling that car worthwhile.

Therefore, owners of good cars will not place their cars on the used car market. The withdrawal
of good cars reduces the average quality of cars on the market, causing buyers to revise
downward their expectations for any given car. This, in turn, motivates the owners of moderately
good cars not to sell, and so on. The result is that a market in which there is asymmetric
information with respect to quality shows characteristics similar to those described by Gresham's
Law: the bad drives out the good. (Although Gresham's principle applies more specifically to
currency, modified analogies can be drawn.)[1]

Statistical abstract of the problem

Akerlof considers a situation in which the demand D for used cars depends on the car's price p
and quality μ = μ(p), while the supply depends on price alone.[2] Economic equilibrium is given
by S(p) = D(p, μ), and there are two groups of traders, with utilities given by

U1 = M + Σ xi        U2 = M + Σ (3/2) xi        (sums over the n cars)

where M is the consumption of goods other than automobiles, xi the quality of the i-th car and n
the number of automobiles. Let Yi, Di and Si be the income, demand and supply of group i.
Assuming that utilities are linear, that the traders are von Neumann-Morgenstern utility
maximizers and that the price of the other goods M is unitary, the demand D1 for cars is Y1/p if
μ/p > 1, otherwise zero. The demand D2 is Y2/p if 3μ/2 > p, otherwise zero. Market demand is
given by:

D(p, μ) = (Y1 + Y2)/p   if p < μ
D(p, μ) = Y2/p          if μ < p < 3μ/2
D(p, μ) = 0             if p > 3μ/2

Group 1 has N cars to sell, with quality uniformly distributed between 0 and 2, and group 2 has
no cars to sell; therefore S1 = pN/2 and S2 = 0. For a given price p, average quality is μ = p/2,
and therefore D = 0. The market for used cars collapses when there is asymmetric information.
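The no-trade conclusion can be checked numerically. This sketch simply evaluates the two groups' purchase conditions at a few sample prices, using the average quality μ = p/2 implied by the uniform distribution:

```python
# At any price p the average quality offered is mu = p/2; verify that neither
# group's purchase condition (mu/p > 1 for group 1, 3*mu/2 > p for group 2) holds.
for p in [0.1, 0.5, 1.0, 1.5, 2.0]:
    mu = p / 2
    group1_buys = mu / p > 1        # always 1/2 > 1: false
    group2_buys = 3 * mu / 2 > p    # always 3p/4 > p: false
    assert not group1_buys and not group2_buys

print("demand is zero at every price: the market collapses")
```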

Asymmetric information

The paper by Akerlof describes how the interaction between quality heterogeneity and
asymmetric information can lead to the disappearance of a market where guarantees are
indefinite. In this model, as quality is indistinguishable beforehand by the buyer (due to the
asymmetry of information), incentives exist for the seller to pass off low-quality goods as higher-
quality ones. The buyer, however, takes this incentive into consideration, and takes the quality of
the goods to be uncertain. Only the average quality of the goods will be considered, which in
turn will have the side effect that goods that are above average in terms of quality will be driven
out of the market. This mechanism is repeated until a no-trade equilibrium is reached.

As a consequence of the mechanism described in this paper, markets may fail to exist altogether
in certain situations involving quality uncertainty. Examples given in Akerlof's paper include the
market for used cars, the dearth of formal credit markets in developing countries, and the
difficulties that the elderly encounter in buying health insurance. However, not all players in a
given market will follow the same rules or have the same aptitude for assessing quality. So there
will always be a distinct advantage for some vendors to offer low-quality goods to the less-
informed segment of a market that, on the whole, appears to be of reasonable quality and to carry
reasonable guarantees. This is part of the basis for the idiom "buyer beware".

This is likely the basis for the idiom that an informed consumer is a better consumer. An example
of this might be the subjective quality of fine food and wine. Individual consumers know best
what they prefer to eat, and quality is almost always assessed in fine establishments by smell and
taste before the customer pays. That is, if a customer in a fine establishment orders a lobster and the meat
is not fresh, he can send the lobster back to the kitchen and refuse to pay for it. However, a
definition of 'highest quality' for food eludes providers. Thus, a large variety of better-quality and
higher-priced restaurants are supported.

Impact on markets

The article draws some conclusions about the cost of dishonesty in markets in general:
"The cost of dishonesty, therefore, lies not only in the amount by which the purchaser is
cheated; the cost also must include the loss incurred from driving legitimate business out of
existence."

Critical reception

George E. Hoffer and Michael D. Pratt state that the economic literature is divided on whether a
lemons market actually exists in used vehicles. The authors' research supports the hypothesis
that "known defects" provisions, used by US states (e.g., Wisconsin) to regulate used-car sales,
have been ineffectual, because the quality of used vehicles sold in these states is not significantly
better than that of vehicles in neighboring states without such consumer-protection legislation.[3]

Both the American Economic Review and the Review of Economic Studies rejected the paper for
"triviality", while the reviewers for Journal of Political Economy rejected it as incorrect, arguing
that, if this paper were correct, then no goods could be traded.[4] Only on the fourth attempt did
the paper get published in Quarterly Journal of Economics.[5] Today, the paper is one of the
most-cited papers in modern economic theory and the most downloaded economics journal paper of
all time on RePEc (more than 8,530 citations in academic papers as of May 2011).[6] It has
profoundly influenced virtually every field of economics, from industrial organisation and public
finance to macroeconomics and contract theory.

Criticism

Some criticism of this theory stems from reading it too literally: failing to see that the car
market is being used as an analogy for all markets with asymmetric information. Literalist critics
note that the theory ignores the fact that consumers themselves can seek ways to assure the
quality of a car and that a used-car salesperson may work to maintain his reputation rather than
pass off a "lemon". The issue of reputation, however, would not apply to private individual
sellers who do not intend to sell another car in the near future.

Libertarians, like William L. Anderson, oppose the regulatory approach proposed by the authors
of the paper, observing that some used-car markets haven't broken down even without lemon
legislation and that the lemon problem creates entrepreneurial opportunities for alternative
marketplaces or customers' knowledgeable friends.[7]

Conditions for a lemon market

A lemon market will be produced by the following:

1. Asymmetry of information, in which no buyers can accurately assess the value of a product
through examination before sale is made and all sellers can more accurately assess the value of a
product prior to sale
2. An incentive exists for the seller to pass off a low-quality product as a higher-quality one

3. Sellers have no credible disclosure technology (sellers with a great car have no way to disclose
this credibly to buyers)

4. Either a continuum of seller qualities exists or the average seller type is sufficiently low (buyers
are sufficiently pessimistic about the seller's quality)

5. Deficiency of effective public quality assurances (by reputation or regulation and/or of effective
guarantees/warranties)

Gresham's Law

In economics, Gresham's law is a monetary principle stating that "bad money drives out good".
If there is counterfeit or inflated currency in circulation, people will hoard their genuine
currency; worthless things will drive valuable things out of circulation. More generally, if there
are two forms of commodity money in circulation which are accepted by law as having similar
face value, the more valuable commodity will disappear from circulation.[1][2]

The law was named in 1860 by Henry Dunning Macleod, after Sir Thomas Gresham (1519-1579),
an English financier during the Tudor dynasty. However, there are numerous predecessors. The
law had been stated earlier by Nicolaus Copernicus; for this reason, it is occasionally known as
the Copernicus Law.[3][4] It was also stated in the 14th century by Nicole Oresme (c. 1350),[5]
in his treatise On the Origin, Nature, Law, and Alterations of Money,[6] and by the jurist and
historian Al-Maqrizi (1364-1442) in the Mamluk Empire;[7] and it was noted by Aristophanes in
his play The Frogs, which dates from around the end of the 5th century BC.

Good money is money that shows little difference between its nominal value (the face value of
the coin) and its commodity value (the value of the metal of which it is made, often precious
metals, nickel, or copper).

In the absence of legal-tender laws, metal coin money will freely exchange at somewhat above
bullion market value. This may be observed in bullion coins such as the Canadian Gold Maple
Leaf, the South African Krugerrand, the American Gold Eagle, or even the silver Maria Theresa
thaler (Austria). Coins of this type are of a known purity and are in a convenient form to handle.
People prefer trading in coins rather than in anonymous hunks of precious metal, so they
attribute more value to coins of equal weight. The price spread between face value and
commodity value is called seigniorage. Because some coins do not circulate, remaining in the
possession of coin collectors, this can increase demand for coinage.

Bad money, on the other hand, is money that has a commodity value considerably lower than its
face value and is in circulation along with good money, where both forms are required to be
accepted at equal value as legal tender.

In Gresham's day, bad money included any coin that had been debased. Debasement was often
done by the issuing body, where less than the officially specified amount of precious metal was
contained in an issue of coinage, usually by alloying it with a base metal. The public could also
debase coins, usually by clipping or scraping off small portions of the precious metal, also
known as "stemming" (reeded edges on coins were intended to make clipping evident). Other
examples of bad money include counterfeit coins made from base metal. Today all circulating
coins are made from base metals and are known as fiat money.

In the case of clipped, scraped, or counterfeit coins, the commodity value was reduced by fraud,
while the face value remained at the previous higher level. With a coinage debased by a
government issuer, the commodity value of the coinage was often reduced quite openly, while
the face value of the debased coins was held at the higher level by legal-tender laws.

Gresham's law states that any circulating currency consisting of both "good" and "bad" money
(both forms required to be accepted at equal value under legal-tender law) quickly becomes
dominated by the "bad" money (for a formal model, see Bernholz and Gersbach 1992). This is
because people spending money will hand over the "bad" coins rather than the "good" ones,
keeping the "good" ones for themselves. Legal-tender laws act as a form of price control: the
artificially overvalued money is preferred in exchange, because people prefer to save, rather
than spend, the artificially undervalued money (which they actually value more highly).

Consider a customer purchasing an item which costs five pence, who possesses several silver
sixpence coins. Some of these coins are more debased than others, but legally they are all
mandated to be of equal value. The customer would prefer to retain the better coins, and so
offers the shopkeeper the most debased one. In turn, the shopkeeper must give one penny in
change, and has every reason to give the most debased penny. Thus, the coins that circulate in
the transaction will tend to be of the most debased sort available to the parties.
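The spending choice in the example above can be simulated. In this sketch (assumed numbers: twenty coins of equal face value whose silver content varies), each purchase hands over the most debased coin in the wallet, so the coins released into circulation carry less silver on average than the coins hoarded:

```python
import random

random.seed(0)
# Twenty coins of equal face value but varying silver content (grams of silver).
wallet = [random.uniform(0.5, 1.0) for _ in range(20)]

spent = []
for _ in range(10):          # ten purchases, one coin each
    worst = min(wallet)      # Gresham: hand over the most debased coin
    wallet.remove(worst)
    spent.append(worst)

avg = lambda coins: sum(coins) / len(coins)
print(f"avg silver spent into circulation: {avg(spent):.3f}")
print(f"avg silver hoarded:                {avg(wallet):.3f}")
```

Because the spent coins are always the lightest remaining, the "bad" money circulates while the "good" money stays in the wallet, which is the law in miniature.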

If "good" coins have a face value below that of their metallic content, individuals may be
motivated to melt them down and sell the metal for its higher intrinsic value, even if such
destruction is illegal. As an example, consider the 1965 United States half dollar coins, which
contained 40% silver; in previous years, these coins had been 90% silver. With the release of the
1965 half dollar, which was legally required to be accepted at the same value as the earlier 90%
halves, the older 90% silver coinage quickly disappeared from circulation, while the newer
debased coins remained in use. As the value of the dollar (Federal Reserve notes) continued to
decline, and the value of the silver content came to exceed the face value of the coins, many of
the older half dollars were melted down. Beginning in 1971, the U.S. government gave up on
including any silver in the half dollars, as even the metal value of the 40% silver coins began to
exceed their face value.
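The melting incentive is simple arithmetic. The silver contents below are the standard published figures for US half dollars; the silver price is an assumed illustrative value, not from the text:

```python
# Face value vs melt value of US half dollars (silver price is an assumption).
FACE = 0.50                                # face value in dollars
SILVER_OZT = {"90% (pre-1965)": 0.36169,   # troy oz of silver per coin
              "40% (1965-1970)": 0.1479}
silver_price = 6.00                        # assumed $/troy oz for illustration

for name, ozt in SILVER_OZT.items():
    melt = ozt * silver_price
    action = "melt" if melt > FACE else "circulate"
    print(f"{name}: melt ${melt:.2f} vs face ${FACE:.2f} -> {action}")
```

At any silver price above about $1.38/oz the 90% coins are worth more as metal than as money, which is why they vanished from circulation first.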

A similar situation occurred in 2007 in the United States with the rising price of copper, zinc,
and nickel, which led the U.S. government to ban the melting or mass exportation of one-cent
and five-cent coins.[10]

In addition to being melted down for its bullion value, money that is considered to be "good"
tends to leave an economy through international trade. International traders are not bound by
legal-tender laws in the way that citizens of the issuing country are, so they will offer higher
value for good coins than for bad ones. The good coins may leave their country of origin to
become part of international trade, escaping that country's legal-tender laws and leaving the
"bad" money behind. This occurred in Britain during the period of the gold standard.

As an example of Gresham's law in action, say that new full-bodied silver coins and debased
silver coins with the same face value circulate concurrently. If the authorities set a law requiring
that all coins must be accepted by the populace at face value, buyers and debtors will only settle
their bills in debased silver coin (the "bad" money). Full-bodied coins (the "good" money) will
be held back as hoarders clip off a bit of each coin's silver content, converting the entire
full-bodied coinage into debased coinage. After all, why spend x ounces of silver on goods when
a smaller amount will suffice? Thus the bad chases out the good.

Now vary the example so that good money chases out the bad. Say the authorities promise
two-way conversion between all silver coins at face value. Everyone will bring debased silver
coins, the "bad" money, to the authorities for conversion into full-bodied coins, the "good"
money. In essence, they are bringing in x ounces of silver and leaving with x + y ounces. This
will continue until every bad coin has been deposited into the authority's vaults, so that only
the good money circulates.