
Econometric Approaches to Causal Inference: Difference-in-Differences and Instrumental Variables

Graduate Methods Master Class
Department of Government, Harvard University
February 25, 2005

Overview: diff-in-diffs and IV

Difference-in-differences
- Data: randomized experiment or natural experiment
- Problem: we cannot observe the counterfactual (what if the treatment group had not received treatment?)

Instrumental variables
- Data: observational data
- Problem: omitted variable bias (OVB), selection bias, simultaneous causality

Diff-in-diffs: basic idea


- Suppose we randomly assign treatment to some units (or nature assigns treatment as if by random assignment)
- To estimate the treatment effect, we could just compare the treated units before and after treatment
- However, we might pick up the effects of other factors that changed around the time of treatment
- Therefore, we use a control group to difference out these confounding factors and isolate the treatment effect

Diff-in-diffs: without regression


One approach is simply to take the mean value of each group's outcome before and after treatment:

                    Before   After
  Treatment group    T_B      T_A
  Control group      C_B      C_A

and then calculate the difference-in-differences of the means:

  Treatment effect = (T_A - T_B) - (C_A - C_B)
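A minimal sketch of this calculation in Python, using made-up cell means (all numbers are purely illustrative):

```python
# Diff-in-diffs from group means, with hypothetical numbers.
import pandas as pd

means = pd.DataFrame({
    "treat": [0, 0, 1, 1],
    "after": [0, 1, 0, 1],
    "y":     [10.0, 11.0, 12.0, 16.0],  # hypothetical cell means
}).set_index(["treat", "after"])["y"]

# (T_A - T_B) - (C_A - C_B)
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(did)  # (16 - 12) - (11 - 10) = 3
```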

Diff-in-diffs: with regression


We can get the same result in a regression framework (which allows us to add regression controls, if needed):

  y_i = β0 + β1 treat_i + β2 after_i + β3 (treat_i × after_i) + e_i

  where treat_i = 1 if in treatment group, 0 if in control group
        after_i = 1 if after treatment, 0 if before treatment

The coefficient on the interaction term (β3) gives us the difference-in-differences estimate of the treatment effect
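A minimal sketch of this regression in Python on simulated data (the true coefficients here are invented for illustration):

```python
# Diff-in-diffs as an OLS regression with an interaction term,
# on simulated data with a true treatment effect of 3.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for treat in (0, 1):
    for after in (0, 1):
        y = 10 + 2 * treat + 1 * after + 3 * treat * after + rng.normal(0, 1, 500)
        rows.append(pd.DataFrame({"y": y, "treat": treat, "after": after}))
df = pd.concat(rows, ignore_index=True)

# The coefficient on treat:after is the diff-in-diffs estimate (should be near 3)
res = smf.ols("y ~ treat + after + treat:after", data=df).fit()
print(res.params["treat:after"])
```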

Diff-in-diffs: with regression


To see this, plug zeros and ones into the regression equation:

  y_i = β0 + β1 treat_i + β2 after_i + β3 (treat_i × after_i) + e_i

                    Before     After                Difference
  Treatment group   β0 + β1    β0 + β1 + β2 + β3    β2 + β3
  Control group     β0         β0 + β2              β2
  Difference        β1         β1 + β3              β3

Diff-in-diffs: example
Card and Krueger (1994)
- What is the effect of increasing the minimum wage on employment at fast food restaurants?
- Confounding factor: national recession
- Treatment group = NJ; Control group = PA
- Before = Feb 92; After = Nov 92

  FTE_i = β0 + β1 NJ_i + β2 Nov92_i + β3 (NJ_i × Nov92_i) + e_i

Diff-in-diffs: example
  FTE_i = β0 + β1 NJ_i + β2 Nov92_i + β3 (NJ_i × Nov92_i) + e_i

  Estimates: β0 = 23.33, β1 = -2.89, β2 = -2.16, β3 = 2.75

[Figure: FTE over time for the treatment group (NJ) and the control group (PA). PA falls from 23.33 to 21.17; NJ rises from 20.44 to 21.03.]

Treatment effect of minimum wage increase = +2.75 FTE
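As a check, plugging the four cell values from the figure into the difference-in-differences formula reproduces the interaction coefficient:

  Treatment effect = (21.03 - 20.44) - (21.17 - 23.33) = 0.59 - (-2.16) = 2.75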

Diff-in-diff-in-diffs
A difference-in-difference-in-differences (DDD) model allows us to study the effect of treatment on different groups
- If we are concerned that our estimated treatment effect might be spurious, a common robustness test is to introduce a comparison group that should not be affected by the treatment
- For example, if we want to know how welfare reform has affected labor force participation, we can use a DD model that takes advantage of policy variation across states, and then use a DDD model to study how the policy has affected single versus married women (see the sketch below)
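In regression form, the DDD estimate is the coefficient on the triple interaction. A sketch of the specification, with generic variable names (not the paper's exact notation):

  y_i = β0 + β1 treat_i + β2 after_i + β3 group_i + β4 (treat_i × after_i) + β5 (treat_i × group_i) + β6 (after_i × group_i) + β7 (treat_i × after_i × group_i) + e_i

where group_i flags the subgroup expected to respond to the treatment (e.g. single women), and β7 is the DDD estimate.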

Diff-in-diffs: drawbacks
- Diff-in-diff estimation is only appropriate if treatment is random; in the social sciences, however, the method is usually applied to data from natural experiments, raising questions about whether treatment is truly random
- Also, diff-in-diffs typically use several years of serially correlated data but ignore the resulting inconsistency of the standard errors (see Bertrand, Duflo, and Mullainathan 2004; one common fix is sketched below)
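One common response to this critique is to cluster the standard errors at the group (e.g. state) level. A minimal sketch on simulated state-level data (all names and numbers are illustrative, not from the paper):

```python
# Cluster-robust standard errors for a diff-in-diffs regression,
# clustering at the state level (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
state = np.repeat(np.arange(20), 50)          # 20 states, 50 obs each
treat = (state < 10).astype(int)              # half the states are treated
after = rng.integers(0, 2, size=state.size)
shock = rng.normal(0, 1, 20)[state]           # state-level shock -> within-state correlation
y = 1 + treat + after + 2 * treat * after + shock + rng.normal(0, 1, state.size)
panel = pd.DataFrame({"y": y, "treat": treat, "after": after, "state": state})

res = smf.ols("y ~ treat * after", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["state"]}
)
print(res.bse)  # cluster-robust standard errors
```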

IV: basic idea


- Suppose we want to estimate a treatment effect using observational data
- The OLS estimator is biased and inconsistent (due to correlation between the regressor and the error term) if there is
  - omitted variable bias
  - selection bias
  - simultaneous causality
- If a direct solution (e.g. including the omitted variable) is not available, instrumental variables regression offers an alternative way to obtain a consistent estimator

IV: basic idea


- Consider the following regression model:

  y_i = β0 + β1 X_i + e_i

- Variation in the endogenous regressor X_i has two parts
  - the part that is uncorrelated with the error ("good" variation)
  - the part that is correlated with the error ("bad" variation)
- The basic idea behind instrumental variables regression is to isolate the good variation and disregard the bad variation

IV: conditions for a valid instrument


- The first step is to identify a valid instrument
- A variable Z_i is a valid instrument for the endogenous regressor X_i if it satisfies two conditions:
  1. Relevance: corr(Z_i, X_i) ≠ 0
  2. Exogeneity: corr(Z_i, e_i) = 0

IV: two-stage least squares


- The most common IV method is two-stage least squares (2SLS)
- Stage 1: Decompose X_i into the component that can be predicted by Z_i and the problematic component

  X_i = π0 + π1 Z_i + v_i

- Stage 2: Use the predicted value of X_i from the first-stage regression to estimate its effect on y_i

  y_i = β0 + β1 X-hat_i + u_i

- Note: software packages like Stata perform the two stages in a single regression, producing the correct standard errors (a by-hand sketch of the two stages follows)
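A minimal by-hand sketch of the two stages on simulated data (point estimates only; as noted above, standard errors from a manual second stage are not correct):

```python
# Two-stage least squares by hand on simulated data.
# True effect of x on y is 1.5; OLS is biased because x is correlated with u.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # unobserved error component
x = 0.8 * z + u + rng.normal(size=n)      # endogenous regressor ("bad" variation via u)
y = 2.0 + 1.5 * x + 3.0 * u

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
print("OLS :", ols(np.column_stack([ones, x]), y)[1])       # biased upward

Z = np.column_stack([ones, z])
x_hat = Z @ ols(Z, x)                                       # stage 1: project x onto z
print("2SLS:", ols(np.column_stack([ones, x_hat]), y)[1])   # close to 1.5
```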

IV: example
Levitt (1997): what is the effect of increasing the police force on the crime rate?
- This is a classic case of simultaneous causality (high-crime areas tend to need large police forces), resulting in an incorrectly signed (positive) coefficient
- To address this problem, Levitt uses the timing of mayoral and gubernatorial elections as an instrumental variable
- Is this instrument valid?
  - Relevance: police force increases in election years
  - Exogeneity: election cycles are pre-determined

IV: example
Two-stage least squares:
Stage 1: Decompose police hires into the component that can be predicted by the electoral cycle and the problematic component

  police_i = π0 + π1 election_i + v_i

Stage 2: Use the predicted value of police_i from the first-stage regression to estimate its effect on crime_i

  crime_i = β0 + β1 police-hat_i + u_i

Finding: an increased police force reduces violent crime (but has little effect on property crime)

IV: number of instruments


- There must be at least as many instruments as endogenous regressors
- Let k = number of endogenous regressors, m = number of instruments
- The regression coefficients are
  - exactly identified if m = k (OK)
  - overidentified if m > k (OK)
  - underidentified if m < k (not OK)

IV: testing instrument relevance


- How do we know if our instruments are valid?
- Recall our first condition for a valid instrument:
  1. Relevance: corr(Z_i, X_i) ≠ 0
- Stock and Watson's rule of thumb: the first-stage F-statistic testing the hypothesis that the coefficients on the instruments are jointly zero should be at least 10 (for a single endogenous regressor); a sketch of this check follows
- A small F-statistic means the instruments are weak (they explain little of the variation in X) and the estimator is biased
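A minimal sketch of the first-stage F check on simulated data (the instrument here is strong by construction, so F comes out large):

```python
# First-stage F-statistic for instrument relevance (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)          # first stage: x depends on z

stage1 = sm.OLS(x, sm.add_constant(z)).fit()
print(stage1.fvalue)                      # rule of thumb: want F > 10
```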

IV: testing instrument exogeneity


- Recall our second condition for a valid instrument:
  2. Exogeneity: corr(Z_i, e_i) = 0
- If you have the same number of instruments and endogenous regressors, it is impossible to test for instrument exogeneity
- But if you have more instruments than regressors, we can use an overidentifying restrictions test: regress the residuals from the 2SLS regression on the instruments (and any exogenous control variables) and test whether the coefficients on the instruments are all zero (see the sketch below)
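A minimal by-hand sketch of this test with two instruments and one endogenous regressor, on simulated data; the statistic n·R² is compared to a chi-squared distribution with m − k degrees of freedom:

```python
# Overidentifying restrictions test (Sargan-style), by hand, on simulated data
# with m = 2 instruments and k = 1 endogenous regressor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 5000
z1, z2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)
x = 0.6 * z1 + 0.6 * z2 + u + rng.normal(size=n)
y = 1.0 + 2.0 * x + 2.0 * u                 # both instruments truly exogenous

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
Z = np.column_stack([ones, z1, z2])
x_hat = Z @ ols(Z, x)                       # first stage
b = ols(np.column_stack([ones, x_hat]), y)  # 2SLS coefficients
resid = y - np.column_stack([ones, x]) @ b  # residuals use the actual x, not x_hat

g = ols(Z, resid)                           # regress 2SLS residuals on instruments
r2 = 1 - np.sum((resid - Z @ g) ** 2) / np.sum((resid - resid.mean()) ** 2)
J = n * r2                                  # test statistic
print(J, stats.chi2.sf(J, df=1))            # df = m - k = 2 - 1
```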

IV: drawbacks
- It can be difficult to find an instrument that is both relevant (not weak) and exogenous
- Assessment of instrument exogeneity can be highly subjective when the coefficients are exactly identified
- IV can be difficult to explain to those who are unfamiliar with it

Sources
Stock and Watson, Introduction to Econometrics

Bertrand, Duflo, and Mullainathan, "How Much Should We Trust Differences-in-Differences Estimates?" Quarterly Journal of Economics, February 2004

Card and Krueger, "Minimum Wages and Employment: A Case Study of the Fast Food Industry in New Jersey and Pennsylvania," American Economic Review, September 1994

Angrist and Krueger, "Instrumental Variables and the Search for Identification: From Supply and Demand to Natural Experiments," Journal of Economic Perspectives, Fall 2001

Levitt, "Using Electoral Cycles in Police Hiring to Estimate the Effect of Police on Crime," American Economic Review, June 1997
