
Using Quantitative Methods for Causal Inference in Social Policy Research

2016-2017

Instructors: Ankur Sarin [13 sessions] and Ambrish Dongre [15 sessions]
Guest Speakers: Kathan Shukla [2 sessions] and Anish Sugathan [2 sessions]
Credits: 1.5 units
Area: Public Systems Group

The search for causality in relationships between variables is as frustrating as it is necessary.


As elusive as they might be, claims about causality form the basis of much policy advice and advance our understanding of the factors influencing human development. Relatively recent advances in the development and application of quantitative methods for identifying and estimating causal relationships also make this an exciting and productive line of research.

The methods covered will include experiments, ‘natural’ experiments, instrumental variables, regression discontinuity designs, propensity score matching, and value-added models.

The course emphasizes close reading and discussion of research papers that are generally considered good representatives of these methods in application, as well as papers that lend themselves to ideas for future work.

The purpose of this course is to introduce, explain, and study the application of these techniques in the specific context of gathering evidence on different dimensions of education. Specific goals are to:

1. Introduce participants to methods at the cutting edge of quantitative empirical research

2. Learn to critique and build on existing research in the area

3. Learn to apply the methods in practice

4. Develop an independent research project based on the ideas in the course

In doing so, participants will also encounter the literature on substantive issues in education, including the relationship between resources and outcomes, comparisons between private and public schools, the use of affirmative action, the intergenerational transmission of inequality, and data-driven methods to improve accountability.

Relationship to other courses

While an introductory course in Econometrics is helpful, it is not a necessary prerequisite, since we will revise essential concepts as we go along. The course also differs significantly from Econometrics and from the required course in Research Methods, as it applies several tools and techniques not covered in those courses, and it is designed to be substantially more applied.

Evaluation:

Individual assignments: 30%
Project: 40%
Examination(s): 20%
Class preparedness: 10%

Sessions Outline

1. Theory and empirical research [2 Sessions]

Required:

- Chapter 2 of Methods Matter.

- Card, D., DellaVigna, S., & Malmendier, U. (2011). The role of theory in field experiments
(No. w17047). National Bureau of Economic Research.

- Shang, J., & Croson, R. (2009). A field experiment in charitable contribution: The impact
of social information on the voluntary provision of public goods. The Economic
Journal, 119(540), 1422-1439. FOCUS: analyze how theory is used in this paper, if
at all.

- Popper, K. (2014). Conjectures and refutations. Routledge. Section 10.1 “The Growth of
Knowledge: Theories and Problems” pages 215-22.

Review from earlier coursework:

- King, G., Keohane, R. O., & Verba, S. (1994). Designing social inquiry: Scientific inference
in qualitative research. Princeton University Press. Chapter 1 “The Science in Social
Science”

2. The ‘gold standard’ of experimental designs [5 Sessions]

2a. Experiments and their analysis

Chapters 3, 4 & 5 of Methods Matter

Krueger, A. (1999). “Experimental Estimates of Education Production Functions.” Quarterly Journal of Economics, 114(2), 497-532.

Howell, W. G., Wolf, P. J., Campbell, D. E., & Peterson, P. E. (2002). “School Vouchers and Academic Performance: Results from Three Randomized Field Trials.” Journal of Policy Analysis and Management, 21(2), 191-217. (Prateek)

Krueger, A. B. & Zhu, P. (2004). “Another Look At The New York City School Voucher
Experiment.” The American Behavioral Scientist, 47(5), 658-698 (note: analysis of
design complicated by blocking and weighting)

Myers, D. E. & Mayer, D. P. (2003). “Comments On: Another Look At The New York City
School Voucher Experiment.” Mathematica Policy Research.
Muralidharan, K., & Sundararaman, V. (2006). “Teacher Incentives in Developing Countries: Experimental Evidence from India.” Unpublished manuscript. (Deepak)
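
Illustration (not from the readings): with successful random assignment, the average treatment effect can be estimated by regressing the outcome on a treatment indicator. A minimal sketch in Python, on simulated data with illustrative variable names:

```python
# Sketch: ATE in a randomized experiment via OLS on a treatment dummy.
# Simulated data; variable names ("treated", "score") are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
treated = rng.integers(0, 2, size=n)             # random assignment
score = 50 + 5 * treated + rng.normal(0, 10, n)  # true effect = 5

X = sm.add_constant(treated)
fit = sm.OLS(score, X).fit(cov_type="HC1")       # heteroskedasticity-robust SEs
print(fit.params[1])                             # ~5, the experimental estimate
```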

2b. Sample Size and Clustered Designs

Chapters 6 and 7 of Methods Matter.

Greenberg, D., & Barnow, B. S. (2014). “Flaws in Evaluations of Social Programs: Illustrations from Randomized Controlled Trials.” Evaluation Review, 0193841X14545782.

Raudenbush, S. W., et al. (2007). “Strategies for Improving Precision in Group-Randomized Experiments.” Educational Evaluation and Policy Analysis, 29(1), 5-29.

Borman, G. D. et al. (2007). “Final Reading Outcomes of the National Randomized Field
Trial of Success for All.” American Educational Research Journal, 44(3), 701-731.
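
Illustration (simulated numbers, not from the readings): clustering inflates the variance of the estimated effect by the design effect 1 + (m − 1)ρ, which drives how the minimum detectable effect grows with the intraclass correlation ρ. A back-of-the-envelope sketch, assuming a two-sided test at α = .05 and an equal split of clusters across arms:

```python
# Back-of-the-envelope minimum detectable effect (in SD units) for a
# cluster-randomized trial; variance is inflated by the design effect
# 1 + (m - 1) * rho. Numbers below are illustrative.
from scipy.stats import norm

def mde(n_clusters, cluster_size, rho, alpha=0.05, power=0.80):
    deff = 1 + (cluster_size - 1) * rho            # design effect
    n_total = n_clusters * cluster_size
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # two-sided test
    return z * (4 * deff / n_total) ** 0.5         # equal split across arms

for rho in (0.0, 0.05, 0.20):
    print(rho, round(mde(n_clusters=40, cluster_size=25, rho=rho), 3))
```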

3. Issues with Data Collection and Management [3 Sessions]

Adcock, R., & Collier, D. (2001). “Measurement Validity: A Shared Standard for Qualitative and Quantitative Research.” American Political Science Review, 95(3), 529-546.

Field visit

4. The Idea of ‘Natural’ Experiments [2 Sessions]

Chapter 8 of Methods Matter.

Tyler, J. H., Murnane, R. J., & Willett, J. B. (2000). “Estimating The Labor Market Signaling
Value of the GED.” Quarterly Journal of Economics, 115(2), 431-468. (E-Resources)

Rao, G. (2013). Familiarity does not breed contempt: Diversity, discrimination and generosity
in Delhi schools. Job Market Paper.

Dynarski, S. M. (2003). “Does Aid Matter? Measuring The Effect Of Student Aid On College
Attendance And Completion.” The American Economic Review, 93(1), 279-288.

Additional Reading

Bhavnani, R. R. (2009). “Do Electoral Quotas Work after They Are Withdrawn? Evidence from a Natural Experiment in India.” American Political Science Review, 103(1), 23-35.

Sekhon, J. S., & Titiunik, R. (2012). “When Natural Experiments Are Neither Natural nor Experiments.” American Political Science Review, 106(1), 35-57.
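
Illustration (not from the readings): many natural-experiment papers reduce to a difference-in-differences comparison, i.e. the change in an exposed group net of the change in an unexposed group. A minimal sketch on simulated data, with illustrative variable names:

```python
# Two-group, two-period difference-in-differences sketch (simulated data).
# The coefficient on the interaction term is the DiD estimate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({"exposed": rng.integers(0, 2, n),   # group hit by the policy
                   "post": rng.integers(0, 2, n)})     # observed after the change
df["y"] = (1.0 * df["exposed"] + 2.0 * df["post"]
           + 1.5 * df["exposed"] * df["post"]          # true policy effect = 1.5
           + rng.normal(0, 1, n))

fit = smf.ols("y ~ exposed * post", data=df).fit(cov_type="HC1")
print(fit.params["exposed:post"])                      # DiD estimate, ~1.5
```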

5. Regression Discontinuity Designs [4 Sessions]

Chapter 9 of Methods Matter.


Ludwig, J. & Miller, D. (2007). “Does Head Start Improve Children's Life Chances? Evidence
from a Regression Discontinuity Design,” Quarterly Journal of Economics, 122(1),
159-208.

Angrist, J. D., & Lavy, V. (1999). “Using Maimonides’ Rule To Estimate The Effect Of Class
Size On Scholastic Achievement.” Quarterly Journal of Economics, 114(2), 533-575.

Urquiola, M., & Verhoogen, E. (2009). “Class-Size Caps, Sorting, and the Regression-Discontinuity Design.” American Economic Review, 99(1), 179-215.

Card, D., Mas, A., & Rothstein, J. (2008). “Tipping and the Dynamics of Segregation.” The Quarterly Journal of Economics, 123(1), 177-218.

Davis, L. W. (2008). The effect of driving restrictions on air quality in Mexico City. Journal of
Political Economy, 116(1), 38-81.

Doyle, J., Graves, J., Gruber, J., & Kleiner, S. (2015). “Measuring Returns to Hospital Care: Evidence from Ambulance Referral Patterns.” Journal of Political Economy, 123(1), 170-214.
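
Illustration (not from the readings): the estimation step behind a sharp RD is a local regression within a bandwidth of the cutoff, allowing the slope to differ on each side; the treatment effect is the jump at the threshold. A minimal sketch on simulated data (the bandwidth here is hand-picked; the papers above discuss principled choices):

```python
# Sharp regression-discontinuity sketch: local linear fit within a bandwidth,
# with separate slopes on each side of the cutoff (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
running = rng.uniform(-1, 1, n)                        # cutoff at zero
treat = (running >= 0).astype(float)
y = 2 * running + 0.5 * treat + rng.normal(0, 1, n)    # true jump = 0.5

h = 0.25                                               # bandwidth, hand-picked here
keep = np.abs(running) <= h
X = np.column_stack([treat, running, treat * running])[keep]
fit = sm.OLS(y[keep], sm.add_constant(X)).fit(cov_type="HC1")
print(fit.params[1])                                   # estimated jump at the cutoff
```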

6. Using panel data [4 Sessions]

Chapters 1 & 2 of Frees, E. W. (2004). Longitudinal and Panel Data: Analysis and Applications in the Social Sciences. Cambridge, UK; New York: Cambridge University Press.

Available at:
http://instruction.bus.wisc.edu/jfrees/jfreesbooks/Longitudinal%20and%20Panel%20Data/Book/Chapters/FreesFinal.pdf

Duflo, E. (2001). “Schooling and Labor Market Consequences of School Construction in Indonesia: Evidence from an Unusual Policy Experiment.” The American Economic Review, 91(4), 795-813.
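
Illustration (not from the readings): the workhorse for these sessions is the fixed-effects (within) estimator, which removes time-invariant unit-level confounders by demeaning each unit's data. A minimal sketch on a simulated panel, with the within transformation done by hand:

```python
# Fixed-effects (within) estimator sketch: demean within each unit, then OLS.
# Simulated panel where x is correlated with the unit effect, so pooled OLS
# would be biased but the within estimator recovers the true coefficient.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
units, periods = 200, 5
uid = np.repeat(np.arange(units), periods)
alpha = rng.normal(0, 2, units)[uid]                  # unit fixed effect
x = alpha + rng.normal(0, 1, units * periods)         # x correlated with the effect
y = 1.0 * x + alpha + rng.normal(0, 1, units * periods)  # true beta = 1

df = pd.DataFrame({"uid": uid, "x": x, "y": y})
within = df[["x", "y"]] - df.groupby("uid")[["x", "y"]].transform("mean")
fit = sm.OLS(within["y"], within["x"]).fit(
    cov_type="cluster", cov_kwds={"groups": df["uid"]})  # cluster by unit
print(fit.params)                                     # ~1.0
```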

7. Propensity score estimation [4 Sessions]

Chapter 12 and Chapter 13 of Methods Matter.


Diaz, J. J., & S. Handa. (2006). “An Assessment of Propensity Score Matching As a
Nonexperimental Impact Estimator: Evidence From Mexico's PROGRESA Program.”
Journal of Human Resources, 41(2), 319-345.

Frisancho Robles, V. C., & Krishna, K. (2012). “Affirmative Action in Higher Education in India: Targeting, Catch Up, and Mismatch.” NBER Working Paper No. 17727. http://www.nber.org/papers/w17727
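
Illustration (not from the readings): the mechanics behind Chapters 12 and 13 are a two-step procedure, estimating each unit's probability of treatment from observables and then matching treated units to similarly-scored controls. A minimal sketch on simulated data; a real application would add overlap checks and balance diagnostics:

```python
# Propensity-score matching sketch: logit scores, then one-nearest-neighbor
# matching of treated units to controls (simulated data, no diagnostics).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
n = 4000
x = rng.normal(0, 1, (n, 2))                          # observed confounders
p = 1 / (1 + np.exp(-(x[:, 0] + x[:, 1])))
d = rng.binomial(1, p)                                # treatment depends on x
y = 2.0 * d + x[:, 0] + x[:, 1] + rng.normal(0, 1, n) # true effect = 2

score = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]
controls = score[d == 0].reshape(-1, 1)
_, idx = NearestNeighbors(n_neighbors=1).fit(controls).kneighbors(
    score[d == 1].reshape(-1, 1))
att = (y[d == 1] - y[d == 0][idx.ravel()]).mean()
print(att)                                            # ~2 after matching
```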

8. Instrumental Variables [2 Sessions]

Chapters 10 & 11 of Methods Matter.

Becker, W. “Issues of Endogeneity and Instrumental Variables in Economic Education Research.” In An Online Handbook for the Use of Contemporary Econometrics in Economic Education Research.
http://www.aeaweb.org/committees/AEACEE/Econometrics_Handbook/mod2/mod2part1.pdf

Currie, J. & Moretti, E. (2003). “Mother’s Education and the Intergenerational Transmission
of Human Capital: Evidence from College Openings.” Quarterly Journal of
Economics, 118(4), 1495-1532.
Dee, T. S. (2004). “Are There Civic Returns to Education?” Journal of Public Economics,
88(9-10), 1697-1720.

IV in Experiments.
Matsudaira, J. (2008). “Mandatory summer school and student achievement.” Journal of
Econometrics 142, 829-850. (E-Resources)
Kling, J.R., Liebman, J.B., and Katz, L.F. (2007). “Experimental Analysis of Neighborhood
Effects.” Econometrica 75, 83-129.
Angrist, J. D., & Lavy, V. (1999). “Using Maimonides’ Rule To Estimate The Effect Of Class
Size On Scholastic Achievement.” Quarterly Journal of Economics, 114(2), 533-575.
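
Illustration (not from the readings): the two-stage least squares logic can be sketched by hand, regressing the endogenous variable on the instrument and then the outcome on the fitted values. The manual second-stage standard errors are invalid; packaged 2SLS routines correct them. Simulated data, illustrative names:

```python
# Hand-rolled two-stage least squares sketch (simulated data). The manual
# second-stage standard errors are NOT valid; this only exposes the logic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5000
u = rng.normal(0, 1, n)                               # unobserved confounder
z = rng.integers(0, 2, n).astype(float)               # instrument, e.g. a lottery
x = 0.5 * z + u + rng.normal(0, 1, n)                 # endogenous regressor
y = 1.0 * x + u + rng.normal(0, 1, n)                 # true effect = 1

first = sm.OLS(x, sm.add_constant(z)).fit()           # stage 1: x on z
second = sm.OLS(y, sm.add_constant(first.fittedvalues)).fit()  # stage 2
print(second.params[1])                               # ~1; OLS of y on x is biased
```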

9. Value-Added Models for Accountability [2 Sessions]

Rivkin, S., et al. (2007). “Value-Added Analysis and Education Policy.” Urban Institute. http://www.urban.org/publications/411577.html

McCaffrey, D. F., et al. (2004). Evaluating Value-Added Models for Teacher Accountability. RAND Corporation. http://www.rand.org/pubs/monographs/MG158.html
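
Illustration (not from the readings): a covariate-adjustment value-added model regresses current scores on prior scores plus teacher indicators and reads teacher effects off the indicator coefficients. A minimal sketch on simulated data; the readings discuss why unshrunk estimates like these can mislead:

```python
# Covariate-adjusted value-added sketch: current score on lagged score plus
# teacher dummies; the dummy coefficients are unshrunk value-added estimates.
# Simulated data with students randomly assigned to teachers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_teachers, per_teacher = 20, 50
teacher = np.repeat(np.arange(n_teachers), per_teacher)
effect = rng.normal(0, 2, n_teachers)                 # true teacher effects
prior = rng.normal(50, 10, len(teacher))
score = 0.8 * prior + effect[teacher] + rng.normal(0, 5, len(teacher))

df = pd.DataFrame({"score": score, "prior": prior,
                   "teacher": teacher.astype(str)})
fit = smf.ols("score ~ prior + C(teacher)", data=df).fit()
print(fit.params.filter(like="C(teacher)").head())    # relative to teacher "0"
```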

10. Publishing in the Field [2 Sessions]

Chapter 14 of Methods Matter.

Steele, J., Murnane, R.J., & Willett, J.B. (2010). “Do Financial Incentives Help Low-
Performing Schools Attract and Keep Academically Talented Teachers? Evidence
from California.” Journal of Policy Analysis and Management, 29(3), 451-478.

Steele, J., Murnane, R. J., & Willett, J. B. (2010). “Are Public Service Subsidies Good for the Public?” Education Week, July 2010.

11. Class presentations [2 Sessions]
