
17.806: Quantitative Research Methods IV


Spring 2013
Instructors: Danny Hidalgo & Teppei Yamamoto
TA: Chad Hazlett
Department of Political Science
MIT

Contact Information

         Office    Phone          Email              URL
Danny    E53-402   617-253-8078   dhidalgo@mit.edu
Teppei   E53-401   617-253-6959   teppei@mit.edu     http://web.mit.edu/teppei/www
Chad     E40-446   412-760-7544   hazlett@mit.edu

Logistics
Lectures: Mondays and Wednesdays 3:30-5:00pm, E53-438
Recitations: TBA
Danny's office hours: Make an appointment.
Teppei's office hours: Make an appointment.
Chad's office hours: Mondays 1:30-3:00, E40-446

Course Description

This course is the fourth and final course in the quantitative methods sequence at the MIT political
science department. The course covers various advanced topics in applied statistics, including those
that have only recently been developed in the methodological literature and are yet to be widely
applied in political science. The topics for this year are organized into three broad areas: (1)
Advanced causal inference, where we build on the basic materials covered earlier in the sequence
(17.802) and study more advanced topics; (2) statistical learning, where we provide an overview of
machine learning, one of the most active subfields in applied statistics in the past decade; and (3)
Bayesian inference and statistical computing, where we extend the model-based inference techniques
covered in the previous course of the sequence (17.804) and study more technically sophisticated
materials as well as applications in political science.

Prerequisites

There are three prerequisites for this course:


1. Mathematics: basic calculus and linear algebra.
2. Probability and statistics covered in 17.800, 17.802, 17.804.
3. Statistical computing: familiarity with at least one statistical software package.
For 1 and 3, refer to this year's math camp materials to see the minimum you need to know; see
https://stellar.mit.edu/S/project/mathprefresher/

Course Requirements

The final grades are based on the following items:


Problem sets (30%): A total of four problem sets will be given throughout the semester.
Problem sets will contain analytical, computational, and data analysis questions. Each
problem set counts equally toward the final grade. The following instructions apply to
all problem sets unless otherwise noted.
Neither late submission nor electronic submission will be accepted, except in special
circumstances, which must be discussed with us prior to the deadlines.
Working in groups is encouraged, but each student must submit their own writeup of
the solutions. In particular, you should not copy someone else's answers or computer
code. We also ask you to write the names of the other students with whom you solved
the problems on the first sheet of your solutions.
For analytical questions, you should include your intermediate steps, as well as comments
on those steps when appropriate. For data analysis questions, include annotated code
as part of your answers. All results should be presented so that they can be easily
understood.
Final project (50%): The final project will be a short research paper which typically applies
a method learned in this course to an empirical problem of your substantive interest.
We strongly encourage you to co-author a paper with another student in the class. By
co-authoring you will (1) learn how to effectively collaborate with someone else on your
research, which is very important in political science where most cutting-edge research is
collaborative (see any recent issue of APSR or AJPS!), and (2) be more likely to produce a good,
potentially publishable paper (multiple brains are usually better than one).
Unless you already have a concrete research project suitable for this course (e.g., from your
dissertation project), we recommend that you start with replicating the results in a published
article and then improve the original analysis using the methods learned in this course (or
elsewhere), methodologically or substantively. Oftentimes, gathering an original dataset is
too time-consuming and not suitable for a course project.
Students are expected to adhere to the following deadlines:

March 11: Turn in a one-page description of your proposed project. By this date
you need to have found your coauthor, acquired the data you plan to use, and completed
a descriptive analysis of the data (e.g. simple summary statistics, crosstabs and plots).
After submission, schedule a meeting with us to discuss your proposal.
April 17 and 22: Students will present interim reports on their projects in front
of the class. Each presentation should last about 10 minutes and will be followed by
a short Q&A session. Students should prepare electronic slides to accompany their
presentation. Performance on this presentation will be counted towards your total final
grade (see below).
May 15: Paper due. Please hand in one printed copy of your paper by 5pm, and also
email electronic copies to us by then. Your final paper should follow the standard format
of an academic journal article (except for extensive literature review and theory sections),
including a title, abstract, introductory and concluding sections, tables and/or figures
with appropriate captions, and references with a coherent citation style. You will be
penalized if any of these elements is missing from your submitted manuscript.
You should use this project as an exercise in writing a good scientific paper. We recommend that you closely follow the advice given in this article:
King, Gary. 2006. Publication, Publication. PS: Political Science and Politics, 39(1):
119-125.
Participation in Applied Paper Sessions (10%): Throughout the semester, we will have
four applied paper sessions, in which we discuss journal articles and working papers which
apply the methods covered in the lectures to empirical problems in political science and related
fields. For each paper (or set of papers in some cases) marked as required, one student will
deliver a 15-minute oral report which will walk the rest of the class through its content
and provide comments on its merits and weaknesses, followed by a class discussion. The other
students must read the required papers in advance and submit short written comments
on each of the papers by the previous day. These comments should be no longer than a
few sentences for each paper, optionally followed by a list of questions for class discussion.
Although the written comments will not be graded, your participation in the class discussion
will count towards the participation grade.
Midterm Project Presentation (10%)
In addition, there will be required readings for each lecture which students must complete
in advance in order to enhance their understanding. The lectures and applied paper sessions will
also have optional readings, which are listed in the course schedule below.

Course Website

You can find the Stellar website for this course at:
http://stellar.mit.edu/S/course/17/sp13/17.806/
We will distribute course materials, including readings, lecture slides and problem sets, on this
website.

Questions about Course Materials

In this course, we will utilize an online discussion board called Piazza. Below is an official blurb
from the Piazza team:
Piazza is a question-and-answer platform specifically designed to get you answers fast.
They support LaTeX, code formatting, embedding of images, and attaching of files.
The quicker you begin asking questions on Piazza (rather than via individual emails
to a classmate or one of us), the quicker you'll benefit from the collective knowledge
of your classmates and instructors. We encourage you to ask questions when you're
struggling to understand a concept ... See this New York Times article to learn more
about their founder's story:
http://www.nytimes.com/2011/07/04/technology/04piazza.html
In addition to recitation sessions and office hours, please use the Piazza Q & A board when asking
questions about lectures, problem sets, and other course materials. You can access the Piazza
course page either directly at the address below or via the link posted on the Stellar course website:
https://piazza.com/mit/spring2013/17806
Using Piazza will allow students to see other students' questions and learn from them. Both the TA
and the instructors will regularly check the board and answer questions posted, although everyone
else is also encouraged to contribute to the discussion. A student's respectful and constructive
participation on the forum will count toward his/her class participation grade. Do not email your
questions directly to the instructors or TAs (unless they are of a personal nature); we will not
answer them!

Recitation Sessions

Recitation sections will be held during the two weeks each problem set is available. Time and
location will be announced after the first week of class. The purpose of these sessions will be to
clarify theoretical material and assist with computing issues, particularly as needed to complete
each problem set. Attendance is strongly encouraged.

Notes on Computing

In this course we use R, an open-source statistical computing environment that is very widely used
in statistics and political science. (If you are already well versed in another statistical software, you
are free to use it, but you will be on your own.) Problem sets will contain computing and/or data
analysis exercises which can be solved with R but often require going beyond canned functions and
writing your own program.
In addition to the materials from the department's math prefresher (see above), there are many
resources for R targeted at both introductory and advanced levels, including:
Fox, John and Sanford Weisberg. 2010. An R Companion to Applied Regression. Sage
Publications. (focused on regression analysis)
Venables, W. N. and B. D. Ripley. 2002. Modern Applied Statistics with S, 4th ed. Springer.
(general statistics)
For specific questions about R, searching the CRAN website with appropriate keywords will
often yield satisfactory results.

Books

The course has no required or recommended textbooks. All the reading materials are listed in the
course schedule below and will be made available electronically.


Course Schedule

Part I: Advanced Causal Inference


1. Complex Experiments (2/11)
Required:
Imai, Kosuke, Gary King, and Clayton Nall. 2009. The Essential Role of Pair Matching
in Cluster-Randomized Experiments, with Application to the Mexican Universal Health
Insurance Evaluation. Statistical Science 24(1): 29-53.
Small, Dylan S, Thomas R Ten Have, and Paul R Rosenbaum. 2008. Randomization
Inference in a Group-Randomized Trial of Treatments for Depression. Journal of the
American Statistical Association 103(481): 271-79.
Optional:
Kalton, Graham, J. Michael Brick, and Thahn Le. 2005. Estimating Components of
Design Effects for Use in Sample Design. In Household Sample Surveys in Developing
and Transition Countries, New York: United Nations.
Bloom, H. 2008. The Core Analytics of Randomized Experiments for Social Research.
In The SAGE Handbook of Social Research Methods, eds. Pertti Alasuutar, Leonard
Bickman, and Julia Brannen. London: SAGE.
Bruhn, Miriam, and David McKenzie. 2009. In Pursuit of Balance: Randomization
in Practice in Development Field Experiments. American Economic Journal: Applied
Economics 1(4): 200-232.
2. Causal Inference with Interference between Units (2/13)
Required:
Bowers, Jake, Mark M Fredrickson, and Costas Panagopoulos. Forthcoming. Reasoning About Interference Between Units: a General Framework. Political Analysis.
Hudgens, Michael G, and M Elizabeth Halloran. 2008. Toward Causal Inference with
Interference. Journal of the American Statistical Association 103(482): 832-42.
Optional:
Tchetgen, Eric, and Tyler VanderWeele. 2012. On Causal Inference in the Presence of
Interference. Statistical Methods in Medical Research 21(1): 55-75.
Aronow, Peter, and Cyrus Samii. 2012. Estimating Average Causal Effects Under
General Interference. Working Paper.
Hong, Guanglei, and Stephen W Raudenbush. 2006. Evaluating Kindergarten Retention Policy: A Case Study of Causal Inference for Multilevel Observational Data.
Journal of the American Statistical Association 101(475): 901-910.

3. Multiple Comparisons (2/19)


Required:
Benjamini, Yoav, and Yosef Hochberg. 1995. Controlling the False Discovery Rate: a
Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical
Society, Series B (Methodological) 57(1): 289-300.
Optional:
Romano, Joseph P, and Michael Wolf. 2005. Stepwise Multiple Testing as Formalized
Data Snooping. Econometrica 73(4): 1237-82.
Schochet, Peter Z. 2008. Guidelines for Multiple Testing in Impact Evaluations.
National Center for Education Evaluation and Regional Assistance.
Dudoit, Sandrine, Mark J Van Der Laan, and Katherine Pollard. 2004. Multiple
Testing. Part I. Single-Step Procedures for Control of General Type I Error Rates.
Statistical Applications in Genetics and Molecular Biology 3(1).
Caughey, Devin, Allan Dafoe, and Jason Seawright. 2012. Testing Elaborate Theories
in Political Science: Nonparametric Combination of Dependent Tests. Working Paper.
Humphreys, Macartan, Raul Sanchez de la Sierra, and Peter van der Windt. 2013.
Fishing, Commitment, and Communication: a Proposal for Comprehensive Nonbinding
Research Registration. Political Analysis 21(1): 1-20.
4. Applied paper session (2/20)
Required:
Sinclair, Betsy, Margaret McConnell, and Donald Green. 2012. Detecting Spillover
Effects: Design and Analysis of Multilevel Experiments. American Journal of Political
Science 56(4): 1055-69.
Casey, Katherine, Rachel Glennerster, and Edward Miguel. 2012. Reshaping Institutions: Evidence on Aid Impacts Using a Preanalysis Plan. The Quarterly Journal of
Economics 127(4): 1755-1812.
(Optional) Centola, Damon. 2010. The Spread of Behavior in an Online Social Network
Experiment. Science 329(5996): 1194-97.
5. Causal Diagrams (2/25)
Required:
Pearl, Judea. 2010. The Foundations of Causal Inference. Sociological Methodology,
40(1): 75-149.
Optional:
Pearl, Judea. 1995. Causal Diagrams for Empirical Research (with discussions).
Biometrika, 82(4): 669-710.
Pearl, Judea. 2009. Causality, 2nd ed. Cambridge University Press.
6. Partial Identification (2/27)
Required:

Chapter 7 in Manski, Charles F. 2007. Identification for Prediction and Decision, Harvard University Press.
Optional:
The rest of Manski, 2007.
Balke, Alexander and Judea Pearl. 1997. Bounds on Treatment Effects from Studies
with Imperfect Compliance. Journal of the American Statistical Association, 92(439):
1171-1176.
Imai, Kosuke. 2008. Sharp Bounds on the Causal Effects in Randomized Experiments
with Truncation-by-death. Statistics and Probability Letters, 78: 144-149.
Imai, Kosuke and Teppei Yamamoto. 2010. Causal Inference with Differential Measurement Error: Nonparametric Identification and Sensitivity Analysis. American Journal
of Political Science, 54(2): 543-560.
7. Causal Mediation (3/4)
Required:
Imai, Kosuke, Luke Keele, Dustin Tingley and Teppei Yamamoto. 2011. Unpacking
the Black Box of Causality: Learning about Causal Mechanisms from Experimental and
Observational Studies. American Political Science Review, 105(4): 765-789.
Optional:
Imai, Kosuke, Luke Keele and Teppei Yamamoto. 2010. Identification, Inference, and
Sensitivity Analysis for Causal Mediation Effects. Statistical Science, 25(1): 51-71.
Pearl, Judea. 2001. Direct and Indirect Effects. In Proceedings of the Seventeenth
Conference on Uncertainty in Artificial Intelligence (J. S. Breese and D. Koller, eds.),
411-420.
Robins, James M. and Sander Greenland. 1992. Identifiability and Exchangeability
for Direct and Indirect Effects. Epidemiology, 3: 143-155.
8. Causal Attribution (3/6)
Required:
Yamamoto, Teppei. 2012. Understanding the Past: Statistical Analysis of Causal
Attribution. American Journal of Political Science, 56(1): 237-256.
Optional:
Tian, Jin and Judea Pearl. 2000. Probabilities of Causation: Bounds and Identification. Annals of Mathematics and Artificial Intelligence, 28(1-4): 287-313.
9. Experimental Approaches for Measurement (3/11)
Required:
Payne, B. K., Cheng, C. M., Govorun, O., & Stewart, B. D. (2005). An Inkblot for
Attitudes: Affect Misattribution as Implicit Measurement. Journal of Personality and
Social Psychology, 89(3), 277.
Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. (1998). Measuring Individual
Differences in Implicit Cognition: the Implicit Association Test. Journal of Personality
and Social Psychology, 74(6), 1464.

Blair, G., Imai, K., & Lyall, J. (2012). Comparing and Combining List and Endorsement Experiments: Evidence from Afghanistan. Working Paper.
Optional:
Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The Implicit Association
Test at Age 7: A Methodological and Conceptual Review. Automatic processes in
social thinking and behavior, 265-292.
Greenwald, A. G., Smith, C. T., Sriram, N., Bar-Anan, Y., & Nosek, B. A. (2009).
Implicit Race Attitudes Predicted Vote in the 2008 US Presidential Election. Analyses
of Social Issues and Public Policy, 9(1), 241-253.
10. Applied paper session (3/13)
Required: Controversy on Suicide Terrorism
Pape, Robert A. 2003. The Strategic Logic of Suicide Terrorism. American Political
Science Review.
Ashworth, Scott, Joshua D. Clinton, Adam Meirowitz, and Kristopher W. Ramsay.
2008. Design, Inference, and the Strategic Logic of Suicide Terrorism. American
Political Science Review, 102(2): 269-273.
Pape, Robert A. 2008. Methods and Findings in the Study of Suicide Terrorism.
American Political Science Review, 102(2): 275-277.
Required: Voter Registration and Turnout
Hanmer, Michael J. 2007. An Alternative Approach to Estimating Who is Most Likely
to Respond to Changes in Registration Laws. Political Behavior, 29: 1-30.
Glynn, Adam N. and Kevin M. Quinn. 2011. Why Process Matters for Causal Inference. Political Analysis, 19: 273-286.
Optional: Ecological Inference
Duncan, O. and B. Davis. 1953. An Alternative to Ecological Correlation. American
Sociological Review, 18: 665-666.
Cho, Wendy K. Tam and Charles F. Manski. 2008. Cross-Level/Ecological Inference.
In Oxford Handbook of Political Methodology, Ch. 22.
Cross, Philip J. and Charles F. Manski. 2002. Regressions, Short and Long. Econometrica, 70(1): 357-368. (Technical; see the working paper version for an empirical
application)
Optional: Causal Mediation Analysis Applications
Becher, Michael and Michael Donnelly. 2012. Economic Performance, Individual Evaluations and the Vote: Investigating the Causal Mechanism. Working Paper. Princeton
University.
Tingley, Dustin and Michael Tomz. 2012. How Does the UN Security Council Influence
Public Opinion? Working Paper. Harvard University.

Part II: Statistical Learning


1. Introduction to Learning and Regularization (3/18)
Required:
Chapter 2 and 3 in Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2009.
The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed.
Springer.
Optional:
Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Springer.
Berk, Richard A. 2008. Statistical Learning From a Regression Perspective. Springer.
2. Model Assessment and Selection, Classification Trees (3/20)
Required:
Chapter 7 in Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. Springer.
Pp. 305-313 in Elements of Statistical Learning.
Optional:
King, Gary, and Langche Zeng. 2001. Improving Forecasts of State Failure. World
Politics 53(4): 623-58.
Ulfelder, Jay. 2012. Forecasting Political Instability: Results From a Tournament of
Methods. SSRN Working Paper.
3. Learning Algorithms (4/1)
Required:
Hainmueller, Jens, and Chad Hazlett. Kernel Regularized Least Squares: Moving
Beyond Linearity and Additivity Without Sacrificing Interpretability. Under Review.
Tommi Jaakkola. Lecture 7. Course materials for 6.867 Machine Learning, Fall 2006.
MIT OpenCourseWare (http://ocw.mit.edu/), Massachusetts Institute of Technology.
Downloaded July 2012.
Optional:
Sections 6.4-6.4.2 in Bishop, Christopher M. 2006. Pattern Recognition and Machine
Learning. Springer.
Yaser Abu-Mostafa, Lecture 14: Support Vector Machines. May 2012 lecture for
Learning From Data at Caltech. Available at
http://www.youtube.com/watch?v=eHsErlPJWUU.
Yaser Abu-Mostafa, Lecture 15: Kernel Methods. May 2012 lecture for Learning
From Data at Caltech. Available at
http://www.youtube.com/watch?v=XUj5JbQihlU.
Two tougher but very good background pieces on SVM, kernels, etc.
Burges, C. J. (1998). A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2), 121-167.
Jäkel, F., Schölkopf, B., & Wichmann, F. A. (2007). A Tutorial on Kernel Methods
for Categorization. Journal of Mathematical Psychology, 51(6), 343-358.

4. Ensemble Learning (4/3)


Required:
Hillard, Dustin, Stephen Purpura, and John Wilkerson. 2008. Computer-Assisted
Topic Classification for Mixed-Methods Social Science Research. Journal of Information Technology & Politics 4(4): 31-46.
Polley, Eric C, and Mark J Van Der Laan. 2010. Super Learner in Prediction.
Working Paper. University of California, Berkeley.
Optional:
Montgomery, J., F. Hollenbach, and M. Ward. 2012. Improving Predictions Using
Ensemble Bayesian Model Averaging. Political Analysis 20(3): 271-91.
Van Der Laan, Mark J, and Sherri Rose. 2011. Targeted Learning: Causal Inference for
Observational and Experimental Data. 1st ed. Springer.
5. Applied paper session (4/8)
Required:
Diermeier, Daniel, Jean-François Godbout, Bei Yu, and Stefan Kaufmann. 2011. Language and Ideology in Congress. British Journal of Political Science 42(1): 31-55.
Stewart, Brandon M, and Yuri M Zhukov. 2009. Use of Force and Civil-Military
Relations in Russia: an Automated Content Analysis. Small Wars & Insurgencies
20(2): 319-43.
Blair, Robert, Christopher Blattman, and Alexandra Hartman. 2012. Predicting
Local-Level Violence. Working Paper.
6. Unsupervised Learning: Guest lecture by Luke Miratrix (4/10)

Midterm Project Presentations (4/17, 4/22)


Part III: Bayesian Inference and Statistical Computing
1. Advanced Simulation Algorithms (4/24)
Required:
Jackman, Simon. 2000. Estimation and Inference via Bayesian Simulation: An Introduction to Markov Chain Monte Carlo. American Journal of Political Science, 44(2):
375-404.
Chapter 12.3 in Gelman, Andrew, John B. Carlin, Hal S. Stern and Donald B. Rubin.
2004. Bayesian Data Analysis, 2nd ed. Chapman & Hall/CRC.
Optional:
Casella, George and Edward I. George, 1992, Explaining the Gibbs Sampler, The
American Statistician, 46(3), 167-174.
Chib, Siddhartha and Edward Greenberg, 1995, Understanding the Metropolis-Hastings
Algorithm, The American Statistician, 49(4), 327-335.
Tanner, Martin A. and W. H. Wong. 1987. The Calculation of Posterior Distributions by Data Augmentation (with discussion). Journal of the American Statistical
Association, 82: 528-550.

Wei, Greg C. G. and Martin A. Tanner. 1990. A Monte Carlo Implementation of the
EM Algorithm and the Poor Man's Data Augmentation Algorithms. Journal of the
American Statistical Association, 85(411): 699-704.
Liu, Jun S. 2004. Monte Carlo Strategies in Scientific Computing. Springer.
2. Discrete Choice Analysis (4/29)
Required:
Glasgow, Garrett. 2001. Mixed Logit Models for Multiparty Elections. Political
Analysis, 9(1): 116-136.
Train, Kenneth E. 2001. A Comparison of Hierarchical Bayes and Maximum Simulated
Likelihood for Mixed Logit. Working Paper. University of California, Berkeley.
Optional:
Albert, J. and S. Chib. 1993. Bayesian Analysis of Binary and Polychotomous Data.
Journal of the American Statistical Association, 88: 669-679.
Allenby, Greg M. and Peter Rossi. 1999. Marketing Models of Consumer Heterogeneity.
Journal of Econometrics, 89(1-2): 57-78.
Train, Kenneth E. 2009. Discrete Choice Methods with Simulation, 2nd ed. Cambridge
University Press.
Yamamoto, Teppei. 2010. A Multinomial Response Model for Varying Choice Sets,
with Application to Partially Contested Multiparty Elections. Working Paper. Massachusetts Institute of Technology.
3. Bayesian Measurement Techniques (5/1)
Required:
Chapter 9 in Jackman, Simon. 2009. Bayesian Analysis for the Social Sciences. Wiley.
Optional:
Bock, R. D. and M. Aitkin. 1981. Marginal Maximum Likelihood Estimation of Item
Parameters: Application of an EM Algorithm. Psychometrika, 46: 443-459.
Bafumi, Joseph, Andrew Gelman, David K. Park and Noah Kaplan. 2005. Practical
Issues in Implementing and Understanding Bayesian Ideal Point Estimation. Political
Analysis, 13: 171-187.
4. Applied paper session (5/6)
Required: EM Algorithm Application
Slapin, Jonathan B. and Sven-Oliver Proksch. 2008. A Scaling Model for Estimating
Time-Series Party Positions from Texts. American Journal of Political Science, 52(3):
705-722.
Required: Ideal Point Estimation
Bonica, Adam. 2012. Ideology and Interests in the Political Marketplace. American
Journal of Political Science, forthcoming.
Optional: Measuring Democracy

Treier, Shawn and Simon Jackman. 2008. Democracy as a Latent Variable. American
Journal of Political Science, 52(1): 201-217.
Pemstein, Daniel, Stephen A. Meserve and James Melton. 2010. Democratic Compromise: A Latent Variable Analysis of Ten Measures of Regime Type. Political Analysis,
18(4): 426-449.
Optional: More Ideal Point Estimation
Martin, Andrew D. and Kevin M. Quinn. 2002. Dynamic Ideal Point Estimation via
Markov Chain Monte Carlo for the U.S. Supreme Court, 19531999. Political Analysis,
10: 134153.
Clinton, Joshua, Simon Jackman and Douglas Rivers. 2004. The Statistical Analysis
of Roll Call Data. American Political Science Review, 98(2): 355-370.
Lauderdale, Benjamin E. 2010. Unpredictable Voters in Ideal Point Estimation. Political Analysis, 18: 151-171.
Optional: Bayesian Approaches to Ecological Inference
Greiner, D. James and Kevin M. Quinn. 2009. R × C Ecological Inference: Bounds,
Correlations, Flexibility and Transparency of Assumptions. Journal of the Royal Statistical Society, Series A, 172(1): 76-81.
Imai, Kosuke, Ying Lu and Aaron Strauss. 2008. Bayesian and Likelihood Inference
for 2 × 2 Ecological Tables: An Incomplete-Data Approach. Political Analysis, 16:
41-69.
5. Multi-level Regression and Post-Stratification: Guest lecture by Chris Warshaw (5/8)
6. Missing Data: Guest lecture by James Honaker (5/13)

Bonus Sessions
1. Web Scraping (5/15)

