Time: 8:30am - 9:50am Monday and Wednesday
Weekly Activities: Students will read 30-40 pages per week, complete a weekly writing or analysis challenge in pairs, and document their work in a short blog post. The final consists of a project and a short paper. The class will also be coordinated with an optional speaker series featuring leading experimentalists in industry and public service.
Grading: Class participation: 20%. Weekly assignments: 20%. Midterm experiment design: 20%. Final project: 40%.
Prerequisites: POL 345/SOC 301 or, with permission of the instructor, other background in statistics or data analysis.
Meeting topics and assigned readings
Ninety-minute class sessions will alternate between discussions of key issues that include student-led components and workshops that focus on methods and project feedback.
Required readings are starred; the others are recommended and will be presented by students who have chosen that week. Grading policies and specific instructions for assignments will be handed out on the first day of class and will be available on the course website.
This syllabus is a living document: the readings are subject to change as the course progresses, so please check back for the latest version.
Part I: Understanding Field Experiments
Lecture & Discussion Field Experiments in Policy, Products, and Social Science (Feb 5)
Workshop Introduction to Randomized Trials (Feb 7)
- * William R. Shadish, Thomas D. Cook, and Donald T. Campbell. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton-Mifflin, 2002. (chapter 1)
- Salganik, M. J., & Watts, D. J. (2009). Web-Based Experiments for the Study of Collective Social Dynamics in Cultural Markets. Topics in Cognitive Science, 1 (3), 439–468.
- Rost, K., Stahel, L., & Frey, B. S. (2016). Digital social norm enforcement: Online firestorms in social media. PloS one, 11 (6), e0155923
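As a preview of the Feb 7 workshop on randomized trials, here is a minimal sketch in Python of its core ingredients (treatments, random assignment, outcomes); the data are simulated and the effect size is illustrative:

```python
import random
import statistics

def run_trial(participants, treatment_effect=1.0, seed=42):
    """Randomly assign each participant to treatment or control,
    simulate an outcome, and estimate the average treatment effect."""
    rng = random.Random(seed)
    # Complete random assignment: shuffle the list, then split it in half.
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    treatment, control = shuffled[:half], shuffled[half:]

    # Simulated outcomes: baseline noise, plus the effect for treated units.
    def outcome(treated):
        return rng.gauss(0, 1) + (treatment_effect if treated else 0)

    y_t = [outcome(True) for _ in treatment]
    y_c = [outcome(False) for _ in control]

    # Difference-in-means estimator of the average treatment effect.
    return statistics.mean(y_t) - statistics.mean(y_c)

estimate = run_trial(list(range(1000)))
```

With 500 units per arm, the difference in means recovers the simulated effect of 1.0 up to sampling noise.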
- This workshop session reviews basic concepts needed for the class and introduces the process of conducting a randomized trial, including treatments, random assignment, and outcomes. Students will conduct the class's first randomized trial in this session and will be introduced to the notation and conventions used in the class.
Lecture & Discussion Studying Online Behavior at Scale (Feb 12)
Workshop Research Ethics (Feb 14)
- * Kohavi, R., Deng, A., Frasca, B., Walker, T., Xu, Y., & Pohlmann, N. (2013). Online controlled experiments at large scale. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 1168–1176). ACM.
- * Scheiber, N. (2017, April 2). How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons. The New York Times.
- * Matias, J. N., & Mou, M. Community-Led Experiments in Platform Governance. (under review).
- Ge, Y., Knittel, C. R., MacKenzie, D., & Zoepf, S. (2016). Racial and gender discrimination in transportation network companies (No. w22776). National Bureau of Economic Research.
- Matias, J. N. (2016, December 12). The Obligation To Experiment. MIT Media Lab.
- Grimmelmann, J. (2015). The law and ethics of experiments on social media users. 13 Colo. Tech. L.J. 219, 2015
- This workshop focuses on research ethics procedures. Students should have completed Princeton's IRB training by this point in the class; make sure to plan it into your week, since the training can take up to six hours to complete.
Lecture & Discussion Moderation, Fact-Checking, Influence: Studying Online Behavior of Humans & Machines (Feb 19)
Workshop Statistics of Experiment Design (Feb 21)
- * Matias, J. Nathan. (2016) The Civic Labor of Online Moderators. Oxford Internet, Policy, and Politics Conference. Oxford, UK
- * Grimmelmann, J. (2015). The virtues of moderation. Yale JL & Tech., 17, 42.
- * (additional paper forthcoming)
- Muchnik, L., Aral, S., & Taylor, S. J. (2013). Social Influence Bias: A Randomized Experiment. Science, 341 (6146), 647–651.
- Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest, 13 (3), 106–131.
- This workshop outlines statistical work involved in designing an experiment, introducing core assumptions of experiment design, including excludability and non-interference.
Part II: Planning Your Field Experiment
Lecture & Discussion The Role of Experiments in a Democracy (Feb 26)
Workshop Planning An Experiment (Outcomes, Power Analysis) (Feb 28)
- * Richard H. Thaler, Cass R. Sunstein. (2008) Nudge: Improving decisions about health, wealth, and happiness. Yale University Press. (Chapter 1)
- * Campbell, D. T. (1998). The experimenting society. In The experimenting society: Essays in honor of Donald T. Campbell (p. 35). New Brunswick: Transaction Publishers.
- Desposato, S. (2014, November 3). Ethics and research in comparative politics. Washington Post.
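The simulation-based power analysis covered in the Feb 28 workshop can be sketched in Python; the effect size, noise level, and sample size below are illustrative, not prescriptions:

```python
import random
import statistics

def power(n_per_arm, effect, sd, z_crit=1.96, sims=2000, seed=1):
    """Estimate statistical power by simulation: draw many hypothetical
    experiments and count how often the effect is detected (|z| > z_crit)."""
    rng = random.Random(seed)
    detections = 0
    se = (2 * sd**2 / n_per_arm) ** 0.5  # standard error of the difference in means
    for _ in range(sims):
        y_t = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
        y_c = [rng.gauss(0, sd) for _ in range(n_per_arm)]
        z = (statistics.mean(y_t) - statistics.mean(y_c)) / se
        if abs(z) > z_crit:
            detections += 1
    return detections / sims

# With 100 units per arm and an effect of 0.4 standard deviations,
# power comes out near the conventional 0.8 threshold.
p = power(n_per_arm=100, effect=0.4, sd=1.0)
```

The same loop can start from observed behavior instead of `rng.gauss` draws by resampling real baseline data and adding a projected effect.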
- In this workshop, students learn how to choose outcome variables and conduct a power analysis: starting with observed behavior, projecting possible effects, and simulating the chance of observing the effect for a given study design.
- We will also begin the process of matching student teams with community partners to develop experiment ideas together.
Lecture & Discussion Improving the Quality of Experiments and Their Results (March 5)
Workshop Developing a Pre-Analysis Plan (March 7)
- Pettingill, L. M. (2017, March 21). 4 Principles for Making Experimentation Count. Airbnb Engineering and Data Science.
- In this workshop, students will learn how to produce a pre-analysis plan. We will also discuss the midterm, which will be to write up an experiment design.
Lecture & Discussion Planning an Experiment with Community Partners (March 12)
Workshop Designing and Planning an Experiment With Partners (March 14)
- * Glennerster, R. (2017). The practicalities of running randomized evaluations: partnerships, measurement, ethics, and transparency. Handbook of Economic Field Experiments, 1, 175-243. (pages 1-18)
- * Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation, 1998 (80), 5–23.
- * (2017) Research on the effect downvotes have on user civility. Discussion on reddit.com
- This workshop focuses on processes for developing experiment ideas with communities. We will work through the example problem space of fact-checking, imagine a range of possible experiments, and discuss their individual and collective contribution to the issue.
- We will also begin hosting conversations between community partners and student teams on study design.
Lecture & Discussion Context, Structure, and Mechanisms in Experiment Design (March 26)
Workshop Managing Things You Cannot Control: Regression Adjustment, Stratification, Cluster Randomization (March 28)
- * Mortensen, C. R., & Cialdini, R. B. (2010). Full-Cycle Social Psychology for Theory and Application. Social and Personality Psychology Compass, 4 (1), 53–63.
- * Paluck, E. L., Shepherd, H., & Aronow, P. M. (2016). Changing climates of conflict: A social network experiment in 56 schools. Proceedings of the National Academy of Sciences, 113 (3), 566-571.
- Aral, S., & Walker, D. (2011). Creating social contagion through viral product design: A randomized trial of peer influence in networks. Management science, 57 (9), 1623-1639.
- TBA: Example or reading on using multiple arms to identify mediators or test rival hypotheses
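The cluster-randomized assignment discussed in the March 28 workshop might be sketched as follows in Python; the units and community names are hypothetical:

```python
import random

def assign_clusters(units, cluster_of, seed=7):
    """Cluster-randomized assignment: randomize whole groups (e.g. communities,
    schools) rather than individuals, so that interference between people who
    interact stays within an arm."""
    rng = random.Random(seed)
    clusters = sorted({cluster_of[u] for u in units})
    rng.shuffle(clusters)
    treated = set(clusters[: len(clusters) // 2])
    return {u: ("treatment" if cluster_of[u] in treated else "control")
            for u in units}

# Hypothetical example: 60 users spread across 6 communities;
# every member of a community shares that community's arm.
units = [f"user{i}" for i in range(60)]
cluster_of = {u: f"community{int(u[4:]) % 6}" for u in units}
assignment = assign_clusters(units, cluster_of)
```

Because treatment varies only between clusters, analysis must account for clustering, which is why the workshop pairs this design with cluster-aware estimation.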
- This workshop introduces methods for assigning treatments to groups, regions, and periods of time, as well as methods for analyzing clustered experiments.
- As your student team has conversations with community partners, knowing these approaches to assignment will help you think creatively about turning community questions into an experiment design.
- Green, D. P., & Vavreck, L. (2007). Analysis of cluster-randomized experiments: A comparison of alternative estimation approaches. Political Analysis, 16 (2), 138-152.
Lecture & Discussion Combining Results from Many Experiments (April 2)
Workshop Feedback on Final Project Pre-Analysis Plan (April 4)
- * Van Bavel, J. (2016, May 27). Why Do So Many Studies Fail to Replicate? The New York Times.
- (more readings TBA: looking for good readings on the strengths and weaknesses of meta-analyses, as well as a clear how-to on meta-analysis in R)
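Until then, the basic fixed-effect (inverse-variance) pooling step of a meta-analysis can be sketched in Python (an R how-to would follow the same logic; the three study results below are invented for illustration):

```python
def meta_analysis(estimates, std_errors):
    """Fixed-effect (inverse-variance) meta-analysis: pool several experiments'
    effect estimates, weighting each by the inverse of its variance, so more
    precise studies count for more."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical results from three replications of the same experiment.
pooled, pooled_se = meta_analysis([0.30, 0.18, 0.25], [0.10, 0.08, 0.12])
```

Note that the pooled standard error is smaller than any single study's, which is the statistical payoff of combining results from many experiments.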
- In this workshop, students will get feedback on draft pre-analysis plans that will also form part of conversations and negotiations with community partners.
Lecture & Discussion Interpreting, Using, and Misusing Experiment Results (April 9)
- * Weiss, C. H. (1979). The many meanings of research utilization. Public Administration Review, 39 (5), 426–431.
- * Koerth-Baker, M. (2017, May 11). The Tangled Story Behind Trump’s False Claims Of Voter Fraud.
- Miller, J. E. (2015). The Chicago guide to writing about numbers. University of Chicago Press. (just the section on presenting statistical results to non-statistical audiences)
- Battley, Paul. Kill Or Cure. http://kill-or-cure.herokuapp.com/
- Mitton, C., Adair, C. E., McKenzie, E., Patten, S. B., & Perry, B. W. (2007). Knowledge transfer and exchange: review and synthesis of the literature. Milbank Quarterly, 85 (4), 729–768.
Part III: Deploying & Reporting Your Field Experiments
Workshop Deploying and Monitoring Your Experiment (April 11)
Lecture & Discussion Debriefing, Harm, and Accountability in Field Experiments (April 16)
- This workshop will review best practices for ensuring that a field experiment is deployed and administered successfully.
- Students will report on early results from pilot deployments of their experiments; based on those results, teams are expected to deploy their full experiments soon after.
Workshop Analyzing and Communicating Experiment Results (April 18)
- * Krafft, P. M., Macy, M., & Pentland, A. (2016). Bots as Virtual Confederates: Design and Ethics. CSCW 2017
- * Desposato, S. (2015). Ethics and experiments: problems and solutions for social scientists and policy professionals. Routledge. (chapter: The Value and Challenges of Using Local Ethical Review in Comparative Politics Experiments)
- Munger, K. (2017). Tweetment effects on the tweeted: Experimentally reducing racist harassment. Political Behavior, 39 (3), 629-649.
- Gray, M. L. (2014, July 8). When Science, Customer Service, and Human Subjects Research Collide. Now What? marylgray.org
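For the April 18 workshop on analyzing and communicating results, a difference-in-means estimate with a 95% confidence interval can be sketched in Python; the outcome data (comment counts in treated vs. control threads) are invented:

```python
import statistics

def ate_with_ci(y_treatment, y_control):
    """Difference-in-means estimate with a normal-approximation 95% CI,
    a format that is easier to communicate to partners and the public
    than a bare p-value."""
    ate = statistics.mean(y_treatment) - statistics.mean(y_control)
    # Standard error of the difference, using sample variances.
    se = (statistics.variance(y_treatment) / len(y_treatment)
          + statistics.variance(y_control) / len(y_control)) ** 0.5
    return ate, (ate - 1.96 * se, ate + 1.96 * se)

# Hypothetical outcomes: comments per thread in treated vs. control threads.
ate, (lo, hi) = ate_with_ci([4, 5, 6, 5, 7, 6, 5, 4],
                            [3, 4, 4, 5, 3, 4, 5, 4])
```

Reporting "about 1.3 more comments per thread, plausibly between 0.4 and 2.1" communicates both the effect and the uncertainty in one sentence.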
- This workshop reviews statistical methods for interpreting experiment results, focusing on describing results to a public audience and illustrating research findings.
Workshop Graceful Recovery from Problems in Field Experiments (April 23)
- This workshop will discuss strategies for recovering from problems in field experiments and work through problems that students may face in their experiments, which will have been deployed by this point.
Workshop Preparing for Public Knowledge of Your Research (April 25)
- This workshop focuses on developing strategies for handling public discussion of research results by affected communities and a wider audience.
- Students will discuss their own strategy for community debriefing.
Lecture & Discussion Advanced Topics in Field Experimentation (April 30)
- Wager, S., & Athey, S. (2017). Estimation and inference of heterogeneous treatment effects using random forests. Journal of the American Statistical Association.
- Eckles, D., Karrer, B., & Ugander, J. (2017). Design and analysis of experiments in networks: Reducing bias from interference. Journal of Causal Inference, 5 (1).
- Allcott, H. (2015). Site selection bias in program evaluation. The Quarterly Journal of Economics, 130 (3), 1117–1165.
- Yong, E. (2017, January 5). An Ingenious Experiment of Jungle Bats and Evolving Artificial Flowers. The Atlantic.
Presentations Final Project Presentations (May 2)
- In this session, students will present their final projects for feedback before submitting the final paper.