Demand Effects in Survey Experiments: An Empirical Assessment

  • Jonathan Mummolo and Erik Peterson

Abstract

Survey experiments are ubiquitous in social science. A frequent critique is that positive results in these studies stem from experimenter demand effects (EDEs)—bias that occurs when participants infer the purpose of an experiment and respond so as to help confirm a researcher’s hypothesis. We argue that online survey experiments have several features that make them robust to EDEs, and test for their presence in studies that involve over 12,000 participants and replicate five experimental designs touching on all empirical political science subfields. We randomly assign participants information about experimenter intent and show that providing this information does not alter the treatment effects in these experiments. Even financial incentives to respond in line with researcher expectations fail to consistently induce demand effects. Research participants exhibit a limited ability to adjust their behavior to align with researcher expectations, a finding with important implications for the design and interpretation of survey experiments.
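The test described above — cross-randomizing a substantive treatment with information about experimenter intent, then checking whether the treatment effect shifts across demand conditions — can be sketched with simulated data. This is an illustrative sketch only: the variable names, sample size, and effect sizes are assumptions for demonstration, not the paper's actual data or code, and the simulation builds in the paper's null result (no demand effect) by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Cross-randomize two factors: the substantive treatment and a "demand cue"
# that reveals the researcher's hypothesis to the participant.
treatment = rng.integers(0, 2, n)   # substantive treatment arm (0/1)
demand_cue = rng.integers(0, 2, n)  # hypothesis revealed (1) or not (0)

# Simulated outcome: a true treatment effect of 0.5 and, by assumption,
# no demand effect (the cue changes neither outcomes nor the effect).
outcome = 0.5 * treatment + rng.normal(0, 1, n)

def ate(mask):
    """Difference in mean outcomes, treated vs. control, within a subgroup."""
    return (outcome[mask & (treatment == 1)].mean()
            - outcome[mask & (treatment == 0)].mean())

effect_no_cue = ate(demand_cue == 0)
effect_cue = ate(demand_cue == 1)
demand_shift = effect_cue - effect_no_cue  # should be near 0 absent demand effects

print(effect_no_cue, effect_cue, demand_shift)
```

In the paper's framing, evidence of demand effects would show up as a nonzero `demand_shift`; the reported finding is that revealing intent (and even paying participants to confirm the hypothesis) leaves treatment effects essentially unchanged.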


Corresponding author

*Jonathan Mummolo, Assistant Professor of Politics and Public Affairs, Department of Politics and Woodrow Wilson School of Public and International Affairs, Princeton University, jmummolo@princeton.edu.
Erik Peterson, Assistant Professor of Political Science, Department of Political Science, Texas A&M University, erik.peterson@tamu.edu.

Footnotes


The authors are grateful for feedback from Adam Berinsky, Cheryl Boudreau, Amber Boydstun, John Bullock, Brandice Canes-Wrone, Justin Esarey, Justin Grimmer, Erin Hartman, Samara Klar, Neil Malhotra, Nolan McCarty, Tali Mendelberg, Sean Westwood, and attendees of the 2017 Society for Political Methodology (PolMeth) annual meeting. Replication materials can be found on the American Political Science Review Dataverse at: https://doi.org/10.7910/DVN/HUKSID.


American Political Science Review
  • ISSN: 0003-0554
  • EISSN: 1537-5943
  • URL: /core/journals/american-political-science-review
Supplementary materials

  • Mummolo and Peterson Dataset (dataset)
  • Mummolo and Peterson supplementary material: Online Appendix (PDF, 3.7 MB)
