
Demand Effects in Survey Experiments: An Empirical Assessment

Published online by Cambridge University Press:  11 December 2018

Jonathan Mummolo, Assistant Professor of Politics and Public Affairs, Department of Politics and Woodrow Wilson School of Public and International Affairs, Princeton University
Erik Peterson, Assistant Professor of Political Science, Department of Political Science, Texas A&M University


Survey experiments are ubiquitous in social science. A frequent critique is that positive results in these studies stem from experimenter demand effects (EDEs)—bias that occurs when participants infer the purpose of an experiment and respond so as to help confirm a researcher’s hypothesis. We argue that online survey experiments have several features that make them robust to EDEs, and test for their presence in studies that involve over 12,000 participants and replicate five experimental designs touching on all empirical political science subfields. We randomly assign participants information about experimenter intent and show that providing this information does not alter the treatment effects in these experiments. Even financial incentives to respond in line with researcher expectations fail to consistently induce demand effects. Research participants exhibit a limited ability to adjust their behavior to align with researcher expectations, a finding with important implications for the design and interpretation of survey experiments.

Research Article
Copyright © American Political Science Association 2018 



The authors are grateful for feedback from Adam Berinsky, Cheryl Boudreau, Amber Boydstun, John Bullock, Brandice Canes-Wrone, Justin Esarey, Justin Grimmer, Erin Hartman, Samara Klar, Neil Malhotra, Nolan McCarty, Tali Mendelberg, Sean Westwood, and attendees of the 2017 Society for Political Methodology (PolMeth) annual meeting. Replication materials can be found on the American Political Science Review Dataverse at:


Supplementary material: Mummolo and Peterson Dataset (link); Mummolo and Peterson supplementary material, Online Appendix (PDF).