
Should We Worry About Sponsorship-Induced Bias in Online Political Science Surveys?

Published online by Cambridge University Press:  15 October 2019

Thomas J. Leeper
Department of Methodology, London School of Economics, London WC2A 2AE, UK. Twitter: @thosjleeper
Emily A. Thorson
Department of Political Science, Syracuse University, Syracuse, NY, USA. Twitter: @emilythorson


Political scientists rely heavily on survey research to gain insights into public attitudes and behaviors. Over the past decade, survey data collection has moved away from personal face-to-face and telephone interviewing towards a model of computer-assisted self-interviewing. A hallmark of many online surveys is the prominent display of the survey’s sponsor, most often an academic institution, in the initial consent form and/or on the survey website itself. It is an open question whether these displays of academic survey sponsorship could increase total survey error. We measure the extent to which sponsorship (by a university or marketing firm) affects data quality, including satisficing behavior, demand characteristics, and socially desirable responding. In addition, we examine whether sponsor effects vary depending on the participant’s experience with online surveys. Overall, we find no evidence that response quality is affected by survey sponsor or by past survey experience.

Research Article
© The Experimental Research Section of the American Political Science Association 2019


Supplementary material: Link
Leeper and Thorson Dataset

Supplementary material: File
Leeper and Thorson supplementary material (File, 340.1 KB)